From NGINX to Grafana: Setting Up Distributed Tracing in Kubernetes with OpenTelemetry and Tempo
This tutorial guides you through setting up distributed tracing for NGINX running in Kubernetes. You’ll configure OpenTelemetry to capture traces, send them to Tempo, and visualize them in Grafana.
Prerequisites
- A Kubernetes cluster, version 1.20+
- Helm
Add the Helm repositories
helm repo add grafana https://grafana.github.io/helm-charts
helm repo add open-telemetry https://open-telemetry.github.io/opentelemetry-helm-charts
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
Setting up Grafana
Installation
helm install grafana grafana/grafana -n monitoring --create-namespace
Remember to use a port-forward to access the Grafana UI; the admin credentials are stored as a Secret in the monitoring namespace.
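For example, assuming the release name grafana and the monitoring namespace used above (the chart's defaults):

# Retrieve the auto-generated admin password from the Secret
kubectl get secret -n monitoring grafana -o jsonpath='{.data.admin-password}' | base64 --decode; echo

# Forward local port 3000 to the Grafana service, then browse to http://localhost:3000 (user: admin)
kubectl port-forward -n monitoring svc/grafana 3000:80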
Setting up Tempo
Installation
- We will install the distributed version of Tempo, with the gateway enabled so the OpenTelemetry Collector can use it as an export endpoint later. Create a values.yaml file for the Helm installation:
server:
  logLevel: debug
gateway:
  enabled: true
  ingress:
    # -- Specifies whether an ingress for the gateway should be created
    enabled: false
  # Basic auth configuration
  basicAuth:
    # -- Enables basic authentication for the gateway
    enabled: false
metaMonitoring:
  # ServiceMonitor configuration
  serviceMonitor:
    # -- If enabled, ServiceMonitor resources for Prometheus Operator are created
    enabled: false
traces:
  otlp:
    http:
      # -- Enable Tempo to ingest Open Telemetry HTTP traces
      enabled: true
    grpc:
      # -- Enable Tempo to ingest Open Telemetry GRPC traces
      enabled: true
- Install Tempo
helm install tempo grafana/tempo-distributed --values values.yaml -n monitoring --debug
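Before adding it to Grafana, it is worth checking that all Tempo components came up; the label selector below assumes the release name tempo used above:

kubectl get pods -n monitoring -l app.kubernetes.io/instance=tempo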
- Add Tempo to Grafana as a data source:
  - Open the Grafana UI
  - Go to the Data Sources page
  - Search for Tempo and select it
  - Set the URL to the query-frontend service: http://tempo-query-frontend:3100
  - Save and test; a green message should appear saying "Successfully connected to Tempo data source."
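Alternatively, if you prefer configuration as code over clicking through the UI, the Grafana chart can provision the data source at install time. A minimal sketch, assuming the same release names and namespace as above, added to Grafana's values.yaml:

datasources:
  datasources.yaml:
    apiVersion: 1
    datasources:
      - name: Tempo
        type: tempo
        access: proxy
        url: http://tempo-query-frontend:3100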
Setting up OpenTelemetry
Installation
- Create a values.yaml file for the Helm installation:
config:
  exporters:
    debug: {}
    otlphttp:
      endpoint: http://tempo-gateway:80
      tls:
        insecure: true
  extensions:
    health_check: {}
  processors:
    batch: {}
    memory_limiter:
      check_interval: 5s
      limit_percentage: 80
      spike_limit_percentage: 25
  receivers:
    otlp:
      protocols:
        grpc:
          endpoint: 0.0.0.0:4317
        http:
          endpoint: 0.0.0.0:4318
  service:
    extensions:
      - health_check
    pipelines:
      logs:
        exporters:
          - debug
        processors:
          - memory_limiter
          - batch
        receivers:
          - otlp
      metrics:
        exporters:
          - debug
        processors:
          - memory_limiter
          - batch
        receivers:
          - otlp
      traces:
        exporters:
          - otlphttp
        processors:
          - memory_limiter
          - batch
        receivers:
          - otlp
    telemetry:
      metrics:
        address: ${env:MY_POD_IP}:8888
image:
  repository: otel/opentelemetry-collector-contrib
mode: deployment
presets:
  kubernetesAttributes:
    enabled: true
service:
  enabled: true
- Install the OpenTelemetry Collector
helm install opentelemetry-collector open-telemetry/opentelemetry-collector --values values.yaml -n monitoring --debug
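To sanity-check the collector before wiring up NGINX, you can port-forward its OTLP/HTTP port and post a hand-written span. The JSON below is a minimal OTLP payload; the trace and span IDs are arbitrary example hex values:

kubectl port-forward -n monitoring svc/opentelemetry-collector 4318:4318

curl -sS -X POST http://localhost:4318/v1/traces \
  -H 'Content-Type: application/json' \
  -d '{
    "resourceSpans": [{
      "resource": {
        "attributes": [{ "key": "service.name", "value": { "stringValue": "curl-smoke-test" } }]
      },
      "scopeSpans": [{
        "spans": [{
          "traceId": "5b8efff798038103d269b633813fc60c",
          "spanId": "eee19b7ec3c1b174",
          "name": "smoke-test-span",
          "kind": 2,
          "startTimeUnixNano": "1700000000000000000",
          "endTimeUnixNano": "1700000001000000000"
        }]
      }]
    }]
  }'

If the pipeline is healthy, the span should be searchable in Grafana's Explore view under the service name curl-smoke-test.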
Setting up NGINX
Installation
helm install ingress-nginx ingress-nginx/ingress-nginx -n ingress-nginx --create-namespace
Configuration
Edit the NGINX Ingress controller's ConfigMap and add the following configuration to the data section:
data:
  enable-opentelemetry: "true"
  opentelemetry-config: /etc/nginx/opentelemetry.toml
  opentelemetry-operation-name: HTTP $request_method $service_name $uri
  otel-service-name: my-ingress-nginx
  otlp-collector-host: opentelemetry-collector.monitoring.svc
Remember to restart the NGINX controller pods after editing the ConfigMap so the change takes effect, as shown below.
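Assuming the default names produced by the installation step above (release name ingress-nginx), the edit and restart would look like:

kubectl edit configmap -n ingress-nginx ingress-nginx-controller
kubectl rollout restart deployment -n ingress-nginx ingress-nginx-controller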
Testing
- Make a request to any Ingress you have in your cluster and view the resulting traces in Grafana.
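For example, assuming a hypothetical Ingress with host example.local routed through the controller:

# Find the external IP of the ingress controller
kubectl get svc -n ingress-nginx ingress-nginx-controller

# Send a request through the controller (replace EXTERNAL_IP with the address from above)
curl -H 'Host: example.local' http://EXTERNAL_IP/

Then open Grafana's Explore view, pick the Tempo data source, and search for the service name my-ingress-nginx configured in the ConfigMap above.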
References
- https://kubernetes.github.io/ingress-nginx/user-guide/third-party-addons/opentelemetry/