Service Performance Monitoring is a high-level monitoring dashboard within Logz.io that enables you to monitor your tracing services and operations. This integration allows you to configure Service Performance Monitoring with the OpenTelemetry collector and send spans and span metrics from your OpenTelemetry installation to Logz.io.
Log in to your Logz.io account and navigate to the current instructions page inside the Logz.io app. Install the pre-built dashboard to enhance the observability of your metrics.
To view the metrics on the main dashboard, log in to your Logz.io Metrics account, and open the Logz.io Metrics tab.
Architecture overview
This integration is based on OpenTelemetry. It works as an add-on to existing OpenTelemetry installations. If you need to set up OpenTelemetry first, refer to our documentation on OpenTelemetry.
The integration includes:
- Configuring the OpenTelemetry collector to receive spans generated by your application instrumentation and send the spans and span metrics to Logz.io
On deployment, your OpenTelemetry instrumentation captures spans from your application and forwards them to the collector, which exports the spans and span metrics data to your Logz.io account.
This integration uses OpenTelemetry Collector Contrib, not the OpenTelemetry Collector Core.
Set up your locally hosted OpenTelemetry installation to send spans and span metrics to Logz.io
Before you begin, you’ll need:
- An application instrumented with OpenTelemetry or any other supported instrumentation based on OpenTracing, Zipkin, or Jaeger
- Service Performance Monitoring dashboard activated
- An active account with Logz.io
- A Logz.io span metrics account
Download and configure OpenTelemetry collector
Create a dedicated directory on the host of your application and download the OpenTelemetry collector that is relevant to the operating system of your host.
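For example, on a Linux amd64 host the download might look like the following. This is a minimal sketch assuming collector version 0.73.0 (the same version used for the Docker image later in this guide); release asset names vary by version and platform, so check the OpenTelemetry collector releases page for the file that matches your host.
# Create a dedicated directory and download the OpenTelemetry Collector Contrib release
# (version and asset name are assumptions - adjust them to your host)
mkdir otelcol-contrib && cd otelcol-contrib
curl -sSL -O https://github.com/open-telemetry/opentelemetry-collector-releases/releases/download/v0.73.0/otelcol-contrib_0.73.0_linux_amd64.tar.gz
# Unpack the collector binary into the current directory
tar -xzf otelcol-contrib_0.73.0_linux_amd64.tar.gz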
After downloading the collector, create a configuration file config.yaml with the following parameters:
receivers:
otlp:
protocols:
grpc:
endpoint: "0.0.0.0:4317"
http:
endpoint: "0.0.0.0:4318"
otlp/spanmetrics:
protocols:
grpc:
endpoint: :12345
prometheus:
config:
global:
external_labels:
p8s_logzio_name: spm-otel
scrape_configs:
- job_name: 'spm'
scrape_interval: 15s
static_configs:
- targets: [ "0.0.0.0:8889" ]
exporters:
logzio/traces:
account_token: <<TRACING-SHIPPING-TOKEN>>
region: <<LOGZIO_ACCOUNT_REGION_CODE>>
prometheusremotewrite/spm:
endpoint: https://<<LISTENER-HOST>>:8053
headers:
Authorization: Bearer <<SPM-METRICS-SHIPPING-TOKEN>>
prometheus:
endpoint: "localhost:8889"
logging:
processors:
batch:
tail_sampling:
policies:
[
{
name: policy-errors,
type: status_code,
status_code: {status_codes: [ERROR]}
},
{
name: policy-slow,
type: latency,
latency: {threshold_ms: 1000}
},
{
name: policy-random-ok,
type: probabilistic,
probabilistic: {sampling_percentage: 10}
}
]
spanmetrics:
metrics_exporter: prometheus
latency_histogram_buckets: [2ms, 6ms, 10ms, 100ms, 250ms, 500ms, 1000ms, 10000ms, 100000ms, 1000000ms]
# Additional list of dimensions on top of:
# - service.name
# - operation
# - span.kind
# - status.code
dimensions:
# If the span is missing http.method, the processor will insert
# the http.method dimension with value 'GET'.
# For example, in the following scenario, http.method is not present in a span and so will be added as a dimension to the metric with value "GET":
# - promexample_calls{http_method="GET",operation="/Address",service_name="shippingservice",span_kind="SPAN_KIND_SERVER",status_code="STATUS_CODE_UNSET"} 1
- name: http.method
default: GET
# If a default is not provided, the http.status_code dimension will be omitted
# if the span does not contain http.status_code.
# For example, consider a scenario with two spans, one span having http.status_code=200 and another missing http.status_code. Two metrics would result with this configuration, one with the http_status_code omitted and the other included:
# - promexample_calls{http_status_code="200",operation="/Address",service_name="shippingservice",span_kind="SPAN_KIND_SERVER",status_code="STATUS_CODE_UNSET"} 1
# - promexample_calls{operation="/Address",service_name="shippingservice",span_kind="SPAN_KIND_SERVER",status_code="STATUS_CODE_UNSET"} 1
- name: http.status_code
extensions:
pprof:
endpoint: :1777
zpages:
endpoint: :55679
health_check:
service:
extensions: [health_check, pprof, zpages]
pipelines:
traces:
receivers: [otlp]
processors: [spanmetrics,tail_sampling,batch]
exporters: [logzio/traces]
metrics/spanmetrics:
# This receiver is just a dummy and never used.
# Added to pass validation requiring at least one receiver in a pipeline.
receivers: [otlp/spanmetrics]
exporters: [prometheus]
metrics:
receivers: [prometheus]
exporters: [logging,prometheusremotewrite/spm]
telemetry:
logs:
level: "debug"
Replace <<TRACING-SHIPPING-TOKEN>> with the token of the account you want to ship to.
Replace <<LOGZIO_ACCOUNT_REGION_CODE>> with the applicable region code.
Replace <<SPM-METRICS-SHIPPING-TOKEN>> with a token for the Metrics account that is dedicated to your Service Performance Monitoring feature.
Replace <<LISTENER-HOST>> with the host for your region. For example, listener.logz.io if your account is hosted on AWS US East, or listener-nl.logz.io if hosted on Azure West Europe. The required port depends on whether you use HTTP or HTTPS: HTTP = 8070, HTTPS = 8071.
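As an illustration only, here is how the two exporters that contain placeholders might look once filled in for an account in the US region; the tokens below are made-up values, not real tokens:
exporters:
  logzio/traces:
    account_token: aBCdefGHIjklMNOpqrSTUvwxYZ123456   # placeholder tracing token
    region: us
  prometheusremotewrite/spm:
    endpoint: https://listener.logz.io:8053
    headers:
      Authorization: Bearer aBCdefGHIjklMNOpqrSTUvwxYZ123456   # placeholder SPM metrics token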
The tail_sampling processor defines the decision to sample a trace after all of the spans in a request have completed. By default, this configuration collects all traces that contain a span that completed with an error, all traces that are slower than 1000 ms, and 10% of the remaining traces.
You can add more policy configurations to the processor. For more on this, refer to OpenTelemetry Documentation.
The configurable parameters in the Logz.io default configuration are:
Parameter | Description | Default |
---|---|---|
threshold_ms | Threshold for span latency - all traces slower than this value are sampled in. | 1000 |
sampling_percentage | Sampling percentage for the probabilistic policy. | 10 |
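For example, you could extend the default policies with a string_attribute policy that always keeps traces from a business-critical service. This is a sketch only; checkoutservice is a hypothetical service name that you would replace with your own:
processors:
  tail_sampling:
    policies:
      [
        {
          name: policy-errors,
          type: status_code,
          status_code: {status_codes: [ERROR]}
        },
        {
          name: policy-slow,
          type: latency,
          latency: {threshold_ms: 1000}
        },
        {
          name: policy-random-ok,
          type: probabilistic,
          probabilistic: {sampling_percentage: 10}
        },
        # Additional policy: always sample traces from a named critical service.
        {
          name: policy-critical-service,
          type: string_attribute,
          string_attribute: { key: service.name, values: [ checkoutservice ] }
        }
      ]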
If you already have an OpenTelemetry installation, add the parameters described in the next steps to the configuration file of your existing OpenTelemetry collector.
Add Logz.io exporter to your OpenTelemetry collector
Add the following parameters to the configuration file of your OpenTelemetry collector:
- Under the receivers list:
otlp/spanmetrics:
protocols:
grpc:
endpoint: :12345
prometheus:
config:
global:
external_labels:
p8s_logzio_name: spm-otel
scrape_configs:
- job_name: 'spm'
scrape_interval: 15s
static_configs:
- targets: [ "0.0.0.0:8889" ]
- Under the exporters list:
logzio/traces:
account_token: <<TRACING-SHIPPING-TOKEN>>
region: <<LOGZIO_ACCOUNT_REGION_CODE>>
prometheusremotewrite/spm:
endpoint: https://<<LISTENER-HOST>>:8053
headers:
Authorization: Bearer <<SPM-METRICS-SHIPPING-TOKEN>>
prometheus:
endpoint: "localhost:8889"
- Under the processors list:
tail_sampling:
policies:
[
{
name: policy-errors,
type: status_code,
status_code: {status_codes: [ERROR]}
},
{
name: policy-slow,
type: latency,
latency: {threshold_ms: 1000}
},
{
name: policy-random-ok,
type: probabilistic,
probabilistic: {sampling_percentage: 10}
}
]
spanmetrics:
metrics_exporter: prometheus
latency_histogram_buckets: [2ms, 6ms, 10ms, 100ms, 250ms, 500ms, 1000ms, 10000ms, 100000ms, 1000000ms]
# Additional list of dimensions on top of:
# - service.name
# - operation
# - span.kind
# - status.code
dimensions:
# If the span is missing http.method, the processor will insert
# the http.method dimension with value 'GET'.
# For example, in the following scenario, http.method is not present in a span and so will be added as a dimension to the metric with value "GET":
# - promexample_calls{http_method="GET",operation="/Address",service_name="shippingservice",span_kind="SPAN_KIND_SERVER",status_code="STATUS_CODE_UNSET"} 1
- name: http.method
default: GET
# If a default is not provided, the http.status_code dimension will be omitted
# if the span does not contain http.status_code.
# For example, consider a scenario with two spans, one span having http.status_code=200 and another missing http.status_code. Two metrics would result with this configuration, one with the http_status_code omitted and the other included:
# - promexample_calls{http_status_code="200",operation="/Address",service_name="shippingservice",span_kind="SPAN_KIND_SERVER",status_code="STATUS_CODE_UNSET"} 1
# - promexample_calls{operation="/Address",service_name="shippingservice",span_kind="SPAN_KIND_SERVER",status_code="STATUS_CODE_UNSET"} 1
- name: http.status_code
- Under the service: pipelines list:
pipelines:
traces:
receivers: [otlp]
processors: [spanmetrics,tail_sampling,batch]
exporters: [logzio/traces]
metrics/spanmetrics:
# This receiver is just a dummy and never used.
# Added to pass validation requiring at least one receiver in a pipeline.
receivers: [otlp/spanmetrics]
exporters: [prometheus]
metrics:
receivers: [prometheus]
exporters: [logging,prometheusremotewrite/spm]
Replace <<TRACING-SHIPPING-TOKEN>> with the token of the account you want to ship to.
Replace <<LOGZIO_ACCOUNT_REGION_CODE>> with the applicable region code.
Replace <<SPM-METRICS-SHIPPING-TOKEN>> with a token for the Metrics account that is dedicated to your Service Performance Monitoring feature.
Replace <<LISTENER-HOST>> with the host for your region. For example, listener.logz.io if your account is hosted on AWS US East, or listener-nl.logz.io if hosted on Azure West Europe. The required port depends on whether you use HTTP or HTTPS: HTTP = 8070, HTTPS = 8071.
An example configuration file looks as follows:
receivers:
otlp:
protocols:
grpc:
endpoint: "0.0.0.0:4317"
http:
endpoint: "0.0.0.0:4318"
otlp/spanmetrics:
protocols:
grpc:
endpoint: :12345
prometheus:
config:
global:
external_labels:
p8s_logzio_name: spm-otel
scrape_configs:
- job_name: 'spm'
scrape_interval: 15s
static_configs:
- targets: [ "0.0.0.0:8889" ]
exporters:
logzio/traces:
account_token: <<TRACING-SHIPPING-TOKEN>>
region: <<LOGZIO_ACCOUNT_REGION_CODE>>
prometheusremotewrite/spm:
endpoint: https://<<LISTENER-HOST>>:8053
headers:
Authorization: Bearer <<SPM-METRICS-SHIPPING-TOKEN>>
prometheus:
endpoint: "localhost:8889"
logging:
processors:
batch:
tail_sampling:
policies:
[
{
name: policy-errors,
type: status_code,
status_code: {status_codes: [ERROR]}
},
{
name: policy-slow,
type: latency,
latency: {threshold_ms: 1000}
},
{
name: policy-random-ok,
type: probabilistic,
probabilistic: {sampling_percentage: 10}
}
]
spanmetrics:
metrics_exporter: prometheus
latency_histogram_buckets: [2ms, 6ms, 10ms, 100ms, 250ms, 500ms, 1000ms, 10000ms, 100000ms, 1000000ms]
# Additional list of dimensions on top of:
# - service.name
# - operation
# - span.kind
# - status.code
dimensions:
# If the span is missing http.method, the processor will insert
# the http.method dimension with value 'GET'.
# For example, in the following scenario, http.method is not present in a span and so will be added as a dimension to the metric with value "GET":
# - promexample_calls{http_method="GET",operation="/Address",service_name="shippingservice",span_kind="SPAN_KIND_SERVER",status_code="STATUS_CODE_UNSET"} 1
- name: http.method
default: GET
# If a default is not provided, the http.status_code dimension will be omitted
# if the span does not contain http.status_code.
# For example, consider a scenario with two spans, one span having http.status_code=200 and another missing http.status_code. Two metrics would result with this configuration, one with the http_status_code omitted and the other included:
# - promexample_calls{http_status_code="200",operation="/Address",service_name="shippingservice",span_kind="SPAN_KIND_SERVER",status_code="STATUS_CODE_UNSET"} 1
# - promexample_calls{operation="/Address",service_name="shippingservice",span_kind="SPAN_KIND_SERVER",status_code="STATUS_CODE_UNSET"} 1
- name: http.status_code
extensions:
pprof:
endpoint: :1777
zpages:
endpoint: :55679
health_check:
service:
extensions: [health_check, pprof, zpages]
pipelines:
traces:
receivers: [otlp]
processors: [spanmetrics,tail_sampling,batch]
exporters: [logzio/traces]
metrics/spanmetrics:
# This receiver is just a dummy and never used.
# Added to pass validation requiring at least one receiver in a pipeline.
receivers: [otlp/spanmetrics]
exporters: [prometheus]
metrics:
receivers: [prometheus]
exporters: [logging,prometheusremotewrite/spm]
telemetry:
logs:
level: "debug"
Start the collector
Run the following command:
<path/to>/otelcontribcol_<VERSION-NAME> --config ./config.yaml
- Replace <path/to> with the path to the directory where you downloaded the collector.
- Replace <VERSION-NAME> with the version name of the collector applicable to your system, e.g. otelcontribcol_darwin_amd64.
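For example, on a macOS amd64 host the command might look like this (using the example binary name from the step above):
# Run the collector from the directory that contains the binary and config.yaml
./otelcontribcol_darwin_amd64 --config ./config.yaml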
Run the application
Run the application to generate traces.
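If your application uses a standard OpenTelemetry SDK, you can usually point it at the local collector with the SDK's standard environment variables before starting it. This is a sketch; the service name and start command are hypothetical:
# Point the OpenTelemetry SDK at the locally running collector (OTLP over gRPC)
export OTEL_EXPORTER_OTLP_ENDPOINT="http://localhost:4317"
# Give the service a recognizable name for Service Performance Monitoring
export OTEL_SERVICE_NAME="my-service"
# Start the application as usual, for example:
./run-my-app.sh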
Check Logz.io for your metrics
Give your metrics some time to get from your system to ours, and then open Tracing. Navigate to the Monitor tab to view the span metrics.
Set up your OpenTelemetry installation using a containerized collector to send spans and span metrics to Logz.io
Before you begin, you’ll need:
- An application instrumented with OpenTelemetry or any other supported instrumentation based on OpenTracing, Zipkin, or Jaeger
- Service Performance Monitoring dashboard activated
- An active account with Logz.io
- A Logz.io span metrics account
The span metrics account name should include your tracing account name. For example, if your tracing account name is “tracing”, your metrics account could be named “tracing-metrics”.
Pull the Docker image for the OpenTelemetry collector
If you are already running a Logz.io Docker image logzio/otel-collector-traces, the new image logzio/otel-collector-spm will replace it.
In the same Docker network as your application, pull the collector image:
docker pull otel/opentelemetry-collector-contrib:0.73.0
This integration only works with a contrib image.
Create a configuration file
Create a file config.yaml with the following content:
receivers:
otlp:
protocols:
grpc:
endpoint: "0.0.0.0:4317"
http:
endpoint: "0.0.0.0:4318"
otlp/spanmetrics:
protocols:
grpc:
endpoint: :12345
prometheus:
config:
global:
external_labels:
p8s_logzio_name: spm-otel
scrape_configs:
- job_name: 'spm'
scrape_interval: 15s
static_configs:
- targets: [ "0.0.0.0:8889" ]
exporters:
logzio/traces:
account_token: <<TRACING-SHIPPING-TOKEN>>
region: <<LOGZIO_ACCOUNT_REGION_CODE>>
prometheusremotewrite/spm:
endpoint: https://<<LISTENER-HOST>>:8053
headers:
Authorization: Bearer <<SPM-METRICS-SHIPPING-TOKEN>>
prometheus:
endpoint: "localhost:8889"
logging:
processors:
batch:
tail_sampling:
policies:
[
{
name: policy-errors,
type: status_code,
status_code: {status_codes: [ERROR]}
},
{
name: policy-slow,
type: latency,
latency: {threshold_ms: 1000}
},
{
name: policy-random-ok,
type: probabilistic,
probabilistic: {sampling_percentage: 10}
}
]
spanmetrics:
metrics_exporter: prometheus
latency_histogram_buckets: [2ms, 6ms, 10ms, 100ms, 250ms, 500ms, 1000ms, 10000ms, 100000ms, 1000000ms]
# Additional list of dimensions on top of:
# - service.name
# - operation
# - span.kind
# - status.code
dimensions:
# If the span is missing http.method, the processor will insert
# the http.method dimension with value 'GET'.
# For example, in the following scenario, http.method is not present in a span and so will be added as a dimension to the metric with value "GET":
# - promexample_calls{http_method="GET",operation="/Address",service_name="shippingservice",span_kind="SPAN_KIND_SERVER",status_code="STATUS_CODE_UNSET"} 1
- name: http.method
default: GET
# If a default is not provided, the http.status_code dimension will be omitted
# if the span does not contain http.status_code.
# For example, consider a scenario with two spans, one span having http.status_code=200 and another missing http.status_code. Two metrics would result with this configuration, one with the http_status_code omitted and the other included:
# - promexample_calls{http_status_code="200",operation="/Address",service_name="shippingservice",span_kind="SPAN_KIND_SERVER",status_code="STATUS_CODE_UNSET"} 1
# - promexample_calls{operation="/Address",service_name="shippingservice",span_kind="SPAN_KIND_SERVER",status_code="STATUS_CODE_UNSET"} 1
- name: http.status_code
extensions:
pprof:
endpoint: :1777
zpages:
endpoint: :55679
health_check:
service:
extensions: [health_check, pprof, zpages]
pipelines:
traces:
receivers: [otlp]
processors: [spanmetrics,tail_sampling,batch]
exporters: [logzio/traces]
metrics/spanmetrics:
# This receiver is just a dummy and never used.
# Added to pass validation requiring at least one receiver in a pipeline.
receivers: [otlp/spanmetrics]
exporters: [prometheus]
metrics:
receivers: [prometheus]
exporters: [logging,prometheusremotewrite/spm]
telemetry:
logs:
level: "debug"
Replace <<TRACING-SHIPPING-TOKEN>> with the token of the account you want to ship to.
Replace <<LOGZIO_ACCOUNT_REGION_CODE>> with the applicable region code.
Replace <<SPM-METRICS-SHIPPING-TOKEN>> with a token for the Metrics account that is dedicated to your Service Performance Monitoring feature.
Replace <<LISTENER-HOST>> with the host for your region. For example, listener.logz.io if your account is hosted on AWS US East, or listener-nl.logz.io if hosted on Azure West Europe. The required port depends on whether you use HTTP or HTTPS: HTTP = 8070, HTTPS = 8071.
The tail_sampling processor defines the decision to sample a trace after all of the spans in a request have completed. By default, this configuration collects all traces that contain a span that completed with an error, all traces that are slower than 1000 ms, and 10% of the remaining traces.
You can add more policy configurations to the processor. For more on this, refer to OpenTelemetry Documentation.
The configurable parameters in the Logz.io default configuration are:
Parameter | Description | Default |
---|---|---|
threshold_ms | Threshold for span latency - all traces slower than this value are sampled in. | 1000 |
sampling_percentage | Sampling percentage for the probabilistic policy. | 10 |
If you already have an OpenTelemetry installation, add the parameters described in the next steps to the configuration file of your existing OpenTelemetry collector.
Add Logz.io exporter to your OpenTelemetry collector
Add the following parameters to the configuration file of your OpenTelemetry collector:
- Under the receivers list:
otlp/spanmetrics:
protocols:
grpc:
endpoint: :12345
prometheus:
config:
global:
external_labels:
p8s_logzio_name: spm-otel
scrape_configs:
- job_name: 'spm'
scrape_interval: 15s
static_configs:
- targets: [ "0.0.0.0:8889" ]
- Under the exporters list:
logzio/traces:
account_token: <<TRACING-SHIPPING-TOKEN>>
region: <<LOGZIO_ACCOUNT_REGION_CODE>>
prometheusremotewrite/spm:
endpoint: https://<<LISTENER-HOST>>:8053
headers:
Authorization: Bearer <<SPM-METRICS-SHIPPING-TOKEN>>
prometheus:
endpoint: "localhost:8889"
- Under the processors list:
tail_sampling:
policies:
[
{
name: policy-errors,
type: status_code,
status_code: {status_codes: [ERROR]}
},
{
name: policy-slow,
type: latency,
latency: {threshold_ms: 1000}
},
{
name: policy-random-ok,
type: probabilistic,
probabilistic: {sampling_percentage: 10}
}
]
spanmetrics:
metrics_exporter: prometheus
latency_histogram_buckets: [2ms, 6ms, 10ms, 100ms, 250ms, 500ms, 1000ms, 10000ms, 100000ms, 1000000ms]
# Additional list of dimensions on top of:
# - service.name
# - operation
# - span.kind
# - status.code
dimensions:
# If the span is missing http.method, the processor will insert
# the http.method dimension with value 'GET'.
# For example, in the following scenario, http.method is not present in a span and so will be added as a dimension to the metric with value "GET":
# - promexample_calls{http_method="GET",operation="/Address",service_name="shippingservice",span_kind="SPAN_KIND_SERVER",status_code="STATUS_CODE_UNSET"} 1
- name: http.method
default: GET
# If a default is not provided, the http.status_code dimension will be omitted
# if the span does not contain http.status_code.
# For example, consider a scenario with two spans, one span having http.status_code=200 and another missing http.status_code. Two metrics would result with this configuration, one with the http_status_code omitted and the other included:
# - promexample_calls{http_status_code="200",operation="/Address",service_name="shippingservice",span_kind="SPAN_KIND_SERVER",status_code="STATUS_CODE_UNSET"} 1
# - promexample_calls{operation="/Address",service_name="shippingservice",span_kind="SPAN_KIND_SERVER",status_code="STATUS_CODE_UNSET"} 1
- name: http.status_code
- Under the service: pipelines list:
pipelines:
traces:
receivers: [otlp]
processors: [spanmetrics,tail_sampling,batch]
exporters: [logzio/traces]
metrics/spanmetrics:
# This receiver is just a dummy and never used.
# Added to pass validation requiring at least one receiver in a pipeline.
receivers: [otlp/spanmetrics]
exporters: [prometheus]
metrics:
receivers: [prometheus]
exporters: [logging,prometheusremotewrite/spm]
Replace <<TRACING-SHIPPING-TOKEN>> with the token of the account you want to ship to.
Replace <<LOGZIO_ACCOUNT_REGION_CODE>> with the applicable region code.
Replace <<SPM-METRICS-SHIPPING-TOKEN>> with a token for the Metrics account that is dedicated to your Service Performance Monitoring feature.
Replace <<LISTENER-HOST>> with the host for your region. For example, listener.logz.io if your account is hosted on AWS US East, or listener-nl.logz.io if hosted on Azure West Europe. The required port depends on whether you use HTTP or HTTPS: HTTP = 8070, HTTPS = 8071.
An example configuration file looks as follows:
receivers:
otlp:
protocols:
grpc:
endpoint: "0.0.0.0:4317"
http:
endpoint: "0.0.0.0:4318"
otlp/spanmetrics:
protocols:
grpc:
endpoint: :12345
prometheus:
config:
global:
external_labels:
p8s_logzio_name: spm-otel
scrape_configs:
- job_name: 'spm'
scrape_interval: 15s
static_configs:
- targets: [ "0.0.0.0:8889" ]
exporters:
logzio/traces:
account_token: <<TRACING-SHIPPING-TOKEN>>
region: <<LOGZIO_ACCOUNT_REGION_CODE>>
prometheusremotewrite/spm:
endpoint: https://<<LISTENER-HOST>>:8053
headers:
Authorization: Bearer <<SPM-METRICS-SHIPPING-TOKEN>>
prometheus:
endpoint: "localhost:8889"
logging:
processors:
batch:
tail_sampling:
policies:
[
{
name: policy-errors,
type: status_code,
status_code: {status_codes: [ERROR]}
},
{
name: policy-slow,
type: latency,
latency: {threshold_ms: 1000}
},
{
name: policy-random-ok,
type: probabilistic,
probabilistic: {sampling_percentage: 10}
}
]
spanmetrics:
metrics_exporter: prometheus
latency_histogram_buckets: [2ms, 6ms, 10ms, 100ms, 250ms, 500ms, 1000ms, 10000ms, 100000ms, 1000000ms]
# Additional list of dimensions on top of:
# - service.name
# - operation
# - span.kind
# - status.code
dimensions:
# If the span is missing http.method, the processor will insert
# the http.method dimension with value 'GET'.
# For example, in the following scenario, http.method is not present in a span and so will be added as a dimension to the metric with value "GET":
# - promexample_calls{http_method="GET",operation="/Address",service_name="shippingservice",span_kind="SPAN_KIND_SERVER",status_code="STATUS_CODE_UNSET"} 1
- name: http.method
default: GET
# If a default is not provided, the http.status_code dimension will be omitted
# if the span does not contain http.status_code.
# For example, consider a scenario with two spans, one span having http.status_code=200 and another missing http.status_code. Two metrics would result with this configuration, one with the http_status_code omitted and the other included:
# - promexample_calls{http_status_code="200",operation="/Address",service_name="shippingservice",span_kind="SPAN_KIND_SERVER",status_code="STATUS_CODE_UNSET"} 1
# - promexample_calls{operation="/Address",service_name="shippingservice",span_kind="SPAN_KIND_SERVER",status_code="STATUS_CODE_UNSET"} 1
- name: http.status_code
extensions:
pprof:
endpoint: :1777
zpages:
endpoint: :55679
health_check:
service:
extensions: [health_check, pprof, zpages]
pipelines:
traces:
receivers: [otlp]
processors: [spanmetrics,tail_sampling,batch]
exporters: [logzio/traces]
metrics/spanmetrics:
# This receiver is just a dummy and never used.
# Added to pass validation requiring at least one receiver in a pipeline.
receivers: [otlp/spanmetrics]
exporters: [prometheus]
metrics:
receivers: [prometheus]
exporters: [logging,prometheusremotewrite/spm]
telemetry:
logs:
level: "debug"
Run the container
Mount the config.yaml as a volume to the docker run command and run it as follows.
Linux
docker run \
--network host \
-v <PATH-TO>/config.yaml:/etc/otelcol-contrib/config.yaml \
otel/opentelemetry-collector-contrib:0.73.0
Replace <PATH-TO> with the path to the config.yaml file on your system.
Windows
docker run \
-v <PATH-TO>/config.yaml:/etc/otelcol-contrib/config.yaml \
-p 55678-55680:55678-55680 \
-p 1777:1777 \
-p 9411:9411 \
-p 9943:9943 \
-p 6831:6831 \
-p 6832:6832 \
-p 14250:14250 \
-p 14268:14268 \
-p 4317:4317 \
-p 55681:55681 \
otel/opentelemetry-collector-contrib:0.73.0
Optional parameters
If required, you can add the following optional parameters as environment variables when running the container:
Parameter | Description |
---|---|
LATENCY_HISTOGRAM_BUCKETS | Comma-separated list of durations defining the latency histogram buckets. Default: 2ms, 8ms, 50ms, 100ms, 200ms, 500ms, 1s, 5s, 10s |
SPAN_METRICS_DIMENSIONS | Each metric will have at least the following dimensions, which are common across all spans: Service name, Operation, Span kind, Status code. The input is a comma-separated list of dimensions to add on top of the default dimensions, for example: region,http.url. Each additional dimension is defined by a name from the span's collection of attributes or resource attributes. If the named attribute is missing in the span, this dimension is omitted from the metric. |
SPAN_METRICS_DIMENSIONS_CACHE_SIZE | The maximum number of items in metric_key_to_dimensions_cache. Default: 10000. |
AGGREGATION_TEMPORALITY | Defines the aggregation temporality of the generated metrics, either cumulative or delta. Default: cumulative. |
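For example, assuming your collector image reads these variables as described above, you could pass them with -e flags on the docker run command; the values shown are illustrative only:
docker run \
  --network host \
  -v <PATH-TO>/config.yaml:/etc/otelcol-contrib/config.yaml \
  -e LATENCY_HISTOGRAM_BUCKETS="2ms,8ms,50ms,100ms,200ms,500ms,1s,5s,10s" \
  -e SPAN_METRICS_DIMENSIONS="region,http.url" \
  otel/opentelemetry-collector-contrib:0.73.0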
Run the application
Normally, when you run the OpenTelemetry collector in a Docker container, your application runs in separate containers on the same host. In this case, you need to make sure that all your application containers share the same network as the collector container. One way to achieve this is to run all containers, including the collector, with a Docker Compose configuration: Docker Compose automatically ensures that all containers defined in the same configuration share the same network, as in the sketch below.
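A minimal Docker Compose sketch of this setup might look like the following; the application image name is hypothetical, and the collector mounts the config.yaml created above:
version: "3"
services:
  otel-collector:
    image: otel/opentelemetry-collector-contrib:0.73.0
    volumes:
      - ./config.yaml:/etc/otelcol-contrib/config.yaml
    ports:
      - "4317:4317"   # OTLP gRPC
      - "4318:4318"   # OTLP HTTP
  my-app:
    image: my-app:latest   # hypothetical application image
    environment:
      # The application reaches the collector by its Compose service name on the shared network
      - OTEL_EXPORTER_OTLP_ENDPOINT=http://otel-collector:4317
    depends_on:
      - otel-collector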
Run the application to generate traces.
Check Logz.io for your metrics
Give your metrics some time to get from your system to ours, and then open Tracing. Navigate to the Monitor tab to view the span metrics.
Overview
You can use a Helm chart to ship spans and span metrics from your OpenTelemetry installation to Logz.io. Helm is a tool for managing packages of pre-configured Kubernetes resources called charts.
The logzio-k8s-telemetry chart allows you to ship traces from your Kubernetes cluster to Logz.io with the OpenTelemetry collector.
This chart is a fork of the opentelemetry-collector Helm chart. The main repository for Logz.io Helm charts is logzio-helm.
Standard configuration
Before you begin, you’ll need:
- An application instrumented with OpenTelemetry or any other supported instrumentation based on OpenTracing, Zipkin, or Jaeger
- Service Performance Monitoring dashboard activated
- An active account with Logz.io
Deploy the Helm chart
If you are already running the Helm chart logzio/otel-collector-traces or logzio/otel-collector-spm, the new chart logzio-k8s-telemetry will replace it.
Add the logzio-helm repo as follows:
helm repo add logzio-helm https://logzio.github.io/logzio-helm/logzio-k8s-telemetry
helm repo update
Define the logzio-k8s-telemetry service DNS
In most cases, the service name will be logzio-k8s-telemetry.default.svc.cluster.local, where default is the namespace where you deployed the Helm chart and cluster.local is your cluster domain name.
If you are not sure what your cluster domain name is, you can run the following command to look it up:
kubectl run -it --image=k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.3 --restart=Never shell -- \
sh -c 'nslookup kubernetes.default | grep Name | sed "s/Name:\skubernetes.default//"'
This command deploys a small pod that extracts the cluster domain name from your Kubernetes environment. You can remove the pod after it has returned the cluster domain name.
Point your traces exporter to logzio-k8s-telemetry
In the instrumentation code of your application, point the exporter to the service name defined in the previous step, for example http://logzio-k8s-telemetry.default.svc.cluster.local:4317.
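For example, if your application uses the standard OpenTelemetry SDK environment variables, you could set the endpoint in the container spec of your application Deployment. This is a sketch assuming the default namespace and cluster domain from the previous step; the container and image names are hypothetical:
# Excerpt from an application Deployment manifest (container spec only)
containers:
  - name: my-app
    image: my-app:latest
    env:
      # Point the OpenTelemetry SDK at the logzio-k8s-telemetry collector service (OTLP over gRPC)
      - name: OTEL_EXPORTER_OTLP_ENDPOINT
        value: "http://logzio-k8s-telemetry.default.svc.cluster.local:4317"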
Run the Helm deployment code
helm install \
--set logzio.region=<<LOGZIO_ACCOUNT_REGION_CODE>> \
--set logzio.tracing_token=<<TRACING-SHIPPING-TOKEN>> \
--set logzio.metrics_token=<<SPM-METRICS-SHIPPING-TOKEN>> \
--set traces.enabled=true \
--set spm.enabled=true \
logzio-k8s-telemetry logzio-helm/logzio-k8s-telemetry
Replace <<TRACING-SHIPPING-TOKEN>> with the token of the account you want to ship to.
Replace <<LOGZIO_ACCOUNT_REGION_CODE>> with the applicable region code.
Replace <<SPM-METRICS-SHIPPING-TOKEN>> with a token for the Metrics account that is dedicated to your Service Performance Monitoring feature.
<<LOGZIO_ACCOUNT_REGION_CODE>> - Your Logz.io account region code. Defaults to "us". Required only if your Logz.io region is different from US East.
Check Logz.io for your traces
Give your traces some time to get from your system to ours, then open Logz.io.
Customizing Helm chart parameters
Configure customization options
You can use the following options to update the Helm chart parameters:
- Specify parameters using the --set key=value[,key=value] argument to helm install.
- Edit the values.yaml file.
- Override default values with your own my_values.yaml and apply it in the helm install command.
If required, you can configure the following optional parameters:
Parameter | Description |
---|---|
config.processors.spanmetrics.latency_histogram_buckets | Comma-separated list of durations defining the latency histogram buckets. Default: 2ms, 8ms, 50ms, 100ms, 200ms, 500ms, 1s, 5s, 10s |
config.processors.spanmetrics.dimensions | Each metric will have at least the following dimensions, which are common across all spans: Service name, Operation, Span kind, Status code. The input is a comma-separated list of dimensions to add on top of the default dimensions, for example: region,http.url. Each additional dimension is defined by a name from the span's collection of attributes or resource attributes. If the named attribute is missing in the span, this dimension is omitted from the metric. |
config.processors.spanmetrics.dimensions_cache_size | The maximum number of items in metric_key_to_dimensions_cache. Default: 10000. |
config.processors.spanmetrics.aggregation_temporality | Defines the aggregation temporality of the generated metrics, either cumulative or delta. Default: cumulative. |
secrets.SamplingLatency | Threshold for span latency - all traces slower than this value are sampled in. Default: 500. |
secrets.SamplingProbability | Sampling percentage for the probabilistic policy. Default: 10. |
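For example, to override one of these parameters with --set at install time (the cache size value here is illustrative only):
helm install \
  --set logzio.region=<<LOGZIO_ACCOUNT_REGION_CODE>> \
  --set logzio.tracing_token=<<TRACING-SHIPPING-TOKEN>> \
  --set logzio.metrics_token=<<SPM-METRICS-SHIPPING-TOKEN>> \
  --set traces.enabled=true \
  --set spm.enabled=true \
  --set config.processors.spanmetrics.dimensions_cache_size=20000 \
  logzio-k8s-telemetry logzio-helm/logzio-k8s-telemetry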
Example
You can run the logzio-k8s-telemetry chart with your own configuration file, which takes precedence over the chart's values.yaml.
For example, with the following configuration the collector samples all traces that contain a span that completed with an error, in addition to the traces selected by the other policies.
baseCollectorConfig:
processors:
tail_sampling:
policies:
[
{
name: error-in-policy,
type: status_code,
status_code: {status_codes: [ERROR]}
},
{
name: slow-traces-policy,
type: latency,
latency: {threshold_ms: 400}
},
{
name: health-traces,
type: and,
and: {
and_sub_policy:
[
{
name: ping-operation,
type: string_attribute,
string_attribute: { key: http.url, values: [ /health ] }
},
{
name: main-service,
type: string_attribute,
string_attribute: { key: service.name, values: [ main-service ] }
},
{
name: probability-policy-1,
type: probabilistic,
probabilistic: {sampling_percentage: 1}
}
]
}
},
{
name: probability-policy,
type: probabilistic,
probabilistic: {sampling_percentage: 20}
}
]
helm install -f <PATH-TO>/my_values.yaml \
--set logzio.region=<<LOGZIO_ACCOUNT_REGION_CODE>> \
--set logzio.tracing_token=<<TRACING-SHIPPING-TOKEN>> \
--set logzio.metrics_token=<<SPM-METRICS-SHIPPING-TOKEN>> \
--set traces.enabled=true \
--set spm.enabled=true \
logzio-k8s-telemetry logzio-helm/logzio-k8s-telemetry
Replace <PATH-TO> with the path to your custom values.yaml file.
Replace <<TRACING-SHIPPING-TOKEN>> with the token of the account you want to ship to.
Replace <<LOGZIO_ACCOUNT_REGION_CODE>> with the applicable region code.
Replace <<SPM-METRICS-SHIPPING-TOKEN>> with a token for the Metrics account that is dedicated to your Service Performance Monitoring feature.
Uninstalling the Chart
The uninstall command is used to remove all the Kubernetes components associated with the chart and to delete the release.
To uninstall the logzio-k8s-telemetry deployment, use the following command:
helm uninstall logzio-k8s-telemetry
Troubleshooting
This section contains guidelines for handling errors that you may encounter when trying to collect traces with OpenTelemetry.
Problem: No traces are sent
The code has been instrumented, but the traces are not being sent.
Possible cause - Collector not installed
The OpenTelemetry collector may not be installed on your system.
Suggested remedy
Check if you have an OpenTelemetry collector installed and configured to receive traces from your hosts.
Possible cause - Collector path not configured
If the collector is installed, it may not have the correct endpoint configured for the receiver.
Suggested remedy
- Check that the configuration file of the collector lists the following endpoints:
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: "0.0.0.0:4317"
      http:
        endpoint: "0.0.0.0:4318"
- In the instrumentation code, make sure that the endpoint is specified correctly. Refer to our tracing documentation for more on this.
Possible cause - Traces not generated
If the collector is installed and the endpoints are properly configured, the instrumentation code may be incorrect.
Suggested remedy
- Check if the instrumentation can output traces to a console exporter.
- Use a webhook to check if the traces are reaching the output.
- Use the metrics endpoint of the collector (http://<<COLLECTOR-HOST>>:8888/metrics) to see the number of spans received per receiver and the number of spans sent to the Logz.io exporter (see the example below).
- Replace <<COLLECTOR-HOST>> with the address of your collector host, e.g. localhost if the collector is hosted locally.
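For example, assuming the collector runs locally with the default telemetry port, you could filter its self-metrics for receiver and exporter span counters:
# Inspect the collector's self-metrics for received and exported spans
curl -s http://localhost:8888/metrics | grep -E "otelcol_receiver_accepted_spans|otelcol_exporter_sent_spans"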
If the above steps do not work, refer to our tracing documentation and re-instrument the application.
Possible cause - Wrong exporter/protocol/endpoint
If traces are generated but not sent, the collector may be using an incorrect exporter, protocol, or endpoint.
The correct endpoints are:
receivers:
otlp:
protocols:
grpc:
endpoint: "<<COLLECTOR-URL>>:4317"
http:
endpoint: "<<COLLECTOR-URL>>:4318/v1/traces"
Suggested remedy
- Activate debug logs in the configuration file of the collector as follows:
service:
  telemetry:
    logs:
      level: "debug"
Debug logs indicate the status code of the http/https post request.
If the post request is not successful, check if the collector is configured to use the correct exporter, protocol, and/or endpoint.
If the post request is successful, there will be an additional log with the status code 200. If the post request fails, there will be another log with the reason for the failure.
Possible cause - Collector failure
If the debug logs are sent but the traces are still not generated, the collector logs need to be investigated.
Suggested remedy
- On Linux and macOS, see the logs for the collector:
journalctl | grep otelcol
To only see errors:
journalctl | grep otelcol | grep Error
- Otherwise, navigate to the following URL: http://localhost:8888/metrics
This endpoint exposes the collector metrics, which show the events that happen within the collector, such as receiving spans, sending spans, and errors.
Possible cause - Exporter failure
Traces may not be generated if the exporter is not configured properly.
Suggested remedy
If you are unable to export traces to a destination, this may be caused by the following:
- There is a network configuration issue
- The exporter configuration is incorrect
- The destination is unavailable
To investigate this issue:
- Make sure that the exporters and service: pipelines are configured correctly.
- Check the collector logs as well as zpages for potential issues.
- Check your network configuration, such as firewall, DNS, or proxy.
For example, the following metrics can provide information about the exporter:
# HELP otelcol_exporter_enqueue_failed_metric_points Number of metric points failed to be added to the sending queue.
# TYPE otelcol_exporter_enqueue_failed_metric_points counter
otelcol_exporter_enqueue_failed_metric_points{exporter="logging",service_instance_id="0582dab5-efb8-4061-94a7-60abdc9867e1",service_version="latest"} 0
Possible cause - Receiver failure
Traces may not be generated if the receiver is not configured properly.
Suggested remedy
If you are unable to receive data, this may be caused by the following:
- There is a network configuration issue
- The receiver configuration is incorrect
- The receiver is defined in the receivers section, but not enabled in any pipelines
- The client configuration is incorrect
The following metrics can provide information about the receiver:
# HELP otelcol_receiver_accepted_spans Number of spans successfully pushed into the pipeline.
# TYPE otelcol_receiver_accepted_spans counter
otelcol_receiver_accepted_spans{receiver="otlp",service_instance_id="0582dab5-efb8-4061-94a7-60abdc9867e1",service_version="latest",transport="grpc"} 34
# HELP otelcol_receiver_refused_spans Number of spans that could not be pushed into the pipeline.
# TYPE otelcol_receiver_refused_spans counter
otelcol_receiver_refused_spans{receiver="otlp",service_instance_id="0582dab5-efb8-4061-94a7-60abdc9867e1",service_version="latest",transport="grpc"} 0