Deploy this integration to enable instrumentation of your Go application using OpenTelemetry.

Architecture overview

This integration includes:

  • Installing the OpenTelemetry Go instrumentation packages on your application host
  • Installing the OpenTelemetry collector with Logz.io exporter
  • Running your Go application in conjunction with the OpenTelemetry instrumentation

On deployment, the Go instrumentation automatically captures spans from your application and forwards them to the collector, which exports the data to your Logz.io account.

Set up instrumentation for your locally hosted Go application and send traces to Logz.io

Before you begin, you’ll need:

  • A Go application without instrumentation
  • An active account with Logz.io
  • Port 4318 available on your host system
  • A name defined for your tracing service. You will need it to identify the traces in Logz.io.

This integration uses OpenTelemetry Collector Contrib, not the OpenTelemetry Collector Core.

Download the general instrumentation packages

These packages are required to enable instrumentation for your code regardless of the type of application that you need to instrument.

To download these packages, run the following commands from the application directory:

go get -u go.opentelemetry.io/otel
go get -u go.opentelemetry.io/otel/attribute
go get -u go.opentelemetry.io/otel/baggage
go get -u go.opentelemetry.io/otel/propagation
go get -u go.opentelemetry.io/otel/sdk/resource
go get -u go.opentelemetry.io/otel/sdk/trace
go get -u go.opentelemetry.io/otel/semconv/v1.4.0
go get -u go.opentelemetry.io/otel/trace
go get -u go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp

We recommend sending OTLP traces using HTTP. This is why we import the otlptracehttp package.

Download the application-specific instrumentation packages

Depending on the type of your application, you need to download the instrumentation packages specific to it. For example, if your application is an HTTP server, you will need the go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp package. The full list of available packages can be found in the OpenTelemetry contrib directory.

The example below is given for an HTTP server application:

go get -u go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp
Add the instrumentation to the import statement

Add all the packages downloaded in the previous steps to the import statement of your application.

The example below is given for an HTTP server application:

import (
	"context"
	"io"
	"log"
	"net/http"

	"go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp"

	"go.opentelemetry.io/otel"
	"go.opentelemetry.io/otel/attribute"
	"go.opentelemetry.io/otel/baggage"
	"go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp"
	"go.opentelemetry.io/otel/propagation"
	"go.opentelemetry.io/otel/sdk/resource"
	sdktrace "go.opentelemetry.io/otel/sdk/trace"
	semconv "go.opentelemetry.io/otel/semconv/v1.4.0"
	"go.opentelemetry.io/otel/trace"
)
Add the initProvider function

Add the initProvider function to the application code as follows:

func initProvider() func() {
	ctx := context.Background()

	// Replace "test-service" with the name you defined for your tracing service
	res, err := resource.New(ctx,
		resource.WithAttributes(
			semconv.ServiceNameKey.String("test-service"),
		),
	)
	handleErr(err, "failed to create resource")

	traceExporter, err := otlptracehttp.New(ctx,
		otlptracehttp.WithInsecure(),
		otlptracehttp.WithEndpoint("localhost:4318"),
	)
	handleErr(err, "failed to create trace exporter")

	bsp := sdktrace.NewBatchSpanProcessor(traceExporter)
	tracerProvider := sdktrace.NewTracerProvider(
		sdktrace.WithSampler(sdktrace.AlwaysSample()),
		sdktrace.WithResource(res),
		sdktrace.WithSpanProcessor(bsp),
	)
	otel.SetTracerProvider(tracerProvider)
	otel.SetTextMapPropagator(propagation.TraceContext{})
	return func() {
		handleErr(tracerProvider.Shutdown(ctx), "failed to shutdown TracerProvider")
	}
}
Instrument the code in the main function

In the main function of your application, add the following code:

	shutdown := initProvider()
	defer shutdown()

After this, you need to declare the instrumentation according to your application. The example below is given for an HTTP server application. The HTTP handler instructs the tracer to create spans on each request.

	uk := attribute.Key("username")

	helloHandler := func(w http.ResponseWriter, req *http.Request) {
		ctx := req.Context()
		span := trace.SpanFromContext(ctx)
		bag := baggage.FromContext(ctx)
		span.AddEvent("handling this...", trace.WithAttributes(uk.String(bag.Member("username").Value())))

		_, _ = io.WriteString(w, "Hello, world!\n")
	}

	otelHandler := otelhttp.NewHandler(http.HandlerFunc(helloHandler), "Hello")

	http.Handle("/hello", otelHandler)
	err := http.ListenAndServe(":7777", nil)
	if err != nil {
		panic(err)
	}
}

func handleErr(err error, message string) {
	if err != nil {
		log.Fatalf("%s: %v", message, err)
	}
}
Download and configure the OpenTelemetry collector

Create a dedicated directory on the host of your Go application and download the OpenTelemetry collector that is relevant to the operating system of your host.

After downloading the collector, create a configuration file config.yaml with the following parameters:

receivers:
  otlp:
    protocols:
      grpc:
        endpoint: "0.0.0.0:4317"
      http:
        endpoint: "0.0.0.0:4318"

exporters:
  logzio/traces:
    account_token: "<<TRACING-SHIPPING-TOKEN>>"
    region: "<<LOGZIO_ACCOUNT_REGION_CODE>>"

  logging:

processors:
  batch:
  tail_sampling:
    policies:
      [
        {
          name: policy-errors,
          type: status_code,
          status_code: {status_codes: [ERROR]}
        },
        {
          name: policy-slow,
          type: latency,
          latency: {threshold_ms: 1000}
        }, 
        {
          name: policy-random-ok,
          type: probabilistic,
          probabilistic: {sampling_percentage: 10}
        }        
      ]

extensions:
  pprof:
    endpoint: :1777
  zpages:
    endpoint: :55679
  health_check:

service:
  extensions: [health_check, pprof, zpages]
  pipelines:
    traces:
      receivers: [otlp]
      processors: [tail_sampling, batch]
      exporters: [logging, logzio/traces]

Replace <<TRACING-SHIPPING-TOKEN>> with the token of the account you want to ship to.

Replace <<LOGZIO_ACCOUNT_REGION_CODE>> with the applicable region code.

Start the collector

Run the following command from the directory of your application file:

<path/to>/otelcontribcol_<VERSION-NAME> --config ./config.yaml
  • Replace <path/to> with the path to the directory where you downloaded the collector.
  • Replace <VERSION-NAME> with the version name of the collector applicable to your system, e.g. otelcontribcol_darwin_amd64.
Run the application

Run the application to generate traces:

go run <YOUR-APPLICATION-FILE-NAME>.go
Check Logz.io for your traces

Give your traces some time to get from your system to ours, and then open Tracing.

Set up instrumentation for your Go application using Docker and send traces to Logz.io

This integration enables you to instrument your Go application and run a containerized OpenTelemetry collector to send your traces to Logz.io. If your application also runs in a Docker container, make sure that both the application and collector containers are on the same network.
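
One way to do this is with a user-defined Docker network; the network name otel-network and the container names below are illustrative, not part of this integration:

docker network create otel-network
docker run -d --network otel-network --name otel-collector \
-v <PATH-TO>/config.yaml:/etc/otelcol-contrib/config.yaml \
otel/opentelemetry-collector-contrib:0.78.0
docker run -d --network otel-network --name go-app <YOUR-APP-IMAGE>

On this network, the application container reaches the collector at otel-collector:4318 instead of localhost:4318.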

Before you begin, you’ll need:

  • A Go application without instrumentation
  • An active account with Logz.io
  • Port 4317 available on your host system
  • A name defined for your tracing service. You will need it to identify the traces in Logz.io.
Download the general instrumentation packages

These packages are required to enable instrumentation for your code regardless of the type of application that you need to instrument.

To download these packages, run the following commands from the application directory:

go get -u go.opentelemetry.io/otel
go get -u go.opentelemetry.io/otel/attribute
go get -u go.opentelemetry.io/otel/baggage
go get -u go.opentelemetry.io/otel/propagation
go get -u go.opentelemetry.io/otel/sdk/resource
go get -u go.opentelemetry.io/otel/sdk/trace
go get -u go.opentelemetry.io/otel/semconv/v1.4.0
go get -u go.opentelemetry.io/otel/trace
go get -u go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp

We recommend sending OTLP traces using HTTP. This is why we import the otlptracehttp package.

Download the application-specific instrumentation packages

Depending on the type of your application, you need to download the instrumentation packages specific to it. For example, if your application is an HTTP server, you will need the go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp package. The full list of available packages can be found in the OpenTelemetry contrib directory.

The example below is given for an HTTP server application:

go get -u go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp
Add the instrumentation to the import statement

Add all the packages downloaded in the previous steps to the import statement of your application.

The example below is given for an HTTP server application:

import (
	"context"
	"io"
	"log"
	"net/http"

	"go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp"

	"go.opentelemetry.io/otel"
	"go.opentelemetry.io/otel/attribute"
	"go.opentelemetry.io/otel/baggage"
	"go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp"
	"go.opentelemetry.io/otel/propagation"
	"go.opentelemetry.io/otel/sdk/resource"
	sdktrace "go.opentelemetry.io/otel/sdk/trace"
	semconv "go.opentelemetry.io/otel/semconv/v1.4.0"
	"go.opentelemetry.io/otel/trace"
)
Add the initProvider function

Add the initProvider function to the application code as follows:

func initProvider() func() {
	ctx := context.Background()

	// Replace "test-service" with the name you defined for your tracing service
	res, err := resource.New(ctx,
		resource.WithAttributes(
			semconv.ServiceNameKey.String("test-service"),
		),
	)
	handleErr(err, "failed to create resource")

	traceExporter, err := otlptracehttp.New(ctx,
		otlptracehttp.WithInsecure(),
		otlptracehttp.WithEndpoint("localhost:4318"),
	)
	handleErr(err, "failed to create trace exporter")

	bsp := sdktrace.NewBatchSpanProcessor(traceExporter)
	tracerProvider := sdktrace.NewTracerProvider(
		sdktrace.WithSampler(sdktrace.AlwaysSample()),
		sdktrace.WithResource(res),
		sdktrace.WithSpanProcessor(bsp),
	)
	otel.SetTracerProvider(tracerProvider)
	otel.SetTextMapPropagator(propagation.TraceContext{})
	return func() {
		handleErr(tracerProvider.Shutdown(ctx), "failed to shutdown TracerProvider")
	}
}
Instrument the code in the main function

In the main function of your application, add the following code:

	shutdown := initProvider()
	defer shutdown()

After this, you need to declare the instrumentation according to your application. The example below is given for an HTTP server application. The HTTP handler instructs the tracer to create spans on each request.

	uk := attribute.Key("username")

	helloHandler := func(w http.ResponseWriter, req *http.Request) {
		ctx := req.Context()
		span := trace.SpanFromContext(ctx)
		bag := baggage.FromContext(ctx)
		span.AddEvent("handling this...", trace.WithAttributes(uk.String(bag.Member("username").Value())))

		_, _ = io.WriteString(w, "Hello, world!\n")
	}

	otelHandler := otelhttp.NewHandler(http.HandlerFunc(helloHandler), "Hello")

	http.Handle("/hello", otelHandler)
	err := http.ListenAndServe(":7777", nil)
	if err != nil {
		panic(err)
	}
}

func handleErr(err error, message string) {
	if err != nil {
		log.Fatalf("%s: %v", message, err)
	}
}
Pull the Docker image for the OpenTelemetry collector
docker pull otel/opentelemetry-collector-contrib:0.78.0
Create a configuration file

Create a file config.yaml with the following content:

receivers:
  otlp:
    protocols:
      grpc:
        endpoint: "0.0.0.0:4317"
      http:
        endpoint: "0.0.0.0:4318"

exporters:
  logzio/traces:
    account_token: "<<TRACING-SHIPPING-TOKEN>>"
    region: "<<LOGZIO_ACCOUNT_REGION_CODE>>"

  logging:

processors:
  batch:
  tail_sampling:
    policies:
      [
        {
          name: policy-errors,
          type: status_code,
          status_code: {status_codes: [ERROR]}
        },
        {
          name: policy-slow,
          type: latency,
          latency: {threshold_ms: 1000}
        }, 
        {
          name: policy-random-ok,
          type: probabilistic,
          probabilistic: {sampling_percentage: 10}
        }        
      ]

extensions:
  pprof:
    endpoint: :1777
  zpages:
    endpoint: :55679
  health_check:

service:
  extensions: [health_check, pprof, zpages]
  pipelines:
    traces:
      receivers: [otlp]
      processors: [tail_sampling, batch]
      exporters: [logging, logzio/traces]

Replace <<TRACING-SHIPPING-TOKEN>> with the token of the account you want to ship to.

Replace <<LOGZIO_ACCOUNT_REGION_CODE>> with the applicable region code.

The tail_sampling processor defines which traces to sample after all the spans in a request have completed. By default, this configuration keeps all traces that contain a span that completed with an error, all traces slower than 1000 ms, and 10% of the remaining traces.

You can add more policy configurations to the processor, as in the example below. For more on this, refer to the OpenTelemetry documentation.
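
For instance, to always keep traces from one specific service, you could append a string_attribute policy to the policies array; the policy name and the service value my-critical-service below are placeholders:

        {
          name: policy-critical-service,
          type: string_attribute,
          string_attribute: {key: service.name, values: [my-critical-service]}
        }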

The configurable parameters in the Logz.io default configuration are:

Parameter            Description                                                                       Default
threshold_ms         Threshold for the span latency - all traces slower than this value are sampled.  1000
sampling_percentage  Sampling percentage for the probabilistic policy.                                 10

If you already have an OpenTelemetry installation, add the following parameters to the configuration file of your existing OpenTelemetry collector:

  • Under the exporters list:
  logzio/traces:
    account_token: <<TRACING-SHIPPING-TOKEN>>
    region: <<LOGZIO_ACCOUNT_REGION_CODE>>
  • Under the service list:
  extensions: [health_check, pprof, zpages]
  pipelines:
    traces:
      receivers: [otlp]
      processors: [tail_sampling, batch]
      exporters: [logzio/traces]

Replace <<TRACING-SHIPPING-TOKEN>> with the token of the account you want to ship to.

Replace <<LOGZIO_ACCOUNT_REGION_CODE>> with the applicable region code.

An example configuration file looks as follows:

receivers:  
  otlp:
    protocols:
      grpc:
      http:

exporters:
  logzio/traces:
    account_token: "<<TRACING-SHIPPING-TOKEN>>"
    region: "<<LOGZIO_ACCOUNT_REGION_CODE>>"

processors:
  batch:
  tail_sampling:
    policies:
      [
        {
          name: policy-errors,
          type: status_code,
          status_code: {status_codes: [ERROR]}
        },
        {
          name: policy-slow,
          type: latency,
          latency: {threshold_ms: 1000}
        }, 
        {
          name: policy-random-ok,
          type: probabilistic,
          probabilistic: {sampling_percentage: 10}
        }        
      ]

extensions:
  pprof:
    endpoint: :1777
  zpages:
    endpoint: :55679
  health_check:

service:
  extensions: [health_check, pprof, zpages]
  pipelines:
    traces:
      receivers: [otlp]
      processors: [tail_sampling, batch]
      exporters: [logzio/traces]

Replace <<TRACING-SHIPPING-TOKEN>> with the token of the account you want to ship to.

Replace <<LOGZIO_ACCOUNT_REGION_CODE>> with the applicable region code.

The tail_sampling processor defines which traces to sample after all the spans in a request have completed. By default, this configuration keeps all traces that contain a span that completed with an error, all traces slower than 1000 ms, and 10% of the remaining traces.

You can add more policy configurations to the processor, as in the example below. For more on this, refer to the OpenTelemetry documentation.
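
For example, a rate_limiting policy caps the number of spans sampled per second; the policy name and the limit of 100 spans per second below are placeholders:

        {
          name: policy-rate-limit,
          type: rate_limiting,
          rate_limiting: {spans_per_second: 100}
        }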

The configurable parameters in the Logz.io default configuration are:

Parameter            Description                                                                       Default
threshold_ms         Threshold for the span latency - all traces slower than this value are sampled.  1000
sampling_percentage  Sampling percentage for the probabilistic policy.                                 10
Run the container

Mount config.yaml as a volume in the docker run command and run it as follows.

Linux
docker run  \
--network host \
-v <PATH-TO>/config.yaml:/etc/otelcol-contrib/config.yaml \
otel/opentelemetry-collector-contrib:0.78.0

Replace <PATH-TO> with the path to the config.yaml file on your system.

Windows
docker run  \
-v <PATH-TO>/config.yaml:/etc/otelcol-contrib/config.yaml \
-p 55678-55680:55678-55680 \
-p 1777:1777 \
-p 9411:9411 \
-p 9943:9943 \
-p 6831:6831 \
-p 6832:6832 \
-p 14250:14250 \
-p 14268:14268 \
-p 4317:4317 \
-p 55681:55681 \
otel/opentelemetry-collector-contrib:0.78.0

Replace <PATH-TO> with the path to the config.yaml file on your system.

Run the application

Normally, when you run the OpenTelemetry collector in a Docker container, your application runs in separate containers on the same host. In this case, you need to make sure that all your application containers share the same network as the collector container. One way to achieve this is to run all containers, including the collector, from a single Docker Compose configuration: Docker Compose automatically places all containers in the configuration on the same network.
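
A minimal docker-compose.yml sketch of such a setup; the service names and the application build context are assumptions for illustration:

version: "3"
services:
  otel-collector:
    image: otel/opentelemetry-collector-contrib:0.78.0
    volumes:
      - ./config.yaml:/etc/otelcol-contrib/config.yaml
    ports:
      - "4317:4317"
      - "4318:4318"
  go-app:
    build: .    # assumes a Dockerfile for your Go application in this directory
    depends_on:
      - otel-collector

With this layout the collector is reachable from the application container as otel-collector:4318, so adjust the endpoint in otlptracehttp.WithEndpoint accordingly.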

Run the application to generate traces:

go run <YOUR-APPLICATION-FILE-NAME>.go
Check Logz.io for your traces

Give your traces some time to get from your system to ours, and then open Tracing.

Overview

You can use a Helm chart to ship traces to Logz.io via the OpenTelemetry collector. Helm is a tool for managing packages of pre-configured Kubernetes resources, called charts.

logzio-k8s-telemetry allows you to ship traces from your Kubernetes cluster to Logz.io with the OpenTelemetry collector.

This chart is a fork of the opentelemetry-collector Helm chart. The main repository for Logz.io Helm charts is logzio-helm.

This integration uses OpenTelemetry Collector Contrib, not the OpenTelemetry Collector Core.

Standard configuration

Deploy the Helm chart

Add the logzio-helm repo as follows:

helm repo add logzio-helm https://logzio.github.io/logzio-helm
helm repo update
Run the Helm deployment code
helm install  \
--set config.exporters.logzio.region=<<LOGZIO_ACCOUNT_REGION_CODE>> \
--set config.exporters.logzio.account_token=<<TRACING-SHIPPING-TOKEN>> \
logzio-k8s-telemetry logzio-helm/logzio-k8s-telemetry

Replace <<TRACING-SHIPPING-TOKEN>> with the token of the account you want to ship to.

Replace <<LOGZIO_ACCOUNT_REGION_CODE>> with your Logz.io account region code. See the available regions.

Define the logzio-k8s-telemetry service DNS

In most cases, the service name will be logzio-k8s-telemetry.default.svc.cluster.local, where default is the namespace where you deployed the Helm chart and cluster.local is your cluster domain name.

If you are not sure what your cluster domain name is, you can run the following command to look it up:

kubectl run -it --image=k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.3 --restart=Never shell -- \
sh -c 'nslookup kubernetes.default | grep Name | sed "s/Name:\skubernetes.default//"'

This command deploys a small pod that extracts the cluster domain name from your Kubernetes environment. You can remove the pod once it has returned the cluster domain name.
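
Once you have the service DNS, point the OTLP exporter in your instrumentation code at it. A sketch of the relevant lines from initProvider, assuming the chart is deployed in the default namespace of a cluster.local cluster:

	traceExporter, err := otlptracehttp.New(ctx,
		otlptracehttp.WithInsecure(),
		// service DNS of the collector deployed by the Helm chart, port 4318 for OTLP over HTTP
		otlptracehttp.WithEndpoint("logzio-k8s-telemetry.default.svc.cluster.local:4318"),
	)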

Check Logz.io for your traces

Give your traces some time to get from your system to ours, then open Logz.io.

Customizing Helm chart parameters

Configure customization options

You can use the following options to update the Helm chart parameters:

  • Specify parameters using the --set key=value[,key=value] argument to helm install.

  • Edit the values.yaml.

  • Override the default values with your own my_values.yaml and apply it in the helm install command.

If required, you can add the following optional parameters as environment variables:

Parameter                    Description
secrets.SamplingLatency      Threshold for the span latency - all traces slower than this value are sampled. Default: 500.
secrets.SamplingProbability  Sampling percentage for the probabilistic policy. Default: 10.
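
For example, these parameters can be passed at install time with --set, following the helm install command shown earlier; the values below are the defaults:

helm install \
--set secrets.SamplingLatency=500 \
--set secrets.SamplingProbability=10 \
--set config.exporters.logzio.region=<<LOGZIO_ACCOUNT_REGION_CODE>> \
--set config.exporters.logzio.account_token=<<TRACING-SHIPPING-TOKEN>> \
logzio-k8s-telemetry logzio-helm/logzio-k8s-telemetry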
Example

You can run the logzio-k8s-telemetry chart with your custom configuration file that takes precedence over the values.yaml of the chart.

For example:

With this example configuration, the collector samples all traces that contain at least one span with an error.

baseCollectorConfig:
  processors:
    tail_sampling:
      policies:
        [
          {
            name: error-in-policy,
            type: status_code,
            status_code: {status_codes: [ERROR]}
          },
          {
            name: slow-traces-policy,
            type: latency,
            latency: {threshold_ms: 400}
          },
          {
            name: health-traces,
            type: and,
            and: {
              and_sub_policy:
              [
                {
                  name: ping-operation,
                  type: string_attribute,
                  string_attribute: { key: http.url, values: [ /health ] }
                },
                {
                  name: main-service,
                  type: string_attribute,
                  string_attribute: { key: service.name, values: [ main-service ] }
                },
                {
                  name: probability-policy-1,
                  type: probabilistic,
                  probabilistic: {sampling_percentage: 1}
                }
              ]
            }
          },
          {
            name: probability-policy,
            type: probabilistic,
            probabilistic: {sampling_percentage: 20}
          }
        ] 
Then apply your custom values file with helm install:

helm install -f <PATH-TO>/my_values.yaml \
--set logzio.region=<<LOGZIO_ACCOUNT_REGION_CODE>> \
--set logzio.tracing_token=<<TRACING-SHIPPING-TOKEN>> \
--set traces.enabled=true \
logzio-k8s-telemetry logzio-helm/logzio-k8s-telemetry

Replace <PATH-TO> with the path to your custom values.yaml file.

Replace <<TRACING-SHIPPING-TOKEN>> with the token of the account you want to ship to.

Replace <<LOGZIO_ACCOUNT_REGION_CODE>> with the applicable region code.

Uninstalling the Chart

The uninstall command is used to remove all the Kubernetes components associated with the chart and to delete the release.

To uninstall the logzio-k8s-telemetry deployment, use the following command:

helm uninstall logzio-k8s-telemetry

Troubleshooting

This section contains guidelines for handling errors that you may encounter when trying to collect traces with OpenTelemetry.

Problem: No traces are sent

The code has been instrumented, but the traces are not being sent.

Possible cause - Collector not installed

The OpenTelemetry collector may not be installed on your system.

Suggested remedy

Check if you have an OpenTelemetry collector installed and configured to receive traces from your hosts.

Possible cause - Collector path not configured

If the collector is installed, it may not have the correct endpoint configured for the receiver.

Suggested remedy

  1. Check that the configuration file of the collector lists the following endpoints:

    receivers:
      otlp:
        protocols:
          grpc:
            endpoint: "0.0.0.0:4317"
          http:
            endpoint: "0.0.0.0:4318"
    
  2. In the instrumentation code, make sure that the endpoint is specified correctly; see the snippet below. Refer to our tracing documentation for more on this.
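
For reference, a sketch of the exporter configuration used in the instrumentation shown earlier; the HTTP exporter targets port 4318, while a gRPC exporter would target 4317:

	traceExporter, err := otlptracehttp.New(ctx,
		otlptracehttp.WithInsecure(),
		// host:port only - no scheme or path
		otlptracehttp.WithEndpoint("localhost:4318"),
	)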

Possible cause - Traces not generated

If the collector is installed and the endpoints are properly configured, the instrumentation code may be incorrect.

Suggested remedy

  1. Check if the instrumentation can output traces to a console exporter (see the sketch after this list).
  2. Use a webhook to check if the traces are going to the output.
  3. Use the metrics endpoint of the collector (http://<<COLLECTOR-HOST>>:8888/metrics) to see the number of spans received per receiver and the number of spans sent to the Logz.io exporter.
  • Replace <<COLLECTOR-HOST>> with the address of your collector host, e.g. localhost, if the collector is hosted locally.
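
For step 1, a minimal sketch that swaps the OTLP exporter in initProvider for a console exporter, using the go.opentelemetry.io/otel/exporters/stdout/stdouttrace package (add it with go get and to your import statement first):

	// Print spans to the console instead of sending them to the collector.
	traceExporter, err := stdouttrace.New(stdouttrace.WithPrettyPrint())
	handleErr(err, "failed to create stdout trace exporter")

If spans appear in the console but never reach Logz.io, the instrumentation itself works and the problem lies between the application and the collector.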

If the above steps do not work, refer to our tracing documentation and re-instrument the application.

Possible cause - Wrong exporter/protocol/endpoint

If traces are generated but not sent, the collector may be using an incorrect exporter, protocol, or endpoint.

The correct endpoints are:

   receivers:
     otlp:
       protocols:
         grpc:
           endpoint: "<<COLLECTOR-URL>>:4317"
         http:
           endpoint: "<<COLLECTOR-URL>>:4318/v1/traces"

Suggested remedy

  1. Activate debug logs in the configuration file of the collector as follows:

    service:
      telemetry:
        logs:
          level: "debug"
    

Debug logs indicate the status code of the HTTP/HTTPS POST request.

If the POST request is not successful, check whether the collector is configured to use the correct exporter, protocol, and endpoint.

If the POST request is successful, an additional log appears with status code 200. If the POST request fails, another log states the reason for the failure.

Possible cause - Collector failure

If debug logs are emitted but the traces still do not arrive, investigate the collector logs.

Suggested remedy

  1. On Linux and macOS, view the collector logs:

    journalctl | grep otelcol
    

    To only see errors:

    journalctl | grep otelcol | grep Error
    
  2. Otherwise, navigate to the following URL - http://localhost:8888/metrics

This endpoint exposes the collector metrics, which show the events happening inside the collector: spans received, spans sent, and errors.
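
For example, you can filter these metrics for the receiver and exporter counters. This assumes the collector runs locally with the default telemetry port:

curl -s http://localhost:8888/metrics | grep -E 'otelcol_(receiver|exporter)'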

Possible cause - Exporter failure

Traces may not be generated if the exporter is not configured properly.

Suggested remedy

If you are unable to export traces to a destination, this may be caused by the following:

  • There is a network configuration issue
  • The exporter configuration is incorrect
  • The destination is unavailable

To investigate this issue:

  1. Make sure that the exporters and service: pipelines are configured correctly.
  2. Check the collector logs as well as zpages for potential issues.
  3. Check your network configuration, such as firewall, DNS, or proxy.

For example, these metrics provide information about the exporter:

# HELP otelcol_exporter_enqueue_failed_metric_points Number of metric points failed to be added to the sending queue.

# TYPE otelcol_exporter_enqueue_failed_metric_points counter
otelcol_exporter_enqueue_failed_metric_points{exporter="logging",service_instance_id="0582dab5-efb8-4061-94a7-60abdc9867e1",service_version="latest"} 0

Possible cause - Receiver failure

Traces may not be generated if the receiver is not configured properly.

Suggested remedy

If you are unable to receive data, this may be caused by the following:

  • There is a network configuration issue
  • The receiver configuration is incorrect
  • The receiver is defined in the receivers section, but not enabled in any pipelines
  • The client configuration is incorrect

These metrics can provide information about the receiver:

# HELP otelcol_receiver_accepted_spans Number of spans successfully pushed into the pipeline.

# TYPE otelcol_receiver_accepted_spans counter
otelcol_receiver_accepted_spans{receiver="otlp",service_instance_id="0582dab5-efb8-4061-94a7-60abdc9867e1",service_version="latest",transport="grpc"} 34


# HELP otelcol_receiver_refused_spans Number of spans that could not be pushed into the pipeline.

# TYPE otelcol_receiver_refused_spans counter
otelcol_receiver_refused_spans{receiver="otlp",service_instance_id="0582dab5-efb8-4061-94a7-60abdc9867e1",service_version="latest",transport="grpc"} 0