
Releases: open-telemetry/opentelemetry-operator

Release v0.114.0

27 Nov 15:21
54caa0e

💡 Enhancements 💡

  • collector: Create RBAC rules for the k8s_cluster receiver automatically. (#3427)

  • collector: Create RBAC rules for the k8sobjects receiver automatically. (#3429)

  • collector: Add a warning message for when a created collector needs extra RBAC permissions and its service account doesn't have them. (#3432)

  • target allocator: Added the allocation_fallback_strategy option as a fallback strategy for the per-node allocation strategy; it can be enabled with the feature flag operator.targetallocator.fallbackstrategy (#3477)

    If using the per-node allocation strategy, targets that are not attached to a node will not
    be allocated. Because the per-node strategy is required when running as a daemonset, some
    targets cannot be assigned under a daemonset deployment.
    The feature flag operator.targetallocator.fallbackstrategy enables consistent-hashing as the
    fallback allocation strategy; at this time it applies only to "per-node".
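
    A sketch of enabling the fallback (the flag name comes from this entry; the Deployment layout is the standard operator manifest, and the container name is illustrative):

    ```yaml
    # operator Deployment fragment (sketch)
    spec:
      template:
        spec:
          containers:
            - name: manager  # operator container
              args:
                - --feature-gates=operator.targetallocator.fallbackstrategy
    ```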

  • auto-instrumentation: updated node auto-instrumentation dependencies to the latest version (#3476)

    • auto-instrumentations-node to 0.53.0
    • exporter-metrics-otlp-grpc to 0.55.0
    • exporter-prometheus to 0.55.0
  • operator: Replace references to gcr.io/kubebuilder/kube-rbac-proxy with quay.io/brancz/kube-rbac-proxy (#3485)

🧰 Bug fixes 🧰

  • operator: Operator pod crashed if the Service Monitor for the operator metrics had already been created by another operator pod. (#3446)

    The operator failed when its pod restarted and the Service Monitor for operator metrics had already been created by another operator pod.
    To fix this, the operator now sets the owner reference on the Service Monitor to itself and checks whether the Service Monitor already exists.

  • auto-instrumentation: Bump base memory requirements for python and go (#3479)


Release v0.113.1

27 Nov 18:04
6ae647a

This release fixes an important bug that caused the operator to crash when prometheus-operator CRDs were present in the cluster. See #3446 for details. This fix is also present in v0.114.0.

🧰 Bug fixes 🧰

  • operator: Operator pod crashed if the Service Monitor for the operator metrics had already been created by another operator pod. (#3446)
    The operator failed when its pod restarted and the Service Monitor for operator metrics had already been created by another operator pod.
    To fix this, the operator now sets the owner reference on the Service Monitor to itself and checks whether the Service Monitor already exists.


Release v0.113.0

08 Nov 16:18
99b6c6f

💡 Enhancements 💡

  • operator: Programmatically create the ServiceMonitor for the operator metrics endpoint, ensuring correct namespace handling and dynamic configuration. (#3370)
    Previously, the ServiceMonitor was created statically from a manifest file, causing failures when the
    operator was deployed in a non-default namespace. This enhancement ensures automatic adjustment of the
    serverName and seamless metrics scraping.
  • collector: Create RBAC rules for the k8s_events receiver automatically. (#3420)
  • collector: Inject the K8S_NODE_NAME environment variable for the Kubelet Stats Receiver. (#2779)
  • auto-instrumentation: add config for installing musl based auto-instrumentation for Python (#2264)
  • auto-instrumentation: Support http/json and http/protobuf via OTEL_EXPORTER_OTLP_PROTOCOL environment variable in addition to default grpc for exporting traces (#3412)
  • target allocator: enables support for pulling scrape config and probe CRDs in the target allocator (#1842)
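
    The protocol selection above can be sketched on the Instrumentation CR via its env support (the resource name is illustrative):

    ```yaml
    apiVersion: opentelemetry.io/v1alpha1
    kind: Instrumentation
    metadata:
      name: my-instrumentation  # illustrative
    spec:
      env:
        - name: OTEL_EXPORTER_OTLP_PROTOCOL
          value: http/protobuf  # instead of the default grpc
    ```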

🧰 Bug fixes 🧰

  • collector: Fix mutation of deployments, statefulsets, and daemonsets, allowing fields to be removed on update (#2947)


Release v0.112.0

30 Oct 15:16
8adc2f5

💡 Enhancements 💡

  • auto-instrumentation: Support configuring Java auto-instrumentation when runtime configuration is provided from a ConfigMap or Secret. (#1814)
    This change allows users to configure JAVA_TOOL_OPTIONS in a ConfigMap or Secret when the name of the variable is defined in the pod spec.
    In this case the operator sets another JAVA_TOOL_OPTIONS that references the original value,
    e.g. JAVA_TOOL_OPTIONS=$(JAVA_TOOL_OPTIONS) -javaagent:/otel-auto-instrumentation-java/javaagent.jar.
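
    A sketch of the setup this supports (the container name my-app and ConfigMap name java-opts are illustrative): the pod defines JAVA_TOOL_OPTIONS from a ConfigMap, and the operator appends the javaagent by referencing the original value:

    ```yaml
    # Pod spec fragment (sketch): JAVA_TOOL_OPTIONS provided via ConfigMap
    containers:
      - name: my-app              # illustrative
        env:
          - name: JAVA_TOOL_OPTIONS
            valueFrom:
              configMapKeyRef:
                name: java-opts   # illustrative ConfigMap
                key: JAVA_TOOL_OPTIONS
    # After injection the operator conceptually sets:
    #   JAVA_TOOL_OPTIONS=$(JAVA_TOOL_OPTIONS) -javaagent:/otel-auto-instrumentation-java/javaagent.jar
    ```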

  • auto-instrumentation: Adds VolumeClaimTemplate field to Instrumentation spec to enable user-definable ephemeral volumes for auto-instrumentation. (#3267)

  • collector: Add support for persistentVolumeClaimRetentionPolicy field (#3305)

  • auto-instrumentation: build musl based auto-instrumentation in Python docker image (#2264)

  • auto-instrumentation: Insert an empty line before the added Include ...opentemetry_agent.conf directive, as a protection measure against an httpd.conf without a blank last line (#3401)

  • collector: Add automatic RBAC creation for the kubeletstats receiver. (#3155)

  • auto-instrumentation: Add Nodejs auto-instrumentation image builds for linux/s390x,linux/ppc64le. (#3322)

🧰 Bug fixes 🧰

  • target allocator: Permission check fixed for the serviceaccount of the target allocator (#3380)
  • target allocator: Change docker image to run as non-root (#3378)


Release v0.111.0

21 Oct 14:39
fec94c8

💡 Enhancements 💡

  • auto-instrumentation: set OTEL_LOGS_EXPORTER env var to otlp in python instrumentation (#3330)

  • collector: Expose the Collector telemetry endpoint by default. (#3361)

    The collector v0.111.0 changed the default binding of the telemetry metrics endpoint from 0.0.0.0 to localhost.
    To avoid any disruption, the operator falls back to 0.0.0.0:{PORT} as the default address.
    Details can be found here: opentelemetry-collector#11251
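
    The fallback corresponds to generated collector telemetry settings along these lines (a sketch; 8888 is the conventional metrics port, the exact port depends on configuration):

    ```yaml
    service:
      telemetry:
        metrics:
          address: 0.0.0.0:8888  # operator fallback; the collector's own default moved to localhost
    ```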

  • auto-instrumentation: Add support for specifying exporter TLS certificates in auto-instrumentation. (#3338)

    The Instrumentation CR now supports specifying TLS certificates for the exporter:

    spec:
      exporter:
        endpoint: https://otel-collector:4317
        tls:
          secretName: otel-tls-certs
          configMapName: otel-ca-bundle
          ca_file: ca.crt    # present in otel-ca-bundle
          cert_file: tls.crt # present in otel-tls-certs
          key_file: tls.key  # present in otel-tls-certs
  • collector: Add native sidecar injection behind a feature gate which is disabled by default. (#2376)

    Native sidecars are supported since Kubernetes version 1.28 and are available by default since 1.29.
    To use native sidecars on Kubernetes v1.28, make sure the "SidecarContainers" feature gate on Kubernetes is enabled.
    If native sidecars are available, the operator can be advised to use them by adding
    --feature-gates=operator.sidecarcontainers.native to the Operator args.
    In the future this may become available as a deployment mode on the Collector CR. See #3356

  • target allocator, collector: Enable mTLS between the TA and collector for passing secrets in the scrape_config securely (#1669)

    This change enables mTLS between the collector and the target allocator (requires cert-manager).
    This is necessary for passing secrets securely from the TA to the collector for scraping endpoints that require authentication.
    Use the operator.targetallocator.mtls feature gate to enable this feature. See the target allocator documentation for more details.
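
    A sketch of enabling the gate (the flag name comes from this entry; args placement follows the usual operator Deployment layout):

    ```yaml
    # operator container args fragment (sketch)
    args:
      - --feature-gates=operator.targetallocator.mtls
    ```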

🧰 Bug fixes 🧰

  • collector-webhook: Fixed validation of stabilizationWindowSeconds in autoscaler behaviour (#3345)

    The validation of stabilizationWindowSeconds in autoscaler.behaviour.scale[Up|Down] incorrectly rejected 0 as an invalid value.
    This has been fixed to ensure that the value is validated correctly (it should be >=0 and <=3600), and the error message has been updated to reflect this.
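
    With the fix, a zero window now validates. A sketch of the relevant OpenTelemetryCollector fragment (field spelling follows the Kubernetes HPA behavior API; scaleDown is one of the two affected paths):

    ```yaml
    spec:
      autoscaler:
        behavior:
          scaleDown:
            stabilizationWindowSeconds: 0  # previously rejected; valid range is 0-3600
    ```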


Release v0.110.0

09 Oct 15:37
65b40cb

🛑 Breaking changes 🛑

  • auto-instrumentation: Enable multi instrumentation by default. (#3090)

    Starting with this release, the OpenTelemetry Operator now enables multi-instrumentation by default.
    This enhancement allows instrumentation of multiple containers in a pod with language-specific configurations.

    Key Changes:

    • Single Instrumentation (Default Behavior): If no container names are specified using the
      instrumentation.opentelemetry.io/container-names annotation, instrumentation will be applied to the first container in
      the pod spec by default. This only applies when single instrumentation injection is configured.
    • Multi-Container Pods: In scenarios where different containers in a pod use distinct technologies, users must specify the
      container(s) for instrumentation using language-specific annotations. Without this specification, the default behavior may
      not work as expected for multi-container environments.

    Compatibility:

    • Users already utilizing the instrumentation.opentelemetry.io/container-names annotation do not need to take any action.
      Their existing setup will continue to function as before.
    • Important: Users who attempt to configure both instrumentation.opentelemetry.io/container-names and language-specific annotations
      (for multi-instrumentation) simultaneously will encounter an error, as this configuration is not supported.
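
    A sketch of the language-specific annotations used for multi-container pods (annotation keys follow the operator's inject-<language> and <language>-container-names convention; the container name backend is illustrative):

    ```yaml
    metadata:
      annotations:
        instrumentation.opentelemetry.io/inject-java: "true"
        instrumentation.opentelemetry.io/java-container-names: "backend"  # instrument only this container
    ```
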
  • collector: Remove ComponentUseLocalHostAsDefaultHost collector feature gate. (#3306)

    This change may break setups where receiver endpoints are not explicitly configured to listen on e.g. 0.0.0.0.
    Change #3333 attempts to address this issue for a known set of components.
    The operator performs the adjustment for the following receivers:

    • otlp
    • skywalking
    • jaeger
    • loki
    • opencensus
    • zipkin
    • tcplog
    • udplog
    • fluentforward
    • statsd
    • awsxray/UDP
    • carbon
    • collectd
    • sapm
    • signalfx
    • splunk_hec
    • wavefront
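
    For the listed receivers, the operator's adjustment amounts to pinning an explicit endpoint. A minimal sketch for otlp (assuming the conventional gRPC port 4317; the exact rendering may differ):

    ```yaml
    receivers:
      otlp:
        protocols:
          grpc:
            endpoint: 0.0.0.0:4317  # set explicitly by the operator when omitted
    ```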

💡 Enhancements 💡

  • auto-instrumentation, collector: Add a must-gather utility to help troubleshoot (#3149)

    The new utility is available as part of a new container image.

    To use the image in a running OpenShift cluster, you need to run the following command:

    oc adm must-gather --image=ghcr.io/open-telemetry/opentelemetry-operator/must-gather -- /usr/bin/must-gather --operator-namespace opentelemetry-operator-system

    See the README for more details.

  • collector: set default address for all parsed receivers (#3126)

    This feature is enabled by default. It can be disabled by specifying
    --feature-gates=-operator.collector.default.config.

  • operator: Use 0.0.0.0 as otlp receiver default address (#3126)

  • collector: Add flag to disable components when operator runs on FIPS enabled cluster. (#3315)
    Flag --fips-disabled-components=receiver.otlp,exporter.otlp,processor.batch,extension.oidc can be used to disable
    components when operator runs on FIPS enabled cluster. The operator uses /proc/sys/crypto/fips_enabled to check
    if FIPS is enabled.

  • collector: Improves healthcheck parsing capabilities, allowing for future extensions to configure a healthcheck other than the v1 healthcheck extension. (#3184)

  • auto-instrumentation: Add support for k8s labels such as app.kubernetes.io/name for resource attributes (#3112)

    You can opt-in as follows:

    apiVersion: opentelemetry.io/v1alpha1
    kind: Instrumentation
    metadata:
      name: my-instrumentation
    spec:
      defaults:
        useLabelsForResourceAttributes: true

    The following labels are supported:

    • app.kubernetes.io/name becomes service.name
    • app.kubernetes.io/version becomes service.version
    • app.kubernetes.io/part-of becomes service.namespace
    • app.kubernetes.io/instance becomes service.instance.id

🧰 Bug fixes 🧰

  • auto-instrumentation: Fix ApacheHttpd, Nginx and SDK injectors to honour their container-names annotations. (#3313)

    This is a breaking change if anyone is accidentally using the enablement flag with container names for these 3 injectors.


Release v0.109.0

21 Sep 22:02
f81ef33

🚩 Deprecations 🚩

  • operator: Deprecated the label flag and introduced the labels-filter flag to align the label filtering with the attribute filtering flag name. The label flag will be removed when issue #3236 is resolved. (#3218)

💡 Enhancements 💡

  • collector: adds test for memory utilization (#3283)
  • operator: Added reconciliation errors for webhook events. The webhooks run the manifest generators to check for any errors. (#2399)


Release v0.108.0

05 Sep 17:18
e023705

💡 Enhancements 💡

  • auto-instrumentation: set OTEL_EXPORTER_OTLP_PROTOCOL instead of signal specific env vars in python instrumentation (#3165)
  • collector: Allow autoscaler targetCPUUtilization and targetMemoryUtilization to be greater than 99 (#3258)
  • auto-instrumentation: Do not ignore the instrumentation.opentelemetry.io/container-names annotation when multi-instrumentation is enabled (#3090)
  • operator: Support for Kubernetes 1.31 version. (#3247)
  • target allocator: introduces the global field in the TA config to allow for setting scrape protocols (#3160)

🧰 Bug fixes 🧰

  • auto-instrumentation: Fix file copy for NGINX auto-instrumentation for non-root workloads. (#2726)

  • target allocator: Retry failed namespace informer creation in the promOperator CRD watcher, and exit if the creation issue cannot be resolved (#3216)

  • target allocator: Rollback #3187 (#3242)
    This rolls back #3187, which broke the TargetAllocator config for clusters with custom domains.

  • auto-instrumentation: Fixes a bug that was preventing auto instrumentation from getting correct images. (#3014)
    This PR removes the restriction on the operator to only upgrade manually applied CRDs. This meant
    that resources applied by helm were not upgraded at all. The solution was to remove the restriction
    we had on querying the label app.kubernetes.io/managed-by=opentelemetry-operator, thereby upgrading
    ALL CRDs in the cluster.

  • collector: Fixes a bug that was preventing upgrade patches from reliably applying. (#3074)
    A bug was discovered while testing the PR: it failed to remove the environment
    variables introduced in the 0.104.0 upgrade. The fix was to take a deep copy of the object and update that.

  • collector: Don't unnecessarily take ownership of PersistentVolumes and PersistentVolumeClaims (#3042)

  • awsxray-receiver: Switched the protocol of the awsxray receiver from TCP to UDP (#3261)


Release v0.107.0

15 Aug 16:12
b40287d

💡 Enhancements 💡

  • instrumentation: Introduced the ability to set OTel resource attributes based on annotations for instrumentation (#2181)

    resource.opentelemetry.io/your-key: "your-value"
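
    Applied to a pod, the annotation pattern above might look like this (the attribute key and value are illustrative):

    ```yaml
    metadata:
      annotations:
        resource.opentelemetry.io/deployment.environment: "staging"  # sets resource attribute deployment.environment
    ```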

🧰 Bug fixes 🧰

  • collector: Fix example for labels-filter startup parameter --label. (#3201)


Release v0.106.0

07 Aug 15:28
c839f73

🧰 Bug fixes 🧰

  • collector: Fixes a bug where the operator would default the PDB in the wrong place. (#3198)
  • operator: The OpenShift dashboard showed namespaces where PodMonitors or ServiceMonitors were created even if they were not associated with OpenTelemetry Collectors. (#3196)
    Now, the dashboard lists only those namespaces where there are OpenTelemetry Collectors.
  • operator: When there were multiple OpenTelemetry Collectors, the dashboard didn't allow selecting them individually. (#3189)
  • target allocator: Fix collector to target allocator connection in clusters with proxy. (#3187)
    On clusters with a global proxy, the collector might fail to talk to the target allocator,
    because the endpoint is set to <ta-service-name>:port, so traffic goes to the proxy
    and the request might be forwarded to the internet. Clusters with a proxy configure NO_PROXY for .svc.cluster.local,
    so calls to this endpoint will not go through the proxy.
