
Commit

Merge branch 'main' into multi-docker
a-thaler authored Feb 4, 2025
2 parents eea19d1 + 56477f3 commit 4a37fc1
Showing 54 changed files with 831 additions and 189 deletions.
2 changes: 1 addition & 1 deletion .env
@@ -10,7 +10,7 @@ ENV_GARDENER_MIN_NODES=1
ENV_GARDENER_MAX_NODES=2

## Dependencies
-ENV_ISTIO_VERSION=1.13.1
+ENV_ISTIO_VERSION=1.14.0
ENV_K3D_VERSION=v5.4.7
ENV_GORELEASER_VERSION=v1.23.0

12 changes: 12 additions & 0 deletions .k3d-kyma.yaml
@@ -12,3 +12,15 @@ registries:
name: kyma
hostPort: '5001'

+options:
+  k3s:
+    nodeLabels:
+      - label: topology.kubernetes.io/region=kyma-local
+        nodeFilters:
+          - server:*
+      - label: topology.kubernetes.io/zone=kyma-local
+        nodeFilters:
+          - server:*
+      - label: node.kubernetes.io/instance-type=local
+        nodeFilters:
+          - server:*
1 change: 1 addition & 0 deletions config/rbac/role.yaml
@@ -11,6 +11,7 @@ rules:
- ""
resources:
- secrets
+- configmaps
verbs:
- get
- list
@@ -31,7 +31,7 @@ data:
- filter/drop-if-input-source-runtime
- filter/drop-if-input-source-prometheus
- filter/drop-if-input-source-istio
-  - resource/insert-cluster-name
+  - resource/insert-cluster-attributes
- batch
exporters:
- otlp/load-test-1
@@ -58,7 +58,7 @@ data:
- filter/drop-if-input-source-runtime
- filter/drop-if-input-source-prometheus
- filter/drop-if-input-source-istio
-  - resource/insert-cluster-name
+  - resource/insert-cluster-attributes
- batch
exporters:
- otlp/load-test-2
@@ -85,7 +85,7 @@ data:
- filter/drop-if-input-source-runtime
- filter/drop-if-input-source-prometheus
- filter/drop-if-input-source-istio
-  - resource/insert-cluster-name
+  - resource/insert-cluster-attributes
- batch
exporters:
- otlp/load-test-3
@@ -165,7 +165,7 @@ data:
name: k8s.pod.uid
- sources:
- from: connection
-  resource/insert-cluster-name:
+  resource/insert-cluster-attributes:
attributes:
- action: insert
key: k8s.cluster.name
@@ -31,7 +31,7 @@ data:
- filter/drop-if-input-source-runtime
- filter/drop-if-input-source-prometheus
- filter/drop-if-input-source-istio
-  - resource/insert-cluster-name
+  - resource/insert-cluster-attributes
- batch
exporters:
- otlp/load-test-1
@@ -97,7 +97,7 @@ data:
name: k8s.pod.uid
- sources:
- from: connection
-  resource/insert-cluster-name:
+  resource/insert-cluster-attributes:
attributes:
- action: insert
key: k8s.cluster.name
4 changes: 3 additions & 1 deletion docs/contributor/releasing.md
@@ -2,7 +2,9 @@

## Release Process

-This release process covers the steps to release new major and minor versions for the Kyma Telemetry module.
+This release process covers the steps to release new major and minor versions for the Kyma Telemetry module.
+
+Together with the module release, prepare a new release of the [opentelemetry-collector-components](https://github.com/kyma-project/opentelemetry-collector-components). You need it later in the Telemetry Manager release process. Its version string embeds the Telemetry Manager version (`{CURRENT_OCC_VERSION}-{TELEMETRY_MANAGER_VERSION}`).
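For example, assuming the current `opentelemetry-collector-components` version is `0.102.0` and the new Telemetry Manager version is `1.27.0` (both values purely illustrative), the combined release would be tagged `0.102.0-1.27.0`.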

1. Verify that all issues in the [GitHub milestone](https://github.com/kyma-project/telemetry-manager/milestones) related to the version are closed.

18 changes: 9 additions & 9 deletions docs/user/integration/dynatrace/README.md
@@ -50,11 +50,11 @@ With the Kyma Telemetry module, you gain even more visibility by adding custom s
## Dynatrace Setup
-There are different ways to deploy Dynatrace on Kubernetes. All [deployment options](https://www.dynatrace.com/support/help/setup-and-configuration/setup-on-container-platforms/kubernetes/get-started-with-kubernetes-monitoring/deployment-options-k8s) are based on the [Dynatrace Operator](https://github.com/Dynatrace/dynatrace-operator).
+There are different ways to deploy Dynatrace on Kubernetes. All [deployment options](https://docs.dynatrace.com/docs/ingest-from/setup-on-k8s/deployment) are based on the [Dynatrace Operator](https://github.com/Dynatrace/dynatrace-operator).
1. Install Dynatrace with the namespace you prepared earlier.
> [!NOTE]
-> By default, Dynatrace uses the classic full-stack injection. However, for better stability, we recommend using the [cloud-native fullstack injection](https://docs.dynatrace.com/docs/setup-and-configuration/setup-on-k8s/installation/cloud-native-fullstack).
+> By default, Dynatrace uses the classic full-stack injection. However, for better stability, we recommend using the [cloud-native fullstack injection](https://docs.dynatrace.com/docs/ingest-from/setup-on-k8s/guides/operation/migration/classic-to-cloud-native).
2. In the DynaKube resource, configure the correct `apiurl` of your environment.
@@ -76,7 +76,7 @@ There are different ways to deploy Dynatrace on Kubernetes. All [deployment opti
5. In the Dynatrace Hub, enable the **Istio Service Mesh** extension and annotate your services as outlined in the description.
-6. If you have a workload exposing metrics in the Prometheus format, you can collect custom metrics in Prometheus format by [annotating the workload](https://docs.dynatrace.com/docs/platform-modules/infrastructure-monitoring/container-platform-monitoring/kubernetes-monitoring/monitor-prometheus-metrics). If the workload has an Istio sidecar, you must either weaken the mTLS setting for the metrics port by defining an [Istio PeerAuthentication](https://istio.io/latest/docs/reference/config/security/peer_authentication/#PeerAuthentication) or exclude the port from interception by the Istio proxy by placing an `traffic.sidecar.istio.io/excludeInboundPorts` annotaion on your Pod that lists the metrics port.
+6. If you have a workload exposing metrics in the Prometheus format, you can collect these custom metrics by [annotating the workload](https://docs.dynatrace.com/docs/observe/infrastructure-monitoring/container-platform-monitoring/kubernetes-monitoring/monitor-prometheus-metrics#annotate-kubernetes-services). If the workload has an Istio sidecar, you must either weaken the mTLS setting for the metrics port by defining an [Istio PeerAuthentication](https://istio.io/latest/docs/reference/config/security/peer_authentication/#PeerAuthentication) or exclude the port from interception by the Istio proxy by placing a `traffic.sidecar.istio.io/excludeInboundPorts` annotation on your Pod that lists the metrics port.
As a result, you see data arriving in your environment, advanced Kubernetes monitoring is possible, and Istio metrics are available.
@@ -86,14 +86,14 @@ Next, you set up the ingestion of custom span and Istio span data, and, optional
### Create Secret
-1. To push custom metrics and spans to Dynatrace, set up a [dataIngestToken](https://docs.dynatrace.com/docs/manage/access-control/access-tokens).
+1. To push custom metrics and spans to Dynatrace, set up a [dataIngestToken](https://docs.dynatrace.com/docs/manage/identity-access-management/access-tokens-and-oauth-clients/access-tokens/personal-access-token).
-   Follow the instructions in [Dynatrace: Generate an access token](https://docs.dynatrace.com/docs/manage/access-control/access-tokens#create-api-token) and select the following scopes:
+   Follow the instructions in [Dynatrace: Generate an access token](https://docs.dynatrace.com/docs/manage/identity-access-management/access-tokens-and-oauth-clients/access-tokens/personal-access-token#generate-personal-access-tokens) and select the following scopes:
- **Ingest metrics**
- **Ingest OpenTelemetry traces**
-2. Create an [apiToken](https://docs.dynatrace.com/docs/manage/access-control/access-tokens) by selecting the template `Kubernetes: Dynatrace Operator`.
+2. Create an [apiToken](https://docs.dynatrace.com/docs/manage/identity-access-management/access-tokens-and-oauth-clients/access-tokens/personal-access-token) by selecting the template `Kubernetes: Dynatrace Operator`.
3. To create a new Secret containing your access tokens, replace the `<API_TOKEN>` and `<DATA_INGEST_TOKEN>` placeholder with the `apiToken` and `dataIngestToken` you created, replace the `<API_URL>` placeholder with the Dynatrace endpoint, and run the following command:
@@ -169,7 +169,7 @@ There are several approaches to ingest custom metrics to Dynatrace, each with di
- Use a MetricPipeline to push metrics directly.
> [!NOTE]
-  > The Dynatrace OTLP API does [not support](https://docs.dynatrace.com/docs/extend-dynatrace/opentelemetry/getting-started/metrics/ingest/migration-guide-otlp-exporter#migrate-collector-configuration) the full OTLP specification and needs custom transformation. A MetricPipeline does not support these transformation features, so that only metrics can be ingested that don't hit the limitations. At the moment, metrics of type "Histogram" and "Summary" are not supported. Furthermore, "Sum"s must use "delta" aggregation temporality.
+  > The Dynatrace OTLP API does [not support](https://docs.dynatrace.com/docs/shortlink/opentelemetry-metrics-limitations#limitations) the full OTLP specification and needs custom transformation. A MetricPipeline does not support these transformations, so it can ingest only metrics that stay within those limitations. At the moment, metrics of type "Histogram" and "Summary" are not supported. Furthermore, "Sum" metrics must use "delta" aggregation temporality.

Use this setup when your application pushes metrics to the telemetry metric service natively with OTLP, and if you have explicitly enabled "delta" aggregation temporality. You cannot enable additional inputs for the MetricPipeline.

@@ -202,7 +202,7 @@ There are several approaches to ingest custom metrics to Dynatrace, each with di
EOF
```
-1. Start pushing metrics to the metric gateway using [delta aggregation temporality.](https://docs.dynatrace.com/docs/extend-dynatrace/opentelemetry/getting-started/metrics/limitations#aggregation-temporality)
+1. Start pushing metrics to the metric gateway using [delta aggregation temporality](https://docs.dynatrace.com/docs/ingest-from/opentelemetry/getting-started/metrics/limitations#aggregation-temporality); see the sketch after the next step.
1. To find metrics from your Kyma cluster in the Dynatrace UI, go to **Observe & Explore** > **Metrics**.
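A minimal sketch of what pushing with delta aggregation temporality looks like for a Go workload, using the OpenTelemetry Go SDK. The gateway endpoint `telemetry-otlp-metrics.kyma-system:4317` and the metric name are illustrative assumptions, not taken from this commit:

```go
package main

import (
	"context"
	"log"

	"go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetricgrpc"
	sdkmetric "go.opentelemetry.io/otel/sdk/metric"
	"go.opentelemetry.io/otel/sdk/metric/metricdata"
)

func main() {
	ctx := context.Background()

	// Export every instrument kind with delta temporality, as Dynatrace requires.
	deltaSelector := func(sdkmetric.InstrumentKind) metricdata.Temporality {
		return metricdata.DeltaTemporality
	}

	exporter, err := otlpmetricgrpc.New(ctx,
		otlpmetricgrpc.WithEndpoint("telemetry-otlp-metrics.kyma-system:4317"), // assumed gateway address
		otlpmetricgrpc.WithInsecure(),
		otlpmetricgrpc.WithTemporalitySelector(deltaSelector),
	)
	if err != nil {
		log.Fatal(err)
	}

	provider := sdkmetric.NewMeterProvider(
		sdkmetric.WithReader(sdkmetric.NewPeriodicReader(exporter)),
	)
	defer provider.Shutdown(ctx)

	// Illustrative counter; Sum metrics like this one are exported as deltas.
	counter, _ := provider.Meter("example").Int64Counter("demo.requests")
	counter.Add(ctx, 1)
}
```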
@@ -240,7 +240,7 @@ There are several approaches to ingest custom metrics to Dynatrace, each with di
- Use the Dynatrace metric ingestion with Prometheus exporters.
-  Use the [Dynatrace annotation approach](https://docs.dynatrace.com/docs/platform-modules/infrastructure-monitoring/container-platform-monitoring/kubernetes-monitoring/monitor-prometheus-metrics), where the Dynatrace ActiveGate component running in your cluster scrapes workloads that are annotated with Dynatrace-specific annotations.
+  Use the [Dynatrace annotation approach](https://docs.dynatrace.com/docs/observe/infrastructure-monitoring/container-platform-monitoring/kubernetes-monitoring/monitor-prometheus-metrics), where the Dynatrace ActiveGate component running in your cluster scrapes workloads that are annotated with Dynatrace-specific annotations.
This approach works well with workloads that expose metrics in the typical Prometheus format when not running with Istio.
If you use Istio, you must disable Istio interception for the relevant metric port with the [traffic.istio.io/excludeInboundPorts](https://istio.io/latest/docs/reference/config/annotations/#TrafficExcludeInboundPorts) annotation. To collect Istio metrics from the envoys themselves, you need additional Dynatrace annotations for every workload.
2 changes: 1 addition & 1 deletion go.mod
@@ -15,7 +15,7 @@ require (
github.com/stretchr/testify v1.10.0
go.opentelemetry.io/collector/pdata v1.23.0
go.uber.org/zap v1.27.0
-	google.golang.org/protobuf v1.36.3
+	google.golang.org/protobuf v1.36.4
gopkg.in/yaml.v3 v3.0.1
istio.io/api v1.24.2
istio.io/client-go v1.24.2
4 changes: 2 additions & 2 deletions go.sum
@@ -182,8 +182,8 @@ google.golang.org/genproto/googleapis/rpc v0.0.0-20241015192408-796eee8c2d53 h1:
google.golang.org/genproto/googleapis/rpc v0.0.0-20241015192408-796eee8c2d53/go.mod h1:GX3210XPVPUjJbTUbvwI8f2IpZDMZuPJWDzDuebbviI=
google.golang.org/grpc v1.69.2 h1:U3S9QEtbXC0bYNvRtcoklF3xGtLViumSYxWykJS+7AU=
google.golang.org/grpc v1.69.2/go.mod h1:vyjdE6jLBI76dgpDojsFGNaHlxdjXN9ghpnd2o7JGZ4=
-google.golang.org/protobuf v1.36.3 h1:82DV7MYdb8anAVi3qge1wSnMDrnKK7ebr+I0hHRN1BU=
-google.golang.org/protobuf v1.36.3/go.mod h1:9fA7Ob0pmnwhb644+1+CVWFRbNajQ6iRojtC/QF5bRE=
+google.golang.org/protobuf v1.36.4 h1:6A3ZDJHn/eNqc1i+IdefRzy/9PokBTPvcqMySR7NNIM=
+google.golang.org/protobuf v1.36.4/go.mod h1:9fA7Ob0pmnwhb644+1+CVWFRbNajQ6iRojtC/QF5bRE=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c h1:Hei/4ADfdWqJk1ZMxUNpqntNwaWcugrBjAiHlqqRiVk=
gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c/go.mod h1:JHkPIbrfpd72SG/EVd6muEfDQjcINNoR0C8j2r3qZ4Q=
20 changes: 20 additions & 0 deletions internal/otelcollector/config/gatewayprocs/k8s_attribute_proc.go
@@ -51,5 +51,25 @@ func extractLabels() []config.ExtractLabel {
Key: "app",
TagName: "kyma.app_name",
},
{
From: "node",
Key: "topology.kubernetes.io/region",
TagName: "cloud.region",
},
{
From: "node",
Key: "topology.kubernetes.io/zone",
TagName: "cloud.availability_zone",
},
{
From: "node",
Key: "node.kubernetes.io/instance-type",
TagName: "host.type",
},
{
From: "node",
Key: "kubernetes.io/arch",
TagName: "host.arch",
},
}
}
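Taken together with the `.k3d-kyma.yaml` change above, a local k3d cluster now reports `cloud.region=kyma-local`, `cloud.availability_zone=kyma-local`, and `host.type=local`. A minimal sketch of the mapping these entries express; the `ExtractLabel` struct is redeclared here only to keep the sketch self-contained:

```go
package main

import "fmt"

// ExtractLabel mirrors the config.ExtractLabel fields used above,
// redeclared here for illustration only.
type ExtractLabel struct {
	From    string // "pod" or "node": where the label is read from
	Key     string // the Kubernetes label key
	TagName string // the resource attribute the value is written to
}

func main() {
	// The node-label entries added in this commit.
	newEntries := []ExtractLabel{
		{From: "node", Key: "topology.kubernetes.io/region", TagName: "cloud.region"},
		{From: "node", Key: "topology.kubernetes.io/zone", TagName: "cloud.availability_zone"},
		{From: "node", Key: "node.kubernetes.io/instance-type", TagName: "host.type"},
		{From: "node", Key: "kubernetes.io/arch", TagName: "host.arch"},
	}

	for _, e := range newEntries {
		fmt.Printf("%s label %q -> resource attribute %q\n", e.From, e.Key, e.TagName)
	}
}
```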
@@ -43,6 +43,26 @@ func TestK8sAttributesProcessorConfig(t *testing.T) {
Key: "app",
TagName: "kyma.app_name",
},
{
From: "node",
Key: "topology.kubernetes.io/region",
TagName: "cloud.region",
},
{
From: "node",
Key: "topology.kubernetes.io/zone",
TagName: "cloud.availability_zone",
},
{
From: "node",
Key: "node.kubernetes.io/instance-type",
TagName: "host.type",
},
{
From: "node",
Key: "kubernetes.io/arch",
TagName: "host.arch",
},
}

config := K8sAttributesProcessorConfig()
21 changes: 19 additions & 2 deletions internal/otelcollector/config/gatewayprocs/resource_procs.go
@@ -4,13 +4,30 @@ import (
"github.com/kyma-project/telemetry-manager/internal/otelcollector/config"
)

-func InsertClusterNameProcessorConfig() *config.ResourceProcessor {
+func InsertClusterAttributesProcessorConfig(clusterName, cloudProvider string) *config.ResourceProcessor {
+	if cloudProvider != "" {
+		return &config.ResourceProcessor{
+			Attributes: []config.AttributeAction{
+				{
+					Action: "insert",
+					Key:    "k8s.cluster.name",
+					Value:  clusterName,
+				},
+				{
+					Action: "insert",
+					Key:    "cloud.provider",
+					Value:  cloudProvider,
+				},
+			},
+		}
+	}
+
 	return &config.ResourceProcessor{
 		Attributes: []config.AttributeAction{
 			{
 				Action: "insert",
 				Key:    "k8s.cluster.name",
-				Value:  "${KUBERNETES_SERVICE_HOST}",
+				Value:  clusterName,
 			},
 		},
 	}
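The renamed helper adds `cloud.provider` only when a provider was detected. A minimal usage sketch of both branches; the values are illustrative, and importing an `internal` package would not compile outside this module:

```go
package main

import (
	"fmt"

	"github.com/kyma-project/telemetry-manager/internal/otelcollector/config/gatewayprocs"
)

func main() {
	// Provider known: both k8s.cluster.name and cloud.provider are inserted.
	withProvider := gatewayprocs.InsertClusterAttributesProcessorConfig("my-cluster", "gcp")
	fmt.Println(len(withProvider.Attributes)) // 2

	// Provider unknown: only k8s.cluster.name is inserted.
	withoutProvider := gatewayprocs.InsertClusterAttributesProcessorConfig("my-cluster", "")
	fmt.Println(len(withoutProvider.Attributes)) // 1
}
```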
@@ -15,11 +15,16 @@ func TestInsertClusterNameProcessorConfig(t *testing.T) {
 		{
 			Action: "insert",
 			Key:    "k8s.cluster.name",
-			Value:  "${KUBERNETES_SERVICE_HOST}",
+			Value:  "test-cluster",
 		},
+		{
+			Action: "insert",
+			Key:    "cloud.provider",
+			Value:  "test-cloud-provider",
+		},
 	}
 
-	config := InsertClusterNameProcessorConfig()
+	config := InsertClusterAttributesProcessorConfig("test-cluster", "test-cloud-provider")
 
 	require.ElementsMatch(expectedAttributeActions, config.Attributes, "Attributes should match")
 }
6 changes: 3 additions & 3 deletions internal/otelcollector/config/log/gateway/config.go
@@ -19,9 +19,9 @@ type Receivers struct {
 type Processors struct {
 	config.BaseProcessors `yaml:",inline"`
 
-	K8sAttributes      *config.K8sAttributesProcessor `yaml:"k8sattributes,omitempty"`
-	InsertClusterName  *config.ResourceProcessor      `yaml:"resource/insert-cluster-name,omitempty"`
-	DropKymaAttributes *config.ResourceProcessor      `yaml:"resource/drop-kyma-attributes,omitempty"`
+	K8sAttributes           *config.K8sAttributesProcessor `yaml:"k8sattributes,omitempty"`
+	InsertClusterAttributes *config.ResourceProcessor      `yaml:"resource/insert-cluster-attributes,omitempty"`
+	DropKymaAttributes      *config.ResourceProcessor      `yaml:"resource/drop-kyma-attributes,omitempty"`
}

type Exporters map[string]Exporter
11 changes: 8 additions & 3 deletions internal/otelcollector/config/log/gateway/config_builder.go
@@ -22,14 +22,19 @@ type Builder struct {
Reader client.Reader
}

-func (b *Builder) Build(ctx context.Context, pipelines []telemetryv1alpha1.LogPipeline) (*Config, otlpexporter.EnvVars, error) {
+type BuildOptions struct {
+	ClusterName   string
+	CloudProvider string
+}
+
+func (b *Builder) Build(ctx context.Context, pipelines []telemetryv1alpha1.LogPipeline, opts BuildOptions) (*Config, otlpexporter.EnvVars, error) {
 	cfg := &Config{
 		Base: config.Base{
 			Service:    config.DefaultService(make(config.Pipelines)),
 			Extensions: config.DefaultExtensions(),
 		},
 		Receivers:  makeReceiversConfig(),
-		Processors: makeProcessorsConfig(),
+		Processors: makeProcessorsConfig(opts),
 		Exporters:  make(Exporters),
 	}

@@ -99,7 +104,7 @@ func makePipelineConfig(exporterIDs ...string) config.Pipeline {
 		Processors: []string{
 			"memory_limiter",
 			"k8sattributes",
-			"resource/insert-cluster-name",
+			"resource/insert-cluster-attributes",
 			"batch",
 		},
 		Exporters: exporterIDs,
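The new `BuildOptions` threads cluster metadata into the generated gateway config, where it feeds the `resource/insert-cluster-attributes` processor. A minimal call-site sketch; the import paths, the caller, and the field values are assumptions based only on this diff:

```go
package example

import (
	"context"
	"fmt"

	telemetryv1alpha1 "github.com/kyma-project/telemetry-manager/apis/telemetry/v1alpha1"
	"github.com/kyma-project/telemetry-manager/internal/otelcollector/config/log/gateway"
)

// buildGatewayConfig sketches how a reconciler might call Build with the new options.
func buildGatewayConfig(ctx context.Context, b *gateway.Builder, pipelines []telemetryv1alpha1.LogPipeline) error {
	opts := gateway.BuildOptions{
		ClusterName:   "my-cluster", // rendered as the k8s.cluster.name resource attribute
		CloudProvider: "gcp",        // rendered as cloud.provider; empty string omits it
	}

	cfg, envVars, err := b.Build(ctx, pipelines, opts)
	if err != nil {
		return fmt.Errorf("building log gateway config: %w", err)
	}

	fmt.Printf("generated config: %v, env vars: %d\n", cfg != nil, len(envVars))

	return nil
}
```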