diff --git a/docs/sources/collect/choose-component.md b/docs/sources/collect/choose-component.md
index 36a880d54c..383a0df610 100644
--- a/docs/sources/collect/choose-component.md
+++ b/docs/sources/collect/choose-component.md
@@ -6,7 +6,7 @@ menuTitle: Choose a component
 weight: 100
 ---
 
-# Choose a {{< param "FULL_PRODUCT_NAME" >}} component
+# Choose a {{< param "FULL_PRODUCT_NAME" >}} component
 
 [Components][components] are the building blocks of {{< param "FULL_PRODUCT_NAME" >}}, and there is a [large number of them][components-ref].
 The components you select and configure depend on the telemetry signals you want to collect.
@@ -24,7 +24,7 @@ For example, you can get metrics for a Linux host using `prometheus.exporter.uni
 You can also scrape any Prometheus endpoint using `prometheus.scrape`.
 Use `discovery.*` components to find targets for `prometheus.scrape`.
 
-[Grafana Infrastructure Observability]:https://grafana.com/docs/grafana-cloud/monitor-infrastructure/
+[Grafana Infrastructure Observability]: https://grafana.com/docs/grafana-cloud/monitor-infrastructure/
 
 ## Metrics for applications
 
@@ -36,7 +36,7 @@ For example, use `otelcol.receiver.otlp` to collect metrics from OpenTelemetry-i
 If your application is already instrumented with Prometheus metrics, there is no need to use `otelcol.*` components.
 Use `prometheus.*` components for the entire pipeline and send the metrics using `prometheus.remote_write`.
 
-[Grafana Application Observability]:https://grafana.com/docs/grafana-cloud/monitor-applications/application-observability/introduction/
+[Grafana Application Observability]: https://grafana.com/docs/grafana-cloud/monitor-applications/application-observability/introduction/
 
 ## Logs from infrastructure
 
@@ -58,7 +58,7 @@ All application telemetry must follow the [OpenTelemetry semantic conventions][O
 For example, if your application runs on Kubernetes, every trace, log, and metric can have a `k8s.namespace.name` resource attribute.
-[OTel-semantics]:https://opentelemetry.io/docs/concepts/semantic-conventions/
+[OTel-semantics]: https://opentelemetry.io/docs/concepts/semantic-conventions/
 
 ## Traces
 
diff --git a/docs/sources/collect/datadog-traces-metrics.md b/docs/sources/collect/datadog-traces-metrics.md
index 2ab9da3590..ba39d81f62 100644
--- a/docs/sources/collect/datadog-traces-metrics.md
+++ b/docs/sources/collect/datadog-traces-metrics.md
@@ -14,17 +14,17 @@ You can configure {{< param "PRODUCT_NAME" >}} to collect [Datadog][] traces and
 
 This topic describes how to:
 
-* Configure {{< param "PRODUCT_NAME" >}} to send traces and metrics.
-* Configure the {{< param "PRODUCT_NAME" >}} Datadog Receiver.
-* Configure the Datadog Agent to forward traces and metrics to the {{< param "PRODUCT_NAME" >}} Datadog Receiver.
+- Configure {{< param "PRODUCT_NAME" >}} to send traces and metrics.
+- Configure the {{< param "PRODUCT_NAME" >}} Datadog Receiver.
+- Configure the Datadog Agent to forward traces and metrics to the {{< param "PRODUCT_NAME" >}} Datadog Receiver.
 
 ## Before you begin
 
-* Ensure that at least one instance of the [Datadog Agent][] is collecting metrics and traces.
-* Identify where to write the collected telemetry.
+- Ensure that at least one instance of the [Datadog Agent][] is collecting metrics and traces.
+- Identify where to write the collected telemetry.
   Metrics can be written to [Prometheus][] or any other OpenTelemetry-compatible database such as Grafana Mimir, Grafana Cloud, or Grafana Enterprise Metrics.
   Traces can be written to Grafana Tempo, Grafana Cloud, or Grafana Enterprise Traces.
-* Be familiar with the concept of [Components][] in {{< param "PRODUCT_NAME" >}}.
+- Be familiar with the concept of [Components][] in {{< param "PRODUCT_NAME" >}}.
 
 ## Configure {{% param "PRODUCT_NAME" %}} to send traces and metrics
 
@@ -45,7 +45,7 @@ The [otelcol.exporter.otlp][] component is responsible for delivering OTLP data
 
    Replace the following:
 
-   * _``_: The full URL of the OpenTelemetry-compatible endpoint where metrics and traces are sent, such as `https://otlp-gateway-prod-eu-west-2.grafana.net/otlp`.
+   - _``_: The full URL of the OpenTelemetry-compatible endpoint where metrics and traces are sent, such as `https://otlp-gateway-prod-eu-west-2.grafana.net/otlp`.
 
 1. If your endpoint requires basic authentication, paste the following inside the `endpoint` block.
 
@@ -58,8 +58,8 @@ The [otelcol.exporter.otlp][] component is responsible for delivering OTLP data
 
    Replace the following:
 
-   * _``_: The basic authentication username.
-   * _``_: The basic authentication password or API key.
+   - _``_: The basic authentication username.
+   - _``_: The basic authentication password or API key.
 
 ## Configure the {{% param "PRODUCT_NAME" %}} Datadog Receiver
 
@@ -88,8 +88,8 @@ The [otelcol.exporter.otlp][] component is responsible for delivering OTLP data
 
    Replace the following:
 
-   * _``_: How long until a series not receiving new samples is removed, such as "5m".
-   * _``_: The upper limit of streams to track. New streams exceeding this limit are dropped.
+   - _``_: How long until a series not receiving new samples is removed, such as "5m".
+   - _``_: The upper limit of streams to track. New streams exceeding this limit are dropped.
 
 1. Add the following `otelcol.receiver.datadog` component to your configuration file.
 
@@ -103,10 +103,10 @@ The [otelcol.exporter.otlp][] component is responsible for delivering OTLP data
    }
   ```
 
-   Replace the following:
+   Replace the following:
 
-   * _``_: The host address where the receiver listens.
-   * _``_: The port where the receiver listens.
+   - _``_: The host address where the receiver listens.
+   - _``_: The port where the receiver listens.
 
 1. If your endpoint requires basic authentication, paste the following inside the `endpoint` block.
 
@@ -117,10 +117,10 @@ The [otelcol.exporter.otlp][] component is responsible for delivering OTLP data
    }
   ```
 
-   Replace the following:
+   Replace the following:
 
-   * _``_: The basic authentication username.
-   * _``_: The basic authentication password or API key.
+   - _``_: The basic authentication username.
+   - _``_: The basic authentication password or API key.
 
 ## Configure Datadog Agent to forward telemetry to the {{% param "PRODUCT_NAME" %}} Datadog Receiver
 
@@ -139,8 +139,8 @@ We recommend this approach for current Datadog users who want to try using {{< p
 
    Replace the following:
 
-   * _``_: The hostname where the {{< param "PRODUCT_NAME" >}} receiver is found.
-   * _``_: The port where the {{< param "PRODUCT_NAME" >}} receiver is exposed.
+   - _``_: The hostname where the {{< param "PRODUCT_NAME" >}} receiver is found.
+   - _``_: The port where the {{< param "PRODUCT_NAME" >}} receiver is exposed.
 
 Alternatively, you might want your Datadog Agent to send metrics only to {{< param "PRODUCT_NAME" >}}.
 You can do this by setting up your Datadog Agent in the following way:
 
@@ -148,7 +148,7 @@ You can do this by setting up your Datadog Agent in the following way:
 1. Replace the DD_URL in the configuration YAML:
 
    ```yaml
-    dd_url: http://:
+   dd_url: http://:
   ```
 
   Or by setting an environment variable:
 
@@ -162,9 +162,9 @@ You can do this by setting up your Datadog Agent in the following way:
 
 The `otelcol.receiver.datadog` component is experimental.
 To use this component, you need to start {{< param "PRODUCT_NAME" >}} with additional command line flags:
 
-   ```bash
-   alloy run config.alloy --stability.level=experimental
-   ```
+```bash
+alloy run config.alloy --stability.level=experimental
+```
 
 [Datadog]: https://www.datadoghq.com/
 [Datadog Agent]: https://docs.datadoghq.com/agent/
diff --git a/docs/sources/collect/ecs-opentelemetry-data.md b/docs/sources/collect/ecs-opentelemetry-data.md
index 298bd73bc2..e405fd9a73 100644
--- a/docs/sources/collect/ecs-opentelemetry-data.md
+++ b/docs/sources/collect/ecs-opentelemetry-data.md
@@ -20,10 +20,10 @@ There are three different ways you can use {{< param "PRODUCT_NAME" >}} to colle
 
 ## Before you begin
 
-* Ensure that you have basic familiarity with instrumenting applications with OpenTelemetry.
-* Have an available Amazon ECS or AWS Fargate deployment.
-* Identify where {{< param "PRODUCT_NAME" >}} writes received telemetry data.
-* Be familiar with the concept of [Components][] in {{< param "PRODUCT_NAME" >}}.
+- Ensure that you have basic familiarity with instrumenting applications with OpenTelemetry.
+- Have an available Amazon ECS or AWS Fargate deployment.
+- Identify where {{< param "PRODUCT_NAME" >}} writes received telemetry data.
+- Be familiar with the concept of [Components][] in {{< param "PRODUCT_NAME" >}}.
 
 ## Use a custom OpenTelemetry configuration file from the SSM Parameter store
 
@@ -39,8 +39,8 @@ In ECS, you can set the values of environment variables from AWS Systems Manager
 
    1. Open the AWS Systems Manager console.
   1. Select Elastic Container Service.
-   1. In the navigation pane, choose *Task definition*.
-   1. Choose *Create new revision*.
+   1. In the navigation pane, choose _Task definition_.
+   1. Choose _Create new revision_.
 
 1. Add an environment variable.
 
@@ -53,15 +53,15 @@ In ECS, you can set the values of environment variables from AWS Systems Manager
 
 ### Create the SSM parameter
 
 1. Open the AWS Systems Manager console.
-1. In the navigation pane, choose *Parameter Store*.
-1. Choose *Create parameter*.
+1. In the navigation pane, choose _Parameter Store_.
+1. Choose _Create parameter_.
 1. Create a parameter with the following values:
 
-   * Name: `otel-collector-config`
-   * Tier: `Standard`
-   * Type: `String`
-   * Data type: `Text`
-   * Value: Copy and paste your custom OpenTelemetry configuration file or [{{< param "PRODUCT_NAME" >}} configuration file][configure].
+   - Name: `otel-collector-config`
+   - Tier: `Standard`
+   - Type: `String`
+   - Data type: `Text`
+   - Value: Copy and paste your custom OpenTelemetry configuration file or [{{< param "PRODUCT_NAME" >}} configuration file][configure].
 
 ### Run your task
 
@@ -75,13 +75,13 @@ To create an ECS Task Definition for AWS Fargate with an ADOT collector, complet
 
 1. Download the [ECS Fargate task definition template][template] from GitHub.
 1. Edit the task definition template and add the following parameters.
 
-   * `{{region}}`: The region to send the data to.
-   * `{{ecsTaskRoleArn}}`: The AWSOTTaskRole ARN.
-   * `{{ecsExecutionRoleArn}}`: The AWSOTTaskExcutionRole ARN.
-   * `command` - Assign a value to the command variable to select the path to the configuration file.
+   - `{{region}}`: The region to send the data to.
+   - `{{ecsTaskRoleArn}}`: The AWSOTTaskRole ARN.
+   - `{{ecsExecutionRoleArn}}`: The AWSOTTaskExcutionRole ARN.
+   - `command` - Assign a value to the command variable to select the path to the configuration file.
 
   The AWS Collector comes with two configurations. Select one of them based on your environment:
 
-   * Use `--config=/etc/ecs/ecs-default-config.yaml` to consume StatsD metrics, OTLP metrics and traces, and AWS X-Ray SDK traces.
-   * Use `--config=/etc/ecs/container-insights/otel-task-metrics-config.yaml` to use StatsD, OTLP, AWS X-Ray, and Container Resource utilization metrics.
+   - Use `--config=/etc/ecs/ecs-default-config.yaml` to consume StatsD metrics, OTLP metrics and traces, and AWS X-Ray SDK traces.
+   - Use `--config=/etc/ecs/container-insights/otel-task-metrics-config.yaml` to use StatsD, OTLP, AWS X-Ray, and Container Resource utilization metrics.
 
 1. Follow the ECS Fargate setup instructions to [create a task definition][task] using the template.
 
 ## Run {{% param "PRODUCT_NAME" %}} directly in your instance, or as a Kubernetes sidecar
diff --git a/docs/sources/collect/logs-in-kubernetes.md b/docs/sources/collect/logs-in-kubernetes.md
index d8b8b17fb2..5feda702fb 100644
--- a/docs/sources/collect/logs-in-kubernetes.md
+++ b/docs/sources/collect/logs-in-kubernetes.md
@@ -4,7 +4,7 @@ aliases:
   - ../tasks/collect-logs-in-kubernetes/ # /docs/alloy/latest/tasks/collect-logs-in-kubernetes/
 description: Learn how to collect logs on Kubernetes and forward them to Loki
 menuTitle: Collect Kubernetes logs
-title: Collect Kubernetes logs and forward them to Loki
+title: Collect Kubernetes logs and forward them to Loki
 weight: 250
 ---
 
@@ -14,26 +14,26 @@ You can configure {{< param "PRODUCT_NAME" >}} to collect logs and forward them
 
 This topic describes how to:
 
-* Configure logs delivery.
-* Collect logs from Kubernetes Pods.
+- Configure logs delivery.
+- Collect logs from Kubernetes Pods.
 
 ## Components used in this topic
 
-* [`discovery.kubernetes`][discovery.kubernetes]
-* [`discovery.relabel`][discovery.relabel]
-* [`local.file_match`][local.file_match]
-* [`loki.source.file`][loki.source.file]
-* [`loki.source.kubernetes`][loki.source.kubernetes]
-* [`loki.source.kubernetes_events`][loki.source.kubernetes_events]
-* [`loki.process`][loki.process]
-* [`loki.write`][loki.write]
+- [`discovery.kubernetes`][discovery.kubernetes]
+- [`discovery.relabel`][discovery.relabel]
+- [`local.file_match`][local.file_match]
+- [`loki.source.file`][loki.source.file]
+- [`loki.source.kubernetes`][loki.source.kubernetes]
+- [`loki.source.kubernetes_events`][loki.source.kubernetes_events]
+- [`loki.process`][loki.process]
+- [`loki.write`][loki.write]
 
 ## Before you begin
 
-* Ensure that you are familiar with logs labelling when working with Loki.
-* Identify where to write collected logs.
+- Ensure that you are familiar with logs labelling when working with Loki.
+- Identify where to write collected logs.
   You can write logs to Loki endpoints such as Grafana Loki, Grafana Cloud, or Grafana Enterprise Logs.
-* Be familiar with the concept of [Components][] in {{< param "PRODUCT_NAME" >}}.
+- Be familiar with the concept of [Components][] in {{< param "PRODUCT_NAME" >}}.
 
 ## Configure logs delivery
 
@@ -56,9 +56,9 @@ To configure a `loki.write` component for logs delivery, complete the following
 
   Replace the following:
 
-    * _`