[v1.5] Format with prettier #2771

Draft · wants to merge 1 commit into base: release/v1.5

8 changes: 4 additions & 4 deletions docs/sources/collect/choose-component.md
@@ -6,7 +6,7 @@ menuTitle: Choose a component
weight: 100
---

-# Choose a {{< param "FULL_PRODUCT_NAME" >}} component
+# Choose a {{< param "FULL_PRODUCT_NAME" >}} component

[Components][components] are the building blocks of {{< param "FULL_PRODUCT_NAME" >}}, and there is a [large number of them][components-ref].
The components you select and configure depend on the telemetry signals you want to collect.
@@ -24,7 +24,7 @@ For example, you can get metrics for a Linux host using `prometheus.exporter.uni
You can also scrape any Prometheus endpoint using `prometheus.scrape`.
Use `discovery.*` components to find targets for `prometheus.scrape`.

-[Grafana Infrastructure Observability]:https://grafana.com/docs/grafana-cloud/monitor-infrastructure/
+[Grafana Infrastructure Observability]: https://grafana.com/docs/grafana-cloud/monitor-infrastructure/
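As an aside on the hunk above: the page mentions `prometheus.exporter.unix`, `prometheus.scrape`, and `discovery.*` without showing the pieces wired together. A minimal sketch of such a pipeline, with the component labels and the remote-write URL assumed:

```alloy
// Expose host metrics from the local Linux machine.
prometheus.exporter.unix "host" { }

// Scrape the exporter's targets and forward samples downstream.
prometheus.scrape "host" {
  targets    = prometheus.exporter.unix.host.targets
  forward_to = [prometheus.remote_write.default.receiver]
}

// Ship scraped samples to a Prometheus-compatible endpoint.
prometheus.remote_write "default" {
  endpoint {
    url = "<PROMETHEUS_URL>"
  }
}
```

Each component's exports feed the next component's arguments, which is the wiring pattern the rest of these pages rely on.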

## Metrics for applications

@@ -36,7 +36,7 @@ For example, use `otelcol.receiver.otlp` to collect metrics from OpenTelemetry-i
If your application is already instrumented with Prometheus metrics, there is no need to use `otelcol.*` components.
Use `prometheus.*` components for the entire pipeline and send the metrics using `prometheus.remote_write`.

-[Grafana Application Observability]:https://grafana.com/docs/grafana-cloud/monitor-applications/application-observability/introduction/
+[Grafana Application Observability]: https://grafana.com/docs/grafana-cloud/monitor-applications/application-observability/introduction/

## Logs from infrastructure

@@ -58,7 +58,7 @@ All application telemetry must follow the [OpenTelemetry semantic conventions][O

For example, if your application runs on Kubernetes, every trace, log, and metric can have a `k8s.namespace.name` resource attribute.

-[OTel-semantics]:https://opentelemetry.io/docs/concepts/semantic-conventions/
+[OTel-semantics]: https://opentelemetry.io/docs/concepts/semantic-conventions/

## Traces

46 changes: 23 additions & 23 deletions docs/sources/collect/datadog-traces-metrics.md
@@ -14,17 +14,17 @@ You can configure {{< param "PRODUCT_NAME" >}} to collect [Datadog][] traces and

This topic describes how to:

-* Configure {{< param "PRODUCT_NAME" >}} to send traces and metrics.
-* Configure the {{< param "PRODUCT_NAME" >}} Datadog Receiver.
-* Configure the Datadog Agent to forward traces and metrics to the {{< param "PRODUCT_NAME" >}} Datadog Receiver.
+- Configure {{< param "PRODUCT_NAME" >}} to send traces and metrics.
+- Configure the {{< param "PRODUCT_NAME" >}} Datadog Receiver.
+- Configure the Datadog Agent to forward traces and metrics to the {{< param "PRODUCT_NAME" >}} Datadog Receiver.

## Before you begin

-* Ensure that at least one instance of the [Datadog Agent][] is collecting metrics and traces.
-* Identify where to write the collected telemetry.
+- Ensure that at least one instance of the [Datadog Agent][] is collecting metrics and traces.
+- Identify where to write the collected telemetry.
Metrics can be written to [Prometheus][] or any other OpenTelemetry-compatible database such as Grafana Mimir, Grafana Cloud, or Grafana Enterprise Metrics.
Traces can be written to Grafana Tempo, Grafana Cloud, or Grafana Enterprise Traces.
-* Be familiar with the concept of [Components][] in {{< param "PRODUCT_NAME" >}}.
+- Be familiar with the concept of [Components][] in {{< param "PRODUCT_NAME" >}}.

## Configure {{% param "PRODUCT_NAME" %}} to send traces and metrics

@@ -45,7 +45,7 @@ The [otelcol.exporter.otlp][] component is responsible for delivering OTLP data

Replace the following:

-* _`<OTLP_ENDPOINT_URL>`_: The full URL of the OpenTelemetry-compatible endpoint where metrics and traces are sent, such as `https://otlp-gateway-prod-eu-west-2.grafana.net/otlp`.
+- _`<OTLP_ENDPOINT_URL>`_: The full URL of the OpenTelemetry-compatible endpoint where metrics and traces are sent, such as `https://otlp-gateway-prod-eu-west-2.grafana.net/otlp`.
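The exporter block this step refers to is collapsed in the diff view. A minimal sketch of an `otelcol.exporter.otlp` component using that placeholder; the `default` label is an assumption:

```alloy
// Deliver OTLP data to an OpenTelemetry-compatible endpoint.
otelcol.exporter.otlp "default" {
  client {
    endpoint = "<OTLP_ENDPOINT_URL>"
  }
}
```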

1. If your endpoint requires basic authentication, paste the following inside the `endpoint` block.

@@ -58,8 +58,8 @@ The [otelcol.exporter.otlp][] component is responsible for delivering OTLP data

Replace the following:

-* _`<USERNAME>`_: The basic authentication username.
-* _`<PASSWORD>`_: The basic authentication password or API key.
+- _`<USERNAME>`_: The basic authentication username.
+- _`<PASSWORD>`_: The basic authentication password or API key.

## Configure the {{% param "PRODUCT_NAME" %}} Datadog Receiver

@@ -88,8 +88,8 @@ The [otelcol.exporter.otlp][] component is responsible for delivering OTLP data

Replace the following:

-* _`<MAX_STALE>`_: How long until a series not receiving new samples is removed, such as "5m".
-* _`<MAX_STREAMS>`_: The upper limit of streams to track. New streams exceeding this limit are dropped.
+- _`<MAX_STALE>`_: How long until a series not receiving new samples is removed, such as "5m".
+- _`<MAX_STREAMS>`_: The upper limit of streams to track. New streams exceeding this limit are dropped.

1. Add the following `otelcol.receiver.datadog` component to your configuration file.

Expand All @@ -103,10 +103,10 @@ The [otelcol.exporter.otlp][] component is responsible for delivering OTLP data
}
```

-Replace the following:
+Replace the following:

-* _`<HOST>`_: The host address where the receiver listens.
-* _`<PORT>`_: The port where the receiver listens.
+- _`<HOST>`_: The host address where the receiver listens.
+- _`<PORT>`_: The port where the receiver listens.
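The receiver block itself is collapsed here. A rough sketch of an `otelcol.receiver.datadog` wired to downstream exporters; the labels, and the choice of downstream exporters, are assumptions that depend on your pipeline:

```alloy
// Accept traces and metrics in Datadog Agent format.
otelcol.receiver.datadog "default" {
  endpoint = "<HOST>:<PORT>"

  output {
    metrics = [otelcol.exporter.prometheus.default.input]
    traces  = [otelcol.exporter.otlp.default.input]
  }
}
```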

1. If your endpoint requires basic authentication, paste the following inside the `endpoint` block.

@@ -117,10 +117,10 @@ The [otelcol.exporter.otlp][] component is responsible for delivering OTLP data
}
```

-Replace the following:
+Replace the following:

-* _`<USERNAME>`_: The basic authentication username.
-* _`<PASSWORD>`_: The basic authentication password or API key.
+- _`<USERNAME>`_: The basic authentication username.
+- _`<PASSWORD>`_: The basic authentication password or API key.

## Configure Datadog Agent to forward telemetry to the {{% param "PRODUCT_NAME" %}} Datadog Receiver

@@ -139,16 +139,16 @@ We recommend this approach for current Datadog users who want to try using {{< p

Replace the following:

-* _`<DATADOG_RECEIVER_HOST>`_: The hostname where the {{< param "PRODUCT_NAME" >}} receiver is found.
-* _`<DATADOG_RECEIVER_PORT>`_: The port where the {{< param "PRODUCT_NAME" >}} receiver is exposed.
+- _`<DATADOG_RECEIVER_HOST>`_: The hostname where the {{< param "PRODUCT_NAME" >}} receiver is found.
+- _`<DATADOG_RECEIVER_PORT>`_: The port where the {{< param "PRODUCT_NAME" >}} receiver is exposed.

Alternatively, you might want your Datadog Agent to send metrics only to {{< param "PRODUCT_NAME" >}}.
You can do this by setting up your Datadog Agent in the following way:

1. Replace the DD_URL in the configuration YAML:

```yaml
-dd_url: http://<DATADOG_RECEIVER_HOST>:<DATADOG_RECEIVER_PORT>
+   dd_url: http://<DATADOG_RECEIVER_HOST>:<DATADOG_RECEIVER_PORT>
```

Or by setting an environment variable:
@@ -162,9 +162,9 @@ You can do this by setting up your Datadog Agent in the following way:
The `otelcol.receiver.datadog` component is experimental.
To use this component, you need to start {{< param "PRODUCT_NAME" >}} with additional command line flags:

-```bash
-alloy run config.alloy --stability.level=experimental
-```
+```bash
+alloy run config.alloy --stability.level=experimental
+```

[Datadog]: https://www.datadoghq.com/
[Datadog Agent]: https://docs.datadoghq.com/agent/
38 changes: 19 additions & 19 deletions docs/sources/collect/ecs-opentelemetry-data.md
@@ -20,10 +20,10 @@ There are three different ways you can use {{< param "PRODUCT_NAME" >}} to colle

## Before you begin

-* Ensure that you have basic familiarity with instrumenting applications with OpenTelemetry.
-* Have an available Amazon ECS or AWS Fargate deployment.
-* Identify where {{< param "PRODUCT_NAME" >}} writes received telemetry data.
-* Be familiar with the concept of [Components][] in {{< param "PRODUCT_NAME" >}}.
+- Ensure that you have basic familiarity with instrumenting applications with OpenTelemetry.
+- Have an available Amazon ECS or AWS Fargate deployment.
+- Identify where {{< param "PRODUCT_NAME" >}} writes received telemetry data.
+- Be familiar with the concept of [Components][] in {{< param "PRODUCT_NAME" >}}.

## Use a custom OpenTelemetry configuration file from the SSM Parameter store

@@ -39,8 +39,8 @@ In ECS, you can set the values of environment variables from AWS Systems Manager

1. Open the AWS Systems Manager console.
1. Select Elastic Container Service.
-1. In the navigation pane, choose *Task definition*.
-1. Choose *Create new revision*.
+1. In the navigation pane, choose _Task definition_.
+1. Choose _Create new revision_.

1. Add an environment variable.

@@ -53,15 +53,15 @@ In ECS, you can set the values of environment variables from AWS Systems Manager
### Create the SSM parameter

1. Open the AWS Systems Manager console.
-1. In the navigation pane, choose *Parameter Store*.
-1. Choose *Create parameter*.
+1. In the navigation pane, choose _Parameter Store_.
+1. Choose _Create parameter_.
1. Create a parameter with the following values:

-* Name: `otel-collector-config`
-* Tier: `Standard`
-* Type: `String`
-* Data type: `Text`
-* Value: Copy and paste your custom OpenTelemetry configuration file or [{{< param "PRODUCT_NAME" >}} configuration file][configure].
+- Name: `otel-collector-config`
+- Tier: `Standard`
+- Type: `String`
+- Data type: `Text`
+- Value: Copy and paste your custom OpenTelemetry configuration file or [{{< param "PRODUCT_NAME" >}} configuration file][configure].
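For illustration, a minimal Alloy configuration of the kind you might paste into the parameter's Value field; the OTLP endpoint placeholder and the component labels are assumptions:

```alloy
// Accept OTLP from instrumented applications in the task.
otelcol.receiver.otlp "default" {
  grpc { }
  http { }

  output {
    metrics = [otelcol.exporter.otlp.default.input]
    traces  = [otelcol.exporter.otlp.default.input]
  }
}

// Forward everything to an OTLP-compatible backend.
otelcol.exporter.otlp "default" {
  client {
    endpoint = "<OTLP_ENDPOINT_URL>"
  }
}
```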

### Run your task

@@ -75,13 +75,13 @@ To create an ECS Task Definition for AWS Fargate with an ADOT collector, complet

1. Download the [ECS Fargate task definition template][template] from GitHub.
1. Edit the task definition template and add the following parameters.
-* `{{region}}`: The region to send the data to.
-* `{{ecsTaskRoleArn}}`: The AWSOTTaskRole ARN.
-* `{{ecsExecutionRoleArn}}`: The AWSOTTaskExcutionRole ARN.
-* `command` - Assign a value to the command variable to select the path to the configuration file.
+- `{{region}}`: The region to send the data to.
+- `{{ecsTaskRoleArn}}`: The AWSOTTaskRole ARN.
+- `{{ecsExecutionRoleArn}}`: The AWSOTTaskExcutionRole ARN.
+- `command` - Assign a value to the command variable to select the path to the configuration file.
The AWS Collector comes with two configurations. Select one of them based on your environment:
-* Use `--config=/etc/ecs/ecs-default-config.yaml` to consume StatsD metrics, OTLP metrics and traces, and AWS X-Ray SDK traces.
-* Use `--config=/etc/ecs/container-insights/otel-task-metrics-config.yaml` to use StatsD, OTLP, AWS X-Ray, and Container Resource utilization metrics.
+- Use `--config=/etc/ecs/ecs-default-config.yaml` to consume StatsD metrics, OTLP metrics and traces, and AWS X-Ray SDK traces.
+- Use `--config=/etc/ecs/container-insights/otel-task-metrics-config.yaml` to use StatsD, OTLP, AWS X-Ray, and Container Resource utilization metrics.
1. Follow the ECS Fargate setup instructions to [create a task definition][task] using the template.

## Run {{% param "PRODUCT_NAME" %}} directly in your instance, or as a Kubernetes sidecar
74 changes: 37 additions & 37 deletions docs/sources/collect/logs-in-kubernetes.md
@@ -4,7 +4,7 @@ aliases:
- ../tasks/collect-logs-in-kubernetes/ # /docs/alloy/latest/tasks/collect-logs-in-kubernetes/
description: Learn how to collect logs on Kubernetes and forward them to Loki
menuTitle: Collect Kubernetes logs
-title: Collect Kubernetes logs and forward them to Loki
+title: Collect Kubernetes logs and forward them to Loki
weight: 250
---

@@ -14,26 +14,26 @@ You can configure {{< param "PRODUCT_NAME" >}} to collect logs and forward them

This topic describes how to:

-* Configure logs delivery.
-* Collect logs from Kubernetes Pods.
+- Configure logs delivery.
+- Collect logs from Kubernetes Pods.

## Components used in this topic

-* [`discovery.kubernetes`][discovery.kubernetes]
-* [`discovery.relabel`][discovery.relabel]
-* [`local.file_match`][local.file_match]
-* [`loki.source.file`][loki.source.file]
-* [`loki.source.kubernetes`][loki.source.kubernetes]
-* [`loki.source.kubernetes_events`][loki.source.kubernetes_events]
-* [`loki.process`][loki.process]
-* [`loki.write`][loki.write]
+- [`discovery.kubernetes`][discovery.kubernetes]
+- [`discovery.relabel`][discovery.relabel]
+- [`local.file_match`][local.file_match]
+- [`loki.source.file`][loki.source.file]
+- [`loki.source.kubernetes`][loki.source.kubernetes]
+- [`loki.source.kubernetes_events`][loki.source.kubernetes_events]
+- [`loki.process`][loki.process]
+- [`loki.write`][loki.write]

## Before you begin

-* Ensure that you are familiar with logs labelling when working with Loki.
-* Identify where to write collected logs.
+- Ensure that you are familiar with logs labelling when working with Loki.
+- Identify where to write collected logs.
You can write logs to Loki endpoints such as Grafana Loki, Grafana Cloud, or Grafana Enterprise Logs.
-* Be familiar with the concept of [Components][] in {{< param "PRODUCT_NAME" >}}.
+- Be familiar with the concept of [Components][] in {{< param "PRODUCT_NAME" >}}.

## Configure logs delivery

@@ -56,9 +56,9 @@ To configure a `loki.write` component for logs delivery, complete the following

Replace the following:

-* _`<LABEL>`_: The label for the component, such as `default`.
+- _`<LABEL>`_: The label for the component, such as `default`.
The label you use must be unique across all `loki.write` components in the same configuration file.
-* _`<LOKI_URL>`_ : The full URL of the Loki endpoint where logs are sent, such as `https://logs-us-central1.grafana.net/loki/api/v1/push`.
+- _`<LOKI_URL>`_ : The full URL of the Loki endpoint where logs are sent, such as `https://logs-us-central1.grafana.net/loki/api/v1/push`.

1. If your endpoint requires basic authentication, paste the following inside the `endpoint` block.

@@ -71,8 +71,8 @@ To configure a `loki.write` component for logs delivery, complete the following

Replace the following:

-* _`<USERNAME>`_: The basic authentication username.
-* _`<PASSWORD>`_: The basic authentication password or API key.
+- _`<USERNAME>`_: The basic authentication username.
+- _`<PASSWORD>`_: The basic authentication password or API key.
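Putting these two steps together, a sketch of the resulting `loki.write` block, with the `default` label assumed:

```alloy
// Send collected logs to a Loki endpoint using basic authentication.
loki.write "default" {
  endpoint {
    url = "<LOKI_URL>"

    basic_auth {
      username = "<USERNAME>"
      password = "<PASSWORD>"
    }
  }
}
```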

1. If you have more than one endpoint to write logs to, repeat the `endpoint` block for additional endpoints.

@@ -110,8 +110,8 @@ loki.source.file "example" {

Replace the following:

-* _`<USERNAME>`_: The remote write username.
-* _`<PASSWORD>`_: The remote write password.
+- _`<USERNAME>`_: The remote write username.
+- _`<PASSWORD>`_: The remote write password.

For more information on configuring logs delivery, refer to [loki.write][].

@@ -129,9 +129,9 @@ Thanks to the component architecture, you can follow one or all of the next sect

To get the system logs, you should use the following components:

-* [`local.file_match`][local.file_match]: Discovers files on the local filesystem.
-* [`loki.source.file`][loki.source.file]: Reads log entries from files.
-* [`loki.write`][loki.write]: Send logs to the Loki endpoint. You should have configured it in the [Configure logs delivery](#configure-logs-delivery) section.
+- [`local.file_match`][local.file_match]: Discovers files on the local filesystem.
+- [`loki.source.file`][loki.source.file]: Reads log entries from files.
+- [`loki.write`][loki.write]: Send logs to the Loki endpoint. You should have configured it in the [Configure logs delivery](#configure-logs-delivery) section.

Here is an example using those stages.
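That example body is collapsed in this diff view. As a stand-in, a minimal sketch of the same stages; the syslog path and the component labels are assumptions:

```alloy
// Discover the node's log files. The path is typical for Debian-family nodes.
local.file_match "node_logs" {
  path_targets = [{"__path__" = "/var/log/syslog"}]
}

// Tail the discovered files and forward entries to the configured Loki endpoint.
loki.source.file "node_logs" {
  targets    = local.file_match.node_logs.targets
  forward_to = [loki.write.default.receiver]
}
```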

@@ -157,8 +157,8 @@ loki.source.file "node_logs" {

Replace the following values:

-* _`<CLUSTER_NAME>`_: The label for this specific Kubernetes cluster, such as `production` or `us-east-1`.
-* _`<WRITE_COMPONENT_NAME>`_: The name of your `loki.write` component, such as `default`.
+- _`<CLUSTER_NAME>`_: The label for this specific Kubernetes cluster, such as `production` or `us-east-1`.
+- _`<WRITE_COMPONENT_NAME>`_: The name of your `loki.write` component, such as `default`.

### Pods logs

@@ -168,11 +168,11 @@ You can get pods logs through the log files on each node. In this guide, you get

You need the following components:

-* [`discovery.kubernetes`][discovery.kubernetes]: Discover pods information and list them for components to use.
-* [`discovery.relabel`][discovery.relabel]: Enforce relabelling strategies on the list of pods.
-* [`loki.source.kubernetes`][loki.source.kubernetes]: Tails logs from a list of Kubernetes pods targets.
-* [`loki.process`][loki.process]: Modify the logs before sending them to the next component.
-* [`loki.write`][loki.write]: Send logs to the Loki endpoint. You should have configured it in the [Configure logs delivery](#configure-logs-delivery) section.
+- [`discovery.kubernetes`][discovery.kubernetes]: Discover pods information and list them for components to use.
+- [`discovery.relabel`][discovery.relabel]: Enforce relabelling strategies on the list of pods.
+- [`loki.source.kubernetes`][loki.source.kubernetes]: Tails logs from a list of Kubernetes pods targets.
+- [`loki.process`][loki.process]: Modify the logs before sending them to the next component.
+- [`loki.write`][loki.write]: Send logs to the Loki endpoint. You should have configured it in the [Configure logs delivery](#configure-logs-delivery) section.

Here is an example using those stages:
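The full example is collapsed in this view. A trimmed sketch of the discovery and tailing stages; it omits the `discovery.relabel` and `loki.process` steps listed above, and the labels are assumed:

```alloy
// List pods from the Kubernetes API for downstream components to use.
discovery.kubernetes "pods" {
  role = "pod"
}

// Tail logs for every discovered pod and forward them to Loki.
loki.source.kubernetes "pods" {
  targets    = discovery.kubernetes.pods.targets
  forward_to = [loki.write.default.receiver]
}
```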

@@ -267,16 +267,16 @@ loki.process "pod_logs" {

Replace the following values:

-* _`<CLUSTER_NAME>`_: The label for this specific Kubernetes cluster, such as `production` or `us-east-1`.
-* _`<WRITE_COMPONENT_NAME>`_: The name of your `loki.write` component, such as `default`.
+- _`<CLUSTER_NAME>`_: The label for this specific Kubernetes cluster, such as `production` or `us-east-1`.
+- _`<WRITE_COMPONENT_NAME>`_: The name of your `loki.write` component, such as `default`.

### Kubernetes Cluster Events

You need the following components:

-* [`loki.source.kubernetes_events`][loki.source.kubernetes_events]: Tails events from Kubernetes API.
-* [`loki.process`][loki.process]: Modify the logs before sending them to the next component.
-* [`loki.write`][loki.write]: Send logs to the Loki endpoint. You should have configured it in the [Configure logs delivery](#configure-logs-delivery) section.
+- [`loki.source.kubernetes_events`][loki.source.kubernetes_events]: Tails events from Kubernetes API.
+- [`loki.process`][loki.process]: Modify the logs before sending them to the next component.
+- [`loki.write`][loki.write]: Send logs to the Loki endpoint. You should have configured it in the [Configure logs delivery](#configure-logs-delivery) section.

Here is an example using those stages:
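The example body is collapsed here as well. A minimal sketch of the events stage, forwarding straight to `loki.write` rather than through the `loki.process` step listed above; the labels are assumptions:

```alloy
// Watch the Kubernetes API and convert cluster events into log lines.
loki.source.kubernetes_events "cluster_events" {
  forward_to = [loki.write.default.receiver]
}
```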

@@ -312,8 +312,8 @@ loki.process "cluster_events" {

Replace the following values:

-* _`<CLUSTER_NAME>`_: The label for this specific Kubernetes cluster, such as `production` or `us-east-1`.
-* _`<WRITE_COMPONENT_NAME>`_: The name of your `loki.write` component, such as `default`.
+- _`<CLUSTER_NAME>`_: The label for this specific Kubernetes cluster, such as `production` or `us-east-1`.
+- _`<WRITE_COMPONENT_NAME>`_: The name of your `loki.write` component, such as `default`.

[Loki]: https://grafana.com/oss/loki/
[discovery.kubernetes]: ../../reference/components/discovery/discovery.kubernetes/