[v1.4] Format with prettier #2772

Draft: wants to merge 1 commit into base `release/v1.4`
2 changes: 1 addition & 1 deletion docs/sources/collect/_index.md
@@ -8,4 +8,4 @@ weight: 100

# Collect and forward data with {{% param "FULL_PRODUCT_NAME" %}}

-{{< section >}}
\ No newline at end of file
+{{< section >}}
14 changes: 7 additions & 7 deletions docs/sources/collect/choose-component.md
@@ -6,7 +6,7 @@ menuTitle: Choose a component
weight: 100
---

-# Choose a {{< param "FULL_PRODUCT_NAME" >}} component 
+# Choose a {{< param "FULL_PRODUCT_NAME" >}} component

[Components][components] are the building blocks of {{< param "FULL_PRODUCT_NAME" >}}, and there is a [large number of them][components-ref].
The components you select and configure depend on the telemetry signals you want to collect.
@@ -19,13 +19,13 @@ The components you select and configure depend on the telemetry signals you want
Use `prometheus.*` components to collect infrastructure metrics.
This will give you the best experience with [Grafana Infrastructure Observability][].

-For example, you can get metrics for a Linux host using `prometheus.exporter.unix`, 
-and metrics for a MongoDB instance using `prometheus.exporter.mongodb`. 
+For example, you can get metrics for a Linux host using `prometheus.exporter.unix`,
+and metrics for a MongoDB instance using `prometheus.exporter.mongodb`.

You can also scrape any Prometheus endpoint using `prometheus.scrape`.
Use `discovery.*` components to find targets for `prometheus.scrape`.
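As an illustration, the pattern described above can be sketched as follows. This is a minimal sketch, not part of this change: the component labels, the `role` value, and the remote write URL are placeholder assumptions.

```alloy
// Hypothetical wiring: discover pod targets, scrape them, and forward
// the samples to a Prometheus-compatible endpoint.
discovery.kubernetes "pods" {
  role = "pod"
}

prometheus.scrape "pods" {
  targets    = discovery.kubernetes.pods.targets
  forward_to = [prometheus.remote_write.default.receiver]
}

prometheus.remote_write "default" {
  endpoint {
    url = "https://prometheus.example.com/api/v1/write"
  }
}
```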

-[Grafana Infrastructure Observability]:https://grafana.com/docs/grafana-cloud/monitor-infrastructure/
+[Grafana Infrastructure Observability]: https://grafana.com/docs/grafana-cloud/monitor-infrastructure/

## Metrics for applications

@@ -37,7 +37,7 @@ For example, use `otelcol.receiver.otlp` to collect metrics from OpenTelemetry-i
If your application is already instrumented with Prometheus metrics, there is no need to use `otelcol.*` components.
Use `prometheus.*` components for the entire pipeline and send the metrics using `prometheus.remote_write`.

-[Grafana Application Observability]:https://grafana.com/docs/grafana-cloud/monitor-applications/application-observability/introduction/
+[Grafana Application Observability]: https://grafana.com/docs/grafana-cloud/monitor-applications/application-observability/introduction/

## Logs from infrastructure

@@ -53,13 +53,13 @@ which wouldn't correspond to the `namespace` label that is common in the Prometh
## Logs from applications

Use `otelcol.receiver.*` components to collect application logs.
-This will gather the application logs in an OpenTelemetry-native way, making it easier to 
+This will gather the application logs in an OpenTelemetry-native way, making it easier to
correlate the logs with OpenTelemetry metrics and traces coming from the application.
All application telemetry must follow the [OpenTelemetry semantic conventions][OTel-semantics], simplifying this correlation.

For example, if your application runs on Kubernetes, every trace, log, and metric can have a `k8s.namespace.name` resource attribute.

-[OTel-semantics]:https://opentelemetry.io/docs/concepts/semantic-conventions/
+[OTel-semantics]: https://opentelemetry.io/docs/concepts/semantic-conventions/

## Traces

50 changes: 25 additions & 25 deletions docs/sources/collect/datadog-traces-metrics.md
@@ -14,17 +14,17 @@ You can configure {{< param "PRODUCT_NAME" >}} to collect [Datadog][] traces and

This topic describes how to:

-* Configure {{< param "PRODUCT_NAME" >}} to send traces and metrics.
-* Configure the {{< param "PRODUCT_NAME" >}} Datadog Receiver.
-* Configure the Datadog Agent to forward traces and metrics to the {{< param "PRODUCT_NAME" >}} Datadog Receiver.
+- Configure {{< param "PRODUCT_NAME" >}} to send traces and metrics.
+- Configure the {{< param "PRODUCT_NAME" >}} Datadog Receiver.
+- Configure the Datadog Agent to forward traces and metrics to the {{< param "PRODUCT_NAME" >}} Datadog Receiver.

## Before you begin

-* Ensure that at least one instance of the [Datadog Agent][] is collecting metrics and/or traces.
-* Identify where you will write the collected telemetry.
+- Ensure that at least one instance of the [Datadog Agent][] is collecting metrics and/or traces.
+- Identify where you will write the collected telemetry.
 Metrics can be written to [Prometheus]() or any other OpenTelemetry-compatible database such as Grafana Mimir, Grafana Cloud, or Grafana Enterprise Metrics.
 Traces can be written to Grafana Tempo, Grafana Cloud, or Grafana Enterprise Traces.
-* Be familiar with the concept of [Components][] in {{< param "PRODUCT_NAME" >}}.
+- Be familiar with the concept of [Components][] in {{< param "PRODUCT_NAME" >}}.

## Configure {{% param "PRODUCT_NAME" %}} to send traces and metrics

@@ -45,7 +45,7 @@ The [otelcol.exporter.otlp][] component is responsible for delivering OTLP data

Replace the following:

-- _`<OTLP_ENDPOINT_URL>`_: The full URL of the OpenTelemetry-compatible endpoint where metrics and traces will be sent, such as `https://otlp-gateway-prod-eu-west-2.grafana.net/otlp`. 
+- _`<OTLP_ENDPOINT_URL>`_: The full URL of the OpenTelemetry-compatible endpoint where metrics and traces will be sent, such as `https://otlp-gateway-prod-eu-west-2.grafana.net/otlp`.

1. If your endpoint requires basic authentication, paste the following inside the `endpoint` block.

@@ -58,8 +58,8 @@ The [otelcol.exporter.otlp][] component is responsible for delivering OTLP data

Replace the following:

-- _`<USERNAME>`_: The basic authentication username. 
-- _`<PASSWORD>`_: The basic authentication password or API key. 
+- _`<USERNAME>`_: The basic authentication username.
+- _`<PASSWORD>`_: The basic authentication password or API key.
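Putting the two steps together, an exporter with basic authentication can look roughly like this. This is a sketch that assumes the `otelcol.auth.basic` component; the `default` and `credentials` labels and the placeholder values are illustrative.

```alloy
// Sketch: deliver OTLP data with basic authentication.
// <OTLP_ENDPOINT_URL>, <USERNAME>, and <PASSWORD> are placeholders.
otelcol.auth.basic "credentials" {
  username = "<USERNAME>"
  password = "<PASSWORD>"
}

otelcol.exporter.otlp "default" {
  client {
    endpoint = "<OTLP_ENDPOINT_URL>"
    auth     = otelcol.auth.basic.credentials.handler
  }
}
```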

## Configure the {{% param "PRODUCT_NAME" %}} Datadog Receiver

@@ -88,8 +88,8 @@ The [otelcol.exporter.otlp][] component is responsible for delivering OTLP data

Replace the following:

-- _`<MAX_STALE>`_: How long until a series not receiving new samples is removed, such as "5m". 
-- _`<MAX_STREAMS>`_: The upper limit of streams to track. New streams exceeding this limit are dropped. 
+- _`<MAX_STALE>`_: How long until a series not receiving new samples is removed, such as "5m".
+- _`<MAX_STREAMS>`_: The upper limit of streams to track. New streams exceeding this limit are dropped.

1. Add the following `otelcol.receiver.datadog` component to your configuration file.

@@ -103,10 +103,10 @@ The [otelcol.exporter.otlp][] component is responsible for delivering OTLP data
}
```

-Replace the following: 
+Replace the following:

-- _`<HOST>`_: The host address where the receiver will listen. 
-- _`<PORT>`_: The port where the receiver will listen. 
+- _`<HOST>`_: The host address where the receiver will listen.
+- _`<PORT>`_: The port where the receiver will listen.

1. If your endpoint requires basic authentication, paste the following inside the `endpoint` block.

@@ -117,10 +117,10 @@ The [otelcol.exporter.otlp][] component is responsible for delivering OTLP data
}
```

-Replace the following: 
+Replace the following:

-- _`<USERNAME>`_: The basic authentication username. 
-- _`<PASSWORD>`_: The basic authentication password or API key. 
+- _`<USERNAME>`_: The basic authentication username.
+- _`<PASSWORD>`_: The basic authentication password or API key.
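Taken together, the receiver pipeline described in this section can be sketched as follows. The component labels and the wiring to an `otelcol.exporter.otlp` component are assumptions for illustration, and all angle-bracket values remain placeholders.

```alloy
// Sketch: receive Datadog telemetry, convert delta metrics to
// cumulative, and export everything over OTLP.
otelcol.receiver.datadog "default" {
  endpoint = "<HOST>:<PORT>"
  output {
    metrics = [otelcol.processor.deltatocumulative.default.input]
    traces  = [otelcol.exporter.otlp.default.input]
  }
}

otelcol.processor.deltatocumulative "default" {
  max_stale   = "<MAX_STALE>"
  max_streams = <MAX_STREAMS>
  output {
    metrics = [otelcol.exporter.otlp.default.input]
  }
}
```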

## Configure Datadog Agent to forward telemetry to the {{% param "PRODUCT_NAME" %}} Datadog Receiver

@@ -139,19 +139,19 @@ We recommend this approach for current Datadog users who want to try using {{< p

Replace the following:

-- _`<DATADOG_RECEIVER_HOST>`_: The hostname where the {{< param "PRODUCT_NAME" >}} receiver is found. 
-- _`<DATADOG_RECEIVER_PORT>`_: The port where the {{< param "PRODUCT_NAME" >}} receiver is exposed. 
+- _`<DATADOG_RECEIVER_HOST>`_: The hostname where the {{< param "PRODUCT_NAME" >}} receiver is found.
+- _`<DATADOG_RECEIVER_PORT>`_: The port where the {{< param "PRODUCT_NAME" >}} receiver is exposed.

-Alternatively, you might want your Datadog Agent to send metrics only to {{< param "PRODUCT_NAME" >}}. 
+Alternatively, you might want your Datadog Agent to send metrics only to {{< param "PRODUCT_NAME" >}}.
You can do this by setting up your Datadog Agent in the following way:

1. Replace the DD_URL in the configuration YAML:

```yaml
-dd_url: http://<DATADOG_RECEIVER_HOST>:<DATADOG_RECEIVER_PORT> 
+dd_url: http://<DATADOG_RECEIVER_HOST>:<DATADOG_RECEIVER_PORT>
```
-Or by setting an environment variable:
+
+Or by setting an environment variable:

```bash
DD_DD_URL='{"http://<DATADOG_RECEIVER_HOST>:<DATADOG_RECEIVER_PORT>": ["datadog-receiver"]}'
@@ -162,9 +162,9 @@ Or by setting an environment variable:
The `otelcol.receiver.datadog` component is experimental.
To use this component, you need to start {{< param "PRODUCT_NAME" >}} with additional command line flags:

-```bash
-alloy run config.alloy --stability.level=experimental
-```
+```bash
+alloy run config.alloy --stability.level=experimental
+```

[Datadog]: https://www.datadoghq.com/
[Datadog Agent]: https://docs.datadoghq.com/agent/
39 changes: 20 additions & 19 deletions docs/sources/collect/logs-in-kubernetes.md
@@ -4,7 +4,7 @@ aliases:
- ../tasks/collect-logs-in-kubernetes/ # /docs/alloy/latest/tasks/collect-logs-in-kubernetes/
description: Learn how to collect logs on Kubernetes and forward them to Loki
menuTitle: Collect Kubernetes logs
-title: Collect Kubernetes logs and forward them to Loki 
+title: Collect Kubernetes logs and forward them to Loki
weight: 250
---

@@ -14,26 +14,26 @@ You can configure {{< param "PRODUCT_NAME" >}} to collect logs and forward them

This topic describes how to:

-* Configure logs delivery.
-* Collect logs from Kubernetes Pods.
+- Configure logs delivery.
+- Collect logs from Kubernetes Pods.

## Components used in this topic

-* [discovery.kubernetes][]
-* [discovery.relabel][]
-* [local.file_match][]
-* [loki.source.file][]
-* [loki.source.kubernetes][]
-* [loki.source.kubernetes_events][]
-* [loki.process][]
-* [loki.write][]
+- [discovery.kubernetes][]
+- [discovery.relabel][]
+- [local.file_match][]
+- [loki.source.file][]
+- [loki.source.kubernetes][]
+- [loki.source.kubernetes_events][]
+- [loki.process][]
+- [loki.write][]

## Before you begin

-* Ensure that you are familiar with logs labelling when working with Loki.
-* Identify where you will write collected logs.
+- Ensure that you are familiar with logs labelling when working with Loki.
+- Identify where you will write collected logs.
 You can write logs to Loki endpoints such as Grafana Loki, Grafana Cloud, or Grafana Enterprise Logs.
-* Be familiar with the concept of [Components][] in {{< param "PRODUCT_NAME" >}}.
+- Be familiar with the concept of [Components][] in {{< param "PRODUCT_NAME" >}}.

## Configure logs delivery

@@ -74,9 +74,9 @@ To configure a `loki.write` component for logs delivery, complete the following
- _`<USERNAME>`_: The basic authentication username.
- _`<PASSWORD>`_: The basic authentication password or API key.

-1. If you have more than one endpoint to write logs to, repeat the `endpoint` block for additional endpoints. 
+1. If you have more than one endpoint to write logs to, repeat the `endpoint` block for additional endpoints.

-The following simple example demonstrates configuring `loki.write` with multiple endpoints, mixed usage of basic authentication, 
+The following simple example demonstrates configuring `loki.write` with multiple endpoints, mixed usage of basic authentication,
and a `loki.source.file` component that collects logs from the filesystem on Alloy's own container.

```alloy
@@ -110,8 +110,8 @@ loki.source.file "example" {

Replace the following:

-- _`<USERNAME>`_: The remote write username. 
-- _`<PASSWORD>`_: The remote write password. 
+- _`<USERNAME>`_: The remote write username.
+- _`<PASSWORD>`_: The remote write password.

For more information on configuring logs delivery, refer to [loki.write][].

@@ -128,6 +128,7 @@ Thanks to the component architecture, you can follow one or all of the next sect
### System logs

To get the system logs, you should use the following components:

1. [local.file_match][]: Discovers files on the local filesystem.
1. [loki.source.file][]: Reads log entries from files.
1. [loki.write][]: Send logs to the Loki endpoint. You should have configured it in the [Configure logs delivery](#configure-logs-delivery) section.
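A minimal sketch of that pipeline could look like this. The log path and component labels are illustrative assumptions, and `loki.write.default` is assumed to be the component configured in the delivery section.

```alloy
// Sketch: discover log files on the node and tail them into Loki.
// The /var/log/*.log path is a placeholder.
local.file_match "system_logs" {
  path_targets = [{"__path__" = "/var/log/*.log"}]
}

loki.source.file "system_logs" {
  targets    = local.file_match.system_logs.targets
  forward_to = [loki.write.default.receiver]
}
```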
@@ -329,4 +330,4 @@ Replace the following values:
[loki.process]: ../../reference/components/loki/loki.process/
[loki.source.kubernetes_events]: ../../reference/components/loki/loki.source.kubernetes_events/
[Components]: ../../get-started/components/
-[Objects]: ../../concepts/configuration-syntax/expressions/types_and_values/#objects
\ No newline at end of file
+[Objects]: ../../concepts/configuration-syntax/expressions/types_and_values/#objects
18 changes: 11 additions & 7 deletions docs/sources/collect/metamonitoring.md
@@ -16,21 +16,21 @@ This topic describes how to collect and forward metrics, logs, and traces data f

## Components and configuration blocks used in this topic

-* [prometheus.exporter.self][]
-* [prometheus.scrape][]
-* [logging][]
-* [tracing][]
+- [prometheus.exporter.self][]
+- [prometheus.scrape][]
+- [logging][]
+- [tracing][]

## Before you begin

-* Identify where to send {{< param "PRODUCT_NAME" >}}'s telemetry data.
-* Be familiar with the concept of [Components][] in {{< param "PRODUCT_NAME" >}}.
+- Identify where to send {{< param "PRODUCT_NAME" >}}'s telemetry data.
+- Be familiar with the concept of [Components][] in {{< param "PRODUCT_NAME" >}}.

## Meta-monitoring metrics

{{< param "PRODUCT_NAME" >}} exposes its internal metrics using the Prometheus exposition format.

-In this task, you will use the [prometheus.exporter.self][] and [prometheus.scrape][] components to scrape {{< param "PRODUCT_NAME" >}}'s internal metrics and forward it to compatible {{< param "PRODUCT_NAME" >}} components. 
+In this task, you will use the [prometheus.exporter.self][] and [prometheus.scrape][] components to scrape {{< param "PRODUCT_NAME" >}}'s internal metrics and forward it to compatible {{< param "PRODUCT_NAME" >}} components.

1. Add the following `prometheus.exporter.self` component to your configuration. The component accepts no arguments.

@@ -40,6 +40,7 @@ In this task, you will use the [prometheus.exporter.self][] and [prometheus.scra
```

1. Add the following `prometheus.scrape` component to your configuration file.

```alloy
prometheus.scrape "<SCRAPE_LABEL>" {
targets = prometheus.exporter.self.<SELF_LABEL>.targets
@@ -48,6 +49,7 @@ In this task, you will use the [prometheus.exporter.self][] and [prometheus.scra
```

Replace the following:

- _`<SELF_LABEL>`_: The label for the component such as `default` or `metamonitoring`. The label must be unique across all `prometheus.exporter.self` components in the same configuration file.
- _`<SCRAPE_LABEL>`_: The label for the scrape component such as `default`. The label must be unique across all `prometheus.scrape` components in the same configuration file.
- _`<METRICS_RECEIVER_LIST>`_: A comma-delimited list of component receivers to forward metrics to.
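For example, with the placeholder labels filled in and metrics forwarded to a hypothetical `prometheus.remote_write` component, the pair of components could look like this sketch (the remote write URL is a placeholder assumption):

```alloy
// Sketch: scrape Alloy's own metrics and forward them to a
// Prometheus-compatible endpoint.
prometheus.exporter.self "metamonitoring" { }

prometheus.scrape "metamonitoring" {
  targets    = prometheus.exporter.self.metamonitoring.targets
  forward_to = [prometheus.remote_write.default.receiver]
}

prometheus.remote_write "default" {
  endpoint {
    url = "https://prometheus.example.com/api/v1/write"
  }
}
```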
@@ -91,6 +93,7 @@ The block is specified without a label and can only be provided once per configu
```

Replace the following:

- _`<LOG_LEVEL>`_: The log level to use for {{< param "PRODUCT_NAME" >}}'s logs. If the attribute isn't set, it defaults to `info`.
- _`<LOG_FORMAT>`_: The log format to use for {{< param "PRODUCT_NAME" >}}'s logs. If the attribute isn't set, it defaults to `logfmt`.
- _`<LOGS_RECEIVER_LIST>`_: A comma-delimited list of component receivers to forward logs to.
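As an illustration, a `logging` block that forwards {{< param "PRODUCT_NAME" >}}'s own logs to a hypothetical `loki.write` component could look like this sketch (the `default` label and the Loki URL are placeholder assumptions):

```alloy
// Sketch: route Alloy's internal logs to a Loki endpoint.
logging {
  level    = "info"
  format   = "logfmt"
  write_to = [loki.write.default.receiver]
}

loki.write "default" {
  endpoint {
    url = "https://loki.example.com/loki/api/v1/push"
  }
}
```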
@@ -131,6 +134,7 @@ In this task you will use the [tracing][] block to forward {{< param "PRODUCT_NA
```

Replace the following:

- _`<SAMPLING_FRACTION>`_: The fraction of traces to keep. If the attribute isn't set, it defaults to `0.1`.
- _`<TRACES_RECEIVER_LIST>`_: A comma-delimited list of component receivers to forward traces to.
For example, to send to an existing OpenTelemetry exporter component use `otelcol.exporter.otlp.EXPORT_LABEL.input`.
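A sketch of such a `tracing` block, wired to a hypothetical OTLP exporter, could look like this (the exporter label and endpoint are placeholder assumptions):

```alloy
// Sketch: keep 10% of Alloy's internal traces and send them over OTLP.
tracing {
  sampling_fraction = 0.1
  write_to          = [otelcol.exporter.otlp.default.input]
}

otelcol.exporter.otlp "default" {
  client {
    endpoint = "tempo.example.com:4317"
  }
}
```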