From 9350e833af125147f014a5bb693c8f7948306878 Mon Sep 17 00:00:00 2001
From: Jack Baldry
Date: Wed, 19 Feb 2025 17:57:09 +0000
Subject: [PATCH] [v1.5] Format with `prettier`

Consistent formatting improves readability and makes it easier for tools to transform the source.

The general human understanding of Markdown has some ambiguities, so it's possible this PR will change how some documentation is presented, because the formatter follows the CommonMark specification.
Eliminating this ambiguity provides more consistent behavior and makes it easier for future readers to understand your Markdown.

Consistent formatting is also necessary to produce meaningful diffs in future automated PRs.
If you would like to benefit from automated improvements made by the Docs Platform team, you must adopt `prettier` in your local development workflow and enforce it in CI.
If you would like help running `prettier` in CI, reach out in the [#docs-platform Slack channel](https://raintank-corp.slack.com/archives/C07R2REUULS).

Created-By: reverse-changes
Repository: grafana/alloy
Website-Pull-Request: https://github.com/grafana/website/pull/24071
---
 docs/sources/collect/choose-component.md | 8 +-
 .../sources/collect/datadog-traces-metrics.md | 46 +-
 .../sources/collect/ecs-opentelemetry-data.md | 38 +-
 docs/sources/collect/logs-in-kubernetes.md | 74 +-
 docs/sources/collect/metamonitoring.md | 33 +-
 docs/sources/collect/opentelemetry-data.md | 81 +-
 .../collect/opentelemetry-to-lgtm-stack.md | 38 +-
 docs/sources/collect/prometheus-metrics.md | 260 +++----
 docs/sources/configure/_index.md | 6 +-
 docs/sources/configure/kubernetes.md | 20 +-
 docs/sources/configure/linux.md | 14 +-
 docs/sources/configure/macos.md | 6 +-
 docs/sources/configure/nonroot.md | 5 +-
 docs/sources/configure/windows.md | 10 +-
 docs/sources/data-collection.md | 17 +-
 .../get-started/community_components.md | 6 +-
 .../get-started/component_controller.md | 8 +-
 docs/sources/get-started/components.md | 4 +-
 .../configuration-syntax/_index.md | 32 +-
 .../configuration-syntax/components.md | 4 +-
 .../expressions/function_calls.md | 2 +-
 .../expressions/operators.md | 88 +--
 .../expressions/types_and_values.md | 41 +-
 docs/sources/get-started/custom_components.md | 10 +-
 docs/sources/get-started/modules.md | 8 +-
 docs/sources/introduction/_index.md | 26 +-
 .../introduction/backward-compatibility.md | 18 +-
 .../introduction/estimate-resource-usage.md | 14 +-
 .../introduction/supported-platforms.md | 16 +-
 docs/sources/reference/cli/_index.md | 12 +-
 docs/sources/reference/cli/convert.md | 20 +-
 .../reference/cli/environment-variables.md | 35 +-
 docs/sources/reference/cli/fmt.md | 8 +-
 docs/sources/reference/cli/run.md | 82 +-
 docs/sources/reference/cli/tools.md | 38 +-
 .../sources/reference/compatibility/_index.md | 91 ++-
 .../reference/components/beyla/beyla.ebpf.md | 70 +-
 .../components/discovery/discovery.azure.md | 40 +-
 .../components/discovery/discovery.consul.md | 40 +-
 .../discovery/discovery.consulagent.md | 35 +-
 .../discovery/discovery.digitalocean.md | 32 +-
 .../components/discovery/discovery.dns.md | 16 +-
 .../components/discovery/discovery.docker.md | 52 +-
 .../discovery/discovery.dockerswarm.md | 134 ++--
 .../components/discovery/discovery.ec2.md | 60 +-
 .../components/discovery/discovery.eureka.md | 54 +-
 .../components/discovery/discovery.file.md | 24 +-
 .../components/discovery/discovery.gce.md | 32 +-
 .../components/discovery/discovery.hetzner.md | 64 +-
 .../components/discovery/discovery.http.md | 64 +-
 .../components/discovery/discovery.ionos.md | 50 +-
 .../components/discovery/discovery.kubelet.md | 74 +-
 .../discovery/discovery.kubernetes.md | 193 ++---
 .../components/discovery/discovery.kuma.md | 26 +-
 .../discovery/discovery.lightsail.md | 40 +-
 .../components/discovery/discovery.linode.md | 60 +-
 .../discovery/discovery.marathon.md | 36 +-
 .../components/discovery/discovery.nerve.md | 14 +-
 .../components/discovery/discovery.nomad.md | 36 +-
 .../discovery/discovery.openstack.md | 44 +-
 .../discovery/discovery.ovhcloud.md | 76 +-
 .../components/discovery/discovery.process.md | 15 +-
 .../discovery/discovery.puppetdb.md | 72 +-
 .../discovery/discovery.scaleway.md | 84 +--
 .../discovery/discovery.serverset.md | 14 +-
 .../components/discovery/discovery.triton.md | 32 +-
 .../components/discovery/discovery.uyuni.md | 28 +-
 .../components/faro/faro.receiver.md | 55 +-
 .../reference/components/local/local.file.md | 6 +-
 .../components/local/local.file_match.md | 30 +-
 .../reference/components/loki/loki.process.md | 75 +-
 .../reference/components/loki/loki.relabel.md | 10 +-
 .../components/loki/loki.rules.kubernetes.md | 69 +-
 .../components/loki/loki.secretfilter.md | 4 +-
 .../components/loki/loki.source.api.md | 33 +-
 .../loki/loki.source.awsfirehose.md | 41 +-
 .../loki/loki.source.azure_event_hubs.md | 2 -
 .../components/loki/loki.source.cloudflare.md | 172 ++---
 .../components/loki/loki.source.docker.md | 11 +-
 .../components/loki/loki.source.file.md | 23 +-
 .../components/loki/loki.source.gcplog.md | 17 +-
 .../components/loki/loki.source.gelf.md | 13 +-
 .../components/loki/loki.source.heroku.md | 17 +-
 .../components/loki/loki.source.journal.md | 11 +-
 .../components/loki/loki.source.kafka.md | 7 +-
 .../components/loki/loki.source.kubernetes.md | 61 +-
 .../loki/loki.source.kubernetes_events.md | 19 +-
 .../components/loki/loki.source.podlogs.md | 101 ++-
 .../components/loki/loki.source.syslog.md | 15 +-
 .../loki/loki.source.windowsevent.md | 2 +-
 .../reference/components/loki/loki.write.md | 34 +-
 .../mimir/mimir.rules.kubernetes.md | 64 +-
 .../components/otelcol/otelcol.auth.basic.md | 20 +-
 .../components/otelcol/otelcol.auth.bearer.md | 22 +-
 .../otelcol/otelcol.auth.headers.md | 40 +-
 .../components/otelcol/otelcol.auth.oauth2.md | 36 +-
 .../components/otelcol/otelcol.auth.sigv4.md | 39 +-
 .../otelcol/otelcol.connector.host_info.md | 8 +-
 .../otelcol/otelcol.connector.servicegraph.md | 80 +-
 .../otelcol/otelcol.connector.spanlogs.md | 2 +
 .../otelcol/otelcol.exporter.awss3.md | 80 +-
 .../otelcol/otelcol.exporter.datadog.md | 171 +++--
 .../otelcol/otelcol.exporter.debug.md | 38 +-
 .../otelcol/otelcol.exporter.kafka.md | 105 +--
 .../otelcol/otelcol.exporter.loadbalancing.md | 443 +++++------
 .../otelcol/otelcol.exporter.loki.md | 24 +-
 .../otelcol/otelcol.exporter.otlp.md | 84 ++-
 .../otelcol/otelcol.exporter.otlphttp.md | 80 +-
 .../otelcol/otelcol.exporter.prometheus.md | 39 +-
 ...telcol.extension.jaeger_remote_sampling.md | 147 ++--
 .../otelcol/otelcol.processor.attributes.md | 166 ++--
 .../otelcol/otelcol.processor.batch.md | 57 +-
 .../otelcol.processor.deltatocumulative.md | 6 +-
 .../otelcol/otelcol.processor.discovery.md | 62 +-
 .../otelcol/otelcol.processor.filter.md | 107 +--
 .../otelcol/otelcol.processor.groupbyattrs.md | 31 +-
 .../otelcol.processor.k8sattributes.md | 171 ++---
 .../otelcol.processor.memory_limiter.md | 31 +-
 ...otelcol.processor.probabilistic_sampler.md | 43 +-
 .../otelcol.processor.resourcedetection.md | 707 +++++++++---------
 .../otelcol/otelcol.processor.span.md | 111 +--
 .../otelcol/otelcol.processor.transform.md | 188 ++---
 .../otelcol/otelcol.receiver.datadog.md | 46 +-
 .../otelcol/otelcol.receiver.file_stats.md | 123 ++-
 .../otelcol/otelcol.receiver.jaeger.md | 129 ++--
 .../otelcol/otelcol.receiver.kafka.md | 118 +--
 .../otelcol/otelcol.receiver.loki.md | 12 +-
 .../otelcol/otelcol.receiver.opencensus.md | 63 +-
 .../otelcol/otelcol.receiver.otlp.md | 123 +--
 .../otelcol/otelcol.receiver.prometheus.md | 13 +-
 .../otelcol/otelcol.receiver.vcenter.md | 210 +++---
 .../otelcol/otelcol.receiver.zipkin.md | 48 +-
 .../prometheus/prometheus.exporter.azure.md | 40 +-
 .../prometheus.exporter.blackbox.md | 20 +-
 .../prometheus.exporter.cadvisor.md | 45 +-
 .../prometheus.exporter.cloudwatch.md | 31 +-
 .../prometheus/prometheus.exporter.consul.md | 28 +-
 .../prometheus.exporter.elasticsearch.md | 8 +-
 .../prometheus/prometheus.exporter.gcp.md | 2 +-
 .../prometheus/prometheus.exporter.mysql.md | 6 +-
 .../prometheus.exporter.postgres.md | 58 +-
 .../prometheus/prometheus.exporter.self.md | 8 +-
 .../prometheus/prometheus.exporter.snmp.md | 24 +-
 .../prometheus.exporter.snowflake.md | 2 +-
 .../prometheus/prometheus.exporter.unix.md | 183 ++---
 .../prometheus/prometheus.exporter.windows.md | 287 +++----
 .../prometheus.operator.podmonitors.md | 107 +--
 .../prometheus/prometheus.operator.probes.md | 109 +--
 .../prometheus.operator.servicemonitors.md | 107 +--
 .../prometheus/prometheus.receive_http.md | 28 +-
 .../prometheus/prometheus.relabel.md | 39 +-
 .../prometheus/prometheus.remote_write.md | 222 +++---
 .../prometheus/prometheus.scrape.md | 103 +--
 .../prometheus/prometheus.write.queue.md | 153 ++--
 .../components/pyroscope/pyroscope.ebpf.md | 57 +-
 .../components/pyroscope/pyroscope.java.md | 13 +-
 .../pyroscope/pyroscope.receive_http.md | 5 +-
 .../components/pyroscope/pyroscope.scrape.md | 61 +-
 .../components/pyroscope/pyroscope.write.md | 13 +-
 .../components/remote/remote.http.md | 6 +-
 .../remote/remote.kubernetes.configmap.md | 18 +-
 .../remote/remote.kubernetes.secret.md | 18 +-
 .../reference/components/remote/remote.s3.md | 10 +-
 .../components/remote/remote.vault.md | 18 +-
 .../reference/config-blocks/argument.md | 16 +-
 .../reference/config-blocks/declare.md | 10 +-
 .../sources/reference/config-blocks/export.md | 6 +-
 docs/sources/reference/config-blocks/http.md | 148 ++--
 .../reference/config-blocks/import.git.md | 32 +-
 .../reference/config-blocks/import.http.md | 30 +-
 .../reference/config-blocks/import.string.md | 6 +-
 .../reference/config-blocks/livedebugging.md | 1 +
 .../reference/config-blocks/logging.md | 22 +-
 .../reference/config-blocks/remotecfg.md | 58 +-
 .../reference/config-blocks/tracing.md | 26 +-
 docs/sources/reference/stdlib/array.md | 4 +-
 docs/sources/reference/stdlib/constants.md | 6 +-
 docs/sources/reference/stdlib/encoding.md | 2 +-
 docs/sources/reference/stdlib/string.md | 5 +-
 docs/sources/release-notes.md | 4 +-
 docs/sources/set-up/deploy.md | 76 +-
 docs/sources/set-up/install/ansible.md | 53 +-
 docs/sources/set-up/install/binary.md | 12 +-
 docs/sources/set-up/install/chef.md | 100 +--
 docs/sources/set-up/install/docker.md | 18 +-
 docs/sources/set-up/install/kubernetes.md | 16 +-
 docs/sources/set-up/install/macos.md | 6 +-
 docs/sources/set-up/install/openshift.md | 28 +-
 docs/sources/set-up/install/puppet.md | 118 +--
 docs/sources/set-up/install/windows.md | 16 +-
 docs/sources/set-up/migrate/from-flow.md | 61 +-
 docs/sources/set-up/migrate/from-operator.md | 55 +-
 docs/sources/set-up/migrate/from-otelcol.md | 67 +-
 .../sources/set-up/migrate/from-prometheus.md | 58 +-
 docs/sources/set-up/migrate/from-promtail.md | 56 +-
 docs/sources/set-up/migrate/from-static.md | 96 +--
 docs/sources/set-up/run/binary.md | 16 +-
 docs/sources/set-up/run/windows.md | 8 +-
 docs/sources/shared/agent-deprecation.md | 9 +-
 docs/sources/shared/index.md | 1 -
 .../components/authorization-block.md | 10 +-
 .../reference/components/azuread-block.md | 12 +-
 .../reference/components/basic-auth-block.md | 10 +-
 .../components/exporter-component-exports.md | 6 +-
 .../components/extract-field-block.md | 22 +-
 .../components/field-filter-block.md | 18 +-
 .../components/http-client-config-block.md | 20 +-
 .../reference/components/loki-server-grpc.md | 22 +-
 .../reference/components/loki-server-http.md | 16 +-
 .../components/managed_identity-block.md | 14 +-
 .../components/match-properties-block.md | 18 +-
 .../reference/components/oauth2-block.md | 24 +-
 .../components/otelcol-compression-field.md | 10 +-
 .../components/otelcol-debug-metrics-block.md | 8 +-
 .../otelcol-filter-attribute-block.md | 14 +-
 .../otelcol-filter-library-block.md | 12 +-
 .../otelcol-filter-log-severity-block.md | 60 +-
 .../components/otelcol-filter-regexp-block.md | 8 +-
 .../otelcol-filter-resource-block.md | 12 +-
 .../components/otelcol-grpc-balancer-name.md | 4 +-
 .../otelcol-kafka-authentication-kerberos.md | 20 +-
 .../otelcol-kafka-authentication-plaintext.md | 8 +-
 ...elcol-kafka-authentication-sasl-aws_msk.md | 8 +-
 .../otelcol-kafka-authentication-sasl.md | 20 +-
 .../otelcol-kafka-metadata-retry.md | 8 +-
 .../components/otelcol-kafka-metadata.md | 6 +-
 .../components/otelcol-queue-block.md | 10 +-
 .../components/otelcol-retry-block.md | 16 +-
 .../components/otelcol-tls-client-block.md | 38 +-
 .../components/otelcol-tls-server-block.md | 36 +-
 .../reference/components/output-block-logs.md | 6 +-
 .../components/output-block-metrics.md | 6 +-
 .../components/output-block-traces.md | 6 +-
 .../reference/components/output-block.md | 10 +-
 .../components/prom-operator-scrape.md | 8 +-
 .../reference/components/rule-block-logs.md | 40 +-
 .../shared/reference/components/rule-block.md | 40 +-
 .../reference/components/sigv4-block.md | 14 +-
 .../reference/components/tls-config-block.md | 36 +-
 .../components/write_relabel_config.md | 40 +-
 .../troubleshoot/controller_metrics.md | 12 +-
 docs/sources/troubleshoot/debug.md | 57 +-
 docs/sources/troubleshoot/support_bundle.md | 21 +-
 .../tutorials/first-components-and-stdlib.md | 66 +-
 .../tutorials/logs-and-relabeling-basics.md | 16 +-
 docs/sources/tutorials/processing-logs.md | 62 +-
 docs/sources/tutorials/send-logs-to-loki.md | 214 +++---
 .../tutorials/send-metrics-to-prometheus.md | 57 +-
 248 files changed, 6117 insertions(+), 5904 deletions(-)

diff --git a/docs/sources/collect/choose-component.md b/docs/sources/collect/choose-component.md
index 36a880d54c..383a0df610 100644
--- a/docs/sources/collect/choose-component.md
+++ b/docs/sources/collect/choose-component.md
@@ -6,7 +6,7 @@ menuTitle: Choose a component
 weight: 100
 ---

-# Choose a {{< param "FULL_PRODUCT_NAME" >}} component
+# Choose a {{< param "FULL_PRODUCT_NAME" >}} component

 [Components][components] are the building blocks of {{< param "FULL_PRODUCT_NAME" >}}, and there
 is a [large number of them][components-ref].
 The components you select and configure depend on the telemetry signals you want to collect.
@@ -24,7 +24,7 @@ For example, you can get metrics for a Linux host using `prometheus.exporter.uni
 You can also scrape any Prometheus endpoint using `prometheus.scrape`.
 Use `discovery.*` components to find targets for `prometheus.scrape`.

-[Grafana Infrastructure Observability]:https://grafana.com/docs/grafana-cloud/monitor-infrastructure/
+[Grafana Infrastructure Observability]: https://grafana.com/docs/grafana-cloud/monitor-infrastructure/

 ## Metrics for applications

@@ -36,7 +36,7 @@ For example, use `otelcol.receiver.otlp` to collect metrics from OpenTelemetry-i
 If your application is already instrumented with Prometheus metrics, there is no need to use `otelcol.*` components.
 Use `prometheus.*` components for the entire pipeline and send the metrics using `prometheus.remote_write`.

-[Grafana Application Observability]:https://grafana.com/docs/grafana-cloud/monitor-applications/application-observability/introduction/
+[Grafana Application Observability]: https://grafana.com/docs/grafana-cloud/monitor-applications/application-observability/introduction/

 ## Logs from infrastructure

@@ -58,7 +58,7 @@ All application telemetry must follow the [OpenTelemetry semantic conventions][O
 For example, if your application runs on Kubernetes, every trace, log, and metric can have a `k8s.namespace.name` resource attribute.
-[OTel-semantics]:https://opentelemetry.io/docs/concepts/semantic-conventions/
+[OTel-semantics]: https://opentelemetry.io/docs/concepts/semantic-conventions/

 ## Traces
diff --git a/docs/sources/collect/datadog-traces-metrics.md b/docs/sources/collect/datadog-traces-metrics.md
index 2ab9da3590..ba39d81f62 100644
--- a/docs/sources/collect/datadog-traces-metrics.md
+++ b/docs/sources/collect/datadog-traces-metrics.md
@@ -14,17 +14,17 @@ You can configure {{< param "PRODUCT_NAME" >}} to collect [Datadog][] traces and

 This topic describes how to:

-* Configure {{< param "PRODUCT_NAME" >}} to send traces and metrics.
-* Configure the {{< param "PRODUCT_NAME" >}} Datadog Receiver.
-* Configure the Datadog Agent to forward traces and metrics to the {{< param "PRODUCT_NAME" >}} Datadog Receiver.
+- Configure {{< param "PRODUCT_NAME" >}} to send traces and metrics.
+- Configure the {{< param "PRODUCT_NAME" >}} Datadog Receiver.
+- Configure the Datadog Agent to forward traces and metrics to the {{< param "PRODUCT_NAME" >}} Datadog Receiver.

 ## Before you begin

-* Ensure that at least one instance of the [Datadog Agent][] is collecting metrics and traces.
-* Identify where to write the collected telemetry.
+- Ensure that at least one instance of the [Datadog Agent][] is collecting metrics and traces.
+- Identify where to write the collected telemetry.
   Metrics can be written to [Prometheus][] or any other OpenTelemetry-compatible database such as Grafana Mimir, Grafana Cloud, or Grafana Enterprise Metrics.
   Traces can be written to Grafana Tempo, Grafana Cloud, or Grafana Enterprise Traces.
-* Be familiar with the concept of [Components][] in {{< param "PRODUCT_NAME" >}}.
+- Be familiar with the concept of [Components][] in {{< param "PRODUCT_NAME" >}}.
 ## Configure {{% param "PRODUCT_NAME" %}} to send traces and metrics

@@ -45,7 +45,7 @@ The [otelcol.exporter.otlp][] component is responsible for delivering OTLP data

    Replace the following:

-   * _``_: The full URL of the OpenTelemetry-compatible endpoint where metrics and traces are sent, such as `https://otlp-gateway-prod-eu-west-2.grafana.net/otlp`.
+   - _``_: The full URL of the OpenTelemetry-compatible endpoint where metrics and traces are sent, such as `https://otlp-gateway-prod-eu-west-2.grafana.net/otlp`.

 1. If your endpoint requires basic authentication, paste the following inside the `endpoint` block.

@@ -58,8 +58,8 @@ The [otelcol.exporter.otlp][] component is responsible for delivering OTLP data

    Replace the following:

-   * _``_: The basic authentication username.
-   * _``_: The basic authentication password or API key.
+   - _``_: The basic authentication username.
+   - _``_: The basic authentication password or API key.

 ## Configure the {{% param "PRODUCT_NAME" %}} Datadog Receiver

@@ -88,8 +88,8 @@ The [otelcol.exporter.otlp][] component is responsible for delivering OTLP data

    Replace the following:

-   * _``_: How long until a series not receiving new samples is removed, such as "5m".
-   * _``_: The upper limit of streams to track. New streams exceeding this limit are dropped.
+   - _``_: How long until a series not receiving new samples is removed, such as "5m".
+   - _``_: The upper limit of streams to track. New streams exceeding this limit are dropped.

 1. Add the following `otelcol.receiver.datadog` component to your configuration file.

    ```alloy
    ...
    }
    ```

-   Replace the following:
+   Replace the following:

-   * _``_: The host address where the receiver listens.
-   * _``_: The port where the receiver listens.
+   - _``_: The host address where the receiver listens.
+   - _``_: The port where the receiver listens.

 1. If your endpoint requires basic authentication, paste the following inside the `endpoint` block.

@@ -117,10 +117,10 @@ The [otelcol.exporter.otlp][] component is responsible for delivering OTLP data
    }
    ```

-   Replace the following:
+   Replace the following:

-   * _``_: The basic authentication username.
-   * _``_: The basic authentication password or API key.
+   - _``_: The basic authentication username.
+   - _``_: The basic authentication password or API key.

 ## Configure Datadog Agent to forward telemetry to the {{% param "PRODUCT_NAME" %}} Datadog Receiver

@@ -139,8 +139,8 @@ We recommend this approach for current Datadog users who want to try using {{< p

    Replace the following:

-   * _``_: The hostname where the {{< param "PRODUCT_NAME" >}} receiver is found.
-   * _``_: The port where the {{< param "PRODUCT_NAME" >}} receiver is exposed.
+   - _``_: The hostname where the {{< param "PRODUCT_NAME" >}} receiver is found.
+   - _``_: The port where the {{< param "PRODUCT_NAME" >}} receiver is exposed.

 Alternatively, you might want your Datadog Agent to send metrics only to {{< param "PRODUCT_NAME" >}}.
 You can do this by setting up your Datadog Agent in the following way:

@@ -148,7 +148,7 @@ You can do this by setting up your Datadog Agent in the following way:
 1. Replace the DD_URL in the configuration YAML:

    ```yaml
-   dd_url: http://:
+   dd_url: http://:
    ```

    Or by setting an environment variable:

@@ -162,9 +162,9 @@ You can do this by setting up your Datadog Agent in the following way:
    The `otelcol.receiver.datadog` component is experimental.
 To use this component, you need to start {{< param "PRODUCT_NAME" >}} with additional command line flags:

-   ```bash
-   alloy run config.alloy --stability.level=experimental
-   ```
+```bash
+alloy run config.alloy --stability.level=experimental
+```

 [Datadog]: https://www.datadoghq.com/
 [Datadog Agent]: https://docs.datadoghq.com/agent/
diff --git a/docs/sources/collect/ecs-opentelemetry-data.md b/docs/sources/collect/ecs-opentelemetry-data.md
index 298bd73bc2..e405fd9a73 100644
--- a/docs/sources/collect/ecs-opentelemetry-data.md
+++ b/docs/sources/collect/ecs-opentelemetry-data.md
@@ -20,10 +20,10 @@ There are three different ways you can use {{< param "PRODUCT_NAME" >}} to colle

 ## Before you begin

-* Ensure that you have basic familiarity with instrumenting applications with OpenTelemetry.
-* Have an available Amazon ECS or AWS Fargate deployment.
-* Identify where {{< param "PRODUCT_NAME" >}} writes received telemetry data.
-* Be familiar with the concept of [Components][] in {{< param "PRODUCT_NAME" >}}.
+- Ensure that you have basic familiarity with instrumenting applications with OpenTelemetry.
+- Have an available Amazon ECS or AWS Fargate deployment.
+- Identify where {{< param "PRODUCT_NAME" >}} writes received telemetry data.
+- Be familiar with the concept of [Components][] in {{< param "PRODUCT_NAME" >}}.

 ## Use a custom OpenTelemetry configuration file from the SSM Parameter store

@@ -39,8 +39,8 @@ In ECS, you can set the values of environment variables from AWS Systems Manager
 1. Open the AWS Systems Manager console.
 1. Select Elastic Container Service.
-   1. In the navigation pane, choose *Task definition*.
-   1. Choose *Create new revision*.
+   1. In the navigation pane, choose _Task definition_.
+   1. Choose _Create new revision_.

 1. Add an environment variable.

@@ -53,15 +53,15 @@ In ECS, you can set the values of environment variables from AWS Systems Manager
 ### Create the SSM parameter

 1. Open the AWS Systems Manager console.
-1. In the navigation pane, choose *Parameter Store*.
-1. Choose *Create parameter*.
+1. In the navigation pane, choose _Parameter Store_.
+1. Choose _Create parameter_.
 1. Create a parameter with the following values:

-   * Name: `otel-collector-config`
-   * Tier: `Standard`
-   * Type: `String`
-   * Data type: `Text`
-   * Value: Copy and paste your custom OpenTelemetry configuration file or [{{< param "PRODUCT_NAME" >}} configuration file][configure].
+   - Name: `otel-collector-config`
+   - Tier: `Standard`
+   - Type: `String`
+   - Data type: `Text`
+   - Value: Copy and paste your custom OpenTelemetry configuration file or [{{< param "PRODUCT_NAME" >}} configuration file][configure].

 ### Run your task

@@ -75,13 +75,13 @@ To create an ECS Task Definition for AWS Fargate with an ADOT collector, complet
 1. Download the [ECS Fargate task definition template][template] from GitHub.
 1. Edit the task definition template and add the following parameters.

-   * `{{region}}`: The region to send the data to.
-   * `{{ecsTaskRoleArn}}`: The AWSOTTaskRole ARN.
-   * `{{ecsExecutionRoleArn}}`: The AWSOTTaskExcutionRole ARN.
-   * `command` - Assign a value to the command variable to select the path to the configuration file.
+   - `{{region}}`: The region to send the data to.
+   - `{{ecsTaskRoleArn}}`: The AWSOTTaskRole ARN.
+   - `{{ecsExecutionRoleArn}}`: The AWSOTTaskExcutionRole ARN.
+   - `command` - Assign a value to the command variable to select the path to the configuration file.

    The AWS Collector comes with two configurations. Select one of them based on your environment:

-   * Use `--config=/etc/ecs/ecs-default-config.yaml` to consume StatsD metrics, OTLP metrics and traces, and AWS X-Ray SDK traces.
+   - Use `--config=/etc/ecs/container-insights/otel-task-metrics-config.yaml` to use StatsD, OTLP, AWS X-Ray, and Container Resource utilization metrics.

 1. Follow the ECS Fargate setup instructions to [create a task definition][task] using the template.

 ## Run {{% param "PRODUCT_NAME" %}} directly in your instance, or as a Kubernetes sidecar
diff --git a/docs/sources/collect/logs-in-kubernetes.md b/docs/sources/collect/logs-in-kubernetes.md
index d8b8b17fb2..5feda702fb 100644
--- a/docs/sources/collect/logs-in-kubernetes.md
+++ b/docs/sources/collect/logs-in-kubernetes.md
@@ -4,7 +4,7 @@ aliases:
   - ../tasks/collect-logs-in-kubernetes/ # /docs/alloy/latest/tasks/collect-logs-in-kubernetes/
 description: Learn how to collect logs on Kubernetes and forward them to Loki
 menuTitle: Collect Kubernetes logs
-title: Collect Kubernetes logs and forward them to Loki
+title: Collect Kubernetes logs and forward them to Loki
 weight: 250
 ---

@@ -14,26 +14,26 @@ You can configure {{< param "PRODUCT_NAME" >}} to collect logs and forward them

 This topic describes how to:

-* Configure logs delivery.
-* Collect logs from Kubernetes Pods.
+- Configure logs delivery.
+- Collect logs from Kubernetes Pods.
 ## Components used in this topic

-* [`discovery.kubernetes`][discovery.kubernetes]
-* [`discovery.relabel`][discovery.relabel]
-* [`local.file_match`][local.file_match]
-* [`loki.source.file`][loki.source.file]
-* [`loki.source.kubernetes`][loki.source.kubernetes]
-* [`loki.source.kubernetes_events`][loki.source.kubernetes_events]
-* [`loki.process`][loki.process]
-* [`loki.write`][loki.write]
+- [`discovery.kubernetes`][discovery.kubernetes]
+- [`discovery.relabel`][discovery.relabel]
+- [`local.file_match`][local.file_match]
+- [`loki.source.file`][loki.source.file]
+- [`loki.source.kubernetes`][loki.source.kubernetes]
+- [`loki.source.kubernetes_events`][loki.source.kubernetes_events]
+- [`loki.process`][loki.process]
+- [`loki.write`][loki.write]

 ## Before you begin

-* Ensure that you are familiar with logs labelling when working with Loki.
-* Identify where to write collected logs.
+- Ensure that you are familiar with logs labelling when working with Loki.
+- Identify where to write collected logs.
   You can write logs to Loki endpoints such as Grafana Loki, Grafana Cloud, or Grafana Enterprise Logs.
-* Be familiar with the concept of [Components][] in {{< param "PRODUCT_NAME" >}}.
+- Be familiar with the concept of [Components][] in {{< param "PRODUCT_NAME" >}}.

 ## Configure logs delivery

@@ -56,9 +56,9 @@ To configure a `loki.write` component for logs delivery, complete the following

    Replace the following:

-   * _`