From 1a6641512caaa66fdf24032934a5e634eb4fc1c4 Mon Sep 17 00:00:00 2001
From: Jeffrey Limnardy
Date: Mon, 21 Oct 2024 11:15:25 +0200
Subject: [PATCH] docs: Updated deprecated otel docs link (#1539)

---
 ...ntegrate-prometheus-with-telemetry-manager-using-alerting.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/contributor/arch/003-integrate-prometheus-with-telemetry-manager-using-alerting.md b/docs/contributor/arch/003-integrate-prometheus-with-telemetry-manager-using-alerting.md
index d0ee2f6af..30a8ef8a1 100644
--- a/docs/contributor/arch/003-integrate-prometheus-with-telemetry-manager-using-alerting.md
+++ b/docs/contributor/arch/003-integrate-prometheus-with-telemetry-manager-using-alerting.md
@@ -25,7 +25,7 @@ Our current reconciliation strategy triggers either when a change occurs or ever
 
 #### Flakiness Mitigation
 
-To ensure reliability and avoid false alerts, it's crucial to introduce a delay before signaling a problem. As suggested in [OTel Collector monitoring best practices](https://github.com/open-telemetry/opentelemetry-collector/blob/main/docs/monitoring.md):
+To ensure reliability and avoid false alerts, it's crucial to introduce a delay before signaling a problem. As suggested in [OTel Collector monitoring best practices](https://opentelemetry.io/docs/collector/internal-telemetry/#use-internal-telemetry-to-monitor-the-collector):
 
 > Use the rate of otelcol_processor_dropped_spans > 0 and otelcol_processor_dropped_metric_points > 0 to detect data loss. Depending on requirements, set up a minimal time window before alerting to avoid notifications for minor losses that fall within acceptable levels of reliability.
 
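For illustration, the guidance quoted in the patched document translates into a Prometheus alerting rule along the following lines. This is a minimal sketch assuming the collector's standard `otelcol_processor_dropped_spans` and `otelcol_processor_dropped_metric_points` metrics; the rule name, evaluation window, and `for:` delay are illustrative and not taken from the Telemetry Manager configuration.

```yaml
# Sketch of the quoted best practice: alert on dropped telemetry, but only
# after the condition has held for a minimal time window so that brief,
# acceptable losses do not trigger false alerts. All names and durations
# here are hypothetical examples, not Telemetry Manager defaults.
groups:
  - name: otel-collector-data-loss
    rules:
      - alert: TelemetryDataLoss
        # Fire when the collector is actively dropping spans or metric points.
        expr: |
          rate(otelcol_processor_dropped_spans[5m]) > 0
            or rate(otelcol_processor_dropped_metric_points[5m]) > 0
        # The delay before signaling a problem, as the OTel docs suggest.
        for: 10m
        labels:
          severity: warning
        annotations:
          summary: "OTel Collector is dropping telemetry data"
```

The `for:` clause is what implements the suggested minimal time window: the expression must remain true for the entire duration before the alert fires, so transient drops within acceptable reliability levels never produce a notification.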