Merge pull request #1188 from splunk/repo-sync
Pulling refs/heads/main into main
aurbiztondo-splunk authored Feb 16, 2024
2 parents 79e85a9 + 03af45c commit 2c2804d
Showing 4 changed files with 326 additions and 5 deletions.
2 changes: 1 addition & 1 deletion data-visualization/dashboards/dashboards.rst
@@ -21,7 +21,7 @@ Dashboards in Splunk Observability Cloud
Best practices for creating dashboards<dashboards-best-practices>
dashboards-import-export
Share, clone, and mirror dashboards<dashboard-share-clone-mirror>
Dashboards available<dashboards-list>
Available dashboards <dashboards-list>


Dashboards are groupings of charts and visualizations of metrics. Well-designed dashboards provide useful and actionable insight into your system at a glance. Dashboards can be complex or contain just a few charts that drill down only into the data you want to see.
1 change: 1 addition & 0 deletions metrics-and-metadata/data-tools-landing.rst
@@ -13,6 +13,7 @@ Data tools in Splunk Observability Cloud

Metric finder and metadata catalogue <metrics-finder-metadata-catalog>
Related Content <relatedcontent>
relatedcontent-collector-apm.rst
Global data links <link-metadata-to-content>

Splunk Observability Cloud provides a wide array of features and tools to help you manage, understand, and leverage your data:
316 changes: 316 additions & 0 deletions metrics-and-metadata/relatedcontent-collector-apm.rst
@@ -0,0 +1,316 @@
.. _relatedcontent-collector-apm:
.. _get-started-enablerelatedcontent:

***********************************************************************************
Configure the Collector to enable Related Content for Infra and APM
***********************************************************************************

.. meta::
   :description: Configure the Collector to enable Related Content for APM.

The default configuration of the Splunk Distribution of the OpenTelemetry Collector automatically configures Related Content for you. If you're using a custom configuration, read on.

For an introduction to Related Content, see :ref:`get-started-relatedcontent`.

Configure the Collector in host monitoring (agent) mode to enable Related Content
==========================================================================================================

To view your infrastructure data in the APM service dashboards, you need to enable certain components in the OpenTelemetry Collector. To learn more, see :ref:`otel-components` and :ref:`otel-data-processing`.

Collector configuration in host monitoring mode
-----------------------------------------------------------------

These are the configuration details required:

``hostmetrics`` receiver
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Enable the ``cpu``, ``memory``, ``filesystem``, and ``network`` scrapers to collect their metrics.

To learn more, see :ref:`host-metrics-receiver`.
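
For illustration, a minimal receiver configuration that activates these scrapers might look like the following sketch. The 10-second collection interval matches the full example later in this section and is only a starting point.

.. code-block:: yaml

   receivers:
     hostmetrics:
       collection_interval: 10s
       scrapers:
         cpu:
         memory:
         filesystem:
         network: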

``signalfx`` exporter
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

The SignalFx exporter aggregates the metrics from the ``hostmetrics`` receiver. It also sends metrics such as ``cpu.utilization``, which are referenced in the relevant APM service charts.

To learn more, see :ref:`signalfx-exporter`.
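
As a sketch, the exporter only needs your access token and the API and ingest endpoints, which the full example later in this section supplies through the standard ``SPLUNK_*`` environment variables:

.. code-block:: yaml

   exporters:
     signalfx:
       access_token: "${SPLUNK_ACCESS_TOKEN}"
       api_url: "${SPLUNK_API_URL}"
       ingest_url: "${SPLUNK_INGEST_URL}"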

Correlation flag
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Correlation is activated by default using the standard SignalFx exporter configuration. This setup lets the Collector make the relevant API calls to link your spans with the associated infrastructure metrics.

The SignalFx exporter must be enabled for both the metrics and traces pipelines. To adjust the correlation option further, see the SignalFx exporter's options at :ref:`signalfx-exporter-settings`.
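
For instance, keeping ``signalfx`` in the ``exporters`` list of both pipelines, as in the full example later in this section, is enough for the default correlation behavior. The following fragment shows only the exporter lists, not complete pipeline definitions:

.. code-block:: yaml

   service:
     pipelines:
       traces:
         exporters: [sapm, signalfx]
       metrics:
         exporters: [signalfx]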

``resourcedetection`` processor
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

This processor enables a unique ``host.name`` value to be set for metrics and traces. The ``host.name`` is determined by either the EC2 host name or the system host name.

Use the following configuration:

* Use the cloud provider or the :ref:`environment variable <collector-env-var>` to set ``host.name``
* Enable ``override``

To learn more, see :ref:`resourcedetection-processor`.
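
A processor fragment matching the full example later in this section might look like this:

.. code-block:: yaml

   processors:
     resourcedetection:
       detectors: [system,env,gcp,ec2]
       override: true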

``resource/add_environment`` processor (optional)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

APM charts require the environment span attribute to be set correctly.

To set this attribute you have two options:

* Configure the attribute in instrumentation
* Use this processor to insert a ``deployment.environment`` span attribute into all spans

To learn more, see :ref:`resourcedetection-processor`.
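
If you set the attribute through the Collector rather than in instrumentation, a fragment like the following, also shown in the full example later in this section, inserts the attribute on every span. The ``staging`` value is a placeholder for your own environment name.

.. code-block:: yaml

   processors:
     resource/add_environment:
       attributes:
         - action: insert
           key: deployment.environment
           value: staging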

Example
-----------------------------------------------------------------

Here are the relevant config snippets from each section:

.. code-block:: yaml

   receivers:
     hostmetrics:
       collection_interval: 10s
       scrapers:
         cpu:
         disk:
         filesystem:
         memory:
         network:

   processors:
     resourcedetection:
       detectors: [system,env,gcp,ec2]
       override: true
     resource/add_environment:
       attributes:
         - action: insert
           value: staging
           key: deployment.environment

   exporters:
     # Traces
     sapm:
       access_token: "${SPLUNK_ACCESS_TOKEN}"
       endpoint: "${SPLUNK_TRACE_URL}"
     # Metrics + Events + APM correlation calls
     signalfx:
       access_token: "${SPLUNK_ACCESS_TOKEN}"
       api_url: "${SPLUNK_API_URL}"
       ingest_url: "${SPLUNK_INGEST_URL}"

   service:
     extensions: [health_check, http_forwarder, zpages]
     pipelines:
       traces:
         receivers: [jaeger, zipkin]
         processors: [memory_limiter, batch, resourcedetection, resource/add_environment]
         exporters: [sapm, signalfx]
       metrics:
         receivers: [hostmetrics]
         processors: [memory_limiter, batch, resourcedetection]
         exporters: [signalfx]

Configure the Collector to enable Related Content in host monitoring (agent) and data forwarding (gateway) modes
============================================================================================================================

If you need to run the OpenTelemetry Collector in both host monitoring (agent) and data forwarding (gateway) modes, refer to the following sections.

For more information, see :ref:`otel-deployment-mode`.

Configure the agent
-----------------------------------------------------------------

Follow the same steps as mentioned in the previous section and include the following changes:

``http_forwarder`` extension
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

The ``http_forwarder`` listens on port ``6060`` and sends all the REST API calls directly to Splunk Observability Cloud.

If your agent can't reach the Splunk Observability Cloud back end directly, set the ``egress`` endpoint to the URL of the gateway.
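
For example, assuming the gateway's ``http_forwarder`` listens on its default port ``6060`` and ``SPLUNK_GATEWAY_URL`` points at your gateway, a sketch of the agent-side extension might look like this:

.. code-block:: yaml

   extensions:
     http_forwarder:
       egress:
         # Assumed gateway address; replace with your own gateway URL
         endpoint: "http://${SPLUNK_GATEWAY_URL}:6060"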

``signalfx`` exporter
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

.. caution:: You must send the REST API calls required for trace correlation through the SignalFx exporter in the ``traces`` pipeline.

If you want, you can also use the exporter for metrics, although it's best to use the OTLP exporter. See :ref:`enablerelatedcontent-otlp` for more details.

Use the following configuration:

* Set the ``api_url`` endpoint to the URL of the gateway. Specify the ingress port of the ``http_forwarder`` of the gateway, which is ``6060`` by default.
* Set the ``ingest_url`` endpoint to the URL of the gateway. Specify the ingress port of the ``signalfx`` receiver of the gateway, which is ``9943`` by default.

All pipelines
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Send the metrics, traces, and logs pipelines to the appropriate receivers on the gateway.

.. _enablerelatedcontent-otlp:

``otlp exporter`` (optional)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Using the OTLP exporter is optional, but recommended for the majority of your traffic from the agent to the gateway. Because all data is converted to OTLP when it's received, the OTLP exporter is the most efficient way to send data to the gateway. Use the SignalFx exporter only to make REST API calls in the traces pipeline.

The OTLP exporter uses the ``grpc`` protocol, so the endpoint must be defined as the IP address of the gateway.

.. note:: If you are using the OTLP exporter for metrics, the ``hostmetrics`` aggregation must be performed at the gateway.

To learn more, see :ref:`otlp-exporter`.
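
Assuming the gateway's OTLP gRPC receiver listens on its default port ``4317``, the exporter configuration matches the agent example below:

.. code-block:: yaml

   exporters:
     otlp:
       endpoint: "${SPLUNK_GATEWAY_URL}:4317"
       tls:
         insecure: true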

Example
-----------------------------------------------------------------

Here are the relevant config snippets from each section:

.. code-block:: yaml

   receivers:
     hostmetrics:
       collection_interval: 10s
       scrapers:
         cpu:
         disk:
         filesystem:
         memory:
         network:

   processors:
     resourcedetection:
       detectors: [system,env,gcp,ec2]
       override: true
     resource/add_environment:
       attributes:
         - action: insert
           value: staging
           key: deployment.environment

   exporters:
     # Traces
     otlp:
       endpoint: "${SPLUNK_GATEWAY_URL}:4317"
       tls:
         insecure: true
     # Metrics + Events + APM correlation calls
     signalfx:
       access_token: "${SPLUNK_ACCESS_TOKEN}"
       api_url: "http://${SPLUNK_GATEWAY_URL}:6060"
       ingest_url: "http://${SPLUNK_GATEWAY_URL}:9943"

   service:
     extensions: [health_check, http_forwarder, zpages]
     pipelines:
       traces:
         receivers: [jaeger, zipkin]
         processors: [memory_limiter, batch, resourcedetection, resource/add_environment]
         exporters: [otlp, signalfx]
       metrics:
         receivers: [hostmetrics]
         processors: [memory_limiter, batch, resourcedetection]
         exporters: [otlp]

Configure the gateway
-----------------------------------------------------------------

In gateway mode, configure the relevant receivers to match the exporters from the agent. In addition, you need to make the following changes.

``http_forwarder`` extension
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

The ``http_forwarder`` listens on port ``6060`` and sends all the REST API calls directly to Splunk Observability Cloud.

In gateway mode, set the ``egress`` endpoint to the Splunk Observability Cloud SaaS endpoint.

``signalfx`` exporter
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Leave both the ``translation_rules`` and ``exclude_metrics`` settings at their default values, which means you can comment them out or remove them entirely. This ensures that the ``hostmetrics`` aggregations normally performed by the SignalFx exporter on the agent are performed by the SignalFx exporter on the gateway instead.

Example
-----------------------------------------------------------------

Here are the relevant config snippets from each section:

.. code-block:: yaml

   extensions:
     http_forwarder:
       egress:
         endpoint: "https://api.${SPLUNK_REALM}.signalfx.com"

   receivers:
     otlp:
       protocols:
         grpc:
         http:
     signalfx:

   exporters:
     # Traces
     sapm:
       access_token: "${SPLUNK_ACCESS_TOKEN}"
       endpoint: "https://ingest.${SPLUNK_REALM}.signalfx.com/v2/trace"
     # Metrics + Events
     signalfx:
       access_token: "${SPLUNK_ACCESS_TOKEN}"
       realm: "${SPLUNK_REALM}"

   service:
     extensions: [http_forwarder]
     pipelines:
       traces:
         receivers: [otlp]
         processors:
           - memory_limiter
           - batch
         exporters: [sapm]
       metrics:
         receivers: [otlp]
         processors: [memory_limiter, batch]
         exporters: [signalfx]

Use the SignalFx exporter in both Collector modes
============================================================================================================================

Alternatively, if you want to use the SignalFx exporter for metrics in both host monitoring (agent) and data forwarding (gateway) modes, you need to disable the aggregation at the gateway. To do so, set ``translation_rules`` and ``exclude_metrics`` to empty lists.

Example
-----------------------------------------------------------------

Configure the Collector in data forwarding (gateway) mode as follows:

.. code-block:: yaml

   exporters:
     # Traces
     sapm:
       access_token: "${SPLUNK_ACCESS_TOKEN}"
       endpoint: "https://ingest.${SPLUNK_REALM}.signalfx.com/v2/trace"
     # Metrics + Events
     signalfx:
       access_token: "${SPLUNK_ACCESS_TOKEN}"
       realm: "${SPLUNK_REALM}"
       translation_rules: []
       exclude_metrics: []

   service:
     extensions: [http_forwarder]
     pipelines:
       traces:
         receivers: [otlp]
         processors:
           - memory_limiter
           - batch
         exporters: [sapm]
       metrics:
         receivers: [signalfx]
         processors: [memory_limiter, batch]
         exporters: [signalfx]

12 changes: 8 additions & 4 deletions metrics-and-metadata/relatedcontent.rst
@@ -1,12 +1,11 @@
.. _get-started-relatedcontent:
.. _get-started-enablerelatedcontent:

*****************************************************************
Related Content in Splunk Observability Cloud
*****************************************************************

.. meta::
:description: Ensure metadata keys are correct to enable full Related Content functionality.
:description: Related Content functionality: introduction, requirements, how to use.

The Related Content feature automatically correlates and presents data between different views within Splunk Observability Cloud.

@@ -71,15 +70,20 @@ The following table describes when and where in Splunk Observability Cloud you c

.. _relatedcontent-collector:

Related Content and the Splunk Distribution of the OpenTelemetry Collector metadata compatibility
Use the Splunk Distribution of the OpenTelemetry Collector to enable Related Content
==========================================================================================================

Splunk Observability Cloud uses OpenTelemetry to correlate telemetry types. To enable this ability, your telemetry field names or metadata key names must exactly match the metadata key names used by both OpenTelemetry and Splunk Observability Cloud.

When you deploy the Splunk Distribution of the OpenTelemetry Collector with its default configuration to send your telemetry data to Splunk Observability Cloud, your metadata key names are automatically mapped correctly.
When you deploy the Splunk Distribution of the OpenTelemetry Collector with its default configuration to send your telemetry data to Splunk Observability Cloud, your metadata key names are automatically mapped correctly. To learn more about the Collector, see :ref:`otel-intro`.

.. caution:: If you don't use the Splunk Distribution of OpenTelemetry Collector, or you use a non-default configuration, your telemetry data might have metadata key names that are not consistent with those used by Splunk Observability Cloud and OpenTelemetry, and Related Content might not work. In that case, you must change your metadata key names.

Configure the Collector to enable APM Related Content
-----------------------------------------------------------------

The APM service dashboards include charts that indicate the health of the underlying infrastructure. The default configuration of the Splunk Distribution of the OpenTelemetry Collector automatically configures this for you, but if you're using a custom configuration, read :ref:`relatedcontent-collector-apm`.

Metadata compatibility example
-----------------------------------------------------------------
