diff --git a/_includes/admin/roles_data_configuration.rst b/_includes/admin/roles_data_configuration.rst
index f884f6bc4..0623adcc0 100644
--- a/_includes/admin/roles_data_configuration.rst
+++ b/_includes/admin/roles_data_configuration.rst
@@ -19,11 +19,6 @@
- No
- No
- * - :strong:`View Incident Managent`
- - Yes
- - No
- - No
- - No
* - :strong:`View APM MetricSets`
- Yes
diff --git a/_includes/admin/roles_data_configuration1.rst b/_includes/admin/roles_data_configuration1.rst
index 7e91422fb..339c1eb50 100644
--- a/_includes/admin/roles_data_configuration1.rst
+++ b/_includes/admin/roles_data_configuration1.rst
@@ -18,11 +18,6 @@
- Yes
- * - :strong:`View Incident Managent`
- - Yes
- - No
-
-
* - :strong:`View APM MetricSets`
- Yes
- Yes
diff --git a/data-visualization/dashboards/dashboards-list.rst b/data-visualization/dashboards/dashboards-list.rst
new file mode 100644
index 000000000..127980a4e
--- /dev/null
+++ b/data-visualization/dashboards/dashboards-list.rst
@@ -0,0 +1,25 @@
+.. _dashboards-list-imm:
+
+*******************************************************
+Dashboards available
+*******************************************************
+
+.. meta::
+ :description: List of built-in dashboards available to you
+
+Dashboards are groupings of charts and visualizations of metrics. Both Navigators and Dashboard groups contain multiple dashboards. To learn more about where dashboards fit in the Infrastructure Monitoring hierarchy, see :ref:`get-started-infrastructure`.
+
+
+List of built-in dashboards
+-----------------------------------
+.. raw:: html
+
+
+
+
+
+
+
+
+
+
diff --git a/data-visualization/dashboards/dashboards.rst b/data-visualization/dashboards/dashboards.rst
index cab4aea05..501a7d880 100644
--- a/data-visualization/dashboards/dashboards.rst
+++ b/data-visualization/dashboards/dashboards.rst
@@ -21,12 +21,12 @@ Dashboards in Splunk Observability Cloud
Best practices for creating dashboards
dashboards-import-export
Share, clone, and mirror dashboards
-
+ Dashboards available
-Dashboards are groupings of charts and visualizations of metrics. Well-designed dashboards can provide useful and actionable insight into your system at a glance. Dashboards can be complex or contain just a few charts that drill down only into the data you want to see.
+Dashboards are groupings of charts and visualizations of metrics. Well-designed dashboards provide useful and actionable insight into your system at a glance. Dashboards can be complex or contain just a few charts that drill down only into the data you want to see.
-Continue with the following sections to learn how to use, create, and modify dashboards to suit your requirements.
+Continue with the following topics to learn how to use, create, and modify dashboards to suit your requirements.
- :ref:`dashboard-basics`
- :ref:`built-in-dashboards`
@@ -39,5 +39,6 @@ Continue with the following sections to learn how to use, create, and modify das
- :ref:`dashboards-best-practices`
- :ref:`dashboards-import-export`
- :ref:`dashboard-share-clone-mirror`
+- :ref:`dashboards-list-imm`
diff --git a/gdi/get-data-in/application/java/instrumentation/instrument-java-application.rst b/gdi/get-data-in/application/java/instrumentation/instrument-java-application.rst
index 62c83abb1..fc9d65875 100644
--- a/gdi/get-data-in/application/java/instrumentation/instrument-java-application.rst
+++ b/gdi/get-data-in/application/java/instrumentation/instrument-java-application.rst
@@ -245,7 +245,7 @@ The following example shows how to update a deployment to expose environment var
- name: OTEL_RESOURCE_ATTRIBUTES
value: "deployment.environment="
-.. note:: You can also deploy instrumentation using the Kubernetes Operator. See :ref:`auto-instrumentation-operator`.
+.. note:: You can also deploy instrumentation using the Kubernetes Operator. See :ref:`auto-instrumentation-java-k8s`.
.. _java-agent-cloudfoundry:
diff --git a/gdi/opentelemetry/auto-instrumentation/auto-instrumentation-dotnet.rst b/gdi/opentelemetry/auto-instrumentation/auto-instrumentation-dotnet.rst
index b26b002bd..4e4b2d139 100644
--- a/gdi/opentelemetry/auto-instrumentation/auto-instrumentation-dotnet.rst
+++ b/gdi/opentelemetry/auto-instrumentation/auto-instrumentation-dotnet.rst
@@ -1,4 +1,4 @@
-.. include:: /_includes/gdi/zero-config-preview-header.rst
+
.. _auto-instrumentation-dotnet:
diff --git a/gdi/opentelemetry/auto-instrumentation/auto-instrumentation-java-k8s.rst b/gdi/opentelemetry/auto-instrumentation/auto-instrumentation-java-k8s.rst
index e02308347..407f9aefd 100644
--- a/gdi/opentelemetry/auto-instrumentation/auto-instrumentation-java-k8s.rst
+++ b/gdi/opentelemetry/auto-instrumentation/auto-instrumentation-java-k8s.rst
@@ -1,179 +1,170 @@
-.. include:: /_includes/gdi/zero-config-preview-header.rst
-
.. _auto-instrumentation-java-k8s:
-*****************************************************************************
-Zero Configuration Auto Instrumentation for Java Applications on Kubernetes
-*****************************************************************************
+************************************************************************************
+Zero Configuration Automatic Instrumentation for Kubernetes Java applications
+************************************************************************************
.. meta::
- :description: How to activate zero configuration automatic instrumentation for Kubernetes Java applications and thus collect and send traces to Splunk Application Performance Monitoring (APM) without altering your code.
+ :description: Use the Collector with the upstream Kubernetes Operator for automatic instrumentation to easily add observability code to your application, enabling it to produce telemetry data.
-Zero Configuration Auto Instrumentation for Java activates automatic instrumentation for Kubernetes Java applications. When you activate automatic instrumentation, you only have to restart any applications that are already running.
+You can use the OTel Collector with an upstream Operator in a Kubernetes environment to automatically instrument your Java applications.
-.. _zero-config-k8s-prereqs:
+Requirements
+================================================================
-Prerequisites
-====================================
+Zero Config Auto Instrumentation for Java requires the following components:
-.. include:: /_includes/gdi/zero-conf-reqs.rst
+* The :ref:`Splunk OTel Collector chart `: It deploys the Collector and related resources, including the OpenTelemetry Operator.
+* The OpenTelemetry Operator, which manages auto-instrumentation of Kubernetes applications. See more in the :new-page:`OpenTelemetry GitHub repo `.
+* A Kubernetes instrumentation object ``opentelemetry.io/v1alpha1``, which configures auto-instrumentation settings for applications.
-- Install :ref:`the Splunk OpenTelemetry Collector Kubernetes Operator` on a :new-page:`compatible version of Kubernetes `.
+1. Set up the environment for instrumentation
+------------------------------------------------------------
-.. _enable-zero-conf-java-k8s:
+Create a namespace for your Java applications and deploy your Java applications to that namespace.
-Activate automatic instrumentation of Java applications on Kubernetes
-===============================================================================
+.. code-block:: bash
-Before deployment, you can activate automatic instrumentation for a Kubernetes Deployment or pod by adding the ``otel.splunk.com/inject-java`` annotation.
+ kubectl create namespace
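+
+For example, a minimal sketch that creates a hypothetical ``java-apps`` namespace and deploys an existing application manifest, ``my-java-app.yaml``, into it (both names are placeholders):
+
+.. code-block:: bash
+
+ # Create the namespace and deploy the Java application into it.
+ kubectl create namespace java-apps
+ kubectl apply -f my-java-app.yaml -n java-apps
+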
-When you activate instrumentation, the Collector operator injects the Splunk OTel Java agent into Java applications to capture telemetry data.
+2. Deploy the Helm Chart with the Operator enabled
+------------------------------------------------------------
-To activate automatic instrumentation, add this annotation to the ``spec`` for a deployment or pod: ``otel.splunk.com/inject-java: "true"``. If you add the annotation to a pod, restarting the pod removes the annotation.
+Deploy the :ref:`Collector for Kubernetes with the Helm chart ` with ``operator.enabled=true`` to include the Operator in the deployment.
-You can also activate automatic instrumentation on a running workload.
+Ingest traces
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-.. _enable-zero-conf-java-yaml:
+To properly ingest trace telemetry data, the ``environment`` attribute must be present on the exported traces. There are two ways to set this attribute:
-Activate or deactivate automatic instrumentation before runtime
-----------------------------------------------------------------
+* Use the optional ``environment`` configuration in ``values.yaml``.
+* Use the Instrumentation spec with the environment variable ``OTEL_RESOURCE_ATTRIBUTES``.
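+
+For example, for the first option you can set the chart's top-level ``environment`` value in your values file. The install commands later on this page show the equivalent ``--set environment=dev`` flag. A minimal sketch, assuming a ``my_values.yaml`` file and a ``prod`` environment name:
+
+.. code-block:: bash
+
+ # Add the environment value to the values file so exported traces carry the attribute.
+ echo 'environment: prod' >> my_values.yaml
+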
-If the deployment is not deployed, add the ``otel.splunk.com/inject-java`` annotation to the application deployment YAML file.
+Add certificates
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-For example, given the following deployment YAML:
+The Operator requires certain TLS certificates to work. If a certificate manager (or any other TLS certificate source) is not available in the cluster, you need to deploy cert-manager using ``certmanager.enabled=true``. You can use the following commands to run these steps.
.. code-block:: yaml
+ # Check if cert-manager is already installed, don't deploy a second cert-manager.
+ kubectl get pods -l app=cert-manager --all-namespaces
- apiVersion: apps/v1
- kind: Deployment
- metadata:
- name: my-java-app
- spec:
- template:
- spec:
- containers:
- - name: my-java-app
- image: my-java-app:latest
-
-Activate auto instrumentation by adding ``otel.splunk.com/inject-java: "true"`` to the ``spec``:
-
-.. code-block:: yaml
-
-
- apiVersion: apps/v1
- kind: Deployment
- metadata:
- name: my-java-app
- spec:
- template:
- metadata:
- annotations:
- otel.splunk.com/inject-java: "true"
- spec:
- containers:
- - name: my-java-app
- image: my-java-app:latest
-
-The Collector operator activates automatic instrumentation for any Java applications in the deployment.
-
-To deactivate automatic instrumentation, remove the annotation or set its value to ``false``.
-
-.. _enable-zero-conf-java-patch:
-
-Activate or deactivate automatic instrumentation on a running workload
-------------------------------------------------------------------------
-
-If the application is already running, patch the deployment using ``kubectl patch`` to activate instrumentation.
-
-.. caution::
-
- Patching a deployment restarts the pods in the deployment.
-
-
-Use the following snippet as an example. Replace ```` with your deployment's name.
-
-.. code-block:: bash
-
- kubectl patch deployment -p '{"spec": {"template":{"metadata":{"annotations":{"otel.splunk.com/inject-java":"true"}}}} }'
-
-To deactivate automatic instrumentation, run the same command but change the value of the annotation to ``false``:
-
-.. code-block:: bash
-
- kubectl patch deployment -p '{"spec": {"template":{"metadata":{"annotations":{"otel.splunk.com/inject-java":"false"}}}} }'
+ # If cert-manager is not deployed.
+ helm install splunk-otel-collector -f ./my_values.yaml --set certmanager.enabled=true,operator.enabled=true,environment=dev -n monitoring helm-charts/splunk-otel-collector
+ # If cert-manager is already deployed.
+ helm install splunk-otel-collector -f ./my_values.yaml --set operator.enabled=true,environment=dev -n monitoring helm-charts/splunk-otel-collector
-.. _k8s-zero-conf-java-verify:
+3. Verify all the OpenTelemetry resources are deployed successfully
+---------------------------------------------------------------------------
-Check the status of automatic instrumentation
--------------------------------------------------
+Resources include the Collector, the Operator, the webhook, and the instrumentation.
-When you successfully activate instrumentation for a deployment, the metadata for every pod in the deployment includes the annotation ``otel.splunk.com/injection-status:success``.
+Run the following commands to verify that the resources are deployed correctly:
-Use the following command to check for the ``injection-status`` annotation. Replace ```` with the name of your pod.
-
-.. code-block:: bash
-
- kubectl get pod -o yaml | grep inject
-
-The command result is similar to the following:
+.. code-block:: bash
+
+ kubectl get pods -n monitoring
+ # NAMESPACE NAME READY STATUS
+ # monitoring splunk-otel-collector-agent-lfthw 2/2 Running
+ # monitoring splunk-otel-collector-cert-manager-6b9fb8b95f-2lmv4 1/1 Running
+ # monitoring splunk-otel-collector-cert-manager-cainjector-6d65b6d4c-khcrc 1/1 Running
+ # monitoring splunk-otel-collector-cert-manager-webhook-87b7ffffc-xp4sr 1/1 Running
+ # monitoring splunk-otel-collector-k8s-cluster-receiver-856f5fbcf9-pqkwg 1/1 Running
+ # monitoring splunk-otel-collector-opentelemetry-operator-56c4ddb4db-zcjgh 2/2 Running
+
+ kubectl get mutatingwebhookconfiguration.admissionregistration.k8s.io -n monitoring
+ # NAME WEBHOOKS AGE
+ # splunk-otel-collector-cert-manager-webhook 1 14m
+ # splunk-otel-collector-opentelemetry-operator-mutation 3 14m
+
+ kubectl get otelinst -n {target_application_namespace}
+ # NAME AGE ENDPOINT
+ # splunk-instrumentation 3m http://$(SPLUNK_OTEL_AGENT):4317
+
+4. Set annotations to instrument Java applications
+------------------------------------------------------------
+
+Activate and deactivate auto instrumentation for Java
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+To activate auto instrumentation for your Java deployment, run the following command:
.. code-block:: bash
- otel.splunk.com/inject-java: "true"
- otel.splunk.com/injection-status: success
-
-
-If the ``injection-status`` annotation is not present or is not set to ``success``, auto instrumentation is not activated. See the troubleshooting section for next steps.
-
-If the ``injection-status`` annotation is set to ``success``, you have activated instrumentation correctly. You can :ref:`verify-apm-data` or :ref:`optionally configure instrumentation settings`.
-
-.. _configure-java-zeroconf-k8s:
-
-Optionally configure instrumentation
------------------------------------------
+ kubectl patch deployment -n -p '{"spec": {"template":{"metadata":{"annotations":{"instrumentation.opentelemetry.io/inject-java":"/splunk-otel-collector"}}}} }'
-The default settings for auto instrumentation are sufficient for most cases. You can add advanced configuration like activating custom sampling and including custom data in the reported spans with environment variables and Java system properties.
+.. note::
+ * The deployment pods restart after you run this command.
+ * If the chart is not installed in the "default" namespace, modify the annotation value to be "{chart_namespace}/splunk-otel-collector".
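+
+Instead of patching each deployment, you can also annotate an entire namespace so that every pod created in it gets instrumented. A sketch, assuming a hypothetical ``java-apps`` namespace and a chart installed in the ``monitoring`` namespace:
+
+.. code-block:: bash
+
+ # Annotate the namespace; pods created or restarted in it get the Java agent injected.
+ kubectl annotate namespace java-apps instrumentation.opentelemetry.io/inject-java="monitoring/splunk-otel-collector"
+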
-For example, if you want every span to include the key-value pair ``build.id=feb2023_v2``, set the ``OTEL_RESOURCE_ATTRIBUTES`` environment variable.
-
- .. code-block:: bash
-
- kubectl set env deployment/ OTEL_RESOURCE_ATTRIBUTES=build.id=feb2023_v2
-
-See :ref:`advanced-java-otel-configuration` for the full list of supported environment variables.
-
-.. include:: /_includes/gdi/next-steps.rst
-
-.. _k8s-zero-conf-troubleshooting:
-
-Troubleshooting
-=======================
-
-If you activate auto instrumentation and you do not see any telemetry data in Splunk Observability Cloud APM, try the following steps:
-
-- Check the Collector operator logs. Look for the pods in the ``splunk-otel-operator-system`` namespace, and then examine their logs:
+To deactivate auto instrumentation for your Java deployment, run the following command:
.. code-block:: bash
- kubectl get pods --namespace=splunk-otel-operator-system
-
- NAME READY STATUS RESTARTS AGE
- splunk-otel-agent-7cspj 1/1 Running 0 31h
- splunk-otel-agent-gkmts 1/1 Running 0 31h
- splunk-otel-agent-xbnpm 1/1 Running 0 31h
- splunk-otel-cluster-receiver-8cd9874c8-6jlz6 1/1 Running 0 31h
- splunk-otel-operator-controller-manager-8455c8bc7-m8f24 1/1 Running 0 31h
+ kubectl patch deployment -n --type=json -p='[{"op": "remove", "path": "/spec/template/metadata/annotations/instrumentation.opentelemetry.io~1inject-java"}]'
- kubectl logs --namespace=splunk-otel-operator-system splunk-otel-operator-controller-manager-8455c8bc7-m8f24
+Verify instrumentation
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-Run this command to see the logs for one of the pods:
+To verify that the instrumentation was successful, run the following command on an individual pod. Your instrumented pod should contain an init container named ``opentelemetry-auto-instrumentation``, and the target application container should have several ``OTEL_*`` environment variables similar to those in the following example output.
.. code-block:: bash
- kubectl logs --namespace=splunk-otel-operator-system
-
-- You can also follow the :ref:`steps to troubleshoot the Java agent`.
-
-.. include:: /_includes/troubleshooting-components.rst
\ No newline at end of file
+ kubectl describe pod -n otel-demo -l app.kubernetes.io/name=opentelemetry-demo-frontend
+ # Name: opentelemetry-demo-frontend-57488c7b9c-4qbfb
+ # Namespace: otel-demo
+ # Annotations: instrumentation.opentelemetry.io/inject-nodejs: default/splunk-otel-collector
+ # Status: Running
+ # Init Containers:
+ # opentelemetry-auto-instrumentation:
+ # Command:
+ # cp
+ # -a
+ # /autoinstrumentation/.
+ # /otel-auto-instrumentation/
+ # State: Terminated
+ # Reason: Completed
+ # Exit Code: 0
+ # Containers:
+ # frontend:
+ # State: Running
+ # Ready: True
+ # Environment:
+ # FRONTEND_PORT: 8080
+ # FRONTEND_ADDR: :8080
+ # AD_SERVICE_ADDR: opentelemetry-demo-adservice:8080
+ # CART_SERVICE_ADDR: opentelemetry-demo-cartservice:8080
+ # CHECKOUT_SERVICE_ADDR: opentelemetry-demo-checkoutservice:8080
+ # CURRENCY_SERVICE_ADDR: opentelemetry-demo-currencyservice:8080
+ # PRODUCT_CATALOG_SERVICE_ADDR: opentelemetry-demo-productcatalogservice:8080
+ # RECOMMENDATION_SERVICE_ADDR: opentelemetry-demo-recommendationservice:8080
+ # SHIPPING_SERVICE_ADDR: opentelemetry-demo-shippingservice:8080
+ # WEB_OTEL_SERVICE_NAME: frontend-web
+ # PUBLIC_OTEL_EXPORTER_OTLP_TRACES_ENDPOINT: http://localhost:8080/otlp-http/v1/traces
+ # NODE_OPTIONS: --require /otel-auto-instrumentation/autoinstrumentation.js
+ # SPLUNK_OTEL_AGENT: (v1:status.hostIP)
+ # OTEL_SERVICE_NAME: opentelemetry-demo-frontend
+ # OTEL_EXPORTER_OTLP_ENDPOINT: http://$(SPLUNK_OTEL_AGENT):4317
+ # OTEL_RESOURCE_ATTRIBUTES_POD_NAME: opentelemetry-demo-frontend-57488c7b9c-4qbfb (v1:metadata.name)
+ # OTEL_RESOURCE_ATTRIBUTES_NODE_NAME: (v1:spec.nodeName)
+ # OTEL_PROPAGATORS: tracecontext,baggage,b3
+ # OTEL_RESOURCE_ATTRIBUTES: splunk.zc.method=autoinstrumentation-nodejs:0.41.1,k8s.container.name=frontend,k8s.deployment.name=opentelemetry-demo-frontend,k8s.namespace.name=otel-demo,k8s.node.name=$(OTEL_RESOURCE_ATTRIBUTES_NODE_NAME),k8s.pod.name=$(OTEL_RESOURCE_ATTRIBUTES_POD_NAME),k8s.replicaset.name=opentelemetry-demo-frontend-57488c7b9c,service.version=1.5.0-frontend
+ # Mounts:
+ # /otel-auto-instrumentation from opentelemetry-auto-instrumentation (rw)
+ # Volumes:
+ # opentelemetry-auto-instrumentation:
+ # Type: EmptyDir (a temporary directory that shares a pod's lifetime)
+
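+To narrow the output to the injection-related pieces, a quick filter such as the following may help. The namespace and label selector come from the example above; adjust them to your application:
+
+.. code-block:: bash
+
+ # Show only the init container name and the injected OTEL_* settings.
+ kubectl describe pod -n otel-demo -l app.kubernetes.io/name=opentelemetry-demo-frontend | grep -E 'opentelemetry-auto-instrumentation|OTEL_'
+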
+5. View results at Splunk Observability APM
+------------------------------------------------------------
+
+Allow the Operator to do the work. The Operator intercepts and alters the Kubernetes API requests that create and update annotated pods, instruments the application containers inside those pods, and trace and metric data start populating the :ref:`APM dashboard `.
+
+Learn more
+===========================================================================
+
+* To learn more about how Zero Config Auto Instrumentation works in Splunk Observability Cloud, see :new-page:`more detailed documentation in GitHub `.
+* Refer to :new-page:`the operator pattern in the Kubernetes documentation ` for more information.
diff --git a/gdi/opentelemetry/auto-instrumentation/auto-instrumentation-java-linux.rst b/gdi/opentelemetry/auto-instrumentation/auto-instrumentation-java-linux.rst
index 02008d7e9..9f38b8567 100644
--- a/gdi/opentelemetry/auto-instrumentation/auto-instrumentation-java-linux.rst
+++ b/gdi/opentelemetry/auto-instrumentation/auto-instrumentation-java-linux.rst
@@ -1,4 +1,4 @@
-.. include:: /_includes/gdi/zero-config-preview-header.rst
+
.. _auto-instrumentation-java-linux:
@@ -30,37 +30,79 @@ You can install the ``splunk-otel-auto-instrumentation`` package in the followin
.. tab:: Installer script
- To install the package, run the Collector installer script with the ``--with-instrumentation`` option. The installer script will install the Collector and the Java agent from the Splunk Distribution of OpenTelemetry Java. The Java agent is then loaded automatically when a Java application starts on the local machine.
+ Using the installer script, you can install the auto instrumentation package for Java and activate auto instrumentation either for all supported Java applications on the host (the system-wide method) or only for Java applications running as ``systemd`` services.
+
+ .. note:: By default, auto instrumentation is activated for both Java and Node.js when using the installer script. To deactivate auto instrumentation for Node.js, add the ``--without-instrumentation-sdk node`` or ``--with-instrumentation-sdk java`` option in the installer script command.
+
+ .. tabs::
+
+ .. tab:: System-wide
+
+ Run the installer script with the ``--with-instrumentation`` option, as shown in the following example. Replace ```` and ```` with your Splunk Observability Cloud realm and token, respectively.
- Run the installer script with the ``--with-instrumentation`` option, as shown in the following example. Replace ```` and ```` with your Splunk Observability Cloud realm and token, respectively.
+ .. code-block:: bash
- .. code-block:: bash
+ curl -sSL https://dl.signalfx.com/splunk-otel-collector.sh > /tmp/splunk-otel-collector.sh && \
+ sudo sh /tmp/splunk-otel-collector.sh --with-instrumentation --realm --
- curl -sSL https://dl.signalfx.com/splunk-otel-collector.sh > /tmp/splunk-otel-collector.sh && \
- sudo sh /tmp/splunk-otel-collector.sh --with-instrumentation --realm --
+ .. note:: If you have a Log Observer entitlement or wish to collect logs for the target host, make sure Fluentd is installed and enabled in your Collector instance by specifying the ``--with-fluentd`` option.
- .. note:: If you have a Log Observer entitlement or wish to collect logs for the target host, make sure Fluentd is installed and enabled in your Collector instance.
+ The system-wide auto instrumentation method automatically adds environment variables to ``/etc/splunk/zeroconfig/java.conf``.
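+
+ To confirm what the installer wrote, you can inspect that file. The exact contents vary by version, but expect entries such as ``JAVA_TOOL_OPTIONS`` pointing at the Splunk Java agent:
+
+ .. code-block:: bash
+
+ # Review the environment variables added by the system-wide method.
+ cat /etc/splunk/zeroconfig/java.conf
+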
- To automatically define the optional ``deployment.environment`` resource attribute at installation time, run the installer script with the ``--deployment-environment `` option. Replace ```` with the desired attribute value, for example, ``prod``, as shown in the following example:
+ To automatically define the optional ``deployment.environment`` resource attribute at installation time, run the installer script with the ``--deployment-environment `` option. Replace ```` with the desired attribute value, for example, ``prod``, as shown in the following example:
- .. code-block:: bash
- :emphasize-lines: 2
+ .. code-block:: bash
+ :emphasize-lines: 2
- curl -sSL https://dl.signalfx.com/splunk-otel-collector.sh > /tmp/splunk-otel-collector.sh && \
- sudo sh /tmp/splunk-otel-collector.sh --with-instrumentation --deployment-environment prod \
- --realm --
+ curl -sSL https://dl.signalfx.com/splunk-otel-collector.sh > /tmp/splunk-otel-collector.sh && \
+ sudo sh /tmp/splunk-otel-collector.sh --with-instrumentation --deployment-environment prod \
+ --realm --
- You can activate AlwaysOn Profiling for CPU and memory, as well as metrics, using additional options, as in the following example:
+ You can activate AlwaysOn Profiling for CPU and memory, as well as metrics, using additional options, as in the following example:
- .. code-block:: bash
- :emphasize-lines: 4
+ .. code-block:: bash
+ :emphasize-lines: 4
- curl -sSL https://dl.signalfx.com/splunk-otel-collector.sh > /tmp/splunk-otel-collector.sh && \
- sudo sh /tmp/splunk-otel-collector.sh --with-instrumentation --deployment-environment prod \
- --realm -- \
- --enable-profiler --enable-profiler-memory --enable-metrics
+ curl -sSL https://dl.signalfx.com/splunk-otel-collector.sh > /tmp/splunk-otel-collector.sh && \
+ sudo sh /tmp/splunk-otel-collector.sh --with-instrumentation --deployment-environment prod \
+ --realm -- \
+ --enable-profiler --enable-profiler-memory --enable-metrics
+
+ Next, ensure the service is running and restart your application. See :ref:`verify-install` and :ref:`start-restart-java-apps`.
- Next, ensure the service is running and restart your application. See :ref:`verify-install` and :ref:`start-restart-java-apps`.
+ .. tab:: ``systemd``
+
+ Run the installer script with the ``--with-systemd-instrumentation`` option, as shown in the following example. Replace ```` and ```` with your Splunk Observability Cloud realm and token, respectively.
+
+ .. code-block:: bash
+
+ curl -sSL https://dl.signalfx.com/splunk-otel-collector.sh > /tmp/splunk-otel-collector.sh && \
+ sudo sh /tmp/splunk-otel-collector.sh --with-systemd-instrumentation --realm --
+
+ The ``systemd`` instrumentation automatically adds environment variables to ``/usr/lib/systemd/system.conf.d/00-splunk-otel-auto-instrumentation.conf``.
+
+ .. note:: If you have a Log Observer entitlement or wish to collect logs for the target host, make sure Fluentd is installed and enabled in your Collector instance by specifying the ``--with-fluentd`` option.
+
+ To automatically define the optional ``deployment.environment`` resource attribute at installation time, run the installer script with the ``--deployment-environment `` option. Replace ```` with the desired attribute value, for example, ``prod``, as shown in the following example:
+
+ .. code-block:: bash
+ :emphasize-lines: 2
+
+ curl -sSL https://dl.signalfx.com/splunk-otel-collector.sh > /tmp/splunk-otel-collector.sh && \
+ sudo sh /tmp/splunk-otel-collector.sh --with-systemd-instrumentation --deployment-environment prod \
+ --realm --
+
+ You can activate AlwaysOn Profiling for CPU and memory, as well as metrics, using additional options, as in the following example:
+
+ .. code-block:: bash
+ :emphasize-lines: 4
+
+ curl -sSL https://dl.signalfx.com/splunk-otel-collector.sh > /tmp/splunk-otel-collector.sh && \
+ sudo sh /tmp/splunk-otel-collector.sh --with-systemd-instrumentation --deployment-environment prod \
+ --realm -- \
+ --enable-profiler --enable-profiler-memory --enable-metrics
+
+ Next, ensure the service is running and restart your application. See :ref:`verify-install` and :ref:`start-restart-java-apps`.
.. tab:: Linux packages (deb, rpm)
@@ -75,11 +117,11 @@ You can install the ``splunk-otel-auto-instrumentation`` package in the followin
.. tabs::
.. code-tab:: bash Debian
-
+
sudo dpkg -i
-
+
.. code-tab:: bash RPM
-
+
sudo rpm -ivh
3. Edit the ``/etc/otel/collector/splunk-otel-collector.conf`` file to set the ``SPLUNK_ACCESS_TOKEN`` and ``SPLUNK_REALM`` variables to the values you got earlier. If the file does not exist, use the provided sample at ``/etc/otel/collector/splunk-otel-collector.conf.example`` as a starting point.
@@ -93,7 +135,7 @@ You can install the ``splunk-otel-auto-instrumentation`` package in the followin
.. code-block:: bash
- sudo systemctl start splunk-otel-collector
+ sudo systemctl start splunk-otel-collector
5. :ref:`verify-install`.
6. :ref:`start-restart-java-apps`.
@@ -155,7 +197,7 @@ The default settings for zero config autoinstrumentation are sufficient for most
The installation package contains the following artifacts:
-- The configuration file at ``/usr/lib/splunk-instrumentation/instrumentation.conf``
+- The configuration file at ``/etc/splunk/zeroconfig/java.conf``. This file applies only to the system-wide method.
- The :new-page:`Java Instrumentation Agent ` at ``/usr/lib/splunk-instrumentation/splunk-otel-javaagent.jar``
- The shared instrumentation library at ``/usr/lib/splunk-instrumentation/libsplunk.so```
diff --git a/gdi/opentelemetry/auto-instrumentation/auto-instrumentation-java-operator.rst b/gdi/opentelemetry/auto-instrumentation/auto-instrumentation-java-operator.rst
index 701ea35f2..b6781cb9c 100644
--- a/gdi/opentelemetry/auto-instrumentation/auto-instrumentation-java-operator.rst
+++ b/gdi/opentelemetry/auto-instrumentation/auto-instrumentation-java-operator.rst
@@ -37,7 +37,7 @@ Arnau creates the ``spring-petclinic`` namespace and deploys the related Java ap
2. Deploy and configure the Collector
======================================================================
-Arnau follows the steps described in :ref:`auto-instrumentation-operator` to set up Auto Intrumentation for their clinic apps.
+Arnau follows the steps described in :ref:`auto-instrumentation-java-k8s` to set up Auto Instrumentation for their clinic apps.
After completing the deployment, Arnau is able to see the results using :ref:`APM `.
@@ -48,9 +48,4 @@ After completing the deployment, Arnau is able to see the results using :ref:`AP
Summary
======================================================================
-Arnau uses the Collector and the upstream Kubernetes Operator to auto-instrument their Java applications and see the results in APM dashboards.
-
-Learn more
-======================================================================
-
-To install the Operator for Auto Instrumentation, see :ref:`Install the Collector with the Kubernetes Operator `.
\ No newline at end of file
+Arnau uses the Collector and the upstream Kubernetes Operator to auto-instrument their Java applications and see the results in APM dashboards.
\ No newline at end of file
diff --git a/gdi/opentelemetry/auto-instrumentation/auto-instrumentation-java.rst b/gdi/opentelemetry/auto-instrumentation/auto-instrumentation-java.rst
index 6143160aa..7113d03b3 100644
--- a/gdi/opentelemetry/auto-instrumentation/auto-instrumentation-java.rst
+++ b/gdi/opentelemetry/auto-instrumentation/auto-instrumentation-java.rst
@@ -1,4 +1,4 @@
-.. include:: /_includes/gdi/zero-config-preview-header.rst
+
.. _auto-instrumentation-java:
diff --git a/gdi/opentelemetry/auto-instrumentation/auto-instrumentation-nodejs-k8s.rst b/gdi/opentelemetry/auto-instrumentation/auto-instrumentation-nodejs-k8s.rst
new file mode 100644
index 000000000..96a4d5e72
--- /dev/null
+++ b/gdi/opentelemetry/auto-instrumentation/auto-instrumentation-nodejs-k8s.rst
@@ -0,0 +1,176 @@
+.. include:: /_includes/gdi/zero-config-preview-header.rst
+
+.. _auto-instrumentation-nodejs-k8s:
+
+************************************************************************************
+Zero Configuration Automatic Instrumentation for Kubernetes Node.js applications
+************************************************************************************
+
+.. meta::
+ :description: Use the Collector with the upstream Kubernetes Operator for automatic instrumentation to easily add observability code to your application, enabling it to produce telemetry data.
+
+You can use the OTel Collector with an upstream Operator in a Kubernetes environment to automatically instrument your Node.js applications.
+
+.. note::
+ For an end-to-end example of automatically instrumenting a Node.js application, see :new-page:`https://github.com/signalfx/splunk-otel-collector-chart/blob/main/examples/enable-operator-and-auto-instrumentation/otel-demo-nodejs.md`.
+
+Requirements
+================================================================
+
+Zero Config Auto Instrumentation for Node.js requires the following components:
+
+* The :ref:`Splunk OTel Collector chart `: It deploys the Collector and related resources, including the OpenTelemetry Operator.
+* The OpenTelemetry Operator, which manages auto-instrumentation of Kubernetes applications. See more in the :new-page:`OpenTelemetry GitHub repo `.
+* A Kubernetes instrumentation object ``opentelemetry.io/v1alpha1``, which configures auto-instrumentation settings for applications.
+
+1. Set up the environment for instrumentation
+------------------------------------------------------------
+
+Create a namespace for your Node.js applications and deploy your Node.js applications to that namespace.
+
+.. code-block:: bash
+
+ kubectl create namespace
+
+2. Deploy the Helm Chart with the Operator enabled
+------------------------------------------------------------
+
+Deploy the :ref:`Collector for Kubernetes with the Helm chart ` with ``operator.enabled=true`` to include the Operator in the deployment.
+
+Ingest traces
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+To properly ingest trace telemetry data, the ``environment`` attribute must be present on the exported traces. There are two ways to set this attribute:
+
+* Use the optional ``environment`` configuration in ``values.yaml``.
+* Use the Instrumentation spec with the environment variable ``OTEL_RESOURCE_ATTRIBUTES``.
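+
+For the second option, the Instrumentation object created by the chart accepts environment variables in its spec. A hedged sketch using ``kubectl patch``; the instrumentation name, the ``monitoring`` namespace, and the ``dev`` value are placeholders, so check ``kubectl get otelinst --all-namespaces`` for the actual name and namespace:
+
+.. code-block:: bash
+
+ # Set OTEL_RESOURCE_ATTRIBUTES in the Instrumentation spec so injected pods report the environment.
+ kubectl patch otelinst splunk-instrumentation -n monitoring --type=merge \
+   -p '{"spec":{"env":[{"name":"OTEL_RESOURCE_ATTRIBUTES","value":"deployment.environment=dev"}]}}'
+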
+
+Add certificates
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+The Operator requires certain TLS certificates to work. If a certificate manager (or any other TLS certificate source) is not available in the cluster, you need to deploy cert-manager using ``certmanager.enabled=true``. You can use the following commands to run these steps.
+
+.. code-block:: bash
+
+ # Check if cert-manager is already installed, don't deploy a second cert-manager.
+ kubectl get pods -l app=cert-manager --all-namespaces
+
+ # If cert-manager is not deployed.
+ helm install splunk-otel-collector -f ./my_values.yaml --set certmanager.enabled=true,operator.enabled=true,environment=dev -n monitoring helm-charts/splunk-otel-collector
+
+ # If cert-manager is already deployed.
+ helm install splunk-otel-collector -f ./my_values.yaml --set operator.enabled=true,environment=dev -n monitoring helm-charts/splunk-otel-collector
+
+3. Verify all the OpenTelemetry resources are deployed successfully
+---------------------------------------------------------------------------
+
+Resources include the Collector, the Operator, the webhook, and the instrumentation.
+
+Run the following commands to verify that the resources are deployed correctly:
+
+.. code-block:: bash
+
+ kubectl get pods -n monitoring
+ # NAMESPACE NAME READY STATUS
+ # monitoring splunk-otel-collector-agent-lfthw 2/2 Running
+ # monitoring splunk-otel-collector-cert-manager-6b9fb8b95f-2lmv4 1/1 Running
+ # monitoring splunk-otel-collector-cert-manager-cainjector-6d65b6d4c-khcrc 1/1 Running
+ # monitoring splunk-otel-collector-cert-manager-webhook-87b7ffffc-xp4sr 1/1 Running
+ # monitoring splunk-otel-collector-k8s-cluster-receiver-856f5fbcf9-pqkwg 1/1 Running
+ # monitoring splunk-otel-collector-opentelemetry-operator-56c4ddb4db-zcjgh 2/2 Running
+
+ kubectl get mutatingwebhookconfiguration.admissionregistration.k8s.io -n monitoring
+ # NAME WEBHOOKS AGE
+ # splunk-otel-collector-cert-manager-webhook 1 14m
+ # splunk-otel-collector-opentelemetry-operator-mutation 3 14m
+
+ kubectl get otelinst -n {target_application_namespace}
+ # NAME AGE ENDPOINT
+ # splunk-instrumentation 3m http://$(SPLUNK_OTEL_AGENT):4317
+
+4. Set annotations to instrument Node.js applications
+------------------------------------------------------------
+
+Activate and deactivate auto instrumentation for Node.js
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+To activate auto instrumentation for your Node.js deployment, run the following command:
+
+.. code-block:: bash
+
+ kubectl patch deployment -n -p '{"spec": {"template":{"metadata":{"annotations":{"instrumentation.opentelemetry.io/inject-nodejs":"/splunk-otel-collector"}}}} }'
+
+.. note::
+ * The deployment pods restart after you run this command.
+ * If the chart is not installed in the "default" namespace, modify the annotation value to be "{chart_namespace}/splunk-otel-collector".
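+
+To confirm that the annotation landed on the pod template, you can read it back with ``kubectl``. A small sketch, assuming a hypothetical deployment name and namespace:
+
+.. code-block:: bash
+
+ # Print the pod template annotations of the patched deployment.
+ kubectl get deployment my-nodejs-app -n nodejs-apps -o jsonpath='{.spec.template.metadata.annotations}'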
+
+To deactivate auto instrumentation for your Node.js deployment, run the following command:
+
+.. code-block:: bash
+
+ kubectl patch deployment -n --type=json -p='[{"op": "remove", "path": "/spec/template/metadata/annotations/instrumentation.opentelemetry.io~1inject-nodejs"}]'
+
+Verify instrumentation
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+To verify that the instrumentation was successful, run the following command on an individual pod. Your instrumented pod should contain an init container named ``opentelemetry-auto-instrumentation``, and the target application container should have several ``OTEL_*`` environment variables similar to those in the following example output.
+
+.. code-block:: bash
+
+ kubectl describe pod -n otel-demo -l app.kubernetes.io/name=opentelemetry-demo-frontend
+ # Name: opentelemetry-demo-frontend-57488c7b9c-4qbfb
+ # Namespace: otel-demo
+ # Annotations: instrumentation.opentelemetry.io/inject-nodejs: default/splunk-otel-collector
+ # Status: Running
+ # Init Containers:
+ # opentelemetry-auto-instrumentation:
+ # Command:
+ # cp
+ # -a
+ # /autoinstrumentation/.
+ # /otel-auto-instrumentation/
+ # State: Terminated
+ # Reason: Completed
+ # Exit Code: 0
+ # Containers:
+ # frontend:
+ # State: Running
+ # Ready: True
+ # Environment:
+ # FRONTEND_PORT: 8080
+ # FRONTEND_ADDR: :8080
+ # AD_SERVICE_ADDR: opentelemetry-demo-adservice:8080
+ # CART_SERVICE_ADDR: opentelemetry-demo-cartservice:8080
+ # CHECKOUT_SERVICE_ADDR: opentelemetry-demo-checkoutservice:8080
+ # CURRENCY_SERVICE_ADDR: opentelemetry-demo-currencyservice:8080
+ # PRODUCT_CATALOG_SERVICE_ADDR: opentelemetry-demo-productcatalogservice:8080
+ # RECOMMENDATION_SERVICE_ADDR: opentelemetry-demo-recommendationservice:8080
+ # SHIPPING_SERVICE_ADDR: opentelemetry-demo-shippingservice:8080
+ # WEB_OTEL_SERVICE_NAME: frontend-web
+ # PUBLIC_OTEL_EXPORTER_OTLP_TRACES_ENDPOINT: http://localhost:8080/otlp-http/v1/traces
+ # NODE_OPTIONS: --require /otel-auto-instrumentation/autoinstrumentation.js
+ # SPLUNK_OTEL_AGENT: (v1:status.hostIP)
+ # OTEL_SERVICE_NAME: opentelemetry-demo-frontend
+ # OTEL_EXPORTER_OTLP_ENDPOINT: http://$(SPLUNK_OTEL_AGENT):4317
+ # OTEL_RESOURCE_ATTRIBUTES_POD_NAME: opentelemetry-demo-frontend-57488c7b9c-4qbfb (v1:metadata.name)
+ # OTEL_RESOURCE_ATTRIBUTES_NODE_NAME: (v1:spec.nodeName)
+ # OTEL_PROPAGATORS: tracecontext,baggage,b3
+ # OTEL_RESOURCE_ATTRIBUTES: splunk.zc.method=autoinstrumentation-nodejs:0.41.1,k8s.container.name=frontend,k8s.deployment.name=opentelemetry-demo-frontend,k8s.namespace.name=otel-demo,k8s.node.name=$(OTEL_RESOURCE_ATTRIBUTES_NODE_NAME),k8s.pod.name=$(OTEL_RESOURCE_ATTRIBUTES_POD_NAME),k8s.replicaset.name=opentelemetry-demo-frontend-57488c7b9c,service.version=1.5.0-frontend
+ # Mounts:
+ # /otel-auto-instrumentation from opentelemetry-auto-instrumentation (rw)
+ # Volumes:
+ # opentelemetry-auto-instrumentation:
+ # Type: EmptyDir (a temporary directory that shares a pod's lifetime)
+
+5. View results at Splunk Observability APM
+------------------------------------------------------------
+
+Allow the Operator to do the work. The Operator intercepts and alters the Kubernetes API requests that create and update annotated pods, instruments the application containers inside those pods, and trace and metric data start populating the :ref:`APM dashboard `.
+
+Learn more
+===========================================================================
+
+* To learn more about how Zero Config Auto Instrumentation works in Splunk Observability Cloud, see :new-page:`more detailed documentation in GitHub `.
+* Refer to :new-page:`the operator pattern in the Kubernetes documentation ` for more information.
+
diff --git a/gdi/opentelemetry/auto-instrumentation/auto-instrumentation-nodejs-linux.rst b/gdi/opentelemetry/auto-instrumentation/auto-instrumentation-nodejs-linux.rst
new file mode 100644
index 000000000..94d9d1f01
--- /dev/null
+++ b/gdi/opentelemetry/auto-instrumentation/auto-instrumentation-nodejs-linux.rst
@@ -0,0 +1,149 @@
+.. include:: /_includes/gdi/zero-config-preview-header.rst
+
+.. _auto-instrumentation-nodejs-linux:
+
+*****************************************************************************
+Zero Configuration Auto Instrumentation for Linux Node.js applications
+*****************************************************************************
+
+.. meta::
+ :description: How to activate zero configuration automatic instrumentation for Linux Node.js applications, allowing you to collect and send traces to Splunk Application Performance Monitoring (APM) without altering your code.
+
+Zero Configuration Auto Instrumentation activates automatic instrumentation for Node.js applications running on Linux. When you activate automatic instrumentation, you only have to restart any applications that are already running.
+
+.. _zero-config-js-linux-prereqs:
+
+Prerequisites
+=======================================
+
+- Automatic instrumentation is only available for applications using supported Node.js libraries. See :ref:`nodes-requirements`. If your application isn't supported, manually instrument your service to generate trace data. See :ref:`nodejs-manual-instrumentation`.
+
+- :ref:`nodejs-otel-requirements`.
+
+- Your Splunk Observability Cloud realm and access token.
+
+ - To get an access token, see :ref:`admin-api-access-tokens`.
+
+ - To find the realm name of your account, open the navigation menu in Splunk Observability Cloud. Select :menuselection:`Settings`, and then select your username. The realm name appears in the :guilabel:`Organizations` section.
+
+- You must have ``npm`` to install the Node.js auto instrumentation package.
+
+.. _install-js-package:
+
+Install the package
+=======================================
+
+You can install the ``splunk-otel-auto-instrumentation`` package in the following ways:
+
+Using the installer script, you can install the auto instrumentation package for Node.js and activate auto instrumentation either for all supported Node.js applications on the host (the system-wide method) or only for Node.js applications running as ``systemd`` services.
+
+By default, the installer script installs the Node.js package globally using the ``npm install --global`` command. To specify a custom command for installation, use the ``--npm-command `` option as in the following example:
+
+.. code-block:: bash
+
+ --npm-command "/custom/path/to/npm install --prefix /custom/nodejs/install/path"
+
+.. note:: By default, auto instrumentation is activated for both Java and Node.js when using the installer script. To deactivate auto instrumentation for Java, add the ``--without-instrumentation-sdk java`` or ``--with-instrumentation-sdk node`` option in the installer script command.
+
+.. tabs::
+
+ .. tab:: System-wide
+
+ To install the package, run the Collector installer script with the ``--with-instrumentation`` option. The installer script installs the Collector and the Node.js agent from the Splunk Distribution of OpenTelemetry JS. The Node.js agent automatically loads when a Node.js application starts on the local machine.
+
+ Run the installer script with the ``--with-instrumentation`` option, as shown in the following example. Replace ```` and ```` with your Splunk Observability Cloud realm and token, respectively.
+
+ .. code-block:: bash
+
+ curl -sSL https://dl.signalfx.com/splunk-otel-collector.sh > /tmp/splunk-otel-collector.sh && \
+ sudo sh /tmp/splunk-otel-collector.sh --with-instrumentation --realm --
+
+ .. note:: If you have a Log Observer entitlement or wish to collect logs for the target host, make sure Fluentd is installed and enabled in your Collector instance by specifying the ``--with-fluentd`` option.
+
+ The system-wide auto instrumentation method automatically adds environment variables to ``/etc/splunk/zeroconfig/node.conf``.
+
+ You can activate AlwaysOn Profiling for CPU and memory, as well as metrics, using additional options, as in the following example:
+
+ .. code-block:: bash
+ :emphasize-lines: 4
+
+ curl -sSL https://dl.signalfx.com/splunk-otel-collector.sh > /tmp/splunk-otel-collector.sh && \
+ sudo sh /tmp/splunk-otel-collector.sh --with-instrumentation --deployment-environment prod \
+ --realm -- \
+ --enable-profiler --enable-profiler-memory --enable-metrics
+
+ Next, ensure the Collector service is running and restart your Node.js applications. See :ref:`verify-js-agent-install` and :ref:`start-restart-js-apps`.
+
+ .. tab:: ``systemd``
+
+ Run the installer script with the ``--with-systemd-instrumentation`` option, as shown in the following example. Replace ```` and ```` with your Splunk Observability Cloud realm and token, respectively.
+
+ .. code-block:: bash
+
+ curl -sSL https://dl.signalfx.com/splunk-otel-collector.sh > /tmp/splunk-otel-collector.sh && \
+ sudo sh /tmp/splunk-otel-collector.sh --with-systemd-instrumentation --realm --
+
+ The ``systemd`` auto instrumentation method automatically adds environment variables to ``/usr/lib/systemd/system.conf.d/00-splunk-otel-auto-instrumentation.conf``.
+
+ .. note:: If you have a Log Observer entitlement or wish to collect logs for the target host, make sure Fluentd is installed and enabled in your Collector instance by specifying the ``--with-fluentd`` option.
+
+ You can activate AlwaysOn Profiling for CPU and memory, as well as metrics, using additional options, as in the following example:
+
+ .. code-block:: bash
+ :emphasize-lines: 4
+
+ curl -sSL https://dl.signalfx.com/splunk-otel-collector.sh > /tmp/splunk-otel-collector.sh && \
+ sudo sh /tmp/splunk-otel-collector.sh --with-systemd-instrumentation --deployment-environment prod \
+ --realm -- \
+ --enable-profiler --enable-profiler-memory --enable-metrics
+
+ Next, ensure the Collector service is running and restart your Node.js applications. See :ref:`verify-js-agent-install` and :ref:`start-restart-js-apps`.
+
+.. _verify-js-agent-install:
+
+Ensure the collector service is running
+--------------------------------------------
+
+After a successful installation, run the following command to ensure the ``splunk-otel-collector`` service is running:
+
+.. code-block:: bash
+
+ sudo systemctl status splunk-otel-collector
+
+If the service is not running, start or restart it with the following command:
+
+.. code-block:: bash
+
+ sudo systemctl restart splunk-otel-collector
+
+If the service fails to start, check that the ``SPLUNK_REALM`` and ``SPLUNK_ACCESS_TOKEN`` in ``/etc/otel/collector/splunk-otel-collector.conf`` are correct. You can also view the service logs with this command:
+
+.. code-block:: bash
+
+ sudo journalctl -u splunk-otel-collector
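+
+To quickly check those two variables without opening the file, a filter like the following may help:
+
+.. code-block:: bash
+
+ # Print the realm and access token lines from the Collector configuration file.
+ sudo grep -E 'SPLUNK_(REALM|ACCESS_TOKEN)' /etc/otel/collector/splunk-otel-collector.conf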
+
+.. _start-restart-js-apps:
+
+Start your applications
+------------------------------------------------
+
+For auto instrumentation to take effect, you must either reboot the host or manually start or restart any Node.js applications on the host where you installed the package. You must restart the host or applications after installing the auto instrumentation package for the first time and whenever you make any changes to the configuration file.
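+
+For example, if your application runs as a ``systemd`` service, restarting that unit is enough. The unit name here is a placeholder:
+
+.. code-block:: bash
+
+ # Restart the Node.js application so it starts with the injected instrumentation.
+ sudo systemctl restart my-node-service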
+
+After your applications are running, you can verify your data. See :ref:`verify-apm-data`. You can also configure instrumentation settings. See :ref:`configure-js-zeroconfig-linux`.
+
+.. _configure-js-zeroconfig-linux:
+
+(Optional) Configure the instrumentation
+====================================================
+
+You can configure the Splunk Distribution of OpenTelemetry JS to suit your instrumentation needs. In most cases, modifying the basic configuration is enough to get started.
+
+To learn more, see :ref:`advanced-nodejs-otel-configuration`.
+
+.. _js-zeroconfig-linux-nextsteps:
+
+Next steps
+====================================================
+
+After activating automatic instrumentation for Node.js, ensure your data is flowing into Splunk Observability Cloud. See :ref:`verify-apm-data`.
+
diff --git a/gdi/opentelemetry/auto-instrumentation/auto-instrumentation-nodejs.rst b/gdi/opentelemetry/auto-instrumentation/auto-instrumentation-nodejs.rst
new file mode 100644
index 000000000..4e8a6e088
--- /dev/null
+++ b/gdi/opentelemetry/auto-instrumentation/auto-instrumentation-nodejs.rst
@@ -0,0 +1,67 @@
+.. include:: /_includes/gdi/zero-config-preview-header.rst
+
+.. _auto-instrumentation-nodejs:
+
+*************************************************************************
+Splunk OpenTelemetry Zero Config Auto Instrumentation for Node.js
+*************************************************************************
+
+.. meta::
+ :description: Use automatic instrumentation to send traces to Splunk Observability Cloud Application Performance Monitoring (APM) without altering your code.
+
+.. toctree::
+ :hidden:
+
+ Kubernetes
+ Linux
+
+Splunk OpenTelemetry (OTel) Zero Configuration Auto Instrumentation for Node.js automatically instruments supported Node.js libraries in running applications to capture distributed traces.
+The Splunk OpenTelemetry Collector receives the distributed traces and forwards them to Splunk Application Performance Monitoring (APM) in Splunk Observability Cloud.
+
+This feature provides the following benefits:
+
+- You don't need to configure or manually instrument your applications before deployment if your Node.js applications use any of the supported libraries.
+- You can start streaming traces and monitor distributed applications with Splunk APM in minutes.
+
+.. raw:: html
+
+
+
+- Automatic instrumentation is only available for applications using supported Node.js libraries. See :ref:`nodes-requirements`. If your application isn't supported, manually instrument your service to generate trace data. See :ref:`nodejs-manual-instrumentation`.
+
+- :ref:`nodejs-otel-requirements`.
+
+- Your Splunk Observability Cloud realm and access token.
+
+ - To get an access token, see :ref:`admin-api-access-tokens`.
+
+ - To find the realm name of your account, open the navigation menu in Splunk Observability Cloud. Select :menuselection:`Settings`, and then select your username. The realm name appears in the :guilabel:`Organizations` section.
+
+.. raw:: html
+
+
+
+Zero Config Auto Instrumentation is available on Kubernetes and Linux using Splunk OpenTelemetry Node.js.
+When you activate Zero Config, Splunk OpenTelemetry Node.js automatically instruments all Node.js applications
+running in the target environment.
+
+On Linux, the target environment is the entire Linux host, so the Node.js agent instruments every Node.js application on the host.
+
+On Kubernetes, the target environment is the deployment or pod where you activated instrumentation. The Node.js agent instruments every Node.js application within the pod or deployment.
+
+In both cases you must restart the applications to start instrumentation.
+
+.. raw:: html
+
+
+
+Follow the instructions for your platform:
+
+- :ref:`Install Zero Configuration Auto Instrumentation on Kubernetes `
+- :ref:`Install Zero Configuration Auto Instrumentation on Linux `
diff --git a/gdi/opentelemetry/auto-instrumentation/auto-instrumentation-operator.rst b/gdi/opentelemetry/auto-instrumentation/auto-instrumentation-operator.rst
deleted file mode 100644
index 42b72bcb3..000000000
--- a/gdi/opentelemetry/auto-instrumentation/auto-instrumentation-operator.rst
+++ /dev/null
@@ -1,124 +0,0 @@
-.. _auto-instrumentation-operator:
-
-***************************************************************************************************
-Install the Collector and the upstream Kubernetes Operator for Auto Instrumentation
-***************************************************************************************************
-
-.. meta::
- :description: Use the Collector with the upstream Kubernetes Operator for automatic instrumentation to easily add observability code to your application, enabling it to produce telemetry data.
-
-You can use the OTel Collector with an upstream Operator in a Kubernetes environment to implement and simplify the management of OpenTelemetry Auto Instrumentation of your applications.
-
-.. caution:: This instance of the Kubernetes Operator is part of the upstream OpenTelemetry Collector Contrib project. It's not related to the Splunk Operator for Kubernetes, which is used to deploy and operate Splunk Enterprise deployments in a Kubernetes infrastructure.
-
-Requirements
-================================================================
-
-Operator Auto Instrumentation requires the following components:
-
-* The :ref:`Splunk OTel Collector chart `: It deploys the Collector and related resources, including the OpenTelemetry Operator.
-* The OpenTelemetry Operator, which manages auto-instrumentation of Kubernetes applications. See more in the :new-page:`OpenTelemetry GitHub repo `.
-* Instrumentation libraries generate telemetry data when your application uses instrumented components.
-* A Kubernetes instrumentation object ``opentelemetry.io/v1alpha1``, which configures auto-instrumentation settings for applications.
-
-Install the Collector using the Kubernetes Operator
-===========================================================================
-
-To use the Operator for Auto Instrumentation, follow these steps:
-
-#. Deploy the Helm chart with the required components, including the Operator, to your Kubernetes cluster.
-
-#. Verify the deployed resources are working correctly.
-
-#. Apply annotations at the pod or namespace level for the Operator to know which pods to apply auto-instrumentation to.
-
-#. Check out the results at Splunk Observability APM.
-
-1. Deploy the Helm Chart with the Operator enabled
-------------------------------------------------------------
-
-Deploy the :ref:`Collector for Kubernetes with the Helm chart ` with ``operator.enabled=true`` to include the Operator in the deployment.
-
-Ingest traces
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-To ingest trace telemetry data, the attribute ``environment`` must be on board the exported traces. There are two ways to set this attribute:
-
-* Use the `values.yaml` optional environment configuration.
-* Use the Instrumentation spec with the environment variable ``OTEL_RESOURCE_ATTRIBUTES``.
-
-Add certificates
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-The Operator requires certain TLS cerificates to work. If a certification manager (or any other TLS certificate source) is not available in the cluster, then you need to deploy it using ``certmanager.enabled=true``. You can use the following commands to run these steps.
-
-.. code-block:: yaml
-
- # Check if cert-manager is already installed, don't deploy a second cert-manager.
- kubectl get pods -l app=cert-manager --all-namespaces
-
- # If cert-manager is not deployed, make sure to add certmanager.enabled=true to the list of values to set
- helm install splunk-otel-collector -f ./my_values.yaml --set operator.enabled=true,environment=dev splunk-otel-collector-chart/splunk-otel-collector
-
-2. Verify all the OpenTelemetry resources are deployed successfully
----------------------------------------------------------------------------
-
-Resources include the Collector, the Operator, webhook, an instrumentation.
-
-Run the following to verify the resources are deployed correctly:
-
-.. code-block:: yaml
-
- kubectl get pods
- # NAME READY STATUS
- # splunk-otel-collector-agent-lfthw 2/2 Running
- # splunk-otel-collector-cert-manager-6b9fb8b95f-2lmv4 1/1 Running
- # splunk-otel-collector-cert-manager-cainjector-6d65b6d4c-khcrc 1/1 Running
- # splunk-otel-collector-cert-manager-webhook-87b7ffffc-xp4sr 1/1 Running
- # splunk-otel-collector-k8s-cluster-receiver-856f5fbcf9-pqkwg 1/1 Running
- # splunk-otel-collector-opentelemetry-operator-56c4ddb4db-zcjgh 2/2 Running
-
- kubectl get mutatingwebhookconfiguration.admissionregistration.k8s.io
- # NAME WEBHOOKS AGE
- # splunk-otel-collector-cert-manager-webhook 1 14m
- # splunk-otel-collector-opentelemetry-operator-mutation 3 14m
-
- kubectl get otelinst
- # NAME AGE ENDPOINT
- # splunk-otel-collector 3s http://$(SPLUNK_OTEL_AGENT):4317
-
-3. Set annotations to instrument applications
-------------------------------------------------------------
-
-You can add an ``instrumentation.opentelemetry.io/inject-{instrumentation_library}`` annotation to the following:
-
-* Namespace: All pods within that namespace are instrumented.
-* Pod Spec Objects: PodSpec objects that are available as part of Deployment, Statefulset, or other resources can be annotated.
-
-Instrumentation annotations can have the following values:
-
-* ``"true"``: Inject, and the Instrumentation resource from the namespace to use.
-* ``"my-instrumentation"``: Name of Instrumentation CR instance in the current namespace to use.
-* ``"my-other-namespace/my-instrumentation"``: Name and namespace of Instrumentation CR instance in another namespace to use.
-* ``"false"``: Do not inject.
-
-Sample annotations include the following; a Deployment sketch that uses one of these annotations appears after the note below:
-
-* ``instrumentation.opentelemetry.io/inject-java: "true"``
-* ``instrumentation.opentelemetry.io/inject-dotnet: "true"``
-* ``instrumentation.opentelemetry.io/inject-nodejs: "true"``
-* ``instrumentation.opentelemetry.io/inject-python: "true"``
-
-.. note:: .NET automatic instrumentation is not compatible with Alpine-based images.
-
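-The following is a minimal sketch of a Deployment whose pod template carries the Java injection annotation; the application name and image are hypothetical:
-
-.. code-block:: yaml
-
-   apiVersion: apps/v1
-   kind: Deployment
-   metadata:
-     name: my-java-app                          # hypothetical application name
-   spec:
-     replicas: 1
-     selector:
-       matchLabels:
-         app: my-java-app
-     template:
-       metadata:
-         labels:
-           app: my-java-app
-         annotations:
-           # Tells the Operator to inject the Java auto-instrumentation agent
-           instrumentation.opentelemetry.io/inject-java: "true"
-       spec:
-         containers:
-           - name: my-java-app
-             image: my-registry/my-java-app:1.0.0   # hypothetical image
-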
-4. Check out the results at Splunk Observability APM
-------------------------------------------------------------
-
-Allow the Operator to do the work. The Operator intercepts and alters the Kubernetes API requests that create and update annotated pods, the application containers inside those pods are instrumented, and trace and metrics data populates the :ref:`APM dashboard `.
-
-Learn more
-===========================================================================
-
-* See :ref:`auto-instrumentation-java-operator`.
-* To learn more about how Auto Instrumentation works in Splunk Observability Cloud, see the :new-page:`detailed documentation in GitHub `.
-* See :new-page:`the operator pattern in the Kubernetes documentation ` for more information.
diff --git a/gdi/opentelemetry/install-k8s.rst b/gdi/opentelemetry/install-k8s.rst
index 018290851..83504a99e 100644
--- a/gdi/opentelemetry/install-k8s.rst
+++ b/gdi/opentelemetry/install-k8s.rst
@@ -330,7 +330,7 @@ See the following manifest to set security constraints:
Use the Kubernetes Operator in OpenTelemetry
============================================================================================
-You can install the Collector with an upstream Kubernetes Operator for Auto Instrumentation. This instance of the Kubernetes Operator is part of the upstream OpenTelemetry Operator project. See more at :ref:`auto-instrumentation-operator`.
+You can install the Collector with an upstream Kubernetes Operator for Auto Instrumentation. This instance of the Kubernetes Operator is part of the upstream OpenTelemetry Operator project. See the :new-page:`OpenTelemetry GitHub repo ` for more information.
.. note:: The upstream Kubernetes Operator is not related to the Splunk Operator for Kubernetes, which is used to deploy and operate Splunk Enterprise deployments in a Kubernetes infrastructure.
diff --git a/gdi/opentelemetry/install-linux.rst b/gdi/opentelemetry/install-linux.rst
index 8756b3a29..758d388e6 100644
--- a/gdi/opentelemetry/install-linux.rst
+++ b/gdi/opentelemetry/install-linux.rst
@@ -255,10 +255,10 @@ The Linux installer script supports the following options:
- Override the autogenerated service names for all instrumented Java applications on this host with ````. Only applicable if the ``--with-instrumentation`` option is also specified.
- Empty
* - ``--[no-]generate-service-name``
- - Specify ``--no-generate-service-name`` to prevent the preloader from setting the ``OTEL_SERVICE_NAME`` environment variable. Only applicable if the ``--with-instrumentation`` option is also specified.
+ - Specify ``--no-generate-service-name`` to prevent the preloader from setting the ``OTEL_SERVICE_NAME`` environment variable. Only applicable if the ``--with-instrumentation`` option is also specified. This option is deprecated for Splunk OpenTelemetry Auto Instrumentation version ``0.87`` or higher; the bundled Auto Instrumentation agents automatically generate a service name by default.
- ``--generate-service-name``
* - ``--[enable|disable]-telemetry``
- - Activate or deactivate the instrumentation preloader from sending the ``splunk.linux-autoinstr.executions`` metric to the Collector. Only applicable if the ``--with-instrumentation`` option is also specified.
+ - Activate or deactivate the instrumentation preloader from sending the ``splunk.linux-autoinstr.executions`` metric to the Collector. Only applicable if the ``--with-instrumentation`` option is also specified. This option is deprecated for Splunk OpenTelemetry Auto Instrumentation version ``0.87`` or higher; the ``libsplunk.so`` library no longer generates the ``splunk.linux-autoinstr.executions`` metric.
- ``--enable-telemetry``
* - ``--[enable|disable]-profiler``
- Activate or deactivate AlwaysOn CPU Profiling. Only applicable if the ``--with-instrumentation`` option is also specified.
diff --git a/gdi/opentelemetry/install-the-collector.rst b/gdi/opentelemetry/install-the-collector.rst
index 4bcdba13c..aa1b79ef1 100644
--- a/gdi/opentelemetry/install-the-collector.rst
+++ b/gdi/opentelemetry/install-the-collector.rst
@@ -54,7 +54,7 @@ The Splunk Distribution of OpenTelemetry Collector is supported on Kubernetes, L
Deploy one of the following packages to gather data for Splunk Observability Cloud.
-* Splunk Distribution of OpenTelemetry Collector for Kubernetes or ``splunk-otel-collector-chart``. See :ref:`Install on Kubernetes `. You can also install the Kubernetes Operator for Auto Instrumentation, as explained in :ref:`Install the Collector with the Kubernetes Operator for Auto Instrumentation `.
+* Splunk Distribution of OpenTelemetry Collector for Kubernetes or ``splunk-otel-collector-chart``. See :ref:`Install on Kubernetes `. You can also install the Kubernetes Operator for Auto Instrumentation. See :ref:`zero-config` for more information.
* Splunk Distribution of OpenTelemetry Collector for Linux or ``splunk-otel-collector``. See :ref:`Install on Linux (script) ` or :ref:`Install on Linux (manual) `, including instructions to install using the :ref:`binary file `.
* Splunk Distribution of OpenTelemetry Collector for Windows or ``splunk-otel-collector``. See :ref:`Install on Windows (script) ` or :ref:`Install on Windows (manual) `, including instructions for the :ref:`binary file `.
diff --git a/gdi/opentelemetry/zero-config.rst b/gdi/opentelemetry/zero-config.rst
index 957f980c1..af50f92be 100644
--- a/gdi/opentelemetry/zero-config.rst
+++ b/gdi/opentelemetry/zero-config.rst
@@ -1,4 +1,4 @@
-.. include:: /_includes/gdi/zero-config-preview-header.rst
+
.. _zero-config:
@@ -12,37 +12,40 @@ Splunk OpenTelemetry Zero Configuration Auto Instrumentation
.. toctree::
:hidden:
- Kubernetes Operator
Java
.NET
+ Node.js
-Splunk OpenTelemetry Zero Configuration Auto Instrumentation provides several packages that automatically instrument your back-end applications and services to capture and report distributed traces and metrics to the Splunk Distribution of OpenTelemetry Collector, and then on to Splunk APM.
+Splunk OpenTelemetry Zero Configuration Auto Instrumentation automatically instruments your back-end applications and services to capture and report distributed traces and metrics to the Splunk Distribution of OpenTelemetry Collector, and then on to Splunk APM.
-The following diagram demonstrates the process of manually instrumenting your applications compared to the process of using zero configuration auto instrumentation to instrument your applications:
+The following diagram demonstrates the process of manually instrumenting your applications:
.. mermaid::
flowchart TB
subgraph "Manual instrumentation"
- A["Install the Splunk \n Distribution of
- OpenTelemetry Collector \n agent for your integration"]
+ A["Connect to your cloud environment"]
- B["Follow guided setup instructions \n to configure your environment"]
+ B["Deploy the Splunk Distribution of \n OpenTelemetry Collector in your environment"]
- C["Deploy the Splunk Distribution of \n OpenTelemetry Collector"]
+ C["Deploy language-specific components \n to each service"]
D["Run your application"]
A --> B --> C --> D
end
+The following diagram demonstrates the process of using zero configuration auto instrumentation to instrument your applications:
+
.. mermaid::
flowchart TB
subgraph "Zero configuration auto instrumentation"
- X["Install the zero-config package \n for your application"]
- Y["Ensure the Splunk Distribution of \nOpenTelemetry Collector
- is running"]
+
+ X["Connect to your cloud environment"]
+
+ Y["Deploy the Splunk Distribution \n of OpenTelemetry Collector in your environment"]
+
Z["Run your application"]
X --> Y --> Z
@@ -54,11 +57,33 @@ The Zero Configuration packages provide the following benefits:
- You can start streaming traces and monitor distributed applications with Splunk APM in minutes.
- You don't need to configure or instrument your back-end services or applications before deployment.
-The following packages are available:
+Zero Configuration Auto Instrumentation is available for Java, .NET, and Node.js applications.
+
+.. list-table::
+ :header-rows: 1
+ :width: 60%
+ :widths: 15 15 15 15
+
+ * - Application/language
+ - Supported for Linux
+ - Supported for Windows
+ - Supported for Kubernetes
+ * - Java
+ - Yes
+ - No
+ - Yes
+ * - .NET
+ - No
+ - Yes
+ - No
+ * - Node.js
+ - In preview
+ - No
+ - In preview
+
+To get started with automatic instrumentation for your applications, see the following pages:
- :ref:`auto-instrumentation-java`
- :ref:`auto-instrumentation-dotnet`
-
-.. note:: You can also install the Collector with the Kubernetes Operator for Auto Instrumentation. See :ref:`Install the Collector with the Kubernetes Operator ` for more information.
-
+- :ref:`auto-instrumentation-nodejs`
diff --git a/incident-intelligence/create-configure-incident-policies.rst b/incident-intelligence/create-configure-incident-policies.rst
index 01b7af0d1..4e6d72aad 100644
--- a/incident-intelligence/create-configure-incident-policies.rst
+++ b/incident-intelligence/create-configure-incident-policies.rst
@@ -1,3 +1,5 @@
+:orphan:
+
.. _ii-create-configure-incident-policies:
************************************************************************
diff --git a/incident-intelligence/create-manage-on-call-schedules/create-manage-on-call-schedules.rst b/incident-intelligence/create-manage-on-call-schedules/create-manage-on-call-schedules.rst
index 10442bbdb..970c419de 100644
--- a/incident-intelligence/create-manage-on-call-schedules/create-manage-on-call-schedules.rst
+++ b/incident-intelligence/create-manage-on-call-schedules/create-manage-on-call-schedules.rst
@@ -1,3 +1,5 @@
+:orphan:
+
.. _ii-create-manage-on-call-schedules:
Create and manage on-call schedules
diff --git a/incident-intelligence/create-manage-on-call-schedules/create-on-call-schedule.rst b/incident-intelligence/create-manage-on-call-schedules/create-on-call-schedule.rst
index aaa0a6c1a..d99ee6ecb 100644
--- a/incident-intelligence/create-manage-on-call-schedules/create-on-call-schedule.rst
+++ b/incident-intelligence/create-manage-on-call-schedules/create-on-call-schedule.rst
@@ -1,3 +1,5 @@
+:orphan:
+
.. _ii-create-on-call-schedule:
Create an on-call schedule
diff --git a/incident-intelligence/create-manage-on-call-schedules/on-call_schedule_steps.html b/incident-intelligence/create-manage-on-call-schedules/on-call_schedule_steps.html
index 1131a2ffd..23206fc42 100644
--- a/incident-intelligence/create-manage-on-call-schedules/on-call_schedule_steps.html
+++ b/incident-intelligence/create-manage-on-call-schedules/on-call_schedule_steps.html
@@ -1,3 +1,6 @@
+:orphan:
+
+
diff --git a/incident-intelligence/create-manage-on-call-schedules/reassign-shift.rst b/incident-intelligence/create-manage-on-call-schedules/reassign-shift.rst
index 89be9c3ce..4fa5cce51 100644
--- a/incident-intelligence/create-manage-on-call-schedules/reassign-shift.rst
+++ b/incident-intelligence/create-manage-on-call-schedules/reassign-shift.rst
@@ -1,3 +1,5 @@
+:orphan:
+
.. _reassign-shift:
Reassign a full or partial shift in Incident Intelligence
diff --git a/incident-intelligence/create-manage-on-call-schedules/scenarios-schedules/scenario-business-hours.rst b/incident-intelligence/create-manage-on-call-schedules/scenarios-schedules/scenario-business-hours.rst
index 1320b535c..6efdabeaf 100644
--- a/incident-intelligence/create-manage-on-call-schedules/scenarios-schedules/scenario-business-hours.rst
+++ b/incident-intelligence/create-manage-on-call-schedules/scenarios-schedules/scenario-business-hours.rst
@@ -1,3 +1,5 @@
+:orphan:
+
.. _ii-scenario-business-hours:
Scenario: Skyler creates business-hours and nights-and-weekend rotations for the web application service
diff --git a/incident-intelligence/create-manage-on-call-schedules/scenarios-schedules/scenario-day-by-day.rst b/incident-intelligence/create-manage-on-call-schedules/scenarios-schedules/scenario-day-by-day.rst
index b3cb2c335..cf656d14d 100644
--- a/incident-intelligence/create-manage-on-call-schedules/scenarios-schedules/scenario-day-by-day.rst
+++ b/incident-intelligence/create-manage-on-call-schedules/scenarios-schedules/scenario-day-by-day.rst
@@ -1,3 +1,5 @@
+:orphan:
+
.. _ii-scenario-day-by-day:
Scenario: Skyler creates every-other-day coverage using the Day-by-day shift type
diff --git a/incident-intelligence/create-manage-on-call-schedules/scenarios-schedules/scenario-week-by-week.rst b/incident-intelligence/create-manage-on-call-schedules/scenarios-schedules/scenario-week-by-week.rst
index 3095df9e4..b1cd83dd1 100644
--- a/incident-intelligence/create-manage-on-call-schedules/scenarios-schedules/scenario-week-by-week.rst
+++ b/incident-intelligence/create-manage-on-call-schedules/scenarios-schedules/scenario-week-by-week.rst
@@ -1,3 +1,5 @@
+:orphan:
+
.. _ii-scenario-week-by-week:
Scenario: Skyler creates weekly coverage using the Week-by-week shift type
diff --git a/incident-intelligence/create-manage-on-call-schedules/scenarios-schedules/scenarios-schedules.rst b/incident-intelligence/create-manage-on-call-schedules/scenarios-schedules/scenarios-schedules.rst
index a81b29ef1..a74498345 100644
--- a/incident-intelligence/create-manage-on-call-schedules/scenarios-schedules/scenarios-schedules.rst
+++ b/incident-intelligence/create-manage-on-call-schedules/scenarios-schedules/scenarios-schedules.rst
@@ -1,3 +1,5 @@
+:orphan:
+
.. _ii-scenarios-schedules:
Scenarios for notifying the correct responder using schedules in Incident Intelligence
diff --git a/incident-intelligence/create-manage-on-call-schedules/sync-on-call-schedule.rst b/incident-intelligence/create-manage-on-call-schedules/sync-on-call-schedule.rst
index 4cc4c5772..477d159e6 100644
--- a/incident-intelligence/create-manage-on-call-schedules/sync-on-call-schedule.rst
+++ b/incident-intelligence/create-manage-on-call-schedules/sync-on-call-schedule.rst
@@ -1,3 +1,5 @@
+:orphan:
+
.. _ii-sync-on-call-schedule:
Check your on-call schedule and sync it to your personal calendar
diff --git a/incident-intelligence/create-manage-on-call-schedules/whos-on-call.rst b/incident-intelligence/create-manage-on-call-schedules/whos-on-call.rst
index a2a3add7f..8838eb282 100644
--- a/incident-intelligence/create-manage-on-call-schedules/whos-on-call.rst
+++ b/incident-intelligence/create-manage-on-call-schedules/whos-on-call.rst
@@ -1,3 +1,5 @@
+:orphan:
+
.. _ii-whos-on-call:
Check who's currently on call
diff --git a/incident-intelligence/incident-intelligence-overview.rst b/incident-intelligence/incident-intelligence-overview.rst
index 809585644..a026f63b8 100644
--- a/incident-intelligence/incident-intelligence-overview.rst
+++ b/incident-intelligence/incident-intelligence-overview.rst
@@ -1,3 +1,5 @@
+:orphan:
+
.. _ii-incident-intelligence-overview:
Splunk Incident Intelligence overview
diff --git a/incident-intelligence/ingest-alerts/ingest-alerts.rst b/incident-intelligence/ingest-alerts/ingest-alerts.rst
index 53eab10bd..2077e81b3 100644
--- a/incident-intelligence/ingest-alerts/ingest-alerts.rst
+++ b/incident-intelligence/ingest-alerts/ingest-alerts.rst
@@ -1,3 +1,5 @@
+:orphan:
+
.. _ii-ingest-alerts:
Ingest alerts in Splunk Incident Intelligence
diff --git a/incident-intelligence/ingest-alerts/ingest-azure.rst b/incident-intelligence/ingest-alerts/ingest-azure.rst
index bc10304fb..fa5d3e44b 100644
--- a/incident-intelligence/ingest-alerts/ingest-azure.rst
+++ b/incident-intelligence/ingest-alerts/ingest-azure.rst
@@ -1,3 +1,5 @@
+:orphan:
+
.. _ii-ingest-azure-alerts:
Ingest Azure Monitor alerts
diff --git a/incident-intelligence/ingest-alerts/ingest-cloudwatch.rst b/incident-intelligence/ingest-alerts/ingest-cloudwatch.rst
index 2b8efa22d..0226508c5 100644
--- a/incident-intelligence/ingest-alerts/ingest-cloudwatch.rst
+++ b/incident-intelligence/ingest-alerts/ingest-cloudwatch.rst
@@ -1,3 +1,5 @@
+:orphan:
+
.. _ii-ingest-cloudwatch-alerts:
Ingest Amazon CloudWatch alarms
diff --git a/incident-intelligence/ingest-alerts/ingest-prometheus.rst b/incident-intelligence/ingest-alerts/ingest-prometheus.rst
index 054d0c675..cd587a88e 100644
--- a/incident-intelligence/ingest-alerts/ingest-prometheus.rst
+++ b/incident-intelligence/ingest-alerts/ingest-prometheus.rst
@@ -1,3 +1,5 @@
+:orphan:
+
.. _ii-ingest-prometheus-alerts:
Ingest Prometheus alerts
diff --git a/incident-intelligence/ingest-alerts/ingest-rest.rst b/incident-intelligence/ingest-alerts/ingest-rest.rst
index fb0d8d834..ed7af7183 100644
--- a/incident-intelligence/ingest-alerts/ingest-rest.rst
+++ b/incident-intelligence/ingest-alerts/ingest-rest.rst
@@ -1,3 +1,5 @@
+:orphan:
+
.. _ii-ingest-rest:
Ingest generic REST alerts
diff --git a/incident-intelligence/ingest-alerts/ingest-splunk-alerts.rst b/incident-intelligence/ingest-alerts/ingest-splunk-alerts.rst
index cad9f6df9..02ce6c527 100644
--- a/incident-intelligence/ingest-alerts/ingest-splunk-alerts.rst
+++ b/incident-intelligence/ingest-alerts/ingest-splunk-alerts.rst
@@ -1,3 +1,5 @@
+:orphan:
+
.. _ii-ingest-splunk-itsi-alerts:
Ingest alerts from Splunk Enterprise and Splunk Cloud Platform
diff --git a/incident-intelligence/intro-to-incident-intelligence.rst b/incident-intelligence/intro-to-incident-intelligence.rst
index 9f452d882..042566f06 100644
--- a/incident-intelligence/intro-to-incident-intelligence.rst
+++ b/incident-intelligence/intro-to-incident-intelligence.rst
@@ -1,4 +1,5 @@
-
+:orphan:
+
.. _ii-get-started-incident-intelligence:
Introduction to Splunk Incident Intelligence
diff --git a/incident-intelligence/key-concepts.rst b/incident-intelligence/key-concepts.rst
index 76b1f3144..656215bb2 100644
--- a/incident-intelligence/key-concepts.rst
+++ b/incident-intelligence/key-concepts.rst
@@ -1,3 +1,5 @@
+:orphan:
+
.. _ii-key-concepts:
Key concepts in Splunk Incident Intelligence
diff --git a/incident-intelligence/manage-notifications/example-notifications.rst b/incident-intelligence/manage-notifications/example-notifications.rst
index a38aba8b2..8337e752d 100644
--- a/incident-intelligence/manage-notifications/example-notifications.rst
+++ b/incident-intelligence/manage-notifications/example-notifications.rst
@@ -1,3 +1,5 @@
+:orphan:
+
.. _ii-example-notifications:
Example notifications: Email, mobile push, SMS, and voice
diff --git a/incident-intelligence/manage-notifications/manage-notifications.rst b/incident-intelligence/manage-notifications/manage-notifications.rst
index bb0a88b68..f7f2299b3 100644
--- a/incident-intelligence/manage-notifications/manage-notifications.rst
+++ b/incident-intelligence/manage-notifications/manage-notifications.rst
@@ -1,3 +1,5 @@
+:orphan:
+
.. _ii-manage-notifications:
Manage notifications from Incident Intelligence
diff --git a/incident-intelligence/manage-notifications/notification-preferences.rst b/incident-intelligence/manage-notifications/notification-preferences.rst
index 6451fa8f6..a53135790 100644
--- a/incident-intelligence/manage-notifications/notification-preferences.rst
+++ b/incident-intelligence/manage-notifications/notification-preferences.rst
@@ -1,3 +1,5 @@
+:orphan:
+
.. _ii-notification-preferences:
Set your on-call notification preferences
diff --git a/incident-intelligence/manage-notifications/prevent-spam.rst b/incident-intelligence/manage-notifications/prevent-spam.rst
index be27963f1..7b89299c5 100644
--- a/incident-intelligence/manage-notifications/prevent-spam.rst
+++ b/incident-intelligence/manage-notifications/prevent-spam.rst
@@ -1,3 +1,5 @@
+:orphan:
+
.. _ii-prevent-spam:
Prevent SMS and voice notifications from going to spam
diff --git a/incident-intelligence/manage-notifications/sending-phone-numbers.rst b/incident-intelligence/manage-notifications/sending-phone-numbers.rst
index 0b9cb4cc0..8ee385e49 100644
--- a/incident-intelligence/manage-notifications/sending-phone-numbers.rst
+++ b/incident-intelligence/manage-notifications/sending-phone-numbers.rst
@@ -1,3 +1,5 @@
+:orphan:
+
.. _ii-sending-phone-numbers:
Sending phone numbers for voice and SMS
diff --git a/incident-intelligence/respond-manage-incidents/add-incident-tools-resources.rst b/incident-intelligence/respond-manage-incidents/add-incident-tools-resources.rst
index b2e31e6f3..b7649320f 100644
--- a/incident-intelligence/respond-manage-incidents/add-incident-tools-resources.rst
+++ b/incident-intelligence/respond-manage-incidents/add-incident-tools-resources.rst
@@ -1,3 +1,5 @@
+:orphan:
+
.. _ii-add-incident-tools-resources:
Add collaboration tools and resources to an incident
diff --git a/incident-intelligence/respond-manage-incidents/respond-manage-incidents.rst b/incident-intelligence/respond-manage-incidents/respond-manage-incidents.rst
index c4fc6f47e..ff56cde86 100644
--- a/incident-intelligence/respond-manage-incidents/respond-manage-incidents.rst
+++ b/incident-intelligence/respond-manage-incidents/respond-manage-incidents.rst
@@ -1,3 +1,5 @@
+:orphan:
+
.. _ii-respond-manage-incidents:
Respond to and manage incidents
diff --git a/incident-intelligence/set-up-incident-intelligence.rst b/incident-intelligence/set-up-incident-intelligence.rst
index f36412b49..f2439b11f 100644
--- a/incident-intelligence/set-up-incident-intelligence.rst
+++ b/incident-intelligence/set-up-incident-intelligence.rst
@@ -1,3 +1,5 @@
+:orphan:
+
.. _ii-set-up-incident-intelligence:
Set up Splunk Incident Intelligence
diff --git a/index.rst b/index.rst
index 4dfad5ddf..2941725ef 100644
--- a/index.rst
+++ b/index.rst
@@ -802,52 +802,6 @@ View a list of all supported integrations :ref:`supported-data-sources`
Configure your tests TOGGLE
-.. toctree::
- :caption: Incident Intelligence
- :maxdepth: 3
-
- Introduction to Splunk Incident Intelligence
-
-.. toctree::
- :maxdepth: 3
-
- incident-intelligence/incident-intelligence-overview
-
-.. toctree::
- :maxdepth: 3
-
- incident-intelligence/key-concepts
-
-.. toctree::
- :maxdepth: 3
-
- Set up Incident Intelligence
-
-.. toctree::
- :maxdepth: 3
-
- Ingest alerts in Incident Intelligence TOGGLE
-
-.. toctree::
- :maxdepth: 3
-
- incident-intelligence/create-configure-incident-policies
-
-.. toctree::
- :maxdepth: 4
-
- Create and manage on-call schedules TOGGLE
-
-.. toctree::
- :maxdepth: 3
-
- Respond to and manage incidents TOGGLE
-
-.. toctree::
- :maxdepth: 3
-
- Manage notifications from Incident Intelligence TOGGLE
-
.. toctree::
:caption: Reference and Legal
diff --git a/infrastructure/manage-navigator-dashbds.rst b/infrastructure/manage-navigator-dashbds.rst
new file mode 100644
index 000000000..a83afe0ef
--- /dev/null
+++ b/infrastructure/manage-navigator-dashbds.rst
@@ -0,0 +1,71 @@
+.. _manage-dashboards-imm:
+
+***************************************************************************
+Customize dashboards in Splunk Infrastructure Monitoring navigators
+***************************************************************************
+
+.. meta::
+ :description: Customize dashboards in the navigators for Splunk Infrastructure Monitoring
+
+.. note:: You must be an admin user to perform the tasks described in this topic.
+
+Apart from modifying parameters for the data that you view and monitor in a navigator, as explained in :ref:`customize-navigator`, you can also
+use navigator customization to modify the number and scope of the dashboards associated with a navigator. Dashboard customization persists
+across sessions and applies to all users who view that navigator.
+
+You can apply custom dashboard settings to both the aggregate view of all navigators for a specific technology and the more focused instance view of
+navigators for a representative example. For example, you can monitor all active EC2 hosts in an aggregate view or monitor one active EC2 host in an instance view.
+A label next to the title of the Navigator settings page identifies whether you're working with an aggregate
+view or an instance view. Fewer dashboards display for instance views than for aggregate views, but you can customize either view.
+
+Use :guilabel:`Manage navigator dashboards` to find dashboards that you can add to the set associated with a navigator.
+
+As an admin user, you can access :guilabel:`Manage navigator dashboards` in either of the following ways:
+
+- From the drop-down menu displayed when you select the gear icon on the Infrastructure Navigator home page.
+
+- From the ellipsis icon at the right side of a navigator title bar.
+
+Either access method opens the Navigator settings page, from which you can select up to 10 dashboards to display in the navigator. Current
+dashboards are listed in a table that displays them by name.
+
+To add one or more dashboards to the default dashboard set for a navigator, do the following from the Navigator settings page:
+
+#. Click :guilabel:`+Add dashboard`.
+
+#. Scroll through the list of dashboards, using the :guilabel:`Prev` and :guilabel:`Next` buttons to navigate through multiple pages as needed. All available dashboards are displayed by default, so the dashboard list can be extensive.
+
+#. (Optional) In the search field of the :guilabel:`Select a dashboard` window, enter the name of the dashboard you want to find. If dashboards in different groups have the same name, as they might in the case of a common function like "Service endpoint," the search displays the relevant part of the dashboard list, where the dashboards appear in alphabetical order.
+
+#. (Optional) Use the buttons next to the search field to apply either of the following search filters:
+
+ * :guilabel:`Created by me`
+ * :guilabel:`Favorites`
+
+#. Click on the name of the dashboard you want to link to the navigator.
+
+#. Click :guilabel:`Select` at the bottom right of the dashboard listing.
+
+#. (Optional) Repeat steps 2 through 5 to link additional dashboards to the navigator.
+
+#. (Optional) To change the dashboard display order on the home page for the navigator, click on a dashboard name and drag it up or down in the dashboard list.
+
+#. Click :guilabel:`Save changes` to confirm and apply your choices.
+
+If you select :guilabel:`Reset to built-in dashboards` rather than :guilabel:`Save changes`, the navigator reverts to its original state
+without any customization.
+
+To hide a dashboard from view temporarily without disassociating it from a navigator, open the Dashboards list and click the eye symbol to the right of the dashboard name. When a dashboard is hidden, the eye symbol has a slash through it and the dashboard name is grayed out.
+
+
+Built-in dashboards
+-----------------------------
+
+Built-in dashboards ship with particular navigators as part of a default set. In dashboard lists, they have a :guilabel:`Built-in` label next to their names.
+A dashboard with a :guilabel:`Limited access` label is associated with an access control list (ACL), and might not be visible to all users.
+
+Custom dashboards
+-----------------------------
+
+Custom dashboards are monitoring tools that you add to the built-in dashboard set when you modify navigators to more closely match the needs
+of your end-to-end computing environment.
diff --git a/infrastructure/navigators-list.rst b/infrastructure/navigators-list.rst
new file mode 100644
index 000000000..b559c13ac
--- /dev/null
+++ b/infrastructure/navigators-list.rst
@@ -0,0 +1,30 @@
+.. _navigators-list-imm:
+
+*******************************************************
+Navigators available
+*******************************************************
+
+.. meta::
+ :description: Automated list of the navigators available to you
+
+In Splunk Infrastructure Monitoring, a navigator is a collection of resources that enables you to monitor metrics and logs across various instances of your services so you can detect outliers in the instance population based on key performance indicators. Resources in a navigator include, but are not limited to, a full list of entities, dashboards, related alerts and detectors, and service dependencies.
+
+View navigators
+----------------------
+
+To see all navigators, select :guilabel:`Infrastructure` from the Splunk Observability Cloud home page.
+
+
+List of navigators
+----------------------
+.. raw:: html
+
+
+
+
+
+
+
+
+
+
diff --git a/infrastructure/use-navigators.rst b/infrastructure/use-navigators.rst
index 831114fae..23ec7698f 100644
--- a/infrastructure/use-navigators.rst
+++ b/infrastructure/use-navigators.rst
@@ -63,6 +63,8 @@ For information on customizing the content and format of the navigator, includin
For interactive walkthroughs of how to use navigators in Infrastructure Monitoring to troubleshoot your web server or observe your application and the underlying infrastructure, see :new-page:`Splunk Infrastructure Monitoring web server troubleshooting scenario ` and :new-page:`Splunk Infrastructure Monitoring application monitoring scenario `.
+For a list of all the navigators available, see :ref:`navigators-list-imm`.
+
.. note::
The format and content displayed in the navigator for AWS Lambda is different from what is discussed below.
@@ -112,7 +114,7 @@ Use the Dashboard section
The :strong:`Dashboard` section contains built-in dashboards that provide access to detailed information about the instances displayed.
-Dashboards in navigators are read |hyph| only, so you can't directly make any changes to them. However, you can clone a built-in dashboard to make changes to the clone, or download a built-in dashboard.
+Dashboards in navigators are read |hyph| only, so you can't directly make any changes to them. However, you can clone a built-in dashboard to make changes to the clone, or download a built-in dashboard. As an admin, you can also add or remove custom dashboards, and hide any built-in dashboards that you don't use.
To learn more, see :ref:`Clone a built-in dashboard in a navigator` and :ref:`Export a built-in dashboard in a navigator` in the :ref:`built-in-dashboards` documentation.
@@ -302,3 +304,18 @@ Follow these steps to remove an inactive navigator.
:alt: This image shows a navigator with a Remove Navigator option.
#. Confirm your selection.
+
+.. _list-available-navigators:
+
+List available navigators
+-------------------------------
+
+For a list of all the navigators available, see :ref:`navigators-list-imm`.
+
+.. toctree::
+ :hidden:
+
+ navigators-list
+ manage-navigator-dashbds
+
+
diff --git a/private-preview/rbac/roles-and-capabilities-about.rst b/private-preview/rbac/roles-and-capabilities-about.rst
index 8f7ebe9b3..3e281e544 100644
--- a/private-preview/rbac/roles-and-capabilities-about.rst
+++ b/private-preview/rbac/roles-and-capabilities-about.rst
@@ -81,7 +81,7 @@ APIs honor capabilities based on the role defined to their token. This is import
Multiple roles for a user or team
-You can assign multiple roles to individual users. The user receives a combination of capabilities inherited from all of their roles. Additionally, if you revoke a role from a user the change takes effect immediately. The cache is invalidated and the user no longer has access to the capabilities associated with the role that was revoked.
+You can assign multiple roles to individual users. The user receives a combination of capabilities inherited from all of their roles. Additionally, if you revoke a role from a user the change takes effect immediately.
.. list-table::