From a291b17ec581888683cae800acebb45f280d2268 Mon Sep 17 00:00:00 2001
From: Anna Urbiztondo
Date: Fri, 10 Nov 2023 11:24:31 +0100
Subject: [PATCH 1/4] PQ skeleton

---
 .../kubernetes-config-advanced.rst | 82 ++++++++++++++++++-
 1 file changed, 81 insertions(+), 1 deletion(-)

diff --git a/gdi/opentelemetry/kubernetes-config-advanced.rst b/gdi/opentelemetry/kubernetes-config-advanced.rst
index 3dfab9b48..b476bad67 100644
--- a/gdi/opentelemetry/kubernetes-config-advanced.rst
+++ b/gdi/opentelemetry/kubernetes-config-advanced.rst
@@ -309,4 +309,84 @@ Support of Pod Security Policies (PSP) was removed in Kubernetes 1.25. If you st
 
 .. code-block:: yaml
 
-   helm install my-splunk-otel-collector -f my_values.yaml splunk-otel-collector-chart/splunk-otel-collector
\ No newline at end of file
+   helm install my-splunk-otel-collector -f my_values.yaml splunk-otel-collector-chart/splunk-otel-collector
+
+
+Configure data persistence queues
+==================================================
+
+Without any configuration, data is queued in memory only. When data cannot be sent, it's retried a few times for up to 5 minutes by default, and then dropped. If, for any reason, the Collector is restarted in this period, the queued data will be gone.
+
+If you want the queue to be persisted on disk if the Collector restarts, set ``splunkPlatform.sendingQueue.persistentQueue.enabled`` to enable support for logs, metrics and traces.
+
+By default, data is persisted in the ``/var/addon/splunk/exporter_queue`` directory. To override this path, use the ``splunkPlatform.sendingQueue.persistentQueue.storagePath`` option.
+
+Check the :new-page:`Data Persistence in the OpenTelemetry Collector ` for a detailed explanation.
+
+.. note:: Data can only be persisted for agent daemonsets.
+
+Config examples
+-----------------------------------------------------------------------------
+
+Use the following in values.yaml to disable data persistence for logs, metrics, or traces:
+
+.. code-block:: yaml
+
+   agent:
+     config:
+       exporters:
+         splunk_hec/platform_logs:
+           sending_queue:
+             storage: null
+
+or
+
+.. code-block:: yaml
+
+   agent:
+     config:
+       exporters:
+         splunk_hec/platform_metrics:
+           sending_queue:
+             storage: null
+
+or
+
+.. code-block:: yaml
+
+   agent:
+     config:
+       exporters:
+         splunk_hec/platform_traces:
+           sending_queue:
+             storage: null
+
+Support for persistent queue
+-----------------------------------------------------------------------------
+
+The following support is offered:
+
+Support for ``GKE/Autopilot`` and ``EKS/Fargate``
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Persistent buffering is not supported for ``GKE/Autopilot`` and ``EKS/Fargate``, since the directory needs to be mounted via ``hostPath``.
+
+Also, ``GKE/Autopilot`` and ``EKS/Fargate`` don't allow volume mounts, as Splunk Observability Cloud doesn't manage the underlying infrastructure.
+
+Refer to :new-page:`aws/fargate ` and :new-page:`gke/autopilot ` for more information.
+
+Gateway support
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+The filestorage extension acquires an exclusive lock for the queue directory.
+
+It's not possible to run persistent buffering if there are multiple replicas of a pod and ``gateway`` runs 3 replicas by default.
+
+Even if support is somehow provided, only one of the pods will be able to acquire the lock and run, while the others will be blocked and unable to operate.
+
+Cluster Receiver support
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+The Cluster receiver is a 1-replica deployment of the OpenTelemetry Collector. Because any available node can be selected by the Kubernetes control plane to run the cluster receiver pod (unless ``clusterReceiver.nodeSelector`` is explicitly set to pin the pod to a specific node), ``hostPath`` or ``local`` volume mounts wouldn't work for such environments.
+
+Data persistence is currently not applicable to the Kubernetes cluster metrics and Kubernetes events.
\ No newline at end of file

From cc07901d8f2ff3eef2a37467290b1e7bbbc216dd Mon Sep 17 00:00:00 2001
From: Anna Urbiztondo
Date: Fri, 10 Nov 2023 15:35:18 +0100
Subject: [PATCH 2/4] Headers, rewording

---
 .../kubernetes-config-advanced.rst | 18 ++++++++++--------
 1 file changed, 10 insertions(+), 8 deletions(-)

diff --git a/gdi/opentelemetry/kubernetes-config-advanced.rst b/gdi/opentelemetry/kubernetes-config-advanced.rst
index b476bad67..1086dfea2 100644
--- a/gdi/opentelemetry/kubernetes-config-advanced.rst
+++ b/gdi/opentelemetry/kubernetes-config-advanced.rst
@@ -308,10 +308,8 @@ Support of Pod Security Policies (PSP) was removed in Kubernetes 1.25. If you st
 
 
 .. code-block:: yaml
-
    helm install my-splunk-otel-collector -f my_values.yaml splunk-otel-collector-chart/splunk-otel-collector
 
-
 Configure data persistence queues
 ==================================================
 
@@ -330,6 +328,9 @@ Config examples
 
 Use the following in values.yaml to disable data persistence for logs, metrics, or traces:
 
+Logs
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
 .. code-block:: yaml
 
    agent:
@@ -339,7 +340,9 @@ Use the following in values.yaml to disable data persistence for logs, metrics
            sending_queue:
              storage: null
 
-or
+
+Metrics
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 
 .. code-block:: yaml
 
@@ -350,7 +353,8 @@ or
            sending_queue:
              storage: null
 
-or
+Traces
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 
 .. code-block:: yaml
 
@@ -380,13 +384,11 @@ Gateway support
 
 The filestorage extension acquires an exclusive lock for the queue directory.
 
-It's not possible to run persistent buffering if there are multiple replicas of a pod and ``gateway`` runs 3 replicas by default.
-
-Even if support is somehow provided, only one of the pods will be able to acquire the lock and run, while the others will be blocked and unable to operate.
+It's not possible to run persistent buffering if there are multiple replicas of a pod. Even if support could be provided, only one of the pods will be able to acquire the lock and run, while the others will be blocked and unable to operate.
 
 Cluster Receiver support
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 
-The Cluster receiver is a 1-replica deployment of the OpenTelemetry Collector. Because any available node can be selected by the Kubernetes control plane to run the cluster receiver pod (unless ``clusterReceiver.nodeSelector`` is explicitly set to pin the pod to a specific node), ``hostPath`` or ``local`` volume mounts wouldn't work for such environments.
+The Cluster receiver is a 1-replica deployment of the OpenTelemetry Collector. Because the Kubernetes control plane can select any available node to run the cluster receiver pod (unless ``clusterReceiver.nodeSelector`` is explicitly set to pin the pod to a specific node), ``hostPath`` or ``local`` volume mounts wouldn't work for such environments.
 
 Data persistence is currently not applicable to the Kubernetes cluster metrics and Kubernetes events.
\ No newline at end of file

From 34ae5d6134030d9bf6192a0cb3bbbfe25923d79b Mon Sep 17 00:00:00 2001
From: Tracey Carter
Date: Wed, 15 Nov 2023 14:19:08 -0800
Subject: [PATCH 3/4] removed sentence about script

---
 logs/scp.rst | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/logs/scp.rst b/logs/scp.rst
index b8ca7d825..802fa9136 100644
--- a/logs/scp.rst
+++ b/logs/scp.rst
@@ -117,7 +117,7 @@ In Splunk Cloud Platform, follow the instructions in the guided setup for the in
 Submit a support ticket
 ===================================================================
 
-If you were not able to run the script in step 3d in the preceeding section, you may submit a support ticket from your Splunk Cloud Platform instance to do this on your behalf. Submit a ticket to Splunk Support to configure your Splunk Cloud Platform instance's IP allow list. Configuring your allow list properly opens your Splunk Cloud Platform instance management port to Log Observer Connect, which can then search your Splunk Cloud Platform instance log data. After Splunk Support prepares your Splunk Cloud Platform instance, you can securely create a connection to Log Observer Connect.
+If you were not able to independently secure a connection to your Splunk Cloud Platform instance in step 8 in the previous section, you may submit a support ticket from your Splunk Cloud Platform instance to do this on your behalf. Submit a ticket to Splunk Support to configure your Splunk Cloud Platform instance's IP allow list. Configuring your allow list properly opens your Splunk Cloud Platform instance management port to Log Observer Connect, which can then search your Splunk Cloud Platform instance log data. After Splunk Support prepares your Splunk Cloud Platform instance, you can securely create a connection to Log Observer Connect.
 
 To submit a support ticket, follow these steps:
 

From dda466cdfbffac20097cfe287bf8669ce737d3a3 Mon Sep 17 00:00:00 2001
From: Anna U <104845867+aurbiztondo-splunk@users.noreply.github.com>
Date: Thu, 16 Nov 2023 07:18:16 +0100
Subject: [PATCH 4/4] Update gdi/opentelemetry/kubernetes-config-advanced.rst

Co-authored-by: jvoravong <47871238+jvoravong@users.noreply.github.com>
---
 gdi/opentelemetry/kubernetes-config-advanced.rst | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/gdi/opentelemetry/kubernetes-config-advanced.rst b/gdi/opentelemetry/kubernetes-config-advanced.rst
index 1086dfea2..4bcb80e21 100644
--- a/gdi/opentelemetry/kubernetes-config-advanced.rst
+++ b/gdi/opentelemetry/kubernetes-config-advanced.rst
@@ -315,7 +315,7 @@ Configure data persistence queues
 
 Without any configuration, data is queued in memory only. When data cannot be sent, it's retried a few times for up to 5 minutes by default, and then dropped. If, for any reason, the Collector is restarted in this period, the queued data will be gone.
 
-If you want the queue to be persisted on disk if the Collector restarts, set ``splunkPlatform.sendingQueue.persistentQueue.enabled`` to enable support for logs, metrics and traces.
+If you want the queue to be persisted on disk if the Collector restarts, set ``splunkPlatform.sendingQueue.persistentQueue.enabled=true`` to enable support for logs, metrics and traces.
 
 By default, data is persisted in the ``/var/addon/splunk/exporter_queue`` directory. To override this path, use the ``splunkPlatform.sendingQueue.persistentQueue.storagePath`` option.
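
The two chart options the patches above rely on, ``splunkPlatform.sendingQueue.persistentQueue.enabled`` and ``splunkPlatform.sendingQueue.persistentQueue.storagePath``, can also be written out in ``values.yaml``. The following is a minimal illustrative sketch, not part of the patches themselves: it assumes the usual Helm convention that a dotted ``--set`` path maps to nested YAML keys, and it reuses the default storage path named in the patch.

.. code-block:: yaml

   # Illustrative values.yaml sketch: turn on the on-disk sending queue
   # for the Splunk Platform exporters on the agent daemonset.
   splunkPlatform:
     sendingQueue:
       persistentQueue:
         enabled: true
         # Optional override; the default named in the patch is
         # /var/addon/splunk/exporter_queue.
         storagePath: /var/addon/splunk/exporter_queue

You would apply it with the command the page already shows, for example ``helm install my-splunk-otel-collector -f my_values.yaml splunk-otel-collector-chart/splunk-otel-collector``, or pass ``--set splunkPlatform.sendingQueue.persistentQueue.enabled=true`` directly on the Helm command line.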