Agent Environment
7.59.0
Describe what happened:
When running the Datadog agent as a sidecar in an ECS Fargate task that starts and exits within a short period of time, metrics for the containers are not available in Datadog, even if the other containers in the task wait for the Datadog agent to be healthy (agent health) before they themselves start.
Describe what you expected:
I would expect my containers to always appear in the Containers view in Datadog, and for at least some metrics to be submitted for any task that has a Datadog agent running, even if the task's runtime is brief. It is understandable that checks that run on an interval may never have run by the time the task shuts down, but today there is no trace of the container ever having existed.
Steps to reproduce the issue:
Deploy a task with the Datadog agent in one container and another container that just starts up and then exits in <1 minute.
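For reference, a minimal sketch of the kind of task definition that reproduces this, registered via boto3. The container names, images, API key handling, sleep duration, and role ARN are illustrative assumptions, not the actual values; the real task definition is attached to the support request below.

```python
# Minimal sketch of a Fargate task that reproduces the issue (assumed values).
# A short-lived container waits for the Datadog agent sidecar to report
# HEALTHY, runs for a few seconds, and exits.
import boto3

ecs = boto3.client("ecs")

ecs.register_task_definition(
    family="short-lived-with-datadog-agent",
    requiresCompatibilities=["FARGATE"],
    networkMode="awsvpc",
    cpu="256",
    memory="1024",
    executionRoleArn="arn:aws:iam::123456789012:role/ecsTaskExecutionRole",  # placeholder
    containerDefinitions=[
        {
            "name": "datadog-agent",
            "image": "public.ecr.aws/datadog/agent:7.59.0",
            "essential": False,
            "environment": [
                {"name": "DD_API_KEY", "value": "<redacted>"},
                {"name": "ECS_FARGATE", "value": "true"},
            ],
            # The other container gates its startup on this health check ("agent health").
            "healthCheck": {
                "command": ["CMD-SHELL", "agent health"],
                "interval": 10,
                "timeout": 5,
                "retries": 3,
                "startPeriod": 15,
            },
        },
        {
            "name": "short-lived-app",
            "image": "public.ecr.aws/docker/library/busybox:latest",
            "essential": True,
            # Start only after the agent is healthy, then exit well under a minute later.
            "dependsOn": [{"containerName": "datadog-agent", "condition": "HEALTHY"}],
            "command": ["sh", "-c", "echo working; sleep 10"],
        },
    ],
)
```

With the agent marked as non-essential, the task stops as soon as the short-lived container exits, which is when the missing containers and metrics become apparent.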
Additional environment details (Operating System, Cloud provider, etc):
In this case I am deploying the agent as a sidecar in an ECS Fargate task, but I would expect the same behavior in a Kubernetes pod or any other environment that relies on container autodiscovery. The key factor seems to be that the total runtime of the containers is brief enough that the agent does not have time to discover the containers, collect metrics, and submit them to Datadog.
I have also opened support request #1920828 for this issue, which contains more specific information such as the task definition used.