Describe the bug
I installed the fluentd-elasticsearch-10.0.1 Helm chart, and after some time the fluentd pods keep restarting, with events stating that the liveness probe has failed.
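For reference, the restarts and probe failures can be confirmed from the pod events and the logs of the killed container. A minimal sketch, with `<namespace>` and `<fluentd-pod>` as placeholders for this deployment:

```shell
# Placeholders: replace <namespace> and <fluentd-pod> with the actual names.
kubectl -n <namespace> get pods                       # RESTARTS column keeps increasing
kubectl -n <namespace> describe pod <fluentd-pod>     # events include "Unhealthy ... Liveness probe failed"
kubectl -n <namespace> logs <fluentd-pod> --previous  # output of the container that was restarted
```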
Version of Helm and Kubernetes:
Helm Version:
Kubernetes Version:
Which version of the chart:
fluentd-elasticsearch-10.0.1
What happened:
The pods run fine for a while after installation, but then restart repeatedly; the pod events state that the liveness probe has failed.
What you expected to happen:
The fluentd pods run without liveness probe failures and without restarting.
How to reproduce it (as minimally and precisely as possible):
This could be something like:
values.yaml (only put values which differ from the defaults)
configMaps:
  useDefaults:
    containersInputConf: false
    systemInputConf: false
elasticsearch:
  auth:
    enabled: true
    password: admin
    user: admin
  hosts:
    - opensearch-cluster-master.seldon-logs.svc.cluster.local:9200
  logstash:
    enabled: true
    prefix: kubernetes_cluster
  scheme: https
  sslVerify: false
extraConfigMaps:
  containers.input.conf: |-
    <source>
      @id fluentd-containers.log
      @type tail
      path /var/log/containers/*.log
      pos_file /var/log/containers.log.pos
      tag raw.kubernetes.*
      read_from_head true
      <parse>
        @type multi_format
        <pattern>
          format json
          time_key time
          time_format %Y-%m-%dT%H:%M:%S.%NZ
        </pattern>
        <pattern>
          format /^(?<time>.+) (?<stream>stdout|stderr) [^ ]* (?<log>.*)$/
          time_format %Y-%m-%dT%H:%M:%S.%N%:z
        </pattern>
      </parse>
    </source>
    # Detect exceptions in the log output and forward them as one log entry.
    <match raw.kubernetes.**>
      @id raw.kubernetes
      @type detect_exceptions
      remove_tag_prefix raw
      message log
      stream stream
      multiline_flush_interval 5
      max_bytes 500000
      max_lines 1000
    </match>
    # Concatenate multi-line logs.
    <filter **>
      @id filter_concat
      @type concat
      key message
      multiline_end_regexp /\n$/
      separator ""
    </filter>
    # Enrich records with Kubernetes metadata.
    <filter kubernetes.**>
      @id filter_kubernetes_metadata
      @type kubernetes_metadata
    </filter>
    # Fix JSON fields in Elasticsearch.
    <filter kubernetes.**>
      @id filter_parser
      @type parser
      key_name log
      reserve_data true
      remove_key_name_field true
      <parse>
        @type multi_format
        <pattern>
          format json
        </pattern>
        <pattern>
          format none
        </pattern>
      </parse>
    </filter>
    # Exclude kube-system.
    <match kubernetes.var.log.containers.**kube-system**.log>
      @type null
    </match>
    # Filter to only records with label fluentd=true.
    <filter kubernetes.**>
      @type grep
      <regexp>
        key $.kubernetes.labels.fluentd
        pattern true
      </regexp>
    </filter>
    # Drop istio-proxy sidecar logs.
    <filter kubernetes.**>
      @type grep
      <exclude>
        key $.kubernetes.container_name
        pattern istio-proxy
      </exclude>
    </filter>
resources:
  limits:
    memory: 200Mi
  requests:
    cpu: 100m
    memory: 200Mi
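If the OpenSearch output itself is healthy, a possible workaround is to relax the probe and raise the memory limit. This is a sketch only: the livenessProbe keys below are assumed from the kiwigrid/kokuwa fluentd-elasticsearch chart defaults and should be verified against this chart's default values.yaml before use:

```yaml
# Sketch: key names are assumed from the chart's default values.yaml; verify before applying.
livenessProbe:
  enabled: true              # set to false temporarily to confirm the probe is the trigger
  initialDelaySeconds: 600   # give fluentd time to replay position files on startup
  periodSeconds: 60
resources:
  limits:
    memory: 512Mi            # 200Mi is tight for fluentd; OOM kills also surface as restarts
  requests:
    cpu: 100m
    memory: 512Mi
```

For context, the default liveness probe in this chart family fails when fluentd's buffer files stop being flushed, so connection or authentication errors against the OpenSearch endpoint often surface as liveness failures first; it is worth checking the pod logs for output-plugin errors before relaxing the probe.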