Description
Currently, istiod enforces a 5-minute retention time for all stats metrics (i.e., if a stats counter does not increase within 5 minutes, it is dropped).
I would also like to ask you to increase the global rotation interval to 6 hours. The 5-minute rotation interval often proves too aggressive (it even drops time series for destinations that are in CrashLoopBackOff and waiting for 10 minutes).
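For illustration only, this is one possible shape of such a change. It is a sketch, not the module's actual configuration: the values path (telemetry.v2.prometheus.configOverride) and the metric_rotation_interval field are assumptions based on the Istio proxy stats extension and should be verified against the Istio version shipped by the module.

```yaml
# Sketch (assumed field and path names): raise the stats rotation interval
# from the current 5 minutes to 6 hours for sidecars and gateways.
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  values:
    telemetry:
      v2:
        prometheus:
          configOverride:
            inboundSidecar:
              metric_rotation_interval: 21600s   # 6 hours
            outboundSidecar:
              metric_rotation_interval: 21600s   # 6 hours
            gateway:
              metric_rotation_interval: 21600s   # 6 hours
```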
Reasons
We identified a negative side effect of this rotation on our metrics collection and monitoring, specific to VictoriaMetrics streaming aggregation and the increase() MetricsQL function. With a 6-hour rotation, we would hopefully still avoid the OOM and scrapeSizeExceeded issues we have seen with some clusters, while making the streaming aggregation more stable.
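To make the interaction concrete, below is a minimal sketch of the kind of VictoriaMetrics stream aggregation rule that is affected; the metric, labels, and interval are illustrative, not our actual configuration. A series that is rotated out after 5 minutes of inactivity and later re-created starts again from zero, so the aggregation and increase() see it as a brand-new counter.

```yaml
# Illustrative vmagent stream aggregation rule (not the real config):
# aggregate the Istio request counter per destination workload.
# Series dropped by the 5-minute rotation re-appear starting at zero and are
# treated as new series by the aggregation output.
- match: 'istio_requests_total{reporter="destination"}'
  interval: 1m
  by: [destination_workload, destination_workload_namespace]
  outputs: [increase]
```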
Contact
@ebensom
ToDos [Developer]
PRs
ACs [PO]
DoD [Developer & Reviewer]
Verify if the solution works for both open-source Kyma and SAP BTP, Kyma runtime.
If the default configuration of Istio Operator has been changed, you performed a manual upgrade test to verify that the change can be rolled out correctly.
Add release notes.
Attachments
Follow-Up Issues