So yesterday was patch day and we received a customer complaint:
Checking the nodes, we can see that k8s-ibk-worker2 is still cordoned and that k8s-ibk-worker[13] were not patched at all:
previous logs of daemon set pod on k8s-ibk-worker2:
I0510 03:42:33.300276 1 main.go:145] using InClusterConfig
2024-05-10T03:42:33.300746721Z I0510 03:42:33.300707 1 main.go:121] using namespace wd-suss for the Lease
E0510 03:42:43.302185 1 main.go:43] Get "https://10.43.0.1:443/api/v1/nodes": net/http: TLS handshake timeout
current logs of daemon set pod on k8s-ibk-worker2:
I0510 03:47:57.129368 1 main.go:145] using InClusterConfig
2024-05-10T03:47:57.129846622Z I0510 03:47:57.129805 1 main.go:121] using namespace wd-suss for the Lease
2024-05-10T03:47:57.152218409Z I0510 03:47:57.152112 1 suss.go:82] "node k8s-ibk-worker2 found\n"
2024-05-10T03:47:57.152240565Z I0510 03:47:57.152142 1 main.go:58] listen on localhost:9993
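Side note: the klog timestamps above can be used to measure how long the pod was unable to reach the API server, from the TLS handshake timeout at 03:42:43 to the successful node lookup at 03:47:57. A minimal sketch (klog timestamps omit the year, so 2024 is assumed here from the runtime timestamps):

```python
from datetime import datetime, timedelta

# Two klog timestamps taken from the log excerpts above
# (severity letter + mmdd, then time of day with microseconds).
failure = "E0510 03:42:43.302185"   # TLS handshake timeout
recovery = "I0510 03:47:57.129368"  # pod starts up again

def parse_klog(stamp: str) -> datetime:
    # Drop the leading severity letter (I/W/E), prepend the assumed year,
    # and parse "Ymmdd hh:mm:ss.ffffff".
    return datetime.strptime("2024" + stamp[1:], "%Y%m%d %H:%M:%S.%f")

outage = parse_klog(recovery) - parse_klog(failure)
print(outage)  # roughly five minutes between failure and recovery
```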
current logs of daemon set pod on k8s-ibk-worker1: suss-kcjc9_suss.log
current logs of daemon set pod on k8s-ibk-worker3: suss-64qd4_suss.log
@gprossliner any idea what could be going on?
I only mitigated it via:
Interesting side fact: CRM shows no historical data before 05:30, and no traffic reached online-aufladen either:
Incoming transactions:
Monitoring: