Describe the bug
Something in this chart causes reconciliation of the entire ingress group to fail: none of the ingresses handled by the controller are reconciled while the ingress from the chart below is present.
These events are flooding the ingresses:
```
Warning FailedDeployModel 11m (x9 over 22m) ingress (combined from similar events): Failed deploy model due to operation error Elastic Load Balancing v2: CreateTargetGroup, https response error StatusCode: 400, RequestID: 1e452797-903f-4147-b8da-55fa4fb7f6b0, api error ValidationError: 1 validation error detected: Value '0' at 'port' failed to satisfy constraint: Member must have value greater than or equal to 1
Warning FailedDeployModel 39s ingress Failed deploy model due to operation error Elastic Load Balancing v2: CreateTargetGroup, https response error StatusCode: 400, RequestID: c79f78bf-0330-40e1-8c99-66a772e0bc8b, api error ValidationError: 1 validation error detected: Value '0' at 'port' failed to satisfy constraint: Member must have value greater than or equal to 1
```
These events look the same on every ingress handled by the controller.
I have checked that the target ports correspond correctly at every level:
Ingress:
Service:
Deployment:
All looks fine. I can port-forward to the service and it works fine.
Steps to reproduce
Install the kubetail chart:
kubetail-values.yaml:
Expected outcome
Ingress is reconciled properly and no other ingress is affected.
Environment
Using EKS (yes/no), if so version? Yes, v1.31.4-eks-2d5f260
Additional Context:
This might of course be a misconfiguration on my part or in the kubetail chart, but I still don't think an error like this should kill reconciliation of every ingress.
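For reference, a minimal sketch of the suspected combination, with hypothetical names: the chart exposes a ClusterIP Service, and without a target-type annotation the controller defaults to the instance target type, which registers targets on the Service's NodePort; a ClusterIP Service allocates none, so the port sent to CreateTargetGroup resolves to 0.

```yaml
# Hypothetical ClusterIP Service of the kind a chart like kubetail creates.
# With the controller's default target-type "instance", targets are
# registered on the Service's NodePort; a ClusterIP Service allocates
# no NodePort, so the port passed to CreateTargetGroup resolves to 0.
apiVersion: v1
kind: Service
metadata:
  name: kubetail          # hypothetical name
spec:
  type: ClusterIP         # no NodePort is allocated for this type
  selector:
    app: kubetail
  ports:
    - port: 80
      targetPort: 8080
```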
To use a ClusterIP service, you must use the ip target type:

```
alb.ingress.kubernetes.io/target-type: ip
```
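A minimal sketch of where that annotation goes; the resource names here are hypothetical:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: kubetail                                 # hypothetical name
  annotations:
    # Route traffic directly to pod IPs instead of node ports,
    # which is what ClusterIP services require.
    alb.ingress.kubernetes.io/target-type: ip
spec:
  ingressClassName: alb                          # assumes the class is named "alb"
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: kubetail                   # hypothetical ClusterIP service
                port:
                  number: 80
```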
@zac-nixon Is there a way to configure IngressClass/IngressClassParams so that it automatically adds this annotation to all Ingresses associated with that IngressClass?
Thank you! I did not realize that the issue you linked described the same problem I had. In any case, a target-type mismatch should not break target-group generation for the whole group, so I still believe the controller should handle this and emit an event on the affected ingress instead.
I think at the very least I should make the error message better, as we've gotten a couple of issues for this same mismatch. On your point about one bad ingress blocking the whole reconciliation: I agree in principle, but technically it's very hard to implement, because any one member of the ingress group could conflict with or change the behavior of the other members of the group.
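For context, ingresses opt into a shared group (and a shared ALB) via the group annotation. A minimal sketch, with hypothetical names, of why members are modeled, and therefore fail, together:

```yaml
# Two hypothetical Ingresses joined into one group. The controller
# compiles the whole group into a single model and deploys it as one
# unit, so a validation error from any member (such as the port-0
# target group above) blocks the deploy for every member.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-a
  annotations:
    alb.ingress.kubernetes.io/group.name: shared-alb
spec:
  ingressClassName: alb
  rules:
    - host: a.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app-a
                port:
                  number: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-b
  annotations:
    alb.ingress.kubernetes.io/group.name: shared-alb
spec:
  ingressClassName: alb
  rules:
    - host: b.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app-b
                port:
                  number: 80
```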