
CreateTargetGroup fails: ValidationError: Member must have value greater than or equal to 1 #4014

Open
lunderhage opened this issue Jan 10, 2025 · 4 comments
Labels
kind/support Categorizes issue or PR as a support question.

Comments

@lunderhage

Describe the bug
Something in this chart causes target group creation to fail for the entire ingress group. None of the ingresses handled by the controller are reconciled while the ingress from the chart below is present.

These events are flooding the ingresses:

  Warning  FailedDeployModel  11m (x9 over 22m)  ingress  (combined from similar events): Failed deploy model due to operation error Elastic Load Balancing v2: CreateTargetGroup, https response error StatusCode: 400, RequestID: 1e452797-903f-4147-b8da-55fa4fb7f6b0, api error ValidationError: 1 validation error detected: Value '0' at 'port' failed to satisfy constraint: Member must have value greater than or equal to 1
  Warning  FailedDeployModel  39s                ingress  Failed deploy model due to operation error Elastic Load Balancing v2: CreateTargetGroup, https response error StatusCode: 400, RequestID: c79f78bf-0330-40e1-8c99-66a772e0bc8b, api error ValidationError: 1 validation error detected: Value '0' at 'port' failed to satisfy constraint: Member must have value greater than or equal to 1

These events look the same on every ingress handled by the controller.

I have checked that the target ports correspond correctly at every level:
Ingress:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    alb.ingress.kubernetes.io/actions.ssl-redirect: "443"
    alb.ingress.kubernetes.io/backend-protocol: HTTP
    alb.ingress.kubernetes.io/group.name: mydomain-ingress
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS":443}]'
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/ssl-redirect: "443"
    alb.ingress.kubernetes.io/tags: Environment=dev,Team=test
    external-dns.alpha.kubernetes.io/hostname: mydomain.se
    meta.helm.sh/release-name: kubetail
    meta.helm.sh/release-namespace: kubetail
  creationTimestamp: "2025-01-10T19:39:34Z"
  finalizers:
  - group.ingress.k8s.aws/mydomain-ingress
  generation: 1
  labels:
    app.kubernetes.io/component: server
    app.kubernetes.io/instance: kubetail
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: kubetail
    app.kubernetes.io/version: 0.8.3
    helm.sh/chart: kubetail-0.8.7
  name: kubetail-server
  namespace: kubetail
  resourceVersion: "190783523"
  uid: e7f75922-6f5b-4c46-ac86-f0b287f45878
spec:
  ingressClassName: alb
  rules:
  - host: logs.mydomain.se
    http:
      paths:
      - backend:
          service:
            name: kubetail-server
            port:
              name: kubetail-server
        path: /
        pathType: Prefix
status:
  loadBalancer: {}

Service:

apiVersion: v1
kind: Service
metadata:
  annotations:
    meta.helm.sh/release-name: kubetail
    meta.helm.sh/release-namespace: kubetail
  creationTimestamp: "2025-01-10T19:39:33Z"
  labels:
    app.kubernetes.io/component: server
    app.kubernetes.io/instance: kubetail
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: kubetail
    app.kubernetes.io/version: 0.8.3
    helm.sh/chart: kubetail-0.8.7
  name: kubetail-server
  namespace: kubetail
  resourceVersion: "190783403"
  uid: 64d217ed-a322-40fe-a279-e9fea84546c5
spec:
  clusterIP: 172.20.97.92
  clusterIPs:
  - 172.20.97.92
  internalTrafficPolicy: Cluster
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  ports:
  - appProtocol: http
    name: kubetail-server
    port: 7500
    protocol: TCP
    targetPort: kubetail-server
  selector:
    app.kubernetes.io/component: server
    app.kubernetes.io/instance: kubetail
    app.kubernetes.io/name: kubetail
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}

Deployment:

apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
    meta.helm.sh/release-name: kubetail
    meta.helm.sh/release-namespace: kubetail
  creationTimestamp: "2025-01-10T19:39:34Z"
  generation: 1
  labels:
    app.kubernetes.io/component: server
    app.kubernetes.io/instance: kubetail
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: kubetail
    app.kubernetes.io/version: 0.8.3
    helm.sh/chart: kubetail-0.8.7
  name: kubetail-server
  namespace: kubetail
  resourceVersion: "190784007"
  uid: 60eda1a3-5d9a-4727-89a7-30c1cb260f81
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 5
  selector:
    matchLabels:
      app.kubernetes.io/component: server
      app.kubernetes.io/instance: kubetail
      app.kubernetes.io/name: kubetail
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      annotations:
        checksum/config: c870e32ae194bf0b322403a5d0183a298bd85dcb604fc80802c5dd574bf30369
        checksum/secret: 93f88dda5cfa12676164e96cc5a8a32e70c96fc0723e437dd4498f95c733ce80
      creationTimestamp: null
      labels:
        app.kubernetes.io/component: server
        app.kubernetes.io/instance: kubetail
        app.kubernetes.io/managed-by: Helm
        app.kubernetes.io/name: kubetail
        app.kubernetes.io/version: 0.8.3
        helm.sh/chart: kubetail-0.8.7
    spec:
      automountServiceAccountToken: true
      containers:
      - args:
        - --config=/etc/kubetail/config.yaml
        envFrom:
        - secretRef:
            name: kubetail-server
        image: docker.io/kubetail/kubetail-server:0.9.3
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /healthz
            port: kubetail-server
            scheme: HTTP
          initialDelaySeconds: 30
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 30
        name: kubetail-server
        ports:
        - containerPort: 7500
          name: kubetail-server
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /healthz
            port: kubetail-server
            scheme: HTTP
          initialDelaySeconds: 30
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 30
        resources: {}
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            drop:
            - ALL
          readOnlyRootFilesystem: true
          runAsGroup: 1000
          runAsUser: 1000
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /etc/kubetail
          name: config
          readOnly: true
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      serviceAccount: kubetail-server
      serviceAccountName: kubetail-server
      terminationGracePeriodSeconds: 30
      volumes:
      - configMap:
          defaultMode: 420
          name: kubetail-server
        name: config
status:
  availableReplicas: 1
  conditions:
  - lastTransitionTime: "2025-01-10T19:40:14Z"
    lastUpdateTime: "2025-01-10T19:40:14Z"
    message: Deployment has minimum availability.
    reason: MinimumReplicasAvailable
    status: "True"
    type: Available
  - lastTransitionTime: "2025-01-10T19:39:34Z"
    lastUpdateTime: "2025-01-10T19:40:14Z"
    message: ReplicaSet "kubetail-server-7d99865fff" has successfully progressed.
    reason: NewReplicaSetAvailable
    status: "True"
    type: Progressing
  observedGeneration: 1
  readyReplicas: 1
  replicas: 1
  updatedReplicas: 1

Everything looks fine. I can port-forward to the service and it responds as expected.
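For example, using the service port and the /healthz path from the manifests above, the check was along these lines:

kubectl -n kubetail port-forward svc/kubetail-server 7500:7500
curl -i http://localhost:7500/healthz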

Steps to reproduce
Install the kubetail chart:

helm repo add kubetail https://kubetail-org.github.io/helm-charts/
helm repo update
helm -n kubetail upgrade --install kubetail kubetail/kubetail -f kubetail-values.yaml 

kubetail-values.yaml:

kubetail:
  server:
    ingress:
      enabled: true
      className: alb
      annotations:
        alb.ingress.kubernetes.io/actions.ssl-redirect: "443"
        alb.ingress.kubernetes.io/group.name: mydomain-ingress
        alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS":443}]'
        alb.ingress.kubernetes.io/scheme: internet-facing
        alb.ingress.kubernetes.io/ssl-redirect: "443"
        alb.ingress.kubernetes.io/tags: Environment=dev,Team=test
        external-dns.alpha.kubernetes.io/hostname: mydomain.se
        alb.ingress.kubernetes.io/backend-protocol: HTTP
      rules:
        - host: logs.mydomain.se
          http:
            paths:
              - path: /
                pathType: Prefix

Expected outcome
Ingress is reconciled properly and no other ingress is affected.

Environment

  • AWS Load Balancer controller version: v2.11.0 (chart 1.11.0)
  • Kubernetes version: 1.31
  • Using EKS (yes/no), if so version? Yes, v1.31.4-eks-2d5f260

Additional Context:
This might of course be a misconfiguration on my part or in the kubetail chart, but I still don't think an error like this should halt reconciliation of every ingress.

@zac-nixon
Collaborator

Seems to be a duplicate of this:
#3969

Copying the same answer:
Hi! The issue is that you're using a ClusterIP service with an instance target type. With the instance target type (the default), the controller registers targets on the service's NodePort; a ClusterIP service has no NodePort, so the target group port resolves to 0, which is exactly what the ValidationError is rejecting.

https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.10/guide/ingress/annotations/#target-type

To use a ClusterIP service, you must use the ip target type.

alb.ingress.kubernetes.io/target-type: ip
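For the values file in this issue, that would look something like this (assuming the kubetail chart passes ingress annotations through to the Ingress verbatim, which the dump above suggests it does):

kubetail:
  server:
    ingress:
      annotations:
        alb.ingress.kubernetes.io/target-type: ip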

@zac-nixon zac-nixon added the kind/support Categorizes issue or PR as a support question. label Jan 10, 2025
@zhangbc97

zhangbc97 commented Jan 13, 2025

@zac-nixon Is there a way to configure ingressClass/IngressClassParams so that it can automatically add this annotation for all Ingresses associated with that ingressClass?
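(The closest thing I've found so far is controller-wide rather than per-IngressClass: the controller's Helm chart appears to expose a defaultTargetType value, mapping to the --default-target-type flag, e.g.

defaultTargetType: ip

but if I'm reading it right, that changes the default for every ingress the controller handles, not just those of one ingress class.)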

@lunderhage
Author

Seems to be a duplicate of this: #3969

To use a ClusterIP service, you must use the ip target type.

alb.ingress.kubernetes.io/target-type: ip

Thank you! I did not realize the issue you linked was the same problem I was hitting. Still, a target-type mismatch should not break target-group generation for the whole group, so I believe the controller should handle this by emitting an event on the offending ingress instead.

@zac-nixon
Collaborator

I think at the very least I should improve the error message, as we've gotten a couple of issues about this same mismatch. To address your point about one bad ingress blocking the whole reconciliation: I agree in principle, but technically it's very hard to implement, since one member of an ingress group can conflict with, or change the behavior of, other members of the group (for example, by changing group-level settings such as the scheme or listen-ports that every member shares).
