
[bitnami/postgresql-ha] “kind does not match between main” errors in Postgres-HA (bitnami) + Pgpool setup #31697

Closed
dingerkingh opened this issue Feb 1, 2025 · 2 comments
Labels
postgresql-ha · solved · tech-issues (The user has a technical issue about an application) · triage (Triage is needed)

Comments


dingerkingh commented Feb 1, 2025

Name and Version

bitnami/postgres-ha

What architecture are you using?

amd64

What steps will reproduce the bug?

Issue Description

I am running the Helm chart to deploy a Postgres HA cluster with Pgpool. Everything appears to work initially, but I often see this error in the logs (or from the client):

ERROR: kind does not match between main ...

I’m unsure whether this indicates actual out-of-sync data or a separate issue (e.g., a protocol conflict or some mismatch in replication). Either way, the error recurs under normal operation, even with the default Helm setup.

Steps to Reproduce

1. Deploy the Helm chart.
2. Connect to the cluster via Pgpool or a client like psql/pgAdmin.
3. Perform writes, reads, or schema modifications (for example, creating and dropping tables).
4. Observe the logs from the Pgpool/PostgreSQL pods. The error appears as:
   kind does not match between main ...
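Step 4 can be automated. Below is a minimal Python sketch (a hypothetical helper, not part of the chart; the sample lines are illustrative rather than real Pgpool output) that counts occurrences of the error in captured log lines, e.g. piped from `kubectl logs`:

```python
import re

# Pattern for the Pgpool "kind mismatch" error seen in the logs.
PATTERN = re.compile(r"kind does not match between main")

def count_kind_mismatches(lines):
    """Return how many log lines contain the error."""
    return sum(1 for line in lines if PATTERN.search(line))

# Illustrative (fabricated) log lines for demonstration only.
sample = [
    "2025-02-01 12:00:01: ERROR: kind does not match between main(52) slot[1] (45)",
    "2025-02-01 12:00:02: LOG: statement: SELECT 1",
    "2025-02-01 12:00:03: ERROR: kind does not match between main(52) slot[2] (45)",
]
print(count_kind_mismatches(sample))  # → 2
```

Running this over the logs of each Pgpool replica makes it easier to see whether the error correlates with a particular pod or time window.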

What I Have Tried

Changing Pgpool settings for failover:
failover_on_backend_error = on/off
failover_on_backend_shutdown = on/off

Checking replication status via pg_stat_replication in the primary node (everything appears normal).
Ensuring the version of pgpool images and postgresql-repmgr images match or are compatible.
Despite these attempts, the error persists. I don’t see obvious replication problems (no significant replication lag). Even with no replication lag, I often see logs claiming a database doesn’t exist even though it appears when listing databases. The error’s presence is concerning, and I want to rule out a hidden replication or configuration mismatch.
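For the pg_stat_replication check, lag can be quantified by diffing the LSN columns (e.g. sent_lsn vs. replay_lsn). A minimal sketch, assuming the standard `X/Y` hexadecimal LSN text format; the result mirrors what `pg_wal_lsn_diff(sent_lsn, replay_lsn)` reports in bytes:

```python
def lsn_to_bytes(lsn: str) -> int:
    """Convert a PostgreSQL LSN like '0/3000060' to an absolute byte position.

    The part before the slash is the high 32 bits (hex); the part after
    is the low 32 bits (hex).
    """
    hi, lo = lsn.split("/")
    return (int(hi, 16) << 32) | int(lo, 16)

def replication_lag_bytes(sent_lsn: str, replay_lsn: str) -> int:
    """Byte lag between what the primary has sent and what a standby has replayed."""
    return lsn_to_bytes(sent_lsn) - lsn_to_bytes(replay_lsn)

print(replication_lag_bytes("0/3000060", "0/3000000"))  # → 96
```

A lag of (near) zero here backs up the claim that streaming replication itself is healthy, which points the investigation back toward Pgpool rather than repmgr/PostgreSQL.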

Questions / Requests for the Community

Has anyone encountered this specific “kind does not match between main” error in a Bitnami Postgres-HA + Pgpool setup and actually overcome it?

Does this chart actually work for anybody in production? If so, could you post your values.yaml so I could compare the differences?

Are you using any custom parameters or values?

No response

What is the expected behavior?

No response

What do you see instead?

It fails

Additional information

No response

@dingerkingh dingerkingh added the tech-issues The user has a technical issue about an application label Feb 1, 2025
@github-actions github-actions bot added the triage Triage is needed label Feb 1, 2025

dingerkingh commented Feb 1, 2025

Forgot to include my config as it stands now.

backup:
  cronjob:
    annotations: {}
    command:
      - /bin/sh
      - '-c'
      - >-
        pg_dumpall --clean --if-exists --load-via-partition-root
        --quote-all-identifiers --no-password
        --file=${PGDUMP_DIR}/pg_dumpall-$(date '+%Y-%m-%d-%H-%M').pgdump
    concurrencyPolicy: Allow
    containerSecurityContext:
      allowPrivilegeEscalation: false
      capabilities:
        drop:
          - ALL
      enabled: true
      readOnlyRootFilesystem: true
      runAsGroup: 1001
      runAsNonRoot: true
      runAsUser: 1001
      seLinuxOptions: {}
      seccompProfile:
        type: RuntimeDefault
    extraEnvVars: []
    extraEnvVarsCM: ''
    extraEnvVarsSecret: ''
    extraVolumeMounts: []
    extraVolumes: []
    failedJobsHistoryLimit: 1
    labels: {}
    nodeSelector: {}
    podSecurityContext:
      enabled: true
      fsGroup: 1001
      fsGroupChangePolicy: Always
      supplementalGroups: []
      sysctls: []
    restartPolicy: OnFailure
    schedule: '@daily'
    startingDeadlineSeconds: ''
    storage:
      accessModes:
        - ReadWriteOnce
      annotations: {}
      existingClaim: ''
      mountPath: /backup/pgdump
      resourcePolicy: ''
      size: 24Gi
      storageClass: ''
      subPath: ''
      volumeClaimTemplates:
        selector: {}
    successfulJobsHistoryLimit: 3
    timeZone: ''
    tolerations: []
    ttlSecondsAfterFinished: ''
  enabled: false
clusterDomain: cluster.local
commonAnnotations: {}
commonLabels: {}
diagnosticMode:
  args:
    - infinity
  command:
    - sleep
  enabled: false
extraDeploy: []
fullnameOverride: ''
global:
  compatibility:
    openshift:
      adaptSecurityContext: auto
  defaultStorageClass: ''
  imagePullSecrets: []
  imageRegistry: ''
  ldap:
    bindpw: ''
    existingSecret: ''
  pgpool:
    adminPassword: ''
    adminUsername: ''
    existingSecret: postgres-secrets
  postgresql:
    database: ''
    existingSecret: postgres-secrets
    password: ''
    repmgrDatabase: ''
    repmgrPassword: ''
    repmgrUsername: ''
    username: ''
  security:
    allowInsecureImages: false
  storageClass: ''
  cattle:
    systemProjectId: p-lwz7d
kubeVersion: ''
ldap:
  basedn: ''
  binddn: ''
  bindpw: ''
  bslookup: ''
  enabled: false
  existingSecret: ''
  nssInitgroupsIgnoreusers: root,nslcd
  scope: ''
  tlsReqcert: ''
  uri: ''
metrics:
  annotations:
    prometheus.io/port: '9187'
    prometheus.io/scrape: 'true'
  containerPorts:
    http: 9187
  customLivenessProbe: {}
  customMetrics: {}
  customReadinessProbe: {}
  customStartupProbe: {}
  enabled: false
  extraEnvVars: []
  extraEnvVarsCM: ''
  extraEnvVarsSecret: ''
  image:
    debug: false
    digest: ''
    pullPolicy: IfNotPresent
    pullSecrets: []
    registry: docker.io
    repository: bitnami/postgres-exporter
    tag: 0.16.0-debian-12-r2
  livenessProbe:
    enabled: true
    failureThreshold: 6
    initialDelaySeconds: 30
    periodSeconds: 10
    successThreshold: 1
    timeoutSeconds: 5
  podSecurityContext:
    enabled: true
    runAsGroup: 1001
    runAsNonRoot: true
    runAsUser: 1001
    seLinuxOptions: {}
    seccompProfile:
      type: RuntimeDefault
  readinessProbe:
    enabled: true
    failureThreshold: 6
    initialDelaySeconds: 5
    periodSeconds: 10
    successThreshold: 1
    timeoutSeconds: 5
  resources: {}
  resourcesPreset: nano
  service:
    clusterIP: ''
    enabled: true
    externalTrafficPolicy: Cluster
    loadBalancerIP: ''
    loadBalancerSourceRanges: []
    nodePorts:
      metrics: ''
    ports:
      metrics: 9187
    type: ClusterIP
  serviceMonitor:
    annotations: {}
    enabled: false
    honorLabels: false
    interval: ''
    jobLabel: ''
    labels: {}
    metricRelabelings: []
    namespace: ''
    relabelings: []
    scrapeTimeout: ''
    selector:
      prometheus: kube-prometheus
  startupProbe:
    enabled: false
    failureThreshold: 10
    initialDelaySeconds: 5
    periodSeconds: 10
    successThreshold: 1
    timeoutSeconds: 5
nameOverride: ''
namespaceOverride: ''
persistence:
  accessModes:
    - ReadWriteOnce
  annotations: {}
  enabled: true
  existingClaim: ''
  labels: {}
  mountPath: /bitnami/postgresql
  selector: {}
  size: 24Gi
  storageClass: ''
persistentVolumeClaimRetentionPolicy:
  enabled: false
  whenDeleted: Retain
  whenScaled: Retain
pgpool:
  adminPassword: ''
  adminUsername: admin
  affinity: {}
  args: []
  authenticationMethod: scram-sha-256
  automountServiceAccountToken: false
  childLifeTime: ''
  childMaxConnections: ''
  clientIdleLimit: ''
  clientMinMessages: error
  command: []
  configuration: |-
    connection_cache = true
    failover_on_backend_error = on
    failover_on_backend_shutdown = on
  configurationCM: ''
  connectionLifeTime: ''
  containerPorts:
    postgresql: 5432
  containerSecurityContext:
    allowPrivilegeEscalation: false
    capabilities:
      drop:
        - ALL
    enabled: true
    privileged: false
    readOnlyRootFilesystem: true
    runAsGroup: 1001
    runAsNonRoot: true
    runAsUser: 1001
    seLinuxOptions: {}
    seccompProfile:
      type: RuntimeDefault
  customLivenessProbe: {}
  customReadinessProbe: {}
  customStartupProbe: {}
  customUsers:
    passwords: ''
    usernames: ''
  customUsersSecret: ''
  disableLoadBalancingOnWrite: transaction
  existingSecret: ''
  extraEnvVars: []
  extraEnvVarsCM: ''
  extraEnvVarsSecret: ''
  extraVolumeMounts: []
  extraVolumes: []
  hostAliases: []
  image:
    debug: true
    digest: ''
    pullPolicy: IfNotPresent
    pullSecrets: []
    registry: docker.io
    repository: bitnami/pgpool
    tag: 4.5.5-debian-12-r2
  initContainers: []
  initdbScripts: {}
  initdbScriptsCM: ''
  initdbScriptsSecret: ''
  labels: {}
  lifecycleHooks: {}
  livenessProbe:
    enabled: true
    failureThreshold: 5
    initialDelaySeconds: 30
    periodSeconds: 10
    successThreshold: 1
    timeoutSeconds: 5
  logConnections: false
  logHostname: true
  logLinePrefix: ''
  logPerNodeStatement: false
  maxPool: '4'
  minReadySeconds: ''
  networkPolicy:
    allowExternal: true
    allowExternalEgress: true
    enabled: true
    extraEgress: []
    extraIngress: []
    ingressNSMatchLabels: {}
    ingressNSPodMatchLabels: {}
  nodeAffinityPreset:
    key: ''
    type: ''
    values: []
  nodeSelector: {}
  numInitChildren: 100
  pdb:
    create: true
    maxUnavailable: ''
    minAvailable: ''
  podAffinityPreset: ''
  podAnnotations: {}
  podAntiAffinityPreset: soft
  podLabels: {}
  podSecurityContext:
    enabled: true
    fsGroup: 1001
    fsGroupChangePolicy: Always
    supplementalGroups: []
    sysctls: []
  priorityClassName: ''
  readinessProbe:
    enabled: true
    failureThreshold: 5
    initialDelaySeconds: 20
    periodSeconds: 10
    successThreshold: 1
    timeoutSeconds: 5
  replicaCount: 3
  reservedConnections: 0
  resources: {}
  resourcesPreset: xlarge
  schedulerName: ''
  serviceAnnotations: {}
  serviceLabels: {}
  sidecars: []
  srCheckDatabase: postgres
  startupProbe:
    enabled: false
    failureThreshold: 10
    initialDelaySeconds: 5
    periodSeconds: 10
    successThreshold: 1
    timeoutSeconds: 5
  terminationGracePeriodSeconds: ''
  tls:
    autoGenerated: false
    certCAFilename: ''
    certFilename: cert.crt
    certKeyFilename: cert.key
    certificatesSecret: certificates-tls-secret
    enabled: true
    preferServerCiphers: true
  tolerations: []
  topologySpreadConstraints: []
  updateStrategy: {}
  useLoadBalancing: true
  usePasswordFile: ''
postgresql:
  affinity: {}
  args: []
  audit:
    clientMinMessages: error
    logConnections: false
    logDisconnections: false
    logHostname: true
    logLinePrefix: ''
    logTimezone: ''
    pgAuditLog: ''
    pgAuditLogCatalog: 'off'
  automountServiceAccountToken: false
  command: []
  configuration: ''
  configurationCM: ''
  containerPorts:
    postgresql: 5432
  containerSecurityContext:
    allowPrivilegeEscalation: false
    capabilities:
      drop:
        - ALL
    enabled: true
    privileged: false
    readOnlyRootFilesystem: true
    runAsGroup: 1001
    runAsNonRoot: true
    runAsUser: 1001
    seLinuxOptions: {}
    seccompProfile:
      type: RuntimeDefault
  customLivenessProbe: {}
  customReadinessProbe: {}
  customStartupProbe: {}
  database: ''
  dbUserConnectionLimit: ''
  existingSecret: ''
  extendedConf: ''
  extendedConfCM: ''
  extraEnvVars: []
  extraEnvVarsCM: ''
  extraEnvVarsSecret: ''
  extraInitContainers: []
  extraVolumeMounts: []
  extraVolumes: []
  headlessWithNotReadyAddresses: false
  hostAliases: []
  hostIPC: false
  hostNetwork: false
  image:
    debug: false
    digest: ''
    pullPolicy: IfNotPresent
    pullSecrets: []
    registry: docker.io
    repository: bitnami/postgresql-repmgr
    tag: 16.6.0-debian-12-r3
  initContainers: []
  initdbScripts: {}
  initdbScriptsCM: ''
  initdbScriptsSecret: ''
  labels: {}
  lifecycleHooks: {}
  livenessProbe:
    enabled: true
    failureThreshold: 6
    initialDelaySeconds: 30
    periodSeconds: 10
    successThreshold: 1
    timeoutSeconds: 5
  maxConnections: '500'
  networkPolicy:
    allowExternal: true
    allowExternalEgress: true
    enabled: true
    extraEgress: []
    extraIngress: []
    ingressNSMatchLabels: {}
    ingressNSPodMatchLabels: {}
  nodeAffinityPreset:
    key: ''
    type: ''
    values: []
  nodeSelector: {}
  password: ''
  pdb:
    create: true
    maxUnavailable: ''
    minAvailable: ''
  pgHbaConfiguration: ''
  pgHbaTrustAll: false
  pghbaRemoveFilters: ''
  podAffinityPreset: ''
  podAnnotations: {}
  podAntiAffinityPreset: soft
  podLabels: {}
  podManagementPolicy: Parallel
  podSecurityContext:
    enabled: true
    fsGroup: 1001
    fsGroupChangePolicy: Always
    supplementalGroups: []
    sysctls: []
  postgresConnectionLimit: ''
  postgresPassword: ''
  preStopDelayAfterPgStopSeconds: 25
  priorityClassName: ''
  readinessProbe:
    enabled: true
    failureThreshold: 6
    initialDelaySeconds: 5
    periodSeconds: 10
    successThreshold: 1
    timeoutSeconds: 5
  replicaCount: 3
  repmgrChildNodesCheckInterval: 5
  repmgrChildNodesConnectedMinCount: 1
  repmgrChildNodesDisconnectTimeout: 30
  repmgrConfiguration: ''
  repmgrConnectTimeout: 5
  repmgrDatabase: repmgr
  repmgrFenceOldPrimary: false
  repmgrLogLevel: NOTICE
  repmgrPassfilePath: ''
  repmgrPassword: ''
  repmgrReconnectAttempts: 2
  repmgrReconnectInterval: 3
  repmgrUsePassfile: ''
  repmgrUsername: repmgr
  resources: {}
  resourcesPreset: xlarge
  schedulerName: ''
  serviceAnnotations: {}
  sharedPreloadLibraries: pgaudit, repmgr
  sidecars: []
  startupProbe:
    enabled: false
    failureThreshold: 10
    initialDelaySeconds: 5
    periodSeconds: 10
    successThreshold: 1
    timeoutSeconds: 5
  statementTimeout: ''
  syncReplication: false
  syncReplicationMode: ''
  tcpKeepalivesCount: ''
  tcpKeepalivesIdle: ''
  tcpKeepalivesInterval: ''
  terminationGracePeriodSeconds: ''
  tls:
    certFilename: ''
    certKeyFilename: ''
    certificatesSecret: ''
    enabled: false
    preferServerCiphers: true
  tolerations: []
  topologySpreadConstraints: []
  updateStrategy:
    type: RollingUpdate
  upgradeRepmgrExtension: false
  usePasswordFile: ''
  usePgRewind: false
  username: postgres
psp:
  create: false
rbac:
  create: false
  rules: []
service:
  annotations: {}
  clusterIP: ''
  externalTrafficPolicy: Cluster
  extraPorts: []
  headless:
    annotations: {}
  loadBalancerIP: ''
  loadBalancerSourceRanges: []
  nodePorts:
    postgresql: ''
  portName: postgresql
  ports:
    postgresql: 5432
  serviceLabels: {}
  sessionAffinity: None
  sessionAffinityConfig: {}
  type: LoadBalancer
serviceAccount:
  annotations: {}
  automountServiceAccountToken: false
  create: true
  name: ''
volumePermissions:
  enabled: false
  image:
    digest: ''
    pullPolicy: IfNotPresent
    pullSecrets: []
    registry: docker.io
    repository: bitnami/os-shell
    tag: 12-debian-12-r34
  podSecurityContext:
    enabled: true
    runAsGroup: 0
    runAsNonRoot: false
    runAsUser: 0
    seLinuxOptions: {}
    seccompProfile:
      type: RuntimeDefault
  resources: {}
  resourcesPreset: nano
witness:
  affinity: {}
  args: []
  audit:
    clientMinMessages: error
    logConnections: false
    logDisconnections: false
    logHostname: true
    logLinePrefix: ''
    logTimezone: ''
    pgAuditLog: ''
    pgAuditLogCatalog: 'off'
  automountServiceAccountToken: false
  command: []
  configuration: ''
  configurationCM: ''
  containerPorts:
    postgresql: 5432
  containerSecurityContext:
    allowPrivilegeEscalation: false
    capabilities:
      drop:
        - ALL
    enabled: true
    privileged: false
    readOnlyRootFilesystem: true
    runAsGroup: 1001
    runAsNonRoot: true
    runAsUser: 1001
    seLinuxOptions: {}
    seccompProfile:
      type: RuntimeDefault
  create: false
  customLivenessProbe: {}
  customReadinessProbe: {}
  customStartupProbe: {}
  dbUserConnectionLimit: ''
  extendedConf: ''
  extendedConfCM: ''
  extraEnvVars: []
  extraEnvVarsCM: ''
  extraEnvVarsSecret: ''
  extraInitContainers: []
  extraVolumeMounts: []
  extraVolumes: []
  hostAliases: []
  hostIPC: false
  hostNetwork: false
  initContainers: []
  initdbScripts: {}
  initdbScriptsCM: ''
  initdbScriptsSecret: ''
  labels: {}
  lifecycleHooks: {}
  livenessProbe:
    enabled: true
    failureThreshold: 6
    initialDelaySeconds: 30
    periodSeconds: 10
    successThreshold: 1
    timeoutSeconds: 5
  maxConnections: ''
  nodeAffinityPreset:
    key: ''
    type: ''
    values: []
  nodeSelector: {}
  pdb:
    create: true
    maxUnavailable: ''
    minAvailable: ''
  pgHbaConfiguration: ''
  pgHbaTrustAll: false
  pghbaRemoveFilters: ''
  podAffinityPreset: ''
  podAnnotations: {}
  podAntiAffinityPreset: soft
  podLabels: {}
  podSecurityContext:
    enabled: true
    fsGroup: 1001
    fsGroupChangePolicy: Always
    supplementalGroups: []
    sysctls: []
  postgresConnectionLimit: ''
  priorityClassName: ''
  readinessProbe:
    enabled: true
    failureThreshold: 6
    initialDelaySeconds: 5
    periodSeconds: 10
    successThreshold: 1
    timeoutSeconds: 5
  replicaCount: 1
  repmgrConfiguration: ''
  repmgrConnectTimeout: 5
  repmgrLogLevel: NOTICE
  repmgrReconnectAttempts: 2
  repmgrReconnectInterval: 3
  resources: {}
  resourcesPreset: large
  schedulerName: ''
  sidecars: []
  startupProbe:
    enabled: false
    failureThreshold: 10
    initialDelaySeconds: 5
    periodSeconds: 10
    successThreshold: 1
    timeoutSeconds: 5
  statementTimeout: ''
  tcpKeepalivesCount: ''
  tcpKeepalivesIdle: ''
  tcpKeepalivesInterval: ''
  terminationGracePeriodSeconds: ''
  tolerations: []
  topologySpreadConstraints: []
  updateStrategy:
    type: RollingUpdate
  upgradeRepmgrExtension: false

@javsalgar javsalgar changed the title “kind does not match between main” errors in Postgres-HA (bitnami) + Pgpool setup [bitnami/postgresql-ha] “kind does not match between main” errors in Postgres-HA (bitnami) + Pgpool setup Feb 3, 2025
@javsalgar (Contributor)

Hi!

Thank you for opening the ticket. I see this is a duplicate of #4785; let’s continue the conversation there. Please check the steps we used to reproduce the issue and provide the exact steps that trigger it, as we had trouble reproducing it.
