Zammad Helm Chart

A Helm chart to install Zammad on Kubernetes

Zammad is a web-based, open-source helpdesk/customer support system with many features for managing customer communication via several channels such as telephone, Facebook, Twitter, chat, and email. It is distributed under version 3 of the GNU Affero General Public License (GNU AGPLv3).

Introduction

This chart will do the following:

  • Install a Zammad StatefulSet
  • Install Elasticsearch, Memcached, PostgreSQL, Redis, and (optionally) MinIO as dependencies

Be aware that the Zammad Helm chart version is different from the actual Zammad version.

Prerequisites

  • Kubernetes 1.19+
  • Helm 3.2.0+
  • Cluster with at least 4GB of free RAM

Installing the Chart

To install the chart, run the following:

helm repo add zammad https://zammad.github.io/zammad-helm
helm upgrade --install zammad zammad/zammad

Once the Zammad pod is ready, it can be accessed using the ingress or port forwarding. To use port forwarding:

kubectl port-forward service/zammad-nginx 8080

Now you can open http://localhost:8080 in your browser.

Uninstalling the Chart

To remove the chart, run the following:

helm delete zammad

This will uninstall the Zammad release, but keep the associated PVCs. Delete them manually once you are sure you no longer need the data.
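
For example, assuming the chart's standard instance label (verify the selector with kubectl get pvc first; it is an assumption, not a chart guarantee):

kubectl get pvc -l app.kubernetes.io/instance=zammad
kubectl delete pvc -l app.kubernetes.io/instance=zammad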

Configuration

See Customizing the Chart Before Installing. To see all configurable options with detailed comments, check the chart's values.yaml or run:

helm show values zammad/zammad
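
A common workflow is to dump the defaults into a local file, adjust it, and pass it back in (the file name is just an example):

helm show values zammad/zammad > my-values.yaml
helm upgrade --install zammad zammad/zammad -f my-values.yaml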

Choosing the Storage Provider

Zammad uses the database as the default storage provider for new systems, which works well for the majority of installations. Only if you have a large volume of tickets and attachments may you need to store attachments in a different storage provider.

In this case, we recommend the S3 storage provider using the optional minio subchart.

You can also use File storage. In this case, you need to provide an existing PVC via zammadConfig.storageVolume. Note that this PVC must provide ReadWriteMany access to work properly across the different Deployments, which may run on different nodes.
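
A minimal values sketch for the File storage case, assuming an existing ReadWriteMany PVC named zammad-storage (hypothetical name; depending on your chart version there may be additional flags, so check values.yaml):

zammadConfig:
  storageVolume:
    existingClaim: zammad-storage  # must provide ReadWriteMany access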

How to migrate from File to S3 storage

  • In the admin panel, go to "System -> Storage" and select "Simple Storage (S3)" as the new storage provider.
  • Migrate the existing File store content by running Store::File.move('File', 'S3') via rails runner in the zammad-railsserver container. Example:
kubectl exec zammad-0 -c zammad-railsserver -- rails r "Store::File.move('File', 'S3')"
I, [2024-01-24T11:06:13.501572 #168]  INFO -- : ActionCable is using the redis instance at redis://:zammad@zammad-redis-master:6379.
I, [2024-01-24T11:06:13.506180 #168-5980]  INFO -- : Using memcached as Rails cache store.
I, [2024-01-24T11:06:13.506246 #168-5980]  INFO -- : Using the Redis back end for Zammad's web socket session store.
I, [2024-01-24T11:06:14.561169 #168-5980]  INFO -- : storage remove '/opt/zammad/storage/fs/ab76/81d1/a4177/4c41f/12ddb67/96ee19e/a7e7c780a3227936c507cfbfe946afb9'
I, [2024-01-24T11:06:14.561654 #168-5980]  INFO -- : Moved file ab7681d1a41774c41f12ddb6796ee19ea7e7c780a3227936c507cfbfe946afb9 from File to S3
I, [2024-01-24T11:06:14.566327 #168-5980]  INFO -- : storage remove '/opt/zammad/storage/fs/dbaa/01dd/0df3a/33bce/e87c420/f221f59/6df9db38a402b30fccea09cc444a9fb0'
I, [2024-01-24T11:06:14.566513 #168-5980]  INFO -- : Moved file dbaa01dd0df3a33bcee87c420f221f596df9db38a402b30fccea09cc444a9fb0 from File to S3
I, [2024-01-24T11:06:14.627896 #168-5980]  INFO -- : storage remove '/opt/zammad/storage/fs/e81f/fb09/c5a26/f2081/f93401a/cbe8fff/9983e56c86fccb48d17a2eb1e5900b5b'

Deploying on OpenShift

To deploy on OpenShift, unprivileged and with arbitrary UIDs and GIDs:

  • Override the default securityContext key and zammadConfig.initContainers.zammad.securityContext.runAsUser with null.
  • If used, also disable:
    • podSecurityContext in all subcharts.
    • the privileged sysctlImage in the elasticsearch subchart.

For example:
securityContext: null

zammadConfig:
  initContainers:
    zammad:
      securityContext:
        runAsUser: null
  volumePermissions:
    enabled: false
  tmpDirVolume:
    emptyDir:
      medium: Memory

elasticsearch:
  sysctlImage:
    enabled: false
  master:
    podSecurityContext:
      enabled: false
    containerSecurityContext:
      enabled: false

memcached:
  podSecurityContext:
    enabled: false
  containerSecurityContext:
    enabled: false

minio:
  podSecurityContext:
    enabled: false
  containerSecurityContext:
    enabled: false

redis:
  master:
    podSecurityContext:
      enabled: false
    containerSecurityContext:
      enabled: false
  replica:
    podSecurityContext:
      enabled: false
    containerSecurityContext:
      enabled: false

Upgrading

From Chart Version 11.x to 12.0.0

The Previous StatefulSet Was Split Up into Deployments

  • replicas can now be set independently for zammad-nginx and zammad-railsserver, allowing free scaling and an HA setup for these.
    • For zammad-scheduler and zammad-websocket, replicas stays fixed at 1, as they may only run once in the cluster.
  • The initContainers were moved to a new zammad-init Job which runs on every helm upgrade. This greatly reduces startup time.
  • The nginx Service was renamed from zammad to zammad-nginx.
  • The previous Values.sidecars setting no longer exists. Instead, you need to specify sidecars per deployment, e.g. Values.zammadConfig.scheduler.sidecars (see the sketch below).
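
A sketch of the new per-deployment layout; the sidecar container itself is purely illustrative:

zammadConfig:
  scheduler:
    sidecars:
      - name: log-shipper          # hypothetical sidecar
        image: busybox:1.36
        command: ["sh", "-c", "tail -f /dev/null"]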

Storage Requirements Changed

  • If you use the default DB or the new S3 storage backend for file storage, you don't need to do anything.
  • If you use the File storage backend instead, Zammad now requires a ReadWriteMany volume for storage/ that is shared across the cluster.
    • If you already had one via persistence.existingClaim before, ensure it has ReadWriteMany access so it can be mounted across nodes, and provide it via zammadConfig.storageVolume.existingClaim.
    • If you used the default PersistentVolumeClaim of the StatefulSet, you need to take manual action:
      • You can either migrate to S3 storage before upgrading to the new major version, as described above under Configuration.
      • Or you can provide a zammadConfig.storageVolume.existingClaim with ReadWriteMany permission and migrate your existing data to it from the old StatefulSet (see the sketch below).
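
One possible way to copy the data is a throwaway pod that mounts both volumes (the claim names are illustrative; check yours with kubectl get pvc):

apiVersion: v1
kind: Pod
metadata:
  name: storage-migration
spec:
  restartPolicy: Never
  containers:
    - name: copy
      image: alpine:3.19
      # copy everything, including hidden files, from the old volume to the new one
      command: ["sh", "-c", "cp -a /old/. /new/"]
      volumeMounts:
        - name: old
          mountPath: /old
        - name: new
          mountPath: /new
  volumes:
    - name: old
      persistentVolumeClaim:
        claimName: zammad-var-zammad-0   # old StatefulSet PVC (name may differ)
    - name: new
      persistentVolumeClaim:
        claimName: zammad-storage        # new ReadWriteMany PVC (hypothetical)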

From Chart Version 10.x to 11.0.0

  • The minimum Zammad version is now 6.3.0, which no longer has a var/ folder; the related mount points have been removed.
  • The handling of the Autowizard secret was simplified. It is no longer processed by an init container, but mounted directly into the Zammad container. The secret must therefore contain the actual raw JSON value, not base64-encoded JSON. If you use an existing Autowizard secret, you need to change it to contain the raw value now.
  • There is a new .Values.zammadConfig.postgresql.options setting that can be used to pass additional settings for the database connection. By default it specifies Zammad's default Rails DB pool size of 50; for large installations you may need to increase this value (see the example below).
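
For example, to raise the pool size (the exact option string is an assumption; check the shipped default in the chart's values.yaml):

zammadConfig:
  postgresql:
    options: "?pool=100"  # appended to the database connection settings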

From Chart Version 9.x to 10.0.0

  • All containers use readOnlyRootFilesystem: true again.
  • The volumePermissions init container config has been moved to the initContainers section.
    • If you used it before, you have to adapt your config.
    • It is also enabled by default now, to work around Rails' world-writable tmp dir issues.
    • If you prefer not to use it, set tmpDirVolume.emptyDir.medium to "Memory" instead (see the snippet after this list).
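
If you opt out of it, the corresponding values (mirroring the OpenShift example above) look like this:

zammadConfig:
  volumePermissions:
    enabled: false
  tmpDirVolume:
    emptyDir:
      medium: Memory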

From Chart Version 8.x to 9.0.0

  • Zammad's PVC changed to only hold the contents of /opt/zammad/var & /opt/zammad/storage instead of the whole Zammad content.
    • A new PVC zammad-var is created for this.
    • The old zammad PVC is kept in case you need data from there (for example, if you used filesystem storage).
      • You need to copy the contents of /opt/zammad/storage to the new volume manually, or restore them from a backup.
    • To update Zammad, you have to delete the Zammad StatefulSet first, as the immutable volume config has changed:
      • kubectl delete sts zammad
      • helm upgrade zammad zammad/zammad
  • Zammad's initContainer rsync step is no longer needed and has therefore been removed.
  • DB config is now done via the DATABASE_URL env var instead of creating a database.yml in the config directory (see the example after this list).
  • Zammad's pod securityContext has a new default setting of seccompProfile.type: RuntimeDefault.
  • The Docker registry changed to ghcr.io/zammad/zammad.
  • auto_wizard.json is now placed in the /opt/zammad/var directory.
  • All subcharts have been updated.
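
For reference, DATABASE_URL follows the standard PostgreSQL URL scheme; the values here are purely illustrative:

DATABASE_URL=postgres://zammad:secret@zammad-postgresql:5432/zammad_production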

From Chart Version 7.x to 8.0.0

The securityContexts of the pod and the containers are configurable now. We also changed the default securityContexts to be a bit more restrictive, hence the major version bump of the chart.

On the pod level the following defaults are used:

securityContext:
  fsGroup: 1000
  runAsUser: 1000
  runAsNonRoot: true
  runAsGroup: 1000

On the container level, the following settings are now used for all Zammad containers (some init containers may run as root, though):

securityContext:
  allowPrivilegeEscalation: false
  capabilities:
    drop:
      - ALL
  readOnlyRootFilesystem: true
  privileged: false

As readOnlyRootFilesystem: true is set for all Zammad containers, the Nginx container writes its PID and tmp files to /tmp. The /tmp volume can be configured via zammadConfig.tmpDirVolume; currently a 100Mi emptyDir is used for it. The nginx config in /etc/nginx/nginx.conf is now populated from the Nginx ConfigMap, too.

If the volumePermissions initContainer is used, the user and group are taken from the securityContext.runAsUser & securityContext.runAsGroup values.

The rsync command in the zammad-init container has been changed to no longer use "--no-perms --no-owner --no-group --omit-dir-times". If you want the old behaviour, use the new .Values.zammadConfig.initContainers.zammad.extraRsyncParams variable to add these options again.
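
In values.yaml this would look like the following (assuming the parameter takes a plain string):

zammadConfig:
  initContainers:
    zammad:
      extraRsyncParams: "--no-perms --no-owner --no-group --omit-dir-times"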

We've also set the Elasticsearch master heap size to "512m" by default.

From Chart Version 6.x to 7.0.0

  • The Bitnami Elasticsearch chart is used now, as Elastic no longer supports the old chart in favour of the ECK operator.
    • Reindexing of all data is needed, so make sure zammadConfig.elasticsearch.reindex is set to true (see the snippet after this list).
  • Memcached was updated from 6.0.16 to 6.3.0.
  • The PostgreSQL chart was updated from 10.16.2 to 12.1.0.
    • This includes a major version change of the PostgreSQL database, too.
    • A backup / restore is needed to update.
    • The postgres password settings have changed.
    • See also the PostgreSQL chart's upgrading notes.
  • The Redis chart was updated from 16.8.7 to 17.3.7.
  • Zammad
    • The Pod Security Policy settings were removed, as these are deprecated in Kubernetes 1.25.
    • The Docker image tag is taken from the Chart.yaml appVersion field by default.
    • Replicas can be configured (this needs a ReadWriteMany volume if replicas > 1!).
    • livenessProbe and readinessProbe have been adjusted so they are no longer identical.
    • The config values have been removed from the chart README, as they are easier to maintain in a single place.
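
To trigger the reindex mentioned in the first bullet, set the flag in your values.yaml:

zammadConfig:
  elasticsearch:
    reindex: true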

From Chart Version 6.0.4 to 6.0.x

  • The minimum Helm version is now 3.2.0+.
  • The minimum Kubernetes version is now 1.19+.

From Chart Version 5.x to 6.x

  • The envConfig variable was replaced with zammadConfig.
  • The nginx, rails, scheduler, websocket and zammad vars have been merged into zammadConfig.
  • The chart dependency vars have changed (they also reside in zammadConfig now), so if you've disabled any of them, you have to adapt to the new values from the Chart.yaml.
  • The extraEnv var is a list now (see the example after this list).
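
Entries in the extraEnv list follow the usual Kubernetes env var format; the variable shown here is just an example:

extraEnv:
  - name: RAILS_LOG_TO_STDOUT
    value: "true"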

From Chart Version 4.x to 5.x

  • Health checks have been extended from boolean flags that simply toggled readinessProbes and livenessProbes on the containers to templated ones: .zammad.{nginx,rails,websocket}.readinessProbe and .zammad.{nginx,rails,websocket}.livenessProbe have been removed in favor of livenessProbe/readinessProbe templates at .{nginx,railsserver,websocket}. You can customize these directly in your overriding values.yaml (see the sketch below).
  • Resource constraints have been grouped under the same .{nginx,railsserver,websocket} keys. They are disabled by default (as in prior versions), but make sure to reflect these changes in your overrides.
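
A sketch of overriding one of these probe templates in your values.yaml; the endpoint and port are assumptions, not chart defaults:

railsserver:
  readinessProbe:
    httpGet:
      path: /api/v1/getting_started  # hypothetical health endpoint
      port: 3000
    initialDelaySeconds: 30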

From Chart Version 1.x

The following has changed:

  • The requirement chart condition variable name was changed.
  • The labels have changed.
  • The persistent volume claim was changed to a persistent volume claim template.
    • Import your file backup here.
  • All requirement charts have been updated to the latest versions:
    • Elasticsearch
      • The Docker image was changed to elastic/elasticsearch.
      • The version was raised from 5.6 to 7.6.
      • Reindexing will be done automatically.
    • PostgreSQL
      • The bitnami/postgresql chart is used instead of stable/postgresql.
      • The version was raised from 10.6.0 to 11.7.0.
      • There is no automated upgrade path.
      • You have to import a backup manually.
    • Memcached
      • The bitnami/memcached chart is used instead of stable/memcached.
      • The version was raised from 1.5.6 to 1.5.22.
      • Nothing to do here.

Before the update, back up your Zammad files and make a PostgreSQL backup, as you will need these backups later!

  • If your Helm release was named "zammad" and also installed in the namespace "zammad", like:
helm upgrade --install zammad zammad/zammad --namespace=zammad --version=1.2.1
  • Do the upgrade like this:
helm delete --purge zammad
kubectl -n zammad delete pvc data-zammad-postgresql-0 data-zammad-elasticsearch-data-0 data-zammad-elasticsearch-master-0
helm upgrade --install zammad zammad/zammad --namespace=zammad --version=2.0.3
  • Import your file and SQL backups inside the Zammad & PostgreSQL containers.

From Zammad 2.6.x to 3.x

As Helm 2.x was deprecated, Helm 3.x is now required to install the Zammad Helm chart. The minimum Kubernetes version is now 1.16.x.

As the PostgreSQL dependency Helm chart was updated too, have a look at the upgrading instructions for versions 9.0.0 and 10.0.0 of the PostgreSQL chart.

From Zammad 3.5.x to 4.x

The ingress config has been updated to the default of charts created with Helm 3.6.0, so you might need to adapt your ingress config.