diff --git a/charts/paradedb/README.md b/charts/paradedb/README.md
index 991bfc963..3205ed6d6 100644
--- a/charts/paradedb/README.md
+++ b/charts/paradedb/README.md
@@ -1,3 +1,4 @@
+<<<<<<< HEAD
 # ParadeDB Helm Chart
 
 The [ParadeDB](https://github.com/paradedb/paradedb) Helm Chart is based on the official [CloudNativePG Helm Chart](https://cloudnative-pg.io/). CloudNativePG is a Kubernetes operator that manages the full lifecycle of a highly available PostgreSQL database cluster with a primary/standby architecture using Postgres streaming (physical) replication.
@@ -39,6 +40,48 @@ cnpg/cloudnative-pg
 ```
 
 #### Setting up a ParadeDB CNPG Cluster
+=======
+# ParadeDB CloudNativePG Cluster
+
+The [ParadeDB](https://github.com/paradedb/paradedb) Helm Chart is based on the official [CloudNativePG Helm Chart](https://cloudnative-pg.io/). CloudNativePG is a Kubernetes operator that manages the full lifecycle of a highly available PostgreSQL database cluster with a primary/standby architecture using Postgres streaming replication.
+
+Kubernetes, and specifically the CloudNativePG operator, is the recommended approach for deploying ParadeDB in production with high availability. ParadeDB also provides a [Docker image](https://hub.docker.com/r/paradedb/paradedb) and [prebuilt binaries](https://github.com/paradedb/paradedb/releases) for Debian, Ubuntu, and Red Hat Enterprise Linux.
+
+The chart is also available on [Artifact Hub](https://artifacthub.io/packages/helm/paradedb/paradedb).
+
+## Getting Started
+
+First, install [Helm](https://helm.sh/docs/intro/install/). The following steps assume you have a Kubernetes cluster running v1.25+. If you are testing locally, we recommend using [Minikube](https://minikube.sigs.k8s.io/docs/start/).
+
+### Installing the Prometheus Stack
+
+The ParadeDB Helm chart supports monitoring via Prometheus and Grafana. To enable this, you need to have the Prometheus CRDs installed before installing the CloudNativePG operator.
+If you do not yet have the Prometheus CRDs installed on your Kubernetes cluster, you can install them with:
+
+```bash
+helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
+helm upgrade --atomic --install prometheus-community \
+--create-namespace \
+--namespace prometheus-community \
+--values https://raw.githubusercontent.com/cloudnative-pg/cloudnative-pg/main/docs/src/samples/monitoring/kube-stack-config.yaml \
+prometheus-community/kube-prometheus-stack
+```
+
+### Installing the CloudNativePG Operator
+
+Skip this step if the CloudNativePG operator is already installed in your cluster. If you do not wish to monitor your cluster, omit the `--set` flags.
+
+```bash
+helm repo add cnpg https://cloudnative-pg.github.io/charts
+helm upgrade --atomic --install cnpg \
+--create-namespace \
+--namespace cnpg-system \
+--set monitoring.podMonitorEnabled=true \
+--set monitoring.grafanaDashboard.create=true \
+cnpg/cloudnative-pg
+```
+
+### Setting up a ParadeDB CNPG Cluster
+>>>>>>> 6ea0301 (ParadeDB Support (#1))
 
 Create a `values.yaml` and configure it to your requirements. Here is a basic example:
 
@@ -52,7 +95,11 @@ cluster:
     size: 256Mi
 ```
 
+<<<<<<< HEAD
 Then, launch the ParadeDB cluster.
+=======
+Then, launch the ParadeDB cluster. If you do not wish to monitor your cluster, omit the `--set` flag.
+>>>>>>> 6ea0301 (ParadeDB Support (#1))
 
 ```bash
 helm repo add paradedb https://paradedb.github.io/charts
@@ -60,14 +107,61 @@ helm upgrade --atomic --install paradedb \
 --namespace paradedb \
 --create-namespace \
 --values values.yaml \
+<<<<<<< HEAD
+paradedb/paradedb
+```
+=======
+--set cluster.monitoring.enabled=true \
 paradedb/paradedb
 ```
 
+If `--values values.yaml` is omitted, the default values will be used. For additional configuration options for the `values.yaml` file, including configuring backups and PgBouncer, please refer to the [ParadeDB Helm Chart documentation](https://artifacthub.io/packages/helm/paradedb/paradedb#values).
+For advanced cluster configuration options, please refer to the [CloudNativePG Cluster Chart documentation](charts/paradedb/README.md).
+
+### Connecting to a ParadeDB CNPG Cluster
+
+The command to connect to the primary instance of the cluster will be printed in your terminal. If you do not modify any settings, it will be:
+
+```bash
+kubectl --namespace paradedb exec --stdin --tty services/paradedb-rw -- bash
+```
+
+This will launch a Bash shell inside the instance. You can connect to the ParadeDB database via `psql` with:
+
+```bash
+psql -d paradedb
+```
+
+### Connecting to the Grafana Dashboard
+
+To connect to the Grafana dashboard for your cluster, we suggest port-forwarding the Kubernetes service running Grafana to localhost:
+
+```bash
+kubectl --namespace prometheus-community port-forward svc/prometheus-community-grafana 3000:80
+```
+
+You can then access the Grafana dashboard at [http://localhost:3000/](http://localhost:3000/) with the username `admin` and the password `prom-operator`. These default credentials are
+defined in the [`kube-stack-config.yaml`](https://raw.githubusercontent.com/cloudnative-pg/cloudnative-pg/main/docs/src/samples/monitoring/kube-stack-config.yaml) file used as the `values.yaml` file in [Installing the Prometheus Stack](#installing-the-prometheus-stack) and can be modified by providing your own `values.yaml` file.
+
+## Development
+
+To test changes to the Chart on a local Minikube cluster, follow the instructions from [Getting Started](#getting-started), replacing the chart reference in the `helm upgrade` step with the path to the directory containing the modified `Chart.yaml`:
+
+```bash
+helm upgrade --atomic --install paradedb --namespace paradedb --create-namespace ./charts/paradedb
+```
+
+## Cluster Configuration
+>>>>>>> 6ea0301 (ParadeDB Support (#1))
+
 If `--values values.yaml` is omitted, the default values will be used.
 For advanced ParadeDB configuration and monitoring, please refer to the [ParadeDB Chart documentation](https://github.com/paradedb/charts/tree/dev/charts/paradedb#values).
+<<<<<<< HEAD
 
 #### Connecting to a ParadeDB CNPG Cluster
 
 You can launch a Bash shell inside a specific pod via:
+=======
+To use the ParadeDB Helm Chart, specify `paradedb` via the `type` parameter.
+>>>>>>> 6ea0301 (ParadeDB Support (#1))
 
 ```bash
 kubectl exec --stdin --tty -n paradedb -- bash
@@ -262,6 +356,7 @@ refer to the [CloudNativePG Documentation](https://cloudnative-pg.io/documentat
 | recovery.google.bucket | string | `""` |  |
 | recovery.google.gkeEnvironment | bool | `false` |  |
 | recovery.google.path | string | `"/"` |  |
+<<<<<<< HEAD
 | recovery.import.databases | list | `[]` | Databases to import |
 | recovery.import.postImportApplicationSQL | list | `[]` | List of SQL queries to be executed as a superuser in the application database right after it is imported. To be used with extreme care. Only available in microservice type. |
 | recovery.import.roles | list | `[]` | Roles to import |
@@ -283,6 +378,9 @@ refer to the [CloudNativePG Documentation](https://cloudnative-pg.io/documentat
 | recovery.import.source.username | string | `""` |  |
 | recovery.import.type | string | `"microservice"` | One of `microservice` or `monolith`. See: https://cloudnative-pg.io/documentation/1.24/database_import/#how-it-works |
 | recovery.method | string | `"backup"` | Available recovery methods: * `backup` - Recovers a CNPG cluster from a CNPG backup (PITR supported). Needs to be on the same cluster in the same namespace. * `object_store` - Recovers a CNPG cluster from a barman object store (PITR supported). * `pg_basebackup` - Recovers a CNPG cluster via the streaming replication protocol. Useful if you want to migrate databases to CloudNativePG, even from outside Kubernetes. * `import` - Import one or more databases from an existing Postgres cluster. |
+=======
+| recovery.method | string | `"backup"` | Available recovery methods: * `backup` - Recovers a CNPG cluster from a CNPG backup (PITR supported). Needs to be on the same cluster in the same namespace. * `object_store` - Recovers a CNPG cluster from a barman object store (PITR supported). * `pg_basebackup` - Recovers a CNPG cluster via the streaming replication protocol. Useful if you want to migrate databases to CloudNativePG, even from outside Kubernetes. # TODO |
+>>>>>>> 6ea0301 (ParadeDB Support (#1))
 | recovery.pgBaseBackup.database | string | `"paradedb"` | Name of the database used by the application. Default: `paradedb`. |
 | recovery.pgBaseBackup.owner | string | `""` | Name of the owner of the database in the instance to be used by applications. Defaults to the value of the `database` key. |
 | recovery.pgBaseBackup.secret | string | `""` | Name of the secret containing the initial credentials for the owner of the user database. If empty a new secret will be created from scratch |
@@ -311,9 +409,15 @@ refer to the [CloudNativePG Documentation](https://cloudnative-pg.io/documentat
 | recovery.s3.secretKey | string | `""` |  |
 | recovery.secret.create | bool | `true` | Whether to create a secret for the backup credentials |
 | recovery.secret.name | string | `""` | Name of the backup credentials secret |
+<<<<<<< HEAD
 | type | string | `"paradedb"` | Type of the CNPG database. Available types: * `paradedb` * `paradedb-enterprise` |
 | version.paradedb | string | `"0.15.1"` | We default to v0.15.1 for testing and local development |
 | version.postgresql | string | `"17"` | PostgreSQL major version to use |
+=======
+| type | string | `"paradedb"` | Type of the CNPG database. Available types: * `paradedb` |
+| version.paradedb | string | `"0.11.0"` | We default to v0.11.0 for testing and local development |
+| version.postgresql | string | `"16"` | PostgreSQL major version to use |
+>>>>>>> 6ea0301 (ParadeDB Support (#1))
 | poolers[].name | string | `` | Name of the pooler resource |
 | poolers[].instances | number | `1` | The number of replicas we want |
 | poolers[].type | [PoolerType][PoolerType] | `rw` | Type of service to forward traffic to. Default: `rw`. |
diff --git a/charts/paradedb/test/scheduledbackups/00-minio_cleanup.yaml b/charts/paradedb/test/scheduledbackups/00-minio_cleanup.yaml
new file mode 100644
index 000000000..90151a964
--- /dev/null
+++ b/charts/paradedb/test/scheduledbackups/00-minio_cleanup.yaml
@@ -0,0 +1,16 @@
+apiVersion: batch/v1
+kind: Job
+metadata:
+  name: minio-cleanup
+spec:
+  template:
+    spec:
+      restartPolicy: OnFailure
+      containers:
+        - name: minio-cleanup
+          image: minio/mc
+          command: ['sh', '-c']
+          args:
+            - |
+              mc alias set myminio https://minio.minio.svc.cluster.local minio minio123
+              mc rm --recursive --force myminio/mybucket/scheduledbackups
diff --git a/charts/paradedb/test/scheduledbackups/01-scheduledbackups_cluster-assert.yaml b/charts/paradedb/test/scheduledbackups/01-scheduledbackups_cluster-assert.yaml
new file mode 100644
index 000000000..a3af1a25b
--- /dev/null
+++ b/charts/paradedb/test/scheduledbackups/01-scheduledbackups_cluster-assert.yaml
@@ -0,0 +1,37 @@
+apiVersion: postgresql.cnpg.io/v1
+kind: Cluster
+metadata:
+  name: scheduledbackups-cluster
+status:
+  readyInstances: 1
+---
+apiVersion: postgresql.cnpg.io/v1
+kind: ScheduledBackup
+metadata:
+  name: scheduledbackups-cluster-daily-backup
+spec:
+  immediate: true
+  schedule: "0 0 0 * * *"
+  method: barmanObjectStore
+  backupOwnerReference: self
+  cluster:
+    name: scheduledbackups-cluster
+---
+apiVersion: postgresql.cnpg.io/v1
+kind: ScheduledBackup
+metadata:
+  name: scheduledbackups-cluster-weekly-backup
+spec:
+  immediate: true
+  schedule: "0 0 0 * * 1"
+  method: barmanObjectStore
+  backupOwnerReference: self
+  cluster:
+    name: scheduledbackups-cluster
+---
+apiVersion: postgresql.cnpg.io/v1
+kind: Backup
+spec:
+  method: barmanObjectStore
+  cluster:
+    name: scheduledbackups-cluster
diff --git a/charts/paradedb/test/scheduledbackups/01-scheduledbackups_cluster.yaml b/charts/paradedb/test/scheduledbackups/01-scheduledbackups_cluster.yaml
new file mode 100644
index 000000000..94f6015c4
--- /dev/null
+++ b/charts/paradedb/test/scheduledbackups/01-scheduledbackups_cluster.yaml
@@ -0,0 +1,35 @@
+type: postgresql
+mode: standalone
+
+cluster:
+  instances: 1
+  storage:
+    size: 256Mi
+
+backups:
+  enabled: true
+  provider: s3
+  endpointURL: "https://minio.minio.svc.cluster.local"
+  endpointCA:
+    name: kube-root-ca.crt
+    key: ca.crt
+  wal:
+    encryption: ""
+  data:
+    encryption: ""
+  s3:
+    bucket: "mybucket"
+    path: "/scheduledbackups/v1"
+    accessKey: "minio"
+    secretKey: "minio123"
+    region: "local"
+  retentionPolicy: "30d"
+  scheduledBackups:
+    - name: daily-backup
+      schedule: "0 0 0 * * *"
+      backupOwnerReference: self
+      method: barmanObjectStore
+    - name: weekly-backup
+      schedule: "0 0 0 * * 1"
+      backupOwnerReference: self
+      method: barmanObjectStore
diff --git a/charts/paradedb/test/scheduledbackups/chainsaw-test.yaml b/charts/paradedb/test/scheduledbackups/chainsaw-test.yaml
new file mode 100644
index 000000000..c1409ce46
--- /dev/null
+++ b/charts/paradedb/test/scheduledbackups/chainsaw-test.yaml
@@ -0,0 +1,27 @@
+apiVersion: chainsaw.kyverno.io/v1alpha1
+kind: Test
+metadata:
+  name: scheduledbackups
+spec:
+  timeouts:
+    apply: 1s
+    assert: 1m
+    cleanup: 1m
+  steps:
+    - name: Install a cluster with ScheduledBackups
+      try:
+        - script:
+            content: |
+              helm upgrade \
+                --install \
+                --namespace $NAMESPACE \
+                --values ./01-scheduledbackups_cluster.yaml \
+                --wait \
+                scheduledbackups ../../
+        - assert:
+            file: ./01-scheduledbackups_cluster-assert.yaml
+    - name: Cleanup
+      try:
+        - script:
+            content: |
+              helm uninstall --namespace $NAMESPACE scheduledbackups
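
One detail worth calling out from the test values above: CloudNativePG `ScheduledBackup` schedules use a six-field cron expression, with a seconds field before the usual five fields, so `"0 0 0 * * *"` fires daily at midnight and `"0 0 0 * * 1"` fires weekly on Monday. A minimal shell sketch of how the fields break down (illustrative only; the variable names are not part of the chart):

```shell
# CNPG ScheduledBackup schedules have six fields: seconds come first,
# followed by the usual minute/hour/day-of-month/month/day-of-week.
schedule="0 0 0 * * 1"   # the weekly-backup schedule from the test values
set -f                   # disable globbing so '*' is not expanded
set -- $schedule         # split the schedule into positional parameters
echo "seconds=$1 minutes=$2 hours=$3 day-of-month=$4 month=$5 day-of-week=$6"
# → seconds=0 minutes=0 hours=0 day-of-month=* month=* day-of-week=1
```

This is why the schedules in the values and assert files have six fields where a standard Kubernetes CronJob schedule would have five.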