- k8s_mdb
- Breaking Changes
- Description
- Steps to Deploy
- Settings
- Common Settings
- CA Certificate for MongoDB Deployments HIGHLY ENCOURAGED
- tlsEnabled.enabled
- tlsEnabled.caConfigMap
- MongoDB First User REQUIRED
- LDAP Authentication and Authorisation
- Configuration Items
- clusterName
- mongoDBVersion
- mongoDBFCV
- logLevel
- auth.scram.enabled
- auth.allowNoManagedUsers
- auth.ldap.enabled
- auth.ldap.servers
- auth.ldap.ldaps
- auth.ldap.caConfigMap
- auth.ldap.bindUserDN
- auth.ldap.bindUserSecret
- auth.ldap.userToDNMapping
- auth.ldap.authzQueryTemplate
- opsManager.tlsEnabled
- opsManager.baseUrl
- opsManager.orgId
- opsManager.projectName
- opsManager.omSecret
- opsManager.caConfigmap
- mongoDBAdminPasswdSecret
- additionalUsers
- Replica Set Specific Settings
- TLS X.509 Certificates for MongoDB Deployments HIGHLY ENCOURAGED
- Replica Set External Access, Services and Horizons
- Configuration Items
- replicaSet.replicas
- replicaSet.resources.limits.cpu
- replicaSet.resources.limits.mem
- replicaSet.resources.requests.cpu
- replicaSet.resources.requests.mem
- replicaSet.storage.persistenceType
- replicaSet.storage.nfs
- replicaSet.storage.nfsInitImage
- replicaSet.storage.single.size
- replicaSet.storage.single.storageClass
- replicaSet.storage.multi.data.size
- replicaSet.storage.multi.data.storageClass
- replicaSet.storage.multi.journal.size
- replicaSet.storage.multi.journal.storageClass
- replicaSet.storage.multi.logs.size
- replicaSet.storage.multi.logs.storageClass
- extAccess.enabled
- extAccess.exposeMethod
- extAccess.ports
- extAccess.ports[n].horizonName
- extAccess.ports[n].port
- extAccess.ports[n].clusterIP
- Sharded Cluster Specific Settings
- TLS X.509 Certificates for MongoDB Deployments HIGHLY ENCOURAGED
- Mongos External Access and Services
- Configuration Items
- shardSrv.shards
- shardSrv.memberPerShard
- sharding.shardSrv.resources.limits.cpu
- sharding.shardSrv.resources.limits.mem
- sharding.shardSrv.resources.requests.cpu
- sharding.shardSrv.resources.requests.mem
- sharding.shardSrv.storage.persistenceType
- sharding.shardSrv.storage.nfs
- sharding.shardSrv.storage.nfsInitImage
- sharding.shardSrv.storage.single.size
- sharding.shardSrv.storage.single.storageClass
- sharding.shardSrv.storage.multi.data.size
- sharding.shardSrv.storage.multi.data.storageClass
- sharding.shardSrv.storage.multi.journal.size
- sharding.shardSrv.storage.multi.journal.storageClass
- sharding.shardSrv.storage.multi.logs.size
- sharding.shardSrv.storage.multi.logs.storageClass
- sharding.configSrv.replicas
- sharding.configSrv.resources.limits.cpu
- sharding.configSrv.resources.limits.mem
- sharding.configSrv.resources.requests.cpu
- sharding.configSrv.resources.requests.mem
- sharding.configSrv.storage.persistenceType
- sharding.configSrv.storage.nfs
- sharding.configSrv.storage.nfsInitImage
- sharding.configSrv.storage.single.size
- sharding.configSrv.storage.single.storageClass
- sharding.configSrv.storage.multi.data.size
- sharding.configSrv.storage.multi.data.storageClass
- sharding.configSrv.storage.multi.journal.size
- sharding.configSrv.storage.multi.journal.storageClass
- sharding.configSrv.storage.multi.logs.size
- sharding.configSrv.storage.multi.logs.storageClass
- sharding.mongos.count
- sharding.mongos.resources.limits.cpu
- sharding.mongos.resources.limits.mem
- sharding.mongos.resources.requests.cpu
- sharding.mongos.resources.requests.mem
- sharding.extAccess.enabled
- sharding.extAccess.port
- Predeployment Checklist
- Run
- Common Settings
This version of the Helm charts has been tested with MongoDB Kubernetes Operator version(s):
- 1.16.x
- 1.17.x
This version adds values for sharded clusters and moves replica set-specific settings into their own object.
A series of Helm charts to deploy MongoDB Enterprise Advanced replica sets and sharded clusters within Kubernetes with the MongoDB Kubernetes Operator and Ops Manager.
The `/examples` directory has `values.yaml` examples for replica sets and sharded clusters.
- Ensure Prerequisites are met
- Create Ops Manager Access Token (Programmatic Access)
- Create Kubernetes configmap for the Ops Manager X.509 Certificate Authority (CA) certificate chain
- Create Kubernetes configmap for the MongoDB deployments CA certificate chain - if required - and seriously, this should just be a normal thing
- Create Kubernetes secrets for the MongoDB instances' TLS and cluster authentication (for replica sets) or MongoDB Sharded Clusters with TLS (for sharded clusters) - once again this is "if required", but it should just be a normal thing... look at your life choices if you are not doing this!
- Create a Kubernetes secret for the `root` user of the MongoDB deployment
- Create the `values.yaml` file for the deployment
The MongoDB Enterprise Kubernetes Operator and MongoDB Ops Manager must be installed and operational. The Kubernetes Operator must be able to communicate with Ops Manager. Instructions on installing the MongoDB Kubernetes Operator can be found in the MongoDB documentation. MongoDB Ops Manager should be installed by the MongoDB Professional Services team so that it is installed and configured securely and correctly.
Helm must be installed, and Helmfile is also highly recommended. If Helmfile is used you will also need Helm-Diff.
These Helm charts assume PersistentVolumes and StorageClasses already exist within the Kubernetes cluster.
Two environment variables are required, called `ENV` and `NS` (both case sensitive). The first describes the selected environment for deployment, which corresponds to a directory under the `values` directory, and the second describes the Kubernetes namespace.
The variables for each deployment are contained in the `values.yaml`. The `values.yaml` file for the selected environment must reside in a directory under `values/<ENV>`, such as `values/dev/values.yaml` or `values/production/values.yaml`. Each `<ENV>` directory is a separate deployment. The `examples` directory contains an example `values.yaml` file, plus there are examples under the `values` directory so the reader can see the structure.
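For example, a layout with `dev` and `production` environments might look like the following (directory names other than `values` and `examples` are illustrative):

```
.
├── examples/
│   └── values.yaml          # example file to copy from
└── values/
    ├── dev/
    │   └── values.yaml      # used when ENV=dev
    └── production/
        └── values.yaml      # used when ENV=production
```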
Within Ops Manager, an Organisation-level API token must be created with the `Organisation Owner` privilege (WIP) for the organisation that is going to be used for MongoDB deployments. The MongoDB documentation explains how to create an Organisation-level API token (key pair). Ensure that the CIDR range that will be used by the Kubernetes Operator is included in the API Access List.
The following illustrates how to create the Kubernetes secret for the access token:
```
kubectl --kubeconfig=<CONFIG_FILE> -n <NAMESPACE> create secret generic <name-of-secret> \
  --from-literal=publicKey=<publicKey> \
  --from-literal=privateKey=<privateKey>
```
Confusingly, the `publicApiKey` is actually set to the value of the `privateKey` portion of the access token.

The name of this secret must be set as the value of the `opsManager.omSecret` key in the relevant `values.yaml` file. This can be a common secret if more than one deployment is in a Kubernetes namespace and Ops Manager Organisation.
This is REQUIRED because your Ops Manager should be using TLS!
The certificate must include the whole certificate chain of the Certificate Authority that signed the X.509 certificate for Ops Manager.
This can be a common configmap if more than one deployment is in a Kubernetes namespace and Ops Manager Organisation.
This is stored in a configmap whose name is set in the relevant `values.yaml` as `opsManager.caConfigmap`. The name of the key in the configmap MUST be `mms-ca.crt`; it can be created via:

```
kubectl --kubeconfig=<CONFIG_FILE> -n <NAMESPACE> create configmap <name-of-configmap> \
  --from-file=mms-ca.crt
```
This is most likely common to all MongoDB deployments.

The certificate must include the whole certificate chain of the Certificate Authority that signed the X.509 certificates for the pods.

This is stored in a configmap whose name is set in the relevant `values.yaml` as `tlsEnabled.caConfigMap`. The name of the key in the configmap MUST be `ca-pem`; it can be created via:

```
kubectl --kubeconfig=<CONFIG_FILE> -n <NAMESPACE> create configmap <name-of-configmap> \
  --from-file=ca-pem
```
This is most likely common in all MongoDB deployments.
REQUIRED if `tlsEnabled.enabled` is `true`.
A boolean to determine if TLS is enabled for MongoDB deployments, which it should be!
The name of the configmap that contains the X.509 certificate of the Certificate Authority that is used for TLS communications to and from the MongoDB instances.
See the Deployment Requirements section for details on creating this configmap.
A secret must exist for the first user in MongoDB. This will be a user with the `root` role. The name of the secret must be set in the relevant `values.yaml` as the `mongoDBAdminPasswdSecret` value. The secret must contain a key called `password` that contains the password for the user. The username is set to `root`.

The secret can be created via `kubectl` as follows:

```
kubectl --kubeconfig=<CONFIG_FILE> -n <NAMESPACE> create secret generic <name-of-secret> \
  --from-literal=password=<password>
```
The name of the user that is created has the pattern of `ap-<clusterName>-root`, where `<clusterName>` is the `clusterName` in the `values.yaml` for your deployment.
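As a hedged sketch, connecting with this user from inside the cluster might look like the following; the host follows the internal pod FQDN pattern described in the TLS section, and everything in angle brackets is a placeholder:

```
mongosh "mongodb://ap-<clusterName>-root:<password>@<clusterName>-0.<clusterName>-svc.<namespace>.svc.cluster.local:27017/admin?replicaSet=<clusterName>" \
  --tls --tlsCAFile <path-to-ca.pem>
```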
If LDAP authentication and authorisation is required, `auth.ldap.enabled` must be set to `true`. MongoDB highly recommends that `ldaps` is used to protect the credentials of users, therefore `auth.ldap.ldaps` should also be `true`.

A bind user must be provided and a secret created for their password; the key within the secret must be `password`. The following is an example of creating the secret:

```
kubectl --kubeconfig=<CONFIG_FILE> -n <NAMESPACE> create secret generic <name-of-secret> \
  --from-literal=password=<password>
```
If LDAPS is selected, the CA certificate used with the LDAP servers must be provided within a configmap. The name of the key within the configmap MUST be `ca-pem`. This can be achieved by:

```
kubectl --kubeconfig=<CONFIG_FILE> -n <NAMESPACE> create configmap <name-of-configmap> \
  --from-file=ca-pem
```
To map from the username used by the MongoDB user to the Distinguished Name (DN) within LDAP, a mapping query must be provided. MongoDB Professional Services can assist with this, but the following is an example of the query for a user that logs on as `USER@MONGODB.LOCAL` but whose DN is actually `cn=USER,cn=Users,dc=mongodb,dc=local`:

```
'[{ match: "(.+)@MONGODB.LOCAL", substitution: "cn={0},cn=Users,dc=mongodb,dc=local"}]'
```
If LDAP authorisation is desired, a query must be provided to determine the groups of the user. In the following example the user's groups are held in the `memberOf` LDAP attribute of the user:

```
'{USER}?memberOf?base'
```

If LDAP authorisation is not required this setting can be skipped.
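Pulling the LDAP settings together, a minimal sketch of the `auth.ldap` block in `values.yaml` (hosts and DNs are illustrative; check the exact nesting against the files under `examples`):

```
auth:
  ldap:
    enabled: true
    ldaps: true                        # always recommended
    servers:
      - ldap1.mongodb.local:636        # illustrative hosts
      - ldap2.mongodb.local:636
    caConfigMap: ldap-ca               # key inside MUST be ca-pem
    bindUserDN: "cn=bind-user,cn=Users,dc=mongodb,dc=local"
    bindUserSecret: ldap-bind-user     # key inside MUST be password
    userToDNMapping: '[{ match: "(.+)@MONGODB.LOCAL", substitution: "cn={0},cn=Users,dc=mongodb,dc=local"}]'
    authzQueryTemplate: '{USER}?memberOf?base'
```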
The following table describes the common values required in the relevant `values.yaml` for both replica sets and sharded clusters:
Key | Purpose |
---|---|
clusterName | Name of the cluster, used for naming the pods and replica set name |
mongoDBVersion | The version of MongoDB to install, such as 5.0.8-ent for MongoDB Enterprise Advanced 5.0.8 |
mongoDBFCV | A string describing the Feature Compatibility Version of the deployment, default is "5.0" |
logLevel | Level of logging for MongoDB and agents, INFO or DEBUG |
auth.scram.enabled | Boolean to determine if SCRAM authentication is selected. Can be selected with auth.ldap.enabled or by itself. At least one method must be selected |
auth.allowNoManagedUsers | Boolean to determine if users not managed by Kubernetes are allowed |
auth.ldap.enabled | Boolean to determine if LDAP authentication is selected. Can be selected with auth.scram.enabled or by itself. At least one method must be selected |
auth.ldap.servers | An array of LDAP servers to use for authentication (and possibly authorisation) |
auth.ldap.ldaps | Boolean to determine if ldaps is selected for the LDAP protocol, which it should be always |
auth.ldap.caConfigMap | The name of the configmap in Kubernetes containing the CA certificate for the LDAP server(s) |
auth.ldap.bindUserDN | The Distinguished Name (DN) of the LDAP bind user |
auth.ldap.bindUserSecret | The Kubernetes secret containing the password of the bind user |
auth.ldap.userToDNMapping | The LDAP mapping to convert from the name used to log into MongoDB to what is actually used in LDAP |
auth.ldap.authzQueryTemplate | The LDAP Query Template used to perform the lookup for a user's groups |
opsManager.tlsEnabled | Boolean determining if TLS is used to communicate from the Operator and Agents to Ops Manager |
opsManager.baseUrl | The URL, including protocol and port, of Ops Manager |
opsManager.orgId | The ID of the Organisation in Ops Manager in which the project will be created |
opsManager.projectName | The name of the project that will be created or used in Ops Manager for the MongoDB deployment |
opsManager.omSecret | The name of the secret that contains the credentials token for the Ops Manager API for the selected Organisation |
opsManager.caConfigmap | The name of the configmap that contains the CA certificate used to communicate with Ops Manager |
tlsEnabled.enabled | Boolean describing if TLS is used in the cluster. (This should always be true) |
tlsEnabled.caConfigMap | Name of the configMap for the CA certificate |
mongoDBAdminPasswdSecret | The secret containing the MongoDB first user |
additionalUsers[n] | Array of additional database users to create |
additionalUsers[n].username | Username of the database user to manage |
additionalUsers[n].passwdSecret | The secret name that contains the password for the user |
additionalUsers[n].roles[m] | Array of roles for the user, consisting of a db and the role |
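For orientation, a minimal sketch of the common portion of a `values.yaml` (all values illustrative; check the files under `examples` for the authoritative structure):

```
clusterName: ap-mongodb-dev
mongoDBVersion: 5.0.8-ent
mongoDBFCV: "5.0"              # must be a quoted string
logLevel: INFO
tlsEnabled:
  enabled: true
  caConfigMap: mdb-ca          # key inside MUST be ca-pem
auth:
  scram:
    enabled: true
  allowNoManagedUsers: true
opsManager:
  tlsEnabled: true
  baseUrl: https://ops-manager.mongodb.local:8443
  orgId: 5e439737e976cc5e50a7b13d
  projectName: my-project
  omSecret: om-api-token       # secret holding the API key pair
  caConfigmap: om-ca           # key inside MUST be mms-ca.crt
mongoDBAdminPasswdSecret: mdb-admin-password
```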
The name to be used for the replica set. The name should be included in the MongoDB connection string when connecting to the replica set, especially from outside Kubernetes, so that split horizon functions correctly.
The version of MongoDB to deploy. The form is `<major>.<release-series>.<patch>-ent`, such as `5.0.17-ent` for Enterprise versions. We do not encourage using odd numbers for the release-series value, as these are development series.

As of MongoDB 5.0 the versioning has changed to `<major>.<rapid>.<patch>-ent`, where rapid is a quarterly release of new features and no longer a development/stable differentiator.
The Feature Compatibility Version of the deployment. Can only be at, or one major version below, the currently installed MongoDB version.

This is a string value. If you do not set it as a string, any trailing `0` will be removed by the YAML parser and Ops Manager will return a 500 error, which will take you hours to figure out!

Default is `"5.0"`.
Log level for the MongoDB instance and automation agent. Can be `DEBUG` or `INFO`. In the case of `DEBUG` this is equivalent to `2` for `systemLog.verbosity` in the MongoDB config file.
Boolean value to determine if SCRAM authentication is enabled. Both `auth.scram.enabled` and `auth.ldap.enabled` can be selected, or just one, but at least one must be `true`.
Boolean value to determine if users NOT managed by Kubernetes are allowed. This includes users created via `mongorestore` or `mongosh`, etc. If this is `false`, Ops Manager will remove any users not managed by Kubernetes.

Default is `true`.
Boolean value to determine if LDAP authentication is enabled. Both `auth.scram.enabled` and `auth.ldap.enabled` can be selected, or just one, but at least one must be `true`.
An array of LDAP servers to use for LDAP authentication (and authorisation if selected). Required if `auth.ldap.enabled` is `true`.
A boolean to determine if LDAPS is used as the protocol instead of unsafe LDAP. This should always be `true`. Required if `auth.ldap.enabled` is `true`.
The configmap name of the CA certificate used with the LDAP servers. Required if `auth.ldap.enabled` is `true` and `auth.ldap.ldaps` is `true`.

The name of the key within the configmap must be `ca-pem`.
The Distinguished Name (DN) of the bind user that is used to perform lookups in the directory. Required if `auth.ldap.enabled` is `true`.

The name of the Kubernetes secret that contains the password of the bind user. The key within the secret must be `password`. Required if `auth.ldap.enabled` is `true`.
The mapping to convert the username to the name in the LDAP directory. Required if `auth.ldap.enabled` is `true`.

See the LDAP section for more details.

The LDAP query to look up a user's groups within the LDAP directory. Required if `auth.ldap.enabled` is `true`.

See the LDAP section for more details.
Boolean value determining if Ops Manager uses TLS for data in transit. Should ALWAYS be `true`.

The URL of Ops Manager, including the protocol and port number, such as `https://ops-manager.mongodb.local:8443`.

The ID of the Ops Manager Organisation. This can be found in Ops Manager by browsing to the Organisation and selecting the Organisation ID in the URL, such as `5e439737e976cc5e50a7b13d`.
Read the MongoDB documentation to learn how to create or manage an Organisation in Ops Manager.
The name of the Ops Manager Project within the selected Organisation that will be created/used for the MongoDB Deployment.
This is the name of the secret that contains the token key pair for Ops Manager API access that is used by the MongoDB Kubernetes Operator to manage the deployment(s).

This can be a common secret if more than one deployment is in a Kubernetes namespace and Ops Manager Organisation.
See the Deployment Requirements section for details on creating the API access token.
The name of the configmap that contains the X.509 certificate of the Certificate Authority that is used for TLS communications to and from Ops Manager.
This can be a common configmap if more than one deployment is in a Kubernetes namespace.
See the Deployment Requirements section for details on creating this configmap.
This is the secret name that contains the password for the first user.
See the Deployment Requirements section for details on creating this secret.
This is an array of additional database users to create. The format is as follows:
```
additionalUsers:
  - username: oplog0-om-user
    passwdSecret: om-user
    roles:
      - db: admin
        role: "clusterMonitor"
      - db: admin
        role: "readWriteAnyDatabase"
      - db: admin
        role: "userAdminAnyDatabase"
```
The `username` must be unique in the database. The `passwdSecret` is a reference to a Kubernetes Secret containing the user password. Just like the first user, we can use the same `kubectl` command to create the Secret:

```
kubectl --kubeconfig=<CONFIG_FILE> -n <NAMESPACE> create secret generic <name-of-secret> \
  --from-literal=password=<password>
```
Each entry in the array will create a new MongoDB User (MDBU) resource in Kubernetes named `<clusterName>-<username>`, e.g. `ap-mongodb-dev-oplog0-om-user`. This is important to remember when creating the blockstore or oplogstore for Ops Manager, as the MDBU resource name is required.
The following are settings required if a replica set is to be deployed.
To ensure a replica set is deployed set the following:

```
replicaSet:
  enabled: true
sharding:
  enabled: false
```

The `sharding.enabled` setting takes precedence over the `replicaSet.enabled` setting.
This requires two secrets: one for the client communications and one for cluster communications.
The secrets contain the X.509 key and certificate. One key/certificate pair is used for all members of the replica set, therefore a Subject Alternative Name (SAN) entry must exist for each member of the replica set. The SANs will be in the form of:

```
<clusterName>-<X>.<clusterName>-svc.<namespace>.svc.cluster.local
```
Where `<clusterName>` is the `clusterName` in the `values.yaml` for your deployment and `<X>` is the 0-based number of the pod.
The certificates must include the FQDN used external to Kubernetes as a Subject Alternative Name (SAN) if external access is required.
The secrets must be named as follows:

```
mdb-<clusterName>-cert
mdb-<clusterName>-clusterfile
```
The two secrets can be created as follows:
```
kubectl --kubeconfig=<CONFIG_FILE> -n <NAMESPACE> create secret tls mdb-<clusterName>-cert \
  --cert=<path-to-cert> \
  --key=<path-to-key>
```

```
kubectl --kubeconfig=<CONFIG_FILE> -n <NAMESPACE> create secret tls mdb-<clusterName>-clusterfile \
  --cert=<path-to-cert> \
  --key=<path-to-key>
```
REQUIRED if `tlsEnabled.enabled` is `true`.
If external access (i.e. access from outside Kubernetes) is required, a NodePort or LoadBalancer service can be created for each replica set member, with a MongoDB Split Horizon associated with each member. The MongoDB Split Horizon provides a different view of the cluster when `isMaster` is executed, depending on the address used in the connection string. This allows the discovery process to present the addresses of the replica set members as they should be seen from outside Kubernetes.

For a NodePort service, a Kubernetes worker node, or an address that resolves to a worker node, needs to be allocated as the `replicaSet.extAccess.ports[].horizonName` value, along with an associated port for each horizon, `replicaSet.extAccess.ports[].port`. The service will also be allocated an IP address internal to Kubernetes for each NodePort; this IP address is set via the `replicaSet.extAccess.ports[].clusterIP` value. There are no checks to determine whether the addresses are valid. The address range must be a valid address range for services in Kubernetes and cannot be used anywhere else in the Kubernetes cluster.

For the LoadBalancer service type, the `replicaSet.extAccess.ports[].horizonName` value and an associated port for each horizon, `replicaSet.extAccess.ports[].port`, are still required, but the port is the port of the load balancer and not the NodePort.

In most Kubernetes environments the NodePort port range is 30000 to 32767. The port numbers cannot overlap with port numbers already in use in any deployment of any kind in the Kubernetes cluster.
To access from outside Kubernetes, the connection string for a three-member replica set would look similar to:

```
mongodb://<horizonName-0>:<port-0>,<horizonName-1>:<port-1>,<horizonName-2>:<port-2>/?replicaSet=<clusterName>
```

e.g.

```
mongodb://workernode5.mongodb.local:30000,workernode5.mongodb.local:30011,workernode5.mongodb.local:32002/?replicaSet=ap-mongodb-dev
```
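A hedged sketch of the matching external-access values for that three-member replica set (addresses, ports, and clusterIPs are illustrative):

```
replicaSet:
  extAccess:
    enabled: true
    exposeMethod: NodePort
    ports:
      - horizonName: workernode5.mongodb.local
        port: 30000
        clusterIP: 10.96.0.50    # optional; Kubernetes allocates one if omitted
      - horizonName: workernode5.mongodb.local
        port: 30011
        clusterIP: 10.96.0.51
      - horizonName: workernode5.mongodb.local
        port: 32002
        clusterIP: 10.96.0.52
```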
The following table describes the values required in the relevant `values.yaml` specifically for replica sets:
Key | Purpose |
---|---|
replicaSet.replicas | Number of members in the replica set (integer) |
replicaSet.resources.limits.cpu | The max CPU the containers can be allocated |
replicaSet.resources.limits.mem | The max memory the containers can be allocated, include units |
replicaSet.resources.requests.cpu | The initial CPU the containers can be allocated |
replicaSet.resources.requests.mem | The initial memory the containers can be allocated, include units |
replicaSet.storage.persistenceType | This is either single for all data on one partition, or multi for separate partitions for data, journal, and logs |
replicaSet.storage.nfs | Boolean value to determine if NFS is used for persistent storage, which requires a further init container to fix permissions on the NFS mount |
replicaSet.storage.nfsInitImage | Image name and tag for the init container that performs the NFS permissions modification. Defaults to the same init container image as the database |
replicaSet.storage.single.size | The size of the volume for all storage, include units |
replicaSet.storage.single.storageClass | The name of the StorageClass to use for the PersistentVolumeClaim for all the storage. Default is "" |
replicaSet.storage.multi.data.size | The size of the volume for database data storage, include units |
replicaSet.storage.multi.data.storageClass | The name of the StorageClass to use for the PersistentVolumeClaim for the database data storage. Default is "" |
replicaSet.storage.multi.journal.size | The size of the volume for database journal, include units |
replicaSet.storage.multi.journal.storageClass | The name of the StorageClass to use for the PersistentVolumeClaim for the database journal. Default is "" |
replicaSet.storage.multi.logs.size | The size of the volume for database logs, include units |
replicaSet.storage.multi.logs.storageClass | The name of the StorageClass to use for the PersistentVolumeClaim for the database logs. Default is "" |
replicaSet.extAccess.enabled | Boolean determining if MongoDB Split Horizon is enabled |
replicaSet.extAccess.exposeMethod | The method to expose MongoDB to clients external to Kubernetes. The options are NodePort or LoadBalancer |
replicaSet.extAccess.ports | Array of objects describing horizon names with associated ports, and clusterIP if required. One entry is required per replica set member |
replicaSet.extAccess.ports[n].horizonName | Name of the MongoDB Horizon for the member |
replicaSet.extAccess.ports[n].port | The port of the MongoDB horizon. It is either the NodePort port or the LoadBalancer port |
replicaSet.extAccess.ports[n].clusterIP | The clusterIP of the NodePort. Not required if LoadBalancer is the selected method |
The number of members in the replica set. Must be an integer.
The maximum number of CPUs that can be assigned to each pod, specified as either an integer, a float, or with the `m` suffix (for milliCPUs).

The maximum memory that can be assigned to each pod. The units suffix can be one of the following: E, P, T, G, M, K, Ei, Pi, Ti, Gi, Mi, Ki.

The initial number of CPUs that is assigned to each pod, specified as either an integer, a float, or with the `m` suffix (for milliCPUs).

The initial memory that is assigned to each pod. The units suffix can be one of the following: E, P, T, G, M, K, Ei, Pi, Ti, Gi, Mi, Ki.
The type of storage for the pod. Select `single` for data, journal, and logs to be on one partition. If this is selected, both `storage.single.size` and `storage.single.storageClass` must be provided.

If separate partitions are required for data, journal, and logs then select `multi`, and then provide all of the following (see the sketch after this list):

- `replicaSet.storage.multi.data.size`
- `replicaSet.storage.multi.data.storageClass`
- `replicaSet.storage.multi.journal.size`
- `replicaSet.storage.multi.journal.storageClass`
- `replicaSet.storage.multi.logs.size`
- `replicaSet.storage.multi.logs.storageClass`
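As an illustration of the two shapes (sizes and StorageClass names are placeholders):

```
# persistenceType: single - everything on one volume
replicaSet:
  storage:
    persistenceType: single
    single:
      size: 50Gi
      storageClass: standard

# persistenceType: multi - separate volumes for data, journal, and logs
replicaSet:
  storage:
    persistenceType: multi
    multi:
      data:
        size: 40Gi
        storageClass: fast
      journal:
        size: 5Gi
        storageClass: fast
      logs:
        size: 5Gi
        storageClass: standard
```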
A boolean to determine if NFS is used as the persistent storage. If this is `true` then an additional init container is prepended to the init container array in the statefulSet, which will `chown` the NFS mount to the mongod user. The Kubernetes Operator uses 2000:2000 for the UID and GID of the mongod user.

This init container will run as root so the permissions can be set. This is done by setting `runAsUser` to `0` and `runAsNonRoot` to `false`. Ensure you understand the implications of this.
This will `chown` `/data`, `/journal` and `/var/log/mongodb-mms-automation` to 2000:2000.

Default is `false`.
The image to use for the init container that performs the `chown` on the NFS mounts.

The default is `quay.io/mongodb/mongodb-enterprise-init-database-ubi:1.0.9`.
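Conceptually, the prepended init container behaves like the following sketch; the actual spec is generated by the charts, so the name and mounts here are illustrative only:

```
initContainers:
  - name: nfs-permissions-fix                # illustrative name
    image: quay.io/mongodb/mongodb-enterprise-init-database-ubi:1.0.9
    securityContext:
      runAsUser: 0                           # run as root so ownership can be changed
      runAsNonRoot: false
    command:
      - sh
      - -c
      - chown -R 2000:2000 /data /journal /var/log/mongodb-mms-automation
    # volumeMounts for /data, /journal and /var/log/mongodb-mms-automation go here
```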
The persistent storage that is assigned to each pod. The units suffix can be one of the following: E, P, T, G, M, K, Ei, Pi, Ti, Gi, Mi, Ki.
The name of the storage class that is used to create the persistentVolumeClaim for all the storage.
The persistent storage that is assigned to each pod for data storage. The units suffix can be one of the following: E, P, T, G, M, K, Ei, Pi, Ti, Gi, Mi, Ki.
The name of the storage class that is used to create the persistentVolumeClaim for the data partition.
The persistent storage that is assigned to each pod for journal storage. The units suffix can be one of the following: E, P, T, G, M, K, Ei, Pi, Ti, Gi, Mi, Ki.
The name of the storage class that is used to create the persistentVolumeClaim for the journal partition.
The persistent storage that is assigned to each pod for log storage. The units suffix can be one of the following: E, P, T, G, M, K, Ei, Pi, Ti, Gi, Mi, Ki.
The name of the storage class that is used to create the persistentVolumeClaim for the log partition.
A boolean to determine if external access, and therefore Split Horizon, is required/enabled.
The service type that will be used to provide access to the MongoDB replica set from outside Kubernetes. Choices are `NodePort` or `LoadBalancer`.

Kubernetes documentation should be consulted on the best method for the environment.

For `NodePort` you can allocate the ClusterIP or have Kubernetes allocate one.
An array of objects (see the following attributes) that describe the horizon name, port, and clusterIP for each member of the replica set. One object is required per member.
The MongoDB horizon name for the selected pod.
The port number for either the NodePort or the LoadBalancer for the selected pod.
The clusterIP for the selected pod. Only required when `NodePort` is selected as the service. If one is not provided, Kubernetes will automatically allocate one.
The following are settings required if a sharded cluster is to be deployed.

To ensure a sharded cluster is deployed set the following:

```
replicaSet:
  enabled: false
sharding:
  enabled: true
```
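A hedged sketch of the topology portion of the sharded-cluster values (counts illustrative; see the sharded cluster example under `examples`):

```
sharding:
  enabled: true
  shardSrv:
    shards: 2
    memberPerShard: 3
  configSrv:
    replicas: 3
  mongos:
    count: 2
```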
This requires AT LEAST six secrets. For each shard, for the config server replica set (CSRS), and for all the mongos instances, there is one certificate for client communications and one for cluster authentication.

The secrets contain the X.509 key and certificate. One key/certificate pair is used for all members of the replica set/shard/mongos pool, therefore a Subject Alternative Name (SAN) entry must exist for each member of the replica set/shard/mongos.
A SAN is required for each member of each shard; this is the FQDN of the shard member, and is as follows:

```
<clusterName>-<X>-<Y>.<clusterName>-svc.<namespace>.svc.cluster.local
```

Where `<clusterName>` is the `clusterName` in the `values.yaml` for your deployment, `<X>` is the 0-based number of the shard, and `<Y>` is the 0-based number of the shard member.
The certificate must include the FQDN used external to Kubernetes as a Subject Alternative Name (SAN) if external access is required (`sharding.extAccess.enabled` set to `true`), plus an FQDN for each shard member for each domain set via `sharding.extAccess.externalDomains`.
The secrets must be named as follows:

```
mdb-<clusterName>-<X>-cert
mdb-<clusterName>-<X>-clusterfile
```

Where `<X>` is the shard number.
The two secrets for each shard can be created as follows:

```
kubectl --kubeconfig=<CONFIG_FILE> -n <NAMESPACE> create secret tls mdb-<clusterName>-<X>-cert \
  --cert=<path-to-cert> \
  --key=<path-to-key>
```

```
kubectl --kubeconfig=<CONFIG_FILE> -n <NAMESPACE> create secret tls mdb-<clusterName>-<X>-clusterfile \
  --cert=<path-to-cert> \
  --key=<path-to-key>
```
REQUIRED if `tlsEnabled.enabled` is `true`.
A single X.509 key and certificate is required for the Config Server Replica Set. These will be used to create two secrets: one for client communications and one for intra-replica set authentication. The FQDN of each replica set member must be in the certificate as a Subject Alternative Name (SAN).

The SAN FQDN for each config server replica set member is as follows:

```
<clusterName>-config-<X>.<clusterName>-svc.<namespace>.svc.cluster.local
```
Where `<clusterName>` is the `clusterName` in the `values.yaml` for your deployment and `<X>` is the 0-based number of the replica set member.
The certificate must include the FQDN used external to Kubernetes as a Subject Alternative Name (SAN) if external access is required (`sharding.extAccess.enabled` set to `true`), plus an FQDN for each config server replica set member for each domain set via `sharding.extAccess.externalDomains`.
The secrets must be named as follows:

```
mdb-<clusterName>-config-cert
mdb-<clusterName>-config-clusterfile
```
The two secrets for the config server replica set can be created as follows:

```
kubectl --kubeconfig=<CONFIG_FILE> -n <NAMESPACE> create secret tls mdb-<clusterName>-config-cert \
  --cert=<path-to-cert> \
  --key=<path-to-key>
```

```
kubectl --kubeconfig=<CONFIG_FILE> -n <NAMESPACE> create secret tls mdb-<clusterName>-config-clusterfile \
  --cert=<path-to-cert> \
  --key=<path-to-key>
```
REQUIRED if `tlsEnabled.enabled` is `true`.
A single X.509 key and certificate is required for all the mongos instances in the cluster. These will be used to create two secrets: one for client communications and one for cluster authentication. The FQDN of each mongos instance must be in the certificate as a Subject Alternative Name (SAN).
Two secrets are required for the collective of mongos instances in the cluster (not one per mongos).
The SAN FQDN for each of the mongos instances is as follows:

```
<clusterName>-svc-<X>.<clusterName>-svc.<namespace>.svc.cluster.local
```
Where `<clusterName>` is the `clusterName` in the `values.yaml` for your deployment and `<X>` is the 0-based number of the mongos instance.
The certificate must include the FQDN used external to Kubernetes as a Subject Alternative Name (SAN) if external access is required (`sharding.extAccess.enabled` set to `true`), plus an FQDN for each mongos for each domain set via `sharding.extAccess.externalDomains`.
The secrets must be named as follows:

```
mdb-<clusterName>-svc-cert
mdb-<clusterName>-svc-clusterfile
```
The two secrets for the mongos instances can be created as follows:

```
kubectl --kubeconfig=<CONFIG_FILE> -n <NAMESPACE> create secret tls mdb-<clusterName>-svc-cert \
  --cert=<path-to-cert> \
  --key=<path-to-key>
```

```
kubectl --kubeconfig=<CONFIG_FILE> -n <NAMESPACE> create secret tls mdb-<clusterName>-svc-clusterfile \
  --cert=<path-to-cert> \
  --key=<path-to-key>
```
REQUIRED if `tlsEnabled.enabled` is `true`.
If external access (i.e. access from outside Kubernetes) is required, a NodePort service will be created for the mongos pool.

To use a NodePort service, a Kubernetes worker node, or an address that resolves to a worker node, needs to be used by external clients to reach the mongos pool. The NodePort service will be allocated an IP address internal to Kubernetes.
In most Kubernetes environments the NodePort port range is 30000 to 32767. The port numbers CANNOT overlap with port numbers already in use in any deployment of any kind in the Kubernetes cluster.
To allow external access, set the `sharding.extAccess.enabled` setting to `true`. To set the port used externally, set `sharding.extAccess.port` to a numeric port value, such as `31002`. If you do not set the port number you will need to use `kubectl` commands to find the port number of the service (named *****), or use your Kubernetes dashboard (if you have one).
To access from outside Kubernetes, the connection command would look similar to:

```
mongosh "mongodb://<WORKER_NODE_FQDN>:<PORT>" --tls --tlsCAFile <CA_FILE_ABSOLUTE_PATH> --username <USERNAME>
```

e.g.

```
mongosh "mongodb://workernode5.mongodb.local:30000" --tls --tlsCAFile /etc/pki/tls/certs/ca.pem --username user55
```
The following table describes the values required in the relevant `values.yaml` specifically for sharded clusters:
Key | Purpose |
---|---|
sharding.shardSrv.shards | Number of shards (integer) |
sharding.shardSrv.memberPerShard | Number of members per shard (integer) |
sharding.shardSrv.resources.limits.cpu | The max CPU the containers can be allocated |
sharding.shardSrv.resources.limits.mem | The max memory the containers can be allocated, include units |
sharding.shardSrv.resources.requests.cpu | The initial CPU the containers can be allocated |
sharding.shardSrv.resources.requests.mem | The initial memory the containers can be allocated, include units |
sharding.shardSrv.storage.persistenceType | This is either single for all data on one partition, or multi for separate partitions for data, journal, and logs |
sharding.shardSrv.storage.nfs | Boolean value to determine if NFS is used for persistent storage, which requires a further init container to fix permissions on the NFS mount |
sharding.shardSrv.storage.nfsInitImage | Image name and tag for the init container to perform the NFS permissions modification. Defaults to the same init container image as the database |
sharding.shardSrv.storage.single.size | The size of the volume for all storage, include units |
sharding.shardSrv.storage.single.storageClass | The name of the StorageClass to use for the PersistentVolumeClaim for all the storage. Default is "" |
sharding.shardSrv.storage.multi.data.size | The size of the volume for database data storage, include units |
sharding.shardSrv.storage.multi.data.storageClass | The name of the StorageClass to use for the PersistentVolumeClaim for the database data storage. Default is "" |
sharding.shardSrv.storage.multi.journal.size | The size of the volume for database journal, include units |
sharding.shardSrv.storage.multi.journal.storageClass | The name of the StorageClass to use for the PersistentVolumeClaim for the database journal. Default is "" |
sharding.shardSrv.storage.multi.logs.size | The size of the volume for database logs, include units |
sharding.shardSrv.storage.multi.logs.storageClass | The name of the StorageClass to use for the PersistentVolumeClaim for the database logs. Default is "" |
sharding.configSrv.replicas | Number of members for the config server replica set (integer) |
sharding.configSrv.resources.limits.cpu | The max CPU the containers can be allocated |
sharding.configSrv.resources.limits.mem | The max memory the containers can be allocated, include units |
sharding.configSrv.resources.requests.cpu | The initial CPU the containers can be allocated |
sharding.configSrv.resources.requests.mem | The initial memory the containers can be allocated, include units |
sharding.configSrv.storage.persistenceType | This is either single for all data on one partition, or multi for separate partitions for data, journal, and logs |
sharding.configSrv.storage.nfs | Boolean value to determine if NFS is used for persistent storage, which requires a further init container to fix permissions on the NFS mount |
sharding.configSrv.storage.nfsInitImage | Image name and tag for the init container that performs the NFS permissions modification. Defaults to the same init container image as the database |
sharding.configSrv.storage.single.size | The size of the volume for all storage, include units |
sharding.configSrv.storage.single.storageClass | The name of the StorageClass to use for the PersistentVolumeClaim for all the storage. Default is "" |
sharding.configSrv.storage.multi.data.size | The size of the volume for database data storage, include units |
sharding.configSrv.storage.multi.data.storageClass | The name of the StorageClass to use for the PersistentVolumeClaim for the database data storage. Default is "" |
sharding.configSrv.storage.multi.journal.size | The size of the volume for database journal, include units |
sharding.configSrv.storage.multi.journal.storageClass | The name of the StorageClass to use for the PersistentVolumeClaim for the database journal. Default is "" |
sharding.configSrv.storage.multi.logs.size | The size of the volume for database logs, include units |
sharding.configSrv.storage.multi.logs.storageClass | The name of the StorageClass to use for the PersistentVolumeClaim for the database logs. Default is "" |
sharding.mongos.count | Number of mongos instances to deploy (integer) |
sharding.mongos.resources.limits.cpu | The max CPU the containers can be allocated |
sharding.mongos.resources.limits.mem | The max memory the containers can be allocated, include units |
sharding.mongos.resources.requests.cpu | The initial CPU the containers can be allocated |
sharding.mongos.resources.requests.mem | The initial memory the containers can be allocated, include units |
sharding.extAccess.enabled | Boolean determining if a NodePort will be created for external access |
sharding.extAccess.port | Port number as an integer that will be used for the external access NodePort |
The number of shards to create in the sharded cluster.
The number of members per shard.
The maximum number of CPUs that can be assigned to each pod, specified as either an integer, a float, or with the `m` suffix (for milliCPUs).

The maximum memory that can be assigned to each pod. The units suffix can be one of the following: E, P, T, G, M, K, Ei, Pi, Ti, Gi, Mi, Ki.

The initial number of CPUs that is assigned to each pod, specified as either an integer, a float, or with the `m` suffix (for milliCPUs).

The initial memory that is assigned to each pod. The units suffix can be one of the following: E, P, T, G, M, K, Ei, Pi, Ti, Gi, Mi, Ki.
The type of storage for the pod. Select `single` for data, journal, and logs to be on one partition. If this is selected, both `storage.single.size` and `storage.single.storageClass` must be provided.

If separate partitions are required for data, journal, and logs then select `multi`, and then provide all of the following:

- `sharding.shardSrv.storage.multi.data.size`
- `sharding.shardSrv.storage.multi.data.storageClass`
- `sharding.shardSrv.storage.multi.journal.size`
- `sharding.shardSrv.storage.multi.journal.storageClass`
- `sharding.shardSrv.storage.multi.logs.size`
- `sharding.shardSrv.storage.multi.logs.storageClass`
A boolean to determine if NFS is used as the persistent storage. If this is `true` then an additional init container is prepended to the init container array in the statefulSet, which will `chown` the NFS mount to the mongod user. The Kubernetes Operator uses 2000:2000 for the UID and GID of the mongod user.

This init container will run as root so the permissions can be set. This is done by setting `runAsUser` to `0` and `runAsNonRoot` to `false`. Ensure you understand the implications of this.
This will `chown` `/data`, `/journal` and `/var/log/mongodb-mms-automation` to 2000:2000.

Default is `false`.
The image to use for the init container that performs the `chown` on the NFS mounts.

The default is `quay.io/mongodb/mongodb-enterprise-init-database-ubi:1.0.9`.
The persistent storage that is assigned to each pod. The units suffix can be one of the following: E, P, T, G, M, K, Ei, Pi, Ti, Gi, Mi, Ki.
The name of the storage class that is used to create the persistentVolumeClaim for all the storage.
The persistent storage that is assigned to each pod for data storage. The units suffix can be one of the following: E, P, T, G, M, K, Ei, Pi, Ti, Gi, Mi, Ki.
The name of the storage class that is used to create the persistentVolumeClaim for the data partition.
The persistent storage that is assigned to each pod for journal storage. The units suffix can be one of the following: E, P, T, G, M, K, Ei, Pi, Ti, Gi, Mi, Ki.
The name of the storage class that is used to create the persistentVolumeClaim for the journal partition.
The persistent storage that is assigned to each pod for log storage. The units suffix can be one of the following: E, P, T, G, M, K, Ei, Pi, Ti, Gi, Mi, Ki.
The name of the storage class that is used to create the persistentVolumeClaim for the log partition.
The number of members in the config server replica set. Must be an integer.
The maximum number of CPUs that can be assigned to each pod, specified as either an integer, a float, or with the `m` suffix (for milliCPUs).

The maximum memory that can be assigned to each pod. The units suffix can be one of the following: E, P, T, G, M, K, Ei, Pi, Ti, Gi, Mi, Ki.

The initial number of CPUs that is assigned to each pod, specified as either an integer, a float, or with the `m` suffix (for milliCPUs).

The initial memory that is assigned to each pod. The units suffix can be one of the following: E, P, T, G, M, K, Ei, Pi, Ti, Gi, Mi, Ki.
The type of storage for the pod. Select `single` for data, journal, and logs to be on one partition. If this is selected, both `storage.single.size` and `storage.single.storageClass` must be provided.

If separate partitions are required for data, journal, and logs then select `multi`, and then provide all of the following:

- `sharding.configSrv.storage.multi.data.size`
- `sharding.configSrv.storage.multi.data.storageClass`
- `sharding.configSrv.storage.multi.journal.size`
- `sharding.configSrv.storage.multi.journal.storageClass`
- `sharding.configSrv.storage.multi.logs.size`
- `sharding.configSrv.storage.multi.logs.storageClass`
A boolean to determine if NFS is used as the persistent storage. If this is `true` then an additional init container is prepended to the init container array in the statefulSet, which will `chown` the NFS mount to the mongod user. The Kubernetes Operator uses 2000:2000 for the UID and GID of the mongod user.

This init container will run as root so the permissions can be set. This is done by setting `runAsUser` to `0` and `runAsNonRoot` to `false`. Ensure you understand the implications of this.
This will `chown` `/data`, `/journal` and `/var/log/mongodb-mms-automation` to 2000:2000.

Default is `false`.
The image to use for the init container that performs the `chown` on the NFS mounts.

The default is `quay.io/mongodb/mongodb-enterprise-init-database-ubi:1.0.9`.
The persistent storage that is assigned to each pod. The units suffix can be one of the following: E, P, T, G, M, K, Ei, Pi, Ti, Gi, Mi, Ki.
The name of the storage class that is used to create the persistentVolumeClaim for all the storage.
The persistent storage that is assigned to each pod for data storage. The units suffix can be one of the following: E, P, T, G, M, K, Ei, Pi, Ti, Gi, Mi, Ki.
The name of the storage class that is used to create the persistentVolumeClaim for the data partition.
The persistent storage that is assigned to each pod for journal storage. The units suffix can be one of the following: E, P, T, G, M, K, Ei, Pi, Ti, Gi, Mi, Ki.
The name of the storage class that is used to create the persistentVolumeClaim for the journal partition.
The persistent storage that is assigned to each pod for log storage. The units suffix can be one of the following: E, P, T, G, M, K, Ei, Pi, Ti, Gi, Mi, Ki.
The name of the storage class that is used to create the persistentVolumeClaim for the log partition.
The number of mongos instances to deploy.
The maximum number of CPUs that can be assigned to each pod, specified as either an integer, a float, or with the `m` suffix (for milliCPUs).

The maximum memory that can be assigned to each pod. The units suffix can be one of the following: E, P, T, G, M, K, Ei, Pi, Ti, Gi, Mi, Ki.

The initial number of CPUs that is assigned to each pod, specified as either an integer, a float, or with the `m` suffix (for milliCPUs).

The initial memory that is assigned to each pod. The units suffix can be one of the following: E, P, T, G, M, K, Ei, Pi, Ti, Gi, Mi, Ki.
A boolean to determine if external access is required/enabled. This will create a service to allow access from outside Kubernetes.
The port number as an integer that will be used with the NodePort for external access. This is normally 30000 and above and must be unique throughout the whole Kubernetes cluster.
Consult the Kubernetes documentation, together with an understanding of your Kubernetes environment, to determine the best method for the environment.
Ensure all of the following are satisfied before attempting to deploy:

- Create a new directory under the `values` directory for the environment
- Copy the example `values.yaml` file from the `examples` directory to the new directory
- Ops Manager API Access Token created, including the CIDR range of the Kubernetes Operator in the API Access List
- Ops Manager API Access Token secret created
- Ops Manager CA certificate configmap created
- MongoDB deployment CA certificate configmap created (recommended)
- MongoDB deployment TLS certificate secret created (recommended)
- Password secret for the first user created
- Password secrets for other managed database users created
- Configure external access (horizons) if required
- Configure LDAP access if required
- Ensure all values in the relevant `values.yaml` file are set
To use the Helm charts via Helmfile perform the following:

```
ENV=dev NS=mongodb KUBECONFIG=$PWD/kubeconfig helmfile apply
```

The `kubeconfig` is the config file used to gain access to the Kubernetes cluster. `ENV=dev` selects the environment to use for the `values.yaml`, in this case an environment called `dev`.
To see what the actual YAML files will look like without applying them to Kubernetes use:

```
ENV=dev helmfile template
```

To destroy the environment (the PersistentVolumes will remain) use the following command:

```
ENV=dev NS=mongodb KUBECONFIG=$PWD/kubeconfig helmfile destroy
```