- Local Development
- Scaling
- Gateway Mode
- Nginx Cache
- Testing virtual-host-style requests
- Generating self-signed certificates and DH key
- Optional Features
- Running Tests
- Contribute
- License
Most of the following operations require the Minio client binary to be available locally at ./mc
See Install Minio Client for details.
Docker Compose is used for quick prototyping of the deployment without using Kubernetes.
The docker-compose examples use the following pattern:
- First, start the environment in the foreground so you can see the logs.
- Then, open a new terminal to interact with the environment (usually using the minio client ./mc).
- When done, press CTRL+C in the docker-compose environment to stop it and remove all containers.
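The workflow above can be sketched as a two-terminal session; the commands are shown as comments since they need Docker running, and the credentials are the defaults used throughout this guide. The executable part is just a prerequisite check:

```shell
# Terminal 1 (foreground, streams logs):
#   docker-compose up --build
# Terminal 2 (interact with the running stack):
#   ./mc alias set minio http://localhost:8080 12345678 12345678
#   ./mc ls minio
# Terminal 1, when done: press CTRL+C to stop and remove all containers.
#
# Quick prerequisite check before starting:
if command -v docker-compose >/dev/null 2>&1; then echo "docker-compose: ok"; else echo "docker-compose: missing"; fi
if [ -x ./mc ]; then echo "mc: ok"; else echo "mc: missing (see Install Minio Client)"; fi
```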
Start the default stack:
docker-compose up --build
If you encounter this SSL version error:
ERROR: SSL error: HTTPSConnectionPool(host='<ip-address>', port=2376): Max retries exceeded with url: /v1.30/build?q=False&pull=False&t=minio&nocache=False&forcerm=False&rm=True (Caused by SSLError(SSLError(1, u'[SSL: TLSV1_ALERT_PROTOCOL_VERSION] tlsv1 alert protocol version (_ssl.c:727)'),))
You can resolve it like this:
export COMPOSE_TLS_VERSION=TLSv1_2
Set an alias, create a bucket and upload a file:
./mc alias set minio http://localhost:8080 12345678 12345678
./mc mb minio/test
./mc cp README.md minio/test/
List the contents of the bucket:
./mc ls minio/test
- Install Minikube (latest stable version).
- Install Helm (latest stable version).
- Start a local cluster:
  minikube start --driver=docker --kubernetes-version=v1.18.15 --network-plugin=cni --cni=calico
- Switch to the minikube docker env:
  eval $(minikube -p minikube docker-env)
- Build the Docker images:
  docker-compose build
- Build the cwm-worker-logger image:
  docker build -t cwm-worker-logger ../cwm-worker-logger
  - Change the directory according to where you cloned cwm-worker-logger.
  - Make sure you checked out the relevant version of cwm-worker-logger you want to test with (e.g. git pull origin main to get the latest version).
- Build the cwm-keda-external-scaler image:
  docker build -t cwm-keda-external-scaler ../cwm-keda-external-scaler
  - Change the directory according to where you cloned cwm-keda-external-scaler.
  - Make sure you checked out the relevant version of cwm-keda-external-scaler you want to test with (e.g. git pull origin main to get the latest version).
- Create a file at .values.yaml with the following content:
  minio:
    image: minio
    tag: latest
    initDebugEnable: true
    enableServiceMonitors: false
    metricsLogger:
      image: cwm-worker-logger
      tag: latest
      DEPLOYMENT_API_METRICS_FLUSH_INTERVAL_SECONDS: "5"
      LOG_LEVEL: debug
    externalscaler:
      enabled: true
      image: cwm-keda-external-scaler
    scaledobject:
      enabled: false
    nginx:
      image: nginx
      tag: latest
    enableNginxAntiAffinityRequired: false
- You can apply additional configurations to override the configuration at helm/values.yaml.
- Deploy:
  helm upgrade -f .values.yaml --install cwm-worker-deployment-minio ./helm
- Verify that the minio pod is running:
  kubectl get pods
- Start port-forwards to the nginx service:
  kubectl port-forward service/minio-nginx 8080:8080
  kubectl port-forward service/minio-nginx 8443:8443
Add aliases:
./mc alias set http http://localhost:8080 dummykey dummypass
./mc alias set https https://localhost:8443 dummykey dummypass --insecure
Create a bucket and upload a file:
./mc mb http/test
./mc cp README.md http/test/
List the files from the https endpoint:
./mc ls https/test --insecure
Set a download policy on the bucket:
./mc policy set download http/test
Add the following to the /etc/hosts file:
127.0.0.1 example003.com example002.com
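To confirm the entries took effect without re-opening the file, the hostnames can be resolved the way libc will (a sketch; getent is available on Linux):

```shell
# Resolve the test hostnames; note that getent also consults DNS,
# so confirm the printed address is 127.0.0.1.
getent hosts example003.com example002.com || echo "hostnames not resolvable yet"
```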
Check virtual-host serving:
curl -k -v https://example003.com:8443/test/README.md -o /dev/null 2>&1 | grep CN=example003.com
curl -k -v https://example002.com:8443/test/README.md -o /dev/null 2>&1 | grep CN=example002.com
Start Redis CLI and check the recorded metrics:
kubectl exec deployment/minio-logger -c redis -it -- redis-cli
keys *
get deploymentid:minio-metrics:minio1:num_requests_in
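The metric keys appear to follow a deploymentid:minio-metrics:&lt;server&gt;:&lt;metric&gt; pattern, as in the get above. A small shell sketch extracting the metric name from such a key:

```shell
# Example key, following the pattern used in the redis-cli session above
key="deploymentid:minio-metrics:minio1:num_requests_in"
# ${key##*:} strips everything up to and including the last ':' (leaving the metric name)
metric="${key##*:}"
echo "$metric"   # num_requests_in
```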
For these tests, we will use AWS to provide all the required log backends.
- Amazon Elasticsearch -> Create a new domain
- Deployment type: Development and testing
- Elasticsearch version: 7.9
- Elasticsearch domain name: cwm-worker-logger-tests
- Instance type: t2.small.elasticsearch
- Network configuration: public access
- Domain access policy: custom: ipv4 address: allow your IP
Add the following to the default .values.yaml file (as described in the Using Helm section above):
# under metricsLogger:
LOG_PROVIDER: elasticsearch
ES_HOST:
ES_PORT:
Deploy the helm chart according to instructions for using Helm.
- Amazon S3 -> Create bucket
Add the following to the default .values.yaml file (as described in the Using Helm section above):
# under metricsLogger:
LOG_PROVIDER: s3
AWS_KEY_ID:
AWS_SECRET_KEY:
S3_BUCKET_NAME:
S3_REGION:
Deploy the helm chart according to instructions for using Helm.
This configuration disables the logger pod and runs without logging.
Add the following to the default .values.yaml file (as described in the Using Helm section above):
# under minio:
auditWebhookEndpoint: ""
# under metricsLogger:
enable: false
Deploy the helm chart according to instructions for using Helm.
Deploy a Minio instance which will be used to store the logs:
helm upgrade --install cwm-worker-deployment-minio ./helm -n logs --create-namespace \
--set minio.auditWebhookEndpoint="" \
--set minio.metricsLogger.enable=false \
--set minio.image=minio
Verify from the logs that the Minio pod is ready.
Add the following to the default .values.yaml file (as described in the Using Helm section above):
# under metricsLogger:
LOGS_FLUSH_INTERVAL: 5s
LOGS_FLUSH_RETRY_WAIT: 10s
LOG_PROVIDER: s3
S3_NON_AWS_TARGET: true
S3_ENDPOINT: http://minio.logs:8080
Deploy to storage namespace:
helm upgrade -f .values.yaml -n storage --create-namespace --install cwm-worker-deployment-minio ./helm
Start a port-forward to the storage minio service:
kubectl -n storage port-forward service/minio-server 8080
Perform some actions (upload/download objects).
Start a port-forward to the logs minio service:
kubectl -n logs port-forward service/minio-server 8080
Logs should appear in the test123 bucket.
The following types of scaling via ScaledObject are supported:
- external (the external scaler must be enabled and deployed)
- cpu
- memory
For scaling with the external metrics, a custom KEDA external scaler cwm-keda-external-scaler is used.
Make sure that KEDA has already been deployed before proceeding with the ScaledObject. Use the install-with-YAML method.
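A quick way to confirm KEDA is present before enabling the ScaledObject (a sketch; 'keda' is KEDA's default install namespace, adjust if yours differs):

```shell
# Check for KEDA's pods; the kubectl call needs a live cluster, so it is guarded
# and falls back to a message rather than failing.
if command -v kubectl >/dev/null 2>&1; then
  kubectl get pods -n keda --request-timeout=5s 2>/dev/null \
    || echo "KEDA not found in the 'keda' namespace"
else
  echo "kubectl is not installed"
fi
```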
By default, the external scaler is disabled, i.e. no scaling. To enable it, use a custom .values.yaml and deploy accordingly:
minio:
  externalscaler:
    enabled: true
Deploy: helm upgrade -f .values.yaml --install cwm-worker-deployment-minio ./helm
The external scaler should be up and running.
Now, the ScaledObject can be configured and deployed:
minio:
  scaledobject:
    enabled: true
    type: external
    pollingInterval: 10
    cooldownPeriod: 60
    minReplicaCount: 1
    maxReplicaCount: 10
    # advanced:
    #   restoreToOriginalReplicaCount: true
    #   horizontalPodAutoscalerConfig:
    #     behavior:
    #       scaleDown:
    #         stabilizationWindowSeconds: 30
    #         policies:
    #           - type: Percent
    #             value: 80
    #             periodSeconds: 15
    isActiveTtlSeconds: "60"
    scalePeriodSeconds: "60"
    scaleMetricName: "num_requests_misc"
    targetValue: "10"
For the detailed configuration under the spec, please refer to the Sample Configuration section.
The cpu or memory scaler can be configured like this:
minio:
  # ...
  scaledobject:
    enabled: true
    type: cpu # Supported types: [cpu, memory]
    metricType: Utilization # Supported metric types: [Utilization, Value, AverageValue]
    metricValue: "80"
Deploy: helm upgrade -f .values.yaml --install cwm-worker-deployment-minio ./helm
In this mode, the Minio instance acts as a gateway to another S3-compatible service.
You can start a docker-compose environment which includes two Minio instances: one acting as the gateway and one as the source instance:
docker-compose -f docker-compose-gateway.yaml up --build
Add aliases for the instances:
./mc alias set source http://localhost:8080 accesskey secretkey
./mc alias set gateway http://localhost:8082 12345678 12345678
Create a bucket and upload a file to source instance:
./mc mb source/test
echo hi | ./mc pipe source/test/hello.txt
Get the file from the gateway instance:
./mc cat gateway/test/hello.txt
See log data in Redis:
docker-compose -f docker-compose-gateway.yaml exec redis redis-cli keys '*'
See GATEWAY.md for how to get the required credentials and set them in env vars:
export GOOGLE_PROJECT_ID=
export GOOGLE_APPLICATION_CREDENTIALS_JSON='{}'
Start the docker-compose environment:
docker-compose -f docker-compose-gateway-google.yaml up --build
Add an mc alias:
./mc alias set minio http://localhost:8080 12345678 12345678
See GATEWAY.md for how to get the required credentials and set them in env vars:
export AZURE_STORAGE_ACCOUNT_NAME=
export AZURE_STORAGE_ACCOUNT_KEY=
Start the docker-compose environment:
docker-compose -f docker-compose-gateway-azure.yaml up --build
Add an mc alias:
./mc alias set minio http://localhost:8080 12345678 12345678
See GATEWAY.md for how to get the required credentials and set them in env vars:
export AWS_ACCESS_KEY_ID=
export AWS_SECRET_ACCESS_KEY=
Start the docker-compose environment:
docker-compose -f docker-compose-gateway-aws.yaml up --build
Add an mc alias:
./mc alias set minio http://localhost:8080 12345678 12345678
The Nginx cache is an optional caching layer that acts as a CDN and caches download requests for a given TTL.
Set the following in the .env file:
CDN_CACHE_ENABLE=yes
Run the docker-compose environment:
docker-compose up --build
Add an mc alias:
./mc alias set minio http://localhost:8080 12345678 12345678
Create a bucket named test and upload a file:
./mc mb minio/test
./mc cp ./README.md minio/test/README.md
Try to download the file (it should fail):
curl http://localhost:8080/test/README.md
Set the download bucket policy to allow unauthenticated download of files:
./mc policy set download minio/test
Try to download the file (it should succeed):
curl http://localhost:8080/test/README.md
Download again and check the headers:
curl -v http://localhost:8080/test/README.md > /dev/null
You should see X-Cache-Status: HIT
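The X-Cache-Status header is added by the nginx caching layer. A small sketch of pulling its value out of response headers; the sample text below is illustrative, not captured from a live run:

```shell
# Illustrative response headers, shaped like what curl -v prints on a cache hit
headers='HTTP/1.1 200 OK
X-Cache-Status: HIT
Content-Type: text/markdown'
# Extract the cache status value from the header line
status="$(printf '%s\n' "$headers" | awk -F': ' '/^X-Cache-Status:/ {print $2}')"
echo "$status"   # HIT
```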
Delete the file:
./mc rm minio/test/README.md
The file should still be available from the cache.
Wait 1 minute for the cache to expire; after that, the file will no longer be available.
Run the docker-compose environment:
docker-compose up --build
Add an mc alias, create a bucket, upload a file, and set the download policy:
./mc alias set minio http://localhost:8080 12345678 12345678
./mc mb minio/test
./mc cp README.md minio/test/
./mc policy set download minio/test
Add this to the /etc/hosts file:
127.0.0.1 example001.com test.example001.com
Download with a path-style request, i.e. http://domain/bucket/object:
curl 'http://example001.com:8080/test/README.md'
With a virtual-host-style request, i.e. http://bucket.domain/object:
curl 'http://test.example001.com:8080/README.md'
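The two request styles differ only in where the bucket name goes; a sketch building both URLs from the same parts (names taken from the steps above):

```shell
host="example001.com:8080"; bucket="test"; object="README.md"
# Path-style: the bucket is the first path segment
path_style="http://${host}/${bucket}/${object}"
# Virtual-host-style: the bucket is a subdomain of the host
vhost_style="http://${bucket}.${host}/${object}"
echo "$path_style"    # http://example001.com:8080/test/README.md
echo "$vhost_style"   # http://test.example001.com:8080/README.md
```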
The generated files are committed to Git, so you don't need to re-run the following steps, but they are documented here for reference.
Generate DH key:
openssl dhparam -out tests/hostnames/dhparam.pem 2048
Generate self-signed certificates:
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
-keyout tests/hostnames/hostname2.privkey \
-out tests/hostnames/hostname2.fullchain \
-subj "/C=IL/ST=Center/L=Tel-Aviv/O=Acme/OU=DevOps/CN=example002.com" &&\
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
-keyout tests/hostnames/hostname3.privkey \
-out tests/hostnames/hostname3.fullchain \
-subj "/C=IL/ST=Center/L=Tel-Aviv/O=Acme/OU=DevOps/CN=example003.com" &&\
cp tests/hostnames/hostname3.fullchain tests/hostnames/hostname3.chain
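To double-check what a generated certificate contains, openssl x509 can print its subject. A self-contained sketch using a throwaway key pair in a temp directory (same -subj fields as above):

```shell
tmp="$(mktemp -d)"
# Generate a throwaway self-signed certificate (mirrors the commands above)
openssl req -x509 -nodes -days 1 -newkey rsa:2048 \
  -keyout "$tmp/example.privkey" -out "$tmp/example.fullchain" \
  -subj "/C=IL/ST=Center/L=Tel-Aviv/O=Acme/OU=DevOps/CN=example002.com" 2>/dev/null
# Print the certificate subject; it should include CN = example002.com
subj="$(openssl x509 -in "$tmp/example.fullchain" -noout -subject)"
echo "$subj"
rm -rf "$tmp"
```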
See CI workflow.
- Fork the project.
- Check out the latest main branch.
- Create a feature or bugfix branch from main.
- Commit and push your changes.
- Make sure to add tests.
- Submit the PR.