GoodData CN is almost ready to run on clusters with Istio Gateway (in sidecar mode), so you don't need to use the Nginx Ingress controller.
Recent helm charts do not require any modifications, but a few things need to be taken into account when using Istio:
- Port names: gooddata-cn services follow the Istio recommendation on naming ports; however, the etcd and pulsar charts do not. Fortunately, it is possible to override the default port names. Refer to gdcn-values.yaml and pulsar-values.yaml to see how to add the `tcp-` prefix to ports and make Istio protocol discovery happy (see the sketch after this list).
- Ingress class: gooddata-cn defines `nginx` as the default `ingressClassName`. I assume this ingress class is NOT available on your Istio-enabled cluster. This is expected until we update our apps to support Istio's custom resources natively. So when you create a new Organization, a new Ingress will be created but will not be used (because of the unregistered ingress class name). You will need to create a VirtualService for your organization manually.
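
For context, Istio derives the protocol of a port from the `<protocol>[-<suffix>]` naming convention. The snippet below only illustrates that convention on a placeholder Service; it is not the actual content of gdcn-values.yaml or pulsar-values.yaml:

```yaml
# Illustration only: a placeholder Service showing Istio's port-naming convention.
# Without a recognized prefix Istio has to sniff the protocol; a "tcp-" prefix
# declares plain TCP explicitly and keeps protocol discovery happy.
apiVersion: v1
kind: Service
metadata:
  name: example-etcd        # placeholder, not part of either chart
  namespace: gooddata
spec:
  selector:
    app: example-etcd
  ports:
    - name: tcp-client      # was "client"; the prefix marks it as plain TCP
      port: 2379
      targetPort: 2379
    - name: tcp-peer        # was "peer"
      port: 2380
      targetPort: 2380
```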
- Running Docker daemon
- curl
- KinD binary
- cloud-provider-kind
- Valid GoodData CN license key, stored in the `GDCN_LICENSE` env variable
- istioctl binary
- kubectl
- Optionally the helm binary, if you want the Kiali UI for Istio
- Create KinD cluster

```bash
kind create cluster --name kind

# get cloud-provider-kind from https://github.com/kubernetes-sigs/cloud-provider-kind/releases
cloud-provider-kind -v 0 &
```
- Install Istio and related stuff. I tested with "native sidecar" mode, which requires Kubernetes 1.29+; it resolves strange issues with Job pods not being terminated.

```bash
istioctl install --set values.pilot.env.ENABLE_NATIVE_SIDECARS=true -y --set meshConfig.accessLogFile=/dev/stdout

# These are optional but recommended for better visibility
kubectl apply -f https://raw.githubusercontent.com/istio/istio/1.24.2/samples/addons/prometheus.yaml
kubectl apply -f https://raw.githubusercontent.com/istio/istio/1.24.2/samples/addons/grafana.yaml
helm upgrade --install -n istio-system kiali-server --repo https://kiali.org/helm-charts kiali-server --set auth.strategy=anonymous
```
- Install Apache Pulsar and GoodData CN
```bash
kubectl apply -f namespaces.yaml
kind load docker-image apachepulsar/pulsar:3.3.3

helm -n pulsar upgrade --install \
  --repo https://pulsar.apache.org/charts pulsar pulsar --version 3.5.0 \
  --values pulsar-values.yaml
```
```bash
# GD CN License key
kubectl -n gooddata create secret generic gdcn-license --from-literal=license="$GDCN_LICENSE"

# provision a certificate and key for *.example.com and keep them in the files _.example.com.crt and _.example.com.key
# I'm using my own local CA, so the cacert should be passed to the k8s secret as well.
# secrets must be stored in istio-system (the gateway's namespace) so SDS can find them
kubectl -n istio-system create secret generic star.example.com \
  --from-file=cert=_.example.com.crt \
  --from-file=key=_.example.com.key \
  --from-file=cacert=ca.crt
```
```bash
# Install the official gooddata-cn chart
helm -n gooddata upgrade --install \
  --repo https://charts.gooddata.com/ \
  gooddata-cn gooddata-cn --version 3.25.0 \
  --values gdcn-values.yaml

# OR from local chart files
helm -n gooddata upgrade --install \
  gooddata-cn ./folder-with-extracted-gooddata-cn-chart \
  --values gdcn-values.yaml --set image.defaultTag=3.25.0
```
- Create Istio Ingress GW in "gooddata" namespace.
```bash
kubectl apply -f gateway.yaml
```
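
gateway.yaml is not reproduced here; a minimal sketch could look like the following, assuming the default `istio: ingressgateway` selector and the `star.example.com` secret created earlier (the Gateway name is a placeholder):

```yaml
# Sketch of a possible gateway.yaml; adjust names to your environment.
apiVersion: networking.istio.io/v1
kind: Gateway
metadata:
  name: gooddata-gateway        # placeholder name, referenced by the VirtualServices below
  namespace: gooddata
spec:
  selector:
    istio: ingressgateway       # targets the default ingress gateway pods in istio-system
  servers:
    - port:
        number: 443
        name: https
        protocol: HTTPS
      tls:
        mode: SIMPLE
        credentialName: star.example.com   # generic secret with cert/key/cacert created above
      hosts:
        - "*.example.com"
```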
- Create VirtualService for Dex
```bash
kubectl apply -f istio-virtual-service-dex.yaml
```
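
A sketch of what istio-virtual-service-dex.yaml might contain; the Dex Service name and port below are assumptions, so verify them with `kubectl -n gooddata get svc`:

```yaml
# Sketch only; check the actual Dex Service name and port in your release.
apiVersion: networking.istio.io/v1
kind: VirtualService
metadata:
  name: dex
  namespace: gooddata
spec:
  hosts:
    - auth.example.com
  gateways:
    - gooddata/gooddata-gateway          # the Gateway sketched in the previous step
  http:
    - route:
        - destination:
            host: gooddata-cn-dex.gooddata.svc.cluster.local   # assumed Service name
            port:
              number: 32000                                    # assumed port
```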
- Create delegate VS (shared VS without hosts or attached gateway)
```bash
kubectl apply -f istio-virtual-service.yaml
```
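
The delegate VS carries the actual routes but deliberately has no hosts and no gateways (Istio requires that for delegates). The sketch below assumes, purely for illustration, that everything is forwarded to a single gooddata-cn front-end Service; the Service name and port are placeholders, and the real istio-virtual-service.yaml may route individual paths to different services:

```yaml
# Sketch only; service name, port, and routing granularity are placeholders.
apiVersion: networking.istio.io/v1
kind: VirtualService
metadata:
  name: gooddata-cn-routes      # referenced by the per-organization VS later
  namespace: gooddata
spec:
  # No hosts and no gateways: this VirtualService is only used as a delegate.
  http:
    - match:
        - uri:
            prefix: /
      route:
        - destination:
            host: gooddata-cn-api-gateway.gooddata.svc.cluster.local  # placeholder
            port:
              number: 9092                                            # placeholder
```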
- Update `/etc/hosts` with the example hostnames. This guide uses hostnames in the example.com domain (RFC 6761), so we update the local DNS resolver.

```bash
LB_IP=$(kubectl get svc -n istio-system -l istio=ingressgateway -o jsonpath='{.items[0].status.loadBalancer.ingress[0].ip}')
echo "$LB_IP auth.example.com org1.example.com org2.example.com" | sudo tee -a /etc/hosts
```
- Create the org1 and org2 Organizations

```bash
kubectl apply -f organizations.yaml
```
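
organizations.yaml is not shown here; a sketch of one Organization resource follows. The field names are based on the GoodData CN Organization CRD, but verify them against the documentation for your version, and note that the admin token value is just a placeholder:

```yaml
# Sketch only; see the GoodData CN docs for the authoritative Organization schema.
apiVersion: controllers.gooddata.com/v1
kind: Organization
metadata:
  name: org1
  namespace: gooddata
spec:
  id: org1
  name: "Organization One"
  hostname: org1.example.com
  adminGroup: adminGroup
  adminUser: admin
  adminUserToken: "<hashed bootstrap token - placeholder>"
```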
- Create a VS for each of the two organizations. Note that these VirtualServices are very simple: they just hold the hostname and gateway reference. The routes are stored in the delegate VS created earlier, which keeps the configuration clean and DRY.

```bash
kubectl apply -f orgs-vs.yaml
```
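
A sketch of one such per-organization VS from orgs-vs.yaml, delegating to the shared VS sketched earlier (the Gateway and delegate names follow the earlier placeholders):

```yaml
# Sketch only; one such VirtualService per organization hostname.
apiVersion: networking.istio.io/v1
kind: VirtualService
metadata:
  name: org1
  namespace: gooddata
spec:
  hosts:
    - org1.example.com
  gateways:
    - gooddata/gooddata-gateway     # placeholder Gateway name from the earlier sketch
  http:
    - delegate:
        name: gooddata-cn-routes    # the shared delegate VS sketched earlier
        namespace: gooddata
```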
- Create a user in Dex (Dex is shared by both orgs, so any organization hostname will work):

```bash
curl -X POST -H 'Content-type: application/json' \
  -d '{"email": "[email protected]","password": "mypassword","displayName": "John Doe"}' \
  -H 'Authorization: Bearer YWRtaW46Ym9vdHN0cmFwOkdkY05hczEyMw' \
  -k https://org1.example.com/api/v1/auth/users
```

Note the `authenticationId` returned by the API. Use it in the following two requests.
- Map the Dex user to the `admin` user in both organizations:

```bash
curl -X PATCH -k https://org1.example.com/api/v1/entities/users/admin \
  -H "Authorization: Bearer YWRtaW46Ym9vdHN0cmFwOkdkY05hczEyMw" \
  -H "Content-Type: application/vnd.gooddata.api+json" \
  -d '{ "data": { "id": "admin", "type": "user", "attributes": {
        "authenticationId": "<<authenticationId-returned-above>>",
        "email": "[email protected]", "firstname": "John", "lastname": "Doe" } } }'

curl -X PATCH -k https://org2.example.com/api/v1/entities/users/admin \
  -H "Authorization: Bearer YWRtaW46Ym9vdHN0cmFwOkdkY05hczEyMw" \
  -H "Content-Type: application/vnd.gooddata.api+json" \
  -d '{ "data": { "id": "admin", "type": "user", "attributes": {
        "authenticationId": "<<authenticationId-returned-above>>",
        "email": "[email protected]", "firstname": "John", "lastname": "Doe" } } }'
```
- Log in to the UI at https://org1.example.com/ or https://org2.example.com/ with username `[email protected]` and password `mypassword`.
- Job pods keep running: the main container is "Completed" but the "istio-proxy" container keeps running. RESOLVED by using native sidecars.
- Readiness probes of calcique and afm-exec-api fail for a long time because they are unable to resolve the headless service of metadata-api or calcique, respectively. They recover eventually, but it takes a few minutes.
- mTLS setup (see the sketch below)
- TLS setup on Gateway (DONE)
- cert-manager integration
- How to handle organization hostnames that do not match the `*.example.com` wildcard?
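
For the mTLS item, one possible starting point (untested with this setup) is a namespace-wide PeerAuthentication policy enforcing strict mTLS:

```yaml
# Untested sketch: enforce mTLS for all workloads in the gooddata namespace.
# The ingress gateway is part of the mesh and will originate mTLS to the backends.
apiVersion: security.istio.io/v1
kind: PeerAuthentication
metadata:
  name: default
  namespace: gooddata
spec:
  mtls:
    mode: STRICT
```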