xcoulon/toolchain-infra

Pre-requisites

Install AWS CLI

You can find more information about how to install the AWS CLI [here](https://docs.aws.amazon.com/cli/latest/userguide/install-cliv1.html), or simply install it using the following bash commands:

curl "https://s3.amazonaws.com/aws-cli/awscli-bundle.zip" -o "tmp/awscli-bundle.zip"
unzip /tmp/awscli-bundle.zip
sudo ./tmp/awscli-bundle/install -i /usr/local/aws -b /usr/local/bin/aws

Verify that the AWS CLI is installed correctly using aws --version

Configure AWS

Configuring Profiles

Toolchain permanent cluster

The Toolchain AWS account already has a robot account crt-robot with the minimum permissions required to create an OpenShift cluster. With the available access key and secret key, configure an AWS profile named crt-robot as follows:

aws configure --profile crt-robot
AWS Access Key ID [None]: AKIAI44QH8DHBEXAMPLE
AWS Secret Access Key [None]: je7MtGbClwBF/2Zp9Utk/h3yCo8nvbEXAMPLEKEY
Default region name [None]: us-east-2
Default output format [None]: text
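
You can check that the profile is usable by querying the caller identity, for example:

aws sts get-caller-identity --profile crt-robot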

48 Hrs temporary cluster

If you want to set up the toolchain on a 48 hrs temporary cluster, configure AWS with the profile openshift-dev as follows:

aws configure --profile openshift-dev
AWS Access Key ID [None]: AKIAI44QH8DHBEXAMPLE
AWS Secret Access Key [None]: je7MtGbClwBF/2Zp9Utk/h3yCo8nvbEXAMPLEKEY
Default region name [None]: us-east-2
Default output format [None]: text

Install openshift-installer

We need to set up the openshift-install binary to create an OpenShift cluster:

wget https://mirror.openshift.com/pub/openshift-v4/clients/ocp/latest/openshift-install-linux-4.2.4.tar.gz -P /tmp/
tar -xvf /tmp/openshift-install-linux-4.2.4.tar.gz -C /tmp/
sudo mv /tmp/openshift-install /usr/local/bin/

You can download the latest openshift-install release from https://mirror.openshift.com/pub/openshift-v4/clients/ocp/latest/
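
Once the binary is on your PATH, you can confirm the installation, for example:

openshift-install version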

Set Required environment variables

CLIENT_SECRET

To set up the RHD identity provider, you need to register a client with RHD and, using that client's secret, create a Secret in the openshift-config namespace to be used by the OAuth cluster config. To create the required Secret, set the CLIENT_SECRET environment variable to the client secret's base64-encoded value:

export CLIENT_SECRET=base64_encoded_client_secret
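
For example, assuming the raw client secret obtained from RHD is the placeholder value your-rhd-client-secret, you can encode and export it in one step:

export CLIENT_SECRET=$(echo -n 'your-rhd-client-secret' | base64)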

PULL_SECRET

We store the host, member, and 48 hrs temp cluster configuration files under the /config directory, for which the pull secret needs to be set via the PULL_SECRET environment variable:

export PULL_SECRET='{"auths":{"cloud.openshift.com":{"auth":"HSADJDFJJLDFbhf345==","email":"[email protected]"},"quay.io":{"auth":"jkfdsjfTH78==","email":"[email protected]"},"registry.connect.redhat.com":{"auth":"jhfkjdjfjdADSDS398njdnfj==","email":"[email protected]"},"registry.redhat.io":{"auth":"jdfjfdhfADSDSFDSF67dsgh==","email":"[email protected]"}}}'

You can download or copy the required pull secret from https://cloud.redhat.com/openshift/install/aws/installer-provisioned
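
If you saved the pull secret to a file, you can export it directly from there (the path below is only an example):

export PULL_SECRET="$(cat ~/Downloads/pull-secret.txt)"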

SSH_PUBLIC_KEY

We need to add an SSH key to the authorized keys on all the nodes created by the installer, so we pass the SSH public key via the SSH_PUBLIC_KEY environment variable:

export SSH_PUBLIC_KEY="ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAklOUpkDHrfHY17SbrmTIpNLTGK9Tjom/BWDSUGPl+nafzlHDTYW7hdI4yZ5ew18JH4JW9jbhUFrviQzM7xlELEVf4h9lFX5QVkbPppSwg0cda3Pbv7kOdJ/MTyBlWXFCR+HAo3FXRitBqxiX1nKhXpHAZsMciLq8V6RjsNAQwdsdMFvSlVK/7XAt3FaoJoAsncM1Q9x5+3V0Ww68/eIFmb1zuUFljQJKprrX88XypNDvjYNby6vw/Pb0rwert/EnmZ+AW4OZPnTPI89ZPmVMLuayrD2cE86Z/il8b+gw3r3+1nKatmIkjn2so1d01QraTlMqVSsbxNrRFi9wrf+M7Q== [email protected]"
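
If you do not already have a key pair, you can generate one and export its public part; the key path below is only an example:

ssh-keygen -t rsa -b 4096 -N '' -f ~/.ssh/toolchain_rsa
export SSH_PUBLIC_KEY="$(cat ~/.ssh/toolchain_rsa.pub)"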

Setting up Toolchain on Host and Member cluster

To set up a hosted toolchain on multiple clusters (currently we use 2 clusters, i.e. host and member), we need to do the following:

  1. Create host and member cluster

  2. Setup RHD identity provider

  3. Create admin users with cluster-admin roles

  4. Remove self-provisioner cluster role from authenticated users

  5. Deploy registration service, host-operator on host cluster

  6. Deploy member-operator on member cluster

  7. Create/setup KubeFedCluster

With Single step

Permanent Clusters

To achieve all of the above on the permanent clusters, run:

./setup_toolchain.sh

Overwriting the operators' namespaces

If you want to override the operators' namespaces, you can use the respective flags, for example:

./setup_toolchain.sh -hs my_host_ns -mn my_member_ns

48 Hrs Dev clusters

To achieve all of the above on the 48 hrs temporary clusters, run:

./setup_toolchain.sh -d

With Multiple steps

With the default namespaces for the operators

If you want to run this setup one step at a time, you can follow these steps:

./setup_cluster.sh -t host
./setup_cluster.sh -t member
./setup_kubefed.sh

With overridden operator namespaces

If you want to override the operators' namespaces, you can use the respective flags or environment variables, as in the following steps:

./setup_cluster.sh -t host -hs my_host_ns -mn my_member_ns
./setup_cluster.sh -t member -hs my_host_ns -mn my_member_ns
./setup_kubefed.sh
MEMBER_OPERATOR_NS=my_member_ns HOST_OPERATOR_NS=my_host_ns ./setup_kubefed.sh

Cleaning up the default kubeadmin

Once the host and member clusters are set up with everything required and you have confirmed that the crt-admin users can log in and have the required access to cluster-scoped resources, you can remove the default kubeadmin user with the following step:

oc delete secret kubeadmin -n kube-system
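
Before running the command above, you can confirm that a crt-admin user still has cluster-admin level access, for example:

oc whoami
oc auth can-i '*' '*' --all-namespaces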

Destroying cluster

Make sure to export the required AWS profile:

- If your cluster was created for 48 hrs, export AWS_PROFILE=openshift-dev
- If your cluster is a permanent cluster, export AWS_PROFILE=crt-robot

From the directory that stores the metadata for the OpenShift 4 cluster, run:

openshift-install destroy cluster
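
A complete sequence might look like the following; the install directory path is only an example:

export AWS_PROFILE=crt-robot   # or openshift-dev for a 48 hrs cluster
cd ~/clusters/host             # the directory containing metadata.json
openshift-install destroy cluster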

If you lost the metadata required to destroy the OpenShift 4 cluster

If the OpenShift 4 cluster was deployed by the installer and you lost the metadata, there is no way to delete the cluster with the installer without it. To destroy the cluster using the installer, you need to regenerate the metadata.json file.

Set the required variables as follows:

CLUSTER_NAME=NAME
AWS_REGION=REGION
CLUSTER_UUID=$(oc get clusterversions.config.openshift.io version -o jsonpath='{.spec.clusterID}{"\n"}')
INFRA_ID=$(oc get infrastructures.config.openshift.io cluster -o jsonpath='{.status.infrastructureName}{"\n"}')

Generate metadata.json

echo "{\"clusterName\":\"${CLUSTER_NAME}\",\"clusterID\":\"${CLUSTER_UUID}\",\"infraID\":\"${INFRA_ID}\",\"aws\":{\"region\":\"${AWS_REGION}\",\"identifier\":[{\"kubernetes.io/cluster/${INFRA_ID}\":\"owned\"},{\"openshiftClusterID\":\"${CLUSTER_UUID}\"}]}}" > metadata.json

Destroy the cluster using the generated metadata.json file:

openshift-install destroy cluster
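
If the generated metadata.json is not in your current working directory, you can point the installer at the directory that contains it; the path below is only an example:

openshift-install destroy cluster --dir /path/to/dir-containing-metadata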
