Adding s390x arch support in k8s 1.28 provider #1201

Closed
2 changes: 1 addition & 1 deletion K8S.md
@@ -48,7 +48,7 @@ make cluster-up
# Attach to node01 console
docker exec -it ${KUBEVIRT_PROVIDER}-node01 screen /dev/pts/0
```
Use `vagrant:vagrant` to login.
Use `vagrant:vagrant` for x86 and `cloud-user:cloud-user` for s390x to log in.
Note: it is sometimes `/dev/pts/1` or `/dev/pts/2`, try them in case you don't get a prompt.

Make sure you don't leave open screens, else the next screen will be messed up.
18 changes: 11 additions & 7 deletions KUBEVIRTCI_LOCAL_TESTING.md
@@ -21,7 +21,7 @@ cd $KUBEVIRTCI_DIR

```bash
# Build a provider. This includes starting it with cluster-up for verification and shutting it down for cleanup.
(cd cluster-provision/k8s/1.27; ../provision.sh)
(cd cluster-provision/k8s/1.28; ../provision.sh)
```

Note:
@@ -34,7 +34,7 @@ please use `export BYPASS_PMAN_CHANGE_CHECK=true` to bypass provision-manager check
# set local provision test flag (mandatory)
export KUBEVIRTCI_PROVISION_CHECK=1
```

This ensures the container registry is set to `quay.io` and the container suffix to `:latest`.
If `KUBEVIRTCI_PROVISION_CHECK` is not used,
you can set `KUBEVIRTCI_CONTAINER_REGISTRY` (default: `quay.io`), `KUBEVIRTCI_CONTAINER_ORG` (default: `kubevirtci`), and `KUBEVIRTCI_CONTAINER_SUFFIX` (default: according to the gocli tag)
in order to use a custom image.
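For example, the overrides described above could be combined as follows. The registry, org, and tag values here are placeholders, not defaults from this repo, and the composed image name is a sketch of how these variables are typically joined:

```shell
# Hypothetical values -- substitute your own registry, org, and tag.
export KUBEVIRTCI_CONTAINER_REGISTRY=registry.example.com
export KUBEVIRTCI_CONTAINER_ORG=myorg
export KUBEVIRTCI_CONTAINER_SUFFIX=:mytag

# The provision tooling would then reference images such as:
echo "${KUBEVIRTCI_CONTAINER_REGISTRY}/${KUBEVIRTCI_CONTAINER_ORG}/centos9${KUBEVIRTCI_CONTAINER_SUFFIX}"
```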
@@ -48,7 +48,7 @@ export KUBEVIRTCI_GOCLI_CONTAINER=quay.io/kubevirtci/gocli:latest
### start cluster

```bash
export KUBEVIRT_PROVIDER=k8s-1.30
export KUBEVIRT_PROVIDER=k8s-1.28
**Member:** Why are you dropping the version here?

**Contributor (author):** @brianmcarey I just made the version consistent across this file :)
In one place it was 1.27:
`(cd cluster-provision/k8s/1.28; ../provision.sh)`
In two other places it was 1.30:
`export KUBEVIRT_PROVIDER=k8s-1.30`
`export KUBEVIRT_PROVIDER=k8s-1.30`
In two other places, 1.21:
`export PHASES=k8s; (cd cluster-provision/k8s/1.21; ../provision.sh)`
So I changed them all to 1.28.

Let me change all of these to 1.30.

**Contributor (author):** Addressed in the k8s-1.30-provider-slim-s390x branch.

export KUBECONFIG=$(./cluster-up/kubeconfig.sh)
export KUBEVIRT_NUM_NODES=2

@@ -59,7 +59,7 @@ make cluster-up
#### start cluster with prometheus, alertmanager and grafana
To enable prometheus, please also export the following variables before running `make cluster-up`:
```bash
export KUBEVIRT_PROVIDER=k8s-1.30
export KUBEVIRT_PROVIDER=k8s-1.28
**Member:** This can probably stay at k8s-1.30.

**Contributor (author):** Yes, will change.

**Contributor (author):** Addressed in the k8s-1.30-provider-slim-s390x branch.

export KUBEVIRT_DEPLOY_PROMETHEUS=true
export KUBEVIRT_DEPLOY_PROMETHEUS_ALERTMANAGER=true
export KUBEVIRT_DEPLOY_GRAFANA=true
@@ -134,12 +134,16 @@ For that we have phased mode.
Usage: export the required mode, e.g. `export PHASES=linux` or `export PHASES=k8s`,
and then run the provision. The full flow will be:

`export PHASES=linux; (cd cluster-provision/k8s/1.21; ../provision.sh)`
`export PHASES=k8s; (cd cluster-provision/k8s/1.21; ../provision.sh)`
`export PHASES=linux; (cd cluster-provision/k8s/1.28; ../provision.sh)`
`export PHASES=k8s; (cd cluster-provision/k8s/1.28; ../provision.sh)`
Run the `k8s` step as often as needed. It reuses the intermediate image that was created
by the `linux` phase.
Note:
1. By default, when you run the `k8s` phase alone, it uses the centos9 image specified in `cluster-provision/k8s/base-image`, not the one built locally in the `linux` phase. To make the `k8s` phase use the locally built centos9 image, update `cluster-provision/k8s/base-image` with the locally built image name and tag (default: `quay.io/kubevirtci/centos9:latest`).
**Member:** Thanks for adding this.

2. Also note that if you run both the `linux` and `k8s` phases together, the intermediate container image generated after the `linux` phase is not saved. So, to get the centos9 image required for the `k8s` stage, you have to run the `linux` phase alone.
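The base-image update described in note 1 can be sketched as commands. The file path is this repo's layout; the image tag is the stated default, so adjust it to whatever your `linux` phase actually produced:

```shell
# Assumed locally built image name/tag from the linux phase (adjust as needed).
local_image="quay.io/kubevirtci/centos9:latest"

# Point the k8s phase at the locally built image instead of the upstream one.
base_image_file="cluster-provision/k8s/base-image"
mkdir -p "$(dirname "$base_image_file")"
printf '%s\n' "$local_image" > "$base_image_file"
cat "$base_image_file"
```

After this, running the `k8s` phase (`export PHASES=k8s; (cd cluster-provision/k8s/1.28; ../provision.sh)`) should pick up the local image.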

Once you are done, either check the cluster manually, or use:
`export PHASES=k8s; export CHECK_CLUSTER=true; (cd cluster-provision/k8s/1.21; ../provision.sh)`
`export PHASES=k8s; export CHECK_CLUSTER=true; (cd cluster-provision/k8s/1.28; ../provision.sh)`

### provision without pre-pulling images

57 changes: 40 additions & 17 deletions cluster-provision/centos9/Dockerfile
@@ -1,33 +1,56 @@
FROM quay.io/fedora/fedora:39 AS base

FROM quay.io/kubevirtci/fedora@sha256:e3a6087f62f288571db14defb7e0e10ad7fe6f973f567b0488d3aac5e927035a
RUN dnf -y install jq iptables iproute dnsmasq qemu socat openssh-clients screen bind-utils tcpdump iputils libguestfs-tools-c && dnf clean all

ARG centos_version
FROM base AS imageartifactdownload

ARG BUILDARCH

RUN dnf -y install jq iptables iproute dnsmasq qemu openssh-clients screen bind-utils tcpdump iputils && dnf clean all
ARG centos_version

WORKDIR /

COPY vagrant.key /vagrant.key
RUN echo "Centos9 version $centos_version"

RUN chmod 700 vagrant.key
COPY scripts/download_box.sh /

ENV DOCKERIZE_VERSION v0.6.1
RUN if test "$BUILDARCH" != "s390x"; then \
/download_box.sh https://cloud.centos.org/centos/9-stream/x86_64/images/CentOS-Stream-Vagrant-9-$centos_version.x86_64.vagrant-libvirt.box && \
curl -L -o /initramfs-amd64.img http://mirror.stream.centos.org/9-stream/BaseOS/x86_64/os/images/pxeboot/initrd.img && \
curl -L -o /vmlinuz-amd64 http://mirror.stream.centos.org/9-stream/BaseOS/x86_64/os/images/pxeboot/vmlinuz; \
else \
/download_box.sh https://cloud.centos.org/centos/9-stream/s390x/images/CentOS-Stream-GenericCloud-9-$centos_version.s390x.qcow2 && \
**Member:** Can't we use the same generic cloud image for x86?

**Contributor (author):** @brianmcarey I think we can change it, but I just don't want to mix those x86 changes in here. As this PR is already XXL, I wanted to handle that as a separate PR. I will create a follow-up PR for this one.

**Contributor (author):** Added the placeholder PR #1242 so that this can be further tracked there.

# Access virtual machine disk images directly by using LIBGUESTFS_BACKEND=direct, instead of libvirt
export LIBGUESTFS_BACKEND=direct && \
guestfish --ro --add box.qcow2 --mount /dev/sda1:/ ls /boot/ | grep -E '^vmlinuz-|^initramfs-' | xargs -I {} guestfish --ro --add box.qcow2 -i copy-out /boot/{} / ; \
fi

RUN curl -LO https://github.com/jwilder/dockerize/releases/download/$DOCKERIZE_VERSION/dockerize-linux-amd64-$DOCKERIZE_VERSION.tar.gz \
&& tar -xzvf dockerize-linux-amd64-$DOCKERIZE_VERSION.tar.gz \
&& rm dockerize-linux-amd64-$DOCKERIZE_VERSION.tar.gz \
&& chmod u+x dockerize \
&& mv dockerize /usr/local/bin/

COPY scripts/download_box.sh /
FROM base as nodecontainer

RUN echo "Centos9 version $centos_version"
ARG BUILDARCH

WORKDIR /

ENV CENTOS_URL https://cloud.centos.org/centos/9-stream/x86_64/images/CentOS-Stream-Vagrant-9-$centos_version.x86_64.vagrant-libvirt.box
COPY vagrant.key /vagrant.key

RUN /download_box.sh ${CENTOS_URL}
RUN chmod 700 vagrant.key

ENV DOCKERIZE_VERSION v0.6.1

RUN curl -L -o /initrd.img http://mirror.stream.centos.org/9-stream/BaseOS/x86_64/os/images/pxeboot/initrd.img
RUN curl -L -o /vmlinuz http://mirror.stream.centos.org/9-stream/BaseOS/x86_64/os/images/pxeboot/vmlinuz
RUN if test "$BUILDARCH" != "s390x"; then \
curl -L -o dockerize-linux-$BUILDARCH.tar.gz https://github.com/jwilder/dockerize/releases/download/$DOCKERIZE_VERSION/dockerize-linux-amd64-$DOCKERIZE_VERSION.tar.gz; \
else \
# Temporary till s390x support is upstreamed to dockerize (https://github.com/jwilder/dockerize/pull/200)
curl -L -o dockerize-linux-$BUILDARCH.tar.gz https://github.com/ibm-jitendra/kubevirt_pkgs/raw/main/dockerize-linux-s390x.tar.gz; \
**Member:** Is there any timeline for the upstreaming of this? I would prefer not to rely on a random tarball.

**Contributor (author):** @brianmcarey: No update as of now. We tried to reach the upstream owner on the PR, but no luck. Now Vamsi is trying to reach him on LinkedIn.

**Contributor (author):** Thanks to @vamsikrishna-siddu for the follow-up. The changes are now upstreamed. This is addressed in the k8s-1.30-provider-slim-s390x branch.

fi && \
tar -xzvf dockerize-linux-$BUILDARCH.tar.gz && \
rm dockerize-linux-$BUILDARCH.tar.gz && \
chmod u+x dockerize && \
mv dockerize /usr/local/bin/

COPY --from=imageartifactdownload /box.qcow2 box.qcow2
COPY --from=imageartifactdownload /vmlinuz-* /vmlinuz
COPY --from=imageartifactdownload /initramfs-* /initrd.img

COPY scripts/* /
14 changes: 11 additions & 3 deletions cluster-provision/centos9/scripts/download_box.sh
@@ -3,6 +3,14 @@
set -e
set -o pipefail

curl -L $1 | tar -zxvf - box.img
qemu-img convert -O qcow2 box.img box.qcow2
rm box.img

ARCH=$(uname -m)

#For the s390x architecture, instead of vagrant box image, generic cloud (qcow2) image is used directly.
if [ "$ARCH" == "s390x" ]; then
curl -L $1 -o box.qcow2
else
curl -L $1 | tar -zxvf - box.img
qemu-img convert -O qcow2 box.img box.qcow2
rm box.img
fi
1 change: 1 addition & 0 deletions cluster-provision/centos9/scripts/kernel.s390x.args
@@ -0,0 +1 @@
root=/dev/vda1 ro no_timer_check console=tty0 console=ttyS0,115200n8 net.ifnames=0 biosdevname=0 crashkernel=1G-4G:192M,4G-64G:256M,64G-:512M
**Member:** It would be good to have a separate commit for these kernel args that explains why some of the different options are required.

**Contributor (author):** These were the defaults within the generic cloud image of centos9 for s390x. I've copied them into this file mainly to keep parity with the x86-based kernel args file, which I think exists for configuring kernel args. Will try to come up with a readme around this.

**Contributor (author):** Added this as a separate commit in the k8s-1.30-provider-slim-s390x branch, where I've explained all the details in the commit message: 6f3f133

120 changes: 107 additions & 13 deletions cluster-provision/centos9/scripts/vm.sh
@@ -11,6 +11,9 @@ KERNEL_ARGS=""
NEXT_DISK=""
BLOCK_DEV=""
BLOCK_DEV_SIZE=""
#TODO: Check other places where vagrant as username is used
VM_USER=$( [ "$(uname -m)" = "s390x" ] && echo "cloud-user" || echo "vagrant" )
VM_USER_SSH_KEY="vagrant.key"

while true; do
case "$1" in
@@ -38,6 +41,12 @@ function calc_next_disk {
if [ -n "$NEXT_DISK" ]; then next=${NEXT_DISK}; fi
if [ "$last" = "00" ]; then
last="box.qcow2"
# Customize qcow2 image using virt-sysprep (with KVM accelerator)
if [ "$(uname -m)" = "s390x" ]; then
export LIBGUESTFS_BACKEND=direct
export LIBGUESTFS_BACKEND_SETTINGS=force_kvm
virt-sysprep -a box.qcow2 --run-command 'useradd -m cloud-user' --append '/etc/cloud/cloud.cfg:runcmd:' --append '/etc/cloud/cloud.cfg: - hostnamectl set-hostname ""' --root-password password:Zxc@123 --ssh-inject cloud-user:string:"ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA6NF8iallvQVp22WDkTkyrtvp9eWW6A8YVr+kz4TjGYe7gHzIw+niNltGEFHzD8+v1I2YJ6oXevct1YeS0o9HZyN1Q9qgCgzUFtdOKLv6IedplqoPkcmF0aYet2PkEDo3MlTBckFXPITAMzF8dJSIFo9D8HfdOV0IAdx4O7PtixWKn5y2hMNG0zQPyUecp4pzC6kivAIhyfHilFR61RGL+GPXQ2MWZWFYbAGjyiYJnAmCP3NOTd0jMZEnDkbUvxhMmBYSdETk1rRgm+R4LOzFUGaHqHDLKLX+FIPKcF96hrucXzcWyLbIbEgE98OHlnVYCzRdK8jlqm8tehUc9c9WhQ== vagrant insecure public key"
fi
else
last=$(printf "/disk%02d.qcow2" $last)
fi
@@ -50,7 +59,7 @@ cat >/usr/local/bin/ssh.sh <<EOL
#!/bin/bash
set -e
dockerize -wait tcp://192.168.66.1${n}:22 -timeout 300s &>/dev/null
ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no vagrant@192.168.66.1${n} -i vagrant.key -p 22 -q \$@
ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no ${VM_USER}@192.168.66.1${n} -i ${VM_USER_SSH_KEY} -p 22 -q \$@
EOL
chmod u+x /usr/local/bin/ssh.sh
echo "done" >/ssh_ready
@@ -184,15 +193,100 @@ if [ "${NUMA}" -gt 1 ]; then
done
fi

exec qemu-system-x86_64 -enable-kvm -drive format=qcow2,file=${next},if=virtio,cache=unsafe ${block_dev_arg} \
-device virtio-net-pci,netdev=network0,mac=52:55:00:d1:55:${n} \
-netdev tap,id=network0,ifname=tap${n},script=no,downscript=no \
-device virtio-rng-pci \
-initrd /initrd.img \
-kernel /vmlinuz \
-append "$(cat /kernel.args) $(cat /additional.kernel.args) ${KERNEL_ARGS}" \
-vnc :${n} -cpu host,migratable=no,+invtsc -m ${MEMORY} -smp ${CPU} ${numa_arg} \
-serial pty -M q35,accel=kvm,kernel_irqchip=split \
-device intel-iommu,intremap=on,caching-mode=on -device intel-hda -device hda-duplex -device AC97 \
-uuid $(cat /proc/sys/kernel/random/uuid) \
${QEMU_ARGS}
if [ "$(uname -m)" != "s390x" ]; then
#Docs: https://www.qemu.org/docs/master/system/invocation.html
qemu_system_cmd="qemu-system-x86_64 \
-enable-kvm \
-drive format=qcow2,file=${next},if=virtio,cache=unsafe ${block_dev_arg} \
-device virtio-net-pci,netdev=network0,mac=52:55:00:d1:55:${n} \
-netdev tap,id=network0,ifname=tap${n},script=no,downscript=no \
-device virtio-rng-pci \
-initrd /initrd.img \
-kernel /vmlinuz \
-append \"$(cat /kernel.args) $(cat /additional.kernel.args) ${KERNEL_ARGS}\" \
-vnc :${n} \
-cpu host,migratable=no,+invtsc \
-m ${MEMORY} \
-smp ${CPU} ${numa_arg} \
-serial pty \
-machine q35,accel=kvm,kernel_irqchip=split \
-device intel-iommu,intremap=on,caching-mode=on \
-device intel-hda \
-device hda-duplex \
-device AC97 \
-uuid $(cat /proc/sys/kernel/random/uuid) \
${QEMU_ARGS}"
else
**Member:** I would prefer that the else covered the more common x86_64 case.

**Contributor (author):** If I understand you right: have the s390x case in the if block and the x86_64 case in the else block. I will change it accordingly.

**Contributor (author):** This is addressed in the k8s-1.30-provider-slim-s390x branch.

# As per https://www.qemu.org/docs/master/system/s390x/bootdevices.html#booting-without-bootindex-parameter -drive if=virtio can't be specified with bootindex for s390x
qemu_system_cmd="qemu-system-s390x \
-enable-kvm \
-drive format=qcow2,file=${next},if=none,cache=unsafe,id=drive1 ${block_dev_arg} \
-device virtio-blk,drive=drive1,bootindex=1 \
-device virtio-net-ccw,netdev=network0,mac=52:55:00:d1:55:${n} \
-netdev tap,id=network0,ifname=tap${n},script=no,downscript=no \
-device virtio-rng \
-initrd /initrd.img \
-kernel /vmlinuz \
-append \"$(cat /kernel.s390x.args) $(cat /additional.kernel.args) ${KERNEL_ARGS}\" \
-vnc :${n} \
-cpu host \
-m ${MEMORY} \
-smp ${CPU} ${numa_arg} \
-serial pty \
-machine s390-ccw-virtio,accel=kvm \
-uuid $(cat /proc/sys/kernel/random/uuid) \
${QEMU_ARGS}"
fi

# Remove secondary network devices from qemu_system_cmd and move them to qemu_monitor_cmds,
# so that those devices are added via the qemu monitor after the VM has started. This avoids
# the primary network interface being named something other than eth0, which matters mainly
# for s390x: if the primary interface is not eth0, it does not get an IP from the DHCP server.
qemu_monitor_cmds=()
IFS=' ' read -r -a qemu_parts <<< "$qemu_system_cmd"
for ((i = 0; i < ${#qemu_parts[@]}; i++)); do
part="${qemu_parts[$i]}"
nxtpart="${qemu_parts[$i + 1]}"
# Check for secondary network devices and move them to qemu_monitor_cmds
if { [ "$part" == "-netdev" ] && [[ "$nxtpart" == *"secondarynet"* ]]; } || \
{ [ "$part" == "-device" ] && [[ "$nxtpart" == *"virtio-net-ccw"* ]] && [[ "$nxtpart" == *"secondarynet"* ]]; }; then
      qemu_system_cmd=$(echo "$qemu_system_cmd" | sed "s/ $part $nxtpart//")
qemu_monitor_cmds+=("${part}_add $nxtpart")
fi
done

qemu_system_cmd+=" -monitor unix:/tmp/qemu-monitor.sock,server,nowait"
echo "qemu_system_cmd is ${qemu_system_cmd}"
echo "qemu_monitor_cmds is ${qemu_monitor_cmds[*]}"

PID=0
eval "nohup $qemu_system_cmd &"
PID=$!

# Function to check if QEMU monitor socket is ready
is_qemu_monitor_ready() {
socat - UNIX-CONNECT:/tmp/qemu-monitor.sock < /dev/null > /dev/null 2>&1
}

# Wait for the QEMU monitor socket to be ready
elapsed=0
while ! is_qemu_monitor_ready; do
if [ $elapsed -ge 60 ]; then
echo "QEMU monitor socket did not become available within 60 seconds."
exit 1
fi
sleep 1
elapsed=$((elapsed + 1))
done
echo "QEMU monitor socket is ready."

# Send commands to QEMU monitor
if [ "${#qemu_monitor_cmds[@]}" -gt 0 ]; then
# Sort commands in reverse alphabetical order so that -netdev are passed first then -dev
IFS=$'\t' qemu_monitor_cmds_sorted=($(printf "%s\n" "${qemu_monitor_cmds[@]}" | sort -r))
for qemu_monitor_cmd in "${qemu_monitor_cmds_sorted[@]}"; do
echo "$qemu_monitor_cmd" | socat - UNIX-CONNECT:/tmp/qemu-monitor.sock
done
fi

wait $PID
3 changes: 2 additions & 1 deletion cluster-provision/gocli/Makefile
@@ -2,6 +2,7 @@ SHELL := /bin/bash

IMAGES_FILE ?= images.json
KUBEVIRTCI_IMAGE_REPO ?= quay.io/kubevirtci
GOARCH ?= $$(uname -m | grep -q s390x && echo s390x || echo amd64)

export GO111MODULE=on
export GOPROXY=direct
@@ -19,7 +20,7 @@ test:

.PHONY: gocli
cli:
CGO_ENABLED=0 GOOS=linux GOARCH=amd64 $(GO) build -ldflags "-X 'kubevirt.io/kubevirtci/cluster-provision/gocli/images.SUFFIX=:$(KUBEVIRTCI_TAG)'" -o $(BIN_DIR)/cli ./cmd/cli
CGO_ENABLED=0 GOOS=linux GOARCH=${GOARCH} $(GO) build -ldflags "-X 'kubevirt.io/kubevirtci/cluster-provision/gocli/images.SUFFIX=:$(KUBEVIRTCI_TAG)'" -o $(BIN_DIR)/cli ./cmd/cli
.PHONY: fmt
fmt:
$(GO) fmt ./cmd/...
15 changes: 9 additions & 6 deletions cluster-provision/gocli/cmd/provision.go
@@ -6,6 +6,7 @@ import (
"os"
"os/signal"
"path/filepath"
"runtime"
"strconv"
"strings"

@@ -51,6 +52,7 @@

func provisionCluster(cmd *cobra.Command, args []string) (retErr error) {
var base string
sshUser := utils.GetSSHUserByArchitecture(runtime.GOARCH)
packagePath := args[0]
versionBytes, err := os.ReadFile(filepath.Join(packagePath, "version"))
if err != nil {
@@ -228,13 +230,14 @@
}

// Wait for ssh.sh script to exist
logrus.Info("Wait for ssh.sh script to exist")
err = _cmd(cli, nodeContainer(prefix, nodeName), "while [ ! -f /ssh_ready ] ; do sleep 1; done", "checking for ssh.sh script")
if err != nil {
logrus.Info("Error: Wait for ssh.sh script to exist")
return err
}

// Wait for the VM to be up
err = _cmd(cli, nodeContainer(prefix, nodeName), "ssh.sh echo VM is up", "waiting for node to come up")
err = waitForVMToBeUp(cli, prefix, nodeName)
if err != nil {
return err
}
@@ -252,21 +255,21 @@
if err != nil {
return err
}
err = _cmd(cli, nodeContainer(prefix, nodeName), "if [ -f /scripts/extra-pre-pull-images ]; then scp -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -i vagrant.key -P 22 /scripts/extra-pre-pull-images vagrant@192.168.66.101:/tmp/extra-pre-pull-images; fi", "copying /scripts/extra-pre-pull-images if existing")
err = _cmd(cli, nodeContainer(prefix, nodeName), fmt.Sprintf("if [ -f /scripts/extra-pre-pull-images ]; then scp -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -i vagrant.key -P 22 /scripts/extra-pre-pull-images %s@192.168.66.101:/tmp/extra-pre-pull-images; fi", sshUser), "copying /scripts/extra-pre-pull-images if existing")
**Member:** I think some of this ssh handling has changed since #1209 was merged.

**Contributor (author):** @brianmcarey These PR changes are already part of my branch, but I will check whether I need to adjust them. As #1209 is quite big, I am still digesting it.

**Contributor (author):** We don't need to change this, as these function calls have not changed in #1029.

if err != nil {
return err
}
err = _cmd(cli, nodeContainer(prefix, nodeName), "if [ -f /scripts/fetch-images.sh ]; then scp -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -i vagrant.key -P 22 /scripts/fetch-images.sh vagrant@192.168.66.101:/tmp/fetch-images.sh; fi", "copying /scripts/fetch-images.sh if existing")
err = _cmd(cli, nodeContainer(prefix, nodeName), fmt.Sprintf("if [ -f /scripts/fetch-images.sh ]; then scp -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -i vagrant.key -P 22 /scripts/fetch-images.sh %s@192.168.66.101:/tmp/fetch-images.sh; fi", sshUser), "copying /scripts/fetch-images.sh if existing")
if err != nil {
return err
}

err = _cmd(cli, nodeContainer(prefix, nodeName), "ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -i vagrant.key vagrant@192.168.66.101 'mkdir -p /tmp/ceph /tmp/cnao /tmp/nfs-csi /tmp/nodeports /tmp/prometheus /tmp/whereabouts /tmp/kwok'", "Create required manifest directories before copy")
err = _cmd(cli, nodeContainer(prefix, nodeName), fmt.Sprintf("ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -i vagrant.key %s@192.168.66.101 'mkdir -p /tmp/ceph /tmp/cnao /tmp/nfs-csi /tmp/nodeports /tmp/prometheus /tmp/whereabouts /tmp/kwok'", sshUser), "Create required manifest directories before copy")
if err != nil {
return err
}
// Copy manifests to the VM
err = _cmd(cli, nodeContainer(prefix, nodeName), "scp -r -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -i vagrant.key -P 22 /scripts/manifests/* vagrant@192.168.66.101:/tmp", "copying manifests to the VM")
err = _cmd(cli, nodeContainer(prefix, nodeName), fmt.Sprintf("scp -r -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -i vagrant.key -P 22 /scripts/manifests/* %s@192.168.66.101:/tmp", sshUser), "copying manifests to the VM")
if err != nil {
return err
}
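`utils.GetSSHUserByArchitecture` itself is not shown in this diff. A minimal sketch of what it plausibly does, based on the user names used elsewhere in this PR (`vagrant` for x86, `cloud-user` for s390x), could be:

```go
package main

import "fmt"

// GetSSHUserByArchitecture returns the default SSH login user for the
// node image: the s390x provider boots the CentOS generic cloud image
// (default user cloud-user), while other arches use the vagrant box.
func GetSSHUserByArchitecture(arch string) string {
	if arch == "s390x" {
		return "cloud-user"
	}
	return "vagrant"
}

func main() {
	fmt.Println(GetSSHUserByArchitecture("s390x")) // cloud-user
	fmt.Println(GetSSHUserByArchitecture("amd64")) // vagrant
}
```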