Add another smaller 2i2c federation member + k3s docs
I wanted to understand what it would take to run a federation
member on a small machine that does *not* have object storage
nearby (I'm trying to get some new members where this is true).
I also wanted to run through adding a new k3s member one more time
so I can document it appropriately.

Sooooooo I bought a small server (16-core AMD Ryzen with 64GB of RAM
and a 1TB RAID1 SSD) via the wonderful [hetzner auction](https://www.hetzner.com/sb/)
system. They sell older systems that have been decommissioned by 'production'
users, but those are perfect for us. This costs 2i2c approximately
$60 a month.

I installed Ubuntu 24.04 and ran through the k3s setup again, documenting
every step I took! This should help future people joining the federation this
way.

Since this is in the same datacenter as the regular, bigger 2i2c
federation member, we can reuse the same object store backend for
the registry! I can experiment with using the filesystem for the
registry in a separate future commit.

I've added this to the federation with a 5% weight so it gets a little
traffic but not much.

Thanks to 2i2c for sponsoring this!
yuvipanda committed Jan 25, 2025
1 parent 20cba1f commit 354de95
Showing 3 changed files with 51 additions and 8 deletions.
8 changes: 7 additions & 1 deletion config/prod.yaml
@@ -234,10 +234,16 @@ federationRedirect:
     weight: 70
     health: https://2i2c.mybinder.org/health
     versions: https://2i2c.mybinder.org/versions
+  hetzner-2i2c-bare:
+    prime: false
+    url: https://2i2c-bare.mybinder.org
+    weight: 5
+    health: https://2i2c-bare.mybinder.org/health
+    versions: https://2i2c-bare.mybinder.org/versions
   gesis:
     prime: false
     url: https://notebooks.gesis.org/binder
-    weight: 30
+    weight: 25
     health: https://notebooks.gesis.org/binder/health
     versions: https://notebooks.gesis.org/binder/versions
   ovh2:
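The `health` and `versions` endpoints in this entry are how the redirector keeps an eye on each member. A quick way to poke the new member by hand (a sketch - just plain `curl` against the URLs from the hunk above):

```bash
# Hit the endpoints the federation redirector checks for the new member
curl -s https://2i2c-bare.mybinder.org/health
curl -s https://2i2c-bare.mybinder.org/versions
```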
4 changes: 2 additions & 2 deletions deploy.py
@@ -31,7 +31,7 @@
 }
 
 # Projects using raw KUBECONFIG files
-KUBECONFIG_CLUSTERS = {"ovh2", "hetzner-2i2c"}
+KUBECONFIG_CLUSTERS = {"ovh2", "hetzner-2i2c", "hetzner-2i2c-bare"}
 
 # Mapping of config name to cluster name for AWS EKS deployments
 AWS_DEPLOYMENTS = {"curvenote": "binderhub"}
@@ -437,7 +437,7 @@ def main():
     argparser.add_argument(
         "release",
         help="Release to deploy",
-        choices=["staging", "prod", "ovh", "ovh2", "curvenote", "hetzner-2i2c"],
+        choices=list(KUBECONFIG_CLUSTERS) + list(GCP_PROJECTS.keys()) + list(AWS_DEPLOYMENTS.keys()) + list(AZURE_RGs.keys())
     )
     argparser.add_argument(
         "--name",
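With `choices` now derived from the cluster dictionaries, a cluster added to `KUBECONFIG_CLUSTERS` automatically becomes a valid release argument, so deploying the new member is just (as the docs below also note):

```bash
# Deploy the new federation member; the name must match a configured cluster
./deploy.py hetzner-2i2c-bare
```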
47 changes: 42 additions & 5 deletions docs/source/deployment/k3s.md
@@ -55,7 +55,30 @@ do not need traefik.

## Extracting authentication information via a `KUBECONFIG` file

-Follow https://docs.k3s.io/cluster-access#accessing-the-cluster-from-outside-with-kubectl
Next, we extract the `KUBECONFIG` file that the `mybinder.org-deploy` repo and team members can use to access
this cluster externally by following [upstream documentation](https://docs.k3s.io/cluster-access#accessing-the-cluster-from-outside-with-kubectl).
The short version is:

1. Copy the `/etc/rancher/k3s/k3s.yaml` file into the `secrets/` directory in this repo:

```bash
scp root@<public-ip>:/etc/rancher/k3s/k3s.yaml secrets/<cluster-name>-kubeconfig.yml
```

Pick a `<cluster-name>` that describes what cluster this is - we will use it consistently for other files too.

Note the `.yml` here - everything else is `.yaml`!

2. Change the `server` field under `clusters.0.cluster` from `https://127.0.0.1:6443` to `https://<public-ip>:6443`.
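If you'd rather script that edit, here is a sketch (assuming the Go-based `yq` v4 is installed - editing the file by hand is just as fine), plus a check that the kubeconfig works:

```bash
# Rewrite the server field to use the cluster's public IP
yq -i '.clusters[0].cluster.server = "https://<public-ip>:6443"' \
  secrets/<cluster-name>-kubeconfig.yml

# Confirm we can reach the cluster from outside
kubectl --kubeconfig=secrets/<cluster-name>-kubeconfig.yml get nodes
```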

## Create a new ssh key for mybinder team members

For easy access to this node for mybinder team members, we create and check in an ssh key as
a secret.

1. Run `ssh-keygen -t ed25519 -f secrets/<cluster-name>.key` to create the ssh key. Leave the passphrase blank.
2. Set appropriate permissions with `chmod 0400 secrets/<cluster-name>.key`.
3. Copy `secrets/<cluster-name>.key.pub` (**NOTE THE .pub**) and paste it as a **new line** in `/root/.ssh/authorized_keys` on your server. Do not replace any existing lines in this file.
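To confirm the key actually grants access before anyone relies on it, try logging in with it (a quick sanity check; `<public-ip>` is your server's address as before):

```bash
# Log in using the new team key and run a trivial command
ssh -i secrets/<cluster-name>.key root@<public-ip> hostname
```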

## Setup DNS entries

@@ -70,16 +93,30 @@ Add the following entries:

Give this a few minutes because it may take a while to propagate.
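Rather than guessing, you can watch propagation with `dig` (a sketch - run it for each hostname you added above):

```bash
# An empty answer means the record hasn't reached your resolver yet
dig +short <hostname>
```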

-## Make a config copy for this new member
## Make a config + secret copy for this new member

-TODO
Now we need to create a config file and a secret config file for this new member. We can start off by copying an existing one!

-## Make a secret config for this new member
Let's copy `config/hetzner-2i2c.yaml` to `config/<cluster-name>.yaml` and make changes!
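In concrete commands (using your `<cluster-name>` from earlier; the secrets copy is described just below):

```bash
# Start the new member's config and secret config as copies of the existing ones
cp config/hetzner-2i2c.yaml config/<cluster-name>.yaml
cp secrets/config/hetzner-2i2c.yaml secrets/config/<cluster-name>.yaml
```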

-TODO
1. Find all hostnames, and change them to point to the DNS entries you made in the previous step.
2. Change `ingress-nginx.controller.service.loadBalancerIP` to the external public IP of your cluster.
3. Adjust the following parameters based on the size of the server:
   a. `binderhub.config.LaunchQuota.total_quota`
   b. `dind.resources`
   c. `imageCleaner`
4. TODO: Something about the registry.

We also need a secrets file, so let's copy `secrets/config/hetzner-2i2c.yaml` to `secrets/config/<cluster-name>.yaml` and make changes!

1. Find all hostnames, and change them to point to the DNS entries you made in the previous step.
2. TODO: Something about the registry

## Deploy binder!

Let's tell the `deploy.py` script that we have a new cluster by adding `<cluster-name>` to the `KUBECONFIG_CLUSTERS` variable in `deploy.py`.

Once done, you can do a deployment with `./deploy.py <cluster-name>`! If it errors out, tweak and debug until it works.

## Test and validate
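This section is still a stub; as a minimal sketch, you can try a real build and launch against the new member using BinderHub's standard build and launch URL patterns - here `<your-binderhub-host>` is whatever hostname you set up in the DNS section, and `binder-examples/requirements` is just an arbitrary public repo:

```bash
# Kick off a build against the new member and watch the event stream
curl -sN https://<your-binderhub-host>/build/gh/binder-examples/requirements/HEAD

# Or open a full build + launch in the browser:
# https://<your-binderhub-host>/v2/gh/binder-examples/requirements/HEAD
```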

## Add to the redirector
