Merge pull request #458 from AutomatingSciencePipeline/tilt-dev
Tilt Dev Setup
rhit-windsors authored Feb 5, 2025
2 parents 6e28ac0 + 3e55e6b commit d4d61df
Showing 16 changed files with 375 additions and 82 deletions.
44 changes: 44 additions & 0 deletions Tiltfile
@@ -0,0 +1,44 @@
# Setup the needed k8s yamls
k8s_yaml([
"kubernetes_init/tilt/cluster-role-job-creator.yaml",
"kubernetes_init/tilt/role-binding-job-creator.yaml",
"kubernetes_init/kubernetes_secrets/secret.yaml",
"kubernetes_init/tilt/deployment-frontend.yaml",
"kubernetes_init/tilt/service-frontend.yaml",
"kubernetes_init/tilt/deployment-backend.yaml",
"kubernetes_init/tilt/service-backend-dev.yaml",
"kubernetes_init/tilt/watch-runner-cronjob.yaml",
])

# Setup the k8s_resource
k8s_resource("glados-frontend", port_forwards="3000", labels=["frontend"])
k8s_resource("glados-backend", port_forwards="5050", labels=["backend"])
k8s_resource("watch-runner-changes", labels=["runner"])

# Build the frontend
docker_build("frontend",
context='./apps/frontend',
live_update=[
sync("./apps/frontend", "/usr/src/app")
],
dockerfile='./apps/frontend/frontend-dev.Dockerfile')

# Build the backend
docker_build("backend",
context='./apps/backend',
live_update=[
sync("./apps/backend", "/app"),
run('cd /app && pip install -r requirements.txt',
trigger='./requirements.txt'),

],
dockerfile='./apps/backend/backend-dev.Dockerfile')

# Build the runner
docker_build("runner",
context='./apps/runner',
dockerfile='./apps/runner/runner.Dockerfile',
match_in_env_vars=True)

# Ignore the runner not being used
update_settings(suppress_unused_image_warnings=["runner"])
29 changes: 29 additions & 0 deletions apps/backend/backend-dev.Dockerfile
@@ -0,0 +1,29 @@
# syntax=docker/dockerfile:1
FROM python:3.8-slim AS base

# RUN apt-get update && \
# apt-get install -y ca-certificates curl gnupg && \
# install -m 0755 -d /etc/apt/keyrings && \
# curl -fsSL https://download.docker.com/linux/debian/gpg | gpg --dearmor -o /etc/apt/keyrings/docker.gpg && \
# chmod a+r /etc/apt/keyrings/docker.gpg

WORKDIR /app
COPY . /app

FROM base AS python_dependencies
RUN pip install pipenv
COPY Pipfile .
COPY Pipfile.lock .

# =====================================================================================================
FROM python_dependencies AS production
# Args explanation: https://stackoverflow.com/a/49705601
# https://pipenv-fork.readthedocs.io/en/latest/basics.html#pipenv-install
RUN pipenv install --system --deploy --ignore-pipfile

ADD . .

USER root
ENV FLASK_DEBUG True
EXPOSE $BACKEND_PORT
CMD flask run --host=0.0.0.0 -p $BACKEND_PORT
1 change: 1 addition & 0 deletions apps/backend/job-runner.yaml
@@ -2,6 +2,7 @@ apiVersion: batch/v1
kind: Job
metadata:
name: runner
namespace: default
spec:
template:
metadata:
6 changes: 6 additions & 0 deletions apps/backend/spawn_runner.py
@@ -1,5 +1,6 @@
"""Module that provides functionality to create a job for the runner"""

import os
import time
import sys
import yaml
@@ -20,6 +21,11 @@ def create_job_object(experiment_data):

runner_body['metadata']['name'] = job_name
runner_body['spec']['template']['spec']['containers'][0]['command'] = job_command

if os.getenv("IMAGE_RUNNER"):
# Get the image name
image_name = str(os.getenv("IMAGE_RUNNER"))
runner_body['spec']['template']['spec']['containers'][0]['image'] = image_name

return runner_body

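The `IMAGE_RUNNER` check added above is what lets Tilt (via `match_in_env_vars=True` in the Tiltfile) substitute its locally built runner image into the job body loaded from `job-runner.yaml`. A minimal sketch of that override in isolation — the helper name and the example image tags are illustrative, not from the repo:

```python
import os

def apply_runner_image_override(runner_body):
    """Replace the first container's image with $IMAGE_RUNNER when it is set."""
    image_name = os.getenv("IMAGE_RUNNER")
    if image_name:
        runner_body['spec']['template']['spec']['containers'][0]['image'] = str(image_name)
    return runner_body

# Job body as it might look after loading job-runner.yaml (trimmed to the relevant keys)
body = {'spec': {'template': {'spec': {'containers': [
    {'name': 'runner', 'image': 'example/glados-runner:main'}]}}}}

# Without IMAGE_RUNNER the image from the YAML is kept
os.environ.pop('IMAGE_RUNNER', None)
print(apply_runner_image_override(body)['spec']['template']['spec']['containers'][0]['image'])
# → example/glados-runner:main

# With IMAGE_RUNNER set (as Tilt does), the locally built image wins
os.environ['IMAGE_RUNNER'] = 'ctlptl-registry/runner:tilt-build'
print(apply_runner_image_override(body)['spec']['template']['spec']['containers'][0]['image'])
# → ctlptl-registry/runner:tilt-build
```

This mirrors the production behavior: outside of Tilt the environment variable is unset, so the image pinned in `job-runner.yaml` is used unchanged.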
23 changes: 23 additions & 0 deletions apps/frontend/frontend-dev.Dockerfile
@@ -0,0 +1,23 @@
FROM node:20-alpine AS base

# Add essential utilities
RUN apk add --no-cache bash libc6-compat

WORKDIR /usr/src/app

# Install dependencies
FROM base AS deps
COPY package.json package-lock.json ./
RUN npm install -g pnpm
RUN pnpm import
RUN pnpm install --frozen-lockfile

# Install sharp to optimize images
RUN pnpm add sharp

ADD . .

# Expose the frontend port
EXPOSE $FRONTEND_WEBSERVER_PORT

CMD ["npm", "run", "dev"]
148 changes: 71 additions & 77 deletions docs/docs/tutorial/local_testing.md
@@ -10,25 +10,28 @@ This guide is meant for developers contributing to GLADOS that want to test chan
Helm is needed for this installation. Install with:

```bash
brew install helm #(Intel Mac)
winget install Helm.Helm
brew install helm # (Intel based Mac)
winget install Helm.Helm # (Windows)
```

Also ensure that Docker Desktop is installed and open.

Finally, ensure that kubectl and minikube are installed using:

```bash
brew install kubectl #(Intel Mac)
winget install kubectl
brew install kubectl # (Intel based Mac)
winget install kubectl # (Windows)

brew install minikube #(Intel Mac)
winget install minikube
brew install minikube # (Intel based Mac)
winget install minikube # (Windows)
```

## Setup Kubernetes Cluster on Minikube

First, start minkube using:
!!! Warning
If you are using [Tilt](#tilt), click the link to jump to that section or you will have to redo some of your work!

First, start Minikube using:

```bash
minikube start
@@ -74,7 +77,7 @@ Now, use Helm to install the GLADOS MongoDB registry from DockerHub. This is nec
### Ensure that Mongo Replica Set Has Proper Permissions

```bash
minkube ssh
minikube ssh

cd /srv/data

@@ -112,6 +115,8 @@ db.updateUser("adminuser", { roles: [ { role: "root", db: "admin" } ] })
exit
```
Return to [Tilt Setup](#running-tilt).
## Pull Frontend and Backend Images
From the root of the Monorepo, run this command:
@@ -129,123 +134,112 @@ This command is used to expose the frontend pod so that it can be accessed via p
kubectl expose pod <POD-NAME> --type=NodePort --name=glados-frontend-nodeport --port=3000 --target-port=3000
```
Then you must expose the newly created service on minkube to actually access the website from the url that is provided in the terminal.
Then you must expose the newly created service on minikube to actually access the website from the url that is provided in the terminal.
```bash
minikube service glados-frontend-nodeport --url
```
This will have a version of the mainline glados running in your minikube environment.
## Push and Use Docker Images
## Tilt
In order to use the code in your local environment, you will need to build and push the docker images you have made changes to.
As of February 2025, we have switched to using Tilt for local development.
### Building and Pushing Docker Images
### What is Tilt
In a terminal CD to the component you wish to test. For example, if I made changes to the frontend I would
```bash
cd apps/frontend
```
Tilt for Kubernetes is a tool that streamlines the local development of Kubernetes applications. It automates building container images, deploying them to a cluster, and live-reloading changes in real time. It watches for code updates, rebuilds affected services, and provides a dashboard to monitor logs and resource statuses, making it easier to iterate quickly without manually managing Kubernetes configurations.
Next we need to build the docker image. This is done with the following command:
### Why use Tilt
```bash
docker build -t {DOCKER HUB USERNAME}/glados-frontend:main . -f frontend.Dockerfile
```
The biggest reason for using Tilt is that it has a feature called "live_update". Live update allows us to use the hot reload feature in Next.js to see changes almost instantly. Tilt will update running pods with new files to reflect changes.
This may make a couple of minutes to build.
### How to use Tilt
You can do the same for the backend and runner.
If you have followed the guide up to this point you will have the mainline GLADOS running on your system. Unfortunately, we are going to have to discard that progress.
Next we need to push our docker images to docker hub. Sign into your docker hub account with the following command:
### If you already have Minikube running on your system
```bash
docker login
```
!!! Warning
Run the following commands *only* if you currently have a Minikube cluster running on your machine.
Follow the instructions in the terminal to login.
Make sure that Docker Desktop is running.
Now we can push this image!
Run the following command:
```bash
docker push {DOCKER HUB USERNAME}/glados-frontend:main
minikube delete
```
Our image is now on docker's image library!
### Using our published docker images
Inside of the kubernetes_init folder, there are a couple of items of interest.
### Start here if you do not have a Minikube instance
1. backend/deployment-backend.yaml
2. frontend/deployment-frontend.yaml
We will have to install a couple of prerequisite programs.
If you open these files you will see that on line 20 the image is set. Change this to your image that you published to your docker hub.
#### Windows
Now we need to make sure we are in the project root in our terminal and execute:
If you have Scoop installed, skip the Scoop install commands.
```bash
python3 ./kubernetes_init/init.py --hard
```
# Install Tilt
iex ((new-object net.webclient).DownloadString('https://raw.githubusercontent.com/tilt-dev/tilt/master/scripts/install.ps1'))
You can then use the command:
# Now we need to install Scoop, skip if Scoop is already installed
Set-ExecutionPolicy -ExecutionPolicy RemoteSigned -Scope CurrentUser
Invoke-RestMethod -Uri https://get.scoop.sh | Invoke-Expression
```bash
kubectl get pods
```
# Next we are going to install ctlptl
scoop bucket add tilt-dev https://github.com/tilt-dev/scoop-bucket
scoop install ctlptl
Which will show something like
# Run the following to make sure everything is working
tilt version
ctlptl version
# Restart your terminal if any errors are displayed
```bash
NAME READY STATUS RESTARTS AGE
glados-backend-687fc6b7ff-dld2p 1/1 Running 0 74s
glados-frontend-5f575b99b7-9q9ml 1/1 Running 0 74s
glados-mongodb-0 1/1 Running 1 (2m9s ago) 20d
glados-mongodb-1 1/1 Running 1 (2m9s ago) 20d
glados-mongodb-arbiter-0 1/1 Running 2 (20d ago) 20d
# Now we need to start the Minikube cluster
# This will also create an image registry in Docker Desktop
ctlptl create cluster minikube --registry=ctlptl-registry
```
Using the image that you replaced run:
#### MacOS
```bash
kubectl describe pod {POD NAME FROM LAST STEP}
```
# Install Tilt
curl -fsSL https://raw.githubusercontent.com/tilt-dev/tilt/master/scripts/install.sh | bash
This will then show the image information. Make sure this points to your docker hub.
# Install ctlptl (this requires Homebrew)
brew install tilt-dev/tap/ctlptl
!!! Warning
Make sure to change the deployment yaml back before merging!!!!
# Run the following to make sure everything is working
tilt version
ctlptl version
# Restart your terminal if any errors are displayed
In order to update the runner image, go to the apps/backend folder, and update the image in job-runner.yaml following the steps above.
# Now we need to start the Minikube cluster
# This will also create an image registry in Docker Desktop
ctlptl create cluster minikube --registry=ctlptl-registry
```
Now you can use locally built images to run GLADOS!
#### Running Tilt
## Prebuilt Script for Docker Image Management
Now that you have the Minikube cluster running, we need to [set up MongoDB](#setup-mongodb-cluster). Come back here once you have MongoDB running in the Minikube cluster.
Due to the complexity of getting Minikube to behave, I have created a python script to run the needed commands for you.
Open a terminal window and navigate to the root of the Monorepo
From the root of the Monorepo run the command:
Run the following:
```bash
python3 .\development_scripts\local\setup_local.py <args>
tilt up
```
You can provide arguments for which elements of the project you would like to build.
Options are: frontend, backend, runner, all
In the python3 file you will need to set a couple of values to make sure that it is setup for your environment. Update those values and run the python script with the pieces you would like to build and push.
Now you can press the space bar to open the Tilt GUI in your web browser.
Note: You still need to make sure to follow the steps above for changing the image which you are running the cluster from.
The GLADOS frontend will be accessible at <http://localhost:3000>.
After running the python script you will see something like:
Any code changes made will automatically update the running pods.
```bash
Frontend is now running at: http://localhost:64068
```
Updating the frontend or backend triggers a live update; updating the runner triggers a full image rebuild.
Opening that link will bring you to a local version of GLADOS.
In the Tilt GUI there are refresh buttons to manually refresh running pods; use these if the backend and frontend are not talking to each other properly.
!!! Warning
Because the local version of GLADOS is served over HTTP, you may see odd networking issues caused by the browser's limit on concurrent connections to an HTTP/1.1 host. This will be fixed in a later update.
Congrats! You now have the development environment all set up!
4 changes: 1 addition & 3 deletions helm_packages/mongodb-helm/values.yaml
@@ -37,6 +37,4 @@ affinity:
- key: "kubernetes.io/hostname" # Key for the node selector
operator: In
values:
- "glados-db" # Replace with the name of the node you want to use


- "glados-db" # Replace with the name of the node you want to use
2 changes: 1 addition & 1 deletion kubernetes_init/backend/cluster-role-job-creator.yaml
@@ -5,4 +5,4 @@ metadata:
rules:
- apiGroups: ["batch"]
resources: ["jobs"]
verbs: ["create"]
verbs: ["create", "delete"]
2 changes: 1 addition & 1 deletion kubernetes_init/frontend/service-frontend.yaml
@@ -8,6 +8,6 @@ spec:
tier: frontend
ports:
- protocol: TCP
port: 3000 # The port exposed by the Service
port: 3000 # The port exposed by the Service
targetPort: 3000 # The port your Next.js app listens on
type: ClusterIP