# storctl

storctl is a command-line tool for managing demo and lab environments in cloud infrastructure or on the local host using virtual machines. The main focus of this tool is MinIO AIStor testing, training, and demonstration.
- Create and manage lab environments with multiple servers and volumes
- Manage DNS records with Cloudflare
- Use Lima virtual machines on macOS or Hetzner Cloud infrastructure (currently the only supported options)
- Manage SSH keys to access cloud VMs
- Manage cloud resource lifecycle with TTL (Time To Live)
- Use YAML-based configuration and resource definitions similar to Kubernetes
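A typical session looks like this (all of these commands are described in more detail below):

```
storctl create lab mylab --template lab.yaml   # provision servers and volumes
storctl get lab mylab                          # inspect the environment
storctl delete lab mylab                       # tear it down (or let its TTL expire)
```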
Make sure the following prerequisites are installed:

- `kubectl`. If it's not installed on your machine, follow the official installation instructions.
- Krew. If it's not installed, follow the official installation instructions.
- The DirectPV plugin. If it's not installed, follow the official installation instructions.
- Helm. If it's not installed on your machine, follow the official installation instructions. On a Mac, the easiest way is `brew install helm`.
The local AIStor installation uses Lima to manage virtual machines, QEMU as the virtualization engine, and socket_vmnet for networking. QEMU must be used with the socket_vmnet shared network so that the VMs can talk to each other and the host can reach the VMs.
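For reference, this is roughly what a Lima VM definition using that shared network looks like (a minimal sketch, not the exact file storctl generates; the real configs live under ~/.storctl/lima/, and the image URL here is only a placeholder):

```yaml
# Minimal Lima template sketch; storctl generates its own versions of these
vmType: "qemu"        # use QEMU instead of Lima's default driver
images:
  - location: "https://cloud-images.ubuntu.com/releases/24.04/release/ubuntu-24.04-server-cloudimg-arm64.img"  # placeholder
    arch: "aarch64"
networks:
  - lima: shared      # socket_vmnet shared network: VMs can reach each other and are reachable from the host
```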
- Install Lima:

  ```
  brew install lima
  ```

- Install QEMU:

  ```
  brew install qemu
  ```

- Check if you have already installed the Xcode command line tools (which is very likely):

  ```
  xcode-select -p
  ```

  Expected output:

  ```
  /Library/Developer/CommandLineTools
  ```

  If they're not installed, run:

  ```
  xcode-select --install
  ```
- Build and install the network driver for socket_vmnet. The full instructions and explanation are provided on the official Lima site. Here is a short version:

  ```
  # Install socket_vmnet as root from source to /opt/socket_vmnet
  # using instructions on https://github.com/lima-vm/socket_vmnet
  # This assumes that Xcode Command Line Tools are already installed
  git clone https://github.com/lima-vm/socket_vmnet
  cd socket_vmnet
  # Change "v1.2.1" to the actual latest release in https://github.com/lima-vm/socket_vmnet/releases
  git checkout v1.2.1
  make
  sudo make PREFIX=/opt/socket_vmnet install.bin

  # Set up the sudoers file for launching socket_vmnet from Lima
  limactl sudoers >etc_sudoers.d_lima
  less etc_sudoers.d_lima  # verify that the file looks correct
  sudo install -o root etc_sudoers.d_lima /etc/sudoers.d/lima
  rm etc_sudoers.d_lima
  ```
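  To double-check the driver before moving on (assuming the default /opt/socket_vmnet prefix used above):

  ```
  ls -l /opt/socket_vmnet/bin/socket_vmnet   # the binary should exist and be owned by root
  sudo ls -l /etc/sudoers.d/lima             # the sudoers rule installed above
  ```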
- Note: Lima might give you an error message about the `docker.sock` file. In that case, just delete the file mentioned in the error message.
- Get a Hetzner Cloud account and API token. Ask the Training team for access to the MinIO shared project.

- Get a Cloudflare account and API token (for DNS management) from the Training team. You don't need it if you prefer to use your own domain.
Download binaries for your OS/arch from the Releases page, or build from source:

```
git clone https://github.com/pavelanni/storctl
cd storctl
go build -o storctl .
# Move the resulting binary to your PATH
mv storctl $HOME/.local/bin # or any other directory in your PATH
```
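A quick sanity check that the shell can find the binary:

```
command -v storctl   # should print the path you moved the binary to
```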
- Initialize the configuration:

  ```
  storctl init
  ```

  This creates a default configuration directory at ~/.storctl with the following structure:

  ```
  config.yaml  -- main configuration file
  templates/   -- lab environment templates
  keys/        -- SSH key storage
  ansible/     -- Ansible playbooks and inventory files
  lima/        -- Lima configs
  ```
- Edit the configuration file at ~/.storctl/config.yaml:

  ```yaml
  providers:
    - name: "hetzner"
      token: "your-hetzner-token" # add your Hetzner Cloud token if you are going to use cloud installation
      location: "nbg1" # EU locations: nbg1, fsn1, hel1; US locations: ash, hil; APAC locations: sin
    - name: "lima"
  dns: # this section is not used by local installation
    provider: "cloudflare"
    token: "your-cloudflare-token" # add your Cloudflare token if you're going to use cloud installation
    zone_id: "your-zone-id" # add your Cloudflare Zone ID if you're going to use cloud installation
    domain: "aistorlabs.com" # feel free to use your own domain
  email: "your-email@example.com"
  organization: "your-organization"
  owner: "your-name"
  ```
Basic usage:

```
# View current configuration
storctl config view

# Create a new lab environment
storctl create lab mylab --template lab.yaml

# List all labs
storctl get lab

# Get details about a specific lab
storctl get lab mylab

# Delete a lab
storctl delete lab mylab

# Create a new SSH key (you need it only for cloud installation)
storctl create key mykey

# Create a new server (usually servers are created automatically)
storctl create server myserver

# Create a new volume (usually volumes are created automatically)
storctl create volume myvolume
```
You can also create resources using YAML definition files. These files use the same format as Kubernetes manifests.
```
storctl create -f lab.yaml
storctl create -f server.yaml
storctl create -f volume.yaml
```
Example lab template:

```yaml
apiVersion: v1
kind: Lab
metadata:
  name: aistor-lab
  labels:
    project: aistor
spec:
  ttl: 24h
  provider: hetzner
  location: nbg1
  servers:
    - name: cp
      serverType: cx22
      image: ubuntu-24.04
    - name: node-01
      serverType: cx22
      image: ubuntu-24.04
  volumes:
    - name: volume-01
      server: node-01
      size: 100
      automount: false
      format: xfs
```
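The server.yaml and volume.yaml manifests mentioned above are not shown here; as a sketch, a standalone server manifest would presumably mirror the fields of the lab template's servers section (hypothetical example; check the files under ~/.storctl/templates/ for the exact schema):

```yaml
# Hypothetical standalone server manifest; kind and field layout assumed from the lab template above
apiVersion: v1
kind: Server
metadata:
  name: node-02
  labels:
    project: aistor
spec:
  ttl: 24h
  provider: hetzner
  location: nbg1
  serverType: cx22
  image: ubuntu-24.04
```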
- At the end of the Ansible playbook output, find the location of the Kubernetes config file. It should include the phrase "You can use it by running: export KUBECONFIG=". Copy the file path and run that command:

  ```
  export KUBECONFIG=$HOME/.storctl/kubeconfigs/mylab-kubeconfig
  ```
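  You can confirm that kubectl now points at the lab cluster:

  ```
  kubectl config current-context
  ```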
- Check if you can see the cluster nodes:

  ```
  kubectl get nodes
  ```

  Expected output:

  ```
  NAME            STATUS   ROLES                  AGE     VERSION
  mylab-cp        Ready    control-plane,master   6m55s   v1.31.5+k3s1
  mylab-node-01   Ready    <none>                 6m48s   v1.31.5+k3s1
  ```
- Check if the AIStor pod is running:

  ```
  kubectl get pod -n aistor
  ```

  Expected output:

  ```
  NAME       READY   STATUS    RESTARTS   AGE
  aistor-0   1/1     Running   0          20s
  ```
- Check if the AIStor service has been created:

  ```
  kubectl get svc -n aistor
  ```

  Expected output:

  ```
  NAME        TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)             AGE
  aistor      ClusterIP   10.43.182.85   <none>        8444/TCP,7899/TCP   70s
  aistor-hl   ClusterIP   None           <none>        8444/TCP,7899/TCP   70s
  ```
- Run the port-forward command to be able to access the cluster from the browser:

  ```
  kubectl port-forward -n aistor svc/aistor 8444:8444
  ```

  Expected output:

  ```
  Forwarding from 127.0.0.1:8444 -> 8444
  Forwarding from [::1]:8444 -> 8444
  ```

  Don't close this terminal session; keep it running while configuring AIStor.
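  If you'd rather not dedicate a terminal to it, one option is plain shell job control (nothing storctl-specific):

  ```
  kubectl port-forward -n aistor svc/aistor 8444:8444 >/tmp/aistor-pf.log 2>&1 &
  # stop it later with: kill %1
  ```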
- Open http://localhost:8444 in the browser. You should see the first AIStor page, where you should enter the license.

- Enter the license key. If you don't have it, obtain it from your MinIO representative.

- Create the first user, who will be the cluster admin.

- Create your first object store. Answer "No" to both questions (about drive manager and encryption).

- Add a user to the Object Store you just created. Click Access in the menu and create the admin user. Assign the consoleAdmin policy to that user.

- Add an Inbound traffic configuration to access the cluster via NodePort. Click Inbound Traffic in the menu and enable Direct Access. Set the port for Object API to 30001 and for Console UI to 31001.
- In another terminal session (not the one running the kubectl port-forward command), create a new alias for the first Object Store. Use the credentials you gave the first user. In the example below, the user is admin and the password is learn-by-doing.

  ```
  export MC_INSECURE=true
  mc alias set aistor-first https://localhost:30001 admin learn-by-doing
  ```
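  To verify the alias works, run a quick smoke test (the bucket and file names here are arbitrary):

  ```
  mc mb aistor-first/smoke-test                  # create a bucket
  echo "hello" > /tmp/hello.txt
  mc cp /tmp/hello.txt aistor-first/smoke-test/  # upload an object
  mc ls aistor-first/smoke-test                  # list it back
  ```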
- Open this URL in your browser: https://localhost:31001

- Enter your admin user credentials in the login form. This is the user you created in the first Object Store, NOT the first user you created after installing AIStor.

- Use the Object Store console the usual way.
All resources support:
- Labels for organization and filtering
- TTL (Time To Live) for automatic cleanup
- Provider-specific configurations
- YAML/JSON manifest files
- In multi-node configurations (with more than one worker node in the cluster), DirectPV sometimes doesn't discover drives on all nodes properly. After installation, before you start using AIStor, check the DirectPV status with this command:

  ```
  kubectl directpv info
  ```

  If you don't see all your nodes and drives in the output, re-run the discovery and initialization commands:

  ```
  kubectl directpv discover
  kubectl directpv init drives.yaml --dangerous
  ```

  Then check the status again with the kubectl directpv info command.
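  If the output still looks incomplete, listing the drives can help pinpoint which node is missing them:

  ```
  kubectl directpv list drives   # shows each discovered drive with its node and status
  ```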
Contributions are welcome! Please feel free to submit a Pull Request.