Hi,
I am trying to test Gatling in a locally running minikube on an M1 Pro machine. After installing the operator and starting the sample with
$ kustomize build config/samples | kubectl apply -f -
and checking the logs:
+ gatling-sample01-runner-6bjs2 › gatling-runner
gatling-sample01-runner-6bjs2 gatling-runner Wait until 2023-03-01 16:26:32
gatling-sample01-runner-6bjs2 gatling-runner GATLING_HOME is set to /opt/gatling
gatling-runner is completely stuck when executing gatling.sh, as far as I can tell.
Also, checking the k8s resources, I see that the Gatling resource shows RUNNED 0/1:
$ kns default
Context "3-node" modified.
Active namespace is "default".

$ k get gatling,job,pod
NAME                                                      RUNNED   REPORTED   NOTIFIED   REPORTURL   AGE
gatling.gatling-operator.tech.zozo.com/gatling-sample01   0/1                                        3m17s

NAME                                      COMPLETIONS   DURATION   AGE
job.batch/gatling-sample01-runner         0/1           3m17s      3m17s

NAME                                 READY   STATUS    RESTARTS   AGE
pod/gatling-sample01-runner-6bjs2    1/1     Running   0          3m17s
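One way to check whether this is an architecture mismatch is to compare the node architecture with what the runner container actually reports (a diagnostic sketch; the pod and container names are taken from the output above):

# CPU architecture of each node (expect arm64 for minikube on an M1 host)
kubectl get nodes -o jsonpath='{.items[*].status.nodeInfo.architecture}'

# Architecture the runner container is executing under; if this reports
# x86_64 on an arm64 node, an amd64-only image running under emulation
# is the likely cause of the hang
kubectl exec gatling-sample01-runner-6bjs2 -c gatling-runner -- uname -m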
I reduced parallelism to 1 as I thought this could be the issue, but it made no difference at all.
Any idea what the issue is?
Thanks
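For reference, parallelism here is a field of the Gatling custom resource's testScenarioSpec; a minimal sketch of the relevant manifest fragment, with field names taken from the gatling-operator samples (the apiVersion and the simulation class name are assumptions):

apiVersion: gatling-operator.tech.zozo.com/v1alpha1   # assumed API version
kind: Gatling
metadata:
  name: gatling-sample01
spec:
  testScenarioSpec:
    simulationClass: MyBasicSimulation   # hypothetical simulation class
    parallelism: 1                       # number of runner pods started by the Job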
vladimirsvicevicsrb changed the title from "Gatling runner is stuck before it starts test" to "Gatling runner is stuck before it starts test on M1 / minikube" on Mar 2, 2023.
Thank you for creating the issue.
I think this is probably because the image (ghcr.io/st-tech/gatling:latest) does not support multiple architectures, so I will address it.
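For context, multi-architecture support for an image like this is typically published as a manifest list with docker buildx; a minimal sketch, assuming a Dockerfile in the current directory and push access to the registry:

# Build amd64 and arm64 variants and push them under a single tag
# (requires a configured buildx builder)
docker buildx build \
  --platform linux/amd64,linux/arm64 \
  -t ghcr.io/st-tech/gatling:latest \
  --push .

# Verify which architectures the published tag advertises
docker manifest inspect ghcr.io/st-tech/gatling:latest | grep architecture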