Problem in the local deployment of the Data Space Connector #33

Open
flopezag opened this issue Dec 5, 2024 · 3 comments

@flopezag (Member) commented Dec 5, 2024

When I try to execute the command

$ mvn -X clean deploy -Plocal

I get the following error message:

[INFO] Still waiting for: [consumer/release-name-keycloak, provider/data-service-scorpio, provider/release-name-apisix-data-plane, provider/verifier]
[INFO] deployment provider/verifier ... ready
[INFO] Still waiting for: [consumer/release-name-keycloak, provider/data-service-scorpio, provider/release-name-apisix-data-plane]
[INFO] deployment provider/data-service-scorpio ... ready
[INFO] Still waiting for: [consumer/release-name-keycloak, provider/release-name-apisix-data-plane]
[INFO] statefulset consumer/release-name-keycloak ... ready
[INFO] Still waiting for: [provider/release-name-apisix-data-plane]
[ERROR] >>> docker exec k3s-maven-plugin kubectl rollout status deployment release-name-apisix-data-plane --namespace=provider --timeout=500s (timeout: PT8M30S)
[ERROR] <<< Waiting for deployment "release-name-apisix-data-plane" rollout to finish: 0 of 1 updated replicas are available...
[ERROR] <<< error: timed out waiting for the condition

Checking all the namespaces, I get the following info:

$ kubectl get all --all-namespaces

NAMESPACE      NAME                                                                  READY   STATUS             RESTARTS       AGE
provider       pod/release-name-apisix-data-plane-85664dfb7-mrhmq                    1/2     CrashLoopBackOff   12 (26s ago)   39m
provider       pod/tmf-api-registration-5lpn6                                        0/1     Error              0              37m
provider       pod/tmf-api-registration-kcgnr                                        0/1     Completed          0              36m
provider       pod/tmf-api-registration-mcfsh                                        0/1     Error              0              38m
provider       pod/tmf-api-registration-pt4xv                                        0/1     Error              0              38m
provider       pod/tmf-api-registration-vv659                                        0/1     Error              0              39m

...

NAMESPACE      NAME                                                                     READY   UP-TO-DATE   AVAILABLE   AGE
provider       deployment.apps/release-name-apisix-data-plane                           0/1     1            0           39m

NAMESPACE      NAME                                                                                DESIRED   CURRENT   READY   AGE
provider       replicaset.apps/release-name-apisix-data-plane-85664dfb7                            1         1         0       39m

This shows that there is a problem with the release-name-apisix-data-plane deployment, so, checking the log of the APISIX container, I get the following info:

$ kubectl logs release-name-apisix-data-plane-85664dfb7-mrhmq -n provider -f

Defaulted container "apisix" out of: apisix, open-policy-agent, wait-for-control-plane (init), prepare-apisix (init)
2024/12/04 17:15:10 [warn] 1#1: [lua] config_yaml.lua:117: read_apisix_yaml(): config file /usr/local/apisix/conf/apisix.yaml reloaded.
nginx: [warn] [lua] config_yaml.lua:117: read_apisix_yaml(): config file /usr/local/apisix/conf/apisix.yaml reloaded.
2024/12/04 17:15:10 [emerg] 1#1: bind() to unix:/usr/local/apisix/logs/worker_events.sock failed (98: Address already in use)
nginx: [emerg] bind() to unix:/usr/local/apisix/logs/worker_events.sock failed (98: Address already in use)
2024/12/04 17:15:10 [emerg] 1#1: socket() [::]:9080 failed (97: Address family not supported by protocol)
nginx: [emerg] socket() [::]:9080 failed (97: Address family not supported by protocol)

I do not know the root cause, but it may be related to this linked issue.
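
One additional check that could narrow this down (a sketch, using the pod name from the listing above; an OOMKilled reason in the output would point at resource limits as the cause of the restarts):

$ kubectl describe pod release-name-apisix-data-plane-85664dfb7-mrhmq -n provider
$ kubectl get pod release-name-apisix-data-plane-85664dfb7-mrhmq -n provider \
    -o jsonpath='{.status.containerStatuses[*].lastState.terminated.reason}'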

@pulledtim (Contributor) commented:

Hi Fernando,

Just a quick question to help me reproduce the issue: which Linux distribution/version and which Docker version are you using?

The issue Carlos linked has a recent reply that describes steps to reproduce the problem with the APISIX chart, so we should upvote that issue and develop a cleanup/workaround.

@flopezag (Member, Author) commented Dec 9, 2024

Linux distribution/version:

TUXEDO OS 2
Kernel version: 6.11.0-107009-tuxedo (64-bit) #9tuxjammy1 SMP PREEMPT_DYNAMIC Fri Nov  8 22:02:28 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux

Docker Version:

Client: Docker Engine - Community
 Cloud integration: v1.0.29
 Version:           27.3.1
 API version:       1.47
 Go version:        go1.22.7
 Git commit:        ce12230
 Built:             Fri Sep 20 11:41:00 2024
 OS/Arch:           linux/amd64
 Context:           default

Server: Docker Engine - Community
 Engine:
  Version:          27.3.1
  API version:      1.47 (minimum version 1.24)
  Go version:       go1.22.7
  Git commit:       41ca978
  Built:            Fri Sep 20 11:41:00 2024
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.7.24
  GitCommit:        88bf19b2105c8b17560993bee28a01ddc2f97182
 runc:
  Version:          1.2.2
  GitCommit:        v1.2.2-0-g7cb3632
 docker-init:
  Version:          0.19.0
  GitCommit:        de40ad0

@wistefan (Collaborator) commented:

@flopezag Could you please try to set "apisix.dataPlane.resourcesPreset" to "none" or to "medium"? This usually happens because one of the apisix containers gets killed due to resource limitations. The issue @pulledtim mentions would help with restarting the container, but there is a chance it would get killed again.
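
For reference, a minimal sketch of what that override could look like in a Helm values file. This assumes the local deployment picks up a values file for the connector chart; the exact file name and nesting in this repository may differ, and only the key path comes from the comment above:

apisix:
  dataPlane:
    resourcesPreset: "none"   # or "medium", as suggested above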
