
Snakemake fails with executor SLURM with slurm_persist_conn_open_without_init #114

Closed
mincej opened this issue Jul 16, 2024 · 5 comments

@mincej

mincej commented Jul 16, 2024

PROBLEM

When I use Snakemake with a cluster profile and leave it running overnight, detaching either in a screen session or with nohup snakemake ... &, I run into the following error for each currently running rule/job:

The job status query failed with command: sacct -X --parsable2 --noheader --format=JobIdRaw,State --starttime 2024-06-06T03:00 --endtime now --name jobname1
Error message: sacct: error: slurm_persist_conn_open_without_init: failed to open persistent connection to host:hostname: Connection refused
sacct: error: Sending PersistInit msg: Connection refused
sacct: error: Problem talking to the database: Connection refused

The job status query failed with command: sacct -X --parsable2 --noheader --format=JobIdRaw,State --starttime 2024-06-06T03:00 --endtime now --name jobname2
Error message: sacct: error: slurm_persist_conn_open_without_init: failed to open persistent connection to host:hostname: Connection refused
sacct: error: Sending PersistInit msg: Connection refused
sacct: error: Problem talking to the database: Connection refused

The job status query failed with command: sacct -X --parsable2 --noheader --format=JobIdRaw,State --starttime 2024-06-06T03:00 --endtime now --name jobname3
Error message: sacct: error: slurm_persist_conn_open_without_init: failed to open persistent connection to host:hostname: Connection refused
sacct: error: Sending PersistInit msg: Connection refused
sacct: error: Problem talking to the database: Connection refused

Snakemake version: 8.11.0
Snakemake Slurm Executor Plugin version: 0.5.0
Below is the configuration profile being used to run Snakemake with the Slurm plugin:

executor: slurm
jobs: 20
retries: 3
rerun-incomplete: true

rerun-triggers:
- mtime

resources:
- threads=150
- mem_mb=350000

default-resources:
- slurm_account=my-acct
- slurm_partition=my-partition
- mem_mb=8000*attempt
- tmpdir="/path/to/my/tmpdir"

set-resources:
  big_rule: &id001
    mem_mb: 64000*attempt
  another_big_rule: *id001
  more_big_rule: *id001

Notably, this error has occurred multiple times in the past, and the jobs always fail at 3:00 AM the following morning. Note this line in the error statement:
sacct -X --parsable2 --noheader --format=JobIdRaw,State --starttime **2024-06-06T03:00** --endtime now --name jobname1

Also of note: an IT representative from our HPC team mentioned that they have had success running an overnight Nextflow workflow under screen. I have since tried that recommendation, but again encountered the error above.

ATTEMPTED SOLUTIONS

I have run the same workflow with a "local" profile on a high-resource interactive node of the same HPC, to confirm that the workflow completes normally when run outside of the Slurm executor.

The following GitHub commit indicates that this problem was addressed in release 0.1.3 of the Snakemake Slurm Executor Plugin: #5. Yet the issue persists with my later version.

QUESTION

Is this problem likely caused by Snakemake, or is it more likely due to how my institution's HPC is configured and how Snakemake interacts with it? Or is there more information I could provide to help pinpoint the cause of this issue?

@cmeesters
Member

Also of note, an IT representative I've communicated with from our HPC team noted that they have had success with an overnight Nextflow workflow using screen.

We are clearly dealing with a cluster issue of some kind here (after all, sacct reports that it cannot connect). Yet, did your friendly admin check whether there was some hiccup during this time? Are there scheduled jobs interfering with the database's stability, perhaps? A short failure of the connection or of the database itself might be noticeable in the logs without necessarily leading to a workflow crash, so the chance is high that some issue can be detected there. They ought to look a) on their master node and b) on your login node.

Snakemake only triggers sacct with all the flags you see. It is a client program like any other.

Allow me an additional question: What is the output of sinfo --version?

@mincej
Author

mincej commented Jul 17, 2024

[user]$ sinfo --version
slurm 23.11.1

I am not sure about hiccups during this time, but it is a consistent issue; this has happened at 3 AM on multiple different days.

I am closing this issue, as the problem is seemingly outside the scope of Snakemake development. I appreciate the time you took to help troubleshoot, and I will continue this conversation with our representatives given your suggested follow-ups. Cheers!

@mincej mincej closed this as completed Jul 17, 2024
@cmeesters
Member

Thanks!

Your SLURM version is fairly recent, so that is probably not the issue.

Perhaps, at least, you are able to restart with --rerun-incomplete?
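
For instance, something along these lines (a minimal sketch; the profile path is a placeholder, and since your profile already sets rerun-incomplete: true, the explicit flag is redundant but harmless):

# Restart the crashed workflow from where it left off.
# /path/to/profile is a placeholder for your profile directory.
snakemake --profile /path/to/profile --rerun-incomplete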

@mincej
Author

mincej commented Jul 18, 2024

Yes, I can still continue with the --rerun-incomplete flag, so while not ideal, it is not the end of the world. Thanks!

@mincej
Author

mincej commented Jul 23, 2024

I have learned that our CHPC systems shut down briefly every night in order to make a backup. Is there any way of specifying "onerror" handlers that attempt to resubmit jobs within some amount of time? Most of what I know about resubmission behavior concerns the rules themselves failing, not errors in communicating with the Slurm system. Thanks for any help! In the meantime, a crude workaround I am considering is sketched below.
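
This is an untested sketch that wraps the Snakemake invocation in a retry loop, so that a crash during the nightly backup window simply restarts the workflow; the profile path, retry count, and sleep interval are placeholders, and this is not a feature the plugin itself provides:

#!/usr/bin/env bash
# Untested sketch: rerun Snakemake a few times if it exits nonzero,
# e.g. after it crashes during the nightly slurmdbd backup window.
# The profile path, retry count, and sleep interval are placeholders.
for attempt in 1 2 3; do
    snakemake --profile /path/to/profile --rerun-incomplete && break
    echo "Snakemake exited nonzero (attempt ${attempt}); retrying in 10 minutes" >&2
    sleep 600
done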
