rule multiqc_historical_raw: Stuck on 'Searching' #49
Comments
Update: when I convert it into an sbatch script it completes in less than 10 minutes, still using the same singularity image (see the rule converted to an sbatch script below). Not sure what is going on here...
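For reference, a minimal sketch of what such a conversion might look like, assuming a Rackham-style slurm setup; the project ID, image path, and directories below are placeholders, not the actual ones from this run:

```bash
#!/bin/bash
#SBATCH -A proj-id-placeholder   # compute project (placeholder)
#SBATCH -p core
#SBATCH -n 5                     # number of cores
#SBATCH -t 00:30:00              # generous, since the direct run took under 10 minutes

# Run MultiQC inside the same singularity image the pipeline uses,
# pointing it at the directory of stats files to aggregate.
singularity exec /path/to/multiqc_image.sif \
    multiqc /path/to/stats/ \
    --outdir /path/to/multiqc_output/ \
    --force
```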
Hi @johannanvs,
Hi @johnne, Thanks for getting back to me! In my initial post you'll find the log file and slurm .err file from when I ran it with snakemake, as intended. Below is the log file from when I ran it as a simple sbatch script outside of snakemake. I'm afraid I can't locate the slurm outfile for that run, but I suspect it would just have echoed what I listed in the sbatch script and nothing else, since everything worked.
I don't know if this is also related somehow, but I also always get an out-of-memory failure when I run qualimap with snakemake. I've tried playing around with booking different numbers of cores and allowing more or less memory, but the failure remains. When I, in a similar fashion as above, convert it to an sbatch script and submit it directly to the slurm queue, the jobs complete without complaints in a few minutes. This, again, while using the same singularity image as I would with snakemake...
Sorry but I cannot figure out the source of this. I'll keep this in mind though and ask around a bit to see if something pops up.
Hi Johanna! Sorry to hear about these issues... Have you followed the latest slurm profile instructions, described here under point 5 in the documentation: https://github.com/NBISweden/GenErode/wiki/2.-Requirements-&-pipeline-configuration#requirements? There might be an issue with the slurm profile here but I'm not sure.
Hi Verena! I followed the instructions, but I see now that the config.yaml files in config/slurm/profile/ and slurm/ are not the same, which means I must have forgotten to copy config/slurm/profile/config.yaml to slurm/. Maybe this could be the source of my problems, even if the multiqc rules are not listed in config/slurm/profile/config.yaml. I'll copy it over and see if that fixes it!
Hi Johanna! That could be the solution, I hope it works! Let me know if you still get an error. The official snakemake slurm profile is regularly updated and the config.yaml file in config/slurm/profile/ might not be up-to-date with the latest changes anymore.
Hi Verena, still the same issue I'm afraid. However, I have now also included some modern samples downloaded from NCBI, and for these samples multiQC finished without errors...
Besides the cluster log file (slurm error file) and the rule log file, which showed the same messages as posted earlier, I also have this in GenErode/.snakemake/tmp.neihn6us/snakejob.multiqc_historical_raw.6.sh:
I don't know if that helps?
Hi! I think I found the solution! Re-reading your posts above, I realised that you mentioned that you updated the config.yaml file in slurm/. To provide 5 cores and 4 hours to the rule, please add one line for multiqc_historical_raw to the profile's thread settings and two lines to its resource settings, as sketched below (runtime is provided in minutes, mem_mb is calculated based on 6.4 GB per core on Rackham). I hope this was the issue and that this helps! The same solution might help with the qualimap rules.
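A sketch of those additions, assuming the set-threads/set-resources syntax of the official snakemake slurm profile (the exact syntax depends on the profile version in use, so check against your copy of the config file):

```yaml
# slurm/config.yaml: per-rule overrides (sketch; rule name taken from this issue)
set-threads:
  - multiqc_historical_raw=5
set-resources:
  - multiqc_historical_raw:runtime=240   # minutes (4 hours)
  - multiqc_historical_raw:mem_mb=32000  # 5 cores x 6.4 GB per core on Rackham
```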
Hi Verena, I'm afraid that did not help. The job is cancelled due to running out of time, and the log file shows that multiqc gets stuck on searching for files. Again, it is so strange that it works (and finishes in about 10 minutes) when I use the same singularity image and the same commands in a bash script, and that the rule runs perfectly with snakemake for the modern samples that are included...
Hi Johanna, I'm sorry to hear that! This is indeed strange... I have no idea right now how to solve this unfortunately, but will continue to look for a solution.
Original issue description: Whenever I try to run multiQC rules, the rule gets stuck on
[INFO ] multiqc : Searching : /path/to/stats/
and my job gets cancelled due to reaching the requested job time limit. I've tried increasing the time to 6 hours instead of 2, but all this time it's still searching for files. For rule multiqc_historical_raw I've got around 550 files, but still, should it really take that long to search?
log file:
slurm standard error file: