nf-core/variantbenchmarking

[GitHub Actions CI Status] [GitHub Actions Linting Status] [AWS CI] [Cite with Zenodo] [nf-test]

[Nextflow] [run with conda] [run with docker] [run with singularity] [Launch on Seqera Platform]

[Get help on Slack] [Follow on Twitter] [Follow on Mastodon] [Watch on YouTube]

Introduction

nf-core/variantbenchmarking is a bioinformatics pipeline that benchmarks structural variant (SV) calls in test VCF files against truth sets. Its main steps are:

  1. Standardization of SVs in test VCF files
  2. Normalization of SVs in test VCF files
  3. Normalization of SVs in truth VCF files
  4. SV stats and histograms
  5. Germline benchmarking of SVs
  6. Somatic benchmarking of SVs
  7. Final report and comparisons

Usage

Note

If you are new to Nextflow and nf-core, please refer to this page on how to set up Nextflow. Make sure to test your setup with -profile test before running the workflow on actual data.
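
For a quick functional check, the bundled small test profile can be combined with a container profile; a minimal sketch (docker is just one of the supported container engines):

nextflow run nf-core/variantbenchmarking \
   -profile test,docker \
   --outdir <OUTDIR>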

Supported SV callers: Manta, SVaba, DRAGEN, Delly, Lumpy, among others.

Available truth samples: HG002 (germline), SEQC2 (somatic).

  • If your SVs are unresolved (no explicit ALT sequences), it is recommended to use only Truvari with --pctsim 0, which disables the sequence-similarity comparison.

  • The size filtering parameters offered by Truvari and SVbenchmark do not behave in the same way, so applying them through the pipeline is not recommended; instead, filter variants during the VCF normalization steps.

  • Please note that it is not possible to use exactly the same parameters across the different benchmarking methods.

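Before launching, prepare a samplesheet describing your test VCFs. The column names below (id, test_vcf, caller) are illustrative assumptions only; the actual schema is defined in the usage documentation:

# Hypothetical samplesheet layout -- the real required columns are documented in the usage docs.
cat > samplesheet.csv <<'EOF'
id,test_vcf,caller
HG002_manta,HG002.manta.vcf.gz,manta
HG002_delly,HG002.delly.vcf.gz,delly
EOF
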
Now, you can run the pipeline using:

nextflow run nf-core/variantbenchmarking \
   -profile <docker/singularity/.../institute> \
   --input samplesheet.csv \
   --outdir <OUTDIR>

Warning

Please provide pipeline parameters via the CLI or the Nextflow -params-file option. Custom config files, including those provided by the Nextflow -c option, can be used to provide any configuration except for parameters; see docs.
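
As a sketch of the -params-file approach, the parameters from the command above can be collected in a small YAML file (the docker profile and the results directory name are arbitrary example choices):

# Minimal params file using only parameters already shown in the command above.
cat > params.yaml <<'EOF'
input: 'samplesheet.csv'
outdir: 'results'
EOF

nextflow run nf-core/variantbenchmarking \
   -profile docker \
   -params-file params.yaml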

For more details and further functionality, please refer to the usage documentation and the parameter documentation.

Pipeline output

To see the results of an example test run with a full-size dataset, refer to the results tab on the nf-core website pipeline page. For more details about the output files and reports, please refer to the output documentation.

Credits

nf-core/variantbenchmarking was originally written by [email protected].

We thank the following people for their extensive assistance in the development of this pipeline:

Contributions and Support

If you would like to contribute to this pipeline, please see the contributing guidelines.

For further information or help, don't hesitate to get in touch on the Slack #variantbenchmarking channel (you can join with this invite).

Citations

An extensive list of references for the tools used by the pipeline can be found in the CITATIONS.md file.

You can cite the nf-core publication as follows:

The nf-core framework for community-curated bioinformatics pipelines.

Philip Ewels, Alexander Peltzer, Sven Fillinger, Harshil Patel, Johannes Alneberg, Andreas Wilm, Maxime Ulysse Garcia, Paolo Di Tommaso & Sven Nahnsen.

Nat Biotechnol. 2020 Feb 13. doi: 10.1038/s41587-020-0439-x.
