diff --git a/.github/CONTRIBUTING.md b/.github/CONTRIBUTING.md
index ac3a58e80..3d9036509 100644
--- a/.github/CONTRIBUTING.md
+++ b/.github/CONTRIBUTING.md
@@ -19,7 +19,7 @@ If you'd like to write some code for nf-core/quantms and bigbio/quantms, the sta
1. Check that there isn't already an issue about your idea in the [nf-core/quantms issues](https://github.com/nf-core/quantms/issues) and [bigbio/quantms_issues](https://github.com/bigbio/quantms/issues) to avoid duplicating work. If there isn't one already, please create one so that others know you're working on this
2. [Fork](https://help.github.com/en/github/getting-started-with-github/fork-a-repo) the [bigbio/quantms repository](https://github.com/bigbio/quantms) to your GitHub account
3. Make the necessary changes / additions within your forked repository following [Pipeline conventions](#pipeline-contribution-conventions)
-4. Use `nf-core schema build` and add any new parameters to the pipeline JSON schema (requires [nf-core tools](https://github.com/nf-core/tools) >= 1.10).
+4. Use `nf-core pipelines schema build` and add any new parameters to the pipeline JSON schema (requires [nf-core tools](https://github.com/nf-core/tools) >= 1.10).
5. Submit a Pull Request against the `dev` branch and wait for the code to be reviewed and merged
If you're not used to this workflow with git, you can start with some [docs from GitHub](https://help.github.com/en/github/collaborating-with-issues-and-pull-requests) or even their [excellent `git` resources](https://try.github.io/).
@@ -40,7 +40,7 @@ There are typically two types of tests that run:
### Lint tests
`nf-core` has a [set of guidelines](https://nf-co.re/developers/guidelines) which all pipelines must adhere to.
-To enforce these and ensure that all pipelines stay in sync, we have developed a helper tool which runs checks on the pipeline code. This is in the [nf-core/tools repository](https://github.com/nf-core/tools) and once installed can be run locally with the `nf-core lint <pipeline-directory>` command.
+To enforce these and ensure that all pipelines stay in sync, we have developed a helper tool which runs checks on the pipeline code. This is in the [nf-core/tools repository](https://github.com/nf-core/tools) and once installed can be run locally with the `nf-core pipelines lint <pipeline-directory>` command.
If any failures or warnings are encountered, please follow the listed URL for more documentation.
@@ -75,7 +75,7 @@ If you wish to contribute a new step, please use the following coding standards:
2. Write the process block (see below).
3. Define the output channel if needed (see below).
4. Add any new parameters to `nextflow.config` with a default (see below).
-5. Add any new parameters to `nextflow_schema.json` with help text (via the `nf-core schema build` tool).
+5. Add any new parameters to `nextflow_schema.json` with help text (via the `nf-core pipelines schema build` tool).
6. Add sanity checks and validation for all relevant parameters.
7. Perform local tests to validate that the new code works as expected.
8. If applicable, add a new test command in `.github/workflow/ci.yml`.
@@ -86,11 +86,11 @@ If you wish to contribute a new step, please use the following coding standards:
Parameters should be initialised / defined with default values in `nextflow.config` under the `params` scope.
-Once there, use `nf-core schema build` to add to `nextflow_schema.json`.
+Once there, use `nf-core pipelines schema build` to add to `nextflow_schema.json`.
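For illustration, a minimal sketch of such a default (the parameter name `psm_fdr_cutoff` and its value are hypothetical, purely to show the pattern):

```groovy
// nextflow.config -- hypothetical new parameter initialised under the params scope
params {
    psm_fdr_cutoff = 0.01  // overridable on the command line with --psm_fdr_cutoff
}
```

`nf-core pipelines schema build` then detects parameters that exist in `nextflow.config` but are missing from `nextflow_schema.json` and offers to add them, including via the web-based schema builder where help text and type constraints can be attached.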
### Default processes resource requirements
-Sensible defaults for process resource requirements (CPUs / memory / time) for a process should be defined in `conf/base.config`. These should generally be specified generic with `withLabel:` selectors so they can be shared across multiple processes/steps of the pipeline. A nf-core standard set of labels that should be followed where possible can be seen in the [nf-core pipeline template](https://github.com/nf-core/tools/blob/master/nf_core/pipeline-template/conf/base.config), which has the default process as a single core-process, and then different levels of multi-core configurations for increasingly large memory requirements defined with standardised labels.
+Sensible defaults for process resource requirements (CPUs / memory / time) for a process should be defined in `conf/base.config`. These should generally be specified generic with `withLabel:` selectors so they can be shared across multiple processes/steps of the pipeline. A nf-core standard set of labels that should be followed where possible can be seen in the [nf-core pipeline template](https://github.com/nf-core/tools/blob/main/nf_core/pipeline-template/conf/base.config), which has the default process as a single core-process, and then different levels of multi-core configurations for increasingly large memory requirements defined with standardised labels.
The process resources can be passed on to the tool dynamically within the process with the `${task.cpus}` and `${task.memory}` variables in the `script:` block.
@@ -103,7 +103,7 @@ Please use the following naming schemes, to make it easy to understand what is g
### Nextflow version bumping
-If you are using a new feature from core Nextflow, you may bump the minimum required version of nextflow in the pipeline with: `nf-core bump-version --nextflow . [min-nf-version]`
+If you are using a new feature from core Nextflow, you may bump the minimum required version of nextflow in the pipeline with: `nf-core pipelines bump-version --nextflow . [min-nf-version]`
### Images and figures
diff --git a/.github/PULL_REQUEST_TEMPLATE.md b/.github/PULL_REQUEST_TEMPLATE.md
index e3148c433..195e7b5ea 100644
--- a/.github/PULL_REQUEST_TEMPLATE.md
+++ b/.github/PULL_REQUEST_TEMPLATE.md
@@ -17,7 +17,7 @@ Learn more about contributing: [CONTRIBUTING.md](https://github.com/nf-core/quan
- [ ] If you've fixed a bug or added code that should be tested, add tests!
- [ ] If you've added a new tool - have you followed the pipeline conventions in the [contribution docs](https://github.com/nf-core/quantms/tree/master/.github/CONTRIBUTING.md)
- [ ] If necessary, also make a PR on the nf-core/quantms _branch_ on the [nf-core/test-datasets](https://github.com/nf-core/test-datasets) repository.
-- [ ] Make sure your code lints (`nf-core lint`).
+- [ ] Make sure your code lints (`nf-core pipelines lint`).
- [ ] Ensure the test suite passes (`nextflow run . -profile test,docker --outdir <OUTDIR>`).
- [ ] Check for unexpected warnings in debug mode (`nextflow run . -profile debug,test,docker --outdir <OUTDIR>`).
- [ ] Usage Documentation in `docs/usage.md` is updated.
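As a sketch of the resource-label convention described in the contributing guidelines above (the process name, tool, and flags below are hypothetical; `label` and the `task.*` variables are the real mechanism):

```groovy
process EXAMPLE_TOOL {
    // cpus/memory/time are resolved from the matching withLabel selector in conf/base.config
    label 'process_medium'

    input:
    tuple val(meta), path(mzml_file)

    script:
    """
    example_tool --threads ${task.cpus} \\
        --memory-limit '${task.memory.toGiga()}G' \\
        --in ${mzml_file}
    """
}
```

Because the requirements hang off the label rather than the individual process, a site-specific or test config can raise or lower them for every `process_medium` step at once.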
diff --git a/.github/workflows/ci.yml b/.github/workflows/ci.yml index 0f0b851e7..b46e6f26f 100644 --- a/.github/workflows/ci.yml +++ b/.github/workflows/ci.yml @@ -7,9 +7,12 @@ on: pull_request: release: types: [published] + workflow_dispatch: env: NXF_ANSI_LOG: false + NXF_SINGULARITY_CACHEDIR: ${{ github.workspace }}/.singularity + NXF_SINGULARITY_LIBRARYDIR: ${{ github.workspace }}/.singularity concurrency: group: "${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}" @@ -32,36 +35,43 @@ jobs: matrix: # Nextflow versions NXF_VER: - - "23.04.0" + - "24.04.2" - "latest-everything" test_profile: ["test_lfq", "test_lfq_sage", "test_dia", "test_localize", "test_tmt", "test_dda_id", "test_tmt_corr"] exec_profile: ["docker"] exclude: - test_profile: test_dia - exec_profile: conda + exec_profile: "conda" - test_profile: test_localize - exec_profile: conda + exec_profile: "conda" - NXF_VER: "latest-everything" exec_profile: "conda" include: - test_profile: test_latest_dia - exec_profile: singularity + exec_profile: "singularity" - test_profile: test_lfq - exec_profile: conda + exec_profile: "conda" - test_profile: test_dda_id - exec_profile: conda + exec_profile: "conda" steps: - name: Check out pipeline code uses: actions/checkout@0ad4b8fadaa221de15dcec353f45205ec38ea70b # v4 - - name: Install Nextflow + - name: Set up Nextflow uses: nf-core/setup-nextflow@v2 with: version: "${{ matrix.NXF_VER }}" - - name: Disk space cleanup - uses: jlumbroso/free-disk-space@54081f138730dfa15788a46383842cd2f914a1be # v1.3.1 + - name: Set up Apptainer + if: matrix.exec_profile == 'singularity' + uses: eWaterCycle/setup-apptainer@main + + - name: Set up Singularity + if: matrix.exec_profile == 'singularity' + run: | + mkdir -p $NXF_SINGULARITY_CACHEDIR + mkdir -p $NXF_SINGULARITY_LIBRARYDIR - name: Install micromamba env: @@ -78,6 +88,9 @@ jobs: if: matrix.exec_profile == 'singularity' uses: singularityhub/install-singularity@main + - name: Disk space cleanup + uses: jlumbroso/free-disk-space@54081f138730dfa15788a46383842cd2f914a1be # v1.3.1 + - name: Run pipeline with test data in docker/singularity profile if: matrix.exec_profile == 'docker' || matrix.exec_profile == 'singularity' # TODO nf-core: You can customise CI pipeline run tests as required diff --git a/.github/workflows/download_pipeline.yml b/.github/workflows/download_pipeline.yml index 2d20d6442..713dc3e73 100644 --- a/.github/workflows/download_pipeline.yml +++ b/.github/workflows/download_pipeline.yml @@ -1,4 +1,4 @@ -name: Test successful pipeline download with 'nf-core download' +name: Test successful pipeline download with 'nf-core pipelines download' # Run the workflow when: # - dispatched manually @@ -8,7 +8,7 @@ on: workflow_dispatch: inputs: testbranch: - description: "The specific branch you wish to utilize for the test execution of nf-core download." + description: "The specific branch you wish to utilize for the test execution of nf-core pipelines download." 
required: true
default: "dev"
pull_request:
@@ -39,9 +39,11 @@ jobs:
with:
python-version: "3.12"
architecture: "x64"
- - uses: eWaterCycle/setup-singularity@931d4e31109e875b13309ae1d07c70ca8fbc8537 # v7
+
+ - name: Setup Apptainer
+ uses: eWaterCycle/setup-apptainer@4bb22c52d4f63406c49e94c804632975787312b3 # v2.0.0
with:
- singularity-version: 3.8.3
+ apptainer-version: 1.3.4
- name: Install dependencies
run: |
@@ -54,33 +56,64 @@ jobs:
echo "REPOTITLE_LOWERCASE=$(basename ${GITHUB_REPOSITORY,,})" >> ${GITHUB_ENV}
echo "REPO_BRANCH=${{ github.event.inputs.testbranch || 'dev' }}" >> ${GITHUB_ENV}
+ - name: Make a cache directory for the container images
+ run: |
+ mkdir -p ./singularity_container_images
+
- name: Download the pipeline
env:
- NXF_SINGULARITY_CACHEDIR: ./
+ NXF_SINGULARITY_CACHEDIR: ./singularity_container_images
run: |
- nf-core download ${{ env.REPO_LOWERCASE }} \
+ nf-core pipelines download ${{ env.REPO_LOWERCASE }} \
--revision ${{ env.REPO_BRANCH }} \
--outdir ./${{ env.REPOTITLE_LOWERCASE }} \
--compress "none" \
--container-system 'singularity' \
- --container-library "quay.io" -l "docker.io" -l "ghcr.io" \
+ --container-library "quay.io" -l "docker.io" -l "community.wave.seqera.io" \
--container-cache-utilisation 'amend' \
- --download-configuration
+ --download-configuration 'yes'
- name: Inspect download
run: tree ./${{ env.REPOTITLE_LOWERCASE }}
+ - name: Count the downloaded number of container images
+ id: count_initial
+ run: |
+ image_count=$(ls -1 ./singularity_container_images | wc -l | xargs)
+ echo "Initial container image count: $image_count"
+ echo "IMAGE_COUNT_INITIAL=$image_count" >> ${GITHUB_ENV}
+
- name: Run the downloaded pipeline (stub)
id: stub_run_pipeline
continue-on-error: true
env:
- NXF_SINGULARITY_CACHEDIR: ./
+ NXF_SINGULARITY_CACHEDIR: ./singularity_container_images
NXF_SINGULARITY_HOME_MOUNT: true
run: nextflow run ./${{ env.REPOTITLE_LOWERCASE }}/$( sed 's/\W/_/g' <<< ${{ env.REPO_BRANCH }}) -stub -profile test,singularity --outdir ./results
- name: Run the downloaded pipeline (stub run not supported)
id: run_pipeline
if: ${{ job.steps.stub_run_pipeline.status == failure() }}
env:
- NXF_SINGULARITY_CACHEDIR: ./
+ NXF_SINGULARITY_CACHEDIR: ./singularity_container_images
NXF_SINGULARITY_HOME_MOUNT: true
run: nextflow run ./${{ env.REPOTITLE_LOWERCASE }}/$( sed 's/\W/_/g' <<< ${{ env.REPO_BRANCH }}) -profile test,singularity --outdir ./results
+
+ - name: Count the downloaded number of container images
+ id: count_afterwards
+ run: |
+ image_count=$(ls -1 ./singularity_container_images | wc -l | xargs)
+ echo "Post-pipeline run container image count: $image_count"
+ echo "IMAGE_COUNT_AFTER=$image_count" >> ${GITHUB_ENV}
+
+ - name: Compare container image counts
+ run: |
+ if [ "${{ env.IMAGE_COUNT_INITIAL }}" -ne "${{ env.IMAGE_COUNT_AFTER }}" ]; then
+ initial_count=${{ env.IMAGE_COUNT_INITIAL }}
+ final_count=${{ env.IMAGE_COUNT_AFTER }}
+ difference=$((final_count - initial_count))
+ echo "$difference additional container images were downloaded at runtime. The pipeline has no support for offline runs!"
+ tree ./singularity_container_images
+ exit 1
+ else
+ echo "The pipeline can be downloaded successfully!"
+ fi
diff --git a/.github/workflows/linting.yml b/.github/workflows/linting.yml
index 1fcafe880..a502573c5 100644
--- a/.github/workflows/linting.yml
+++ b/.github/workflows/linting.yml
@@ -1,6 +1,6 @@
name: nf-core linting
# This workflow is triggered on pushes and PRs to the repository.
-# It runs the `nf-core lint` and markdown lint tests to ensure +# It runs the `nf-core pipelines lint` and markdown lint tests to ensure # that the code meets the nf-core guidelines. on: push: @@ -41,17 +41,32 @@ jobs: python-version: "3.12" architecture: "x64" + - name: read .nf-core.yml + uses: pietrobolcato/action-read-yaml@1.1.0 + id: read_yml + with: + config: ${{ github.workspace }}/.nf-core.yml + - name: Install dependencies run: | python -m pip install --upgrade pip - pip install nf-core + pip install nf-core==${{ steps.read_yml.outputs['nf_core_version'] }} + + - name: Run nf-core pipelines lint + if: ${{ github.base_ref != 'master' }} + env: + GITHUB_COMMENTS_URL: ${{ github.event.pull_request.comments_url }} + GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }} + GITHUB_PR_COMMIT: ${{ github.event.pull_request.head.sha }} + run: nf-core -l lint_log.txt pipelines lint --dir ${GITHUB_WORKSPACE} --markdown lint_results.md - - name: Run nf-core lint + - name: Run nf-core pipelines lint --release + if: ${{ github.base_ref == 'master' }} env: GITHUB_COMMENTS_URL: ${{ github.event.pull_request.comments_url }} GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }} GITHUB_PR_COMMIT: ${{ github.event.pull_request.head.sha }} - run: nf-core -l lint_log.txt lint --dir ${GITHUB_WORKSPACE} --markdown lint_results.md + run: nf-core -l lint_log.txt pipelines lint --release --dir ${GITHUB_WORKSPACE} --markdown lint_results.md - name: Save PR number if: ${{ always() }} diff --git a/.github/workflows/linting_comment.yml b/.github/workflows/linting_comment.yml index 40acc23f5..42e519bfa 100644 --- a/.github/workflows/linting_comment.yml +++ b/.github/workflows/linting_comment.yml @@ -11,7 +11,7 @@ jobs: runs-on: ubuntu-latest steps: - name: Download lint results - uses: dawidd6/action-download-artifact@09f2f74827fd3a8607589e5ad7f9398816f540fe # v3 + uses: dawidd6/action-download-artifact@bf251b5aa9c2f7eeb574a96ee720e24f801b7c11 # v6 with: workflow: linting.yml workflow_conclusion: completed diff --git a/.github/workflows/template_version_comment.yml b/.github/workflows/template_version_comment.yml new file mode 100644 index 000000000..e8aafe44d --- /dev/null +++ b/.github/workflows/template_version_comment.yml @@ -0,0 +1,46 @@ +name: nf-core template version comment +# This workflow is triggered on PRs to check if the pipeline template version matches the latest nf-core version. +# It posts a comment to the PR, even if it comes from a fork. + +on: pull_request_target + +jobs: + template_version: + runs-on: ubuntu-latest + steps: + - name: Check out pipeline code + uses: actions/checkout@0ad4b8fadaa221de15dcec353f45205ec38ea70b # v4 + with: + ref: ${{ github.event.pull_request.head.sha }} + + - name: Read template version from .nf-core.yml + uses: nichmor/minimal-read-yaml@v0.0.2 + id: read_yml + with: + config: ${{ github.workspace }}/.nf-core.yml + + - name: Install nf-core + run: | + python -m pip install --upgrade pip + pip install nf-core==${{ steps.read_yml.outputs['nf_core_version'] }} + + - name: Check nf-core outdated + id: nf_core_outdated + run: echo "OUTPUT=$(pip list --outdated | grep nf-core)" >> ${GITHUB_ENV} + + - name: Post nf-core template version comment + uses: mshick/add-pr-comment@b8f338c590a895d50bcbfa6c5859251edc8952fc # v2 + if: | + contains(env.OUTPUT, 'nf-core') + with: + repo-token: ${{ secrets.NF_CORE_BOT_AUTH_TOKEN }} + allow-repeats: false + message: | + > [!WARNING] + > Newer version of the nf-core template is available. 
+ >
+ > Your pipeline is using an old version of the nf-core template: ${{ steps.read_yml.outputs['nf_core_version'] }}.
+ > Please update your pipeline to the latest version.
+ >
+ > For more documentation on how to update your pipeline, please see the [nf-core documentation](https://github.com/nf-core/tools?tab=readme-ov-file#sync-a-pipeline-with-the-template) and [Synchronisation documentation](https://nf-co.re/docs/contributing/sync).
+ #
diff --git a/.gitignore b/.gitignore
index b25226dd7..a42ce0162 100644
--- a/.gitignore
+++ b/.gitignore
@@ -6,16 +6,4 @@ results/
testing/
testing*
*.pyc
-.idea/
-.idea/*
-*.log
-/build/
-results*/
-venv/
-node_modules
-conversion_inputs
-debug_dir
-test_out
-
-lint_log.txt
-node_modules
+null/
diff --git a/.gitpod.yml b/.gitpod.yml
index 105a1821a..461186376 100644
--- a/.gitpod.yml
+++ b/.gitpod.yml
@@ -4,17 +4,14 @@ tasks:
command: |
pre-commit install --install-hooks
nextflow self-update
- - name: unset JAVA_TOOL_OPTIONS
- command: |
- unset JAVA_TOOL_OPTIONS
vscode:
extensions: # based on nf-core.nf-core-extensionpack
- - esbenp.prettier-vscode # Markdown/CommonMark linting and style checking for Visual Studio Code
+ #- esbenp.prettier-vscode # Markdown/CommonMark linting and style checking for Visual Studio Code
- EditorConfig.EditorConfig # override user/workspace settings with settings found in .editorconfig files
- Gruntfuggly.todo-tree # Display TODO and FIXME in a tree view in the activity bar
- mechatroner.rainbow-csv # Highlight columns in csv files in different colors
- # - nextflow.nextflow # Nextflow syntax highlighting
+ - nextflow.nextflow # Nextflow syntax highlighting
- oderwat.indent-rainbow # Highlight indentation level
- streetsidesoftware.code-spell-checker # Spelling checker for source code
- charliermarsh.ruff # Code linter Ruff
diff --git a/.nf-core.yml b/.nf-core.yml
index 094df41e1..d86c822e7 100644
--- a/.nf-core.yml
+++ b/.nf-core.yml
@@ -1,5 +1,4 @@
-repository_type: pipeline
-nf_core_version: "2.14.1"
+bump_version: null
lint:
files_exist:
- conf/igenomes.config
@@ -7,8 +6,14 @@ lint:
- conf/test.config
- .github/workflows/awstest.yml
- .github/workflows/awsfulltest.yml
+ - .github/workflows/ci.yml
+ - .gitignore
files_unchanged:
- .github/PULL_REQUEST_TEMPLATE.md
- .github/CONTRIBUTING.md
- docs/README.md
- multiqc_config: False
+ nextflow_config: false
+ multiqc_config: false
+nf_core_version: 3.0.2
+org_path: null
+repository_type: pipeline
diff --git a/.pre-commit-config.yaml b/.pre-commit-config.yaml
index 4dc0f1dcd..9e9f0e1c4 100644
--- a/.pre-commit-config.yaml
+++ b/.pre-commit-config.yaml
@@ -7,7 +7,7 @@ repos:
- prettier@3.2.5
- repo: https://github.com/editorconfig-checker/editorconfig-checker.python
- rev: "2.7.3"
+ rev: "3.0.3"
hooks:
- id: editorconfig-checker
alias: ec
diff --git a/CHANGELOG.md b/CHANGELOG.md
index d006dfd28..f90118cec 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -12,6 +12,8 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
### `Changed`
- [#423](https://github.com/bigbio/quantms/pull/423) Updated OpenMS==3.2.0
+- [#423](https://github.com/bigbio/quantms/pull/423) Update thermorawfileparser==1.4.5
+- [#423](https://github.com/bigbio/quantms/pull/423) Update quantms-utils==0.0.12
### `Fixed`
@@ -31,7 +33,7 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
- [#386](https://github.com/bigbio/quantms/pull/386) Make validation of ontology terms optional
- [#398](https://github.com/bigbio/quantms/pull/398) Python scripts moved to quantms-utils package
- [#389](https://github.com/bigbio/quantms/pull/389)
Introduction to DIANN 1.9.1 to the pipeline, only available in Singularity.
-- [#396](https://github.com/bigbio/quantms/pull/396) Adds verification step to unpacking tar archives in the DECOMPRESS process
+- [#396](https://github.com/bigbio/quantms/pull/396) Adds a verification step to unpacking tar archives in the DECOMPRESS process
- [#397](https://github.com/bigbio/quantms/pull/397) More options included in SDRF validation.
- [#404](https://github.com/bigbio/quantms/pull/404) Add spectrum SNR features to rescore
@@ -170,7 +172,7 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
- [#193](https://github.com/bigbio/quantms/pull/193) - Set the `local_input_type` default parameter to `mzML`
- [#212](https://github.com/bigbio/quantms/pull/212) - Set the `min_consensus_support` default parameter to `1` to filter in ConsensusID for peptides identified with both search engines
-- [#200](https://github.com/bigbio/quantms/pull/200) - Add `export_mztab` parameter to allow torun PROTEINQUANTIFIER TMT without exporting to mzTab
+- [#200](https://github.com/bigbio/quantms/pull/200) - Add `export_mztab` parameter to allow to run PROTEINQUANTIFIER TMT without exporting to mzTab
## [1.0] nfcore/quantms - [05/02/2022] - Havana
diff --git a/README.md b/README.md
index 0d1110f51..6b07bd840 100644
--- a/README.md
+++ b/README.md
@@ -5,7 +5,7 @@
[![GitHub Actions Linting Status](https://github.com/nf-core/quantms/actions/workflows/linting.yml/badge.svg)](https://github.com/nf-core/quantms/actions/workflows/linting.yml)[![AWS CI](https://img.shields.io/badge/CI%20tests-full%20size-FF9900?labelColor=000000&logo=Amazon%20AWS)](https://nf-co.re/quantms/results)[![Cite with Zenodo](https://img.shields.io/badge/DOI-10.5281/zenodo.7754148-1073c8?labelColor=000000)](https://doi.org/10.5281/zenodo.7754148)
[![nf-test](https://img.shields.io/badge/unit_tests-nf--test-337ab7.svg)](https://www.nf-test.com)
-[![Nextflow](https://img.shields.io/badge/nextflow%20DSL2-%E2%89%A523.04.0-23aa62.svg)](https://www.nextflow.io/)
+[![Nextflow](https://img.shields.io/badge/nextflow%20DSL2-%E2%89%A524.04.2-23aa62.svg)](https://www.nextflow.io/)
[![run with conda](https://img.shields.io/badge/run%20with-conda-3EB049?labelColor=000000&logo=anaconda)](https://docs.conda.io/en/latest/)
[![run with docker](https://img.shields.io/badge/run%20with-docker-0db7ed?labelColor=000000&logo=docker)](https://www.docker.com/)
[![run with singularity](https://img.shields.io/badge/run%20with-singularity-1d355c.svg?labelColor=000000)](https://sylabs.io/docs/)
diff --git a/assets/schema_input.json b/assets/schema_input.json
index 83c023e52..f9ba804e0 100644
--- a/assets/schema_input.json
+++ b/assets/schema_input.json
@@ -1,5 +1,5 @@
{
- "$schema": "http://json-schema.org/draft-07/schema",
+ "$schema": "https://json-schema.org/draft/2020-12/schema",
"$id": "https://raw.githubusercontent.com/nf-core/quantms/master/assets/schema_input.json",
"title": "nf-core/quantms pipeline - params.input schema",
"description": "Schema for the file provided with params.input",
diff --git a/conf/base.config b/conf/base.config
index 709b99274..fe7230d55 100644
--- a/conf/base.config
+++ b/conf/base.config
@@ -11,9 +11,9 @@ process {
// TODO nf-core: Check the defaults for all processes
- cpus = { check_max( 2 * task.attempt, 'cpus' ) }
- memory = { check_max( 8.GB * task.attempt, 'memory' ) }
- time = { check_max( 4.h * task.attempt, 'time' ) }
+ cpus = { 1 * task.attempt }
+ memory = { 6.GB * task.attempt }
+ time = { 4.h * task.attempt }
errorStrategy = { task.exitStatus in ((130..145) + 104) ? 'retry' : 'finish' }
maxRetries = 1
@@ -24,37 +24,39 @@ process {
// These labels are used and recognised by default in DSL2 files hosted on nf-core/modules.
// If possible, it would be nice to keep the same label naming convention when
// adding in your local modules too.
+
// TODO nf-core: Customise requirements for specific processes.
// See https://www.nextflow.io/docs/latest/config.html#config-process-selectors
+
withLabel:process_single {
- cpus = { check_max( 1 , 'cpus' ) }
- memory = { check_max( 6.GB * task.attempt, 'memory' ) }
- time = { check_max( 4.h * task.attempt, 'time' ) }
- }
- withLabel:process_very_low {
- cpus = { check_max( 2 * task.attempt, 'cpus' ) }
- memory = { check_max( 6.GB * task.attempt, 'memory' ) }
- time = { check_max( 3.h * task.attempt, 'time' ) }
+ cpus = { 1 }
+ memory = { 6.GB * task.attempt }
+ time = { 4.h * task.attempt }
}
withLabel:process_low {
- cpus = { check_max( 4 * task.attempt, 'cpus' ) }
- memory = { check_max( 12.GB * task.attempt, 'memory' ) }
- time = { check_max( 6.h * task.attempt, 'time' ) }
+ cpus = { 4 * task.attempt }
+ memory = { 12.GB * task.attempt }
+ time = { 6.h * task.attempt }
+ }
+ withLabel:process_very_low {
+ cpus = { 2 * task.attempt }
+ memory = { 4.GB * task.attempt }
+ time = { 3.h * task.attempt }
}
withLabel:process_medium {
- cpus = { check_max( 8 * task.attempt, 'cpus' ) }
- memory = { check_max( 36.GB * task.attempt, 'memory' ) }
- time = { check_max( 8.h * task.attempt, 'time' ) }
+ cpus = { 8 * task.attempt }
+ memory = { 36.GB * task.attempt }
+ time = { 8.h * task.attempt }
}
withLabel:process_high {
- cpus = { check_max( 12 * task.attempt, 'cpus' ) }
- memory = { check_max( 72.GB * task.attempt, 'memory' ) }
- time = { check_max( 16.h * task.attempt, 'time' ) }
+ cpus = { 12 * task.attempt }
+ memory = { 72.GB * task.attempt }
+ time = { 16.h * task.attempt }
}
withLabel:process_long {
- time = { check_max( 20.h * task.attempt, 'time' ) }
+ time = { 20.h * task.attempt }
}
withLabel:process_high_memory {
- memory = { check_max( 200.GB * task.attempt, 'memory' ) }
+ memory = { 200.GB * task.attempt }
}
withLabel:error_ignore {
errorStrategy = 'ignore'
@@ -64,10 +66,3 @@ process {
maxRetries = 2
}
}
-
-params {
- // Defaults only, expecting to be overwritten
- max_memory = 128.GB
- max_cpus = 16
- max_time = 240.h
-}
diff --git a/conf/igenomes_ignored.config b/conf/igenomes_ignored.config
new file mode 100644
index 000000000..b4034d824
--- /dev/null
+++ b/conf/igenomes_ignored.config
@@ -0,0 +1,9 @@
+/*
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ Nextflow config file for iGenomes paths
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ Empty genomes dictionary to use when igenomes is ignored.
+---------------------------------------------------------------------------------------- +*/ + +params.genomes = [:] diff --git a/conf/test_dda_id.config b/conf/test_dda_id.config index 0bbd9311c..17ee2eff0 100644 --- a/conf/test_dda_id.config +++ b/conf/test_dda_id.config @@ -10,15 +10,18 @@ ------------------------------------------------------------------------------------------------ */ +process { + resourceLimits = [ + cpus: 4, + memory: '6.GB', + time: '48.h' + ] +} + params { config_profile_name = 'Real full-size test profile for DDA ID' config_profile_description = 'Real full-size test dataset to check pipeline function of the DDA identification branch of the pipeline' - // Limit resources so that this can run on GitHub Actions - max_cpus = 2 - max_memory = 6.GB - max_time = 48.h - outdir = "./results_lfq_dda_id" // Input data diff --git a/conf/test_dia.config b/conf/test_dia.config index deb1d03c9..ce041b928 100644 --- a/conf/test_dia.config +++ b/conf/test_dia.config @@ -10,15 +10,17 @@ ------------------------------------------------------------------------------------------------ */ +process { + resourceLimits = [ + cpus: 4, + memory: '6.GB', + time: '48.h' + ] +} params { config_profile_name = 'Test profile for DIA' config_profile_description = 'Minimal test dataset to check pipeline function for the data-independent acquisition pipeline branch.' - // Limit resources so that this can run on GitHub Actions - max_cpus = 2 - max_memory = 6.GB - max_time = 48.h - outdir = './results_dia' // Input data diff --git a/conf/test_full_dia.config b/conf/test_full_dia.config index 1cfdd16d5..2bfb1142c 100644 --- a/conf/test_full_dia.config +++ b/conf/test_full_dia.config @@ -10,15 +10,18 @@ ------------------------------------------------------------------------------------------------ */ +process { + resourceLimits = [ + cpus: 4, + memory: '6.GB', + time: '48.h' + ] +} + params { config_profile_name = 'Real full-size test profile for DIA' config_profile_description = 'Real full-size test dataset to check pipeline function for the data-independent acquisition pipeline branch.' 
- // Limit resources so that this can run on GitHub Actions - max_cpus = 2 - max_memory = 6.GB - max_time = 48.h - outdir = './results_dia_full' // Input data diff --git a/conf/test_full_lfq.config b/conf/test_full_lfq.config index 4e4684f16..1b2ccc5d2 100644 --- a/conf/test_full_lfq.config +++ b/conf/test_full_lfq.config @@ -10,15 +10,18 @@ ------------------------------------------------------------------------------------------------ */ +process { + resourceLimits = [ + cpus: 4, + memory: '12.GB', + time: '48.h' + ] +} + params { config_profile_name = 'Real full-size test profile for DDA LFQ' config_profile_description = 'Real full-size test dataset to check pipeline function of the label-free quantification branch of the pipeline' - // Limit resources so that this can run on GitHub Actions - max_cpus = 2 - max_memory = 6.GB - max_time = 48.h - outdir = "./results_lfq_full" // Input data diff --git a/conf/test_full_tmt.config b/conf/test_full_tmt.config index 11eea647f..32650ed58 100644 --- a/conf/test_full_tmt.config +++ b/conf/test_full_tmt.config @@ -10,16 +10,20 @@ ---------------------------------------------------------------------------------------- */ +process { + resourceLimits = [ + cpus: 4, + memory: '12.GB', + time: '48.h' + ] +} + params { config_profile_name = 'Real full test profile DDA ISO' config_profile_description = 'Real full test dataset in isotopic labelling mode to check pipeline function and sanity of results' outdir = "./results_iso_full" - max_cpus = 2 - max_memory = 6.GB - max_time = 48.h - // Input data for full size test input = 'https://raw.githubusercontent.com/nf-core/test-datasets/quantms/testdata-aws/tmt_full/PXD005486.sdrf.tsv' diff --git a/conf/test_latest_dia.config b/conf/test_latest_dia.config index 14eea22b7..ab99bf593 100644 --- a/conf/test_latest_dia.config +++ b/conf/test_latest_dia.config @@ -10,15 +10,18 @@ ------------------------------------------------------------------------------------------------ */ +process { + resourceLimits = [ + cpus: 4, + memory: '12.GB', + time: '48.h' + ] +} + params { config_profile_name = 'Test profile for latest DIA' config_profile_description = 'Minimal test dataset to check pipeline function for the data-independent acquisition pipeline branch for latest DIA-NN.' 
- // Limit resources so that this can run on GitHub Actions - max_cpus = 2 - max_memory = 6.GB - max_time = 48.h - outdir = './results_latest_dia' // Input data diff --git a/conf/test_lfq.config b/conf/test_lfq.config index cd2480ba7..9dadd2963 100644 --- a/conf/test_lfq.config +++ b/conf/test_lfq.config @@ -10,15 +10,18 @@ ------------------------------------------------------------------------------------------------ */ +process { + resourceLimits = [ + cpus: 4, + memory: '12.GB', + time: '48.h' + ] +} + params { config_profile_name = 'Test profile for DDA LFQ' config_profile_description = 'Minimal test dataset to check pipeline function of the label-free quantification branch of the pipeline' - // Limit resources so that this can run on GitHub Actions - max_cpus = 2 - max_memory = 6.GB - max_time = 48.h - outdir = "./results_lfq" // Input data @@ -26,7 +29,7 @@ params { input = 'https://raw.githubusercontent.com/nf-core/test-datasets/quantms/testdata/lfq_ci/BSA/BSA_design_urls.tsv' database = 'https://raw.githubusercontent.com/nf-core/test-datasets/quantms/testdata/lfq_ci/BSA/18Protein_SoCe_Tr_detergents_trace_target_decoy.fasta' posterior_probabilities = "fit_distributions" - search_engines = "msgf,comet" + search_engines = "comet" decoy_string= "rev" add_triqler_output = true protein_level_fdr_cutoff = 1.0 diff --git a/conf/test_lfq_sage.config b/conf/test_lfq_sage.config index 8f7073b43..69234b6fb 100644 --- a/conf/test_lfq_sage.config +++ b/conf/test_lfq_sage.config @@ -10,15 +10,18 @@ ------------------------------------------------------------------------------------------------ */ +process { + resourceLimits = [ + cpus: 4, + memory: '12.GB', + time: '48.h' + ] +} + params { config_profile_name = 'Test profile for DDA LFQ with Sage' config_profile_description = 'Minimal test dataset to check pipeline function of the label-free quantification branch of the pipeline with the search engine Sage' - // Limit resources so that this can run on GitHub Actions - max_cpus = 2 - max_memory = 6.GB - max_time = 48.h - outdir = "./results_lfq" tracedir = "${params.outdir}/pipeline_info" diff --git a/conf/test_localize.config b/conf/test_localize.config index ef32bddda..32d6bb6fa 100644 --- a/conf/test_localize.config +++ b/conf/test_localize.config @@ -10,15 +10,18 @@ ---------------------------------------------------------------------------------------------------- */ +process { + resourceLimits = [ + cpus: 4, + memory: '12.GB', + time: '48.h' + ] +} + params { config_profile_name = 'Test PTM-localization profile' config_profile_description = 'Minimal test dataset to check pipeline function for PTM-localization, SDRF parsing and ConsensusID.' 
- // Limit resources so that this can run on Travis - max_cpus = 2 - max_memory = 6.GB - max_time = 1.h - outdir = "./results_localize" // Input data diff --git a/conf/test_tmt.config b/conf/test_tmt.config index 7184f1a49..c220c3f6e 100644 --- a/conf/test_tmt.config +++ b/conf/test_tmt.config @@ -10,16 +10,20 @@ ---------------------------------------------------------------------------------------- */ +process { + resourceLimits = [ + cpus: 4, + memory: '12.GB', + time: '48.h' + ] +} + params { config_profile_name = 'Full test profile DDA ISO' config_profile_description = 'Full test dataset in isotopic labelling mode to check pipeline function and sanity of results' outdir = "./results_iso" - max_cpus = 2 - max_memory = 6.GB - max_time = 48.h - // Input data for full size test input = 'https://raw.githubusercontent.com/nf-core/test-datasets/quantms/testdata/tmt_ci/PXD000001.sdrf.tsv' diff --git a/conf/test_tmt_corr.config b/conf/test_tmt_corr.config index 561d1f31b..276b675d1 100644 --- a/conf/test_tmt_corr.config +++ b/conf/test_tmt_corr.config @@ -10,16 +10,20 @@ ---------------------------------------------------------------------------------------- */ +process { + resourceLimits = [ + cpus: 4, + memory: '12.GB', + time: '48.h' + ] +} + params { config_profile_name = 'Full test profile DDA ISO' config_profile_description = 'Full test dataset in isotopic labelling mode to check pipeline function and sanity of results' outdir = "./results_iso" - max_cpus = 2 - max_memory = 6.GB - max_time = 48.h - // Input data for full size test input = 'https://raw.githubusercontent.com/nf-core/test-datasets/quantms/testdata/tmt_ci/PXD000001.sdrf.tsv' diff --git a/docs/images/mqc_fastqc_adapter.png b/docs/images/mqc_fastqc_adapter.png deleted file mode 100755 index 361d0e47a..000000000 Binary files a/docs/images/mqc_fastqc_adapter.png and /dev/null differ diff --git a/docs/images/mqc_fastqc_counts.png b/docs/images/mqc_fastqc_counts.png deleted file mode 100755 index cb39ebb80..000000000 Binary files a/docs/images/mqc_fastqc_counts.png and /dev/null differ diff --git a/docs/images/mqc_fastqc_quality.png b/docs/images/mqc_fastqc_quality.png deleted file mode 100755 index a4b89bf56..000000000 Binary files a/docs/images/mqc_fastqc_quality.png and /dev/null differ diff --git a/main.nf b/main.nf index 340aad40d..6dada7cc8 100644 --- a/main.nf +++ b/main.nf @@ -9,8 +9,6 @@ ---------------------------------------------------------------------------------------- */ -nextflow.enable.dsl = 2 - /* ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ IMPORT FUNCTIONS / MODULES / SUBWORKFLOWS / WORKFLOWS @@ -21,8 +19,6 @@ include { QUANTMS } from './workflows/quantms' include { PIPELINE_INITIALISATION } from './subworkflows/local/utils_nfcore_quantms_pipeline' include { PIPELINE_COMPLETION } from './subworkflows/local/utils_nfcore_quantms_pipeline' - - // // WORKFLOW: Run main nf-core/quantms analysis pipeline // @@ -42,8 +38,6 @@ workflow NFCORE_QUANTMS { ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ */ - - // // WORKFLOW: Execute a single named workflow for the pipeline // See: https://github.com/nf-core/rnaseq/issues/619 diff --git a/modules.json b/modules.json index 8bd8a9c6e..fbff41a19 100644 --- a/modules.json +++ b/modules.json @@ -10,9 +10,14 @@ "git_sha": "8ec825f465b9c17f9d83000022995b4f7de6fe93", "installed_by": ["modules"] }, + "fastqc": { + "branch": "master", + "git_sha": "49b18b1639f4f7104187058866a8fab33332bdfe", + 
"installed_by": ["modules"] + }, "multiqc": { "branch": "master", - "git_sha": "b7ebe95761cd389603f9cc0e0dc384c0f663815a", + "git_sha": "cf17ca47590cc578dfb47db1c2a44ef86f89976d", "installed_by": ["modules"] } } @@ -21,17 +26,17 @@ "nf-core": { "utils_nextflow_pipeline": { "branch": "master", - "git_sha": "5caf7640a9ef1d18d765d55339be751bb0969dfa", + "git_sha": "3aa0aec1d52d492fe241919f0c6100ebf0074082", "installed_by": ["subworkflows"] }, "utils_nfcore_pipeline": { "branch": "master", - "git_sha": "92de218a329bfc9a9033116eb5f65fd270e72ba3", + "git_sha": "1b6b9a3338d011367137808b49b923515080e3ba", "installed_by": ["subworkflows"] }, - "utils_nfvalidation_plugin": { + "utils_nfschema_plugin": { "branch": "master", - "git_sha": "5caf7640a9ef1d18d765d55339be751bb0969dfa", + "git_sha": "bbd5a41f4535a8defafe6080e00ea74c45f4f96c", "installed_by": ["subworkflows"] } } diff --git a/modules/local/add_sage_feat/main.nf b/modules/local/add_sage_feat/main.nf index ce30e5d5e..ce7f53db7 100644 --- a/modules/local/add_sage_feat/main.nf +++ b/modules/local/add_sage_feat/main.nf @@ -12,7 +12,7 @@ process SAGEFEATURE { output: tuple val(meta), path("${id_file.baseName}_feat.idXML"), emit: id_files_feat - path "versions.yml", emit: version + path "versions.yml", emit: versions path "*.log", emit: log script: diff --git a/modules/local/assemble_empirical_library/main.nf b/modules/local/assemble_empirical_library/main.nf index 69acbcb4f..2aa1d9d53 100644 --- a/modules/local/assemble_empirical_library/main.nf +++ b/modules/local/assemble_empirical_library/main.nf @@ -20,7 +20,7 @@ process ASSEMBLE_EMPIRICAL_LIBRARY { output: path "empirical_library.tsv", emit: empirical_library path "assemble_empirical_library.log", emit: log - path "versions.yml", emit: version + path "versions.yml", emit: versions when: task.ext.when == null || task.ext.when diff --git a/modules/local/decompress_dotd/main.nf b/modules/local/decompress_dotd/main.nf index 7529e17fb..88752b524 100644 --- a/modules/local/decompress_dotd/main.nf +++ b/modules/local/decompress_dotd/main.nf @@ -32,7 +32,7 @@ process DECOMPRESS { output: tuple val(meta), path('*.d'), emit: decompressed_files - path 'versions.yml', emit: version + path 'versions.yml', emit: versions path '*.log', emit: log script: diff --git a/modules/local/diann_preliminary_analysis/main.nf b/modules/local/diann_preliminary_analysis/main.nf index 0d4a5c0da..01699a93f 100644 --- a/modules/local/diann_preliminary_analysis/main.nf +++ b/modules/local/diann_preliminary_analysis/main.nf @@ -16,7 +16,7 @@ process DIANN_PRELIMINARY_ANALYSIS { output: path "*.quant", emit: diann_quant tuple val(meta), path("*_diann.log"), emit: log - path "versions.yml", emit: version + path "versions.yml", emit: versions when: task.ext.when == null || task.ext.when diff --git a/modules/local/diannconvert/main.nf b/modules/local/diannconvert/main.nf index 6f4a78df9..40b273b1d 100644 --- a/modules/local/diannconvert/main.nf +++ b/modules/local/diannconvert/main.nf @@ -22,7 +22,7 @@ process DIANNCONVERT { path "*triqler_in.tsv", emit: out_triqler path "*.mzTab", emit: out_mztab path "*.log", emit: log - path "versions.yml", emit: version + path "versions.yml", emit: versions exec: log.info "DIANNCONVERT is based on the output of DIA-NN 1.8.1 and 1.9.beta.1, other versions of DIA-NN do not support mzTab conversion." 
diff --git a/modules/local/diannsummary/main.nf b/modules/local/diannsummary/main.nf index dc76e8171..c85476912 100644 --- a/modules/local/diannsummary/main.nf +++ b/modules/local/diannsummary/main.nf @@ -31,7 +31,7 @@ process DIANNSUMMARY { // Different library files format are exported due to different DIA-NN versions path "empirical_library.tsv", emit: final_speclib optional true path "empirical_library.tsv.skyline.speclib", emit: skyline_speclib optional true - path "versions.yml", emit: version + path "versions.yml", emit: versions when: task.ext.when == null || task.ext.when diff --git a/modules/local/extract_psm/main.nf b/modules/local/extract_psm/main.nf index b5f6ff32b..ce166ae33 100644 --- a/modules/local/extract_psm/main.nf +++ b/modules/local/extract_psm/main.nf @@ -13,7 +13,7 @@ process PSMCONVERSION { output: path "*_psm.parquet", emit: psm_info - path "versions.yml", emit: version + path "versions.yml", emit: versions path "*.log", emit: log script: diff --git a/modules/local/generate_diann_cfg/main.nf b/modules/local/generate_diann_cfg/main.nf index 10aea5e20..3e4b5a916 100644 --- a/modules/local/generate_diann_cfg/main.nf +++ b/modules/local/generate_diann_cfg/main.nf @@ -12,7 +12,7 @@ process GENERATE_DIANN_CFG { output: path 'diann_config.cfg', emit: diann_cfg - path 'versions.yml', emit: version + path 'versions.yml', emit: versions path '*.log' script: diff --git a/modules/local/individual_final_analysis/main.nf b/modules/local/individual_final_analysis/main.nf index cb9036a06..35efa3dab 100644 --- a/modules/local/individual_final_analysis/main.nf +++ b/modules/local/individual_final_analysis/main.nf @@ -16,7 +16,7 @@ process INDIVIDUAL_FINAL_ANALYSIS { output: path "*.quant", emit: diann_quant path "*_final_diann.log", emit: log - path "versions.yml", emit: version + path "versions.yml", emit: versions when: task.ext.when == null || task.ext.when diff --git a/modules/local/msstats/main.nf b/modules/local/msstats/main.nf index 53f45d764..1c4467d3d 100644 --- a/modules/local/msstats/main.nf +++ b/modules/local/msstats/main.nf @@ -16,7 +16,7 @@ process MSSTATS { path "*.pdf" optional true path "*.csv", emit: msstats_csv path "*.log", emit: log - path "versions.yml" , emit: version + path "versions.yml" , emit: versions script: def args = task.ext.args ?: '' diff --git a/modules/local/msstatstmt/main.nf b/modules/local/msstatstmt/main.nf index 458704b8a..51f6349d0 100644 --- a/modules/local/msstatstmt/main.nf +++ b/modules/local/msstatstmt/main.nf @@ -16,7 +16,7 @@ process MSSTATSTMT { path "*.pdf" optional true path "*.csv", emit: msstats_csv path "*.log" - path "versions.yml" , emit: version + path "versions.yml" , emit: versions script: def args = task.ext.args ?: '' diff --git a/modules/local/mzmlstatistics/main.nf b/modules/local/mzmlstatistics/main.nf index 3134262e9..da83cb40f 100644 --- a/modules/local/mzmlstatistics/main.nf +++ b/modules/local/mzmlstatistics/main.nf @@ -1,6 +1,6 @@ process MZMLSTATISTICS { tag "$meta.mzml_id" - label 'process_medium' + label 'process_very_low' label 'process_single' conda "bioconda::quantms-utils=0.0.11" @@ -14,7 +14,7 @@ process MZMLSTATISTICS { output: path "*_ms_info.parquet", emit: ms_statistics tuple val(meta), path("*_spectrum_df.parquet"), emit: spectrum_df, optional: true - path "versions.yml", emit: version + path "versions.yml", emit: versions path "*.log", emit: log script: diff --git a/modules/local/openms/consensusid/main.nf b/modules/local/openms/consensusid/main.nf index 4ee09d7a3..323bc0508 100644 --- 
a/modules/local/openms/consensusid/main.nf +++ b/modules/local/openms/consensusid/main.nf @@ -13,7 +13,7 @@ process CONSENSUSID { output: tuple val(meta), path("${meta.mzml_id}_consensus.idXML"), emit: consensusids - path "versions.yml", emit: version + path "versions.yml", emit: versions path "*.log", emit: log script: diff --git a/modules/local/openms/decoydatabase/main.nf b/modules/local/openms/decoydatabase/main.nf index de0f9dc91..2f38aec63 100644 --- a/modules/local/openms/decoydatabase/main.nf +++ b/modules/local/openms/decoydatabase/main.nf @@ -12,7 +12,7 @@ process DECOYDATABASE { output: path "*.fasta", emit: db_decoy - path "versions.yml", emit: version + path "versions.yml", emit: versions path "*.log", emit: log script: diff --git a/modules/local/openms/epifany/main.nf b/modules/local/openms/epifany/main.nf index 9275959ce..0763e0f13 100644 --- a/modules/local/openms/epifany/main.nf +++ b/modules/local/openms/epifany/main.nf @@ -12,7 +12,7 @@ process EPIFANY { output: tuple val(meta), path("${consus_file.baseName}_epi.consensusXML"), emit: epi_inference - path "versions.yml", emit: version + path "versions.yml", emit: versions path "*.log", emit: log script: diff --git a/modules/local/openms/extractpsmfeatures/main.nf b/modules/local/openms/extractpsmfeatures/main.nf index c6a1dcd6d..8270f4687 100644 --- a/modules/local/openms/extractpsmfeatures/main.nf +++ b/modules/local/openms/extractpsmfeatures/main.nf @@ -14,7 +14,7 @@ process EXTRACTPSMFEATURES { output: tuple val(meta), path("${id_file.baseName}_feat.idXML"), emit: id_files_feat - path "versions.yml", emit: version + path "versions.yml", emit: versions path "*.log", emit: log script: diff --git a/modules/local/openms/falsediscoveryrate/main.nf b/modules/local/openms/falsediscoveryrate/main.nf index 37316096c..8bb5758e2 100644 --- a/modules/local/openms/falsediscoveryrate/main.nf +++ b/modules/local/openms/falsediscoveryrate/main.nf @@ -14,7 +14,7 @@ process FALSEDISCOVERYRATE { output: tuple val(meta), path("${id_file.baseName}_fdr.idXML"), emit: id_files_idx_ForIDPEP_FDR - path "versions.yml", emit: version + path "versions.yml", emit: versions path "*.log", emit: log script: diff --git a/modules/local/openms/filemerge/main.nf b/modules/local/openms/filemerge/main.nf index 855c37839..6fee0ebcb 100644 --- a/modules/local/openms/filemerge/main.nf +++ b/modules/local/openms/filemerge/main.nf @@ -13,7 +13,7 @@ process FILEMERGE { output: tuple val([:]), path("ID_mapper_merge.consensusXML"), emit: id_merge - path "versions.yml", emit: version + path "versions.yml", emit: versions path "*.log", emit: log script: diff --git a/modules/local/openms/idconflictresolver/main.nf b/modules/local/openms/idconflictresolver/main.nf index ee2eb2d44..95a0d5e63 100644 --- a/modules/local/openms/idconflictresolver/main.nf +++ b/modules/local/openms/idconflictresolver/main.nf @@ -12,7 +12,7 @@ process IDCONFLICTRESOLVER { output: path "${consus_file.baseName}_resconf.consensusXML", emit: pro_resconf - path "versions.yml", emit: version + path "versions.yml", emit: versions path "*.log", emit: log script: diff --git a/modules/local/openms/idfilter/main.nf b/modules/local/openms/idfilter/main.nf index 451cab6c4..c0f5ebe56 100644 --- a/modules/local/openms/idfilter/main.nf +++ b/modules/local/openms/idfilter/main.nf @@ -14,7 +14,7 @@ process IDFILTER { output: tuple val(meta), path("${id_file.baseName}_filter$task.ext.suffix"), emit: id_filtered - path "versions.yml", emit: version + path "versions.yml", emit: versions path "*.log", emit: log 
script: diff --git a/modules/local/openms/idmapper/main.nf b/modules/local/openms/idmapper/main.nf index bc155da0f..a03250e3c 100644 --- a/modules/local/openms/idmapper/main.nf +++ b/modules/local/openms/idmapper/main.nf @@ -13,7 +13,7 @@ process IDMAPPER { output: path "${id_file.baseName}_map.consensusXML", emit: id_map - path "versions.yml", emit: version + path "versions.yml", emit: versions path "*.log", emit: log script: diff --git a/modules/local/openms/idmerger/main.nf b/modules/local/openms/idmerger/main.nf index 2e20d3a8a..2fd67ad3e 100644 --- a/modules/local/openms/idmerger/main.nf +++ b/modules/local/openms/idmerger/main.nf @@ -13,7 +13,7 @@ process IDMERGER { output: tuple val(meta), path("*_merged.idXML"), emit: id_merged - path "versions.yml", emit: version + path "versions.yml", emit: versions path "*.log", emit: log script: diff --git a/modules/local/openms/idpep/main.nf b/modules/local/openms/idpep/main.nf index f414d7a03..22eda8e56 100644 --- a/modules/local/openms/idpep/main.nf +++ b/modules/local/openms/idpep/main.nf @@ -13,7 +13,7 @@ process IDPEP { output: tuple val(meta), path("${id_file.baseName}_idpep.idXML"), val("q-value_score"), emit: id_files_ForIDPEP - path "versions.yml", emit: version + path "versions.yml", emit: versions path "*.log", emit: log script: diff --git a/modules/local/openms/idripper/main.nf b/modules/local/openms/idripper/main.nf index bb91d9b17..2a888db4b 100644 --- a/modules/local/openms/idripper/main.nf +++ b/modules/local/openms/idripper/main.nf @@ -15,7 +15,7 @@ process IDRIPPER { val(meta), emit: meta path("*.idXML"), emit: id_rippers val("MS:1001491"), emit: qval_score - path "versions.yml", emit: version + path "versions.yml", emit: versions path "*.log", emit: log script: diff --git a/modules/local/openms/idscoreswitcher/main.nf b/modules/local/openms/idscoreswitcher/main.nf index 058bbd192..85866af7c 100644 --- a/modules/local/openms/idscoreswitcher/main.nf +++ b/modules/local/openms/idscoreswitcher/main.nf @@ -14,7 +14,7 @@ process IDSCORESWITCHER { output: tuple val(meta), path("${id_file.baseName}_pep.idXML"), emit: id_score_switcher - path "versions.yml", emit: version + path "versions.yml", emit: versions path "*.log", emit: log script: diff --git a/modules/local/openms/indexpeptides/main.nf b/modules/local/openms/indexpeptides/main.nf index f3830c862..e55f806d9 100644 --- a/modules/local/openms/indexpeptides/main.nf +++ b/modules/local/openms/indexpeptides/main.nf @@ -14,7 +14,7 @@ process INDEXPEPTIDES { output: tuple val(meta), path("${id_file.baseName}_idx.idXML"), emit: id_files_idx - path "versions.yml", emit: version + path "versions.yml", emit: versions path "*.log", emit: log script: diff --git a/modules/local/openms/isobaricanalyzer/main.nf b/modules/local/openms/isobaricanalyzer/main.nf index 4e12ef00c..3e47ec220 100644 --- a/modules/local/openms/isobaricanalyzer/main.nf +++ b/modules/local/openms/isobaricanalyzer/main.nf @@ -13,7 +13,7 @@ process ISOBARICANALYZER { output: tuple val(meta), path("${mzml_file.baseName}_iso.consensusXML"), emit: id_files_consensusXML - path "versions.yml", emit: version + path "versions.yml", emit: versions path "*.log", emit: log script: diff --git a/modules/local/openms/msstatsconverter/main.nf b/modules/local/openms/msstatsconverter/main.nf index a5e87f325..4b5e48e97 100644 --- a/modules/local/openms/msstatsconverter/main.nf +++ b/modules/local/openms/msstatsconverter/main.nf @@ -15,7 +15,7 @@ process MSSTATSCONVERTER { output: path "*_msstats_in.csv", emit: out_msstats - path 
"versions.yml", emit: version + path "versions.yml", emit: versions path "*.log", emit: log script: diff --git a/modules/local/openms/mzmlindexing/main.nf b/modules/local/openms/mzmlindexing/main.nf index d2504b26a..3ba7a80ac 100644 --- a/modules/local/openms/mzmlindexing/main.nf +++ b/modules/local/openms/mzmlindexing/main.nf @@ -13,7 +13,7 @@ process MZMLINDEXING { output: tuple val(meta), path("out/*.mzML"), emit: mzmls_indexed - path "versions.yml", emit: version + path "versions.yml", emit: versions path "*.log", emit: log script: diff --git a/modules/local/openms/openmspeakpicker/main.nf b/modules/local/openms/openmspeakpicker/main.nf index d7834b9ef..df1750d58 100644 --- a/modules/local/openms/openmspeakpicker/main.nf +++ b/modules/local/openms/openmspeakpicker/main.nf @@ -13,7 +13,7 @@ process OPENMSPEAKPICKER { output: tuple val(meta), path("*.mzML"), emit: mzmls_picked - path "versions.yml", emit: version + path "versions.yml", emit: versions path "*.log", emit: log script: diff --git a/modules/local/openms/proteininference/main.nf b/modules/local/openms/proteininference/main.nf index 0ef2e1cdd..8d05ec6af 100644 --- a/modules/local/openms/proteininference/main.nf +++ b/modules/local/openms/proteininference/main.nf @@ -12,7 +12,7 @@ process PROTEININFERENCE { output: tuple val(meta), path("${consus_file.baseName}_epi.consensusXML"), emit: protein_inference - path "versions.yml", emit: version + path "versions.yml", emit: versions path "*.log", emit: log script: diff --git a/modules/local/openms/proteinquantifier/main.nf b/modules/local/openms/proteinquantifier/main.nf index 33170cd04..a6170c4e5 100644 --- a/modules/local/openms/proteinquantifier/main.nf +++ b/modules/local/openms/proteinquantifier/main.nf @@ -17,7 +17,7 @@ process PROTEINQUANTIFIER { path "*peptide_openms.csv", emit: peptide_out path "*.mzTab", optional: true, emit: out_mztab path "*.log" - path "versions.yml", emit: version + path "versions.yml", emit: versions script: def args = task.ext.args ?: '' diff --git a/modules/local/openms/proteomicslfq/main.nf b/modules/local/openms/proteomicslfq/main.nf index 4f36c98e7..00fecdb50 100644 --- a/modules/local/openms/proteomicslfq/main.nf +++ b/modules/local/openms/proteomicslfq/main.nf @@ -26,7 +26,7 @@ process PROTEOMICSLFQ { path "debug_mergedIDsGreedyResolvedFDRFiltered.idXML", emit: debug_mergedIDsGreedyResolvedFDRFiltered optional true path "debug_mergedIDsFDRFilteredStrictlyUniqueResolved.idXML", emit: debug_mergedIDsFDRFilteredStrictlyUniqueResolved optional true path "*.log", emit: log - path "versions.yml", emit: version + path "versions.yml", emit: versions script: def args = task.ext.args ?: '' diff --git a/modules/local/openms/thirdparty/luciphoradapter/main.nf b/modules/local/openms/thirdparty/luciphoradapter/main.nf index e09172d26..06f67e4c5 100644 --- a/modules/local/openms/thirdparty/luciphoradapter/main.nf +++ b/modules/local/openms/thirdparty/luciphoradapter/main.nf @@ -14,7 +14,7 @@ process LUCIPHORADAPTER { output: tuple val(meta), path("${id_file.baseName}_luciphor.idXML"), emit: ptm_in_id_luciphor - path "versions.yml", emit: version + path "versions.yml", emit: versions path "*.log", emit: log script: diff --git a/modules/local/openms/thirdparty/msgfdb_indexing/main.nf b/modules/local/openms/thirdparty/msgfdb_indexing/main.nf index fca18e6d1..000f2fa8a 100644 --- a/modules/local/openms/thirdparty/msgfdb_indexing/main.nf +++ b/modules/local/openms/thirdparty/msgfdb_indexing/main.nf @@ -13,7 +13,7 @@ process MSGFDBINDEXING { output: tuple 
path("${database.baseName}.cnlcp"), path("${database.baseName}.canno"), path("${database.baseName}.csarr"), path("${database.baseName}.cseq"), emit: msgfdb_idx - path "versions.yml", emit: version + path "versions.yml", emit: versions path "*.log", emit: log script: diff --git a/modules/local/openms/thirdparty/percolator/main.nf b/modules/local/openms/thirdparty/percolator/main.nf index 83cda1a57..98820e1d5 100644 --- a/modules/local/openms/thirdparty/percolator/main.nf +++ b/modules/local/openms/thirdparty/percolator/main.nf @@ -13,7 +13,7 @@ process PERCOLATOR { output: tuple val(meta), path("*_perc.idXML"), val("MS:1001491"), emit: id_files_perc - path "versions.yml", emit: version + path "versions.yml", emit: versions path "*.log", emit: log script: diff --git a/modules/local/openms/thirdparty/searchenginecomet/main.nf b/modules/local/openms/thirdparty/searchenginecomet/main.nf index 9af0af878..41af36a78 100644 --- a/modules/local/openms/thirdparty/searchenginecomet/main.nf +++ b/modules/local/openms/thirdparty/searchenginecomet/main.nf @@ -13,7 +13,7 @@ process SEARCHENGINECOMET { output: tuple val(meta), path("${mzml_file.baseName}_comet.idXML"), emit: id_files_comet - path "versions.yml", emit: version + path "versions.yml", emit: versions path "*.log", emit: log script: @@ -113,7 +113,7 @@ process SEARCHENGINECOMET { cat <<-END_VERSIONS > versions.yml "${task.process}": CometAdapter: \$(CometAdapter 2>&1 | grep -E '^Version(.*)' | sed 's/Version: //g' | cut -d ' ' -f 1) - Comet: \$(comet 2>&1 | grep -E "Comet version.*" | sed 's/Comet version //g' | sed 's/"//g') + Comet: \$(comet 2>&1 | grep -E "Comet version.*" | sed 's/ Comet version //g' | sed 's/"//g') END_VERSIONS """ } diff --git a/modules/local/openms/thirdparty/searchenginemsgf/main.nf b/modules/local/openms/thirdparty/searchenginemsgf/main.nf index 641267e88..6a5dace12 100644 --- a/modules/local/openms/thirdparty/searchenginemsgf/main.nf +++ b/modules/local/openms/thirdparty/searchenginemsgf/main.nf @@ -13,7 +13,7 @@ process SEARCHENGINEMSGF { output: tuple val(meta), path("${mzml_file.baseName}_msgf.idXML"), emit: id_files_msgf - path "versions.yml", emit: version + path "versions.yml", emit: versions path "*.log", emit: log script: diff --git a/modules/local/openms/thirdparty/searchenginesage/main.nf b/modules/local/openms/thirdparty/searchenginesage/main.nf index 1a6359f72..3cfcf1bd8 100644 --- a/modules/local/openms/thirdparty/searchenginesage/main.nf +++ b/modules/local/openms/thirdparty/searchenginesage/main.nf @@ -13,7 +13,7 @@ process SEARCHENGINESAGE { output: tuple val(metas), path(meta_order_files), emit: id_files_sage - path "versions.yml" , emit: version + path "versions.yml" , emit: versions path "*.log" , emit: log script: diff --git a/modules/local/pmultiqc/main.nf b/modules/local/pmultiqc/main.nf index a689c17b0..8636ddd9f 100644 --- a/modules/local/pmultiqc/main.nf +++ b/modules/local/pmultiqc/main.nf @@ -31,10 +31,8 @@ process PMULTIQC { # leaving here to ease debugging ls -lcth * - echo ">>>>>>>>> Experimental Design <<<<<<<<<" cat results/*openms_design.tsv - echo ">>>>>>>>> Running Multiqc <<<<<<<<<" multiqc \\ -f \\ --config ./results/multiqc_config.yml \\ diff --git a/modules/local/sdrfparsing/main.nf b/modules/local/sdrfparsing/main.nf index 451121e76..53899f0b6 100644 --- a/modules/local/sdrfparsing/main.nf +++ b/modules/local/sdrfparsing/main.nf @@ -14,7 +14,7 @@ process SDRFPARSING { path "${sdrf.baseName}_openms_design.tsv", emit: ch_expdesign path "${sdrf.baseName}_config.tsv" , emit: 
ch_sdrf_config_file path "*.log" , emit: log - path "versions.yml" , emit: version + path "versions.yml" , emit: versions script: def args = task.ext.args ?: '' diff --git a/modules/local/silicolibrarygeneration/main.nf b/modules/local/silicolibrarygeneration/main.nf index d51ab973f..50b16e782 100644 --- a/modules/local/silicolibrarygeneration/main.nf +++ b/modules/local/silicolibrarygeneration/main.nf @@ -15,7 +15,7 @@ process SILICOLIBRARYGENERATION { file(diann_config) output: - path "versions.yml", emit: version + path "versions.yml", emit: versions path "*.predicted.speclib", emit: predict_speclib path "silicolibrarygeneration.log", emit: log diff --git a/modules/local/spectrum2features/main.nf b/modules/local/spectrum2features/main.nf index 7e5a7cc0d..7deaf981e 100644 --- a/modules/local/spectrum2features/main.nf +++ b/modules/local/spectrum2features/main.nf @@ -12,7 +12,7 @@ process SPECTRUM2FEATURES { output: tuple val(meta), path("${id_file.baseName}_snr.idXML"), emit: id_files_snr - path "versions.yml", emit: version + path "versions.yml", emit: versions path "*.log", emit: log script: diff --git a/modules/local/tdf2mzml/main.nf b/modules/local/tdf2mzml/main.nf index a31332700..3b0521096 100644 --- a/modules/local/tdf2mzml/main.nf +++ b/modules/local/tdf2mzml/main.nf @@ -12,7 +12,7 @@ process TDF2MZML { output: tuple val(meta), path("*.mzML"), emit: mzmls_converted tuple val(meta), path("*.d"), emit: dotd_files - path "versions.yml", emit: version + path "versions.yml", emit: versions path "*.log", emit: log script: diff --git a/modules/local/thermorawfileparser/main.nf b/modules/local/thermorawfileparser/main.nf index dba9e3284..8c9e8ab39 100644 --- a/modules/local/thermorawfileparser/main.nf +++ b/modules/local/thermorawfileparser/main.nf @@ -4,10 +4,10 @@ process THERMORAWFILEPARSER { label 'process_single' label 'error_retry' - conda "conda-forge::mono bioconda::thermorawfileparser=1.4.3" + conda "conda-forge::mono bioconda::thermorawfileparser=1.4.5" container "${ workflow.containerEngine == 'singularity' && !task.ext.singularity_pull_docker_container ? 
- 'https://depot.galaxyproject.org/singularity/thermorawfileparser:1.4.3--ha8f3691_0' : - 'biocontainers/thermorawfileparser:1.4.3--ha8f3691_0' }" + 'https://depot.galaxyproject.org/singularity/thermorawfileparser:1.4.5--ha8f3691_0' : + 'biocontainers/thermorawfileparser:1.4.5--ha8f3691_0' }" stageInMode { if (task.attempt == 1) { @@ -32,7 +32,7 @@ process THERMORAWFILEPARSER { output: tuple val(meta), path("*.mzML"), emit: mzmls_converted - path "versions.yml", emit: version + path "versions.yml", emit: versions path "*.log", emit: log script: diff --git a/modules/nf-core/custom/dumpsoftwareversions/environment.yml b/modules/nf-core/custom/dumpsoftwareversions/environment.yml index 9b3272bc1..08ea06740 100644 --- a/modules/nf-core/custom/dumpsoftwareversions/environment.yml +++ b/modules/nf-core/custom/dumpsoftwareversions/environment.yml @@ -2,6 +2,5 @@ name: custom_dumpsoftwareversions channels: - conda-forge - bioconda - - defaults dependencies: - bioconda::multiqc=1.19 diff --git a/conf/test.config b/modules/nf-core/fastqc/main.nf similarity index 100% rename from conf/test.config rename to modules/nf-core/fastqc/main.nf diff --git a/modules/nf-core/fastqc/meta.yml b/modules/nf-core/fastqc/meta.yml new file mode 100644 index 000000000..e69de29bb diff --git a/modules/nf-core/fastqc/tests/main.nf.test b/modules/nf-core/fastqc/tests/main.nf.test new file mode 100644 index 000000000..e69de29bb diff --git a/modules/nf-core/fastqc/tests/main.nf.test.snap b/modules/nf-core/fastqc/tests/main.nf.test.snap new file mode 100644 index 000000000..d5db3092f --- /dev/null +++ b/modules/nf-core/fastqc/tests/main.nf.test.snap @@ -0,0 +1,392 @@ +{ + "sarscov2 custom_prefix": { + "content": [ + [ + "versions.yml:md5,e1cc25ca8af856014824abd842e93978" + ] + ], + "meta": { + "nf-test": "0.9.0", + "nextflow": "24.04.3" + }, + "timestamp": "2024-07-22T11:02:16.374038" + }, + "sarscov2 single-end [fastq] - stub": { + "content": [ + { + "0": [ + [ + { + "id": "test", + "single_end": true + }, + "test.html:md5,d41d8cd98f00b204e9800998ecf8427e" + ] + ], + "1": [ + [ + { + "id": "test", + "single_end": true + }, + "test.zip:md5,d41d8cd98f00b204e9800998ecf8427e" + ] + ], + "2": [ + "versions.yml:md5,e1cc25ca8af856014824abd842e93978" + ], + "html": [ + [ + { + "id": "test", + "single_end": true + }, + "test.html:md5,d41d8cd98f00b204e9800998ecf8427e" + ] + ], + "versions": [ + "versions.yml:md5,e1cc25ca8af856014824abd842e93978" + ], + "zip": [ + [ + { + "id": "test", + "single_end": true + }, + "test.zip:md5,d41d8cd98f00b204e9800998ecf8427e" + ] + ] + } + ], + "meta": { + "nf-test": "0.9.0", + "nextflow": "24.04.3" + }, + "timestamp": "2024-07-22T11:02:24.993809" + }, + "sarscov2 custom_prefix - stub": { + "content": [ + { + "0": [ + [ + { + "id": "mysample", + "single_end": true + }, + "mysample.html:md5,d41d8cd98f00b204e9800998ecf8427e" + ] + ], + "1": [ + [ + { + "id": "mysample", + "single_end": true + }, + "mysample.zip:md5,d41d8cd98f00b204e9800998ecf8427e" + ] + ], + "2": [ + "versions.yml:md5,e1cc25ca8af856014824abd842e93978" + ], + "html": [ + [ + { + "id": "mysample", + "single_end": true + }, + "mysample.html:md5,d41d8cd98f00b204e9800998ecf8427e" + ] + ], + "versions": [ + "versions.yml:md5,e1cc25ca8af856014824abd842e93978" + ], + "zip": [ + [ + { + "id": "mysample", + "single_end": true + }, + "mysample.zip:md5,d41d8cd98f00b204e9800998ecf8427e" + ] + ] + } + ], + "meta": { + "nf-test": "0.9.0", + "nextflow": "24.04.3" + }, + "timestamp": "2024-07-22T11:03:10.93942" + }, + "sarscov2 interleaved [fastq]": { 
+ "content": [ + [ + "versions.yml:md5,e1cc25ca8af856014824abd842e93978" + ] + ], + "meta": { + "nf-test": "0.9.0", + "nextflow": "24.04.3" + }, + "timestamp": "2024-07-22T11:01:42.355718" + }, + "sarscov2 paired-end [bam]": { + "content": [ + [ + "versions.yml:md5,e1cc25ca8af856014824abd842e93978" + ] + ], + "meta": { + "nf-test": "0.9.0", + "nextflow": "24.04.3" + }, + "timestamp": "2024-07-22T11:01:53.276274" + }, + "sarscov2 multiple [fastq]": { + "content": [ + [ + "versions.yml:md5,e1cc25ca8af856014824abd842e93978" + ] + ], + "meta": { + "nf-test": "0.9.0", + "nextflow": "24.04.3" + }, + "timestamp": "2024-07-22T11:02:05.527626" + }, + "sarscov2 paired-end [fastq]": { + "content": [ + [ + "versions.yml:md5,e1cc25ca8af856014824abd842e93978" + ] + ], + "meta": { + "nf-test": "0.9.0", + "nextflow": "24.04.3" + }, + "timestamp": "2024-07-22T11:01:31.188871" + }, + "sarscov2 paired-end [fastq] - stub": { + "content": [ + { + "0": [ + [ + { + "id": "test", + "single_end": false + }, + "test.html:md5,d41d8cd98f00b204e9800998ecf8427e" + ] + ], + "1": [ + [ + { + "id": "test", + "single_end": false + }, + "test.zip:md5,d41d8cd98f00b204e9800998ecf8427e" + ] + ], + "2": [ + "versions.yml:md5,e1cc25ca8af856014824abd842e93978" + ], + "html": [ + [ + { + "id": "test", + "single_end": false + }, + "test.html:md5,d41d8cd98f00b204e9800998ecf8427e" + ] + ], + "versions": [ + "versions.yml:md5,e1cc25ca8af856014824abd842e93978" + ], + "zip": [ + [ + { + "id": "test", + "single_end": false + }, + "test.zip:md5,d41d8cd98f00b204e9800998ecf8427e" + ] + ] + } + ], + "meta": { + "nf-test": "0.9.0", + "nextflow": "24.04.3" + }, + "timestamp": "2024-07-22T11:02:34.273566" + }, + "sarscov2 multiple [fastq] - stub": { + "content": [ + { + "0": [ + [ + { + "id": "test", + "single_end": false + }, + "test.html:md5,d41d8cd98f00b204e9800998ecf8427e" + ] + ], + "1": [ + [ + { + "id": "test", + "single_end": false + }, + "test.zip:md5,d41d8cd98f00b204e9800998ecf8427e" + ] + ], + "2": [ + "versions.yml:md5,e1cc25ca8af856014824abd842e93978" + ], + "html": [ + [ + { + "id": "test", + "single_end": false + }, + "test.html:md5,d41d8cd98f00b204e9800998ecf8427e" + ] + ], + "versions": [ + "versions.yml:md5,e1cc25ca8af856014824abd842e93978" + ], + "zip": [ + [ + { + "id": "test", + "single_end": false + }, + "test.zip:md5,d41d8cd98f00b204e9800998ecf8427e" + ] + ] + } + ], + "meta": { + "nf-test": "0.9.0", + "nextflow": "24.04.3" + }, + "timestamp": "2024-07-22T11:03:02.304411" + }, + "sarscov2 single-end [fastq]": { + "content": [ + [ + "versions.yml:md5,e1cc25ca8af856014824abd842e93978" + ] + ], + "meta": { + "nf-test": "0.9.0", + "nextflow": "24.04.3" + }, + "timestamp": "2024-07-22T11:01:19.095607" + }, + "sarscov2 interleaved [fastq] - stub": { + "content": [ + { + "0": [ + [ + { + "id": "test", + "single_end": false + }, + "test.html:md5,d41d8cd98f00b204e9800998ecf8427e" + ] + ], + "1": [ + [ + { + "id": "test", + "single_end": false + }, + "test.zip:md5,d41d8cd98f00b204e9800998ecf8427e" + ] + ], + "2": [ + "versions.yml:md5,e1cc25ca8af856014824abd842e93978" + ], + "html": [ + [ + { + "id": "test", + "single_end": false + }, + "test.html:md5,d41d8cd98f00b204e9800998ecf8427e" + ] + ], + "versions": [ + "versions.yml:md5,e1cc25ca8af856014824abd842e93978" + ], + "zip": [ + [ + { + "id": "test", + "single_end": false + }, + "test.zip:md5,d41d8cd98f00b204e9800998ecf8427e" + ] + ] + } + ], + "meta": { + "nf-test": "0.9.0", + "nextflow": "24.04.3" + }, + "timestamp": "2024-07-22T11:02:44.640184" + }, + "sarscov2 paired-end [bam] 
- stub": { + "content": [ + { + "0": [ + [ + { + "id": "test", + "single_end": false + }, + "test.html:md5,d41d8cd98f00b204e9800998ecf8427e" + ] + ], + "1": [ + [ + { + "id": "test", + "single_end": false + }, + "test.zip:md5,d41d8cd98f00b204e9800998ecf8427e" + ] + ], + "2": [ + "versions.yml:md5,e1cc25ca8af856014824abd842e93978" + ], + "html": [ + [ + { + "id": "test", + "single_end": false + }, + "test.html:md5,d41d8cd98f00b204e9800998ecf8427e" + ] + ], + "versions": [ + "versions.yml:md5,e1cc25ca8af856014824abd842e93978" + ], + "zip": [ + [ + { + "id": "test", + "single_end": false + }, + "test.zip:md5,d41d8cd98f00b204e9800998ecf8427e" + ] + ] + } + ], + "meta": { + "nf-test": "0.9.0", + "nextflow": "24.04.3" + }, + "timestamp": "2024-07-22T11:02:53.550742" + } +} \ No newline at end of file diff --git a/modules/nf-core/multiqc/environment.yml b/modules/nf-core/multiqc/environment.yml index ca39fb67e..6f5b867b7 100644 --- a/modules/nf-core/multiqc/environment.yml +++ b/modules/nf-core/multiqc/environment.yml @@ -1,7 +1,5 @@ -name: multiqc channels: - conda-forge - bioconda - - defaults dependencies: - - bioconda::multiqc=1.21 + - bioconda::multiqc=1.25.1 diff --git a/modules/nf-core/multiqc/main.nf b/modules/nf-core/multiqc/main.nf index 47ac352f9..cc0643e1d 100644 --- a/modules/nf-core/multiqc/main.nf +++ b/modules/nf-core/multiqc/main.nf @@ -3,14 +3,16 @@ process MULTIQC { conda "${moduleDir}/environment.yml" container "${ workflow.containerEngine == 'singularity' && !task.ext.singularity_pull_docker_container ? - 'https://depot.galaxyproject.org/singularity/multiqc:1.21--pyhdfd78af_0' : - 'biocontainers/multiqc:1.21--pyhdfd78af_0' }" + 'https://depot.galaxyproject.org/singularity/multiqc:1.25.1--pyhdfd78af_0' : + 'biocontainers/multiqc:1.25.1--pyhdfd78af_0' }" input: path multiqc_files, stageAs: "?/*" path(multiqc_config) path(extra_multiqc_config) path(multiqc_logo) + path(replace_names) + path(sample_names) output: path "*multiqc_report.html", emit: report @@ -23,16 +25,22 @@ process MULTIQC { script: def args = task.ext.args ?: '' + def prefix = task.ext.prefix ? "--filename ${task.ext.prefix}.html" : '' def config = multiqc_config ? "--config $multiqc_config" : '' def extra_config = extra_multiqc_config ? "--config $extra_multiqc_config" : '' - def logo = multiqc_logo ? /--cl-config 'custom_logo: "${multiqc_logo}"'/ : '' + def logo = multiqc_logo ? "--cl-config 'custom_logo: \"${multiqc_logo}\"'" : '' + def replace = replace_names ? "--replace-names ${replace_names}" : '' + def samples = sample_names ? "--sample-names ${sample_names}" : '' """ multiqc \\ --force \\ $args \\ $config \\ + $prefix \\ $extra_config \\ $logo \\ + $replace \\ + $samples \\ . 
cat <<-END_VERSIONS > versions.yml @@ -44,7 +52,7 @@ process MULTIQC { stub: """ mkdir multiqc_data - touch multiqc_plots + mkdir multiqc_plots touch multiqc_report.html cat <<-END_VERSIONS > versions.yml diff --git a/modules/nf-core/multiqc/meta.yml b/modules/nf-core/multiqc/meta.yml index 45a9bc35e..b16c18792 100644 --- a/modules/nf-core/multiqc/meta.yml +++ b/modules/nf-core/multiqc/meta.yml @@ -1,5 +1,6 @@ name: multiqc -description: Aggregate results from bioinformatics analyses across many samples into a single report +description: Aggregate results from bioinformatics analyses across many samples into + a single report keywords: - QC - bioinformatics tools @@ -12,40 +13,59 @@ tools: homepage: https://multiqc.info/ documentation: https://multiqc.info/docs/ licence: ["GPL-3.0-or-later"] + identifier: biotools:multiqc input: - - multiqc_files: - type: file - description: | - List of reports / files recognised by MultiQC, for example the html and zip output of FastQC - - multiqc_config: - type: file - description: Optional config yml for MultiQC - pattern: "*.{yml,yaml}" - - extra_multiqc_config: - type: file - description: Second optional config yml for MultiQC. Will override common sections in multiqc_config. - pattern: "*.{yml,yaml}" - - multiqc_logo: - type: file - description: Optional logo file for MultiQC - pattern: "*.{png}" + - - multiqc_files: + type: file + description: | + List of reports / files recognised by MultiQC, for example the html and zip output of FastQC + - - multiqc_config: + type: file + description: Optional config yml for MultiQC + pattern: "*.{yml,yaml}" + - - extra_multiqc_config: + type: file + description: Second optional config yml for MultiQC. Will override common sections + in multiqc_config. + pattern: "*.{yml,yaml}" + - - multiqc_logo: + type: file + description: Optional logo file for MultiQC + pattern: "*.{png}" + - - replace_names: + type: file + description: | + Optional two-column sample renaming file. First column a set of + patterns, second column a set of corresponding replacements. Passed via + MultiQC's `--replace-names` option. + pattern: "*.{tsv}" + - - sample_names: + type: file + description: | + Optional TSV file with headers, passed to the MultiQC --sample_names + argument. 
+ pattern: "*.{tsv}" output: - report: - type: file - description: MultiQC report file - pattern: "multiqc_report.html" + - "*multiqc_report.html": + type: file + description: MultiQC report file + pattern: "multiqc_report.html" - data: - type: directory - description: MultiQC data dir - pattern: "multiqc_data" + - "*_data": + type: directory + description: MultiQC data dir + pattern: "multiqc_data" - plots: - type: file - description: Plots created by MultiQC - pattern: "*_data" + - "*_plots": + type: file + description: Plots created by MultiQC + pattern: "*_data" - versions: - type: file - description: File containing software versions - pattern: "versions.yml" + - versions.yml: + type: file + description: File containing software versions + pattern: "versions.yml" authors: - "@abhi18av" - "@bunop" diff --git a/modules/nf-core/multiqc/tests/main.nf.test b/modules/nf-core/multiqc/tests/main.nf.test index f1c4242ef..33316a7dd 100644 --- a/modules/nf-core/multiqc/tests/main.nf.test +++ b/modules/nf-core/multiqc/tests/main.nf.test @@ -8,6 +8,8 @@ nextflow_process { tag "modules_nfcore" tag "multiqc" + config "./nextflow.config" + test("sarscov2 single-end [fastqc]") { when { @@ -17,6 +19,8 @@ nextflow_process { input[1] = [] input[2] = [] input[3] = [] + input[4] = [] + input[5] = [] """ } } @@ -41,6 +45,8 @@ nextflow_process { input[1] = Channel.of(file("https://github.com/nf-core/tools/raw/dev/nf_core/pipeline-template/assets/multiqc_config.yml", checkIfExists: true)) input[2] = [] input[3] = [] + input[4] = [] + input[5] = [] """ } } @@ -66,6 +72,8 @@ nextflow_process { input[1] = [] input[2] = [] input[3] = [] + input[4] = [] + input[5] = [] """ } } diff --git a/modules/nf-core/multiqc/tests/main.nf.test.snap b/modules/nf-core/multiqc/tests/main.nf.test.snap index bfebd8029..2fcbb5ff7 100644 --- a/modules/nf-core/multiqc/tests/main.nf.test.snap +++ b/modules/nf-core/multiqc/tests/main.nf.test.snap @@ -2,14 +2,14 @@ "multiqc_versions_single": { "content": [ [ - "versions.yml:md5,21f35ee29416b9b3073c28733efe4b7d" + "versions.yml:md5,41f391dcedce7f93ca188f3a3ffa0916" ] ], "meta": { - "nf-test": "0.8.4", - "nextflow": "23.10.1" + "nf-test": "0.9.0", + "nextflow": "24.04.4" }, - "timestamp": "2024-02-29T08:48:55.657331" + "timestamp": "2024-10-02T17:51:46.317523" }, "multiqc_stub": { "content": [ @@ -17,25 +17,25 @@ "multiqc_report.html", "multiqc_data", "multiqc_plots", - "versions.yml:md5,21f35ee29416b9b3073c28733efe4b7d" + "versions.yml:md5,41f391dcedce7f93ca188f3a3ffa0916" ] ], "meta": { - "nf-test": "0.8.4", - "nextflow": "23.10.1" + "nf-test": "0.9.0", + "nextflow": "24.04.4" }, - "timestamp": "2024-02-29T08:49:49.071937" + "timestamp": "2024-10-02T17:52:20.680978" }, "multiqc_versions_config": { "content": [ [ - "versions.yml:md5,21f35ee29416b9b3073c28733efe4b7d" + "versions.yml:md5,41f391dcedce7f93ca188f3a3ffa0916" ] ], "meta": { - "nf-test": "0.8.4", - "nextflow": "23.10.1" + "nf-test": "0.9.0", + "nextflow": "24.04.4" }, - "timestamp": "2024-02-29T08:49:25.457567" + "timestamp": "2024-10-02T17:52:09.185842" } } \ No newline at end of file diff --git a/modules/nf-core/multiqc/tests/nextflow.config b/modules/nf-core/multiqc/tests/nextflow.config new file mode 100644 index 000000000..c537a6a3e --- /dev/null +++ b/modules/nf-core/multiqc/tests/nextflow.config @@ -0,0 +1,5 @@ +process { + withName: 'MULTIQC' { + ext.prefix = null + } +} diff --git a/nextflow.config b/nextflow.config index b95895132..ec761e09d 100644 --- a/nextflow.config +++ b/nextflow.config @@ -259,51 +259,20 @@ 
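// Context sketch for the nextflow.config hunks below (assumption: these changes track
// the nf-core 3.x template): the max_cpus / max_memory / max_time params and the
// check_max() helper are dropped because Nextflow >= 24.04 ships a built-in
// resourceLimits directive, so equivalent per-run caps would now be set in a user or
// institutional config rather than inside the pipeline, e.g.:
//
//     process {
//         resourceLimits = [ cpus: 16, memory: 128.GB, time: 240.h ]
//     }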
params { // Config options config_profile_name = null config_profile_description = null + custom_config_version = 'master' custom_config_base = "https://raw.githubusercontent.com/nf-core/configs/${params.custom_config_version}" config_profile_contact = null config_profile_url = null - - // Max resource options - // Defaults only, expecting to be overwritten - max_memory = '128.GB' - max_cpus = 16 - max_time = '240.h' - // Schema validation default options - validationFailUnrecognisedParams = false - validationLenientMode = false - validationSchemaIgnoreParams = 'genomes,igenomes_base' - validationShowHiddenParams = false - validate_params = true - + validate_params = true } // Load base.config by default for all pipelines includeConfig 'conf/base.config' -// Load the dev.config to work with openms containers in dev, comment during release to use the latest stable version -// includeConfig 'conf/dev.config' - - -// Load nf-core custom profiles from different Institutions -try { - includeConfig "${params.custom_config_base}/nfcore_custom.config" -} catch (Exception e) { - System.err.println("WARNING: Could not load nf-core/config profiles: ${params.custom_config_base}/nfcore_custom.config") -} - -// Load nf-core/quantms custom profiles from different institutions. -// Warning: Uncomment only if a pipeline-specific institutional config already exists on nf-core/configs! -// try { -// includeConfig "${params.custom_config_base}/pipeline/quantms.config" -// } catch (Exception e) { -// System.err.println("WARNING: Could not load nf-core/config/quantms profiles: ${params.custom_config_base}/pipeline/quantms.config") -// } - - profiles { debug { dumpHashes = true @@ -318,7 +287,7 @@ profiles { podman.enabled = false shifter.enabled = false charliecloud.enabled = false - conda.channels = ['conda-forge', 'bioconda', 'defaults'] + conda.channels = ['conda-forge', 'bioconda'] apptainer.enabled = false } mamba { @@ -331,7 +300,6 @@ profiles { shifter.enabled = false charliecloud.enabled = false apptainer.enabled = false - conda.createTimeout = '1 h' } docker { docker.enabled = true @@ -438,21 +406,23 @@ profiles { test_full { includeConfig 'conf/test_full_lfq.config' } test_dda_id { includeConfig 'conf/test_dda_id.config' } mambaci { includeConfig 'conf/mambaci.config' } - } -// Set default registry for Apptainer, Docker, Podman and Singularity independent of -profile -// Will not be used unless Apptainer / Docker / Podman / Singularity are enabled -// Set to your registry if you have a mirror of containers -apptainer.registry = 'quay.io' -docker.registry = 'quay.io' -podman.registry = 'quay.io' -singularity.registry = 'quay.io' +// Load nf-core custom profiles from different Institutions +includeConfig !System.getenv('NXF_OFFLINE') && params.custom_config_base ? "${params.custom_config_base}/nfcore_custom.config" : "/dev/null" -// Nextflow plugins -plugins { - id 'nf-validation@1.1.3' // Validation of pipeline parameters and creation of an input channel from a sample sheet -} +// Load nf-core/quantms custom profiles from different institutions. +// TODO nf-core: Optionally, you can add a pipeline-specific nf-core config at https://github.com/nf-core/configs +// includeConfig !System.getenv('NXF_OFFLINE') && params.custom_config_base ?
"${params.custom_config_base}/pipeline/quantms.config" : "/dev/null" + +// Set default registry for Apptainer, Docker, Podman, Charliecloud and Singularity independent of -profile +// Will not be used unless Apptainer / Docker / Podman / Charliecloud / Singularity are enabled +// Set to your registry if you have a mirror of containers +apptainer.registry = 'quay.io' +docker.registry = 'quay.io' +podman.registry = 'quay.io' +singularity.registry = 'quay.io' +charliecloud.registry = 'quay.io' // Export these variables to prevent local Python/R libraries from conflicting with those in the container // The JULIA depot path has been adjusted to a fixed path `/usr/local/share/julia` that needs to be used for packages in the container. @@ -465,8 +435,15 @@ env { JULIA_DEPOT_PATH = "/usr/local/share/julia" } -// Capture exit codes from upstream processes when piping -process.shell = ['/bin/bash', '-euo', 'pipefail'] +// Set bash options +process.shell = """\ +bash + +set -e # Exit if a tool returns a non-zero status/exit code +set -u # Treat unset variables and parameters as an error +set -o pipefail # Returns the status of the last command to exit with a non-zero status or zero if all successfully execute +set -C # No clobber - prevent output redirection from overwriting files. +""" // Disable process selector warnings by default. Use debug profile to enable warnings. nextflow.enable.configProcessNamesValidation = false @@ -492,46 +469,49 @@ dag { manifest { name = 'nf-core/quantms' author = """Yasset Perez-Riverol""" - homePage = 'https://github.com/nf-core/quantms' + homePage = 'https://github.com/bigbio/quantms' description = """Quantitative Mass Spectrometry nf-core workflow""" mainScript = 'main.nf' - nextflowVersion = '!>=23.04.0' + nextflowVersion = '!>=24.04.2' version = '1.3.1dev' doi = '10.5281/zenodo.7754148' } -// Load modules.config for DSL2 module specific options -includeConfig 'conf/modules.config' +// Nextflow plugins +plugins { + id 'nf-schema@2.1.1' // Validation of pipeline parameters and creation of an input channel from a sample sheet +} -// Function to ensure that resource requirements don't go beyond -// a maximum limit -def check_max(obj, type) { - if (type == 'memory') { - try { - if (obj.compareTo(params.max_memory as nextflow.util.MemoryUnit) == 1) - return params.max_memory as nextflow.util.MemoryUnit - else - return obj - } catch (all) { - println " ### ERROR ### Max memory '${params.max_memory}' is not valid! Using default value: $obj" - return obj - } - } else if (type == 'time') { - try { - if (obj.compareTo(params.max_time as nextflow.util.Duration) == 1) - return params.max_time as nextflow.util.Duration - else - return obj - } catch (all) { - println " ### ERROR ### Max time '${params.max_time}' is not valid! Using default value: $obj" - return obj - } - } else if (type == 'cpus') { - try { - return Math.min( obj, params.max_cpus as int ) - } catch (all) { - println " ### ERROR ### Max cpus '${params.max_cpus}' is not valid! 
Using default value: $obj" - return obj - } +validation { + defaultIgnoreParams = ["genomes"] + help { + enabled = true + command = "nextflow run $manifest.name -profile --input samplesheet.csv --outdir " + fullParameter = "help_full" + showHiddenParameter = "show_hidden" + beforeText = """ +-\033[2m----------------------------------------------------\033[0m- + \033[0;32m,--.\033[0;30m/\033[0;32m,-.\033[0m +\033[0;34m ___ __ __ __ ___ \033[0;32m/,-._.--~\'\033[0m +\033[0;34m |\\ | |__ __ / ` / \\ |__) |__ \033[0;33m} {\033[0m +\033[0;34m | \\| | \\__, \\__/ | \\ |___ \033[0;32m\\`-._,-`-,\033[0m + \033[0;32m`._,._,\'\033[0m +\033[0;35m ${manifest.name} ${manifest.version}\033[0m +-\033[2m----------------------------------------------------\033[0m- +""" + afterText = """${manifest.doi ? "* The pipeline\n" : ""}${manifest.doi.tokenize(",").collect { " https://doi.org/${it.trim().replace('https://doi.org/','')}"}.join("\n")}${manifest.doi ? "\n" : ""} +* The nf-core framework + https://doi.org/10.1038/s41587-020-0439-x + +* Software dependencies + https://github.com/${manifest.name}/blob/master/CITATIONS.md +""" + } + summary { + beforeText = validation.help.beforeText + afterText = validation.help.afterText } } + +// Load modules.config for DSL2 module specific options +includeConfig 'conf/modules.config' diff --git a/nextflow_schema.json b/nextflow_schema.json index 70933632e..59722a71b 100644 --- a/nextflow_schema.json +++ b/nextflow_schema.json @@ -1,10 +1,10 @@ { - "$schema": "http://json-schema.org/draft-07/schema", + "$schema": "https://json-schema.org/draft/2020-12/schema", "$id": "https://raw.githubusercontent.com/nf-core/quantms/master/nextflow_schema.json", "title": "nf-core/quantms pipeline parameters", "description": "Quantitative Mass Spectrometry nf-core workflow", "type": "object", - "definitions": { + "$defs": { "input_output_options": { "title": "Input/output options", "type": "object", @@ -1297,41 +1297,6 @@ } } }, - "max_job_request_options": { - "title": "Max job request options", - "type": "object", - "fa_icon": "fab fa-acquisitions-incorporated", - "description": "Set the top limit for requested resources for any single job.", - "help_text": "If you are running on a smaller system, a pipeline step requesting more resources than are available may cause the Nextflow to stop the run with an error. These options allow you to cap the maximum resources requested by any single job so that the pipeline will run on your system.\n\nNote that you can not _increase_ the resources requested by any job using these options. For that you will need your own configuration file. See [the nf-core website](https://nf-co.re/usage/configuration) for details.", - "properties": { - "max_cpus": { - "type": "integer", - "description": "Maximum number of CPUs that can be requested for any single job.", - "default": 16, - "fa_icon": "fas fa-microchip", - "hidden": true, - "help_text": "Use to set an upper-limit for the CPU requirement for each process. Should be an integer e.g. `--max_cpus 1`" - }, - "max_memory": { - "type": "string", - "description": "Maximum amount of memory that can be requested for any single job.", - "default": "128 GB", - "fa_icon": "fas fa-memory", - "pattern": "^\\d+(\\.\\d+)?\\.?\\s*(K|M|G|T)?B$", - "hidden": true, - "help_text": "Use to set an upper-limit for the memory requirement for each process. Should be a string in the format integer-unit e.g. 
`--max_memory '8.GB'`" - }, - "max_time": { - "type": "string", - "description": "Maximum amount of time that can be requested for any single job.", - "default": "10d", - "fa_icon": "far fa-clock", - "pattern": "^(\\d+\\.?\\s*(s|m|h|d|day)\\s*)+$", - "hidden": true, - "help_text": "Use to set an upper-limit for the time requirement for each process. Should be a string in the format integer-unit e.g. `--max_time '2.h'`" - } - } - }, "generic_options": { "title": "Generic options", "type": "object", @@ -1339,12 +1304,6 @@ "description": "Less common options for the pipeline, typically set in a config file.", "help_text": "These options are common to all nf-core pipelines and allow you to customise some of the core preferences for how the pipeline runs.\n\nTypically these options would be set in a Nextflow config file loaded for all pipeline runs, such as `~/.nextflow/config`.", "properties": { - "help": { - "type": "boolean", - "description": "Display help text.", - "fa_icon": "fas fa-question-circle", - "hidden": true - }, "version": { "type": "boolean", "description": "Display version and exit.", @@ -1426,27 +1385,6 @@ "fa_icon": "fas fa-check-square", "hidden": true }, - "validationShowHiddenParams": { - "type": "boolean", - "fa_icon": "far fa-eye-slash", - "description": "Show all params when using `--help`", - "hidden": true, - "help_text": "By default, parameters set as _hidden_ in the schema are not shown on the command line when a user runs with `--help`. Specifying this option will tell the pipeline to show all parameters." - }, - "validationFailUnrecognisedParams": { - "type": "boolean", - "fa_icon": "far fa-check-circle", - "description": "Validation of parameters fails when an unrecognised parameter is found.", - "hidden": true, - "help_text": "By default, when an unrecognised parameter is found, it returns a warinig." - }, - "validationLenientMode": { - "type": "boolean", - "fa_icon": "far fa-check-circle", - "description": "Validation of parameters in lenient more.", - "hidden": true, - "help_text": "Allows string values that are parseable as numbers or booleans. For further information see [JSONSchema docs](https://github.com/everit-org/json-schema#lenient-mode)." 
- }, "pipelines_testdata_base_path": { "type": "string", "fa_icon": "far fa-check-circle", @@ -1459,70 +1397,67 @@ }, "allOf": [ { - "$ref": "#/definitions/input_output_options" - }, - { - "$ref": "#/definitions/protein_database" + "$ref": "#/$defs/input_output_options" }, { - "$ref": "#/definitions/sdrf_validation" + "$ref": "#/$defs/protein_database" }, { - "$ref": "#/definitions/spectrum_preprocessing" + "$ref": "#/$defs/sdrf_validation" }, { - "$ref": "#/definitions/database_search" + "$ref": "#/$defs/spectrum_preprocessing" }, { - "$ref": "#/definitions/modification_localization" + "$ref": "#/$defs/database_search" }, { - "$ref": "#/definitions/peptide_re_indexing" + "$ref": "#/$defs/modification_localization" }, { - "$ref": "#/definitions/psm_re_scoring_general" + "$ref": "#/$defs/peptide_re_indexing" }, { - "$ref": "#/definitions/psm_re_scoring_percolator" + "$ref": "#/$defs/psm_re_scoring_general" }, { - "$ref": "#/definitions/psm_re_scoring_distribution_fitting" + "$ref": "#/$defs/psm_re_scoring_percolator" }, { - "$ref": "#/definitions/consensus_id" + "$ref": "#/$defs/psm_re_scoring_distribution_fitting" }, { - "$ref": "#/definitions/feature_mapper" + "$ref": "#/$defs/consensus_id" }, { - "$ref": "#/definitions/isobaric_analyzer" + "$ref": "#/$defs/feature_mapper" }, { - "$ref": "#/definitions/protein_inference" + "$ref": "#/$defs/isobaric_analyzer" }, { - "$ref": "#/definitions/protein_quantification_dda" + "$ref": "#/$defs/protein_inference" }, { - "$ref": "#/definitions/protein_quantification_lfq" + "$ref": "#/$defs/protein_quantification_dda" }, { - "$ref": "#/definitions/DIA-NN" + "$ref": "#/$defs/protein_quantification_lfq" }, { - "$ref": "#/definitions/statistical_post_processing" + "$ref": "#/$defs/DIA-NN" }, { - "$ref": "#/definitions/quality_control" + "$ref": "#/$defs/statistical_post_processing" }, { - "$ref": "#/definitions/institutional_config_options" + "$ref": "#/$defs/quality_control" }, { - "$ref": "#/definitions/max_job_request_options" + "$ref": "#/$defs/institutional_config_options" }, { - "$ref": "#/definitions/generic_options" + "$ref": "#/$defs/generic_options" } ] } diff --git a/subworkflows/local/create_input_channel.nf b/subworkflows/local/create_input_channel.nf index 32f58c1d4..0c25ecfb1 100644 --- a/subworkflows/local/create_input_channel.nf +++ b/subworkflows/local/create_input_channel.nf @@ -20,14 +20,15 @@ workflow CREATE_INPUT_CHANNEL { if (is_sdrf.toString().toLowerCase().contains("true")) { SDRFPARSING ( ch_sdrf_or_design ) - ch_versions = ch_versions.mix(SDRFPARSING.out.version) + ch_versions = ch_versions.mix(SDRFPARSING.out.versions) ch_config = SDRFPARSING.out.ch_sdrf_config_file ch_expdesign = SDRFPARSING.out.ch_expdesign } else { PREPROCESS_EXPDESIGN( ch_sdrf_or_design ) - ch_config = PREPROCESS_EXPDESIGN.out.ch_config + ch_versions = ch_versions.mix(PREPROCESS_EXPDESIGN.out.versions) + ch_config = PREPROCESS_EXPDESIGN.out.ch_config ch_expdesign = PREPROCESS_EXPDESIGN.out.ch_expdesign } @@ -61,8 +62,7 @@ workflow CREATE_INPUT_CHANNEL { ch_meta_config_lfq // [meta, [spectra_files ]] ch_meta_config_dia // [meta, [spectra files ]] ch_expdesign - - version = ch_versions + versions = ch_versions } // Function to get list of [meta, [ spectra_files ]] diff --git a/subworkflows/local/databasesearchengines.nf b/subworkflows/local/databasesearchengines.nf index 0c1209c61..6154b3fa4 100644 --- a/subworkflows/local/databasesearchengines.nf +++ b/subworkflows/local/databasesearchengines.nf @@ -17,14 +17,16 @@ workflow DATABASESEARCHENGINES { if 
(params.search_engines.contains("msgf")) { MSGFDBINDEXING(ch_searchengine_in_db) + ch_versions = ch_versions.mix(MSGFDBINDEXING.out.versions) + SEARCHENGINEMSGF(ch_mzmls_search.combine(ch_searchengine_in_db).combine(MSGFDBINDEXING.out.msgfdb_idx)) - ch_versions = ch_versions.mix(SEARCHENGINEMSGF.out.version) + ch_versions = ch_versions.mix(SEARCHENGINEMSGF.out.versions) ch_id_msgf = ch_id_msgf.mix(SEARCHENGINEMSGF.out.id_files_msgf) } if (params.search_engines.contains("comet")) { SEARCHENGINECOMET(ch_mzmls_search.combine(ch_searchengine_in_db)) - ch_versions = ch_versions.mix(SEARCHENGINECOMET.out.version) + ch_versions = ch_versions.mix(SEARCHENGINECOMET.out.versions) ch_id_comet = ch_id_comet.mix(SEARCHENGINECOMET.out.id_files_comet) } @@ -61,13 +63,12 @@ workflow DATABASESEARCHENGINES { ch_meta_mzml_db_chunked = ch_meta_mzml_db.groupTuple(by: [0,1]) SEARCHENGINESAGE(ch_meta_mzml_db_chunked.combine(ch_searchengine_in_db)) - ch_versions = ch_versions.mix(SEARCHENGINESAGE.out.version) + ch_versions = ch_versions.mix(SEARCHENGINESAGE.out.versions) // we can safely use merge here since it is the same process ch_id_sage = ch_id_sage.mix(SEARCHENGINESAGE.out.id_files_sage.transpose()) } emit: ch_id_files_idx = ch_id_msgf.mix(ch_id_comet).mix(ch_id_sage) - versions = ch_versions } diff --git a/subworkflows/local/dda_id.nf b/subworkflows/local/dda_id.nf index 397a56a6e..16e8a3abc 100644 --- a/subworkflows/local/dda_id.nf +++ b/subworkflows/local/dda_id.nf @@ -1,7 +1,6 @@ // // MODULE: Local to the pipeline // -include { DECOYDATABASE } from '../../modules/local/openms/decoydatabase/main' include { CONSENSUSID } from '../../modules/local/openms/consensusid/main' include { EXTRACTPSMFEATURES } from '../../modules/local/openms/extractpsmfeatures/main' include { PERCOLATOR } from '../../modules/local/openms/thirdparty/percolator/main' @@ -72,29 +71,31 @@ workflow DDA_ID { EXTRACTPSMFEATURES(ch_ms2rescore_branched.nosage) SAGEFEATURE(ch_ms2rescore_branched.sage) ch_id_files_feats = EXTRACTPSMFEATURES.out.id_files_feat.mix(SAGEFEATURE.out.id_files_feat) - ch_software_versions = ch_software_versions.mix(EXTRACTPSMFEATURES.out.version) + ch_software_versions = ch_software_versions.mix(EXTRACTPSMFEATURES.out.versions, SAGEFEATURE.out.versions) } else { EXTRACTPSMFEATURES(ch_id_files_branched.nosage) ch_id_files_feats = ch_id_files_branched.sage.mix(EXTRACTPSMFEATURES.out.id_files_feat) - ch_software_versions = ch_software_versions.mix(EXTRACTPSMFEATURES.out.version) + ch_software_versions = ch_software_versions.mix(EXTRACTPSMFEATURES.out.versions) } // Add SNR features to percolator if (params.add_snr_feature_percolator) { SPECTRUM2FEATURES(ch_id_files_feats.combine(ch_file_preparation_results, by: 0)) ch_id_files_feats = SPECTRUM2FEATURES.out.id_files_snr - ch_software_versions = ch_software_versions.mix(SPECTRUM2FEATURES.out.version) + ch_software_versions = ch_software_versions.mix(SPECTRUM2FEATURES.out.versions) } // Rescoring for independent run, Sample or whole experiments if (params.rescore_range == "independent_run") { PERCOLATOR(ch_id_files_feats) - ch_software_versions = ch_software_versions.mix(PERCOLATOR.out.version) + ch_software_versions = ch_software_versions.mix(PERCOLATOR.out.versions) ch_consensus_input = PERCOLATOR.out.id_files_perc } else if (params.rescore_range == "by_sample") { // Sample map GETSAMPLE(ch_expdesign) + ch_software_versions = ch_software_versions.mix(GETSAMPLE.out.versions) + ch_expdesign_sample = GETSAMPLE.out.ch_expdesign_sample ch_expdesign_sample.splitCsv(header: 
true, sep: '\t') .map { get_sample_map(it) }.set{ sample_map_idv } @@ -118,10 +119,10 @@ workflow DDA_ID { IDMERGER(ch_id_files_feat_branched.comet.groupTuple(by: 2) .mix(ch_id_files_feat_branched.msgf.groupTuple(by: 2)) .mix(ch_id_files_feat_branched.sage.groupTuple(by: 2))) - ch_software_versions = ch_software_versions.mix(IDMERGER.out.version) + ch_software_versions = ch_software_versions.mix(IDMERGER.out.versions) PERCOLATOR(IDMERGER.out.id_merged) - ch_software_versions = ch_software_versions.mix(PERCOLATOR.out.version) + ch_software_versions = ch_software_versions.mix(PERCOLATOR.out.versions) // Currently only ID runs on exactly one mzML file are supported in CONSENSUSID. Split idXML by runs IDRIPPER(PERCOLATOR.out.id_files_perc) @@ -130,7 +131,7 @@ workflow DDA_ID { meta.combine(id_rippers, by: 0) .map{ [it[1], it[2], "MS:1001491"]} .set{ ch_consensus_input } - ch_software_versions = ch_software_versions.mix(IDRIPPER.out.version) + ch_software_versions = ch_software_versions.mix(IDRIPPER.out.versions) } else if (params.rescore_range == "by_project"){ ch_id_files_feats.map {[it[0].experiment_id, it[0], it[1]]}.set { ch_id_files_feats} @@ -149,10 +150,10 @@ workflow DDA_ID { IDMERGER(ch_id_files_feat_branched.comet.groupTuple(by: 2) .mix(ch_id_files_feat_branched.msgf.groupTuple(by: 2)) .mix(ch_id_files_feat_branched.sage.groupTuple(by: 2))) - ch_software_versions = ch_software_versions.mix(IDMERGER.out.version) + ch_software_versions = ch_software_versions.mix(IDMERGER.out.versions) PERCOLATOR(IDMERGER.out.id_merged) - ch_software_versions = ch_software_versions.mix(PERCOLATOR.out.version) + ch_software_versions = ch_software_versions.mix(PERCOLATOR.out.versions) // Currently only ID runs on exactly one mzML file are supported in CONSENSUSID. 
Split idXML by runs IDRIPPER(PERCOLATOR.out.id_files_perc) @@ -161,7 +162,7 @@ workflow DDA_ID { meta.combine(id_rippers, by: 0) .map{ [it[1], it[2], "MS:1001491"]} .set{ ch_consensus_input } - ch_software_versions = ch_software_versions.mix(IDRIPPER.out.version) + ch_software_versions = ch_software_versions.mix(IDRIPPER.out.versions) } @@ -171,19 +172,19 @@ MS2RESCORE(ch_id_files.combine(ch_file_preparation_results, by: 0)) ch_software_versions = ch_software_versions.mix(MS2RESCORE.out.versions) IDSCORESWITCHER(MS2RESCORE.out.idxml.combine(Channel.value("PEP"))) - ch_software_versions = ch_software_versions.mix(IDSCORESWITCHER.out.version) + ch_software_versions = ch_software_versions.mix(IDSCORESWITCHER.out.versions) ch_consensus_input = IDSCORESWITCHER.out.id_score_switcher.combine(Channel.value("MS:1001491")) ch_rescoring_results = IDSCORESWITCHER.out.id_score_switcher } else { ch_fdridpep = Channel.empty() if (params.search_engines.split(",").size() == 1) { FDRIDPEP(ch_id_files) - ch_software_versions = ch_software_versions.mix(FDRIDPEP.out.version) + ch_software_versions = ch_software_versions.mix(FDRIDPEP.out.versions) ch_id_files = Channel.empty() ch_fdridpep = FDRIDPEP.out.id_files_idx_ForIDPEP_FDR } IDPEP(ch_fdridpep.mix(ch_id_files)) - ch_software_versions = ch_software_versions.mix(IDPEP.out.version) + ch_software_versions = ch_software_versions.mix(IDPEP.out.versions) ch_consensus_input = IDPEP.out.id_files_ForIDPEP ch_rescoring_results = ch_consensus_input } @@ -195,7 +196,7 @@ workflow DDA_ID { ch_consensus_results = Channel.empty() if (params.search_engines.split(",").size() > 1) { CONSENSUSID(ch_consensus_input.groupTuple(size: params.search_engines.split(",").size())) - ch_software_versions = ch_software_versions.mix(CONSENSUSID.out.version.ifEmpty(null)) + ch_software_versions = ch_software_versions.mix(CONSENSUSID.out.versions.ifEmpty(null)) ch_psmfdrcontrol = CONSENSUSID.out.consensusids ch_psmfdrcontrol .map { it -> it[1] } @@ -205,10 +206,11 @@ } PSMFDRCONTROL(ch_psmfdrcontrol) - ch_software_versions = ch_software_versions.mix(PSMFDRCONTROL.out.version.ifEmpty(null)) + ch_software_versions = ch_software_versions.mix(PSMFDRCONTROL.out.versions.ifEmpty(null)) // Extract PSMs and export parquet format PSMCONVERSION(PSMFDRCONTROL.out.id_filtered.combine(ch_spectrum_data, by: 0)) + ch_software_versions = ch_software_versions.mix(PSMCONVERSION.out.versions) ch_rescoring_results .map { it -> it[1] } @@ -221,7 +223,7 @@ workflow DDA_ID { emit: ch_pmultiqc_ids = ch_pmultiqc_ids ch_pmultiqc_consensus = ch_pmultiqc_consensus - version = ch_software_versions + versions = ch_software_versions } // Function to add file prefix diff --git a/subworkflows/local/featuremapper.nf b/subworkflows/local/featuremapper.nf index 43f27aa90..e924f0338 100644 --- a/subworkflows/local/featuremapper.nf +++ b/subworkflows/local/featuremapper.nf @@ -14,13 +14,13 @@ workflow FEATUREMAPPER { ch_version = Channel.empty() ISOBARICANALYZER(ch_mzml_files) - ch_version = ch_version.mix(ISOBARICANALYZER.out.version) + ch_version = ch_version.mix(ISOBARICANALYZER.out.versions) IDMAPPER(ch_id_files.combine(ISOBARICANALYZER.out.id_files_consensusXML, by: 0)) - ch_version = ch_version.mix(IDMAPPER.out.version) + ch_version = ch_version.mix(IDMAPPER.out.versions) emit: id_map = IDMAPPER.out.id_map - version = ch_version + versions = ch_version } diff --git a/subworkflows/local/file_preparation.nf b/subworkflows/local/file_preparation.nf index 83b15c14c..5217b9779 100644 ---
a/subworkflows/local/file_preparation.nf +++ b/subworkflows/local/file_preparation.nf @@ -31,7 +31,7 @@ workflow FILE_PREPARATION { compressed_files = ch_branched_input.dottar.mix(ch_branched_input.dotzip, ch_branched_input.gz) DECOMPRESS(compressed_files) - ch_versions = ch_versions.mix(DECOMPRESS.out.version) + ch_versions = ch_versions.mix(DECOMPRESS.out.versions) ch_rawfiles = ch_branched_input.uncompressed.mix(DECOMPRESS.out.decompressed_files) // @@ -53,7 +53,7 @@ workflow FILE_PREPARATION { if (params.reindex_mzml) { MZMLINDEXING( ch_branched_input.mzML ) - ch_versions = ch_versions.mix(MZMLINDEXING.out.version) + ch_versions = ch_versions.mix(MZMLINDEXING.out.versions) ch_results = ch_results.mix(MZMLINDEXING.out.mzmls_indexed) } else { ch_results = ch_results.mix(ch_branched_input.mzML) @@ -66,7 +66,7 @@ workflow FILE_PREPARATION { // 'log': Path(*.txt)} // Where meta is the same as the input meta - ch_versions = ch_versions.mix(THERMORAWFILEPARSER.out.version) + ch_versions = ch_versions.mix(THERMORAWFILEPARSER.out.versions) ch_results = ch_results.mix(THERMORAWFILEPARSER.out.mzmls_converted) ch_results.map{ it -> [it[0], it[1]] }.set{ indexed_mzml_bundle } @@ -74,7 +74,7 @@ workflow FILE_PREPARATION { // Convert .d files to mzML if (params.convert_dotd) { TDF2MZML( ch_branched_input.dotd ) - ch_versions = ch_versions.mix(TDF2MZML.out.version) + ch_versions = ch_versions.mix(TDF2MZML.out.versions) ch_results = indexed_mzml_bundle.mix(TDF2MZML.out.mzmls_converted) // indexed_mzml_bundle = indexed_mzml_bundle.mix(TDF2MZML.out.mzmls_converted) } else { @@ -86,7 +86,7 @@ workflow FILE_PREPARATION { ch_statistics = ch_statistics.mix(MZMLSTATISTICS.out.ms_statistics.collect()) ch_spectrum_df = ch_spectrum_df.mix(MZMLSTATISTICS.out.spectrum_df) - ch_versions = ch_versions.mix(MZMLSTATISTICS.out.version) + ch_versions = ch_versions.mix(MZMLSTATISTICS.out.versions) if (params.openms_peakpicking) { // If the peak picker is enabled, it will over-write not bypass the .d files @@ -94,7 +94,7 @@ workflow FILE_PREPARATION { indexed_mzml_bundle ) - ch_versions = ch_versions.mix(OPENMSPEAKPICKER.out.version) + ch_versions = ch_versions.mix(OPENMSPEAKPICKER.out.versions) ch_results = OPENMSPEAKPICKER.out.mzmls_picked } @@ -102,7 +102,7 @@ workflow FILE_PREPARATION { results = ch_results // channel: [val(mzml_id), indexedmzml|.d.tar] statistics = ch_statistics // channel: [ *_ms_info.parquet ] spectrum_data = ch_spectrum_df // channel: [val(mzml_id), *_spectrum_df.parquet] - version = ch_versions // channel: [ *.version.txt ] + versions = ch_versions // channel: [ *.versions.yml ] } // diff --git a/subworkflows/local/id.nf b/subworkflows/local/id.nf index b851ab818..38f6b9c99 100644 --- a/subworkflows/local/id.nf +++ b/subworkflows/local/id.nf @@ -1,7 +1,6 @@ // // MODULE: Local to the pipeline // -include { DECOYDATABASE } from '../../modules/local/openms/decoydatabase/main' include { CONSENSUSID } from '../../modules/local/openms/consensusid/main' // @@ -44,7 +43,7 @@ workflow ID { ch_consensus_results = Channel.empty() if (params.search_engines.split(",").size() > 1) { CONSENSUSID(PSMRESCORING.out.results.groupTuple(size: params.search_engines.split(",").size())) - ch_software_versions = ch_software_versions.mix(CONSENSUSID.out.version.ifEmpty(null)) + ch_software_versions = ch_software_versions.mix(CONSENSUSID.out.versions.ifEmpty(null)) ch_psmfdrcontrol = CONSENSUSID.out.consensusids ch_consensus_results = CONSENSUSID.out.consensusids } else { @@ -52,14 +51,14 @@ workflow ID { } 
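// Context sketch (illustration, not one of the diff hunks): the recurring
// `version` -> `versions` emit rename in these subworkflow hunks follows the nf-core
// module convention, where every process declares
//
//     output:
//     path "versions.yml", emit: versions
//
// and each calling (sub)workflow folds those files into one channel for the final
// software-versions report, e.g.:
//
//     ch_versions = Channel.empty()
//     ch_versions = ch_versions.mix(SOME_MODULE.out.versions)   // SOME_MODULE is a placeholder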
PSMFDRCONTROL(ch_psmfdrcontrol) - ch_software_versions = ch_software_versions.mix(PSMFDRCONTROL.out.version.ifEmpty(null)) + ch_software_versions = ch_software_versions.mix(PSMFDRCONTROL.out.versions.ifEmpty(null)) // // SUBWORKFLOW:PHOSPHOSCORING // if (params.enable_mod_localization) { PHOSPHOSCORING(ch_file_preparation_results, PSMFDRCONTROL.out.id_filtered) - ch_software_versions = ch_software_versions.mix(PHOSPHOSCORING.out.version.ifEmpty(null)) + ch_software_versions = ch_software_versions.mix(PHOSPHOSCORING.out.versions.ifEmpty(null)) ch_id_results = PHOSPHOSCORING.out.id_luciphor } else { ch_id_results = PSMFDRCONTROL.out.id_filtered @@ -69,5 +68,5 @@ workflow ID { id_results = ch_id_results psmrescoring_results = PSMRESCORING.out.results ch_consensus_results = ch_consensus_results - version = ch_software_versions + versions = ch_software_versions } diff --git a/subworkflows/local/input_check.nf b/subworkflows/local/input_check.nf index fc9c45436..df7b455f1 100644 --- a/subworkflows/local/input_check.nf +++ b/subworkflows/local/input_check.nf @@ -9,6 +9,9 @@ workflow INPUT_CHECK { input_file // file: /path/to/input_file main: + + ch_software_versions = Channel.empty() + if (input_file.toString().toLowerCase().contains("sdrf")) { is_sdrf = true } else { @@ -19,9 +22,10 @@ workflow INPUT_CHECK { } } SAMPLESHEET_CHECK ( input_file, is_sdrf, params.validate_ontologies ) + ch_software_versions = ch_software_versions.mix(SAMPLESHEET_CHECK.out.versions) emit: ch_input_file = SAMPLESHEET_CHECK.out.checked_file is_sdrf = is_sdrf - versions = SAMPLESHEET_CHECK.out.versions + versions = ch_software_versions } diff --git a/subworkflows/local/phosphoscoring.nf b/subworkflows/local/phosphoscoring.nf index c092d3cba..30a602871 100644 --- a/subworkflows/local/phosphoscoring.nf +++ b/subworkflows/local/phosphoscoring.nf @@ -14,13 +14,13 @@ workflow PHOSPHOSCORING { ch_version = Channel.empty() IDSCORESWITCHERFORLUCIPHOR(ch_id_files.combine(Channel.value("\"Posterior Error Probability_score\""))) - ch_version = ch_version.mix(IDSCORESWITCHERFORLUCIPHOR.out.version) + ch_version = ch_version.mix(IDSCORESWITCHERFORLUCIPHOR.out.versions) LUCIPHORADAPTER(ch_mzml_files.join(IDSCORESWITCHERFORLUCIPHOR.out.id_score_switcher)) - ch_version = ch_version.mix(LUCIPHORADAPTER.out.version) + ch_version = ch_version.mix(LUCIPHORADAPTER.out.versions) emit: id_luciphor = LUCIPHORADAPTER.out.ptm_in_id_luciphor - version = ch_version + versions = ch_version } diff --git a/subworkflows/local/proteininference.nf b/subworkflows/local/proteininference.nf index faa2b5e3e..e492caf26 100644 --- a/subworkflows/local/proteininference.nf +++ b/subworkflows/local/proteininference.nf @@ -15,16 +15,16 @@ workflow PROTEININFERENCE { if (params.protein_inference_method == "bayesian") { EPIFANY(ch_consus_file) - ch_version = ch_version.mix(EPIFANY.out.version) + ch_version = ch_version.mix(EPIFANY.out.versions) ch_inference = EPIFANY.out.epi_inference } else { PROTEININFERENCER(ch_consus_file) - ch_version = ch_version.mix(PROTEININFERENCER.out.version) + ch_version = ch_version.mix(PROTEININFERENCER.out.versions) ch_inference = PROTEININFERENCER.out.protein_inference } IDFILTER(ch_inference) - ch_version = ch_version.mix(IDFILTER.out.version) + ch_version = ch_version.mix(IDFILTER.out.versions) IDFILTER.out.id_filtered .multiMap{ it -> meta: it[0] @@ -35,6 +35,6 @@ workflow PROTEININFERENCE { emit: epi_idfilter = ch_epi_results.results - version = ch_version + versions = ch_version } diff --git a/subworkflows/local/proteinquant.nf 
b/subworkflows/local/proteinquant.nf index d4ef2d057..08fa573a9 100644 --- a/subworkflows/local/proteinquant.nf +++ b/subworkflows/local/proteinquant.nf @@ -15,17 +15,16 @@ workflow PROTEINQUANT { ch_version = Channel.empty() IDCONFLICTRESOLVER(ch_conflict_file) - ch_version = ch_version.mix(IDCONFLICTRESOLVER.out.version) + ch_version = ch_version.mix(IDCONFLICTRESOLVER.out.versions) PROTEINQUANTIFIER(IDCONFLICTRESOLVER.out.pro_resconf, ch_expdesign_file) - ch_version = ch_version.mix(PROTEINQUANTIFIER.out.version) + ch_version = ch_version.mix(PROTEINQUANTIFIER.out.versions) MSSTATSCONVERTER(IDCONFLICTRESOLVER.out.pro_resconf, ch_expdesign_file, "ISO") - ch_version = ch_version.mix(MSSTATSCONVERTER.out.version) + ch_version = ch_version.mix(MSSTATSCONVERTER.out.versions) emit: msstats_csv = MSSTATSCONVERTER.out.out_msstats out_mztab = PROTEINQUANTIFIER.out.out_mztab - - version = ch_version + versions = ch_version } diff --git a/subworkflows/local/psmfdrcontrol.nf b/subworkflows/local/psmfdrcontrol.nf index 01b5e1a88..26480f72e 100644 --- a/subworkflows/local/psmfdrcontrol.nf +++ b/subworkflows/local/psmfdrcontrol.nf @@ -7,6 +7,7 @@ include { FALSEDISCOVERYRATE as FDRCONSENSUSID } from '../../modules/local/openm include { IDFILTER } from '../../modules/local/openms/idfilter/main' workflow PSMFDRCONTROL { + take: ch_id_files @@ -16,18 +17,17 @@ workflow PSMFDRCONTROL { if (params.search_engines.split(",").size() == 1) { IDSCORESWITCHER(ch_id_files) - ch_version = ch_version.mix(IDSCORESWITCHER.out.version) + ch_version = ch_version.mix(IDSCORESWITCHER.out.versions) ch_idfilter = IDSCORESWITCHER.out.id_score_switcher } else { FDRCONSENSUSID(ch_id_files) - ch_version = ch_version.mix(FDRCONSENSUSID.out.version) + ch_version = ch_version.mix(FDRCONSENSUSID.out.versions) ch_idfilter = FDRCONSENSUSID.out.id_files_idx_ForIDPEP_FDR } IDFILTER(ch_idfilter) - ch_version = ch_version.mix(IDFILTER.out.version) + ch_version = ch_version.mix(IDFILTER.out.versions) emit: id_filtered =IDFILTER.out.id_filtered - - version = ch_version + versions = ch_version } diff --git a/subworkflows/local/psmrescoring.nf b/subworkflows/local/psmrescoring.nf index c6fce8553..e283aa773 100644 --- a/subworkflows/local/psmrescoring.nf +++ b/subworkflows/local/psmrescoring.nf @@ -46,28 +46,29 @@ workflow PSMRESCORING { EXTRACTPSMFEATURES(ch_ms2rescore_branched.nosage) SAGEFEATURE(ch_ms2rescore_branched.sage) ch_id_files_feats = EXTRACTPSMFEATURES.out.id_files_feat.mix(SAGEFEATURE.out.id_files_feat) - ch_software_versions = ch_software_versions.mix(EXTRACTPSMFEATURES.out.version) + ch_software_versions = ch_software_versions.mix(EXTRACTPSMFEATURES.out.versions, SAGEFEATURE.out.versions) } else { EXTRACTPSMFEATURES(ch_id_files_branched.nosage) ch_id_files_feats = ch_id_files_branched.sage.mix(EXTRACTPSMFEATURES.out.id_files_feat) - ch_software_versions = ch_software_versions.mix(EXTRACTPSMFEATURES.out.version) + ch_software_versions = ch_software_versions.mix(EXTRACTPSMFEATURES.out.versions) } // Add SNR features to percolator if (params.add_snr_feature_percolator) { SPECTRUM2FEATURES(ch_id_files_feats.combine(ch_file_preparation_results, by: 0)) ch_id_files_feats = SPECTRUM2FEATURES.out.id_files_snr - ch_software_versions = ch_software_versions.mix(SPECTRUM2FEATURES.out.version) + ch_software_versions = ch_software_versions.mix(SPECTRUM2FEATURES.out.versions) } // Rescoring for independent run, Sample or whole experiments if (params.rescore_range == "independent_run") { PERCOLATOR(ch_id_files_feats) - ch_software_versions = 
ch_software_versions.mix(PERCOLATOR.out.version) + ch_software_versions = ch_software_versions.mix(PERCOLATOR.out.versions) ch_consensus_input = PERCOLATOR.out.id_files_perc } else if (params.rescore_range == "by_sample") { // Sample map GETSAMPLE(ch_expdesign) + ch_software_versions = ch_software_versions.mix(GETSAMPLE.out.versions) ch_expdesign_sample = GETSAMPLE.out.ch_expdesign_sample ch_expdesign_sample.splitCsv(header: true, sep: '\t') .map { get_sample_map(it) }.set{ sample_map_idv } @@ -89,10 +90,10 @@ workflow PSMRESCORING { IDMERGER(ch_id_files_feat_branched.comet.groupTuple(by: 2) .mix(ch_id_files_feat_branched.msgf.groupTuple(by: 2)) .mix(ch_id_files_feat_branched.sage.groupTuple(by: 2))) - ch_software_versions = ch_software_versions.mix(IDMERGER.out.version) + ch_software_versions = ch_software_versions.mix(IDMERGER.out.versions) PERCOLATOR(IDMERGER.out.id_merged) - ch_software_versions = ch_software_versions.mix(PERCOLATOR.out.version) + ch_software_versions = ch_software_versions.mix(PERCOLATOR.out.versions) // Currently only ID runs on exactly one mzML file are supported in CONSENSUSID. Split idXML by runs IDRIPPER(PERCOLATOR.out.id_files_perc) @@ -101,7 +102,7 @@ workflow PSMRESCORING { meta.combine(id_rippers, by: 0) .map{ [it[1], it[2], "MS:1001491"]} .set{ ch_consensus_input } - ch_software_versions = ch_software_versions.mix(IDRIPPER.out.version) + ch_software_versions = ch_software_versions.mix(IDRIPPER.out.versions) } else if (params.rescore_range == "by_project"){ ch_id_files_feats.map {[it[0].experiment_id, it[0], it[1]]}.set { ch_id_files_feats} @@ -120,10 +121,10 @@ workflow PSMRESCORING { IDMERGER(ch_id_files_feat_branched.comet.groupTuple(by: 2) .mix(ch_id_files_feat_branched.msgf.groupTuple(by: 2)) .mix(ch_id_files_feat_branched.sage.groupTuple(by: 2))) - ch_software_versions = ch_software_versions.mix(IDMERGER.out.version) + ch_software_versions = ch_software_versions.mix(IDMERGER.out.versions) PERCOLATOR(IDMERGER.out.id_merged) - ch_software_versions = ch_software_versions.mix(PERCOLATOR.out.version) + ch_software_versions = ch_software_versions.mix(PERCOLATOR.out.versions) // Currently only ID runs on exactly one mzML file are supported in CONSENSUSID. 
Split idXML by runs IDRIPPER(PERCOLATOR.out.id_files_perc) @@ -132,7 +133,7 @@ workflow PSMRESCORING { meta.combine(id_rippers, by: 0) .map{ [it[1], it[2], "MS:1001491"]} .set{ ch_consensus_input } - ch_software_versions = ch_software_versions.mix(IDRIPPER.out.version) + ch_software_versions = ch_software_versions.mix(IDRIPPER.out.versions) } ch_rescoring_results = ch_consensus_input @@ -140,26 +141,25 @@ workflow PSMRESCORING { MS2RESCORE(ch_id_files.combine(ch_file_preparation_results, by: 0)) ch_software_versions = ch_software_versions.mix(MS2RESCORE.out.versions) IDSCORESWITCHER(MS2RESCORE.out.idxml.combine(Channel.value("PEP"))) - ch_software_versions = ch_software_versions.mix(IDSCORESWITCHER.out.version) + ch_software_versions = ch_software_versions.mix(IDSCORESWITCHER.out.versions) ch_consensus_input = IDSCORESWITCHER.out.id_score_switcher.combine(Channel.value("MS:1001491")) ch_rescoring_results = IDSCORESWITCHER.out.id_score_switcher } else { ch_fdridpep = Channel.empty() if (params.search_engines.split(",").size() == 1) { FDRIDPEP(ch_id_files) - ch_software_versions = ch_software_versions.mix(FDRIDPEP.out.version) + ch_software_versions = ch_software_versions.mix(FDRIDPEP.out.versions) ch_id_files = Channel.empty() ch_fdridpep = FDRIDPEP.out.id_files_idx_ForIDPEP_FDR } IDPEP(ch_fdridpep.mix(ch_id_files)) - ch_software_versions = ch_software_versions.mix(IDPEP.out.version) + ch_software_versions = ch_software_versions.mix(IDPEP.out.versions) ch_consensus_input = IDPEP.out.id_files_ForIDPEP ch_rescoring_results = ch_consensus_input } emit: results = ch_rescoring_results - versions = ch_software_versions } diff --git a/subworkflows/local/utils_nfcore_quantms_pipeline/main.nf b/subworkflows/local/utils_nfcore_quantms_pipeline/main.nf index 5d36539ec..f6771f2e7 100644 --- a/subworkflows/local/utils_nfcore_quantms_pipeline/main.nf +++ b/subworkflows/local/utils_nfcore_quantms_pipeline/main.nf @@ -8,29 +8,25 @@ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ */ -include { UTILS_NFVALIDATION_PLUGIN } from '../../nf-core/utils_nfvalidation_plugin' -include { paramsSummaryMap } from 'plugin/nf-validation' -include { fromSamplesheet } from 'plugin/nf-validation' -include { UTILS_NEXTFLOW_PIPELINE } from '../../nf-core/utils_nextflow_pipeline' +include { UTILS_NFSCHEMA_PLUGIN } from '../../nf-core/utils_nfschema_plugin' +include { paramsSummaryMap } from 'plugin/nf-schema' +include { samplesheetToList } from 'plugin/nf-schema' include { completionEmail } from '../../nf-core/utils_nfcore_pipeline' include { completionSummary } from '../../nf-core/utils_nfcore_pipeline' -include { dashedLine } from '../../nf-core/utils_nfcore_pipeline' -include { nfCoreLogo } from '../../nf-core/utils_nfcore_pipeline' include { imNotification } from '../../nf-core/utils_nfcore_pipeline' include { UTILS_NFCORE_PIPELINE } from '../../nf-core/utils_nfcore_pipeline' -include { workflowCitation } from '../../nf-core/utils_nfcore_pipeline' +include { UTILS_NEXTFLOW_PIPELINE } from '../../nf-core/utils_nextflow_pipeline' /* -======================================================================================== +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ SUBWORKFLOW TO INITIALISE PIPELINE -======================================================================================== +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ */ workflow PIPELINE_INITIALISATION { take: version // boolean: 
Display version and exit - help // boolean: Display help text validate_params // boolean: Boolean whether to validate parameters against the schema at runtime monochrome_logs // boolean: Do not use coloured log outputs nextflow_cli_args // array: List of positional nextflow CLI args @@ -54,16 +50,10 @@ workflow PIPELINE_INITIALISATION { // // Validate parameters and generate parameter summary to stdout // - pre_help_text = nfCoreLogo(monochrome_logs) - post_help_text = '\n' + workflowCitation() + '\n' + dashedLine(monochrome_logs) - def String workflow_command = "nextflow run ${workflow.manifest.name} -profile --input samplesheet.csv --outdir " - UTILS_NFVALIDATION_PLUGIN ( - help, - workflow_command, - pre_help_text, - post_help_text, + UTILS_NFSCHEMA_PLUGIN ( + workflow, validate_params, - "nextflow_schema.json" + null ) // @@ -72,6 +62,7 @@ workflow PIPELINE_INITIALISATION { UTILS_NFCORE_PIPELINE ( nextflow_cli_args ) + // // Custom validation for pipeline parameters // @@ -80,8 +71,9 @@ workflow PIPELINE_INITIALISATION { // // Create channel from input file provided through params.input // + Channel - .fromSamplesheet("input") + .fromList(samplesheetToList(params.input, "${projectDir}/assets/schema_input.json")) .map { meta, fastq_1, fastq_2 -> if (!fastq_2) { @@ -91,8 +83,8 @@ workflow PIPELINE_INITIALISATION { } } .groupTuple() - .map { - validateInputSamplesheet(it) + .map { samplesheet -> + validateInputSamplesheet(samplesheet) } .map { meta, fastqs -> @@ -106,9 +98,9 @@ workflow PIPELINE_INITIALISATION { } /* -======================================================================================== +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ SUBWORKFLOW FOR PIPELINE COMPLETION -======================================================================================== +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ */ workflow PIPELINE_COMPLETION { @@ -123,7 +115,6 @@ workflow PIPELINE_COMPLETION { multiqc_report // string: Path to MultiQC report main: - summary_params = paramsSummaryMap(workflow, parameters_schema: "nextflow_schema.json") // @@ -131,11 +122,18 @@ workflow PIPELINE_COMPLETION { // workflow.onComplete { if (email || email_on_fail) { - completionEmail(summary_params, email, email_on_fail, plaintext_email, outdir, monochrome_logs, multiqc_report.toList()) + completionEmail( + summary_params, + email, + email_on_fail, + plaintext_email, + outdir, + monochrome_logs, + multiqc_report.toList() + ) } completionSummary(monochrome_logs) - if (hook_url) { imNotification(summary_params, hook_url) } @@ -147,9 +145,9 @@ workflow PIPELINE_COMPLETION { } /* -======================================================================================== +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ FUNCTIONS -======================================================================================== +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ */ // // Check and validate pipeline parameters @@ -165,7 +163,7 @@ def validateInputSamplesheet(input) { def (metas, fastqs) = input[1..2] // Check that multiple runs of the same sample are of the same datatype i.e. 
single-end / paired-end - def endedness_ok = metas.collect{ it.single_end }.unique().size == 1 + def endedness_ok = metas.collect{ meta -> meta.single_end }.unique().size == 1 if (!endedness_ok) { error("Please check input samplesheet -> Multiple runs of a sample must be of the same datatype i.e. single-end or paired-end: ${metas[0].id}") } @@ -197,7 +195,6 @@ def genomeExistsError() { error(error_string) } } - // // Generate methods description for MultiQC // @@ -239,8 +236,10 @@ def methodsDescriptionText(mqc_methods_yaml) { // Removing `https://doi.org/` to handle pipelines using DOIs vs DOI resolvers // Removing ` ` since the manifest.doi is a string and not a proper list def temp_doi_ref = "" - String[] manifest_doi = meta.manifest_map.doi.tokenize(",") - for (String doi_ref: manifest_doi) temp_doi_ref += "(doi: ${doi_ref.replace("https://doi.org/", "").replace(" ", "")}), " + def manifest_doi = meta.manifest_map.doi.tokenize(",") + manifest_doi.each { doi_ref -> + temp_doi_ref += "(doi: ${doi_ref.replace("https://doi.org/", "").replace(" ", "")}), " + } meta["doi_text"] = temp_doi_ref.substring(0, temp_doi_ref.length() - 2) } else meta["doi_text"] = "" meta["nodoi_text"] = meta.manifest_map.doi ? "" : "
<li>If available, make sure to update the text to include the Zenodo DOI of version of the pipeline used. </li>
  • " @@ -261,3 +260,4 @@ def methodsDescriptionText(mqc_methods_yaml) { return description_html.toString() } + diff --git a/subworkflows/nf-core/utils_nextflow_pipeline/main.nf b/subworkflows/nf-core/utils_nextflow_pipeline/main.nf index ac31f28f6..0fcbf7b3f 100644 --- a/subworkflows/nf-core/utils_nextflow_pipeline/main.nf +++ b/subworkflows/nf-core/utils_nextflow_pipeline/main.nf @@ -2,18 +2,13 @@ // Subworkflow with functionality that may be useful for any Nextflow pipeline // -import org.yaml.snakeyaml.Yaml -import groovy.json.JsonOutput -import nextflow.extension.FilesEx - /* -======================================================================================== +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ SUBWORKFLOW DEFINITION -======================================================================================== +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ */ workflow UTILS_NEXTFLOW_PIPELINE { - take: print_version // boolean: print version dump_parameters // boolean: dump parameters @@ -26,7 +21,7 @@ workflow UTILS_NEXTFLOW_PIPELINE { // Print workflow version and exit on --version // if (print_version) { - log.info "${workflow.manifest.name} ${getWorkflowVersion()}" + log.info("${workflow.manifest.name} ${getWorkflowVersion()}") System.exit(0) } @@ -49,16 +44,16 @@ workflow UTILS_NEXTFLOW_PIPELINE { } /* -======================================================================================== +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ FUNCTIONS -======================================================================================== +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ */ // // Generate version string // def getWorkflowVersion() { - String version_string = "" + def version_string = "" as String if (workflow.manifest.version) { def prefix_v = workflow.manifest.version[0] != 'v' ? 'v' : '' version_string += "${prefix_v}${workflow.manifest.version}" @@ -76,13 +71,13 @@ def getWorkflowVersion() { // Dump pipeline parameters to a JSON file // def dumpParametersToJSON(outdir) { - def timestamp = new java.util.Date().format( 'yyyy-MM-dd_HH-mm-ss') - def filename = "params_${timestamp}.json" - def temp_pf = new File(workflow.launchDir.toString(), ".${filename}") - def jsonStr = JsonOutput.toJson(params) - temp_pf.text = JsonOutput.prettyPrint(jsonStr) + def timestamp = new java.util.Date().format('yyyy-MM-dd_HH-mm-ss') + def filename = "params_${timestamp}.json" + def temp_pf = new File(workflow.launchDir.toString(), ".${filename}") + def jsonStr = groovy.json.JsonOutput.toJson(params) + temp_pf.text = groovy.json.JsonOutput.prettyPrint(jsonStr) - FilesEx.copyTo(temp_pf.toPath(), "${outdir}/pipeline_info/params_${timestamp}.json") + nextflow.extension.FilesEx.copyTo(temp_pf.toPath(), "${outdir}/pipeline_info/params_${timestamp}.json") temp_pf.delete() } @@ -90,37 +85,40 @@ def dumpParametersToJSON(outdir) { // When running with -profile conda, warn if channels have not been set-up appropriately // def checkCondaChannels() { - Yaml parser = new Yaml() + def parser = new org.yaml.snakeyaml.Yaml() def channels = [] try { def config = parser.load("conda config --show channels".execute().text) channels = config.channels - } catch(NullPointerException | IOException e) { - log.warn "Could not verify conda channel configuration." 
- return + } + catch (NullPointerException e) { + log.warn("Could not verify conda channel configuration.") + return null + } + catch (IOException e) { + log.warn("Could not verify conda channel configuration.") + return null } // Check that all channels are present // This channel list is ordered by required channel priority. - def required_channels_in_order = ['conda-forge', 'bioconda', 'defaults'] + def required_channels_in_order = ['conda-forge', 'bioconda'] def channels_missing = ((required_channels_in_order as Set) - (channels as Set)) as Boolean // Check that they are in the right order - def channel_priority_violation = false - def n = required_channels_in_order.size() - for (int i = 0; i < n - 1; i++) { - channel_priority_violation |= !(channels.indexOf(required_channels_in_order[i]) < channels.indexOf(required_channels_in_order[i+1])) - } + def channel_priority_violation = required_channels_in_order != channels.findAll { ch -> ch in required_channels_in_order } if (channels_missing | channel_priority_violation) { - log.warn "~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n" + - " There is a problem with your Conda configuration!\n\n" + - " You will need to set-up the conda-forge and bioconda channels correctly.\n" + - " Please refer to https://bioconda.github.io/\n" + - " The observed channel order is \n" + - " ${channels}\n" + - " but the following channel order is required:\n" + - " ${required_channels_in_order}\n" + - "~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~" + log.warn """\ + ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + There is a problem with your Conda configuration! + You will need to set-up the conda-forge and bioconda channels correctly. 
+ Please refer to https://bioconda.github.io/ + The observed channel order is + ${channels} + but the following channel order is required: + ${required_channels_in_order} + ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~" + """.stripIndent(true) } } diff --git a/subworkflows/nf-core/utils_nextflow_pipeline/tests/nextflow.config b/subworkflows/nf-core/utils_nextflow_pipeline/tests/nextflow.config index d0a926bf6..a09572e5b 100644 --- a/subworkflows/nf-core/utils_nextflow_pipeline/tests/nextflow.config +++ b/subworkflows/nf-core/utils_nextflow_pipeline/tests/nextflow.config @@ -3,7 +3,7 @@ manifest { author = """nf-core""" homePage = 'https://127.0.0.1' description = """Dummy pipeline""" - nextflowVersion = '!>=23.04.0' + nextflowVersion = '!>=23.04.0' version = '9.9.9' doi = 'https://doi.org/10.5281/zenodo.5070524' } diff --git a/subworkflows/nf-core/utils_nfcore_pipeline/main.nf b/subworkflows/nf-core/utils_nfcore_pipeline/main.nf index 14558c392..5cb7bafef 100644 --- a/subworkflows/nf-core/utils_nfcore_pipeline/main.nf +++ b/subworkflows/nf-core/utils_nfcore_pipeline/main.nf @@ -2,17 +2,13 @@ // Subworkflow with utility functions specific to the nf-core pipeline template // -import org.yaml.snakeyaml.Yaml -import nextflow.extension.FilesEx - /* -======================================================================================== +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ SUBWORKFLOW DEFINITION -======================================================================================== +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ */ workflow UTILS_NFCORE_PIPELINE { - take: nextflow_cli_args @@ -25,23 +21,20 @@ workflow UTILS_NFCORE_PIPELINE { } /* -======================================================================================== +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ FUNCTIONS -======================================================================================== +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ */ // // Warn if a -profile or Nextflow config has not been provided to run the pipeline // def checkConfigProvided() { - valid_config = true + def valid_config = true as Boolean if (workflow.profile == 'standard' && workflow.configFiles.size() <= 1) { - log.warn "[$workflow.manifest.name] You are attempting to run the pipeline without any custom configuration!\n\n" + - "This will be dependent on your local compute environment but can be achieved via one or more of the following:\n" + - " (1) Using an existing pipeline profile e.g. `-profile docker` or `-profile singularity`\n" + - " (2) Using an existing nf-core/configs for your Institution e.g. `-profile crick` or `-profile uppmax`\n" + - " (3) Using your own local custom config e.g. `-c /path/to/your/custom.config`\n\n" + - "Please refer to the quick start section and usage docs for the pipeline.\n " + log.warn( + "[${workflow.manifest.name}] You are attempting to run the pipeline without any custom configuration!\n\n" + "This will be dependent on your local compute environment but can be achieved via one or more of the following:\n" + " (1) Using an existing pipeline profile e.g. `-profile docker` or `-profile singularity`\n" + " (2) Using an existing nf-core/configs for your Institution e.g. `-profile crick` or `-profile uppmax`\n" + " (3) Using your own local custom config e.g. 
`-c /path/to/your/custom.config`\n\n" + "Please refer to the quick start section and usage docs for the pipeline.\n " + ) valid_config = false } return valid_config @@ -52,12 +45,14 @@ def checkConfigProvided() { // def checkProfileProvided(nextflow_cli_args) { if (workflow.profile.endsWith(',')) { - error "The `-profile` option cannot end with a trailing comma, please remove it and re-run the pipeline!\n" + - "HINT: A common mistake is to provide multiple values separated by spaces e.g. `-profile test, docker`.\n" + error( + "The `-profile` option cannot end with a trailing comma, please remove it and re-run the pipeline!\n" + "HINT: A common mistake is to provide multiple values separated by spaces e.g. `-profile test, docker`.\n" + ) } if (nextflow_cli_args[0]) { - log.warn "nf-core pipelines do not accept positional arguments. The positional argument `${nextflow_cli_args[0]}` has been detected.\n" + - "HINT: A common mistake is to provide multiple values separated by spaces e.g. `-profile test, docker`.\n" + log.warn( + "nf-core pipelines do not accept positional arguments. The positional argument `${nextflow_cli_args[0]}` has been detected.\n" + "HINT: A common mistake is to provide multiple values separated by spaces e.g. `-profile test, docker`.\n" + ) } } @@ -66,25 +61,21 @@ def checkProfileProvided(nextflow_cli_args) { // def workflowCitation() { def temp_doi_ref = "" - String[] manifest_doi = workflow.manifest.doi.tokenize(",") - // Using a loop to handle multiple DOIs + def manifest_doi = workflow.manifest.doi.tokenize(",") + // Handling multiple DOIs // Removing `https://doi.org/` to handle pipelines using DOIs vs DOI resolvers // Removing ` ` since the manifest.doi is a string and not a proper list - for (String doi_ref: manifest_doi) temp_doi_ref += " https://doi.org/${doi_ref.replace('https://doi.org/', '').replace(' ', '')}\n" - return "If you use ${workflow.manifest.name} for your analysis please cite:\n\n" + - "* The pipeline\n" + - temp_doi_ref + "\n" + - "* The nf-core framework\n" + - " https://doi.org/10.1038/s41587-020-0439-x\n\n" + - "* Software dependencies\n" + - " https://github.com/${workflow.manifest.name}/blob/master/CITATIONS.md" + manifest_doi.each { doi_ref -> + temp_doi_ref += " https://doi.org/${doi_ref.replace('https://doi.org/', '').replace(' ', '')}\n" + } + return "If you use ${workflow.manifest.name} for your analysis please cite:\n\n" + "* The pipeline\n" + temp_doi_ref + "\n" + "* The nf-core framework\n" + " https://doi.org/10.1038/s41587-020-0439-x\n\n" + "* Software dependencies\n" + " https://github.com/${workflow.manifest.name}/blob/master/CITATIONS.md" } // // Generate workflow version string // def getWorkflowVersion() { - String version_string = "" + def version_string = "" as String if (workflow.manifest.version) { def prefix_v = workflow.manifest.version[0] != 'v' ? 
'v' : '' version_string += "${prefix_v}${workflow.manifest.version}" @@ -102,8 +93,8 @@ def getWorkflowVersion() { // Get software versions for pipeline // def processVersionsFromYAML(yaml_file) { - Yaml yaml = new Yaml() - versions = yaml.load(yaml_file).collectEntries { k, v -> [ k.tokenize(':')[-1], v ] } + def yaml = new org.yaml.snakeyaml.Yaml() + def versions = yaml.load(yaml_file).collectEntries { k, v -> [k.tokenize(':')[-1], v] } return yaml.dumpAsMap(versions).trim() } @@ -113,8 +104,8 @@ def processVersionsFromYAML(yaml_file) { def workflowVersionToYAML() { return """ Workflow: - $workflow.manifest.name: ${getWorkflowVersion()} - Nextflow: $workflow.nextflow.version + ${workflow.manifest.name}: ${getWorkflowVersion()} + Nextflow: ${workflow.nextflow.version} """.stripIndent().trim() } @@ -122,11 +113,7 @@ def workflowVersionToYAML() { // Get channel of software versions used in pipeline in YAML format // def softwareVersionsToYAML(ch_versions) { - return ch_versions - .unique() - .map { processVersionsFromYAML(it) } - .unique() - .mix(Channel.of(workflowVersionToYAML())) + return ch_versions.unique().map { version -> processVersionsFromYAML(version) }.unique().mix(Channel.of(workflowVersionToYAML())) } // @@ -134,25 +121,31 @@ def softwareVersionsToYAML(ch_versions) { // def paramsSummaryMultiqc(summary_params) { def summary_section = '' - for (group in summary_params.keySet()) { - def group_params = summary_params.get(group) // This gets the parameters of that particular group - if (group_params) { - summary_section += "

    <p style=\"font-size:110%\"><b>$group</b></p>\n"
-            summary_section += "    <dl class=\"dl-horizontal\">\n"
-            for (param in group_params.keySet()) {
-                summary_section += "        <dt>$param</dt><dd><samp>${group_params.get(param) ?: '<span style=\"color:#999999;\">N/A</a>'}</samp></dd>\n"
+    summary_params
+        .keySet()
+        .each { group ->
+            def group_params = summary_params.get(group)
+            // This gets the parameters of that particular group
+            if (group_params) {
+                summary_section += "    <p style=\"font-size:110%\"><b>${group}</b></p>\n"
+                summary_section += "    <dl class=\"dl-horizontal\">\n"
+                group_params
+                    .keySet()
+                    .sort()
+                    .each { param ->
+                        summary_section += "        <dt>${param}</dt><dd><samp>${group_params.get(param) ?: '<span style=\"color:#999999;\">N/A</a>'}</samp></dd>\n"
+                    }
+                summary_section += "    </dl>\n"
+            }
        }
-            summary_section += "    </dl>
    \n" } - } - String yaml_file_text = "id: '${workflow.manifest.name.replace('/','-')}-summary'\n" - yaml_file_text += "description: ' - this information is collected when the pipeline is started.'\n" - yaml_file_text += "section_name: '${workflow.manifest.name} Workflow Summary'\n" - yaml_file_text += "section_href: 'https://github.com/${workflow.manifest.name}'\n" - yaml_file_text += "plot_type: 'html'\n" - yaml_file_text += "data: |\n" - yaml_file_text += "${summary_section}" + def yaml_file_text = "id: '${workflow.manifest.name.replace('/', '-')}-summary'\n" as String + yaml_file_text += "description: ' - this information is collected when the pipeline is started.'\n" + yaml_file_text += "section_name: '${workflow.manifest.name} Workflow Summary'\n" + yaml_file_text += "section_href: 'https://github.com/${workflow.manifest.name}'\n" + yaml_file_text += "plot_type: 'html'\n" + yaml_file_text += "data: |\n" + yaml_file_text += "${summary_section}" return yaml_file_text } @@ -161,7 +154,7 @@ def paramsSummaryMultiqc(summary_params) { // nf-core logo // def nfCoreLogo(monochrome_logs=true) { - Map colors = logColours(monochrome_logs) + def colors = logColours(monochrome_logs) as Map String.format( """\n ${dashedLine(monochrome_logs)} @@ -180,7 +173,7 @@ def nfCoreLogo(monochrome_logs=true) { // Return dashed line // def dashedLine(monochrome_logs=true) { - Map colors = logColours(monochrome_logs) + def colors = logColours(monochrome_logs) as Map return "-${colors.dim}----------------------------------------------------${colors.reset}-" } @@ -188,7 +181,7 @@ def dashedLine(monochrome_logs=true) { // ANSII colours used for terminal logging // def logColours(monochrome_logs=true) { - Map colorcodes = [:] + def colorcodes = [:] as Map // Reset / Meta colorcodes['reset'] = monochrome_logs ? '' : "\033[0m" @@ -200,54 +193,54 @@ def logColours(monochrome_logs=true) { colorcodes['hidden'] = monochrome_logs ? '' : "\033[8m" // Regular Colors - colorcodes['black'] = monochrome_logs ? '' : "\033[0;30m" - colorcodes['red'] = monochrome_logs ? '' : "\033[0;31m" - colorcodes['green'] = monochrome_logs ? '' : "\033[0;32m" - colorcodes['yellow'] = monochrome_logs ? '' : "\033[0;33m" - colorcodes['blue'] = monochrome_logs ? '' : "\033[0;34m" - colorcodes['purple'] = monochrome_logs ? '' : "\033[0;35m" - colorcodes['cyan'] = monochrome_logs ? '' : "\033[0;36m" - colorcodes['white'] = monochrome_logs ? '' : "\033[0;37m" + colorcodes['black'] = monochrome_logs ? '' : "\033[0;30m" + colorcodes['red'] = monochrome_logs ? '' : "\033[0;31m" + colorcodes['green'] = monochrome_logs ? '' : "\033[0;32m" + colorcodes['yellow'] = monochrome_logs ? '' : "\033[0;33m" + colorcodes['blue'] = monochrome_logs ? '' : "\033[0;34m" + colorcodes['purple'] = monochrome_logs ? '' : "\033[0;35m" + colorcodes['cyan'] = monochrome_logs ? '' : "\033[0;36m" + colorcodes['white'] = monochrome_logs ? '' : "\033[0;37m" // Bold - colorcodes['bblack'] = monochrome_logs ? '' : "\033[1;30m" - colorcodes['bred'] = monochrome_logs ? '' : "\033[1;31m" - colorcodes['bgreen'] = monochrome_logs ? '' : "\033[1;32m" - colorcodes['byellow'] = monochrome_logs ? '' : "\033[1;33m" - colorcodes['bblue'] = monochrome_logs ? '' : "\033[1;34m" - colorcodes['bpurple'] = monochrome_logs ? '' : "\033[1;35m" - colorcodes['bcyan'] = monochrome_logs ? '' : "\033[1;36m" - colorcodes['bwhite'] = monochrome_logs ? '' : "\033[1;37m" + colorcodes['bblack'] = monochrome_logs ? '' : "\033[1;30m" + colorcodes['bred'] = monochrome_logs ? 
'' : "\033[1;31m" + colorcodes['bgreen'] = monochrome_logs ? '' : "\033[1;32m" + colorcodes['byellow'] = monochrome_logs ? '' : "\033[1;33m" + colorcodes['bblue'] = monochrome_logs ? '' : "\033[1;34m" + colorcodes['bpurple'] = monochrome_logs ? '' : "\033[1;35m" + colorcodes['bcyan'] = monochrome_logs ? '' : "\033[1;36m" + colorcodes['bwhite'] = monochrome_logs ? '' : "\033[1;37m" // Underline - colorcodes['ublack'] = monochrome_logs ? '' : "\033[4;30m" - colorcodes['ured'] = monochrome_logs ? '' : "\033[4;31m" - colorcodes['ugreen'] = monochrome_logs ? '' : "\033[4;32m" - colorcodes['uyellow'] = monochrome_logs ? '' : "\033[4;33m" - colorcodes['ublue'] = monochrome_logs ? '' : "\033[4;34m" - colorcodes['upurple'] = monochrome_logs ? '' : "\033[4;35m" - colorcodes['ucyan'] = monochrome_logs ? '' : "\033[4;36m" - colorcodes['uwhite'] = monochrome_logs ? '' : "\033[4;37m" + colorcodes['ublack'] = monochrome_logs ? '' : "\033[4;30m" + colorcodes['ured'] = monochrome_logs ? '' : "\033[4;31m" + colorcodes['ugreen'] = monochrome_logs ? '' : "\033[4;32m" + colorcodes['uyellow'] = monochrome_logs ? '' : "\033[4;33m" + colorcodes['ublue'] = monochrome_logs ? '' : "\033[4;34m" + colorcodes['upurple'] = monochrome_logs ? '' : "\033[4;35m" + colorcodes['ucyan'] = monochrome_logs ? '' : "\033[4;36m" + colorcodes['uwhite'] = monochrome_logs ? '' : "\033[4;37m" // High Intensity - colorcodes['iblack'] = monochrome_logs ? '' : "\033[0;90m" - colorcodes['ired'] = monochrome_logs ? '' : "\033[0;91m" - colorcodes['igreen'] = monochrome_logs ? '' : "\033[0;92m" - colorcodes['iyellow'] = monochrome_logs ? '' : "\033[0;93m" - colorcodes['iblue'] = monochrome_logs ? '' : "\033[0;94m" - colorcodes['ipurple'] = monochrome_logs ? '' : "\033[0;95m" - colorcodes['icyan'] = monochrome_logs ? '' : "\033[0;96m" - colorcodes['iwhite'] = monochrome_logs ? '' : "\033[0;97m" + colorcodes['iblack'] = monochrome_logs ? '' : "\033[0;90m" + colorcodes['ired'] = monochrome_logs ? '' : "\033[0;91m" + colorcodes['igreen'] = monochrome_logs ? '' : "\033[0;92m" + colorcodes['iyellow'] = monochrome_logs ? '' : "\033[0;93m" + colorcodes['iblue'] = monochrome_logs ? '' : "\033[0;94m" + colorcodes['ipurple'] = monochrome_logs ? '' : "\033[0;95m" + colorcodes['icyan'] = monochrome_logs ? '' : "\033[0;96m" + colorcodes['iwhite'] = monochrome_logs ? '' : "\033[0;97m" // Bold High Intensity - colorcodes['biblack'] = monochrome_logs ? '' : "\033[1;90m" - colorcodes['bired'] = monochrome_logs ? '' : "\033[1;91m" - colorcodes['bigreen'] = monochrome_logs ? '' : "\033[1;92m" - colorcodes['biyellow'] = monochrome_logs ? '' : "\033[1;93m" - colorcodes['biblue'] = monochrome_logs ? '' : "\033[1;94m" - colorcodes['bipurple'] = monochrome_logs ? '' : "\033[1;95m" - colorcodes['bicyan'] = monochrome_logs ? '' : "\033[1;96m" - colorcodes['biwhite'] = monochrome_logs ? '' : "\033[1;97m" + colorcodes['biblack'] = monochrome_logs ? '' : "\033[1;90m" + colorcodes['bired'] = monochrome_logs ? '' : "\033[1;91m" + colorcodes['bigreen'] = monochrome_logs ? '' : "\033[1;92m" + colorcodes['biyellow'] = monochrome_logs ? '' : "\033[1;93m" + colorcodes['biblue'] = monochrome_logs ? '' : "\033[1;94m" + colorcodes['bipurple'] = monochrome_logs ? '' : "\033[1;95m" + colorcodes['bicyan'] = monochrome_logs ? '' : "\033[1;96m" + colorcodes['biwhite'] = monochrome_logs ? 
'' : "\033[1;97m" return colorcodes } @@ -262,14 +255,15 @@ def attachMultiqcReport(multiqc_report) { mqc_report = multiqc_report.getVal() if (mqc_report.getClass() == ArrayList && mqc_report.size() >= 1) { if (mqc_report.size() > 1) { - log.warn "[$workflow.manifest.name] Found multiple reports from process 'MULTIQC', will use only one" + log.warn("[${workflow.manifest.name}] Found multiple reports from process 'MULTIQC', will use only one") } mqc_report = mqc_report[0] } } - } catch (all) { + } + catch (Exception all) { if (multiqc_report) { - log.warn "[$workflow.manifest.name] Could not attach MultiQC report to summary email" + log.warn("[${workflow.manifest.name}] Could not attach MultiQC report to summary email") } } return mqc_report @@ -281,26 +275,35 @@ def attachMultiqcReport(multiqc_report) { def completionEmail(summary_params, email, email_on_fail, plaintext_email, outdir, monochrome_logs=true, multiqc_report=null) { // Set up the e-mail variables - def subject = "[$workflow.manifest.name] Successful: $workflow.runName" + def subject = "[${workflow.manifest.name}] Successful: ${workflow.runName}" if (!workflow.success) { - subject = "[$workflow.manifest.name] FAILED: $workflow.runName" + subject = "[${workflow.manifest.name}] FAILED: ${workflow.runName}" } def summary = [:] - for (group in summary_params.keySet()) { - summary << summary_params[group] - } + summary_params + .keySet() + .sort() + .each { group -> + summary << summary_params[group] + } def misc_fields = [:] misc_fields['Date Started'] = workflow.start misc_fields['Date Completed'] = workflow.complete misc_fields['Pipeline script file path'] = workflow.scriptFile misc_fields['Pipeline script hash ID'] = workflow.scriptId - if (workflow.repository) misc_fields['Pipeline repository Git URL'] = workflow.repository - if (workflow.commitId) misc_fields['Pipeline repository Git Commit'] = workflow.commitId - if (workflow.revision) misc_fields['Pipeline Git branch/tag'] = workflow.revision - misc_fields['Nextflow Version'] = workflow.nextflow.version - misc_fields['Nextflow Build'] = workflow.nextflow.build + if (workflow.repository) { + misc_fields['Pipeline repository Git URL'] = workflow.repository + } + if (workflow.commitId) { + misc_fields['Pipeline repository Git Commit'] = workflow.commitId + } + if (workflow.revision) { + misc_fields['Pipeline Git branch/tag'] = workflow.revision + } + misc_fields['Nextflow Version'] = workflow.nextflow.version + misc_fields['Nextflow Build'] = workflow.nextflow.build misc_fields['Nextflow Compile Timestamp'] = workflow.nextflow.timestamp def email_fields = [:] @@ -338,39 +341,41 @@ def completionEmail(summary_params, email, email_on_fail, plaintext_email, outdi // Render the sendmail template def max_multiqc_email_size = (params.containsKey('max_multiqc_email_size') ? 
params.max_multiqc_email_size : 0) as nextflow.util.MemoryUnit - def smail_fields = [ email: email_address, subject: subject, email_txt: email_txt, email_html: email_html, projectDir: "${workflow.projectDir}", mqcFile: mqc_report, mqcMaxSize: max_multiqc_email_size.toBytes() ] + def smail_fields = [email: email_address, subject: subject, email_txt: email_txt, email_html: email_html, projectDir: "${workflow.projectDir}", mqcFile: mqc_report, mqcMaxSize: max_multiqc_email_size.toBytes()] def sf = new File("${workflow.projectDir}/assets/sendmail_template.txt") def sendmail_template = engine.createTemplate(sf).make(smail_fields) def sendmail_html = sendmail_template.toString() // Send the HTML e-mail - Map colors = logColours(monochrome_logs) + def colors = logColours(monochrome_logs) as Map if (email_address) { try { - if (plaintext_email) { throw GroovyException('Send plaintext e-mail, not HTML') } + if (plaintext_email) { +new org.codehaus.groovy.GroovyException('Send plaintext e-mail, not HTML') } // Try to send HTML e-mail using sendmail def sendmail_tf = new File(workflow.launchDir.toString(), ".sendmail_tmp.html") sendmail_tf.withWriter { w -> w << sendmail_html } - [ 'sendmail', '-t' ].execute() << sendmail_html - log.info "-${colors.purple}[$workflow.manifest.name]${colors.green} Sent summary e-mail to $email_address (sendmail)-" - } catch (all) { + ['sendmail', '-t'].execute() << sendmail_html + log.info("-${colors.purple}[${workflow.manifest.name}]${colors.green} Sent summary e-mail to ${email_address} (sendmail)-") + } + catch (Exception all) { // Catch failures and try with plaintext - def mail_cmd = [ 'mail', '-s', subject, '--content-type=text/html', email_address ] + def mail_cmd = ['mail', '-s', subject, '--content-type=text/html', email_address] mail_cmd.execute() << email_html - log.info "-${colors.purple}[$workflow.manifest.name]${colors.green} Sent summary e-mail to $email_address (mail)-" + log.info("-${colors.purple}[${workflow.manifest.name}]${colors.green} Sent summary e-mail to ${email_address} (mail)-") } } // Write summary e-mail HTML to a file def output_hf = new File(workflow.launchDir.toString(), ".pipeline_report.html") output_hf.withWriter { w -> w << email_html } - FilesEx.copyTo(output_hf.toPath(), "${outdir}/pipeline_info/pipeline_report.html"); + nextflow.extension.FilesEx.copyTo(output_hf.toPath(), "${outdir}/pipeline_info/pipeline_report.html") output_hf.delete() // Write summary e-mail TXT to a file def output_tf = new File(workflow.launchDir.toString(), ".pipeline_report.txt") output_tf.withWriter { w -> w << email_txt } - FilesEx.copyTo(output_tf.toPath(), "${outdir}/pipeline_info/pipeline_report.txt"); + nextflow.extension.FilesEx.copyTo(output_tf.toPath(), "${outdir}/pipeline_info/pipeline_report.txt") output_tf.delete() } @@ -378,15 +383,17 @@ def completionEmail(summary_params, email, email_on_fail, plaintext_email, outdi // Print pipeline summary on completion // def completionSummary(monochrome_logs=true) { - Map colors = logColours(monochrome_logs) + def colors = logColours(monochrome_logs) as Map if (workflow.success) { if (workflow.stats.ignoredCount == 0) { - log.info "-${colors.purple}[$workflow.manifest.name]${colors.green} Pipeline completed successfully${colors.reset}-" - } else { - log.info "-${colors.purple}[$workflow.manifest.name]${colors.yellow} Pipeline completed successfully, but with errored process(es) ${colors.reset}-" + log.info("-${colors.purple}[${workflow.manifest.name}]${colors.green} Pipeline completed 
successfully${colors.reset}-") + } + else { + log.info("-${colors.purple}[${workflow.manifest.name}]${colors.yellow} Pipeline completed successfully, but with errored process(es) ${colors.reset}-") } - } else { - log.info "-${colors.purple}[$workflow.manifest.name]${colors.red} Pipeline completed with errors${colors.reset}-" + } + else { + log.info("-${colors.purple}[${workflow.manifest.name}]${colors.red} Pipeline completed with errors${colors.reset}-") } } @@ -395,21 +402,30 @@ def completionSummary(monochrome_logs=true) { // def imNotification(summary_params, hook_url) { def summary = [:] - for (group in summary_params.keySet()) { - summary << summary_params[group] - } + summary_params + .keySet() + .sort() + .each { group -> + summary << summary_params[group] + } def misc_fields = [:] - misc_fields['start'] = workflow.start - misc_fields['complete'] = workflow.complete - misc_fields['scriptfile'] = workflow.scriptFile - misc_fields['scriptid'] = workflow.scriptId - if (workflow.repository) misc_fields['repository'] = workflow.repository - if (workflow.commitId) misc_fields['commitid'] = workflow.commitId - if (workflow.revision) misc_fields['revision'] = workflow.revision - misc_fields['nxf_version'] = workflow.nextflow.version - misc_fields['nxf_build'] = workflow.nextflow.build - misc_fields['nxf_timestamp'] = workflow.nextflow.timestamp + misc_fields['start'] = workflow.start + misc_fields['complete'] = workflow.complete + misc_fields['scriptfile'] = workflow.scriptFile + misc_fields['scriptid'] = workflow.scriptId + if (workflow.repository) { + misc_fields['repository'] = workflow.repository + } + if (workflow.commitId) { + misc_fields['commitid'] = workflow.commitId + } + if (workflow.revision) { + misc_fields['revision'] = workflow.revision + } + misc_fields['nxf_version'] = workflow.nextflow.version + misc_fields['nxf_build'] = workflow.nextflow.build + misc_fields['nxf_timestamp'] = workflow.nextflow.timestamp def msg_fields = [:] msg_fields['version'] = getWorkflowVersion() @@ -434,13 +450,13 @@ def imNotification(summary_params, hook_url) { def json_message = json_template.toString() // POST - def post = new URL(hook_url).openConnection(); + def post = new URL(hook_url).openConnection() post.setRequestMethod("POST") post.setDoOutput(true) post.setRequestProperty("Content-Type", "application/json") - post.getOutputStream().write(json_message.getBytes("UTF-8")); - def postRC = post.getResponseCode(); - if (! postRC.equals(200)) { - log.warn(post.getErrorStream().getText()); + post.getOutputStream().write(json_message.getBytes("UTF-8")) + def postRC = post.getResponseCode() + if (!postRC.equals(200)) { + log.warn(post.getErrorStream().getText()) } } diff --git a/subworkflows/nf-core/utils_nfschema_plugin/main.nf b/subworkflows/nf-core/utils_nfschema_plugin/main.nf new file mode 100644 index 000000000..4994303ea --- /dev/null +++ b/subworkflows/nf-core/utils_nfschema_plugin/main.nf @@ -0,0 +1,46 @@ +// +// Subworkflow that uses the nf-schema plugin to validate parameters and render the parameter summary +// + +include { paramsSummaryLog } from 'plugin/nf-schema' +include { validateParameters } from 'plugin/nf-schema' + +workflow UTILS_NFSCHEMA_PLUGIN { + + take: + input_workflow // workflow: the workflow object used by nf-schema to get metadata from the workflow + validate_params // boolean: validate the parameters + parameters_schema // string: path to the parameters JSON schema. 
+ // this has to be the same as the schema given to `validation.parametersSchema` + // when this input is empty it will automatically use the configured schema or + // "${projectDir}/nextflow_schema.json" as default. This input should not be empty + // for meta pipelines + + main: + + // + // Print parameter summary to stdout. This will display the parameters + // that differ from the default given in the JSON schema + // + if(parameters_schema) { + log.info paramsSummaryLog(input_workflow, parameters_schema:parameters_schema) + } else { + log.info paramsSummaryLog(input_workflow) + } + + // + // Validate the parameters using nextflow_schema.json or the schema + // given via the validation.parametersSchema configuration option + // + if(validate_params) { + if(parameters_schema) { + validateParameters(parameters_schema:parameters_schema) + } else { + validateParameters() + } + } + + emit: + dummy_emit = true +} + diff --git a/subworkflows/nf-core/utils_nfschema_plugin/meta.yml b/subworkflows/nf-core/utils_nfschema_plugin/meta.yml new file mode 100644 index 000000000..f7d9f0288 --- /dev/null +++ b/subworkflows/nf-core/utils_nfschema_plugin/meta.yml @@ -0,0 +1,35 @@ +# yaml-language-server: $schema=https://raw.githubusercontent.com/nf-core/modules/master/subworkflows/yaml-schema.json +name: "utils_nfschema_plugin" +description: Run nf-schema to validate parameters and create a summary of changed parameters +keywords: + - validation + - JSON schema + - plugin + - parameters + - summary +components: [] +input: + - input_workflow: + type: object + description: | + The workflow object of the used pipeline. + This object contains meta data used to create the params summary log + - validate_params: + type: boolean + description: Validate the parameters and error if invalid. + - parameters_schema: + type: string + description: | + Path to the parameters JSON schema. + This has to be the same as the schema given to the `validation.parametersSchema` config + option. When this input is empty it will automatically use the configured schema or + "${projectDir}/nextflow_schema.json" as default. The schema should not be given in this way + for meta pipelines. 
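(Editor's note: a minimal invocation sketch of the new subworkflow; the include path and argument values are illustrative assumptions, not taken from this diff. Passing an empty schema path makes nf-schema fall back to `validation.parametersSchema` or "${projectDir}/nextflow_schema.json", as described above.)

include { UTILS_NFSCHEMA_PLUGIN } from './subworkflows/nf-core/utils_nfschema_plugin'

workflow {
    UTILS_NFSCHEMA_PLUGIN (
        workflow,   // workflow object, read by nf-schema for run metadata
        true,       // validate_params: fail on invalid parameters
        ''          // empty -> use the configured schema or the default one
    )
}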
+output: + - dummy_emit: + type: boolean + description: Dummy emit to make nf-core subworkflows lint happy +authors: + - "@nvnieuwk" +maintainers: + - "@nvnieuwk" diff --git a/subworkflows/nf-core/utils_nfschema_plugin/tests/main.nf.test b/subworkflows/nf-core/utils_nfschema_plugin/tests/main.nf.test new file mode 100644 index 000000000..842dc432a --- /dev/null +++ b/subworkflows/nf-core/utils_nfschema_plugin/tests/main.nf.test @@ -0,0 +1,117 @@ +nextflow_workflow { + + name "Test Subworkflow UTILS_NFSCHEMA_PLUGIN" + script "../main.nf" + workflow "UTILS_NFSCHEMA_PLUGIN" + + tag "subworkflows" + tag "subworkflows_nfcore" + tag "subworkflows/utils_nfschema_plugin" + tag "plugin/nf-schema" + + config "./nextflow.config" + + test("Should run nothing") { + + when { + + params { + test_data = '' + } + + workflow { + """ + validate_params = false + input[0] = workflow + input[1] = validate_params + input[2] = "" + """ + } + } + + then { + assertAll( + { assert workflow.success } + ) + } + } + + test("Should validate params") { + + when { + + params { + test_data = '' + outdir = 1 + } + + workflow { + """ + validate_params = true + input[0] = workflow + input[1] = validate_params + input[2] = "" + """ + } + } + + then { + assertAll( + { assert workflow.failed }, + { assert workflow.stdout.any { it.contains('ERROR ~ Validation of pipeline parameters failed!') } } + ) + } + } + + test("Should run nothing - custom schema") { + + when { + + params { + test_data = '' + } + + workflow { + """ + validate_params = false + input[0] = workflow + input[1] = validate_params + input[2] = "${projectDir}/subworkflows/nf-core/utils_nfschema_plugin/tests/nextflow_schema.json" + """ + } + } + + then { + assertAll( + { assert workflow.success } + ) + } + } + + test("Should validate params - custom schema") { + + when { + + params { + test_data = '' + outdir = 1 + } + + workflow { + """ + validate_params = true + input[0] = workflow + input[1] = validate_params + input[2] = "${projectDir}/subworkflows/nf-core/utils_nfschema_plugin/tests/nextflow_schema.json" + """ + } + } + + then { + assertAll( + { assert workflow.failed }, + { assert workflow.stdout.any { it.contains('ERROR ~ Validation of pipeline parameters failed!') } } + ) + } + } +} diff --git a/subworkflows/nf-core/utils_nfschema_plugin/tests/nextflow.config b/subworkflows/nf-core/utils_nfschema_plugin/tests/nextflow.config new file mode 100644 index 000000000..0907ac58f --- /dev/null +++ b/subworkflows/nf-core/utils_nfschema_plugin/tests/nextflow.config @@ -0,0 +1,8 @@ +plugins { + id "nf-schema@2.1.0" +} + +validation { + parametersSchema = "${projectDir}/subworkflows/nf-core/utils_nfschema_plugin/tests/nextflow_schema.json" + monochromeLogs = true +} \ No newline at end of file diff --git a/subworkflows/nf-core/utils_nfvalidation_plugin/tests/nextflow_schema.json b/subworkflows/nf-core/utils_nfschema_plugin/tests/nextflow_schema.json similarity index 95% rename from subworkflows/nf-core/utils_nfvalidation_plugin/tests/nextflow_schema.json rename to subworkflows/nf-core/utils_nfschema_plugin/tests/nextflow_schema.json index 7626c1c93..331e0d2f4 100644 --- a/subworkflows/nf-core/utils_nfvalidation_plugin/tests/nextflow_schema.json +++ b/subworkflows/nf-core/utils_nfschema_plugin/tests/nextflow_schema.json @@ -1,10 +1,10 @@ { - "$schema": "http://json-schema.org/draft-07/schema", + "$schema": "https://json-schema.org/draft/2020-12/schema", "$id": "https://raw.githubusercontent.com/./master/nextflow_schema.json", "title": ". 
pipeline parameters", "description": "", "type": "object", - "definitions": { + "$defs": { "input_output_options": { "title": "Input/output options", "type": "object", @@ -87,10 +87,10 @@ }, "allOf": [ { - "$ref": "#/definitions/input_output_options" + "$ref": "#/$defs/input_output_options" }, { - "$ref": "#/definitions/generic_options" + "$ref": "#/$defs/generic_options" } ] } diff --git a/subworkflows/nf-core/utils_nfvalidation_plugin/main.nf b/subworkflows/nf-core/utils_nfvalidation_plugin/main.nf deleted file mode 100644 index 2585b65d1..000000000 --- a/subworkflows/nf-core/utils_nfvalidation_plugin/main.nf +++ /dev/null @@ -1,62 +0,0 @@ -// -// Subworkflow that uses the nf-validation plugin to render help text and parameter summary -// - -/* -======================================================================================== - IMPORT NF-VALIDATION PLUGIN -======================================================================================== -*/ - -include { paramsHelp } from 'plugin/nf-validation' -include { paramsSummaryLog } from 'plugin/nf-validation' -include { validateParameters } from 'plugin/nf-validation' - -/* -======================================================================================== - SUBWORKFLOW DEFINITION -======================================================================================== -*/ - -workflow UTILS_NFVALIDATION_PLUGIN { - - take: - print_help // boolean: print help - workflow_command // string: default commmand used to run pipeline - pre_help_text // string: string to be printed before help text and summary log - post_help_text // string: string to be printed after help text and summary log - validate_params // boolean: validate parameters - schema_filename // path: JSON schema file, null to use default value - - main: - - log.debug "Using schema file: ${schema_filename}" - - // Default values for strings - pre_help_text = pre_help_text ?: '' - post_help_text = post_help_text ?: '' - workflow_command = workflow_command ?: '' - - // - // Print help message if needed - // - if (print_help) { - log.info pre_help_text + paramsHelp(workflow_command, parameters_schema: schema_filename) + post_help_text - System.exit(0) - } - - // - // Print parameter summary to stdout - // - log.info pre_help_text + paramsSummaryLog(workflow, parameters_schema: schema_filename) + post_help_text - - // - // Validate parameters relative to the parameter JSON schema - // - if (validate_params){ - validateParameters(parameters_schema: schema_filename) - } - - emit: - dummy_emit = true -} diff --git a/subworkflows/nf-core/utils_nfvalidation_plugin/meta.yml b/subworkflows/nf-core/utils_nfvalidation_plugin/meta.yml deleted file mode 100644 index 3d4a6b04f..000000000 --- a/subworkflows/nf-core/utils_nfvalidation_plugin/meta.yml +++ /dev/null @@ -1,44 +0,0 @@ -# yaml-language-server: $schema=https://raw.githubusercontent.com/nf-core/modules/master/subworkflows/yaml-schema.json -name: "UTILS_NFVALIDATION_PLUGIN" -description: Use nf-validation to initiate and validate a pipeline -keywords: - - utility - - pipeline - - initialise - - validation -components: [] -input: - - print_help: - type: boolean - description: | - Print help message and exit - - workflow_command: - type: string - description: | - The command to run the workflow e.g. 
"nextflow run main.nf" - - pre_help_text: - type: string - description: | - Text to print before the help message - - post_help_text: - type: string - description: | - Text to print after the help message - - validate_params: - type: boolean - description: | - Validate the parameters and error if invalid. - - schema_filename: - type: string - description: | - The filename of the schema to validate against. -output: - - dummy_emit: - type: boolean - description: | - Dummy emit to make nf-core subworkflows lint happy -authors: - - "@adamrtalbot" -maintainers: - - "@adamrtalbot" - - "@maxulysse" diff --git a/subworkflows/nf-core/utils_nfvalidation_plugin/tests/main.nf.test b/subworkflows/nf-core/utils_nfvalidation_plugin/tests/main.nf.test deleted file mode 100644 index 5784a33f2..000000000 --- a/subworkflows/nf-core/utils_nfvalidation_plugin/tests/main.nf.test +++ /dev/null @@ -1,200 +0,0 @@ -nextflow_workflow { - - name "Test Workflow UTILS_NFVALIDATION_PLUGIN" - script "../main.nf" - workflow "UTILS_NFVALIDATION_PLUGIN" - tag "subworkflows" - tag "subworkflows_nfcore" - tag "plugin/nf-validation" - tag "'plugin/nf-validation'" - tag "utils_nfvalidation_plugin" - tag "subworkflows/utils_nfvalidation_plugin" - - test("Should run nothing") { - - when { - - params { - monochrome_logs = true - test_data = '' - } - - workflow { - """ - help = false - workflow_command = null - pre_help_text = null - post_help_text = null - validate_params = false - schema_filename = "$moduleTestDir/nextflow_schema.json" - - input[0] = help - input[1] = workflow_command - input[2] = pre_help_text - input[3] = post_help_text - input[4] = validate_params - input[5] = schema_filename - """ - } - } - - then { - assertAll( - { assert workflow.success } - ) - } - } - - test("Should run help") { - - - when { - - params { - monochrome_logs = true - test_data = '' - } - workflow { - """ - help = true - workflow_command = null - pre_help_text = null - post_help_text = null - validate_params = false - schema_filename = "$moduleTestDir/nextflow_schema.json" - - input[0] = help - input[1] = workflow_command - input[2] = pre_help_text - input[3] = post_help_text - input[4] = validate_params - input[5] = schema_filename - """ - } - } - - then { - assertAll( - { assert workflow.success }, - { assert workflow.exitStatus == 0 }, - { assert workflow.stdout.any { it.contains('Input/output options') } }, - { assert workflow.stdout.any { it.contains('--outdir') } } - ) - } - } - - test("Should run help with command") { - - when { - - params { - monochrome_logs = true - test_data = '' - } - workflow { - """ - help = true - workflow_command = "nextflow run noorg/doesntexist" - pre_help_text = null - post_help_text = null - validate_params = false - schema_filename = "$moduleTestDir/nextflow_schema.json" - - input[0] = help - input[1] = workflow_command - input[2] = pre_help_text - input[3] = post_help_text - input[4] = validate_params - input[5] = schema_filename - """ - } - } - - then { - assertAll( - { assert workflow.success }, - { assert workflow.exitStatus == 0 }, - { assert workflow.stdout.any { it.contains('nextflow run noorg/doesntexist') } }, - { assert workflow.stdout.any { it.contains('Input/output options') } }, - { assert workflow.stdout.any { it.contains('--outdir') } } - ) - } - } - - test("Should run help with extra text") { - - - when { - - params { - monochrome_logs = true - test_data = '' - } - workflow { - """ - help = true - workflow_command = "nextflow run noorg/doesntexist" - pre_help_text = "pre-help-text" - 
post_help_text = "post-help-text" - validate_params = false - schema_filename = "$moduleTestDir/nextflow_schema.json" - - input[0] = help - input[1] = workflow_command - input[2] = pre_help_text - input[3] = post_help_text - input[4] = validate_params - input[5] = schema_filename - """ - } - } - - then { - assertAll( - { assert workflow.success }, - { assert workflow.exitStatus == 0 }, - { assert workflow.stdout.any { it.contains('pre-help-text') } }, - { assert workflow.stdout.any { it.contains('nextflow run noorg/doesntexist') } }, - { assert workflow.stdout.any { it.contains('Input/output options') } }, - { assert workflow.stdout.any { it.contains('--outdir') } }, - { assert workflow.stdout.any { it.contains('post-help-text') } } - ) - } - } - - test("Should validate params") { - - when { - - params { - monochrome_logs = true - test_data = '' - outdir = 1 - } - workflow { - """ - help = false - workflow_command = null - pre_help_text = null - post_help_text = null - validate_params = true - schema_filename = "$moduleTestDir/nextflow_schema.json" - - input[0] = help - input[1] = workflow_command - input[2] = pre_help_text - input[3] = post_help_text - input[4] = validate_params - input[5] = schema_filename - """ - } - } - - then { - assertAll( - { assert workflow.failed }, - { assert workflow.stdout.any { it.contains('ERROR ~ ERROR: Validation of pipeline parameters failed!') } } - ) - } - } -} diff --git a/subworkflows/nf-core/utils_nfvalidation_plugin/tests/tags.yml b/subworkflows/nf-core/utils_nfvalidation_plugin/tests/tags.yml deleted file mode 100644 index 60b1cfff4..000000000 --- a/subworkflows/nf-core/utils_nfvalidation_plugin/tests/tags.yml +++ /dev/null @@ -1,2 +0,0 @@ -subworkflows/utils_nfvalidation_plugin: - - subworkflows/nf-core/utils_nfvalidation_plugin/** diff --git a/workflows/dia.nf b/workflows/dia.nf index cced0edeb..d7a62dae8 100644 --- a/workflows/dia.nf +++ b/workflows/dia.nf @@ -51,7 +51,7 @@ workflow DIA { DIANNCFG(meta) ch_software_versions = ch_software_versions - .mix(DIANNCFG.out.version.ifEmpty(null)) + .mix(DIANNCFG.out.versions.ifEmpty(null)) // // MODULE: SILICOLIBRARYGENERATION @@ -91,7 +91,7 @@ workflow DIA { DIANN_PRELIMINARY_ANALYSIS(ch_file_preparation_results.combine(speclib)) } ch_software_versions = ch_software_versions - .mix(DIANN_PRELIMINARY_ANALYSIS.out.version.ifEmpty(null)) + .mix(DIANN_PRELIMINARY_ANALYSIS.out.versions.ifEmpty(null)) // // MODULE: ASSEMBLE_EMPIRICAL_LIBRARY @@ -104,7 +104,7 @@ workflow DIA { speclib ) ch_software_versions = ch_software_versions - .mix(ASSEMBLE_EMPIRICAL_LIBRARY.out.version.ifEmpty(null)) + .mix(ASSEMBLE_EMPIRICAL_LIBRARY.out.versions.ifEmpty(null)) indiv_fin_analysis_in = ch_file_preparation_results .combine(ch_searchdb) .combine(ASSEMBLE_EMPIRICAL_LIBRARY.out.log) @@ -118,7 +118,7 @@ workflow DIA { // INDIVIDUAL_FINAL_ANALYSIS(indiv_fin_analysis_in) ch_software_versions = ch_software_versions - .mix(INDIVIDUAL_FINAL_ANALYSIS.out.version.ifEmpty(null)) + .mix(INDIVIDUAL_FINAL_ANALYSIS.out.versions.ifEmpty(null)) // // MODULE: DIANNSUMMARY @@ -141,7 +141,7 @@ workflow DIA { ch_searchdb) ch_software_versions = ch_software_versions.mix( - DIANNSUMMARY.out.version.ifEmpty(null) + DIANNSUMMARY.out.versions.ifEmpty(null) ) // @@ -153,10 +153,10 @@ workflow DIA { DIANNSUMMARY.out.pr_matrix, ch_ms_info, meta, ch_searchdb, - DIANNSUMMARY.out.version + DIANNSUMMARY.out.versions ) ch_software_versions = ch_software_versions - .mix(DIANNCONVERT.out.version.ifEmpty(null)) + .mix(DIANNCONVERT.out.versions.ifEmpty(null)) 
// // MODULE: MSSTATS @@ -165,7 +165,7 @@ workflow DIA { MSSTATS(DIANNCONVERT.out.out_msstats) ch_msstats_out = MSSTATS.out.msstats_csv ch_software_versions = ch_software_versions.mix( - MSSTATS.out.version.ifEmpty(null) + MSSTATS.out.versions.ifEmpty(null) ) } diff --git a/workflows/lfq.nf b/workflows/lfq.nf index 256f3fdc7..16f2ff712 100644 --- a/workflows/lfq.nf +++ b/workflows/lfq.nf @@ -38,7 +38,7 @@ workflow LFQ { // SUBWORKFLOWS: ID // ID(ch_file_preparation_results, ch_database_wdecoy, ch_expdesign) - ch_software_versions = ch_software_versions.mix(ID.out.version.ifEmpty(null)) + ch_software_versions = ch_software_versions.mix(ID.out.versions.ifEmpty(null)) // // SUBWORKFLOW: PROTEOMICSLFQ @@ -54,7 +54,7 @@ workflow LFQ { ch_expdesign, ch_database_wdecoy ) - ch_software_versions = ch_software_versions.mix(PROTEOMICSLFQ.out.version.ifEmpty(null)) + ch_software_versions = ch_software_versions.mix(PROTEOMICSLFQ.out.versions.ifEmpty(null)) // // MODULE: MSSTATS @@ -63,7 +63,7 @@ workflow LFQ { if(!params.skip_post_msstats && params.quantification_method == "feature_intensity"){ MSSTATS(PROTEOMICSLFQ.out.out_msstats) ch_msstats_out = MSSTATS.out.msstats_csv - ch_software_versions = ch_software_versions.mix(MSSTATS.out.version.ifEmpty(null)) + ch_software_versions = ch_software_versions.mix(MSSTATS.out.versions.ifEmpty(null)) } diff --git a/workflows/quantms.nf b/workflows/quantms.nf index 8ddb787e5..86fca0d1e 100644 --- a/workflows/quantms.nf +++ b/workflows/quantms.nf @@ -4,7 +4,7 @@ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ */ -include { paramsSummaryMap } from 'plugin/nf-validation' +include { paramsSummaryMap } from 'plugin/nf-schema' include { paramsSummaryMultiqc } from '../subworkflows/nf-core/utils_nfcore_pipeline' include { softwareVersionsToYAML } from '../subworkflows/nf-core/utils_nfcore_pipeline' include { methodsDescriptionText } from '../subworkflows/local/utils_nfcore_quantms_pipeline' @@ -56,7 +56,7 @@ workflow QUANTMS { INPUT_CHECK.out.ch_input_file, INPUT_CHECK.out.is_sdrf ) - ch_versions = ch_versions.mix(CREATE_INPUT_CHANNEL.out.version.ifEmpty(null)) + ch_versions = ch_versions.mix(CREATE_INPUT_CHANNEL.out.versions.ifEmpty(null)) // // SUBWORKFLOW: File preparation @@ -65,7 +65,7 @@ workflow QUANTMS { CREATE_INPUT_CHANNEL.out.ch_meta_config_iso.mix(CREATE_INPUT_CHANNEL.out.ch_meta_config_lfq).mix(CREATE_INPUT_CHANNEL.out.ch_meta_config_dia) ) - ch_versions = ch_versions.mix(FILE_PREPARATION.out.version.ifEmpty(null)) + ch_versions = ch_versions.mix(FILE_PREPARATION.out.versions.ifEmpty(null)) FILE_PREPARATION.out.results .branch { @@ -74,8 +74,6 @@ workflow QUANTMS { lfq: it[0].labelling_type.contains("label free") } .set{ch_fileprep_result} - - // // WORKFLOW: Run main nf-core/quantms analysis pipeline based on the quantification type // @@ -101,7 +99,7 @@ workflow QUANTMS { ch_db_for_decoy_creation_or_null ) ch_searchengine_in_db = DECOYDATABASE.out.db_decoy - ch_versions = ch_versions.mix(DECOYDATABASE.out.version.ifEmpty(null)) + ch_versions = ch_versions.mix(DECOYDATABASE.out.versions.ifEmpty(null)) } // This rescoring engine currently only is supported in id_only subworkflows via ms2rescore. @@ -113,7 +111,7 @@ workflow QUANTMS { log.warn "The rescoring engine is set to mokapot. This rescoring engine currently only supports psm-level-fdr via ms2rescore." 
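// Editor's aside — a self-contained sketch of the `.branch` routing applied to
// FILE_PREPARATION.out.results earlier in this workflow (the meta keys and
// labelling values here are illustrative assumptions, not the exact conditions):
Channel.of([[labelling_type: 'label free sample'], 'a.mzML'])
    .branch {
        iso: it[0].labelling_type.contains('tmt')
        lfq: it[0].labelling_type.contains('label free')
    }
    .set { ch_demo }
ch_demo.lfq.view()    // -> [[labelling_type:label free sample], a.mzML]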
} DDA_ID( FILE_PREPARATION.out.results, ch_searchengine_in_db, FILE_PREPARATION.out.spectrum_data, CREATE_INPUT_CHANNEL.out.ch_expdesign) - ch_versions = ch_versions.mix(DDA_ID.out.version.ifEmpty(null)) + ch_versions = ch_versions.mix(DDA_ID.out.versions.ifEmpty(null)) ch_ids_pmultiqc = ch_ids_pmultiqc.mix(DDA_ID.out.ch_pmultiqc_ids) ch_consensus_pmultiqc = ch_consensus_pmultiqc.mix(DDA_ID.out.ch_pmultiqc_consensus) } else { @@ -135,21 +133,19 @@ workflow QUANTMS { ch_pipeline_results = ch_pipeline_results.mix(DIA.out.diann_report) ch_msstats_in = ch_msstats_in.mix(DIA.out.msstats_in) ch_versions = ch_versions.mix(DIA.out.versions.ifEmpty(null)) - } - // - // Collate and save software versions - // - ch_versions - .branch { - yaml : it.asBoolean() - other : true - } - .set{ versions_clean } + // Other subworkflow will return null when performing another subworkflow due to unknown reason. + ch_versions = ch_versions.filter{ it != null } + + } - softwareVersionsToYAML(versions_clean.yaml) - .collectFile(storeDir: "${params.outdir}/pipeline_info", name: 'nf_core_pipeline_software_mqc_versions.yml', sort: true, newLine: true) - .set { ch_collated_versions } + softwareVersionsToYAML(ch_versions) + .collectFile( + storeDir: "${params.outdir}/pipeline_info", + name: 'nf_core_' + 'pipeline_software_' + 'mqc_' + 'versions.yml', + sort: true, + newLine: true + ).set { ch_collated_versions } ch_multiqc_files = Channel.empty() ch_multiqc_config = Channel.fromPath("$projectDir/assets/multiqc_config.yml", checkIfExists: true) @@ -177,7 +173,7 @@ workflow QUANTMS { emit: multiqc_report = SUMMARYPIPELINE.out.ch_pmultiqc_report.toList() - versions = versions_clean.yaml + versions = ch_versions } /* diff --git a/workflows/tmt.nf b/workflows/tmt.nf index f46d2ed80..ddeb0b40b 100644 --- a/workflows/tmt.nf +++ b/workflows/tmt.nf @@ -38,31 +38,31 @@ workflow TMT { // SUBWORKFLOWS: ID // ID(ch_file_preparation_results, ch_database_wdecoy, ch_expdesign) - ch_software_versions = ch_software_versions.mix(ID.out.version.ifEmpty(null)) + ch_software_versions = ch_software_versions.mix(ID.out.versions.ifEmpty(null)) // // SUBWORKFLOW: FEATUREMAPPER // FEATUREMAPPER(ch_file_preparation_results, ID.out.id_results) - ch_software_versions = ch_software_versions.mix(FEATUREMAPPER.out.version.ifEmpty(null)) + ch_software_versions = ch_software_versions.mix(FEATUREMAPPER.out.versions.ifEmpty(null)) // // MODULE: FILEMERGE // FILEMERGE(FEATUREMAPPER.out.id_map.collect()) - ch_software_versions = ch_software_versions.mix(FILEMERGE.out.version.ifEmpty(null)) + ch_software_versions = ch_software_versions.mix(FILEMERGE.out.versions.ifEmpty(null)) // // SUBWORKFLOW: PROTEININFERENCE // PROTEININFERENCE(FILEMERGE.out.id_merge) - ch_software_versions = ch_software_versions.mix(PROTEININFERENCE.out.version.ifEmpty(null)) + ch_software_versions = ch_software_versions.mix(PROTEININFERENCE.out.versions.ifEmpty(null)) // // SUBWORKFLOW: PROTEINQUANT // PROTEINQUANT(PROTEININFERENCE.out.epi_idfilter, ch_expdesign) - ch_software_versions = ch_software_versions.mix(PROTEINQUANT.out.version.ifEmpty(null)) + ch_software_versions = ch_software_versions.mix(PROTEINQUANT.out.versions.ifEmpty(null)) // // MODULE: MSSTATSTMT @@ -71,7 +71,7 @@ workflow TMT { if(!params.skip_post_msstats){ MSSTATSTMT(PROTEINQUANT.out.msstats_csv) ch_msstats_out = MSSTATSTMT.out.msstats_csv - ch_software_versions = ch_software_versions.mix(MSSTATSTMT.out.version.ifEmpty(null)) + ch_software_versions = ch_software_versions.mix(MSSTATSTMT.out.versions.ifEmpty(null)) } 
ID.out.psmrescoring_results
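// Editor's closing aside — the recurring change throughout this diff is the
// rename of module emissions from `version` to `versions`. A minimal process
// following that convention (tool name and outputs are illustrative):
process EXAMPLE_TOOL {
    output:
    path 'out.txt',      emit: results
    path 'versions.yml', emit: versions

    script:
    """
    echo data > out.txt
    cat <<-END_VERSIONS > versions.yml
    "${task.process}":
        example_tool: 1.0.0
    END_VERSIONS
    """
}
// Callers then aggregate with: ch_versions = ch_versions.mix(EXAMPLE_TOOL.out.versions)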