
Part 5: Install and use a nf-neuro subworkflow

1. List and install a nf-neuro subworkflow

Similar to modules, you can list the subworkflows available in a remote repository, or those installed locally in your pipeline, using the nf-core subworkflows list command. To view all available subworkflows, run nf-core subworkflows list remote:

nf-core subworkflows list remote

Now, let’s install the PREPROC_T1 subworkflow, designed to preprocess T1-weighted MRI data.
You can install the subworkflow using the nf-core subworkflows install command. The subworkflow will be installed in the ./subworkflows/nf-neuro/ directory.

Terminal window
nf-core subworkflows install preproc_t1
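Once installed, the subworkflow is included in main.nf the same way as a module, pointing at the directory created by the install command:

```nextflow
include { PREPROC_T1 } from './subworkflows/nf-neuro/preproc_t1/main'
```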

2. Add new data to the input pipeline structure

To include a T1w image in the pipeline, we will follow the same approach as in Part 2 for the DWI images. The required steps are:

  1. Add the T1w image path to the get_data input structure.
  2. Emit a new output channel called anat.
  3. Include the T1w channel data in the workflow and visualize it.
  4. Run the pipeline using nextflow run main.nf --input data -profile docker.
Expected main.nf
#!/usr/bin/env nextflow

include { RECONST_DTIMETRICS } from './modules/nf-neuro/reconst/dtimetrics/main'
include { DENOISING_MPPCA } from './modules/nf-neuro/denoising/mppca/main'
include { PREPROC_T1 } from './subworkflows/nf-neuro/preproc_t1/main'

workflow get_data {
    main:
        if ( !params.input ) {
            log.info "You must provide an input directory containing all images using:"
            log.info ""
            log.info "    --input=/path/to/[input]   Input directory containing your subjects"
            log.info "                               |"
            log.info "                               ├-- S1"
            log.info "                               |    ├-- *dwi.nii.gz"
            log.info "                               |    ├-- *dwi.bval"
            log.info "                               |    ├-- *dwi.bvec"
            log.info "                               |    └-- *t1.nii.gz"
            log.info "                               └-- S2"
            log.info "                                    ├-- *dwi.nii.gz"
            log.info "                                    ├-- *bval"
            log.info "                                    ├-- *bvec"
            log.info "                                    └-- *t1.nii.gz"
            log.info ""
            error "Please resubmit your command with the previous file structure."
        }

        input = file(params.input)

        // ** Loading DWI files. ** //
        dwi_channel = Channel.fromFilePairs("$input/**/**/dwi/*dwi.{nii.gz,bval,bvec}", size: 3, flat: true)
            { it.parent.parent.parent.name + "_" + it.parent.parent.name } // Set the subject filename as subjectID + '_' + session.
            .map{ sid, bvals, bvecs, dwi -> [ [id: sid], dwi, bvals, bvecs ] } // Reordering the inputs.

        // ** Loading T1 file. ** //
        t1_channel = Channel.fromFilePairs("$input/**/**/anat/*T1w.nii.gz", size: 1, flat: true)
            { it.parent.parent.parent.name + "_" + it.parent.parent.name } // Set the subject filename as subjectID + '_' + session.
            .map{ sid, t1 -> [ [id: sid], t1 ] }

    emit:
        dwi = dwi_channel
        anat = t1_channel
}

workflow {
    inputs = get_data()

    // Use multiMap to split the tuple into a multi-input structure.
    ch_dwi_bvalbvec = inputs.dwi
        .multiMap { meta, dwi, bval, bvec ->
            dwi: [ meta, dwi ]
            bvs_files: [ meta, bval, bvec ]
            dwi_bval_bvec: [ meta, dwi, bval, bvec ]
        }

    // Denoising DWI
    input_dwi_denoise = ch_dwi_bvalbvec.dwi
        .map{ it + [[]] }
    DENOISING_MPPCA( input_dwi_denoise )

    // Fetch specific output
    ch_dwi_denoised = DENOISING_MPPCA.out.image

    // Input DTI update with DWI denoised output
    input_dti_denoised = ch_dwi_denoised
        .join(ch_dwi_bvalbvec.bvs_files)
        .map{ it + [[]] }

    // DTI-derived metrics
    RECONST_DTIMETRICS( input_dti_denoised )

    // Preprocessing T1 images
    inputs.anat.view()
}

Now, you can run the pipeline:

nextflow run main.nf --input data -profile docker

3. Prepare the input structure for the subworkflow and include it in your main.nf

a. Use the subworkflow as follows in your main.nf:

PREPROC_T1( input_channel(s) )

b. Prepare structure input for the subworkflow

As with modules, the API documentation also covers subworkflows, describing their inputs, outputs, and the modules they use (listed in the Components section): PREPROC_T1.

The PREPROC_T1 subworkflow defines 7 input channels, of which only ch_image is mandatory. Just as an empty list stands in for optional module inputs, an empty channel lets the subworkflow run without supplying all optional inputs.

  • ch_image (Mandatory)
  • ch_template (Optional)
  • ch_probability_map (Optional)
  • ch_mask_nlmeans (Optional)
  • ch_ref_n4 (Optional)
  • ch_ref_resample (Optional)
  • ch_weights (Optional)

Using the Channel.empty() function for optional inputs, you can define the input structure for PREPROC_T1 using the newly fetched inputs.anat channel.

PREPROC_T1(
    inputs.anat,
    Channel.empty(),
    Channel.empty(),
    Channel.empty(),
    Channel.empty(),
    Channel.empty(),
    Channel.empty()
)
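If you later need one of the optional inputs, you can swap the corresponding Channel.empty() for a real channel. As a sketch of the pattern (the params.template parameter here is hypothetical, shown only for illustration):

```nextflow
// Hypothetical optional input: use the file when given, otherwise fall back
// to an empty channel so the subworkflow still runs.
ch_template = params.template
    ? Channel.fromPath(params.template)
    : Channel.empty()
```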

4. Configure your subworkflow

Configuring a subworkflow requires understanding the modules it uses and their associated parameters. To use them properly, we will configure each module similarly to what we did for RECONST_DTIMETRICS.

The PREPROC_T1 API documentation lists the modules included in the subworkflow in the Components section:

  • denoising/nlmeans
  • preproc/n4
  • image/resample
  • betcrop/antsbet
  • betcrop/synthbet
  • image/cropvolume

However, our current API documentation does not provide a list of the parameters for each module within a subworkflow. This is a work in progress and will be implemented in the near future to facilitate the usability of our subworkflows. In the meantime, you can find an example of the parameters in the nextflow.config file within the tests (./subworkflows/nf-neuro/preproc_t1/tests/nextflow.config).

For the purpose of this tutorial, we will only enable the denoising and synthbet options (set to true) and disable all other options (set to false).

Add the following lines to your nextflow.config file after params.output = 'result':

// ** Subworkflow PREPROC T1 **
params.preproc_t1_run_denoising = true
params.preproc_t1_run_N4 = false
params.preproc_t1_run_resampling = false
params.preproc_t1_run_ants_bet = false
params.preproc_t1_run_synthbet = true
params.preproc_t1_run_crop = false
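These flags are read inside the subworkflow to decide which modules execute. A simplified sketch of the gating pattern (not the actual PREPROC_T1 source, shown only to convey the idea):

```nextflow
// Each optional step runs only when its flag is enabled.
if ( params.preproc_t1_run_denoising ) {
    // ... the denoising module is wired into the processing chain here
}
```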

Since we are disabling all options except denoising and synthbet, we only need to include parameters specific to those two modules. Note that the denoising module does not require any specific parameters, so it can be omitted from the configuration.

Now, update your nextflow.config file by adding the specific options for the subworkflow in the process section, after the DENOISING_MPPCA block:

withName: "BETCROP_SYNTHBET" {
    memory = "4G"
    ext.nocsf = false
}
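Note that withName selectors belong inside the process scope of nextflow.config. If you are adding this next to the existing DENOISING_MPPCA block you are already inside that scope, but for reference the full structure looks like:

```nextflow
process {
    withName: "BETCROP_SYNTHBET" {
        memory = "4G"       // Resource request for the SynthBet process.
        ext.nocsf = false   // Module option passed through the ext map.
    }
}
```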

5. Verify your files

#!/usr/bin/env nextflow

include { RECONST_DTIMETRICS } from './modules/nf-neuro/reconst/dtimetrics/main'
include { DENOISING_MPPCA } from './modules/nf-neuro/denoising/mppca/main'
include { PREPROC_T1 } from './subworkflows/nf-neuro/preproc_t1/main'

workflow get_data {
    main:
        if ( !params.input ) {
            log.info "You must provide an input directory containing all images using:"
            log.info ""
            log.info "    --input=/path/to/[input]   Input directory containing your subjects"
            log.info "                               |"
            log.info "                               ├-- S1"
            log.info "                               |    ├-- *dwi.nii.gz"
            log.info "                               |    ├-- *dwi.bval"
            log.info "                               |    ├-- *dwi.bvec"
            log.info "                               |    └-- *t1.nii.gz"
            log.info "                               └-- S2"
            log.info "                                    ├-- *dwi.nii.gz"
            log.info "                                    ├-- *bval"
            log.info "                                    ├-- *bvec"
            log.info "                                    └-- *t1.nii.gz"
            log.info ""
            error "Please resubmit your command with the previous file structure."
        }

        input = file(params.input)

        // ** Loading DWI files. ** //
        dwi_channel = Channel.fromFilePairs("$input/**/**/dwi/*dwi.{nii.gz,bval,bvec}", size: 3, flat: true)
            { it.parent.parent.parent.name + "_" + it.parent.parent.name } // Set the subject filename as subjectID + '_' + session.
            .map{ sid, bvals, bvecs, dwi -> [ [id: sid], dwi, bvals, bvecs ] } // Reordering the inputs.

        // ** Loading T1 file. ** //
        t1_channel = Channel.fromFilePairs("$input/**/**/anat/*T1w.nii.gz", size: 1, flat: true)
            { it.parent.parent.parent.name + "_" + it.parent.parent.name } // Set the subject filename as subjectID + '_' + session.
            .map{ sid, t1 -> [ [id: sid], t1 ] }

    emit:
        dwi = dwi_channel
        anat = t1_channel
}

workflow {
    inputs = get_data()

    // Use multiMap to split the tuple into a multi-input structure.
    ch_dwi_bvalbvec = inputs.dwi
        .multiMap { meta, dwi, bval, bvec ->
            dwi: [ meta, dwi ]
            bvs_files: [ meta, bval, bvec ]
            dwi_bval_bvec: [ meta, dwi, bval, bvec ]
        }

    // Denoising DWI
    input_dwi_denoise = ch_dwi_bvalbvec.dwi
        .map{ it + [[]] }
    DENOISING_MPPCA( input_dwi_denoise )

    // Fetch specific output
    ch_dwi_denoised = DENOISING_MPPCA.out.image

    // Input DTI update with DWI denoised output
    input_dti_denoised = ch_dwi_denoised
        .join(ch_dwi_bvalbvec.bvs_files)
        .map{ it + [[]] }

    // DTI-derived metrics
    RECONST_DTIMETRICS( input_dti_denoised )

    // Preprocessing T1 images
    // inputs.anat.view()
    PREPROC_T1(
        inputs.anat,
        Channel.empty(),
        Channel.empty(),
        Channel.empty(),
        Channel.empty(),
        Channel.empty(),
        Channel.empty()
    )
}

6. Run Nextflow

Now, you can run the pipeline again, adding -resume so cached results from the previous run are reused:

nextflow run main.nf --input data -profile docker -resume