Subworkflow definition

Defining a subworkflow for nf-neuro is much like creating a custom one for a project, with a few exceptions. For one, you cannot use modules or subworkflows from outside nf-neuro; if you need one, you'll have to contribute it as well. On the upside, that means you don't have to install anything: you can use components by referencing their path relative to the subworkflow:

  • For modules: ../../../modules/nf-neuro/
  • For subworkflows: ../

For the anatomical preprocessing subworkflow, add the following include statements at the top of the file:

// MODULES
include { DENOISING_NLMEANS } from '../../../modules/nf-neuro/denoising/nlmeans/main'
include { BETCROP_SYNTHBET } from '../../../modules/nf-neuro/betcrop/synthbet/main'
include { BETCROP_ANTSBET } from '../../../modules/nf-neuro/betcrop/antsbet/main'
include { PREPROC_N4 } from '../../../modules/nf-neuro/preproc/n4/main'

// SUBWORKFLOWS
include { ANATOMICAL_SEGMENTATION } from '../anatomical_segmentation/main'

workflow PREPROC_ANAT {
...

The job of the subworkflow is to define the flow of data between the components it includes. In the current case, you want to execute the following linear processing flow:

  1. Denoise the anatomical image with DENOISING_NLMEANS.
  2. Do a quick first brain extraction with BETCROP_SYNTHBET.
  3. Run intensity normalization with PREPROC_N4.
  4. Compute a robust brain mask with BETCROP_ANTSBET.
  5. Segment brain tissues with ANATOMICAL_SEGMENTATION.

Instead of defining input channels now, start by describing the chaining of channels in the main section. This will help identify which data is needed as input and what should be defined in the configuration. Take into account that you have access to an anatomical image channel, containing tuple elements of the shape:

[ [id: string], path(anat_image) ]

Add the channel to the inputs, then create an input channel for the nl-means module:

workflow PREPROC_ANAT {
    take:
    ch_anatomical // Structure: [ [id: string], path(anat_image) ]

    main:
    ch_versions = Channel.empty()

    ch_denoising_nlmeans = ch_anatomical
        .map{ meta, image -> [meta, image, [], []] }

    DENOISING_NLMEANS( ch_denoising_nlmeans )
    ch_versions = ch_versions.mix(DENOISING_NLMEANS.out.versions)
}
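If the shape produced by the map operation isn't obvious, here is a toy sketch (the file name is hypothetical, for illustration only); the empty lists act as placeholders for the module's unused optional inputs:

// Toy illustration (hypothetical data, not part of the subworkflow)
Channel.of( [ [id: 'sub-01'], 'sub-01_T1w.nii.gz' ] )
    .map{ meta, image -> [meta, image, [], []] }
    .view()
// Each emitted element now has the shape [ meta, image, [], [] ]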

Next, chain the output image resulting from the DENOISING_NLMEANS module into BETCROP_SYNTHBET, preparing its input channel as above:

workflow PREPROC_ANAT {
    take:
    ch_anatomical // Structure: [ [id: string], path(anat_image) ]

    main:
    ch_versions = Channel.empty()

    ch_denoising_nlmeans = ch_anatomical
        .map{ meta, image -> [meta, image, [], []] }

    DENOISING_NLMEANS( ch_denoising_nlmeans )
    ch_versions = ch_versions.mix(DENOISING_NLMEANS.out.versions)

    ch_betcrop_synthbet = DENOISING_NLMEANS.out.image
        .map{ meta, image -> [meta, image, []] }

    BETCROP_SYNTHBET( ch_betcrop_synthbet )
    ch_versions = ch_versions.mix(BETCROP_SYNTHBET.out.versions)
}

To call the next process, PREPROC_N4, you have to associate the outputs of two modules: DENOISING_NLMEANS and BETCROP_SYNTHBET. This is done with the join operator. It uses the first item in the tuple of each element, the meta map, to match them together and emits a single channel with the result:

    DENOISING_NLMEANS( ch_denoising_nlmeans )
    ch_versions = ch_versions.mix(DENOISING_NLMEANS.out.versions)

    ch_betcrop_synthbet = DENOISING_NLMEANS.out.image
        .map{ meta, image -> [meta, image, []] }

    BETCROP_SYNTHBET( ch_betcrop_synthbet )
    ch_versions = ch_versions.mix(BETCROP_SYNTHBET.out.versions)

    ch_preproc_n4 = DENOISING_NLMEANS.out.image
        .map{ meta, image -> [meta, image, []] }
        .join(BETCROP_SYNTHBET.out.brain_mask)

    PREPROC_N4( ch_preproc_n4 )
    ch_versions = ch_versions.mix(PREPROC_N4.out.versions)
}
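To see what join actually does to matching elements, here is a minimal toy sketch (file names are hypothetical, not actual module outputs). The matched element keeps the shared meta map first, then concatenates the remaining items of both tuples:

// Toy illustration of join (hypothetical data)
ch_images = Channel.of( [ [id: 'sub-01'], 'denoised.nii.gz', [] ] )
ch_masks  = Channel.of( [ [id: 'sub-01'], 'brain_mask.nii.gz' ] )

ch_images
    .join(ch_masks)
    .view()
// The emitted element has the shape [ meta, image, [], brain_mask ],
// which is exactly what PREPROC_N4 expects above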

The next module to chain is BETCROP_ANTSBET. It requires two special files to execute: an anatomical reference and a brain probability image from a template. It could be tempting to hard-code a template here, but that would be restrictive, since users might want to apply your subworkflow to images that don't fit your template (other species, ages or pathologies). Instead, define another input for it:

workflow PREPROC_ANAT {
    take:
    ch_anatomical // Structure: [ [id: string], path(anat_image) ]
    ch_template   // Structure: [ path(anat_ref), path(brain_proba) ]

    main:
    ...

    ch_preproc_n4 = DENOISING_NLMEANS.out.image
        .map{ meta, image -> [meta, image, []] }
        .join(BETCROP_SYNTHBET.out.brain_mask)

    PREPROC_N4( ch_preproc_n4 )
    ch_versions = ch_versions.mix(PREPROC_N4.out.versions)

    ch_betcrop_antsbet = PREPROC_N4.out.image
        .combine(ch_template)

    BETCROP_ANTSBET( ch_betcrop_antsbet )
    ch_versions = ch_versions.mix(BETCROP_ANTSBET.out.versions)
}
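Unlike join, the combine operator doesn't match on a key: it emits the cross product of both channels, appending the template files to every image element. A toy sketch (hypothetical file names, for illustration only):

// Toy illustration of combine (hypothetical data)
ch_images   = Channel.of( [ [id: 'sub-01'], 'preproc_sub-01.nii.gz' ],
                          [ [id: 'sub-02'], 'preproc_sub-02.nii.gz' ] )
ch_template = Channel.of( [ 'template_t1.nii.gz', 'brain_proba.nii.gz' ] )

ch_images
    .combine(ch_template)
    .view()
// Each emitted element has the shape [ meta, image, anat_ref, brain_proba ],
// so the single template pair is attached to every subject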

The last step to implement is the segmentation. It is particular, since it is a call to another subworkflow, not to a module. Unlike the previous calls, it takes more than one input channel (not that modules are exempt from this). For this subworkflow though, we'll only supply the image channel; all the others will be given empty channels:

    PREPROC_N4( ch_preproc_n4 )
    ch_versions = ch_versions.mix(PREPROC_N4.out.versions)

    ch_betcrop_antsbet = PREPROC_N4.out.image
        .combine(ch_template)

    BETCROP_ANTSBET( ch_betcrop_antsbet )
    ch_versions = ch_versions.mix(BETCROP_ANTSBET.out.versions)

    ANATOMICAL_SEGMENTATION(
        PREPROC_N4.out.image,
        Channel.empty(),
        Channel.empty(),
        Channel.empty()
    )

For users of your subworkflow to be able to access the results of the included components, you have to output them in the emit section. As a rule of thumb, each output channel should emit a single item per element, and each item should be associated with a meta map whenever it is scoped to a subject or sample.

The most boiled-down output section only emits the end results of the subworkflow. In the current case, that means:

  1. The image output by PREPROC_N4, which has also been denoised by NL-Means.
  2. The final brain mask computed by BETCROP_ANTSBET.
  3. The segmentation masks and maps generated by ANATOMICAL_SEGMENTATION.

    ANATOMICAL_SEGMENTATION(
        PREPROC_N4.out.image,
        Channel.empty(),
        Channel.empty(),
        Channel.empty()
    )

    emit:
    ch_anatomical = PREPROC_N4.out.image             // channel: [ [id: string], path(image) ]
    ch_brain_mask = BETCROP_ANTSBET.out.mask         // channel: [ [id: string], path(brain_mask) ]
    wm_mask = ANATOMICAL_SEGMENTATION.out.wm_mask    // channel: [ [id: string], path(wm_mask) ]
    gm_mask = ANATOMICAL_SEGMENTATION.out.gm_mask    // channel: [ [id: string], path(gm_mask) ]
    csf_mask = ANATOMICAL_SEGMENTATION.out.csf_mask  // channel: [ [id: string], path(csf_mask) ]
    wm_map = ANATOMICAL_SEGMENTATION.out.wm_map      // channel: [ [id: string], path(wm_map) ]
    gm_map = ANATOMICAL_SEGMENTATION.out.gm_map      // channel: [ [id: string], path(gm_map) ]
    csf_map = ANATOMICAL_SEGMENTATION.out.csf_map    // channel: [ [id: string], path(csf_map) ]
    versions = ch_versions                           // channel: [ path(versions.yml) ]
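Once emitted, downstream workflows access these channels through the subworkflow's out property. As a hypothetical usage sketch (the include path and file names are placeholders, not from the tutorial):

// Hypothetical caller, for illustration only
include { PREPROC_ANAT } from './subworkflows/nf-neuro/preproc_anat/main'

workflow {
    ch_anatomical = Channel.of( [ [id: 'sub-01'], file('sub-01_T1w.nii.gz') ] )
    ch_template   = Channel.of( [ file('template_t1.nii.gz'), file('brain_proba.nii.gz') ] )

    PREPROC_ANAT( ch_anatomical, ch_template )

    // Pick up any emitted channel by its name
    PREPROC_ANAT.out.ch_brain_mask.view()
}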

With that, you have implemented a basic form of the anatomical preprocessing subworkflow. In the next sections of the tutorial, you will add optional inputs, compose the configuration and link it to the included components, document the different sections and create test cases. For now, here is the full implementation:

// MODULES
include { DENOISING_NLMEANS } from '../../../modules/nf-neuro/denoising/nlmeans/main'
include { BETCROP_SYNTHBET } from '../../../modules/nf-neuro/betcrop/synthbet/main'
include { BETCROP_ANTSBET } from '../../../modules/nf-neuro/betcrop/antsbet/main'
include { PREPROC_N4 } from '../../../modules/nf-neuro/preproc/n4/main'

// SUBWORKFLOWS
include { ANATOMICAL_SEGMENTATION } from '../anatomical_segmentation/main'

workflow PREPROC_ANAT {
    take:
    ch_anatomical // Structure: [ [id: string], path(anat_image) ]
    ch_template   // Structure: [ path(anat_ref), path(brain_proba) ]

    main:
    ch_versions = Channel.empty()

    ch_denoising_nlmeans = ch_anatomical
        .map{ meta, image -> [meta, image, [], []] }

    DENOISING_NLMEANS( ch_denoising_nlmeans )
    ch_versions = ch_versions.mix(DENOISING_NLMEANS.out.versions)

    ch_betcrop_synthbet = DENOISING_NLMEANS.out.image
        .map{ meta, image -> [meta, image, []] }

    BETCROP_SYNTHBET( ch_betcrop_synthbet )
    ch_versions = ch_versions.mix(BETCROP_SYNTHBET.out.versions)

    ch_preproc_n4 = DENOISING_NLMEANS.out.image
        .map{ meta, image -> [meta, image, []] }
        .join(BETCROP_SYNTHBET.out.brain_mask)

    PREPROC_N4( ch_preproc_n4 )
    ch_versions = ch_versions.mix(PREPROC_N4.out.versions)

    ch_betcrop_antsbet = PREPROC_N4.out.image
        .combine(ch_template)

    BETCROP_ANTSBET( ch_betcrop_antsbet )
    ch_versions = ch_versions.mix(BETCROP_ANTSBET.out.versions)

    ANATOMICAL_SEGMENTATION(
        PREPROC_N4.out.image,
        Channel.empty(),
        Channel.empty(),
        Channel.empty()
    )

    emit:
    ch_anatomical = PREPROC_N4.out.image             // channel: [ [id: string], path(image) ]
    ch_brain_mask = BETCROP_ANTSBET.out.mask         // channel: [ [id: string], path(brain_mask) ]
    wm_mask = ANATOMICAL_SEGMENTATION.out.wm_mask    // channel: [ [id: string], path(wm_mask) ]
    gm_mask = ANATOMICAL_SEGMENTATION.out.gm_mask    // channel: [ [id: string], path(gm_mask) ]
    csf_mask = ANATOMICAL_SEGMENTATION.out.csf_mask  // channel: [ [id: string], path(csf_mask) ]
    wm_map = ANATOMICAL_SEGMENTATION.out.wm_map      // channel: [ [id: string], path(wm_map) ]
    gm_map = ANATOMICAL_SEGMENTATION.out.gm_map      // channel: [ [id: string], path(gm_map) ]
    csf_map = ANATOMICAL_SEGMENTATION.out.csf_map    // channel: [ [id: string], path(csf_map) ]
    versions = ch_versions                           // channel: [ path(versions.yml) ]
}