
Running the pipeline

[Figure: sf-tractomics pipeline schema]

Some of sf-tractomics' core functionalities are accessed and selected using profiles and arguments.

Processing profiles:

  1. -profile gpu

    Activates GPU-accelerated algorithms to drastically increase processing speed. Currently only NVIDIA GPUs are supported. Acceleration for FSL Eddy and local tractography is automatically enabled with this profile.

  2. -profile full_pipeline

    Runs the full sf-tractomics pipeline end-to-end, with all options enabled.

Configuration profiles:

  1. -profile docker (Recommended):

    Each process will be run using Docker containers.

  2. -profile apptainer (Recommended):

    Each process will be run using Apptainer images.

  3. -profile arm:

    Made to be used on computers with an ARM architecture (e.g., Apple M1 to M4 Macs). This is still experimental; depending on which profile you select, some containers might not be built for the ARM architecture. Feel free to open an issue if needed.

  4. -profile slurm:

    If selected, the SLURM job scheduler will be used to dispatch jobs.

Using either -profile docker or -profile apptainer is highly recommended, as it pins the version of the software used and ensures reproducibility. While it is technically possible to run the pipeline without Docker or Apptainer, the number of dependencies to install makes it simply not worth it.
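
Processing and configuration profiles can be combined in a single -profile argument, separated by commas (see the profiles section below for ordering rules). As a sketch, a GPU-accelerated run using Docker containers might look like the following (paths are placeholders):

Terminal window
nextflow run scilus/sf-tractomics -r 0.1.0 --input <input_directory> --outdir ./results -profile gpu,docker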

The typical command for running the pipeline is as follows:

Terminal window
nextflow run scilus/sf-tractomics -r 0.1.0 --input <input_directory> --outdir ./results -profile docker -resume -with-report <report_name>.html

This will launch the pipeline with the docker configuration profile: it will run preprocessing, align the anatomy in diffusion space, reconstruct diffusion profiles (DTI and fODF), and perform tractography (local and particle-filtering algorithms). Only three parameters need to be supplied at runtime, along with two recommended core Nextflow arguments:

  1. --input: path to your BIDS directory

    For more details on how to organize your input folder, please refer to the inputs section.

  2. --outdir: path to the output directory

    We do not specify a default for the output directory location, to ensure that users have total control over where the output files will be stored, as they can quickly grow into a large number of files. The recommended naming would be something along the lines of sf-tractomics-v{version}, where {version} could be 0.1.0 for example.

  3. -profile: profile to be run and container system to use

    The sf-tractomics processing steps are organized into profiles, giving users total control over which type of processing to run. One caveat is that users need to explicitly specify which profile to run. This is done via the -profile parameter. To view the available processing profiles, please see this section.

  4. -resume: Enables Nextflow's caching capabilities.

    This is a core Nextflow argument. It makes your pipeline resumable: if the pipeline fails for any reason, the following run will start back where it left off. For more details, see the core Nextflow arguments section.

  5. -with-report: Enables Nextflow's execution report.

    This is a core Nextflow argument. It generates an HTML report of your pipeline run. For more details, see the core Nextflow arguments section.

Note that the pipeline will create the following files in your working directory:

  • work/ # Nextflow working directory
  • .nextflow.log # Log file from Nextflow
  • sf-tractomics-v0.1.0/ # Results location (defined with --outdir)
    • pipeline_info/ # Global information on the run
    • stats/ # Global statistics on the run
    • sub-01/
      • # Other entities like session
        • anat/ # Clean T1w in diffusion space
        • dwi/ # All clean DWI files, models, tractograms, …
          • bundles/ # Extracted bundles when using bundling
      • xfm/ # Transforms between diffusion and anatomy
  • # Other Nextflow-related files

If you wish to repeatedly use the same parameters for multiple runs, rather than specifying each flag in the command, you can provide these in a params file.

Pipeline settings can be provided in a YAML or JSON file via -params-file <file>.

The above pipeline run specified with a params file in YAML format:

Terminal window
nextflow run scilus/sf-tractomics -r 0.1.0 -profile docker -params-file params.yaml

with:

params.yaml
input: '<input_directory>/'
outdir: './results/'
<...>
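
Since -params-file also accepts JSON, the same settings could be written as follows (a sketch mirroring the YAML example above, with additional parameters omitted):

params.json
{
  "input": "<input_directory>/",
  "outdir": "./results/"
}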

It is a good idea to specify the pipeline version when running the pipeline on your data. This ensures that a specific version of the pipeline code and software are used when you run your pipeline. If you keep using the same tag, you’ll be running the same version of the pipeline, even if there have been changes to the code since.

First, go to the scilus/sf-tractomics releases page and find the latest pipeline version, numeric only (e.g., 0.1.0). Then specify this when running the pipeline with -r (one hyphen), e.g., -r 0.1.0. Of course, you can switch to another version by changing the number after the -r flag.

This version number will be logged in reports when you run the pipeline, so that you’ll know what you used when you look back in the future. For example, at the bottom of the MultiQC reports.
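
If you want to fetch or update a specific revision ahead of a run, Nextflow's pull command accepts the same -r flag. A minimal sketch:

Terminal window
nextflow pull scilus/sf-tractomics -r 0.1.0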

To further assist in reproducibility, you can share and reuse parameter files to repeat pipeline runs with the same settings without having to write out a command with every single parameter.

-profile

Use this parameter to choose a configuration profile. Profiles can give configuration presets for different compute environments.

Several generic profiles are bundled with the pipeline, which instruct it to use software packaged using different methods (Docker, Singularity, and Apptainer) - see below.

The pipeline also dynamically loads configurations from https://github.com/nf-core/configs when it runs, making multiple config profiles for various institutional clusters available at run time. For more information and to check if your system is supported, please see the nf-core/configs documentation.

Note that multiple profiles can be loaded, for example: -profile tracking,docker - the order of arguments is important! They are loaded in sequence, so later profiles can overwrite earlier profiles. For a complete description of the available profiles, please see this section.

-resume

Specify this when restarting a pipeline. Nextflow will use cached results from any pipeline steps where the inputs are the same, continuing from where it got to previously. For inputs to be considered the same, not only must the names be identical but the files' contents as well. For more info about this parameter, see this blog post.

You can also supply a run name to resume a specific run: -resume [run-name]. Use the nextflow log command to show previous run names.
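
As a sketch, you could list previous run names and then resume a specific one (the run-name placeholder is illustrative):

Terminal window
nextflow log
nextflow run scilus/sf-tractomics -r 0.1.0 -profile docker -resume <run-name>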

-with-report

Nextflow can create an HTML execution report: a single document which includes many useful metrics about a workflow execution. The report is organised into three main sections: Summary, Resources and Tasks.

-c

Specify the path to a specific config file (this is a core Nextflow argument). See the nf-core website documentation for more information.
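
A minimal sketch, assuming a local file named custom.config:

Terminal window
nextflow run scilus/sf-tractomics -r 0.1.0 -profile docker -c custom.config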