
Towards Contrast-agnostic Soft Segmentation of the Spinal Cord


Official repository for contrast-agnostic segmentation of the spinal cord.

This repository contains all the code for training the contrast-agnostic model, which is based on the nnUNetv2 framework. The segmentation model is available as part of the Spinal Cord Toolbox (SCT) via the sct_deepseg functionality.
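For example, with SCT installed, the model can be applied to an image from the command line. This is a minimal sketch: the file names are hypothetical and the exact task name may differ depending on your SCT version.

# segment the spinal cord using the contrast-agnostic model
sct_deepseg -i sub-001_T2w.nii.gz -task seg_sc_contrast_agnostic -o sub-001_T2w_seg.nii.gz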

Citation Information

If you find this work and/or code useful for your research, please cite our papers:

@article{BEDARD2025103473,
  title = {Towards contrast-agnostic soft segmentation of the spinal cord},
  journal = {Medical Image Analysis},
  volume = {101},
  pages = {103473},
  year = {2025},
  issn = {1361-8415},
  doi = {https://doi.org/10.1016/j.media.2025.103473},
  url = {https://www.sciencedirect.com/science/article/pii/S1361841525000210},
  author = {Sandrine Bédard* and Enamundram Naga Karthik* and Charidimos Tsagkas and Emanuele Pravatà and Cristina Granziera and Andrew Smith and Kenneth Arnold {Weber II} and Julien Cohen-Adad},
  note = {Shared authorship -- authors contributed equally}
}

@article{Karthik2026,
  title = {Monitoring morphometric drift in lifelong learning segmentation of the spinal cord},
  journal = {Imaging Neuroscience},
  volume = {4},
  pages = {IMAG.a.1105},
  year = {2026},
  doi = {https://doi.org/10.1162/IMAG.a.1105},
  author = {Enamundram Naga Karthik and Sandrine Bédard and Jan Valošek and Christoph S Aigner and Elise Bannier and Josef Bednařík and Virginie Callot and Anna Combes and Armin Curt and Gergely David and Falk Eippert and Lynn Farner and Michael G Fehlings and Patrick Freund and Tobias Granberg and Cristina Granziera and Ulrike Horn and Tomáš Horák and Suzanne Humphreys and Markus Hupp and Anne Kerbrat and Nawal Kinany and Shannon Kolind and Petr Kudlička and Anna Lebret and Lisa Eunyoung Lee and Caterina Mainero and Allan R Martin and Megan McGrath and Govind Nair and Kristin P O'Grady and Jiwon Oh and Russell Ouellette and Nikolai Pfender and Dario Pfyffer and Pierre-François Pradat and Alexandre Prat and Emanuele Pravatà and Daniel S Reich and Ilaria Ricchi and Naama Rotem-Kohavi and Simon Schading-Sassenhausen and Maryam Seif and Andrew Smith and Seth A Smith and Grace Sweeney and Roger Tam and Anthony Traboulsee and Constantina Andrada Treaba and Charidimos Tsagkas and Zachary Vavasour and Dimitri Van De Ville and Kenneth Arnold Weber II and Sarath Chandar and Julien Cohen-Adad}
}

[Figure: overview of the lifelong learning framework for monitoring morphometric drift]

Table of contents

  • Training the model
  • Lifelong learning for monitoring morphometric drift

Training the model

Step 1: Configuring the environment

  1. Create a conda environment with the following command:
conda create -n contrast_agnostic python=3.9.16
  2. Activate the environment with the following command:
conda activate contrast_agnostic
  3. Clone the repository with the following command:
git clone https://github.com/sct-pipeline/contrast-agnostic-softseg-spinalcord.git
  4. Install the required packages with the following commands:
cd contrast-agnostic-softseg-spinalcord
pip install -r nnUnet/requirements.txt

Note: The requirements.txt does NOT install nnUNet. nnUNet has to be installed separately, which can be done within the conda environment created above; see the nnUNet repository for installation instructions. Please note that the nnUNet version used in this work is tag v2.5.1.
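For reference, a minimal installation sketch, assuming the official MIC-DKFZ/nnUNet repository and that the v2.5.1 tag is checked out before installing:

# clone nnUNet and install it (editable) into the active conda environment
git clone https://github.com/MIC-DKFZ/nnUNet.git
cd nnUNet
git checkout v2.5.1   # version tag used in this work
pip install -e .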

Step 2: Train the model

The script scripts/train_contrast_agnostic.sh downloads the datasets from git-annex, creates datalists, converts them into the nnUNet-specific format, and trains the model. More instructions about which variables to set and which datasets to use can be found in the script itself. Once these variables are set, run:

bash scripts/train_contrast_agnostic.sh

Important

The script train_contrast_agnostic.sh will NOT run out-of-the-box. User-specific variables, such as the path for downloading the datasets and the path to the nnUNet repository, need to be set. Information about which variables to set can be found in the script itself.

Important

You might need to run the train_contrast_agnostic.sh script in a virtual terminal such as tmux or screen.
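For example, the training can be launched inside a detachable tmux session (the session name below is arbitrary):

# start a named tmux session and launch training inside it
tmux new -s contrast_agnostic
bash scripts/train_contrast_agnostic.sh
# detach with Ctrl-b d; reattach later with: tmux attach -t contrast_agnostic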

Lifelong learning for monitoring morphometric drift

This section provides some notes on the lifelong/continuous learning framework for automatically monitoring morphometric drift between various versions of the segmentation model. Once a new segmentation model is developed and released, a GitHub Actions (GHA) workflow is triggered that automatically computes the spinal cord cross-sectional area (CSA) obtained with the current (new) version of the model and compares it against previously released models.

For a fair comparison, we evaluate the various model versions on the frozen test set of the spine-generic data-multi-subject (public) dataset. The test split can be found in the scripts/spine_generic_test_split_for_csa_drift_monitoring.yaml file.

Step 1: Creating a new release

Here are the steps involved in the workflow:

  • After training a new segmentation model, create a release with the following naming convention:
    • Tag name: vX.Y (e.g. v2.0, v3.0, etc.), where X is the major update (i.e. architectural/training-strategy change) and Y is the minor update (addition of new contrasts and/or pathologies).
    • Release title: contrast-agnostic-spinal-cord-segmentation vX.Y (note that the title can be anything; the GHA workflow does not depend on it).
    • Release description: A drop-down summary of the dataset characteristics. The details of the datasets used during training are automatically generated by the nnUnet/utils.py script.
    • Release assets: The model weights and the training logs (if needed) are attached to the release. The entire output folder of the nnUNet model, containing the folds, should be uploaded. The naming convention for the .zip file should be model_contrast_agnostic_<date-the-model-was-trained-on>.zip.
    • Once the above steps are completed, publish the release.
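As an illustration only (the tag, title, notes, and file name below are hypothetical), the release could also be created from the command line with the GitHub CLI:

# create and publish a release, attaching the zipped nnUNet output folder
gh release create v2.1 model_contrast_agnostic_2024-08-01.zip \
    --title "contrast-agnostic-spinal-cord-segmentation v2.1" \
    --notes "Summary of dataset characteristics (generated via nnUnet/utils.py)"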

Step 2: The GHA workflow

  • Once published, the release triggers a GHA workflow. The workflow is a .yml file located in the .github/workflows folder. At a high level, it is divided into the following jobs:
    • Job 1: Clones the dataset via git-annex and downloads only the subjects in the test split. The dataset is cached for future use.
    • Job 2: The test set (n=49) is split into batches of 3 subjects for parallel processing. The model is downloaded from the release, and each job (i.e., a runner) is responsible for computing the C2-C3 CSA for all 6 contrasts (a sketch of the CSA computation is shown after this list).
    • Job 3: The output .csv files are aggregated across batches and merged into a single CSV file. The file is saved with the naming convention csa_c2c3__model_<tag-name>.csv (note that the tag name defined in Step 1 is used here) and uploaded to the release.
    • Job 4: All csa_c2c3__model_<tag-name>.csv files corresponding to the current and previous releases are downloaded. Then, violin plots comparing the CSA per contrast (for each model) and the standard deviation (STD) of CSA across contrasts are generated. The plots are saved in the morphometric_plots.zip archive and uploaded to the existing release.
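For context, a minimal sketch of how the C2-C3 CSA can be computed from a cord segmentation with SCT (file names are hypothetical, and the exact commands used in the workflow may differ):

# compute CSA averaged over vertebral levels C2-C3
# (assumes a vertebral labeling file produced by sct_label_vertebrae already exists)
sct_process_segmentation -i sub-001_T2w_seg.nii.gz -vertfile sub-001_T2w_seg_labeled.nii.gz -vert 2:3 -o csa_c2c3.csv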

In summary, once a new model is released, the GitHub Actions workflow automatically generates the plots for monitoring morphometric drift between the various versions of the segmentation model.
