# What's new in 1.2 🎉🎉

- Auto3DSeg enhancements and benchmarks
- nnUNet integration
- TensorRT-optimized networks
- MetricsReloaded integration
- Bundle workflow APIs
- Modular patch inference

## Auto3DSeg enhancements and benchmarks
Auto3DSeg is an innovative solution for 3D medical image segmentation that leverages advances in MONAI and GPU computing for algorithm development and deployment.
Key improvements in this release include:
- New modules in the training pipelines, such as automated GPU-based hyperparameter scaling, early stopping, and dynamic validation frequency.
- Multi-GPU parallelism for all GPU-related components, including data analysis, model training, and model ensembling, to improve overall performance and capability.
- Benchmarks of the algorithms' computational efficiency on the TotalSegmentator dataset, which contains over 1,000 CT images.
- Multi-node training, which significantly reduces model training time.
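The early-stopping mechanism mentioned above can be illustrated with a minimal sketch (the class and names here are hypothetical, for illustration only; this is not the Auto3DSeg implementation):

```python
# A conceptual sketch of early stopping (hypothetical; not the Auto3DSeg
# implementation): stop training when the validation metric has not
# improved for `patience` consecutive evaluations.
class EarlyStopper:
    def __init__(self, patience: int = 3, min_delta: float = 0.0):
        self.patience = patience
        self.min_delta = min_delta
        self.best = float("-inf")
        self.bad_rounds = 0

    def step(self, metric: float) -> bool:
        """Record a validation metric; return True if training should stop."""
        if metric > self.best + self.min_delta:
            self.best = metric
            self.bad_rounds = 0
        else:
            self.bad_rounds += 1
        return self.bad_rounds >= self.patience
```

The training loop would call `step` after each validation round and break out when it returns `True`, saving the compute that would otherwise be spent on epochs that no longer improve the model.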


## nnUNet integration
The integration introduces a new class, `nnUNetV2Runner`, which leverages Python APIs to facilitate model training, validation,
and ensembling, and simplifies the data conversion process for users.
Benchmarking results on various public datasets confirm that `nnUNetV2Runner` performs as expected.
To install and use the system, users prepare a data list and create an `input.yaml` file.
The framework also allows automatic execution of the entire nnU-Net pipeline, from model training to ensembling,
with options to specify the number of epochs. Users can access APIs for training, dataset conversion, data preprocessing, and other components.
Please check out [the tutorials](https://github.com/Project-MONAI/tutorials/tree/main/nnunet) for more details.
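As a sketch, an `input.yaml` for the runner might look like the following (the paths are placeholders and the exact schema should be checked against the tutorials):

```yaml
# Hypothetical minimal input.yaml for nnUNetV2Runner; paths are
# placeholders and the key names should be verified against the tutorials.
modality: CT
datalist: "./msd_task09_spleen_folds.json"   # data list describing the folds
dataroot: "/workspace/data/Task09_Spleen"    # root directory of the dataset
```

With such a file in place, the runner can drive the whole pipeline from data conversion through ensembling.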

## TensorRT-optimized networks
[NVIDIA TensorRT](https://developer.nvidia.com/tensorrt) is an SDK for high-performance deep learning inference.
It includes a deep learning inference optimizer and runtime that delivers low latency and high throughput for inference applications,
and can accelerate a deep learning model's forward computation on NVIDIA GPUs.
In this release, the `trt_export` API for exporting TensorRT engine-based TorchScript models has been integrated into the MONAI bundle module,
and users can export their bundles with it. A few bundles in the MONAI model zoo,
such as [spleen_ct_segmentation](https://github.com/Project-MONAI/model-zoo/tree/dev/models/spleen_ct_segmentation)
and [endoscopic_tool_segmentation](https://github.com/Project-MONAI/model-zoo/tree/dev/models/endoscopic_tool_segmentation),
have already been exported and benchmarked. For details on how to export and benchmark a model,
please see this [tutorial](https://github.com/Project-MONAI/tutorials/blob/main/acceleration/TensorRT_inference_acceleration.ipynb).
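As a rough sketch, a bundle export via `trt_export` might be invoked like this (the argument names and file layout below are illustrative assumptions based on the model-zoo bundle structure; consult the tutorial for the exact usage on your system):

```shell
# Illustrative only: export a bundle's network to a TensorRT engine-based
# TorchScript model at fp16 precision. Paths follow a typical model-zoo
# bundle layout; verify argument names against the tutorial.
python -m monai.bundle trt_export \
    --net_id network_def \
    --filepath models/model_trt.ts \
    --ckpt_file models/model.pt \
    --meta_file configs/metadata.json \
    --config_file configs/inference.json \
    --precision fp16
```

The exported TorchScript file can then be loaded in place of the original model for faster inference on NVIDIA GPUs.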


## MetricsReloaded integration
MetricsReloaded, a new recommendation framework for biomedical image analysis validation, is publicly released
at https://github.com/Project-MONAI/MetricsReloaded. Modules for computing binary and categorical metrics,
using MetricsReloaded as the backend, are included in this release. [Example scripts](https://github.com/Project-MONAI/tutorials/tree/main/modules/metrics_reloaded) are available to demonstrate the usage.
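To illustrate the kind of binary metric such modules compute, here is a plain-Python sketch of the Dice similarity coefficient (the underlying formula only; this is not the MetricsReloaded API):

```python
# A plain-Python illustration of one common binary metric, the Dice
# similarity coefficient: 2|A ∩ B| / (|A| + |B|). This is NOT the
# MetricsReloaded API, just the formula it would compute for binary masks.
def dice_score(pred, ref):
    """pred and ref are equal-length sequences of 0/1 labels."""
    intersection = sum(p & r for p, r in zip(pred, ref))
    total = sum(pred) + sum(ref)
    if total == 0:
        return 1.0  # both masks empty: conventionally a perfect match
    return 2.0 * intersection / total
```

The MONAI modules wrap metrics like this (and many others) behind a consistent interface, with MetricsReloaded handling the computation.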


## Bundle workflow APIs
`BundleWorkflow` abstracts the typical workflows of a bundle (such as training, evaluation, and inference) behind three main interfaces:
`initialize`, `run`, and `finalize`; applications use these APIs to execute a bundle.
It unifies the required and optional properties of the workflows, so downstream applications
can invoke the components directly instead of parsing configs by key.
In this release, the `ConfigWorkflow` class is also created for JSON- and YAML-config-based bundle workflows, for improved Pythonic usability.
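The three-phase interface pattern described above can be sketched as follows (a toy illustration with hypothetical class names, not MONAI's actual `BundleWorkflow` implementation):

```python
from abc import ABC, abstractmethod

# A toy sketch of the three-phase workflow pattern (hypothetical names;
# not MONAI's actual BundleWorkflow class).
class Workflow(ABC):
    @abstractmethod
    def initialize(self): ...
    @abstractmethod
    def run(self): ...
    @abstractmethod
    def finalize(self): ...

class ToyTrainWorkflow(Workflow):
    def __init__(self):
        self.log = []
    def initialize(self):
        self.log.append("initialize")  # e.g. parse configs, build components
    def run(self):
        self.log.append("run")         # e.g. execute the training loop
    def finalize(self):
        self.log.append("finalize")    # e.g. release resources

def execute(workflow: Workflow):
    # An application can drive any workflow through the same three calls,
    # without knowing how that workflow is configured internally.
    workflow.initialize()
    workflow.run()
    workflow.finalize()
```

This is the key benefit of the abstraction: downstream applications depend only on the three-call contract, not on the keys of any particular config file.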


## Modular patch inference
In patch inference, patches are extracted from the image, inference is run on those patches, and the outputs are merged
to construct the result image corresponding to the input image. Although the exact implementation of patch inference may vary
depending on the task, model, and computational/memory resources, the overall process of splitting, running inference, and merging the results remains the same.
In this release, we have created a modular design for patch inference, which defines the overall process while abstracting away the specific
behavior of how to split the image into patches, how to pre- and post-process each patch, and how to merge the output patches.
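The split → infer → merge process can be sketched in one dimension as follows (a minimal pure-Python illustration of the concept; MONAI's actual implementation is modular and n-dimensional):

```python
# A minimal 1-D sketch of the split -> infer -> merge process
# (illustrative only; not MONAI's implementation).
def split(signal, patch_size, stride):
    """Extract overlapping patches and remember where each one starts."""
    starts = range(0, len(signal) - patch_size + 1, stride)
    return [(s, signal[s:s + patch_size]) for s in starts]

def merge(patches, length):
    """Average the patch outputs wherever they overlap."""
    sums = [0.0] * length
    counts = [0] * length
    for start, patch in patches:
        for i, value in enumerate(patch):
            sums[start + i] += value
            counts[start + i] += 1
    return [s / c for s, c in zip(sums, counts)]

def patch_inference(signal, model, patch_size=4, stride=2):
    # Run the model on each patch, then merge the outputs back in place.
    outputs = [(s, model(p)) for s, p in split(signal, patch_size, stride)]
    return merge(outputs, len(signal))
```

Swapping out `split` or `merge` (e.g. a different window scheme, or max instead of mean in the overlaps) changes the behavior without touching the overall process, which is exactly the flexibility the modular design provides.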