Commit 455d98c: Update README.md (1 parent e17ba69, 1 file changed: README.md, 23 additions and 47 deletions)

<h1 align="center">Compressed Radiation Treatment Planning (CompressRTP)</h1>

<h2 align="center">
  <a href="./images/RMR_NeurIPS_Paper.pdf">NeurIPS'2024</a> |
  <a href="https://arxiv.org/abs/2410.00756">ArXiv'2024</a> |
  <a href="https://iopscience.iop.org/article/10.1088/1361-6560/acbefe/meta">PMB'2023</a>
</h2>

# What is CompressRTP?

Radiotherapy is used to treat more than half of all cancer patients, either alone or in combination with other treatments like surgery, chemotherapy, or immunotherapy. It works by directing high-energy radiation beams at the patient's body to destroy cancer cells. Since every patient's anatomy is unique, radiotherapy must be personalized. This means customizing the radiation beams to effectively target the tumor while minimizing harm to nearby healthy tissue.

Personalizing radiotherapy involves solving large and complex optimization problems, which must be solved quickly due to the limited time available in clinical settings. Currently, they are often solved using gross approximations, which can lead to less effective treatments: the tumor may not receive enough radiation, or healthy tissues may be exposed to excessive radiation. The CompressRTP project aims to solve these optimization problems both rapidly and accurately. This ongoing project currently includes tools introduced in our three latest publications [1, 2, 3], provided as extensions to PortPy.

# High-Level Overview

The optimization problems in radiotherapy are highly complex due to the "curse of dimensionality": they involve many beams, beamlets (small segments of beams), and voxels (3D pixels representing volume). However, much of this data is redundant because it comes from discretizing a system that is inherently continuous. For example, radiation doses delivered from adjacent beamlets are highly correlated, and radiation doses delivered to neighboring voxels are very similar. This redundancy means that large-scale radiotherapy optimization problems are highly compressible, which is the foundation of **CompressRTP**.

Dimensionality reduction and compression have a rich history in statistics and engineering. Recently, these techniques have re-emerged as powerful tools for addressing increasingly high-dimensional problems in fields like big data and machine learning. Our goal is to **adapt and adopt** these versatile methods to **embed high-dimensional** radiotherapy optimization problems into **lower-dimensional spaces** so they can be solved efficiently. A general radiotherapy optimization problem can be formulated as:

Minimize $f(Ax,x)$

Subject to $g(Ax,x)\leq 0,\ x\geq 0$
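
To make this abstract formulation concrete, here is a minimal toy instance sketched with SciPy (illustrative only: the matrix, voxel split, and prescription value are made up, and both $f$ and $g$ are taken to be linear so the problem becomes a small LP):

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n_voxels, n_beamlets = 30, 10
A = rng.random((n_voxels, n_beamlets))   # stand-in dose influence matrix
tumor, healthy = np.arange(5), np.arange(5, n_voxels)
prescription = 10.0                      # minimum dose required in each tumor voxel

# f(Ax, x): total dose to healthy voxels
# g(Ax, x) <= 0: prescription - A[tumor] @ x <= 0 (tumor coverage), x >= 0
res = linprog(
    c=A[healthy].sum(axis=0),
    A_ub=-A[tumor],
    b_ub=-np.full(tumor.size, prescription),
    bounds=(0, None),
)
x = res.x                                # optimized beamlet intensities
```

A clinical problem has the same shape, only with a dose influence matrix several orders of magnitude larger, which is exactly what makes compression necessary.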

1. **Matrix Compression to Address the Computational Intractability of $A$:**
   - **The Challenge:** The matrix $A$ is large and dense (approximately 100,000–500,000 rows and 5,000–20,000 columns) and is the main source of computational difficulty in solving radiotherapy optimization problems.
   - **Traditional Approach:** This matrix is often sparsified in practice by simply ignoring small elements (e.g., zeroing out elements less than 1% of the maximum value in $A$), which can potentially lead to sub-optimal treatment plans.
   - **CompressRTP Solutions:** We provide a compressed and accurate representation of matrix $A$ using two different techniques:
     - **(1.1) Sparse-Only Compression:** This technique sparsifies $A$ using advanced tools from probability and randomized linear algebra. ([NeurIPS paper](./images/RMR_NeurIPS_Paper.pdf), [Sparse-Only Jupyter Notebook](https://github.com/PortPy-Project/CompressRTP/blob/main/examples/matrix_sparse_only.ipynb))
     - **(1.2) Sparse-Plus-Low-Rank Compression:** This method decomposes $A$ into the sum of a sparse matrix and a low-rank matrix. ([ArXiv paper](https://arxiv.org/abs/2410.00756), [Sparse-Plus-Low-Rank Jupyter Notebook](https://github.com/PortPy-Project/CompressRTP/blob/main/examples/matrix_sparse_plus_low_rank.ipynb))
2. **Fluence Compression to Enforce Smoothness on $x$:**
   - **The Need for Smoothness:** The beamlet intensities $x$ need to be smooth for efficient and accurate delivery of radiation. Smoothness here refers to small variations in the intensity of neighboring beamlets in two dimensions.
   - **Traditional Approach:** Smoothness is often achieved implicitly by adding regularization terms to the objective function that discourage variations between neighboring beamlets.
   - **CompressRTP Solution:** We enforce smoothness explicitly by representing the beamlet intensities using low-frequency wavelets, resulting in built-in wavelet-induced smoothness. This can be easily integrated into the optimization problem by adding a set of linear constraints. ([PMB paper](https://iopscience.iop.org/article/10.1088/1361-6560/acbefe/meta), [Wavelet Jupyter Notebook](https://github.com/PortPy-Project/CompressRTP/blob/main/examples/fluence_wavelets.ipynb))

# 1) Matrix Compression to Address the Computational Intractability of $A$
Minimize $f(Sx,x)$

Subject to $g(Sx,x)\leq 0,\ x\geq 0$

($S≈A$; $S$ is sparse, $A$ is dense)


In our [paper](./images/RMR_NeurIPS_Paper.pdf), we introduced **Randomized Minor Rectification (RMR)**, a simple yet effective matrix sparsification algorithm equipped with robust mathematical properties. The core principle of RMR is to **deterministically retain the large elements of a matrix while probabilistically handling the smaller ones**. Specifically, the RMR algorithm converts a dense matrix $A$ into a sparse matrix $S$ that typically contains only 2–4% non-zero elements. This sparsification ensures that the optimal solution to the surrogate optimization problem (where $A$ is replaced by $S$) remains a near-optimal solution for the original problem. For a detailed mathematical analysis, refer to Theorems 3.6 and 3.9 in the paper.
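
The retain-large/randomize-small principle can be sketched in a few lines of NumPy. This is an illustrative stand-in, not PortPy's `get_sparse_only` nor the exact RMR algorithm from the paper: entries at or above a threshold are kept exactly, and each smaller entry is kept with probability proportional to its magnitude, rectified to the threshold so the sparsified matrix stays unbiased.

```python
import numpy as np

def sparsify_sketch(A: np.ndarray, threshold: float, seed: int = 0) -> np.ndarray:
    """Keep |a| >= threshold exactly; keep each smaller entry with probability
    |a| / threshold, rectified to sign(a) * threshold so that E[S] = A."""
    rng = np.random.default_rng(seed)
    large = np.abs(A) >= threshold
    p = np.where(large, 0.0, np.abs(A) / threshold)  # keep-probability of small entries
    keep_small = rng.random(A.shape) < p
    return np.where(large, A, 0.0) + np.where(keep_small, np.sign(A) * threshold, 0.0)
```

Most small entries are dropped, so the result is sparse, while the probabilistic rectification keeps the sparsified matrix an unbiased estimate of $A$.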

<p align="center">
<img src="./images/RMR_vs_Others.png" width="90%" height="40%">
</p>

**Implementation in PortPy:**

If you are using PortPy for your radiotherapy research, you can apply RMR sparsification by simply adding the following lines of code. For more details, see the [Sparse-Only Jupyter Notebook](https://github.com/PortPy-Project/CompressRTP/blob/main/examples/matrix_sparse_only.ipynb).

```python
from compress_rtp.utils.get_sparse_only import get_sparse_only
```

<p align="center">
<img src="./images/SPlusL_singular_values.png" width="90%" height="40%">
</p>

**Figure Explanation:** The low-rank nature of matrix $A$ can be verified by observing the exponential decay of its singular values, as shown by the blue line in the **left figure**. If we decompose matrix $A$ into $A=S+L$, where $S$ is a sparse matrix containing large-magnitude elements (e.g., elements greater than 1% of the maximum value of $A$) and $L$ includes smaller elements mainly representing scattering doses, then the singular values of the scattering matrix $L$ reveal an even sharper exponential decay (depicted by the red line). This suggests the use of "sparse-plus-low-rank" compression, $A≈S+HW$, as schematically shown in the **right figure**.

The matrix $S$ is sparse, $H$ is a "tall skinny" matrix with only a few columns, and $W$ is a "wide short" matrix with only a few rows. Therefore, $A≈S+HW$ provides a compressed representation of the data. This allows us to solve the following surrogate problem instead of the original problem:

Minimize $f(Sx+Hy,x)$

Subject to $g(Sx+Hy,x)\leq 0,\ y=Wx,\ x\geq 0$

Decomposing a matrix into the sum of a sparse matrix and a low-rank matrix has found numerous applications in fields such as computer vision, medical imaging, and statistics. Historically, this structure has been employed as a form of prior knowledge to recover objects of interest that manifest themselves in either the sparse or low-rank component. The application presented here, however, is a novel departure from these conventional uses: unlike traditional settings where a specific component holds intrinsic importance, our goal is not to isolate or interpret these structures but to leverage them purely for a computationally efficient matrix representation that maintains data integrity.

**Note:** Both sparse-only and sparse-plus-low-rank compression techniques serve the same purpose. We are currently investigating the strengths and weaknesses of each technique and their potential combination. Stay tuned for more results.

**Implementation in PortPy:**

In PortPy, you can apply sparse-plus-low-rank compression using the following lines of code. Unlike sparse-only compression with RMR, which requires no change other than replacing $Ax$ with $Sx$ in your optimization formulation and code, this compression requires adding the linear constraint $y=Wx$ and replacing $Ax$ with $Sx+Hy$. These changes can be easily implemented using CVXPy (see the [Sparse-Plus-Low-Rank Jupyter Notebook](https://github.com/PortPy-Project/CompressRTP/blob/main/examples/matrix_sparse_plus_low_rank.ipynb) for details).

```python
from compress_rtp.utils.get_sparse_plus_low_rank import get_sparse_plus_low_rank
```
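
The decomposition itself can be sketched with plain NumPy (an illustrative stand-in for `get_sparse_plus_low_rank`, using a synthetic matrix in place of real dose influence data): threshold $A$ to get the sparse part $S$, then compress the remainder $L$ with a rank-$k$ truncated SVD to obtain $H$ and $W$.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.random((200, 80)) * 0.01            # synthetic stand-in for a dose matrix
A[rng.random(A.shape) < 0.05] += 1.0        # a few large "primary dose" entries

S = np.where(A >= 0.01 * A.max(), A, 0.0)   # sparse part: elements above 1% of max
L = A - S                                   # small, scatter-like remainder

k = 10                                      # target rank of the low-rank part
U, sig, Vt = np.linalg.svd(L, full_matrices=False)
H = U[:, :k] * sig[:k]                      # "tall skinny" matrix (200 x k)
W = Vt[:k, :]                               # "wide short" matrix  (k x 80)

rel_err = np.linalg.norm(A - (S + H @ W)) / np.linalg.norm(A)  # A ≈ S + HW
```

By the Eckart–Young theorem, $HW$ is the best rank-$k$ approximation of $L$, so the compression error is governed by how fast the singular values of $L$ decay; for real dose influence matrices they decay sharply, so a small $k$ suffices.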

To address these challenges, we treat the intensity map of each beam as a **2D image** …

**Implementation in PortPy:**

In **PortPy**, you can incorporate wavelet smoothness by adding the following lines of code. For a detailed explanation, see the [Wavelet Jupyter Notebook](https://github.com/PortPy-Project/CompressRTP/blob/main/examples/fluence_wavelets.ipynb).

```python
from compress_rtp.utils.get_low_dim_basis import get_low_dim_basis
```
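
The idea of wavelet-induced smoothness can be sketched with the coarsest Haar scaling functions, hand-rolled here in NumPy as a toy basis (PortPy's `get_low_dim_basis` is the real, richer implementation): the fluence map is constrained to $x = Bc$, so optimizing over the low-dimensional coefficients $c$ yields smooth maps by construction.

```python
import numpy as np

def coarse_haar_basis(n: int, block: int) -> np.ndarray:
    """Columns are indicators of block x block patches of an n x n fluence
    map: the coarsest 2D Haar scaling functions, flattened to length n*n."""
    cols = []
    for i in range(0, n, block):
        for j in range(0, n, block):
            patch = np.zeros((n, n))
            patch[i:i + block, j:j + block] = 1.0
            cols.append(patch.ravel())
    return np.column_stack(cols)

n, block = 8, 4
B = coarse_haar_basis(n, block)         # (64, 4): four coarse, smooth modes
c = np.array([0.2, 0.4, 0.6, 0.8])      # low-dimensional wavelet coefficients
x = B @ c                               # beamlet intensities, smooth by construction
fluence = x.reshape(n, n)               # piecewise-constant over 4x4 patches
```

In an optimizer, the linear constraint $x = Bc$ (with $c$ as the decision variable) plays the role of the added set of linear constraints described above.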
