
Commit 04f07ae

One more attempt at modifying the license and readme. (#187)

* One more attempt at modifying the license and readme.
* Updated release.yaml.

1 parent: 31d40a1

4 files changed: 251 additions & 9 deletions


.github/workflows/release.yaml

Lines changed: 0 additions & 4 deletions
```diff
@@ -16,10 +16,6 @@ jobs:
       - name: install dependencies, then build source tarball
         run: |
           cd openequivariance
-          rm LICENSE
-          rm README.md
-          cp ../LICENSE .
-          cp ../README.md .
           python3 -m pip install build
           python3 -m build --sdist
       - name: store the distribution packages
```

CHANGELOG.md

Lines changed: 3 additions & 3 deletions
```diff
@@ -1,9 +1,9 @@
 ## Latest Changes
 
-### v0.6.1 (2025-02-23)
-OpenEquivariance v0.6.1 brings long-needed improvements to the
+### v0.6.2 (2025-02-23)
+OpenEquivariance v0.6.2 brings long-needed improvements to the
 PyTorch frontend. We strongly encourage all users to upgrade
-to PyTorch 2.10 and OEQ v0.6.1.
+to PyTorch 2.10 and OEQ v0.6.2.
 
 **Added**:
 - OpenEquivariance triggers a build of the CUDA extension module
```

openequivariance/LICENSE

Lines changed: 0 additions & 1 deletion
This file was deleted.

openequivariance/LICENSE

Lines changed: 28 additions & 0 deletions
```text
BSD 3-Clause License

Copyright (c) 2025, The Regents of the University of California, through Lawrence Berkeley National Laboratory (subject to receipt of any required approvals from the U.S. Dept. of Energy). All rights reserved.

Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:

1. Redistributions of source code must retain the above copyright notice, this
   list of conditions and the following disclaimer.

2. Redistributions in binary form must reproduce the above copyright notice,
   this list of conditions and the following disclaimer in the documentation
   and/or other materials provided with the distribution.

3. Neither the name of the copyright holder nor the names of its
   contributors may be used to endorse or promote products derived from
   this software without specific prior written permission.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
```

openequivariance/README.md

Lines changed: 0 additions & 1 deletion
This file was deleted.

openequivariance/README.md

Lines changed: 220 additions & 0 deletions
# OpenEquivariance
[![OEQ C++ Extension Build Verification](https://github.com/PASSIONLab/OpenEquivariance/actions/workflows/verify_extension_build.yml/badge.svg?event=push)](https://github.com/PASSIONLab/OpenEquivariance/actions/workflows/verify_extension_build.yml)
[![License](https://img.shields.io/badge/License-BSD_3--Clause-blue.svg)](https://opensource.org/licenses/BSD-3-Clause)

[[PyTorch Examples]](#pytorch-examples)
[[JAX Examples]](#jax-examples)
[[Citation and Acknowledgements]](#citation-and-acknowledgements)

OpenEquivariance is a CUDA and HIP kernel generator for the Clebsch-Gordan tensor product, a key kernel in rotation-equivariant deep neural networks. It implements some of the tensor products that [e3nn](https://e3nn.org/) supports, including those commonly found in graph neural networks (e.g. [Nequip](https://github.com/mir-group/nequip) or [MACE](https://github.com/ACEsuit/mace)). To get started with PyTorch, ensure that you have PyTorch and GCC 9+ available before installing our package via

```bash
pip install openequivariance
```

We provide up to an order of magnitude of acceleration over e3nn and perform on par with the latest version of [NVIDIA cuEquivariance](https://github.com/NVIDIA/cuEquivariance), which has a closed-source kernel package. We also offer fused equivariant graph convolutions that can significantly reduce computation and memory consumption.

For detailed instructions on tests, benchmarks, MACE / Nequip, and our API, check out the [documentation](https://passionlab.github.io/OpenEquivariance).

⭐️ **JAX**: Our latest update brings support for JAX. For NVIDIA GPUs, install it (after installing JAX) with the following two commands, strictly in order:

```bash
pip install openequivariance[jax]
pip install openequivariance_extjax --no-build-isolation
```

For AMD GPUs:

```bash
pip install openequivariance[jax]
JAX_HIP=1 pip install openequivariance_extjax --no-build-isolation
```

See the section below for example usage and our [API page](https://passionlab.github.io/OpenEquivariance/api/) for more details.
## PyTorch Examples
Here's a CG tensor product implemented by e3nn:

```python
import torch
import e3nn.o3 as o3

gen = torch.Generator(device='cuda')

batch_size = 1000
X_ir, Y_ir, Z_ir = o3.Irreps("1x2e"), o3.Irreps("1x3e"), o3.Irreps("1x2e")
X = torch.rand(batch_size, X_ir.dim, device='cuda', generator=gen)
Y = torch.rand(batch_size, Y_ir.dim, device='cuda', generator=gen)

instructions = [(0, 0, 0, "uvu", True)]

tp_e3nn = o3.TensorProduct(X_ir, Y_ir, Z_ir, instructions,
                           shared_weights=False, internal_weights=False).to('cuda')
W = torch.rand(batch_size, tp_e3nn.weight_numel, device='cuda', generator=gen)

Z = tp_e3nn(X, Y, W)
print(torch.norm(Z))
```

And here's the same tensor product using OpenEquivariance. We require that your tensors are stored on a CUDA device for this to work:

```python
import openequivariance as oeq

problem = oeq.TPProblem(X_ir, Y_ir, Z_ir, instructions, shared_weights=False, internal_weights=False)
tp_fast = oeq.TensorProduct(problem, torch_op=True)

Z = tp_fast(X, Y, W)  # Reuse X, Y, W from earlier
print(torch.norm(Z))
```

Our interface for `oeq.TPProblem` is almost a strict superset of `o3.TensorProduct` (two key differences: we impose `internal_weights=False` and add support for multiple datatypes). You can pass e3nn `Irreps` instances directly or use `oeq.Irreps`, which is identical.
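As a quick illustration of the `Irreps` notation both libraries share: each term `MxLp` contributes M copies of the (2L+1)-dimensional degree-L irrep. Below is a minimal, self-contained sketch of that arithmetic; the `irreps_dim` helper is hypothetical (it is not part of either API, and it handles only simple `MxLp` terms):

```python
import re

def irreps_dim(irreps: str) -> int:
    """Hypothetical helper: total dimension of an irreps string like "1x2e + 1x3e".

    Each term "MxLp" contributes M * (2L + 1) dimensions, since the
    degree-L irrep of O(3) is (2L + 1)-dimensional.
    """
    total = 0
    for mul, l in re.findall(r"(\d+)x(\d+)[eo]", irreps):
        total += int(mul) * (2 * int(l) + 1)
    return total

# Matches the dims used in the examples above: o3.Irreps("1x2e").dim == 5, etc.
print(irreps_dim("1x2e"))           # 5
print(irreps_dim("1x3e"))           # 7
print(irreps_dim("32x3e + 32x2e"))  # 32*7 + 32*5 = 384
```

This is why `X` above has 5 columns and `Y` has 7: their irreps are `1x2e` and `1x3e` respectively.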

We recommend reading the [e3nn documentation and API reference](https://docs.e3nn.org/en/latest/) first, then using our kernels as drop-in replacements. We support most "uvu" and "uvw" tensor products; see [this section](#tensor-products-we-accelerate) for an up-to-date list of supported configurations.

**Important**: For many configurations, our kernels return results identical to e3nn up to floating-point roundoff (this includes all "uvu" problems with multiplicity 1 for all irreps in the second input). For other configurations (e.g. any "uvw" connection mode), we return identical results up to a well-defined reordering of the weights relative to e3nn.
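To illustrate what "identical up to a well-defined reordering of the weights" means, here is a toy NumPy sketch (a plain linear map, not the actual OEQ or e3nn weight layouts): if two implementations expect the same weights in two layouts related by a fixed permutation, applying that permutation to the weight vector recovers identical outputs.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(6)
w_flat = rng.standard_normal(12)  # weights in "layout A" (the reference flat order)
perm = rng.permutation(12)        # a fixed, well-defined reordering

# Implementation A consumes weights in layout A directly.
z_a = w_flat.reshape(2, 6) @ x

# Implementation B expects the same weights, but permuted into "layout B".
# Undoing the permutation internally yields exactly the same output:
# the two layouts are related by a fixed bijection, nothing is lost.
w_b = w_flat[perm]
z_b = w_b[np.argsort(perm)].reshape(2, 6) @ x

print(np.allclose(z_a, z_b))  # True
```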

If you're executing tensor products as part of a message-passing graph neural network, we offer fused kernels that save both memory and compute time:

```python
from torch_geometric import EdgeIndex

node_ct, nonzero_ct = 3, 4

# Receiver, sender indices for message passing GNN
edge_index = EdgeIndex(
    [[0, 1, 1, 2],   # Receiver
     [1, 0, 2, 1]],  # Sender
    device='cuda',
    dtype=torch.long)

X = torch.rand(node_ct, X_ir.dim, device='cuda', generator=gen)
Y = torch.rand(nonzero_ct, Y_ir.dim, device='cuda', generator=gen)
W = torch.rand(nonzero_ct, problem.weight_numel, device='cuda', generator=gen)

tp_conv = oeq.TensorProductConv(problem, torch_op=True, deterministic=False)  # Reuse problem from earlier
Z = tp_conv.forward(X, Y, W, edge_index[0], edge_index[1])  # Z has shape [node_ct, Z_ir.dim]
print(torch.norm(Z))
```

If you can guarantee `EdgeIndex` is sorted by receiver index and supply the transpose permutation, we can provide even greater speedup (and deterministic results) by avoiding atomics:

```python
_, sender_perm = edge_index.sort_by("col")             # Sort by sender index
edge_index, receiver_perm = edge_index.sort_by("row")  # Sort by receiver index

# Now we can use the faster deterministic algorithm
tp_conv = oeq.TensorProductConv(problem, torch_op=True, deterministic=True)
Z = tp_conv.forward(X, Y[receiver_perm], W[receiver_perm], edge_index[0], edge_index[1], sender_perm)
print(torch.norm(Z))
```
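To see why receiver-sorted edges enable a deterministic algorithm, here is a NumPy sketch (illustrative only, not OEQ's kernels): with unsorted edges, accumulation needs scatter-adds (atomics on a GPU, with nondeterministic floating-point summation order), while receiver-sorted edges give each node a contiguous segment that can be reduced deterministically.

```python
import numpy as np

rng = np.random.default_rng(0)
node_ct, edge_ct, feat = 3, 4, 5
recv = np.array([1, 0, 2, 1])                # unsorted receiver indices
msgs = rng.standard_normal((edge_ct, feat))  # one message per edge

# Unsorted: scatter-add. On a GPU this requires atomics, and the
# summation order (hence floating-point result) can vary run to run.
out_scatter = np.zeros((node_ct, feat))
np.add.at(out_scatter, recv, msgs)

# Sorted by receiver: each node owns a contiguous segment of edges,
# so a plain segment sum in a fixed order suffices -- no atomics needed.
order = np.argsort(recv, kind="stable")
recv_sorted, msgs_sorted = recv[order], msgs[order]
bounds = np.searchsorted(recv_sorted, np.arange(node_ct + 1))
out_segment = np.vstack([
    msgs_sorted[bounds[i]:bounds[i + 1]].sum(axis=0)
    for i in range(node_ct)
])

print(np.allclose(out_scatter, out_segment))  # True
```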

**Note**: You don't need PyTorch Geometric to use our kernels. When `deterministic=False`, the `sender` and `receiver` indices can appear in arbitrary order.

## JAX Examples
After installation, use the library as follows. Set `OEQ_NOTORCH=1` in your environment to avoid the PyTorch import in the regular `openequivariance` package.

```python
import jax
import os

os.environ["OEQ_NOTORCH"] = "1"
import openequivariance as oeq

seed = 42
key = jax.random.PRNGKey(seed)

batch_size = 1000
X_ir, Y_ir, Z_ir = oeq.Irreps("1x2e"), oeq.Irreps("1x3e"), oeq.Irreps("1x2e")
problem = oeq.TPProblem(X_ir, Y_ir, Z_ir, [(0, 0, 0, "uvu", True)], shared_weights=False, internal_weights=False)

node_ct, nonzero_ct = 3, 4
edge_index = jax.numpy.array(
    [
        [0, 1, 1, 2],
        [1, 0, 2, 1],
    ],
    dtype=jax.numpy.int32,  # NOTE: This is int32, not int64
)

X = jax.random.uniform(key, shape=(node_ct, X_ir.dim), minval=0.0, maxval=1.0, dtype=jax.numpy.float32)
Y = jax.random.uniform(key, shape=(nonzero_ct, Y_ir.dim),
                       minval=0.0, maxval=1.0, dtype=jax.numpy.float32)
W = jax.random.uniform(key, shape=(nonzero_ct, problem.weight_numel),
                       minval=0.0, maxval=1.0, dtype=jax.numpy.float32)

tp_conv = oeq.jax.TensorProductConv(problem, deterministic=False)
Z = tp_conv.forward(
    X, Y, W, edge_index[0], edge_index[1]
)
print(jax.numpy.linalg.norm(Z))

# Test JAX JIT
jitted = jax.jit(lambda X, Y, W, e1, e2: tp_conv.forward(X, Y, W, e1, e2))
print(jax.numpy.linalg.norm(jitted(X, Y, W, edge_index[0], edge_index[1])))
```

## Citation and Acknowledgements
If you find this code useful, please cite our paper:

```bibtex
@inbook{openequivariance,
  author={Vivek Bharadwaj and Austin Glover and Aydin Buluc and James Demmel},
  title={An Efficient Sparse Kernel Generator for O(3)-Equivariant Deep Networks},
  booktitle={SIAM Conference on Applied and Computational Discrete Algorithms (ACDA25)},
  chapter={},
  url={https://arxiv.org/abs/2501.13986},
  publisher={Society for Industrial and Applied Mathematics},
  year={2025}
}
```

Our codebase includes a lightweight clone of [e3nn](https://e3nn.org/)'s frontend interface (in particular, the `TensorProduct` and `Irreps` classes). We removed references to PyTorch and separated the implementation from the problem description (for future frontend support outside of torch). We also extracted the Wigner 3j tensor-generating code from QuTiP. Thank you to the current developers and maintainers!

## Copyright

Copyright (c) 2025, The Regents of the University of California, through Lawrence Berkeley National Laboratory (subject to receipt of any required approvals from the U.S. Dept. of Energy). All rights reserved.

If you have questions about your rights to use or distribute this software, please contact Berkeley Lab's Intellectual Property Office at IPO@lbl.gov.

NOTICE. This Software was developed under funding from the U.S. Department of Energy and the U.S. Government consequently retains certain rights. As such, the U.S. Government has been granted for itself and others acting on its behalf a paid-up, nonexclusive, irrevocable, worldwide license in the Software to reproduce, distribute copies to the public, prepare derivative works, and perform publicly and display publicly, and to permit others to do so.
