Commit 6d3e445 ("update")
1 parent c6de913

4 files changed: 91 additions & 34 deletions

File tree: README.md · examples/OmniGen2-RL · pyproject.toml · requirements.txt

README.md
Lines changed: 52 additions & 27 deletions
@@ -1,5 +1,5 @@
 <p align="center">
-  <img src="assets/logo.png" width="65%">
+  <img src="https://raw.githubusercontent.com/VectorSpaceLab/EditScore/refs/heads/main/assets/logo.png" width="65%">
 </p>

 <p align="center">
@@ -26,8 +26,9 @@
 - **Versatile Applications**: Ready to use as a best-in-class reranker to improve editing outputs, or as a high-fidelity reward signal for **stable and effective Reinforcement Learning (RL) fine-tuning**.

 ## 🔥 News
-- **2025-09-30**: We release **OmniGen2-EditScore7B**, unlocking online RL for image editing via high-fidelity EditScore. LoRA weights are available at [Hugging Face](https://huggingface.co/OmniGen2/OmniGen2-EditScore7B) and [ModelScope](https://www.modelscope.cn/models/OmniGen2/OmniGen2-EditScore7B).
-- **2025-09-30**: We are excited to release **EditScore** and **EditReward-Bench**! Model weights and the benchmark dataset are now publicly available. You can access them on Hugging Face: [Models Collection](https://huggingface.co/collections/EditScore/editscore-68d8e27ee676981221db3cfe) and [Benchmark Dataset](https://huggingface.co/datasets/EditScore/EditReward-Bench), and on ModelScope: [Models Collection](https://www.modelscope.cn/collections/EditScore-8b0d53aa945d4e) and [Benchmark Dataset](https://www.modelscope.cn/datasets/EditScore/EditReward-Bench).
+- **2025-10-12**: Best-of-N inference scripts for OmniGen2, Flux-dev-Kontext, and Qwen-Image-Edit are now available!
+- 2025-09-30: We release **OmniGen2-EditScore7B**, unlocking online RL for image editing via high-fidelity EditScore. LoRA weights are available at [Hugging Face](https://huggingface.co/OmniGen2/OmniGen2-EditScore7B) and [ModelScope](https://www.modelscope.cn/models/OmniGen2/OmniGen2-EditScore7B).
+- 2025-09-30: We are excited to release **EditScore** and **EditReward-Bench**! Model weights and the benchmark dataset are now publicly available. You can access them on Hugging Face: [Models Collection](https://huggingface.co/collections/EditScore/editscore-68d8e27ee676981221db3cfe) and [Benchmark Dataset](https://huggingface.co/datasets/EditScore/EditReward-Bench), and on ModelScope: [Models Collection](https://www.modelscope.cn/collections/EditScore-8b0d53aa945d4e) and [Benchmark Dataset](https://www.modelscope.cn/datasets/EditScore/EditReward-Bench).

 ## 📖 Introduction
 While Reinforcement Learning (RL) holds immense potential for this domain, its progress has been severely hindered by the absence of a high-fidelity, efficient reward signal.
@@ -39,7 +40,7 @@ To overcome this barrier, we provide a systematic, two-part solution:
 - **A Powerful & Versatile Tool**: Guided by our benchmark, we developed the **EditScore** model series. Through meticulous data curation and an effective self-ensembling strategy, EditScore sets a new state of the art for open-source reward models, even surpassing the accuracy of leading proprietary VLMs.

 <p align="center">
-  <img src="assets/table_reward_model_results.png" width="95%">
+  <img src="https://raw.githubusercontent.com/VectorSpaceLab/EditScore/refs/heads/main/assets/table_reward_model_results.png" width="95%">
   <br>
   <em>Benchmark results on EditReward-Bench.</em>
 </p>
@@ -52,7 +53,7 @@ We demonstrate the practical utility of EditScore through two key applications:
 This repository releases both the **EditScore** models and the **EditReward-Bench** dataset to facilitate future research in reward modeling, policy optimization, and AI-driven model improvement.

 <p align="center">
-  <img src="assets/figure_edit_results.png" width="95%">
+  <img src="https://raw.githubusercontent.com/VectorSpaceLab/EditScore/refs/heads/main/assets/figure_edit_results.png" width="95%">
   <br>
   <em>EditScore as a superior reward signal for image editing.</em>
 </p>
@@ -64,45 +65,57 @@ We are actively working on improving EditScore and expanding its capabilities. H

 - [ ] Release training data for reward model and online RL.
 - [ ] Release RL training code applying EditScore to OmniGen2.
-- [ ] Provide Best-of-N inference scripts for OmniGen2, Flux-dev-Kontext, and Qwen-Image-Edit.
+- [x] Provide Best-of-N inference scripts for OmniGen2, Flux-dev-Kontext, and Qwen-Image-Edit.

 ## 🚀 Quick Start

 ### 🛠️ Environment Setup
+We offer two ways to install EditScore. Choose the one that best fits your needs.
+
+**Method 1: Install from PyPI (Recommended for Users)**: choose this if you want to use EditScore as a library in your own project.
+**Method 2: Install from Source (For Developers)**: choose this if you plan to contribute to the code, modify it, or run the examples in this repository.

-#### ✅ Recommended Setup
-
+#### Prerequisites: Installing PyTorch
+Both installation methods require PyTorch to be installed first, as its version depends on your system's CUDA setup.
 ```bash
-# 1. Clone the repo
-git clone git@github.com:VectorSpaceLab/EditScore.git
-cd EditScore
-
-# 2. (Optional) Create a clean Python environment
+# (Optional) Create a clean Python environment
 conda create -n editscore python=3.12
 conda activate editscore

-# 3. Install dependencies
-# 3.1 Install PyTorch (choose correct CUDA version)
+# Choose the command that matches your CUDA version.
+# This example is for CUDA 12.6.
 pip install torch==2.7.1 torchvision --extra-index-url https://download.pytorch.org/whl/cu126
+```

-# 3.2 Install other required packages
-pip install -r requirements.txt
-
-# EditScore runs even without vllm, though we recommend installing it for best performance.
-pip install vllm
+<details>
+<summary>🌏 For users in Mainland China</summary>
+
+```bash
+# Install PyTorch from a domestic mirror
+pip install torch==2.7.1 torchvision --index-url https://mirror.sjtu.edu.cn/pytorch-wheels/cu126
 ```
+</details>

-#### 🌏 For users in Mainland China
+#### Method 1: Install from PyPI (Recommended for Users)
+```bash
+pip install -U editscore
+```

+#### Method 2: Install from Source (For Developers)
+This method gives you a local, editable version of the project.
+1. Clone the repository:
 ```bash
-# Install PyTorch from a domestic mirror
-pip install torch==2.7.1 torchvision --index-url https://mirror.sjtu.edu.cn/pytorch-wheels/cu126
+git clone https://github.com/VectorSpaceLab/EditScore.git
+cd EditScore
+```

-# Install other dependencies from Tsinghua mirror
-pip install -r requirements.txt -i https://pypi.tuna.tsinghua.edu.cn/simple
+2. Install EditScore in editable mode:
+```bash
+pip install -e .
+```

-# EditScore runs even without vllm, though we recommend installing it for best performance.
-pip install vllm -i https://pypi.tuna.tsinghua.edu.cn/simple
+#### ✅ (Recommended) Install Optional High-Performance Dependencies
+For the best performance, especially during inference, we highly recommend installing vllm.
+```bash
+pip install vllm
 ```

 ---
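The setup above notes that EditScore runs even without vllm. A minimal, illustrative Python sketch of detecting the optional package at runtime (the `backend` variable and the fallback name `"transformers"` are assumptions for illustration, not part of the EditScore API):

```python
import importlib.util

def vllm_available() -> bool:
    """Return True if the optional vllm package can be imported."""
    return importlib.util.find_spec("vllm") is not None

# Hypothetical fallback choice: use vLLM when present, else plain transformers.
backend = "vllm" if vllm_available() else "transformers"
print(f"Inference backend: {backend}")
```

`find_spec` probes for the package without importing it, so the check is cheap and safe even when vllm is absent.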
@@ -139,6 +152,12 @@ print(f"Edit Score: {result['final_score']}")
 ---

 ## 📊 Benchmark Your Image-Editing Reward Model
+#### Install benchmark dependencies
+To run the benchmark example code, first install its dependencies:
+```bash
+pip install -r requirements.txt
+```
+
 We provide an evaluation script to benchmark reward models on **EditReward-Bench**. To evaluate your own custom reward model, simply create a scorer class with a similar interface and update the script.
 ```bash
 # This script will evaluate the default EditScore model on the benchmark
@@ -148,6 +167,12 @@ bash evaluate.sh
 bash evaluate_vllm.sh
 ```
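The benchmark section above says a custom reward model only needs a scorer class with a similar interface. A hypothetical minimal shape, inferred from the `result['final_score']` usage shown in the Quick Start (the class name, method name, and arguments below are assumptions, not the repository's actual API):

```python
from typing import Any, Dict

class MyScorer:
    """Hypothetical custom scorer: evaluate(...) returns a dict carrying a
    'final_score' key, mirroring the result['final_score'] access pattern
    shown in the Quick Start."""

    def evaluate(self, input_image: Any, output_image: Any,
                 instruction: str) -> Dict[str, float]:
        # Replace this constant with your model's real scoring logic.
        # The [0, 10] range here is purely illustrative.
        return {"final_score": 5.0}

scorer = MyScorer()
result = scorer.evaluate(None, None, "make the sky purple")
print(f"Edit Score: {result['final_score']}")
```

With such a class in place, pointing the evaluation script at it instead of the default EditScore model should be a small edit.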

+## Apply EditScore to Image Editing
+We offer two example use cases for your exploration:
+- **Best-of-N selection**: Use EditScore to automatically pick the most preferred image among multiple candidates.
+- **Reinforcement fine-tuning**: Use EditScore as a reward model to guide RL-based optimization.
+
+For detailed instructions and examples, please refer to the [documentation](experiments/OmniGen2-RL/docs/README.md).
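The Best-of-N use case described above reduces to scoring each candidate and keeping the argmax. A small illustrative sketch, with a toy scoring function standing in for an EditScore call (all names here are assumptions, not the repository's scripts):

```python
from typing import Callable, List, TypeVar

T = TypeVar("T")

def best_of_n(candidates: List[T], score_fn: Callable[[T], float]) -> T:
    """Return the candidate with the highest reward score."""
    if not candidates:
        raise ValueError("need at least one candidate")
    return max(candidates, key=score_fn)

# Toy stand-in for a reward model: prefer the longest "edit".
images = ["edit_a", "edit_bb", "edit_c"]
best = best_of_n(images, score_fn=len)
print(best)  # "edit_bb" scores highest under the toy scorer
```

In practice `score_fn` would wrap a real reward-model call on each generated image; the selection logic itself stays this simple.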

 ## ❤️ Citing Us
 If you find this repository or our work useful, please consider giving a star ⭐ and a citation 🦖, which would be greatly appreciated:

examples/OmniGen2-RL

Lines changed: 1 addition & 0 deletions
@@ -0,0 +1 @@
+Subproject commit 7cbdd4d213ca1c7eac6ee50e89e9b6b2fcc0383b

pyproject.toml

Lines changed: 38 additions & 0 deletions
@@ -0,0 +1,38 @@
+[build-system]
+requires = ["setuptools>=61.0"]
+build-backend = "setuptools.build_meta"
+
+[project]
+name = "editscore"
+version = "0.1.2"
+authors = [
+  { name="Xin Luo", email="xinluo@mail.ustc.edu.cn" },
+  { name="Jiahao Wang", email="jiahaowang0917@gmail.com" },
+  { name="Chenyuan Wu", email="wuchenyuan@mail.ustc.edu.cn" },
+]
+description = "A high-fidelity reward model for instruction-based image editing."
+readme = "README.md"
+requires-python = ">=3.8"
+classifiers = [
+  "Programming Language :: Python :: 3",
+  "License :: OSI Approved :: Apache Software License",  # Assuming Apache 2.0
+  "Operating System :: OS Independent",
+]
+# Dependencies of the EditScore core library
+dependencies = [
+  "torch",
+  "torchvision",
+  "accelerate",
+  "transformers",
+  "qwen-vl-utils",
+  "peft",
+  "Pillow",
+]
+
+[project.urls]
+Homepage = "https://github.com/VectorSpaceLab/EditScore"
+Issues = "https://github.com/VectorSpaceLab/EditScore/issues"
+
+[tool.setuptools.packages.find]
+# Automatically discover the 'editscore' package (i.e., the editscore/ directory)
+where = ["."]

requirements.txt

Lines changed: 0 additions & 7 deletions
@@ -1,11 +1,4 @@
-torch==2.7.1
-torchvision==0.22.1
-accelerate
-transformers
-qwen-vl-utils
 datasets
-peft
-Pillow
 tqdm
 python-dotenv
 wheel
