Commit c774885: Merge branch 'main' into fix/qwen-image-cfg-mask
2 parents 5b2a1d2 + 71a6fd9
53 files changed: 2568 additions & 178 deletions

.ai/AGENTS.md

Lines changed: 4 additions & 0 deletions
```diff
@@ -35,6 +35,10 @@ Strive to write code as simple and explicit as possible.
 - Use `self.progress_bar(timesteps)` for progress tracking
 - Don't subclass an existing pipeline for a variant — DO NOT use an existing pipeline class (e.g., `FluxPipeline`) to override another pipeline (e.g., `FluxImg2ImgPipeline`) which will be a part of the core codebase (`src`)
+
+### Modular Pipelines
+
+- See [modular.md](modular.md) for modular pipeline conventions, patterns, and gotchas.

 ## Skills

 Task-specific guides live in `.ai/skills/` and are loaded on demand by AI agents. Available skills include:
```

.ai/skills/model-integration/modular-conversion.md renamed to .ai/modular.md

Lines changed: 40 additions & 13 deletions
````diff
@@ -1,11 +1,6 @@
-# Modular Pipeline Conversion Reference
+# Modular pipeline conventions and rules

-## When to use
-
-Modular pipelines break a monolithic `__call__` into composable blocks. Convert when:
-- The model supports multiple workflows (T2V, I2V, V2V, etc.)
-- Users need to swap guidance strategies (CFG, CFG-Zero*, PAG)
-- You want to share blocks across pipeline variants
+Shared reference for modular pipeline conventions, patterns, and gotchas.

 ## File structure

@@ -14,7 +9,7 @@ src/diffusers/modular_pipelines/<model>/
 __init__.py                # Lazy imports
 modular_pipeline.py        # Pipeline class (tiny, mostly config)
 encoders.py                # Text encoder + image/video VAE encoder blocks
-before_denoise.py          # Pre-denoise setup blocks
+before_denoise.py          # Pre-denoise setup blocks (timesteps, latent prep, noise)
 denoise.py                 # The denoising loop blocks
 decoders.py                # VAE decode block
 modular_blocks_<model>.py  # Block assembly (AutoBlocks)

@@ -81,15 +76,27 @@ for i, t in enumerate(timesteps):
     latents = components.scheduler.step(noise_pred, t, latents, generator=generator)[0]
 ```

-## Key pattern: Chunk loops for video models
+## Key pattern: Denoising loop
+
+All models use `LoopSequentialPipelineBlocks` for the denoising loop (iterating over timesteps):
+```python
+class MyModelDenoiseLoopWrapper(LoopSequentialPipelineBlocks):
+    block_classes = [LoopBeforeDenoiser, LoopDenoiser, LoopAfterDenoiser]
+```

-Use `LoopSequentialPipelineBlocks` for outer loop:
+Autoregressive video models (e.g. Helios) also use it for an outer chunk loop:
 ```python
-class ChunkDenoiseStep(LoopSequentialPipelineBlocks):
-    block_classes = [PrepareChunkStep, NoiseGenStep, DenoiseInnerStep, UpdateStep]
+class HeliosChunkDenoiseStep(HeliosChunkLoopWrapper):
+    block_classes = [
+        HeliosChunkHistorySliceStep,
+        HeliosChunkNoiseGenStep,
+        HeliosChunkSchedulerResetStep,
+        HeliosChunkDenoiseInner,
+        HeliosChunkUpdateStep,
+    ]
 ```

-Note: blocks inside `LoopSequentialPipelineBlocks` receive `(components, block_state, k)` where `k` is the loop iteration index.
+Note: sub-blocks inside `LoopSequentialPipelineBlocks` receive `(components, block_state, i, t)` for denoise loops or `(components, block_state, k)` for chunk loops.

 ## Key pattern: Workflow selection

@@ -136,6 +143,26 @@ ComponentSpec(
 )
 ```

+## Gotchas
+
+1. **Importing from standard pipelines.** The modular and standard pipeline systems are parallel — modular blocks must not import from `diffusers.pipelines.*`. For shared utility methods (e.g. `_pack_latents`, `retrieve_timesteps`), either redefine as standalone functions or use `# Copied from diffusers.pipelines.<model>...` headers. See `wan/before_denoise.py` and `helios/before_denoise.py` for examples.
+
+2. **Cross-importing between modular pipelines.** Don't import utilities from another model's modular pipeline (e.g. SD3 importing from `qwenimage.inputs`). If a utility is shared, move it to `modular_pipeline_utils.py` or copy it with a `# Copied from` header.
+
+3. **Accepting `guidance_scale` as a pipeline input.** Users configure the guider separately (see [guider docs](https://huggingface.co/docs/diffusers/main/en/api/guiders)). Different guider types have different parameters; forwarding them through the pipeline doesn't scale. Don't manually set `components.guider.guidance_scale = ...` inside blocks. Same applies to computing `do_classifier_free_guidance` — that logic belongs in the guider.
+
+4. **Accepting pre-computed outputs as inputs to skip encoding.** In standard pipelines we accept `prompt_embeds`, `negative_prompt_embeds`, `image_latents`, etc. so users can skip encoding steps. In modular pipelines this is unnecessary — users just pop out the encoder block and run it separately. Encoder blocks should only accept raw inputs (`prompt`, `image`, etc.).
+
+5. **VAE encoding inside prepare-latents.** Image encoding should be its own block in `encoders.py` (e.g. `MyModelVaeEncoderStep`). The prepare-latents block should accept `image_latents`, not raw images. This lets users run encoding standalone. See `WanVaeEncoderStep` for reference.
+
+6. **Instantiating components inline.** If a class like `VideoProcessor` is needed, register it as a `ComponentSpec` and access via `components.video_processor`. Don't create new instances inside block `__call__`.
+
+7. **Deeply nested block structure.** Prefer flat sequences over nesting Auto blocks inside Sequential blocks inside Auto blocks. Put the `Auto` selection at the top level and make each workflow variant a flat `InsertableDict` of leaf blocks. See `flux2/modular_blocks_flux2_klein.py` for the pattern.
+
+8. **Using `InputParam.template()` / `OutputParam.template()` when semantics don't match.** Templates carry predefined descriptions — e.g. the `"latents"` output template means "Denoised latents". Don't use it for initial noisy latents from a prepare-latents step. Use a plain `InputParam(...)` / `OutputParam(...)` with an accurate description instead.
+
+9. **Test model paths pointing to contributor repos.** Tiny test models must live under `hf-internal-testing/`, not personal repos like `username/tiny-model`. Move the model before merge.
+
 ## Conversion checklist

 - [ ] Read original pipeline's `__call__` end-to-end, map stages
````
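The loop-wrapper dispatch described in the denoise-loop pattern above can be sketched without diffusers at all. The following is a hypothetical, framework-free illustration (all class names are invented stand-ins, not the real diffusers API): a wrapper iterates over timesteps and calls each sub-block with `(components, block_state, i, t)`.

```python
from dataclasses import dataclass, field

# Hypothetical stand-in for the diffusers block state; fields are invented.
@dataclass
class BlockState:
    latents: float = 1.0
    trace: list = field(default_factory=list)

class LoopBeforeDenoiser:
    def __call__(self, components, state, i, t):
        state.trace.append(("before", i, t))

class LoopDenoiser:
    def __call__(self, components, state, i, t):
        # Pretend "denoising": shrink the latents each step.
        state.latents *= 0.5
        state.trace.append(("denoise", i, t))

class LoopAfterDenoiser:
    def __call__(self, components, state, i, t):
        state.trace.append(("after", i, t))

class DenoiseLoopWrapper:
    """Mimics the LoopSequentialPipelineBlocks shape: run sub-blocks per timestep."""
    block_classes = [LoopBeforeDenoiser, LoopDenoiser, LoopAfterDenoiser]

    def __call__(self, components, state, timesteps):
        blocks = [cls() for cls in self.block_classes]
        for i, t in enumerate(timesteps):
            for block in blocks:
                block(components, state, i, t)  # (components, block_state, i, t)
        return state

state = DenoiseLoopWrapper()(components=None, state=BlockState(), timesteps=[999, 500, 0])
print(state.latents)  # 0.125 after three halving steps
```

Each timestep runs the full sub-block sequence in order, which is why popping or swapping one sub-block changes behavior at every iteration.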

.ai/review-rules.md

Lines changed: 1 addition & 1 deletion
```diff
@@ -5,7 +5,7 @@ Review-specific rules for Claude. Focus on correctness — style is handled by r
 Before reviewing, read and apply the guidelines in:
 - [AGENTS.md](AGENTS.md) — coding style, copied code
 - [models.md](models.md) — model conventions, attention pattern, implementation rules, dependencies, gotchas
-- [skills/model-integration/modular-conversion.md](skills/model-integration/modular-conversion.md) — modular pipeline patterns, block structure, key conventions
+- [modular.md](modular.md) — modular pipeline conventions, patterns, common mistakes
 - [skills/parity-testing/SKILL.md](skills/parity-testing/SKILL.md) — testing rules, comparison utilities
 - [skills/parity-testing/pitfalls.md](skills/parity-testing/pitfalls.md) — known pitfalls (dtype mismatches, config assumptions, etc.)
```

.ai/skills/model-integration/SKILL.md

Lines changed: 1 addition & 1 deletion
```diff
@@ -82,7 +82,7 @@ See [../../models.md](../../models.md) for the attention pattern, implementation
 ## Modular Pipeline Conversion

-See [modular-conversion.md](modular-conversion.md) for the full guide on converting standard pipelines to modular format, including block types, build order, guider abstraction, and conversion checklist.
+See [modular.md](../../modular.md) for the full guide on modular pipeline conventions, block types, build order, guider abstraction, gotchas, and conversion checklist.

 ---
```

.github/workflows/upload_pr_documentation.yml

Lines changed: 1 addition & 1 deletion
```diff
@@ -8,7 +8,7 @@ on:
 jobs:
   build:
-    uses: huggingface/doc-builder/.github/workflows/upload_pr_documentation.yml@90b4ee2c10b81b5c1a6367c4e6fc9e2fb510a7e3 # main
+    uses: huggingface/doc-builder/.github/workflows/upload_pr_documentation.yml@9ad2de8582b56c017cb530c1165116d40433f1c6 # main
     with:
       package_name: diffusers
     secrets:
```

docs/source/en/_toctree.yml

Lines changed: 2 additions & 0 deletions
```diff
@@ -490,6 +490,8 @@
 - sections:
   - local: api/pipelines/audioldm2
     title: AudioLDM 2
+  - local: api/pipelines/longcat_audio_dit
+    title: LongCat-AudioDiT
   - local: api/pipelines/stable_audio
     title: Stable Audio
   title: Audio
```
docs/source/en/api/pipelines/longcat_audio_dit.md (new file)

Lines changed: 61 additions & 0 deletions

````md
<!--Copyright 2026 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->

# LongCat-AudioDiT

LongCat-AudioDiT is a text-to-audio diffusion model from Meituan LongCat. The diffusers integration exposes a standard [`DiffusionPipeline`] interface for text-conditioned audio generation.

This pipeline supports loading the original flat LongCat checkpoint layout from either a local directory or a Hugging Face Hub repository containing:

- `config.json`
- `model.safetensors`

The loader builds the text encoder, transformer, and VAE from `config.json`, restores component weights from `model.safetensors`, and ties the shared UMT5 embedding when needed.

This pipeline was adapted from the LongCat-AudioDiT reference implementation: https://github.com/meituan-longcat/LongCat-AudioDiT

## Usage

```py
import soundfile as sf
import torch
from diffusers import LongCatAudioDiTPipeline

pipeline = LongCatAudioDiTPipeline.from_pretrained(
    "meituan-longcat/LongCat-AudioDiT-1B",
    torch_dtype=torch.float16,
)
pipeline = pipeline.to("cuda")

audio = pipeline(
    prompt="A calm ocean wave ambience with soft wind in the background.",
    audio_end_in_s=5.0,
    num_inference_steps=16,
    guidance_scale=4.0,
    output_type="pt",
).audios

output = audio[0, 0].float().cpu().numpy()
sf.write("longcat.wav", output, pipeline.sample_rate)
```

## Tips

- `audio_end_in_s` is the most direct way to control output duration.
- `output_type="pt"` returns a PyTorch tensor shaped `(batch, channels, samples)`.

## LongCatAudioDiTPipeline

[[autodoc]] LongCatAudioDiTPipeline
	- all
	- __call__
	- from_pretrained
````

docs/source/en/api/pipelines/overview.md

Lines changed: 1 addition & 0 deletions
```diff
@@ -29,6 +29,7 @@ The table below lists all the pipelines currently available in 🤗 Diffusers an
 |---|---|
 | [AnimateDiff](animatediff) | text2video |
 | [AudioLDM2](audioldm2) | text2audio |
+| [LongCat-AudioDiT](longcat_audio_dit) | text2audio |
 | [AuraFlow](aura_flow) | text2image |
 | [Bria 3.2](bria_3_2) | text2image |
 | [CogVideoX](cogvideox) | text2video |
```

examples/advanced_diffusion_training/train_dreambooth_lora_flux_advanced.py

Lines changed: 9 additions & 9 deletions
```diff
@@ -895,19 +895,16 @@ def initialize_new_tokens(self, inserting_toks: List[str]):
             self.train_ids_t5 = tokenizer.convert_tokens_to_ids(self.inserting_toks)

             # random initialization of new tokens
-            embeds = (
-                text_encoder.text_model.embeddings.token_embedding if idx == 0 else text_encoder.encoder.embed_tokens
-            )
+            text_module = text_encoder.text_model if hasattr(text_encoder, "text_model") else text_encoder
+            embeds = text_module.embeddings.token_embedding if idx == 0 else text_encoder.encoder.embed_tokens
             std_token_embedding = embeds.weight.data.std()

             logger.info(f"{idx} text encoder's std_token_embedding: {std_token_embedding}")

             train_ids = self.train_ids if idx == 0 else self.train_ids_t5
             # if initializer_concept are not provided, token embeddings are initialized randomly
             if args.initializer_concept is None:
-                hidden_size = (
-                    text_encoder.text_model.config.hidden_size if idx == 0 else text_encoder.encoder.config.hidden_size
-                )
+                hidden_size = text_module.config.hidden_size if idx == 0 else text_encoder.encoder.config.hidden_size
                 embeds.weight.data[train_ids] = (
                     torch.randn(len(train_ids), hidden_size).to(device=self.device).to(dtype=self.dtype)
                     * std_token_embedding

@@ -940,7 +937,8 @@ def save_embeddings(self, file_path: str):
         idx_to_text_encoder_name = {0: "clip_l", 1: "t5"}
         for idx, text_encoder in enumerate(self.text_encoders):
             train_ids = self.train_ids if idx == 0 else self.train_ids_t5
-            embeds = text_encoder.text_model.embeddings.token_embedding if idx == 0 else text_encoder.shared
+            text_module = text_encoder.text_model if hasattr(text_encoder, "text_model") else text_encoder
+            embeds = text_module.embeddings.token_embedding if idx == 0 else text_encoder.shared
             assert embeds.weight.data.shape[0] == len(self.tokenizers[idx]), "Tokenizers should be the same."
             new_token_embeddings = embeds.weight.data[train_ids]

@@ -962,7 +960,8 @@
     @torch.no_grad()
     def retract_embeddings(self):
         for idx, text_encoder in enumerate(self.text_encoders):
-            embeds = text_encoder.text_model.embeddings.token_embedding if idx == 0 else text_encoder.shared
+            text_module = text_encoder.text_model if hasattr(text_encoder, "text_model") else text_encoder
+            embeds = text_module.embeddings.token_embedding if idx == 0 else text_encoder.shared
             index_no_updates = self.embeddings_settings[f"index_no_updates_{idx}"]
             embeds.weight.data[index_no_updates] = (
                 self.embeddings_settings[f"original_embeddings_{idx}"][index_no_updates]

@@ -2112,7 +2111,8 @@ def get_sigmas(timesteps, n_dim=4, dtype=torch.float32):
         if args.train_text_encoder:
             text_encoder_one.train()
             # set top parameter requires_grad = True for gradient checkpointing works
-            unwrap_model(text_encoder_one).text_model.embeddings.requires_grad_(True)
+            _te_one = unwrap_model(text_encoder_one)
+            (_te_one.text_model if hasattr(_te_one, "text_model") else _te_one).embeddings.requires_grad_(True)
         elif args.train_text_encoder_ti:  # textual inversion / pivotal tuning
             text_encoder_one.train()
             if args.enable_t5_ti:
```
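The `hasattr(text_encoder, "text_model")` guard this diff introduces handles two encoder layouts: CLIP-style text encoders wrap their embeddings in a `.text_model` submodule, while T5-style encoders expose them directly. A minimal, framework-free illustration of the guard, using invented dummy classes rather than the real transformers models:

```python
class Embeddings:
    def __init__(self, size):
        self.size = size

class ClipLikeInner:
    def __init__(self):
        self.embeddings = Embeddings(768)

class ClipLikeEncoder:
    # CLIP-style: embeddings live under a .text_model submodule.
    def __init__(self):
        self.text_model = ClipLikeInner()

class T5LikeEncoder:
    # T5-style: embeddings hang directly off the encoder.
    def __init__(self):
        self.embeddings = Embeddings(4096)

def get_text_module(text_encoder):
    # Same guard as in the diff: unwrap .text_model when it exists.
    return text_encoder.text_model if hasattr(text_encoder, "text_model") else text_encoder

print(get_text_module(ClipLikeEncoder()).embeddings.size)  # 768
print(get_text_module(T5LikeEncoder()).embeddings.size)    # 4096
```

Binding the result once (`text_module = ...`), as the diff does, avoids repeating the conditional at every attribute access.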

examples/advanced_diffusion_training/train_dreambooth_lora_sd15_advanced.py

Lines changed: 33 additions & 13 deletions
```diff
@@ -763,19 +763,28 @@ def initialize_new_tokens(self, inserting_toks: List[str]):
             self.train_ids = tokenizer.convert_tokens_to_ids(self.inserting_toks)

             # random initialization of new tokens
-            std_token_embedding = text_encoder.text_model.embeddings.token_embedding.weight.data.std()
+            std_token_embedding = (
+                text_encoder.text_model if hasattr(text_encoder, "text_model") else text_encoder
+            ).embeddings.token_embedding.weight.data.std()

             print(f"{idx} text encoder's std_token_embedding: {std_token_embedding}")

-            text_encoder.text_model.embeddings.token_embedding.weight.data[self.train_ids] = (
-                torch.randn(len(self.train_ids), text_encoder.text_model.config.hidden_size)
+            (
+                text_encoder.text_model if hasattr(text_encoder, "text_model") else text_encoder
+            ).embeddings.token_embedding.weight.data[self.train_ids] = (
+                torch.randn(
+                    len(self.train_ids),
+                    (
+                        text_encoder.text_model if hasattr(text_encoder, "text_model") else text_encoder
+                    ).config.hidden_size,
+                )
                 .to(device=self.device)
                 .to(dtype=self.dtype)
                 * std_token_embedding
             )
             self.embeddings_settings[f"original_embeddings_{idx}"] = (
-                text_encoder.text_model.embeddings.token_embedding.weight.data.clone()
-            )
+                text_encoder.text_model if hasattr(text_encoder, "text_model") else text_encoder
+            ).embeddings.token_embedding.weight.data.clone()
             self.embeddings_settings[f"std_token_embedding_{idx}"] = std_token_embedding

             inu = torch.ones((len(tokenizer),), dtype=torch.bool)

@@ -794,10 +803,14 @@ def save_embeddings(self, file_path: str):
         # text_encoder_0 - CLIP ViT-L/14, text_encoder_1 - CLIP ViT-G/14 - TODO - change for sd
         idx_to_text_encoder_name = {0: "clip_l", 1: "clip_g"}
         for idx, text_encoder in enumerate(self.text_encoders):
-            assert text_encoder.text_model.embeddings.token_embedding.weight.data.shape[0] == len(
-                self.tokenizers[0]
-            ), "Tokenizers should be the same."
-            new_token_embeddings = text_encoder.text_model.embeddings.token_embedding.weight.data[self.train_ids]
+            assert (
+                text_encoder.text_model if hasattr(text_encoder, "text_model") else text_encoder
+            ).embeddings.token_embedding.weight.data.shape[0] == len(self.tokenizers[0]), (
+                "Tokenizers should be the same."
+            )
+            new_token_embeddings = (
+                text_encoder.text_model if hasattr(text_encoder, "text_model") else text_encoder
+            ).embeddings.token_embedding.weight.data[self.train_ids]

             # New tokens for each text encoder are saved under "clip_l" (for text_encoder 0), "clip_g" (for
             # text_encoder 1) to keep compatible with the ecosystem.

@@ -819,7 +832,9 @@
     def retract_embeddings(self):
         for idx, text_encoder in enumerate(self.text_encoders):
             index_no_updates = self.embeddings_settings[f"index_no_updates_{idx}"]
-            text_encoder.text_model.embeddings.token_embedding.weight.data[index_no_updates] = (
+            (
+                text_encoder.text_model if hasattr(text_encoder, "text_model") else text_encoder
+            ).embeddings.token_embedding.weight.data[index_no_updates] = (
                 self.embeddings_settings[f"original_embeddings_{idx}"][index_no_updates]
                 .to(device=text_encoder.device)
                 .to(dtype=text_encoder.dtype)

@@ -830,11 +845,15 @@
             std_token_embedding = self.embeddings_settings[f"std_token_embedding_{idx}"]

             index_updates = ~index_no_updates
-            new_embeddings = text_encoder.text_model.embeddings.token_embedding.weight.data[index_updates]
+            new_embeddings = (
+                text_encoder.text_model if hasattr(text_encoder, "text_model") else text_encoder
+            ).embeddings.token_embedding.weight.data[index_updates]
             off_ratio = std_token_embedding / new_embeddings.std()

             new_embeddings = new_embeddings * (off_ratio**0.1)
-            text_encoder.text_model.embeddings.token_embedding.weight.data[index_updates] = new_embeddings
+            (
+                text_encoder.text_model if hasattr(text_encoder, "text_model") else text_encoder
+            ).embeddings.token_embedding.weight.data[index_updates] = new_embeddings


 class DreamBoothDataset(Dataset):

@@ -1704,7 +1723,8 @@ def compute_text_embeddings(prompt, text_encoders, tokenizers):
         text_encoder_one.train()
         # set top parameter requires_grad = True for gradient checkpointing works
         if args.train_text_encoder:
-            text_encoder_one.text_model.embeddings.requires_grad_(True)
+            _te_one = text_encoder_one
+            (_te_one.text_model if hasattr(_te_one, "text_model") else _te_one).embeddings.requires_grad_(True)

     unet.train()
     for step, batch in enumerate(train_dataloader):
```
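The `off_ratio**0.1` rescale in `retract_embeddings` above gently pulls the trained embeddings' standard deviation back toward the original token-embedding std, rather than snapping it exactly (a power of 1.0 would match the target std in one step). A simplified, framework-free sketch of that step, using plain floats and `statistics.pstdev` in place of tensors:

```python
import statistics

def retract_std(new_embeddings, std_token_embedding, power=0.1):
    """Rescale values so their std drifts toward the target std.

    power=1.0 would match the target exactly; power=0.1 (as in the
    training script) applies only a tenth-strength correction per call.
    """
    off_ratio = std_token_embedding / statistics.pstdev(new_embeddings)
    scale = off_ratio ** power
    return [v * scale for v in new_embeddings]

values = [2.0, -2.0, 4.0, -4.0]  # population std = sqrt(10) ≈ 3.162
adjusted = retract_std(values, std_token_embedding=1.0)
# One call shrinks the std from 10**0.5 to 10**0.45 ≈ 2.818;
# repeated calls converge toward the target of 1.0.
print(round(statistics.pstdev(adjusted), 3))  # 2.818
```

The weak exponent keeps the trained token embeddings from drifting to an unusually large or small norm over many optimizer steps without hard-resetting them each time.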
