```python
# Models
# ──────────────────────────────────────────────
student_model = AutoModelForImageTextToText.from_pretrained(cli_args.student_model_name, dtype=torch.bfloat16)
teacher_model = AutoModelForImageTextToText.from_pretrained(cli_args.teacher_model_name, dtype=torch.bfloat16)
```
Example script uses wrong dtype parameter name
Low Severity
AutoModelForImageTextToText.from_pretrained is called with dtype=torch.bfloat16 instead of the correct torch_dtype=torch.bfloat16. The dtype kwarg is not a recognized parameter for from_pretrained, so the models will silently load in their default precision (float32) instead of bfloat16, increasing memory usage and potentially causing dtype mismatches during training.
Reviewed by Cursor Bugbot for commit 9a1f345.
Not a bug: `torch_dtype` is the deprecated name (everybody knows this warning).
Maybe I should add version checking, like here
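A version check like the one suggested could be sketched as follows. The helper name and the 4.56 cutover version are illustrative assumptions, not verified values; check the transformers release notes for the actual release where `dtype` replaced `torch_dtype` in `from_pretrained`:

```python
def dtype_kwargs(transformers_version: str, dtype_value):
    """Pick the right from_pretrained keyword for the installed transformers.

    Hypothetical helper: newer releases accept `dtype`, older ones expect the
    deprecated `torch_dtype`. The (4, 56) cutover below is an illustrative
    assumption, not a verified version number.
    """
    major, minor = (int(x) for x in transformers_version.split(".")[:2])
    if (major, minor) >= (4, 56):
        return {"dtype": dtype_value}
    return {"torch_dtype": dtype_value}


# Usage (sketch):
# kwargs = dtype_kwargs(transformers.__version__, torch.bfloat16)
# model = AutoModelForImageTextToText.from_pretrained(name, **kwargs)
```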
Cursor Bugbot has reviewed your changes and found 1 potential issue.
There are 3 total unresolved issues (including 2 from previous reviews).
Reviewed by Cursor Bugbot for commit 7c96055.
@kashif @qgallouedec I think you might be interested in this PR; looking forward to hearing from you.


What does this PR do?
Adds VLM support to `GOLDTrainer`.
Motivation
The GOLD algorithm has no theoretical constraints against VLM-to-VLM distillation -- the barriers were purely engineering (incompatible image token formats, different tokenizers, raw image handling through the dataloader).
Key changes
- `_teacher_processor` is stored and used in `compute_loss` to build teacher-compatible vision tensors from raw images
- `teacher_tokenizer_name_or_path` is auto-resolved for ULD
- `examples/scripts/gold_vlm.py` with two documented usage examples (same-family JSD + vLLM, cross-family ULD)

Note
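The raw-image handling relies on the dataloader passing examples through untouched. A minimal sketch of that identity-collator idea (an illustration of the assumed design, not the PR's actual code):

```python
def identity_collator(features):
    """Pass raw examples through the dataloader untouched.

    Instead of stacking tensors at collation time, the batch stays a list of
    raw dicts (including PIL images), so each processor can build its own
    model-specific tensors later, on the fly inside the training step.
    """
    return features
```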
Looking for feedback:
- The buffer-filling logic (`_fill_buffer`), the overall design choice of two different collators, and the two separate generation flows (`_generate_on_policy_vlm_raw` vs `_generate_on_policy_for_slices`). Would appreciate feedback from anyone with more experience in this area.
- Documentation in `docs/source/gold_trainer.md` -- will add if that's desirable, just let me know.

Before submitting
AI writing disclosure
We welcome the use of AI tools to help with contributions. For transparency and to help us improve our review process, please indicate the level of AI involvement in this PR.
Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.
Note
Medium Risk
Adds new VLM-specific collation, buffering, and generation paths (including vLLM) plus cross-architecture teacher processing, which changes core training-loop behavior and could affect distillation correctness/performance on multimodal datasets.
Overview
- Enables vision-language distillation in `GOLDTrainer` by detecting vision datasets, validating VLM student/teacher compatibility, and switching to a VLM-aware pipeline that preserves raw images through the dataloader (identity collator + on-the-fly collation).
- Adds `DataCollatorForVisionLanguageChatML` and updates training/generation (`_fill_buffer`, new `_generate_on_policy_vlm_raw`, multimodal forward kwargs, prompt-length handling) to support both same-architecture JSD and cross-architecture ULD, where the teacher can re-process images via a stored `_teacher_processor`.
- Extends config defaults (`remove_unused_columns=False`), auto-resolves the teacher tokenizer for ULD, adds a runnable `examples/scripts/gold_vlm.py`, and significantly expands test coverage for VLM collation, init validation, cross-architecture behavior, and VLM+vLLM integration.

Reviewed by Cursor Bugbot for commit fd3be85.
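The cross-architecture teacher re-processing described above could look roughly like this. Function and field names are hypothetical; the real trainer works with the stored `_teacher_processor` and HF processor outputs:

```python
def collate_for_both(examples, student_processor, teacher_processor):
    """On-the-fly dual collation sketch for cross-family distillation (ULD).

    Both sides see the same raw images and text, but each processor builds its
    own tensors, so the student and teacher may use different image-token
    formats and tokenizers.
    """
    images = [ex["image"] for ex in examples]
    texts = [ex["text"] for ex in examples]
    return student_processor(images, texts), teacher_processor(images, texts)
```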