TwineGPT is an AI-assisted toolchain that turns course materials and learning objectives into instructor-reviewable Twine quest scenarios for scenario-based learning (SBL). From slides or notes plus objectives, its Streamlit web UI plans the quest structure, instantiates reusable patterns, exports Twine HTML, and optionally converts the result to H5P branching scenarios, all built for fast instructor review.
Create pedagogically sound choose-your-own-adventure quests from course materials using large language models, with built-in graph validity and multi-format export.
- Structured quest planning (nodes, choices, feedback, metadata)
- Pattern-based instantiation (inventory, matching, strategy shifts, etc.)
- Twine export (HTML)
- Optional H5P branching scenario export
- Instructor review workflow before publishing
✨ Core Capabilities
- LLM-powered branching narrative generation — Creates true branching storylines with meaningful decision points and consequences
- Multiple generation modes — Choose between guided (pattern-constrained), direct (free-form), or skeleton (deterministic) generation
- Pattern-based quality constraints — Ensures pedagogical soundness without rigid structure
- Graph validity by construction — All nodes reachable, no dead ends, proper feedback loops
- Multi-format export — SugarCube Twee, JSON quest plans, H5P Branching Scenarios
- H5P integration — Direct export to LMS-compatible format (Canvas, Moodle, Blackboard)
- Automated validation — Comprehensive metrics for graph structure, pedagogy, and compatibility
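"Graph validity by construction" means every node is reachable from the start and no non-ending node lacks an outgoing choice. A minimal sketch of such a check (the dict shape and node names are illustrative, not TwineGPT's internal representation):

```python
from collections import deque

def check_graph(nodes: dict[str, list[str]], start: str = "Start"):
    """nodes maps node id -> list of choice targets."""
    # Breadth-first search from the start node to find reachable nodes.
    seen = {start}
    queue = deque([start])
    while queue:
        for target in nodes[queue.popleft()]:
            if target not in seen:
                seen.add(target)
                queue.append(target)
    unreachable = set(nodes) - seen
    # A dead end is a non-ending node with no outgoing choices.
    dead_ends = [n for n, targets in nodes.items()
                 if not targets and not n.startswith("End")]
    return unreachable, dead_ends

quest = {"Start": ["Forest", "Cave"], "Forest": ["End"],
         "Cave": ["End"], "End": [], "Orphan": ["End"]}
print(check_graph(quest))  # ({'Orphan'}, [])
```

The web UI's unreachable-node highlighting reflects the same idea: any node outside the reachable set is flagged for the instructor before export.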
🎮 Web UI (Streamlit)
- Drag-and-drop material upload (PDF, TXT, MD) with multi-file support
- AI-powered learning objective extraction
- Interactive node editor
- Real-time graph visualization with unreachable node highlighting
- Validation metrics dashboard
- One-click export to Twee, JSON, or H5P
🔧 CLI Toolchain
- `generate` — Create quest plans from materials and objectives
- `export-h5p` — Convert existing quest plans to H5P packages
- `rq3-sweep` — Research tool for motivation preset evaluation
Setup environment:

```bash
# Create virtual environment
python3 -m venv venv
source venv/bin/activate

# Install dependencies
pip install -r requirements.txt

# Configure API keys (.env file)
cat > .env << EOF
# For OpenAI
OPENAI_API_KEY=sk-...
OPENAI_MODEL=gpt-5.2-chat-latest

# For Ollama (local)
OLLAMA_BASE_URL=http://localhost:11434/v1
OLLAMA_MODEL=gpt-oss
EOF
```

Start the Streamlit app:

```bash
streamlit run app.py
```

Then open http://localhost:8501 and:
- Upload course materials (PDF, TXT, MD — multiple files OK)
- Enter or auto-generate learning objectives
- Click "Generate Quest Plan"
- Review, edit, and validate the generated quest
- Export as Twee, JSON, or H5P package
Details on each generation mode follow below.
TwineGPT supports three generation modes, each with different tradeoffs:
### Guided Mode (default)

**Best for:** Most use cases, paper-aligned branching narratives

Pattern-constrained but structurally flexible generation: the LLM designs the branching structure freely while adhering to pedagogical quality constraints.
Characteristics:
- ✅ True branching storylines with multiple paths
- ✅ Patterns serve as quality constraints, not rigid templates
- ✅ 2-4 choices per decision point
- ✅ Iterative refinement ensures graph validity
- ✅ Balances creative freedom with pedagogical rigor
Usage:
```bash
python -m twinegpt generate \
  --mode guided \
  --materials input/course.md \
  --objectives-json '["LO1","LO2"]' \
  --motivation-preset achievement \
  --target-nodes 18 \
  --max-branching 3
```

### Direct Mode

**Best for:** Maximum creative freedom, research experiments
The LLM generates the entire quest structure from scratch with minimal constraints. Requires LLM self-repair for graph validity.
Characteristics:
- ✅ Complete creative freedom
- ✅ LLM determines all structure
- ⚠️ May require multiple repair passes
- ⚠️ Less predictable output structure
Usage:
```bash
python -m twinegpt generate \
  --mode direct \
  --materials input/course.md \
  --objectives-json '["LO1","LO2"]'
```

### Skeleton Mode (`skeleton_fill`)

**Best for:** Controlled experiments, deterministic output
Deterministic structure generation followed by LLM content filling. Most predictable but least flexible.
Characteristics:
- ✅ Guaranteed graph validity by construction
- ✅ Deterministic structure (reproducible)
- ✅ Good for parameter sweeps and experiments
- ⚠️ Limited branching flexibility
- ⚠️ Pre-determined node sequence
Usage:
```bash
python -m twinegpt generate \
  --mode skeleton_fill \
  --materials input/course.md \
  --objectives-json '["LO1","LO2"]' \
  --motivation-preset immersion
```

Full example with all common options:

```bash
python -m twinegpt generate \
  --materials input/course_material.md \
  --objectives-json '["LO1: Understand X","LO2: Apply Y"]' \
  --misconceptions-json '["Common error A","Misconception B"]' \
  --provider openai \
  --mode guided \
  --motivation-preset achievement \
  --target-nodes 18 \
  --max-branching 3 \
  --out-plan quest.json \
  --out-twee quest.twee
```

Parameters:
- `--mode` — Generation mode: `guided` (default), `direct`, or `skeleton_fill`
- `--motivation-preset` — Narrative style: `achievement`, `immersion`, or `social`
- `--target-nodes` — Approximate node count (guided mode only, default: 18)
- `--max-branching` — Max choices per decision (guided mode only, default: 3)
- `--provider` — LLM provider: `openai`, `ollama_compat`, or `ollama_native`
```bash
python -m twinegpt export-h5p \
  --plan-json quest.json \
  --output-h5p quest.h5p \
  --output-report compatibility.md
```

Then upload `quest.h5p` directly to Canvas, Moodle, or any H5P-compatible platform.
```text
Materials (PDF/TXT/MD)
        ↓
  [LLM Planning]
        ↓
 QuestPlan (JSON)
   ├─→ SugarCube Twee (interactive fiction)
   ├─→ H5P Branching (LMS deployment)
   └─→ Validation Report (metrics & warnings)
```
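The flow above can be sketched as Python stages. These stubs only mirror the data flow; the real implementations (with different signatures) live in `planner.py`, `compiler_twee.py`, `h5p/export.py`, and `validate.py`:

```python
def plan_quest(materials: str, objectives: list[str]) -> dict:
    # [LLM Planning]: in TwineGPT this step calls the configured LLM provider.
    return {"nodes": {"Start": ["End"], "End": []}, "objectives": objectives}

def to_twee(plan: dict) -> str:
    # SugarCube Twee export: one ":: Passage" header per node (simplified).
    return "\n\n".join(f":: {name}" for name in plan["nodes"])

def validate(plan: dict) -> dict:
    # Validation report, reduced here to a single trivial metric.
    return {"node_count": len(plan["nodes"])}

plan = plan_quest("input/course.md", ["LO1"])
print(to_twee(plan))
print(validate(plan))  # {'node_count': 2}
```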
| Module | Purpose |
|---|---|
| `planner.py` | LLM-based quest outline & skeleton generation |
| `patterns.py` | Pattern library compiler (frames, loops, rewards) |
| `compiler_twee.py` | Twee story format export |
| `h5p/export.py` | H5P Branching Scenario conversion |
| `validate.py` | Graph validation & pedagogy metrics |
| `ui/graphviz.py` | Story graph visualization |
| `providers/` | LLM integrations (OpenAI, Ollama) |
Additional `.env` option:

```bash
MAX_NODES=30
```
### Streamlit Settings (sidebar)
When running the web UI, configure via the left sidebar:
- **Provider** — OpenAI, Ollama Compatible, or Ollama Native
- **Model** — Override default model name
- **Ollama Base URL** — Endpoint for local LLM
- **Motivation Preset** — achievement, immersion, or social (shapes narrative style)
- **Enable Twine Variables** — Inventory, scoring hooks (disable for H5P compatibility)
- **Enable Reward Reveals** — Visual feedback layers
## Export Formats
### SugarCube Twee (.twee)
Standard Twine 2 format. Import into Twine editor for further customization and HTML export.
**Features:**
- Full inventory system (if vars enabled)
- Scoring/achievement tracking
- Reward layer reveals
- SugarCube macros for advanced interactions
**Example:**
```twee
:: Start [start]
You begin your journey...
[[Continue|Node1]]
:: Node1
...
```

### Quest Plan JSON (.json)

Complete internal representation of the quest structure.
Includes:
- All nodes with IDs, text, kind (start/encounter/debrief/etc.)
- Choice graph with links and conditional gating
- Inventory item requirements
- Reward definitions
- Learning objective mappings
Use case: Re-import for editing or feed to custom tooling.
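For illustration, a tiny plan with those fields might look like the following; the key names are hypothetical, not TwineGPT's actual JSON schema:

```python
# Hypothetical QuestPlan fragment; key names are illustrative only.
quest_plan = {
    "nodes": [
        {"id": "start", "kind": "start", "text": "You arrive at the lab.",
         "choices": [
             {"label": "Inspect the samples", "target": "encounter1"},
             {"label": "Read the manual", "target": "tutorial1",
              "requires_item": None},  # inventory gating slot
         ]},
        {"id": "debrief", "kind": "debrief",
         "text": "What did the failed assay teach you?",
         "objectives": ["LO1"], "choices": []},
    ],
    "rewards": [{"id": "badge1", "unlocks": "bonus_reading"}],
}
# Learning objective mapping: node id -> objectives covered.
lo_map = {n["id"]: n.get("objectives", []) for n in quest_plan["nodes"]}
print(lo_map)  # {'start': [], 'debrief': ['LO1']}
```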
### H5P Branching Scenario (.h5p)

**Format:** ZIP archive containing H5P Branching Scenario v1.3+

**Deployment:** Upload directly to:
- Canvas LMS
- Moodle
- Blackboard
- H5P.org
**Compatibility Report:** Automatically generated with warnings about:
- Unsupported features (e.g., complex inventory gating → converted with notes)
- Scoring limitations (H5P has no weighted scoring)
- Feature coverage percentage
The built-in validator checks:
| Metric | Description |
|---|---|
| Graph Valid | All nodes reachable; no dead ends (except intended) |
| Choice Density | Avg. choices per 100 words (optimal: 2–4) |
| Micro-Feedback Cadence | Avg. steps between feedback/branching |
| Debrief Coverage | % of paths that end with reflection node |
| SBL Completeness | Scenario frame + tutorial + examples present |
| Inventory Events | Count of item-gating choices |
| Reward Nodes | Unlocked content/feedback elements |
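As a concrete example, the Choice Density metric as described (choices per 100 words of node text) can be computed like this; the data shape is illustrative:

```python
def choice_density(nodes: list[dict]) -> float:
    """Choices per 100 words across all node text (0.0 for empty quests)."""
    words = sum(len(n["text"].split()) for n in nodes)
    choices = sum(len(n["choices"]) for n in nodes)
    return 100 * choices / words if words else 0.0

# 70 words of prose with 2 choices -> ~2.9, inside the optimal 2-4 band.
nodes = [{"text": "You face a fork in the corridor. " * 10,
          "choices": ["left", "right"]}]
print(round(choice_density(nodes), 1))  # 2.9
```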
Use `--mode skeleton_fill` for guaranteed graph validity:
```bash
python -m twinegpt generate \
  --materials input/materials.md \
  --objectives-json '["LO1","LO2"]' \
  --provider openai \
  --mode skeleton_fill \
  --motivation-preset immersion \
  --out-plan plan.json \
  --out-twee story.twee
```

This mode:
- Generates a QuestOutline (pedagogical structure)
- Creates a deterministic, graph-valid skeleton
- Asks LLM to fill node text without changing structure
Result: Guaranteed correct graph, human-curated narrative.
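The shape of a graph-valid-by-construction skeleton can be sketched as follows (node naming and layout are illustrative; TwineGPT's actual generator lives in `planner.py` and may differ):

```python
def build_skeleton(n_encounters: int = 3, branching: int = 2) -> dict:
    """A spine of encounter nodes whose choices all rejoin the spine,
    ending in a debrief -- so every node is reachable and nothing
    dead-ends except the intended ending."""
    nodes = {"start": ["enc1"]}
    for i in range(1, n_encounters + 1):
        nxt = f"enc{i + 1}" if i < n_encounters else "debrief"
        # Each choice leads to a feedback node that rejoins the spine.
        choices = [f"enc{i}_fb{c}" for c in range(branching)]
        nodes[f"enc{i}"] = choices
        for c in choices:
            nodes[c] = [nxt]
    nodes["debrief"] = []  # intended ending node
    return nodes

skeleton = build_skeleton()
print(len(skeleton))  # start + 3 encounters + 6 feedback + debrief = 11
```

The LLM then fills in node text for this fixed structure, which is why the result is reproducible for parameter sweeps.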
## Troubleshooting

**LLM provider errors:**

- Ensure API keys are set in `.env`
- For Ollama, verify it's running: `ollama serve` (port 11434)
- Check the base URL format:
  - Ollama compatible: `http://localhost:11434/v1`
  - Ollama native: `http://localhost:11434/api`

**H5P export issues:**

- Verify the `.h5p` file size is > 0 KB
- Check the compatibility report for unsupported features
- Some patterns (complex inventory) may not convert perfectly

**Graph validation failures:**

- Use `--mode skeleton_fill` for guaranteed validity
- Or enable validation in the web UI before export
If you use TwineGPT in research or educational practice, please cite the associated HCI International 2026 paper (accepted for publication; to be presented at the 28th International Conference on Human-Computer Interaction, 26–31 July 2026, Montreal, Canada):
```bibtex
@inproceedings{loebel_twinegpt2026,
  title={Twine and Generative AI in Interactive Learning Ecosystems: A Framework for AI-Assisted Narrative Game Design in Higher Education},
  author={Jens-Martin Loebel},
  booktitle={Proceedings of the 28th International Conference on Human-Computer Interaction},
  year={2026}
}
```

Licensed under Apache-2.0; see [LICENSE].
Contributions welcome! Please open issues or PRs for:
- New LLM provider integrations
- Additional export formats
- Improved validation metrics
- UI/UX enhancements
- Bug fixes and codebase improvements
- Research questions and collaborations