# paper-intake-router

🧭 A paper workflow core for turning vague paper requests into structured, evidence-aware, figure-aware, citation-consistent deliverables.
- ✨ What it is for
- 🚫 What it is not for
- ⚡ 60-second path
- 🧩 Start here by environment
- 🗺 Workflow artifact map
- 🔧 What the core covers
- 🚀 Installation
- 🛠 Unified CLI
- 🧰 More detailed usage
- 🔑 External services
- 🧪 Examples and validation
- 🗂 Repository layout
- License
## ✨ What it is for
paper-intake-router is a paper workflow core, not a generic AI paper writer.
It is built for academic work where the hard part is not only drafting paragraphs, but also turning a vague request into a stable workflow with task normalization, figure/table planning, citation discipline, and deliverable-aware rendering.
Use it when you need an agent or local workflow to do things like:
- normalize a paper request into a structured task sheet
- decide what the next stage should be before drafting starts
- plan figures and tables before the draft drifts
- keep figure references and numbering consistent
- render citations in a predictable final style
- keep intermediate artifacts organized in a task workspace
## 🚫 What it is not for
It is not a promise of:
- school-specific compliance without the real official template
- real experimental validity
- trustworthy references invented from nowhere
- submission readiness without human review
- one-shot “write my whole thesis perfectly” behavior
If that boundary matters for your use case, read references/capability-boundaries.md early.
## ⚡ 60-second path
Input: `examples/intake.json`

Run:
```bash
python3 scripts/paper_router.py build-task -- \
  --input examples/intake.json \
  --out-json /tmp/task.json

python3 scripts/paper_router.py init-workspace -- \
  --base-dir /tmp/paper-runs \
  --task /tmp/task.json \
  --out-json /tmp/workspace.json

python3 scripts/paper_router.py build-figure-plan -- \
  --task /tmp/task.json \
  --out-json /tmp/figure-plan.json

python3 scripts/paper_router.py smoke-test
```

Output:

- `/tmp/task.json`
- `/tmp/workspace.json`
- `/tmp/figure-plan.json`
- a smoke-tested local chain ending in a rendered draft with citations
If you prefer raw scripts, the same workflow is available under scripts/.
## 🧩 Start here by environment
This repository is designed first for the OpenClaw ecosystem.
Use it when you want a paper-oriented workflow core behind an OpenClaw Skill, especially when the agent needs more than “generate text”: intake routing, artifact planning, and stable citation handling.
You can run the core workflow locally with Python only.
The local chain covers task normalization, task workspace initialization, figure/table planning, validation, autofix, and citation rendering.
It can be adapted to Claude Code, OpenCode, or similar runtimes, but it is not guaranteed to work out of the box.
Expect to adapt:
- runtime assumptions
- workspace and path conventions
- upstream search and evidence backends
- tool wiring and invocation glue
## 🗺 Workflow artifact map
The core artifact flow looks like this:
```
intake.json
  → task.json
  → workspace.json
  → figure-plan.json
  → fixed.md / validation.json
  → final.md
```
What each artifact means:
- `task.json`: normalized paper request, defaults, and routing metadata
- `workspace.json`: task-scoped directory manifest for drafts, figures, tables, references, and outputs
- `figure-plan.json`: planned figures/tables, numbering rules, and codegen targets
- `validation.json`: consistency check between draft references and the figure plan
- `final.md`: citation-rendered draft after internal markers are resolved
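As a rough illustration of what the validation artifact records, here is a self-contained sketch that compares `Figure N` mentions in a draft against a planned figure list. The marker format and function are hypothetical — the real schema lives in `figure-plan.json` and `validation.json`:

```python
import re

def check_figure_refs(draft_text, planned_ids):
    """Compare figure numbers mentioned in a draft with the plan.

    Returns (missing, unplanned): planned figures never referenced,
    and referenced figures that are not in the plan.
    """
    referenced = set(int(m) for m in re.findall(r"Figure\s+(\d+)", draft_text))
    planned = set(planned_ids)
    return sorted(planned - referenced), sorted(referenced - planned)

missing, unplanned = check_figure_refs(
    "As Figure 1 shows ... see also Figure 3.", planned_ids=[1, 2]
)
# Figure 2 is planned but never cited; Figure 3 is cited but unplanned.
```

The real validator also accounts for tables and template-derived numbering rules; this sketch only shows the core set comparison.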
## 🔧 What the core covers

- normalize requests into structured task sheets
- infer defaults for paper type, language, style, and delivery mode
- resolve default layout templates when no official template is provided
- build a reference shortlist
- run a screening and retry search flow
- assemble a reference pack
- assemble a writing evidence pack
- plan citations by chapter and claim type
- figure/table planning before drafting
- numbering rules derived from template logic
- code / CSV / artifact scaffolding
- validation of figure references against the plan
- autofix for figure explanation prose and citation modes
- supports both normal prose and figure explanation sentences
- supports `support-note`, `inline-marker`, and `internal-anchor` citation modes
- renders final citations into GB/T 7714 or APA-style output
- supports template-aware citation rendering profiles
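To make the rendering step concrete, here is a hedged sketch of resolving internal citation markers into numbered in-text citations plus a reference list. The `[[cite:key]]` marker syntax and the reference-pack shape are illustrative, not the repository's actual format:

```python
import re

def render_citations(draft, reference_pack):
    """Replace [[cite:key]] markers with [n] and append a numbered list."""
    order = []  # citation keys in first-appearance order

    def number_for(match):
        key = match.group(1)
        if key not in order:
            order.append(key)
        return f"[{order.index(key) + 1}]"

    body = re.sub(r"\[\[cite:([\w-]+)\]\]", number_for, draft)
    refs = "\n".join(
        f"[{i + 1}] {reference_pack[k]}" for i, k in enumerate(order)
    )
    return f"{body}\n\nReferences\n{refs}"

pack = {"smith2020": "Smith, J. (2020). Example Title. Example Press."}
out = render_citations("Prior work [[cite:smith2020]] shows ...", pack)
```

The repository's renderer additionally applies style profiles (GB/T 7714 vs. APA) and template-aware formatting; this sketch only shows the marker-to-number resolution.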
## 🚀 Installation

macOS / Linux:

```bash
git clone https://github.com/NanAquarius/paper-intake-router.git
cd paper-intake-router
chmod +x scripts/install.sh
./scripts/install.sh
source .venv/bin/activate
```

Windows (PowerShell):

```powershell
git clone https://github.com/NanAquarius/paper-intake-router.git
cd paper-intake-router
powershell -ExecutionPolicy Bypass -File .\scripts\install.ps1
.\.venv\Scripts\Activate.ps1
```

Manual (virtualenv only):

```bash
python3 -m venv .venv
source .venv/bin/activate
pip install -r requirements-minimal.txt
```

## 🛠 Unified CLI

The repository includes a thin CLI wrapper for the main workflow scripts:
```bash
python3 scripts/paper_router.py smoke-test
python3 scripts/paper_router.py build-task -- --input examples/intake.json --out-json /tmp/task.json
python3 scripts/paper_router.py init-workspace -- --base-dir /tmp/paper-runs --task /tmp/task.json --out-json /tmp/workspace.json
python3 scripts/paper_router.py build-figure-plan -- --task /tmp/task.json --out-json /tmp/figure-plan.json
```

This wrapper does not replace the underlying scripts. It makes the common path easier to discover and easier to document.

## 🧰 More detailed usage
```bash
python3 scripts/build_task_sheet.py \
  --input examples/intake.json \
  --out-json /tmp/task.json \
  --out-md /tmp/task.md
```

```bash
python3 scripts/init_task_workspace.py \
  --base-dir /tmp/paper-runs \
  --task /tmp/task.json \
  --out-json /tmp/workspace.json
```

```bash
python3 scripts/build_figure_table_plan.py \
  --task /tmp/task.json \
  --out-json /tmp/figure-plan.json \
  --out-md /tmp/figure-plan.md
```

Optional inputs:

- `--evidence-pack`
- `--citation-plan`
```bash
python3 scripts/generate_figure_table_codegen.py \
  --plan /tmp/figure-plan.json \
  --base-dir /tmp/paper-artifacts
```

```bash
python3 scripts/validate_figure_table_refs.py \
  --plan /tmp/figure-plan.json \
  --draft /tmp/draft.md \
  --out-json /tmp/figure-validation.json \
  --out-md /tmp/figure-validation.md
```

```bash
python3 scripts/render_final_citations.py \
  --draft /tmp/fixed.md \
  --reference-pack examples/reference-pack.json \
  --citation-profile-json /tmp/profile.json \
  --style 'APA' \
  --out /tmp/final.md
```

## 🔑 External services

The local core workflow does not require API keys for:
- task sheet generation
- figure/table planning
- local validation and autofix
- citation rendering
- smoke tests
Full literature-search and evidence-building flows often benefit from or depend on external providers such as:
- Semantic Scholar API
- OpenAlex
- Tavily / Exa / other search providers
Document clearly in your own deployment which upstream providers are required and which steps depend on them.
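One lightweight way to make those dependencies explicit is to gate each provider on configuration. A sketch — the environment-variable names here are hypothetical, not ones the repository defines:

```python
import os

# Hypothetical env-var names; adapt to your own deployment.
# OpenAlex is usable without a key, so its entry is None.
PROVIDERS = {
    "semantic_scholar": "SEMANTIC_SCHOLAR_API_KEY",
    "openalex": None,
    "tavily": "TAVILY_API_KEY",
}

def available_providers(env=os.environ):
    """Return the providers usable in the current environment."""
    ready = []
    for name, var in PROVIDERS.items():
        if var is None or env.get(var):
            ready.append(name)
    return ready
```

A deployment can then log `available_providers()` at startup so the missing upstream steps are visible before a long run begins.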
## 🧪 Examples and validation

Included examples:

- `examples/intake.json`
- `examples/draft.md`
- `examples/reference-pack.json`
- `examples/layout-samples/README.md`
Minimal validation entrypoints:
```bash
python3 scripts/smoke_test_pipeline.py
python3 scripts/paper_router.py smoke-test
```

The current smoke test covers:
- task normalization
- task workspace initialization
- figure/table plan generation
- figure reference autofix
- figure reference validation
- final citation rendering
## 🗂 Repository layout

```
paper-intake-router/
├── SKILL.md
├── scripts/
├── references/
├── paper-template-library/
├── examples/
├── paper_intake_router/
├── README.md
├── README.zh-CN.md
└── LICENSE
```
## License

MIT