|
51 | 51 | | Research runtime patterns | ReAct, ReWOO, ToT, and Reflexion are used where they make sense | |
52 | 52 | | Local ACI execution | `implement_experiments` and `run_experiments` execute through file, command, and test actions | |
53 | 53 |
|
54 | | -## Quick Start |
| 54 | +## Start Here |
| 55 | + |
| 56 | +- If this is your first time, start with `autolabos web`. It gives you guided onboarding, the dashboard, logs, checkpoints, and artifact browsing in one place. |
| 57 | +- Use `autolabos` when you prefer a terminal-first loop with slash commands. |
| 58 | +- Run either command from the research project directory you want AutoLabOS to manage. Workspace state lives under `.autolabos/`. |
| 59 | + |
| 60 | +## What You Need |
55 | 61 |
|
56 | | -> [!IMPORTANT] |
57 | | -> `SEMANTIC_SCHOLAR_API_KEY` is required. `OPENAI_API_KEY` is only needed when the main provider or PDF analysis mode is `api`. |
| 62 | +| Item | When it is needed | Notes | |
| 63 | +| --- | --- | --- | |
| 64 | +| `SEMANTIC_SCHOLAR_API_KEY` | Always | Required for paper discovery and metadata lookup | |
| 65 | +| `OPENAI_API_KEY` | Only when the primary provider or PDF mode is `api` | Used for OpenAI API model execution | |
| 66 | +| Codex CLI login | Only when the primary provider or PDF mode is `codex` | AutoLabOS uses your local Codex session | |
| 67 | + |
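Both keys can live in a workspace `.env` file, which AutoLabOS reads alongside `process.env`. A minimal sketch, with placeholder values you would replace with real keys:

```bash
# Create or extend a workspace .env with the always-required Semantic Scholar key.
echo 'SEMANTIC_SCHOLAR_API_KEY=your_key_here' >> .env

# Add the OpenAI key only if the primary provider or PDF mode is `api`.
echo 'OPENAI_API_KEY=your_openai_key_here' >> .env
```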
| 68 | +## Quick Start |
58 | 69 |
|
59 | | -1. Install and build |
| 70 | +1. Install and build AutoLabOS. |
60 | 71 |
|
61 | 72 | ```bash |
62 | 73 | npm install |
63 | 74 | npm run build |
64 | 75 | npm link |
65 | 76 | ``` |
66 | 77 |
|
67 | | -2. Add environment variables |
| 78 | +2. Move into the research project directory you want to use as the workspace. |
68 | 79 |
|
69 | 80 | ```bash |
70 | | -cp .env.example .env |
71 | | -echo 'SEMANTIC_SCHOLAR_API_KEY=your_key_here' >> .env |
72 | | -echo 'OPENAI_API_KEY=your_openai_key_here' >> .env |
| 81 | +cd /path/to/your-research-project |
73 | 82 | ``` |
74 | 83 |
|
75 | | -3. Launch the TUI |
76 | | - |
77 | | -```bash |
78 | | -autolabos |
79 | | -``` |
80 | | - |
81 | | -4. Launch the web UI |
| 84 | +3. Start the recommended browser workflow. |
82 | 85 |
|
83 | 86 | ```bash |
84 | 87 | autolabos web |
85 | 88 | ``` |
86 | 89 |
|
87 | | -The web server listens on `http://127.0.0.1:4317` by default. |
88 | | -Run this from the research project directory you want AutoLabOS to use as its workspace. |
| 90 | +The web server listens on `http://127.0.0.1:4317` by default. Use `autolabos` instead if you prefer the TUI. |
89 | 91 |
|
90 | | -If you are using a repository checkout and the CLI says the installed web assets are missing, build the web bundle once from the AutoLabOS package root: |
| 92 | +4. Finish onboarding. If `.autolabos/config.yaml` does not exist yet, the web app opens onboarding and the TUI opens the setup wizard. Both flows write the same workspace scaffold and config. |
91 | 93 |
|
92 | | -```bash |
93 | | -cd /path/to/AutoLabOS |
94 | | -npm --prefix web run build |
95 | | -autolabos web |
96 | | -``` |
97 | | - |
98 | | -Use a custom bind address or port when needed: |
| 94 | +5. Confirm the first run worked. You should now have `.autolabos/config.yaml`, a configured workspace, and either the dashboard or the TUI home screen ready for a run. |
99 | 95 |
|
100 | | -```bash |
101 | | -autolabos web --host 0.0.0.0 --port 8080 |
102 | | -``` |
| 96 | +6. Create or select a run, then start with `/new`, `/agent collect "your topic"`, or the workflow cards in the web UI. |
103 | 97 |
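A minimal first TUI session might look like the following; the topic string is a placeholder, and the comments describe the commands as documented above:

```text
/new                                   # create a run
/agent collect "your topic"            # start paper collection on a topic
/settings                              # adjust provider or PDF analysis mode
/model                                 # switch the active backend, then slot/model
```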
|
104 | | -Development mode: |
| 98 | +## What Happens On First Run |
105 | 99 |
|
106 | | -```bash |
107 | | -npm run dev |
108 | | -npm run dev:web |
109 | | -``` |
| 100 | +- AutoLabOS stores workspace config in `.autolabos/config.yaml` and reads `SEMANTIC_SCHOLAR_API_KEY` and `OPENAI_API_KEY` from `process.env` or `.env`. |
| 101 | +- Choose the primary LLM provider: `codex` uses your local Codex session, while `api` uses OpenAI API models. |
| 102 | +- Choose the PDF analysis mode separately: `codex` downloads and extracts PDF text locally before analysis, while `api` sends the PDF directly to the Responses API. |
| 103 | +- If the primary provider or PDF mode is `api`, onboarding and `/settings` let you choose the OpenAI model. |
| 104 | +- `/model` lets you choose the active backend first, then select the slot and model. |
110 | 105 |
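A resulting `.autolabos/config.yaml` could look roughly like this sketch; the key names other than `workflow.approval_mode` (which this README references in the workflow section) are illustrative assumptions, not the confirmed schema:

```yaml
# Illustrative sketch only; verify key names against your generated config.
provider: codex          # primary LLM provider: codex | api
pdf_mode: codex          # PDF analysis: codex (local extraction) | api (Responses API)
workflow:
  approval_mode: minimal # minimal (auto-advance) | manual (pause after each node)
```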
|
111 | | -Without `npm link`, you can still run: |
| 106 | +## Common First-Run Fixes |
112 | 107 |
|
113 | | -```bash |
114 | | -node dist/cli/main.js |
115 | | -node dist/cli/main.js web |
116 | | -``` |
| 108 | +- If you run from a repository checkout and the CLI reports that the installed web assets are missing, build them once from the AutoLabOS package root with `npm --prefix web run build`, then restart `autolabos web`. |
| 109 | +- If you do not want `npm link`, you can still run `node dist/cli/main.js` or `node dist/cli/main.js web` from the AutoLabOS repository root. |
| 110 | +- If you need a different bind address or port, run `autolabos web --host 0.0.0.0 --port 8080`. |
| 111 | +- For local development, use `npm run dev` and `npm run dev:web`. |
117 | 112 |
|
118 | 113 | > [!NOTE] |
119 | 114 | > External entrypoints are `autolabos` and `autolabos web`. `autolabos init` is intentionally not supported. |
120 | 115 |
|
121 | | -## First Run |
122 | | - |
123 | | -1. Run `autolabos` or `autolabos web` in an empty project. |
124 | | -2. If `.autolabos/config.yaml` does not exist, the TUI opens the setup wizard and the web app shows the onboarding form. |
125 | | -3. Both flows create the same scaffold/config, store your Semantic Scholar key, and open the main dashboard. |
126 | | -4. Choose the primary LLM provider: |
127 | | - - `codex`: use Codex ChatGPT login for the main workflow (default) |
128 | | - - `api`: use OpenAI API models for the main workflow (`OPENAI_API_KEY` required) |
129 | | -5. Choose the PDF analysis mode: |
130 | | - - `codex`: download and extract PDF text locally, then analyze with Codex (default) |
131 | | - - `api`: send the PDF directly to the Responses API (`OPENAI_API_KEY` required) |
132 | | -6. If the provider or PDF mode is `api`, setup wizard and `/settings` let you choose a model. |
133 | | - - Current built-in catalog: `gpt-5.4`, `gpt-5`, `gpt-5-mini`, `gpt-4.1`, `gpt-4o`, `gpt-4o-mini` |
134 | | -7. `/model` now lets you choose the active backend first, then select the slot/model: |
135 | | - - Codex CLI backend: Codex model selector |
136 | | - - OpenAI API backend: OpenAI API model selector |
137 | | -8. At runtime, AutoLabOS reads `SEMANTIC_SCHOLAR_API_KEY` and `OPENAI_API_KEY` from `process.env` or `.env`. |
138 | | - |
139 | 116 | ## Web Ops UI |
140 | 117 |
|
141 | 118 | `autolabos web` starts a local single-user browser UI on top of the same runtime used by the TUI. |
@@ -246,7 +223,7 @@ stateDiagram-v2 |
246 | 223 | write_paper --> [*]: approve |
247 | 224 | ``` |
248 | 225 |
|
249 | | -Default `agent_approval` mode pauses after every node. `implement_experiments` is the one forward step that can skip its pause through automatic handoff to `run_experiments`, `analyze_results` can emit explicit backward recommendations, and `review` now packages a review decision that approval can turn into an advance, a backtrack, or a human hold. |
| 226 | +Default `agent_approval` mode now runs with `workflow.approval_mode: minimal`: successful nodes auto-advance, and human approval is requested only when a transition explicitly requires human judgment. Set `workflow.approval_mode: manual` to restore the legacy pause-after-each-node behavior. `implement_experiments` can still hand off automatically to `run_experiments`, `analyze_results` can emit explicit backward recommendations, and `review` still turns panel output into an advance, a backtrack, or a human hold. |
250 | 227 |
|
251 | 228 | ### Phase-by-Phase Connection Graphs |
252 | 229 |
|
|