* [ICML 2024] WorkArena: How Capable are Web Agents at Solving Common Knowledge Work Tasks? [[Paper]](https://arxiv.org/abs/2403.07718)
* WorkArena++: Towards Compositional Planning and Reasoning-based Common Knowledge Work Tasks [[Paper]](https://arxiv.org/abs/2407.05291)
`WorkArena` is a suite of browser-based tasks tailored to gauge web agents' effectiveness in supporting routine tasks for knowledge workers.
By harnessing the ubiquitous [ServiceNow](https://www.servicenow.com/what-is-servicenow.html) platform, this benchmark will be instrumental in assessing the widespread state of such automations in modern knowledge work environments.
WorkArena is included in [BrowserGym](https://github.com/ServiceNow/BrowserGym), a conversational gym environment for the evaluation of web agents.
At the moment, WorkArena-L1 includes `19,912` unique instances drawn from `33` "atomic" tasks that cover the main components of the ServiceNow user interface. WorkArena++ contains `682` tasks, each of which samples among thousands of potential configurations.
The following videos show an agent built on `GPT-4-vision` interacting with every atomic component of the benchmark. As our results emphasize, the benchmark is not solved, so the agent's performance is not always on point.
### Knowledge Bases
**Goal:** The agent must search for specific information in the company knowledge base.
_The agent interacts with the user via BrowserGym's conversational interface._
Note: the following example executes WorkArena's oracle (cheat) function to solve each task. To evaluate an agent, calls to `env.step()` must be used instead.
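
To make the agent-evaluation path concrete, here is a minimal sketch of the observe/act loop built around `env.step()`. `StubEnv` and `my_agent` below are hypothetical stand-ins, so the loop shape can be shown without a live ServiceNow instance; a real run would use `BrowserEnv` and an agent that emits BrowserGym action strings.

```python
# Hypothetical stand-in for BrowserEnv: same reset/step/close shape,
# but terminates on its own after a few steps instead of driving a browser.
class StubEnv:
    def __init__(self):
        self._steps = 0

    def reset(self):
        return {"goal": "demo"}, {}  # (observation, info), as in Gymnasium

    def step(self, action):
        self._steps += 1
        terminated = self._steps >= 3
        reward = 1.0 if terminated else 0.0
        return {"goal": "demo"}, reward, terminated, False, {}

    def close(self):
        pass


def my_agent(obs):
    # A real agent would map the observation to a BrowserGym action string.
    return "noop()"


env = StubEnv()
obs, info = env.reset()
done, total_reward = False, 0.0
while not done:
    obs, reward, terminated, truncated, info = env.step(my_agent(obs))
    total_reward += reward
    done = terminated or truncated
env.close()
```

The loop accumulates the reward reported at each step and stops when the environment signals termination or truncation, which is the shape an agent evaluation takes in place of the cheat calls below.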
- To run a demo of WorkArena-L1 (ICML 2024) tasks using BrowserGym, use the following script:
```python
import random

from browsergym.core.env import BrowserEnv
from browsergym.workarena import ALL_WORKARENA_TASKS
from time import sleep

random.shuffle(ALL_WORKARENA_TASKS)
for task in ALL_WORKARENA_TASKS:
    print("Task:", task)

    # Instantiate a new environment
    env = BrowserEnv(task_entrypoint=task,
                     headless=False)
    env.reset()

    # Cheat functions use Playwright to automatically solve the task
    env.chat.add_message(role="assistant", msg="On it. Please wait...")
    cheat_messages = []
    env.task.cheat(env.page, cheat_messages)

    # Send the cheat messages to the chat
    for cheat_msg in cheat_messages:
        env.chat.add_message(role=cheat_msg["role"], msg=cheat_msg["message"])

    # Validate the solution
    reward, stop, message, info = env.task.validate(env.page, cheat_messages)
    if reward == 1:
        env.chat.add_message(role="user", msg="Yes, that works. Thanks!")
    else:
        env.chat.add_message(role="user", msg=f"No, that doesn't work. {info.get('message', '')}")

    sleep(3)
    env.close()
```
- To run a demo of WorkArena-L2 (WorkArena++) tasks using BrowserGym, use the following script. Change the filter on line 6 to `l3` to sample L3 tasks.
```python
import random

from browsergym.core.env import BrowserEnv
from browsergym.workarena import get_all_tasks_agents

AGENT_L2_SAMPLED_SET = get_all_tasks_agents(filter="l2")

AGENT_L2_SAMPLED_TASKS, AGENT_L2_SEEDS = [sampled_set[0] for sampled_set in AGENT_L2_SAMPLED_SET], [
    sampled_set[1] for sampled_set in AGENT_L2_SAMPLED_SET
]
from time import sleep

for (task, seed) in zip(AGENT_L2_SAMPLED_TASKS, AGENT_L2_SEEDS):
    print("Task:", task)

    # Instantiate a new environment
    env = BrowserEnv(task_entrypoint=task,
                     headless=False)
    env.reset()

    # Cheat functions use Playwright to automatically solve the task,
    # one subtask at a time for these compositional tasks
    env.chat.add_message(role="assistant", msg="On it. Please wait...")
    for i in range(len(env.task)):
        sleep(1)
        env.task.cheat(env.page, env.chat.messages, i)
        sleep(1)
        reward, done, message, info = env.task.validate(env.page, env.chat.messages)

    if reward == 1:
        env.chat.add_message(role="user", msg="Yes, that works. Thanks!")
    else:
        env.chat.add_message(role="user", msg=f"No, that doesn't work. {info.get('message', '')}")

    sleep(3)
    env.close()
```
## Citing This Work
Please use the following BibTeX to cite our work:
```
@misc{workarena2024,
      title={WorkArena: How Capable Are Web Agents at Solving Common Knowledge Work Tasks?},
      author={Alexandre Drouin and Maxime Gasse and Massimo Caccia and Issam H. Laradji and Manuel Del Verme and Tom Marty and Léo Boisvert and Megh Thakkar and Quentin Cappart and David Vazquez and Nicolas Chapados and Alexandre Lacoste},
      year={2024},
      eprint={2403.07718},
      archivePrefix={arXiv}
}

@misc{workarenaplusplus2024,
      title={WorkArena++: Towards Compositional Planning and Reasoning-based Common Knowledge Work Tasks},
      author={Léo Boisvert and Megh Thakkar and Maxime Gasse and Massimo Caccia and Thibault Le Sellier De Chezelles and Quentin Cappart and Nicolas Chapados and Alexandre Lacoste and Alexandre Drouin},
      year={2024},
      eprint={2407.05291},
      archivePrefix={arXiv}
}
```