
Commit 0c575ef

Merge pull request #49 from openml/update-concept-description
Update descriptions for Flow and Runs
2 parents: 2a768bd + 9405b17

2 files changed

Lines changed: 58 additions & 12 deletions


docs/concepts/flows.md

Lines changed: 18 additions & 4 deletions
@@ -1,10 +1,24 @@
 # Flows
 
-Flows are machine learning pipelines, models, or scripts. They are typically uploaded directly from machine learning libraries (e.g. scikit-learn, pyTorch, TensorFlow, MLR, WEKA,...) via the corresponding [APIs](https://www.openml.org/apis). Associated code (e.g., on GitHub) can be referenced by URL.
+Flows are machine learning pipelines, models, or scripts that can transform data into a model.
+They often have a number of hyperparameters which may be configured (e.g., a Random Forest's "number of trees" hyperparameter).
+Flows are, for example, scikit-learn's `RandomForestClassifier`, mlr3's `"classif.rpart"`, or WEKA's `J48`, but can also be "AutoML Benchmark's autosklearn integration" or any other script.
+The metadata of a flow describes, if provided, the configurable hyperparameters, their default values, and recommended ranges.
+Flows _do not_ describe a specific configuration (Setups log the configuration of a flow used in a [run](./runs.md)).
+
+They are typically uploaded directly from machine learning libraries (e.g. scikit-learn, pyTorch, TensorFlow, MLR, WEKA, ...) via the corresponding [APIs](https://www.openml.org/apis), but it is possible to define them manually too (see also [this example of openml-python](http://openml.github.io/openml-python/latest/examples/Basics/simple_flows_and_runs_tutorial/) or the REST API documentation). Associated code (e.g., on GitHub) can be referenced by URL.
+
+
+!!! note "Versions"
+
+    It is conventional to distinguish between software versions through the Flow's `external_version` property.
+    This is because both internal and external changes can be made to the code the Flow references, which would affect people using it.
+    For example, hyperparameters may be introduced or deprecated across different versions of the same algorithm, or their internal behavior may change (and result in different models).
+    Flows generated automatically by e.g. `openml-python` or `mlr3oml` populate the `external_version` property automatically.
 
 ## Analysing algorithm performance
 
-Every flow gets a dedicated page with all known information. The Analysis tab shows an automated interactive analysis of all collected results. For instance, below are the results of a <a href="https://www.openml.org/f/17691" target="_blank">scikit-learn pipeline</a> including missing value imputation, feature encoding, and a RandomForest model. It shows the results across multiple tasks, and how the AUC score is affected by certain hyperparameters.
+Every flow gets a dedicated page with information about the flow, such as its dependencies, hyperparameters, and which runs used it. The Analysis tab shows an automated interactive analysis of all collected results. For instance, below are the results of a <a href="https://www.openml.org/f/17691" target="_blank">scikit-learn pipeline</a> including missing value imputation, feature encoding, and a RandomForest model. It shows the results across multiple tasks and configurations, and how the AUC score is affected by certain hyperparameters.
 
 <!-- <img src="img/flow_top.png" style="width:100%; max-width:800px;"/> -->
 ![](../img/flow_top.png)
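The updated text above distinguishes a Flow (configurable hyperparameters with defaults and recommended ranges) from a Setup (one concrete configuration used in a run). As a hedged illustration only, not OpenML's actual data model, that distinction can be sketched like this; all class and field names here are hypothetical:

``` python
# Hedged sketch (not OpenML's actual data model): a Flow describes configurable
# hyperparameters and their defaults; a Setup records one concrete configuration.
from dataclasses import dataclass, field


@dataclass
class Hyperparameter:
    name: str
    default: object
    recommended_range: tuple = ()   # e.g. (10, 1000) for "number of trees"


@dataclass
class Flow:
    name: str
    external_version: str           # version string format assumed for illustration
    hyperparameters: list = field(default_factory=list)


@dataclass
class Setup:
    flow: Flow
    configuration: dict             # the concrete values used in one run


rf = Flow(
    name="sklearn.ensemble.RandomForestClassifier",
    external_version="sklearn==0.24.2",
    hyperparameters=[Hyperparameter("n_estimators", 100, (10, 1000))],
)
setup = Setup(flow=rf, configuration={"n_estimators": 500})
print(setup.configuration["n_estimators"])  # 500
```

The point of the separation is that many Setups (and thus many runs) can reference the same Flow without duplicating its description.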
@@ -13,7 +27,7 @@ This helps to better understand specific models, as well as their strengths and
 
 ## Automated sharing
 
-When you evaluate algorithms and share the results, OpenML will automatically extract all the details of the algorithm (dependencies, structure, and all hyperparameters), and upload them in the background.
+When you evaluate algorithms and share the results using `openml-python` or `mlr3oml`, details of the algorithm (dependencies, structure, and all hyperparameters) are automatically extracted and can easily be shared. When the Flow is used in a Run, the specific hyperparameter configuration used in the experiment is also saved separately in a Setup. The code snippet below creates a Flow description for the `RandomForestClassifier` and also runs the experiment. The resulting Run contains information about the configuration of the Flow used in the experiment (the Setup).
 
 ``` python
 from sklearn import ensemble
@@ -41,4 +55,4 @@ Given an OpenML run, the exact same algorithm or model, with exactly the same hy
 ```
 
 !!! note
-    You may need the exact same library version to reconstruct flows. The API will always state the required version. We aim to add support for VMs so that flows can be easily (re)run in any environment <i class="fa fa-heart fa-fw fa-lg" style="color:red"></i>
+    You may need the exact same library version to reconstruct flows. The API will always state the required version.
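Since reconstructing a flow may require the exact library version stated by the API, a local pre-check can save a failed reconstruction. The sketch below assumes the `external_version` string is a comma-separated list of `library==version` pairs (the format openml-python typically uses); both helper functions are hypothetical:

``` python
# Hedged sketch: parse a Flow's external_version string (format assumed to be
# comma-separated "library==version" pairs) and compare against what is installed.

def parse_external_version(external_version: str) -> dict:
    """Split e.g. 'openml==0.10.2,sklearn==0.21.3' into {library: version}."""
    result = {}
    for part in external_version.split(","):
        lib, _, ver = part.partition("==")
        result[lib.strip()] = ver.strip()
    return result


def missing_or_mismatched(required: dict, installed: dict) -> list:
    """Return the libraries whose installed version differs from the required one."""
    return [lib for lib, ver in required.items() if installed.get(lib) != ver]


required = parse_external_version("openml==0.10.2,sklearn==0.21.3")
installed = {"openml": "0.10.2", "sklearn": "1.4.0"}  # made-up local environment
print(missing_or_mismatched(required, installed))  # ['sklearn']
```

An empty result means the local environment matches the versions the flow was created with, so reconstruction should behave identically.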

docs/concepts/runs.md

Lines changed: 40 additions & 8 deletions
@@ -1,16 +1,48 @@
 # Runs
 
+Runs are the results of experiments evaluating a flow with a specific configuration on a specific task.
+They contain at least a description of the hyperparameter configuration of the Flow and the predictions produced for the machine learning Task.
+Users may also provide additional metadata related to the experiment, such as the time it took to train or evaluate the model, or its predictive performance.
+The OpenML server will also compute several common metrics on the provided predictions as appropriate for the task, such as accuracy for a classification task or root mean squared error for regression tasks.
+
+For example, [this run](https://www.openml.org/search?type=run&id=10452858&run_flow.flow_id=17691&sort=date) describes an experiment that:
+
+- evaluates a Random Forest pipeline ([flow 17650](https://www.openml.org/f/17650) linked to the run)
+- with the configuration `min_samples_leaf=1, n_estimators=500, ...` ([setup 8261828](https://www.openml.org/api/v1/json/setup/8261928) linked to the run)
+- in a 10-fold CV experiment ([task 3481](https://www.openml.org/t/3481) linked to the run)
+- on dataset "isolet" ([dataset 300](https://www.openml.org/d/300) as described by the task)
+- produced predictions in arff format ([predictions.arff](https://www.openml.org/data/download/21829039/predictions.arff))
+- recorded additional metadata (e.g., metric evaluations), as seen on the run page
+
 ## Automated reproducible evaluations
-Runs are experiments (benchmarks) evaluating a specific flows on a specific task. As shown above, they are typically submitted automatically by machine learning
-libraries through the OpenML [APIs](https://www.openml.org/apis)), including lots of automatically extracted meta-data, to create reproducible experiments. With a few for-loops you can easily run (and share) millions of experiments.
+While the REST API and the OpenML connectors allow you to manually submit Run data, openml-python and mlr3oml also support automated running of experiments and data collection.
+The openml-python example below will evaluate the `RandomForestClassifier` on a given task, automatically track information such as the duration of the experiment, the hyperparameter configuration of the model, and version information about the software used in the experiment, and bundle it all for convenient upload to OpenML.
 
-## Online organization
-OpenML organizes all runs online, linked to the underlying data, flows, parameter settings, people, and other details. See the many examples above, where every dot in the scatterplots is a single OpenML run.
+``` python
+from sklearn import ensemble
+from openml import tasks, runs
+
+# Download a task (e.g., task 3481: 10-fold CV on the "isolet" dataset).
+task = tasks.get_task(3481)
+
+# Build any model you like.
+clf = ensemble.RandomForestClassifier()
 
-## Independent (server-side) evaluation
-OpenML runs include all information needed to independently evaluate models. For most tasks, this includes all predictions, for all train-test splits, for all instances in the dataset, including all class confidences. When a run is uploaded, OpenML automatically evaluates every run using a wide array of evaluation metrics. This makes them directly comparable with all other runs shared on OpenML. For completeness, OpenML will also upload locally computed evaluation metrics and runtimes.
+# Evaluate the model on the task.
+run = runs.run_model_on_task(clf, task)
 
-New metrics can also be added to OpenML's evaluation engine, and computed for all runs afterwards. Or, you can download OpenML runs and analyse the results any way you like.
+# Share the results, including the flow and all its details.
+run.publish()
+```
+
+The standardized way of accessing datasets and tasks makes it easy to run large-scale experiments in this manner.
 
 !!! note
-    Please note that while OpenML tries to maximise reproducibility, exactly reproducing all results may not always be possible because of changes in numeric libraries, operating systems, and hardware.
+    While OpenML tries to facilitate reproducibility, exactly reproducing all results is not generally possible because of changes in numeric libraries, operating systems, hardware, and even random factors (such as hardware errors).
+
+## Online organization
+
+All runs are available from the OpenML platform, either through direct access via the REST API or through visualizations on the website.
+The scatterplot below shows many runs for a single Flow; each dot represents a Run.
+For each run, all metadata is available online, as well as the produced predictions and any other provided artefacts.
+You can download OpenML runs and analyse the results any way you like.
+
+<!-- <img src="img/flow_top.png" style="width:100%; max-width:800px;"/> -->
+![](../img/flow_top.png)
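The runs.md changes above note that the OpenML server computes common metrics (e.g. accuracy for classification) from the uploaded predictions. A hedged sketch of that idea, using a made-up in-memory prediction table rather than the real predictions.arff format:

``` python
# Hedged sketch: compute accuracy from (truth, prediction) rows, as the OpenML
# server does for classification tasks. The row format here is made up for
# illustration; real runs upload a predictions.arff with per-fold, per-instance
# predictions and class confidences.

def accuracy(rows):
    """Fraction of rows where the predicted label matches the true label."""
    correct = sum(1 for truth, predicted in rows if truth == predicted)
    return correct / len(rows)


predictions = [
    ("A", "A"),  # (truth, prediction)
    ("A", "B"),
    ("B", "B"),
    ("B", "B"),
]
print(accuracy(predictions))  # 0.75
```

Because every run's predictions are evaluated server-side with the same code, results from different libraries and users remain directly comparable.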
