Commit 3733d9e: Updated documentation
1 parent: ce1941c
7 files changed: 56 additions & 27 deletions

docs/Configuring a contest.rst

Lines changed: 34 additions & 13 deletions
@@ -26,16 +26,20 @@ The limits can be set both for individual tasks and for the whole contest. A sub
 Each of these fields can be left unset to prevent the corresponding limitation from being enforced.


-.. _configuringacontest_tokens:
+Feedback to contestants
+=======================
+
+Each testcase can be marked as public or private. After sending a submission, a contestant can always see its results on the public testcases: a brief passed / partial / not passed status for each testcase, and the partial score that is computable from the public testcases alone. Note that the input and output data themselves are always hidden.
+
+Tokens were introduced to provide contestants with limited access to the detailed results of their submissions on the private testcases as well. If a contestant uses a token on a submission, they will be able to see its results on all testcases, as well as the global score.

-Feedback and tokens
-===================

-Each testcase can be marked as public or private. During the contest, contestants can see the result of their submissions on the public testcases (the content of the input and output data themselves remain hidden, though). Tokens are a concept introduced to provide contestants with limited access to the detailed results of their submissions on the private testcases as well.
+.. _configuringacontest_tokens:

-For every submission sent in for evaluation, a contestant is always able to see if it successfully compiled. They are also able to see its scores on the public testcases of the task, if any. All information about the other so-called private testcases is kept hidden. Yet, a contestant can choose to use one of their tokens to "unlock" a certain submission of their choice. After they do so, detailed results are available for all testcases, as if they were all public. A token, once used, is consumed and lost forever. Contestants have a set of available tokens at their disposal, from which the ones they use are picked. These sets are managed by CMS according to rules defined by the contest administrators, as explained later in this section. For all official :doc:`score types <Score types>`, the public score is the score on public testcases, whereas the detailed score is the score on all testcases. This is not necessarily true for custom score types, as they can implement arbitrary logic to compute those values.
+Token rules
+-----------

-Tokens also affect the score computation. That is, all "tokened" submissions will be considered, together with the last submitted one, when computing the score for a task. See also :ref:`configuringacontest_score-rounding`.
+Each contestant has a set of available tokens at their disposal; when they use a token it is taken from this set and cannot be used again. These sets are managed by CMS according to rules defined by the contest administrators, as explained later in this section.

 There are two types of tokens: contest-tokens and task-tokens. When a contestant uses a token to unlock a submission, they are really using one token of each type, and therefore need to have both available. As the names suggest, contest-tokens are bound to the contest while task-tokens are bound to a specific task. That means that there is just one set of contest-tokens but there can be many sets of task-tokens (precisely one for every task). These sets are controlled independently by rules defined either on the contest or on the task.
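The two-token rule described in this hunk (unlocking a submission spends one contest-token and one task-token, so both must be available) can be sketched as follows; the helper names are hypothetical, not CMS's actual code:

```python
def can_use_token(contest_tokens_left, task_tokens_left):
    # Unlocking a submission spends one token of EACH type,
    # so the contestant needs at least one of both.
    return contest_tokens_left > 0 and task_tokens_left > 0

def use_token(contest_tokens_left, task_tokens_left):
    # A used token is consumed and cannot be used again.
    assert can_use_token(contest_tokens_left, task_tokens_left)
    return contest_tokens_left - 1, task_tokens_left - 1

print(can_use_token(2, 0))  # False: no task-tokens left for this task
print(use_token(2, 1))      # (1, 0)
```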

@@ -49,16 +53,33 @@ Having a finite set of both contest- and task-tokens can be very confusing, for

 Note that "token sets" are "intangible": they're just a counter shown to the user, computed dynamically every time. Yet, once a token is used, a Token object will be created, stored in the database and associated with the submission it was used on.

-.. note::
-    The full-feedback mode introduced in IOI 2013 has not been ported upstream yet (see :gh_issue:`246`). Note that although disabling tokens and making all testcases public would give full feedback, the final scores would be computed differently: the one of the latest submission would be used instead of the maximum among all submissions. To achieve the correct scoring behavior, get in touch with the developers or check `the ML archives <http://www.freelists.org/post/contestms/applying-tokens-automatically,1>`_.

 Changing token rules during a contest may lead to inconsistencies. Do so at your own risk!


-.. _configuringacontest_score-rounding:
+.. _configuringacontest_score:
+
+Computation of the score
+========================
+
+Released submissions
+--------------------
+
+The score of a contestant for the contest is always the sum of their scores for each task. The score for a task is the best score among the set of "released" submissions.
+
+Admins can use the "Score mode" configuration in AdminWebServer to change the way CMS defines the set of released submissions. There are two modes, corresponding to the rules of IOI 2010-2012 and of IOI 2013 onwards.
+
+In the first mode, used at the IOI from 2010 to 2012, the released submissions are those on which the contestant used a token, plus the latest one submitted.
+
+In the second mode, used since 2013, the released submissions are simply all submissions.
+
+Usually, a task using the first mode will have a certain number of private testcases and a limited set of tokens. In this situation, you can think of contestants as being required to "choose" the submissions they want to be graded on, either by submitting them last or by using a token on them.
+
+On the other hand, a task using the second mode usually has all testcases public, and therefore it would be pointless to ask contestants to choose a submission (as they would always choose the one with the best score).

 Score rounding
-==============
+--------------

 Based on the ScoreTypes in use and on how they are configured, some submissions may be given a floating-point score. Contest administrators will probably want to show only a small number of these decimal places in the scoreboard. This can be achieved with the ``score_precision`` fields on the contest and tasks.
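The two score modes added in this hunk can be sketched as follows (a minimal illustration under assumed data shapes, not CMS's actual implementation; the mode names match the ``score_mode`` values used by the loaders):

```python
def task_score(submissions, score_mode):
    """Each submission is a (score, tokened) pair, in chronological
    order. The task score is the best score among the 'released'
    submissions, whose definition depends on the score mode."""
    if not submissions:
        return 0
    if score_mode == "max_tokened_last":
        # IOI 2010-2012: tokened submissions, plus the latest one.
        released = [s for s, tokened in submissions if tokened]
        released.append(submissions[-1][0])
    else:
        # "max", IOI 2013 onwards: every submission is released.
        released = [s for s, _ in submissions]
    return max(released)

subs = [(70, True), (100, False), (40, False)]
print(task_score(subs, "max_tokened_last"))  # 70: best of {70, 40}
print(task_score(subs, "max"))               # 100
```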

@@ -92,7 +113,7 @@ When CWS needs to show a timestamp to the user it first tries to show it accordi
 User login
 ==========

-Users log into CWS using a username and a password. These have to be specified, respectively, in the ``username`` and ``password`` fields (in cleartext!). These credentials need to be inserted (i.e. there's no way to have an automatic login, a "guest" session, etc.) and, if they match, the login (usually) succeeds. The user needs to login again if they do not navigate the site for ``cookie_duration`` seconds (specified in the :file:`cms.conf` file).
+Users log into CWS using a username and a password. These have to be specified, respectively, in the ``username`` and ``password`` fields (in cleartext!). These credentials need to be inserted by the admins (i.e. there's no way to have an automatic login, a "guest" session, etc.). The user needs to log in again if they do not navigate the site for ``cookie_duration`` seconds (specified in the :file:`cms.conf` file).

 In fact, there are other reasons that can cause the login to fail. If the ``ip_lock`` option (in :file:`cms.conf`) is set to ``true``, the login will fail if the IP address that attempted it doesn't match the address or subnet in the ``ip`` field of the specified user. If ``ip`` is not set, this check is skipped, even if ``ip_lock`` is ``true``. Note that if a reverse proxy (like nginx) is in use, it is necessary to set ``is_proxy_used`` (in :file:`cms.conf`) to ``true`` and to configure the proxy so that it properly passes the ``X-Forwarded-For``-style headers (see :ref:`running-cms_recommended-setup`).
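The ``ip_lock`` rule described above can be sketched like this (illustrative only: the function name and data shapes are assumptions, not CMS's actual code):

```python
import ipaddress

def login_ip_check(ip_lock, user_ip, request_ip):
    # If ip_lock is off, or the user has no ip field set,
    # the check is skipped entirely and the login may proceed.
    if not ip_lock or user_ip is None:
        return True
    # The ip field may hold a single address or a subnet.
    return ipaddress.ip_address(request_ip) in ipaddress.ip_network(user_ip)

print(login_ip_check(True, "10.0.0.0/24", "10.0.0.5"))    # True
print(login_ip_check(True, "10.0.0.0/24", "192.168.1.5"))  # False
print(login_ip_check(True, None, "192.168.1.5"))           # True: no ip set
```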

@@ -141,7 +162,7 @@ Language details

 * C/C++ support is provided by the GNU Compiler Collection. Submissions are optimized with ``-O2``. The standards used by default by CMS are gnu90 for C (that is, C90 with the GNU extensions, the default for ``gcc``) and C++11 for C++. Note that C++11 support in ``g++`` is still incomplete and experimental. Please refer to the `C++11 Support in GCC <https://gcc.gnu.org/projects/cxx0x.html>`_ page for more information.

-* Java programs are first compiled using ``gcj`` (optimized with ``-O3``), and then run as normal executables.
+* Java programs are first compiled using ``gcj`` (optimized with ``-O3``), and then run as normal executables. Proper Java support using a JVM will most probably come in the next CMS version.

 * Python submissions are interpreted using Python 2 (you need to have ``/usr/bin/python2``).

docs/External contest formats.rst

Lines changed: 14 additions & 1 deletion
@@ -134,7 +134,9 @@ The task YAML files require the following keys.
 - ``n_input`` (integer): number of test cases to be evaluated for this task; the actual test cases are retrieved from the :ref:`task directory <externalcontestformats_task-directory>`.

-- ``token_mode``: the token mode for the task, as in :ref:`configuringacontest_tokens`; it can be ``disabled``, ``infinite`` or ``finite``; if this is not specified, the loader will try to infer it from the remaining token parameters (in order to retain compatibility with the past), but you are advised not to rely on this behavior.
+- ``score_mode``: the score mode for the task, as in :ref:`configuringacontest_score`; it can be ``max_tokened_last`` (for the legacy behavior) or ``max`` (for the modern behavior).
+
+- ``token_mode``: the token mode for the task, as in :ref:`configuringacontest_tokens`; it can be ``disabled``, ``infinite`` or ``finite``; if this is not specified, the loader will try to infer it from the remaining token parameters (in order to retain compatibility with the past), but you are advised not to rely on this behavior.

 The following are optional keys.
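Taken together, the keys above might appear in a task's YAML file like this (a hypothetical excerpt; all surrounding keys omitted):

```yaml
# Hypothetical excerpt of a task YAML file.
n_input: 20            # number of testcases to evaluate
score_mode: max        # or: max_tokened_last (legacy behavior)
token_mode: finite     # or: disabled, infinite
```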

@@ -158,3 +160,14 @@ The following are optional keys that must be present for some task type or score
 - ``primary_language`` (string): the statement will be imported with this language code; defaults to ``it`` (Italian), in order to ensure backward compatibility.

+
+Polygon format
+==============
+
+`Polygon <https://polygon.codeforces.com>`_ is a popular platform for the creation of tasks, and a task format, used among others by Codeforces.
+
+Since Polygon doesn't support CMS directly, some task parameters cannot be set using the standard Polygon configuration. The importer reads additional CMS-specific configuration from an optional file, :file:`cms_conf.py`. Additionally, users can add a file named :file:`contestants.txt` to import a set of users.
+
+By default, all tasks are Batch, with a custom checker, and the score type is Sum. The loader assumes that the checker is :file:`check.cpp` and that it is written using :file:`testlib.h`; it provides a customized version of :file:`testlib.h` that allows Polygon checkers to be used with CMS. Checkers are compiled while the contest is being imported. This is important in case the architecture where the loading happens is different from the architecture of the workers.
+
+Polygon currently doesn't allow custom contest-wide files, so general contest options must be hard-coded in the loader.

docs/Installation.rst

Lines changed: 1 addition & 1 deletion
@@ -74,7 +74,7 @@ These are our requirements (in particular we highlight those that are not usuall
 * `PyPDF2 <https://pypi.python.org/pypi/PyPDF2>`_ (only for printing)

-You will also require a Linux kernel with support for control groups and namespaces. Support has been in the Linux kernel since 2.6.32, and is provided by Ubuntu 12.04 and later. Other distributions, or systems with custom kernels, may not have support enabled. At a minimum, you will need to enable the following Linux kernel options: ``CONFIG_CGROUPS``, ``CONFIG_CGROUP_CPUACCT``, ``CONFIG_MEMCG`` (previously called ``CONFIG_CGROUP_MEM_RES_CTLR``), ``CONFIG_CPUSETS``, ``CONFIG_PID_NS``, ``CONFIG_IPC_NS``, ``CONFIG_NET_NS``. It is anyway suggested to use a Linux kernel of version at least 3.8.
+You will also require a Linux kernel with support for control groups and namespaces. Support has been in the Linux kernel since 2.6.32, but some distributions, or systems with custom kernels, may not have it enabled. At a minimum, you will need to enable the following Linux kernel options: ``CONFIG_CGROUPS``, ``CONFIG_CGROUP_CPUACCT``, ``CONFIG_MEMCG`` (previously called ``CONFIG_CGROUP_MEM_RES_CTLR``), ``CONFIG_CPUSETS``, ``CONFIG_PID_NS``, ``CONFIG_IPC_NS``, ``CONFIG_NET_NS``. It is in any case suggested to use a Linux kernel of version at least 3.8.

 Then you require the compilation and execution environments for the languages you will use in your contest:

docs/Internals.rst

Lines changed: 0 additions & 10 deletions
@@ -58,16 +58,6 @@ The response is of the form::
 The value of ``__id`` must of course be the same as in the request.
 If ``__error`` is not null, then ``__data`` is expected to be null.

-Historical notes
-----------------
-
-In the past the RPC protocol used to be a bit more powerful, having
-the ability to complement the JSON message with a blob of binary
-data. This feature has now been removed, both because it was unused
-and because its implementation had a subtle bug that caused
-messages of a specific length to mess around with the ``\r\n``
-terminator.
-
 Backdoor
 ========

docs/RankingWebServer.rst

Lines changed: 1 addition & 1 deletion
@@ -198,7 +198,7 @@ Some final suggestions
 The suggested setup (the one that we also used at the IOI 2012) is to make RWS listen on both HTTP and HTTPS ports (we used 8080 and 8443), to use nginx to map port 80 to port 8080, to make all three ports (80, 8080 and 8443) accessible from the internet, to make PS connect to RWS via HTTPS on port 8443 and to use a Certificate Authority to generate certificates (the last one is probably overkill).

-At the IOI we had only one server, running on a 2 GHz machine, and we were able to serve about 1500 clients simultaneously (and, probably, we were limited to this value by a misconfiguration of nginx). This is to say that you'll likely need only one public RWS server.
+At the IOI 2012, we had only one server, running on a 2 GHz machine, and we were able to serve about 1500 clients simultaneously (and, probably, we were limited to this value by a misconfiguration of nginx). This is to say that you'll likely need only one public RWS server.

 If you're starting RWS on your server remotely, for example via SSH, make sure the ``screen`` command is your friend :-).

docs/Score types.rst

Lines changed: 1 addition & 1 deletion
@@ -10,7 +10,7 @@ For every submission, the score type of a task comes into play after the :doc:`t
 Standard score types
 ====================

-Like task types, CMS has the most common score types built in. They are Sum, GroupMin, GroupMul, GroupThreshold. There is also a score type called Relative, but it is an experiment not meant for usage.
+Like task types, CMS has the most common score types built in. They are Sum, GroupMin, GroupMul and GroupThreshold.

 The first of the four well-tested score types, Sum, is the simplest you can imagine, just assigning a certain amount of points for each correct testcase. The other three are useful for grouping testcases together and assigning points for the group only if some conditions hold. Groups are also known as subtasks in some contests. The group score types also allow testcases to be weighted, even for groups of size 1.
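As an illustration of the group score types, a GroupMin-style computation might look like this (a hedged sketch of the idea under assumed data shapes, not CMS's actual implementation):

```python
def group_min_score(groups):
    """Each group (subtask) is a (max_points, outcomes) pair, where
    outcomes are per-testcase results in [0.0, 1.0]. The group awards
    its maximum points scaled by its WORST testcase outcome, so one
    failed testcase loses the whole group."""
    return sum(max_points * min(outcomes) for max_points, outcomes in groups)

groups = [(40, [1.0, 1.0]),        # all testcases correct: 40 points
          (60, [1.0, 0.0, 1.0])]   # one failure: the group scores 0
print(group_min_score(groups))     # 40.0
```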

docs/Troubleshooting.rst

Lines changed: 5 additions & 0 deletions
@@ -40,3 +40,8 @@ Sandbox
 - *Symptom.* The Worker fails to evaluate a submission, logging about an invalid (empty) output from the manager.

   *Possible cause.* You might have used a non-statically linked checker. The sandbox prevents dynamically linked executables from working. Try compiling the checker with ``-static``. Also, make sure that the checker was compiled for the architecture of the workers (e.g., 32 or 64 bits).
+
+- *Symptom.* The Worker fails to evaluate a submission with a generic failure.
+
+  *Possible cause.* Make sure that the isolate binary that CMS is using has the correct permissions (in particular, that its owner is root and it has the suid bit set). Be careful if there are multiple isolate binaries in your path. Another possible cause is an old version of isolate.
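A quick diagnostic for the isolate permissions mentioned above (the fallback install path is an assumption; adjust it to your setup):

```shell
# Locate isolate and inspect its owner and mode: the owner should be
# root and the suid bit should be set (mode looks like -rwsr-xr-x).
ISOLATE="$(command -v isolate || echo /usr/local/bin/isolate)"
if [ -e "$ISOLATE" ]; then
    ls -l "$ISOLATE"
else
    echo "isolate not found at $ISOLATE"
fi
# If the suid bit is missing, it can be restored with:
#   sudo chown root:root "$ISOLATE" && sudo chmod u+s "$ISOLATE"
```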
