@@ -30,6 +30,7 @@ execute arbitrary commands within isolated containers, stop running containers a
* [Commands](#commands)
* [Promises](#promises)
* [Blocking](#blocking)
* [Command streaming](#command-streaming)
* [TAR streaming](#tar-streaming)
* [JSON streaming](#json-streaming)
* [JsonProgressException](#jsonprogressexception)
@@ -183,6 +184,52 @@ $inspections = Block\awaitAll($promises, $loop);

Please refer to [clue/block-react](https://github.com/clue/php-block-react#readme) for more details.

#### Command streaming

The following API endpoint resolves with a buffered string of the command output
(STDOUT and/or STDERR):

```php
$client->execStart($exec);
```

Keep in mind that this means the whole string has to be kept in memory.
If you want to access the individual output chunks as they happen or
for bigger command outputs, it's usually a better idea to use a streaming
approach.

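For completeness, the buffered variant can be consumed through its promise like
this (a minimal sketch; it assumes `$client` and `$exec` are set up as in the
earlier examples):

```php
// buffered variant: the promise resolves once the command has terminated,
// with the complete STDOUT/STDERR output as a single string
$client->execStart($exec)->then(
    function ($output) {
        echo $output;
    },
    function (Exception $e) {
        // the command could not be started or the connection failed
        echo 'Error: ' . $e->getMessage() . PHP_EOL;
    }
);
```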
This works for (any number of) commands of arbitrary sizes.
The following API endpoint complements the default Promise-based API and returns
a [`Stream`](https://github.com/reactphp/stream) instance instead:

```php
$stream = $client->execStartStream($exec);
```

The resulting stream is a well-behaving readable stream that will emit
the normal stream events:

```php
$stream = $client->execStartStream($exec, $config);
$stream->on('data', function ($data) {
    // data will be emitted in multiple chunks
    echo $data;
});
$stream->on('close', function () {
    // the stream just ended
    echo 'Ended' . PHP_EOL;
});
```
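
Because this is a standard readable stream, it can also be piped into any
writable stream, for example to write the command output straight to a file
without buffering it in memory (a sketch; the `output.log` file name is only
illustrative, and `$loop` is the event loop from the earlier examples):

```php
// stream variant: pipe the command output into a file as it arrives
$stream = $client->execStartStream($exec);
$file = new React\Stream\WritableResourceStream(fopen('output.log', 'w'), $loop);
$stream->pipe($file);
```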

See also the [streaming exec example](examples/exec-stream.php) and the [exec benchmark example](examples/benchmark-exec.php).

Running this benchmark on my personal (rather mediocre) VM setup reveals that
the benchmark achieves a throughput of ~300 MiB/s while the (totally unfair)
comparison script using the plain Docker client only yields ~100 MiB/s.
Instead of me posting more details here, I encourage you to re-run the benchmark
yourself and adjust it to better suit your problem domain.
The key takeaway here is: *PHP is faster than you probably thought*.

#### TAR streaming

The following API endpoints resolve with a string in the [TAR file format](https://en.wikipedia.org/wiki/Tar_%28computing%29):