
doc: *.md formatting fixes in the benchmark dir

* Add language specification for the txt code blocks.
* Move the definitions to the bottom.
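The first change can be checked mechanically: a bare ``` ``` `` fence has no info string, so it matches a simple line pattern. A minimal sketch (the file name and sample content here are invented for illustration, not taken from the repository):

```shell
# Sketch: count bare (untagged) fence lines in a Markdown file.
# The sample file and its content are illustrative only.
printf '%s\n' 'Some text' '```' '$ node benchmark/run.js arrays' '```' > sample.md
grep -c '^```$' sample.md   # prints 2: both the opening and closing fence are bare
```

Once the opening fence gains an info string such as `console`, only the closing fence still matches this pattern.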

Ref: https://github.com/nodejs/node/pull/7727

PR-URL: https://github.com/nodejs/node/pull/7727
Reviewed-By: Rich Trott <rtrott@gmail.com>
Reviewed-By: Michaël Zasso <mic.besace@gmail.com>
Reviewed-By: James M Snell <jasnell@gmail.com>
Branch: v7.x
Author: Сковорода Никита Андреевич (8 years ago)
Commit: f3f5a89a10
1 changed file: benchmark/README.md (27 changes)
````diff
@@ -30,8 +30,6 @@ install.packages("ggplot2")
 install.packages("plyr")
 ```
 
-[wrk]: https://github.com/wg/wrk
-
 ## Running benchmarks
 
 ### Running individual benchmarks
@@ -43,7 +41,7 @@ conclusions about the performance.
 Individual benchmarks can be executed by simply executing the benchmark script
 with node.
 
-```
+```console
 $ node benchmark/buffers/buffer-tostring.js
 buffers/buffer-tostring.js n=10000000 len=0 arg=true: 62710590.393305704
@@ -65,7 +63,7 @@ measured in ops/sec (higher is better).**
 Furthermore you can specify a subset of the configurations, by setting them in
 the process arguments:
 
-```
+```console
 $ node benchmark/buffers/buffer-tostring.js len=1024
 buffers/buffer-tostring.js n=10000000 len=1024 arg=true: 3498295.68561504
@@ -78,7 +76,7 @@ Similar to running individual benchmarks, a group of benchmarks can be executed
 by using the `run.js` tool. Again this does not provide the statistical
 information to make any conclusions.
 
-```
+```console
 $ node benchmark/run.js arrays
 arrays/var-int.js
@@ -98,7 +96,7 @@ arrays/zero-int.js n=25 type=Buffer: 90.49906662339653
 ```
 
 It is possible to execute more groups by adding extra process arguments.
 
-```
+```console
 $ node benchmark/run.js arrays buffers
 ```
@@ -119,13 +117,13 @@ First build two versions of node, one from the master branch (here called
 The `compare.js` tool will then produce a csv file with the benchmark results.
 
-```
+```console
 $ node benchmark/compare.js --old ./node-master --new ./node-pr-5134 string_decoder > compare-pr-5134.csv
 ```
 
 For analysing the benchmark results use the `compare.R` tool.
 
-```
+```console
 $ cat compare-pr-5134.csv | Rscript benchmark/compare.R
 improvement significant p.value
@@ -159,8 +157,6 @@ _For the statistically minded, the R script performs an [independent/unpaired
 same for both versions. The significant field will show a star if the p-value
 is less than `0.05`._
 
-[t-test]: https://en.wikipedia.org/wiki/Student%27s_t-test#Equal_or_unequal_sample_sizes.2C_unequal_variances
-
 The `compare.R` tool can also produce a box plot by using the `--plot filename`
 option. In this case there are 48 different benchmark combinations, thus you
 may want to filter the csv file. This can be done while benchmarking using the
@@ -168,7 +164,7 @@ may want to filter the csv file. This can be done while benchmarking using the
 afterwards using tools such as `sed` or `grep`. In the `sed` case be sure to
 keep the first line since that contains the header information.
 
-```
+```console
 $ cat compare-pr-5134.csv | sed '1p;/encoding=ascii/!d' | Rscript benchmark/compare.R --plot compare-plot.png
 improvement significant p.value
@@ -190,7 +186,7 @@ example to analyze the time complexity.
 To do this use the `scatter.js` tool, this will run a benchmark multiple times
 and generate a csv with the results.
 
-```
+```console
 $ node benchmark/scatter.js benchmark/string_decoder/string-decoder.js > scatter.csv
 ```
@@ -198,7 +194,7 @@ After generating the csv, a comparison table can be created using the
 `scatter.R` tool. Even more useful it creates an actual scatter plot when using
 the `--plot filename` option.
 
-```
+```console
 $ cat scatter.csv | Rscript benchmark/scatter.R --xaxis chunk --category encoding --plot scatter-plot.png --log
 aggregating variable: inlen
@@ -229,7 +225,7 @@ can be solved by filtering. This can be done while benchmarking using the
 afterwards using tools such as `sed` or `grep`. In the `sed` case be
 sure to keep the first line since that contains the header information.
 
-```
+```console
 $ cat scatter.csv | sed -E '1p;/([^,]+, ){3}128,/!d' | Rscript benchmark/scatter.R --xaxis chunk --category encoding --plot scatter-plot.png --log
 chunk encoding mean confidence.interval
@@ -290,3 +286,6 @@ function main(conf) {
   bench.end(conf.n);
 }
 ```
+
+[wrk]: https://github.com/wg/wrk
+[t-test]: https://en.wikipedia.org/wiki/Student%27s_t-test#Equal_or_unequal_sample_sizes.2C_unequal_variances
````
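The `sed '1p;/encoding=ascii/!d'` filter that appears in the touched README text keeps the csv header (line 1) and deletes every other line that does not match the chosen configuration. A standalone sketch of the same idea (the sample rows below are invented, not real benchmark output):

```shell
# Keep line 1 (the header), then delete every non-matching line.
# '1p' prints line 1; '/encoding=ascii/!d' drops lines without the match.
# The sample rows are illustrative only.
printf '%s\n' 'improvement significant p.value' \
  'string_decoder.js encoding=ascii 10.0 %' \
  'string_decoder.js encoding=utf8 2.0 %' > compare.csv
sed '1p;/encoding=ascii/!d' compare.csv   # prints the header and the ascii row only
```

Note the header is printed once, not twice: `1p` prints it explicitly, and the subsequent `d` suppresses the automatic print for that line.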
