# io.js core benchmark tests

This folder contains benchmark tests to measure the performance of certain io.js APIs.

## Prerequisites

Most of the HTTP benchmarks require `wrk` and `ab` to be installed. Both are usually available through your preferred package manager.

## How to run tests

There are three ways to run benchmark tests:

### Run all tests of a given type

For example, buffers:

```
iojs benchmark/common.js buffers
```

The above command will find all scripts under the `buffers` directory and require each of them as a module. When a test script is required, it creates an instance of `Benchmark` (a class defined in `common.js`). On the next tick, the `Benchmark` constructor iterates through the configuration object's property values and runs the test function with each combination of arguments in spawned processes. For example, `buffers/buffer-read.js` has the following configuration:

```js
var bench = common.createBenchmark(main, {
  noAssert: [false, true],
  buffer: ['fast', 'slow'],
  type: ['UInt8', 'UInt16LE', 'UInt16BE',
         'UInt32LE', 'UInt32BE',
         'Int8', 'Int16LE', 'Int16BE',
         'Int32LE', 'Int32BE',
         'FloatLE', 'FloatBE',
         'DoubleLE', 'DoubleBE'],
  millions: [1]
});
```

The runner takes one item from each property's array of values to build the list of arguments for each run of the main function. The main function receives the `conf` object as follows:

- first run: `{ noAssert: false, buffer: 'fast', type: 'UInt8', millions: 1 }`
- second run: `{ noAssert: false, buffer: 'fast', type: 'UInt16LE', millions: 1 }`

...
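In other words, the runner expands the configuration into the Cartesian product of the property values, one run per combination. A minimal sketch of that expansion (hypothetical; the real `common.js` may implement it differently):

```js
// Hypothetical sketch, NOT the actual common.js code: expand a configuration
// object into one plain-object argument set per combination of values.
function expandConfig(config) {
  var keys = Object.keys(config);
  // Start with a single empty combination and extend it key by key.
  var combos = [{}];
  keys.forEach(function (key) {
    var next = [];
    combos.forEach(function (combo) {
      config[key].forEach(function (value) {
        var copy = {};
        for (var k in combo) copy[k] = combo[k];
        copy[key] = value;
        next.push(copy);
      });
    });
    combos = next;
  });
  return combos;
}

var runs = expandConfig({
  noAssert: [false, true],
  buffer: ['fast', 'slow'],
  type: ['UInt8', 'UInt16LE'], // list truncated for illustration
  millions: [1]
});
console.log(runs.length); // 2 * 2 * 2 * 1 = 8 combinations
```

Later keys vary fastest, which matches the first/second run ordering shown above.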

In this case, the main function will run 2*2*14*1 = 56 times. The console output looks like the following:

```
buffers//buffer-read.js
buffers/buffer-read.js noAssert=false buffer=fast type=UInt8 millions=1: 271.83
buffers/buffer-read.js noAssert=false buffer=fast type=UInt16LE millions=1: 239.43
buffers/buffer-read.js noAssert=false buffer=fast type=UInt16BE millions=1: 244.57
...
```

The last number is the rate of operations. Higher is better.
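The reported number is work per unit of elapsed time, which is why higher is better. As a hypothetical illustration only (not the actual `common.js` implementation), `bench.start()`/`bench.end(n)` could be built on `process.hrtime` like this:

```js
// Hypothetical timer sketch, NOT the actual common.js implementation:
// start() records a high-resolution timestamp, and end(n) reports
// n units of work divided by the elapsed wall-clock time in seconds.
function makeTimer() {
  var startTime;
  return {
    start: function () { startTime = process.hrtime(); },
    end: function (n) {
      var diff = process.hrtime(startTime);  // [seconds, nanoseconds]
      var elapsed = diff[0] + diff[1] / 1e9; // total seconds
      return n / elapsed;                    // units of work per second
    }
  };
}

var timer = makeTimer();
timer.start();
var sum = 0;
for (var i = 0; i < 1e6; i++) sum += i; // stand-in workload
var rate = timer.end(1); // e.g. "millions of iterations per second"
console.log(rate.toFixed(2));
```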

### Run an individual test

For example, buffer-read.js:

```
iojs benchmark/buffers/buffer-read.js
```

The output:

```
buffers/buffer-read.js noAssert=false buffer=fast type=UInt8 millions=1: 246.79
buffers/buffer-read.js noAssert=false buffer=fast type=UInt16LE millions=1: 240.11
buffers/buffer-read.js noAssert=false buffer=fast type=UInt16BE millions=1: 245.91
...
```

### Run tests with options

This example will run only the first type of url test, with one iteration. (Note: benchmarks require many iterations to be statistically accurate.)

```
iojs benchmark/url/url-parse.js type=one n=1
```

Output:

```
url/url-parse.js type=one n=1: 1663.74402
```
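Options of this kind are plain `key=value` strings on the command line. A hypothetical sketch of how such arguments can be parsed into an options object (not the actual runner code):

```js
// Hypothetical sketch: turn ['type=one', 'n=1'] into { type: 'one', n: '1' },
// similar in spirit to how the runner restricts a benchmark's configuration.
function parseArgs(argv) {
  var options = {};
  argv.forEach(function (arg) {
    var eq = arg.indexOf('=');
    if (eq === -1) return; // ignore arguments that are not key=value
    options[arg.slice(0, eq)] = arg.slice(eq + 1);
  });
  return options;
}

console.log(parseArgs(['type=one', 'n=1'])); // { type: 'one', n: '1' }
```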

## How to write a benchmark test

The benchmark tests are grouped by types. Each type corresponds to a subdirectory, such as arrays, buffers, or fs.

Let's add a benchmark test for the `Buffer.slice` function. First, create the file `buffers/buffer-slice.js`.

### The code snippet

```js
var common = require('../common.js'); // Load the test runner

var SlowBuffer = require('buffer').SlowBuffer;

// The two buffers that the benchmark slices
var buf = new Buffer(1024);
var slowBuf = new SlowBuffer(1024);

// Create a benchmark test for function `main` and the configuration variants
var bench = common.createBenchmark(main, {
  type: ['fast', 'slow'], // Two types of buffer
  n: [512] // Number of times (each unit is 1024) to call the slice API
});

function main(conf) {
  // Read the parameters from the configuration
  var n = +conf.n;
  var b = conf.type === 'fast' ? buf : slowBuf;
  bench.start(); // Start benchmarking
  for (var i = 0; i < n * 1024; i++) {
    // Add your test here
    b.slice(10, 256);
  }
  bench.end(n); // End benchmarking
}
```