Node.js Core Benchmarks

This folder contains code and data used to measure performance of different Node.js implementations and different ways of writing JavaScript run by the built-in JavaScript engine.

For a detailed guide on how to write and run benchmarks in this directory, see the guide on benchmarks.

Table of Contents

  • Benchmark Directories
  • Other Top-level files
  • Common API

Benchmark Directories

| Directory | Purpose |
| --- | --- |
| arrays | Benchmarks for various operations on array-like objects, including Array, Buffer, and typed arrays. |
| assert | Benchmarks for the assert subsystem. |
| buffers | Benchmarks for the buffer subsystem. |
| child_process | Benchmarks for the child_process subsystem. |
| crypto | Benchmarks for the crypto subsystem. |
| dgram | Benchmarks for the dgram subsystem. |
| domain | Benchmarks for the domain subsystem. |
| es | Benchmarks for various new ECMAScript features and their pre-ES2015 counterparts. |
| fixtures | Benchmark fixtures used in various benchmarks throughout the benchmark suite. |
| fs | Benchmarks for the fs subsystem. |
| http | Benchmarks for the http subsystem. |
| misc | Miscellaneous benchmarks and benchmarks for shared internal modules. |
| module | Benchmarks for the module subsystem. |
| net | Benchmarks for the net subsystem. |
| path | Benchmarks for the path subsystem. |
| process | Benchmarks for the process subsystem. |
| querystring | Benchmarks for the querystring subsystem. |
| streams | Benchmarks for the streams subsystem. |
| string_decoder | Benchmarks for the string_decoder subsystem. |
| timers | Benchmarks for the timers subsystem, including setTimeout, setInterval, etc. |
| tls | Benchmarks for the tls subsystem. |
| url | Benchmarks for the url subsystem, including the legacy url implementation and the WHATWG URL implementation. |
| util | Benchmarks for the util subsystem. |
| vm | Benchmarks for the vm subsystem. |

Other Top-level files

The top-level files include common dependencies of the benchmarks and the tools for launching benchmarks and visualizing their output. The actual benchmark scripts should be placed in their corresponding directories.

  • _benchmark_progress.js: implements the progress bar displayed while running compare.js.
  • _cli.js: parses the command-line arguments passed to compare.js, run.js, and scatter.js.
  • _cli.R: parses the command-line arguments passed to compare.R.
  • _http-benchmarkers.js: selects and runs external tools for benchmarking the http subsystem.
  • common.js: see Common API.
  • compare.js: command-line tool for comparing performance between different Node.js binaries.
  • compare.R: R script for statistically analyzing the output of compare.js.
  • run.js: command-line tool for running individual benchmark suites.
  • scatter.js: command-line tool for comparing performance between different parameters in benchmark configurations, for example, to analyze time complexity.
  • scatter.R: R script for visualizing the output of scatter.js with scatter plots.
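A typical workflow with these tools might look like the following sketch. The suite names, file paths, and binary names below are placeholders chosen for illustration; see the guide on benchmarks for the full set of options each tool accepts.

```shell
# Run a single benchmark suite with the current node binary.
node benchmark/run.js string_decoder

# Compare two builds of node on one suite, then analyze the result in R.
node benchmark/compare.js --old ./node-old --new ./node-new string_decoder > compare.csv
cat compare.csv | Rscript benchmark/compare.R

# Explore how performance varies with a benchmark's parameters.
node benchmark/scatter.js benchmark/string_decoder/string-decoder.js > scatter.csv
cat scatter.csv | Rscript benchmark/scatter.R
```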

Common API

The common.js module is used by benchmarks for consistency across repeated tasks. It provides a number of functions and properties that help with writing benchmarks.

createBenchmark(fn, configs[, options])

See the guide on writing benchmarks.
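The typical shape of a benchmark file is sketched below. A real benchmark would `require('../common.js')`; here a tiny stand-in for createBenchmark is inlined so the sketch runs on its own, and the config names (`n`, `type`) and the loop body are made up for illustration. The real createBenchmark runs the main function once per combination of configuration values, in a child process.

```javascript
'use strict';

// Hypothetical minimal stand-in for common.createBenchmark: it takes the
// first value of each configuration option and schedules `main` once.
// This is NOT the real implementation, just enough to show the calling
// convention a benchmark file follows.
function createBenchmark(main, configs) {
  const conf = {};
  for (const key of Object.keys(configs)) conf[key] = configs[key][0];
  const bench = {
    start() { this._start = process.hrtime.bigint(); },
    end(operations) {
      const seconds = Number(process.hrtime.bigint() - this._start) / 1e9;
      this.rate = operations / seconds; // operations per second
    },
  };
  setImmediate(() => main(conf));
  return bench;
}

const bench = createBenchmark(main, {
  n: [1e6],                 // number of iterations
  type: ['concat', 'plus'], // hypothetical parameter
});

function main(conf) {
  const strings = [];
  bench.start(); // only the code between start() and end() is timed
  for (let i = 0; i < conf.n; i++) {
    strings.push(conf.type === 'concat' ? 'a'.concat('b') : 'a' + 'b');
  }
  bench.end(conf.n);
}
```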

default_http_benchmarker

The default benchmarker used to run HTTP benchmarks. See the guide on writing HTTP benchmarks.

PORT

The default port used to run HTTP benchmarks. See the guide on writing HTTP benchmarks.

sendResult(data)

Used in special benchmarks for which createBenchmark and the object it returns cannot accomplish what is needed. This function reports timing data to the parent process (usually created by running compare.js, run.js, or scatter.js).
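The idea can be sketched as follows. When the benchmark is forked by one of the launcher tools, process.send is available and the data goes to the parent over IPC; when the file is run standalone, the data is simply printed. The field names in the example payload (name, conf, rate, time) mirror the shape common.js reports, but treat the values and the benchmark name as made up for illustration.

```javascript
'use strict';

// Sketch of a sendResult-style reporter, not the actual common.js code.
function sendResult(data) {
  if (process.send) {
    // Forked by compare.js, run.js, or scatter.js: report over IPC.
    process.send(data);
  } else {
    // Standalone run: print the result instead.
    console.log(JSON.stringify(data));
  }
  return data;
}

const result = sendResult({
  name: 'misc/hypothetical-bench.js', // hypothetical benchmark name
  conf: { n: 1e6 },
  rate: 123456.78, // operations per second (made-up number)
  time: 8.1,       // elapsed seconds (made-up number)
});
```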

v8ForceOptimization(method[, ...args])

Forces V8 to mark the method for optimization with the native function %OptimizeFunctionOnNextCall() and returns the optimization status afterwards.

It can be used to prevent the benchmark from being disrupted by the optimizer kicking in halfway through. However, this could result in less effective optimization. In general, only use it if you know what it actually does.
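The idea behind this helper can be sketched as follows: warm the function up and ask V8 to optimize it before timing starts. The %-prefixed native calls are only valid when node runs with --allow-natives-syntax, so this sketch compiles them lazily with eval and falls back gracefully when the flag is absent. This is an illustrative stand-in, not the actual common.js implementation.

```javascript
'use strict';

// Sketch: request optimization of `fn` up front so the optimizer does
// not kick in halfway through a measurement. Returns true if the
// native syntax was available, false otherwise.
function forceOptimization(fn, ...args) {
  try {
    // eval defers parsing of the native-syntax calls to runtime;
    // without --allow-natives-syntax this throws a SyntaxError.
    const optimize = eval(
      '(f, a) => { f(...a); f(...a); %OptimizeFunctionOnNextCall(f); f(...a); }'
    );
    optimize(fn, args);
    return true;  // natives available, optimization requested
  } catch (e) {
    return false; // run node with --allow-natives-syntax to enable this
  }
}

const requested = forceOptimization((a, b) => a + b, 1, 2);
```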