This dramatically increases fs.WriteStream throughput by removing the
"higher default water marks" for fs.WriteStream.

Also includes a benchmark (sketched below). Current performance is
significantly higher than v0.8 for strings at all tested sizes except
size=1. Buffer performance is still lackluster.

Further improvement in the stream.Writable base class is required, but
this is a start.
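
For context, a minimal sketch of this kind of WriteStream throughput
benchmark; the path, chunk size, and totals here are illustrative
guesses, not the shipped benchmark:

    var fs = require('fs');

    var chunk = new Array(1024 + 1).join('x'); // 1 KiB string
    var total = 100 * 1024 * 1024;             // write 100 MiB overall

    // highWaterMark can still be passed explicitly; this change only
    // drops the larger fs-specific default.
    var ws = fs.createWriteStream('/tmp/bench.out',
                                  { highWaterMark: 16 * 1024 });

    var written = 0;
    var start = process.hrtime();

    function write() {
      // Write until the internal buffer is full, then wait for
      // 'drain', per the stream.Writable contract.
      while (written < total) {
        written += chunk.length;
        if (!ws.write(chunk)) return ws.once('drain', write);
      }
      ws.end();
    }

    ws.on('finish', function() {
      var t = process.hrtime(start);
      var elapsed = t[0] + t[1] / 1e9;
      console.log((total / (1024 * 1024) / elapsed).toFixed(2) + ' MB/s');
    });

    write();
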
Improvements:

* floating point operations are approximately 4x faster
* quiet NaNs are now written correctly
* all floating point reads/writes are now done in C, so there is no
  more need for lib/buffer_ieee754.js (see the sketch after this list)
* float values have more accurate min/max value checks
* add additional benchmarks for buffer reads/writes
* create benchmark/_bench_timer.js, a simple library that can be
  included in any benchmark and provides an intelligent tracker for
  sync and async tests
* add benchmarks for the DataView set methods
* add checks and tests to make sure the offset is greater than 0
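
A quick, illustrative sketch of the paths these items cover; the
values and offsets are only examples, and the out-of-range comment
assumes the new min/max checks throw as described above:

    var buf = new Buffer(8);

    // Floating point reads/writes go straight to C, including NaN:
    buf.writeDoubleLE(NaN, 0);
    console.log(isNaN(buf.readDoubleLE(0))); // true: a quiet NaN round-trips

    // The tightened min/max checks reject out-of-range values, e.g.
    // buf.writeFloatLE(3.5e38, 0) exceeds FLT_MAX and should throw.

    // DataView exercises the same ground through the typed array API:
    var dv = new DataView(new ArrayBuffer(8));
    dv.setFloat64(0, Math.PI, true); // true = little-endian
    console.log(dv.getFloat64(0, true));
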
Remove a lot of branches from the inner loop. This speeds up
buf.toString('base64') by about 20%; a sketch of the benchmark follows
the timings below.

Before:

    $ time out/Release/node benchmark/buffer-base64-encode.js

    real    0m6.607s
    user    0m5.508s
    sys     0m1.088s

After:

    $ time out/Release/node benchmark/buffer-base64-encode.js

    real    0m5.520s
    user    0m4.520s
    sys     0m0.992s
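
A benchmark of this shape, as a rough reconstruction; the buffer size
and iteration count are guesses, not what
benchmark/buffer-base64-encode.js actually uses:

    var buf = new Buffer(64 * 1024);
    buf.fill(42);

    // buf.toString('base64') is the hot path the branch removal targets.
    for (var i = 0; i < 32 * 1024; ++i) {
      buf.toString('base64');
    }
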
This is very similar to http.sh, but generates a flamegraph with
dtrace, pruning off the single-hit stacks so that the places where
significant amounts of time are spent are easier to see.

Sends a buffer to a server, which echoes it back, and measures the
throughput in Gbits/second. Very similar to throughput.js, but runs in
a single process, so that it is possible to dtrace it and get the
jsstack frames for profile comparison.
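
In rough outline, the shape of such a single-process echo benchmark;
the port, chunk size, and duration below are illustrative, not the
script's actual values:

    var net = require('net');

    var chunk = new Buffer(64 * 1024);
    chunk.fill(0x61);

    var server = net.createServer(function(conn) {
      conn.pipe(conn); // echo everything back
    });

    server.listen(8123, function() {
      var conn = net.connect(8123);
      var received = 0;
      var start = process.hrtime();

      conn.on('data', function(data) {
        received += data.length;
      });

      // Keep the pipe full: write until the socket's buffer fills,
      // then resume on 'drain'.
      function pump() {
        while (conn.write(chunk));
        conn.once('drain', pump);
      }
      pump();

      setTimeout(function() {
        var t = process.hrtime(start);
        var elapsed = t[0] + t[1] / 1e9;
        console.log((received * 8 / elapsed / 1e9).toFixed(3) +
                    ' Gbits/sec');
        process.exit(0);
      }, 5000);
    });
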
Use static buffers. Previously, most clock ticks were spent in
malloc() and free() rather than in read() and write().
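
The pattern in miniature (the names and sizes are illustrative):

    // Allocate once, outside the hot path, so malloc()/free() stop
    // dominating the profile. Safe to reuse here because the contents
    // never change between writes.
    var chunk = new Buffer(8192);
    chunk.fill(0x61);

    function pump(socket) {
      socket.write(chunk); // reuses the same allocation on every call
    }
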
Fix the measurements. Very fast runs used to produce bogus results like:

    Wrote 1048576000 bytes in -0.731630s using 8192 byte buffers: -1366.811093mB/s
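
One way to guard against this, assuming the timer is switched to the
monotonic process.hrtime() clock; the byte total below is just the
figure from the bogus output above:

    var bytes = 1048576000; // total bytes the benchmark writes
    var start = process.hrtime();

    // ... benchmark body runs here ...

    var diff = process.hrtime(start);
    var elapsed = diff[0] + diff[1] / 1e9; // seconds, never negative
    console.log('Wrote ' + bytes + ' bytes in ' + elapsed.toFixed(6) +
                's using 8192 byte buffers: ' +
                (bytes / (1024 * 1024) / elapsed).toFixed(6) + 'mB/s');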