* V8: Upgrade to 3.15.11.7
* npm: Upgrade to 1.2.2
* punycode: Upgrade to 1.2.0 (Mathias Bynens)
* repl: make built-in modules available by default (Felix Böhm)
* windows: add support for '_Total' perf counters (Scott Blomquist)
* cluster: make --prof work for workers (Ben Noordhuis)
* child_process: do not keep list of sent sockets (Fedor Indutny)
* tls: Follow RFC6125 more strictly (Fedor Indutny)
* buffer: floating point read/write improvements (Trevor Norris)
* TypedArrays: Improve DataView perf without endian param (Dean McNamee)
* module: assert require() called with a non-empty string (Felix Böhm, James Campos)
* stdio: Set readable/writable flags properly (isaacs)
* stream: Properly handle large reads from push-streams (isaacs)
Reject negative offsets in SlowBuffer::MakeFastBuffer(); a negative offset
allows the creation of buffers that point to arbitrary addresses.
Reported by Trevor Norris.
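A minimal sketch, under assumed argument semantics, of the kind of bounds
check this implies (the helper below is hypothetical, not the node source):
a negative offset or length, or a slice extending past the parent buffer, is
rejected before a pointer into the parent's memory is ever formed.

#include <stdint.h>
#include <stddef.h>

// Hypothetical validation helper: reject slices that would alias memory
// outside the parent buffer.
bool ValidSlice(size_t parent_length, int32_t offset, int32_t length) {
  if (offset < 0 || length < 0)
    return false;                         // would point outside the parent
  if (static_cast<size_t>(offset) > parent_length)
    return false;                         // starts past the end of the parent
  if (static_cast<size_t>(length) > parent_length - offset)
    return false;                         // runs past the end of the parent
  return true;
}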
V8 seems to be particularly slow converting an undefined value to false
in BooleanValue.
Revert this when we upgrade to V8 3.17, or whenever the fix discussed
in http://code.google.com/p/v8/issues/detail?id=2487 lands in V8.
Improvements:
* floating point operations are approximately 4x faster
* quiet NaNs are now written
* all floating point reads/writes are now done in C (see the sketch after
  this list), so lib/buffer_ieee754.js is no longer needed
* float values have more accurate min/max value checks
* additional benchmarks added for buffer reads/writes
* created benchmark/_bench_timer.js, a simple library that can be included
  in any benchmark and provides an intelligent tracker for sync and async
  tests
* added benchmarks for the DataView set methods
* added checks and tests to make sure the offset is not negative
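To make the list above concrete, here is a minimal sketch of what a native
little-endian double write can look like, assuming a simple
(data, length, offset, value) signature that is not the actual node
internals: the offset is validated, NaN is stored as a quiet NaN bit
pattern, and the bytes only touch the buffer after any needed swap.

#include <stdint.h>
#include <string.h>
#include <stdexcept>

static const uint64_t kQuietNaNBits = 0x7ff8000000000000ULL;

// Hypothetical native writer: validate the offset, canonicalize NaN to a
// quiet NaN, and copy the little-endian bytes into the buffer.
void WriteDoubleLE(uint8_t* data, size_t length, int64_t offset, double value) {
  if (offset < 0 ||
      static_cast<uint64_t>(offset) + sizeof(double) > length)
    throw std::out_of_range("offset is out of bounds");

  uint64_t bits;
  if (value != value)
    bits = kQuietNaNBits;                 // any NaN is stored as a quiet NaN
  else
    memcpy(&bits, &value, sizeof(bits));

#if defined(__BYTE_ORDER__) && __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
  bits = __builtin_bswap64(bits);         // host is big endian, target is LE
#endif
  memcpy(data + offset, &bits, sizeof(bits));
}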
The int8_t and uint8_t typedefs on sunos/smartos depend on a number of
compiler directives. Avoid ambiguity and specify signed and unsigned
char explicitly.
Fixes the following build error:
../src/stream_wrap.cc: In static member function 'static void* node::WriteWrap::operator new(size_t)':
../src/stream_wrap.cc:70:49: warning: no return statement in function returning non-void [-Wreturn-type]
In file included from ../src/v8_typed_array.cc:26:0:
../src/v8_typed_array_bswap.h: In function 'T v8_typed_array::SwapBytes(T) [with T = signed char]':
../src/v8_typed_array_bswap.h:150:23: instantiated from 'T v8_typed_array::LoadAndSwapBytes(void*) [with T = signed char]'
../src/v8_typed_array.cc:694:7: instantiated from 'static v8::Handle<v8::Value> {anonymous}::DataView::getGeneric(const v8::Arguments&) [with T = signed char]'
../src/v8_typed_array.cc:738:40: instantiated from here
../src/v8_typed_array_bswap.h:125:16: error: size of array is negative
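Two bits of background, sketched rather than quoted from the sunos headers
or the node diff: plain 'char', 'signed char', and 'unsigned char' are three
distinct C++ types, so a template specialized for one of them does not match
the others, and which one int8_t/uint8_t alias on sunos/smartos depends on
compiler directives. The "size of array is negative" message is how the
pre-C++11 compile-time assert idiom typically reports an unhandled type.

// Pre-C++11 compile-time assert idiom: when Ok is false the array size
// becomes -1 and the compiler reports "size of array is negative".
template <bool Ok>
struct CompileAssert {
  char assertion_failed[Ok ? 1 : -1];
};

// 'char' and 'signed char' are distinct types even on platforms where char
// is signed, so a specialization for one never matches the other:
template <typename T> struct IsSignedChar { enum { value = 0 }; };
template <> struct IsSignedChar<signed char> { enum { value = 1 }; };
// IsSignedChar<char>::value is 0 everywhere; IsSignedChar<signed char>::value is 1.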
Implement load and store swizzling operations. This removes an unneeded
round trip between types and keeps the value in the swappable type until
it is swapped, which matters for correctness with floating point: it avoids
loading the bits of a not-yet-swapped signaling NaN into the FPU.
It also produces better code (comments are mine):
gcc version 4.2.1 (Based on Apple Inc. build 5658) (LLVM build 2336.11.00)
setValue<double>:
movd %xmm0, %rax ; fp reg -> gen reg
bswapq %rax ; 64-bit byte swap
movq %rax, (%r15,%r12) ; store
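A minimal sketch of the load/store swizzling idea, with names loosely
borrowed from the diagnostics quoted earlier (LoadAndSwapBytes, SwapBytes)
but bodies that are purely illustrative: the bytes are moved into an
unsigned integer of the same width, swapped while still in integer form, and
only reinterpreted as the target type afterwards, so an unswapped signaling
NaN pattern never passes through the FPU.

#include <stddef.h>
#include <stdint.h>
#include <string.h>

// Hypothetical helper mapping a byte width to an unsigned integer type.
template <size_t N> struct UnsignedOfSize;
template <> struct UnsignedOfSize<1> { typedef uint8_t type; };
template <> struct UnsignedOfSize<2> { typedef uint16_t type; };
template <> struct UnsignedOfSize<4> { typedef uint32_t type; };
template <> struct UnsignedOfSize<8> { typedef uint64_t type; };

// Portable byte-by-byte swap; an intrinsics-based version is sketched after
// the next commit description.
template <typename T>
T SwapBytes(T value) {
  unsigned char* p = reinterpret_cast<unsigned char*>(&value);
  for (size_t i = 0, j = sizeof(T) - 1; i < j; ++i, --j) {
    unsigned char tmp = p[i];
    p[i] = p[j];
    p[j] = tmp;
  }
  return value;
}

template <typename T>
T LoadAndSwapBytes(const void* ptr) {
  typename UnsignedOfSize<sizeof(T)>::type bits;
  memcpy(&bits, ptr, sizeof(bits));       // load into the swappable type
  bits = SwapBytes(bits);                 // swap while it is still an integer
  T value;
  memcpy(&value, &bits, sizeof(value));
  return value;
}

template <typename T>
void SwapBytesAndStore(void* ptr, T value) {
  typename UnsignedOfSize<sizeof(T)>::type bits;
  memcpy(&bits, &value, sizeof(bits));
  bits = SwapBytes(bits);                 // swap before the bytes hit memory
  memcpy(ptr, &bits, sizeof(bits));
}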
Implement swizzling with compiler intrinsics, taking the native endianness
into account so that swaps are also correct on big endian machines.
This introduces a template function to swap the bytes of a value and macros
for the low-level swap (taking advantage of the gcc and msvc intrinsics).
It produces code like the following (comments are mine):
gcc version 4.2.1 (Based on Apple Inc. build 5658) (LLVM build 2336.11.00)
setValue<double>:
movd %xmm0, %rax ; fp reg -> gen reg
bswapq %rax ; 64-bit byte swap
movd %rax, %xmm0 ; gen reg -> fp reg
movq %xmm0, (%r15,%r12) ; store
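A minimal sketch of the intrinsics-backed swap along those lines; the macro
names are made up for this illustration, and the byte-order probe is one
simple way to detect the host endianness, not necessarily what the real
header does.

#include <stdint.h>
#if defined(_MSC_VER)
#include <stdlib.h>  // _byteswap_ushort / _byteswap_ulong / _byteswap_uint64
#endif

#if defined(_MSC_VER)
# define SWAP16(x) _byteswap_ushort(x)
# define SWAP32(x) _byteswap_ulong(x)
# define SWAP64(x) _byteswap_uint64(x)
#else
# define SWAP16(x) __builtin_bswap16(x)
# define SWAP32(x) __builtin_bswap32(x)
# define SWAP64(x) __builtin_bswap64(x)
#endif

// Whether the host stores the least significant byte first; callers only
// swap when the data's byte order differs from the host's.
inline bool IsLittleEndianHost() {
  const uint16_t probe = 1;
  return *reinterpret_cast<const unsigned char*>(&probe) == 1;
}

// Dispatch on the width of the (unsigned integer) value and delegate the
// low-level swap to the compiler intrinsic.
template <typename T>
T SwapBytes(T value) {
  switch (sizeof(T)) {
    case 1: return value;  // single bytes need no swapping
    case 2: return static_cast<T>(SWAP16(static_cast<uint16_t>(value)));
    case 4: return static_cast<T>(SWAP32(static_cast<uint32_t>(value)));
    case 8: return static_cast<T>(SWAP64(static_cast<uint64_t>(value)));
  }
  return value;
}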
V8 3.15 has new API functions that let you specify the Isolate. V8 and
node.js generally spend 0.5-3.5% of the time in pthread_getspecific(),
looking up the current Isolate. Avoid that overhead by making "our"
isolate global so we can pass it around. The change to the new API is
introduced in follow-up commits.
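A minimal sketch of that change in spirit (the variable name follows node's
convention; the function is illustrative, not the node diff): the isolate is
looked up once through V8's thread-local machinery and then passed
explicitly to the API calls that accept an Isolate*, instead of paying for
pthread_getspecific() on every call.

#include <v8.h>

static v8::Isolate* node_isolate = NULL;

// Capture the isolate once at startup; later code passes node_isolate to
// the V8 3.15 API functions that take an Isolate* instead of letting V8
// look it up through thread-local storage each time.
void InitNodeIsolate() {
  node_isolate = v8::Isolate::GetCurrent();
}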
Use static_cast instead of reinterpret_cast when casting from void*
to another type.
This is mostly an aesthetic change but may help catch bugs when the
affected code is modified.
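For illustration (the struct reuses the WriteWrap name from the build output
above; the callback itself is hypothetical):

struct WriteWrap {
  int status;
};

// Typical shape: a request object arrives through a void* user-data pointer.
void AfterWrite(void* data) {
  // static_cast documents a plain void*-to-object-pointer conversion and
  // stops compiling if 'data' later becomes an unrelated pointer type;
  // reinterpret_cast would keep compiling and hide the mistake.
  WriteWrap* req = static_cast<WriteWrap*>(data);
  req->status = 0;
}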
Some performance counter related functions are not available on Windows
XP and Windows Server 2003, which caused node to call a NULL function
pointer.
Closes #4462
Closes #4511
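A minimal sketch of the guard this implies; the function name resolved below
is a hypothetical placeholder, not the real perflib API, but the pattern is
the usual one: resolve the Vista+ entry point at runtime and treat a NULL
result as "counters unavailable" instead of calling it.

#include <windows.h>

typedef ULONG (WINAPI *CounterInitFn)(void);

static CounterInitFn counter_init = NULL;

void InitPerfCounters() {
  HMODULE module = GetModuleHandleW(L"advapi32.dll");
  if (module != NULL) {
    counter_init = reinterpret_cast<CounterInitFn>(
        GetProcAddress(module, "HypotheticalCounterInit"));  // placeholder name
  }
  if (counter_init == NULL)
    return;  // not supported on this Windows version; skip the counters
  counter_init();
}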
* assert: improve support for new execution contexts (lukebayes)
* domain: use camelCase instead of snake_case (isaacs)
* domain: Do not use uncaughtException handler (isaacs)
* fs: make 'end' work with ReadStream without 'start' (Ben Noordhuis)
* https: optimize createConnection() (Ryunosuke SATO)
* buffer: speed up base64 encoding by 20% (Ben Noordhuis)
* doc: Colorize API stability index headers in docs (Luke Arduini)
* net: socket.readyState corrections (bentaber)
* http: Performance enhancements for http under streams2 (isaacs)
* stream: fix http.ClientResponse to emit the 'end' event (Shigeki Ohtsu)
* stream: fix event handler leak in readstream pipe and unpipe (Andreas Madsen)
* build: Support ./configure --tag switch (Maciej Małecki)
* repl: don't touch `require.cache` (Nathan Rajlich)
* node: Emit 'exit' event when exiting for an uncaught exception (isaacs)
While it's true that error objects have a history of getting snake_case
properties attached by the host system, this is a recurring point of
confusion for Node users. The domain module is still 'experimental', so
it's best to change this sooner rather than later.
This adds a process._fatalException method which is called from C++ in
order to either emit the 'uncaughtException' event or emit 'error' on the
active domain.
The 'uncaughtException' event is an implementation detail that it would be
nice to deprecate one day, so exposing it as part of the domain machinery
is not ideal.
Fix #4375
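A minimal sketch of the C++ side of that hand-off, written against the V8
3.x API used by this release; the function shape and return-value handling
are illustrative, not the node diff.

#include <v8.h>

// Hand an otherwise-fatal error to process._fatalException, which decides
// whether to emit 'uncaughtException' or 'error' on the active domain.
// Returns false if JS did not handle it and the process should die.
bool FatalException(v8::Handle<v8::Object> process,
                    v8::Handle<v8::Value> error) {
  v8::HandleScope scope;
  v8::Local<v8::Value> fatal_v =
      process->Get(v8::String::New("_fatalException"));
  if (!fatal_v->IsFunction())
    return false;  // nothing to call; fall back to the old fatal path
  v8::Local<v8::Function> fatal = v8::Local<v8::Function>::Cast(fatal_v);
  v8::Handle<v8::Value> argv[1] = { error };
  v8::Local<v8::Value> caught = fatal->Call(process, 1, argv);
  return !caught.IsEmpty() && caught->IsTrue();
}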
Remove a lot of branches from the inner loop; this speeds up
buf.toString('base64') by about 20% (see the sketch after the timings
below).
Before:
$ time out/Release/node benchmark/buffer-base64-encode.js
real 0m6.607s
user 0m5.508s
sys 0m1.088s
After:
$ time out/Release/node benchmark/buffer-base64-encode.js
real 0m5.520s
user 0m4.520s
sys 0m0.992s
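To illustrate the kind of branch reduction involved (a sketch of the general
technique, not the node implementation): the hot loop below handles only
whole 3-byte groups, so the per-iteration remainder and padding checks
disappear, and the 1- or 2-byte tail is encoded once after the loop.

#include <stddef.h>

static const char table[] =
    "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";

// Encode srclen bytes from src into dst; returns the number of characters
// written. dst must have room for 4 * ((srclen + 2) / 3) characters.
size_t base64_encode(const unsigned char* src, size_t srclen, char* dst) {
  size_t i = 0, k = 0;
  size_t n = srclen / 3 * 3;              // largest multiple of 3 <= srclen

  for (; i < n; i += 3) {                 // branch-free body: 3 bytes in, 4 out
    unsigned v = src[i] << 16 | src[i + 1] << 8 | src[i + 2];
    dst[k++] = table[v >> 18 & 63];
    dst[k++] = table[v >> 12 & 63];
    dst[k++] = table[v >> 6 & 63];
    dst[k++] = table[v & 63];
  }

  switch (srclen - n) {                   // tail handled outside the hot loop
    case 1: {
      unsigned v = src[i] << 16;
      dst[k++] = table[v >> 18 & 63];
      dst[k++] = table[v >> 12 & 63];
      dst[k++] = '=';
      dst[k++] = '=';
      break;
    }
    case 2: {
      unsigned v = src[i] << 16 | src[i + 1] << 8;
      dst[k++] = table[v >> 18 & 63];
      dst[k++] = table[v >> 12 & 63];
      dst[k++] = table[v >> 6 & 63];
      dst[k++] = '=';
      break;
    }
  }
  return k;
}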