Every constant is certainly 4 bytes now, but FreeBSD's objdump utility
prints odd byte sequences (5, 6 and even 9 bytes long) for V8's data
section. We can safely ignore the upper bytes, because all constants
that we're using are just `int`s. Since `int` is 32 bits long on all
supported platforms (and V8's constants are 32-bit anyway), we ignore
any higher bits that were read.
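A minimal sketch of the idea (hypothetical helper, not the actual build
script), assuming the bytes objdump prints are little-endian: only the first
four bytes of whatever sequence was read contribute to the constant.

    // Hypothetical helper: mask an objdump byte sequence down to a 32-bit int.
    function parseConstant(hexBytes) {
      // hexBytes: byte strings as printed by objdump, e.g. ['2a', '00', '00', '00', '00']
      var value = 0;
      for (var i = 0; i < Math.min(hexBytes.length, 4); i++) {
        value += parseInt(hexBytes[i], 16) << (i * 8);  // ignore bytes past the 4th
      }
      return value >>> 0;  // force an unsigned 32-bit result
    }

    console.log(parseConstant(['2a', '00', '00', '00', '00']));  // 42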
Fix an off-by-one error introduced in 9fe3734 that caused a regression
in the default endianness used for writes in DataView::setGeneric().
Fixes #4626.
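For reference, the intended default when the littleEndian argument is omitted
is big-endian; the snippet below only illustrates standard DataView behavior,
not the patched code.

    var view = new DataView(new ArrayBuffer(4));

    view.setUint32(0, 0x12345678);               // no endian argument: big-endian write
    console.log(view.getUint8(0).toString(16));  // '12' - most significant byte first

    view.setUint32(0, 0x12345678, true);         // explicit little-endian write
    console.log(view.getUint8(0).toString(16));  // '78'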
Due to the nature of asynchronous programming, it's impossible to know
what will run on the next tick. Because of this, it's not correct to
maintain domain stack state between ticks.
Since the _fatalException handler is only invoked after the stack is
unwound, once it exits the tick will end. The only reasonable thing to
do in that case is to exit *all* domains.
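A rough sketch of what "exit *all* domains" means (hypothetical helper, not
the actual _fatalException code):

    // After the stack has unwound there is no way to know which domain the
    // next tick belongs to, so pop every domain off the stack.
    // process.domain is the top of the domain stack while a domain is active.
    function exitAllDomains() {
      while (process.domain) {
        process.domain.exit();
      }
    }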
The first example in cluster.markdown requires the NODE_DEBUG environment
variable to be set in order to show the debug message.
Also fix the message, because it differed slightly from the actual output.
* V8: Upgrade to 3.15.11.7
* npm: Upgrade to 1.2.2
* punycode: Upgrade to 1.2.0 (Mathias Bynens)
* repl: make built-in modules available by default (Felix Böhm)
* windows: add support for '_Total' perf counters (Scott Blomquist)
* cluster: make --prof work for workers (Ben Noordhuis)
* child_process: do not keep list of sent sockets (Fedor Indutny)
* tls: Follow RFC6125 more strictly (Fedor Indutny)
* buffer: floating point read/write improvements (Trevor Norris)
* TypedArrays: Improve dataview perf without endian param (Dean McNamee)
* module: assert require() called with a non-empty string (Felix Böhm, James Campos)
* stdio: Set readable/writable flags properly (isaacs)
* stream: Properly handle large reads from push-streams (isaacs)
Profiling in clustered environments doesn't work out of the box.
By default, V8 writes the profile data of all processes to a single
v8.log.
Running that log file through a tick processor produces bogus numbers
because many events won't match up with the recorded memory mappings
and you end up with graphs where 80+% of the ticks are unaccounted for.
Fixing the tick processor to deal with multi-process output is not very
useful because the processes may be running wildly disparate workloads.
That's why we fix up the command line arguments to include
a "--logfile=v8-%p.log" argument (where %p is expanded to the PID)
unless they already contain a --logfile argument.
Fixes #4617.
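A sketch of the fix-up (hypothetical function, not the actual lib/cluster.js
code):

    // Give each worker its own V8 log file unless the user already passed
    // a --logfile argument; V8 expands %p to the PID of the writing process.
    function fixupExecArgv(execArgv) {
      var hasLogfile = execArgv.some(function(arg) {
        return /^--logfile=/.test(arg);
      });
      return hasLogfile ? execArgv : execArgv.concat('--logfile=v8-%p.log');
    }

    console.log(fixupExecArgv(['--prof']));
    // [ '--prof', '--logfile=v8-%p.log' ]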
Keeping a list of all sockets that were sent to the child process causes a
memory leak and is thus unacceptable (see #4587). However, `server.close()`
should still work properly.
This commit introduces two options:
* child.send(socket, { track: true }) - sends the socket and tracks its
status. Use it when you want `server.connections` to be a reliable number
and want to receive a `close` event on sent sockets.
* child.send(socket) - sends the socket without tracking its status. This
performs much better, because fewer round trips between master and child
are needed.
With either option, `server.close()` will wait for all sent sockets to be
closed (see the usage sketch below).
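Usage sketch following the call shape shown above; the worker script name and
port are hypothetical, and the `track` option only exists with this change.

    var net = require('net');
    var fork = require('child_process').fork;

    var child = fork('child.js');   // hypothetical worker script

    var server = net.createServer(function(socket) {
      // Track the sent socket so `server.connections` stays reliable and a
      // 'close' event fires for it; omit the option when performance matters.
      child.send(socket, { track: true });
    });

    server.listen(1337);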
Unfortunately, it's just too slow to do this in events.js. Users will
just have to live with not having events named __proto__ or toString.
This reverts commit b48e303af0.
'Stability: 5' is described as 'Locked', not as 'API Locked', in other
documents.
For example:
- `/doc/api/assert.markdown`
- `/doc/api/util.markdown`
The word 'API' was introduced in 192192a.
This test starts two clustered HTTP servers on the same port.
It expects the first cluster to succeed and the second cluster
to fail with EADDRINUSE.
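A rough sketch of the scenario (hypothetical script and port, not the actual
test file); the script is launched twice and the second launch is expected to
fail.

    var cluster = require('cluster');
    var http = require('http');

    var PORT = 12346;   // hypothetical port shared by both launches

    if (cluster.isMaster) {
      cluster.fork();
    } else {
      http.createServer(function(req, res) {
        res.end('ok');
      }).on('error', function(err) {
        console.error('bind failed:', err.code);   // 'EADDRINUSE' on the second launch
        process.exit(1);
      }).listen(PORT);
    }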
Reapplies commit cacd3ae, accidentally reverted in a2851b6.
Reject negative offsets in SlowBuffer::MakeFastBuffer(); otherwise it allows
the creation of buffers that point to arbitrary addresses.
Reported by Trevor Norris.
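The actual fix lives in C++; the sketch below only illustrates the kind of
bounds check involved (hypothetical helper).

    // Reject negative values and out-of-bounds slices before creating the
    // fast buffer view over the parent's memory.
    function checkFastBufferArgs(offset, length, parentLength) {
      if (offset < 0 || length < 0)
        throw new RangeError('offset and length must not be negative');
      if (offset + length > parentLength)
        throw new RangeError('slice extends past the end of the parent buffer');
    }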
V8 seems to be particularly slow converting an undefined value to false
in BooleanValue.
Revert this when we upgrade to V8 3.17, or whenever the fix discussed
in http://code.google.com/p/v8/issues/detail?id=2487 lands in V8.
Problem 1: If stream.push() triggers a 'readable' event, and the user
calls `read(n)` with some n > the highWaterMark, then the push() will
return false (indicating that they should not push any more), but no
future 'readable' event is coming (because we're above the
highWaterMark).
Solution: return true from push() when needReadable is set.
Problem 2: A read(n) for n != 0, after the stream had encountered an
EOF, would not trigger the 'end' event if the EOF was pushed in
synchronously by the _read() function.
Solution: Check for ended in stream.read() and schedule an end event if
the length now equals 0.
Fix #4585
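A sketch of Problem 1's scenario (hypothetical stream, not from the test
suite), reading far more than the highWaterMark from a push-style Readable:

    var Readable = require('stream').Readable;

    var rs = new Readable({ highWaterMark: 16 });
    var chunks = ['a', 'b', 'c', null];   // null signals EOF

    rs._read = function() {
      if (chunks.length === 0) return;    // nothing left to push
      var chunk = chunks.shift();
      var more = rs.push(chunk === null ? null : new Buffer(chunk));
      // With the fix, `more` stays true while a 'readable' event is pending,
      // so the producer is not told to stop when no wake-up call would follow.
    };

    rs.on('readable', function() {
      rs.read(1024);   // read(n) with n far above the highWaterMark
    });
    rs.on('end', function() {
      console.log('end');   // Problem 2: must fire even when EOF was pushed synchronously
    });

    rs.read(0);   // kick off the first _read() call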
Improved the order in which assert checks are executed and added additional
parameter checks to ensure no bad values make it through (e.g. negative
offset values).
Improvements:
* floating point operations are approximately 4x faster
* quiet NaNs are now written
* all floating point reads/writes are now done in C, so lib/buffer_ieee754.js
is no longer needed
* float values have more accurate min/max value checks
* added additional benchmarks for buffer reads/writes
* created benchmark/_bench_timer.js, a simple library that can be included
in any benchmark and provides an intelligent tracker for sync and async
tests
* added benchmarks for DataView set methods
* added checks and tests to make sure offset is greater than 0 (see the
sketch after this list)
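A small usage sketch of the float paths touched by this change (values and
offsets are arbitrary):

    var buf = new Buffer(8);

    buf.writeDoubleLE(NaN, 0);            // written as a quiet NaN
    console.log(buf.readDoubleLE(0));     // NaN

    buf.writeFloatLE(1.5, 4);
    console.log(buf.readFloatLE(4));      // 1.5

    try {
      buf.writeFloatLE(1.5, -4);          // negative offsets are rejected
    } catch (e) {
      console.error(e.message);
    }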