TCPWrap::Initialize() and PipeWrap::Initialize() should be called
before any data can be read from a received socket. But because these
bindings are initialized lazily, the Initialize() method wasn't being
called. Initialize the bindings manually when a socket is received.
See #4669
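A rough JS-level sketch of the idea (internal API; the exact patch may
differ): merely touching the lazy bindings forces their Initialize()
methods to run before the received handle is wrapped.

    // Accessing a binding loads it, which runs its Initialize().
    process.binding('tcp_wrap');   // TCPWrap::Initialize()
    process.binding('pipe_wrap');  // PipeWrap::Initialize()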
Fix the following OOM error in pummel/test-net-connect-memleak
and pummel/test-tls-connect-memleak:
FATAL ERROR: CALL_AND_RETRY_0 Allocation failed - process out of memory
Commit v8/v8@91afd39 increases the size of the deoptimization table
to the extent that a 64M float array pushes it over the brink. Switch
to SMIs so it stays below the limit.
pummel/test-net-connect-memleak is still failing albeit with a different
error this time. Needs further investigation.
=== release test-net-connect-memleak ===
Path: pummel/test-net-connect-memleak
-64 kB reclaimed
assert.js:102
  throw new assert.AssertionError({
  ^
AssertionError: false == true
    at done [as _onTimeout] (/home/bnoordhuis/src/nodejs/master/test/pummel/test-net-connect-memleak.js:48:3)
    at Timer.listOnTimeout [as ontimeout] (timers.js:110:15)
    at process._makeCallback (node.js:306:20)
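For reference, a minimal sketch of the kind of change involved (the
actual test code may differ): truncated 32-bit integers stay SMIs,
while fractional values force heap-allocated doubles.

    var N = 64 * 1024 * 1024;
    var big = new Array(N);
    for (var i = 0; i < N; i++) {
      // (value | 0) truncates to a 32-bit integer, which V8 can
      // store as a SMI instead of a boxed double.
      big[i] = (Math.random() * 1e9) | 0;
    }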
Mainly to allow native addons to export single functions on
`module.exports` rather than being restricted to operating on an
existing `exports` object.
Init functions now receive `exports` as the first argument, as before,
but also the `module` object as the second argument, if they support
it.
Related to #4634
cc: @rvagg
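In pure-JS terms, this enables the native equivalent of the following
(file names illustrative):

    // hello.js - the module *is* a function, instead of being an
    // object that functions hang off of.
    module.exports = function() {
      return 'world';
    };

    // main.js
    var hello = require('./hello');
    console.log(hello()); // 'world'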
Fix issue where SlowBuffers couldn't be passed as the target to
Buffer copy().
Also included checks to see if the arguments are defined before
assigning their values. This offered a ~3x performance gain.
Backport of 16bbecc from the master branch. Closes #4633.
Argument checks were simplified by setting all undefined/NaN or
out-of-bounds values equal to their defaults.
Also, the copy() tests had a flaw: each buffer had the same bit
pattern at the same offset, so even if the copy failed, the
bit-by-bit comparison would still have passed. This was fixed by
filling each buffer with a unique value before the copy operations.
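A minimal sketch of the case this fixes, using the Buffer API of the
time:

    var SlowBuffer = require('buffer').SlowBuffer;
    var source = new Buffer([1, 2, 3, 4]);
    var target = new SlowBuffer(4);
    // Previously this failed when the target was a SlowBuffer.
    source.copy(target, 0, 0, 4);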
This adds a proxy for bytesWritten to the tls.CryptoStream. This
change makes the connection object more similar between HTTP and
HTTPS requests in an effort to avoid confusion.
See issue #4650 for more background information.
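For example (hypothetical host), the same property can now be read
for both protocols:

    var https = require('https');
    var req = https.request({ host: 'example.org' }, function(res) {
      // req.connection is a tls.CryptoStream for HTTPS; the proxy
      // exposes bytesWritten on it just like on an HTTP socket.
      console.log(req.connection.bytesWritten);
    });
    req.end();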
We detect non-string and non-buffer values in onread and
turn the stream into an "objectMode" stream.
If we are in "objectMode" then howMuchToRead will always
return 1, state.length is always incremented by 1 when a new
item arrives, and fromList always takes the first value from
the list.
This means that for object streams, the n in read(n) is
ignored and read() will always return a single value.
Fixed a bug with unpipe where the pipe would break because
the flowing state was not reset to false.
Fixed a bug with a sync cb(null, null) in _read which would
forget to end the readable stream.
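A minimal sketch of the resulting behavior (explicit objectMode
stream shown for clarity):

    var Readable = require('stream').Readable;
    var rs = new Readable({ objectMode: true });
    rs.push({ a: 1 });
    rs.push({ b: 2 });
    rs.push(null);
    // n is ignored for object streams: each read() returns one item.
    console.log(rs.read(100)); // { a: 1 }
    console.log(rs.read());    // { b: 2 }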
This is similar to commit 2cbf458 but this time for 204 No Content
instead of 304 Not Modified responses.
When the user sends a 204 response with a Transfer-Encoding: chunked
header, suppress sending the zero chunk and force the connection to
close.
Force the connection to close when the response is a 304 Not Modified
and the user has set a "Transfer-Encoding: chunked" header.
RFC 2616 mandates that 304 responses MUST NOT have a body but node.js
used to send out a zero chunk anyway to accommodate clients that don't
have special handling for 304 responses.
It was pointed out that this might confuse reverse proxies to the point
of creating security liabilities, so suppress the zero chunk and force
the connection to close.
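A sketch of the case in question; the server now closes the
connection instead of emitting the zero-length chunk:

    var http = require('http');
    http.createServer(function(req, res) {
      // A 304 (or 204) must not have a body; even with an explicit
      // chunked header, no "0\r\n\r\n" terminator is sent and the
      // connection is closed.
      res.writeHead(304, { 'Transfer-Encoding': 'chunked' });
      res.end();
    }).listen(8000);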
Fix an off-by-one error introduced in 9fe3734 that caused a regression
in the default endianness used for writes in DataView::setGeneric().
Fixes #4626.
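The spec behavior the fix restores: when the littleEndian flag is
omitted, DataView reads and writes are big-endian.

    var dv = new DataView(new ArrayBuffer(4));
    dv.setUint16(0, 0x1122);        // flag omitted: big-endian
    dv.setUint16(2, 0x1122, true);  // explicit little-endian
    console.log(dv.getUint8(0), dv.getUint8(1)); // 17 34 (0x11 0x22)
    console.log(dv.getUint8(2), dv.getUint8(3)); // 34 17 (0x22 0x11)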
Due to the nature of asynchronous programming, it's impossible to
know what will run on the next tick. Because of this, it's not
correct to maintain domain stack state between ticks.
Since the _fatalException handler is only invoked after the stack is
unwound, once it exits the tick will end. The only reasonable thing to
do in that case is to exit *all* domains.
Keeping a list of all sockets that were sent to the child process
causes a memory leak and is thus unacceptable (see #4587). However,
`server.close()` should still work properly.
This commit introduces two options:
* child.send(socket, { track: true }) - sends the socket and tracks
  its status. Use this when you want `server.connections` to be a
  reliable number, and to receive a `close` event on sent sockets.
* child.send(socket) - sends the socket without tracking its status.
  This performs much better, because fewer round trips (RTTs) between
  master and child are needed.
With both of these options `server.close()` will wait for all sent
sockets to get closed.
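A sketch of the tracked variant (worker script name hypothetical):

    var net = require('net');
    var fork = require('child_process').fork;
    var child = fork('worker.js'); // hypothetical worker script
    var server = net.createServer(function(socket) {
      // Track the handed-off socket so server.close() can wait for
      // it and 'close' is still emitted for it in this process.
      child.send(socket, { track: true });
    });
    server.listen(8000);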
Unfortunately, it's just too slow to do this in events.js. Users will
just have to live with not having events named __proto__ or toString.
This reverts commit b48e303af0.
This test starts two clustered HTTP servers on the same port.
It expects the first cluster to succeed and the second cluster
to fail with EADDRINUSE.
Reapplies commit cacd3ae, accidentally reverted in a2851b6.
Reject negative offsets in SlowBuffer::MakeFastBuffer(); allowing
them made it possible to create buffers that point to arbitrary
addresses.
Reported by Trevor Norris.
Problem 1: If stream.push() triggers a 'readable' event, and the user
calls `read(n)` with some n > the highWaterMark, then the push() will
return false (indicating that they should not push any more), but no
future 'readable' event is coming (because we're above the
highWaterMark).
Solution: return true from push() when needReadable is set.
Problem 2: A read(n) for n != 0, after the stream had encountered an
EOF, would not trigger the 'end' event if the EOF was pushed in
synchronously by the _read() function.
Solution: Check for ended in stream.read() and schedule an end event if
the length now equals 0.
Fix #4585
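A minimal repro sketch for the second problem (synchronous EOF pushed
from inside _read()):

    var Readable = require('stream').Readable;
    var rs = new Readable();
    rs._read = function() {
      rs.push('x');   // last byte...
      rs.push(null);  // ...and EOF, in the same tick
    };
    rs.on('end', function() {
      console.log('ended'); // was not reached before the fix
    });
    console.log(rs.read(1)); // consumes the last byte; length is now 0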
Improvements:
* floating point operations are approx 4x faster
* quiet NaNs are now written
* all floating point reads/writes are now done in C, so there is no
  more need for lib/buffer_ieee754.js
* float values have more accurate min/max value checks
* added additional benchmarks for buffer reads/writes
* created benchmark/_bench_timer.js, a simple library that can be
  included in any benchmark and provides an intelligent tracker for
  sync and async tests
* added benchmarks for DataView set methods
* added checks and tests to make sure the offset is greater than 0
There was previously an assert() in there, but this part of the code is
so high-volume that the added cost made a measurable dent in http_simple.
Just checking inline is fine, though, and prevents a lot of potential
hazards.
Say that a stream's current read queue has 101 bytes in it, and the
underlying resource has ended (i.e., reached EOF).
If you do something like this:
stream.read(100); // leave a byte behind
stream.read(0); // read(0) for some reason
then the read(0) will get 0 from the howMuchToRead function. Since the
stream was ended, this was incorrectly treating the 0 as a "there is no
more in the buffer", and emitting 'end' before that last byte was read.
Why have the read(0) in the first place? We do this in some cases to
trigger the last few bytes of a net socket (such as a child process's
stdio pipes). This was causing issues when piping a `git archive` job
to a file: the resulting tarball was incomplete, because it occasionally
was not getting the last chunk.
Fix the following exception:
http.js:974
    this._httpMessage.emit('close');
    ^
TypeError: Cannot call method 'emit' of null
    at Socket.onServerResponseClose (http.js:974:21)
    at Socket.EventEmitter.emit (events.js:124:20)
    at net.js:421:10
    at process._tickCallback (node.js:386:13)
    at process._makeCallback (node.js:304:15)
Fixes #4586.
This also slightly changes the semantics, in that a 'readable'
event may be triggered by the first write() call, even if a
user has not yet called read().
This happens because the Transform _write() handler is calling
read(0) to start the flow of data. Technically, the new behavior
is more 'correct', since it is more in line with the semantics
of the 'readable' event in other streams.
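A sketch of the new semantics, using a pass-through transform
(current Transform API):

    var Transform = require('stream').Transform;
    var t = new Transform();
    t._transform = function(chunk, encoding, done) {
      done(null, chunk); // pass-through
    };
    t.on('readable', function() {
      // May now fire as a consequence of the first write(), before
      // any read() has been called.
      console.log('readable');
    });
    t.write('hello');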
When switching into compatibility mode by setting a `data` event
listener, the `_read()` method is called immediately. If the method
implementation invoked its callback in the same tick, all emitted
`data` events would be discarded, because the `data` listener hadn't
been attached yet.
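A sketch of the pattern that used to lose data, assuming this era's
callback-style _read(size, cb) signature:

    var Readable = require('stream').Readable;
    var rs = new Readable();
    var chunks = ['a', 'b', null];
    rs._read = function(size, cb) {
      // Invoking cb synchronously used to drop the 'data' events,
      // because the listener that triggered compatibility mode was
      // not attached yet.
      cb(null, chunks.shift());
    };
    rs.on('data', function(chunk) {
      console.log('data:', chunk.toString());
    });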