Prior to v0.10, Node ignored ECONNRESET errors in many situations.
There *are* valid cases in which ECONNRESET should be ignored as a
normal part of the TCP dance, but in many others, it's a very relevant
signal that must be heeded with care.
Exacerbating this problem, if the OutgoingMessage does not have a
req.connection._handle, it assumes that it is in the process of
connecting, and thus buffers writes up in an array.
The problem happens when you reuse a socket between two requests, and it
is destroyed abruptly in between them. The writes will be buffered,
because the socket has no handle, but it's not ever going to GET a
handle, because it's not connecting, it's destroyed.
The proper fix is to treat ECONNRESET correctly. However, this is a
behavior/semantics change, and cannot land in a stable branch.
Fix #4775
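A repro sketch of the reuse scenario (the keepAlive Agent option and the
timings are assumptions for illustration; on fixed versions the destroyed
socket is simply discarded and the second request gets a fresh one):

var http = require('http');

var server = http.createServer(function (req, res) {
  res.end('ok');
  // Abruptly tear the connection down shortly after responding.
  setTimeout(function () { req.socket.destroy(); }, 10);
});

server.listen(0, function () {
  var agent = new http.Agent({ keepAlive: true, maxSockets: 1 });
  var opts = { port: server.address().port, path: '/', agent: agent };

  http.get(opts, function (res) {
    res.resume();
    res.on('end', function () {
      // By now the server has destroyed the kept-alive socket; a second
      // request must not buffer its writes on it forever.
      setTimeout(function () {
        http.get(opts, function (res2) {
          res2.resume();
          res2.on('end', function () { agent.destroy(); server.close(); });
        }).on('error', function (err) {
          console.error('second request failed:', err.code);
          agent.destroy();
          server.close();
        });
      }, 50);
    });
  });
});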
Previously, we were only destroying sockets on end if their readable
side had already been ended. This caused a problem for non-readable
streams, since we don't expect to ever see an 'end' event from those.
Treat the lack of a 'readable' flag the same as if it were an ended
readable stream.
Fix #4751
node 0.9.6 introduced Buffer changes that cause the key argument of
Hmac::HmacInit (used in crypto.createHmac) to be NULL when the key is
empty. This argument is passed to OpenSSL's HMAC_Init, which does not
like NULL keys.
This change works around the issue by passing an empty string to
HMAC_Init when the key is empty, and adds crypto.createHmac tests for
the edge cases of empty keys and values.
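A sketch of the edge cases in question (illustrative only, not the
actual test file):

var crypto = require('crypto');

// Empty key, empty value, and both empty -- none of these should throw.
console.log(crypto.createHmac('sha1', '').update('some data').digest('hex'));
console.log(crypto.createHmac('sha1', 'secret').update('').digest('hex'));
console.log(crypto.createHmac('sha1', '').update('').digest('hex'));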
Fix an exception that was raised when the WriteStream was closed
immediately after creating it:
TypeError: Cannot read property 'fd' of undefined
at WriteStream.close (fs.js:1537:18)
<snip>
Avoid the TypeError and make sure the file descriptor is closed.
Fixes #4745.
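A repro sketch (the path is hypothetical; the point is calling close()
before the 'open' event has fired):

var fs = require('fs');

var ws = fs.createWriteStream('/tmp/test-4745.txt');
// Close immediately, before the underlying open() has completed.
ws.close(function (err) {
  // Previously this could throw "Cannot read property 'fd' of undefined";
  // now the descriptor is closed once it exists.
  if (err) throw err;
  console.log('closed cleanly');
});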
Convert the Buffer to an ArrayBuffer. The typed_array.buffer property
should be an ArrayBuffer to avoid confusion: a Buffer doesn't have a
byteLength property and, more importantly, its slice() method works
subtly differently.
That means that before this commit:
var buf = new Buffer(1);
var arr = new Int8Array(buf);
assert.equal(arr.buffer, buf);
assert(arr.buffer instanceof Buffer);
And now:
var buf = new Buffer(1);
var arr = new Int8Array(buf);
assert.notEqual(arr.buffer, buf);
assert(arr.buffer instanceof ArrayBuffer);
This is commit 01ee551, except for the DataView type this time.
Make the behavior of DataView consistent with that of typed arrays:
make a copy of the backing store.
Otherwise sockets that are 'finish'ed won't be unpiped and a `writing to
ended stream` error will arise.
This might sound unrealistic, but it happens in net.js. When
`socket.allowHalfOpen === false`, EOF will cause a `.destroySoon()` call
which ends the writable side of net.Socket.
Check that having a worker bind to a port that's already taken doesn't
leave the master process in a confused state. Releasing the port and
trying again should Just Work[TM].
Follow browser behavior and only share the backing store when it's an
ArrayBuffer. That is:
var abuf = new ArrayBuffer(32);
var a = new Int8Array(abuf);
var b = new Int8Array(abuf);
a[0] = 0;
b[0] = 1;
assert(a[0] === b[0]); // a and b share memory
But:
var a = new Int8Array(32);
var b = new Int8Array(a);
a[0] = 0;
b[0] = 1;
assert(a[0] !== b[0]); // a and b don't share memory
The typed arrays spec allows both `a[0] === b[0]` and `a[0] !== b[0]`
but Chrome and Firefox implement the behavior where memory is not
shared.
Copying the memory is less efficient but let's do it anyway for the
sake of the Principle of Least Surprise.
Fixes #4714.
This is more backwards-compatible with stream1 streams like `fs.WriteStream`,
which would allow a callback function to be passed in as the only argument.
Closes #4719.
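For example, this stream1-style call keeps working (the path is just a
placeholder):

var fs = require('fs');

var ws = fs.createWriteStream('/tmp/test-4719.txt');
ws.write('hello ');
ws.write('world\n');
// A callback as the only argument to end(), called once the data is flushed.
ws.end(function () {
  console.log('done writing');
});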
If the NODE_DEBUGGER_TIMEOUT environment variable is set, then use
that as the number of ms to wait for the debugger to start.
This is primarily to work around a race condition that almost never
happens in real usage with the debugger, but happens EVERY FRACKING
TIME when the debugger tests run as part of 'make test'.
Empty strings or buffers, if passed to the _read() cb, will not signal
an EOF. Only null or undefined will mark the end of data and trigger the
'end' event.
However, great care must be taken if you are returning an empty string
or buffer! There must be some other thing somewhere that will trigger
a read() call, because there will never be a readable event fired later.
This is in preparation for CryptoStreams being ported to streams2, where
it is safe to simply stop reading, because the crypto cycle process will
cause it to read(0) again at some future date.
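A sketch of the semantics using push(), which is assumed equivalent to
passing the value to the _read() cb:

var Readable = require('stream').Readable;

var r = new Readable();
var sent = false;
r._read = function () {
  if (sent) return;
  sent = true;
  this.push('');            // empty string: not treated as EOF
  this.push(new Buffer(0)); // empty buffer: also not EOF
  // Something else must eventually supply real data (or EOF), or no
  // 'readable'/'data' event will ever fire; here we do it right away.
  this.push('real data');
  this.push(null);          // only null (or undefined) ends the stream
};

r.on('data', function (chunk) { console.log('data: %s', chunk); });
r.on('end', function () { console.log('end'); });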
Make lines ending in \r\n emit one 'line' event, not two (where the
second one is an empty string).
This adds a new keypress name: 'return' (as in: 'carriage return').
Fixes #3305.
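A quick illustration (the input stream and its contents are hypothetical):

var readline = require('readline');
var Readable = require('stream').Readable;

var input = new Readable();
input._read = function () {};
input.push('first\r\nsecond\r\n');
input.push(null);

var rl = readline.createInterface({ input: input, terminal: false });
rl.on('line', function (line) {
  console.log('line: %j', line); // "first", then "second" -- no empty lines
});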
This commit breaks simple/test-stream2-stderr-sync. Need to figure out
a better way, or just accept that `(function W(){stream.write(b,W)})()`
is going to be noisy. People should really be using the `'drain'` event
for this use-case anyway.
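A sketch of the 'drain'-based pattern for that use case (the helper name
and target path are illustrative):

var fs = require('fs');

// Write `count` copies of `chunk`, pausing on backpressure instead of
// recursing on the write callback.
function writeMany(stream, chunk, count, done) {
  (function write() {
    var ok = true;
    while (count > 0 && ok) {
      count--;
      ok = count === 0 ? stream.write(chunk, done) : stream.write(chunk);
    }
    if (count > 0) stream.once('drain', write);
  })();
}

writeMany(fs.createWriteStream('/tmp/drain-example.txt'), new Buffer('b'), 1e5,
          function () { console.log('all chunks flushed'); });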
This reverts commit 02f7d1bfd8.
Always defer the _write callback. The optimization here was only
relevant in some oddball edge cases that we don't actually care about.
Our benchmarks confirm that just always deferring the Socket._write cb
is perfectly fine to do, and in some cases, even slightly more
performant.
The refactor in b43e544140 to use
stream.push() in Transform inadvertently caused it to immediately
consume all the written data, regardless of whether or not the readable
side was being consumed.
Only pull data through the _transform() process when the readable side
is being consumed.
Fix #4667
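A sketch of the intended behavior (the small highWaterMark and the chunk
counts are illustrative):

var Transform = require('stream').Transform;

var transformed = 0;
var t = new Transform({ highWaterMark: 4 });
t._transform = function (chunk, enc, cb) {
  transformed++;
  cb(null, chunk);
};

// Write a lot of data without consuming the readable side.
for (var i = 0; i < 100; i++) t.write(new Buffer('xxxx'));

setImmediate(function () {
  // Only enough chunks to fill the readable side's buffer have been pulled
  // through _transform; the rest wait until someone actually reads.
  console.log('chunks transformed with no reader:', transformed, 'of 100');
});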
TCPWrap::Initialize() and PipeWrap::Initialize() should be called before
any data is read from a received socket. But because these bindings are
initialized lazily, the Initialize() method isn't called.
Initialize the bindings manually when a socket is received.
See #4669
Fix issue where SlowBuffers couldn't be passed as target to Buffer
copy().
Also included checks to see if arguments are defined before assigning
their values. This offered a ~3x performance gain.
Backport of 16bbecc from master branch. Closes #4633.
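For instance (a sketch of the now-working call):

var SlowBuffer = require('buffer').SlowBuffer;

var src = new Buffer([1, 2, 3, 4]);
var dst = new SlowBuffer(4);
src.copy(dst);               // a SlowBuffer target no longer throws
console.log(dst[0], dst[3]); // 1 4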
Argument checks were simplified by setting all undefined/NaN or
out-of-bounds values equal to their defaults.
The copy() tests also had a flaw: each buffer had the same bit pattern
at the same offset, so even if the copy failed, the bit-by-bit comparison
would still have passed. This was fixed by filling each buffer with a
unique value before the copy operations.
Fix issue where SlowBuffers couldn't be passed as target to Buffer
copy().
Also included checks to see if arguments are defined before assigning
their values. This offered a ~3x performance gain.
This adds a proxy for bytesWritten to the tls.CryptoStream. This
change makes the connection object more similar between HTTP and
HTTPS requests in an effort to avoid confusion.
See issue #4650 for more background information.
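For example (the URL is just a placeholder):

var https = require('https');

var req = https.get('https://example.org/', function (res) {
  res.resume();
  res.on('end', function () {
    // Previously this was only available on plain HTTP connections.
    console.log('bytesWritten:', req.connection.bytesWritten);
  });
});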
We detect non-string and non-buffer values in onread and turn the
stream into an "objectMode" stream.
If we are in "objectMode" then howMuchToRead will always return 1,
state.length will always be incremented by 1 when there is a new item,
and fromList always takes the first value from the list.
This means that for object streams, the n in read(n) is ignored and
read() will always return a single value.
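A sketch using the explicit objectMode option (rather than the
auto-detection described above):

var Readable = require('stream').Readable;

var r = new Readable({ objectMode: true });
r._read = function () {};
r.push({ id: 1 });
r.push({ id: 2 });
r.push(null);

console.log(r.read(100)); // { id: 1 } -- n is ignored in object mode
console.log(r.read());    // { id: 2 }
console.log(r.read());    // null: end of stream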
Fixed a bug with unpipe where the pipe would break because
the flowing state was not reset to false.
Fixed a bug with sync cb(null, null) in _read which would forget to end
the readable stream.
This is similar to commit 2cbf458 but this time for 204 No Content
instead of 304 Not Modified responses.
When the user sends a 204 response with a Transfer-Encoding: chunked
header, suppress sending the zero chunk and force the connection to
close.
Force the connection to close when the response is a 304 Not Modified
and the user has set a "Transfer-Encoding: chunked" header.
RFC 2616 mandates that 304 responses MUST NOT have a body but node.js
used to send out a zero chunk anyway to accommodate clients that don't
have special handling for 304 responses.
It was pointed out that this might confuse reverse proxies to the point
of creating security liabilities, so suppress the zero chunk and force
the connection to close.
Fix an off-by-one error introduced in 9fe3734 that caused a regression
in the default endianness used for writes in DataView::setGeneric().
Fixes #4626.
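For reference, the expected default when the endianness flag is omitted is
big-endian (values are illustrative):

var dv = new DataView(new ArrayBuffer(2));
dv.setUint16(0, 0x1234);                  // no littleEndian flag: big-endian
console.log(dv.getUint8(0).toString(16)); // '12'
console.log(dv.getUint8(1).toString(16)); // '34'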