A ReadStream constructed from an existing file descriptor failed to start
reading automatically. It now does, which avoids a userspace call to
ReadStream.prototype._read().
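A minimal sketch of the affected pattern, assuming a readable file at
/tmp/example.txt (a hypothetical path) and the options.fd form of
fs.createReadStream:

    var fs = require('fs');

    // Open the file ourselves and hand the descriptor to the stream.
    var fd = fs.openSync('/tmp/example.txt', 'r');
    var stream = fs.createReadStream(null, { fd: fd });

    // With the fix, 'data' events flow without a manual stream._read() call.
    stream.on('data', function(chunk) {
      console.log('read %d bytes', chunk.length);
    });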
The base64 decoder would intermittently throw an out-of-bounds exception when
the buffer in `buf.write('', 'base64')` was a zero-sized buffer located at the
end of the slab.
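A minimal sketch of the failing call; whether it actually threw depended on
where the slab allocator happened to place the zero-sized buffer:

    var buf = Buffer(0);
    buf.write('', 'base64');  // should be a harmless no-op, never an exception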
Fixes #2657.
Honor the length argument in `buf.write(s, 0, buf.length, 'base64')`. Before
this commit, the length argument was ignored and the decoder would keep
writing until it hit the end of the underlying buffer. Since most buffers in
Node are slices of a shared parent buffer (the slab), this bug would overwrite
the contents of adjacent buffers.
The bug is trivially demonstrated with the following test case:
    var assert = require('assert');
    var a = Buffer(3);
    var b = Buffer('xxx');
    a.write('aaaaaaaa', 'base64');
    assert.equal(b.toString(), 'xxx');
This commit incidentally also fixes a bug where Buffer._charsWritten was not
updated for zero-length buffers.
* check exit code of child processes
* wait 1000 ms before exiting the child process
* prefix log messages with [PARENT] or [CHILD] to aid debugging
* kill all child processes before exiting
When using isolates, .fork() would break because the isolate had no
.disconnect() method. This removes the exit handler that would call
.disconnect(), since it was not required.
It also changes .disconnect() to throw if the channel is already closed;
this was not possible before because .disconnect() would be called twice.
If a timer callback throws and the user's uncaughtException handler ignores the
exception, other timers that expire on the current tick should still run.
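A minimal sketch of the scenario, assuming a user-installed handler that
swallows the exception; both timers are scheduled for the same tick and the
second must still fire:

    process.on('uncaughtException', function(err) {
      console.log('ignored:', err.message);
    });

    setTimeout(function() {
      throw new Error('boom');                 // first timer throws
    }, 100);

    setTimeout(function() {
      console.log('second timer still runs');  // must run despite the throw above
    }, 100);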
If #2582 goes through, this hack should be removed.
Fixes #2631.
Also, if an error is already provided, raise that error instead of a less
helpful 'stdout cannot be closed' error.
This is important for properly handling EPIPEs.
- watch for the death of child processes and fail the test if they all die
- use setTimeout to fail the test if responses are not received and processed within 5000 ms
As RFC 2616 says we should, assume that servers will provide a persistent
connection by default.
> A significant difference between HTTP/1.1 and earlier versions of
> HTTP is that persistent connections are the default behavior of any
> HTTP connection. That is, unless otherwise indicated, the client
> SHOULD assume that the server will maintain a persistent connection,
> even after error responses from the server.
> HTTP/1.1 applications that do not support persistent connections MUST
> include the "close" connection option in every message.
Fixes #2436.
The `path.exists*` functions now show a deprecation warning and delegate to
their counterparts in `fs`. They should be removed later.
test: fix references to `path.exists*` in tests
test: add tests for `fs.exists` and `fs.existsSync`
doc: reflect moving `path.exists*` to `fs`
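A minimal usage sketch of the relocated functions, using /etc/passwd purely as
an example path:

    var fs = require('fs');

    // New home of the old path.exists / path.existsSync helpers.
    fs.exists('/etc/passwd', function(exists) {
      console.log('async:', exists);
    });

    console.log('sync:', fs.existsSync('/etc/passwd'));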
uncaughtException handlers installed by the user override the default one that
the cluster module installs (the handler that kills off the master process).
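A minimal sketch of the intended behavior, assuming a master that installs its
own handler before an exception is thrown:

    var cluster = require('cluster');

    if (cluster.isMaster) {
      // User handler; it takes precedence over the default handler
      // that the cluster module installs.
      process.on('uncaughtException', function(err) {
        console.error('master keeps running despite:', err.message);
      });
      cluster.fork();
      setTimeout(function() {
        throw new Error('boom');
      }, 100);
    } else {
      // The worker plays no part in this sketch.
      process.exit(0);
    }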
Fixes #2556.
Passing a non-buffer or non-string argument to Socket.prototype.write triggered
an assert:
    Assertion failed: (Buffer::HasInstance(args[0])), function Write,
    file ../src/stream_wrap.cc, line 289.
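A minimal sketch of the offending call, assuming the fix surfaces the problem
as a regular JavaScript exception instead of aborting the process:

    var net = require('net');

    var socket = new net.Socket();
    try {
      socket.write(123);  // neither a string nor a Buffer
    } catch (e) {
      console.log('caught:', e.message);
    }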
Fixes #2532.
With an Upgrade or CONNECT request, http.ClientRequest emits its 'close' event
only after the socket is closed. However, once the response has been received,
the socket is no longer managed by the request.
http.ClientRequest should detach the socket before the 'upgrade'/'connect'
event is emitted, handing the socket over to the user, and then emit 'close'
immediately instead of waiting for the socket to close.
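A minimal sketch of the CONNECT flow, assuming a proxy listening on
localhost:8080 (both the proxy and the target host are hypothetical):

    var http = require('http');

    var req = http.request({
      host: 'localhost',
      port: 8080,
      method: 'CONNECT',
      path: 'example.org:80'
    });

    req.on('connect', function(res, socket, head) {
      // The socket now belongs to the caller, not to the request.
      socket.end();
    });

    req.on('close', function() {
      // Fires right away instead of waiting for the detached socket to close.
      console.log('request closed');
    });

    req.on('error', function(err) {
      console.error('request failed:', err.message);
    });

    req.end();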
Fixes #2510.