The null signal test existed, but it only covered the case where the target
process exists, not the case where it does not.
Also clarified that SIGUSR1 is reserved by Node.js only for receiving;
it is not reserved at all when sending a signal with kill().
kill(pid, 'O_RDWR'), or any other Node constant name, "worked". I fixed this
by also checking for the 'SIG' prefix, the same check done in the isSignal()
function.
Now the signal names supported by process.kill() are the same as those
supported by process.on().
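For illustration, a minimal sketch of the resulting behaviour (using the
real process.kill() API; 0 is the null signal):

```
// Null signal: only tests for the existence of the target process.
process.kill(process.pid, 0);

try {
  // A Node constant name, but not a signal; previously this "worked".
  process.kill(process.pid, 'O_RDWR');
} catch (e) {
  console.log('rejected:', e.message);
}
```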
The HTTP parser instance was freed twice, leading to it being reused by
several different requests simultaneously.
The flow:
`socketCloseListener` fires and calls `socket.read()` to flush any queued
data. `socket.buffer` has data, which is emitted and fires `socketOnData`
synchronously; this triggers a parser error that frees the parser.
`socketCloseListener` then resumes execution, only to have the wrong parser
associated with the socket.
The fix is to cache the parser only after the socket has been flushed,
and to assert in `socketOnData` that `socket === parser.socket`.
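A minimal sketch of that guard (hypothetical shape, not the exact
_http_client.js diff):

```
var assert = require('assert');

function socketOnData(d) {
  var socket = this;
  var parser = socket.parser;
  // If the parser was freed and reassigned during the close/flush dance
  // described above, this association no longer holds.
  assert.ok(parser && parser.socket === socket);
  // ... feed `d` to the parser as before ...
}
```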
Fix #6451
Add a 'serialNumber' property to the object that is returned by
tls.CryptoStream#getPeerCertificate(). It contains the certificate's
serial number encoded as a hex string. The format is identical to
`openssl x509 -serial -in path/to/certificate`.
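For example (hypothetical host; the property is read off the usual
getPeerCertificate() result):

```
var tls = require('tls');

var socket = tls.connect(443, 'example.org', function() {
  var cert = socket.getPeerCertificate();
  console.log(cert.serialNumber);  // hex string, e.g. '0F1A2B...'
});
```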
Fixes #6583.
Format negative zero as '-0' instead of as '0', as it does not behave
identically to positive zero. ((-0).toString() still returns '0' as
required by ES5 9.8.1.2).
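A quick check of the new behaviour (assuming this is the util.inspect()
formatter):

```
var util = require('util');

console.log(util.inspect(-0));   // '-0'
console.log(util.inspect(0));    // '0'
console.log(String(-0));         // '0', per the ES5 ToString rules
```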
Fixes joyent/node#6548.
Closes joyent/node#6550.
context._asyncQueue shouldn't be exposed as asyncQueue, as that allows
modification of queues already attached to an event, which is not
supposed to happen. Instead, context._asyncQueue should be copied.
Check that `listeners` is actually an array before trying to manipulate it,
because it won't be if no regular event listeners have been registered yet
but there are 'removeListener' event listeners.
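A repro along the lines described (hypothetical event name):

```
var EventEmitter = require('events').EventEmitter;
var ee = new EventEmitter();

// Only a 'removeListener' listener exists; no regular listeners yet.
ee.on('removeListener', function(type, fn) {});

// Before the fix this could throw, because the internal `listeners`
// value for 'foo' was undefined rather than an array.
ee.removeAllListeners('foo');
```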
Removing the depth counter while processing the nextTickQueue made it
possible to run out of memory when stuck in an infinite recursive loop of
nextTick() calls. There was also an edge case where too many callbacks were
pushed onto the nextTickQueue without actually being recursive.
This is being done to prevent possible cryptic FATAL ERROR messages from
popping up, and issues being posted about them.
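The failure mode in question is as simple as (illustrative only):

```
// Each callback schedules another before control ever returns to the
// event loop, so the nextTickQueue is never allowed to drain.
(function spin() {
  process.nextTick(spin);
})();
```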
v8's `messages.js` file's `CallSiteGetMethodName` runs through all object
properties and getters to figure out the method name of a function that
appears in a stack trace. This run-through will also read the `fd` property
of a `UDPWrap` instance's JavaScript object, making `UNWRAP()` fail.
As a simple alternative to the test case above, one could just keep a
reference to the dgram handle and try accessing `handle.fd` after it has
been fully closed.
Fix #6536
Before this commit, passing --debugger and other V8 debug switches to
node.js made node print a usage message and exit.
Rewrite the debug argument parser so it only consumes switches that we
understand and pass everything else as-is to V8.
A side effect of this change is that switches like --debugger_agent and
--debugger_port now work. That kind of obsoletes our debugger switches
because they implement pretty much the same functionality but let's
leave them in for now for the sake of convenience and backwards
compatibility.
Fixes#6526.
Emitting an event within an `EventEmitter#once` callback of the same
event name causes subsequent `EventEmitter#once` listeners of the same
name to be called multiple times:
```
var emitter = new EventEmitter();
emitter.once('e', function() {
  emitter.emit('e');
  console.log(1);
});
emitter.once('e', function() {
  console.log(2);
});
emitter.emit('e');

// Output
// 2
// 1
// 2
```
Fix the issue by calling the listener only if it has not already been
called.
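The guard amounts to the standard once-wrapper pattern (a sketch, not the
exact events.js diff):

```
// The wrapper removes itself on first invocation and forwards to the real
// listener at most once, even if it is re-entered via a nested emit().
function onceWrap(emitter, type, listener) {
  var fired = false;
  function g() {
    emitter.removeListener(type, g);
    if (!fired) {
      fired = true;
      listener.apply(emitter, arguments);
    }
  }
  g.listener = listener;
  return g;
}
```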
Fix invalid `hasOwnProperty` function usage.
For example, before in the REPL:
```
> Ar<Tab>
Array
Array ArrayBuffer
```
Now:
```
> Ar<Tab>
Array
ArrayBuffer
```
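The underlying pattern (generic sketch, not the exact repl.js change) is to
borrow the method from Object.prototype instead of calling it off the
target object:

```
var obj = Object.create(null);   // e.g. a dictionary with no prototype
obj.Array = true;

// obj.hasOwnProperty('Array') would throw here; the borrowed form works:
console.log(Object.prototype.hasOwnProperty.call(obj, 'Array'));  // true
```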
Fixes #6255.
Closes #6498.
This commit removes the simple/test-event-emitter-memory-leak test for
being unreliable with the new garbage collector: the memory pressure
exerted by the test case is too low for the garbage collector to kick
in. It can be made to work again by limiting the heap size with the
--max_old_space_size=x flag but that won't be very reliable across
platforms and architectures.
Otherwise the string triggers an assertion error in node_http_parser.c,
line 370:
assert(Buffer::HasInstance(args[0]) == true);
because the first argument is not a Buffer object.
When the socket passed in the `tls.connect()` `options` argument is not yet
connected to the server, `_handle` gets assigned to a `net.Socket` instead
of a `TLSSocket`.
When the socket is connecting to the remote server (i.e. not yet connected,
but already past the dns resolution phase), derive the `_connecting`
property from it, because otherwise `afterConnect()` will throw an
assertion.
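The scenario, for reference (hypothetical host):

```
var net = require('net');
var tls = require('tls');

// The plain socket is still connecting when handed to tls.connect().
var socket = net.connect(443, 'example.org');
var secure = tls.connect({ socket: socket }, function() {
  console.log('secure connection established');
});
```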
Fix #6443
The domain module has been switched over to use the AsyncListener API as
much as currently possible. There are still some hooks in the
EventEmitter, but hopefully we can remove those in the future.
Since RandomBytesRequest makes a call to MakeCallback, it needed to be
a class so AsyncWrap could handle any async listeners.
Also added a simple test for an issue encountered during implementation
where the memory was being released and then returned.
AsyncListener is a JS API that works in tandem with the AsyncWrap class
to allow the user to be alerted to key events in the life cycle of an
asynchronous event. The AsyncWrap class has its own MakeCallback
implementation that core will be migrated to use, and it uses
state-sharing techniques to allow quicker communication between JS and
C++ about whether the async event callbacks need to be called.
When `tls.connect()` is called with a `socket` option, it should try to
reuse the hostname previously passed to `net.connect()` and only after
that fall back to `'localhost'`.
Fix #6409
Currently fs.watch does not have an option to specify if a directory
should be recursively watched for events across all subdirectories.
Several file watcher APIs support this. FSEvents on OS X > 10.5 is
one example. libuv has added support for FSEvents, but fs.watch had
no way to specify that a recursive watch was required.
fs.watch now has an additional boolean option 'recursive'. When set
to true, and when supported, fs.watch will return notifications for
the entire directory tree hierarchy rooted at the specified path.
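For example (hypothetical path; 'recursive' is honoured only where the
platform supports it):

```
var fs = require('fs');

var watcher = fs.watch('/tmp/watched', { recursive: true },
                       function(event, filename) {
  console.log(event, filename);
});
```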
Previous behaviour was to drop to an openssl prompt
("Enter PEM pass phrase:") when supplying a private key with a
passphrase. This change adds a fourth, optional, parameter that
will be used as the passphrase.
To include this parameter in a backwards-compatible way it was
necessary to expose the previously undocumented (and unexposed)
feature of being able to explicitly set the output encoding.
This addresses a current shortcoming of the V8 SetNamedPropertyHandler
function.
It does not provide a way to intercept Object.defineProperty(..) calls.
As a result, these properties are not copied onto the contextified
sandbox when a new global property is added via either a function
declaration or an Object.defineProperty(global, ...) call.
Note that any function declarations or Object.defineProperty() globals
that are created asynchronously (in a setTimeout, callback, etc.) will
happen AFTER the call to copy properties, and thus not be caught.
The way to properly fix this is to add some sort of
Object::SetNamedDefinePropertyHandler() function that takes a callback,
which receives the property name and property descriptor as arguments.
Luckily, such situations are rare, and asynchronously-added globals
weren't supported by Node's VM module until 0.12 anyway. But, this
should be fixed properly in V8, and this copy function should be removed
once there is a better way.
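A quick way to see the copied properties (using the public vm API):

```
var vm = require('vm');

var sandbox = vm.createContext({});
vm.runInContext(
  'function foo() {}\n' +
  'Object.defineProperty(this, "bar", { value: 42 });',
  sandbox);

// With the copy step described above, both globals land on the sandbox:
console.log(typeof sandbox.foo, sandbox.bar);  // 'function' 42
```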
Fix #6416
The list of supported HTTP methods is available in JS land now so there
is no longer any need to pass a stringified version of the method to the
parser callback, it can look up the method name for itself.
Saves a call to v8::Eternal::Get() in the common case and a costly
v8::String::NewFromOneByte() in the uncommon case.
When the socket closes, the client's http incoming message object was
emitting an 'aborted' event if it had not yet been ended.
However, it's possible, when a response is being repeatedly paused and
resumed (e.g. if piped to a slow FS write stream), that there will be a
final chunk remaining in the js-land buffer when the socket is torn
down.
When that happens, the socketCloseListener function detects that we have
not yet reached the end of the response message data, and treats this as
an abrupt abort, immediately (and forcibly) ending the incoming message
data stream, and discarding that final chunk of data.
The result is that, for example, npm will have problems because tarballs
are missing a few bytes off the end, every time.
Closes GH-6402
If a client sends a lot more pipelined requests than we can handle, then
we need to provide backpressure so that the client knows to back off.
Do this by pausing both the stream and the parser itself when the
responses are not being read by the downstream client.
Backport of 085dd30
Before this commit, the SIGUSR1 signal handler wasn't installed until
late in the bootstrapping process and we were prone to miss signals
sent by other processes.
This commit installs an early-boot signal handler that merely records
the fact that we received a signal. Once the debugger infrastructure
is in place, the signal is re-raised, kickstarting the debugger.
Among other things, this means that simple/test-debugger-client is
now _much_ less likely to fail.
Commit 30e5366b ("core: Use a uv_signal for debug listener") changed
SIGUSR1 handling from a signal handler to libuv's uv_signal_*()
functionality to fix a race condition (and possible hang) in the
signal handler.
While a good change in itself, it made it impossible to interrupt
long running scripts. When a script is stuck in a busy loop, control
never returns to the event loop, which in turn means the signal
callback - and therefore the debugger - is never invoked.
This commit changes SIGUSR1 handling back to a normal signal handler
but one that treads _very_ carefully.
If a client sends a lot more pipelined requests than we can handle, then
we need to provide backpressure so that the client knows to back off.
Do this by pausing both the stream and the parser itself when the
responses are not being read by the downstream client.
Fix GH-6214