Otherwise the string triggers an assertion error in node_http_parser.cc,
line 370:
assert(Buffer::HasInstance(args[0]) == true);
because the first argument is not a Buffer object.
When the socket passed in the `tls.connect()` `options` argument is not yet
connected to the server, `_handle` gets assigned to a `net.Socket`
instead of a `TLSSocket`.
When the socket is connecting to the remote server (i.e. not yet connected,
but already past the DNS resolve phase), derive the `_connecting` property
from it, because otherwise `afterConnect()` will throw an assertion.
fix #6443
The domain module has been switched over to use the AsyncListener API as
much as currently possible. There are still some hooks in the
EventEmitter, but hopefully we can remove those in the future.
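The user-facing domain API itself is unchanged by this refactor; for
reference, a minimal usage sketch:

    var domain = require('domain');

    var d = domain.create();
    d.on('error', function(err) {
      // Errors thrown from async callbacks started inside run() are routed here.
      console.error('caught by domain:', err.message);
    });
    d.run(function() {
      setTimeout(function() { throw new Error('boom'); }, 10);
    });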
Since RandomBytesRequest makes a call to MakeCallback, it needed to be
a class so AsyncWrap could handle any async listeners.
Also added a simple test for an issue encountered during implementation
where the memory was being released and then returned.
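The user-facing entry point affected is the asynchronous form of
`crypto.randomBytes()`, e.g.:

    var crypto = require('crypto');

    // The callback is invoked through RandomBytesRequest's MakeCallback path.
    crypto.randomBytes(16, function(err, buf) {
      if (err) throw err;
      console.log(buf.toString('hex'));
    });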
AsyncListener is a JS API that works in tandem with the AsyncWrap class
to allow the user to be alerted to key events in the life cycle of an
asynchronous event. The AsyncWrap class has its own MakeCallback
implementation that core will be migrated to use, and it uses state-sharing
techniques to allow quicker communication between JS and C++ about whether
the async event callbacks need to be called.
When `tls.connect()` is called with the `socket` option, it should try to
reuse the hostname previously passed to `net.connect()` and only fall back
to `'localhost'` after that.
fix #6409
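A minimal sketch of the scenario described above (host and port are
placeholders):

    var net = require('net');
    var tls = require('tls');

    // The plain socket may still be resolving/connecting when handed over.
    var plain = net.connect(443, 'example.org');

    var secure = tls.connect({ socket: plain }, function() {
      // With these changes the hostname given to net.connect() ('example.org')
      // is reused for the TLS connection instead of falling back to 'localhost'.
      console.log('secure connection established, authorized:', secure.authorized);
    });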
Currently fs.watch does not have an option to specify if a directory
should be recursively watched for events across all subdirectories.
Several file watcher APIs support this. FSEvents on OS X > 10.5 is
one example. libuv has added support for FSEvents, but fs.watch had
no way to specify that a recursive watch was required.
fs.watch now has an additional boolean option 'recursive'. When set
to true, and when supported, fs.watch will return notifications for
the entire directory tree hierarchy rooted at the specified path.
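A minimal sketch of the new option (the watched path is a placeholder):

    var fs = require('fs');

    // With { recursive: true }, events are reported for the whole tree rooted
    // at './data' on platforms that support it (e.g. FSEvents on OS X).
    var watcher = fs.watch('./data', { recursive: true }, function(event, filename) {
      console.log(event, filename);
    });

    // Stop watching when done:
    // watcher.close();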
Previous behaviour was to drop to an openssl prompt
("Enter PEM pass phrase:") when supplying a private key with a
passphrase. This change adds a fourth, optional, parameter that
will be used as the passphrase.
To include this parameter in a backwards-compatible way it was
necessary to expose the previously undocumented (and unexposed)
feature of being able to explicitly set the output encoding.
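For illustration only, a sketch using the object form that later crypto
documentation describes for supplying a key passphrase; the exact positional
signature added by this change may differ, and the key path is a placeholder:

    var crypto = require('crypto');
    var fs = require('fs');

    var sign = crypto.createSign('RSA-SHA256');
    sign.update('some data to sign');

    // Passphrase-protected PEM private key; no openssl prompt is shown.
    var pem = fs.readFileSync('key.pem');
    var signature = sign.sign({ key: pem, passphrase: 'top secret' }, 'hex');
    console.log(signature);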
This addresses a current shortcoming of the V8 SetNamedPropertyHandler
function.
It does not provide a way to intercept Object.defineProperty(..) calls.
As a result, these properties are not copied onto the contextified
sandbox when a new global property is added via either a function
declaration or an Object.defineProperty(global, ...) call.
Note that any function declarations or Object.defineProperty() globals
that are created asynchronously (in a setTimeout, callback, etc.) will
happen AFTER the call to copy properties, and thus not be caught.
The way to properly fix this is to add some sort of
Object::SetNamedDefinePropertyHandler() function that takes a callback,
which receives the property name and property descriptor as arguments.
Luckily, such situations are rare, and asynchronously-added globals
weren't supported by Node's VM module until 0.12 anyway. But, this
should be fixed properly in V8, and this copy function should be removed
once there is a better way.
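A minimal sketch of the behaviour the copy step is meant to preserve
(property names are arbitrary):

    var vm = require('vm');

    var sandbox = {};
    vm.createContext(sandbox);

    // Both a function declaration and an Object.defineProperty() call create
    // globals inside the context; the copy step mirrors them back onto the
    // sandbox once the synchronous run completes.
    vm.runInContext(
      'function foo() { return 42; }\n' +
      "Object.defineProperty(this, 'bar', { value: 1, enumerable: true });",
      sandbox);

    console.log(typeof sandbox.foo); // expected: 'function'
    console.log(sandbox.bar);        // expected: 1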
Fix #6416
The list of supported HTTP methods is available in JS land now, so there
is no longer any need to pass a stringified version of the method to the
parser callback; it can look up the method name for itself.
Saves a call to v8::Eternal::Get() in the common case and a costly
v8::String::NewFromOneByte() in the uncommon case.
When the socket closes, the client's http incoming message object emits an
'aborted' event if it has not yet been ended.
However, it's possible, when a response is being repeatedly paused and
resumed (e.g. if piped to a slow FS write stream), that there will be a
final chunk remaining in the js-land buffer when the socket is torn
down.
When that happens, the socketCloseListener function detects that we have
not yet reached the end of the response message data, and treats this as
an abrupt abort, immediately (and forcibly) ending the incoming message
data stream, and discarding that final chunk of data.
The result is that, for example, npm will have problems because tarballs
are missing a few bytes off the end, every time.
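The failing shape looks roughly like this (URL and destination path are
placeholders):

    var http = require('http');
    var fs = require('fs');

    http.get('http://registry.example/pkg.tgz', function(res) {
      // A slow destination repeatedly pauses/resumes the response, so a final
      // chunk can still be sitting in the js-land buffer when the socket closes.
      res.pipe(fs.createWriteStream('/tmp/pkg.tgz'))
         .on('finish', function() {
           console.log('done; with the fix no trailing bytes are dropped');
         });
    });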
Closes GH-6402
If a client sends a lot more pipelined requests than we can handle, then
we need to provide backpressure so that the client knows to back off.
Do this by pausing both the stream and the parser itself when the
responses are not being read by the downstream client.
Backport of 085dd30
Before this commit, the SIGUSR1 signal handler wasn't installed until
late in the bootstrapping process and we were prone to miss signals
sent by other processes.
This commit installs an early-boot signal handler that merely records
the fact that we received a signal. Once the debugger infrastructure
is in place, the signal is re-raised, kickstarting the debugger.
Among other things, this means that simple/test-debugger-client is
now _much_ less likely to fail.
Commit 30e5366b ("core: Use a uv_signal for debug listener") changed
SIGUSR1 handling from a signal handler to libuv's uv_signal_*()
functionality to fix a race condition (and possible hang) in the
signal handler.
While a good change in itself, it made it impossible to interrupt
long running scripts. When a script is stuck in a busy loop, control
never returns to the event loop, which in turn means the signal
callback - and therefore the debugger - is never invoked.
This commit changes SIGUSR1 handling back to a normal signal handler
but one that treads _very_ carefully.
If a client sends a lot more pipelined requests than we can handle, then
we need to provide backpressure so that the client knows to back off.
Do this by pausing both the stream and the parser itself when the
responses are not being read by the downstream client.
Fix GH-6214
Don't emit the 'disconnect' event until all workers have gone away.
Before this commit, the event was emitted when all open handles were
closed, which usually - but not always - amounts to the same thing.
Fixes #6346.
fs.truncate() and its synchronous sibling are implemented in terms of
open() + ftruncate(). Unfortunately, it opened the target file with
mode 'w' a.k.a. 'write-only and create or truncate at open'.
The subsequent call to ftruncate() then moved the end-of-file pointer
from zero to the requested offset with the net result of a file that's
neatly truncated at the right offset and filled with zero bytes only.
This bug was introduced in commit 168a5557 but in fairness, before that
commit fs.truncate() worked like fs.ftruncate() so it seems we've never
had a working fs.truncate() until now.
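A small example of the behaviour that was broken (the path is a
placeholder):

    var fs = require('fs');

    fs.writeFileSync('/tmp/example.txt', 'hello world');

    // Before the fix, the file was first opened with mode 'w', wiping its
    // contents, so the result was five zero bytes; with the fix the first
    // five original bytes survive.
    fs.truncate('/tmp/example.txt', 5, function(err) {
      if (err) throw err;
      console.log(fs.readFileSync('/tmp/example.txt', 'utf8')); // 'hello'
    });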
Fixes #6233.
Don't forget to initialize the c-ares task tree head when creating a
new Environment. Oversight from the multi-context work that landed
in commit 756b622.
Fixes #6244.
Fix "Assertion failed" when trying to connect to non-int ports:
Assertion failed: (args[2]->Uint32Value()), function Connect,
file ../src/tcp_wrap.cc, line 379.
Abort trap: 6
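A repro sketch of the kind of call that used to abort the process (the port
value is just an illustrative bad input):

    var net = require('net');

    // A port that does not convert to a valid unsigned integer previously
    // reached TCPWrap::Connect and tripped the Uint32Value() assertion above.
    net.connect({ host: 'localhost', port: NaN }, function() {});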
Apparently, context->Global() won't be destroyed if the context itself
isn't marked as weak and independent.
Also, the weakness flag should be cleared once the weak callback is
executed, otherwise we'll get crashes in Debug builds.
fix #6115 and #6201
The `options` that were being passed in before here are specific to a
single request, which kinda defeats the purpose of using an Agent in the
first place.
On a worse note, these `options` have not yet been "processed" by the
`http.ClientRequest` class, so if `port: null` is set (as it is as the
result of a `url.parse()` call), then they take precedence over the
processed values, since the agent's "options" get mixed in last in the
`createSocket()` function.
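The problematic shape, roughly (the URL is a placeholder):

    var http = require('http');
    var url = require('url');

    // url.parse() leaves port: null when the URL has no explicit port.
    var opts = url.parse('http://example.org/');
    opts.agent = new http.Agent();

    // Before the fix, the raw per-request options (including port: null) were
    // mixed in last inside createSocket(), clobbering the processed values.
    http.request(opts, function(res) {
      res.resume();
    }).end();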
Fixes #6197.
Fixes #6199.
Closes #6231.
Otherwise the data ends up "on the wire" twice, and
switching between consuming the stream using `ondata`
vs. `read()` would yield duplicate data, which was bad.
Functions created using `vm.runInNewContext('(function() { })')` will
reference only the `proxy_global_` object and not `sandbox_`. Thus, in
cases where there are no references to the sandbox (such as in the example
above), `ContextifyContext` will be destroyed and a use-after-free might
happen.
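A sketch of the shape of code that could trigger this (running with
--expose-gc makes the collection explicit):

    var vm = require('vm');

    // Only the returned function (which closes over the context's proxy
    // global) is kept alive; nothing references the sandbox itself.
    var fn = vm.runInNewContext('(function() { return typeof this; })');

    if (global.gc) global.gc();

    // Before the fix, the ContextifyContext backing fn's global could already
    // have been destroyed here, leading to the use-after-free described above.
    console.log(fn());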