There was no need to share state between C++ and JS for these two
values, so they have been moved to their respective locations. This
helps performance only a tiny bit, but it reduces code complexity much
more.
We need to keep ObjectWrap around for module authors (we think), but
V8 3.21 broke node_object_wrap.h with respect to MSVC. Coincidentally,
we no longer use ObjectWrap at all in core, and native modules might
as well use their own entirely internal implementation if they need it.
Turns out that we don't use node_object_wrap.h any more in core, and,
with V8 3.21, it's breaking our Windows build. By removing the
references to it everywhere (and adding an explicit node.h include in
the one case where node_object_wrap.h was the only way node.h was
being pulled in), we have restored the Windows build.
Previous behaviour was to drop to an OpenSSL prompt
("Enter PEM pass phrase:") when supplying a private key with a
passphrase. This change adds a fourth, optional parameter that
will be used as the passphrase.
To include this parameter in a backwards-compatible way it was
necessary to expose the previously undocumented (and unexposed)
feature of being able to explicitly set the output encoding.
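A rough usage sketch (the argument order shown here is assumed for
illustration, not authoritative):

    var fs = require('fs');
    var crypto = require('crypto');

    var sign = crypto.createSign('RSA-SHA256');
    sign.update('some data to sign');

    var encryptedKey = fs.readFileSync('key.pem');  // passphrase-protected

    // Before: OpenSSL would prompt "Enter PEM pass phrase:" on the
    // terminal. Now the passphrase can be passed in directly; note that
    // the output encoding must be given explicitly so the trailing
    // passphrase parameter can be reached. (Exact signature assumed.)
    var signature = sign.sign(encryptedKey, 'base64', 'my passphrase');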
This addresses a current shortcoming of the V8 SetNamedPropertyHandler
function.
It does not provide a way to intercept Object.defineProperty(..) calls.
As a result, these properties are not copied onto the contextified
sandbox when a new global property is added via either a function
declaration or an Object.defineProperty(global, ...) call.
Note that any function declarations or Object.defineProperty() globals
that are created asynchronously (in a setTimeout, callback, etc.) will
happen AFTER the call to copy properties, and thus not be caught.
The way to properly fix this is to add some sort of
Object::SetNamedDefinePropertyHandler() function that takes a callback,
which receives the property name and property descriptor as arguments.
Luckily, such situations are rare, and asynchronously-added globals
weren't supported by Node's VM module until 0.12 anyway. But, this
should be fixed properly in V8, and this copy function should be removed
once there is a better way.
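For illustration, the kind of case the copy function handles (a
sketch, not code from this patch):

    var vm = require('vm');

    var sandbox = vm.createContext({});
    vm.runInContext(
        'Object.defineProperty(this, "answer", { value: 42 });' +
        'function added() {}',
        sandbox);

    // The named-property interceptor never sees either addition, so
    // without the copy function, sandbox.answer and sandbox.added
    // would be missing from the sandbox object.
    console.log(sandbox.answer);        // 42
    console.log(typeof sandbox.added);  // 'function'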
Fix #6416
The list of supported HTTP methods is available in JS land now, so there
is no longer any need to pass a stringified version of the method to the
parser callback; it can look up the method name for itself.
Saves a call to v8::Eternal::Get() in the common case and a costly
v8::String::NewFromOneByte() in the uncommon case.
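Roughly, the JS-land side now works like this (binding shape and
callback name assumed for illustration):

    // The method list is exposed once by the binding...
    var methods = process.binding('http_parser').methods;
    // e.g. [ 'DELETE', 'GET', 'HEAD', 'POST', 'PUT', ... ]

    // ...so a parser callback can be handed a numeric index and map it
    // back to a name itself, instead of receiving a newly created
    // v8::String on every request.
    function onHeadersComplete(info) {
      var method = methods[info.method];
      // ...
    }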
Make the build rule depend on the build artifact (weakref.node) itself
rather than the directory it's built in. Depending on the directory
means that a build failure won't trigger a rebuild on the next
invocation because the directory's timestamp has been updated.
This is a back-port of commit 1189571 from the master branch that
hopefully fixes the following CI error:
executing: make test/gc/node_modules/weak/build/
make: *** No rule to make target `test/gc/node_modules/weak/build/'.
Command exited with non-zero: make test/gc/node_modules/weak/build/
Build step 'Execute NodeJS script' marked build as failure
When the socket closed, the client's http incoming message object
emitted an 'aborted' event if it had not yet ended.
However, it's possible, when a response is being repeatedly paused and
resumed (e.g. if piped to a slow FS write stream), that there will be a
final chunk remaining in the js-land buffer when the socket is torn
down.
When that happens, the socketCloseListener function detects that we have
not yet reached the end of the response message data, and treats this as
an abrupt abort, immediately (and forcibly) ending the incoming message
data stream, and discarding that final chunk of data.
The result is that, for example, npm will have problems because tarballs
are missing a few bytes off the end, every time.
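Sketched, the failure mode looks like this (URL and path are
placeholders):

    var http = require('http');
    var fs = require('fs');

    http.get('http://registry.example/pkg.tgz', function(res) {
      // The slow write stream repeatedly pauses the response, so a
      // final chunk can still be sitting in the js-land buffer when
      // the socket is torn down.
      res.pipe(fs.createWriteStream('/tmp/pkg.tgz'));
      res.on('end', function() {
        // Previously the close listener treated the unread data as an
        // abrupt abort: the message was forcibly ended, the final
        // chunk discarded, and the file on disk came up short.
        console.log('done');
      });
    });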
Closes GH-6402
Make the build rule depend on the build artifact (weakref.node) itself
rather than the directory it's built in. Depending on the directory
means that a build failure won't trigger a rebuild on the next
invocation because the directory's timestamp has been updated.
If a client sends a lot more pipelined requests than we can handle, then
we need to provide backpressure so that the client knows to back off.
Do this by pausing both the stream and the parser itself when the
responses are not being read by the downstream client.
Backport of 085dd30
- fixed some incomprehensible wording ("event assigned to..."?)
- removed undocumented and unnecessary process properties from example
- corrected the docs on the default for the exec setting
- described when workers are removed from cluster.workers
- described addressType, which was documented as existing, but not what
values it might have
- spell out more clearly the limitations of setupMaster
- describe disconnect in sufficient detail that it can be understood
why a child does or does not exit
- clarify which cluster functions and events are available on process or
just on the worker, as well as which are not available in children
- don't describe events as the same when they receive different
arguments
- fix misleading disconnect example: since disconnect already calls
close on all servers, doing it again in the example is a no-op, not
the "force close" it was claimed to be (see the sketch after this list)
- document the 'error' event; not catching it will kill your node
process
- describe suicide better; it is important, and a bit unintuitive
(process.exit() is not suicide?)
- use worker consistently throughout, instead of child.
- Make explicit that .disconnected is set before the disconnect event,
and that it is not allowed to send messages after calling .disconnect(),
even while waiting for a delayed disconnect event.
- Remove obsolete claim that explicit exit is required
- Describe silent: in the options for fork()
- Describe .connected as the property it is, not just as an aside in
the disconnect() method
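For example, the corrected disconnect behaviour (a sketch; the logging
is illustrative):

    var cluster = require('cluster');

    if (cluster.isMaster) {
      var worker = cluster.fork();

      // disconnect() already closes the worker's servers (and sets
      // worker.suicide); calling server.close() in the worker as well
      // is a no-op, not a "force close".
      worker.disconnect();

      worker.on('disconnect', function() {
        console.log('worker disconnected');
      });
    }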
A follow-up commit will save the domain name on the request object but
we can't call that property 'domain' because that gets intercepted by
src/node.cc and lib/domain.js to implement the node.js feature of the
same name.
To avoid confusion, rename all variables called 'domain' to 'hostname'.
Before this commit, the SIGUSR1 signal handler wasn't installed until
late in the bootstrapping process and we were prone to missing signals
sent by other processes.
This commit installs an early-boot signal handler that merely records
the fact that we received a signal. Once the debugger infrastructure
is in place, the signal is re-raised, kickstarting the debugger.
Among other things, this means that simple/test-debugger-client is
now _much_ less likely to fail.
Commit 30e5366b ("core: Use a uv_signal for debug listener") changed
SIGUSR1 handling from a signal handler to libuv's uv_signal_*()
functionality to fix a race condition (and possible hang) in the
signal handler.
While a good change in itself, it made it impossible to interrupt
long-running scripts. When a script is stuck in a busy loop, control
never returns to the event loop, which in turn means the signal
callback - and therefore the debugger - is never invoked.
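For example, no uv_signal-based watcher can interrupt a script like
this, because control never returns to the event loop:

    // A signal delivered through uv_signal would only be observed on
    // the next loop iteration - which never comes.
    while (true);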
This commit changes SIGUSR1 handling back to a normal signal handler
but one that treads _very_ carefully.
If a client sends a lot more pipelined requests than we can handle, then
we need to provide backpressure so that the client knows to back off.
Do this by pausing both the stream and the parser itself when the
responses are not being read by the downstream client.
Fix GH-6214
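In outline (a simplified sketch; the function names here are
illustrative, not the actual internals):

    // When responses are piling up unread, stop producing new
    // requests: pause the socket (no new reads) and the parser (no
    // parsing of bytes already received).
    function suspendIncoming(socket, parser) {
      socket.pause();
      parser.pause();
    }

    // Once the downstream client drains the responses, pick up where
    // we left off.
    function resumeIncoming(socket, parser) {
      parser.resume();
      socket.resume();
    }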