Do not send signal to children if they are already in debug mode.
Node.js on Windows does not register a signal handler, and thus calling
`process._debugProcess()` will throw an error.
Reviewed-By: Trevor Norris <trevnorris@gmail.com>
PR-URL: https://github.com/joyent/node/pull/8476
Currently, cluster workers can be removed from the workers list in three
different places:
- In the exit event handler for the worker process.
- In the disconnect event handler of the worker process.
- In the disconnect event handler of the cluster master.
However, handles for a given worker are cleaned up only in one of these
places: in the cluster master's disconnect event handler.
Because these events happen asynchronously, it is possible for the
workers list to become empty before even one handle has been cleaned
up. This causes the assert that ensures no handle is left once the
workers list is empty to fail.
This commit removes the worker from the cluster.workers list only when
the worker is dead _and_ disconnected, at which point we're sure that
its associated handles are cleaned up.
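A minimal sketch of the intended bookkeeping, using hypothetical
isDead()/isConnected() style checks as stand-ins for the actual
internal state tracking:

    var cluster = require('cluster');

    // Sketch only: forget a worker only once it is both dead and
    // disconnected, because only then are its handles known to have
    // been cleaned up.
    function forgetWhenGone(worker) {
      function maybeRemove() {
        if (worker.isDead() && !worker.isConnected()) {
          delete cluster.workers[worker.id];
        }
      }
      worker.on('exit', maybeRemove);
      worker.on('disconnect', maybeRemove);
    }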
Fixes #8191 and #8192.
Reviewed-By: Fedor Indutny <fedor@indutny.com>
Callbacks in node are usually asynchronous, and should never be
sometimes synchronous, and sometimes asynchronous.
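An illustrative (hypothetical) example of the rule; without the
process.nextTick() the callback would fire synchronously when the
cache is warm and asynchronously otherwise:

    var fs = require('fs');
    var cache = null;

    function readConfig(cb) {
      if (cache) {
        // Defer the cached case so the callback is always asynchronous.
        process.nextTick(function() { cb(null, cache); });
        return;
      }
      fs.readFile('config.json', 'utf8', function(err, data) {
        if (!err) cache = data;
        cb(err, data);
      });
    }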
Reviewed-by: Trevor Norris <trev.norris@gmail.com>
Between 0.11.1 and 0.11.2, the message and error events stopped
being usable via the cluster.worker object. This commit makes
them usable again. Closes #7998.
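For example, code like the following in a worker works again
(illustrative only):

    var cluster = require('cluster');

    if (cluster.isWorker) {
      // Both of these listeners stopped firing between 0.11.1 and 0.11.2.
      cluster.worker.on('message', function(msg) {
        console.log('worker got message:', msg);
      });
      cluster.worker.on('error', function(err) {
        console.error('worker channel error:', err);
      });
    }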
Signed-off-by: Fedor Indutny <fedor@indutny.com>
The 'setup' event is emitted on every call to cluster.setupMaster(), even
if no new settings are given. This is because calling cluster.setupMaster()
without arguments (or with an empty options object) results in the settings
being restored to their defaults.
Signed-off-by: Fedor Indutny <fedor@indutny.com>
Only attributes of 'cluster.settings' will be modified after the first
call, leaving all other cluster initialization alone. Each call that
includes a 'settings' argument triggers a 'setup' event to be emitted.
Instead of each call resetting all values to their defaults, use the
current settings (if any) as the defaults. This preserves the way
cluster.fork() uses setupMaster() to ensure cluster.settings has been
populated.
Update example in docs to use current node coding style and include
an example of progressive configuration.
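Such a progressive configuration might look like this (sketch; the
file name and arguments are hypothetical):

    var cluster = require('cluster');

    // First call: set the worker entry point.
    cluster.setupMaster({
      exec: 'worker.js'
    });

    // Later call: only 'args' changes; 'exec' from the first call is kept.
    cluster.setupMaster({
      args: ['--use', 'https']
    });

    cluster.fork();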
Signed-off-by: Fedor Indutny <fedor@indutny.com>
In v0.10.x, process.argv and process.execArgv would only be
evaluated and copied into cluster.settings on the first call to
cluster.setupMaster() (either directly or via cluster.fork()),
allowing them to be modified as needed before initializing the
settings.
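For example, a pattern like the following relied on that ordering
(the flag is chosen purely for illustration):

    var cluster = require('cluster');

    if (cluster.isMaster) {
      // cluster.settings is not populated until the first setupMaster()
      // or fork() call, so execArgv can still be adjusted here.
      process.execArgv.push('--throw-deprecation');
      cluster.fork();   // the worker is started with the extra flag
    }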
In 41b75ca the behaviour was changed so that these values are
initialized at the time of the first require('cluster').
Fixes #7670.
Signed-off-by: Trevor Norris <trev.norris@gmail.com>
This is a problem present in both v0.10, and v0.11, where the 'setup'
event is synchronously emitted by `cluster.setupMaster()`, a mostly
harmless anti-pattern.
Fix inadvertent v0.11 changes to the definition of suicide, particularly
the relationship between suicide state, the disconnect event, and when
exit should occur.
In v0.10, workers are not forcibly exited on disconnect; that would not
give them time to gracefully finish open client connections. Instead
they exit under normal node rules: when there is nothing left to do.
On unexpected disconnect, however, they do exit so that workers aren't
left around after the master.
Note that one test, as written, was invalid: it failed against the v0.10
cluster API, demonstrating that the behaviour it relied on was an
undocumented API change.
Fixes an issue in 0.11 where the callback does not occur if the worker
count is already zero. In 0.10 the callback occurs after the worker
count has dropped to zero, and occurs on the next tick if the worker
count is already zero.
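Taking cluster.disconnect() as an example of such a callback
(illustrative):

    var cluster = require('cluster');

    // No workers have ever been forked, so the count is already zero.
    cluster.disconnect(function() {
      // 0.10 invokes this on the next tick; after this fix 0.11 does
      // the same instead of silently dropping it.
      console.log('all workers disconnected');
    });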
Don't emit the 'disconnect' event until all workers have gone away.
Before this commit, the event was emitted when all open handles were
closed, which usually - but not always - amounts to the same thing.
Fixes #6346.
If `obj` given to `cluster._getServer` has `_setServerData` or
`_getServerData` methods, the data will be synchronized across workers
and stored in master.
Libuv now returns errors directly. Make everything in src/ and lib/
follow suit.
The changes to lib/ are not strictly necessary but they remove the need
for the abominations that are process._errno and node::SetErrno().
Empirical evidence suggests that OS-level load balancing (that is,
having multiple processes listen on a socket and have the operating
system wake up one when a connection comes in) produces skewed load
distributions on Linux, Solaris and possibly other operating systems.
The observed behavior is that a fraction of the listening processes
receive the majority of the connections. From the perspective of the
operating system, that somewhat makes sense: a task switch is expensive,
to be avoided whenever possible. That's why the operating system likes
to give preferential treatment to a few processes, because it reduces
the number of switches.
However, that rather subverts the purpose of the cluster module, which
is to distribute the load as evenly as possible. That's why this commit
adds (and defaults to) round-robin support, meaning that the master
process accepts connections and distributes them to the workers in a
round-robin fashion, effectively bypassing the operating system.
Round-robin is currently disabled on Windows due to how IOCP is wired
up. It works and you can select it manually but it probably results in
a heavy performance hit.
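The policy can be chosen explicitly via cluster.schedulingPolicy (or
the NODE_CLUSTER_SCHED_POLICY environment variable, 'rr' or 'none'),
as described in the cluster documentation:

    var cluster = require('cluster');

    // A global setting; it must be chosen before the first worker is
    // forked. SCHED_RR is the new round-robin default (except on
    // Windows); SCHED_NONE leaves load balancing to the operating system.
    cluster.schedulingPolicy = cluster.SCHED_NONE;

    if (cluster.isMaster) {
      cluster.fork();
    }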
Fixes #4435.
Implement support for debugging cluster workers. Each worker process
is assigned a new debug port in an increasing sequence.
For example, when the master process uses port 5858, worker 1 uses
port 5859, worker 2 uses port 5860, and so on.
Introduce a new command-line parameter '--debug-port=' which sets debug_port
but does not start the debugger. This option works for all node processes;
it is not specific to cluster workers.
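A sketch of the numbering scheme (hypothetical illustration, not the
actual implementation):

    // Master debugs on 5858, worker 1 on 5859, worker 2 on 5860, etc.
    var masterDebugPort = 5858;
    var workersForked = 0;

    function nextWorkerDebugArg() {
      workersForked += 1;
      return '--debug-port=' + (masterDebugPort + workersForked);
    }

    console.log(nextWorkerDebugArg());  // --debug-port=5859
    console.log(nextWorkerDebugArg());  // --debug-port=5860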
Fixes joyent/node#5318.
Clean up and DRY the cluster source code. Fix a few bugs while we're
here:
* Short-lived handles in long-lived worker processes were never
reclaimed, resulting in resource leaks.
* Handles in the master process are now closed when the last worker
that holds a reference to them quits. Previously, they were only
closed at cluster shutdown.
* The cluster object no longer exposes functions/properties that are
only valid in the 'other' process, e.g. cluster.fork() is no longer
exported in worker processes.
So much goodness and still manages to reduce the line count from 590
to 320.
Don't scan the whole string for a "NODE_CLUSTER_" substring, just check
that the string starts with the expected prefix. The linear scan was
causing a noticeable (but unsurprising) slowdown on messages with a
large .cmd string property.
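Roughly the difference (sketch; the message object name is illustrative,
the .cmd property and prefix come from the description above):

    var prefix = 'NODE_CLUSTER_';

    // Before: scans the entire, possibly very long, .cmd string.
    function isInternalBefore(message) {
      return message.cmd.indexOf(prefix) !== -1;
    }

    // After: only compares the first prefix.length characters.
    function isInternalAfter(message) {
      return message.cmd.slice(0, prefix.length) === prefix;
    }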
Profiling in clustered environments doesn't work out of the box.
By default, V8 writes the profile data of all processes to a single
v8.log.
Running that log file through a tick processor produces bogus numbers
because many events won't match up with the recorded memory mappings
and you end up with graphs where 80+% of ticks are unaccounted for.
Fixing the tick processor to deal with multi-process output is not very
useful because the processes may be running wildly disparate workloads.
That's why we fix up the command line arguments to include
a "--logfile=v8-%p.log" argument (where %p is expanded to the PID)
unless it already contains a --logfile argument.
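A sketch of that fix-up (the real code may differ in detail):

    // Give each worker its own V8 log file, unless the caller already
    // chose one, so the tick processor sees consistent data per process.
    function addLogfileArg(execArgv) {
      var hasLogfile = execArgv.some(function(arg) {
        return arg.indexOf('--logfile') === 0;
      });
      return hasLogfile ? execArgv : execArgv.concat('--logfile=v8-%p.log');
    }

    console.log(addLogfileArg(['--prof']));
    // => [ '--prof', '--logfile=v8-%p.log' ]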
Fixes #4617.
Make the 'listening' event handler in the master process see the actual port
that the worker bound to when the worker specified port 0, i.e. a random port.
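For example (illustrative):

    var cluster = require('cluster');
    var http = require('http');

    if (cluster.isMaster) {
      cluster.fork();
      cluster.on('listening', function(worker, address) {
        // address.port is now the port the kernel actually assigned,
        // not the 0 that the worker asked for.
        console.log('worker is listening on port', address.port);
      });
    } else {
      http.createServer(function(req, res) {
        res.end('ok');
      }).listen(0);
    }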
This commit reverts the following commits (in reverse chronological order):
74d076c errnoException must be done immediately
ddb02b9 net: support Server.listen(Pipe)
085a098 cluster: do not use internal server API
d138875 net: lazy listen on handler
Commit d138875 introduced a backwards incompatible change that broke the
simple/test-net-socket-timeout and simple/test-net-lazy-listen tests - it
defers listening on the target port until the `net.Server` instance has at
least one 'connection' event listener.
The other patches had to be reverted in order to revert d138875.
Fixes #3832.
This implements server.listen({ fd: <filedescriptor> }). The fd should
refer to an underlying resource that is already bound and listening, and
causes the new server to also accept connections on it.
Not supported on Windows. Raises ENOTSUP.
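Usage looks roughly like this (sketch; fd 3 stands in for a descriptor
inherited from a supervisor process):

    var net = require('net');

    var server = net.createServer(function(conn) {
      conn.end('served via inherited descriptor\n');
    });

    // The descriptor must already be bound and listening; the new server
    // simply starts accepting connections on it.
    server.listen({ fd: 3 }, function() {
      console.log('accepting connections on fd 3');
    });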
If a worker spawned a new subprocess with process.env, NODE_UNIQUE_ID
would have been part of the environment, making the new subprocess
believe it is a worker. This would result in some confusion if the
subprocess were to listen on a port, since server handle requests would
then be relayed to the worker.
This patch removes the NODE_UNIQUE_ID flag from process.env on startup,
so any subprocess spawned by a worker is a normal process with no
cluster behaviour.
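Conceptually, the worker now does something like this during startup
(sketch of the internal change):

    // Remove the cluster marker so that children spawned later with
    // process.env do not mistake themselves for cluster workers.
    if (process.env.NODE_UNIQUE_ID) {
      delete process.env.NODE_UNIQUE_ID;
    }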
Regarding discussion in #3198. Passing the worker as an argument
to an event emitted on the worker is redundant, and an unnecessary
break in consistency vs the events on the ChildProcess objects.
It was removed from 'exit', but 'listening' and others were
overlooked. This corrects that oversight.
test: fixes due to new cluster api.
- changed worker `death` to `exit`.
- corrected argument type expected by worker `exit` handler.
test: more tests of cluster.worker death
cluster: fixed arguments on worker 'exit' event
worker 'exit' event now emits arguments consistent with the
corresponding event in child_process module.
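After this change a handler looks the same as it would for a
ChildProcess (illustrative):

    var cluster = require('cluster');

    if (cluster.isMaster) {
      var worker = cluster.fork();
      worker.on('exit', function(code, signal) {
        // The worker is no longer passed as an argument; use the closure
        // (or `this`) if the worker object itself is needed.
        console.log('worker', worker.id, 'exited:', code, signal);
      });
    }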
This patch will kill the worker once it has lost its connection to the
parent. However, if the worker is committing suicide, other measures
will be used.
This patch adds a worker.disconnect() method that will stop the worker
from accepting new connections and then close the IPC channel. This
allows the worker to die gracefully. When the IPC channel has been
disconnected, a 'disconnect' event is emitted.
The patch also adds a cluster.disconnect() method, which calls
worker.disconnect() on all connected workers. Once the workers are
disconnected, it closes all server handles. This allows the cluster
itself to terminate in a graceful way.
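A graceful shutdown might then look like this (illustrative):

    var cluster = require('cluster');
    var http = require('http');

    if (cluster.isMaster) {
      var worker = cluster.fork();

      worker.on('disconnect', function() {
        console.log('worker finished serving and closed its IPC channel');
      });

      // Disconnect every worker, then the server handles are closed so
      // the whole cluster can terminate gracefully.
      setTimeout(function() {
        cluster.disconnect();
      }, 1000);
    } else {
      http.createServer(function(req, res) {
        res.end('ok');
      }).listen(8000);
    }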