This simplifies the stability index to 4 levels:
0 - deprecated
1 - experimental / feature-flagged
2 - stable
3 - locked
Domains has been downgraded to deprecated, and assert has been
downgraded to stable. Timers and Module remain locked. All
other APIs are now stable.
PR-URL: https://github.com/iojs/io.js/pull/943
Fixes: https://github.com/iojs/io.js/issues/930
Reviewed-By: Colin Ihrig <cjihrig@gmail.com>
Reviewed-By: Jeremiah Senkpiel <fishrock123@rocketmail.com>
Reviewed-By: Vladimir Kurchatkin <vladimir.kurchatkin@gmail.com>
Documentation incorrectly used bracket notation for optional parameters.
This caused inconsistencies in usage because of examples like the
following:
fs.write(fd, data[, position[, encoding]], callback)
This simply fixes all uses of bracket notation in documentation.
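For illustration, a minimal sketch of the calls the bracketed signature
above allows (the file path and data are hypothetical):

    const fs = require('fs');

    fs.open('/tmp/example.txt', 'w', (err, fd) => {
      if (err) throw err;
      // position and encoding omitted: only the required arguments are given
      fs.write(fd, 'hello ', (err) => {
        if (err) throw err;
        // all optional arguments supplied
        fs.write(fd, 'world\n', null, 'utf8', (err) => {
          if (err) throw err;
          fs.close(fd, () => {});
        });
      });
    });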
Signed-off-by: Trevor Norris <trev.norris@gmail.com>
Reviewed-by: Fedor Indutny <fedor@indutny.com>
Currently, cluster workers can be removed from the workers list in three
different places:
- In the exit event handler for the worker process.
- In the disconnect event handler of the worker process.
- In the disconnect event handler of the cluster master.
However, handles for a given worker are cleaned up only in one of these
places: in the cluster master's disconnect event handler.
Because these events happen asynchronously, it is possible for the
workers list to be empty before even one handle has been cleaned up.
When that happens, the assertion that no handles are left once the
workers list is empty fails.
This commit removes the worker from the cluster.workers list only when
the worker is dead _and_ disconnected, at which point we're sure that
its associated handles are cleaned up.
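An illustrative sketch of the new rule (not the actual lib/cluster.js
code; the _exited/_disconnected flags are hypothetical):

    const cluster = require('cluster');

    // Remove a worker from cluster.workers only after it has both exited
    // and disconnected, so its handles are already cleaned up.
    function removeIfDone(worker) {
      if (worker._exited && worker._disconnected) {
        delete cluster.workers[worker.id];
      }
    }

    cluster.on('fork', (worker) => {
      worker.on('exit', () => { worker._exited = true; removeIfDone(worker); });
      worker.on('disconnect', () => { worker._disconnected = true; removeIfDone(worker); });
    });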
Fixes #8191 and #8192.
Reviewed-By: Fedor Indutny <fedor@indutny.com>
The 'setup' event is emitted on every call to cluster.setupMaster(), even
if no new settings are given. This is because calling cluster.setupMaster()
without arguments (or with an empty options object) results in the settings
being restored to their defaults.
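A minimal sketch of observing this (the exec value is hypothetical):

    const cluster = require('cluster');

    cluster.on('setup', () => {
      console.log('setupMaster() was called; settings:', cluster.settings);
    });

    cluster.setupMaster({ exec: 'worker.js' }); // emits 'setup'
    cluster.setupMaster();                      // emits 'setup' again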
Signed-off-by: Fedor Indutny <fedor@indutny.com>
Only attributes of 'cluster.settings' will be modified after the first
call, leaving all other cluster initialization alone. Each call that
includes a 'settings' argument triggers a 'setup' event to be emitted.
Instead of each call resetting all values to their defaults, use the
current settings (if any) as the default. This preserves the way
cluster.fork() uses setupMaster() to ensure cluster.settings has been
populated.
Update example in docs to use current node coding style and include
an example of progressive configuration.
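A minimal sketch of the progressive style (the file name and arguments
are hypothetical):

    const cluster = require('cluster');

    // Each call only overrides the settings it passes; earlier settings
    // are kept as the new defaults.
    cluster.setupMaster({ exec: 'worker.js' });
    cluster.setupMaster({ args: ['--use', 'https'] }); // exec is still 'worker.js'

    cluster.fork(); // forks worker.js with ['--use', 'https']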
Signed-off-by: Fedor Indutny <fedor@indutny.com>
As discussed on the mailing list: the module will not go away but the
API will continue to receive updates as the need arises.
Link: https://groups.google.com/forum/#!topic/nodejs/uqyTcQfimAI
Message-ID: <7384b30e-b64c-4086-b78f-b5acca9842a9@googlegroups.com>
- fixed some incomprehensible wording ("event assigned to..."?)
- removed undocumented and unnecessary process properties from example
- corrected the docs on the default for the exec setting
- described when workers are removed from cluster.workers
- described addressType, which was documented as existing, but not what
values it might have
- spell out more clearly the limitations of setupMaster
- describe disconnect in sufficient detail that it can be understood why a
  child does or does not exit
- clarify which cluster functions and events are available on process or
  just on the worker, as well as which are not available in children
- don't describe events as the same when they receive different
  arguments
- fix misleading disconnect example: since disconnect already calls
  close on all servers, doing it again in the example is a no-op, not
  the "force close" it was claimed to be (see the sketch after this list)
- document the error event; not handling it will kill your node process
- describe suicide better, it is important, and a bit unintuitive
(process.exit() is not suicide?)
- use worker consistently throughout, instead of child.
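A minimal sketch of the corrected disconnect pattern (the timeout value is
arbitrary, and worker.kill() is assumed to be available):

    const cluster = require('cluster');

    if (cluster.isMaster) {
      const worker = cluster.fork();

      // disconnect() already closes the worker's servers, so calling
      // server.close() afterwards would be a no-op. To actually force the
      // worker down, kill it if it has not disconnected within the timeout.
      worker.disconnect();
      const timeout = setTimeout(() => worker.kill(), 2000);
      worker.on('disconnect', () => clearTimeout(timeout));
    }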
Empirical evidence suggests that OS-level load balancing (that is,
having multiple processes listen on a socket and have the operating
system wake up one when a connection comes in) produces skewed load
distributions on Linux, Solaris and possibly other operating systems.
The observed behavior is that a fraction of the listening processes
receive the majority of the connections. From the perspective of the
operating system, that somewhat makes sense: a task switch is expensive,
to be avoided whenever possible. That's why the operating system likes
to give preferential treatment to a few processes, because it reduces
the number of switches.
However, that rather subverts the purpose of the cluster module, which
is to distribute the load as evenly as possible. That's why this commit
adds (and defaults to) round-robin support, meaning that the master
process accepts connections and distributes them to the workers in a
round-robin fashion, effectively bypassing the operating system.
Round-robin is currently disabled on Windows due to how IOCP is wired
up. It works and you can select it manually but it probably results in
a heavy performance hit.
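A minimal sketch of selecting the policy explicitly (the port number is
arbitrary):

    const cluster = require('cluster');
    const http = require('http');

    // SCHED_RR selects the round-robin distribution described above;
    // SCHED_NONE leaves the balancing to the operating system. The policy
    // must be set before the first worker is forked.
    cluster.schedulingPolicy = cluster.SCHED_RR;

    if (cluster.isMaster) {
      cluster.fork();
    } else {
      http.createServer((req, res) => res.end('ok\n')).listen(8000);
    }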
Fixes #4435.
The first example in cluster.markdown requires the NODE_DEBUG environment
variable to be set in order to show the debug message.
Also fix the message, because it was slightly different from the actual
output.
This implements server.listen({ fd: <filedescriptor> }). The fd should
refer to an underlying resource that is already bound and listening; the
new server will then also accept connections on it.
Not supported on Windows. Raises ENOTSUP.
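A minimal usage sketch, assuming fd 3 was inherited from a parent process
and already refers to a bound, listening socket (the descriptor number is
hypothetical):

    const net = require('net');

    const server = net.createServer((socket) => {
      socket.end('handled via inherited fd\n');
    });

    server.listen({ fd: 3 }, () => {
      console.log('also accepting connections on the inherited descriptor');
    });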
Regarding discussion in #3198. Passing the worker as an argument
to an event emitted on the worker is redundant, and an unnecessary
break in consistency vs the events on the ChildProcess objects.
It was removed from 'exit', but 'listening' and others were
overlooked. This corrects that oversight.
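A minimal sketch of the resulting signatures:

    const cluster = require('cluster');

    if (cluster.isMaster) {
      const worker = cluster.fork();

      // Event emitted on the worker object: no redundant worker argument.
      worker.on('listening', (address) => {
        console.log('listening on port', address.port);
      });

      // Event emitted on the cluster module: the worker is still passed first.
      cluster.on('listening', (worker, address) => {
        console.log('worker %d listening on port %d', worker.id, address.port);
      });
    }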
test: fixes due to new cluster api.
- changed worker `death` to `exit`.
- corrected argument type expected by worker `exit` handler.
test: more tests of cluster.worker death
cluster: fixed arguments on worker 'exit' event
The worker 'exit' event now emits arguments consistent with the
corresponding event in the child_process module.
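A minimal sketch of the resulting handler signature, assuming `worker` is a
forked cluster worker:

    // The handler receives (code, signal), matching child_process.
    worker.on('exit', (code, signal) => {
      if (signal) {
        console.log('worker was killed by signal', signal);
      } else {
        console.log('worker exited with code', code);
      }
    });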
This patch adds a worker.disconnect() method that will stop the worker from
accepting new connections and then stop the IPC channel. This allows the
worker to die gracefully. When the IPC channel has been disconnected, a
'disconnect' event is emitted.
The patch also adds a cluster.disconnect() method, which will call
worker.disconnect() on all connected workers. When the workers are
disconnected it will then close all server handles. This allows the cluster
itself to terminate gracefully.
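A minimal sketch of a graceful shutdown using these methods (the port and
delay are arbitrary):

    const cluster = require('cluster');
    const http = require('http');

    if (cluster.isMaster) {
      const worker = cluster.fork();
      worker.on('disconnect', () => {
        console.log('worker IPC channel closed');
      });

      // Stop accepting new connections in every worker, then close the
      // IPC channels so the processes can exit on their own.
      setTimeout(() => cluster.disconnect(), 5000);
    } else {
      http.createServer((req, res) => res.end('ok\n')).listen(8000);
    }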