A spawn stdio option can be a 'stream', but the following code
fails with "Incorrect value for stdio stream: [object Object]",
despite passing a stream. The problem is that the check isn't really
for a stream, it's for an object with a numeric `.fd` property,
and streams do not have an fd until their async 'open' event
has occurred. This is reasonable, but was not documented.
var fs = require('fs');
var child_process = require('child_process');

child_process.spawn('date', [], {stdio: [
  'ignore',
  fs.createWriteStream('out.txt', {flags: 'a'}),
  'ignore'
]});
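A workaround (a minimal sketch, not part of the original report) is to defer
the spawn until the stream has emitted 'open' and therefore has a numeric fd:

var fs = require('fs');
var child_process = require('child_process');

var out = fs.createWriteStream('out.txt', {flags: 'a'});
out.on('open', function() {
  // By now out.fd is set, so the stdio check accepts the stream.
  child_process.spawn('date', [], {stdio: ['ignore', out, 'ignore']});
});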
The RR cluster scheduler replaces the normal StreamWrap handle. Because
of this, the AsyncListener methods were not in place when domains were
in use.
The issue was resolved in 828f145 by reverting the change that made
domains use AsyncListeners.
Signed-off-by: Trevor Norris <trev.norris@gmail.com>
Before, when an AsyncListener object was created and the "create"
callback returned a value, it was necessary to construct a new Object
with the same callbacks plus a place for the new storage value.
Now, instead, a separate storage array is kept on the context which is
used for any return value of the "create" callback. This significantly
reduces the number of Objects that need to be created.
Also added a flags property to the context to quickly check if a
specific callback was available either on the context or on the
AsyncListener instance itself.
A few other minor readability changes were included that were difficult
to separate into their own commits.
This has not been optimized yet.
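A rough sketch of the idea (hypothetical names, not the actual core
implementation): the context keeps a parallel storage array for "create"
return values and a bit-flags field, instead of cloning each listener object:

var HAS_CREATE = 1 << 0;
var HAS_BEFORE = 1 << 1;
var HAS_AFTER  = 1 << 2;

function attachListeners(context, listeners) {
  context._queue   = listeners;                    // the AsyncListener objects
  context._storage = new Array(listeners.length);  // per-listener "create" values
  context._flags   = 0;                            // which callbacks exist at all

  for (var i = 0; i < listeners.length; i++) {
    var listener = listeners[i];
    if (listener.create) {
      // Keep the return value in the parallel array instead of cloning
      // the listener object just to hold it.
      context._storage[i] = listener.create();
      context._flags |= HAS_CREATE;
    }
    if (listener.before) context._flags |= HAS_BEFORE;
    if (listener.after)  context._flags |= HAS_AFTER;
  }
}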
This is a slightly modified revert of bc39bdd.
Getting domains to use AsyncListeners became too much of a challenge
with many edge cases. While this is still a goal, it will have to be
deferred for now until more test coverage can be provided.
Forcibly disable -Werror; the old { 'werror': '' } hack in node.gyp
no longer works with newer versions of V8.
We support a wide range of compilers; it's simply not feasible to
squelch all warnings, never mind that the libraries in deps/ are
not under our control.
Fixes #6817.
If a write is above the highWaterMark and _write still manages to send
it fully synchronously, _writableState.length will be adjusted back down
to 0 synchronously while the write returns false, but 'drain' will
not be emitted until process.nextTick.
If another small write, below the highWaterMark, is issued before
process.nextTick happens, _writableState.needDrain will be reset to false,
and the 'drain' event will never be emitted.
So we should check needDrain before setting it, which prevents it
from improperly resetting to false.
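A minimal sketch of the scenario (assumed stream setup, not the actual
regression test):

var stream = require('stream');

// A writable whose _write completes synchronously.
var w = new stream.Writable({highWaterMark: 4});
w._write = function(chunk, encoding, cb) {
  cb();  // done synchronously, so _writableState.length drops back to 0
};

// Large write: returns false and schedules 'drain' for process.nextTick.
w.write(new Buffer(8));

// Small write before that tick: without the fix this reset needDrain to
// false, so the pending 'drain' was never emitted.
w.write(new Buffer(1));

w.on('drain', function() {
  console.log('drain emitted');  // expected once needDrain is preserved
});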
Instead of checking the uid on the array index of the queue, the
object property "uid" was checked on the queue itself. Because this
will always evaluate to "undefined", the same listener could be added
multiple times to the same context.
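Roughly the difference, as a hypothetical illustration of the pattern
(not the actual core code):

function addListener(queue, listener) {
  for (var i = 0; i < queue.length; i++) {
    // The buggy form checked `queue.uid`, which is always undefined on an
    // Array, so the duplicate check never matched. The fix compares the
    // uid of the queued listener at each index.
    if (queue[i].uid === listener.uid)
      return;
  }
  queue.push(listener);
}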
There was a flaw in the old API that has been fixed. The asyncListener
callback is now the "create" property of the callback object, and is
optional.
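A minimal sketch of the new shape; the registration call and its exact
signature are assumptions here, only the callback-object layout comes from
this change:

// Sketch only; process.addAsyncListener and its signature are assumed.
var listener = process.addAsyncListener({
  create: function() {
    // Optional. A returned value becomes the storage handed to the
    // other callbacks for that async context.
    return { started: Date.now() };
  },
  before: function(context, storage) { /* before the async callback runs */ },
  after: function(context, storage) { /* after the async callback runs */ },
  error: function(storage, err) { return false; /* true means handled */ }
});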
The fact that the "exit" event passes the exit code as an argument
was omitted from the documentation. This adds the explanation and
augments the example code to show that.
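For example (assuming this refers to the process 'exit' event; the same
pattern applies to other 'exit' events that pass a code):

// The listener receives the exit code as its first argument.
process.on('exit', function(code) {
  console.log('about to exit with code', code);
});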
Master was disconnecting its workers as soon as they both started up.
Meanwhile, the workers were trying to listen. It's a race: sometimes the
disconnect would happen between when the worker gets the response message
and when it acks that message with a 'listening'. This worked OK after v0.11
introduced a behaviour where disconnect would always exit the worker,
but once that backwards-incompatible behaviour is removed, the worker
lives long enough to try to respond to the master, and child_process
errors at the attempt to send from a disconnected child.
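A minimal sketch of the fixed test shape (assumed details, not the actual
test): the master waits until every worker has emitted 'listening' before
disconnecting, so the disconnect can no longer race the workers' acks:

var cluster = require('cluster');
var net = require('net');

if (cluster.isMaster) {
  var listening = 0;
  cluster.fork();
  cluster.fork();
  cluster.on('listening', function() {
    // Only disconnect once both workers have acked with 'listening'.
    if (++listening === 2)
      cluster.disconnect();
  });
} else {
  net.createServer().listen(0);
}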
This is a problem present in both v0.10 and v0.11, where the 'setup'
event is synchronously emitted by `cluster.setupMaster()`, a mostly
harmless anti-pattern.
Fix inadvertent v0.11 changes to the definition of suicide, particularly
the relationship between suicide state, the disconnect event, and when
exit should occur.
In v0.10, workers don't forcibly exit on disconnect; that wouldn't give
them time to gracefully finish open client connections. Instead they exit
under normal node rules - when there is nothing left to do. But on an
unexpected disconnect they do exit, so the workers aren't left around
after the master.
Note that a test, as written, was invalid: it failed against the v0.10
cluster API, demonstrating that the behaviour it relied on was an
undocumented API change.
Fixes an issue in 0.11 where the callback doesn't occur if the worker
count is currently zero. In 0.10 the callback occurs after the worker
count reaches zero, and occurs on the next tick if the worker count is
already zero.
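A minimal sketch of the case being fixed, assuming this refers to the
callback passed to cluster.disconnect():

var cluster = require('cluster');

// No workers have been forked, so the worker count is already zero.
// The callback should still fire (on the next tick) rather than never.
cluster.disconnect(function() {
  console.log('disconnected with zero workers');
});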