A socket may stop being `readable`, but http should not rely on this
property or assume that it means no more data will ever arrive from
it. In fact, data may arrive on the next tick and, since
`this.push(null)` was already called, it will result in an error like
this:
    Error: stream.push() after EOF
        at readableAddChunk (_stream_readable.js:143:15)
        at IncomingMessage.Readable.push (_stream_readable.js:123:10)
        at HTTPParser.parserOnBody (_http_common.js:132:22)
        at Socket.socketOnData (_http_client.js:277:20)
        at Socket.EventEmitter.emit (events.js:101:17)
        at Socket.Readable.read (_stream_readable.js:367:10)
        at Socket.socketCloseListener (_http_client.js:196:10)
        at Socket.EventEmitter.emit (events.js:123:20)
        at TCP.close (net.js:479:12)
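For illustration, a minimal sketch (not the http internals) of how
pushing after EOF triggers this error on any streams2 Readable:

    var Readable = require('stream').Readable;

    var r = new Readable();
    r._read = function() {};

    r.push('some data');
    r.push(null);        // EOF: signals that no more data will come
    r.push('late data'); // throws: Error: stream.push() after EOF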
fix #6784
The test was calling server.close() after the write on the socket
had completed. However, the fact that the write had completed was
not a valid indication that the server had received the data.
This would result in a premature closing of the server and
an ECONNRESET event on the client.
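A minimal sketch of the corrected pattern, assuming a plain net
server/client pair: close the server only once it has actually
received the data, not merely when the client write completes.

    var net = require('net');

    var server = net.createServer(function(conn) {
      conn.on('data', function() {
        // The server has actually received the data; now it is
        // safe to close without causing ECONNRESET on the client.
        conn.end();
        server.close();
      });
    });

    server.listen(0, function() {
      var client = net.connect(server.address().port, function() {
        client.write('hello');
      });
    });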
When creating a TLSSocket on top of a regular socket that has already
received some data, `_tls_wrap.js` should try to write all of that
data to the internal `SSL*` instance.
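A rough sketch of the scenario, assuming a tunnel socket (the proxy
host and port here are hypothetical) that may already hold received
bytes by the time the TLS layer is attached:

    var net = require('net');
    var tls = require('tls');

    // Bytes may arrive on `raw` before the TLS layer is attached,
    // e.g. the tail of an HTTP CONNECT response.
    var raw = net.connect(8080, 'proxy.example.com');

    var secure = tls.connect({ socket: raw }, function() {
      // Any data already buffered on `raw` must have been written
      // into the internal SSL* instance for the handshake to work.
    });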
fix #6940
Make the HMAC digest method configurable. Update crypto.pbkdf2() and
crypto.pbkdf2Sync() to take an extra, optional digest argument.
Before this commit, SHA-1 (admittedly the most common method) was used
exclusively.
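For example (the password, salt, and iteration values here are
illustrative):

    var crypto = require('crypto');

    // New optional digest argument; omit it to keep the old
    // SHA-1 behaviour.
    crypto.pbkdf2('secret', 'salt', 4096, 32, 'sha256',
                  function(err, key) {
      if (err) throw err;
      // `key` is the derived 32-byte key
    });

    var key = crypto.pbkdf2Sync('secret', 'salt', 4096, 32, 'sha256');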
Fixes #6553.
After one of the OpenSSL updates, we stopped accepting PEM private keys
and certificates that don't end with a newline (`\n`) character.
Handle this regression in `crypto.js` to cause less trouble for our users.
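A user-side workaround sketch (unnecessary once this fix lands),
assuming a local key.pem saved without a trailing newline:

    var fs = require('fs');

    var pem = fs.readFileSync('key.pem', 'utf8');
    if (pem[pem.length - 1] !== '\n')
      pem += '\n'; // restore the trailing newline OpenSSL expects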
fix #6892
It was possible for the same AL instance to be run twice if it was both
attached to the currentContext and then added again to the new asyncQueue
generated for the new stack.
Signed-off-by: Timothy J Fontaine <tjfontaine@gmail.com>
The ability to add/remove an AsyncListener to an object after its
creation was an artifact of trying to get AL working with the domain
module. That is no longer necessary, and other features that would be
affected by this functionality are going to be implemented, so the code
will be removed for now to simplify the implementation process.
In the future this code will likely be reintroduced, but after some
other more important matters have been addressed.
None of this functionality was documented, as it was meant specifically
for domain-specific implementation workarounds.
Signed-off-by: Timothy J Fontaine <tjfontaine@gmail.com>
This test was originally intended to guard against regressions for
commit 16b59cbc74.
As such, it only needs to ensure that process exit has not been held up
by the date cache timer, which would fire on the next second.
We now wait to connect to the debuggee until we know that
its error stream has data, to ensure that the output message
"connecting..... ok" appears after "Debugger listening on port xyz".
I also increased the test timeout to let the more complex
tests finish in time on Windows.
This change fixes the following unit tests on Windows:
test-debugger-repl.js
test-debugger-repl-term.js
test-debugger-repl-utf8.js
test-debugger-repl-restart.js
Before this commit, verification exceptions had err.message set to the
OpenSSL error code (e.g. 'UNABLE_TO_VERIFY_LEAF_SIGNATURE').
This commit moves the error code to err.code and replaces err.message
with a human-readable error. Example:
    // before
    {
      message: 'UNABLE_TO_VERIFY_LEAF_SIGNATURE'
    }

    // after
    {
      code: 'UNABLE_TO_VERIFY_LEAF_SIGNATURE',
      message: 'unable to verify the first certificate'
    }
UNABLE_TO_VERIFY_LEAF_SIGNATURE is a good example of why you want this:
the error code suggests that it's the last certificate that fails to
validate while it's actually the first certificate in the chain.
Going by the number of mailing list posts and StackOverflow questions,
it's a source of confusion to many people.
domain.create().exit() should not clear the domain stack if the domain
instance does not exist within the stack.
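A minimal sketch of the scenario:

    var domain = require('domain');

    var d1 = domain.create();
    d1.enter();              // d1 is now on the domain stack

    var d2 = domain.create();
    d2.exit();               // d2 was never entered, so this must not
                             // clear the stack; d1 stays active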
Signed-off-by: Trevor Norris <trev.norris@gmail.com>
The RR cluster scheduler replaces the normal StreamWrap handle. Because
of this, the AsyncListener methods were not in place when domains were
in use.
The issue was resolved in 828f145 by reverting having domains use
AsyncListeners.
Signed-off-by: Trevor Norris <trev.norris@gmail.com>
If a write is above the highWaterMark and _write still manages to
send it fully and synchronously, _writableState.length will be adjusted
down to 0 synchronously, with the write returning false, but 'drain' will
not be emitted until process.nextTick.
If another small write, below the highWaterMark, is issued before
process.nextTick happens, _writableState.needDrain will be reset to false
and the drain event will never be fired.
So we should check needDrain before setting it, which prevents it
from improperly resetting to false.
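A minimal sketch of the failing sequence, assuming a synchronous
_write implementation:

    var Writable = require('stream').Writable;

    var w = new Writable({ highWaterMark: 16 });
    w._write = function(chunk, encoding, cb) {
      cb(); // completes synchronously
    };

    w.on('drain', function() {
      console.log('drain'); // never fired before this fix
    });

    // Above highWaterMark: returns false and sets needDrain, but
    // length drops back to 0 before process.nextTick.
    w.write(new Buffer(64));

    // Small write before the tick: used to reset needDrain to
    // false, so 'drain' was never emitted.
    w.write(new Buffer(1));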
Instead of checking the uid at the array index of the queue, the
object property "uid" was checked on the queue itself. Because this
always evaluates to "undefined", the same listener could be added
multiple times to the same context.
There was a flaw in the old API that has been fixed: the
asyncListener callback is now the "create" property of the callback
object, and it is optional.
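A rough sketch of the revised shape (the API was experimental at the
time, so the exact details may differ):

    process.addAsyncListener({
      // "create" replaces the old standalone asyncListener callback
      // and is optional; its return value becomes the per-event
      // storage passed to the other callbacks.
      create: function() { return {}; },
      before: function(context, storage) {},
      after: function(context, storage) {},
      error: function(storage, err) {}
    });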
The master was disconnecting its workers as soon as they both started
up. Meanwhile, the workers were trying to listen. It's a race:
sometimes the disconnect would happen between when a worker gets the
response message and when it acks that message with a 'listening'. This
worked OK after v0.11 introduced a behaviour where disconnect would
always exit the worker, but once that backwards-incompatible behaviour
is removed, the worker lives long enough to try to respond to the
master, and child_process errors at the attempt to send from a
disconnected child.
This is a problem present in both v0.10 and v0.11, where the 'setup'
event is synchronously emitted by `cluster.setupMaster()`, a mostly
harmless anti-pattern.
Fix inadvertent v0.11 changes to the definition of suicide, particularly
the relationship between suicide state, the disconnect event, and when
exit should occur.
In v0.10, workers don't forcibly exit on disconnect; that wouldn't give
them time to gracefully finish open client connections. They exit
under normal node rules - when there is nothing left to do. But on
unexpected disconnect they do exit, so the workers aren't left around
after the master.
Note that one test, as written, was invalid: it failed against the v0.10
cluster API, demonstrating that this was an undocumented API change.
Fixes an issue in 0.11 where the callback doesn't occur if the worker
count is currently zero. In 0.10, the callback occurs after the worker
count reaches zero, and it occurs on the next tick if the worker count
is currently zero.
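For example, the callback should fire even when no workers exist:

    var cluster = require('cluster');

    // No workers have been forked; the callback should still be
    // invoked (on the next tick).
    cluster.disconnect(function() {
      console.log('all workers disconnected');
    });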
QueryString.stringify() allowed a fourth argument that was used as a
conditional in the return value, but it was undocumented, not used by
core, and always false/undefined. So the argument and conditional
have been removed.
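A quick sketch of the remaining three-argument form:

    var qs = require('querystring');

    // obj, sep, eq - the undocumented fourth argument is gone
    qs.stringify({ a: 1, b: 2 }, '&', '=');  // => 'a=1&b=2'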
Signed-off-by: Trevor Norris <trev.norris@gmail.com>