`tls_wrap.cc` was crashing in an `Unwrap` call when a non-`SecureContext`
object was passed to it. Check that the passed object is a `SecureContext`
instance before unwrapping it.
fix #7008
The test is no longer valid for the original scenario.
It now fails intermittently because of two other issues:
1. Since the client is only processing one readable event, the
client request is not enough to keep the process alive and the
process can exit before the desired events have been raised.
2. Reading just 1 byte is not enough to guarantee that the parser
will eventually consume all the data and raise the desired
parse error. I tried postponing the server.close() to address issue 1
above, but then the test just hangs sometimes.
Even if stdio streams are opened as file streams, we should never try
to close them. This can be accomplished by passing `autoClose: false`
in the options when they are created.
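A minimal sketch of the idea, assuming the stream wraps an inherited
stdio descriptor (the fd number here is just stdin's):

    var fs = require('fs');

    // Open a readable stream over the stdin file descriptor, but keep
    // the underlying fd open when the stream ends or is destroyed.
    var stdinStream = fs.createReadStream(null, { fd: 0, autoClose: false });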
It's saner to check exit codes or signals to determine whether the process
actually aborted. On OSX and Linux the exit code is 134; on SunOS the
SIGABRT signal is propagated.
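A hedged sketch of such a check in a test; the `-e` payload is only an
illustration:

    var spawn = require('child_process').spawn;

    var child = spawn(process.execPath, ['-e', 'process.abort()']);
    child.on('exit', function(code, signal) {
      // OSX/Linux: 128 + SIGABRT = 134; SunOS reports the signal itself.
      var aborted = code === 134 || signal === 'SIGABRT';
      console.log('aborted:', aborted);
    });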
Right now no default ciphers are used in, e.g., https.get, meaning that
weak export ciphers like TLS_RSA_EXPORT_WITH_DES40_CBC_SHA are
accepted.
To reproduce:
node -e "require('https').get({hostname: 'www.howsmyssl.com', \
path: '/a/check'}, function(res) {res.on('data', \
function(d) {process.stdout.write(d)})})"
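A sketch of the intended behavior; the cipher list below is illustrative,
not the actual default:

    var https = require('https');

    https.get({
      hostname: 'www.howsmyssl.com',
      path: '/a/check',
      // Explicitly refuse weak/export ciphers; passed through to tls.connect().
      ciphers: 'HIGH:!aNULL:!eNULL:!EXPORT:!DES'
    }, function(res) {
      res.on('data', function(d) { process.stdout.write(d); });
    });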
The test was not waiting for all the worker-created sockets
to be listening before calling cluster.disconnect().
As a result, the channels with the workers could get closed
before all the socket handles had been passed to them, leading
to various errors.
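A sketch of the fixed synchronization, with hypothetical worker, socket,
and port numbers:

    var cluster = require('cluster');
    var net = require('net');

    var numWorkers = 2;        // hypothetical test parameters
    var socketsPerWorker = 2;

    if (cluster.isMaster) {
      for (var i = 0; i < numWorkers; i++) cluster.fork();
      var listening = 0;
      // 'listening' fires once per worker listen() call; only disconnect
      // after every socket handle has been delivered and is listening.
      cluster.on('listening', function() {
        if (++listening === numWorkers * socketsPerWorker)
          cluster.disconnect();
      });
    } else {
      for (var i = 0; i < socketsPerWorker; i++)
        net.createServer().listen(8000 + i);
    }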
A socket may stop being `readable`, but http should not rely on that
property or assume it means that no data will ever arrive from it. In
fact, data may arrive on the next tick and, since `this.push(null)` was
already called, it will result in an error like this:
    Error: stream.push() after EOF
        at readableAddChunk (_stream_readable.js:143:15)
        at IncomingMessage.Readable.push (_stream_readable.js:123:10)
        at HTTPParser.parserOnBody (_http_common.js:132:22)
        at Socket.socketOnData (_http_client.js:277:20)
        at Socket.EventEmitter.emit (events.js:101:17)
        at Socket.Readable.read (_stream_readable.js:367:10)
        at Socket.socketCloseListener (_http_client.js:196:10)
        at Socket.EventEmitter.emit (events.js:123:20)
        at TCP.close (net.js:479:12)
fix #6784
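A minimal standalone reproduction of that error class (on newer Node
versions it surfaces as an 'error' event rather than a synchronous throw):

    var Readable = require('stream').Readable;

    var r = new Readable();
    r._read = function() {};

    r.push(null);        // EOF has been signaled...
    r.push('late data'); // ...so a late chunk triggers: stream.push() after EOF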
The test was calling server.close() after a write on the socket had
completed. However, the fact that the write had completed was not a
valid indication that the server had received the data. This would
result in a premature closing of the server and an ECONNRESET event
on the client.
When creating a TLSSocket on top of a regular socket that already
contains some received data, `_tls_wrap.js` should try to write all that
data to the internal `SSL*` instance.
fix #6940
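A sketch of the scenario:

    var net = require('net');
    var tls = require('tls');

    var plain = net.connect(443, 'example.org');
    // By the time the TLSSocket is constructed, `plain` may already have
    // buffered incoming data; _tls_wrap.js must feed that data into the
    // internal SSL* instance instead of dropping it.
    var secure = new tls.TLSSocket(plain, { isServer: false });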
Make the HMAC digest method configurable. Update crypto.pbkdf2() and
crypto.pbkdf2Sync() to take an extra, optional digest argument.
Before this commit, SHA-1 (admittedly the most common method) was used
exclusively.
Fixes #6553.
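For example:

    var crypto = require('crypto');

    // New optional digest argument; omitting it still falls back to SHA-1.
    crypto.pbkdf2('secret', 'salt', 4096, 64, 'sha512', function(err, key) {
      if (err) throw err;
      console.log(key.toString('hex'));
    });

    // The synchronous variant accepts the same argument:
    var key = crypto.pbkdf2Sync('secret', 'salt', 4096, 64, 'sha256');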
After one of the OpenSSL updates we stopped accepting PEM private keys
and certificates that don't end with a newline (`\n`) character. Handle
this regression in `crypto.js` to spare our users the trouble.
fix #6892
It was possible for the same AL instance to run twice if it was both
attached to the currentContext and then added again to the new
asyncQueue generated for the new stack.
Signed-off-by: Timothy J Fontaine <tjfontaine@gmail.com>
The ability to add/remove an AsyncListener to an object after its
creation was an artifact of trying to get AL working with the domain
module. That is no longer necessary, and other features that would be
affected by this functionality are going to be implemented, so the code
will be removed for now to simplify the implementation process.
In the future this code will likely be reintroduced, but after some
other more important matters have been addressed.
None of this functionality was documented, as it was meant specifically
for domain-specific implementation workarounds.
Signed-off-by: Timothy J Fontaine <tjfontaine@gmail.com>
This test was originally intended to guard against regressions for
commit 16b59cbc74.
As such, it only needs to ensure that process exit has not been held up
by the date cache timer, which would fire on the next second.
We now wait to connect to the debuggee until we know that
its error stream has data, to ensure that the output message
"connecting..... ok" appears after "Debugger listening on port xyz"
I also increased the test timeout to let the more complex tests finish
in time on Windows.
This change fixes the following unit tests on Windows:
test-debugger-repl.js
test-debugger-repl-term.js
test-debugger-repl-utf8.js
test-debugger-repl-restart.js
Before this commit, verification exceptions had err.message set to the
OpenSSL error code (e.g. 'UNABLE_TO_VERIFY_LEAF_SIGNATURE').
This commit moves the error code to err.code and replaces err.message
with a human-readable error. Example:
    // before
    {
      message: 'UNABLE_TO_VERIFY_LEAF_SIGNATURE'
    }

    // after
    {
      code: 'UNABLE_TO_VERIFY_LEAF_SIGNATURE',
      message: 'unable to verify the first certificate'
    }
UNABLE_TO_VERIFY_LEAF_SIGNATURE is a good example of why you want this:
the error code suggests that it's the last certificate that fails to
validate while it's actually the first certificate in the chain.
Going by the number of mailing list posts and StackOverflow questions,
it's a source of confusion to many people.
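With the code moved to `err.code`, callers can branch on the stable
identifier and log the readable message, e.g.:

    var tls = require('tls');

    var socket = tls.connect({ host: 'example.org', port: 443 }, function() {
      socket.end();
    });
    socket.on('error', function(err) {
      if (err.code === 'UNABLE_TO_VERIFY_LEAF_SIGNATURE') {
        // Human-readable now: 'unable to verify the first certificate'
        console.error(err.message);
      }
    });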
domain.create().exit() should not clear the domain stack if the domain
instance does not exist within the stack.
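A sketch of the guarded behavior:

    var domain = require('domain');

    var d1 = domain.create();
    var d2 = domain.create();

    d1.enter();                         // stack is now [d1]
    d2.exit();                          // d2 was never entered: must be a no-op
    console.log(process.domain === d1); // true; the stack was not cleared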
Signed-off-by: Trevor Norris <trev.norris@gmail.com>
The RR cluster scheduler replaces the normal StreamWrap handle. Because
of this, the AsyncListener methods were not in place when domains were
in use.
The issue was resolved in 828f145 by reverting having domains use
AsyncListeners.
Signed-off-by: Trevor Norris <trev.norris@gmail.com>
If a write is above the highWaterMark but _write still manages to send
it fully and synchronously, _writableState.length will be adjusted down
to 0 synchronously while the write returns false, but 'drain' will not
be emitted until process.nextTick.
If another small write, below the highWaterMark, is issued before
process.nextTick happens, _writableState.needDrain will be reset to
false, and the drain event will never be fired.
So we should check needDrain before setting it, which prevents it from
being improperly reset to false.
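A minimal sketch of the race:

    var Writable = require('stream').Writable;

    var w = new Writable({ highWaterMark: 4 });
    w._write = function(chunk, enc, cb) { cb(); };  // completes synchronously

    // Above the highWaterMark: write() returns false, needDrain is set,
    // and 'drain' is scheduled for process.nextTick.
    if (!w.write(new Buffer(8))) {
      w.once('drain', function() { console.log('drained'); });
    }

    // A small write before that tick must not reset needDrain to false,
    // otherwise the 'drain' above would never fire.
    w.write(new Buffer(1));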
Instead of checking the uid on each array entry of the queue, the
object property "uid" was checked on the queue itself. Because this
always evaluates to "undefined", the same listener could be added
multiple times to the same context.
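A sketch of the bug, not the exact source:

    // Buggy: `queue.uid` reads a property off the array itself and is
    // always undefined, so the duplicate check never matches.
    function hasListenerBuggy(queue, uid) {
      return queue.uid === uid;
    }

    // Fixed: compare against the uid of each queued listener.
    function hasListener(queue, uid) {
      for (var i = 0; i < queue.length; i++) {
        if (queue[i].uid === uid) return true;
      }
      return false;
    }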