Before sending a socket from a cluster master to a worker,
we would call listen in UV but not handle the error.
I made createServerHandle call listen on Windows so we get a chance
to handle any bind/listen errors early.
This fix is 100% Windows-specific.
It fixes test-cluster-bind-twice and
test-cluster-shared-handle-bind-error on Windows.
This implements the user-facing APIs that let one run a child process
and block until it exits.
Logic shared with the async counterpart of each function was refactored
to enable code reuse.
Docs and tests are included.
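For illustration, assuming the synchronous counterparts are exposed as
`child_process.execSync()` and `child_process.spawnSync()` (the names are
not spelled out above), usage could look like:
var cp = require('child_process');
// Blocks until the child exits; returns the child's stdout.
var out = cp.execSync('echo hello');
console.log(out.toString());
// spawnSync blocks as well and returns an object describing the result.
var result = cp.spawnSync('node', ['-e', 'process.exit(2)']);
console.log(result.status); // 2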
Don't use argument as callback if it's not a valid callback function.
Throw a valid exception instead explaining the issue.
Adds to #7070 ("DNS — Throw meaningful error(s)").
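For example, a call like the following now fails fast instead of
misusing the argument (hypothetical snippet, not taken from the patch):
var dns = require('dns');
try {
  dns.resolve4('nodejs.org', 'not a function'); // callback is not a function
} catch (err) {
  // a meaningful error explaining that the callback is invalid
  console.error(err.message);
}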
Before, `new String('foo')` would be inspected as `"{}"`, which
is simply not very helpful. Now, a more meaningful
`"[String: 'foo']"` result will be returned from `util.inspect()`.
Boxed String, Boolean, and Number types are all supported.
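For example:
var util = require('util');
console.log(util.inspect(new String('foo')));  // [String: 'foo']
// Number and Boolean follow the same pattern:
console.log(util.inspect(new Number(42)));     // [Number: 42]
console.log(util.inspect(new Boolean(true)));  // [Boolean: true]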
Closes #7047
The AsyncListener API has been moved into the "tracing" module in order
to keep the process object free from unnecessary clutter.
Signed-off-by: Timothy J Fontaine <tjfontaine@gmail.com>
Add a new 'tracing' module with a v8 property that lets the user
register listeners for gc events. The listeners are invoked after
every garbage collection cycle with 'before' and 'after' statistics.
Useful for monitoring tools that want to keep track of memory usage.
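A minimal sketch of how a listener might be registered, assuming the
event is emitted as 'gc' on the v8 property (the event name is an
assumption, not stated above):
var tracing = require('tracing');
tracing.v8.on('gc', function(before, after) {
  // 'before' and 'after' are statistics captured around the GC cycle
  console.log('gc stats before:', before);
  console.log('gc stats after:', after);
});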
Even if stdio streams are opened as file streams, we should never try
to close them. This is accomplished by passing `autoClose: false`
in the options when they are created.
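For instance, wrapping an inherited stdio file descriptor without ever
letting the stream close it (illustrative snippet, not from the patch):
var fs = require('fs');
// fd 0 (stdin) stays open even after the stream ends or errors.
var stdinStream = fs.createReadStream(null, { fd: 0, autoClose: false });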
This was originally introduced in 6034701 to prevent the closing
brace from being pushed onto the next line if an object is longer than
the max width. However, that functionality was removed in d164989,
but the supplementary variables (and operations) were left behind.
Right now no default ciphers are used in, e.g., https.get, meaning that
weak export ciphers like TLS_RSA_EXPORT_WITH_DES40_CBC_SHA are
accepted.
To reproduce:
node -e "require('https').get({hostname: 'www.howsmyssl.com', \
path: '/a/check'}, function(res) {res.on('data', \
function(d) {process.stdout.write(d)})})"
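Until defaults are in place, a caller can pin a stronger cipher list
explicitly via the `ciphers` option (the cipher string below is only an
example):
var https = require('https');
https.get({
  hostname: 'www.howsmyssl.com',
  path: '/a/check',
  // explicitly exclude export-grade and anonymous ciphers
  ciphers: 'HIGH:!EXPORT:!aNULL:!eNULL'
}, function(res) {
  res.on('data', function(d) { process.stdout.write(d); });
});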
A socket may stop being `readable`, but http should not rely on this
property and should not assume that it means no data will ever
arrive from it. In fact, data may arrive on the next tick and, since
`this.push(null)` was already called, it will result in an error like
this:
Error: stream.push() after EOF
at readableAddChunk (_stream_readable.js:143:15)
at IncomingMessage.Readable.push (_stream_readable.js:123:10)
at HTTPParser.parserOnBody (_http_common.js:132:22)
at Socket.socketOnData (_http_client.js:277:20)
at Socket.EventEmitter.emit (events.js:101:17)
at Socket.Readable.read (_stream_readable.js:367:10)
at Socket.socketCloseListener (_http_client.js:196:10)
at Socket.EventEmitter.emit (events.js:123:20)
at TCP.close (net.js:479:12)
fix #6784
When creating TLSSocket on top of the regular socket that already
contains some received data, `_tls_wrap.js` should try to write all that
data to the internal `SSL*` instance.
fix #6940
Make the HMAC digest method configurable. Update crypto.pbkdf2() and
crypto.pbkdf2Sync() to take an extra, optional digest argument.
Before this commit, SHA-1 (admittedly the most common method) was used
exclusively.
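For example (the parameter values are arbitrary):
var crypto = require('crypto');
// The new optional digest argument selects the HMAC digest;
// previously SHA-1 was always used.
var key = crypto.pbkdf2Sync('secret', 'salt', 4096, 64, 'sha256');
console.log(key.toString('hex'));
crypto.pbkdf2('secret', 'salt', 4096, 64, 'sha256', function(err, key) {
  if (err) throw err;
  console.log(key.toString('hex'));
});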
Fixes #6553.
Performance gains are ~4x (~1.5us), but still much slower than a naive
approach. There is some duplicate work done between join(), normalize()
and normalizeArray() so additional optimizations are possible.
Note that this only improves the POSIX implementation.
Thanks to @isaacs and @othiym23 for helping with this optimization.
Signed-off-by: Trevor Norris <trev.norris@gmail.com>
After one of the OpenSSL updates we stopped accepting PEM private keys
and certificates that don't end with a newline (`\n`) character.
Handle this regression in `crypto.js` to cause our users less trouble.
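For instance, a key whose PEM text lacks the trailing newline should be
accepted again (illustrative; the file name and algorithm are arbitrary):
var fs = require('fs');
var crypto = require('crypto');
// Strip the trailing newline to simulate the problematic input.
var pem = fs.readFileSync('key.pem', 'utf8').replace(/\n+$/, '');
var sign = crypto.createSign('RSA-SHA256');
sign.update('some data');
console.log(sign.sign(pem, 'hex')); // should no longer fail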
fix #6892
The ability to add/remove an AsyncListener to an object after its
creation was an artifact of trying to get AL working with the domain
module. Now that is no longer necessary, and other features that would
be affected by this functionality are going to be implemented. So the
code will be removed for now to simplify the implementation process.
In the future this code will likely be reintroduced, but after some
other more important matters have been addressed.
None of this functionality was documented, as it was meant specifically
for domain-specific implementation workarounds.
Signed-off-by: Timothy J Fontaine <tjfontaine@gmail.com>
We now wait to connect to the debuggee until we know that
its error stream has data, to ensure that the output message
"connecting..... ok" appears after "Debugger listening on port xyz"
I also increased the test timeout to let the more complex
tests finish in time on Windows
This change fixes the following unit tests on Windows:
test-debugger-repl.js
test-debugger-repl-term.js
test-debugger-repl-utf8.js
test-debugger-repl-restart.js
Before this commit, verification exceptions had err.message set to the
OpenSSL error code (e.g. 'UNABLE_TO_VERIFY_LEAF_SIGNATURE').
This commit moves the error code to err.code and replaces err.message
with a human-readable error. Example:
// before
{
  message: 'UNABLE_TO_VERIFY_LEAF_SIGNATURE'
}
// after
{
  code: 'UNABLE_TO_VERIFY_LEAF_SIGNATURE',
  message: 'unable to verify the first certificate'
}
UNABLE_TO_VERIFY_LEAF_SIGNATURE is a good example of why you want this:
the error code suggests that it's the last certificate that fails to
validate, while it's actually the first certificate in the chain.
Going by the number of mailing list posts and StackOverflow questions,
it's a source of confusion to many people.
Spawn's arguments were documented to be optional, as they are for the
other similar child_process APIs, but the code was missing. The result
was that `child_process.spawn('node', {})` errored when calling slice()
on an Object; now it behaves as the documentation said it would.
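For example (illustrative):
var spawn = require('child_process').spawn;
// Previously this threw while trying to call slice() on the options
// object; with the fix, the args array may simply be omitted.
var child = spawn('node', { stdio: 'inherit' });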
domain.create().exit() should not clear the domain stack if the domain
instance does not exist within the stack.
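An illustration of the intended behavior (sketch):
var domain = require('domain');
var d1 = domain.create();
d1.enter();                // d1 is now on the domain stack
domain.create().exit();    // this domain was never entered, so...
console.log(process.domain === d1); // ...d1 must still be active: true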
Signed-off-by: Trevor Norris <trev.norris@gmail.com>
Before when an AsyncListener object was created and the "create"
callback returned a value, it was necessary to construct a new Object
with the same callbacks but add a place for the new storage value.
Now, instead, a separate storage array is kept on the context which is
used for any return value of the "create" callback. This significantly
reduces the number of Objects that need to be created.
Also added a flags property to the context to quickly check if a
specific callback was available either on the context or on the
AsyncListener instance itself.
A few other minor changes for readability were made that were difficult
to separate into their own commit.
This has not been optimized yet.
This is a slightly modified revert of bc39bdd.
Getting domains to use AsyncListeners became too much of a challenge
with many edge cases. While this is still a goal, it will have to be
deferred for now until more test coverage can be provided.
If a write is above the highWaterMark and _write still manages to
fully send it synchronously, _writableState.length will be adjusted down
to 0 synchronously while the write itself returns false, but 'drain' will
not be emitted until process.nextTick.
If another small write which is below the highWaterMark is issued before
process.nextTick happens, _writableState.needDrain will be reset to false,
and the 'drain' event will never be fired.
So we should check needDrain before setting it, which prevents it
from being improperly reset to false.
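A sketch of the race, using an illustrative writer whose _write
completes synchronously (not code from the patch):
var Writable = require('stream').Writable;
var w = new Writable({ highWaterMark: 4 });
w._write = function(chunk, enc, cb) { cb(); }; // finishes synchronously
w.on('drain', function() { console.log('drain'); });
// 1) Large write: returns false, so the caller waits for 'drain'.
w.write(new Buffer(16));
// 2) Small write issued before the nextTick that would emit 'drain':
//    without the fix this reset needDrain and 'drain' never fired.
w.write(new Buffer(1));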