Given the assert message, and the fact that endCb is always true
in the assert, I am pretty sure the test author was intending
to test for finishEvent, not endCb.
In this test, an HTTP server was ending the response before
consuming all the data sent in the PUT request.
Ending the response would cause the socket to be destroyed,
and since there is some data still to be read, an ECONNRESET is
surfaced on the client side, even though the client has already
ended its side and even seen a 'finish' event.
See:
http://www.w3.org/Protocols/rfc2616/rfc2616-sec8.html#sec8.2.2
While it is certainly admissible for the server to send a response
before consuming the entire request, it seems reasonable to
expect that the server would close the connection afterwards
and that the ECONNRESET would be raised on the client.
So I have changed the test to wait until the entire request has been
consumed before sending the response.
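Roughly, the pattern the test now follows looks like this (a hypothetical
sketch, not the actual test code):

    var http = require('http');

    var server = http.createServer(function(req, res) {
      // Drain the entire request body first, then respond.
      req.resume();
      req.on('end', function() {
        res.end('ok');
        server.close();
      });
    });

    server.listen(0, function() {
      var req = http.request({
        port: server.address().port,
        method: 'PUT',
        path: '/'
      }, function(res) {
        res.resume();
      });
      req.end(new Buffer(1024 * 1024));
    });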
This implements the user-facing APIs that let one run a child process
and block until it exits.
Logic shared with the async counterpart of each function was refactored
to enable code reuse.
Docs and tests are included.
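The new synchronous counterparts can be used roughly like this (a sketch;
see the included docs for the exact signatures):

    var child_process = require('child_process');

    // Blocks until the command exits; returns its stdout.
    var out = child_process.execSync('echo hello', { encoding: 'utf8' });

    // spawnSync returns an object with status, signal, stdout and stderr.
    var result = child_process.spawnSync('ls', ['-l'], { encoding: 'utf8' });
    console.log(result.status, out);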
Don't use an argument as the callback if it isn't a valid callback
function. Instead, throw an exception that explains the problem.
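For example, a call like the following now throws up front (hypothetical
snippet, not from the tests):

    var dns = require('dns');

    // Passing something that is not a function where the callback is
    // expected now throws instead of silently misusing the argument.
    try {
      dns.lookup('nodejs.org', 'definitely-not-a-callback');
    } catch (err) {
      console.error(err.message);
    }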
Adds to #7070 ("DNS — Throw meaningful error(s)").
The number of connections achieved by the test can vary by platform
and by machine. Lower the acceptance threshold so that the
test passes on Windows.
Make vm.runInContext() and vm.runInNewContext() stop copying the Proxy
object from the parent context into the new context when --harmony or
--harmony_proxies is in effect because it overwrites the new context's
native Proxy object.
This commit also adds a regression test for Harmony symbols. They work
okay in the current implementation and the test should ensure it stays
that way.
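A rough illustration of the intent (assuming node is started with
--harmony_proxies; not taken from the regression test):

    // Run with: node --harmony_proxies example.js
    var vm = require('vm');

    // The sandboxed code should see the new context's own native Proxy,
    // not a copy of the parent context's Proxy object.
    console.log(vm.runInNewContext('typeof Proxy'));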
Conditional globals like 'gc' should only be recognized when --expose_gc
is set. The global.gc feature check works only when done eagerly;
otherwise it lets a leaked global variable called 'gc' slip through.
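A sketch of the eager check (hypothetical names, not the actual test
harness code):

    var knownGlobals = [setTimeout, setInterval, clearTimeout, clearInterval];

    // Do the feature check eagerly, at load time.  A lazy check performed
    // later would also match a variable named 'gc' that a test accidentally
    // leaked onto the global object, hiding the leak.
    if (typeof gc === 'function') {
      knownGlobals.push(gc);
    }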
Before, `new String('foo')` would be inspected as `"{}"` which
is simply not very helpful. Now, a more meaningful
`"[String: 'foo']"` result will be returned from `util.inspect()`.
Boxed String, Boolean, and Number types are all supported.
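For example (the Number and Boolean outputs follow the same pattern as the
documented String case):

    var util = require('util');

    console.log(util.inspect(new String('foo')));   // [String: 'foo']
    console.log(util.inspect(new Number(42)));      // [Number: 42]
    console.log(util.inspect(new Boolean(true)));   // [Boolean: true]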
Closes #7047
Try embedding the ` ... ^` lines inside the `SyntaxError` (or any other
native error) object before giving up and printing them to stderr.
fix #6920, fix #1310
The AsyncListener API has been moved into the "tracing" module in order
to keep the process object free from unnecessary clutter.
Signed-off-by: Timothy J Fontaine <tjfontaine@gmail.com>
Add a new 'tracing' module with a v8 property that lets the user
register listeners for gc events. The listeners are invoked after
every garbage collection cycle with 'before' and 'after' statistics.
Useful for monitoring tools that want to keep track of memory usage.
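A rough sketch of registering such a listener; the 'gc' event name is an
assumption here, while the before/after statistics are as described above:

    var tracing = require('tracing');

    // Invoked after every garbage collection cycle with 'before' and
    // 'after' heap statistics.
    tracing.v8.on('gc', function(before, after) {
      console.log(before, after);
    });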
If the call to writeBuffer completes asynchronously, we need to have
an oncomplete callback on the request object no matter what. The
writeQueueSize seems irrelevant in that regard.
Note that on Windows writeBuffer always completes asynchronously.
See related commit 9836a4eeda
`tls_wrap.cc` was crashing in an `Unwrap` call when a non-`SecureContext`
object was passed to it. Check that the passed object is a `SecureContext`
instance before unwrapping it.
fix #7008
The test is no longer valid for the original scenario.
It now fails intermittently because of two other issues:
1. Since the client is only processing one readable event, the
client request is not enough to keep the process alive and the
process can exit before the desired events have been raised.
2. Reading just 1 byte is not enough to guarantee that the parser
will eventually consume all the data and raise the desired
parse error. I tried postponing the server.close() to address
issue (1), but then the test just hangs sometimes.
Even if stdio streams are opened as file streams, we should never try
to close them. This is accomplished by passing `autoClose: false`
in the options when they are created.
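For illustration, the option in question on an fs stream (a minimal
sketch, not the actual bootstrap code):

    var fs = require('fs');

    // Open an existing fd as a stream without the stream taking ownership
    // of it: the fd stays open after the stream is done.
    var stdinStream = fs.createReadStream(null, { fd: 0, autoClose: false });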
It's saner to check exit codes or signals to determine if the process
actually aborted. On OS X and Linux the exit code is 134; on SunOS
the SIGABRT signal is propagated.
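A sketch of such a check (hypothetical test snippet):

    var spawn = require('child_process').spawn;

    var child = spawn(process.execPath, ['-e', 'process.abort()']);
    child.on('exit', function(code, signal) {
      // 134 is 128 + SIGABRT on OS X and Linux; SunOS reports the signal.
      var aborted = code === 134 || signal === 'SIGABRT';
      console.log('aborted:', aborted);
    });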
Right now no default ciphers are used in, e.g., https.get, meaning that
weak export ciphers like TLS_RSA_EXPORT_WITH_DES40_CBC_SHA are
accepted.
To reproduce:
node -e "require('https').get({hostname: 'www.howsmyssl.com', \
path: '/a/check'}, function(res) {res.on('data', \
function(d) {process.stdout.write(d)})})"
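For illustration only (not part of this change), a client can restrict the
offered cipher suites explicitly via the `ciphers` option:

    var https = require('https');

    https.get({
      hostname: 'www.howsmyssl.com',
      path: '/a/check',
      // Explicitly restrict the cipher suites offered by the client.
      ciphers: 'HIGH:!aNULL:!MD5'
    }, function(res) {
      res.on('data', function(d) {
        process.stdout.write(d);
      });
    });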
The test was not waiting for all the worker-created sockets
to be listening before calling cluster.disconnect().
As a result, the channels with the workers could get closed
before all the socket handles had been passed to them, leading
to various errors.
A socket may stop being `readable`, but http should not rely on this
property or assume that it means no more data will ever
arrive from it. In fact, data may arrive on the next tick and, since
`this.push(null)` was already called, it will result in an error like
this:
Error: stream.push() after EOF
at readableAddChunk (_stream_readable.js:143:15)
at IncomingMessage.Readable.push (_stream_readable.js:123:10)
at HTTPParser.parserOnBody (_http_common.js:132:22)
at Socket.socketOnData (_http_client.js:277:20)
at Socket.EventEmitter.emit (events.js:101:17)
at Socket.Readable.read (_stream_readable.js:367:10)
at Socket.socketCloseListener (_http_client.js:196:10)
at Socket.EventEmitter.emit (events.js:123:20)
at TCP.close (net.js:479:12)
fix #6784
The test was calling server.close() after the write on the socket
had completed. However, the fact that the write had completed was
not a valid indication that the server had received the data.
This would result in a premature closing of the server and
an ECONNRESET error on the client.
When creating a TLSSocket on top of a regular socket that already
contains some received data, `_tls_wrap.js` should try to write all that
data to the internal `SSL*` instance.
fix #6940
Make the HMAC digest method configurable. Update crypto.pbkdf2() and
crypto.pbkdf2Sync() to take an extra, optional digest argument.
Before this commit, SHA-1 (admittedly the most common method) was used
exclusively.
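For example (assuming the OpenSSL build provides SHA-256; SHA-1 remains
the default when the digest is omitted):

    var crypto = require('crypto');

    // New optional digest argument selects the HMAC digest method.
    crypto.pbkdf2('secret', 'salt', 4096, 64, 'sha256', function(err, key) {
      if (err) throw err;
      console.log(key.toString('hex'));
    });

    var key = crypto.pbkdf2Sync('secret', 'salt', 4096, 64, 'sha256');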
Fixes #6553.
After one of the OpenSSL updates we stopped accepting PEM private keys
and certificates that don't end with a newline (`\n`) character.
Handle this regression in `crypto.js` to cause less trouble for our users.
fix #6892