These will be used to allow users to filter which types of calls
they wish their callbacks to run for.
Signed-off-by: Timothy J Fontaine <tjfontaine@gmail.com>
"flags" could mean one of many things, and multiple flag types could be
checked. So make the field more explicit about what type of flags are being
stored.
Signed-off-by: Timothy J Fontaine <tjfontaine@gmail.com>
Add a new 'tracing' module with a v8 property that lets the user
register listeners for gc events. The listeners are invoked after
every garbage collection cycle with 'before' and 'after' statistics.
Useful for monitoring tools that want to keep track of memory usage.
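A minimal usage sketch, assuming the `v8` property is an EventEmitter and
that the listener receives the before/after statistics described above;
the event name, listener signature, and statistics field names here are
assumptions, not taken from the implementation:

    var tracing = require('tracing');

    // Assumed listener shape: one statistics object captured before the
    // gc cycle and one captured after it.
    tracing.v8.on('gc', function (before, after) {
      console.log('heap used before: %d, after: %d',
                  before.used_heap_size, after.used_heap_size);
    });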
Create a new HandleScope before looking up the object context with
v8::Object::CreationContext(), else we leak the Local<Context> into
the current HandleScope.
That's relatively harmless unless the HandleScope is long-lived and
MakeCallback() is called a lot. In a scenario like that, we may end
up leaking a lot of memory.
What is unfortunate about this change is that we're trying hard to
eradicate the node_isolate global. Longer term, we will probably have
to change the MakeCallback() prototype to one that requires an explicit
v8::Isolate* argument.
Make it possible to invoke MakeCallback() on a v8::Value but only for
the variant that takes a v8::Function as the thing to call.
The const char* and v8::String variants still require a v8::Object
because the function to call is looked up as a property on the receiver,
but that only works when the receiver is an object, not a primitive.
If the call to writeBuffer completes asynchronously, we need to have
an oncomplete callback on the request object no matter what. The
writeQueueSize seems irrelevant in that regard.
Note that on Windows writeBuffer always completes asynchronously.
See related commit 9836a4eeda
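A hypothetical sketch of the pattern being described; `handle`,
`writeBuffer`, and `afterWrite` stand in for Node internals and are used
purely for illustration, not as the actual net.js code:

    function afterWrite(status, handle, req) {
      // invoked by the binding once the write has really completed
    }

    function writeData(handle, data) {
      var req = { oncomplete: afterWrite };  // attach unconditionally
      handle.writeBuffer(req, data);
      // handle.writeQueueSize says nothing about whether the write
      // completed synchronously; on Windows it never does.
    }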
`tls_wrap.cc` was crashing in an `Unwrap` call when a non-`SecureContext`
object was passed to it. Check that the passed object is a `SecureContext`
instance before unwrapping it.
fix#7008
The test is no longer valid for the original scenario.
It now fails intermittently because of two other issues:
1. Since the client is only processing one readable event, the
client request is not enough to keep the process alive and the
process can exit before the desired events have been raised.
2. Reading just 1 byte is not enough to guarantee that the parser
will eventually consume all the data and raise the desired
parse error. I tried postponing the server.close() to address
issue 1 above, but then the test just hangs sometimes.
Even if stdio streams are opened as file streams, we should never try
to close them. This can be accomplished by passing `autoClose: false`
in the options when they are created.
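As an illustration of the option (a sketch, not the actual core change),
a stdio descriptor can be wrapped in a file stream that never closes the
underlying fd:

    var fs = require('fs');

    // Read from fd 0 (stdin) as a file stream, but never close the fd.
    var stdin = fs.createReadStream(null, { fd: 0, autoClose: false });

    stdin.on('data', function (chunk) {
      process.stdout.write(chunk);
    });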
This was originally introduced in 6034701 to prevent the closing
brace from being pushed onto the next line if an object is longer than
the max width. However, the functionality was removed in d164989, but
the supplementary variables (and operations) were left behind.
This matches how libuv handles the definition of ssize_t, by
typedef'ing ssize_t as intptr_t.
However, in the future we will use portable types from stddef.h.
It's saner to check exit codes or signals to determine whether the process
actually aborted. On OSX and Linux the exit code is 134; on SunOS the
SIGABRT signal is propagated.
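For example, a test can spawn a child that calls process.abort() and
accept either outcome (a sketch):

    var assert = require('assert');
    var spawn = require('child_process').spawn;

    var child = spawn(process.execPath, ['-e', 'process.abort()']);

    child.on('exit', function (code, signal) {
      // 134 is the shell-style code for SIGABRT (128 + 6); depending on
      // the platform the abort may be reported as the signal instead.
      assert(code === 134 || signal === 'SIGABRT');
    });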
Built-in modules should be automatically registered, replacing the
static module list. Add-on modules should also be automatically
registered via DSO constructors. This improves flexibility in adding
built-in modules and is also a prerequisite to pure-C addon modules.
Right now no default ciphers are used in, e.g., https.get, meaning that
weak export ciphers like TLS_RSA_EXPORT_WITH_DES40_CBC_SHA are
accepted.
To reproduce:
node -e "require('https').get({hostname: 'www.howsmyssl.com', \
path: '/a/check'}, function(res) {res.on('data', \
function(d) {process.stdout.write(d)})})"
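A hedged sketch of the kind of fix implied: pass an explicit `ciphers`
option so weak export suites are rejected. The cipher string below is
illustrative and not necessarily the default that was adopted:

    var https = require('https');

    https.get({
      hostname: 'www.howsmyssl.com',
      path: '/a/check',
      // Illustrative cipher list, not the exact default picked by core.
      ciphers: 'ECDHE-RSA-AES128-SHA256:AES128-GCM-SHA256:HIGH:!aNULL:!MD5:!EXPORT'
    }, function (res) {
      res.on('data', function (d) {
        process.stdout.write(d);
      });
    });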
The test was not waiting for all the worker-created sockets
to be listening before calling cluster.disconnect().
As a result, the channels with the workers could get closed
before all the socket handles had been passed to them, leading
to various errors.
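A sketch of the fix pattern (assuming two workers with one listening
socket each): count the cluster's 'listening' events and only then
disconnect:

    var cluster = require('cluster');
    var net = require('net');

    if (cluster.isMaster) {
      var expected = 2;  // two workers, one listening socket each
      var listening = 0;

      cluster.fork();
      cluster.fork();

      // Only tear the workers down once every worker-created socket is
      // listening, so no channel closes while a handle is still in flight.
      cluster.on('listening', function (worker, address) {
        if (++listening === expected)
          cluster.disconnect();
      });
    } else {
      net.createServer(function (c) { c.end(); }).listen(0);
    }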
Original commit message:
ares_parse_txt_reply: return a ares_txt_reply node for each sub-string
Previously, the function would wrongly return all substrings merged into
one.
fix#6931
A socket may become not `readable`, but http should not rely on this
property and should not assume that it means no data will ever
arrive from the socket. In fact, data may arrive on the next tick and,
since `this.push(null)` was already called, it will result in an error
like this:
Error: stream.push() after EOF
at readableAddChunk (_stream_readable.js:143:15)
at IncomingMessage.Readable.push (_stream_readable.js:123:10)
at HTTPParser.parserOnBody (_http_common.js:132:22)
at Socket.socketOnData (_http_client.js:277:20)
at Socket.EventEmitter.emit (events.js:101:17)
at Socket.Readable.read (_stream_readable.js:367:10)
at Socket.socketCloseListener (_http_client.js:196:10)
at Socket.EventEmitter.emit (events.js:123:20)
at TCP.close (net.js:479:12)
fix#6784
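The error itself is easy to reproduce in isolation with a bare Readable,
which is essentially what happens when a late body chunk is pushed after
`push(null)`:

    var Readable = require('stream').Readable;

    var r = new Readable();
    r._read = function () {};

    r.push(null);         // EOF was signaled, as in the close listener
    r.push('late data');  // Error: stream.push() after EOF
                          // (emitted as 'error' and thrown here, since
                          //  nothing listens for it)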