If a client sends a lot more pipelined requests than we can handle, then
we need to provide backpressure so that the client knows to back off.
Do this by pausing both the stream and the parser itself when the
responses are not being read by the downstream client.
Backport of 085dd30
A follow-up commit will save the domain name on the request object, but
we can't call that property 'domain' because that gets intercepted by
src/node.cc and lib/domain.js to implement the node.js feature of the
same name.
To avoid confusion, rename all variables called 'domain' to 'hostname'.
If a client sends a lot more pipelined requests than we can handle, then
we need to provide backpressure so that the client knows to back off.
Do this by pausing both the stream and the parser itself when the
responses are not being read by the downstream client.
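A minimal sketch of the pattern, assuming a `parser` object that exposes
pause()/resume() (illustrative, not the exact lib/http.js wiring):

    function writeWithBackpressure(socket, parser, res, chunk) {
      if (!res.write(chunk)) {          // the downstream client isn't keeping up
        socket.pause();                 // stop reading more pipelined requests
        parser.pause();                 // and stop parsing what was already read
        res.once('drain', function() {
          socket.resume();
          parser.resume();
        });
      }
    }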
Fix GH-6214
Don't emit the 'disconnect' event until all workers have gone away.
Before this commit, the event was emitted when all open handles were
closed, which usually - but not always - amounts to the same thing.
Fixes #6346.
Destroying the TLS session implies destroying the underlying socket, but
before this commit, that was done with net.Socket#destroy() rather than
net.Socket#destroySoon(). The former closes the connection right away,
even when there is still data to write. In other words, sometimes the
final TLS record got truncated.
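A sketch of the difference (illustrative, not the exact lib/tls.js code):

    function teardown(socket) {
      // socket.destroy();    // closes right away, even with data still queued,
      //                      // which is how the final TLS record got truncated
      socket.destroySoon();   // flushes pending writes first, then closes
    }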
Fixes #6107.
fs.truncate() and its synchronous sibling are implemented in terms of
open() + ftruncate(). Unfortunately, it opened the target file with
mode 'w' a.k.a. 'write-only and create or truncate at open'.
The subsequent call to ftruncate() then moved the end-of-file pointer
from zero to the requested offset with the net result of a file that's
neatly truncated at the right offset and filled with zero bytes only.
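A sketch of the corrected sequence, assuming mode 'r+' (which leaves the
existing contents intact); the real lib/fs.js handles more argument
permutations:

    var fs = require('fs');

    function truncate(path, len, callback) {
      fs.open(path, 'r+', function(er, fd) {    // 'r+' instead of 'w'
        if (er) return callback(er);
        fs.ftruncate(fd, len, function(er) {
          fs.close(fd, function(er2) {
            callback(er || er2);
          });
        });
      });
    }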
This bug was introduced in commit 168a5557 but in fairness, before that
commit fs.truncate() worked like fs.ftruncate() so it seems we've never
had a working fs.truncate() until now.
Fixes #6233.
I haven't actually tested this code, but was reading it due to a
post that linked to the code here:
http://dailyjs.com/2013/09/26/libuv/
As I was reading through the code, I noticed a path that can't
be reached.
I didn't strictly follow the contributing guide:
https://github.com/joyent/node/wiki/Contributing
but the change seems safe.
Feel free to close this out. I'm not sure if it was just an oversight
or what.
The test case from the previous commit exposed a regression in the way
that c-ares errors are reported to JS land. Said regression was
introduced in commit 756b622 ("src: add multi-context support").
Fixes the following test failure:
$ out/Release/node test/simple/test-dns-regress-6244
util.js:675
var errname = uv.errname(err);
^
Error: err >= 0
at Object.exports._errnoException (util.js:675:20)
at errnoException (dns.js:43:15)
at Object.onresolve [as oncomplete] (dns.js:145:19)
lib/dns.js erroneously assumed that the error code was a libuv error
code when it's really a c-ares status code. Libuv handles
getaddrinfo()-style lookups (by far the most common type of lookup),
which is why this bug wasn't discovered earlier.
Fix "Assertion failed" when trying to connect to non-int ports:
Assertion failed: (args[2]->Uint32Value()), function Connect,
file ../src/tcp_wrap.cc, line 379.
Abort trap: 6
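A repro sketch; the exact post-fix error is not spelled out here, but the
bad port is now rejected in JS rather than aborting the process:

    var net = require('net');

    // A port that doesn't coerce to a uint32 used to reach tcp_wrap.cc
    // and trip the assert above:
    net.connect({ host: 'localhost', port: 'foo' });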
The `options` that were being passed in before here are specific to a
single request, which kinda defeats the purpose of using an Agent in the
first place.
Worse, these `options` have not yet been "processed" by the
`http.ClientRequest` class, so if `port: null` is set (as it is as the
result of a `url.parse()` call), it takes precedence over the processed
values, since the agent's "options" get mixed in last in the
`createSocket()` function.
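An illustration of the clobbering:

    var url = require('url');

    var opts = url.parse('http://example.com/');
    console.log(opts.port);  // null -- ClientRequest resolves this to 80,
                             // but the raw { port: null } was mixed in
                             // last inside createSocket() and won out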
Fixes #6197.
Fixes #6199.
Closes #6231.
Otherwise the data ends up "on the wire" twice, and
switching between consuming the stream using `ondata`
vs. `read()` would yield duplicate data, which was bad.
String#toLowerCase() is incredibly slow and was costing a 15-30%
performance hit for Buffers less than 1KB. Now the method first attempts
to match the passed encoding directly, and only falls back to
lowercasing it if that fails.
The optimization for not passing any encoding at all is still at the top
of the method.
At most this may add a 10% performance hit when a mixed-case encoding
is passed.
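A sketch of the fast path; the case list here is illustrative, not the
exact lib/buffer.js set:

    function normalizeEncoding(enc, retried) {
      switch (enc) {
        case 'utf8': case 'utf-8':
          return 'utf8';
        case 'ascii': case 'binary': case 'base64': case 'hex':
          return enc;
        default:
          if (retried) return undefined;
          // Slow path: lowercase once and try again.
          return normalizeEncoding(String(enc).toLowerCase(), true);
      }
    }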
The NPN protocols were set on `require('tls')` or the `global` object
instead of being a local property. This led to strange persistence of
NPN protocols, and sometimes to incorrect protocol selection (when no
NPN protocols were passed in the client options).
Fix #6168
When the Agent has maxSockets=Infinity and keepAlive=false, we will
always close the connection immediately after the response completes.
Since we're going to close it anyway, send a `connection:close` header
rather than a `connection:keep-alive` header. Still send the
`connection:keep-alive` if the agent will actually reuse the socket,
however.
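A sketch of the decision (names illustrative, not the exact lib code):

    function connectionHeaderFor(agent) {
      // The socket can only be reused if keep-alive is on or the pool
      // is bounded; otherwise it closes as soon as the response ends.
      var willReuse = agent.keepAlive || agent.maxSockets !== Infinity;
      return willReuse ? 'keep-alive' : 'close';
    }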
Closes #5838
This simplifies the logic that was in isSyntaxError, as well as the
choice to wrap command input in parens to coerce to an expression
statement.
1. Rather than a growing blacklist of allowed-to-throw syntax errors,
just sniff for the one we really care about ("Unexpected end of input")
and let all the others pass through.
2. Wrapping {a:1} in parens makes sense, because blocks and line labels
are silly and confusing and should not be in JavaScript at all.
However, wrapping functions and other types of programs in parens is
weird and required yet *more* hacking to work around. By only wrapping
statements that start with { and end with }, we can handle the confusing
use-case, without having to then do extra work for functions and other
cases.
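A sketch of both ideas, with illustrative names:

    // (1) Only "unexpected end of input" means "keep reading lines":
    function isRecoverableError(e) {
      return e instanceof SyntaxError &&
             /^Unexpected end of input/.test(e.message);
    }

    // (2) Only wrap input that both starts with { and ends with }, so
    // `{a: 1}` evaluates as an object literal rather than a block:
    function wrapCommand(cmd) {
      if (/^\s*\{/.test(cmd) && /\}\s*$/.test(cmd))
        return '(' + cmd + ')';
      return cmd;
    }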
This also fixes the repl wart where `console.log)(` works in the repl,
but only by virtue of the fact that it's wrapped in parens first, as
well as potential side effects of double-running the commands, such as:
> x = 1
1
> eval('x++; throw new SyntaxError("e")')
... ^C
> x
3
Adding a new `repl-harmony` test file here because adding the
`--use_strict --harmony` flags on the main repl test file was causing
lots of unrelated failures, due to global variable assignments and
things like that. This new test file is based on the original
repl.js test file, but has a lot of the tests stripped out. A test case
for this commit is included though.
Fixes #6132.
Replace the growing list of 'isSyntaxError' whack-a-mole conditions with a
smarter approach. This creates a vm Script object *first*, which will
parse the code and raise a SyntaxError right away.
We still do need the test function, but only because strict mode syntax
errors are not recoverable, and should be raised right away. Really, we
should probably *only* continue on "unexpected end of input" SyntaxErrors.
Also fixes a very difficult-to-test nit where the '...' indentation is
not properly cleared when you ^C out of a syntax error.
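A sketch of the parse-first flow (illustrative; isRecoverableError is
the helper sketched earlier):

    var vm = require('vm');

    function tryCompile(code) {
      try {
        // Compiling the Script parses the code, so a SyntaxError
        // surfaces here, before anything runs.
        return vm.createScript(code);
      } catch (e) {
        if (e instanceof SyntaxError && isRecoverableError(e))
          return null;   // keep the '...' prompt and wait for more input
        throw e;         // strict-mode errors etc. are raised right away
      }
    }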
Closes #6093
Passing a filename is still supported in place of certain options
arguments, for backward-compatibility, but timeout and display-errors
are not translated since those were undocumented.
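A sketch of the two call shapes (per this commit; the option names
follow the text above):

    var vm = require('vm');

    // New options form:
    vm.runInThisContext('1 + 1', {
      filename: 'example.js',
      displayErrors: true,  // print the offending source line on error
      timeout: 100          // milliseconds
    });

    // Legacy form, still supported: a bare filename.
    vm.runInThisContext('1 + 1', 'example.js');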
Also managed to eliminate an extra stack trace line by not calling
through the `createScript` export.
Added a few message tests to show how `displayErrors` works.
Previously, calling `vm.createContext(o)` repeatedly on the same `o`
would cause new C++ `ContextifyContext`s to be created and stored on
`o`, while the previous resident went off into leaked-memory limbo.
Now, repeatedly trying to contextify a sandbox will do nothing after
the first time.
To detect this, an independently-useful `vm.isContext(sandbox)` export
was added.
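The resulting behavior, per this commit:

    var vm = require('vm');

    var sandbox = {};
    vm.isContext(sandbox);      // false
    vm.createContext(sandbox);
    vm.isContext(sandbox);      // true
    vm.createContext(sandbox);  // no-op: nothing new is created or leaked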
There's no need to create a new Buffer instance if we're just going to
immediately call toString() at the end anyway. Better to create a
string up front, call setEncoding() on the streams, and do string
concatenation instead.
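A sketch of the approach (illustrative, not the exact
lib/child_process.js code):

    function collectStdout(child, encoding, callback) {
      var stdout = '';
      child.stdout.setEncoding(encoding);   // decode to strings up front
      child.stdout.on('data', function(chunk) {
        stdout += chunk;                    // concatenate strings, not Buffers
      });
      child.stdout.on('end', function() {
        callback(null, stdout);
      });
    }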
Since the encoding is no longer relevant once it is decoded to a Buffer,
it is confusing and incorrect to pass the encoding as 'utf8' or whatever
in those cases.
Closes #6119
Length arguments passed to SlowBuffer were coerced to Int32, not Uint32,
so passing a negative number would throw the following:
node: ../src/smalloc.cc:244: void node::smalloc::Alloc(): Assertion `length <= kMaxLength' failed.
Aborted (core dumped)
That has been fixed by coercing to Uint32 and comparing the value
against kMaxLength.
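The JS-visible behavior after the fix, as described above:

    var SlowBuffer = require('buffer').SlowBuffer;

    // -1 coerces to 4294967295 as a Uint32, which fails the kMaxLength
    // check and is rejected in JS instead of hitting the C++ assert:
    new SlowBuffer(-1);  // throws instead of dumping core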
Due to a lot of the util.is* checks, there was much unnecessary
overhead for the most common use case of Buffer: creating a new Buffer
instance for data from incoming I/O. NativeBuffer is a simple way to
bypass all the unneeded checks and simply hand back a Buffer instance
while setting the length.
Instead of doing all the domain handling in core, allow the domain to
set an error handler that'll take care of it all. This way the domain
error handling can be abstracted enough for any user to use it.
All the Buffer#{ascii,hex,etc.}Slice() methods are intentionally
strict, to alert when an attempt is made to access a Buffer instance
out of bounds. Buffer#toString() is the more user-friendly way of
accessing the data, and will coerce values to their min/max on
overflow.
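A small illustration of the distinction:

    var buf = new Buffer(4);

    buf.toString('ascii', 0, 100);  // friendly: the range is coerced to
                                    // the 4 bytes actually available
    buf.asciiSlice(0, 100);         // strict: throws on the bad range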