Destroying the TLS session implies destroying the underlying socket, but
before this commit, that was done with net.Socket#destroy() rather than
net.Socket#destroySoon(). The former closes the connection right away,
even when there is still data to write. In other words, sometimes the
final TLS record got truncated.
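A minimal sketch of the difference (illustrative; the local server and the
dummy payload are stand-ins, not the actual lib/tls.js code):

```js
var net = require('net');

var server = net.createServer(function(c) {
  c.on('data', function(d) { console.log('server got %d bytes', d.length); });
}).listen(0, function() {
  var socket = net.connect(server.address().port);
  socket.write(new Buffer(64 * 1024));   // stand-in for the final TLS record

  // socket.destroy();   // before: closes right away, queued data can be lost
  socket.destroySoon();  // after: flushes pending writes, then closes

  socket.on('close', function() { server.close(); });
});
```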
Fixes #6107.
fs.truncate() and its synchronous sibling are implemented in terms of
open() + ftruncate(). Unfortunately, it opened the target file with
mode 'w' a.k.a. 'write-only and create or truncate at open'.
The subsequent call to ftruncate() then moved the end-of-file pointer
from zero to the requested offset with the net result of a file that's
neatly truncated at the right offset and filled with zero bytes only.
This bug was introduced in commit 168a5557 but, in fairness, before that
commit fs.truncate() worked like fs.ftruncate(), so it seems we've never
had a working fs.truncate() until now.
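A sketch of the corrected open() + ftruncate() sequence, assuming the fix
opens with 'r+' instead of 'w' (illustrative, not the actual lib/fs.js code):

```js
var fs = require('fs');

function truncate(path, len, callback) {
  // Mode 'w' would zero the file right here, before ftruncate() ever runs;
  // 'r+' leaves the existing contents alone.
  fs.open(path, 'r+', function(er, fd) {
    if (er) return callback(er);
    fs.ftruncate(fd, len, function(er) {
      fs.close(fd, function(er2) {
        callback(er || er2);
      });
    });
  });
}
```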
Fixes #6233.
I haven't actually tested this code, but was reading it due to a
post that linked to the code here:
http://dailyjs.com/2013/09/26/libuv/
As I was reading through the code, I noticed a path that can't
be reached.
I didn't strictly follow the contributing guide:
https://github.com/joyent/node/wiki/Contributing
but the change seems safe.
Feel free to close this out. I'm not sure if it was just an oversight
or what.
The test case from the previous commit exposed a regression in the way
that c-ares errors are reported to JS land. Said regression was
introduced in commit 756b622 ("src: add multi-context support").
Fixes the following test failure:
$ out/Release/node test/simple/test-dns-regress-6244
util.js:675
var errname = uv.errname(err);
^
Error: err >= 0
at Object.exports._errnoException (util.js:675:20)
at errnoException (dns.js:43:15)
at Object.onresolve [as oncomplete] (dns.js:145:19)
lib/dns.js erroneously assumed that the error code was a libuv error
code when it's really a c-ares status code. Libuv handles getaddrinfo()
style lookups (by far the most common type of lookup), which is why this
bug wasn't discovered earlier.
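A hedged sketch of the distinction (the callback shape and the
caresException() helper are assumptions for illustration, not dns.js
internals):

```js
// Stub so the sketch is self-contained; a real helper would map the c-ares
// status code to a proper error name.
function caresException(status, syscall) {
  var err = new Error(syscall + ' failed with c-ares status ' + status);
  err.syscall = syscall;
  return err;
}

function onresolve(callback, status, results) {
  if (status) {
    // `status` is a c-ares status code, not a libuv errno, so running it
    // through uv.errname()/_errnoException() trips the "err >= 0" assertion;
    // it has to go through a c-ares-aware helper instead.
    return callback(caresException(status, 'queryA'));
  }
  callback(null, results);
}

onresolve(console.log, 4 /* e.g. ARES_ENOTFOUND */, null);
```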
Fix "Assertion failed" when trying to connect to non-int ports:
Assertion failed: (args[2]->Uint32Value()), function Connect,
file ../src/tcp_wrap.cc, line 379.
Abort trap: 6
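A hedged sketch of the kind of JS-side validation that avoids hitting the
C++ assert (illustrative only; the real check lives in lib/net.js and may
differ):

```js
function checkPort(port) {
  var n = port | 0;   // the binding expects a Uint32
  if (n <= 0 || n > 65535)
    throw new RangeError('port should be a positive integer, got: ' + port);
  return n;
}

console.log(checkPort(8080));   // 8080
try {
  checkPort('not-a-port');
} catch (e) {
  console.log(e.message);       // thrown in JS instead of aborting in tcp_wrap.cc
}
```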
The `options` that were previously being passed in here are specific to a
single request, which kinda defeats the purpose of using an Agent in the
first place.
Worse, these `options` have not yet been "processed" by the
`http.ClientRequest` class, so if `port: null` is set (as it is after a
`url.parse()` call), it takes precedence over the processed values, since
the agent's "options" get mixed in last in the `createSocket()` function.
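An illustrative repro of the failure mode described above (URL and values
are hypothetical):

```js
var url = require('url');
var http = require('http');

var opts = url.parse('http://example.org/');
console.log(opts.port);   // null -- not yet "processed" into a real port number

// Before this change the agent mixed these raw request options in last inside
// createSocket(), so `port: null` could clobber the processed default of 80.
http.get(opts, function(res) {
  console.log('status: %d', res.statusCode);
});
```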
Fixes #6197.
Fixes #6199.
Closes #6231.
Otherwise the data ends up "on the wire" twice, and
switching between consuming the stream using `ondata`
vs. `read()` would yield duplicate data, which was bad.
String#toLowerCase() is incredibly slow and was costing a 15-30%
performance hit for Buffers less than 1KB. Now the method first attempts to
match the passed encoding directly, and only lowercases it afterwards if
that fails.
The optimization for not passing any encoding at all is still at the top
of the method.
At most this may add a 10% performance hit when a mixed-case encoding is
passed.
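A sketch of the technique (illustrative; the real lookup in lib/buffer.js
covers more spellings):

```js
function normalizeEncoding(enc) {
  switch (enc) {
    case 'utf8': case 'ascii': case 'base64':
    case 'binary': case 'hex': case 'ucs2': case 'utf16le':
      return enc;   // fast path: exact match, no toLowerCase() call
  }
  // Slow path: only mixed-case (or unknown) encodings pay for toLowerCase().
  var lowered = String(enc).toLowerCase();
  return lowered === enc ? undefined : normalizeEncoding(lowered);
}

console.log(normalizeEncoding('utf8'));   // 'utf8' via the fast path
console.log(normalizeEncoding('UTF8'));   // 'utf8', pays the toLowerCase() cost once
console.log(normalizeEncoding('bogus'));  // undefined: unknown encoding
```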
The NPN protocols were set on `require('tls')` or the `global` object instead
of being a local property. This led to strange persistence of NPN
protocols, and sometimes incorrect protocol selection (when no NPN
protocols were passed in client options).
fix #6168
In cases where the Agent has maxSockets=Infinity and keepAlive=false,
the connection will always be closed immediately after the response is
completed.
Since we're going to close it anyway, send a `connection:close` header
rather than a `connection:keep-alive` header. Still send the
`connection:keep-alive` if the agent will actually reuse the socket,
however.
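A hedged sketch of the decision (illustrative, not the actual lib/_http_*
code):

```js
var http = require('http');

function connectionHeaderFor(agent) {
  // The socket can only be reused if keep-alive is on, or if maxSockets is
  // finite so that a queued request might pick the socket up again.
  var willReuse = agent.keepAlive || agent.maxSockets !== Infinity;
  return willReuse ? 'keep-alive' : 'close';
}

console.log(connectionHeaderFor(new http.Agent({ keepAlive: true })));
// 'keep-alive'
console.log(connectionHeaderFor(new http.Agent({ keepAlive: false,
                                                 maxSockets: Infinity })));
// 'close'
```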
Closes #5838
This simplifies the logic that was in isSyntaxError, as well as the
choice to wrap command input in parens to coerce to an expression
statement.
1. Rather than a growing blacklist of allowed-to-throw syntax errors,
just sniff for the one we really care about ("Unexpected end of input")
and let all the others pass through.
2. Wrapping {a:1} in parens makes sense, because blocks and line labels
are silly and confusing and should not be in JavaScript at all.
However, wrapping functions and other types of programs in parens is
weird and required yet *more* hacking to work around. By only wrapping
statements that start with { and end with }, we can handle the confusing
use-case, without having to then do extra work for functions and other
cases (see the sketch below).
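A hedged sketch of the two rules above (illustrative, not the exact
lib/repl.js code):

```js
// Rule 2: only wrap input that both starts with '{' and ends with '}', so
// `{ a: 1 }` is evaluated as an object literal rather than a block.
function wrapCommand(cmd) {
  if (/^\s*\{/.test(cmd) && /\}\s*$/.test(cmd))
    return '(' + cmd + ')';
  return cmd;
}

// Rule 1: the only SyntaxError worth recovering from (by waiting for more
// input) is "Unexpected end of input"; everything else is reported right away.
function isRecoverableError(e) {
  return e instanceof SyntaxError &&
         /^Unexpected end of input/.test(e.message);
}

console.log(wrapCommand('{ a: 1 }'));        // '({ a: 1 })'
console.log(wrapCommand('function f() {}')); // unchanged
```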
This also fixes the repl wart where `console.log)(` works in the repl,
but only by virtue of the fact that it's wrapped in parens first, as
well as potential side effects of double-running the commands, such as:
> x = 1
1
> eval('x++; throw new SyntaxError("e")')
... ^C
> x
3
Adding a new `repl-harmony` test file here because adding the
`--use_strict --harmony` flags on the main repl test file was causing
lots of unrelated failures, due to global variable assignments and
things like that. This new test file is based off of the original
repl.js test file, but has a lot of the tests stripped out. A test case
for this commit is included though.
Fixes #6132.
Replace the growing list of 'isSyntaxError' whackamole conditions with a
smarter approach. This creates a vm Script object *first*, which will
parse the code and raise a SyntaxError right away.
We still do need the test function, but only because strict mode syntax
errors are not recoverable, and should be raised right away. Really, we
should probably *only* continue on "unexpected end of input" SyntaxErrors.
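A sketch of the parse-first approach (illustrative; tryCompile() and the
'repl' filename are placeholders, not the actual repl internals):

```js
var vm = require('vm');

function tryCompile(code, file) {
  try {
    // Creating the Script parses the code right away, so a SyntaxError
    // surfaces here before anything has been executed (and nothing runs twice).
    return vm.createScript(code, { filename: file });
  } catch (e) {
    var recoverable = e instanceof SyntaxError &&
                      /^Unexpected end of input/.test(e.message);
    if (recoverable) return null;   // caller keeps buffering more input
    throw e;                        // real syntax error: report immediately
  }
}

var script = tryCompile('1 + 1', 'repl');
if (script) console.log(script.runInThisContext());   // 2
console.log(tryCompile('function f() {', 'repl'));    // null -- needs more input
```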
Also fixes a very difficult-to-test nit where the '...' indentation is
not properly cleared when you ^C out of a syntax error.
Closes #6093
Passing a filename is still supported in place of certain options
arguments, for backward-compatibility, but timeout and display-errors
are not translated since those were undocumented.
Also managed to eliminate an extra stack trace line by not calling
through the `createScript` export.
Added a few message tests to show how `displayErrors` works.
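A usage sketch of the two accepted call forms:

```js
var vm = require('vm');

// Options form: filename, timeout (in ms) and displayErrors are explicit.
vm.runInThisContext('1 + 1', {
  filename: 'example.js',
  timeout: 100,
  displayErrors: true
});

// Legacy form: a bare string is still accepted as the filename, but there is
// no way to express timeout or displayErrors this way.
vm.runInThisContext('1 + 1', 'example.js');
```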
Previously, calling `vm.createContext(o)` repeatedly on the same `o`
would cause new C++ `ContextifyContext`s to be created and stored on
`o`, while the previous resident went off into leaked-memory limbo.
Now, repeatedly trying to contextify a sandbox will do nothing after
the first time.
To detect this, an independently-useful `vm.isContext(sandbox)` export
was added.
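A usage sketch of the behaviour described above:

```js
var vm = require('vm');

var sandbox = { answer: 42 };
console.log(vm.isContext(sandbox));   // false

vm.createContext(sandbox);            // contextifies the sandbox
vm.createContext(sandbox);            // no-op: already a context, nothing leaks

console.log(vm.isContext(sandbox));   // true
console.log(vm.isContext({}));        // false
```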
There's no need to create a new Buffer instance if we're just going to
immediately call toString() at the end anyway. Better to create a
string up front, call setEncoding() on the streams, and do string
concatenation instead.
Since the encoding is no longer relevant once it is decoded to a Buffer,
it is confusing and incorrect to pass the encoding as 'utf8' or whatever
in those cases.
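A sketch of the pattern (illustrative; the stream and callback names are
placeholders):

```js
function collect(stream, callback) {
  var body = '';
  stream.setEncoding('utf8');     // decode to strings as chunks arrive
  stream.on('data', function(chunk) {
    body += chunk;                // plain string concatenation
  });
  stream.on('end', function() {
    callback(null, body);         // no intermediate Buffer + toString()
  });
  stream.on('error', callback);
}
```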
Closes #6119
Length arguments passed to SlowBuffer were coerced to Int32, not Uint32,
so passing a negative number would abort with the following:
node: ../src/smalloc.cc:244: void node::smalloc::Alloc(): Assertion `length <= kMaxLength' failed.
Aborted (core dumped)
That has been fixed by coercing to Uint32 and comparing the value
against kMaxLength.
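A repro sketch of the failure mode described above:

```js
var SlowBuffer = require('buffer').SlowBuffer;

new SlowBuffer(10);   // fine
new SlowBuffer(-1);   // before: length coerced to Int32, tripping the kMaxLength
                      // assert and aborting the process; after: coerced to Uint32,
                      // compared against kMaxLength, and rejected with a thrown error
```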
Due to a lot of the util.is* checks, there was much unnecessary overhead
for the most common use case of Buffer, which is creating a new Buffer
instance for data from incoming I/O. NativeBuffer is a simple way to
bypass all the unneeded checks and simply hand back a Buffer instance
while setting the length.
Instead of doing all the domain handling in core, allow the domain to
set an error handler that'll take care of it all. This way the domain
error handling can be abstracted enough for any user to use it.
All the Buffer#{ascii,hex,etc.}Slice() methods are intentionally strict
and alert if an attempt is made to access a Buffer instance out of
bounds. Buffer#toString() is the more user-friendly way of accessing the
data, and will coerce values to their min/max on overflow.
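An illustrative sketch of the difference (the offsets are hypothetical):

```js
var buf = new Buffer('abc');

// User-friendly path: out-of-range offsets are clamped to the buffer's bounds.
console.log(buf.toString('ascii', 0, 100));   // 'abc'

// Strict path: the low-level slice methods throw on out-of-bounds access
// instead of clamping.
buf.asciiSlice(0, 100);
```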
This is an important part of the repl use-case.
TODO: The arg parsing in vm.runIn*Context() is rather wonky.
It would be good to move more of that into the Script class,
and/or an options object.
As documented in #3042 and in [1], the existing vm implementation has
many problems. All of these are solved by @brianmcd's [contextify][2]
package. This commit uses contextify as a conceptual base and its code
core to overhaul the vm module and fix its many edge cases and caveats.
Functionally, this fixes #3042. In particular:
- A context is now indistinguishable from the object it is based on
(the "sandbox"). A context is simply a sandbox that has been marked
by the vm module, via `vm.createContext`, with special internal
information that allows scripts to be run inside of it.
- Consequently, items added to the context from anywhere are
immediately visible to all code that can access that context, both
inside and outside the virtual machine (see the sketch below).
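A usage sketch of that equivalence:

```js
var vm = require('vm');

var sandbox = { count: 0 };
vm.createContext(sandbox);

vm.runInContext('count += 1; fromInside = true;', sandbox);   // inside the VM
sandbox.fromOutside = true;                                   // outside the VM

console.log(sandbox.count);                                   // 1
console.log(sandbox.fromInside);                              // true
console.log(vm.runInContext('typeof fromOutside', sandbox));  // 'boolean'
```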
This commit also smooths over the API very slightly:
- Parameter defaults are now uniformly triggered via `undefined`, per
ES6 semantics and previous discussion at [3].
- Several undocumented and problematic features have been removed, e.g.
the conflation of `vm.Script` with `vm` itself, and the fact that
`Script` instances also had all static `vm` methods. The API is now
exactly as documented (although arguably the existence of the
`vm.Script` export is not yet documented, just the `Script` class
itself).
In terms of implementation, this replaces node_script.cc with
node_contextify.cc, which is derived originally from [4] (see [5]) but
has since undergone extensive modifications and iterations to expose
the most useful C++ API and use the coding conventions and utilities of
Node core.
The bindings exposed by `process.binding('contextify')`
(node_contextify.cc) replace those formerly exposed by
`process.binding('evals')` (node_script.cc). They are:
- ContextifyScript(code, [filename]), with methods:
  - runInThisContext()
  - runInContext(sandbox, [timeout])
- makeContext(sandbox)
From this, the vm.js file builds the entire documented vm module API.
node.js and module.js were modified to use this new native binding, or
the vm module itself where possible. This introduces an extra line or
two into the stack traces of module compilation (and thus into most
stack traces), explaining the changed tests.
The tests were also updated slightly, with all vm-related simple tests
consolidated as test/simple/test-vm-* (some of them were formerly
test/simple/test-script-*). At the same time they switched from
`common.debug` to `console.error` and were updated to use
`assert.throws` instead of rolling their own error-testing methods.
New tests were also added, of course, demonstrating the new
capabilities and fixes.
[1]: http://nodejs.org/docs/v0.10.16/api/vm.html#vm_caveats
[2]: https://github.com/brianmcd/contextify
[3]: https://github.com/joyent/node/issues/5323#issuecomment-20250726
[4]: bf123f3ef9/src/contextify.cc
[5]: https://gist.github.com/domenic/6068120
`maybeInitFinished()` can emit the 'secure' event, which
in turn destroys the connection in case of authentication
failure and sets `this.pair.ssl` to `null`.
If such a condition arises after a non-empty read, the loop will continue
and `clearOut` will be called on a `null` object instead of the
`crypto::Connection` instance, resulting in the following assertion:
ERROR: Error: Hostname/IP doesn't match certificate's altnames
Assertion failed: handle->InternalFieldCount() > 0
fix #5756
`dns.lookup` defaults to selecting an IPv4 record even if an IPv6 record is
available for the desired zone. Generally this approach works, but if an
IPv4 address is unavailable, the only way to opt out and connect over IPv6
is to call `dns.lookup` yourself and pass the result to `.connect()`
directly.
This commit adds a `family` option to the `net.connect` method to address
this issue.
This change is 100% backwards compatible.
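A usage sketch of the new option (host and port are hypothetical):

```js
var net = require('net');

// family: 6 is forwarded to dns.lookup(), so an AAAA record is resolved and
// the connection is made over IPv6 even when an A record also exists.
var socket = net.connect({ host: 'example.org', port: 80, family: 6 }, function() {
  console.log('connected to', socket.remoteAddress);
  socket.end();
});
```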
This change makes using `EventEmitter` slightly simpler / nicer and
adheres to the best practice set forth by substack.
```js
var EventEmitter = require("events")
var emitter = new EventEmitter()
```
The only difference is that we now have to set `EventEmitter` as a
property of `EventEmitter` for backwards compatibility, like we do with
[`Stream`][1].
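A usage sketch of the backwards-compatible shape this change proposes:

```js
var EventEmitter = require("events")                // new, simpler form
var LegacyEmitter = require("events").EventEmitter  // old form keeps working

console.log(EventEmitter === LegacyEmitter)         // true
new LegacyEmitter().on("ping", function() {
  console.log("pong")
}).emit("ping")
```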
We have also set the `usingDomains` property on the `EventEmitter`
constructor itself because that aligns with its current usage of
`require("events").usingDomains = true`.
There are other internals that would benefit from this change as well,
such as `StringDecoder`.
This allows automatically-inserted headers to be removed permanently by
calling OutgoingMessage.removeHeader() on them, as if they were normal
headers.
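A usage sketch (using 'Date' as the example; the exact set of
automatically-inserted headers this applies to isn't spelled out here):

```js
var http = require('http');

http.createServer(function(req, res) {
  // 'Date' is added automatically; after this change, removeHeader() keeps it
  // out of the response for good instead of it being silently re-added.
  res.removeHeader('Date');
  res.end('ok');
}).listen(8000);
```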