This is useful when we need to push some debugging messages out to
stderr, without going through the Writable class, or triggering any kind
of nextTick or callback behavior.
* Exit with an error message when the option is not a node or V8 option.
* Remove the option_end_index global. Needs to happen anyway for
the multi-context work, might as well land it in master now.
* Add a smidgen of const-correctness.
* Pay off a few years of accrued technical debt.
Don't wait a full second before starting the watcher; 10 ms ought to be
more than enough time. Reduces the running time from 1250 ms to 250 ms on
my system.
This change is not entirely ready for prime time: it's making ~50 tests
fail on Windows, mostly due to timeouts. It's up for debate who is
at fault here: node.js or libuv.
It does however expose a libuv bug on OS X, where the event loop
sometimes gets stuck in uv__io_poll() when there is a single
UV_SHUTDOWN request left in the queue. Needs further investigation.
This reverts commit 4915884da6.
Commit 556b890 added a call to uv_loop_delete() with the intent of
catching handle lifecycle bugs. It worked because it exposed one:
process.on('exit', function() {
console.log('bye'); // Asserts.
});
When run, it asserts with the following message:
Assertion failed: (!uv__has_active_reqs(loop)), function
uv__loop_delete, file ../deps/uv/src/unix/loop.c, line 150.
That's because libuv as of joyent/libuv@3f2d4d5 checks that there are
no in-flight requests when the event loop is destroyed. In the test
case above, the write request for the string hasn't completed yet by
the time node.js exits: the string itself has most likely been written
but libuv hasn't had the opportunity to return the write request to
node.js.
That's why this commit adds a cleanup step right before exit where it
explicitly closes all open handles, then waits until the event loop
exits naturally.
Named pipes (UNIX domain sockets) are shut down first in order to flush
pending write requests. This should go some way towards fixing the Windows
issue where output on stdout/stderr sometimes gets truncated.
Fixes joyent/libuv#911.
Passing a filename is still supported in place of certain options
arguments, for backward-compatibility, but timeout and display-errors
are not translated since those were undocumented.
Also managed to eliminate an extra stack trace line by not calling
through the `createScript` export.
Added a few message tests to show how `displayErrors` works.
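For example (a hedged sketch of the documented surface; the option names follow the text above):

    var vm = require('vm');
    // Options-object form:
    vm.runInThisContext('1 + 1', { filename: 'snippet.js', displayErrors: true });
    // Legacy form, still accepted: a bare filename in place of the options.
    vm.runInThisContext('1 + 1', 'legacy-snippet.js');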
In `Timer.now` always update the loop time by calling uv_update_time.
Previously we were trying to cache the loop time to prevent extra
syscalls. While a noble goal, it can cause timers to fire early in
certain circumstances. This is especially seen in CPU-bound workloads or
workloads with synchronous file operations.
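A hedged illustration of the failure mode (the 500 ms figure is arbitrary):

    // A long synchronous block leaves the cached loop time ~500 ms stale.
    var start = Date.now();
    while (Date.now() - start < 500) { /* busy wait */ }
    // Without uv_update_time in Timer.now, this timer would be stamped with
    // the stale time and could fire almost immediately instead of after the
    // requested 100 ms.
    setTimeout(function() {
      console.log('fired');
    }, 100);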
Previously, calling `vm.createContext(o)` repeatedly on the same `o`
would cause new C++ `ContextifyContext`s to be created and stored on
`o`, while the previous resident went off into leaked-memory limbo.
Now, repeatedly trying to contextify a sandbox will do nothing after
the first time.
To detect this, an independently-useful `vm.isContext(sandbox)` export
was added.
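For example:

    var vm = require('vm');
    var sandbox = { count: 0 };
    vm.createContext(sandbox);
    vm.createContext(sandbox);           // no-op: sandbox is already a context
    console.log(vm.isContext(sandbox));  // true
    console.log(vm.isContext({}));       // false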
Since the encoding is no longer relevant once it is decoded to a Buffer,
it is confusing and incorrect to pass the encoding as 'utf8' or whatever
in those cases.
Closes #6119
This is an important part of the repl use-case.
TODO: The arg parsing in vm.runIn*Context() is rather wonky.
It would be good to move more of that into the Script class,
and/or an options object.
As documented in #3042 and in [1], the existing vm implementation has
many problems. All of these are solved by @brianmcd's [contextify][2]
package. This commit uses contextify as a conceptual base and its code as
the core of an overhaul of the vm module, fixing its many edge cases and
caveats.
Functionally, this fixes #3042. In particular:
- A context is now indistinguishable from the object it is based on
(the "sandbox"). A context is simply a sandbox that has been marked
by the vm module, via `vm.createContext`, with special internal
information that allows scripts to be run inside of it.
- Consequently, items added to the context from anywhere are
immediately visible to all code that can access that context, both
inside and outside the virtual machine.
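A small example of what that buys (using only the documented API described here):

    var vm = require('vm');
    var sandbox = { x: 1 };
    vm.createContext(sandbox);
    vm.runInContext('x += 1; y = "set inside the vm"', sandbox);
    console.log(sandbox.x);  // 2
    console.log(sandbox.y);  // 'set inside the vm'
    sandbox.z = 'set outside';
    vm.runInContext('if (z !== "set outside") throw new Error("not shared")', sandbox);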
This commit also smooths over the API very slightly:
- Parameter defaults are now uniformly triggered via `undefined`, per
ES6 semantics and previous discussion at [3].
- Several undocumented and problematic features have been removed, e.g.
the conflation of `vm.Script` with `vm` itself, and the fact that
`Script` instances also had all static `vm` methods. The API is now
exactly as documented (although arguably the existence of the
`vm.Script` export is not yet documented, just the `Script` class
itself).
In terms of implementation, this replaces node_script.cc with
node_contextify.cc, which is derived originally from [4] (see [5]) but
has since undergone extensive modifications and iterations to expose
the most useful C++ API and use the coding conventions and utilities of
Node core.
The bindings exposed by `process.binding('contextify')`
(node_contextify.cc) replace those formerly exposed by
`process.binding('evals')` (node_script.cc). They are:
- ContextifyScript(code, [filename]), with methods:
  - runInThisContext()
  - runInContext(sandbox, [timeout])
- makeContext(sandbox)
From this, the vm.js file builds the entire documented vm module API.
node.js and module.js were modified to use this new native binding, or
the vm module itself where possible. This introduces an extra line or
two into the stack traces of module compilation (and thus into most
stack traces), explaining the changed tests.
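A hedged sketch of that layering (the internal call shapes are assumptions, not the actual lib/vm.js code):

    var binding = process.binding('contextify');

    exports.createContext = function(sandbox) {
      if (sandbox === undefined) sandbox = {};
      binding.makeContext(sandbox);  // mark the sandbox so scripts can run in it
      return sandbox;
    };

    exports.runInThisContext = function(code, options) {
      var script = new binding.ContextifyScript(code, options && options.filename);
      return script.runInThisContext();
    };

    exports.runInContext = function(code, sandbox, options) {
      var script = new binding.ContextifyScript(code, options && options.filename);
      return script.runInContext(sandbox, options && options.timeout);
    };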
The tests were also updated slightly, with all vm-related simple tests
consolidated as test/simple/test-vm-* (some of them were formerly
test/simple/test-script-*). At the same time they switched from
`common.debug` to `console.error` and were updated to use
`assert.throws` instead of rolling their own error-testing methods.
New tests were also added, of course, demonstrating the new
capabilities and fixes.
[1]: http://nodejs.org/docs/v0.10.16/api/vm.html#vm_caveats
[2]: https://github.com/brianmcd/contextify
[3]: https://github.com/joyent/node/issues/5323#issuecomment-20250726
[4]: bf123f3ef9/src/contextify.cc
[5]: https://gist.github.com/domenic/6068120
`dns.lookup` defaults to selecting an IPv4 record even if an IPv6 record is
available for the desired zone. Generally this approach works, but if no
IPv4 address is available, the only way to connect over IPv6 is to call
`dns.lookup` yourself and pass the resulting address to `.connect()`
directly.
This commit adds a `family` option to the `net.connect` method to address
this issue.
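For example:

    var net = require('net');
    // Ask dns.lookup for an AAAA record and connect over IPv6.
    var socket = net.connect({ host: 'example.com', port: 80, family: 6 }, function() {
      console.log('connected over IPv6');
    });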
It only fails once in about 1000 times, but that's too many.
It's timing dependent, and the main behavior is covered by the other
assertions in the test anyway.
This allows automatically-inserted headers to be removed permanently by
calling OutgoingMessage.removeHeader() on them, as if they were normal
headers.
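For example (the header names here are just the usual auto-generated ones):

    var http = require('http');
    http.createServer(function(req, res) {
      // Both of these would normally be added automatically at send time;
      // removing them now keeps them out of the response for good.
      res.removeHeader('Date');
      res.removeHeader('Connection');
      res.end('ok');
    }).listen(8000);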
In this situation:
writable.on('error', handler);
readable.pipe(writable);
writable.removeListener('error', handler);
writable.emit('error', new Error('boom'));
there is actually no error handler, but it doesn't throw, because of the
fix for stream.once('error', handler) in 23d92ec.
Note that simply reverting that change is not valid either, because
otherwise this will emit twice, being handled the first time, and then
throwing the second:
writable.once('error', handler);
readable.pipe(writable);
writable.emit('error', new Error('boom'));
Fix this with a horrible hack that ensures the stream pipe onerror handler
is added before any other userland handlers, so that our handler is not
affected by adding or removing any userland handlers.
Closes #6007.
Add range checks for the offset, length and port arguments to
dgram.Socket#send(). Fixes the following assertion:
node: ../../src/udp_wrap.cc:264: static v8::Handle<v8::Value>
node::UDPWrap::DoSend(const v8::Arguments&, int): Assertion
`offset < Buffer::Length(buffer_obj)' failed.
And:
node: ../../src/udp_wrap.cc:265: static v8::Handle<v8::Value>
node::UDPWrap::DoSend(const v8::Arguments&, int): Assertion
`length <= Buffer::Length(buffer_obj) - offset' failed.
Interestingly enough, a negative port number was accepted until now but
silently ignored. (In other words, it would send the datagram to a
random port.)
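A hedged example of the kind of call that used to trip those asserts and abort the process:

    var dgram = require('dgram');
    var sock = dgram.createSocket('udp4');
    var buf = new Buffer('hello');
    // offset (10) points past the end of the 5-byte buffer; the new range
    // checks reject this in JS before it ever reaches udp_wrap.
    sock.send(buf, 10, 5, 41234, '127.0.0.1');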
This commit exposed a bug in the simple/test-dgram-close test which
has also been fixed.
This is a back-port of commit 41ec6d0 from the master branch.
Fixes #6025.
On windows, libuv will immediately make a `ReadConsole` call (in the
thread pool) when a 'flowing' `uv_tty_t` handle is switched to
line-buffered mode. That causes an immediate issue for some users,
since libuv can't cancel the `ReadConsole` operation on Windows 8 /
Server 2012 and up if the program switches back to raw mode later.
But even if this gets fixed in libuv at some point, it's better to
avoid the overhead of starting work in the thread pool only to cancel it
immediately afterwards.
See also f34f1e3, where the same change is made for the opposite
flow, i.e. moving `resume()` after `_setRawMode(true)`.
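A minimal sketch of the ordering idea (the names below are illustrative, not node's actual readline internals):

    function setLineBuffered(tty) {
      tty.pause();           // stop the stream flowing first...
      tty.setRawMode(false); // ...then switch modes, so libuv never starts a
                             // ReadConsole call it may be unable to cancel.
    }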
Fixes #5927
This is a backport of dfb0461 (see #5930) to the v0.10 branch.
On windows, libuv will immediately make a `ReadConsole` call (in the
thread pool) when a 'flowing' `uv_tty_t` handle is switched to
line-buffered mode. That causes an immediate issue for some users,
since libuv can't cancel the `ReadConsole` operation on Windows 8 /
Server 2012 and up if the program switches back to raw mode later.
But even if this gets fixed in libuv at some point, it's better to
avoid the overhead of starting work in the thread pool only to cancel it
immediately afterwards.
See also f34f1e3, where the same change is made for the opposite
flow, i.e. moving `resume()` after `_setRawMode(true)`.
Fixes #5927
Closes #5930
In other Writable streams, the 'finish' event means that all of the data
was written, and flushed to the underlying system.
The 'prefinish' event means that end() was called, and all of the data
was processed, but not necessarily completely flushed.
This change brings the http OutgoingMessage classes more in sync with
the other Writable classes throughout Node.
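A hedged sketch of observing the two events on a response (assuming OutgoingMessage emits both, as described above):

    var http = require('http');
    http.createServer(function(req, res) {
      res.on('prefinish', function() {
        // end() has been called and the body fully processed,
        // but it may not have hit the socket yet.
      });
      res.on('finish', function() {
        // everything has been flushed to the underlying system.
      });
      res.end('hello');
    }).listen(8000);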
Unfortunately, this change highlights an issue with http
IncomingMessages, where the _dump() method will not actually pull the
data off the wire. This is a minor issue that is typically only
relevant in test cases, and will be addressed in the next commit.
This removes a dubious performance "optimization" where string body
chunks were concatenated to one another (and to the headers) without any
regard for their encoding.
v0.10 allows strings for the offset, length and port arguments to
dgram.send() and dgram.sendto() but master before this commit would
abort with the following assert:
node: ../../src/udp_wrap.cc:227: static void
node::UDPWrap::DoSend(const v8::FunctionCallbackInfo<v8::Value>&,
int): Assertion `args[2]->IsUint32()' failed.
Go beyond what v0.10 does and also add range checks: offset and length
should be >= 0, port should be between 1 and 65535.
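A hedged example of the call shapes involved:

    var dgram = require('dgram');
    var sock = dgram.createSocket('udp4');
    var buf = new Buffer('hello world');
    // v0.10-style string arguments keep working instead of tripping the
    // IsUint32 assert:
    sock.send(buf, '0', '5', '41234', 'localhost');
    // Out-of-range values are caught by the new checks (port must be
    // between 1 and 65535):
    sock.send(buf, 0, 5, 0, 'localhost');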
That particular change needs to be back-ported to v0.10 because passing
a negative offset or length aborts with one of the following assertions:
node: ../../src/udp_wrap.cc:264: static v8::Handle<v8::Value>
node::UDPWrap::DoSend(const v8::Arguments&, int): Assertion
`offset < Buffer::Length(buffer_obj)' failed.
Or:
node: ../../src/udp_wrap.cc:265: static v8::Handle<v8::Value>
node::UDPWrap::DoSend(const v8::Arguments&, int): Assertion
`length <= Buffer::Length(buffer_obj) - offset' failed.
Interestingly enough, a negative port number is accepted in v0.10 but
is silently ignored.
This commit exposed a bug in the simple/test-dgram-close test which
has also been fixed.
smalloc.alloc now accepts an optional third argument which allows
specifying the type of array that should be allocated. All available
types are now located on smalloc.Types.
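For example (assuming the module is loaded via `require('smalloc')` and that `Uint32` is among the available type names):

    var smalloc = require('smalloc');
    // Allocate external storage on a fresh object, typed as Uint32.
    var obj = smalloc.alloc(16, {}, smalloc.Types.Uint32);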
If an error listener is added to a stream using once() before it is
piped, it is invoked and removed during pipe() but before pipe() sees it,
which causes the error to be emitted again.
Fixes #4155, #4978
It shouldn't ignore it!
There are two possible cases, both of which should be handled properly:
1. Having a default `SNICallback` which is using contexts, added with
`server.addContext(...)` routine
2. Having a custom `SNICallback`.
In the first case we may want to opt out of setting the `.onsniselect`
method (and thus save some CPU time) if no contexts have been added. But if
a custom `SNICallback` is used, `.onsniselect` should always be set, because
the server's contexts don't affect it.
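A hedged sketch of the two cases (certificate paths are placeholders, and the exact SNICallback signature varied between releases):

    var fs = require('fs');
    var tls = require('tls');
    var defaults = {
      key: fs.readFileSync('default-key.pem'),
      cert: fs.readFileSync('default-cert.pem')
    };
    // Case 1: default SNICallback -- `.onsniselect` only pays off once a
    // context has actually been added.
    var server = tls.createServer(defaults);
    server.addContext('example.org', {
      key: fs.readFileSync('example-org-key.pem'),
      cert: fs.readFileSync('example-org-cert.pem')
    });
    // Case 2: custom SNICallback -- `.onsniselect` must always be set,
    // because contexts added via addContext() do not affect it.
    var custom = tls.createServer({
      key: defaults.key,
      cert: defaults.cert,
      SNICallback: function(servername) {
        // resolve and hand back credentials for servername here
      }
    });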
* Numeric values passed to alloc were converted to int32, not uint32,
before the range check, which allowed wrap-around via ToUint32. This
could cause massive malloc calls and V8 fatal errors.
* dispose would not check whether the value was an Object, causing a
segfault if a primitive was passed.
* kMaxLength was not enumerable.
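A hedged illustration of the inputs these fixes guard against (how exactly they are rejected now is not spelled out above):

    var smalloc = require('smalloc');
    // Previously -1 slipped past the int32 range check and wrapped around
    // via ToUint32 into a massive malloc:
    smalloc.alloc(-1);
    // Previously a primitive here caused a segfault instead of an error:
    smalloc.dispose('not an object');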