It shouldn't ignore it!
There are two possible cases, which should be handled properly:
1. Having a default `SNICallback` which is using contexts, added with
`server.addContext(...)` routine
2. Having a custom `SNICallback`.
In the first case we may want to opt out of setting the `.onsniselect`
method (and thus save some CPU time) if there are no contexts added. But if
a custom `SNICallback` is used, `.onsniselect` should always be set, because
server contexts don't affect it.
* Numeric values passed to alloc were converted to int32, not uint32, before
the range check, which allowed values to wrap around on ToUint32. This
would cause massive malloc calls and v8 fatal errors.
* dispose would not check if value was an Object, causing segfault if a
Primitive was passed.
* kMaxLength was not enumerable.
Don't throw an exception when the argument to %j is an object that
contains circular references; it's not helpful. Catch the exception
and return the string '[Circular]' instead.
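A minimal sketch of the resulting behavior (the object and format string here
are just illustrative):

```javascript
var util = require('util');

// An object with a circular reference: JSON.stringify() would throw here.
var obj = {};
obj.self = obj;

// The %j handler catches that exception and substitutes '[Circular]'.
console.log(util.format('payload: %j', obj)); // payload: [Circular]
```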
Previously, strings would first be converted to a Buffer before being written
to disk. Now that intermediary step has been removed.
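A rough sketch of the string path, assuming the
fs.write(fd, string[, position[, encoding]], callback) form described here
(the file path is just an example):

```javascript
var fs = require('fs');

fs.open('/tmp/example.txt', 'w', function(err, fd) {
  if (err) throw err;
  // position is not a number here, so it is treated as null; the string is
  // written directly without the intermediate Buffer conversion.
  fs.write(fd, 'hello world\n', null, 'utf8', function(err, written, string) {
    if (err) throw err;
    fs.close(fd, function() {});
  });
});
```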
Other changes of note:
* Class member "must_free" was added to req_wrap to track whether the
memory needs to be manually cleaned up after use.
* External String Resource support, so the memory will be used directly
instead of copying out the data.
* Docs have been updated to reflect that if position is not a number
then it will be treated as null. Previously they specified the argument must
be null, but that was not how the code worked. An attempt was made to
only support `== null`, but there were too many tests that assumed any
non-number value would be enough.
* Docs have been updated to show that some of the write/writeSync arguments
are optional.
Passing the number of sent bytes to the callback is superfluous;
datagram sockets operate in atomic mode: either the sendmsg() system
call succeeds or it fails but it never does partial writes.
Instead, report send errors to the callback. UDP error reporting is
fairly haphazard on most platforms. You should not expect reliable
delivery of anything besides EMSGSIZE and (possibly) ENETDOWN and
ENETUNREACH.
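A minimal sketch of the resulting callback contract, assuming the usual
dgram.send(buf, offset, length, port, address, callback) signature (the port
and host are placeholders):

```javascript
var dgram = require('dgram');

var socket = dgram.createSocket('udp4');
var message = new Buffer('hello');

socket.send(message, 0, message.length, 41234, 'localhost', function(err) {
  // No bytes-sent argument: the send is atomic, so the callback only
  // reports errors (and only a few of those are reliably delivered).
  if (err) console.error('send failed:', err);
  socket.close();
});
```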
Fixes #2608.
This prevents the following sort of thing from being confusing:
```javascript
stream.on('data', function() { console.error('got data'); });
stream.pause(); // stop reading
// turns out no data is available
stream.push(null);
// Hand the stream to someone else, who does stuff...
setTimeout(function() {
  // too late! 'end' is already emitted!
  stream.on('end', function() { console.error('got end'); });
});
```
With this change, the `end` event is not emitted until you call `read()`
*past* the EOF null. So, a paused stream will not swallow the `end`
event and emit it before you `resume()` the stream.
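A rough sketch of the new behavior under that change (a Readable driven by
hand for illustration):

```javascript
var Readable = require('stream').Readable;

var stream = new Readable();
stream._read = function() {}; // data is pushed manually below

stream.pause();     // stop reading
stream.push(null);  // EOF arrives while the stream is paused

setTimeout(function() {
  // No longer too late: 'end' has not fired yet.
  stream.on('end', function() { console.error('got end'); });
  stream.read(); // reading past the EOF null is what emits 'end'
});
```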
If the `obj` given to `cluster._getServer` has `_setServerData` or
`_getServerData` methods, the data will be synchronized across workers
and stored in the master.
Includes:
* No need for `typeof` when checking undefined.
* length is coerced to uint so no need to check if < 0.
* Stay consistent and always throw `new` errors.
* Returning offset + a magic number in every write is error prone. Instead,
return the call to the central write function, which returns the correct
offset.
In a rush to implement the fix in 35e0d60 I overlooked the logic that
causes zero-length Buffer instantiation to skip assigning the parent
regardless.
SlowBuffer(0) passes NULL instead of doing malloc(0). So when someone
attempted to SlowBuffer(0).slice(0, 1) an assert would fail in
smalloc::SliceOnto.
It's important that the check go where it is because the resulting
Buffer needs to have external array data allocated. In the case where a user
tries to slice a zero-length Buffer, it will also have NULL passed as the
data argument.
Also fixed where the .parent attribute was set for zero length Buffers.
There is no need to track the source of slice if the slice isn't
actually occurring.
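A sketch of the reported case (the slice arguments mirror the description
above):

```javascript
var SlowBuffer = require('buffer').SlowBuffer;

// SlowBuffer(0) has NULL external data instead of a malloc(0) pointer, so
// slicing it used to trip the assert in smalloc::SliceOnto. With the check
// in place it simply yields another zero-length Buffer.
var empty = new SlowBuffer(0).slice(0, 1);
console.log(empty.length); // 0
```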
Closes #5860
In streams2, there is an "old mode" for compatibility. Once switched
into this mode, there is no going back.
With this change, there is a "flowing mode" and a "paused mode". If you
add a data listener, then this will start the flow of data. However,
hitting the `pause()` method will switch *back* into a non-flowing mode,
where the `read()` method will pull data out.
Every time `read()` returns a data chunk, it also emits a `data` event.
In this way, a passive data listener can be added, and the stream passed
off to some other reader, for use with progress bars and the like.
There is no API change beyond this added flexibility.
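A small sketch of how the two modes interact (chunks pushed by hand for
illustration):

```javascript
var Readable = require('stream').Readable;

var stream = new Readable();
stream._read = function() {};
stream.push('a');
stream.push('b');

// A 'data' listener starts flowing mode; passive listeners keep working
// even after the stream is handed off to another reader.
stream.on('data', function(chunk) { console.log('data:', chunk.toString()); });

// pause() switches back into non-flowing mode...
stream.pause();

// ...where read() pulls the data out; whatever read() returns is also
// emitted as a 'data' event.
var chunk;
while ((chunk = stream.read()) !== null) {
  console.log('read:', chunk.toString());
}
```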
Libuv now returns errors directly. Make everything in src/ and lib/
follow suit.
The changes to lib/ are not strictly necessary but they remove the need
for the abominations that are process._errno and node::SetErrno().
It would be confusing if later on we add Buffer#dispose(), and smalloc is
its own C++ API anyway. So instead, create a new require('smalloc') to
expose the previous Buffer.alloc/dispose methods, and expose copyOnto
and kMaxLength as well.
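A minimal sketch of the new module, assuming its surface mirrors the previous
Buffer.alloc/dispose plus the copyOnto and kMaxLength exports mentioned above:

```javascript
var smalloc = require('smalloc');

var a = smalloc.alloc(4);          // object backed by external raw memory
var b = smalloc.alloc(4);

smalloc.copyOnto(a, 0, b, 0, 4);   // copy raw bytes between allocations
console.log(smalloc.kMaxLength);   // maximum allocation length

smalloc.dispose(a);                // manually release the external memory
smalloc.dispose(b);
```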
Other changes:
* Added documentation and additional tests.
* smalloc::CopyOnto has changed from using assert() to throwing errors
on bad argument values because it is now exposed to the user.
* Minor style fixes.
When using url.parse(), path and pathname usually return '/' when there
is no path available. However, when you have a protocol that contains
non-lowercase letters and the input string does not have a trailing
slash, both path and pathname will be undefined.
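An illustration of the reported case (the expectations in the comment follow
the description above):

```javascript
var url = require('url');

// Non-lowercase protocol, no trailing slash: path and pathname used to be
// undefined here; with the fix they fall back to '/' as in other parses
// with no path available.
var parsed = url.parse('HTTP://example.com');
console.log(parsed.path, parsed.pathname);
```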
A pooled https agent may get a Connection: close, but never finish
destroying the socket, as the prior request had already emitted 'finish',
likely from a pipe.
Since the socket is not marked as destroyed it may get reused by the
agent pool and result in an ECONNRESET.
re: #5712, #5739
Instead of destroying sockets when there are no pending requests, put
them in a freeSockets list, and unref() them so that they do not keep
the event loop open.
Also, set the default max sockets to Infinity, to prevent the awful,
surprising deadlocks that happen when more connections are needed than the
limit allows.
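A sketch of opting into the free-socket behavior; the keepAlive and
maxFreeSockets option names are assumptions about this Agent API:

```javascript
var http = require('http');

var agent = new http.Agent({
  keepAlive: true,     // idle sockets go to the freeSockets list, unref()'d
  maxFreeSockets: 256  // maxSockets itself now defaults to Infinity
});

http.get({ host: 'example.com', path: '/', agent: agent }, function(res) {
  res.resume(); // drain the response so the socket can be reused
});
```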
When creating a slice, make sure to propagate the originating parent.
This is to prevent a buf.parent.parent.(etc) scenario.
Also speed up the constructor by preventing lookup of non-existent
properties by setting them beforehand in the prototype. (see
https://github.com/joyent/node/commit/7ce5a31#commitcomment-3332779)
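A sketch of the propagation rule (the .parent comparison is an assumption
about the internal bookkeeping this change adjusts):

```javascript
var buf = new Buffer(64);
var slice = buf.slice(0, 32);
var nested = slice.slice(0, 16);

// Slices of slices point at the same originating parent rather than at
// each other, so there is never a buf.parent.parent chain to walk.
console.log(nested.parent === slice.parent); // expected: true
```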
There was previously an exit delay of up to one second when exiting node
right after an http request/response, due to the utcDate() function
doing a setTimeout to update the cached date/time.
Fixing this should increase the performance of our http tests.
This is a big commit that touches just about every file in the src/
directory. The V8 API has changed in significant ways. The most
important changes are:
* Binding functions take a const v8::FunctionCallbackInfo<T>& argument
rather than a const v8::Arguments& argument.
* Binding functions return void rather than v8::Handle<v8::Value>. The
return value is returned with the args.GetReturnValue().Set() family
of functions.
* v8::Persistent<T> no longer derives from v8::Handle<T> and no longer
allows you to directly dereference the object that the persistent
handle points to. This means that the common pattern of caching
oft-used JS values in a persistent handle no longer quite works; you
first need to reconstruct a v8::Local<T> from the persistent
handle with the Local<T>::New(isolate, persistent) factory method.
A handful of (internal) convenience classes and functions have been
added to make dealing with the new API a little easier.
The most visible one is node::Cached<T>, which wraps a v8::Persistent<T>
with some template sugar. It can hold arbitrary types but so far it's
exclusively used for v8::Strings (which was by far the most commonly
cached handle type).
If a transform stream has objectMode = true, it should
allow falsy values other than null, like 0, false, and ''.
null is reserved to indicate stream EOF, but other falsy
values should flow through properly.
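A small sketch of the expected behavior with an objectMode transform (the
pass-through transform here is just illustrative):

```javascript
var Transform = require('stream').Transform;

var pass = new Transform({ objectMode: true });
pass._transform = function(chunk, encoding, done) {
  this.push(chunk); // falsy values like 0, false, '' flow through
  done();
};

pass.on('data', function(value) { console.log('got:', value); });
pass.write(0);
pass.write(false);
pass.write('');
pass.end(); // only null/EOF ends the stream
```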