In other Writable streams, the 'finish' event means that all of the data
has been written and flushed to the underlying system.
The 'prefinish' event means that end() was called, and all of the data
was processed, but not necessarily completely flushed.
This change brings the http OutgoingMessage classes more in sync with
the other Writable classes throughout Node.
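A hedged illustration of that ordering on a plain fs.WriteStream (the path is just an example; exact timing depends on the underlying resource):

```javascript
var fs = require('fs');
var out = fs.createWriteStream('/tmp/example.txt');

out.on('prefinish', function() {
  // end() has been called and all data has been processed,
  // but it is not necessarily flushed yet.
  console.log('prefinish');
});
out.on('finish', function() {
  // All data has been written and flushed to the underlying system.
  console.log('finish');
});
out.end('hello\n');
```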
Unfortunately, this change highlights an issue with http
IncomingMessages, where the _dump() method will not actually pull the
data off the wire. This is a minor issue that is typically only
relevant in test cases, and will be addressed in the next commit.
This removes a dubious performance "optimization" where string body
chunks were concatenated to one another (and to the headers) without any
regard for their encoding.
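A small illustration of my own (not from the patch) of why the chunk encoding matters:

```javascript
// The same character occupies a different number of bytes (and different
// byte values) depending on the encoding, so string chunks cannot safely
// be concatenated and then encoded in a single pass.
console.log(Buffer.byteLength('é', 'utf8'));    // 2
console.log(Buffer.byteLength('é', 'binary'));  // 1
```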
Achieve a minor speed-up by looking up the timeout callback on the timer
object using an array index rather than a named property.
Gives a performance boost of about 1% on the misc/timers benchmarks.
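A hypothetical sketch of the pattern; the names below are illustrative, not the actual lib/timers.js code:

```javascript
// Store the timeout callback at a fixed numeric slot on the timer object;
// indexed lookups are cheaper for the native layer than named properties.
var kOnTimeout = 0;

function TimerSketch() {}
TimerSketch.prototype.start = function(cb) {
  this[kOnTimeout] = cb;   // instead of e.g. this._onTimeout = cb
};
TimerSketch.prototype.fire = function() {
  this[kOnTimeout]();      // what the timer wrap would invoke on expiry
};

var t = new TimerSketch();
t.start(function() { console.log('timed out'); });
t.fire();
```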
Don't set the oncomplete property in src/cares_wrap.cc; we can do it
just as easily in lib/dns.js.
Switch two closures to the 'function with _this_ object' model. Makes
it impossible for an overzealous closure to capture too much context
and accidentally hold on to too much memory.
Change process.domain to use a getter/setter and access that property
via an array index. Array indices are much faster to access from C++,
and the array can be passed to _setupDomainUse and stored as a
Persistent<Array>.
Add InDomain() and GetDomain() as trivial ways to access the domain
information in the native layer. This is important because it lets us
quickly check whether a domain is active, instead of just whether the
domain module has been loaded.
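A minimal sketch of the approach, assuming a single array slot backs the property (names are illustrative, not the actual lib/domain.js code):

```javascript
var domainArray = [null];
var processSketch = {};   // stand-in for the real `process` object

// process.domain becomes an accessor over slot 0 of an array.
Object.defineProperty(processSketch, 'domain', {
  get: function() { return domainArray[0]; },
  set: function(d) { domainArray[0] = d; }
});

// The same array could then be passed to the native layer (the commit
// mentions _setupDomainUse) and stored as a Persistent<Array>, so C++
// can check whether a domain is active with a single indexed read.
```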
Don't use v8::Object::SetHiddenValue() to keep a reference to the buffer
alive; we can just as easily do that from JS land, and it's a lot faster
to boot.
Because the buffer is now a visible property of the write request
object, it's essential that we do *not* log it - we'd be effectively
serializing the whole buffer to a pretty-printed string.
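A rough sketch of the JS-land approach with illustrative names (the native writeBuffer call below is hypothetical):

```javascript
// Keep the buffer alive by making it a plain property of the write
// request object instead of a V8 hidden value set on the C++ side.
function queueWrite(handle, buffer) {
  var req = handle.writeBuffer(buffer);  // hypothetical native call
  req.buffer = buffer;                   // visible reference keeps it alive
  return req;                            // never pretty-print req: it now
}                                        // carries the whole buffer
```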
v0.10 allows strings for the offset, length and port arguments to
dgram.send() and dgram.sendto() but master before this commit would
abort with the following assert:
node: ../../src/udp_wrap.cc:227: static void
node::UDPWrap::DoSend(const v8::FunctionCallbackInfo<v8::Value>&,
int): Assertion `args[2]->IsUint32()' failed.
Go beyond what v0.10 does and also add range checks: offset and length
should be >= 0, port should be between 1 and 65535.
That particular change needs to be back-ported to v0.10 because passing
a negative offset or length aborts with one of the following assertions:
node: ../../src/udp_wrap.cc:264: static v8::Handle<v8::Value>
node::UDPWrap::DoSend(const v8::Arguments&, int): Assertion
`offset < Buffer::Length(buffer_obj)' failed.
Or:
node: ../../src/udp_wrap.cc:265: static v8::Handle<v8::Value>
node::UDPWrap::DoSend(const v8::Arguments&, int): Assertion
`length <= Buffer::Length(buffer_obj) - offset' failed.
Interestingly enough, a negative port number is accepted in v0.10 but
is silently ignored.
This commit exposed a bug in the simple/test-dgram-close test which
has also been fixed.
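An illustrative sketch of the coercion plus range checks described above (not the exact lib/dgram.js code):

```javascript
function validateSendArgs(buf, offset, length, port) {
  // v0.10 compatibility: strings are allowed, so coerce to numbers first.
  offset = +offset;
  length = +length;
  port = +port;

  if (!(offset >= 0) || offset >= buf.length)
    throw new RangeError('offset out of range');
  if (!(length >= 0) || offset + length > buf.length)
    throw new RangeError('length out of range');
  if (!(port >= 1 && port <= 65535))
    throw new RangeError('port should be between 1 and 65535');

  return { offset: offset, length: length, port: port };
}
```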
When a stream is flowing, not in the middle of a sync read, and the read
buffer currently has a length of 0, we can just emit a 'data' event for
the chunk rather than push it onto the array, emit 'readable', and then
automatically call read().
As it happens, this is quite a frequent occurrence! Making this change
brings the HTTP benchmarks back into a good place after the removal of
the .ondata/.onend socket kludge methods.
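A simplified sketch of that fast path; the real logic in _stream_readable.js tracks more state than this:

```javascript
// When flowing, not inside a synchronous _read, and the internal buffer is
// empty, hand the chunk straight to 'data' listeners instead of buffering.
function addChunkSketch(stream, state, chunk) {
  if (state.flowing && !state.sync && state.length === 0) {
    stream.emit('data', chunk);   // fast path: skip buffering entirely
  } else {
    state.buffer.push(chunk);     // slow path: buffer the chunk,
    state.length += chunk.length;
    stream.emit('readable');      // then signal consumers to call read()
  }
}
```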
smalloc.alloc now accepts an optional third argument which allows
specifying the type of array that should be allocated. All available
types are now located on smalloc.Types.
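A usage sketch based on the description above; how the module is exposed and the exact members of smalloc.Types may differ between builds:

```javascript
var smalloc = require('smalloc');

// Allocate room for 4 elements of external memory backed by a Uint32
// array type, attached to a fresh receiver object.
var obj = smalloc.alloc(4, {}, smalloc.Types.Uint32);
```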
* Moved the ToObject check out of smalloc::Alloc and into JS. Direct
usage of that method is for internal use only and so can bypass the
possible coercion.
* The same has been done with smalloc::SliceOnto.
* smalloc::CopyOnto will now throw if the passed argument is not an object.
* Remove the extra TargetFreeCallback function. There was a use for it
when it was working with a Local<T>, but that code has been removed,
making the function superfluous.
There are some agent subclasses using this today.
Despite the addRequest function being an undocumented internal API, it's
easy enough to just support the old signature for backwards
compatibility.
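A hedged sketch of the kind of shim this implies (illustrative, not the exact lib/_http_agent.js code); the new style passes an options object, the legacy style passes host, port and localAddress positionally:

```javascript
// Normalize both addRequest() call styles onto a single options object.
function normalizeAddRequestArgs(options, port, localAddress) {
  if (typeof options === 'string') {
    // Legacy signature: (host, port, localAddress)
    return { host: options, port: port, localAddress: localAddress };
  }
  return options;  // new signature already supplies an options object
}
```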
`server.SNICallback` was initialized with `SNICallback.bind(this)`, so
the check `this.SNICallback === SNICallback` was always false and
`_tls_wrap.js` always thought that a custom callback was in use instead
of the default one. This in turn caused the clienthello parser to be
enabled regardless of the presence of SNI contexts.
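The root cause in two lines: bind() returns a new function object, so the identity comparison can never succeed.

```javascript
function SNICallback() {}
var bound = SNICallback.bind({});
console.log(bound === SNICallback);  // false, so the "is this the default
                                     // callback?" check always failed
```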
If an error listener is added to a stream using once() before the stream
is piped, it is invoked and removed during pipe() but before pipe() sees
it, which causes the error to be emitted again.
Fixes #4155 and #4978
It shouldn't ignore it!
There are two possible cases, which should be handled properly:
1. Having a default `SNICallback` which uses contexts added with the
`server.addContext(...)` routine
2. Having a custom `SNICallback`.
In the first case we may want to opt out of setting the `.onsniselect`
method (and thus save some CPU time) if no contexts have been added.
But if a custom `SNICallback` is used, `.onsniselect` should always be
set, because server contexts don't affect it.
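A sketch of that decision with illustrative names (not the exact lib/_tls_wrap.js code):

```javascript
function defaultSNICallbackSketch() { /* would look up server contexts */ }

function shouldHookSniSelect(server) {
  if (server.SNICallback !== defaultSNICallbackSketch)
    return true;  // case 2: custom callback, always set .onsniselect
  // case 1: the default callback only consults contexts added via
  // server.addContext(), so skip the hook (and save CPU) when there are none.
  return server._contexts.length > 0;
}
```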
* Numeric values passed to alloc were converted to int32, not uint32,
before the range check, which allowed wraparound on ToUint32 (see the
sketch after this list). This could cause massive malloc calls and v8
fatal errors.
* dispose would not check whether the value was an Object, causing a
segfault if a Primitive was passed.
* kMaxLength was not enumerable.
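The sketch referenced in the first item, showing why the int32 view hides the problem:

```javascript
// A negative length looks harmless as an int32 but becomes a huge uint32,
// which is what the allocator ultimately sees.
console.log(-1 | 0);    // -1          (int32 interpretation)
console.log(-1 >>> 0);  // 4294967295  (uint32 interpretation after wraparound)
```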
Before this commit, events were set to undefined rather than deleted
from the EventEmitter's backing dictionary for performance reasons:
`delete obj.key` causes a transition of the dictionary's hidden class
and that can be costly.
Unfortunately, that introduces a memory leak when many events are added
and then removed again. The strings containing the event names are never
reclaimed by the garbage collector because they remain part of the
dictionary.
That's why this commit makes EventEmitter delete events again. This
effectively reverts commit 0397223.
Fixes #5970.
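A minimal sketch of the change (not the exact lib/events.js code):

```javascript
function removeAllListenersSketch(emitter, type) {
  // Before: emitter._events[type] = undefined;  // key string never reclaimed
  delete emitter._events[type];                  // after: the event-name string
}                                                // can be garbage collected
```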
Avoid a costly buffer-to-string operation. Instead, allocate a new
buffer, copy the chunk header and data into it and send that.
The speed difference is negligible on small payloads but it really
shines with larger (10+ kB) chunks. benchmark/http/end-vs-write-end
with 64 kB chunks gives 45-50% higher throughput. With 1 MB chunks,
the difference is a staggering 590%.
Of course, mileage will vary with real workloads and networks, but this
commit should have a positive impact on CPU and memory consumption.
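A hedged sketch of the idea (not the exact lib/_http_outgoing.js code): build one buffer holding the chunk-size line, the payload and the trailing CRLF, copying the data instead of stringifying it.

```javascript
function chunkedFrame(data) {  // data is a Buffer
  var header = data.length.toString(16) + '\r\n';
  var frame = new Buffer(header.length + data.length + 2);
  frame.write(header, 0, header.length, 'ascii');   // chunk-size line
  data.copy(frame, header.length);                  // payload: copied, not stringified
  frame.write('\r\n', header.length + data.length, 2, 'ascii');  // chunk terminator
  return frame;
}
```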
Big kudos to Wyatt Preul (@wpreul) for reporting the issue and providing
the initial patch.
Fixes #5941 and #5944.
Don't throw an exception when the argument to %j is an object that
contains circular references; it's not helpful. Catch the exception
and return the string '[Circular]'.
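A minimal sketch of that behaviour (the real code lives in lib/util.js):

```javascript
function formatJSONSketch(value) {
  try {
    return JSON.stringify(value);
  } catch (err) {
    return '[Circular]';   // e.g. when the object references itself
  }
}

var obj = {};
obj.self = obj;
console.log(formatJSONSketch(obj));  // '[Circular]'
```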
Previously, strings would first be converted to a Buffer before being
written to disk. Now that intermediate step has been removed.
Other changes of note:
* A class member "must_free" was added to req_wrap to track whether the
memory needs to be manually cleaned up after use.
* External String Resource support, so the memory will be used directly
instead of copying out the data.
* Docs have been updated to reflect that if position is not a number
then it is assumed to be null. Previously they specified that the
argument must be null, but that was not how the code worked. An attempt
was made to only support == null, but too many tests assumed that
anything other than a number would be enough.
* Docs have also been updated to show that some of the write/writeSync
arguments are optional.
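A usage sketch of writing a string directly; the optional-argument handling shown here is my reading of the docs above, and the path is just an example:

```javascript
var fs = require('fs');

fs.open('/tmp/example.txt', 'w', function(err, fd) {
  if (err) throw err;
  // The string is handed to the binding as-is instead of being converted
  // to a Buffer first; a non-number position is treated as null
  // (i.e. write at the current position).
  fs.write(fd, 'hello world\n', null, 'utf8', function(err, written) {
    if (err) throw err;
    fs.close(fd, function() {});
  });
});
```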
Passing the number of sent bytes to the callback is superfluous;
datagram sockets operate in atomic mode: either the sendmsg() system
call succeeds or it fails but it never does partial writes.
Instead, report send errors to the callback. UDP error reporting is
fairly haphazard on most platforms. You should not expect reliable
delivery of anything besides EMSGSIZE and (possibly) ENETDOWN and
ENETUNREACH.
Fixes #2608.
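A usage sketch reflecting the new callback contract: the callback receives an error (or null), not a byte count.

```javascript
var dgram = require('dgram');
var socket = dgram.createSocket('udp4');
var msg = new Buffer('ping');

socket.send(msg, 0, msg.length, 41234, 'localhost', function(err) {
  if (err) console.error('send failed:', err);  // e.g. EMSGSIZE
  socket.close();
});
```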
This prevents the following sort of thing from being confusing:
```javascript
stream.on('data', function() { console.error('got data'); });
stream.pause(); // stop reading
// turns out no data is available
stream.push(null);
// Hand the stream to someone else, who does stuff...
setTimeout(function() {
  // too late! 'end' is already emitted!
  stream.on('end', function() { console.error('got end'); });
});
```
With this change, the `end` event is not emitted until you call `read()`
*past* the EOF null. So, a paused stream will not swallow the `end`
event and emit it before you `resume()` the stream.
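For contrast, a hedged sketch of the same sequence after the change, using a trivial Readable (details may vary by version):

```javascript
var Readable = require('stream').Readable;

var stream = new Readable();
stream._read = function() {};  // no-op source

stream.on('data', function() {});
stream.pause();                // stop reading
stream.push(null);             // EOF is recorded, but 'end' is not emitted yet

setTimeout(function() {
  // Not too late any more: 'end' only fires once something reads past EOF.
  stream.on('end', function() { console.error('got end'); });
  stream.resume();             // reads past the EOF null and emits 'end'
});
```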
If the `obj` given to `cluster._getServer` has `_setServerData` or
`_getServerData` methods, the data will be synchronized across workers
and stored in the master.
Includes:
* No need for `typeof` when checking for undefined.
* length is coerced to a uint, so there is no need to check if it is < 0.
* Stay consistent and always throw `new` errors.
* Returning offset + a magic number in every write is error prone.
Instead, return the result of the central write function, which returns
the correct offset (see the sketch after this list).
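The sketch referenced in the last item, with hypothetical helper names:

```javascript
// Each typed write delegates to one central routine that returns the
// updated offset, instead of every caller doing `return offset + <magic>`.
function writeBytes(buf, bytes, offset) {
  for (var i = 0; i < bytes.length; i++)
    buf[offset + i] = bytes[i];
  return offset + bytes.length;   // the single source of truth for the offset
}

function writeUInt16BESketch(buf, value, offset) {
  return writeBytes(buf, [(value >>> 8) & 0xff, value & 0xff], offset);
}
```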