Make lines ending in \r\n emit one 'line' event, not two (where the second
one is an empty string).
This adds a new keypress name: 'return' (as in: 'carriage return').
Fixes #3305.
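A minimal sketch of the fixed behavior (the PassThrough input is just for
illustration):

```js
var readline = require('readline');
var PassThrough = require('stream').PassThrough;

var input = new PassThrough();
var rl = readline.createInterface({ input: input, terminal: false });

rl.on('line', function(line) {
  console.log('line: %j', line);
});

// Emits a single 'line' event ("hello"), not a second, empty one.
input.write('hello\r\n');
```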
This commit breaks simple/test-stream2-stderr-sync. Need to figure out
a better way, or just accept that `(function W(){stream.write(b,W)})()`
is going to be noisy. People should really be using the `'drain'` event
for this use-case anyway.
This reverts commit 02f7d1bfd8.
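For reference, a minimal sketch of the 'drain'-based pattern (function and
variable names are made up for illustration):

```js
// Stop writing when write() returns false; resume on 'drain' instead of
// re-issuing writes from the write callback.
function writeAll(stream, chunk, count, done) {
  var i = 0;
  function write() {
    while (i < count) {
      i++;
      if (!stream.write(chunk)) {
        stream.once('drain', write); // resume once the buffer empties
        return;
      }
    }
    done();
  }
  write();
}
```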
Always defer the _write callback. The optimization here was only
relevant in some oddball edge cases that we don't actually care about.
Our benchmarks confirm that just always deferring the Socket._write cb
is perfectly fine to do, and in some cases, even slightly more
performant.
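In _write terms, "always defer" means something like this sketch (an
illustrative Writable subclass, not the actual Socket._write):

```js
var Writable = require('stream').Writable;
var util = require('util');

function DeferredWritable(opts) {
  Writable.call(this, opts);
}
util.inherits(DeferredWritable, Writable);

DeferredWritable.prototype._write = function(chunk, encoding, cb) {
  // ... hand the chunk to the underlying resource here ...
  process.nextTick(cb); // always defer; never call cb() synchronously
};
```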
When a datagram socket hasn't been bound yet, node will defer `send()`
operations until binding has completed. Before this patch a `listening`
listener would be installed every time `send` was called. This triggered
an EventEmitter leak warning when more than 10 packets were sent in a
tight loop. Therefore switch to using a single `listening` listener, and
use an array to enqueue outbound packets.
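A sketch of the scenario this fixes (addresses and ports are arbitrary):

```js
var dgram = require('dgram');

var socket = dgram.createSocket('udp4');
var buf = new Buffer('x');

// The socket is not bound yet, so every send() below is deferred.
// Previously each one installed its own 'listening' listener, tripping
// the EventEmitter leak warning after 10 sends; now a single listener
// flushes a queue of pending packets.
for (var i = 0; i < 100; i++)
  socket.send(buf, 0, buf.length, 12345, '127.0.0.1');
```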
The refactor in b43e544140 to use
stream.push() in Transform inadvertently caused it to immediately
consume all the written data, regardless of whether or not the readable
side was being consumed.
Only pull data through the _transform() process when the readable side
is being consumed.
Fix #4667
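A sketch of the symptom (sizes are arbitrary):

```js
var Transform = require('stream').Transform;

var t = new Transform();
var transformed = 0;
t._transform = function(chunk, encoding, cb) {
  transformed += chunk.length;
  cb(null, chunk);
};

for (var i = 0; i < 10; i++)
  t.write(new Buffer(64));

// Before the fix, all 640 bytes went through _transform immediately.
// After it, data is only pulled through once the readable side is
// consumed, e.g. via t.read() or t.pipe(dest).
```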
TCPWrap::Initialize() and PipeWrap::Initialize() should be called before
any data is read from a received socket. But because these bindings are
initialized lazily, Initialize() may not have been called at that point.
Initialize the bindings manually upon receiving a socket.
See #4669
mainly to allow native addons to export single functions on
`module.exports` rather than being restricted to operating on an existing
`exports` object.
Init functions now receive exports as the first argument, like
before, but also the module object as the second argument, if they
support it.
Related to #4634
cc: @rvagg
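On the JavaScript side the effect looks like this (the addon path and name
are hypothetical):

```js
// The addon's init function assigned a function to module.exports, so
// the require() result is directly callable instead of being a plain
// exports object with properties.
var doWork = require('./build/Release/do_work');
doWork(42);
```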
Fix issue where SlowBuffers couldn't be passed as target to Buffer
copy().
Also included checks to see if Argument parameters are defined before
assigning their values. This offered a ~3x performance gain.
Backport of 16bbecc from master branch. Closes #4633.
Changed the types of errors thrown to be more indicative of what the error
represents. Also removed a few unnecessary uses of the V8 fully
qualified typename.
Argument checks were simplified by setting all undefined/NaN or out of
bounds values equal to their defaults.
Also, the copy() tests had a flaw: each buffer had the same bit pattern at
the same offset, so even if the copy failed, the bit-by-bit comparison
would still have passed. This was fixed by filling each buffer with a
unique value before the copy operations.
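The test fix amounts to something like this sketch:

```js
var assert = require('assert');

var b = new Buffer(1024);
var c = new Buffer(512);

// Distinct bit patterns per buffer: a failed copy can no longer pass
// the bit-by-bit comparison by accident.
for (var i = 0; i < b.length; i++) b[i] = i % 256;
for (var j = 0; j < c.length; j++) c[j] = (j + 97) % 256;

b.copy(c, 0, 0, c.length);
for (var k = 0; k < c.length; k++) assert.equal(c[k], b[k]);
```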
Fix issue where SlowBuffers couldn't be passed as target to Buffer
copy().
Also included checks to see if Argument parameters are defined before
assigning their values. This offered a ~3x performance gain.
This adds a proxy for bytesWritten to the tls.CryptoStream. This
change makes the connection object more similar between HTTP and
HTTPS requests in an effort to avoid confusion.
See issue #4650 for more background information.
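A sketch of the now-uniform usage (certificate paths are hypothetical):

```js
var fs = require('fs');
var https = require('https');

var tlsOptions = {
  key: fs.readFileSync('server-key.pem'),
  cert: fs.readFileSync('server-cert.pem')
};

https.createServer(tlsOptions, function(req, res) {
  res.end('hello world\n');
  // req.connection is a tls.CryptoStream here; bytesWritten now reports
  // a value just like a plain http socket's would.
  console.log('bytesWritten so far:', req.connection.bytesWritten);
}).listen(8000);
```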
We detect non-string and non-buffer values in onread and
turn the stream into an "objectMode" stream.
If we are in "objectMode", then howMuchToRead will
always return 1, state.length will always be incremented by 1
when there is a new item, and fromList always takes
the first value from the list.
This means that for object streams, the n in read(n) is
ignored and read() will always return a single value.
Fixed a bug with unpipe where the pipe would break because
the flowing state was not reset to false.
Fixed a bug where a sync cb(null, null) in _read would
fail to end the readable stream.
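A minimal sketch of the resulting semantics, assuming object values are
pushed as described above:

```js
var Readable = require('stream').Readable;

var r = new Readable();
r._read = function() {};
r.push({ id: 1 }); // non-string/non-buffer value: stream becomes objectMode
r.push({ id: 2 });

console.log(r.read(100)); // { id: 1 } -- the n in read(n) is ignored
console.log(r.read());    // { id: 2 } -- one value per read()
```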
This is similar to commit 2cbf458 but this time for 204 No Content
instead of 304 Not Modified responses.
When the user sends a 204 response with a Transfer-Encoding: chunked
header, suppress sending the zero chunk and force the connection to
close.
Force the connection to close when the response is a 304 Not Modified
and the user has set a "Transfer-Encoding: chunked" header.
RFC 2616 mandates that 304 responses MUST NOT have a body but node.js
used to send out a zero chunk anyway to accommodate clients that don't
have special handling for 304 responses.
It was pointed out that this might confuse reverse proxies to the point
of creating security liabilities, so suppress the zero chunk and force
the connection to close.
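A sketch of the case both this and the 204 commit address:

```js
var http = require('http');

http.createServer(function(req, res) {
  // The user explicitly sets chunked encoding on a body-less status:
  res.writeHead(304, { 'Transfer-Encoding': 'chunked' });
  res.end(); // no zero-length chunk is written; the connection is closed
}).listen(8000);
```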
Profiling in clustered environments doesn't work out of the box.
By default, V8 writes the profile data of all processes to a single
v8.log.
Running that log file through a tick processor produces bogus numbers
because many events won't match up with the recorded memory mappings,
and you end up with graphs where 80+% of the ticks are unaccounted for.
Fixing the tick processor to deal with multi-process output is not very
useful because the processes may be running wildly disparate workloads.
That's why we fix up the command line arguments to include
a "--logfile=v8-%p.log" argument (where %p is expanded to the PID)
unless it already contains a --logfile argument.
Fixes #4617.
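The fix-up amounts to something like this simplified sketch (the function
name is made up):

```js
// Give each worker its own V8 log file unless the user already chose
// one; V8 expands %p to the process id.
function fixupExecArgv(execArgv) {
  var hasLogfile = execArgv.some(function(arg) {
    return /^--logfile=/.test(arg);
  });
  return hasLogfile ? execArgv : execArgv.concat('--logfile=v8-%p.log');
}
```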
Keeping a list of all sockets that were sent to the child process causes a
memory leak and is thus unacceptable (see #4587). However, `server.close()`
should still work properly.
This commit introduces two options:
* child.send(socket, { track: true }) - will send the socket and track its
status. You should use it when you want to receive a `close` event on sent
sockets.
* child.send(socket) - will send the socket without tracking its status.
This performs much better, because of the smaller number of round trips
between master and child.
With both of these options `server.close()` will wait for all sent
sockets to get closed.
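In code, the two modes look like this sketch (the worker script is
hypothetical):

```js
var net = require('net');
var fork = require('child_process').fork;

var child = fork('worker.js');
var server = net.createServer();

server.on('connection', function(socket) {
  child.send(socket, { track: true }); // tracked: 'close' is emitted for it
  // child.send(socket);               // untracked: fewer round trips
});

server.listen(8000);
```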
Keeping a list of all sockets that were sent to the child process causes a
memory leak and is thus unacceptable (see #4587). However, `server.close()`
should still work properly.
This commit introduces two options:
* child.send(socket, { track: true }) - will send the socket and track its
status. You should use it when you want `server.connections` to be a
reliable number, and to receive `close` events on sent sockets.
* child.send(socket) - will send the socket without tracking its status.
This performs much better, because of the smaller number of round trips
between master and child.
With both of these options `server.close()` will wait for all sent
sockets to get closed.
Unfortunately, it's just too slow to do this in events.js. Users will
just have to live with not having events named __proto__ or toString.
This reverts commit b48e303af0.
Reject negative offsets in SlowBuffer::MakeFastBuffer(); a negative offset
allows the creation of buffers that point to arbitrary addresses.
Reported by Trevor Norris.
Problem 1: If stream.push() triggers a 'readable' event, and the user
calls `read(n)` with some n > the highWaterMark, then the push() will
return false (indicating that they should not push any more), but no
future 'readable' event is coming (because we're above the
highWaterMark).
Solution: return true from push() when needReadable is set.
Problem 2: A read(n) for n != 0, after the stream had encountered an
EOF, would not trigger the 'end' event if the EOF was pushed in
synchronously by the _read() function.
Solution: Check for ended in stream.read() and schedule an end event if
the length now equals 0.
Fix #4585
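A sketch of problem 2 (problem 1 is harder to show in a few lines):

```js
var Readable = require('stream').Readable;

var r = new Readable();
var chunks = [new Buffer('hi'), null];
r._read = function() {
  r.push(chunks.shift()); // the EOF is pushed synchronously from _read()
};

r.on('end', function() { console.log('end'); });
r.read(); // 'hi'
r.read(); // null; length is now 0, so 'end' gets scheduled (the fix)
```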
Improved the order in which assert checks are executed, and added
additional checks on parameters to ensure no bad values make it through
(e.g. negative offset values).
Improvements:
* floating point operations are approx. 4x faster
* quiet NaNs are now written correctly (see the sketch after this list)
* all floating point reads/writes are now done in C, so there is no more
need for lib/buffer_ieee754.js
* float values have more accurate min/max value checks
* added additional benchmarks for buffer reads/writes
* created benchmark/_bench_timer.js, a simple library that can be included
in any benchmark and provides an intelligent tracker for sync and async
tests
* added benchmarks for DataView set methods
* added checks and tests to make sure offset is greater than 0
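For example, the quiet-NaN behavior can be checked like this:

```js
var assert = require('assert');

var buf = new Buffer(8);
buf.writeFloatLE(NaN, 0);
assert.ok(isNaN(buf.readFloatLE(0)));

buf.writeDoubleLE(NaN, 0);
assert.ok(isNaN(buf.readDoubleLE(0)));
```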
There was previously an assert() in there, but this part of the code is
so high-volume that the added cost made a measurable dent in http_simple.
Just checking inline is fine, though, and prevents a lot of potential
hazards.
Say that a stream's current read queue has 101 bytes in it, and the
underlying resource has ended (ie, reached EOF).
If you do something like this:
stream.read(100); // leave a byte behind
stream.read(0); // read(0) for some reason
then the read(0) will get 0 from the howMuchToRead function. Since the
stream was ended, that 0 was incorrectly treated as meaning "there is
nothing more in the buffer", and 'end' was emitted before that last byte
was read.
Why have the read(0) in the first place? We do this in some cases to
trigger the last few bytes of a net socket (such as a child process's
stdio pipes). This was causing issues when piping a `git archive` job
to a file: the resulting tarball was incomplete, because it occasionally
was not getting the last chunk.
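A sketch of the scenario (the 101-byte setup is contrived):

```js
var Readable = require('stream').Readable;

var r = new Readable();
r._read = function() {};
r.push(new Buffer(101)); // the read queue holds 101 bytes
r.push(null);            // the underlying resource has ended

r.read(100);             // leave one byte behind
r.read(0);               // must NOT emit 'end': a byte is still buffered
console.log(r.read(1));  // the final byte; 'end' may fire after this
```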