As of 34b535f4c, test-child-process-flush-stdio was failing
on CentOS 5 systems in CI due to the change in stream state
checking in `child_process`. This commit fixes those failures
by making readable streams less eager in setting their readable
flag on EOF.
Fixes: https://github.com/nodejs/node/issues/4125
PR-URL: https://github.com/nodejs/node/pull/4141
Reviewed-By: Colin Ihrig <cjihrig@gmail.com>
Reviewed-By: James M Snell <jasnell@gmail.com>
These properties were initially used to determine stream status
back in node v0.8 and earlier. Since streams2, however, these
properties were *always* true, which can be misleading, for
example, if you are trying to immediately determine whether
a Writable stream is still writable or not (to avoid a "write after
end" exception).
PR-URL: https://github.com/nodejs/node/pull/4083
Reviewed-By: Chris Dickinson <christopher.s.dickinson@gmail.com>
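A minimal sketch of the check this change enables (assuming a stream whose
end() has already been called; not code from the patch itself):

```javascript
const { PassThrough } = require('stream');

const stream = new PassThrough();
stream.end();

// With this change, `writable` flips to false once end() has been called,
// so a caller can guard against a "write after end" error up front.
if (stream.writable) {
  stream.write('more data');
} else {
  console.log('stream already ended; skipping write');
}
```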
This commit fixes some error messages that are not consistent with
the general rules that most of the error messages follow.
PR-URL: https://github.com/nodejs/node/pull/3374
Reviewed-By: Roman Reiss <me@silverwind.io>
Avoids doing a buffer.concat on the internal buffer
when that array has only a single thing in it.
Reviewed-By: Chris Dickinson <chris@neversaw.us>
Reviewed-By: James M Snell <jasnell@gmail.com>
PR-URL: https://github.com/nodejs/node/pull/3300
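A hedged sketch of the shortcut being described (the helper name is
illustrative, not from the patch):

```javascript
// Return the buffered data as a single Buffer, skipping Buffer.concat()
// when the internal list only holds one chunk.
function joinChunks(list, totalLength) {
  if (list.length === 1) return list[0];
  return Buffer.concat(list, totalLength);
}

console.log(joinChunks([Buffer.from('abc')], 3).toString());                  // 'abc' (no concat)
console.log(joinChunks([Buffer.from('ab'), Buffer.from('c')], 3).toString()); // 'abc'
```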
The `events` module already exports the `EventEmitter` constructor
function, so we don't have to use `events.EventEmitter` to access it.
Refer: https://github.com/nodejs/node/pull/2896
PR-URL: https://github.com/nodejs/node/pull/2921
Reviewed-By: Roman Reiss <me@silverwind.io>
Reviewed-By: Michaël Zasso <mic.besace@gmail.com>
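In other words, a small usage sketch:

```javascript
'use strict';
// The module itself is the constructor, so this is enough:
const EventEmitter = require('events');

// ...instead of the longer form require('events').EventEmitter.
const emitter = new EventEmitter();
emitter.on('ping', () => console.log('pong'));
emitter.emit('ping');
```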
Now parts of our public and public-ish APIs fall back to old-style
listenerCount() if the emitter does not have a listenerCount function.
Fixes: https://github.com/nodejs/node/issues/2655
Refs: 8f58fb92ff
PR-URL: https://github.com/nodejs/node/pull/2661
Reviewed-By: Sakthipriyan Vairamani <thechargingvolcano@gmail.com>
Reviewed-By: Colin Ihrig <cjihrig@gmail.com>
Reviewed-By: James M Snell <jasnell@gmail.com>
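A rough sketch of the fallback pattern (not the exact lib code):

```javascript
const EventEmitter = require('events');

// Prefer the emitter's own listenerCount() method, but fall back to the
// old-style static helper for emitters that do not provide one.
function countListeners(emitter, type) {
  if (typeof emitter.listenerCount === 'function') {
    return emitter.listenerCount(type);
  }
  return EventEmitter.listenerCount(emitter, type);
}
```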
roundUpToNextPowerOf2() does more than just round up to the next
power of two. Rename it to computeNewHighWaterMark().
PR-URL: https://github.com/nodejs/node/pull/2479
Reviewed-By: Chris Dickinson <christopher.s.dickinson@gmail.com>
Reviewed-By: Sakthipriyan Vairamani <thechargingvolcano@gmail.com>
Don't iterate over all 32 bits; use some Hacker's Delight bit twiddling
to compute the next power of two.
The logic can be reduced to `n = 1 << 32 - Math.clz32(n)` but then it
can't easily be backported to v2.x; Math.clz32() was added in V8 4.3.
PR-URL: https://github.com/nodejs/node/pull/2479
Reviewed-By: Chris Dickinson <christopher.s.dickinson@gmail.com>
Reviewed-By: Sakthipriyan Vairamani <thechargingvolcano@gmail.com>
The high watermark is capped at 8 MB, not 128 MB like the comment
in lib/_stream_readable.js said.
PR-URL: https://github.com/nodejs/node/pull/2479
Reviewed-By: Chris Dickinson <christopher.s.dickinson@gmail.com>
Reviewed-By: Sakthipriyan Vairamani <thechargingvolcano@gmail.com>
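Taken together, a hedged sketch of what these three commits describe
(constants and structure approximate lib/_stream_readable.js, not an exact
copy):

```javascript
const MAX_HWM = 0x800000; // the high watermark is capped at 8 MB

function computeNewHighWaterMark(n) {
  if (n >= MAX_HWM) return MAX_HWM;
  // Round up to the next power of two without looping over all 32 bits.
  n--;
  n |= n >>> 1;
  n |= n >>> 2;
  n |= n >>> 4;
  n |= n >>> 8;
  n |= n >>> 16;
  n++;
  return n;
}

console.log(computeNewHighWaterMark(5));       // 8
console.log(computeNewHighWaterMark(70000));   // 131072
console.log(computeNewHighWaterMark(9000000)); // 8388608 (capped)
```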
As per the discussion in #734, this patch deprecates the usage of the
`EventEmitter.listenerCount` static function in the docs, and introduces
a `listenerCount` function on the prototype of `EventEmitter` itself.
PR-URL: https://github.com/nodejs/node/pull/2349
Reviewed-By: Trevor Norris <trev.norris@gmail.com>
Reviewed-By: Brian White <mscdex@mscdex.net>
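As a usage sketch of the API shift:

```javascript
const EventEmitter = require('events').EventEmitter;

const emitter = new EventEmitter();
emitter.on('data', () => {});
emitter.on('data', () => {});

// New: a method on the emitter itself.
console.log(emitter.listenerCount('data'));               // 2

// Documented as deprecated: the static helper.
console.log(EventEmitter.listenerCount(emitter, 'data')); // 2
```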
Many of the util.is*() methods used to check data types
simply compare against a single value or the result of
typeof. This commit replaces calls to these methods with
equivalent checks. This commit does not touch calls to the
more complex methods (isRegExp(), isDate(), etc.).
Fixes: https://github.com/iojs/io.js/issues/607
PR-URL: https://github.com/iojs/io.js/pull/647
Reviewed-By: Ben Noordhuis <info@bnoordhuis.nl>
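For example (the function names here are only illustrative; the real change
is mechanical and spread across lib/):

```javascript
'use strict';
const util = require('util');

// Before: trivial checks routed through util.is*() helpers.
function isStringChunkOld(chunk) {
  return util.isString(chunk);
}

// After: the equivalent inline comparison.
function isStringChunkNew(chunk) {
  return typeof chunk === 'string';
}

console.log(isStringChunkOld('abc'), isStringChunkNew('abc')); // true true
```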
This commit replaces a number of var statements throughout
the lib code with const statements.
PR-URL: https://github.com/iojs/io.js/pull/541
Reviewed-By: Ben Noordhuis <info@bnoordhuis.nl>
The copyright and license notice is already in the LICENSE file. There
is no justifiable reason to also require that it be included in every
file, since the individual files are not individually distributed except
as part of the entire package.
Turn on strict mode for the files in the lib/ directory. It helps
catch bugs and can have a positive effect on performance.
PR-URL: https://github.com/node-forward/node/pull/64
Reviewed-By: Colin Ihrig <cjihrig@gmail.com>
Reviewed-By: Fedor Indutny <fedor@indutny.com>
net Sockets were calling read(0) to start reading, without
checking to see if they were paused first. This would result
in paused Socket objects keeping the event loop alive.
Fixes #8200
Reviewed-by: Trevor Norris <trev.norris@gmail.com>
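A hedged sketch of the guard this implies; the real change lives in
lib/net.js, but the idea is to only kick off the initial read(0) when the
socket has not been paused:

```javascript
// Only start the flow of data when the consumer has not paused the socket;
// an unconditional read(0) keeps the event loop alive for a paused Socket.
function maybeStartReading(socket) {
  if (!socket.isPaused()) {
    socket.read(0);
  }
}
```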
A streams1 stream will have its falsy values such as 0, false, or ""
eaten by the upgrade to streams2, even when objectMode is enabled.
Include a test for said cases.
Reviewed-by: isaacs <i@izs.me>
Reviewed-by: Trevor Norris <trev.norris@gmail.com>
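The behavior under test, roughly:

```javascript
const { Readable } = require('stream');

const r = new Readable({ objectMode: true, read() {} });
r.push(0);
r.push(false);
r.push('');
r.push(null); // end of stream

// Consuming via a plain 'data' listener (the streams1-style interface);
// in objectMode, the falsy chunks must all be delivered, not skipped by
// a truthiness check.
r.on('data', (chunk) => console.log('data:', chunk));
r.on('end', () => console.log('end'));
```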
A ReadableStream with a base64 StringDecoder backed by only
one or two bytes would fail to output its partial data before
ending. This fix adds a check to see if the `read` was triggered
by an internal `flow`, and if so, empties any remaining data.
Fixes #7914.
Signed-off-by: Fedor Indutny <fedor@indutny.com>
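A small reproduction of the scenario (assuming current stream behavior):

```javascript
const { Readable } = require('stream');

// One byte of input can only produce a partial base64 group ('YQ=='), which
// the decoder holds onto until end-of-stream; the fix ensures that leftover
// output is flushed before 'end'.
const r = new Readable({ read() {} });
r.setEncoding('base64');
r.push(Buffer.from('a'));
r.push(null);

r.on('data', (chunk) => console.log('data:', chunk)); // 'YQ=='
r.on('end', () => console.log('end'));
```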
Switch condition order to check for null before calling isNaN().
Also remove two unnecessary calls to isNaN() that are already
covered by calls to isFinite(). This commit targets v0.10, as
opposed to #7891, which targets master (suggested by
@bnoordhuis). Closes #7840.
Signed-off-by: Fedor Indutny <fedor@indutny.com>
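A hedged sketch of the reordered guard (names approximate the v0.10 helper,
not an exact copy):

```javascript
// Check for null before calling isNaN(): isNaN(null) is false, so null needs
// its own explicit check, and putting it first avoids the extra isNaN() call.
function howMuchToRead(n, state) {
  if (n === null || isNaN(n)) {
    return state.length; // read whatever is buffered
  }
  return Math.min(n, state.length);
}
```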
Default highWaterMark is now set properly when using stream Duplex's
writableObjectMode and readableObjectMode options.
Added condition to the already existing split objectMode test to ensure
the highWaterMark is being set to the correct default value on both the
ReadableState and WritableState for readableObjectMode and
writableObjectMode.
Signed-off-by: Fedor Indutny <fedor@indutny.com>
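A quick check of the behavior being fixed (the numbers assume the standard
defaults of 16 objects vs. 16384 bytes):

```javascript
const { Duplex } = require('stream');

const d = new Duplex({
  readableObjectMode: true,
  writableObjectMode: true,
  read() {},
  write(chunk, encoding, callback) { callback(); }
});

// With the fix, each object-mode side defaults to a highWaterMark of 16
// (objects), rather than inheriting the byte-oriented default of 16384.
console.log(d._readableState.highWaterMark); // 16
console.log(d._writableState.highWaterMark); // 16
```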
The [Stream documentation for .push](http://nodejs.org/api/stream.html#stream_readable_push_chunk_encoding)
explicitly states multiple times that null is a special cased value
that indicates the end of a stream. It is confusing and undocumented
that undefined *also* ends the stream, even though in object mode
there is a distinct and important difference.
The docs for Object-Mode also explicitly mention null as the *only*
special cased value, making no mention of undefined.
Signed-off-by: Fedor Indutny <fedor@indutny.com>
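As documented, null is the only special-cased end-of-stream value; a minimal
sketch:

```javascript
const { Readable } = require('stream');

const r = new Readable({ objectMode: true, read() {} });
r.on('data', (chunk) => console.log('chunk:', chunk));
r.on('end', () => console.log('end'));

r.push({ value: 1 });
r.push(null); // the documented, special-cased end-of-stream signal
```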
This commit introduces `readableObjectMode` and
`writableObjectMode` options for Duplex streams.
This can be used mostly to make parsers and
serializers with Transform streams.
Also the docs section about stream state objects
is removed, because it is not relevant anymore.
The example from the section is remade to show
new options.
Fixes #6284
Signed-off-by: Timothy J Fontaine <tjfontaine@gmail.com>
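A sketch of the kind of parser this enables (a Transform is a Duplex, so it
takes the same options); the line-splitting logic and the constructor-option
style are assumptions for illustration:

```javascript
const { Transform } = require('stream');

// Writable side: ordinary strings/buffers in. Readable side: objects out.
const parser = new Transform({
  readableObjectMode: true,
  transform(chunk, encoding, callback) {
    for (const line of chunk.toString().split('\n')) {
      if (line) this.push({ line });
    }
    callback();
  }
});

parser.on('data', (obj) => console.log(obj)); // { line: 'first' }, { line: 'second' }
parser.write('first\nsecond\n');
parser.end();
```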
In this situation:
    writable.on('error', handler);
    readable.pipe(writable);
    writable.removeListener('error', handler);
    writable.emit('error', new Error('boom'));
there is actually no error handler, but it doesn't throw, because of the
fix for stream.once('error', handler), in 23d92ec.
Note that simply reverting that change is not valid either, because
otherwise this will emit twice, being handled the first time, and then
throwing the second:
    writable.once('error', handler);
    readable.pipe(writable);
    writable.emit('error', new Error('boom'));
Fix this with a horrible hack to make the stream pipe onerror handler
added before any other userland handlers, so that our handler is not
affected by adding or removing any userland handlers.
Closes #6007.
When a stream is flowing, and not in the middle of a sync read, and
the read buffer currently has a length of 0, we can just emit a 'data'
event rather than push it onto the array, emit 'readable', and then
automatically call read().
As it happens, this is quite a frequent occurrence! Making this change
brings the HTTP benchmarks back into a good place after the removal of
the .ondata/.onend socket kludge methods.
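A hedged sketch of the fast path (the state fields approximate ReadableState;
this is not the lib code verbatim):

```javascript
// If the stream is flowing, nothing is buffered, and we're not inside a
// synchronous _read(), hand the chunk straight to 'data' listeners instead
// of buffering it, emitting 'readable', and reading it back out.
function addChunk(stream, state, chunk) {
  if (state.flowing && state.length === 0 && !state.sync) {
    stream.emit('data', chunk);
  } else {
    state.length += state.objectMode ? 1 : chunk.length;
    state.buffer.push(chunk);
    stream.emit('readable');
  }
}
```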
If an error listener is added to a stream using once() before it is
piped, it is invoked and removed during pipe() but before pipe() sees it,
which causes it to be emitted again.
Fixes #4155, #4978
This prevents the following sort of thing from being confusing:
```javascript
stream.on('data', function() { console.error('got data'); });
stream.pause(); // stop reading
// turns out no data is available
stream.push(null);
// Hand the stream to someone else, who does stuff...
setTimeout(function() {
  // too late! 'end' is already emitted!
  stream.on('end', function() { console.error('got end'); });
});
```
With this change, the `end` event is not emitted until you call `read()`
*past* the EOF null. So, a paused stream will not swallow the `end`
event and emit it before you `resume()` the stream.
Closes #5860
In streams2, there is an "old mode" for compatibility. Once switched
into this mode, there is no going back.
With this change, there is a "flowing mode" and a "paused mode". If you
add a data listener, then this will start the flow of data. However,
hitting the `pause()` method will switch *back* into a non-flowing mode,
where the `read()` method will pull data out.
Every time `read()` returns a data chunk, it also emits a `data` event.
In this way, a passive data listener can be added, and the stream passed
off to some other reader, for use with progress bars and the like.
There is no API change beyond this added flexibility.
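A small sketch of the added flexibility: a passive 'data' observer stays
attached while another consumer pulls with read():

```javascript
const { Readable } = require('stream');

const r = new Readable({ read() {} });
r.push('a');
r.push('b');
r.push(null);

// Passive observer: sees every chunk that leaves the stream.
r.on('data', (chunk) => console.log('observed:', chunk.toString()));

// Switch back to non-flowing mode and pull manually; each read() that
// returns a chunk also emits the 'data' event above.
r.pause();
let chunk;
while ((chunk = r.read()) !== null) {
  console.log('pulled:', chunk.toString());
}
```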
In some cases, the http CONNECT/Upgrade API is unshifting an empty
bodyHead buffer onto the socket.
Normally, stream.unshift(chunk) does not set state.reading=false.
However, this check was not being done for the case when the chunk was
empty (either `''` or `Buffer(0)`), and as a result, it was causing the
socket to think that a read had completed, and to stop providing data.
This bug is not limited to http or web sockets, but rather would affect
any parser that unshifts data back onto the source stream without being
very careful to never unshift an empty chunk. Since the intent of
unshift is to *not* change the state.reading property, this is a bug.
Fixes #5557
Fixes LearnBoost/socket.io#1242
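A hedged sketch of the corrected logic (simplified, not the exact
readableAddChunk() code): only push() may clear state.reading, even when the
chunk is empty, because unshift() must never end the current _read().

```javascript
function addChunkToState(state, chunk, addToFront) {
  if (state.objectMode || (chunk && chunk.length > 0)) {
    if (!addToFront) state.reading = false;
    if (addToFront) state.buffer.unshift(chunk);
    else state.buffer.push(chunk);
  } else if (!addToFront) {
    // Empty chunk from push(): still marks the current _read() as finished.
    state.reading = false;
  }
  // Empty chunk from unshift(): state.reading is left untouched.
}
```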
Pretty much everything assumes strings to be utf-8, but crypto
traditionally used binary strings, so we need to keep the default
that way until most users get off of that pattern.
If there is an encoding, and we do 'stream.push(chunk, enc)', and the
encoding argument matches the stated encoding, then we're converting from
a string, to a buffer, and then back to a string. Of course, this is a
completely pointless bit of work, so it's best to avoid it when we know
that we can do so safely.
Fix #5272
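A hedged sketch of the short-circuit (the state fields stand in for the real
ReadableState internals):

```javascript
// If the chunk is already a string in the stream's declared encoding, skip
// the string -> Buffer -> string round trip entirely.
function normalizeChunk(state, chunk, encoding) {
  if (typeof chunk !== 'string') return chunk;
  if (state.decoder && encoding === state.encoding) {
    return chunk; // already in the right form; no conversion needed
  }
  return Buffer.from(chunk, encoding || 'utf8');
}
```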
The consumption of a readable stream is a dance with 3 partners.
1. The specific stream Author (A)
2. The Stream Base class (B), and
3. The Consumer of the stream (C)
When B calls the _read() method that A implements, it sets a 'reading'
flag, so that parallel calls to _read() can be avoided. When A calls
stream.push(), B knows that it's safe to start calling _read() again.
The consumer C may be some kind of parser that wants, in some cases, to
pass the source stream off to some other party, but not before "putting
back" some bit of previously consumed data (as in the case of Node's
websocket http upgrade implementation). So, stream.unshift() will
generally *never* be called by A, but *only* called by C.
Prior to this patch, stream.unshift() *also* unset the state.reading
flag, meaning that C could indicate the end of a read, and B would
dutifully fire off another _read() call to A. This is inappropriate.
In the case of fs streams, and other variably-laggy streams that don't
tolerate overlapped _read() calls, this causes big problems.
Also, calling stream.unshift() after the 'end' event did not raise any
kind of error, but would cause very strange behavior indeed. Calling it
after the EOF chunk was seen, but before the 'end' event was fired would
also cause weird behavior, and could lead to data being lost, since it
would not emit another 'readable' event.
This change makes it so that:
1. stream.unshift() does *not* set state.reading = false
2. stream.unshift() is allowed up until the 'end' event.
3. unshifting onto an EOF-encountered and zero-length (but not yet
end-emitted) stream will defer the 'end' event until the new data is
consumed.
4. pushing onto an EOF-encountered stream is now an error.
So, if you read(), you have that single tick to safely unshift() data
back into the stream, even if the null chunk was pushed, and the length
was 0.
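A consumer-side sketch of the pattern this makes safe (the names here are
hypothetical; this is not the websocket upgrade code):

```javascript
const { Readable } = require('stream');

// C reads a fixed-size header, then puts the rest back for the next owner.
// The unshift() happens in the same tick as the read(), which is exactly
// the window this change guarantees, even when EOF has already been pushed.
function takeHeader(stream, callback) {
  stream.once('readable', () => {
    const chunk = stream.read();
    if (chunk === null) return callback(null, stream);
    const header = chunk.slice(0, 4);
    stream.unshift(chunk.slice(4)); // put back everything after the header
    callback(header, stream);
  });
}

const r = new Readable({ read() {} });
r.push('HEADrest of the body');
r.push(null);

takeHeader(r, (header, rest) => {
  console.log('header:', header.toString());          // 'HEAD'
  console.log('remaining:', rest.read().toString());  // 'rest of the body'
});
```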