This makes it so that `stream.push(chunk)` is the only way to signal the
end of reading, removing the confusing disparity between the
callback-style `_read` method and the fact that most real-world streams
do not have a 1:1 correlation between the "please give me data" event
and the actual arrival of a chunk of data.
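As a rough sketch of the push-based style (the `OneShotReadable` class
and its canned data are made up purely for illustration), `_read(n)` is
just the "please give me data" signal, and `push()` is called whenever
data actually becomes available:

    var Readable = require('stream').Readable;
    var util = require('util');

    util.inherits(OneShotReadable, Readable);

    function OneShotReadable(options) {
      Readable.call(this, options);
      this._sent = false;
    }

    // _read(n) only signals demand.  Data is delivered whenever it
    // actually arrives, by calling push(); push(null) signals EOF.
    OneShotReadable.prototype._read = function(n) {
      if (this._sent) {
        this.push(null);
      } else {
        this._sent = true;
        this.push('some data');
      }
    };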
It is still possible, of course, to implement a `CallbackReadable` on
top of this. Simply provide a method like this as the callback:
    function readCallback(er, chunk) {
      if (er)
        stream.emit('error', er);
      else
        stream.push(chunk);
    }
However, *only* fs streams would actually behave in this way, so it
does not make much sense to force TCP, TLS, HTTP, and all the rest to
bend into this uncomfortable paradigm.
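As a sketch of what such a `CallbackReadable` could look like (the
`source` object with a callback-style read(cb) method is a stand-in
for something like an fs-backed reader, not a real API), the callback
simply forwards into push():

    var Readable = require('stream').Readable;
    var util = require('util');

    util.inherits(CallbackReadable, Readable);

    function CallbackReadable(source, options) {
      Readable.call(this, options);
      this._source = source;
    }

    CallbackReadable.prototype._read = function(n) {
      var stream = this;
      this._source.read(function readCallback(er, chunk) {
        if (er)
          stream.emit('error', er);
        else
          stream.push(chunk);  // a null chunk ends the stream
      });
    };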
A lowWaterMark seems like a good idea on the face of it, but in
practice lowWaterMarks are not useful and should always be set to zero.
A lowWaterMark would be worthwhile for writers if we actually did some
kind of writev() batching, but as it stands it just delays calling
write(), and the overhead of doing a bunch of Buffer copies is not
worth the slight benefit of calling write() fewer times.
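To make the cost concrete: without a real writev(), the only way to
issue fewer write() calls is to copy all the queued chunks into one
Buffer first, roughly as below (illustrative sketch only, not actual
stream internals; Buffer.concat does essentially the same work):

    // Every byte gets copied into the combined Buffer, so the saving
    // from fewer write() calls is paid for in memcpy work.
    function combineChunks(queue) {
      var total = 0;
      var i;
      for (i = 0; i < queue.length; i++)
        total += queue[i].length;
      var combined = new Buffer(total);
      var offset = 0;
      for (i = 0; i < queue.length; i++) {
        queue[i].copy(combined, offset);
        offset += queue[i].length;
      }
      return combined;
    }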
We detect non-string and non-buffer values in onread and turn the
stream into an "objectMode" stream.
If we are in "objectMode" then howMuchToRead will always return 1,
state.length will be incremented by 1 for each new item, and fromList
will always take the first value from the list.
This means that for object streams, the n in read(n) is ignored and
read() will always return a single value.
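For instance (a sketch only; the stream below just replays a small
fixed array, with objectMode set explicitly rather than relying on the
auto-detection described above):

    var Readable = require('stream').Readable;

    var items = [{ a: 1 }, { b: 2 }, { c: 3 }];
    var rs = new Readable({ objectMode: true });

    rs._read = function(n) {
      // Each pushed item adds exactly 1 to state.length.
      this.push(items.shift() || null);  // null ends the stream
    };

    rs.on('readable', function() {
      var obj;
      // Even though we ask for 10, each read() returns one object.
      while ((obj = rs.read(10)) !== null)
        console.log(obj);
    });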
Fixed a bug with unpipe where the pipe would break because
the flowing state was not reset to false.
Fixed a bug with sync cb(null, null) in _read which would fail to end
the readable stream.