Manually fix issues that eslint --fix couldn't do automatically.
PR-URL: https://github.com/nodejs/node/pull/10685
Reviewed-By: Colin Ihrig <cjihrig@gmail.com>
Reviewed-By: James M Snell <jasnell@gmail.com>
Reviewed-By: Roman Reiss <me@silverwind.io>
Several changes:
* Soft-Deprecate Buffer() constructors
* Add `Buffer.from()`, `Buffer.alloc()`, and `Buffer.allocUnsafe()`
* Add `--zero-fill-buffers` command line option
* Add byteOffset and length to `new Buffer(arrayBuffer)` constructor
* buffer.fill('') previously had no effect, now zero-fills
* Update the docs
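Roughly, the new allocation API is used like this (a brief sketch; the
exact behavior follows the documentation updated in this change, and
Buffer is also available as a global without the require):

const Buffer = require('buffer').Buffer;

// Zero-filled buffer of 10 bytes.
const safe = Buffer.alloc(10);

// Uninitialized memory: faster, but must be overwritten before use.
// With the --zero-fill-buffers option it is zero-filled as well.
const unsafe = Buffer.allocUnsafe(10);

// Copy data from an existing string.
const copy = Buffer.from('hello', 'utf8');

// Share memory with an ArrayBuffer, from byteOffset 4 for 8 bytes.
const ab = new ArrayBuffer(16);
const view = Buffer.from(ab, 4, 8);

// fill('') is no longer a no-op; it zero-fills the buffer.
unsafe.fill('');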
PR-URL: https://github.com/nodejs/node/pull/4682
Reviewed-By: Сковорода Никита Андреевич <chalkerx@gmail.com>
Reviewed-By: Stephen Belanger <admin@stephenbelanger.com>
common.js needs to be loaded in all tests so that checks for global
variable leaks (and possibly other things) are performed. However, it
does not need to be assigned to a variable if nothing in common.js is
referred to elsewhere in the test.
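For example, a test that uses nothing from common.js can load it purely
for its side effects (the relative path depends on where the test
lives; the assertion below is just a placeholder):

// Loaded for its side effects (e.g. the global leak check); the return
// value is intentionally not assigned.
require('../common');
const assert = require('assert');

assert.strictEqual(1 + 1, 2);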
PR-URL: https://github.com/nodejs/node/pull/4408
Reviewed-By: James M Snell <jasnell@gmail.com>
Enable linting for the test directory. A number of changes were made so
that all tests conform to the current rules used by the lib and src
directories. The only exception for tests is that unreachable (dead)
code is allowed.
test-fs-non-number-arguments-throw had to be excluded from the changes
because of a weird issue on Windows CI.
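The dead-code exception can be expressed roughly as follows; this is an
illustrative sketch only, and the actual configuration file and format
in the repository may differ (no-unreachable is the standard ESLint
rule for unreachable code):

// test/.eslintrc.js (illustrative only)
module.exports = {
  rules: {
    // Tests are allowed to contain unreachable (dead) code.
    'no-unreachable': 0  // 0 = off
  }
};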
PR-URL: https://github.com/nodejs/io.js/pull/1721
Reviewed-By: Ben Noordhuis <info@bnoordhuis.nl>
This commit changes all references from require('./common.js'); to
require('./common');.
The latter is much more common, with the former used in only 50 tests.
It is just a stylistic change; it seems that `common.js` was introduced
by a rogue test and then copied and pasted into the rest.
Semver: patch
PR-URL: https://github.com/iojs/io.js/pull/917
Reviewed-By: Colin Ihrig <cjihrig@gmail.com>
Reviewed-By: Ben Noordhuis <info@bnoordhuis.nl>
The copyright and license notice is already in the LICENSE file. There
is no justifiable reason to also require that it be included in every
file, since the individual files are not distributed separately but
only as part of the entire package.
Closes #5860
In streams2, there is an "old mode" for compatibility. Once switched
into this mode, there is no going back.
With this change, there is a "flowing mode" and a "paused mode". If you
add a data listener, then this will start the flow of data. However,
hitting the `pause()` method will switch *back* into a non-flowing mode,
where the `read()` method will pull data out.
Every time `read()` returns a data chunk, it also emits a `data` event.
In this way, a passive data listener can be added, and the stream passed
off to some other reader, for use with progress bars and the like.
There is no API change beyond this added flexibility.
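A rough sketch of the two modes, where `stream` is any Readable:

// Flowing mode: attaching a 'data' listener starts the flow of data.
stream.on('data', function(chunk) {
  console.log('passive listener saw %d bytes', chunk.length);
});

// Switch *back* into non-flowing (paused) mode.
stream.pause();

// In paused mode, data is pulled out explicitly with read(). Each chunk
// returned by read() is also emitted as a 'data' event, so the passive
// listener above still sees it (handy for progress bars and the like).
var chunk = stream.read();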
The function that pre-emptively fills the Readable queue relies on
recursion through:

  stream.push(chunk) ->
  maybeReadMore(stream, state) ->
  if (not reading more and < hwm) stream.read(0) ->
  stream._read() ->
  stream.push(chunk) -> repeat.
Since this only called read() a single time, and then relied on a
future nextTick to collect more data, it ended up causing a nextTick
recursion error (and potentially even a RangeError) if you have a very
high highWaterMark and are getting very small chunks pushed
synchronously in _read (as happens with TLS, or many simple test
streams).
This change implements a new approach, so that read(0) is called
repeatedly as long as it is effective (that is, the length keeps
increasing), and thus quickly fills up the buffer for streams such as
these, without any stacks overflowing.
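In rough pseudocode, the new loop looks something like this (the state
fields shown are illustrative of the internals, not public API):

function maybeReadMore(stream, state) {
  var len = state.length;
  while (!state.reading && !state.ended &&
         state.length < state.highWaterMark) {
    stream.read(0);
    if (len === state.length)
      break;              // read(0) was not effective; stop looping.
    len = state.length;
  }
}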
When a 'readable' listener is added, call read(0) so that data will
flow in, up to the high water mark.
Otherwise, it's somewhat confusing that you have to listen for readable,
and ALSO call read() (when it will certainly return null) just to get some
data out of the stream.
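For example, with this change the following is enough to get data
flowing and drain it as it becomes readable:

stream.on('readable', function() {
  // read(0) has already been called internally, so data has been
  // pulled in up to the high water mark; now drain what is buffered.
  var chunk;
  while ((chunk = stream.read()) !== null) {
    console.log('got %d bytes', chunk.length);
  }
});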
See: #4720
This makes it so that `stream.push(chunk)` is the only way to signal the
end of reading, removing the confusing disparity between the
callback-style _read method, and the fact that most real-world streams
do not have a 1:1 correlation between the "please give me data" event,
and the actual arrival of a chunk of data.
It is still possible, of course, to implement a `CallbackReadable` on
top of this. Simply provide a method like this as the callback:
function readCallback(er, chunk) {
  if (er)
    stream.emit('error', er);
  else
    stream.push(chunk);
}
However, *only* fs streams would actually behave in this way, so it
does not make much sense to force TCP, TLS, HTTP, and all the rest to
bend into this uncomfortable paradigm.
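For completeness, a minimal sketch of such a CallbackReadable, built on
a hypothetical callback-style source function source(callback), where
callback(er, chunk) delivers the next chunk and a null chunk signals
the end of the data:

var Readable = require('stream').Readable;
var util = require('util');

function CallbackReadable(source, options) {
  Readable.call(this, options);
  this._source = source;  // hypothetical callback-style data source
}
util.inherits(CallbackReadable, Readable);

// _read() is called whenever the stream wants more data.
CallbackReadable.prototype._read = function() {
  var stream = this;
  this._source(function readCallback(er, chunk) {
    if (er)
      stream.emit('error', er);
    else
      stream.push(chunk);  // pushing null ends the stream
  });
};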