Increase the performance and simplify the logic of Buffer#write{U}Int*
and Buffer#read{U}Int* methods by placing the byte manipulation code
directly inline.
Also improve the speed of the buffer-write benchmarks by creating a
direct call to each method with Function() instead of calling through
buff[fn].
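A minimal sketch of that benchmarking trick, assuming a hypothetical
method name in fn (this is not the actual benchmark file):

  // Build a function that calls buff.writeUInt32LE directly, avoiding
  // the dynamic buff[fn](...) property lookup on every iteration.
  var fn = 'writeUInt32LE';
  var write = new Function('buff', 'val', 'off',
                           'buff.' + fn + '(val, off);');
  var buff = new Buffer(8);
  for (var i = 0; i < 1e6; i++)
    write(buff, 0xdeadbeef, 0);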
Signed-off-by: Trevor Norris <trev.norris@gmail.com>
Conflicts:
lib/buffer.js
compare() works like String.localeCompare such that:
Buffer.compare(a, b) === a.compare(b);
equals() does a native check to see if two buffers are equal.
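A small usage sketch (buffer contents chosen arbitrarily):

  var a = new Buffer('abc');
  var b = new Buffer('abd');
  Buffer.compare(a, b);          // -1, same result as a.compare(b)
  a.compare(b);                  // -1
  a.equals(b);                   // false
  a.equals(new Buffer('abc'));   // true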
Signed-off-by: Trevor Norris <trev.norris@gmail.com>
Fix issue where a signed integer is returned.
Example:
var b = new Buffer(4);
b.writeUInt32BE(0xffffffff);
b.readUInt32BE(0) == -1
Signed-off-by: Trevor Norris <trev.norris@gmail.com>
When our estimates for a storage size are higher than the actual length
of decoded data, the destination buffer should be truncated. Otherwise
`Buffer::Length` will give misleading information to the C++ layer.
fix #7365
Signed-off-by: Fedor Indutny <fedor@indutny.com>
Buffer#write() was showing the deprecation warning when only
buf.write('string') was passed. This is incorrect since the encoding is
always optional.
Argument order should follow:
Buffer#write(string[, offset[, length]][, encoding])
(yeah, not confusing at all)
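For example (buffer size and offsets here are arbitrary), all of these
match the documented order:

  var buf = new Buffer(32);
  buf.write('hello');                  // string only (was incorrectly warning)
  buf.write('hello', 2);               // string, offset
  buf.write('hello', 2, 5);            // string, offset, length
  buf.write('hello', 2, 5, 'ascii');   // string, offset, length, encoding
  buf.write('hello', 'ascii');         // string, encoding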
String#toLowerCase() is incredibly slow and was costing a 15-30%
performance hit for Buffers less than 1KB. Now the method first attempts
to match the passed encoding as-is, and only lowercases it and retries
if that fails.
The optimization for not passing any encoding at all is still at the top
of the method.
At most this may add a 10% performance hit when a mixed-case encoding
is passed.
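A rough sketch of the idea (not the actual lib/buffer.js code; the
helper name is made up):

  // Try the encoding exactly as given; only lowercase on a miss.
  function normalizeEncoding(enc) {
    switch (enc) {
      case 'utf8': case 'ascii': case 'hex': case 'base64':
      case 'binary': case 'ucs2': case 'utf16le':
        return enc;                 // fast path: already canonical
      default:
        var lowered = String(enc).toLowerCase();
        return lowered === enc ? undefined : normalizeEncoding(lowered);
    }
  }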
Length arguments passed to SlowBuffer were coerced to Int32, not Uint32,
so passing a negative number would trip the following assertion:
node: ../src/smalloc.cc:244: void node::smalloc::Alloc(): Assertion `length <= kMaxLength' failed.
Aborted (core dumped)
That has been fixed by coercing to Uint32 and comparing the value
against kMaxLength.
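For example (the exact error raised is not spelled out above, so treat
this as illustrative):

  var SlowBuffer = require('buffer').SlowBuffer;
  // -1 coerces to 0xffffffff as a Uint32, which exceeds kMaxLength,
  // so the call is now rejected instead of aborting the process.
  SlowBuffer(-1);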
Due to a lot of the util.is* checks there was much unnecessary overhead
for the most common use case of Buffer, which is creating a new Buffer
instance for data from incoming I/O. NativeBuffer is a simple way to
bypass all the unneeded checks and hand back a Buffer instance while
setting the length.
All the Buffer#{ascii,hex,etc.}Slice() methods are intentionally strict
so they alert when a Buffer instance is accessed out of bounds.
Buffer#toString() is the more user-friendly way of accessing the data,
and will coerce values to their min/max on overflow.
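A sketch of the difference (exact error types aside):

  var buf = new Buffer(4);
  buf.toString('ascii', 0, 100);   // end is clamped to the buffer length
  buf.asciiSlice(0, 100);          // strict: out-of-bounds access throws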
Includes:
* No need for `typeof` when checking undefined.
* length is coerced to uint so no need to check if < 0.
* Stay consistent and always throw `new` errors.
* Returning offset + magic number in every write is error prone. Instead
return the result of the central write function, which returns the
correct offset (see the sketch below).
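A minimal sketch of that pattern, with made-up names rather than the
actual lib/buffer.js helpers:

  // The shared helper does the byte manipulation and returns the next
  // offset, so each public method just forwards its return value.
  function writeUInt16(buf, value, offset, littleEndian) {
    if (littleEndian) {
      buf[offset] = value & 0xff;
      buf[offset + 1] = (value >>> 8) & 0xff;
    } else {
      buf[offset] = (value >>> 8) & 0xff;
      buf[offset + 1] = value & 0xff;
    }
    return offset + 2;
  }

  Buffer.prototype.writeUInt16LE = function(value, offset) {
    return writeUInt16(this, value, offset, true);  // no offset + 2 here
  };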
In a rush to implement the fix in 35e0d60 I overlooked the logic that
causes 0-length buffer instantiation to never assign the parent.
SlowBuffer(0) passes NULL instead of doing malloc(0). So when someone
attempted to SlowBuffer(0).slice(0, 1) an assert would fail in
smalloc::SliceOnto.
It's important that the check go where it is because the resulting
Buffer needs to have external array data allocated. In the case a user
tries to slice a zero length Buffer it will also have NULL passed as the
data argument.
Also fixed the case where the .parent attribute was set for zero-length
Buffers. There is no need to track the source of a slice if the slice
isn't actually occurring.
It would be confusing if later on we add Buffer#dispose(), and smalloc
is its own C++ API anyway. So instead create a new require('smalloc') to
expose the previous Buffer.alloc/dispose methods, and expose copyOnto
and kMaxLength as well.
Other changes:
* Added documentation and additional tests.
* smalloc::CopyOnto has changed from using assert() to throwing errors
on bad argument values because it is now exposed to the user.
* Minor style fixes.
When creating a slice, make sure to propagate the originating parent.
This is to prevent a buf.parent.parent.(etc) scenario.
Also speed up the constructor by preventing lookup of non-existent
properties by setting them beforehand in the prototype (see
https://github.com/joyent/node/commit/7ce5a31#commitcomment-3332779).
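A sketch of the parent propagation (illustrative, not the actual
slice() source):

  // Always point at the root allocation so chains like
  // buf.slice().slice() never produce buf.parent.parent.
  function propagateParent(source, sliced) {
    sliced.parent = source.parent === undefined ? source : source.parent;
    return sliced;
  }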
While the new Buffer implementation is much faster we still have the
necessity of using Buffer pools. This is undesirable because it may
still lead to unwanted memory retention, but for the time being this is
the best solution.
Because of this re-introduction, and since there is no more SlowBuffer
type, the SlowBuffer method has been re-purposed to return a non-pooled
Buffer instance. This will be helpful for developers to store data for
indeterminate lengths of time without introducing a memory leak.
Another change to Buffer pools was that they are only allocated if the
requested chunk is < poolSize / 2. This was done because allocations are
much quicker now, and it's a better use of the pool.
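Roughly the allocation decision, as a self-contained sketch (poolSize
and the bookkeeping are simplified, not the real constants):

  var poolSize = 8 * 1024;           // illustrative value
  var pool = new Buffer(poolSize);   // shared slab for small Buffers
  var poolOffset = 0;

  function allocate(length) {
    if (length >= poolSize / 2)
      return new Buffer(length);     // large request: non-pooled
    if (poolOffset + length > poolSize) {
      pool = new Buffer(poolSize);   // current pool exhausted, start fresh
      poolOffset = 0;
    }
    var buf = pool.slice(poolOffset, poolOffset + length);
    poolOffset += length;
    return buf;                      // small request: a view into the pool
  }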
Memory allocations are now done through smalloc. The Buffer cc class has
been removed completely, but for backwards compatibility the namespace
has been left as Buffer.
The .parent attribute is only set if the Buffer is a slice of an
allocation, in which case it is set to the alloc object (not a Buffer).
The .offset attribute is now ReadOnly and set to 0, for backwards
compatibility. I'd like to remove it in the future (pre v1.0).
A few alterations have been made to how arguments are either coerced or
thrown. All primitives will now be coerced to their respective values,
and almost all out-of-range index requests will throw.
The indexes that are coerced were left for backwards compatibility. For
example: Buffer slice operates more like Array slice, and coerces
instead of throwing out of range indexes. This may change in the future.
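For example (buffer size chosen arbitrarily), slice coerces while an
indexed read throws:

  var buf = new Buffer(4);
  buf.slice(0, 100);     // end is clamped to 4, like Array slice
  buf.readUInt8(100);    // out-of-range index: throws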
The reason for wanting to throw for out of range indexes is because
giving js access to raw memory has high potential risk. To mitigate that
it's easier to make sure the developer is always quickly alerted to the
fact that their code is attempting to access beyond memory bounds.
Because SlowBuffer will be deprecated, and simply returns a new Buffer
instance, all tests on SlowBuffer have been removed.
Heapdumps will now show usage under "smalloc" instead of "Buffer".
ParseArrayIndex was added to node_internals to support proper uint
argument checking/coercion for external array data indexes.
SlabAllocator had to be updated since handle_ no longer exists.
Previously one could write anywhere in a buffer pool if they
accidentally got their offset wrong, mainly because the cc level checks
only test against the parent slow buffer and not against the js object
properties. So now we check to make sure values won't go beyond bounds
without letting the dev know.
Expand the JSON representation of Buffer to include type information
so that it can be deserialized in JSON.parse() without context.
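A sketch of the round trip this enables (the reviver shown is just an
example, not a shipped API):

  var buf = new Buffer('test');
  var json = JSON.stringify(buf);
  // json is roughly: {"type":"Buffer","data":[116,101,115,116]}

  var copy = JSON.parse(json, function(key, value) {
    return value && value.type === 'Buffer'
      ? new Buffer(value.data)
      : value;
  });
  // copy holds the same bytes as buf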
Fixes #5110.
Fixes #5143.
_charsWritten is an internal property that was constantly written to,
but never read from. So it has been removed.
Removed documentation reference as well.
Checks have been simplified and optimized for most-used cases.
Calling Buffer with another Buffer as the subject will now use the
SlowBuffer Copy method instead of the for loop.
No need to call a separate function for value coercion; just place the
ternary inline.
Move the implementation to C++ land. This is similar to commit 3f65916
but this time for the write() function and the Buffer(s, 'hex')
constructor.
Speeds up the benchmark below about 24x (2.6s vs 1m02s).
var s = 'f';
for (var i = 0; i < 26; ++i) s += s; // 64 MB
Buffer(s, 'hex');
Move the implementation to C++ land. The old JS implementation used
string concatenation, was dog slow and consumed copious amounts of
memory for large buffers. Example:
var buf = Buffer(0x1000000); // 16 MB
buf.toString('hex') // Used 3+ GB of memory.
The new implementation operates in O(n) time and space.
Fixes #4700.
Fix issue where SlowBuffers couldn't be passed as target to Buffer
copy().
Also included checks to see if Argument parameters are defined before
assigning their values. This offered a ~3x performance gain.
Backport of 16bbecc from master branch. Closes #4633.
Changed types of errors thrown to be more indicative of what the error
represents. Also removed a few unnecessary uses of the v8 fully
qualified typename.
Argument checks were simplified by setting all undefined/NaN or out of
bounds values equal to their defaults.
Also, the copy() tests had a flaw: each buffer had the same bit pattern
at the same offset, so even if the copy failed the bit-by-bit comparison
would still have been true. This was fixed by filling each buffer with a
unique value before the copy operations.
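A sketch of that test pattern (sizes and fill values are arbitrary):

  var assert = require('assert');
  var a = new Buffer(16);
  var b = new Buffer(16);
  a.fill(0x61);                   // source gets one pattern
  b.fill(0x62);                   // target gets a different one
  a.copy(b, 0, 0, 16);
  for (var i = 0; i < 16; i++)
    assert.equal(a[i], b[i]);     // meaningful only because b differed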
Reject negative offsets in SlowBuffer::MakeFastBuffer(), as they allow
the creation of buffers that point to arbitrary addresses.
Reported by Trevor Norris.