gcc 4.2 on OS X gets confused about the call to node::Buffer::Data().
Fully qualify the function name to help it along.
Fixes the following build error:
    ../../deps/v8/include/v8.h: In function ‘char*
    node::Buffer::Data(v8::Handle<v8::Value>)’:
    ../../deps/v8/include/v8.h:900: error: ‘class v8::Data’
    is not a function,
    ../../src/node_buffer.h:38: error:
    conflict with ‘char* node::Buffer::Data(v8::Handle<v8::Object>)’
    ../../src/node_buffer.cc:94: error:
    in call to ‘Data’
Buffer(<String>) used to pass the string to js, where it would then be
passed back to cpp for processing. Now only the Buffer object
instantiation is done in js and the string is processed in cpp.
A Buffer API that also accepts the encoding has been added.
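For example, both user-facing forms now hand the string straight to cpp:

    var a = new Buffer('hello world');       // default utf8 encoding
    var b = new Buffer('deadbeef', 'hex');   // encoding variant added here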
The old fill would take the char code of the first character and wrap
the value around to fit in the 127 range. Now fill duplicates whatever
string is given through the entirety of the buffer.
Note: there is one known bug when the fill ends on a partial write of a
character outside the ASCII range.
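A quick sketch of the new fill behavior:

    var buf = new Buffer(8);
    buf.fill('ab');
    buf.toString();   // 'abababab', not eight copies of 'a''s char code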
While the new Buffer implementation is much faster, we still need
Buffer pools. This is undesirable because it may still lead to unwanted
memory retention, but for the time being it is the best solution.
Because of this re-introduction, and since there is no longer a
SlowBuffer type, the SlowBuffer method has been re-purposed to return a
non-pooled Buffer instance. This lets developers store data for
indeterminate lengths of time without introducing a memory leak.
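For example, a developer who needs to retain data indefinitely can opt
out of the pool:

    var SlowBuffer = require('buffer').SlowBuffer;
    var kept = new SlowBuffer(1024);   // non-pooled, safe to hold long-term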
Another change to Buffer pools: they are now only used when the
requested chunk is < poolSize / 2. This was done because allocations are
much quicker now, and it makes better use of the pool.
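A minimal sketch of the decision (illustrative only, not the actual
lib/buffer.js internals):

    function usesPool(size) {
      // only small requests are carved out of the shared pool
      return size < Buffer.poolSize / 2;
    }
    usesPool(1024);              // true with the default 8KB pool
    usesPool(Buffer.poolSize);   // false: gets a dedicated allocation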
Memory allocations are now done through smalloc. The Buffer cc class has
been removed completely, but for backwards compatibility the namespace
has been left as Buffer.
The .parent attribute is only set if the Buffer is a slice of an
allocation, in which case it is set to the alloc object (not a Buffer).
The .offset attribute is now a ReadOnly property set to 0, for backwards
compatibility. I'd like to remove it in the future (pre v1.0).
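The observable result should look roughly like this (assuming default
pool settings):

    var small = new Buffer(16);          // small enough to be pooled
    small.offset;                        // 0, ReadOnly, for compatibility
    small.parent;                        // the alloc object, not a Buffer
    var big = new Buffer(1024 * 1024);   // too large to pool
    big.parent;                          // undefined: not a slice of an allocation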
A few alterations have been made to how arguments are either coerced or
thrown. All primitives will now be coerced to their respective values,
and almost all out-of-range index requests will throw.
The indexes that still coerce were left that way for backwards
compatibility. For example: Buffer slice operates more like Array slice,
coercing out-of-range indexes instead of throwing. This may change in
the future.
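A sketch of the described behavior (the exact coercions shown are
inferred from the rules above):

    var buf = new Buffer(4);
    buf.slice(0, 100).length;   // 4: slice clamps out-of-range indexes
    buf.slice('1').length;      // 3: primitive coerced to the number 1
    buf.readUInt8(100);         // throws: reading beyond the bounds is an error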
The reason for wanting to throw on out-of-range indexes is that giving
js access to raw memory carries high potential risk. To mitigate that,
it's easiest to make sure the developer is always alerted quickly when
code attempts to access beyond the memory bounds.
Because SlowBuffer will be deprecated, and simply returns a new Buffer
instance, all tests on SlowBuffer have been removed.
Heapdumps will now show usage under "smalloc" instead of "Buffer".
ParseArrayIndex was added to node_internals to support proper uint
argument checking/coercion for external array data indexes.
SlabAllocator had to be updated since handle_ no longer exists.
If the user knows the allocation is no longer needed then the memory can
be manually released.
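A usage sketch (the binding's alloc signature shown here is an
assumption):

    var smalloc = process.binding('smalloc');
    var obj = smalloc.alloc({}, 1024);   // attach 1KB of external memory
    // ... use obj ...
    smalloc.dispose(obj);                // release now instead of waiting on GC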
Currently this will not ClearWeak the Persistent, so the callback will
still run.
If the user passed a ClearWeak callback and then disposed the object,
the buffer callback argument will be NULL.
smalloc is a simple utility for quickly allocating external memory onto
js objects. It will be used to centralize how memory is managed in
node, and will become the backer for Buffers, so in the future crypto's
SlabBuffer and stream's SlabAllocator will be removed.
Note on the js API: because no arguments are optional, the order of
arguments has been arranged to match their cc counterparts as closely as
possible.
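For example, copyOnto spells out every argument in cc order (a sketch;
signatures assumed as above):

    var smalloc = process.binding('smalloc');
    var src = smalloc.alloc({}, 32);
    var dst = smalloc.alloc({}, 32);
    // copyOnto(source, sourceStart, dest, destStart, copyLength) -- none optional
    smalloc.copyOnto(src, 0, dst, 0, 32);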
Libuv may provide a NULL buffer to the uv_read callback in case of an
error, so asserting a non-NULL buffer here would mean using the API
incorrectly. None of the current DoRead implementations rely on this
constraint, either.
The console module has always been called 'stdio' in the
table-of-contents, but nowhere else, since its name is
'console'. This makes it difficult to find.
Resolves minor discrepancies between android and standard POSIX systems.
In addition, some configure parameters were added, along with a helper
script for android configuration. Ideally, this script should be merged
into the standard configure script.
To build for android, source the android-configure script with an NDK
path:

    source ./android-configure ~/android-ndk-r8d

This will create an android standalone toolchain and export the
necessary environment parameters. After that, build as normal:

    make -j8

After the build, you should have Android-compatible Node.js binaries.
Suppress the following warning:
    ../../src/cares_wrap.cc: In function ‘v8::Handle<v8::Value>
    node::cares_wrap::SetServers(const v8::Arguments&)’:
    ../../src/cares_wrap.cc:1017:5: warning: ‘uv_ret.uv_err_s::code’
    may be used uninitialized in this function [-Wuninitialized]
Split `tls.js` into `_tls_legacy.js`, containing legacy
`createSecurePair` API, and `_tls_wrap.js` containing new code based on
`tls_wrap` binding.
Remove tests that are no longer useful/valid.
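From the public module's perspective the split looks roughly like this
(a sketch, not the literal lib/tls.js):

    // lib/tls.js delegates to the two new internal modules
    exports.createSecurePair = require('_tls_legacy').createSecurePair;
    exports.TLSSocket = require('_tls_wrap').TLSSocket;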
When large strings are used they cause v8's GC to spend a lot more time
cleaning up. In these cases it's much faster to use external string
resources.
UTF8 strings do not use external string resources because v8 only
supports one- and two-byte external strings.
EXTERN_APEX is the size at which v8's GC overhead overtakes performance,
i.e. the point beyond which externalizing wins.
The following table gives, by encoding and the buffer size used to
encode the strings, rough estimates of the percentage of performance
gained from this patch (UTF8 is missing because those strings cannot be
externalized):
    encoding    128KB     1MB      5MB
    ----------------------------------
    ASCII         58%    208%     250%
    HEX           15%     74%      86%
    BASE64        11%     74%      71%
    UCS2           2%    225%     398%
    BINARY      2234%   1728%    2305%
BINARY is so much faster across the board because of using the new v8
WriteOneByte API.
v8 has a new API to write out strings to memory, and it is now used
here. One other change of note is that BINARY encoded strings have a new
implementation, which has improved performance substantially.
Before this commit NodeBIO never shrank, possibly consuming a lot of
memory (depending on the reader's haste).
All buffers between write_head's child and read_head should be
deallocated on read, leaving only space left in write_head and in the
next buffer.
Commit 0bba5902 accidentally (or maybe erroneously) added node_isolate
to src/node.h and src/node_object_wrap.h.
Undo that; said variable is not for public consumption. Add-on authors
should use v8::Isolate::GetCurrent() instead.
I missed that while reviewing. Mea culpa.
Fixes #5639.
This is only relevant for CentOS 6.3 using kernel version 2.6.32.
On other linuxes and darwin, the `read` call gets an ECONNRESET in that
case. On sunos, the `write` call fails with EPIPE.
However, old CentOS will occasionally send an EOF instead of an
ECONNRESET or EPIPE when the client has been destroyed abruptly.
Make sure we don't keep trying to write or read more in that case.
Fixes #5504
However, there is still the question of what libuv should do when it
gets an EOF. Apparently in this case, it will continue trying to read,
which is almost certainly the wrong thing to do.
That should be fixed in libuv, even though this works around the issue.