ClientHelloParser used to contain an 18k buffer that was kept around
for the life of the connection, even though in many situations it was
never needed. It is now deallocated as soon as it is determined to be
no longer needed.
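A minimal sketch of the idea (class internals are illustrative, not
node's actual ones):

    void ClientHelloParser::End() {
      if (state_ == kEnded)
        return;
      state_ = kEnded;
      // Drop the ~18k scratch buffer now instead of carrying it for
      // the remaining lifetime of the connection.
      delete[] data_;
      data_ = nullptr;
    }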
Signed-off-by: Fedor Indutny <fedor@indutny.com>
Fix the following compiler warning on systems where _XOPEN_SOURCE is
defined by default:
../src/node_constants.cc:35:0: warning: "_XOPEN_SOURCE" redefined
#define _XOPEN_SOURCE 500
Move the (re)definition of _XOPEN_SOURCE to the top of the file while
we're here. Commit 00890e4 adds a `#define _XOPEN_SOURCE 500` in order
to make <fcntl.h> expose O_NONBLOCK but it does so after other system
headers have been included. If those headers include <fcntl.h>, then
the #include in node_constants.cc will be a no-op and O_NONBLOCK won't
be visible.
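The pattern, roughly, at the very top of node_constants.cc before any
#include:

    #ifdef _XOPEN_SOURCE
    # undef _XOPEN_SOURCE  // avoid the redefinition warning
    #endif
    #define _XOPEN_SOURCE 500

    #include <fcntl.h>  // now guaranteed to expose O_NONBLOCK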
Signed-off-by: Fedor Indutny <fedor@indutny.com>
A recent change to v8's API now makes it impossible to memcpy to a
v8::ArrayBuffer without causing it to be externalized. This means that
the garbage collector will not automatically free the memory when the
object is collected.
When/if the necessary API is included to allow the above,
Buffer#toArrayBuffer() will be reintroduced.
Signed-off-by: Timothy J Fontaine <tjfontaine@gmail.com>
When process._setupNextTick() was introduced as the means to properly
initialize the mechanism behind process.nextTick(), a chunk of code
that assigned memory to process._tickInfo was left behind. This code is
no longer needed.
Because memcmp() implementations differ across platforms, normalize the
output to return only -1, 0 or 1.
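The normalization amounts to (variable names illustrative):

    int r = memcmp(a_data, b_data, cmp_length);
    // POSIX only guarantees the sign of the result, so clamp it to
    // -1/0/1 before handing it back to JS.
    if (r < 0)
      r = -1;
    else if (r > 0)
      r = 1;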
Signed-off-by: Timothy J Fontaine <tjfontaine@gmail.com>
When uv_spawn fails with an error such as ENOENT, ExitCallback is never
called; the process handle then remains ref'ed and needs to be closed
explicitly.
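A sketch of the fix (wrapper names illustrative):

    int err = uv_spawn(env->event_loop(), &wrap->process_, &options);
    if (err != 0) {
      // uv_spawn() failed synchronously, so ExitCallback will never
      // fire; close the still-ref'ed handle ourselves.
      uv_close(reinterpret_cast<uv_handle_t*>(&wrap->process_), NULL);
    }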
Signed-off-by: Timothy J Fontaine <tjfontaine@gmail.com>
1) ThrowCryptoTypeErrors was not actually used for type-related
errors. Removed it.
2) For AEAD modes, OpenSSL does not set any internal error information
if Final does not complete successfully. Therefore,
"TypeError:error:00000000:lib(0):func(0):reason(0)" would be the error
message. Use a default message for these cases.
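Roughly (the default message shown is illustrative):

    unsigned long err = ERR_get_error();
    if (err == 0) {
      // AEAD Final failed but OpenSSL left the error queue empty, so
      // formatting `err` would produce "error:00000000:...".
      return ThrowError("Unsupported state or unable to authenticate data");
    }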
Signed-off-by: Fedor Indutny <fedor@indutny.com>
compare() works like String.localeCompare such that:
Buffer.compare(a, b) === a.compare(b);
equals() does a native check to see if two buffers are equal.
Signed-off-by: Trevor Norris <trev.norris@gmail.com>
OpenSSL behaves oddly: on the client, `cert_chain` contains the
`peer_certificate`, but on the server it doesn't.
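In OpenSSL terms, sketched:

    X509* peer = SSL_get_peer_certificate(ssl);
    STACK_OF(X509)* chain = SSL_get_peer_cert_chain(ssl);
    // On the client, `chain` already starts with `peer`; on the
    // server it does not, so server-side code must handle the peer
    // certificate separately.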
Signed-off-by: Fedor Indutny <fedor@indutny.com>
Fix an oversight: blksize was not passed to fs.Stats on initialization.
Also add a test to make sure the object property has been set. From now
on, both blksize and blocks will simply be set to undefined on Windows.
When our estimate of the storage size is higher than the actual length
of the decoded data, the destination buffer should be truncated.
Otherwise `Buffer::Length` will give misleading information to the C++
layer.
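A sketch of the fix (following node's internal StringBytes API, details
illustrative):

    char* data = static_cast<char*>(malloc(storage));  // worst-case estimate
    size_t actual = StringBytes::Write(data, storage, string, encoding);
    if (actual < storage) {
      // Truncate so Buffer::Length() reports the decoded size rather
      // than the over-allocated estimate.
      data = static_cast<char*>(realloc(data, actual));
    }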
fix #7365
Signed-off-by: Fedor Indutny <fedor@indutny.com>
The `process.uptime()` interface returns the amount of time the current
process has been running. To achieve this it cached the `uv_uptime`
value at program start and then, on each call to `process.uptime()`,
returned the delta between the two values.
`uv_uptime` is defined as the number of seconds the operating system
has been up since last boot. On sunos this interface uses `kstat`s,
which can be quite expensive to read since doing so requires exclusive
access, but because of the design of `process.uptime()` node *had* to
always call this on start. As a result, if you had many node processes
all starting at the same time, you would suffer lock contention as they
all tried to read kstats.
Instead of using `uv_uptime` to achieve this, we can use the libuv
loop's existing concept of current loop time in the form of `uv_now()`,
which is in fact monotonically increasing and already stored directly
on the loop. By using this value at start, every platform performs at
least one fewer syscall during initialization.
Since the interface to `uv_uptime` is defined as seconds, in the call
to `process.uptime()` we now call `uv_update_time`, get our delta,
divide by 1000 to get seconds, and then convert to an `Integer`. In
0.12 we can move back to `Number::New` instead and not lose precision.
Caveat: on some platforms `uv_uptime` reports time as monotonically
increasing regardless of system hibernation; the `uv_now` interface is
also monotonically increasing but may not reflect time spent in
hibernation.
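A sketch of the new scheme (names illustrative):

    static uint64_t prog_start_time;

    // Once, early during startup:
    prog_start_time = uv_now(uv_default_loop());

    // In the binding behind process.uptime():
    uv_update_time(uv_default_loop());
    uint64_t delta = uv_now(uv_default_loop()) - prog_start_time;
    args.GetReturnValue().Set(static_cast<uint32_t>(delta / 1000));  // seconds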
Introduce a new signature for both the `dgram.createSocket` method and
the `dgram.Socket` constructor:
dgram.createSocket(options, [listener])
Options should contain a `type` property and may contain a `reuseAddr`
property. When `reuseAddr` is `true`, SO_REUSEADDR will be set on the
socket at bind time.
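Under the hood this maps onto libuv's bind flags, roughly:

    // In the UDP wrap's bind path (names illustrative); `addr` is the
    // const struct sockaddr* from the bind() call.
    unsigned int flags = reuse_addr ? UV_UDP_REUSEADDR : 0;
    int err = uv_udp_bind(&wrap->handle_, addr, flags);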
fix #7415
Signed-off-by: Fedor Indutny <fedor@indutny.com>
This prevents segfaults when a native method is reassigned to a
different object (which corrupts args.This()). When unwrapping,
clients should use args.Holder() instead of args.This().
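The unwrapping pattern, sketched (MyObject is illustrative):

    void Method(const FunctionCallbackInfo<Value>& args) {
      // args.Holder() is the object the FunctionTemplate was
      // instantiated on; args.This() is whatever receiver the caller
      // used and may not carry the internal fields at all.
      MyObject* obj = ObjectWrap::Unwrap<MyObject>(args.Holder());
      // ... use obj ...
    }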
Closes #6690.
Signed-off-by: Trevor Norris <trev.norris@gmail.com>
The two biggest changes are that v8::Script::New() has been removed and
that a v8::Script object now has to be explicitly bound to a context if
you want to run it from another context.
We can accommodate both changes without breaking the vm module's public
API or even the internal JS API.
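With the new API, the pattern is roughly (`code` is a `v8::String`,
other names illustrative):

    // Compile once, without binding to a context:
    ScriptCompiler::Source source(code);
    Local<UnboundScript> unbound =
        ScriptCompiler::CompileUnbound(isolate, &source);

    // Later, bind and run in whichever context is current:
    Context::Scope context_scope(target_context);
    Local<Script> script = unbound->BindToCurrentContext();
    script->Run();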
Improve on commit b55c9d6 by not requiring that switches be
comma-separated. This commit makes `./configure --v8-options="--foo --bar"`
work and takes special care to properly escape quotes in the options
string.
By building the fs.Stats object in JS, which is returned by all fs stat
functions, the calls to v8::Object::Set() are removed. This also
includes creating all associated Date objects in JS rather than using
v8::Date::New(). Both changes bring significant performance gains.
Note that the returned value from fs.stat changes slightly for non-POSIX
systems. Whereas before the stats object would be missing blocks and
blksize keys, it now has these keys with undefined as the value.
Signed-off-by: Trevor Norris <trev.norris@gmail.com>
Move `createCredentials` to `tls` module and rename it to
`createSecureContext`. Make it use default values from `tls` module:
`DEFAULT_CIPHERS` and `DEFAULT_ECDH_CURVE`.
fix #7249
Ensure that OpenSSL has enough entropy (at least 256 bits) for its PRNG.
The entropy pool starts out empty and needs to fill up before the PRNG
can be used securely.
OpenSSL normally fills the pool automatically but not when someone
starts generating random numbers before the pool is full: in that case
OpenSSL keeps lowering the entropy estimate to thwart attackers trying
to guess the initial state of the PRNG.
When that happens, we wait until enough entropy is available, something
that normally should never take longer than a few milliseconds.
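The wait boils down to:

    // RAND_status() returns 1 once the PRNG has been seeded with
    // enough entropy; until then, keep prodding OpenSSL to gather more.
    while (RAND_status() == 0)
      RAND_poll();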
Fixes #7338.
The default entropy source is /dev/urandom on UNIX platforms, which is
okay, but we can do better by seeding V8's random number generator from
OpenSSL's entropy pool. On Windows we can certainly do better; on that
platform, V8 seeds the random number generator using only the current
system time.
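Wiring V8 to OpenSSL's pool is a small shim, sketched:

    bool EntropySource(unsigned char* buffer, size_t length) {
      // Let OpenSSL's (seeded) pool produce the bytes V8 asks for.
      return RAND_bytes(buffer, length) != -1;
    }

    // During startup:
    V8::SetEntropySource(EntropySource);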
Fixes #6250.
NB: This is a back-port of commit 7ac2391 from the master branch that
for some reason never got back-ported to the v0.10 branch.
The default on UNIX platforms in v0.10 is different and arguably worse
than it is with master: if no entropy source is provided, V8 3.14 calls
srandom() with a xor of the PID and the current time in microseconds.
That means that on systems with a coarse system clock, the initial
state of the PRNG may be easily guessable.
The situation on Windows is even more dire because there the PRNG is
seeded with only the current time... in milliseconds.
`env.h` is an internal header file and should not be copied or exposed
to users.
Additionally, export convenience `Throw*` methods that take a
`v8::Isolate*` as the first argument.
Fix up the dtrace/etw/systemtap infrastructure after the V8 upgrade in
commit 1c7bf24. The win32 changes are untested but can hardly make
things worse because node doesn't build on windows right now.
Fixes #7313, with some luck.
Don't call DecodeWrite() with a Buffer as its argument because it in
turn calls StringBytes::Write() and that method expects a Local<String>.
"Why then does that function take a Local<Value>?" I hear you ask.
Good question but I don't have the answer. I added a CHECK for good
measure and what do you know, all of a sudden a large number of crypto
tests started failing.
Calling DecodeWrite(BINARY) on a buffer is nonsensical anyway: if you
want the contents of the buffer, just copy out the data, there is no
need to decode it - and that's exactly what this commit does.
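That is, roughly:

    if (Buffer::HasInstance(val)) {
      // Already raw bytes; just copy them out, no decoding involved.
      memcpy(dest, Buffer::Data(val), Buffer::Length(val));
    } else {
      DecodeWrite(dest, dest_len, val, BINARY);
    }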
Fixes a great many instances of the following run-time error in debug
builds:
FATAL ERROR: v8::String::Cast() Could not convert to string
Fix a regression that was introduced in commit ce04c726 after the
upgrade to V8 3.24.
The new weak persistent handle API no longer gives you the original
persistent but still requires that you clear it inside your weak
callback.
Rearrange the code in src/smalloc.cc to keep track of the persistent
handle with the least amount of pain and try hard to share as much
code as possible between the 'just free it' and 'invoke my callback'
versions of the smalloc API.
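With the V8 3.24 API, the shape is roughly (CallbackInfo is an
illustrative struct that stashes the persistent):

    struct CallbackInfo {
      Persistent<Object> persistent;
      // ... user callback and hint live here too ...
    };

    static void WeakCallback(
        const WeakCallbackData<Object, CallbackInfo>& data) {
      CallbackInfo* info = data.GetParameter();
      // The callback no longer receives the persistent itself, but we
      // are still required to clear it; do so via the stashed copy.
      info->persistent.Reset();
      delete info;
    }

    // Registration:
    info->persistent.Reset(isolate, object);
    info->persistent.SetWeak(info, WeakCallback);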
Fixes #7309.