When bad data is written to an EncryptedStream, it first reaches the
ClientHello parser and only gets to OpenSSL after the parser refuses
it. But the ClientHello parser has a limited buffer, so a write may
return `bytes_written` < `incoming_bytes`, which never happens when
writing directly to OpenSSL.
After such an error the ClientHello parser disables itself and passes
all further data through to OpenSSL, so simply retrying the write
pushes the remaining bytes into OpenSSL and lets it handle them.
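A minimal sketch of that retry, assuming a low-level write() that
reports how many bytes were consumed (the names here are illustrative,
not the actual internals):

    // Illustrative only: retry a partial write so the remainder goes
    // straight to OpenSSL once the parser has disabled itself.
    function writeRetrying(encrypted, data) {
      var written = encrypted.write(data);  // may consume < data.length
      if (written < data.length) {
        // The parser refused the rest and turned itself off; a second
        // write() passes the remaining bytes through to OpenSSL.
        encrypted.write(data.slice(written));
      }
    }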
Otherwise, writing an empty string causes the whole program to grind to
a halt when piping data into http messages.
This wasn't as much of a problem (though it WAS a bug) in 0.8 and
before, because our hyperactive 'drain' behavior would mean that some
*previous* write() would probably have a pending drain event, and cause
things to start moving again.
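A hypothetical repro of the hang (server setup and names are
illustrative, not the actual regression test):

    // Hypothetical repro: a zero-length write into an http response
    // returned false without a matching 'drain', so the pipe stalled.
    var http = require('http');

    http.createServer(function(req, res) {
      res.write('');   // empty-string write
      req.pipe(res);   // used to grind to a halt waiting for 'drain'
    }).listen(8000);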
This commit adds an optimization to the HTTP client that makes it
possible to:
* Pack the headers and the first chunk of the request body into a
single write().
* Pack the chunk header and the chunk itself into a single write().
Because only one write() system call is issued instead of several,
the chances of data ending up in a single TCP packet are phenomenally
higher: the benchmark with `type=buf size=32` jumps from 50 req/s to
7,500 req/s, a 150-fold increase.
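Conceptually the change amounts to this (a simplified sketch, not the
actual http client code):

    // Simplified sketch: one write() instead of two makes it far more
    // likely that the bytes share a single TCP packet.
    function sendHead(socket, headers, firstChunk) {
      socket.write(headers + firstChunk);  // headers + first body chunk
    }

    function sendChunk(socket, chunk) {
      // chunk header (hex size + CRLF) packed with the chunk itself
      socket.write(chunk.length.toString(16) + '\r\n' + chunk + '\r\n');
    }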
This commit removes the check from e4b716ef that pushes binary-encoded
strings into the slow path. The commit log mentions that:

    We were assuming that any string can be concatenated safely to
    CRLF. However, for hex, base64, or binary encoded writes, this
    is not the case, and results in sending the incorrect response.

For hex and base64 strings that's certainly true, but binary strings
are 'das Ding an sich': string.length is the same before and after
decoding.
Fixes #5528.
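The length invariant is easy to demonstrate (illustrative):

    // 'binary' is one byte per character, so length is preserved;
    // hex and base64 shrink when decoded.
    var s = 'f00ba7';
    console.log(s.length);                       // 6
    console.log(Buffer.byteLength(s, 'binary')); // 6 - unchanged
    console.log(Buffer.byteLength(s, 'hex'));    // 3 - half the chars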
Consider a user on a Mac who wants to cross-compile for a Linux ARM
device:

    ./configure --dest-cpu=arm --dest-os=linux
Before this patch, DTrace probes, for example, would incorrectly be
enabled because the configure script was running on a macOS machine,
even though we're trying to compile a binary for `linux`.
With this patch, the `--dest-os` flag is respected throughout the configure
script, thus leaving DTrace probes disabled in this cross-compiling scenario.
When an internal API needs a timeout, it should use
timers._unrefActive(), since that won't hold the event loop open. This
solves the problem where the socket handle had been unref'd but the
socket's timeout still kept the loop alive.
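For internal callers, usage looks roughly like this (_unrefActive is an
internal, undocumented API; the item shape shown is an assumption):

    var timers = require('timers');

    // An "item" carries its timeout and callback; _unrefActive()
    // schedules it on an unref'd timer that won't hold the loop open.
    var item = {
      _idleTimeout: 5000,
      _onTimeout: function() {
        // runs after 5s of inactivity without keeping the process alive
      }
    };
    timers._unrefActive(item);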
Previously one could write anywhere in a buffer pool by accidentally
getting the offset wrong, mainly because the C++-level checks only test
against the parent slow buffer and not against the JS object's
properties. So now we check that values stay within bounds and let the
dev know when they don't.
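For example, an out-of-range write into a pooled buffer now fails
loudly instead of scribbling on a neighboring slice (illustrative):

    // Small buffers are carved out of a shared pool; a bad offset used
    // to land silently in another buffer's slice.
    var buf = new Buffer(32);           // pooled allocation
    buf.writeUInt32LE(0xdeadbeef, 0);   // fine: fits within the buffer
    buf.writeUInt32LE(0xdeadbeef, 30);  // now throws instead of writing
                                        // past the end of the js object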
Because of variations between base64 implementations, it has been
decided to strip all padding from the end of a base64 string and
calculate its decoded size from what remains.
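In other words, the decoded size follows from the unpadded length alone
(a sketch of the rule):

    // Strip trailing '=' padding, then every 4 base64 chars yield
    // 3 bytes (rounded down for a trailing partial group).
    function base64Size(str) {
      var unpadded = str.replace(/=+$/, '');
      return Math.floor(unpadded.length * 3 / 4);
    }

    base64Size('aGVsbG8=');  // 5 ("hello")
    base64Size('aGVsbG8');   // 5 - same result with padding stripped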
The default encoding is 'buffer'. When the input is a string, treat it
as 'binary'. Fixes the following assertion:
    node: ../src/string_bytes.cc:309: static size_t
    node::StringBytes::StorageSize(v8::Handle<v8::Value>, node::encoding):
    Assertion `0 && "buffer encoding specified but string provided"'
    failed.
Introduced in 64fc34b2.
Fixes #5482.
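The rule itself is tiny; a hypothetical JS rendering of the check:

    // 'buffer' only makes sense when the input actually is a Buffer;
    // for a string, fall back to 'binary' instead of asserting.
    function resolveEncoding(input, enc) {
      if (typeof input === 'string' && enc === 'buffer') return 'binary';
      return enc;
    }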
Test case:
    var t = setInterval(function() {}, 1);
    process.nextTick(t.unref);
Output:
    Assertion failed: (args.Holder()->InternalFieldCount() > 0),
    function Unref, file ../src/handle_wrap.cc, line 78.
setInterval() returns a binding layer object. Make it stop doing that:
wrap the raw process.binding('timer_wrap').Timer object in a Timeout
object instead.
Fixes #4261.
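A simplified sketch of the idea (the actual Timeout object carries more
state than shown here):

    // The raw Timer handle stays private; unref()/ref() forward to it,
    // and the guard makes a detached `t.unref` call a harmless no-op
    // instead of an assertion in handle_wrap.cc.
    function Timeout(after) {
      var Timer = process.binding('timer_wrap').Timer;
      this._handle = new Timer();
      this._idleTimeout = after;
    }

    Timeout.prototype.unref = function() {
      if (this._handle) this._handle.unref();
    };

    Timeout.prototype.ref = function() {
      if (this._handle) this._handle.ref();
    };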
This also templatizes the Buffer::*Slice functions, and the template
function probably cannot be safely used outside of Node. However, it
also SHOULD not be used outside of Node, so this is arguably a feature
as well as a caveat.
Pretty much everything assumes strings to be utf-8, but crypto
traditionally used binary strings, so we need to keep the default that
way until most users migrate away from that pattern.
If there is an encoding and we do `stream.push(chunk, enc)`, and the
encoding argument matches the stated encoding, then we're converting
from a string to a buffer and then back to a string. Of course, this is
a completely pointless bit of work, so it's best to avoid it when we
know we can do so safely.
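Schematically, the fast path is just (a simplified sketch, not the
actual readable-stream source):

    // If the string is already in the stream's target encoding, keep
    // it; decoding to a Buffer and re-encoding would round-trip to the
    // same string anyway.
    function toChunk(state, chunk, enc) {
      if (typeof chunk === 'string' && enc === state.encoding)
        return chunk;                  // skip string -> Buffer -> string
      return new Buffer(chunk, enc);   // otherwise decode as before
    }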