If an http response has an 'end' handler that throws, then the socket
will never be released back into the pool.
Granted, we do NOT guarantee that throwing will never have adverse
effects on Node internal state. Such a guarantee cannot be reasonably
made in a shared-global mutable-state side-effecty language like
JavaScript. However, in this case, it's a rather trivial patch to
increase our resilience a little bit, so it seems like a win.
There is no semantic change in this case, except that some event
listeners are removed, and the `'free'` event is emitted on nextTick, so
that you can schedule another request which will re-use the same socket.
From the user's point of view, there should be no detectable difference.
Closes #5107
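A minimal sketch of the failure mode, assuming a keep-alive request against a hypothetical host:

    var http = require('http');

    // Keep-alive request; the socket should go back to the agent's pool
    // once the response ends.
    http.get({ host: 'example.org', agent: http.globalAgent }, function(res) {
      res.resume();                    // drain the body
      res.on('end', function() {
        throw new Error('boom');       // thrown from the 'end' handler
      });
    });

    // Previously the throw above meant the socket was never released; now
    // 'free' is emitted on nextTick and a later request can re-use it.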
The tests did not agree with the test comments. The first and second tests
were both testing the !state.reading case. Now the second one tests the
state.reading && state.length case.
Fixes joyent/node#5183
The v0.8 Stream.pipe() method automatically destroyed the destination
stream whenever the src stream closed. However, this caused a lot of
problems, and was removed by popular demand. (Many userland modules
still have a no-op destroy() method just because of this.) It was also
very hazardous because this would be done even if { end: false } was
passed in the pipe options.
In v0.10, we decided that the 'close' event and destroy() method are
application-specific, and pipe() doesn't automatically call destroy().
However, TLS actually depended (silently) on this behavior. So, in this
case, we should just go ahead and destroy the thing when close happens.
Closes #5145
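A sketch of the v0.10 contract described above, with hypothetical file names: pipe() no longer destroys the destination, so an application that wants the old behavior wires it up itself.

    var fs = require('fs');

    var src = fs.createReadStream('in.txt');
    var dst = fs.createWriteStream('out.txt');

    src.pipe(dst, { end: false });

    // Application-specific choice, no longer done by pipe() itself.
    src.on('close', function() {
      dst.destroy();
    });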
In cases where a stream may have data added to the read queue before the
user adds a 'readable' event listener, there is never any indication that
it's time to start reading.
True, there's already data there, which the user would get if they
checked. However, since we use the 'readable' event listener as the signal
to start the flow of data with a read(0) call internally, we ought to
trigger the same effect (i.e., emitting a 'readable' event) even if the
'readable' listener is added after the first emission.
To avoid confusing weirdness, only the *first* 'readable' event listener
is granted this privileged status. After we've started the flow (or,
alerted the consumer that the flow has started) we don't need to start
it again. At that point, it's the consumer's responsibility to consume
the stream.
Closes #5141
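A sketch of the scenario, assuming a stream that only serves data pushed up front:

    var Readable = require('stream').Readable;

    var rs = new Readable();
    rs._read = function() {};     // no-op; we only consume what is pushed below
    rs.push('early data');        // queued before anyone is listening

    setTimeout(function() {
      // First 'readable' listener, added after the data was queued; with
      // this patch it still receives a 'readable' event.
      rs.on('readable', function() {
        console.log(String(rs.read()));
      });
    }, 100);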
An llvm/clang bug on Darwin ia32 makes these tests fail 100% of
the time. Since no one really seems to mind overly much, and we
can't reasonably fix this in node anyway, just accept both types
of NaN for now.
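A hedged sketch (not the actual test) of what "accept both types of NaN" means: only require that the stored bytes encode some IEEE-754 NaN, whatever bit pattern the compiler produced.

    var assert = require('assert');

    var buf = new Buffer(8);
    buf.writeDoubleBE(NaN, 0);

    // Any NaN has an all-ones exponent; the sign and mantissa bits may
    // vary between compilers.
    assert.strictEqual(buf[0] & 0x7f, 0x7f);
    assert.strictEqual(buf[1] & 0xf0, 0xf0);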
Calling `this.pair.encrypted._internallyPendingBytes()` before
handling/resetting error will result in assertion failure:
../src/node_crypto.cc:962: void node::crypto::Connection::ClearError():
Assertion `handle_->Get(String::New("error"))->BooleanValue() == false'
failed.
see #5058
Microsoft's IIS doesn't support it and does not reply with a ServerHello
after receiving a ClientHello that contains it.
A better approach might be to allow opting out of this at runtime from
JavaScript-land, but unfortunately OpenSSL doesn't support that right now.
see #5119
Since _tickCallback and _tickDomainCallback were both called from
MakeCallback, it was possible for a callback that required a domain to be
routed directly to _tickCallback.
The fix was to implement process.usingDomains(). This will set all
applicable functions to their domain counterparts, and set a flag in cc
to let MakeCallback know domain callbacks always need to be checked.
Added a test in its own file. It's important that the test remains isolated.
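A sketch of the behavior being protected: a callback queued with process.nextTick() inside a domain must be routed through the domain-aware tick callback so the domain catches its errors.

    var domain = require('domain');   // using domains opts the process in

    var d = domain.create();
    d.on('error', function(err) {
      console.log('caught by domain:', err.message);
    });

    d.run(function() {
      process.nextTick(function() {
        throw new Error('boom');      // must land in d's 'error' handler
      });
    });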
It's possible to read multiple messages off the parent/child channel.
When that happens, make sure that recvHandle is cleared after emitting
the first message so it doesn't get emitted twice.
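A sketch of the situation, with a hypothetical child script: when a message carrying a handle and a plain message are read off the channel together, the handle must only accompany the first.

    var fork = require('child_process').fork;
    var net = require('net');

    var child = fork('child.js');             // hypothetical child script
    var server = net.createServer();

    server.listen(0, function() {
      child.send({ cmd: 'server' }, server);  // message with a handle
      child.send({ cmd: 'plain' });           // message without one
    });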
Commit f53441a added crypto.getCiphers() as a function that returns the
names of SSL ciphers.
Commit 14a6c4e then added crypto.getHashes(), which returns the names of
digest algorithms, but that creates a subtle inconsistency: the return
values of crypto.getHashes() are valid arguments to crypto.createHash()
but that is not true for crypto.getCiphers() - the returned values are
only valid for SSL/TLS functions.
Rectify that by adding tls.getCiphers() and making crypto.getCiphers()
return proper cipher names.
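A sketch of the distinction: crypto.getHashes() names feed crypto.createHash(), while tls.getCiphers() lists SSL/TLS cipher names.

    var crypto = require('crypto');
    var tls = require('tls');

    // Digest names are valid arguments to crypto.createHash().
    var hashName = crypto.getHashes()[0];
    crypto.createHash(hashName).update('abc').digest('hex');

    // SSL/TLS cipher names live under the tls module.
    console.log(tls.getCiphers().slice(0, 5));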
In process#send() and child_process.ChildProcess#send(), use 'utf8' as
the encoding instead of 'ascii' because 'ascii' mutilates non-ASCII
input. Correctly handle partial character sequences by introducing
a StringDecoder.
Sending over UTF-8 no longer works in v0.10 because the high bit of
each byte is now cleared when converting a Buffer to ASCII. See
commit 96a314b for details.
Fixes #4999 and #5011.
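A sketch of the effect, with a hypothetical child script that echoes messages back:

    var fork = require('child_process').fork;

    var child = fork('child.js');      // hypothetical echo script

    child.send({ text: 'påské' });     // high-bit characters now survive
    child.on('message', function(msg) {
      console.log(msg.text);           // no longer mangled by ASCII encoding
    });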
Commit 8632af3 ("tools: update gyp to r1601") broke the Windows build.
Older versions of GYP link to kernel32.lib, user32.lib, etc. but that
was changed in r1584. See https://codereview.chromium.org/12256017
Fix the build by explicitly linking to the required libraries.
The EncIn, EncOut, ClearIn & ClearOut functions are victims of some code
copy + pasting. A common line copied to all of them is:
`if (off >= buffer_length) { ...`
448e0f43 corrected ClearIn's check from `>=` to `>`, but left the others
unchanged (with an incorrect bounds check). However, if you look down at
the very next bounds check you'll see:
`if (off + len > buffer_length) { ...`
So the check is actually obviated by the next line, and should be
removed.
This fixes an issue where writing a zero-length buffer to an encrypted
pair's *encrypted* stream would cause a crash.
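A hedged sketch of the user-visible trigger, using the legacy SecurePair API:

    var tls = require('tls');

    var pair = tls.createSecurePair();      // default credentials
    pair.encrypted.write(new Buffer(0));    // zero-length write used to crash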
Increase the number of bits by 1 by making Flags unsigned.
BUG=chromium:211741
Review URL: https://chromiumcodereview.appspot.com/12886008
This is a back-port of commits 13964 and 13988 addressing CVE-2013-2632.
Throw a TypeError if size > 0x3fffffff. Avoids the following V8 fatal
error:
FATAL ERROR: v8::Object::SetIndexedPropertiesToExternalArrayData()
length exceeds max acceptable value
Fixes #5126.
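A sketch of the new behavior:

    var assert = require('assert');

    assert.throws(function() {
      new Buffer(0x3fffffff + 1);   // one byte over the limit
    }, TypeError);

    // new Buffer(0x3fffffff) is still accepted, subject to available memory.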
The stall is exposed in the test, though the test itself asserts before
it stalls.
The test is constructed to replicate the stalling state of a complex
PassThrough use case since I was not able to reliably trigger the stall.
Some of the preconditions for triggering the stall are:
* rs.length >= rs.highWaterMark
* !rs.needReadable
* _transform() handler that can return empty transforms
* multiple sync write() calls
Combined, these can trigger a case where rs.reading is not cleared even
though further progress requires it. The fix is to always clear rs.reading.
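A hedged sketch (not the actual test) combining the preconditions above: a Transform that can produce empty output plus several synchronous writes.

    var Transform = require('stream').Transform;

    var t = new Transform({ highWaterMark: 4 });
    t._transform = function(chunk, enc, cb) {
      // An "empty transform": sometimes this input produces no output.
      if (chunk[0] % 2 === 0) this.push(chunk);
      cb();
    };

    // Multiple synchronous writes before anything is read.
    t.write(new Buffer([0]));
    t.write(new Buffer([1]));
    t.write(new Buffer([2]));

    t.on('readable', function() {
      var c;
      while ((c = t.read()) !== null) console.log(c);
    });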
Before this patch, calling `socket.setTimeout(0xffffffff)` would result in
a signed int32 overflow in C++, which caused an assertion error:
Assertion failed: (timeout >= -1), function uv__io_poll, file
../deps/uv/src/unix/kqueue.c, line 121.
see #5101
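A sketch of the overflowing call, against a hypothetical endpoint:

    var net = require('net');

    var socket = net.connect(80, 'example.org');
    socket.setTimeout(0xffffffff);   // used to overflow a signed int32 in C++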