fs.truncate() and its synchronous sibling are implemented in terms of
open() + ftruncate(). Unfortunately, they opened the target file with
mode 'w', a.k.a. 'write-only and create or truncate at open'.
The subsequent call to ftruncate() then moved the end-of-file pointer
from zero to the requested offset with the net result of a file that's
neatly truncated at the right offset and filled with zero bytes only.
This bug was introduced in commit 168a5557, but in fairness, before
that commit fs.truncate() worked like fs.ftruncate(), so it seems we
never had a working fs.truncate() until now.
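A minimal sketch of the corrected approach, in terms of the public fs
API (the actual implementation uses the same open() + ftruncate() pair
internally):
var fs = require('fs');
function truncate(path, len, callback) {
  // Open with 'r+' so existing contents are preserved; 'w' would
  // truncate the file to zero bytes before ftruncate() ever runs.
  fs.open(path, 'r+', function(err, fd) {
    if (err) return callback(err);
    fs.ftruncate(fd, len, function(err) {
      fs.close(fd, function(err2) {
        callback(err || err2);
      });
    });
  });
}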
Fixes #6233.
Otherwise the data ends up "on the wire" twice, and
switching between consuming the stream using `ondata`
vs. `read()` would yield duplicate data, which was bad.
The NPN protocols were set on `require('tls')` or the `global` object
instead of being a local property. This led to strange persistence of
NPN protocols, and sometimes to incorrect protocol selection (when no
NPN protocols were passed in the client options).
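A hypothetical before/after illustration (property names invented):
// Broken: the selection leaks onto shared state and persists
// across connections.
require('tls').npnProtocols = options.NPNProtocols;
// Fixed: keep it as a local property of the connection being set up.
this.npnProtocols = options.NPNProtocols;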
Fix #6168
Since the encoding is no longer relevant once the data is decoded to a
Buffer, it is confusing and incorrect to pass the encoding as 'utf8' or
whatever in those cases.
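For illustration (not the literal call sites):
var buf = new Buffer(data, 'utf8');
stream.push(buf);          // correct: a Buffer carries no encoding
stream.push(buf, 'utf8');  // confusing and incorrect: already decoded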
Closes #6119
`maybeInitFinished()` can emit the 'secure' event which
in turn destroys the connection in case of authentication
failure and sets `this.pair.ssl` to `null`.
If such a condition arises after a non-empty read, the loop will
continue and `clearOut` will be called on `null` instead of a
`crypto::Connection` instance, resulting in the following assertion:
ERROR: Error: Hostname/IP doesn't match certificate's altnames
Assertion failed: handle->InternalFieldCount() > 0
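A sketch of the guard (illustrative; the real loop lives in the legacy
tls code):
do {
  rv = this.pair.ssl.clearOut(pool, offset, length);
  // ... consume rv bytes; the 'secure' event may fire here and null
  // out this.pair.ssl on authentication failure ...
} while (rv > 0 && this.pair.ssl !== null);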
Fix #5756
In this situation:
writable.on('error', handler);
readable.pipe(writable);
writable.removeListener('error', handler);
writable.emit('error', new Error('boom'));
there is actually no error handler, but it doesn't throw, because of
the fix for stream.once('error', handler) in 23d92ec.
Note that simply reverting that change is not valid either, because
otherwise this will emit twice, being handled the first time, and then
throwing the second:
writable.once('error', handler);
readable.pipe(writable);
writable.emit('error', new Error('boom'));
Fix this with a horrible hack that makes the stream pipe onerror
handler get added before any other userland handlers, so that our
handler is not affected by adding or removing userland handlers.
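A hypothetical helper showing the idea (there is no public prepend API
in this branch, so the hack reorders the listener list):
function prependErrorListener(emitter, handler) {
  var existing = emitter.listeners('error').slice();
  emitter.removeAllListeners('error');
  emitter.on('error', handler);  // ours always runs first
  for (var i = 0; i < existing.length; i++)
    emitter.on('error', existing[i]);
}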
Closes #6007.
Add range checks for the offset, length and port arguments to
dgram.Socket#send(). Fixes the following assertion:
node: ../../src/udp_wrap.cc:264: static v8::Handle<v8::Value>
node::UDPWrap::DoSend(const v8::Arguments&, int): Assertion
`offset < Buffer::Length(buffer_obj)' failed.
And:
node: ../../src/udp_wrap.cc:265: static v8::Handle<v8::Value>
node::UDPWrap::DoSend(const v8::Arguments&, int): Assertion
`length <= Buffer::Length(buffer_obj) - offset' failed.
Interestingly enough, a negative port number was accepted until now but
silently ignored. (In other words, it would send the datagram to a
random port.)
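The added checks amount to something like this (paraphrased):
if (offset >= buffer.length)
  throw new Error('Offset into buffer too large');
if (offset + length > buffer.length)
  throw new Error('Offset + length beyond buffer length');
if (port <= 0 || port > 65535)
  throw new Error('Port should be > 0 and < 65536');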
This commit exposed a bug in the simple/test-dgram-close test which
has also been fixed.
This is a back-port of commit 41ec6d0 from the master branch.
Fixes #6025.
On windows, libuv will immediately make a `ReadConsole` call (in the
thread pool) when a 'flowing' `uv_tty_t` handle is switched to
line-buffered mode. That causes an immediate issue for some users,
since libuv can't cancel the `ReadConsole` operation on Windows 8 /
Server 2012 and up if the program switches back to raw mode later.
But even if this gets fixed in libuv at some point, it's better to
avoid the overhead of starting work in the thread pool and immediately
cancelling it after that.
See also f34f1e3, where the same change was made for the opposite
flow, i.e. resume() was moved after _setRawMode(true).
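Illustrative ordering (the mirror image of f34f1e3; method names as in
the readline internals):
// Stop the flow first, then leave raw mode, so the handle is never
// 'flowing' while line-buffered and no ReadConsole call is started.
stream.pause();
stream._setRawMode(false);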
Fixes #5927
This is a backport of dfb0461 (see #5930) to the v0.10 branch.
If an error listener is added to a stream using once() before it is
piped, it is invoked and removed during pipe(), but before pipe() sees
it, which causes the error to be emitted again.
Fixes #4155, #4978
Before this commit, events were set to undefined rather than deleted
from the EventEmitter's backing dictionary for performance reasons:
`delete obj.key` causes a transition of the dictionary's hidden class
and that can be costly.
Unfortunately, that introduces a memory leak when many events are added
and then removed again. The strings containing the event names are never
reclaimed by the garbage collector because they remain part of the
dictionary.
That's why this commit makes EventEmitter delete events again. This
effectively reverts commit 0397223.
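In effect (simplified):
// Before: the key stays in the dictionary, pinning the name string.
this._events[type] = undefined;
// After: the GC can reclaim the key, at the cost of a possible
// hidden-class transition.
delete this._events[type];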
Fixes #5970.
Avoid a costly buffer-to-string operation. Instead, allocate a new
buffer, copy the chunk header and data into it and send that.
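A sketch of the idea (variable names invented):
var CRLF = '\r\n';
// One buffer, one write(): chunk-size header, payload, trailing CRLF.
var header = new Buffer(data.length.toString(16) + CRLF, 'ascii');
var out = new Buffer(header.length + data.length + 2);
header.copy(out, 0);
data.copy(out, header.length);
out.write(CRLF, header.length + data.length);
socket.write(out);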
The speed difference is negligible on small payloads but it really
shines with larger (10+ kB) chunks. benchmark/http/end-vs-write-end
with 64 kB chunks gives 45-50% higher throughput. With 1 MB chunks,
the difference is a staggering 590%.
Of course, mileage will vary with real workloads and networks, but
this commit should have a positive impact on CPU and memory
consumption.
Big kudos to Wyatt Preul (@wpreul) for reporting the issue and providing
the initial patch.
Fixes #5941 and #5944.
When using url.parse(), path and pathname usually return '/' when
there is no path available. However, when the protocol contains
non-lowercase letters and the input string does not have a trailing
slash, both path and pathname will be undefined.
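Illustrative repro:
var url = require('url');
url.parse('HTTP://example.com').pathname;  // undefined (the bug)
url.parse('http://example.com').pathname;  // '/'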
A pooled https agent may get a Connection: close, but never finish
destroying the socket, as the prior request had already emitted finish,
likely from a pipe.
Since the socket is not marked as destroyed, it may get reused by the
agent pool and result in an ECONNRESET.
Re: #5712, #5739
There was previously up to a one-second delay when exiting node right
after an http request/response, due to the utcDate() function doing a
setTimeout to update the cached date/time.
Fixing this should increase the performance of our http tests.
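A sketch of the cached date with an unref'd timer (simplified):
var dateCache;
function utcDate() {
  if (!dateCache) {
    var d = new Date();
    dateCache = d.toUTCString();
    var t = setTimeout(function() { dateCache = undefined; },
                       1000 - d.getMilliseconds());
    t.unref();  // the cache refresh must not keep the process alive
  }
  return dateCache;
}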
This reverts commit a40133d10c.
Unfortunately, this breaks socket.io. Even though it's not strictly an
API change, it is too subtle and in too brittle an area of node to be
done in a stable branch.
Conflicts:
doc/api/http.markdown
Normalize the encoding in getEncoding() before using it. Fixes an
"AssertionError: Cannot change encoding" exception when the caller
mixes "utf8" and "utf-8".
Fixes #5655.
This is only relevant for CentOS 6.3 using kernel version 2.6.32.
On other Linuxes and Darwin, the `read` call gets an ECONNRESET in
that case. On SunOS, the `write` call fails with EPIPE.
However, old CentOS will occasionally send an EOF instead of an
ECONNRESET or EPIPE when the client has been destroyed abruptly.
Make sure we don't keep trying to write or read more in that case.
Fixes #5504
However, there is still the question of what libuv should do when it
gets an EOF. Apparently in this case, it will continue trying to read,
which is almost certainly the wrong thing to do.
That should be fixed in libuv, even though this works around the issue.
In cases where there are multiple @-chars in a url, Node currently
parses the hostname and auth sections differently than web browsers.
This part of the bug is serious and should be landed in v0.10 and
also ported to v0.8, with releases made as soon as possible.
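For example (matching the browser behavior this commit adopts):
var url = require('url');
url.parse('http://a@b@c/').auth;      // should be 'a@b'
url.parse('http://a@b@c/').hostname;  // should be 'c'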
The less serious issue is that there are many other sorts of malformed
urls which Node either accepts when it should reject, or interprets
differently than web browsers. For example, `http://a.com*foo` is
interpreted by Node like `http://a.com/*foo` when web browsers treat
this as `http://a.com%3Bfoo/`.
In general, *only* the `hostEndingChars` should be the characters that
delimit the host portion of the URL. Most of the current `nonHostChars`
that appear in the hostname should be escaped, but some of them (such as
`;` and `%` when it does not introduce a hex pair) should raise an
error.
We need to have a broader discussion about whether it's best to throw in
these cases, and potentially break extant programs, or return an object
that has every field set to `null` so that any attempt to read the
hostname/auth/etc. will appear to be empty.
In some cases, the http CONNECT/Upgrade API is unshifting an empty
bodyHead buffer onto the socket.
Normally, stream.unshift(chunk) does not set state.reading=false.
However, this check was not being done for the case when the chunk was
empty (either `''` or `Buffer(0)`), and as a result, it was causing the
socket to think that a read had completed, and to stop providing data.
This bug is not limited to http or web sockets, but rather would affect
any parser that unshifts data back onto the source stream without being
very careful to never unshift an empty chunk. Since the intent of
unshift is to *not* change the state.reading property, this is a bug.
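Illustrative repro via the CONNECT API (handler names invented):
req.on('connect', function(res, socket, bodyHead) {
  // bodyHead may be an empty Buffer; unshifting it must not flip
  // state.reading to false.
  socket.unshift(bodyHead);
  socket.on('data', ondata);  // before the fix, 'data' never fired
});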
Fixes #5557
Fixes LearnBoost/socket.io#1242
Before this, entering something like:
> JSON.parse('066');
resulted in the "..." prompt instead of displaying the expected
"SyntaxError: Unexpected number".
1. Emit `sslOutEnd` only when `_internallyPendingBytes() === 0`.
2. Read before checking `._halfRead`, otherwise we'll see only the
previous value and will invoke the `._write` callback improperly.
3. Wait for both `end` and `finish` events in `.destroySoon`.
4. Unpipe encrypted stream from socket to prevent write after destroy.
A stream's `._write()` callback should be invoked only after its
opposite stream has finished processing incoming data; otherwise the
`finish` event fires too early and the connection might be closed while
there's still data to send to the client.
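A sketch of item 3 (simplified; `self` stands in for the cleartext
stream inside destroySoon):
// Tear down only after both directions are done.
var waiting = 2;
function done() {
  if (--waiting === 0) self.destroy();
}
self.once('end', done);     // remote side finished
self.once('finish', done);  // our writes are flushed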
see #5544
Quote from the SSL_shutdown man page:

  The output of SSL_get_error(3) may be misleading, as an erroneous
  SSL_ERROR_SYSCALL may be flagged even though no error occurred.
Also, handle all other errors to prevent an assertion in
`ClearError()`.
When writing bad data to an EncryptedStream, it will first reach the
ClientHello parser and, only after the parser refuses it, OpenSSL.
But the ClientHello parser has a limited buffer, and therefore a write
could return `bytes_written` < `incoming_bytes`, which never happens
when working with OpenSSL directly.
After such errors the ClientHello parser disables itself and will
pass all data through to OpenSSL. So just trying to write the data one
more time will push the rest into OpenSSL and let it handle it.
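A sketch of the retry (byte-count semantics of the internal writer, not
the public stream.write() API; names invented):
function writeAll(conn, data) {
  var written = conn.write(data);
  if (written < data.length) {
    // The ClientHello parser took a short write and disabled itself;
    // the second write goes straight into OpenSSL.
    written += conn.write(data.slice(written));
  }
  return written;
}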
Otherwise, writing an empty string causes the whole program to grind to
a halt when piping data into http messages.
This wasn't as much of a problem (though it WAS a bug) in 0.8 and
before, because our hyperactive 'drain' behavior would mean that some
*previous* write() would probably have a pending drain event, and cause
things to start moving again.
This commit adds an optimization to the HTTP client that makes it
possible to:
* Pack the headers and the first chunk of the request body into a
single write().
* Pack the chunk header and the chunk itself into a single write().
Because only one write() system call is issued instead of several,
the chances of data ending up in a single TCP packet are phenomenally
higher: the benchmark with `type=buf size=32` jumps from 50 req/s to
7,500 req/s, a 150-fold increase.
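The observable effect, illustrated (assumes a server on
localhost:8000):
var http = require('http');
var body = new Buffer(32);  // type=buf size=32, as in the benchmark
var req = http.request({ method: 'POST', host: 'localhost', port: 8000 });
req.write(body);  // headers + first chunk leave in a single write()
req.end();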
This commit removes the check from e4b716ef that pushes binary encoded
strings into the slow path. The commit log mentions that:
We were assuming that any string can be concatenated safely to
CRLF. However, for hex, base64, or binary encoded writes, this
is not the case, and results in sending the incorrect response.
For hex and base64 strings that's certainly true, but binary strings
are 'the thing in itself': string.length is the same before and after
decoding.
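For illustration:
var s = '\u00ffabc';
new Buffer(s, 'binary').length;          // 4 === s.length
new Buffer('abcd', 'hex').length;        // 2: length halves
new Buffer('YWJjZA==', 'base64').length; // 4, from an 8-char string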
Fixes #5528.
When an internal API needs a timeout, it should use
timers._unrefActive, since that won't hold the loop open. This solves
the problem where you might have unref'd the socket handle but the
timeout for the socket was still active.
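The internal pattern looks roughly like this (timers.enroll() is the
public half; _unrefActive() is internal):
var timers = require('timers');
function setUnrefTimeout(handle, msecs) {
  timers.enroll(handle, msecs);  // record the idle timeout
  timers._unrefActive(handle);   // arm it without holding the loop
}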
Previously one could write anywhere in a buffer pool if they
accidentally got their offset wrong, mainly because the C++-level
checks only test against the parent slow buffer and not against the JS
object's properties. So now we verify the values and raise an error
instead of silently letting writes go beyond the buffer's bounds.
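A sketch of the JS-side guard (error messages illustrative):
function checkBounds(buffer, offset, length) {
  if (offset < 0 || offset > buffer.length)
    throw new RangeError('offset is out of bounds');
  if (offset + length > buffer.length)
    throw new RangeError('length extends beyond buffer');
}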
Test case:
var t = setInterval(function() {}, 1);
process.nextTick(t.unref);
Output:
Assertion failed: (args.Holder()->InternalFieldCount() > 0),
function Unref, file ../src/handle_wrap.cc, line 78.
setInterval() returns a binding-layer object. Make it stop doing that;
instead, wrap the raw process.binding('timer_wrap').Timer object in a
Timeout object.
Fixes #4261.
Pretty much everything assumes strings to be utf-8, but crypto
traditionally used binary strings, so we need to keep the default
that way until most users get off of that pattern.
If there is an encoding, and we do 'stream.push(chunk, enc)', and the
encoding argument matches the stated encoding, then we're converting
from a string to a buffer and then back to a string. Of course, this
is a
completely pointless bit of work, so it's best to avoid it when we know
that we can do so safely.
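A simplified sketch of the skip (only convert when the declared
encoding differs from the stream's):
if (typeof chunk === 'string' && encoding !== state.encoding) {
  chunk = new Buffer(chunk, encoding);
  encoding = '';
}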