In cases where there are multiple @-chars in a url, Node currently
parses the hostname and auth sections differently than web browsers.
This part of the bug is serious: the fix should land in v0.10, be
ported to v0.8, and be released as soon as possible.
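For example, given more than one @, browsers split auth and host at the
*last* @. A sketch of the browser-consistent result (expected values,
not necessarily what Node prints today):

    // Illustrative only: everything up to the last @ is auth, and the
    // host starts after it.
    var url = require('url');
    var p = url.parse('http://user@proxy@example.com/');
    // Expected, matching browsers:
    //   p.auth     === 'user@proxy'
    //   p.hostname === 'example.com'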
The less serious issue is that there are many other sorts of malformed
urls which Node either accepts when it should reject, or interprets
differently than web browsers. For example, `http://a.com;foo` is
interpreted by Node like `http://a.com/;foo`, whereas web browsers treat
it as `http://a.com%3Bfoo/`.
In general, *only* the `hostEndingChars` should delimit the host
portion of the URL. Most of the current `nonHostChars`
that appear in the hostname should be escaped, but some of them (such as
`;` and `%` when it does not introduce a hex pair) should raise an
error.
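A sketch of the stricter rule (not the shipped code; `hostEndingChars`
is assumed to be `/`, `?` and `#` as in lib/url.js):

    // Rough sketch: the host runs until the first host-ending char.
    // Invalid characters inside it would then be escaped, or, for the
    // cases above (';', bare '%'), rejected with an error.
    var hostEndingChars = ['/', '?', '#'];
    function splitHost(rest) {
      var hostEnd = rest.length;
      for (var i = 0; i < rest.length; i++) {
        if (hostEndingChars.indexOf(rest[i]) !== -1) {
          hostEnd = i;
          break;
        }
      }
      return { host: rest.slice(0, hostEnd), rest: rest.slice(hostEnd) };
    }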
We need to have a broader discussion about whether it's best to throw in
these cases, and potentially break extant programs, or return an object
with every field set to `null`, so that any attempt to read the
hostname/auth/etc. simply comes back empty.
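The second option, sketched as a hypothetical wrapper (the field list is
assumed from the usual parsed-url shape):

    // Hypothetical: on invalid input, hand back a url object with every
    // field nulled instead of letting the exception escape.
    var url = require('url');
    function parseOrNull(u) {
      try {
        return url.parse(u); // assumes parse() would throw on bad input
      } catch (er) {
        return { protocol: null, auth: null, hostname: null, host: null,
                 port: null, pathname: null, search: null, query: null,
                 hash: null, href: null };
      }
    }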
In some cases, the http CONNECT/Upgrade API is unshifting an empty
bodyHead buffer onto the socket.
Normally, stream.unshift(chunk) does not set state.reading=false.
However, that guard was skipped when the chunk was empty (either `''` or
`Buffer(0)`), which made the socket think that a read had completed and
stop providing data.
This bug is not limited to http or web sockets, but rather would affect
any parser that unshifts data back onto the source stream without being
very careful to never unshift an empty chunk. Since the intent of
unshift is to *not* change the state.reading property, this is a bug.
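The affected pattern looks roughly like this (`headLength` and `onData`
are made up for illustration):

    // Hypothetical parser: put unconsumed bytes back on the source.
    function headLength(buf) {
      for (var i = 0; i < buf.length; i++)
        if (buf[i] === 0x0a) return i + 1; // consume through first LF
      return buf.length;
    }

    function onData(source, chunk) {
      var rest = chunk.slice(headLength(chunk));
      // If the head ends exactly at the chunk boundary, rest is empty.
      // Unshifting that empty chunk made the stream think a read had
      // completed; the workaround was to skip it:
      if (rest.length > 0)
        source.unshift(rest);
    }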
Fixes #5557
Fixes LearnBoost/socket.io#1242
Remove the need to start and stop the uv_idle spinner between calls to
MakeCallback. The one place where the tick processor needs to be kicked
is where a user catches uncaughtException. For that we'll now use
setImmediate, which accomplishes the same task.
maxTickDepth checks have been removed for domains and replaced with a
flag that records whether the last callback threw. If it did, execution
of the remaining tickQueue is deferred to the spinner.
This is to prevent domains from entering a continuous loop when an error
callback also throws an error.
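A toy model of the new behavior (simplified; not the actual
implementation):

    // If a callback throws, defer the rest of the queue via
    // setImmediate instead of re-entering synchronously, so a domain
    // error handler that throws can't spin forever.
    var queue = [];

    function nextTick(cb) {
      queue.push(cb);
    }

    function processQueue() {
      while (queue.length > 0) {
        var cb = queue.shift();
        try {
          cb();
        } catch (er) {
          setImmediate(processQueue); // remaining ticks run later
          throw er;                   // surface to uncaughtException/domain
        }
      }
    }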
Removes the maxTickDepth check for non-domain callbacks. As a result, a
user can now starve I/O by scheduling a recursive nextTick.
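For example:

    // Starves I/O: each tick schedules another before control can ever
    // return to the event loop.
    function spin() {
      process.nextTick(spin);
    }
    spin();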
The domain case is more complex and will be addressed in another commit.
Previous code was calling uv_loop_delete() directly on a running loop,
which led to race-condition aborts and segfaults within libuv. The
watchdog thread now calls uv_run() with UV_RUN_ONCE so that
the call exits after either the timer times out or uv_async_send() is
called from the main thread in Watchdog::Destroy(). The timer/async
handles are then closed and uv_run() with UV_RUN_DEFAULT is called so
that libuv has a chance to cleanup before the thread exits. The main
thread meanwhile calls uv_thread_join() and then uv_loop_delete() to
complete the cleanup.
Before this, entering something like:

    > JSON.parse('066');

resulted in the "..." prompt instead of displaying the expected
"SyntaxError: Unexpected number".
1. Emit `sslOutEnd` only when `_internallyPendingBytes() === 0`.
2. Read before checking `._halfRead`, otherwise we'll see only the
   previous value and will invoke the `._write` callback improperly.
3. Wait for both `end` and `finish` events in `.destroySoon`.
4. Unpipe encrypted stream from socket to prevent write after destroy.
A stream's `._write()` callback should be invoked only after its
opposite stream has finished processing incoming data, otherwise the
`finish` event fires too early and the connection might be closed while
there's still data to send to the client.
see #5544
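Item 3, for example, amounts to something like this sketch (simplified;
not the actual tls.js code):

    // Destroy only once both sides are done: 'end' (no more incoming
    // data) and 'finish' (all outgoing data flushed).
    function destroySoon(stream) {
      var ended = false;
      var finished = false;
      function check() {
        if (ended && finished) stream.destroy();
      }
      stream.on('end', function() { ended = true; check(); });
      stream.on('finish', function() { finished = true; check(); });
    }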
Quote from the SSL_shutdown man page:

    The output of SSL_get_error(3) may be misleading, as an erroneous
    SSL_ERROR_SYSCALL may be flagged even though no error occurred.
Also, handle all other errors to prevent assertion in `ClearError()`.
When writing bad data to an EncryptedStream, it first reaches the
ClientHello parser and only gets to OpenSSL after the parser has refused
it. But the ClientHello parser has a limited buffer, so a write could
return `bytes_written` < `incoming_bytes`, which never happens when
writing directly to OpenSSL.

After such an error the ClientHello parser disables itself and passes
all data through to OpenSSL. So simply writing the data one more time
throws the rest into OpenSSL and lets it handle it.
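In rough pseudo-JavaScript (`parser.write` is an assumed stand-in for
the internal write path, not a public API):

    // First write may be partial because the ClientHello parser's
    // buffer filled up; by then the parser has disabled itself, so a
    // second write passes the remainder straight through to OpenSSL.
    function writeAll(parser, data) {
      var written = parser.write(data);
      if (written < data.length)
        written += parser.write(data.slice(written));
      return written;
    }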