We should move on to the next buffer when the *current* one is full,
not when the next one is. Otherwise we may hop between buffers and the
written data will become interleaved, which will lead to failure.
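
A minimal sketch of the intended check, with hypothetical names (the
real code differs):

    // Advance only when the *current* slab has no room left; checking
    // the next slab instead can skip past a partially filled one and
    // interleave the written data.
    function pickSlab(slabs, current) {
      if (slabs[current].used === slabs[current].size)
        current = (current + 1) % slabs.length;
      return current;
    }
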
On Linux, positional writes don't work when the file is opened in
append mode. The kernel ignores the position argument and always
appends the data to the end of the file.
To quote the man page:
POSIX requires that opening a file with the O_APPEND flag should have
no effect on the location at which pwrite() writes data. However, on
Linux, if a file is opened with O_APPEND, pwrite() appends data to the
end of the file, regardless of the value of offset.
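
The same behavior is observable from Node itself, since fs.writeSync()
with a numeric position uses pwrite() under the hood; a minimal sketch
(Linux-specific, path hypothetical):

    var fs = require('fs');

    fs.writeFileSync('/tmp/append-test', '');       // start empty
    var fd = fs.openSync('/tmp/append-test', 'a');  // append mode

    fs.writeSync(fd, 'first', 0);   // position 0 is ignored...
    fs.writeSync(fd, 'second', 0);  // ...and so is this one
    fs.closeSync(fd);

    // Prints 'firstsecond': both writes were appended; nothing was
    // overwritten at offset 0.
    console.log(fs.readFileSync('/tmp/append-test', 'utf8'));
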
RFC 6125 explicitly states that a client "MUST NOT seek a match
for a reference identifier of CN-ID if the presented identifiers
include a DNS-ID, SRV-ID, URI-ID, or any application-specific
identifier types supported by the client", but it MAY do so if
none of those identifier types are present (even if other types are).
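
A hedged sketch of that ordering (matching simplified to equality; the
real check also handles wildcards and IP addresses):

    // dnsIds: DNS-ID/SRV-ID/URI-ID values presented by the cert;
    // cn: the Common Name, usable only as a last resort.
    function checkServerIdentity(hostname, dnsIds, cn) {
      if (dnsIds.length > 0) {
        // A DNS-ID (or SRV-ID/URI-ID) is present: MUST NOT fall back
        // to the CN-ID, even when nothing matches.
        return dnsIds.some(function(id) { return id === hostname; });
      }
      // None of those identifier types present: the CN MAY be used.
      return cn === hostname;
    }
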
The DTrace probes were updated to accommodate platforms that can't
handle structs. The ETW prototypes were updated to match, but it is
not necessary to do anything with the new arguments, as they carry
redundant information.
OSX and other DTrace implementations don't support dereferencing
structs in probes. To accommodate that, pass the struct members as
individual arguments so that DTrace is useful on those systems.
If an http response has an 'end' handler that throws, then the socket
will never be released back into the pool.
Granted, we do NOT guarantee that throwing will never have adverse
effects on Node internal state. Such a guarantee cannot be reasonably
made in a shared-global mutable-state side-effecty language like
JavaScript. However, in this case, it's a rather trivial patch to
increase our resilience a little bit, so it seems like a win.
There is no semantic change in this case, except that some event
listeners are removed, and the `'free'` event is emitted on nextTick, so
that you can schedule another request which will re-use the same socket.
From the user's point of view, there should be no detectable difference.
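
A hedged sketch of the pattern (names are illustrative, not the
verbatim patch):

    // Detach the per-response listeners, then hand the socket back on
    // the next tick, so a throwing 'end' handler can no longer prevent
    // the 'free' emission.
    function releaseSocket(agent, socket, listeners) {
      Object.keys(listeners).forEach(function(ev) {
        socket.removeListener(ev, listeners[ev]);
      });
      process.nextTick(function() {
        agent.emit('free', socket);  // lets a queued request reuse it
      });
    }
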
Closes #5107
The tests did not agree with the test comments. The first and second
tests were both testing the !state.reading case. Now the second one
tests the state.reading && state.length case.
Fixes joyent/node#5183
The v0.8 Stream.pipe() method automatically destroyed the destination
stream whenever the src stream closed. However, this caused a lot of
problems, and was removed by popular demand. (Many userland modules
still have a no-op destroy() method just because of this.) It was also
very hazardous because this would be done even if { end: false } was
passed in the pipe options.
In v0.10, we decided that the 'close' event and destroy() method are
application-specific, and pipe() doesn't automatically call destroy().
However, TLS actually depended (silently) on this behavior. So, in this
case, we should just go ahead and destroy the thing when close happens.
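
A hedged sketch of the explicit cleanup (names are illustrative):

    // Since pipe() no longer destroys its destination when the source
    // emits 'close', tie the two lifetimes together by hand.
    function destroyOnClose(src, dst) {
      src.on('close', function() {
        dst.destroy();
      });
    }
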
Closes #5145
In cases where a stream may have data added to the read queue before
the user adds a 'readable' event listener, there is never any
indication that it's time to start reading.
True, there's already data there, which the user would get if they
checked. However, since we use the addition of a 'readable' event
listener as the signal to start the flow of data with an internal
read(0) call, we ought to trigger the same effect (i.e., emitting a
'readable' event) even if the listener is added after the first
emission.
To avoid confusing weirdness, only the *first* 'readable' event listener
is granted this privileged status. After we've started the flow (or,
alerted the consumer that the flow has started) we don't need to start
it again. At that point, it's the consumer's responsibility to consume
the stream.
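
From the consumer's side, the fixed behavior looks like this (a
minimal sketch):

    var Readable = require('stream').Readable;

    var r = new Readable();
    r._read = function() {};               // no-op source
    r.push('queued before any listener');  // data arrives first

    // The listener is attached only after data is already queued, yet
    // the first 'readable' listener is still signalled.
    setTimeout(function() {
      r.on('readable', function() {
        console.log(String(r.read()));  // => 'queued before any listener'
      });
    }, 10);
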
Closes #5141