getServers returns an array of IPs that are currently being used for
resolution.
setServers takes an array of IPs that are to be used for resolution;
it will throw if there is invalid input, but it preserves the original
configuration.
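For illustration, a minimal sketch of the API described above (the
server addresses are arbitrary examples):

    var dns = require('dns');

    console.log(dns.getServers());        // e.g. [ '127.0.0.1' ]
    dns.setServers(['8.8.8.8', '8.8.4.4']);

    try {
      dns.setServers(['not-an-ip']);      // invalid input throws...
    } catch (e) {
      console.log(dns.getServers());      // ...previous list is preserved
    }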
If there is an encoding, and we do 'stream.push(chunk, enc)', and the
encoding argument matches the stated encoding, then we're converting
from a string to a Buffer and then back to a string. That is, of
course, a completely pointless bit of work, so it's best to avoid it
when we know that we can do so safely.
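A hedged illustration of the case in question, from the stream
author's side (a stream that declares an encoding and pushes strings
in that same encoding):

    var Readable = require('stream').Readable;

    // The stream declares utf8; the push below passes the same
    // encoding, so no string -> Buffer -> string round trip is needed.
    var r = new Readable({ encoding: 'utf8' });
    r._read = function () {};

    r.push('some text', 'utf8');
    r.push(null);

    r.on('data', function (chunk) {
      console.log(typeof chunk);          // 'string'
    });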
Empirical evidence suggests that OS-level load balancing (that is,
having multiple processes listen on a socket and have the operating
system wake up one when a connection comes in) produces skewed load
distributions on Linux, Solaris and possibly other operating systems.
The observed behavior is that a fraction of the listening processes
receive the majority of the connections. From the perspective of the
operating system, that somewhat makes sense: a task switch is expensive,
to be avoided whenever possible. That's why the operating system likes
to give preferential treatment to a few processes, because it reduces
the number of switches.
However, that rather subverts the purpose of the cluster module, which
is to distribute the load as evenly as possible. That's why this commit
adds (and defaults to) round-robin support, meaning that the master
process accepts connections and distributes them to the workers in a
round-robin fashion, effectively bypassing the operating system.
Round-robin is currently disabled on Windows due to how IOCP is wired
up. It works and you can select it manually, but it probably results
in a heavy performance hit.
Fixes #4435.
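A sketch of how the scheduling policy can be selected explicitly
(cluster.SCHED_RR and the NODE_CLUSTER_SCHED_POLICY environment
variable are the documented knobs):

    var cluster = require('cluster');
    var http = require('http');

    // Request round-robin (the new default on non-Windows platforms);
    // must be set before any workers are forked.
    cluster.schedulingPolicy = cluster.SCHED_RR;
    // Alternatively: NODE_CLUSTER_SCHED_POLICY=rr node app.js

    if (cluster.isMaster) {
      for (var i = 0; i < 4; i++) cluster.fork();
    } else {
      http.createServer(function (req, res) {
        res.end('handled by worker ' + cluster.worker.id + '\n');
      }).listen(8000);
    }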
When a developer calls setBreakpoint with an unknown script name,
we convert the script name into a regular expression matching all
paths ending with the given name (the name can be a relative path too).
To create such a breakpoint in V8, we use the type `scriptRegEx`
instead of `scriptId` for the `setbreakpoint` request.
To restore such a breakpoint, we save the original script name
sent by the user. We use this original name to set (restore) the
breakpoint in the new child process.
This is a back-port of commit 5db936d from the master branch.
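For illustration, a sketch of a debugger session (sb() /
setBreakpoint() are the documented repl commands; 'later-loaded.js'
is a made-up script name that has not been loaded yet):

    $ node debug app.js
    debug> sb('later-loaded.js', 12)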
Add a watchdog class which executes a timer in a separate event loop in
a separate thread that will terminate v8 execution if it expires.
Add timeout argument to functions in vm module which use the watchdog
if a non-zero timeout is specified.
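A sketch of the resulting behavior (the options-object form shown here
matches the current vm documentation; the exact signature at the time
of this commit may have differed):

    var vm = require('vm');

    try {
      // The watchdog terminates V8 execution once the timeout expires.
      vm.runInNewContext('while (true) {}', {}, { timeout: 100 });
    } catch (e) {
      console.log('script timed out:', e.message);
    }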
An absolute path will always open the same location regardless of your
current working directory. For POSIX, this just means path.charAt(0)
=== '/', but on Windows it's a little more complicated.
Fixes joyent/node#5299.
Fix #5272
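For illustration, assuming the path.isAbsolute() helper described
above:

    var path = require('path');

    // POSIX
    path.isAbsolute('/foo/bar');            // true
    path.isAbsolute('qux/');                // false

    // Windows
    path.isAbsolute('C:\\foo\\..');         // true
    path.isAbsolute('\\\\server\\share');   // true
    path.isAbsolute('bar\\baz');            // false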
The consumption of a readable stream is a dance with 3 partners.
1. The specific stream Author (A)
2. The Stream Base class (B), and
3. The Consumer of the stream (C)
When B calls the _read() method that A implements, it sets a 'reading'
flag, so that parallel calls to _read() can be avoided. When A calls
stream.push(), B knows that it's safe to start calling _read() again.
The consumer C may be some kind of parser that in some cases wants
to pass the source stream off to some other party, but not before
"putting back" some bit of previously consumed data (as in the case
of Node's websocket HTTP upgrade implementation). So stream.unshift()
will generally *never* be called by A, but *only* by C.
Prior to this patch, stream.unshift() *also* unset the state.reading
flag, meaning that C could indicate the end of a read, and B would
dutifully fire off another _read() call to A. This is inappropriate.
In the case of fs streams, and other variably-laggy streams that don't
tolerate overlapped _read() calls, this causes big problems.
Also, calling stream.unshift() after the 'end' event did not raise
any kind of error, but would cause very strange behavior indeed.
Calling it after the EOF chunk was seen, but before the 'end' event
was fired, would also cause weird behavior and could lead to data
being lost, since it would not emit another 'readable' event.
This change makes it so that:
1. stream.unshift() does *not* set state.reading = false
2. stream.unshift() is allowed up until the 'end' event.
3. unshifting onto an EOF-encountered and zero-length (but not yet
end-emitted) stream will defer the 'end' event until the new data is
consumed.
4. pushing onto an EOF-encountered stream is now an error.
So, if you read(), you have that single tick to safely unshift() data
back into the stream, even if the null chunk was pushed, and the length
was 0.
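A sketch of the consumer-side pattern described above (a parser that
over-reads a header and puts the excess back; the names and the
'\r\n\r\n' delimiter are illustrative):

    // Consumer (C): read up to the end of a header, then hand the
    // source stream back with the over-read bytes restored.
    function parseHeader(stream, callback) {
      var header = '';
      stream.on('readable', function onReadable() {
        var chunk;
        while ((chunk = stream.read()) !== null) {
          var str = chunk.toString();
          var end = str.indexOf('\r\n\r\n');
          if (end === -1) {
            header += str;
            continue;
          }
          header += str.slice(0, end);
          // Put back everything after the header; unshift() no longer
          // clears the 'reading' flag, so no overlapped _read() occurs.
          stream.unshift(chunk.slice(end + 4));
          stream.removeListener('readable', onReadable);
          callback(null, header, stream);
          return;
        }
      });
    }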
On Linux, positional writes don't work when the file is opened in
append mode. The kernel ignores the position argument and always
appends the data to the end of the file.
To quote the man page:
POSIX requires that opening a file with the O_APPEND flag should have
no effect on the location at which pwrite() writes data. However, on
Linux, if a file is opened with O_APPEND, pwrite() appends data to the
end of the file, regardless of the value of offset.
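A sketch demonstrating the behavior on Linux (the file path is
arbitrary; Buffer.from() is the modern spelling, new Buffer() at the
time):

    var fs = require('fs');

    fs.open('/tmp/example.log', 'a', function (err, fd) {
      if (err) throw err;
      var buf = Buffer.from('hello\n');
      // Position 0 is requested, but with O_APPEND the kernel ignores
      // it and appends to the end of the file instead.
      fs.write(fd, buf, 0, buf.length, 0, function (err) {
        if (err) throw err;
        fs.close(fd, function () {});
      });
    });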
Expand the JSON representation of Buffer to include type information
so that it can be deserialized in JSON.parse() without context.
Fixes #5110.
Fixes #5143.
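For illustration, the round trip this enables (the reviver shown is a
standard pattern; Buffer.from() is the modern spelling, new Buffer()
at the time):

    var buf = Buffer.from('test');
    var json = JSON.stringify(buf);
    // '{"type":"Buffer","data":[116,101,115,116]}'

    var copy = JSON.parse(json, function (key, value) {
      return value && value.type === 'Buffer'
        ? Buffer.from(value.data)
        : value;
    });
    console.log(copy.toString());   // 'test'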
_charsWritten is an internal property that was constantly written to,
but never read from, so it has been removed, along with the reference
to it in the documentation.