This change unifies the declaration of constants by using destructuring
at the top-level module scope, reducing redundant code.
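
A hedged sketch of the pattern (the particular constants here are
illustrative, not taken from this change):

    // Destructure once at module scope...
    const { O_RDONLY, O_WRONLY, O_RDWR } = require('fs').constants;

    // ...instead of repeating `constants.O_RDONLY` and friends
    // throughout the file.
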
PR-URL: https://github.com/nodejs/node/pull/16063
Reviewed-By: James M Snell <jasnell@gmail.com>
Reviewed-By: Anna Henningsen <anna@addaleax.net>
Reviewed-By: Tobias Nießen <tniessen@tnie.de>
Reviewed-By: Ruben Bridgewater <ruben@bridgewater.de>
Reviewed-By: Gibson Fahnestock <gibfahn@gmail.com>
PR-URL: https://github.com/nodejs/node/pull/8104
Reviewed-By: James M Snell <jasnell@gmail.com>
Reviewed-By: Ben Noordhuis <info@bnoordhuis.nl>
Reviewed-By: Colin Ihrig <cjihrig@gmail.com>
The `events` module already exports the `EventEmitter` constructor
function, so we don't have to use `events.EventEmitter` to access it.
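
For illustration, the shorter form this allows:

    // The module itself is the constructor:
    const EventEmitter = require('events');
    // ...instead of:
    // const EventEmitter = require('events').EventEmitter;
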
Refer: https://github.com/nodejs/node/pull/2896
PR-URL: https://github.com/nodejs/node/pull/2921
Reviewed-By: Roman Reiss <me@silverwind.io>
Reviewed-By: Michaël Zasso <mic.besace@gmail.com>
Changes included in this commit are:

1. Making the deprecation messages consistent. The messages will be in
   the following format (illustrated below):

       x is deprecated. Use y instead.

   If there is no alternative for `x`, then the ` Use y instead.` part
   is omitted from the message.
2. All the internal deprecation messages are printed with the prefix
   `(node) `, except when the `--trace-deprecation` flag is set.
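
A hedged illustration of the message format from item 1, using
`util.deprecate` purely to show the shape (the function names are made
up):

    const util = require('util');

    const oldFn = util.deprecate(function oldFn() {},
      'oldFn is deprecated. Use newFn instead.');
    oldFn();  // warns once with the message above
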
Fixes: https://github.com/nodejs/io.js/issues/1883
PR-URL: https://github.com/nodejs/io.js/pull/1892
Reviewed-By: Roman Reiss <me@silverwind.io>
PR-URL: https://github.com/iojs/io.js/pull/634
Reviewed-By: Nicu Micleușanu <micnic90@gmail.com>
Reviewed-By: Ben Noordhuis <info@bnoordhuis.nl>
Reviewed-By: Stephen Belanger <admin@stephenbelanger.com>
This commit replaces a number of var statements throughout
the lib code with const statements.
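
The shape of the substitution, for bindings that are never reassigned:

    const assert = require('assert');  // was: var assert = require('assert');
    const util = require('util');      // was: var util = require('util');
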
PR-URL: https://github.com/iojs/io.js/pull/541
Reviewed-By: Ben Noordhuis <info@bnoordhuis.nl>
The copyright and license notice is already in the LICENSE file. There
is no justifiable reason to also require that it be included in every
file, since the individual files are not individually distributed except
as part of the entire package.
Turn on strict mode for the files in the lib/ directory. It helps
catch bugs and can have a positive effect on performance.
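
For example, with strict mode enabled a typo that would otherwise create
a global now throws:

    'use strict';
    function setFlag() {
      falg = true;  // ReferenceError when this runs (was a silent global before)
    }
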
PR-URL: https://github.com/node-forward/node/pull/64
Reviewed-By: Colin Ihrig <cjihrig@gmail.com>
Reviewed-By: Fedor Indutny <fedor@indutny.com>
When replying to a HEAD request, do not attempt to send the trailers and
EOF sequence (`0\r\n\r\n`). A response to a HEAD request MUST NOT have a
body.
Quote from the RFC:
The presence of a message body in a response depends on both the
request method to which it is responding and the response status code
(Section 3.1.2). Responses to the HEAD request method (Section 4.3.2
of [RFC7231]) never include a message body because the associated
response header fields (e.g., Transfer-Encoding, Content-Length,
etc.), if present, indicate only what their values would have been if
the request method had been GET (Section 4.3.1 of [RFC7231]).
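
A hedged sketch of the rule (names are illustrative, not the actual
internal code):

    // When finishing a chunked response, skip the last-chunk and trailers
    // if the request method was HEAD: the response has no body.
    function finishChunkedResponse(socket, method, trailers) {
      if (method === 'HEAD') {
        return;  // nothing more to send
      }
      socket.write(trailers ? '0\r\n' + trailers + '\r\n'
                            : '0\r\n\r\n');
    }
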
fix #8361
Reviewed-By: Timothy J Fontaine <tjfontaine@gmail.com>
Unexport the http.parsers freelist. It was originally exported by Ryan
in commit 0003c701 but the commit log doesn't mention why and it's never
been documented. It's unclear if there are any users.
The lifecycle of parser objects changed recently and it seems better to
not let people shoot themselves in the foot so easily.
If it turns out there are actually users, we can always re-export it
again, probably under a slightly different name, to force people to
update their code to the new way of things.
Reviewed-By: Trevor Norris <trev.norris@gmail.com>
A socket may stop being `readable`, but http should not rely on that
property or assume that it means no more data will ever arrive from it.
In fact, data may arrive on the next tick and, since `this.push(null)`
was already called, it will result in an error like this:
Error: stream.push() after EOF
at readableAddChunk (_stream_readable.js:143:15)
at IncomingMessage.Readable.push (_stream_readable.js:123:10)
at HTTPParser.parserOnBody (_http_common.js:132:22)
at Socket.socketOnData (_http_client.js:277:20)
at Socket.EventEmitter.emit (events.js:101:17)
at Socket.Readable.read (_stream_readable.js:367:10)
at Socket.socketCloseListener (_http_client.js:196:10)
at Socket.EventEmitter.emit (events.js:123:20)
at TCP.close (net.js:479:12)
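
The failure mode can be reproduced in isolation with a plain Readable
(an illustration, not the http-internal code):

    const { Readable } = require('stream');

    const r = new Readable({ read() {} });
    r.push(null);  // EOF signalled too early, e.g. because the socket
                   // stopped being `readable`
    setImmediate(() => {
      r.push('late data');  // errors with "stream.push() after EOF"
    });
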
fix #6784
This applies to the `request()` and `get()` functions. I could never
really understand why these two functions go through the agent first,
especially since the user could be passing `agent: false` or a
completely different Agent instance, in which case `globalAgent` is
bypassed entirely.
Moved the relevant logic from `Agent#request()` into the
`ClientRequest` constructor.
Incidentally, this commit fixes #7012 (which was the original
intent of this commit).
This makes it so that the user may pass in a `createConnection()`
option without also having to pass `agent: false`.
Also adding a test for the `createConnection` option,
since none was in place before.
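
A hedged usage sketch of what this enables (host, port, and path are
placeholders):

    const http = require('http');
    const net = require('net');

    // No `agent: false` needed just to supply a custom connection factory.
    const req = http.request({
      createConnection: () => net.connect({ host: 'localhost', port: 8000 }),
      path: '/',
    }, (res) => res.resume());
    req.end();
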
See #7014.
If a client sends a lot more pipelined requests than we can handle, then
we need to provide backpressure so that the client knows to back off.
Do this by pausing both the stream and the parser itself when the
responses are not being read by the downstream client.
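
A hedged sketch of the mechanism (names are illustrative; the real
change lives in the server's internal socket and parser handling):

    function updateSocketFlow(socket, parser, outgoingIsDrained) {
      if (!outgoingIsDrained) {
        socket.pause();   // stop reading further pipelined requests
        parser.pause();   // and stop the parser from emitting them
      } else {
        parser.resume();
        socket.resume();
      }
    }
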
Backport of 085dd30
Avoid a costly buffer-to-string operation. Instead, allocate a new
buffer, copy the chunk header and data into it and send that.
The speed difference is negligible on small payloads but it really
shines with larger (10+ kB) chunks. benchmark/http/end-vs-write-end
with 64 kB chunks gives 45-50% higher throughput. With 1 MB chunks,
the difference is a staggering 590%.
Of course, mileage will vary with real workloads and networks, but this
commit should have a positive impact on CPU and memory consumption.
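
A hedged sketch of the approach (not the exact internal code): build the
chunk header and payload into a single Buffer and write it once.

    function packChunk(chunk) {
      const header = Buffer.from(chunk.length.toString(16) + '\r\n', 'latin1');
      const out = Buffer.allocUnsafe(header.length + chunk.length + 2);
      header.copy(out, 0);
      chunk.copy(out, header.length);
      out.write('\r\n', header.length + chunk.length);
      return out;  // a single write() of `out` replaces three writes
    }
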
Big kudos to Wyatt Preul (@wpreul) for reporting the issue and providing
the initial patch.
Fixes #5941 and #5944.
There was previously up to a second exit delay when exiting node
right after an http request/response, due to the utcDate() function
doing a setTimeout to update the cached date/time.
Fixing this should increase the performance of our http tests.
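
A hedged sketch of one way to cache the date string without keeping the
process alive (not necessarily the exact fix):

    let dateCache;
    function utcDate() {
      if (!dateCache) {
        const d = new Date();
        dateCache = d.toUTCString();
        const t = setTimeout(() => { dateCache = undefined; },
                             1000 - d.getMilliseconds());
        t.unref();  // a pending refresh no longer delays process exit
      }
      return dateCache;
    }
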
This reverts commit a40133d10c.
Unfortunately, this breaks socket.io. Even though it's not strictly an
API change, it is too subtle and in too brittle an area of node, to be
done in a stable branch.
Conflicts:
doc/api/http.markdown
Otherwise, writing an empty string causes the whole program to grind to
a halt when piping data into http messages.
This wasn't as much of a problem (though it WAS a bug) in 0.8 and
before, because our hyperactive 'drain' behavior would mean that some
*previous* write() would probably have a pending drain event, and cause
things to start moving again.
This commit adds an optimization to the HTTP client that makes it
possible to:
* Pack the headers and the first chunk of the request body into a
single write().
* Pack the chunk header and the chunk itself into a single write().
Because only one write() system call is issued instead of several,
the chances of data ending up in a single TCP packet are phenomenally
higher: the benchmark with `type=buf size=32` jumps from 50 req/s to
7,500 req/s, a 150-fold increase.
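
The kind of request that benefits, sketched with placeholder host and
port:

    const http = require('http');

    const req = http.request({ method: 'POST', host: 'localhost', port: 8000 },
                             (res) => res.resume());
    req.end(Buffer.alloc(32));  // headers + 32-byte body can now leave in
                                // a single write(), usually one TCP packet
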
This commit removes the check from e4b716ef that pushes binary encoded
strings into the slow path. The commit log mentions that:
We were assuming that any string can be concatenated safely to
CRLF. However, for hex, base64, or binary encoded writes, this
is not the case, and results in sending the incorrect response.
For hex and base64 strings that's certainly true but binary strings
are 'das Ding an sich': string.length is the same before and after
decoding.
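
For illustration:

    // A 'binary' (latin1) string decodes to exactly string.length bytes,
    // unlike hex or base64.
    const s = Buffer.from([0x68, 0xff, 0x0a]).toString('binary');
    console.log(s.length, Buffer.byteLength(s, 'binary'));             // 3 3
    console.log('68ff0a'.length, Buffer.byteLength('68ff0a', 'hex'));  // 6 3
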
Fixes #5528.
Commit 38149bb changes http.get() and http.request() to escape unsafe
characters. However, that creates an incompatibility with v0.10 that
is difficult to work around: if you escape the path manually, then in
v0.11 it gets escaped twice. Change lib/http.js so it no longer tries
to fix up bad request paths; it simply rejects them with an exception.
The actual check is rather basic right now. The full check for illegal
characters is difficult to implement efficiently because it requires a
few characters of lookahead. That's why it currently only checks for
spaces, which are guaranteed to create an invalid request.
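
A hedged sketch of that basic check (the exact message and its placement
in lib/http.js may differ):

    function validatePath(path) {
      if (/ /.test(path)) {
        throw new TypeError('Request path contains unescaped characters.');
      }
    }
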
Fixes #5474.
Fixes #3740
In the case of pipelined requests, you can have a situation where
the socket gets destroyed via one req/res object, but then trying
to destroy *another* req/res on the same socket will cause it to
call undefined.destroy(), since the socket was already removed from
that message.
Add a guard to OutgoingMessage.destroy and IncomingMessage.destroy
to prevent this error.
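
A hedged sketch of the guard (names illustrative):

    function destroyMessage(msg, error) {
      if (msg.socket) {
        msg.socket.destroy(error);
      }
      // else: the socket was already detached from this message by another
      // req/res pair; there is nothing left to destroy here.
    }
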
Make http.request() and friends escape unsafe characters in the request
path. That is, a request for '/foo bar' is now escaped as '/foo%20bar'.
Before this commit, the path was used as-is in the request status line,
creating an invalid HTTP request ("GET /foo bar HTTP/1.1").
Fixes #4381.
We were assuming that any string can be concatenated safely to
CRLF. However, for hex, base64, or binary encoded writes, this
is not the case, and results in sending the incorrect response.
An unusual edge case, but certainly a bug.
If an http response has an 'end' handler that throws, then the socket
will never be released back into the pool.
Granted, we do NOT guarantee that throwing will never have adverse
effects on Node internal state. Such a guarantee cannot be reasonably
made in a shared-global mutable-state side-effecty language like
JavaScript. However, in this case, it's a rather trivial patch to
increase our resilience a little bit, so it seems like a win.
There is no semantic change in this case, except that some event
listeners are removed, and the `'free'` event is emitted on nextTick, so
that you can schedule another request which will re-use the same socket.
From the user's point of view, there should be no detectable difference.
Closes #5107
The benefits of the hot-path optimization below start to fall off when
the buffer size gets up near 128KB, because the cost of the copy is more
than the cost of the extra write() call. Switch to the write/end method
at that point.
Heuristics and magic numbers are awful, but slow http responses are
worse.
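
A hedged sketch of the heuristic (the threshold is the magic number
mentioned above; names are illustrative):

    const COPY_THRESHOLD = 128 * 1024;

    function endWithChunk(stream, header, chunk) {
      if (chunk.length < COPY_THRESHOLD) {
        stream.end(Buffer.concat([header, chunk]));  // one write, one copy
      } else {
        stream.write(header);                        // two writes, no big copy
        stream.end(chunk);
      }
    }
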
Fix #4975
This solves the problem of calling `readable.pipe(writable)` after the
readable stream has already emitted 'end', as is often the case when
writing simple HTTP proxies.
The spirit of streams2 is that things will work properly, even if you
don't set them up right away on the first tick.
This approach breaks down, however, because pipe()ing from an ended
readable will just do nothing. No more data will ever arrive, and the
writable will hang open forever, never being ended.
However, that does not solve the case of adding an `on('end')` listener
after the stream has received the EOF chunk, if it was the first chunk
received (and thus, length was 0, and 'end' got emitted). So, with
this, we defer the 'end' event emission until the read() function is
called.
Also, in pipe(), if the source has emitted 'end' already, we call the
cleanup/onend function on nextTick. Piping from an already-ended stream
is thus the same as piping from a stream that is in the process of
ending.
Updates many tests that were relying on 'end' coming immediately, even
though they never read() from the req.
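
An illustration of the case this fixes (standalone streams, not http
internals):

    const { Readable, Writable } = require('stream');

    const src = new Readable({ read() {} });
    src.push('hello');
    src.push(null);                 // EOF already reached

    setImmediate(() => {
      const dst = new Writable({ write(chunk, enc, cb) { cb(); } });
      dst.on('finish', () => console.log('destination ended'));
      src.pipe(dst);                // previously this could hang forever
    });
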
Fix #4942
Fix #4948
This adds a check before setting the incoming parser
to null. Under certain circumstances it'll already be set to
null by freeParser().
Otherwise this will cause node to crash, as it tries to set a
property to null on a parser that is already null.
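
A hedged sketch of the check (names illustrative):

    function detachIncoming(socket) {
      if (socket.parser) {            // may already be null after freeParser()
        socket.parser.incoming = null;
      }
    }
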