If proxy_pass to a host with dynamic resolution was used to handle
a subrequest, and host resolution failed, the main request was not run
until something else happened on the connection. E.g., a request to "/zzz"
with the following configuration hung:
    addition_types *;
    resolver 8.8.8.8;

    location /test {
        set $ihost xxx;
        proxy_pass http://$ihost;
    }

    location /zzz {
        add_after_body /test;
        return 200 "test";
    }
Report and original version of the patch by Lanshun Zhou,
http://mailman.nginx.org/pipermail/nginx-devel/2013-March/003476.html.
The code that reuses r->request_body->buf in the upstream module assumes
it is a dedicated buffer, hence after 1.3.9 (r4931) it might reuse
r->header_in if client_body_in_file_only was set, resulting in corruption
of the original request. It is considered safer to always create a
dedicated buffer for rb->bufs to avoid such problems.
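For context, a minimal sketch (the upstream name is hypothetical) of a
configuration that exercises this code path, with the request body saved
to a file before being passed to the upstream:

    location /upload {
        client_body_in_file_only on;
        proxy_pass http://backend;
    }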
After the introduction of chunked request body handling in 1.3.9 (r4931),
r->request_body->bufs buffers have b->start pointing to the start of the
original buffer (and b->pos pointing to the actual data of this particular
buffer). While this is fine per se, it caused bad things (usually the
original request headers being included before the request body) after
reinit of the request chain in ngx_http_upstream_reinit() while sending
the request to the next upstream server (which used to do
b->pos = b->start for each buffer in the request chain).
Patch by Piotr Sikora.
If c->recv() returns 0 there is no sense in using ngx_socket_errno for
logging, as its value is meaningless. (The code in question was copied from
ngx_http_keepalive_handler(), but ngx_socket_errno makes sense there as it's
used as a part of ECONNRESET handling, and the c->recv() call is preceded
by a ngx_set_socket_errno(0) call.)
In r2411 setting of NGX_HTTP_GZIP_BUFFERED in c->buffered was moved from
ngx_http_gzip_filter_deflate_start() to ngx_http_gzip_filter_buffer() since
it was always called first. But in r2543 the "postpone_gzipping" directive
was introduced, and if postponed gzipping is disabled (the default setting),
ngx_http_gzip_filter_buffer() is not called at all.
We must always set NGX_HTTP_GZIP_BUFFERED after the start of compression
since there is always a trailer that is buffered.
There are no known cases where this leads to any problems with the current
code, but we have already had trouble with it in the upcoming SPDY
implementation.
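For reference, a minimal sketch of a configuration with postponed gzipping
left at its default (disabled), in which case ngx_http_gzip_filter_buffer()
is not called and compression starts right away:

    gzip on;
    gzip_types text/plain;
    # postpone_gzipping defaults to 0, i.e. postponed gzipping is disabled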
Not only is this useful for the upcoming SPDY support, but it can
also help improve HTTPS performance by enabling TLS False Start
in Chrome/Chromium browsers [1]. So, we always enable NPN for HTTPS
if it is supported by OpenSSL.
[1] http://www.imperialviolet.org/2012/04/11/falsestart.html
The c->single_connection flag was intended to be used as a lock mechanism
to serialize modifications of the request object from several threads
working with the client and upstream connections. The flag is redundant
since threads in nginx have never been used that way.
In Linux 2.6.32, TCP_DEFER_ACCEPT was changed to accept connections
after the deferring period is finished without any data available.
(Reading from the socket returns EAGAIN in this case.)
Since in nginx TCP_DEFER_ACCEPT is set to "post_accept_timeout", we
do not need to wait longer if deferred accept returns with no data.
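Deferred accept is enabled per listening socket with the "deferred"
parameter of the "listen" directive; a minimal sketch:

    server {
        listen 80 deferred;
        ...
    }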
Previously, only the first request in a connection used the timeout
value from the "client_header_timeout" directive while reading the
request header. All subsequent requests used "keepalive_timeout" for
that.
This happened because the timeout of the read event was set to the
value of "keepalive_timeout" in ngx_http_set_keepalive(), but
was not removed when the next request arrived.
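A minimal sketch of the two directives involved (the values are arbitrary);
with this change, reading the header of every request in a keepalive
connection is subject to "client_header_timeout", while "keepalive_timeout"
applies only while waiting for the next request:

    client_header_timeout 10s;
    keepalive_timeout 75s;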
Previously, we always created an object and logged 400 (Bad Request)
in the access log if a client closed the connection without sending any
data. Such a connection was counted as "reading".
Since it's common for modern browsers to behave like this, it's no
longer considered an error if a client closes the connection without
sending any data, and such a connection will be counted as "waiting".
Now, we do not log 400 (Bad Request) and keep the memory footprint as
small as possible.
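The "reading" and "waiting" counters mentioned above are presumably the
ones reported by the stub_status module; a minimal sketch of enabling it
(the location name is arbitrary):

    location /status {
        stub_status on;
    }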
Previously, it was allocated from a connection pool and
was selectively freed for an idle keepalive connection.
The goal is to put coupled things in one chunk of memory,
and to simplify handling of request objects.
According to RFC 6066, a client is not supposed to request a different
server name at the application layer. Server implementations that rely
upon these names being equal must validate that the client did not send a
different name in the HTTP request. Current versions of the Apache HTTP
server always return 400 "Bad Request" in such cases.
There exist implementations, however (e.g., SPDY), that rely on being able
to request different host names in one connection. Given this, we only
reject requests with differing host names if verification of client
certificates is enabled in the corresponding server configuration.
An example of a configuration that might not work as expected:
    server {
        listen 443 ssl default;
        return 404;
    }

    server {
        listen 443 ssl;
        server_name example.org;
        ssl_client_certificate org.cert;
        ssl_verify_client on;
    }

    server {
        listen 443 ssl;
        server_name example.com;
        ssl_client_certificate com.cert;
        ssl_verify_client on;
    }
Previously, a client was able to request example.com by presenting
a certificate for example.org, and vice versa.
Not only is this consistent with the case without SNI, but it also
prevents abuse of configurations that assume that the $host variable
is limited to one of the configured names for a server.
An example of a potentially unsafe configuration:
    server {
        listen 443 ssl default_server;
        ...
    }

    server {
        listen 443;
        server_name example.com;

        location / {
            proxy_pass http://$host;
        }
    }
Note: it is possible to negotiate "example.com" by SNI, and then to
request an arbitrary host name that does not exist in the configuration
above.
Previously, this was done only after the whole request header
had been parsed, and if an error occurred earlier then the request
was processed in the default server (or the server chosen by SNI),
while r->headers_in.server might be set to the value from the
Host: header or the host from the request line.
r->headers_in.server is in turn used for the $host variable and
in HTTP redirects if "server_name_in_redirect" is disabled.
Without the change, configurations that rely on this during
error handling are potentially unsafe if SNI is used.
This change also makes it possible to use server-specific settings of
the "underscores_in_headers", "ignore_invalid_headers", and
"large_client_header_buffers" directives for HTTP requests
and HTTPS requests without SNI.
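A minimal sketch of such per-server settings (the values are arbitrary):

    server {
        listen 80;
        server_name example.com;

        underscores_in_headers on;
        ignore_invalid_headers off;
        large_client_header_buffers 4 16k;
    }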
The request object will not be created until the SSL handshake is complete.
This simplifies adding another connection handler that does not need a
request object right after the handshake (e.g., SPDY).
There are also a few more intentional effects:
- the "client_header_buffer_size" directive will be taken from the
server configuration that was negotiated by SNI;
- SSL handshake errors and timeouts are not logged into access log
as bad requests;
- ngx_ssl_create_connection() is not called until the first byte of
the ClientHello message is received. This also decreases memory
consumption if a plain HTTP request is sent to an SSL socket.
Previously, only the first request in a connection was assigned the
configuration selected by SNI. All subsequent requests initially
used the default server's configuration, ignoring SNI, which was
wrong.
Now all subsequent requests in a connection will initially use the
configuration selected by SNI. This is done by storing a pointer
to the configuration in the http connection object. It points to the
default server's configuration initially, but is changed upon receipt
of SNI.
(The request's configuration can be further refined when parsing
the request line and the Host: header.)
This change was not made specific to SNI, as it also allows slightly
faster access to the configuration without the request object.
This change helps to decouple ngx_http_ssl_servername() from the request
object.
Note: now we close the connection in case of an error during server name
lookup for a request. Previously, we did so only for HTTP/0.9 requests.
If multiple "Cache-Control" headers were sent to a client,
the values in $sent_http_cache_control were incorrectly
separated by a semicolon. Now they are separated by a comma.
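A minimal sketch (the log format name and values are hypothetical) that
produces multiple Cache-Control response headers and logs the combined
variable; with this change the logged value is "max-age=60, private"
rather than a semicolon-separated string:

    log_format cc '$sent_http_cache_control';

    server {
        listen 80;

        location / {
            add_header Cache-Control "max-age=60";
            add_header Cache-Control "private";
            access_log logs/cc.log cc;
            return 200 "ok";
        }
    }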
In case of an error in the read event handling we close the connection
by calling ngx_http_close_connection(), which also destroys the connection
pool. After that, an attempt to free a buffer (added in r4892) that
was allocated from the pool could cause SIGSEGV, and is meaningless
as well (the buffer has already been freed along with the pool).
In case of a fully populated SSL session cache with no memory left for
new allocations, ngx_ssl_new_session() will try to expire the oldest
non-expired session and retry, but only when the slab allocation
fails for "cached_sess", not when the slab allocation fails for either
"sess_id" or "id", which can happen for a number of reasons and results
in the new session not being cached.
The patch fixes this by adding retry logic to the "sess_id" and "id"
allocations.
Patch by Piotr Sikora.
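The cache in question is the shared SSL session cache; a minimal sketch
of a configuration that uses it (the zone name and sizes are arbitrary):

    ssl_session_cache shared:SSL:1m;
    ssl_session_timeout 10m;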
It was added in r2717 and is no longer needed since r2721,
where the termination was added to ngx_shm_alloc() and
ngx_init_zone_pool(). Since then it only corrupts error
messages about invalid zones.
This allows proxying WebSockets by using a configuration like this:
    location /chat/ {
        proxy_pass http://backend;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
Connection upgrade is allowed as long as it was requested by a client
via the Upgrade request header.
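Since the upgrade should only happen when the client actually asked for
it, a common variant (not part of this change; the map variable name is a
convention) sets the Connection header conditionally, based on the
presence of the Upgrade request header:

    map $http_upgrade $connection_upgrade {
        default upgrade;
        ''      close;
    }

    location /chat/ {
        proxy_pass http://backend;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
    }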
Note: the "-p" argument of cp(1) dropped intentionally, to force nginx.so
rebuild. It is considered too boring to properly list all dependencies
in Makefile.PL.
Note: use of {SHA} passwords is discouraged as the {SHA} password scheme is
vulnerable to attacks using rainbow tables. Use of the {SSHA}, $apr1$, or
crypt() algorithms as supported by the OS is recommended instead.
The {SHA} password scheme support is added to avoid the need to change
the scheme recorded in password files from {SHA} to {SSHA}, because such
a change hides the security problem with {SHA} passwords.
Patch by Louis Opter, with minor changes.
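For reference, a minimal sketch of using such a password file with
auth_basic (the realm and path are hypothetical):

    location /protected/ {
        auth_basic           "restricted";
        # each line of conf/htpasswd: user:hashed-password, where the hash
        # may use {SSHA}, $apr1$, crypt(), or now also {SHA}
        auth_basic_user_file conf/htpasswd;
    }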