Much like normal connections, cached connections are now tested against
u->conf->next_upstream, and u->state->status is now always set.
This makes it possible to disable additional tries even with upstream
keepalive by using "proxy_next_upstream off".
When a keys_zone is full, each subsequent request to the cache is
penalized: the cache has to synchronously evict older files to get a
slot in the keys_zone. The patch introduces new behavior in this
scenario: the cache manager now tries to maintain available free slots
in the keys_zone by cleaning old files in the background.
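For reference, the keys_zone size is set in proxy_cache_path (the path
and sizes below are illustrative); it is this zone whose free slots the
cache manager now maintains in the background:

    proxy_cache_path /data/nginx/cache keys_zone=cache:10m max_size=10g;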
The "aio_write" directive is introduced, which enables use of aio
for writing. Currently it is meaningful only with "aio threads".
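A minimal configuration sketch enabling it (the location is
illustrative):

    location / {
        aio        threads;
        aio_write  on;
    }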
Note that aio operations can be done by both the event pipe and the
output chain, so a proper mapping between r->aio and p->aio is provided
when calling ngx_event_pipe() and in the output filter.
In collaboration with Valentin Bartenev.
This simplifies the interface of the ngx_thread_read() function.
Additionally, most of the thread operations now explicitly set
file->thread_task, file->thread_handler and file->thread_ctx,
to facilitate use of thread operations in other places.
(Potential problems remain with sendfile in threads though - it uses
file->thread_handler as set in ngx_output_chain(), and it should not
be overwritten to an incompatible one.)
In collaboration with Valentin Bartenev.
It can now be set to "off" conditionally, e.g. using the map
directive.
An empty value will disable the emission of the Server: header
and the signature in error messages generated by nginx.
Any other value is treated as "on", meaning that the full nginx
version is emitted in the Server: header and in error messages
generated by nginx.
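For example, a map-based configuration along these lines (the variable
name and values are illustrative) enables the full signature only for
requests from localhost:

    map $remote_addr $tokens {
        127.0.0.1  on;
        default    off;
    }

    server {
        server_tokens $tokens;
    }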
If proxy_cache is enabled and proxy_no_cache tests true, it was previously
possible for the client connection to be closed after a 304. The fix is to
recheck r->header_only after the final cacheability is determined, and to
end the request if the response is no longer cacheable.
Example configuration:

    proxy_cache foo;
    proxy_cache_bypass 1;
    proxy_no_cache 1;
If a client sends If-None-Match, and the upstream server returns 200 with
a matching ETag, no body should be returned to the client. At the start of
ngx_http_upstream_send_response(), proxy_no_cache is not yet tested, thus
cacheable is still 1 and downstream_error is set.
However, by the time the downstream_error check is done in
process_request, proxy_no_cache has been tested and cacheable is set to 0.
The client connection is then closed, regardless of keepalive.
If caching was used, "zero size buf in output" alerts might appear
in logs if a client prematurely closed the connection. The alerts
appeared in the following situation:
- writing to the client returned an error, so the event pipe
  drained all busy buffers, leaving body output filters
  in an invalid state;
- when the upstream response was fully received,
  ngx_http_upstream_finalize_request() tried to flush
  all pending data.
The fix is to avoid flushing the body if p->downstream_error is set.
Sendfile handlers (aio preload and the thread handler) are called within
ctx->output_filter() in ngx_output_chain(), and hence ctx->aio cannot
be set directly in ngx_output_chain(). Meanwhile, it must be set to
make sure the loop within ngx_output_chain() is properly terminated.
There are no known cases that trigger the problem, though in theory
something like aio + the sub filter (something that needs the body in
memory, and can also free some memory buffers) + sendfile can result in
"task already active" and "second aio post" alerts.
The fix is to set ctx->aio in ngx_http_copy_aio_sendfile_preload()
and ngx_http_copy_thread_handler().
For consistency, ctx->aio is no longer set explicitly in
ngx_output_chain_copy_buf(), as it's now done in
ngx_http_copy_thread_handler().
Previously, there were only three timeouts used globally for the whole
HTTP/2 connection:
1. Idle timeout for inactivity when there are no streams in processing
   (the "http2_idle_timeout" directive);
2. Receive timeout for incomplete frames when there are no streams in
   processing (the "http2_recv_timeout" directive);
3. Send timeout when there are frames waiting in the output queue
   (the "send_timeout" directive at the server level).
Reaching one of these timeouts closes the HTTP/2 connection.
This left a number of scenarios in which a connection could get stuck
without any processing or timeouts:
1. A client has sent part of the headers block, so nginx starts
   processing a new stream but cannot continue without the rest of the
   HEADERS and/or CONTINUATION frames;
2. Nginx is waiting for the request body;
3. All streams are stuck on exhausted connection or stream windows.
The first idea, which was rejected, was to detect when the whole
connection gets stuck because of these situations and to set the global
receive timeout. The disadvantage of such an approach would be
inconsistent behaviour in some typical use cases. For example, if a user
never replies to the browser's question about where to save a downloaded
file, the stream will eventually be closed by a timeout. On the other
hand, this will not happen if there's some activity in other concurrent
streams.
Now almost all of the request timeouts work as in HTTP/1.x connections,
so the "client_header_timeout", "client_body_timeout", and "send_timeout"
directives are respected. These timeouts close the request.
The global timeouts work as before.
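For example, a sketch of the per-request timeouts that now also apply to
HTTP/2 streams (the values are illustrative):

    server {
        listen 443 ssl http2;

        client_header_timeout 10s;
        client_body_timeout   10s;
        send_timeout          10s;
    }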
Previously, the c->write->delayed flag was abused to avoid setting timeouts on
stream events. Now, the "active" and "ready" flags are manipulated instead to
control the processing of individual streams.
This is required for implementing per-request timeouts.
Previously, the temporary pool was used only while skipping headers,
and the request pool was used otherwise. That required switching
pools if the request was closed while parsing. It wasn't a problem,
since the request could be closed only after validation of the fully
parsed header. With per-request timeouts, the request can be closed
at any moment, and switching pools in the middle of parsing a header
name or value becomes a problem.
To overcome this, the temporary pool is now always created and used.
Special checks are added to keep it while the stream is being
processed or until the header block is fully parsed.
Since 667aaf61a778 (1.1.17) the ngx_http_parse_header_line() function can
return NGX_HTTP_PARSE_INVALID_HEADER when a header contains a NUL
character. In this case the r->header_end pointer isn't properly
initialized, but the log message in ngx_http_process_request_headers()
wasn't adjusted accordingly. It used the pointer in the size calculation,
which might result in up to a 2k buffer over-read.
Found with afl-fuzz.
When the "pending" value is zero, the "buf" will be right shifted
by the width of its type, which results in undefined behavior.
Found by Coverity (CID 1352150).
Due to the higher precedence of the unary plus operator compared to the
ternary operator, the expression didn't work as expected. That might
result in allocating one byte less than needed for the HEADERS frame
buffer.
With the main request buffered, it's possible that a slice subrequest
will send output before it. For example, while the main request is
waiting for an aio read to complete, a slice subrequest can start an aio
operation as well. The order in which the aio callbacks are called is
undetermined.
Skip SSL_CTX_set_tlsext_servername_callback in case of renegotiation:
do nothing in the SNI callback, as in this case it will be supplied with
the request in c->data, which isn't expected and doesn't work this way.
This was broken by b40af2fd1c16 (1.9.6) with the OpenSSL master branch
and LibreSSL.
Splits a request into subrequests, each providing a specific range of
the response. The "$slice_range" variable must be used to set the
subrequest range and a proper cache key. The "slice" directive sets the
slice size.
The following example splits requests into 1-megabyte cacheable
subrequests:

    server {
        listen 8000;

        location / {
            slice 1m;
            proxy_cache cache;
            proxy_cache_key $uri$is_args$args$slice_range;
            proxy_set_header Range $slice_range;
            proxy_cache_valid 200 206 1h;
            proxy_pass http://127.0.0.1:9000;
        }
    }
If an upstream with variables evaluated to an address without a port,
then instead of a "no port in upstream" error, an attempt was made to
connect(), which failed with EADDRNOTAVAIL.
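A hypothetical configuration that used to trigger the EADDRNOTAVAIL
failure (the variable and address are illustrative); with the fix, the
missing port is reported as a "no port in upstream" error instead:

    set $backend "127.0.0.1";
    proxy_pass http://$backend;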
The HEADERS frame is always represented by more than one buffer since
b930e598a199, but the handling code hadn't been adjusted.
Only the first buffer of the HEADERS frame was checked, and if it had
been sent while the others had not, the rest of the frame was dropped,
resulting in a broken connection.
Before b930e598a199, the problem could only be seen in case of a HEADERS
frame with CONTINUATION.
The r->invalid_header flag wasn't reset once an invalid header appeared
in a request, resulting in all subsequent headers in the request also
being marked as invalid.
The directive toggles conversion of HEAD to GET for cacheable proxy
requests. When disabled, $request_method must be added to the cache key
for consistency. By default, HEAD is converted to GET as before.
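A sketch of disabling the conversion, assuming the directive is named
proxy_cache_convert_head (the name is not given above); note
$request_method added to the cache key, as required when the conversion
is disabled:

    proxy_cache_convert_head off;
    proxy_cache_key $scheme$request_method$host$request_uri;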
OpenSSL doesn't check if the negotiated protocol has been announced.
As a result, the client might force the use of HTTP/2 even if it wasn't
enabled in the configuration.
It caused an inconsistency between setting the "in_closed" flag and the
moment when the last DATA frame was actually read. As a result, the body
buffer might not be initialized properly in
ngx_http_v2_init_request_body(), which led to a segmentation fault in
ngx_http_v2_state_read_data(). It might also cause processing of an
incomplete body to start.
This issue could be triggered when the processing of a request was delayed,
e.g. in the limit_req or auth_request modules.
Now it limits only the maximum length of a literal string (either raw
or compressed) in HPACK request header fields. This is easier to
understand and to describe in the documentation.
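Assuming the directive in question is http2_max_field_size (the name is
not given above), the limit is set as follows (the value is
illustrative):

    http2_max_field_size 4k;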
The previous code was based on the assumption that a header block can
only be split at the borders of individual headers. That wasn't the
case and might have resulted in emitting frames bigger than the frame
size limit.
The current approach is to split header blocks by the frame size limit.
Previously, the nginx worker would crash because of a double free
if a client disconnected or timed out before sending all headers.
Found with afl-fuzz.
Signed-off-by: Piotr Sikora <piotrsikora@google.com>
Previously, streams that were indirectly reprioritized (either because
of a new exclusive dependency on their parent or because of the removal
of their parent from the dependency tree) didn't have their pointer to
the parent node updated.
This broke detection of circular dependencies and, as a result, the
nginx worker would crash due to a stack overflow whenever such a
dependency was introduced.
Found with afl-fuzz.
Signed-off-by: Piotr Sikora <piotrsikora@google.com>
Per RFC7540, a stream cannot depend on itself.
Previously, this requirement was enforced on PRIORITY frames, but not
on HEADERS frames, and due to implementation details the nginx worker
would crash (stack overflow) while opening a self-dependent stream.
Found with afl-fuzz.
Signed-off-by: Piotr Sikora <piotrsikora@google.com>
Since an output buffer can only be used for either reading or sending,
small amounts of data left over from the previous operation (due to some
limits) must be sent before nginx is able to read further into the
buffer. Using only one output buffer can result in suboptimal behavior
that manifests itself in forming and sending too-small chunks of data.
This is particularly painful with SPDY (or HTTP/2), where each such
chunk needs to be prefixed with some header.
The default flow-control window in HTTP/2 is 64k minus one byte. With
one 32k output buffer, this results in one byte left after exhausting
the window. With two 32k buffers, the data will be read into the second
free buffer before sending, thus the minimum output is increased to
32k + 1 bytes, which is much better.
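Assuming this describes the default of the output_buffers directive (the
name is not given above), the new behavior corresponds to:

    output_buffers 2 32k;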
A configuration with a named location inside a zero-length prefix
or regex location used to trigger a segmentation fault, as
ngx_http_core_location() failed to properly detect if a nested location
was created. Example configuration to reproduce the problem:
location "" {
location @foo {}
}
The fix is to not rely on the parent location name length, but rather
to check the type of the command we are currently parsing.
An identical fix is also applied to ngx_http_rewrite_if(), which used to
incorrectly assume the "if" directive is at the server{} level in such
locations.
Reported by Markus Linnala.
Found with afl-fuzz.
This prevents a potential attack that discloses cached data if an
attacker is able to craft a hash collision between some cache key the
attacker is allowed to access and another cache key with protected data.
See http://mailman.nginx.org/pipermail/nginx-devel/2015-September/007288.html.
Thanks to Gena Makhomed and Sergey Brester.
The value of NGX_ERROR, returned from filter handlers, was treated as a
generic upstream error and changed to NGX_HTTP_INTERNAL_SERVER_ERROR
before calling ngx_http_finalize_request(). This resulted in a "header
already sent" alert if the header was already sent in filter handlers.
The problem appeared in 54e9b83d00f0 (1.7.5).
This overflow became possible after the change in 06e850859a26, since
concurrent subrequests are no longer limited and each of them is counted
in r->main->count.
Resolved warnings about declarations that hide previous local
declarations. Warnings about WSASocketA() being deprecated were resolved
by explicit use of WSASocketW() instead of WSASocket(). When compiling
without IPv6 support, WinSock deprecation warnings are disabled to allow
use of gethostbyname().
The following configuration with alias, a nested location and try_files
resulted in the wrong file being used. A request for "/foo/test.gif"
tried to use "/tmp//foo/test.gif" instead of "/tmp/test.gif":
    location /foo/ {
        alias /tmp/;

        location ~ gif {
            try_files $uri =405;
        }
    }
Additionally, rev. c985d90a8d1f introduced a regression if the
"/tmp//foo/test.gif" file was found (ticket #768): the resulting URI was
set to "gif?/foo/test.gif", as the code used the clcf->name of the
current location ("location ~ gif") instead of the parent one
("location /foo/").
The fix is to use r->uri instead of clcf->name in all cases in the
ngx_http_core_try_files_phase() function. It is expected to already be
matched and identical to the clcf->name of the right location.
If alias was used in a location given by a regular expression, nginx
used to do the wrong thing in try_files if the location name (i.e., the
regular expression) was an exact prefix of the URI. The following
configuration triggered a segmentation fault on a request for "/mail":
    location ~ /mail {
        alias /path/to/directory;
        try_files $uri =404;
    }
Reported by Per Hansson.
Iterating through all connections takes a lot of CPU time, especially
with a large number of worker connections configured. As a result,
nginx processes used to consume CPU time during graceful shutdown.
To mitigate this, we now only do a full scan for idle connections when
the shutdown signal is received.
Transitions of connections to idle ones are now expected to be avoided
if the ngx_exiting flag is set. The upstream keepalive module was
modified to follow this.