On Windows, a worker process does not call ngx_slab_init() from
ngx_init_zone_pool(), so ngx_slab_max_size, ngx_slab_exact_size,
and ngx_slab_exact_shift were left uninitialized.
The $invalid_referer variable was considered non-existent in the absence
of any valid_referers directives.
Given the following config snippet,
    location / {
        return 200 $invalid_referer;
    }

    location /referer {
        valid_referers server_names;
    }
"location /" should work identically and independently on other
"location /referer".
The fix is to always add the $invalid_referer variable as long
as the module is compiled in, as is done by other modules.
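Roughly, this boils down to registering the variable unconditionally during
preconfiguration, in the usual way (a minimal sketch; it reuses the module's
existing get handler name for illustration and is not a quote of the patch):

    static ngx_str_t  ngx_http_invalid_referer_name =
        ngx_string("invalid_referer");

    static ngx_int_t
    ngx_http_referer_add_variables(ngx_conf_t *cf)
    {
        ngx_http_variable_t  *var;

        /* register $invalid_referer even if no valid_referers is configured;
           the get handler decides the actual value at request time */
        var = ngx_http_add_variable(cf, &ngx_http_invalid_referer_name, 0);
        if (var == NULL) {
            return NGX_ERROR;
        }

        var->get_handler = ngx_http_referer_variable;

        return NGX_OK;
    }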
The shared objects should generally be allocated from shared memory. While
allocating peers->name and the data it points to from cf->pool happened to
work on UNIX, it broke on Windows. On UNIX this worked only because the
shared memory zone for upstreams is re-created for every new configuration.
On Windows, however, a worker process does not inherit the address space
of the master process, so peers->name pointed to data allocated from
cf->pool by the master process and was therefore invalid.
The phase is added instead of the try_files phase. Unlike the old phase, the
new one supports registering multiple handlers. The try_files implementation is
moved to a separate ngx_http_try_files_module, which now registers a precontent
phase handler.
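For reference, a handler is attached to the new phase with the usual
postconfiguration pattern; a minimal sketch (function names are illustrative):

    static ngx_int_t
    ngx_http_try_files_init(ngx_conf_t *cf)
    {
        ngx_http_handler_pt        *h;
        ngx_http_core_main_conf_t  *cmcf;

        cmcf = ngx_http_conf_get_module_main_conf(cf, ngx_http_core_module);

        /* several modules may push handlers into the precontent phase */
        h = ngx_array_push(&cmcf->phases[NGX_HTTP_PRECONTENT_PHASE].handlers);
        if (h == NULL) {
            return NGX_ERROR;
        }

        *h = ngx_http_try_files_handler;

        return NGX_OK;
    }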
The new request flag "preserve_body" indicates that the request body file should
not be removed by the upstream module because it may be used later by a
subrequest. The flag is set by the SSI (ticket #585), addition and slice
modules. It is also set by the upstream module when a background cache
update subrequest is started, to prevent removal of the request body file
after an internal redirect. Only the main request is now allowed to remove
the file.
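A module that starts such a subrequest is expected to set the flag on the
request whose body file must be kept, e.g. (illustrative):

    /* keep the request body file; a later subrequest still needs it */
    r->preserve_body = 1;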
When closing a socket with SO_REUSEPORT, Linux drops all connections waiting
in this socket's listen queue. Previously, it was believed to result in
connection resets only when reconfiguring nginx to use a smaller number of
worker processes, but it also results in connection resets during
configuration testing.
The workaround is to avoid using SO_REUSEPORT when testing configuration. It
should prevent listening sockets from being created if a conflicting socket
already exists, while still preserving detection of other possible errors.
It should also cover UDP sockets.
The only downside of this approach seems to be that configuration testing
won't be able to properly report the case when nginx was compiled with
SO_REUSEPORT, but the kernel is not able to set it. Such errors will be
reported on a real start instead.
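The guard amounts to something like the following next to the setsockopt()
call (a simplified sketch with abbreviated error handling, not a quote of
the patch):

    #if (NGX_HAVE_REUSEPORT)
        /* skip SO_REUSEPORT in "nginx -t" mode so that bind() still detects
           conflicts with sockets that already exist */
        if (ls[i].reuseport && !ngx_test_config) {
            int  reuseport = 1;

            if (setsockopt(s, SOL_SOCKET, SO_REUSEPORT,
                           (const void *) &reuseport, sizeof(int))
                == -1)
            {
                ngx_log_error(NGX_LOG_EMERG, log, ngx_socket_errno,
                              "setsockopt(SO_REUSEPORT) %V failed",
                              &ls[i].addr_text);
                return NGX_ERROR;
            }
        }
    #endif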
Suffix ranges are no longer allowed to set negative start values, to prevent
ranges with a negative start from appearing even if the total size protection
is removed.
The overflow can be used to circumvent the restriction on total size of
ranges introduced in c2a91088b0c0 (1.1.2). Additionally, the overflow
allows producing ranges with a negative start (such ranges can be created
by using a suffix, "bytes=-100"; normally this results in 200 due to
the total size check). These can result in the following errors in logs:
[crit] ... pread() ... failed (22: Invalid argument)
[alert] ... sendfile() failed (22: Invalid argument)
When using the cache, it can also be used to reveal the cache file header.
It is believed that there are no other negative effects, at least with
standard nginx modules.
In theory, this can also result in memory disclosure and/or segmentation
faults if multiple ranges are allowed, and the response is returned in a
single in-memory buffer. This never happens with standard nginx modules
or known 3rd party modules, though.
The fix is to properly protect against possible overflow when incrementing
the size.
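Schematically, the check looks like this (variable names assumed, not taken
from the patch):

    /* reject the range set if adding this range would overflow off_t */
    if (size > NGX_MAX_OFF_T_VALUE - (end - start)) {
        return NGX_HTTP_RANGE_NOT_SATISFIABLE;
    }

    size += end - start;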
It is safe because re-sending still works during graceful shutdown as
long as resolving takes place (and resolve tasks set their own timeouts
that are not cancelable).
Also, the new ctx->cancelable flag can be set to make a resolve task's
timeout event cancelable.
Notably, on ppc64 with a 64k page size, slab 0 (of size 8) requires 128
64-bit elements for bitmasks: a 64k page holds 8192 eight-byte items, and
8192 bits of bitmask take 128 64-bit words. The code bogusly assumed that
one uintptr_t is enough for the bitmasks plus at least one free slot.
Resolving an SRV record includes resolving its host names in subrequests.
Previously, if memory allocation failed while reporting a subrequest result
after receiving a response from a DNS server, the SRV resolve handler was
called immediately with the NGX_ERROR state. However, if the SRV record
included another copy of the resolved name, it was reported once again.
This could trigger use-after-free memory access after the SRV resolve
handler freed the resolve context by calling ngx_resolve_name_done().
Now the SRV resolve handler is called only when all its subrequests are
completed.
Previously, each configured header was represented in one of two ways,
depending on whether or not its value included any variables.
If the value didn't include any variables, then it would be represented
as a single script that contained the complete header line with HTTP/1.1
delimiters, i.e.:
"Header: value\r\n"
But if the value included any variables, then it would be represented
as a series of three scripts: the first contained the header name and the
": " delimiter, the second evaluated to the header value, and the third
contained only "\r\n", i.e.:
"Header: "
"$value"
"\r\n"
This commit changes that, so that each configured header is represented
as a series of two scripts: the first contains only the header name, and
the second contains (or evaluates to) only the header value, i.e.:
"Header"
"$value"
or
"Header"
"value"
This not only makes things more consistent, but also allows header name
and value to be accessed separately.
Signed-off-by: Piotr Sikora <piotrsikora@google.com>
As per RFC 2616 / RFC 7233, any range request to an empty file
is expected to result in a 416 Range Not Satisfiable response, as
there cannot be a "byte-range-spec whose first-byte-pos is less
than the current length of the entity-body". On the other hand,
this makes use of byte-range requests inconvenient in some cases,
as reported for the slice module here:
http://mailman.nginx.org/pipermail/nginx-devel/2017-June/010177.html
This commit changes range filter to instead return 200 if the file
is empty and the range requested starts at 0.
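Schematically, the special case looks like this (simplified, with assumed
variable names; not a quote of the patch):

    /* an empty entity with a range starting at byte 0: skip range processing
       and let the response go out as a regular 200 */
    if (content_length == 0 && start == 0) {
        return NGX_DECLINED;
    }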
This change reworks 13a5f4765887 to run posted requests only once, with
nothing on the stack. Running posted requests with other request functions
on the stack may result in use-after-free in case of errors, similar to the
one reported in #788.
To run posted requests only once, a separate function,
ngx_http_upstream_ssl_handshake_handler(), was introduced to be used as the
SSL handshake handler in c->ssl->handler. ngx_http_run_posted_requests()
is only called in this function, and not in ngx_http_upstream_ssl_handshake(),
which may be called directly on the stack.
Additionally, ngx_http_upstream_ssl_handshake_handler() now does appropriate
debug logging of the current subrequest, similar to what is done in other
event handlers.
Previously, the upstream resolve handler always called
ngx_http_run_posted_requests() to run posted requests after processing the
resolver response. However, if the handler was called directly from the
ngx_resolve_name() function (for example, if the resolver response was cached),
running posted requests from the handler could lead to the following errors:
- If the request was scheduled for termination, it could actually be terminated
in the resolve handler. Upper stack frames could reference the freed request
object in this case.
- If a significant number of requests were posted, and for each of them the
resolve handler was called directly from the ngx_resolve_name() function,
posted requests could be run recursively and lead to stack overflow.
Now ngx_http_run_posted_requests() is only called from asynchronously invoked
resolve handlers.
Trailers added using this directive are evaluated after the response body
is processed by output filters (but before it is written to the wire),
so it's possible to use variables calculated from the response body
as the trailer value.
Signed-off-by: Piotr Sikora <piotrsikora@google.com>
Example:

    ngx_table_elt_t  *h;

    h = ngx_list_push(&r->headers_out.trailers);
    if (h == NULL) {
        return NGX_ERROR;
    }

    ngx_str_set(&h->key, "Fun");
    ngx_str_set(&h->value, "with trailers");
    h->hash = ngx_hash_key_lc(h->key.data, h->key.len);
The code above adds "Fun: with trailers" trailer to the response.
Modules that want to emit trailers must set r->expect_trailers = 1
in the header filter, otherwise trailers might not be emitted for HTTP/1.1
responses that aren't already chunked.
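That is, a module's header filter is expected to do something like:

    /* announce trailers so HTTP/1.1 responses are switched to chunked
       encoding and the trailers can actually be sent */
    r->expect_trailers = 1;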
This change also adds $sent_trailer_* variables.
Signed-off-by: Piotr Sikora <piotrsikora@google.com>
The current style in variable handlers returning NGX_OK is to either set
v->not_found to 1, or to initialize the entire ngx_http_variable_value_t
structure.
In theory, always setting v->valid = 1 for NGX_OK would be useful, which
would mean that the value was computed and is thus valid, including the
special case of v->not_found = 1. But currently that's not the case and
causes the (v->valid || v->not_found) check to access an uninitialized
v->valid value, which is safe only because its value doesn't matter when
v->not_found is set.
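For illustration, a handler returning NGX_OK currently takes one of the two
forms below (an invented handler; ngx_http_example_lookup() is hypothetical):

    static ngx_int_t
    ngx_http_example_variable(ngx_http_request_t *r,
        ngx_http_variable_value_t *v, uintptr_t data)
    {
        ngx_str_t  *value;

        value = ngx_http_example_lookup(r);    /* hypothetical helper */

        if (value == NULL) {
            /* first style: only mark the value as not found */
            v->not_found = 1;
            return NGX_OK;
        }

        /* second style: initialize the entire structure */
        v->valid = 1;
        v->no_cacheable = 0;
        v->not_found = 0;
        v->len = value->len;
        v->data = value->data;

        return NGX_OK;
    }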
When evaluating a mapped $reset_uid variable in the userid filter,
if the get_handler set to ngx_http_map_variable() returned an error,
this previously resulted in a NULL pointer dereference.
If memory allocation of a new r->uri.data storage failed, reset its length as
well. Request URI is used in ngx_http_finalize_request() for debug logging.
Previously, when using NGX_HTTP_SSI_ERROR, the error was ignored in SSI
processing, so timefmt could later be accessed in
ngx_http_ssi_date_gmt_local_variable() as part of the "set" handler, or a
NULL format pointer could be passed to strftime().
Previously, a SETTINGS ACK was sent immediately upon receipt of a SETTINGS
frame, before already queued DATA frames created using the old SETTINGS
were sent. This incorrect behavior was a source of interoperability issues,
because peers rely on the fact that the new SETTINGS are in effect after
receiving the SETTINGS ACK.
Reported by Feng Li.
Signed-off-by: Piotr Sikora <piotrsikora@google.com>
Previously, new frames could be emitted in the middle of applying
new (and already acknowledged) SETTINGS params, which is illegal.
Signed-off-by: Piotr Sikora <piotrsikora@google.com>
If the main request was finalized while a background request performed an
asynchronous operation, the main request ended up in ngx_http_writer() and was
not finalized until a network event or a timeout. For example, a cache
background update with aio enabled made nginx unable to process further client
requests or close the connection, keeping it open until the client closed it.
Now regular finalization of the main request is not suspended because of an
asynchronous operation in another request.
If a background request was terminated while an asynchronous operation was in
progress, the background request's write event handler was changed to
ngx_http_request_finalizer() and never called again.
Now, whenever a request is terminated while an asynchronous operation is in
progress, the connection error flag is set to make further finalizations of
any request on this connection lead to termination.
These issues appeared in 1aeaae6e9446 (not yet released).
In http these checks were changed in a6d6d762c554, though the mail module
was missed at that time. Since then, the stream module was introduced
based on mail, using the "== NGX_ERROR" check.
With OpenSSL 1.1.0+, the workaround for handshake buffer size as introduced
in a720f0b0e083 (ticket #413) no longer works, as OpenSSL no longer exposes
handshake buffers, see https://github.com/openssl/openssl/commit/2e7dc7cd688.
Moreover, it is no longer possible to adjust handshake buffers at all.
To avoid an additional RTT if the handshake uses more than 4k, we now set
TCP_NODELAY on SSL connections before the handshake. While this still
results in sub-optimal
network utilization due to incomplete packets being sent, it seems to be
better than nothing.
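Roughly, this amounts to the following before the handshake is started (a
simplified sketch, not the exact code):

    /* disable Nagle's algorithm so large handshake records are not delayed */
    if (c->tcp_nodelay == NGX_TCP_NODELAY_UNSET) {
        int  tcp_nodelay = 1;

        if (setsockopt(c->fd, IPPROTO_TCP, TCP_NODELAY,
                       (const void *) &tcp_nodelay, sizeof(int))
            == -1)
        {
            ngx_connection_error(c, ngx_socket_errno,
                                 "setsockopt(TCP_NODELAY) failed");
            return NGX_ERROR;
        }

        c->tcp_nodelay = NGX_TCP_NODELAY_SET;
    }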
Previously, cache background update might not work as expected, making the
client wait for it to complete before receiving the final part of a stale
response.
This could happen if the response could not be sent to the client socket in one
filter chain call.
Now background cache update is done in a background subrequest. This type of
subrequest does not block any other subrequests or the main request.
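Starting such a subrequest looks roughly like this (the URI and arguments are
reused from the parent request purely for illustration):

    ngx_http_request_t  *sr;

    /* the background flag detaches the subrequest from client output */
    if (ngx_http_subrequest(r, &r->uri, &r->args, &sr, NULL,
                            NGX_HTTP_SUBREQUEST_BACKGROUND)
        != NGX_OK)
    {
        return NGX_ERROR;
    }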
Previously, the read event of the accepted connection was marked ready, but not
available. This made EPOLLRDHUP-related code (for example, in ngx_unix_recv())
expect more data from the socket, leading to unexpected behavior.
For example, if SSL, PROXY protocol and deferred accept were enabled on a listen
socket, the client connection was aborted due to an unexpected return value
of c->recv().
If allocation of cleanup handler in the HTTP/2 header filter failed, then
a stream might be freed with a HEADERS frame left in the output queue.
Now the HEADERS frame is accounted in the queue before trying to allocate
the cleanup handler.
Abnormally exited workers may leave locked cache entries; this can
result in the cache size on disk exceeding max_size and in shared memory
exhaustion.
This change mitigates the issue by ignoring locked entries during forced
expire. It also increases the visibility of the problem by logging such
entries.
Previously, an allocation error resulted in uninitialized memory access
when evaluating $upstream_http_ variables.
On a related note, see r->headers_out.headers cleanup work in 0cdee26605f3.
In ac9b1df5b246 (1.13.0) we attempted to allow renegotiation in client mode,
but when using OpenSSL 1.0.2 or older versions it was additionally disabled
by SSL3_FLAGS_NO_RENEGOTIATE_CIPHERS.
If initialization of a header failed for some reason after ngx_list_push(),
leaving the header as is can result in uninitialized memory access by
the header filter or the log module. The fix is to clear partially
initialized headers in case of errors.
For the Cache-Control header, the fix is to postpone pushing
r->headers_out.cache_control until its value is completed.
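The pattern is sketched below with an invented header and allocation (not
taken from the patch; "len" stands for a length computed earlier):

    ngx_table_elt_t  *h;

    h = ngx_list_push(&r->headers_out.headers);
    if (h == NULL) {
        return NGX_ERROR;
    }

    h->hash = 1;
    ngx_str_set(&h->key, "X-Example");

    h->value.data = ngx_pnalloc(r->pool, len);
    if (h->value.data == NULL) {
        h->hash = 0;    /* header filters and the log module will skip it */
        return NGX_ERROR;
    }

    h->value.len = len;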
Previously, ngx_http_sub_header_filter() could fail with a partially
initialized context, later accessed in ngx_http_sub_body_filter()
if called from the perl content handler.
The issue had appeared in 2c045e5b8291 (1.9.4).
A better fix would be to handle ngx_http_send_header() errors in
the perl module, though this doesn't seem to be easy enough.
The SSL_CTRL_SET_CURVES_LIST macro is removed in the OpenSSL master branch.
SSL_CTX_set1_curves_list is preserved for compatibility with previous versions.
CVE-2009-3555 is no longer relevant, as it is mitigated by the renegotiation
info extension (secure renegotiation). On the other hand, unexpected
renegotiation still introduces potential security risks, and hence we do
not allow renegotiation on the server side, as we never request renegotiation.
On the client side the situation is different though. There are backends
which explicitly request renegotiation, and disabled renegotiation
introduces interoperability problems. This change allows renegotiation
on the client side, and fixes interoperability problems as observed with
such backends (ticket #872).
Additionally, with TLSv1.3 the SSL_CB_HANDSHAKE_START flag is currently set
by OpenSSL when receiving a NewSessionTicket message, and this was detected
by nginx as a renegotiation attempt. This looks like a bug in OpenSSL, though
this change also allows better interoperability till the problem is fixed.
Previously, the source IP address of a response UDP datagram could differ from
the original datagram destination address. This could happen if the server UDP
socket is bound to a wildcard address and the network interface chosen to output
the response packet has a different default address than the destination address
of the original packet. For example, if two addresses from the same network are
configured on an interface.
Now the source address is set explicitly if a response is sent for a server
UDP socket bound to a wildcard address.
This change adds "http_429" parameter to "proxy_next_upstream" for
retrying rate-limited requests, and to "proxy_cache_use_stale" for
serving stale cached responses after being rate-limited.
Signed-off-by: Piotr Sikora <piotrsikora@google.com>
This change adds a reason phrase in the status line and a pretty response
body when the "429" status code is used in the "return", "limit_conn_status"
and/or "limit_req_status" directives.
Signed-off-by: Piotr Sikora <piotrsikora@google.com>
When a slice subrequest was redirected to a new location, its context was lost.
After its completion, a new slice subrequest for the same slice was created.
This could lead to an infinite loop. Now the slice module makes sure each
slice subrequest starts output with the slice context available.
With post_action or subrequests, it is possible that the timer set for
wev->delayed will expire while the active subrequest write event handler
is not ready to handle this. This results in request hangs as observed
with limit_rate / sendfile_max_chunk and post_action (ticket #776) or
subrequests (ticket #1228).
Moving the handling to the connection event handler fixes the hangs observed,
and also slightly simplifies the code.
Since limit_req uses connection's write event to delay request processing,
it can conflict with timers in other subrequests. In particular, even
if applied to an active subrequest, it can break things if wev->delayed
is already set (due to limit_rate or sendfile_max_chunk), since after
limit_req finishes the wev->delayed flag will be set and no timer will be
active.
Fix is to use the wev->delayed flag in limit_req as well. This ensures that
wev->delayed won't be set after limit_req finishes, and also ensures that
limit_req's timers will be properly handled by other subrequests if the one
delayed by limit_req is not active.
All streams in a connection must be finalized before the connection
itself can be finalized and all related memory freed. That's
not always possible in the current event loop iteration.
Thus when the last stream is finalized, it sets the special read
event handler ngx_http_v2_handle_connection_handler() and posts
the event.
Previously, this handler didn't check the connection state and
could call the regular event handler on a connection that was
already in finalization stage. In the worst case that could
lead to a segmentation fault, since some data structures aren't
supposed to be used during connection finalization. In particular,
the waiting queue can contain already freed streams, so a
WINDOW_UPDATE frame received by that moment could trigger
access to these freed streams.
Now, the connection error flag is explicitly checked in
ngx_http_v2_handle_connection_handler().
In order to finalize a stream, the error flag is set on the fake connection
and either the "write" or the "read" event handler is called. The read events
of fake connections are always ready, but that is not the case with the write
events. When the ready flag isn't set, the error flag may not be checked in
some cases, and as a result the stream isn't finalized. Now the ready flag is
explicitly set on write events for proper finalization in all cases.
Previously, flow control didn't account for padding in DATA frames,
which meant that its view of the world could drift from the peer's view
by up to 256 bytes per received padded DATA frame, which could lead
to a deadlock.
Signed-off-by: Piotr Sikora <piotrsikora@google.com>