Previously, if a response to the PTR request was cached, and ngx_resolver_dup()
failed to allocate memory for the resulting name, then the original node was
freed but left in expire_queue. A subsequent address resolving would then
access the freed node through expire_queue, resulting in a use-after-free
either in ngx_resolver_expire() or in ngx_resolver_process_ptr().
The fix is to leave the resolver node intact.
Don't waste server resources by sending RST_STREAM frames. Instead,
reject WINDOW_UPDATE frames with an invalid zero increment by closing
the connection with PROTOCOL_ERROR.
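For illustration, a minimal standalone sketch of the check described above (not nginx's HTTP/2 code; parse_window_update() and the surrounding framing are made up): the 4-octet WINDOW_UPDATE payload has its reserved high bit ignored, and a zero increment is treated as a connection error with PROTOCOL_ERROR.

    #include <stdint.h>
    #include <stdio.h>

    #define PROTOCOL_ERROR  0x1   /* HTTP/2 error code per RFC 7540 */

    /* Hypothetical helper: parse a 4-octet WINDOW_UPDATE payload and decide
     * whether the whole connection must be closed with PROTOCOL_ERROR. */
    static int
    parse_window_update(const uint8_t p[4], uint32_t *increment)
    {
        /* network byte order, the reserved high bit is ignored */
        *increment = ((uint32_t) (p[0] & 0x7f) << 24)
                     | ((uint32_t) p[1] << 16)
                     | ((uint32_t) p[2] << 8)
                     | (uint32_t) p[3];

        if (*increment == 0) {
            /* invalid zero increment: treat as a connection error
             * instead of replying with RST_STREAM */
            return PROTOCOL_ERROR;
        }

        return 0;
    }

    int
    main(void)
    {
        uint32_t       inc;
        const uint8_t  bad[4] = { 0x80, 0x00, 0x00, 0x00 };  /* increment 0 */

        if (parse_window_update(bad, &inc) == PROTOCOL_ERROR) {
            printf("connection error: PROTOCOL_ERROR (zero increment)\n");
        }

        return 0;
    }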
Don't waste server resources by sending RST_STREAM frames. Instead,
reject HEADERS and PRIORITY frames with a self-dependency by closing
the connection with PROTOCOL_ERROR.
When the ngx_http_discard_request_body() call was added to ngx_http_send_response(),
there were no return codes other than NGX_OK and NGX_HTTP_INTERNAL_SERVER_ERROR.
Now it can also return NGX_HTTP_BAD_REQUEST, but ngx_http_send_response() still
incorrectly transforms it to NGX_HTTP_INTERNAL_SERVER_ERROR.
The fix is to propagate ngx_http_discard_request_body() errors.
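A rough sketch of the intended control flow, with made-up names rather than the nginx internals: the point is simply to return whatever the discard call returned instead of collapsing every failure into an internal server error.

    #include <stdio.h>

    /* Hypothetical sketch; the names below do not match the nginx sources. */

    #define OK                    0
    #define HTTP_BAD_REQUEST      400
    #define HTTP_INTERNAL_ERROR   500

    static int
    discard_request_body(void)
    {
        return HTTP_BAD_REQUEST;   /* stub: simulate a malformed request body */
    }

    static int
    send_response(void)
    {
        int  rc;

        rc = discard_request_body();

        if (rc != OK) {
            return rc;             /* propagate 400 or 500 unchanged */
        }

        /* before the fix, any failure here was turned into a 500 */

        /* ... build and send the response ... */
        return OK;
    }

    int
    main(void)
    {
        printf("status: %d\n", send_response());
        return 0;
    }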
As defined in HTTP/1.1, body chunks have the following ABNF:
chunk = chunk-size [ chunk-ext ] CRLF chunk-data CRLF
where chunk-data is a sequence of chunk-size octets.
With this change, chunk-data that is not followed by CRLF at the chunk-size
offset will be treated as invalid, as in the example below:
    4
    SEE-THIS-AND-
    4
    THAT
    0
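For illustration, a standalone sketch of the rule above (not nginx's request body parser); the line breaks in the example stand for CRLF, and the body is rejected because, after the first 4 octets of chunk-data ("SEE-"), the parser finds "T" where CR is required.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Standalone illustration: validate that each chunk-data is followed
     * by CRLF exactly at the chunk-size offset. */
    static int
    chunked_body_valid(const char *p, size_t len)
    {
        const char *end = p + len;

        for ( ;; ) {
            char          *after;
            unsigned long  size;

            size = strtoul(p, &after, 16);          /* chunk-size (hex) */
            p = after;

            /* skip optional chunk-ext up to the CRLF after chunk-size */
            while (p < end && *p != '\r') p++;
            if (end - p < 2 || p[0] != '\r' || p[1] != '\n') return 0;
            p += 2;

            if (size == 0) return 1;                /* last-chunk */

            if ((size_t) (end - p) < size + 2) return 0;
            p += size;                              /* skip chunk-data */

            /* the crucial check: CRLF must follow at the chunk-size offset */
            if (p[0] != '\r' || p[1] != '\n') return 0;
            p += 2;
        }
    }

    int
    main(void)
    {
        const char *body = "4\r\nSEE-THIS-AND-\r\n4\r\nTHAT\r\n0\r\n";

        printf("%s\n", chunked_body_valid(body, strlen(body))
                       ? "valid" : "invalid");
        return 0;
    }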
Previously, if unbuffered request body reading wasn't finished before
the request was redirected to a different location using error_page
or X-Accel-Redirect, and the request body was read again, this could
lead to disastrous effects, such as a duplicate post_handler call or
"http request count is zero" alert followed by a segmentation fault.
This happened in the following configuration (ticket #1819):
    location / {
        proxy_request_buffering off;
        proxy_pass http://bad;
        proxy_intercept_errors on;
        error_page 502 = /error;
    }

    location /error {
        proxy_pass http://backend;
    }
Fixed excessive memory growth and CPU usage if stream windows are
manipulated in a way that results in generating many small DATA frames.
Fix is to limit the number of simultaneously allocated DATA frames.
Fixed uncontrolled memory growth if a peer sends a stream of
headers with a 0-length header name and 0-length header value.
Fix is to reject headers with zero name length.
When using SMTP with SSL and resolver, read events might be enabled
during address resolving, leading to duplicate ngx_mail_ssl_handshake_handler()
calls if something arrives from the client, and to duplicate session
initialization - including starting another resolving. This can lead
to a segmentation fault if the session is closed after the first resolving
finished. Fix is to block read events while resolving.
Reported by Robert Norris,
http://mailman.nginx.org/pipermail/nginx/2019-July/058204.html.
After ac5a741d39cf it is now possible that after zstream.avail_out
reaches 0 and we allocate an additional buffer, there will be no more data
to put into this buffer, triggering a "zero size buf" alert. Fix is to
reset the b->temporary flag in this case.
Additionally, an optimization was added to avoid allocating the additional
buffer in this case, by checking if the last deflate() call returned
Z_STREAM_END.
Note that checking for Z_STREAM_END by itself is not enough to fix the alerts,
as deflate() can return Z_STREAM_END without producing any output if the
buffer is smaller than the gzip trailer.
Reported by Witold Filipczyk,
http://mailman.nginx.org/pipermail/nginx-devel/2019-July/012469.html.
Due to shortcomings of the ccv->zero flag implementation in the complex value
interface, the length of the resulting string from ngx_http_complex_value()
may or may not include the terminating null character, so the only safe
way to work with the result is to use it as a null-terminated string.
Reported by Patrick Wollgast.
When "-" follows a parameter of maximum length, a single byte buffer
overflow happens, since the error branch does not check parameter length.
Fix is to avoid saving "-" to the parameter key, and instead use an error
message with "-" explicitly written. The message is mostly identical to
one used in similar cases in the preequal state.
Reported by Patrick Wollgast.
With level-triggered event methods it is important to specify
the NGX_CLOSE_EVENT flag to ngx_handle_read_event(), otherwise
the event won't be removed, resulting in a CPU hog.
Reported by Patrick Wollgast.
To save memory, the hash code uses u_short to store resulting bucket sizes,
so the maximum bucket size is limited to 65536 minus ngx_cacheline_size
(larger values will be aligned to 65536, which overflows u_short). However,
there were no checks to enforce this, and using larger bucket sizes
resulted in overflows and segmentation faults.
Appropriate safety checks to enforce this have been added to ngx_hash_init().
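As a standalone illustration of the overflow (using a local copy of the ngx_align() macro instead of the nginx headers; the numbers assume a 64-byte cache line): a bucket size just above 65536 minus the cache line size is rounded up to 65536, which wraps to 0 when stored in u_short.

    #include <stdio.h>

    /* local copy of nginx's ngx_align() macro, for illustration only */
    #define align(d, a)  (((d) + ((a) - 1)) & ~((a) - 1))

    int
    main(void)
    {
        unsigned        cacheline_size = 64;  /* a typical ngx_cacheline_size */
        unsigned        requested = 65536 - cacheline_size + 1;
        unsigned        aligned = align(requested, cacheline_size);
        unsigned short  stored = (unsigned short) aligned;  /* u_short storage */

        /* prints: requested 65473 -> aligned 65536 -> stored 0 */
        printf("requested %u -> aligned %u -> stored %u\n",
               requested, aligned, (unsigned) stored);

        return 0;
    }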
When nginx is used with zlib patched with [1], which provides
integration with the future IBM Z hardware deflate acceleration, it ends
up computing CRC32 twice: once in hardware, which always does this,
and once in software by explicitly calling crc32().
crc32() calls were added in changesets 133:b27548f540ad ("nginx-0.0.1-
2003-09-24-23:51:12 import") and 134:d57c6835225c ("nginx-0.0.1-
2003-09-26-09:45:21 import") as part of the gzip wrapping feature - back
then zlib did not support it.
However, gzip wrapping has since been implemented in zlib v1.2.0.4,
and it is already used by nginx for log compression.
This patch replaces the hand-written gzip wrapping with the one provided by
zlib. It simplifies the code and avoids computing CRC32 twice
when using hardware acceleration.
[1] https://github.com/madler/zlib/pull/410
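For reference, a minimal standalone example of the zlib-provided gzip wrapping (not the actual nginx code): passing windowBits of 15 + 16 to deflateInit2() makes zlib emit the gzip header and trailer itself, including the CRC32, so the caller never calls crc32() by hand.

    #include <stdio.h>
    #include <string.h>
    #include <zlib.h>

    int
    main(void)
    {
        z_stream       z;
        static char    in[] = "hello, gzip wrapping\n";
        unsigned char  out[256];

        memset(&z, 0, sizeof(z_stream));

        /* windowBits 15 + 16 selects the gzip wrapper instead of zlib's */
        if (deflateInit2(&z, Z_DEFAULT_COMPRESSION, Z_DEFLATED,
                         15 + 16, 8, Z_DEFAULT_STRATEGY)
            != Z_OK)
        {
            return 1;
        }

        z.next_in = (unsigned char *) in;
        z.avail_in = (unsigned int) (sizeof(in) - 1);
        z.next_out = out;
        z.avail_out = sizeof(out);

        /* with enough output space a single Z_FINISH call finishes the stream */
        if (deflate(&z, Z_FINISH) != Z_STREAM_END) {
            deflateEnd(&z);
            return 1;
        }

        printf("gzip stream: %lu bytes, CRC32 computed by zlib\n",
               (unsigned long) (sizeof(out) - z.avail_out));

        deflateEnd(&z);
        return 0;
    }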
Similarly to the change in 5491:74bfa803a5aa (1.5.9), we should accept
properly escaped URIs and unescape them as needed, else it is not possible
to handle URIs with question marks.
As we now have the ctx->header_sent flag, it is further used to prevent
duplicate $r->send_http_header() calls, output before sending the
header, and $r->internal_redirect() after sending the header.
Further, $r->send_http_header() is now protected from being called after
$r->internal_redirect().
Returning NGX_HTTP_INTERNAL_SERVER_ERROR if perl code died after
sending the header would lead to a "header already sent" alert. To avoid
it, we now check whether the header was already sent, and return NGX_ERROR
instead if it was.
Previously, redirects scheduled with $r->internal_redirect() were followed
even if the code then died. Now these are ignored and nginx will return
an error instead.
Variable handlers are not expected to send anything to the client, cannot
sleep or read the request body, and are not expected to modify the request. Added
appropriate protection to prevent accidental foot shooting.
Duplicate $r->sleep() and/or $r->has_request_body() calls result
in undefined behaviour (in practice, connection leaks were observed).
To prevent this, croak() calls were added in appropriate places.
Previously, allocation errors in nginx.xs were more or less ignored,
potentially resulting in incorrect code execution in specific low-memory
conditions. This is changed to use the ctx->error bit and croak(), similarly
to how output errors are now handled.
Note that this is mostly a cosmetic change, as Perl itself exits on memory
allocation errors, and hence nginx with Perl is hardly usable in low-memory
conditions.
When an error happens, the ctx->error bit is now set, and croak()
is called to terminate further processing. The ctx->error bit is
checked in ngx_http_perl_call_handler() to cancel further processing,
and is also checked in various output functions - to make sure these won't
be called if croak() was handled by an eval{} in perl code.
In particular, this ensures that the output chain won't be called after
errors, as filters might not expect this to happen. This fixes some
segmentation faults under low memory conditions. Also this stops
request processing after filter finalization or request body reading
errors.
For cases where an HTTP error status can be additionally returned (for
example, 416 (Requested Range Not Satisfiable) from the range filter),
the ctx->status field is also added.
This ensures that correct ctx is always available, including after
filter finalization. In particular, this fixes a segmentation fault
with the following configuration:
    location / {
        image_filter test;
        perl 'sub {
            my $r = shift;
            $r->send_http_header();
            $r->print("foo\n");
            $r->print("bar\n");
        }';
    }
This also seems to be the only way to correctly handle filter finalization
in various complex cases, for example, when embedded perl is used both
in the original handler and in an error page called after filter
finalization.
The NGX_DONE test in ngx_http_perl_handle_request() was introduced
in 1702:86bb52e28ce0, which also modified ngx_http_perl_call_handler()
to return NGX_DONE with c->destroyed. The latter part was then
removed in 3050:f54b02dbb12b, so the NGX_DONE test is no longer needed.
Embedded perl does not set any request fields needed for conditional
request processing. Further, filter finalization in the not_modified
filter can cause segmentation faults due to a cleared ctx, as in
ticket #1786.
Before 5fb1e57c758a (1.7.3) the not_modified filter was implicitly disabled
for perl responses, as r->headers_out.last_modified_time was -1. This
change restores this behaviour by using the explicit r->disable_not_modified
flag.
Note that this patch doesn't try to address perl module robustness against
filter finalization and other errors returned from filter chains. It should
eventually be reworked to handle errors instead of ignoring them.
A new directive, limit_req_dry_run, allows enabling the dry run mode. In this
mode requests are neither rejected nor delayed, but the reject/delay status is
logged as usual.
In case of filter finalization, essential request fields like r->uri,
r->args, etc. could be changed, which affected the cache update subrequest.
Also, after filter finalization r->cache could be set to NULL, leading to
a null pointer dereference in ngx_http_upstream_cache_background_update().
The fix is to create the background cache update subrequest before sending
the cached response.
Since its initial introduction in 1aeaae6e9446 (1.11.10), the background cache
update subrequest was created after sending the cached response because
otherwise it blocked the parent request output. In 9552758a786e (1.13.1) background
subrequests were introduced to eliminate the delay before sending the final
part of the cached response. This also made it possible to create the
background cache update subrequest before sending the response.
Note that creating the subrequest earlier does not change the fact that in case
of filter finalization the background cache update subrequest will likely not
have enough time to successfully update the cache entry. Filter finalization
leads to the main request termination as soon as the current iteration of request
processing is complete.
In ngx_http_range_singlepart_body() special buffers were passed
unmodified, including ones after the end of the range. As such,
if the last buffer of a response was sent separately as a special
buffer, two buffers with b->last_buf set were present in the response.
In particular, this might result in a duplicate final chunk when using
chunked transfer encoding (normally range filter and chunked transfer
encoding are not used together, but this may happen if there are trailers
in the response). This is also likely to cause problems in HTTP/2.
Fix is to skip all special buffers after we've sent the last part of
the range requested. These special buffers are not meaningful anyway,
since we set b->last_buf in the buffer with the last part of the range,
and everything is expected to be flushed due to it.
Additionally, ngx_http_next_body_filter() is now called even
if no buffers are to be passed to it. This ensures that various
write events are properly propagated through the filter chain. In
particular, this fixes test failures observed with the above change
and aio enabled.
Filters are not allowed to change incoming chain links, and should allocate
their own links if any modifications are needed. Nevertheless
ngx_http_range_singlepart_body() modified incoming chain links in some
cases, notably at the end of the requested range.
No problems caused by this are currently known, mostly because of the
limited number of possible modifications and the position of the range
body filter in the filter chain. Still, this behaviour is clearly incorrect,
and tests demonstrate that it can at least cause some proxy buffers to be
lost when using proxy_force_ranges, leading to less effective handling
of responses.
Fix is to always allocate new chain links in ngx_http_range_singlepart_body().
Links are explicitly freed to ensure constant memory usage with long-lived
requests.
If a complex value is expected to be of type size_t, and the compiled
value is constant, the constant size_t value is remembered at compile
time.
The value is accessed through ngx_http_complex_value_size(), which
either returns the remembered constant or evaluates the expression
and parses it as size_t.
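A rough sketch of the idea with made-up types (this is not the actual complex value interface): values without variables are parsed once at configuration time and the stored constant is returned on every request, while values with variables would be evaluated and parsed each time.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Made-up types for illustration only. */
    typedef struct {
        const char  *expr;        /* expression with variables, or NULL */
        size_t       const_size;  /* remembered at "compile" time if constant */
    } complex_value_t;

    /* "Compile" a value: if it contains no variables, parse it once. */
    static void
    compile_value(complex_value_t *cv, const char *s)
    {
        if (strchr(s, '$') == NULL) {
            cv->expr = NULL;
            cv->const_size = (size_t) strtoul(s, NULL, 10);

        } else {
            cv->expr = s;
            cv->const_size = 0;
        }
    }

    /* Per-request accessor: return the remembered constant, or evaluate
     * the expression (stubbed out here) and parse the result as a size. */
    static size_t
    complex_value_size(const complex_value_t *cv)
    {
        if (cv->expr == NULL) {
            return cv->const_size;
        }

        /* evaluate cv->expr against the request, then parse it as size_t */
        return 0;
    }

    int
    main(void)
    {
        complex_value_t  cv;

        compile_value(&cv, "4096");
        printf("size: %zu\n", complex_value_size(&cv));
        return 0;
    }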
Previously, ngx_utf8_decode() was called from ngx_utf8_length() with
an incorrect length, potentially resulting in an out-of-bounds read when
handling invalid UTF-8 strings.
In practice out-of-bounds reads are not possible though, as autoindex, the
only user of ngx_utf8_length(), provides null-terminated strings, and
ngx_utf8_decode() returns an error anyway when it sees a null in the
middle of a UTF-8 sequence.
Reported by Yunbin Liu.
If OCSP stapling was enabled with dynamic certificate loading, with some
OpenSSL versions (1.0.2o and older, 1.1.0h and older; fixed in 1.0.2p,
1.1.0i, 1.1.1) a segmentation fault might happen.
The reason is that during an abbreviated handshake the certificate
callback is not called, but the certificate status callback is still called
(https://github.com/openssl/openssl/issues/1662), leading to NULL being
returned from SSL_get_certificate().
Fix is to explicitly check the SSL_get_certificate() result.
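The shape of such a check, sketched against the public OpenSSL API (the callback below is illustrative, not nginx's implementation): if SSL_get_certificate() returns NULL, the status callback declines to staple instead of dereferencing the result.

    #include <openssl/ssl.h>

    /* Illustrative certificate status (OCSP stapling) callback: decline
     * to staple if there is no certificate to look up a response for. */
    static int
    status_callback(SSL *ssl, void *arg)
    {
        X509  *cert;

        (void) arg;

        cert = SSL_get_certificate(ssl);

        if (cert == NULL) {
            /* abbreviated handshake: the certificate callback was not
             * called, so there is nothing to staple - just decline */
            return SSL_TLSEXT_ERR_NOACK;
        }

        /* ... find the cached OCSP response for this certificate and
         * attach it with SSL_set_tlsext_status_ocsp_resp() ... */

        return SSL_TLSEXT_ERR_OK;
    }

    /* registration, for completeness */
    void
    enable_status_callback(SSL_CTX *ctx)
    {
        SSL_CTX_set_tlsext_status_cb(ctx, status_callback);
    }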