Without this change, aio used with HTTP/2 can result in a connection hang,
as observed with "aio threads; aio_write on;" and proxying (ticket #2248).
The problem is that HTTP/2 updates buffers outside of the output filters
(notably, marks them as sent), and then posts a write event to call
output filters. If a filter does not call the next one for some reason
(for example, because of an AIO operation in progress), this might
result in a state when the owner of a buffer already called
ngx_chain_update_chains() and can reuse the buffer, while the same buffer
is still sitting in the busy chain of some other filter.
In this particular case, a buffer was sitting in the output chain's ctx->busy
and was reused by the event pipe. The output chain's ctx->busy was permanently
blocked by it, and this resulted in a connection hang.
Fix is to change ngx_chain_update_chains() to skip buffers from other
modules unconditionally, without trying to wait for these buffers to
become empty.
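A minimal sketch of the reordered check in ngx_chain_update_chains()
(simplified; the surrounding chain-link bookkeeping is unchanged and
omitted here):

    while (*busy) {
        cl = *busy;

        /* buffers tagged by other modules are skipped unconditionally,
         * without waiting for them to become empty */
        if (cl->buf->tag != tag) {
            *busy = cl->next;
            ngx_free_chain(p, cl);
            continue;
        }

        /* only our own buffers are waited on */
        if (ngx_buf_size(cl->buf) != 0) {
            break;
        }

        cl->buf->pos = cl->buf->start;
        cl->buf->last = cl->buf->start;

        *busy = cl->next;
        cl->next = *free;
        *free = cl;
    }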
With SSL it is possible that an established connection is ready for
reading after the handshake. Further, events might already be disabled
in case of level-triggered event methods. If this happens and
ngx_http_upstream_send_request() blocks waiting for some data from
the upstream, such as flow control in case of gRPC, the connection
will time out due to no read events on the upstream connection.
Fix is to explicitly check the c->read->ready flag if sending the request
blocks and post a read event if it is set.
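Sketched in isolation, the fix amounts to something like the following
(assuming c is the upstream connection and that this runs right after
sending the request blocked):

    /* data might already have been received, e.g. during the SSL
     * handshake; make sure it is processed even though read events
     * may currently be disabled */
    if (c->read->ready) {
        ngx_post_event(c->read, &ngx_posted_events);
    }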
Note that while it is possible to modify ngx_ssl_handshake() to keep
read events active, this won't completely resolve the issue, since
there can be data already received during the SSL handshake
(see 573bd30e46b4).
A new behaviour was introduced in OpenSSL 1.1.1e for the case when a peer
does not send close_notify before closing the connection. Previously, this
returned SSL_ERROR_SYSCALL with errno 0, a behaviour known since at least
OpenSSL 0.9.7 and handled gracefully in nginx. Now it returns SSL_ERROR_SSL
with a distinct reason SSL_R_UNEXPECTED_EOF_WHILE_READING ("unexpected eof
while reading"). This leads to critical errors seen in nginx within various
routines such as SSL_do_handshake(), SSL_read(), and SSL_shutdown(). The old
behaviour was restored in OpenSSL 1.1.1f, but the new one is present in
OpenSSL 3.0 by default.
Using the SSL_OP_IGNORE_UNEXPECTED_EOF option added in OpenSSL 3.0 restores
the compatible behaviour of returning SSL_ERROR_ZERO_RETURN:
https://git.openssl.org/?p=openssl.git;a=commitdiff;h=09b90e0
See for additional details: https://github.com/openssl/openssl/issues/11381
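A sketch of how the option can be applied to an nginx SSL context; the
option is guarded since it only exists in OpenSSL 3.0 and later (ssl->ctx
is the SSL_CTX stored in ngx_ssl_t):

    #ifdef SSL_OP_IGNORE_UNEXPECTED_EOF
        SSL_CTX_set_options(ssl->ctx, SSL_OP_IGNORE_UNEXPECTED_EOF);
    #endif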
The OPENSSL_SUPPRESS_DEPRECATED macro is used to suppress deprecation warnings.
This covers session ticket keys, the SSL engine, and the low-level DH API
for DHE ciphers.
Unlike OPENSSL_API_COMPAT, it works well with OpenSSL built with no-deprecated.
In particular, it doesn't unhide various macros in OpenSSL includes, which are
meant to be hidden under OPENSSL_NO_DEPRECATED.
The only consumer is a callback function for SSL_CTX_set_tmp_rsa_callback(),
deprecated in OpenSSL 1.1.0. That function is now conditionally compiled too.
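Roughly, the macro is defined before any OpenSSL header is included, and
the deprecated callback is compiled only for OpenSSL versions that still
provide SSL_CTX_set_tmp_rsa_callback(); a simplified sketch (the exact
version guard is an assumption):

    /* must be defined before the first OpenSSL header is included */
    #ifndef OPENSSL_SUPPRESS_DEPRECATED
    #define OPENSSL_SUPPRESS_DEPRECATED
    #endif

    #include <openssl/ssl.h>

    #if (OPENSSL_VERSION_NUMBER < 0x10100000L)
    /* only needed with OpenSSL before 1.1.0, where
     * SSL_CTX_set_tmp_rsa_callback() is still available */
    static RSA *ngx_ssl_rsa512_key_callback(SSL *ssl_conn, int is_export,
        int key_length);
    #endif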
The c->read->ready and c->write->ready flags might be reset during
the handshake, and not set again if the handshake was finished on
the other event. At the same time, some data might be read from
the socket during the handshake, so missing c->read->ready flag might
result in a connection hang, for example, when waiting for an SMTP
greeting (which was already received during the handshake).
Found by Sergey Kandaurov.
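One way to express the fix, sketched under the assumption that it is
applied in ngx_ssl_handshake() right after the handshake completes
successfully:

    /* the ready flags might have been reset while the handshake was
     * driven by the other event; data may already be buffered by
     * OpenSSL, so mark both events as ready again */
    c->read->ready = 1;
    c->write->ready = 1;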
Recent fixes to SSL shutdown with lingering close (554c6ae25ffc, 1.19.5)
broke logging of SSL variables. To make sure logging of SSL variables
works properly, avoid freeing c->ssl when doing an SSL shutdown before
lingering close.
Reported by Reinis Rozitis
(http://mailman.nginx.org/pipermail/nginx/2021-May/060670.html).
Instead of calling SSL_free() at each return point, a single place where
cleanup happens was introduced. As a positive side effect, this fixes two
potential memory leaks on ngx_handle_read_event() and ngx_handle_write_event()
errors where there were no SSL_free() calls (though unlikely practical,
as errors there are only expected to happen due to bugs or kernel issues).
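A minimal sketch of the single-cleanup-point pattern (a hypothetical
function body, not the exact nginx code; c->ssl->connection is the
underlying SSL object):

    if (ngx_handle_read_event(c->read, 0) != NGX_OK) {
        goto failed;
    }

    if (ngx_handle_write_event(c->write, 0) != NGX_OK) {
        goto failed;
    }

    return NGX_AGAIN;

failed:

    /* single cleanup point instead of SSL_free() at each return */
    SSL_free(c->ssl->connection);
    c->ssl = NULL;

    return NGX_ERROR;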
Since nginx always uses exactly one entry in the question section of
a DNS query, and never uses compression pointers in this entry, parsing
of a DNS response in ngx_resolver_process_response() does not expect
compression pointers to appear in the question section of the DNS
response. Indeed, compression pointers in the first name of a DNS response
hardly make sense, do not seem to be allowed by RFC 1035 (which says
"a pointer to a prior occurance of the same name", note "prior"), and
were never observed in practice.
Added an explicit check to ngx_resolver_process_response()'s parsing
of the question section to properly report an error if compression pointers
nevertheless appear in the question section.
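The added check might look roughly like this (a sketch; buf, n and the
err/goto error path follow the conventions of
ngx_resolver_process_response(), and the exact wording of the error
message is an assumption):

    i = sizeof(ngx_resolver_hdr_t);

    while (i < (ngx_uint_t) n) {

        if (buf[i] & 0xc0) {
            err = "unexpected compression pointer in DNS response";
            goto done;
        }

        if (buf[i] == '\0') {
            goto found;
        }

        i += 1 + buf[i];
    }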
Instead of checking on each label whether a dot needs to be placed, the
code now always adds a dot after each label and reduces the resulting
length afterwards.
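Sketched, the name-copying loop (assumed to live in ngx_resolver_copy())
becomes something like the following; compression pointer handling and
bounds checks are omitted, and at least one label is assumed:

    for ( ;; ) {
        n = *src++;

        if (n == 0) {
            break;
        }

        ngx_memcpy(dst, src, n);
        dst += n;
        src += n;

        *dst++ = '.';                       /* always add a dot */
    }

    name->len = dst - name->data - 1;       /* drop the trailing dot */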
Previously, anything with either of the two high bits set was interpreted
as a compression pointer. This is incorrect, as RFC 1035 clearly states
that "The 10 and 01 combinations are reserved for future use". Further,
the 01 combination is actually allocated for EDNS extended label type
(see RFC 2671 and RFC 6891), not really used though.
Fix is to reject unrecognized label types rather than misinterpreting
them as compression pointers.
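A sketch of the stricter label type handling (fragment; n is the label
length byte being examined):

    n = *src;

    if (n & 0xc0) {

        if ((n & 0xc0) != 0xc0) {
            /* the 10 and 01 combinations: reserved and EDNS extended
             * label types, not compression pointers - reject them */
            return NGX_ERROR;
        }

        /* 11xxxxxx: a genuine compression pointer, handled as before */
    }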
The misinterpretation is believed to be harmless; in the worst case, some
uninitialized memory is used as part of the compression pointer length,
eventually leading to the "name is out of DNS response" error.
It turns out no browsers implement HTTP/2 GOAWAY handling properly, and a
large enough number of resources on a page results in failures to load
some resources. In particular, Chrome seems to experience errors if
loading of all resources requires more than 1 connection (while it
is usually able to retry requests at least once, even with 2 connections
there are occasional failures for some reason), Safari if loading requires
more than 3 connections, and Firefox if loading requires more than 10
connections (can be configured with network.http.request.max-attempts,
defaults to 10).
It does not seem possible to resolve this on the nginx side; even strict
limiting of maximum concurrency does not help, and loading issues seem to
be triggered merely by queueing a request for a particular connection.
The only available mitigation seems to be using a higher keepalive_requests
value.
The new default is 1000 and matches the previously used default for
http2_max_requests. It is expected to be enough for 99.98% of the pages
(https://httparchive.org/reports/state-of-the-web?start=latest#reqTotal)
even in Chrome.
Similar to lingering_time, the new keepalive_time directive limits the total
connection lifetime before
keepalive is switched off. The default is 1 hour, which is close to
the total maximum connection lifetime possible with default
keepalive_requests and keepalive_timeout.
Firefox uses several idle streams for PRIORITY frames[1], and
"http2_max_concurrent_streams 1;" results in "client sent too many
PRIORITY frames" errors when a connection is established by Firefox.
Fix is to relax the PRIORITY frames limit to use at least 100 as the
initial value (which is the minimum limit on the number of concurrent
streams recommended by the HTTP/2 protocol, so it is not unreasonable
for clients to assume that a similar number of idle streams can be used
for prioritization).
[1] https://hg.mozilla.org/mozilla-central/file/32a9e6e145d6e3071c3993a20bb603a2f388722b/netwerk/protocol/http/Http2Stream.cpp#l1270
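The relaxed limit can be sketched as follows (field names as in the
HTTP/2 module; placing it in ngx_http_v2_init() is an assumption):

    h2c->priority_limit = ngx_max(h2scf->concurrent_streams, 100);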
In FreeBSD 13, eventfd(2) was added, and this breaks build
with --test-build-epoll and without --with-file-aio. Fix is
to move eventfd(2) detection to auto/os/linux, as it is used
only on Linux as a notification mechanism for epoll().
In current versions (all versions based on zlib 1.2.11, at least
since 2018) it no longer uses a 64K hash and no longer forces window
bits to 13 if they are less than 13. That is, it needs just 16 bytes
more memory than normal zlib, so these bytes are simply added to
the normal size calculation.
In limit_req, auth_delay, and upstream code to check for broken
connections, tests for possible connection close by the client
did not work if the connection was already closed when the relevant
event handler was set. This happened because there were no additional
events in case of edge-triggered event methods, and read events
were disabled in case of level-triggered ones.
Fix is to explicitly post a read event if the c->read->ready flag
is set.
For new data to be reported with eventport on Solaris,
ngx_handle_read_event() needs to be called after reading response
headers. To do so, ngx_http_upstream_process_non_buffered_upstream() is
now called unconditionally if there is no preread data. This won't cause
any read() syscalls as long as the upstream connection
is not ready for reading (c->read->ready is not set), but will result
in proper handling of all events.
If we need to be notified about further events, ngx_handle_read_event()
needs to be called after a read event is processed. Without this,
an event can be removed from the kernel and won't be reported again,
notably when using oneshot event methods, such as eventport on Solaris.
While here, error handling is also added, similar to the one present in
ngx_resolver_tcp_read(). This is not expected to make a difference and is
mostly added for consistency.
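A rough sketch of the resulting read handler structure (simplified from
the resolver's UDP read handler; r is the ngx_resolver_t and error
handling is condensed):

    do {
        n = ngx_udp_recv(c, buf, NGX_RESOLVER_UDP_SIZE);

        if (n == NGX_AGAIN) {
            break;
        }

        if (n == NGX_ERROR) {
            goto failed;
        }

        ngx_resolver_process_response(r, buf, n, 0);

    } while (rev->ready);

    /* re-arm the event so oneshot methods (eventport) report it again */
    if (ngx_handle_read_event(rev, 0) != NGX_OK) {
        goto failed;
    }

    return;

failed:

    ngx_close_connection(c);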
If an attempt is made to delete an event which was already reported,
port_dissociate() returns an error. Fix is to avoid doing anything if
ev->active is not set.
A possible alternative approach would be to avoid calling ngx_del_event()
at all if ev->active is not set. This approach, however, would require
something else to re-add the other event of the connection, since both
read and write events are dissociated if an event is reported on a file
descriptor. Currently ngx_eventport_del_event() re-associates write
event if called to delete read event, and vice versa.
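Sketched as a guard in ngx_eventport_del_event() (a simplified fragment,
not the exact code):

    if (!ev->active) {
        /* the event was already reported: the port has dissociated the
         * descriptor, so port_dissociate() would fail; do nothing */
        return NGX_OK;
    }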
If, at the start of an event loop iteration, there are any timers
in the past (including timers expiring now), the ngx_process_events()
function is called with zero timeout, and returns immediately even
if there are no events. But the following code only calls
ngx_event_expire_timers() if time actually changed, so this results
in nginx spinning in the event loop until the current time changes.
While such timers are not expected to appear under normal conditions,
as all such timers should be removed on previous event loop iterations,
they still can appear due to bugs, zero timeouts set in the configuration
(if this is not explicitly handled by the code), or due to external
time changes on systems without clock_gettime(CLOCK_MONOTONIC).
Fix is to call ngx_event_expire_timers() unconditionally. Calling
it on each event loop iteration is not expected to be significant from
a performance point of view, especially compared to a syscall in
ngx_process_events().
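In ngx_process_events_and_timers() this boils down to dropping the delta
check around the timer expiration call (a sketch of the relevant part;
intermediate processing of posted events is omitted):

    delta = ngx_current_msec;

    (void) ngx_process_events(cycle, timer, flags);

    delta = ngx_current_msec - delta;

    /* previously: if (delta) { ngx_event_expire_timers(); }
     * expiring timers unconditionally ensures a timer already in the
     * past cannot keep the event loop spinning */
    ngx_event_expire_timers();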
Without explicit handling, a zero timer was actually added, leading to
multiple unneeded syscalls. Further, sending a GOAWAY frame early might
be beneficial for clients.
Reported by Sergey Kandaurov.