A QUIC handshake failure breaks down into several cases:
- a handshake error which leads to a send_alert call
- an error triggered by the add_handshake_data callback
- internal errors (allocation failures, etc.)
Previously, in the first case, only the error code was set in the send_alert
callback. Now the "handshake failed" reason phrase is set there as well.
In the second case, both code and reason are set by add_handshake_data.
In the last case, setting the reason phrase is removed: returning NGX_ERROR
now leads to closing the connection with just INTERNAL_ERROR.
Reported by Jiuzhou Cui.
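For the first case, a minimal sketch of what the send_alert callback now does
(illustrative only; ngx_quic_get_connection() and the qc->error /
qc->error_reason field names are assumptions about the internal naming):

    static int
    sketch_quic_send_alert(SSL *ssl_conn, enum ssl_encryption_level_t level,
        uint8_t alert)
    {
        ngx_connection_t       *c;
        ngx_quic_connection_t  *qc;

        c = ngx_ssl_get_connection(ssl_conn);
        qc = ngx_quic_get_connection(c);

        /* QUIC CRYPTO_ERROR space: 0x0100 + TLS alert value (RFC 9001, 4.8) */
        qc->error = 0x100 + alert;
        qc->error_reason = "handshake failed";

        return 1;
    }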
Previously, since 3550b00d9dc8, the token was allocated on the stack, to get
rid of pool usage. Now the token is allocated by ngx_quic_copy_buffer()
in QUIC buffers, also used for STREAM, CRYPTO and ACK frames.
In the nginx source code, inttypes.h, if available, is included to define
standard integer types. The SO_COOKIE configure test is changed to follow suit.
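Roughly what the feature test compiles is sketched below (illustrative; the
real test is assembled by the configure scripts, which substitute inttypes.h
or a fallback as detected):

    #include <sys/socket.h>
    #include <inttypes.h>      /* preferred over stdint.h when available */

    int main(void) {
        uint64_t   cookie = 0;
        socklen_t  len = sizeof(cookie);

        /* SO_COOKIE retrieves the kernel-assigned socket cookie on Linux */
        return getsockopt(0, SOL_SOCKET, SO_COOKIE, &cookie, &len);
    }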
Specifically, now it is kept unset until streams are initialized.
Notably, this unbreaks OCSP with client certificates after 35e27117b593.
Previously, the read event could be posted prematurely via ngx_quic_set_event(),
e.g. as part of handling a STREAM frame.
Previously, streams were initialized in the early keys handler. However, client
transport parameters may not be available by then. This happens, for example,
when using QuicTLS. Now streams are initialized in ngx_quic_crypto_input()
after calling SSL_do_handshake(), for both 0-RTT and 1-RTT.
Previously, there was no timeout for a request stream blocked on insert count,
which could result in an infinite wait. Now client_header_timeout is set when
the stream is first blocked.
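A sketch of the idea (ngx_add_timer() and client_header_timeout are real nginx
identifiers, but the helper and its exact placement are illustrative):

    static void
    sketch_block_on_insert_count(ngx_connection_t *c)
    {
        ngx_http_request_t        *r = c->data;
        ngx_http_core_loc_conf_t  *clcf;

        clcf = ngx_http_get_module_loc_conf(r, ngx_http_core_module);

        if (!c->read->timer_set) {
            /* first time the stream is blocked: bound the wait */
            ngx_add_timer(c->read, clcf->client_header_timeout);
        }
    }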
Now, when RESET_STREAM is sent or received, or when streams are closed, the
stream connection error flag is set. Previously, only the stream state was
changed, which resulted in setting the error flag only after calling
recv()/send()/send_chain(). However, there are cases when none of these
functions is called, but it's still important to know whether the stream is
being closed. For example, when an HTTP/3 request stream is blocked on insert
count, receiving RESET_STREAM should trigger stream closure, which was not the
case.
The change also fixes ngx_http_upstream_check_broken_connection() and
ngx_http_test_reading() with QUIC streams.
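An illustrative sketch, not the actual changeset: the point is to flag the
stream's fake connection as soon as the reset or close is known, rather than
waiting for an I/O call:

    static void
    sketch_quic_stream_closed(ngx_connection_t *sc)
    {
        /* with the error flag set, ngx_http_test_reading() and
         * ngx_http_upstream_check_broken_connection() notice the closure
         * even when recv(), send() and send_chain() are never called */
        sc->error = 1;
    }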
Previously, stream events were added and deleted by ngx_handle_read_event() and
ngx_handle_write_event() in a way similar to level-triggered events. However,
QUIC stream events are effectively edge-triggered and can stay active all the
time. Moreover, the events are now active from the moment a stream is created.
Previously, start_time wasn't set for a new stream.
The fix is to derive it from the parent connection.
It's also used to simplify keepalive_time tracking.
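A sketch of the fix (placement illustrative; ngx_connection_t really does
carry a start_time field):

    static void
    sketch_init_stream_timing(ngx_connection_t *sc, ngx_connection_t *c)
    {
        /* c is the parent QUIC connection, sc the new stream connection;
         * the inherited value also feeds keepalive_time tracking */
        sc->start_time = c->start_time;
    }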
As per RFC 9204, section 3.2.2, a new entry can reference an entry in the
dynamic table that will be evicted when adding this new entry into the dynamic
table.
Previously, such inserts resulted in a use-after-free, since the old entry was
evicted before the insertion (ticket #2431). Now it's evicted after the
insertion.
This change fixes Insert with Name Reference and Duplicate encoder instructions.
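A self-contained model of the ordering (not the nginx QPACK code; the flat
array and entry_t below stand in for the real dynamic table):

    #include <stdlib.h>
    #include <string.h>

    typedef struct {
        char  *name;
        char  *value;
    } entry_t;

    /* Insert a new entry whose name references an existing entry that may be
     * about to be evicted, then evict the oldest entry.  Building the new
     * entry first is what prevents the use-after-free. */
    static int
    insert_with_name_ref(entry_t *table, size_t *n, size_t cap,
        const entry_t *ref, const char *value)
    {
        entry_t  new_entry;

        /* 1. copy the referenced name while the referenced entry is alive */
        new_entry.name = strdup(ref->name);
        new_entry.value = strdup(value);
        if (new_entry.name == NULL || new_entry.value == NULL) {
            free(new_entry.name);
            free(new_entry.value);
            return -1;
        }

        /* 2. only now evict the oldest entry if the table is full */
        if (*n == cap) {
            free(table[0].name);
            free(table[0].value);
            memmove(table, table + 1, (cap - 1) * sizeof(entry_t));
            (*n)--;
        }

        table[(*n)++] = new_entry;
        return 0;
    }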
Port differences must be respected when checking addresses for duplicates,
otherwise configurations like this are broken:
listen 127.0.0.1:6000-6005;
It was broken by 4cc2bfeff46c (nginx 1.23.3).
Fixed edge cases in event flags handling in ngx_wsarecv() and
ngx_wsarecv_chain(): notably, rev->ready is now always reset on errors (which
wasn't the case after ngx_socket_nread() errors) and after EOF (rev->ready was
not cleared if, due to a misconfiguration, a zero-sized buffer was used for
reading).
With this change, behaviour of ngx_ssl_recv() now matches ngx_unix_recv(),
which used to always reset c->read->ready to 0 when returning errors.
This fixes an infinite loop in unbuffered SSL proxying if writing to the
client is blocked and an SSL error happens (ticket #2418).
With this change, the fix for a similar issue in the stream module
(6868:ee3645078759), which used a different approach of explicitly
testing c->read->error instead, is no longer needed and was reverted.
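The pattern now shared by these functions, sketched with plain recv() (this is
not the actual diff; NGX_AGAIN/NGX_ERROR and the event flags follow the usual
nginx conventions):

    static ssize_t
    sketch_recv(ngx_connection_t *c, u_char *buf, size_t size)
    {
        ssize_t  n;

        n = recv(c->fd, buf, size, 0);

        if (n == 0) {
            c->read->ready = 0;        /* EOF: nothing more to read */
            c->read->eof = 1;
            return 0;
        }

        if (n == -1) {
            c->read->ready = 0;        /* always reset on errors */

            if (ngx_errno == NGX_EAGAIN) {
                return NGX_AGAIN;
            }

            c->read->error = 1;
            return NGX_ERROR;
        }

        return n;
    }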
The casts are believed to be unneeded, since memcmp() has had "const void *"
arguments since the introduction of the "void" type in C89. And on pre-C89
platforms nginx is unlikely to compile without warnings anyway, as there
are no casts in memcpy() and memmove() calls.
These casts were added in 1648:89a47f19b9ec without any details on why they
were added, and Igor does not remember details either. The most plausible
explanation is that they were copied from ngx_strcmp() and were not really
needed even at that time.
Prodded by Alejandro Colomar.
Binary upgrades are not supported without a master process. It is, however,
possible that nginx running with a master process is asked to upgrade its
binary while the configuration file as available on disk at that time
includes "master_process off;".
If this happens, listening sockets inherited from the previous binary
will have ls[i].previous set. But the old cycle on initial process
startup, including startup after binary upgrade, is destroyed by
ngx_init_cycle() once configuration parsing is complete. As a result,
an attempt to dereference ls[i].previous in ngx_event_process_init()
accesses already freed memory.
The fix is to avoid looking into ls[i].previous if the old cycle has already
been freed.
With this change it is also no longer needed to clear ls[i].previous in
worker processes, so the relevant code was removed.
Cloning of listening sockets for each worker process does not make sense
when working without a master process, and causes some of the connections
not to be accepted when worker_processes is set to more than one and there
are listening sockets configured with the reuseport flag. The fix is to
disable cloning when the master process is disabled.
Due to the glibc bug[1], getaddrinfo("localhost") with AI_ADDRCONFIG
on a typical host with glibc and without IPv6 returns two 127.0.0.1
addresses, and therefore "listen localhost:80;" used to result in
"duplicate ... address and port pair" after 4f9b72a229c1.
The fix is to explicitly filter out duplicate addresses returned during name
resolution.
[1] https://sourceware.org/bugzilla/show_bug.cgi?id=14969
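A self-contained illustration of such filtering (not the nginx resolver code):

    #include <string.h>
    #include <netdb.h>
    #include <sys/socket.h>

    /* returns 1 if cur repeats an address seen earlier in the list */
    static int
    is_duplicate(const struct addrinfo *res, const struct addrinfo *cur)
    {
        const struct addrinfo  *p;

        for (p = res; p != cur; p = p->ai_next) {
            if (p->ai_addrlen == cur->ai_addrlen
                && memcmp(p->ai_addr, cur->ai_addr, cur->ai_addrlen) == 0)
            {
                return 1;
            }
        }

        return 0;
    }

The caller simply skips entries for which is_duplicate() returns 1 while
walking the getaddrinfo() result list.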
Previously, if an event was posted by a read event handler, called by
ngx_close_idle_connections(), that event was not processed until the next
event loop iteration, which could happen after a timeout.
As the SSI parser always uses the context from the main request for storing
variables and blocks, that context should always exist for subrequests using
SSI, even though the main request does not necessarily have SSI enabled.
However, ngx_http_get_module_ctx(r->main, ...) returns NULL in such cases,
resulting in the worker process crashing with SIGSEGV when the context's
attributes are accessed.
This patch links the first initialized context to the main request and
upgrades it only when the main context is initialized.
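A sketch of the approach (the types and module name are real, the helper
itself is illustrative): make sure the main request owns an SSI context before
any subrequest relies on it:

    static ngx_int_t
    sketch_ensure_main_ssi_ctx(ngx_http_request_t *r)
    {
        ngx_http_ssi_ctx_t  *ctx;

        ctx = ngx_http_get_module_ctx(r->main, ngx_http_ssi_filter_module);

        if (ctx == NULL) {
            ctx = ngx_pcalloc(r->main->pool, sizeof(ngx_http_ssi_ctx_t));
            if (ctx == NULL) {
                return NGX_ERROR;
            }

            ngx_http_set_ctx(r->main, ctx, ngx_http_ssi_filter_module);
        }

        return NGX_OK;
    }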
The check is not expected to fail unless there is a bug in the calling
code. But given the check is here, it should log an alert if it fails
instead of silently closing the connection.
Maximum size for reading the PROXY protocol header is increased to 4096 to
accommodate a bigger number of TLVs, which are supported since cca4c8a715de.
Maximum size for writing the PROXY protocol header is not changed since only
version 1 is currently supported.
Previously, the keepalive timer was deleted in ngx_http_v3_wait_request_handler()
and set in the request cleanup handler. This worked for HTTP/3 connections, but
not for hq connections. Now the keepalive timer is deleted in
ngx_http_v3_init_request_stream() and set in the connection cleanup handler,
which works for both HTTP/3 and hq.
It's called after handshake completion or prior to the first early data stream
creation. The callback should initialize application-level data before
creating streams.
The HTTP/3 callback implementation sets the keepalive timer and sends SETTINGS.
Also, this allows limiting the maximum handshake time in ngx_http_v3_init_stream().
Most atoms should not appear more than once in a container. Previously,
this was not enforced by the module, which could result in a worker process
crash, memory corruption, or disclosure.
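A self-contained illustration of the hardening (not the mp4 module code):
remember which atom types were already seen in the current container and
reject repeats:

    #include <stddef.h>
    #include <stdint.h>

    #define MAX_TRACKED_ATOMS  32

    typedef struct {
        uint32_t  seen[MAX_TRACKED_ATOMS];
        size_t    n;
    } atom_tracker_t;

    /* returns 0 if the atom type is new, -1 if it is a duplicate */
    static int
    track_atom(atom_tracker_t *t, uint32_t type)
    {
        size_t  i;

        for (i = 0; i < t->n; i++) {
            if (t->seen[i] == type) {
                return -1;             /* duplicate atom: reject the file */
            }
        }

        if (t->n < MAX_TRACKED_ATOMS) {
            t->seen[t->n++] = type;
        }

        return 0;
    }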
Now it properly detects invalid shared zone configuration with omitted size.
Previously it used to read outside of the buffer boundary.
Found with AddressSanitizer.
OpenSSL with TLSv1.3 updates the session creation time on session
resumption and keeps the session timeout unmodified, making it possible
to maintain the session forever, bypassing client certificate expiration
and revocation. To make sure session timeouts are actually used, we
now update the session creation time and reduce the session timeout
accordingly.
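A minimal sketch of the clamp, using the public OpenSSL session API (the saved
original creation time, orig_time, is an assumption about where that value
would come from):

    #include <time.h>
    #include <openssl/ssl.h>

    static void
    clamp_session_lifetime(SSL_SESSION *sess, time_t orig_time,
        long configured_timeout)
    {
        time_t  now = time(NULL);
        long    elapsed = (long) (now - orig_time);

        /* OpenSSL may have reset the creation time on resumption; move it
         * to "now" and shrink the timeout by the time already spent, so the
         * absolute expiry stays where it was originally configured */
        SSL_SESSION_set_time(sess, now);

        if (configured_timeout > elapsed) {
            SSL_SESSION_set_timeout(sess, configured_timeout - elapsed);
        } else {
            SSL_SESSION_set_timeout(sess, 0);
        }
    }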
BoringSSL with TLSv1.3 ignores configured session timeouts and uses a
hardcoded 7-day timeout instead. So we update the session timeout to
the configured value as soon as a session is created.
Instead of syncing keys with shared memory on each ticket operation,
the code now does this only when the worker is about to change the expiration
of the current key or to switch to a new key: that is, usually at most once
per second.
To do so without races, the code maintains 3 keys: current, previous,
and next. If a worker switches to the next key earlier, other workers
will still be able to decrypt new tickets, since they will be encrypted
with the next key.
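A simplified, self-contained model of the three-slot scheme (not the nginx
shared-memory code): tickets are always encrypted with the current key, but
the previous and next keys remain usable for decryption, so a worker that
rotates slightly early does not produce tickets other workers cannot read:

    #include <string.h>
    #include <time.h>

    typedef struct {
        unsigned char  name[16];       /* key name stored in the ticket    */
        unsigned char  aes_key[32];    /* encryption key                   */
        unsigned char  hmac_key[32];   /* integrity key                    */
        time_t         expire;         /* when this key stops being used   */
    } ticket_key_t;

    typedef struct {
        ticket_key_t   previous;
        ticket_key_t   current;
        ticket_key_t   next;
    } ticket_keys_t;

    /* encryption always uses the current key */
    static const ticket_key_t *
    encrypt_key(const ticket_keys_t *keys)
    {
        return &keys->current;
    }

    /* decryption accepts any of the three slots whose name matches */
    static const ticket_key_t *
    decrypt_key(const ticket_keys_t *keys, const unsigned char name[16])
    {
        const ticket_key_t  *slots[3];
        int                  i;

        slots[0] = &keys->current;
        slots[1] = &keys->previous;
        slots[2] = &keys->next;

        for (i = 0; i < 3; i++) {
            if (memcmp(slots[i]->name, name, 16) == 0) {
                return slots[i];
            }
        }

        return NULL;                   /* unknown key: full handshake */
    }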