Previously ngx_quic_frame_sendto() ignored congestion control and did not
contribute to the in_flight counter.
Now the congestion control window is checked unless the ignore_congestion
flag is set. Also, the in_flight counter is incremented and the frame is
stored in the ctx->sent queue if it is ack-eliciting. This behavior is now
similar to that of ngx_quic_output_packet().
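A simplified sketch of the resulting logic (field names such as cg->window,
cg->in_flight, frame->need_ack and frame->queue follow nginx QUIC naming,
but this is only an illustration, not the actual patch):

    /* inside ngx_quic_frame_sendto(), before building the packet */

    cg = &qc->congestion;

    if (!ignore_congestion && cg->in_flight >= cg->window) {
        return NGX_AGAIN;                     /* no congestion window left */
    }

    /* ... the packet is built and sent here ... */

    if (frame->need_ack) {
        cg->in_flight += sent;                          /* bytes now in flight */
        ngx_queue_insert_tail(&ctx->sent, &frame->queue);  /* for ACK/loss handling */
    }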
According to RFC 9000, an endpoint SHOULD NOT send multiple PATH_CHALLENGE
frames in a single packet. The change adds a check to enforce this
recommendation and optimize server behavior. Previously, each PATH_CHALLENGE
always resulted in a single response datagram being sent to the client. The
effect of this was, however, limited by QUIC flood protection.
Also, PATH_CHALLENGE is explicitly disabled at the Initial and Handshake
levels, see RFC 9000, Table 3. However, technically it may be sent by a
client in 0-RTT over a new path without an actual migration, even though
migration itself is prohibited during the handshake. This allows a client to
coalesce multiple 0-RTT packets, each carrying a PATH_CHALLENGE, and end up
with multiple PATH_CHALLENGE frames per datagram. This again leads to
suboptimal behavior, see above. Since the purpose of sending PATH_CHALLENGE
frames in 0-RTT is unclear, these frames are now only allowed in 1-RTT. In
0-RTT they are silently ignored.
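A minimal sketch of the resulting 0-RTT handling (the handler and the
pkt->level field are assumptions based on nginx QUIC naming):

    /* in the PATH_CHALLENGE frame handler */

    if (pkt->level == ssl_encryption_early_data) {
        return NGX_OK;        /* 0-RTT: silently ignored, answered only in 1-RTT */
    }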
Previously, when using ngx_quic_frame_sendto() to explicitly send a packet with
a single frame, the anti-amplification limit was not properly enforced. Even when
there was no quota left for the packet, it was sent anyway, but with no padding.
Now the packet is not sent at all.
This function is called to send PATH_CHALLENGE/PATH_RESPONSE, PMTUD and probe
packets. In all these cases the send is retried later if it was not
successful.
By default, packets with these frames are expanded to 1200 bytes. Previously,
if the anti-amplification limit did not allow this expansion, the packet was
expanded to whatever size was allowed. However, RFC 9000 clearly states that
no partial expansion should happen in either case.
Section 8.2.1. Initiating Path Validation:
An endpoint MUST expand datagrams that contain a PATH_CHALLENGE frame
to at least the smallest allowed maximum datagram size of 1200 bytes,
unless the anti-amplification limit for the path does not permit
sending a datagram of this size.
Section 8.2.2. Path Validation Responses:
An endpoint MUST expand datagrams that contain a PATH_RESPONSE frame
to at least the smallest allowed maximum datagram size of 1200 bytes.
...
However, an endpoint MUST NOT expand the datagram containing the
PATH_RESPONSE if the resulting data exceeds the anti-amplification limit.
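A rough sketch of the resulting decision, assuming a hypothetical "limit"
variable holding the remaining anti-amplification quota for the path:

    /* choosing the expansion for a PATH_CHALLENGE/PATH_RESPONSE datagram */

    if (limit >= 1200) {
        min = 1200;           /* expand to the smallest allowed maximum size */

    } else {
        min = 0;              /* quota too small: no partial expansion at all */
    }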
If the URI is not fully parsed yet, some pointers are not set. As a result,
the calculation of the "new + (ptr - old)" expression is flawed.
According to C11, 6.5.6 Additive operators, p.9:
: When two pointers are subtracted, both shall point to elements
: of the same array object, or one past the last element of the
: array object
Since "ptr" is not set, subtraction leads to undefined behaviour, because
"ptr" and "old" are not in the same buffer (i.e. array objects).
Prodded by GCC undefined behaviour sanitizer.
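A standalone illustration of the pattern (names are made up for the example;
this is not the actual nginx code):

    #include <string.h>

    typedef struct {
        char  *uri_start;
        char  *port_start;      /* still NULL if the URI is not fully parsed */
    } request_t;

    static void
    rebase_pointers(request_t *r, char *old, char *new)
    {
        /* fine: uri_start points into the old buffer */
        r->uri_start = new + (r->uri_start - old);

        /* Wrong: if port_start was never set, it does not point into the old
         * buffer, so (r->port_start - old) is undefined behaviour per C11
         * 6.5.6 p.9.  Right: rebase only pointers that were actually set. */
        if (r->port_start) {
            r->port_start = new + (r->port_start - old);
        }
    }

    int
    main(void)
    {
        char       old_buf[16] = "GET /", new_buf[32];
        request_t  r = { old_buf + 4, NULL };

        memcpy(new_buf, old_buf, sizeof(old_buf));
        rebase_pointers(&r, old_buf, new_buf);

        return 0;
    }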
Neither r->port_start nor r->port_end was ever used.
The r->port_end is set by the parser, but was never used by the following
code (and was never usable, since it is not copied by
ngx_http_alloc_large_header_buffer() without r->port_start set).
Currently, packets generated by ngx_quic_frame_sendto() and
ngx_quic_send_early_cc() are not logged, thus making it hard
to read logs due to gaps appearing in the packet number sequence.
At the frame level, it is handy to immediately see the number of the packet
in which a frame arrived or is being sent.
As part of normal HTTP/2 processing, incomplete frames are saved in the
control state using a fixed size memcpy of NGX_HTTP_V2_STATE_BUFFER_SIZE.
For this purpose, two state buffers are reserved in the HTTP/2 recv buffer.
As part of HTTP/2 auto-detection on plain TCP connections, initial data is
first read into a buffer specified by the client_header_buffer_size
directive, which has no such state reservation. Previously, this made it
possible to over-read the buffer while saving the state.
The fix is to copy only the available buffer size rather than a fixed size.
Although a fixed-size memcpy can produce better optimized code, handling of
incomplete frames isn't a common execution path, so it was sacrificed for
the sake of simplicity of the fix.
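A sketch of the clamped copy (a fragment from inside
ngx_http_v2_state_save(); names follow nginx HTTP/2 conventions, the exact
fix may differ):

    size = end - pos;

    if (size > NGX_HTTP_V2_STATE_BUFFER_SIZE) {
        return ngx_http_v2_connection_error(h2c, NGX_HTTP_V2_INTERNAL_ERROR);
    }

    /* previously a fixed-size ngx_memcpy() of NGX_HTTP_V2_STATE_BUFFER_SIZE
     * bytes, which could read past the end of a buffer that has no state
     * reservation; now only the bytes actually available are copied */
    ngx_memcpy(h2c->state.buffer, pos, size);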
It is made local, as it is now only needed when creating the crypto context.
BoringSSL lacks an EVP interface for ChaCha20, providing instead a function
for one-shot encryption, thus hp is still preserved.
Based on a patch by Roman Arutyunyan.
Previously it was possible to generate ACK frames using formally discarded
protection keys, in particular, when acknowledging a client Handshake packet
used to complete the TLS handshake and to discard handshake protection keys.
As this happens late in packet processing, it was possible to generate ACK
frames after the keys were already discarded.
ACK frames are generated from ngx_quic_ack_packet(), either using a posted
push event, which involves ngx_quic_generate_ack() as part of final packet
assembly, or directly in ngx_quic_ack_packet(), such as when there is no
room to add a new ACK range or when the received packet is out of order.
The added keys availability check is used to avoid generating late ACK frames
in both cases.
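A sketch of the added check (ngx_quic_keys_available() and its direction
argument follow nginx QUIC naming, but the exact signature is an assumption):

    /* early in ngx_quic_ack_packet() */

    if (!ngx_quic_keys_available(qc->keys, pkt->level, 1)) {
        return NGX_OK;        /* keys already discarded: no late ACK frames */
    }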
In addition to triggering an alert, this ensures that such packets won't be
sent. With the previous change that marks server keys as discarded by zeroing
the key length, it is now an error to send packets with discarded keys.
OpenSSL-based stacks tolerate such behaviour because the key length isn't
used in packet protection, but BoringSSL will raise the UNSUPPORTED_KEY_SIZE
cipher error. It won't be possible to use discarded keys with the reused
crypto contexts introduced in subsequent changes.
Keys may be released by the TLS stack at different times, so it makes sense
to check them independently as well. This allows fine-tuning which key
direction is used when checking keys availability.
When discarding, server keys are now marked as discarded in addition to
client keys.
This improves nginx startup times significantly when using a very large
number of locations, due to the computational complexity of the sorting
algorithm used: insertion sort is O(n*n) on average, while merge sort is
O(n*log(n)).
In particular, in a test configuration with 20k locations, total startup
time is reduced from 8 seconds to 0.9 seconds.
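For scale: with n = 20000 locations, insertion sort performs on the order of
n^2 = 4*10^8 comparisons, while merge sort needs on the order of
n*log2(n) = 20000 * 14.3, i.e. about 2.9*10^5, roughly three orders of
magnitude fewer.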
Prodded by Yusuke Nojima,
https://mailman.nginx.org/pipermail/nginx-devel/2023-September/NUL3Y2FPPFSHMPTFTL65KXSXNTX3NQMK.html
In ngx_regex_cleanup(), the allocator wasn't configured when calling
pcre2_compile_context_free() and pcre2_match_data_free(), resulting in no
ngx_free() call and leaked memory. The fix is to ensure that the allocator
is configured for global allocations, so that ngx_free() is actually called
to free the memory.
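A sketch of the resulting cleanup (ngx_regex_malloc_init() and
ngx_regex_malloc_done() are the helpers ngx_regex.c already uses around
compilation; the exact fix may differ):

    static void
    ngx_regex_cleanup(void *data)
    {
        /* configure the allocator for global allocations, so the pcre2
         * free calls below actually end up in ngx_free() */
        ngx_regex_malloc_init(NULL);

        if (ngx_regex_compile_context) {
            pcre2_compile_context_free(ngx_regex_compile_context);
            ngx_regex_compile_context = NULL;
        }

        if (ngx_regex_match_data) {
            pcre2_match_data_free(ngx_regex_match_data);
            ngx_regex_match_data = NULL;
        }

        ngx_regex_malloc_done();
    }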
Additionally, ngx_regex_compile_context was cleared in
ngx_regex_module_init(). It should either not be cleared, so it can be freed
by ngx_regex_cleanup(), or be properly freed. The fix is to not clear it, so
that ngx_regex_cleanup() is able to free it.
Reported by ZhenZhong Wu,
https://mailman.nginx.org/pipermail/nginx-devel/2023-September/3Z5FIKUDRN2WBSL3JWTZJ7SXDA6YIWPB.html
To ensure that attempts to flood servers with many streams are detected
early, a limit of no more than 2 * max_concurrent_streams new streams per
event loop iteration was introduced. This limit is applied even if
max_concurrent_streams is not yet reached - for example, if the
corresponding streams are handled synchronously or reset.
Further, refused streams are now limited to the maximum of
max_concurrent_streams and 100, similarly to the priority_limit initial
value, providing some tolerance for clients trying to open several streams
at connection start, yet low tolerance for flooding attempts.
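A rough sketch of the checks (field names such as h2c->new_streams and
h2c->refused_streams are assumptions following nginx HTTP/2 conventions):

    /* when a HEADERS frame opens a new stream */

    if (h2c->new_streams++ >= 2 * h2scf->concurrent_streams) {
        ngx_log_error(NGX_LOG_INFO, h2c->connection->log, 0,
                      "client is flooding the connection with new streams");
        return ngx_http_v2_connection_error(h2c, NGX_HTTP_V2_ENHANCE_YOUR_CALM);
    }

    /* where a stream is refused */

    if (h2c->refused_streams++ > ngx_max(h2scf->concurrent_streams, 100)) {
        return ngx_http_v2_connection_error(h2c, NGX_HTTP_V2_ENHANCE_YOUR_CALM);
    }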
The error may be triggered in add_handshake_data() by an incorrect transport
parameter sent by the client. The expected behaviour in this case is to
close the connection, complaining about the incorrect parameter. Currently
the connection just times out.
Previously, the timer was never reset due to an explicit check. The check was
added in 36b59521a41c as part of connection close simplification. The reason
was to retain the earliest timeout. However, the timeouts are all the same
while the QUIC handshake is in progress, and resetting the timer to the same
value has no performance implications. After handshake completion, only the
application level remains. The change removes the check.
Now the session object is assigned to c->data while the ngx_http_connection_t
object is referenced by its http_connection field, similarly to
ngx_http_v2_connection_t and ngx_http_request_t.
The change allows eliminating the v3_session field from ngx_http_connection_t.
The field was under the NGX_HTTP_V3 macro, which was a source of binary
compatibility problems when nginx or a module is built with or without
HTTP/3 support.
Postponing is essential since c->data should retain the reference to the
ngx_http_connection_t object throughout the QUIC handshake, because the SSL
callbacks ngx_http_ssl_servername() and ngx_http_ssl_alpn_select() rely on
it. Instead, when a worker is shutting down and the handshake is not yet
completed, the connection is terminated immediately.
Previously the callback could be called while the QUIC handshake was in
progress and, more importantly, before the init() callback. Now it is
postponed until after init().
This change is a preparation for postponing HTTP/3 session creation to
init().
Previously QUIC did not have such a parameter, and handshake duration was
controlled by HTTP/3. However, that required creating and storing the HTTP/3
session on the first client datagram. Apparently there is no convenient way
to store the session object until the QUIC handshake is complete. In
follow-up patches, session creation will be postponed to the init()
callback.
As explained in the BoringSSL change[1], levels were introduced in the
original QUIC API to draw a line between when keys are released and when
they are active. In the new QUIC API they are released in separate calls as
needed. BoringSSL is therefore considering removing the levels API, hence
this change.
If not available, e.g. from a QUIC packet header, levels can be derived from
keys availability. The only real use of levels is to prevent using
application keys before they are active in QuicTLS, which provides the old
BoringSSL QUIC API; this is replaced with an equivalent check of
c->ssl->handshaked.
This change also removes OpenSSL compat shims, since they are no longer
used. The only exception left is caching the write level from the keylog
callback in an internal field, which is a handy equivalent of checking keys
availability.
[1] https://boringssl.googlesource.com/boringssl/+/1e859054
As per RFC 9000, Section 10.2.3, to ensure that the peer successfully
removes packet protection, CONNECTION_CLOSE can be sent in multiple packets
using different packet protection levels.
Now it is sent in all available protection levels.
This roughly corresponds to the following paragraph:
* Prior to confirming the handshake, a peer might be unable to process 1-RTT
packets, so an endpoint SHOULD send a CONNECTION_CLOSE frame in both Handshake
and 1-RTT packets. A server SHOULD also send a CONNECTION_CLOSE frame in an
Initial packet.
In practice, this change makes it possible to avoid sending an Initial
packet when we know the client has handshake keys, by checking whether we
have discarded Initial keys.
Also, this fixes sending CONNECTION_CLOSE when using QuicTLS with the old
QUIC API, where the TLS stack releases application read keys before
handshake confirmation; this is fixed by additionally sending
CONNECTION_CLOSE in a Handshake packet.
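A rough sketch of the send loop (the level iteration and the keys check are
simplified assumptions):

    /* send CONNECTION_CLOSE in every protection level we still have keys for */

    for (i = 0; i < NGX_QUIC_ENCRYPTION_LAST; i++) {

        if (i == ssl_encryption_early_data) {
            continue;                     /* the server does not use 0-RTT here */
        }

        if (!ngx_quic_keys_available(qc->keys, i, 1)) {
            continue;                     /* e.g. Initial keys already discarded */
        }

        /* build and send a packet carrying CONNECTION_CLOSE at level i */
    }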
A Status header with an empty reason-phrase, such as "Status: 404 ", is
valid per the CGI specification, but loses the trailing space during
parsing. Currently, this results in an "HTTP/1.1 404" status line in the
response, which violates the HTTP specification due to the missing trailing
space.
With this change, only the status code is used from such short Status header
lines, so nginx will generate the status line itself, with the space and an
appropriate reason phrase if available.
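A simplified sketch of the parsing outcome (value/len and the exact length
check are illustrative, not the actual upstream code):

    /* "Status: 404 " - a valid CGI header with an empty reason phrase */

    u->headers_in.status_n = ngx_atoi(value, 3);    /* always take the code */

    if (len < sizeof("404 X") - 1) {
        /* no usable reason phrase: leave status_line empty so that nginx
         * generates the "HTTP/1.1 404 Not Found" line itself */
        ngx_str_null(&u->headers_in.status_line);

    } else {
        u->headers_in.status_line.len = len;
        u->headers_in.status_line.data = value;
    }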
Reported at:
https://mailman.nginx.org/pipermail/nginx/2023-August/EX7G4JUUHJWJE5UOAZMO5UD6OJILCYGX.html
Previously, a socket error on a path being validated resulted in a
validation error and subsequent QUIC connection closure. Now the error is
ignored and path validation proceeds as usual, with several retries and a
timeout.
When validating the old path after an apparent migration, that path may
already be unavailable and sendmsg() may return an error, which should not
result in QUIC connection closure.
When validating the new path, it is possible that the new client address is
spoofed (see RFC 9000, 9.3.2. On-Path Address Spoofing). This address may
likewise be unavailable and should not trigger QUIC connection closure.
Previously, the original dcid was used to receive initial client packets in
case the server's initial response was lost. However, the last dcid should
be used instead. The two are the same unless Retry is used. In case of
Retry, the client resends the Initial packet with a new dcid, which is
different from the original dcid. If the server response is lost, the client
resends this packet again with the same dcid. This is shown in RFC 9000,
7.3. Authenticating Connection IDs, Figure 8.
The issue manifested itself in multiple server sessions being created in
response to each post-Retry client Initial packet if the server response was
lost.
Since at least f9fbeb4ee0de, and certainly after 924882f42dea, which the
TLS Key Update support predates, queued data output is deferred to a posted
push handler. To address timing signals after these changes, generating the
next keys is now posted to run after the push handler.
Previously, NGX_AGAIN returned by ngx_quic_send() was treated by
ngx_quic_frame_sendto() as an error, which triggered errors in its callers.
However, a blocked socket is not an error. Now NGX_AGAIN is passed as-is to
the ngx_quic_frame_sendto() callers, which can safely ignore it.
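A sketch of how a caller can now treat the return value (the exact arguments
of ngx_quic_frame_sendto() are an assumption):

    rc = ngx_quic_frame_sendto(c, frame, 0, qc->path);

    if (rc == NGX_ERROR) {
        return NGX_ERROR;             /* a real failure */
    }

    /* NGX_OK or NGX_AGAIN: a blocked socket is not an error,
     * the send will simply be retried later */

    return NGX_OK;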