The support first appeared in OS X Mavericks 10.9 and has been
documented since OS X Yosemite 10.10.
It has a subtle implementation difference from other operating systems
in that the TCP_KEEPALIVE socket option (used in place of TCP_KEEPIDLE)
isn't inherited from a listening socket to an accepted socket.
An apparent reason for this behaviour is backward compatibility: the
TCP_KEEPALIVE socket option has not been inherited since its appearance
in OS X Panther 10.3, which long predates the other two socket options,
TCP_KEEPINTVL and TCP_KEEPCNT.
Thanks to Andy Pan for initial work.
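A minimal sketch of the resulting handling, assuming a configured
keepalive idle time (the function and variable names are illustrative):

    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <netinet/tcp.h>

    /* on macOS, TCP_KEEPALIVE (idle time in seconds) must be set on
       each accepted socket: it is not inherited from the listener */
    static int
    set_keepalive_idle(int fd, int idle)
    {
        return setsockopt(fd, IPPROTO_TCP, TCP_KEEPALIVE,
                          (const void *) &idle, sizeof(int));
    }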
Certain providers may attempt to reload the key on first use after a
fork. Such an attempt would require re-prompting for the PIN, and at
that point we are no longer able to pass the password callback.
While this is addressable with provider-specific configuration, it is
prudent to ensure that no such prompts can block worker processes by
setting the default UI method.
UI_null() first appeared in OpenSSL 1.1.1 along with OSSL_STORE, so it
is safe to assume the same set of guards.
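A minimal sketch of the idea, under the same guards as the OSSL_STORE
code (the exact version check shown is an assumption):

    #include <openssl/ui.h>

    static void
    set_default_ui(void)
    {
    #if (OPENSSL_VERSION_NUMBER >= 0x10101000L                            \
         && !defined LIBRESSL_VERSION_NUMBER)
        /* prevent providers from blocking worker processes
           with an interactive PIN prompt after fork */
        UI_set_default_method(UI_null());
    #endif
    }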
A new "store:..." prefix for the "ssl_certificate_key" directive allows
loading keys via the OSSL_STORE API.
The change is required to support hardware-backed keys in OpenSSL 3.x
using the new "provider(7ossl)" modules, such as "pkcs11-provider".
While the engine API is present in 3.x, some operating systems
(notably, RHEL 10) have already disabled it in their builds of OpenSSL.
Related: https://trac.nginx.org/nginx/ticket/2449
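An illustrative configuration using the new prefix; the PKCS#11 URI
below is hypothetical and depends on the provider setup:

    server {
        listen 443 ssl;

        ssl_certificate      /path/to/example.crt;
        ssl_certificate_key  "store:pkcs11:token=example;object=example-key";
    }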
Similarly to the QUIC API that originated in BoringSSL, this API allows
registering custom TLS callbacks for an external QUIC implementation.
See the SSL_set_quic_tls_cbs manual page for details.
Due to a different approach used in OpenSSL 3.5, handling of CRYPTO
frames was streamlined to always write an incoming CRYPTO buffer to
the crypto context. Using SSL_provide_quic_data(), this results in
transient allocation of chain links and buffers for CRYPTO frames
received in order. Testing didn't reveal performance degradation of
QUIC handshakes; https://github.com/nginx/nginx/pull/646 provides
specific results.
Using SSL_in_init() to inspect the handshake state was replaced with
SSL_is_init_finished(). This represents a more complete fix to the
BoringSSL issue addressed in 22671b37e.
This provides awareness of the early data handshake state when using
OpenSSL 3.5 TLS callbacks in 0-RTT enabled configurations, which, in
particular, is used to avoid premature completion of the initial TLS
handshake, before required client handshake messages are received.
This is a non-functional change when using BoringSSL. It supersedes
testing for non-positive SSL_do_handshake() results in all supported
SSL libraries, which is hence simplified.
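A simplified illustration of the difference (the SSL object variable is
illustrative):

    /* previously: "not in init" was taken to mean the handshake had
       completed, which misses the early data state */
    if (!SSL_in_init(ssl_conn)) {
        /* handshake considered complete */
    }

    /* now: explicitly require the initial handshake to be finished */
    if (SSL_is_init_finished(ssl_conn)) {
        /* handshake complete */
    }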
In preparation for using OpenSSL 3.5 TLS callbacks.
Encryption level values are decoupled from ssl_encryption_level_t,
which is now limited to BoringSSL QUIC callbacks, with mappings
provided. Although the values match, this provides a technically
safe approach, in particular when accessing protection-level-sized
arrays.
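An illustrative sketch of such a mapping; the internal level values are
hypothetical, the point being that array indexing no longer relies on
the BoringSSL enum directly:

    /* map BoringSSL ssl_encryption_level_t to internal levels used
       to index protection-level-sized arrays */
    static ngx_uint_t
    ngx_quic_map_level(enum ssl_encryption_level_t level)
    {
        switch (level) {
        case ssl_encryption_initial:     return 0;
        case ssl_encryption_early_data:  return 1;
        case ssl_encryption_handshake:   return 2;
        default:                         return 3;  /* application */
        }
    }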
In preparation for using OpenSSL 3.5 TLS callbacks.
It is now called from ngx_quic_handle_crypto_frame(), prior to proceeding
with the handshake. With this logic removed, the handshake function is
renamed to ngx_quic_handshake() to better match ngx_ssl_handshake().
All definitions are now set in ngx_event_quic.h; this includes moving
NGX_QUIC_OPENSSL_COMPAT from autotests to compile time. Further,
to improve code readability, a new NGX_QUIC_QUICTLS_API macro is
used for QuicTLS, which provides the old BoringSSL QUIC API.
Previously, they might be logged on every add_handshake_data
callback invocation when using the OpenSSL compat layer and processing
coalesced handshake messages.
Further, the ALPN error message is adjusted to signal the missing
extension. Possible reasons were previously narrowed down with
ebb6f7d65 changes in the ALPN callback that is invoked earlier in
the handshake.
Following the previous change that removed posting a close event
in the OpenSSL compat layer, ngx_quic_close_connection() is now always
called on the error path with either NGX_ERROR or qc->error set.
This allows removing the special value -1 that served as a missing
error marker, which simplifies the code. Partially reverts d3fb12d77.
Also, this improves handling of the draining connection state, which
consists of posting a close event with NGX_OK and no qc->error set,
where it was previously converted to NGX_QUIC_ERR_INTERNAL_ERROR.
Notably, this is rather a cosmetic fix, because drained connections
do not send any packets including CONNECTION_CLOSE, and qc->error
is not otherwise used.
Changed handshake callbacks to always return success. This avoids
logging SSL_do_handshake() errors with empty or cryptic "internal error"
OpenSSL error messages at the inappropriate "crit" log level.
Further, connections with failed callbacks are now closed right away
when using the OpenSSL compat layer. This change supersedes and reverts
c37fdcdd1, with the conditions checking callback invocation kept to
slightly improve control flow readability; they are optimized out in
the resulting assembly code.
Logging level for such errors, which should not normally happen,
is changed to NGX_LOG_ALERT, and ngx_log_error() is replaced with
ngx_ssl_error() for consistency with the rest of the code.
Previously, it was not possible to send acknowledgments if the
congestion window was limited or temporarily exceeded, such as
after sending a large response or MTU probe. If, for some reason,
ACKs were not received from the peer to bring the in-flight bytes
counter back below the congestion window, this might result in
a stalled connection.
The fix is to send ACKs regardless of congestion control. This
meets RFC 9002, Section 7:
: Similar to TCP, packets containing only ACK frames do not count
: toward bytes in flight and are not congestion controlled.
This is a simplified implementation to send ACK frames from the
head of the queue. This was made possible after 6f5f17358.
Reported in trac ticket #2621 and subsequently by Vladimir Homutov:
https://mailman.nginx.org/pipermail/nginx-devel/2025-April/ZKBAWRJVQXSZ2ISG3YJAF3EWMDRDHCMO.html
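As a simplified sketch of the exemption when draining the frame queue
(the control flow is condensed; field names follow ngx_quic_congestion_t):

    /* ACK-only frames do not count toward bytes in flight and are
       not congestion controlled (RFC 9002, Section 7) */
    if (f->type != NGX_QUIC_FT_ACK
        && cg->in_flight >= cg->window)
    {
        break;  /* congestion limited: non-ACK frames wait */
    }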
This extends the target selection implemented in dad6ec3aa6 to support
Windows ARM64 platforms. OpenSSL support for the VC-WIN64-ARM target
first appeared in 1.1.1 and is present in all currently supported (3.x)
branches.
As a side effect, ARM64 Windows builds will get 16-byte alignment along
with the rest of non-x86 platforms. This is safe, as malloc on 64-bit
Windows guarantees the fundamental alignment of allocations, 16 bytes.
Previously, the default pool alignment used sizeof(unsigned long), with
the expectation that this would match the platform word size. Certain
64-bit platforms prove this assumption wrong by keeping a 32-bit long
type, which is fully compliant with the C standard.
This introduces a possibility of suboptimal misaligned access to the
data allocated with ngx_palloc() on the affected platforms, which is
addressed here by changing the default NGX_ALIGNMENT to a pointer size.
As we override the detection in auto/os/conf for all the machine types
except x86, and Unix-like 64-bit systems prefer the 64-bit long, the
impact of the change should be limited to Win64 x64.
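A sketch of the new default; the guard mirrors the existing
NGX_ALIGNMENT definition:

    #ifndef NGX_ALIGNMENT
    #define NGX_ALIGNMENT   sizeof(void *)    /* platform pointer size */
    #endif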
After fixing ngx_http_v3_encode_varlen_int() in 400eb1b628,
NGX_HTTP_V3_VARLEN_INT_LEN retained the old value of 4, which is
insufficient for values over 1073741823 (1G - 1).
The NGX_HTTP_V3_VARLEN_INT_LEN macro is used in ngx_http_v3_uni.c to
format stream and frame types; the old buffer size is enough for
formatting this data. The macro is also used in
ngx_http_v3_filter_module.c to format output chunks and trailers.
Considering that output_buffers and proxy_buffer_size are below 1G in
all realistic scenarios, the old buffer size is enough here as well.
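For reference, the length classes of a QUIC/HTTP/3 variable-length
integer (RFC 9000, Section 16), sketched below, show why a 4-byte
buffer only covers values up to 1073741823:

    #include <stddef.h>
    #include <stdint.h>

    /* bytes needed to encode a variable-length integer */
    static size_t
    varlen_int_len(uint64_t value)
    {
        if (value < (1 << 6))  return 1;      /* up to 63 */
        if (value < (1 << 14)) return 2;      /* up to 16383 */
        if (value < (1 << 30)) return 4;      /* up to 1073741823 */
        return 8;                             /* up to 2^62 - 1 */
    }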
RFC 9002, Section 6.1.1 defines the packet reordering threshold as 3.
Testing shows that such a low value leads to spurious packet losses
followed by congestion window collapse. The change implements dynamic
packet threshold detection based on the in-flight packet range. The
packet threshold is defined as half the number of in-flight packets,
with a minimum value of 3.
Also, renamed ngx_quic_lost_threshold() to ngx_quic_time_threshold()
for better compliance with RFC 9002 terms.
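A simplified sketch of the resulting computation (the in-flight packet
count variable is illustrative):

    #define NGX_QUIC_PKT_THR  3    /* RFC 9002, Section 6.1.1 minimum */

    /* half the number of packets in flight, but never below 3 */
    threshold = ngx_max(inflight_packets / 2, NGX_QUIC_PKT_THR);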
Previously, the threshold was hardcoded at 10000. This value is too low for
high BDP networks. For example, if all frames are STREAM frames, and MTU
is 1500, the upper limit for congestion window would be roughly 15M
(10000 * 1500). With 100ms RTT it's just a 1.2Gbps network (15M * 10 * 8).
In reality, the limit is even lower because of other frame types. Also,
the number of frames that could be used simultaneously depends on the total
amount of data buffered in all server streams, and client flow control.
The change sets the frame threshold based on max concurrent streams and
stream buffer size, the product of which is the maximum amount of
in-flight stream data across all server streams at any moment. The
value is divided by 2000 to
account for a typical MTU 1500 and the fact that not all frames are STREAM
frames.
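A sketch of the derivation described above, with directive values shown
as plain variables:

    /* upper bound of in-flight stream data across all server streams,
       scaled down by ~2000 to approximate MTU-sized (1500) frames and
       the share of non-STREAM frames */
    frame_threshold = max_concurrent_streams * stream_buffer_size / 2000;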
If a connection is network-limited, MTU probes have little chance of
being sent since the congestion window is almost always full. As a
result, PMTUD may not be able to reach the real MTU and the connection
may operate with a reduced MTU. The solution is to ignore the
congestion window when sending MTU probes. This may lead to a temporary
increase of the in-flight counter beyond the congestion window.
As per RFC 9000, Section 14.4:
Loss of a QUIC packet that is carried in a PMTU probe is therefore
not a reliable indication of congestion and SHOULD NOT trigger a
congestion control reaction.
Previously, these functions operated on a per-level basis. This,
however, resulted in excessive logging of in_flight and would also lead
to extra work detecting an underutilized congestion window in the
follow-up patches.
On some systems, the value of ngx_current_msec is derived from a
monotonic clock, for which the following is defined by POSIX:
For this clock, the value returned by clock_gettime() represents
the amount of time (in seconds and nanoseconds) since an unspecified
point in the past.
As a result, overflow protection is needed when comparing two
ngx_msec_t values.
The change adds such protection to the ngx_quic_detect_lost() function.
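A sketch of such a comparison, following the usual nginx pattern of
casting the unsigned difference to a signed ngx_msec_int_t (the packet
send time and threshold variables are illustrative):

    ngx_msec_int_t  delay;

    /* safe even if ngx_current_msec has wrapped around */
    delay = (ngx_msec_int_t) (ngx_current_msec - pkt_sent_time);

    if (delay > threshold) {
        /* the packet is declared lost */
    }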
Since the recovery_start field was initialized with ngx_current_msec,
all congestion events that happened within the same millisecond or
cycle iteration were treated as happening in recovery mode.
Also, when handling persistent congestion, initializing recovery_start
with ngx_current_msec resulted in treating all sent packets as sent in
recovery mode, which violates RFC 9002; see the example in Appendix B.8.
While here, also fixed recovery_start wrap protection. Previously, it
used a 2 * max_idle_timeout time frame for all sent frames, which is
not a reliable protection since max_idle_timeout is unrelated to
congestion control. Now the recovery_start <= now condition is
enforced. Note that a recovery_start wrap is highly unlikely and can
only occur on a 32-bit system if there are no congestion events for
24 days.
Previously, the expiration caused QUIC connection finalization even if
there were application-terminated streams still finishing sending data.
Such finalization terminated these streams.
An easy way to trigger this is to request a large file over HTTP/3 with
a small MTU. In this case, keepalive timeout expiration may abruptly
terminate the request stream.
Improved logging for simpler data extraction for plotting congestion
window graphs. In particular, added the current milliseconds value
from ngx_current_msec.
While here, simplified logging text and removed irrelevant data.
Starting with OpenSSL 3.0, groups may be added externally with pluggable
KEM providers. Using SSL_get_negotiated_group(), which performs a
lookup in a static table of known groups, doesn't allow listing such
groups by name, leaving them in hex. Adding X25519MLKEM768 to the
default group list in OpenSSL 3.5 made this problem more visible.
SSL_get0_group_name() and, apparently, SSL_group_to_name() allow
resolving such provider-implemented groups, which is also "generally
preferred" over SSL_get_negotiated_group(), as documented in OpenSSL
git commit 93d4f6133f.
This change enables listing external groups by name using
SSL_group_to_name(), available since OpenSSL 3.0. To preserve the
"prime256v1" naming for the 0x0017 group, and to avoid breaking support
for BoringSSL and older OpenSSL versions, it is used as a supplementary
method for groups that appear to be unknown.
See https://github.com/openssl/openssl/issues/27137 for related discussion.
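An illustrative sketch of the supplementary lookup (control flow
condensed, variable names illustrative):

    nid = SSL_get_negotiated_group(ssl_conn);

    #if (OPENSSL_VERSION_NUMBER >= 0x30000000L)
    if (nid & TLSEXT_nid_unknown) {
        /* resolve provider-implemented groups, e.g. X25519MLKEM768,
           instead of printing the raw hex value */
        name = SSL_group_to_name(ssl_conn, nid);
    }
    #endif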
Passwords were not preserved in optimized SSL contexts; the bug
appeared in d791b4aab (1.23.1) with configurations such as the
following:
    server {
        proxy_ssl_password_file password;
        proxy_ssl_certificate $ssl_server_name.crt;
        proxy_ssl_certificate_key $ssl_server_name.key;

        location /original/ {
            proxy_pass https://u1/;
        }

        location /optimized/ {
            proxy_pass https://u2/;
        }
    }
The fix is to always preserve passwords, by copying them to the
configuration pool, if dynamic certificates are used. This is done as
part of merging the "ssl_passwords" configuration.
To minimize the number of copies, a preserved version is then used for
inheritance. A notable exception is inheritance of preserved empty
passwords to the context with statically configured certificates:
    server {
        proxy_ssl_certificate $ssl_server_name.crt;
        proxy_ssl_certificate_key $ssl_server_name.key;

        location / {
            proxy_pass ...;
            proxy_ssl_certificate example.com.crt;
            proxy_ssl_certificate_key example.com.key;
        }
    }
In this case, an unmodified version (NULL) of empty passwords is set,
to allow reading them from the password prompt on nginx startup.
As an additional optimization, a preserved instance of inherited
configured passwords is stored at the previous level, so that it can
be inherited by other contexts:
    server {
        proxy_ssl_password_file password;

        location /1/ {
            proxy_pass https://u1/;
            proxy_ssl_certificate $ssl_server_name.crt;
            proxy_ssl_certificate_key $ssl_server_name.key;
        }

        location /2/ {
            proxy_pass https://u2/;
            proxy_ssl_certificate $ssl_server_name.crt;
            proxy_ssl_certificate_key $ssl_server_name.key;
        }
    }
It was possible to write outside of the buffer used to keep UTF-8
decoded values when parsing the conversion table configuration.
Since this happened before UTF-8 decoding, the fix is to check in
advance whether character codes require more than a 3-byte sequence.
Note that this is already enforced by a later check of
ngx_utf8_decode() decoded values against 0xffff, which corresponds to
the maximum value encoded as a valid 3-byte sequence, so the fix does
not affect valid values.
Found with AddressSanitizer.
Fixes GitHub issue #529.
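A sketch of the advance check, based on the UTF-8 leading byte (the
pointer name and error handling are illustrative):

    /* a leading byte in the 0xF0..0xF7 range starts a 4-byte UTF-8
       sequence, which decodes to values above 0xffff and would not
       fit the buffer, so reject it before calling ngx_utf8_decode() */
    if ((*src & 0xf8) == 0xf0) {
        return "invalid utf-8 sequence";
    }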
As uncovered by a recent addition in slice.t, a partially initialized
context, coupled with an HTTP 206 response from the stub backend, might
be accessed in the next slice subrequest.
Found by bad memory allocator simulation.
Upstream SSL sessions may be of a noticeably larger size with tickets
in TLSv1.2 and older versions, or with "stateless" tickets in TLSv1.3,
if a client certificate is saved into the session. Further, certain
stateless session resumption implementations may store additional data.
One such implementation is the JDK, known to also include server
certificates in session ticket data, which roughly doubles a decoded
session size to slightly beyond the previous limit. While this is
believed to be an issue on the JDK side, the change allows saving such
sessions.
Another, innocent case is using RSA certificates with an 8192-bit key
size.