This makes the behavior of HTTP/2 and HTTP/3 much more similar. In
particular, the HTTP/3 :authority pseudo-header is used to set the Host
header, instead of the virtual server. This is arguably less correct,
but it is consistent with the existing HTTP/2 behavior and unbreaks
users of PHP-FPM and other FastCGI applications. In the future, NGINX
could have a config option that caused :authority and Host to be treated
separately in both HTTP/2 and HTTP/3.
HTTP header names must be RFC 9110 tokens, so only a subset of characters
is permitted. RFC 9113 and RFC 9114 require rejecting invalid header
characters in HTTP/2 and HTTP/3 respectively, so reject them in HTTP/1.0
and HTTP/1.1 as well, for consistency. This also requires removing the
ignore hack for (presumably ancient) versions of IIS.
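For reference, the permitted set is the RFC 9110 "tchar" alphabet. A
minimal standalone check might look like the sketch below (illustrative
only; a production parser would more likely use a 256-entry lookup
table, but the accepted set is the same):

    #include <stdbool.h>

    /* tchar = "!" / "#" / "$" / "%" / "&" / "'" / "*" / "+" / "-" / "."
     *       / "^" / "_" / "`" / "|" / "~" / DIGIT / ALPHA  (RFC 9110) */
    static bool
    is_tchar(unsigned char c)
    {
        if ((c >= '0' && c <= '9')
            || (c >= 'a' && c <= 'z')
            || (c >= 'A' && c <= 'Z'))
        {
            return true;
        }

        switch (c) {
        case '!': case '#': case '$': case '%': case '&': case '\'':
        case '*': case '+': case '-': case '.': case '^': case '_':
        case '`': case '|': case '~':
            return true;
        default:
            return false;
        }
    }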
RFC 9113 and RFC 9114 both require requests with connection-specific
headers to be treated as malformed, with the exception of "te: trailers".
Reject requests containing them.
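The fields in question are those listed in RFC 9113, Section 8.2.2, and
RFC 9114, Section 4.2. A sketch of the check (illustrative, not the
actual patch; names are lowercased, as HTTP/2 and HTTP/3 require):

    #include <stdbool.h>
    #include <string.h>

    static const char *connection_specific[] = {
        "connection", "proxy-connection", "keep-alive",
        "transfer-encoding", "upgrade",
    };

    /* returns true if a request containing this field is malformed */
    static bool
    forbidden_field(const char *name, const char *value)
    {
        size_t  i;

        for (i = 0;
             i < sizeof(connection_specific) / sizeof(connection_specific[0]);
             i++)
        {
            if (strcmp(name, connection_specific[i]) == 0) {
                return true;
            }
        }

        /* "te" is connection-specific too, but "te: trailers" is allowed */
        return strcmp(name, "te") == 0 && strcmp(value, "trailers") != 0;
    }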
All versions of HTTP forbid field (header and trailer) values from
having leading or trailing horizontal whitespace (0x20 and 0x09). In
HTTP/1.0 and HTTP/1.1, leading and trailing whitespace must be stripped
from the field value before further processing. In HTTP/2 and HTTP/3,
leading and trailing whitespace must cause the entire message to be
considered malformed.
Willy Tarreau (lead developer of HAProxy) has indicated that there are
clients that actually do send leading and/or trailing whitespace in
HTTP/2 and/or HTTP/3 cookie headers, which is why HAProxy accepts them.
Therefore, the fix is disabled by default and must be enabled with the
reject_leading_trailing_whitespace directive. Stripping leading and/or
trailing whitespace would require either allocating a new buffer or
changing the pointers in the existing buffer, and I am not familiar
enough with NGINX to know if subsequent code expects a buffer that was
allocated in a particular way. If header values were ever passed to
ngx_pfree(), munging them to skip leading whitespace would mean that a
request with leading whitespace would cause ngx_pfree() to be called
with an invalid pointer, which would be a security vulnerability.
Rejecting the request doesn't introduce any new error paths that clients
cannot already trigger, and it doesn't risk violating any invariants
that existing code might assume. Also, Varnish Cache rejects HTTP/2
requests with leading and/or trailing whitespace in field values, so
there is precedent for doing so.
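For illustration, the rejection check enabled by the directive amounts
to something like the following sketch (not the actual patch):

    #include <stdbool.h>
    #include <stddef.h>

    /* a field value is malformed if it starts or ends with SP (0x20)
     * or HTAB (0x09) */
    static bool
    has_surrounding_whitespace(const unsigned char *val, size_t len)
    {
        if (len == 0) {
            return false;
        }

        return val[0] == 0x20 || val[0] == 0x09
               || val[len - 1] == 0x20 || val[len - 1] == 0x09;
    }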
The header validation required by HTTP/2 and HTTP/3 is identical, so use
a common function for both. This will make it easier to add additional
validation in the future. Move the function to ngx_http_parse.c so that
it can share code with the HTTP/1.x parser.
HTTP considers 0x09 (horizontal tab) to be valid horizontal whitespace
in a field value, and there are badly-behaved clients in the wild that
rely on this behavior and cannot be fixed. This also ensures that NGINX
is not itself such a badly-behaved client and that, for HTTP/1.x
requests, the values of the $http_* variables agree with what upstream
servers will see.
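In grammar terms, a field value may contain HTAB, SP, visible ASCII and
obs-text; a per-character check consistent with RFC 9110 (a sketch, for
illustration only):

    #include <stdbool.h>

    /* field-vchar = VCHAR / obs-text; HTAB and SP may appear between
     * field-vchars, so they must not be rejected mid-value */
    static bool
    valid_field_value_char(unsigned char c)
    {
        return c == '\t' || c == ' '
               || (c >= 0x21 && c <= 0x7e)    /* VCHAR */
               || c >= 0x80;                  /* obs-text */
    }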
Fixes: #187
As uncovered by a recent addition in slice.t, a partially initialized
context, coupled with an HTTP 206 response from a stub backend, might be
accessed in the next slice subrequest.
Found by bad memory allocator simulation.
Upstream SSL sessions may be of a noticeably larger size with tickets
in TLSv1.2 and older versions, or with "stateless" tickets in TLSv1.3,
if a client certificate is saved into the session. Further, certain
stateless session resumption implementations may store additional data.
One such implementation is JDK, which is known to also include server
certificates in session ticket data, roughly doubling the decoded
session size to slightly beyond the previous limit. While this is
believed to be an issue on the JDK side, the change allows saving such
sessions.
Another, innocent case is using RSA certificates with 8192-bit keys.
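For context, the size of a serialized session can be measured with
OpenSSL before it is copied into a fixed-size buffer; a sketch (the
max_size parameter stands in for whatever limit applies):

    #include <stddef.h>
    #include <openssl/ssl.h>

    /* with a NULL output pointer, i2d_SSL_SESSION() only returns the
     * number of bytes the DER encoding of the session would occupy */
    static int
    session_fits(SSL_SESSION *sess, size_t max_size)
    {
        int  len = i2d_SSL_SESSION(sess, NULL);

        return len > 0 && (size_t) len <= max_size;
    }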
Previously, a request might be left in an inconsistent state in case of
an error, which manifested in "http request count is zero" alerts when
used by the SSI filter.
The fix is to reshuffle the initialization order to postpone committing
state changes until after any potentially failing parts.
Found by bad memory allocator simulation.
In OpenSSL, session resumption always happens in the default SSL context,
prior to invoking the SNI callback. Further, unlike in TLSv1.2 and older
protocols, SSL_get_servername() returns values received in the resumption
handshake, which may be different from the value in the initial handshake.
Notably, this makes the restriction added in b720f650b insufficient for
sessions resumed with a different SNI server name.
Considering the example from b720f650b, previously, a client was able to
request example.org by presenting a certificate for example.org, and then
to resume the session and request example.com.
The fix is to reject handshakes resumed with a different server name, if
verification of client certificates is enabled in a corresponding server
configuration.
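A sketch of the idea in OpenSSL terms (illustrative, not the actual
patch: get_saved_sni() is a hypothetical helper returning the name
recorded when the session was created, and the real change additionally
applies only when client certificate verification is enabled):

    #include <strings.h>
    #include <openssl/ssl.h>
    #include <openssl/tls1.h>

    /* hypothetical helper: the SNI name recorded when the session
     * being resumed was originally created */
    const char *get_saved_sni(SSL *ssl);

    static int
    servername_cb(SSL *ssl, int *ad, void *arg)
    {
        const char  *name, *saved;

        name = SSL_get_servername(ssl, TLSEXT_NAMETYPE_host_name);

        if (SSL_session_reused(ssl) && name != NULL) {
            saved = get_saved_sni(ssl);

            /* reject handshakes resumed with a different server name */
            if (saved == NULL || strcasecmp(name, saved) != 0) {
                *ad = SSL_AD_UNRECOGNIZED_NAME;
                return SSL_TLSEXT_ERR_ALERT_FATAL;
            }
        }

        return SSL_TLSEXT_ERR_OK;
    }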
The directive sets a timeout during which a keepalive connection will
not be closed by nginx for connection reuse or graceful shutdown.
The change allows clients that send multiple requests over the same
connection, without delay or with a small delay between them, to avoid
receiving a TCP RST in response to one of them, barring network issues
and non-graceful shutdown. As a side effect, it also addresses the TCP
reset problem described in RFC 9112, Section 9.6, where the last sent
HTTP response could be damaged by a follow-up TCP RST. This is important
for non-idempotent requests, which cannot be retried by the client.
It is not recommended to set keepalive_min_timeout to large values, as
this can introduce an additional delay during graceful shutdown and may
keep nginx from reusing connections effectively.
The build location of the resulting libatomic_ops.a was changed in v7.4.0
after converting libatomic_ops to use libtool. The fix is to use the
library from the install path, which allows building with both old and
new versions.
Initially reported here:
https://mailman.nginx.org/pipermail/nginx/2018-April/056054.html
This can happen with certificates and certificate keys specified
with variables, due to partial cache updates in various scenarios:
- cache expiration with only one element of the pair evicted
- on-disk update with non-cacheable encrypted keys
- non-atomic on-disk update
The fix is to retry with fresh data on X509_R_KEY_VALUES_MISMATCH.
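A sketch of the detection (illustrative; the actual change retries the
loading path inside nginx's SSL code rather than using a helper like
this):

    #include <openssl/err.h>
    #include <openssl/ssl.h>
    #include <openssl/x509.h>

    /* returns 1 if the mismatch characteristic of a partially updated
     * cache was hit and fresh data should be loaded, 0 if the pair is
     * consistent, -1 on any other error */
    static int
    check_cert_key_pair(SSL_CTX *ctx)
    {
        if (SSL_CTX_check_private_key(ctx) == 1) {
            return 0;
        }

        if (ERR_GET_REASON(ERR_peek_last_error())
            == X509_R_KEY_VALUES_MISMATCH)
        {
            ERR_clear_error();
            return 1;
        }

        return -1;
    }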
A new directive "ssl_certificate_cache max=N [valid=time] [inactive=time]"
enables caching of SSL certificate chain and secret key objects specified
by "ssl_certificate" and "ssl_certificate_key" directives with variables.
Co-authored-by: Aleksei Bavshin <a.bavshin@nginx.com>
The SSL object cache, as previously introduced in 1.27.2, did not take
encrypted certificate keys into account: such keys might be unexpectedly
fetched from the cache regardless of the matching passphrase. To avoid
this, caching of encrypted certificate keys is now disabled based on
the passphrase callback invocation.
A notable exception is encrypted certificate keys configured without
ssl_password_file: they are loaded once, resulting in the passphrase
prompt on startup, and are reused in other contexts as applicable.
Memory-based objects are always inherited; engine-based objects are
never inherited, to adhere to the volatile nature of engines; file-based
objects are inherited subject to modification time and file index.
The previous behaviour of bypassing the cache from the old configuration
cycle is preserved with a new directive "ssl_object_cache_inheritable
off;".
It now uses 5/4 times more memory for the pending buffer.
Further, a single allocation is now used, which takes an additional
56 bytes for deflate_allocs in 64-bit mode, aligned to 16, to store
sub-allocation pointers, and the total allocation size is now padded
up to 128 bytes, which theoretically takes 200 additional bytes in
total. This still fits into the "4 * (64 + sizeof(void*))" bytes of
additional space for ZALLOC used in zlib-ng 2.1.x versions, i.e.
288 bytes with 64-bit pointers. The comment was updated to reflect this.
While trying to close a stream in ngx_quic_close_streams() by calling its
read event handler, the next stream saved prior to that could be destroyed
recursively. This caused a segfault while trying to access the next stream.
The way the next stream could be destroyed in HTTP/3 is the following:
a request stream read event handler ngx_http_request_handler() could
end up calling ngx_http_v3_send_cancel_stream() to report a cancelled
request stream on the decoder stream. If sending the stream cancellation
decoder instruction fails for any reason, and the decoder stream is the
next in order after the request stream, the issue is triggered.
The fix is to postpone calling read event handlers for all streams being
closed to avoid closing a released stream.
Previously, such packets were treated as long header packets with unknown
version 0, and a version negotiation packet was sent in response. This
could be used to set up an infinite traffic reflection loop with another
nginx
instance.
Now version negotiation packets are ignored. As per RFC 9000, Section 6.1:
An endpoint MUST NOT send a Version Negotiation packet in response to
receiving a Version Negotiation packet.
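Detecting such a packet is straightforward: a Version Negotiation packet
is a long header packet whose version field is zero (RFC 9000,
Section 17.2.1). A standalone sketch of the check:

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    static bool
    is_version_negotiation(const uint8_t *p, size_t len)
    {
        uint32_t  version;

        /* long header packets have the high bit of the first byte set,
         * followed by a 32-bit version field */
        if (len < 5 || (p[0] & 0x80) == 0) {
            return false;
        }

        version = ((uint32_t) p[1] << 24) | ((uint32_t) p[2] << 16)
                  | ((uint32_t) p[3] << 8) | (uint32_t) p[4];

        return version == 0;
    }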
Since 0-RTT and 1-RTT data exist in the same packet number space,
ngx_quic_discard_ctx() incorrectly discards 1-RTT packets when
0-RTT keys are discarded.
The issue was introduced by 58b92177e7.
In particular, an untagged CAPABILITY response, as described in the
interim RFC 3501 internet drafts, was seen in various IMAP servers.
Previously, it resulted in a broken connection; now an untagged response
is proxied to the client.
When a client address is received, an IPv6 address could be specified
without square brackets and without a port, as well as with both the
brackets and a port. The change additionally allows an IPv6 address in
square brackets and no port, e.g. "[2001:db8::1]", which was previously
considered an error. This format conforms to RFC 3986.
The change also affects proxy_bind and friends.
Renaming a temporary file to an empty path ("") returns NGX_ENOPATH,
with a subsequent ngx_create_full_path() call to create the full path.
This function skips initial bytes as part of the path separator lookup,
which causes an out-of-bounds access on short strings.
The fix is to avoid renaming a temporary file to an obviously invalid
path, as well as to explicitly forbid such syntax for literal values.
Although Coverity reports a potential type underflow, it is not
actually possible because the terminating '\0' is always included.
Notably, the run-time check is sufficient for Win32 as well.
Other short invalid values result in either NGX_ENOENT or NGX_EEXIST
and "MoveFile() .. failed" critical log messages, which involve
separate error handling.
Prodded by Coverity (CID 1605485).
This simplifies merging protocol values after ea15896 and ebd18ec.
Further, as outlined in ebd18ec18, for libraries preceding TLSv1.2+
support, only the meaningful versions TLSv1 and TLSv1.1 are set by default.
While here, fixed indentation.
When cropping an stsc atom, it's assumed that the chunk index is never 0.
Based on this assumption, start_chunk and end_chunk are calculated
by subtracting 1 from it. If the chunk index is zero, start_chunk or
end_chunk may underflow, which will later trigger the
"start/end time is out mp4 stco chunks" error. The change adds an
explicit check for a zero chunk index to avoid the underflow and report
a proper error.
A zero chunk index is explicitly banned in ISO/IEC 14496-12, 8.7.4
Sample To Chunk Box. It is also implicitly banned in the QuickTime File
Format specification, whose description of the chunk offset table
references "Chunk 1" as the first table element.
Currently, an error is triggered if any of the chunk runs in stsc are
unordered. This however does not cover the final chunk run, which
ends at trak->chunks + 1. The previous chunk index can be larger,
leading to a 32-bit overflow. This could allow skipping the validity
check "if (start_sample > n)". It could later lead to a large
trak->start_chunk/trak->end_chunk, which would be caught later in
ngx_http_mp4_update_stco_atom() or ngx_http_mp4_update_co64_atom().
While there are no implications of the validity check being avoided,
the change still adds a check to ensure the final chunk run is ordered,
in order to produce a meaningful error and avoid a potential integer
overflow.
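A standalone sketch of the combined stsc validation described in these
two changes (names are illustrative, not nginx's):

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    /* Each stsc entry opens a run of chunks starting at first_chunk.
     * Indices are 1-based, so zero is invalid, and runs must be
     * strictly increasing, including the implicit final run ending at
     * total_chunks + 1; otherwise later subtractions underflow. */
    static bool
    stsc_runs_valid(const uint32_t *first_chunk, size_t n_entries,
                    uint32_t total_chunks)
    {
        size_t  i;

        for (i = 0; i < n_entries; i++) {
            if (first_chunk[i] == 0) {
                return false;
            }

            if (i > 0 && first_chunk[i] <= first_chunk[i - 1]) {
                return false;
            }
        }

        return n_entries == 0 || first_chunk[n_entries - 1] <= total_chunks;
    }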