Previously, "%uA" was used, which corresponds to ngx_atomic_uint_t.
The size of ngx_atomic_uint_t can easily differ from that of uint64_t,
leading to undefined results.
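For reference, a minimal illustration of the difference (the counter name
is hypothetical; "%uL" is the specifier matching uint64_t):

    uint64_t  bytes_sent;

    /* wrong: %uA makes ngx_vslprintf() read an ngx_atomic_uint_t,
       which may be 32-bit, while a uint64_t was passed */
    ngx_log_error(NGX_LOG_INFO, c->log, 0, "sent: %uA", bytes_sent);

    /* right: %uL matches uint64_t on all platforms */
    ngx_log_error(NGX_LOG_INFO, c->log, 0, "sent: %uL", bytes_sent);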
In TLSv1.3, NewSessionTicket messages arrive after the handshake and
can come at any time. Therefore we use a callback to save the session
when we know about it. This approach works for < TLSv1.3 as well.
The callback function is set once per location during the merge phase.
Since SSL_get_session() in BoringSSL returns an unresumable session for
TLSv1.3, peer save_session() methods have been updated as well to use a
session supplied within the callback. To preserve the API, the session is
cached in c->ssl->session. It is preferably accessed in save_session()
methods by ngx_ssl_get_session() and ngx_ssl_get0_session() wrappers.
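A sketch of the mechanism, close to what this change does in
ngx_event_openssl.c (exact names may differ):

    static int
    ngx_ssl_new_client_session(ngx_ssl_conn_t *ssl_conn, SSL_SESSION *sess)
    {
        ngx_connection_t  *c;

        c = ngx_ssl_get_connection(ssl_conn);

        if (c->ssl->save_session) {
            c->ssl->session = sess;      /* seen by save_session() */
            c->ssl->save_session(c);
            c->ssl->session = NULL;
        }

        return 0;    /* the callback does not retain the session */
    }

The callback is registered with SSL_CTX_sess_set_new_cb(); note that on
the client side it is only invoked if the SSL_SESS_CACHE_CLIENT session
cache mode is enabled on the context.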
In OpenSSL 1.1.0 the SSL_CTRL_CLEAR_OPTIONS macro was removed, so
a conditional compilation test on it results in SSL_clear_options()
and SSL_CTX_clear_options() not being used. Notably, this caused
"ssl_prefer_server_ciphers off" to not work in SNI-based virtual
servers if server preference was switched on in the default server.
It looks like the only possible fix is to test OPENSSL_VERSION_NUMBER
explicitly.
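Roughly, since SSL_clear_options() appeared in OpenSSL 0.9.8m (a sketch;
SSL_OP_CIPHER_SERVER_PREFERENCE is the option behind
"ssl_prefer_server_ciphers"):

    /* before: never compiled with OpenSSL 1.1.0+, where the
       SSL_CTRL_CLEAR_OPTIONS macro is gone */
#ifdef SSL_CTRL_CLEAR_OPTIONS
    SSL_clear_options(ssl_conn, SSL_OP_CIPHER_SERVER_PREFERENCE);
#endif

    /* after: test the library version explicitly */
#if OPENSSL_VERSION_NUMBER >= 0x009080dfL
    SSL_clear_options(ssl_conn, SSL_OP_CIPHER_SERVER_PREFERENCE);
#endif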
Starting with OpenSSL 1.1.0, SSL_R_UNSUPPORTED_PROTOCOL instead of
SSL_R_UNKNOWN_PROTOCOL is reported when a protocol is disabled via
an SSL_OP_NO_* option.
Additionally, SSL_R_VERSION_TOO_LOW is reported when using MinProtocol
or when seclevel checks (as set by @SECLEVEL=n in the cipher string)
reject a protocol, and this is what happens with SSLv3 and @SECLEVEL=1,
which is the default.
There is also the SSL_R_VERSION_TOO_HIGH error code, but it looks like
it is not possible to trigger it.
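A sketch of matching these reasons (guarded, as the codes differ between
OpenSSL versions):

    n = ERR_GET_REASON(ERR_peek_error());

    if (n == SSL_R_UNKNOWN_PROTOCOL
#ifdef SSL_R_UNSUPPORTED_PROTOCOL
        || n == SSL_R_UNSUPPORTED_PROTOCOL    /* OpenSSL 1.1.0+ */
#endif
#ifdef SSL_R_VERSION_TOO_LOW
        || n == SSL_R_VERSION_TOO_LOW         /* MinProtocol, seclevel */
#endif
        )
    {
        /* protocol version mismatch: client-side noise, not an error */
    }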
There should be at least one worker connection for each listening socket,
plus an additional connection for the channel between worker and master;
otherwise, starting worker processes will fail.
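A sketch of the check, assuming it is done once listening sockets are
known (the message wording is illustrative):

    if (ecf->connections < cycle->listening.nelts + 1) {
        ngx_log_error(NGX_LOG_EMERG, cycle->log, 0,
                      "%ui worker_connections are not enough "
                      "for %ui listening sockets",
                      ecf->connections, cycle->listening.nelts);
        return NGX_ERROR;
    }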
Previously, listening sockets were not cloned if the worker_processes
directive was specified after "listen ... reuseport".
This also simplifies upcoming configuration check on the number
of worker connections, as it needs to know the number of listening
sockets before cloning.
The variable keeps the latest SSL protocol version supported by the client.
The variable has the same format as $ssl_protocol.
The version is read from the client_version field of ClientHello. If the
supported_versions extension is present in the ClientHello, then the version
is set to TLSv1.3.
Errors when sending UDP datagrams can happen, e.g., when local IP address
changes (see fa0e093b64d7), or an unavailable DNS server on the LAN can cause
send() to fail with EHOSTDOWN on BSD systems. If this happens during
the initial query, retry sending immediately, to a different DNS server when
possible. If this is not enough, allow normal resend to happen by ignoring
the return code of the second ngx_resolver_send_query() call, much like we
do in ngx_resolver_resend().
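A sketch of the retry (field names as in ngx_resolver.h after
016352c19049, which added per-node server rotation):

    if (ngx_resolver_send_query(r, rn) != NGX_OK) {

        /* retry once immediately, to the next server if there are
           several; ignore the result, the normal resend timer will
           handle anything beyond that */

        rn->last_connection++;
        if (rn->last_connection == r->connections.nelts) {
            rn->last_connection = 0;
        }

        (void) ngx_resolver_send_query(r, rn);
    }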
The "http request" and "https proxy request" errors cannot happen
with HTTP due to pre-handshake checks in ngx_http_ssl_handshake(),
but can happen when SSL is used in stream and mail modules.
With gRPC it is possible that sending a request is blocked due to flow
control. Moreover, further sending might only be allowed once the
backend sees all the data we've already sent. With such a backend
it is required to clear the TCP_NOPUSH socket option to make sure all
the data we've sent is actually delivered to the backend.
As such, we now clear TCP_NOPUSH in ngx_http_upstream_send_request()
also on NGX_AGAIN if c->write->ready is set. This fixes a test (which
waits for all the 64k bytes as per initial window before allowing more
bytes) with sendfile enabled when the body was written to a file
in a different context.
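A sketch of the added handling (ngx_tcp_push() clears TCP_NOPUSH, or
TCP_CORK on Linux, on the socket):

    if (rc == NGX_AGAIN
        && c->write->ready
        && c->tcp_nopush == NGX_TCP_NOPUSH_SET)
    {
        if (ngx_tcp_push(c->fd) == -1) {
            ngx_log_error(NGX_LOG_CRIT, c->log, ngx_socket_errno,
                          ngx_tcp_push_n " failed");
            return NGX_ERROR;
        }

        c->tcp_nopush = NGX_TCP_NOPUSH_UNSET;
    }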
Now tcp_nopush on peer connections is disabled if it is disabled on
the client connection, similar to how we handle c->sendfile. Previously,
tcp_nopush was always used on upstream connections, regardless of
the "tcp_nopush" directive.
We copy input buffers to our buffers, so various flags might be
unexpectedly set in buffers returned by ngx_chain_get_free_buf().
In particular, the b->in_file flag might be set when the body was
written to a file in a different context. With sendfile enabled this
in turn might result in protocol corruption if such a buffer was reused
for a control frame.
Make sure to clear buffers and set only fields we really need to be set.
The module implements a random load-balancing algorithm with an optional
second choice. In the latter case, the best of two servers is chosen,
taking into account the number of connections and the server weight.
Example:
upstream u {
    random [two [least_conn]];
    server 127.0.0.1:8080;
    server 127.0.0.1:8081;
    server 127.0.0.1:8082;
    server 127.0.0.1:8083;
}
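The second choice follows the "power of two choices" idea; conceptually
(a sketch; peer fields as in other upstream balancers, two distinct
peers assumed):

    /* pick two distinct peers uniformly at random */
    i = ngx_random() % n;
    j = ngx_random() % (n - 1);

    if (j >= i) {
        j++;
    }

    peer = &peers[i];
    best = &peers[j];

    /* prefer the peer with fewer connections per unit of weight */
    if (best->conns * peer->weight < peer->conns * best->weight) {
        peer = best;
    }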
Before 4a8c9139e579, ngx_resolver_create() didn't use the configuration
pool, and allocations were done using malloc(). In 016352c19049, when the
resolver gained support for several servers, new allocations were done
from the pool.
With u->conf->preserve_output set the request body file might be used
after the response header is sent, so avoid cleaning it. (Normally
this is not a problem as u->conf->preserve_output is only set with
r->request_body_no_buffering, but the request body might be already
written to a file in a different context.)
Previously, only one client packet could be processed in a udp stream session
even though multiple response packets were supported. Now multiple packets
coming from the same client address and port are delivered to the same stream
session.
If it's required to maintain a single stream of data, nginx should be
configured in a way that all packets from a client are delivered to the same
worker. On Linux and DragonFly BSD the "reuseport" parameter should be
specified for this. Other systems do not currently provide appropriate
mechanisms. For these systems a single stream of udp packets is only
guaranteed in single-worker configurations.
The proxy_responses directive now specifies how many packets are expected in
response to a single client packet.
Previously, ngx_event_recvmsg() got remote socket addresses after creating
the connection object. In preparation for handling multiple UDP packets in a
single session, this code was moved up.
On Linux recvmsg() syscall may return a zero-length client address when
receiving a datagram from an unbound unix datagram socket. It is usually
assumed that a socket address has at least the sa_family member. Zero-length
socket address caused buffer over-read in functions which receive socket
address, for example ngx_sock_ntop(). Typically the over-read resulted in
unexpected socket family followed by session close. Now a fake socket address
is allocated instead of a zero-length client address.
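A sketch of the substitution (ngx_sockaddr_t is the union nginx uses to
hold any socket address; sa is the recvmsg() address buffer):

    if (socklen == 0) {

        /*
         * on Linux recvmsg() returns a zero-length address when
         * a datagram is received from an unbound unix socket;
         * substitute a fake address of the listening socket family
         */

        socklen = sizeof(ngx_sockaddr_t);
        ngx_memzero(&sa, sizeof(ngx_sockaddr_t));
        sa.sockaddr.sa_family = ls->sockaddr->sa_family;
    }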
Negative times can appear since workers only update the time at the start
of an event loop iteration. If a worker was blocked for a long time during
an event loop iteration, it is possible that another worker already
updated the time stored in the node. As such, time since last update
of the node (ms) will be negative.
Previous code used ngx_abs(ms) in the calculations. That is, negative
times were effectively treated as positive ones. As a result, it was
not possible to maintain high request rates, where the same node can be
updated multiple times during an event loop iteration.
In particular, this affected setups with many SSL handshakes, see
http://mailman.nginx.org/pipermail/nginx/2018-May/056291.html.
Fix is to only update the last update time stored in the node if the
new time is larger than the previously stored one. If a future time is
stored in the node, we preserve this time as is.
To prevent breaking things on platforms without monotonic time available
if system time is updated backwards, a safety limit of 60 seconds is
used. If the time stored in the node is more than 60 seconds in the future,
we assume that the time was changed backwards and update lr->last
to the current time.
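A sketch of the fixed time accounting (rate and excess are scaled by
1000, so +1000 accounts for the current request):

    ms = (ngx_msec_int_t) (now - lr->last);

    if (ms < -60000) {
        /* more than 60s in the future: the system time was likely
           changed backwards, so catch lr->last up below */
        ms = 1;

    } else if (ms < 0) {
        /* a future time within the limit: preserve lr->last as is */
        ms = 0;
    }

    excess = lr->excess - ctx->rate * ms / 1000 + 1000;

    if (excess < 0) {
        excess = 0;
    }

    if (ms) {
        lr->last = now;
    }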
The bug in question was fixed in glibc 2.3.2 and is no longer expected
to manifest itself on real servers. On the other hand, the workaround
causes compilation problems on various systems. Previously, we've
already fixed the code to compile with musl libc (fd6fd02f6a4d), and
now it is broken on Fedora 28 where glibc's crypt library was replaced
by libxcrypt. So the workaround was removed.
FreeBSD returns EINVAL when getsockopt(TCP_FASTOPEN) is called on a unix
domain socket, resulting in "getsockopt(TCP_FASTOPEN) ... failed" messages
during binary upgrade when unix domain listen sockets are present in
the configuration. Added EINVAL to the list of ignored error codes.
Previously, only unix domain sockets were reopened to tolerate cases when
local syslog server was restarted. It makes sense to treat other cases
(for example, local IP address changes) similarly.
Cast to an intermediate "void *" to lose the compiler's knowledge about the
original type and suppress the warning. This is not a real fix but rather a
workaround.
Found by gcc8.
In the mail and stream modules, providing no certificate is a fatal
condition, much like with the "ssl" and "starttls" directives.
In http, "listen ... ssl" can be used in a non-default server without
certificates as long as there is a certificate in the default one, so
a missing certificate is only fatal for default servers.
In 51e1f047d15d, the "ssl" directive name was incorrectly hardcoded
in the error message shown when there are some SSL keys defined, but
not for all certificates. The right approach is to use the "mode" variable,
which can be either "ssl" or "starttls".
Previously, the result of ngx_atoi() was assigned to an ngx_uint_t variable,
so errors reported by ngx_atoi() became positive, and the subsequent
"status < 100" check failed to catch them. This resulted in configurations
like "proxy_cache_valid 2xx 30s" being accepted as correct, while they
in fact do nothing. Changing the type to ngx_int_t fixes this, and such
configurations are now properly rejected.
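The essence of the change (surrounding parsing code abbreviated):

    ngx_int_t  status;    /* was: ngx_uint_t */

    status = ngx_atoi(value[i].data, value[i].len);

    /* ngx_atoi() returns NGX_ERROR (-1) for "2xx"; as ngx_uint_t it
       became a huge positive value and slipped through this check */
    if (status < 100) {
        return "invalid status";
    }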
Previously, ngx_http_upstream_process_header() might be called after
we've finished reading response headers and switched to a different read
event handler, leading to errors with gRPC proxying. Additionally,
the u->conf->read_timeout timer might be re-armed while reading response
headers (while this is expected to be a single timeout on reading
the whole response header).
Previously, ngx_http_upstream_test_next() used an outdated condition when
deciding whether switching to a different server is possible. It
did not take into account restrictions on non-idempotent requests, requests
with non-buffered request body, and the next upstream timeout.
For such requests, switching to the next upstream server was rejected
later in ngx_http_upstream_next(), resulting in nginx's own error page
being returned instead of the original upstream response.
- use normal prefixes for types and macros
- removed some macros and types
- revised debug messages
- removed useless check of ngx_sock_ntop() returning 0
- removed special processing of AF_UNSPEC
The protocol used on an inbound connection is auto-detected, and the
corresponding parser is used to extract the passed addresses. TLV parameters
are ignored.
The maximum supported size of PROXY protocol header is 107 bytes
(similar to version 1).
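The version 2 binary header, as defined by the PROXY protocol
specification (field names close to those used by the parser):

    typedef struct {
        u_char  signature[12];     /* "\r\n\r\n\0\r\nQUIT\n" */
        u_char  version_command;   /* 0x2N: version 2, command N */
        u_char  family_transport;  /* address family and protocol */
        u_char  len[2];            /* remaining length, network order */
    } ngx_proxy_protocol_header_t;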
All cases are harmless and should not happen on valid values, though can
result in bad values being shown incorrectly in logs.
Found by Coverity (CID 1430311, 1430312, 1430313).
The fields "uri", "location", and "url" from ngx_http_upstream_conf_t
moved to ngx_http_proxy_loc_conf_t and ngx_http_proxy_vars_t, reflect
this change in create_loc_conf comments.
The gRPC protocol makes a distinction between HEADERS frame with
the END_STREAM flag set, and a HEADERS frame followed by an empty
DATA frame with the END_STREAM flag. The latter is not permitted,
and results in errors not being propagated through nginx. Instead,
gRPC clients complain that "server closed the stream without sending
trailers" (seen in grpc-go) or "13: Received RST_STREAM with error
code 2" (seen in grpc-c).
To fix this, nginx now returns HEADERS with the END_STREAM flag if
the response length is known to be 0 and we are not expecting
any trailer headers to be added. Additionally, the response length is
explicitly set to 0 in the gRPC proxy if we see an initial HEADERS frame
with the END_STREAM flag set.
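Conceptually, in the HTTP/2 header filter (a sketch; field names as in
ngx_http_request_t):

    /* send END_STREAM on HEADERS when neither a response body nor
       trailers will follow, instead of an empty DATA frame */
    if (r->header_only
        || (r->headers_out.content_length_n == 0 && !r->expect_trailers))
    {
        fin = 1;
    }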
According to the gRPC protocol specification, the "TE" header is used
to detect incompatible proxies, and at least grpc-c server rejects
requests without "TE: trailers".
To preserve the logic, we have to pass "TE: trailers" to the backend if
and only if the original request contains "trailers" in the "TE" header.
Note that no other TE values are allowed in HTTP/2, so we have to remove
anything else.
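Conceptually (a sketch; ngx_strlcasestrn() does a length-bounded
case-insensitive search):

    if (r->headers_in.te) {
        p = r->headers_in.te->value.data;
        last = p + r->headers_in.te->value.len;

        if (ngx_strlcasestrn(p, last, (u_char *) "trailers", 8 - 1)
            != NULL)
        {
            /* forward "te: trailers" to the backend */
        }

        /* anything else in TE is dropped */
    }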
The module allows passing requests to upstream gRPC servers.
The module is built by default as long as HTTP/2 support is compiled in.
Example configuration:
grpc_pass 127.0.0.1:9000;
Alternatively, the "grpc://" scheme can be used:
grpc_pass grpc://127.0.0.1:9000;
Keepalive support is available via the upstream keepalive module. Note
that keepalive connections won't currently work with grpc-go as it fails
to handle SETTINGS_HEADER_TABLE_SIZE.
To use with SSL:
grpc_pass grpcs://127.0.0.1:9000;
SSL connections use ALPN "h2" when available. At least grpc-go works fine
without ALPN, so if ALPN is not available we just establish a connection
without it.
Tested with grpc-c++ and grpc-go.
The flag can be used to continue sending request body even after we've
got a response from the backend. In particular, this is needed for gRPC
proxying of bidirectional streaming RPCs, and also to send control frames
in other forms of RPCs.
The flag indicates whether the last ngx_output_chain() call returned
NGX_AGAIN or not. If the flag is set, we arm the u->conf->send_timeout timer.
The flag complements the c->write->ready test, and makes it possible to stop
sending the request body in an output filter due to protocol-specific flow
control.
Basic trailer headers support allows one to access response trailers
via the $upstream_trailer_* variables.
Additionally, the u->conf->pass_trailers flag was introduced. When the
flag is set, trailer headers from the upstream response are passed to
the client. Like normal headers, trailer headers will be hidden
if present in u->conf->hide_headers_hash.
When clock_gettime(CLOCK_MONOTONIC) (or faster variants, _FAST on FreeBSD,
and _COARSE on Linux) is available, we now use it for ngx_current_msec.
This should improve handling of timers if system time changes (ticket #189).
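The clock selection, roughly as implemented in ngx_times.c:

    static ngx_msec_t
    ngx_monotonic_time(time_t sec, ngx_uint_t msec)
    {
    #if (NGX_HAVE_CLOCK_MONOTONIC)
        struct timespec  ts;

    #if defined(CLOCK_MONOTONIC_FAST)
        (void) clock_gettime(CLOCK_MONOTONIC_FAST, &ts);
    #elif defined(CLOCK_MONOTONIC_COARSE)
        (void) clock_gettime(CLOCK_MONOTONIC_COARSE, &ts);
    #else
        (void) clock_gettime(CLOCK_MONOTONIC, &ts);
    #endif

        sec = ts.tv_sec;
        msec = ts.tv_nsec / 1000000;
    #endif

        return (ngx_msec_t) sec * 1000 + msec;
    }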
The r->out chain link could be left uninitialized in case of error.
A segfault could happen if the subrequest handler accessed it.
The issue was introduced in commit 20f139e9ffa8.
Previously, only the upstream response body could be accessed with the
NGX_HTTP_SUBREQUEST_IN_MEMORY feature. Now any response body from a subrequest
can be saved in a memory buffer. It is available as a single buffer in r->out
and the buffer size is configured by the subrequest_output_buffer_size
directive.
The upstream, proxy, and fastcgi code that was used to handle the old-style
feature is removed.
On some platforms (for example, Linux with glibc 2.12-2.25), IPv4 transparent
proxying is available, but IPv6 transparent proxying is not. The entire
feature is enabled in this case, and the NGX_HAVE_TRANSPARENT_PROXY macro is
set to 1. Previously, an attempt to enable transparency for an IPv6 socket
was silently ignored in this case and was usually followed by a bind(2)
EADDRNOTAVAIL error (ticket #1487). Now the error is generated for an
unavailable IPv6 transparent proxy.
If a memory allocation failed during parsing of the geo directive's
configuration, the pool used to parse the configuration inside the block,
and sometimes the temporary pool, were not destroyed.
There is no need to calculate hashes of static strings at runtime. The
ngx_hash() macro can be used to do it during compilation instead, similarly
to how it is done in ngx_http_proxy_module.c for "Server" and "Date" headers.
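ngx_hash() folds one character into a key, key * 31 + c, so a constant
expression can replace the runtime call; for example, for "server":

    /* equivalent to ngx_hash_key((u_char *) "server", 6) at runtime,
       but folded into a constant by the compiler */
    hash = ngx_hash(ngx_hash(ngx_hash(ngx_hash(
                        ngx_hash('s', 'e'), 'r'), 'v'), 'e'), 'r');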
In particular, if a stream object allocation failed, and a client sent
the PRIORITY frame for this stream, ngx_http_v2_set_dependency() could
dereference a null pointer while trying to re-parent a dependency node.
r->headers_in.host can be NULL in ngx_http_v2_push_resource().
This happens when a request is terminated with 400 before the :authority
or Host header is parsed, and either pushing is enabled on the server{}
level or error_page 400 redirects to a location with pushes configured.
Found by Coverity (CID 1429156).
Resources to be pushed are configured with the "http2_push" directive.
Also, preload links from the Link response headers, as described in
https://www.w3.org/TR/preload/#server-push-http-2, can be pushed, if
enabled with the "http2_push_preload" directive.
Only relative URIs with absolute paths can be pushed.
The number of concurrent pushes is normally limited by a client, but
cannot exceed a hard limit set by the "http2_max_concurrent_pushes"
directive.
Previously, when the request body was not available or was previously read
in memory rather than to a file, the client received an HTTP 500 error, but
no explanation was logged in the error log. This could happen, for example,
if the request body was read or discarded prior to an error_page redirect,
or if mirroring was enabled along with dav.
This fixes a segfault in configurations with multiple virtual servers sharing
the same port, where a non-default virtual server block lacks a certificate.
Following ad3f342f14ba046c (1.9.13), it is possible that a request where
the header was already sent will be finalized with NGX_HTTP_BAD_GATEWAY,
triggering an attempt to return additional error response and the
"header already sent" alert as a result.
In particular, it is trivial to reproduce the problem with a HEAD request
and caching enabled. With caching enabled nginx will change HEAD to GET
and will set u->pipe->downstream_error to suppress sending the response
body to the client. When a backend-related error occurs (for example,
proxy_read_timeout expires), ngx_http_finalize_upstream_request() will
be called with NGX_HTTP_BAD_GATEWAY. After ad3f342f14ba046c this will
result in ngx_http_finalize_request(NGX_HTTP_BAD_GATEWAY).
Fix is to move u->pipe->downstream_error handling to a later point,
where all special response codes are changed to NGX_ERROR.
Reported by Jan Prachar,
http://mailman.nginx.org/pipermail/nginx-devel/2018-January/010737.html.
Specifically, a token is now allowed to start with a variable expression in
braces: ${name}. The opening curly bracket in such a token was previously
considered the start of a new block. Variables located anywhere else in a
token worked fine: foo${name}.
Previously, capset(2) was called with the 64-bit capabilities version
_LINUX_CAPABILITY_VERSION_3. With this version the Linux kernel expected two
copies of struct __user_cap_data_struct, while only one was submitted. As a
result, random stack memory was accessed and random capabilities were requested
by the worker. This sometimes caused capset() errors. Now the 32-bit version
_LINUX_CAPABILITY_VERSION_1 is used instead. This is OK since CAP_NET_RAW is
a 32-bit capability (CAP_NET_RAW = 13).
The previously included file sys/capability.h, mentioned in the capset(2)
man page, belongs to the libcap-dev package, which may not be installed on
some Linux systems when compiling nginx. This prevented the capabilities
feature from being detected and compiled on such systems.
Now the linux/capability.h system header is included instead. Since the
capset() declaration is located in sys/capability.h, the capset() syscall
is now defined explicitly in code using the SYS_capset constant, similarly
to other Linux-specific features in nginx.
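A sketch of the resulting call (CAP_TO_MASK() comes from
linux/capability.h, SYS_capset from sys/syscall.h):

    struct __user_cap_data_struct    data;
    struct __user_cap_header_struct  header;

    ngx_memzero(&header, sizeof(struct __user_cap_header_struct));
    ngx_memzero(&data, sizeof(struct __user_cap_data_struct));

    header.version = _LINUX_CAPABILITY_VERSION_1;   /* 32-bit caps */
    data.effective = CAP_TO_MASK(CAP_NET_RAW);
    data.permitted = data.effective;

    if (syscall(SYS_capset, &header, &data) == -1) {
        ngx_log_error(NGX_LOG_EMERG, cycle->log, ngx_errno,
                      "capset() failed");
        exit(2);    /* fatal for the worker */
    }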
The capability is retained automatically in unprivileged worker processes after
changing UID if transparent proxying is enabled at least once in nginx
configuration.
The feature is only available on Linux.
If the space_in_uri flag is set, the URI in an HTTP upstream request is
escaped to convert spaces to %20. However, this flag is not checked while
creating the default cache key. This leads to different cache keys for the
requests '/foo bar' and '/foo%20bar', while the upstream requests are
identical.
Additionally, the change fixes background cache updates when the client URI
contains an unescaped space. The default cache key in a subrequest is always
based on the escaped URI, while the main request may not escape it. As a
result, the background cache update subrequest may update a different cache
entry.
Inheriting this flag will make the cloned subrequest behave consistently with
the parent. Specifically, the upstream HTTP request and cache key created by
the proxy module may depend directly on unparsed_uri if valid_unparsed_uri flag
is set. Previously, the flag was zero for cloned requests, which could make
the background update subrequest proxy a request different from its parent's
and cache the result with a different key. For example, if the client URI
contained the escaped slash character %2F, it was used as is by the proxy
module in the main request, but was unescaped in the subrequests.
Similar problems exist in the slice module.
Previously, the unparsed URI was explicitly allowed to be used only by the
main request. However, the valid_unparsed_uri flag is nonzero only in the
main request, which makes the main request check pointless.
If the data to write is bigger than what the socket can send, and the
remainder is smaller than NGX_SSL_BUFSIZE, then SSL_write() fails with
SSL_ERROR_WANT_WRITE. The remainder of the payload is nevertheless
successfully copied to the low-level buffer, and all the output chain
buffers are flushed. This means that the retry logic doesn't work because
ngx_http_upstream_process_non_buffered_request() checks only whether there
is anything in the output chain buffers, and ignores the fact that something
may be buffered in low-level parts of the stack.
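Conceptually, the completion test needs an extra condition (a sketch;
NGX_LOWLEVEL_BUFFERED masks the low-level bits of c->buffered, including
NGX_SSL_BUFFERED set by the SSL layer):

    /* output is only really finished when the chains are empty and
       nothing is held in low-level buffers */
    if (u->out_bufs == NULL
        && u->busy_bufs == NULL
        && !(downstream->buffered & NGX_LOWLEVEL_BUFFERED))
    {
        /* done sending */
    }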
Signed-off-by: Patryk Lesiewicz <patryk@google.com>
If a connection with the read delayed flag set was stored in the keepalive
cache, and after picking it from the cache a read timer was set on that
connection, this timer was considered a delay timer rather than a socket read
event timer as expected. The latter timeout is usually much longer than the
former, which caused a significant delay in request processing.
The issue manifested itself with proxy_limit_rate and upstream keepalive
enabled and exists since 973ee2276300 (1.7.7) when proxy_limit_rate was
introduced.
On some systems, it's possible that the reaper of orphaned processes is
set to something other than the "init" process. On such systems, the
procedure for changing the binary did not work.
The fix is to check whether the PPID has changed, instead of assuming it's
always 1 for orphaned processes.
The ngx_http_upstream_process_upgraded() function did not handle the
c->close flag, and upgraded connections do not use the write filter. As a
result,
worker_shutdown_timeout did not affect upgraded connections (ticket #1419).
Fix is to handle c->close in the ngx_http_request_handler() function, thus
covering most of the possible cases in http handling.
Additionally, mail proxying handled neither c->close nor c->error,
and thus worker_shutdown_timeout did not work for mail connections. Fix is
to add c->close handling to ngx_mail_proxy_handler().
Also, added explicit handling of c->close to stream proxy,
ngx_stream_proxy_process_connection(). This improves worker_shutdown_timeout
handling in stream: it will no longer wait for some data to be transferred
in a connection before closing it, and will also provide appropriate
logging at the "info" level.
A zlib variant from Intel, as available from https://github.com/jtkukunas/zlib,
uses a 64K hash instead of scaling it from the specified memory level, uses
16-byte padding in one of the window-sized memory buffers, and can force
window bits to 13 if the compression level is set to 1 and appropriate
compile options are used. As a result, nginx complained with "gzip filter
failed to use preallocated memory" alerts.
This change improves deflate_state allocation detection by testing that
items is 1 (deflate_state is the only allocation where items is 1).
Additionally, on first failure to use preallocated memory we now assume
that we are working with the Intel's modified zlib, and switch to using
appropriate preallocations. If this does not help, we complain with the
usual alerts.
Previous version of this patch was published at
http://mailman.nginx.org/pipermail/nginx/2014-July/044568.html.
The zlib variant in question is used by default in ClearLinux from Intel,
see http://mailman.nginx.org/pipermail/nginx-ru/2017-October/060421.html,
http://mailman.nginx.org/pipermail/nginx-ru/2017-November/060544.html.
Previously, nginx failed to move the buffer position when parsing an
incomplete record header, and due to this wasn't able to continue parsing
once the remaining bytes of the record header were received.
This can affect response header parsing, potentially generating spurious errors
like "upstream sent unexpected FastCGI request id high byte: 1 while reading
response header from upstream". While this is very unlikely, since usually
record headers are written in a single buffer, this still can happen in real
life, for example, if a record header is split across two TCP packets
and the second packet is delayed.
This does not affect non-buffered response body proxying, due to "buf->pos =
buf->last;" at the start of the ngx_http_fastcgi_non_buffered_filter()
function. Also this does not affect buffered response body proxying, as
each input buffer is only passed to the filter once.
This is what usually happens for zones no longer used in the new
configuration, but zones whose size or tag had changed were freed
while creating the new memory zones. If reconfiguration failed (for
example, due to a conflicting listening socket), this resulted in a
segmentation fault in the master process.
Reported by Zhihua Cao,
http://mailman.nginx.org/pipermail/nginx-devel/2017-October/010536.html.
In particular, if ngx_http_postpone_filter_add() fails in ngx_chain_add_copy(),
the output chain of the postponed request was left in an invalid state.
This header carries the definition of HMAC_Init_ex(). In OpenSSL this
header is included by <openssl/ssl.h>, but it's not so in BoringSSL.
It's probably a good idea to explicitly include this header anyway,
regardless of whether it's included by other headers or not.
Upgrading an upstream connection is usually followed by reading from the
client, which a subrequest is not allowed to do. Moreover, accessing the
header_in request field while processing an upgraded connection ends up with
a null pointer dereference, since the header_in buffer is only created for
the main request.
If proxy_next_upstream includes http_503/http_504, and the upstream
returns 503/504, $upstream_status converted this to 502 for all
values except the last one.
The NGX_DONE value returned from ngx_http_upstream_cache_send() indicates
that upstream was already finalized in ngx_http_upstream_process_headers().
It was treated as a generic error which resulted in duplicate finalization.
Handled NGX_HTTP_UPSTREAM_INVALID_HEADER from ngx_http_upstream_cache_send().
Previously, it could return within ngx_http_upstream_finalize_request(), and
since it's below NGX_HTTP_SPECIAL_RESPONSE, a client connection could get
stuck.
When parsing of headers in a cache file fails, already parsed headers
need to be cleared, and protocol state needs to be reinitialized. To do
so, u->request_sent is now set to ensure ngx_http_upstream_reinit() will
be called.
This change complements improvements in 46ddff109e72.
This slightly reduces the cost of selecting a peer if all or almost all
peers have failed, see ticket #1030. There should be no measurable difference
with other workloads.
While this may result in non-ideal distribution of requests if nginx
won't be able to select a server in a reasonable number of attempts,
this still looks better than severe performance degradation observed
if there is no limit and there are many points configured (ticket #1030).
This is also in line with what we do for other hash balancing methods.
Previously, unix sockets were treated as AF_INET ones, and this could
result in a buffer overread on Linux, where unbound unix sockets have
2-byte addresses.
Note that it is not correct to use just sun_path as a binary representation
for unix sockets. This will result in an empty string for unbound unix
sockets, and thus behaviour of limit_req and limit_conn will change when
switching from $remote_addr to $binary_remote_addr. As such, normal text
representation is used.
Reported by Stephan Dollberg.
At least FreeBSD, macOS, NetBSD, and OpenBSD can return unix sockets
with non-null-terminated sun_path. Additionally, the address may become
non-null-terminated if it does not fit into the buffer provided and was
truncated (may happen on macOS, NetBSD, and Solaris, which allow unix socket
addresses larger than struct sockaddr_un). As such, ngx_sock_ntop() might
overread the sockaddr provided, as it used "%s" format and thus assumed
null-terminated string.
To fix this, the ngx_strnlen() function was introduced, and it is now used
to calculate the correct length of sun_path.
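The function mirrors strnlen(3):

    size_t
    ngx_strnlen(u_char *p, size_t n)
    {
        size_t  i;

        for (i = 0; i < n; i++) {

            if (p[i] == '\0') {
                return i;
            }
        }

        return n;
    }

For unix sockets it is applied to sun_path with n derived from socklen,
so neither a missing null terminator nor a truncated address leads to an
overread.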
Some OSes (notably macOS, NetBSD, and Solaris) allow unix socket addresses
larger than struct sockaddr_un. Moreover, some of them (macOS, Solaris)
return socklen of the socket address before it was truncated to fit the
buffer provided. As such, on these systems socklen must not be used without
an additional check that it is within the buffer provided.
Appropriate checks added to ngx_event_accept() (after accept()),
ngx_event_recvmsg() (after recvmsg()), and ngx_set_inherited_sockets()
(after getsockname()).
We also obtain socket addresses via getsockname() in
ngx_connection_local_sockaddr(), but it does not need any checks as
it is only used for INET and INET6 sockets (as there can be no
wildcard unix sockets).
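The added check is a simple clamp (ngx_sockaddr_t is the buffer actually
provided):

    if (socklen > (socklen_t) sizeof(ngx_sockaddr_t)) {
        socklen = sizeof(ngx_sockaddr_t);
    }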
The sync flag of HTTP/2 request body buffer is used when the size of request
body is unknown or bigger than configured "client_body_buffer_size". In this
case the buffer points to body data inside the global receive buffer that is
used for reading all HTTP/2 connections in the worker process. Thus, when the
sync flag is set, the buffer must be flushed to a temporary file, otherwise
the request body data can be overwritten.
Previously, the sync buffer wasn't flushed to a temporary file if the whole
body was received in one DATA frame with the END_STREAM flag and wasn't
copied into the HTTP/2 body preread buffer. As a result, the request body
might be corrupted (ticket #1384).
Now, setting r->request_body_in_file_only enforces writing the sync buffer
to a temporary file in all cases.
When caching intercepted errors, the previous behaviour was to use the
proxy_cache_valid times specified, regardless of various cache control
headers present in the response. Fix is to check u->cacheable and
use u->cache->valid_sec as set by various cache control response headers,
similar to how we do this in the normal caching code path.
If cache file is truncated, it is possible that u->process_header()
will return NGX_AGAIN. Added appropriate handling of this case by
changing the error to NGX_HTTP_UPSTREAM_INVALID_HEADER.
Also, added appropriate logging of this and NGX_HTTP_UPSTREAM_INVALID_HEADER
cases at the "crit" level. Note that this will result in duplicate logging
in case of NGX_HTTP_UPSTREAM_INVALID_HEADER. While this is something better
avoided, implementing cache-specific error logging in u->process_header()
is considered to be overkill.
Additionally, u->buffer.start is now reset to be able to receive a new
response, and u->cache_status is set to MISS to provide the value in the
$upstream_cache_status variable, much like it happens on other cache file
errors detected by ngx_http_file_cache_read(), instead of HIT, which is
believed to be misleading.