Following ad3f342f14ba046c (1.9.13), it is possible that a request whose
header was already sent will be finalized with NGX_HTTP_BAD_GATEWAY,
triggering an attempt to return an additional error response and the
"header already sent" alert as a result.
In particular, it is trivial to reproduce the problem with a HEAD request
and caching enabled. With caching enabled, nginx will change HEAD to GET
and will set u->pipe->downstream_error to suppress sending the response
body to the client. When a backend-related error occurs (for example,
proxy_read_timeout expires), ngx_http_upstream_finalize_request() will
be called with NGX_HTTP_BAD_GATEWAY. After ad3f342f14ba046c this will
result in ngx_http_finalize_request(NGX_HTTP_BAD_GATEWAY).
Fix is to move u->pipe->downstream_error handling to a later point,
where all special response codes are changed to NGX_ERROR.
Reported by Jan Prachar,
http://mailman.nginx.org/pipermail/nginx-devel/2018-January/010737.html.
Specifically, a token is now allowed to start with a variable expression in
braces: ${name}. The opening curly bracket in such a token was previously
considered the start of a new block. Variables located anywhere else in a
token worked fine: foo${name}.
Previously, capset(2) was called with the 64-bit capabilities version
_LINUX_CAPABILITY_VERSION_3. With this version the Linux kernel expected two
copies of struct __user_cap_data_struct, while only one was submitted. As a
result, random stack memory was accessed and random capabilities were
requested by the worker. This sometimes caused capset() errors. Now the
32-bit version _LINUX_CAPABILITY_VERSION_1 is used instead. This is OK since
CAP_NET_RAW fits into the first 32 capability bits (CAP_NET_RAW = 13).
The previously included sys/capability.h file, mentioned in the capset(2)
man page, belongs to the libcap-dev package, which may not be installed on
some Linux systems when compiling nginx. This prevented the capabilities
feature from being detected and compiled on such systems.
Now the linux/capability.h system header is included instead. Since the
capset() declaration is located in sys/capability.h, the capset() syscall is
now defined explicitly in code using the SYS_capset constant, similarly to
other Linux-specific features in nginx.
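For illustration only (a standalone sketch, not the nginx source), calling
capset() through syscall(2) needs nothing but the <linux/capability.h>
kernel header; with the 32-bit _LINUX_CAPABILITY_VERSION_1 API a single
struct __user_cap_data_struct is enough, since CAP_NET_RAW fits into the
first 32 capability bits:

    #include <linux/capability.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    int
    main(void)
    {
        struct __user_cap_header_struct  header;
        struct __user_cap_data_struct    data;    /* one struct: 32-bit API */

        header.version = _LINUX_CAPABILITY_VERSION_1;
        header.pid = 0;                           /* current process */

        data.effective = CAP_TO_MASK(CAP_NET_RAW);
        data.permitted = data.effective;
        data.inheritable = 0;

        /* capset() itself is declared in libcap's <sys/capability.h>;
         * calling the raw syscall avoids that dependency */
        return syscall(SYS_capset, &header, &data) == -1;
    }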
The capability is retained automatically in unprivileged worker processes
after changing the UID if transparent proxying is enabled at least once in
the nginx configuration.
The feature is only available on Linux.
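As a sketch of the general mechanism (the usual Linux keep-caps pattern,
simplified, and not a verbatim copy of the nginx code): PR_SET_KEEPCAPS
preserves the permitted set across the UID change, after which capset()
re-raises CAP_NET_RAW in the effective set:

    #include <linux/capability.h>
    #include <sys/prctl.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    static int
    change_user_keep_net_raw(uid_t uid, gid_t gid)
    {
        struct __user_cap_header_struct  header;
        struct __user_cap_data_struct    data;

        /* keep the permitted set across setuid() */
        if (prctl(PR_SET_KEEPCAPS, 1, 0, 0, 0) == -1) {
            return -1;
        }

        if (setgid(gid) == -1 || setuid(uid) == -1) {
            return -1;
        }

        header.version = _LINUX_CAPABILITY_VERSION_1;
        header.pid = 0;

        data.effective = CAP_TO_MASK(CAP_NET_RAW);
        data.permitted = data.effective;
        data.inheritable = 0;

        /* re-raise CAP_NET_RAW in the effective set */
        return (int) syscall(SYS_capset, &header, &data);
    }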
In 2c7b488a61fb, the IP_BIND_ADDRESS_NO_PORT test was accidentally placed
between the SO_BINDANY, IP_TRANSPARENT, and IP_BINDANY tests. Moved it after
these tests.
If the space_in_uri flag is set, the URI in the HTTP upstream request is
escaped to convert spaces to %20. However, this flag is not checked while
creating the default cache key. This leads to different cache keys for the
requests '/foo bar' and '/foo%20bar', while the upstream requests are
identical.
Additionally, the change fixes background cache updates when the client URI
contains an unescaped space. The default cache key in a subrequest is always
based on the escaped URI, while the main request may not escape it. As a
result, a background cache update subrequest may update a different cache
entry.
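A simplified standalone sketch (not the actual nginx escaping code) of why
the two requests should share one key: once spaces are percent-encoded
before the key is built, "/foo bar" and "/foo%20bar" collapse into the same
string:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* copy src, replacing each space with "%20" */
    static char *
    escape_spaces(const char *src)
    {
        size_t  i, len = 0;
        char   *dst, *p;

        for (i = 0; src[i]; i++) {
            len += (src[i] == ' ') ? 3 : 1;
        }

        dst = p = malloc(len + 1);
        if (dst == NULL) {
            return NULL;
        }

        for (i = 0; src[i]; i++) {
            if (src[i] == ' ') {
                memcpy(p, "%20", 3);
                p += 3;

            } else {
                *p++ = src[i];
            }
        }

        *p = '\0';
        return dst;
    }

    int
    main(void)
    {
        char  *k1 = escape_spaces("/foo bar");
        char  *k2 = escape_spaces("/foo%20bar");

        if (k1 && k2) {
            /* both lines print "/foo%20bar" */
            printf("%s\n%s\n", k1, k2);
        }

        free(k1);
        free(k2);
        return 0;
    }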
Inheriting this flag will make the cloned subrequest behave consistently with
the parent. Specifically, the upstream HTTP request and cache key created by
the proxy module may depend directly on unparsed_uri if the valid_unparsed_uri
flag is set. Previously, the flag was zero for cloned requests, which could
make a background update proxy a request different from its parent and cache
the result with a different key. For example, if the client URI contained the
escaped slash character %2F, it was used as is by the proxy module in the
main request, but was unescaped in the subrequests.
Similar problems exist in the slice module.
Previously, the unparsed URI was explicitly allowed to be used only by the
main request. However, the valid_unparsed_uri flag is nonzero only in the
main request, which makes the main request check pointless.
If the data to write is bigger than what the socket can send, and the
remainder is smaller than NGX_SSL_BUFSIZE, then SSL_write() fails with
SSL_ERROR_WANT_WRITE. The remainder of the payload, however, is successfully
copied to the low-level buffer, and all the output chain buffers are
flushed. This means that the retry logic doesn't work, because
ngx_http_upstream_process_non_buffered_request() checks only whether there's
anything in the output chain buffers and ignores the fact that something
may be buffered at lower levels of the stack.
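The retry pattern in question looks roughly like the sketch below
(illustrative only, not the nginx upstream code): on SSL_ERROR_WANT_WRITE
the caller must remember that the data is still pending, even though lower
layers may already hold a copy of it, and repeat the same SSL_write() call
once the socket becomes writable:

    #include <sys/types.h>
    #include <openssl/ssl.h>

    /* returns bytes consumed, 0 if the same write must be retried later,
     * or -1 on a fatal error */
    static ssize_t
    ssl_send(SSL *ssl, const unsigned char *buf, size_t len)
    {
        int  n;

        n = SSL_write(ssl, buf, (int) len);

        if (n > 0) {
            return n;
        }

        switch (SSL_get_error(ssl, n)) {

        case SSL_ERROR_WANT_WRITE:
        case SSL_ERROR_WANT_READ:
            /* nothing counted as sent yet: keep buf/len pending and
             * call again with the same arguments */
            return 0;

        default:
            return -1;
        }
    }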
Signed-off-by: Patryk Lesiewicz <patryk@google.com>
If a connection with the read delayed flag set was stored in the keepalive
cache, and after picking it from the cache a read timer was set on that
connection, this timer was considered a delay timer rather than a socket read
event timer as expected. The latter timeout is usually much longer than the
former, which caused a significant delay in request processing.
The issue manifested itself with proxy_limit_rate and upstream keepalive
enabled and has existed since 973ee2276300 (1.7.7), when proxy_limit_rate was
introduced.
On some systems, it's possible that the reaper of orphaned processes is
set to something other than the "init" process. On such systems, the
changing binary procedure did not work.
The fix is to check if the PPID has changed, instead of assuming it's
always 1 for orphaned processes.
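A minimal sketch of that check (illustrative, with hypothetical names):
the parent PID is saved at startup, and the process is treated as orphaned
when getppid() no longer returns that value, rather than when it returns 1:

    #include <unistd.h>

    static pid_t  saved_ppid;

    void
    save_parent_pid(void)
    {
        saved_ppid = getppid();
    }

    int
    parent_changed(void)
    {
        /* re-parented to the reaper, whatever its PID is */
        return getppid() != saved_ppid;
    }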
As per POSIX, basic regular expressions have no alternations, and the
interpretation of the "\|" construct is undefined. At least on MINIX
and Solaris, grep interprets "\|" as a literal "|", and not as an
alternation as GNU grep does. Removed such constructs introduced in
f1daa0356a1d. This fixes clang detection on MINIX.
The ngx_http_upstream_process_upgraded() function did not handle the c->close request,
and upgraded connections do not use the write filter. As a result,
worker_shutdown_timeout did not affect upgraded connections (ticket #1419).
Fix is to handle c->close in the ngx_http_request_handler() function, thus
covering most of the possible cases in http handling.
Additionally, mail proxying handled neither c->close nor c->error,
and thus worker_shutdown_timeout did not work for mail connections. Fix is
to add c->close handling to ngx_mail_proxy_handler().
Also, added explicit handling of c->close to stream proxy,
ngx_stream_proxy_process_connection(). This improves worker_shutdown_timeout
handling in stream: it will no longer wait for some data to be transferred
in a connection before closing it, and will also provide appropriate
logging at the "info" level.
A zlib variant from Intel, as available from https://github.com/jtkukunas/zlib,
uses a 64K hash instead of scaling it from the specified memory level, uses
16-byte padding in one of the window-sized memory buffers, and can force
window bits to 13 if the compression level is set to 1 and appropriate
compile options are used. As a result, nginx complained with "gzip filter
failed to use preallocated memory" alerts.
This change improves deflate_state allocation detection by testing that
items is 1 (deflate_state is the only allocation where items is 1).
Additionally, on the first failure to use preallocated memory we now assume
that we are working with Intel's modified zlib, and switch to using
appropriate preallocations. If this does not help, we complain with the
usual alerts.
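The detection can be illustrated with a standalone allocator hook (not the
nginx gzip filter itself): zlib performs its internal allocations through
z_stream.zalloc, and the deflate_state structure is the only request made
with items equal to 1:

    #include <stdio.h>
    #include <stdlib.h>
    #include <zlib.h>

    static voidpf
    trace_alloc(voidpf opaque, uInt items, uInt size)
    {
        (void) opaque;

        if (items == 1) {
            printf("deflate_state allocation: %u bytes\n", size);

        } else {
            printf("other allocation: %u x %u bytes\n", items, size);
        }

        return calloc(items, size);
    }

    static void
    trace_free(voidpf opaque, voidpf address)
    {
        (void) opaque;
        free(address);
    }

    int
    main(void)
    {
        z_stream  strm;

        strm.zalloc = trace_alloc;
        strm.zfree = trace_free;
        strm.opaque = NULL;

        /* level 1, 32k window, memory level 8 */
        if (deflateInit2(&strm, 1, Z_DEFLATED, 15, 8, Z_DEFAULT_STRATEGY)
            == Z_OK)
        {
            deflateEnd(&strm);
        }

        return 0;
    }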
Previous version of this patch was published at
http://mailman.nginx.org/pipermail/nginx/2014-July/044568.html.
The zlib variant in question is used by default in ClearLinux from Intel,
see http://mailman.nginx.org/pipermail/nginx-ru/2017-October/060421.html,
http://mailman.nginx.org/pipermail/nginx-ru/2017-November/060544.html.
Previously, nginx failed to move the buffer position when parsing an
incomplete record header, and due to this wasn't able to continue parsing
once the remaining bytes of the record header were received.
This can affect response header parsing, potentially generating spurious
errors like "upstream sent unexpected FastCGI request id high byte: 1 while
reading response header from upstream". While this is very unlikely, since
usually record headers are written in a single buffer, it still can happen
in real life, for example, if a record header is split across two TCP
packets and the second packet is delayed.
This does not affect non-buffered response body proxying, due to "buf->pos =
buf->last;" at the start of the ngx_http_fastcgi_non_buffered_filter()
function. Also this does not affect buffered response body proxying, as
each input buffer is only passed to the filter once.
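As a simplified sketch of the fixed approach (not the actual nginx FastCGI
parser): the 8 bytes of a record header are accumulated across reads, and
the buffer position is advanced for every byte consumed, so that parsing
resumes when the rest of the header arrives in a later buffer:

    #include <stddef.h>
    #include <stdint.h>

    #define FCGI_HEADER_LEN  8

    typedef struct {
        uint8_t  header[FCGI_HEADER_LEN];
        size_t   have;                    /* header bytes collected so far */
    } fcgi_parse_state;

    /* returns 1 when a complete header has been collected, 0 if more data
     * is needed; *pos is always moved past the consumed bytes */
    static int
    fcgi_read_header(fcgi_parse_state *st, const uint8_t **pos,
        const uint8_t *last)
    {
        const uint8_t  *p = *pos;

        while (p < last && st->have < FCGI_HEADER_LEN) {
            st->header[st->have++] = *p++;
        }

        *pos = p;                         /* the crucial part: keep position */

        return st->have == FCGI_HEADER_LEN;
    }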
This is what usually happens for zones no longer used in the new
configuration, but zones whose size or tag was changed were freed
when creating new memory zones. If reconfiguration failed (for
example, due to a conflicting listening socket), this resulted in a
segmentation fault in the master process.
Reported by Zhihua Cao,
http://mailman.nginx.org/pipermail/nginx-devel/2017-October/010536.html.
In particular, if ngx_http_postpone_filter_add() failed in
ngx_chain_add_copy(), the output chain of the postponed request was left in
an invalid state.
This header carries the declaration of HMAC_Init_ex(). In OpenSSL this
header is included by <openssl/ssl.h>, but it's not so in BoringSSL.
It's probably a good idea to explicitly include this header anyway,
regardless of whether it's included by other headers.
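For example (a hedged sketch using the HMAC_CTX_new() API available in
OpenSSL 1.1.0+ and BoringSSL; this is not code from nginx), a caller of
HMAC_Init_ex() would include the header explicitly:

    #include <stddef.h>

    #include <openssl/evp.h>
    #include <openssl/hmac.h>    /* explicit: declares HMAC_Init_ex() */

    int
    hmac_sha256(const void *key, int key_len, const unsigned char *data,
        size_t len, unsigned char *md, unsigned int *md_len)
    {
        HMAC_CTX  *ctx;
        int        rc;

        ctx = HMAC_CTX_new();
        if (ctx == NULL) {
            return 0;
        }

        rc = HMAC_Init_ex(ctx, key, key_len, EVP_sha256(), NULL)
             && HMAC_Update(ctx, data, len)
             && HMAC_Final(ctx, md, md_len);

        HMAC_CTX_free(ctx);
        return rc;
    }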
Upgrading an upstream connection is usually followed by reading from the
client, which a subrequest is not allowed to do. Moreover, accessing the
header_in request field while processing an upgraded connection ends up in
a null pointer dereference, since the header_in buffer is only created for
the main request.
If proxy_next_upstream includes http_503/http_504, and the upstream
returns 503/504, $upstream_status converted this to 502 for all
values except the last one.
The NGX_DONE value returned from ngx_http_upstream_cache_send() indicates
that the upstream was already finalized in ngx_http_upstream_process_headers().
It was treated as a generic error, which resulted in duplicate finalization.
Handled NGX_HTTP_UPSTREAM_INVALID_HEADER from ngx_http_upstream_cache_send().
Previously, it could end up in ngx_http_upstream_finalize_request(), and
since it's below NGX_HTTP_SPECIAL_RESPONSE, a client connection could get
stuck.
When parsing of headers in a cache file fails, already parsed headers
need to be cleared, and protocol state needs to be reinitialized. To do
so, u->request_sent is now set to ensure ngx_http_upstream_reinit() will
be called.
This change complements improvements in 46ddff109e72.
This slightly reduces the cost of selecting a peer if all or almost all
peers have failed, see ticket #1030. There should be no measurable
difference with other workloads.