The sync flag of the HTTP/2 request body buffer is used when the size of the
request body is unknown or bigger than the configured
"client_body_buffer_size". In this
case the buffer points to body data inside the global receive buffer that is
used for reading all HTTP/2 connections in the worker process. Thus, when the
sync flag is set, the buffer must be flushed to a temporary file, otherwise
the request body data can be overwritten.
Previously, the sync buffer wasn't flushed to a temporary file if the whole
body was received in one DATA frame with the END_STREAM flag and wasn't
copied into the HTTP/2 body preread buffer. As a result, the request body
might be corrupted (ticket #1384).
Now, setting r->request_body_in_file_only enforces writing the sync buffer
to a temporary file in all cases.
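The idea can be sketched roughly as follows (the placement is illustrative,
not the actual patch):

    /* when the body size is unknown or exceeds client_body_buffer_size: */

    rb->buf = ngx_calloc_buf(r->pool);
    if (rb->buf == NULL) {
        return NGX_HTTP_INTERNAL_SERVER_ERROR;
    }

    /* the buffer only wraps data sitting in the shared h2 recv buffer */
    rb->buf->sync = 1;

    /* assumption: force the body into a temporary file so a later DATA
     * frame read into the shared buffer cannot overwrite this data */
    r->request_body_in_file_only = 1;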
When caching intercepted errors, the previous behaviour was to use the
proxy_cache_valid times specified, regardless of various cache control
headers present in the response. The fix is to check u->cacheable and
use u->cache->valid_sec as set by various cache control response headers,
similar to how this is done in the normal caching code path.
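A minimal sketch of the intended logic (identifiers are approximate and the
placement is illustrative, not the actual patch): honour the validity already
derived from the response cache control headers, and fall back to the
proxy_cache_valid times only when none was set:

    if (u->cacheable) {
        valid = r->cache->valid_sec;

        if (valid == 0) {
            /* no cache control headers: fall back to proxy_cache_valid */
            valid = ngx_http_file_cache_valid(u->conf->cache_valid, status);

            if (valid) {
                r->cache->valid_sec = ngx_time() + valid;
            }
        }
    }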
If a cache file is truncated, it is possible that u->process_header()
will return NGX_AGAIN. Added appropriate handling of this case by
changing the error to NGX_HTTP_UPSTREAM_INVALID_HEADER.
Also, added appropriate logging of this and NGX_HTTP_UPSTREAM_INVALID_HEADER
cases at the "crit" level. Note that this will result in duplicate logging
in case of NGX_HTTP_UPSTREAM_INVALID_HEADER. While this would be better
avoided, implementing cache-specific error logging in u->process_header()
is considered overkill.
Additionally, u->buffer.start is now reset to be able to receive a new
response, and u->cache_status is set to MISS to provide the value in the
$upstream_cache_status variable, much as happens on other cache file
errors detected by ngx_http_file_cache_read(), instead of HIT, which is
believed to be misleading.
It is to be used as a bitmask with various bits set/reset when appropriate.
63b8b157b776 made a similar change to ngx_http_upstream_rr_peer_t.down and
ngx_stream_upstream_rr_peer_t.down.
Previously, "get indexed header" message was logged when in fact only
header name was obtained using an index, and "get indexed header name"
was logged when full header representation (name and value) was obtained
using an index. Fixed version logs "get indexed name" and "get indexed
header" respectively.
Previously, when the first UDP response packet was not received from the
proxied server within proxy_timeout, no error message was logged before
switching to the next upstream. Additionally, when one of the subsequent
response packets was not received within the timeout, the error had low
severity because it was logged as a client connection error, as opposed to
an upstream connection error.
Various buffers are allocated on the assumption that there would be
no more than 4 year digits. This might not be true on platforms
with 64-bit time_t, as 64-bit time_t is able to represent more than that.
Such dates with more than 4 year digits hardly make sense though, as
various date formats in use do not allow them anyway.
As such, all dates are now truncated by ngx_gmtime() to December 31, 9999.
This should have no effect on valid dates, though will prevent potential
buffer overflows on invalid ones.
In ngx_gmtime(), instead of casting to ngx_uint_t we now work with
time_t directly. This allows using dates after 2038 on 32-bit platforms
which use 64-bit time_t, notably NetBSD and OpenBSD.
As the code is not able to work with negative time_t values, the argument
is now set to 0 for negative values. As a positive side effect, this
results in the Epoch being used for such values instead of a date in the
distant future.
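Both clamps can be illustrated with a small standalone program; the cutoff
used here (2932896 days, i.e. 9999-12-31 23:59:59) is this sketch's own
arithmetic rather than a constant quoted from the patch, and a 64-bit time_t
is assumed for the test value:

    #include <stdio.h>
    #include <time.h>

    static time_t
    clamp(time_t t)
    {
        if (t < 0) {
            return 0;                                 /* negative: the Epoch */
        }

        if (t / 86400 > 2932896) {
            return (time_t) 2932896 * 86400 + 86399;  /* 9999-12-31 23:59:59 */
        }

        return t;
    }

    int
    main(void)
    {
        time_t      t = clamp((time_t) 1 << 40);      /* far beyond year 9999 */
        struct tm  *tm = gmtime(&t);

        printf("%04d-%02d-%02d %02d:%02d:%02d\n",
               tm->tm_year + 1900, tm->tm_mon + 1, tm->tm_mday,
               tm->tm_hour, tm->tm_min, tm->tm_sec);

        return 0;
    }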
This change lets NGINX talk to clients with a SETTINGS_HEADER_TABLE_SIZE
smaller than the default 4KB. Previously, NGINX would ACK the SETTINGS
frame with a small dynamic table size, but it would never send a dynamic
table size update, leading to a connection-level COMPRESSION_ERROR.
Also, it allows clients to release 4KB of memory per connection: NGINX
doesn't use HPACK's dynamic table when encoding headers, but clients still
had to maintain it, since NGINX never signaled that it wasn't being used.
Signed-off-by: Piotr Sikora <piotrsikora@google.com>
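For reference, an HPACK dynamic table size update is a single instruction
with the 0b001 prefix and the new size as a 5-bit prefix integer (RFC 7541,
section 6.3). The standalone sketch below encodes one; announcing a size of 0
is an illustrative choice consistent with never using the dynamic table, not
a claim about the exact value nginx sends:

    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>

    /* encode a "Dynamic Table Size Update" instruction into dst */
    static size_t
    hpack_table_size_update(uint8_t *dst, size_t size)
    {
        size_t  n = 0;

        if (size < 31) {                      /* fits into the 5-bit prefix */
            dst[n++] = 0x20 | (uint8_t) size;
            return n;
        }

        dst[n++] = 0x20 | 0x1f;               /* prefix saturated */
        size -= 31;

        while (size >= 128) {
            dst[n++] = (uint8_t) (size % 128 + 128);
            size /= 128;
        }

        dst[n++] = (uint8_t) size;

        return n;
    }

    int
    main(void)
    {
        uint8_t  buf[8];
        size_t   n = hpack_table_size_update(buf, 0);

        printf("%zu byte(s), first byte 0x%02x\n", n, buf[0]);  /* 1, 0x20 */
        return 0;
    }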
When switching to a next upstream, some buffers could be stuck in the middle
of the filter chain. A condition existed that raised an error when this
happened. As it turned out, this condition prevented switching to a next
upstream if ssl preread was used with the TCP protocol (see the ticket).
In fact, the condition does not make sense for TCP, since after a successful
connection to an upstream, switching to another upstream never happens. As
for UDP, the issue with stuck buffers is unlikely to happen, but is still
possible, specifically if a filter delays sending data to the upstream.
The condition can be relaxed to only check the "buffered" bitmask of the
upstream connection. The new condition is simpler and fixes the ticket issue
as well. Additionally, the upstream_out chain is now reset for UDP prior to
connecting to a new upstream to avoid sending the client data twice.
When the secure link checksum has a length of 23 or 24 bytes, the decoded
base64 value can occupy 17 or 18 bytes, which is more than the 16 bytes
previously allocated for it on the stack. The buffer overflow does not have
any security implications, since only one local variable was corrupted and
this variable was not used in this case.
The fix is to increase the buffer size to 18 bytes. The useless buffer size
initialization is removed as well.
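The arithmetic behind the new size can be checked standalone; the macro
below mirrors ngx_base64_decoded_length() from ngx_string.h:

    #include <stdio.h>

    /* upper bound on the decoded size of a base64 string of length "len" */
    #define base64_decoded_length(len)  (((len) + 3) / 4 * 3)

    int
    main(void)
    {
        /* 23- and 24-byte checksums both need up to 18 decoded bytes,
         * which did not fit into the previous 16-byte stack buffer */
        printf("%d %d\n",
               base64_decoded_length(23),
               base64_decoded_length(24));

        return 0;
    }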
This fixes at least the following cases, where the absence of
last_modified_time (assuming caching is not enabled) resulted in incorrect
behaviour:
- slice filter and If-Range requests (ticket #1357);
- If-Range requests with proxy_force_ranges;
- expires modified.
The $ssl_server_name variable used SSL_get_servername() result directly,
but this is not safe: it references a memory allocation in an SSL
session, and this memory might be freed at any time due to renegotiation.
Instead, copy the name to memory allocated from the pool.
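A sketch of the safer approach (not the exact patch; the surrounding
variable-handler code is omitted and the pool used is illustrative):

    name = SSL_get_servername(c->ssl->connection, TLSEXT_NAMETYPE_host_name);

    if (name == NULL) {
        return NGX_OK;                     /* the variable stays empty */
    }

    len = ngx_strlen(name);

    s->data = ngx_pnalloc(c->pool, len);
    if (s->data == NULL) {
        return NGX_ERROR;
    }

    ngx_memcpy(s->data, name, len);        /* copy survives renegotiation */
    s->len = len;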
This variable contains the URL-encoded client SSL certificate. In contrast
to $ssl_client_cert, it doesn't depend on deprecated header continuation.
The NGX_ESCAPE_URI_COMPONENT variant of encoding is used, so the resulting
variable can be safely used not only in headers, but also as a request
argument.
The $ssl_client_cert variable should be considered deprecated now.
The $ssl_client_raw_cert variable will eventually be renamed back
to $ssl_client_cert.
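Roughly how the encoding can be done (identifiers are illustrative;
ngx_escape_uri() called with a NULL destination only counts the characters
that need escaping):

    n = ngx_escape_uri(NULL, cert.data, cert.len, NGX_ESCAPE_URI_COMPONENT);

    len = cert.len + 2 * n;                /* each escaped byte becomes %XX */

    p = ngx_pnalloc(r->pool, len);
    if (p == NULL) {
        return NGX_ERROR;
    }

    if (n == 0) {
        ngx_memcpy(p, cert.data, cert.len);
    } else {
        ngx_escape_uri(p, cert.data, cert.len, NGX_ESCAPE_URI_COMPONENT);
    }

    v->data = p;
    v->len = len;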
The total length of a response with multiple ranges can be larger than a
size_t variable can hold, so the type is changed to off_t. Previously, an
incorrect
Content-Length was returned when requesting more than 4G of ranges from
a large enough file on a 32-bit system.
An additional size_t variable is introduced to calculate the size of the
boundary header buffer, as off_t is not needed there and would require type
casts on win32.
Reported by Shuxin Yang,
http://mailman.nginx.org/pipermail/nginx/2017-July/054384.html.
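A standalone illustration of the overflow; uint32_t and int64_t stand in for
a 32-bit size_t and off_t respectively, and the range sizes are made up:

    #include <stdint.h>
    #include <stdio.h>

    int
    main(void)
    {
        uint32_t  len32 = 0;                 /* 32-bit size_t stand-in */
        int64_t   len64 = 0;                 /* off_t stand-in */
        int64_t   range = 3221225472;        /* one 3G range */
        int       i;

        for (i = 0; i < 2; i++) {            /* two such ranges requested */
            len32 += (uint32_t) range;
            len64 += range;
        }

        printf("size_t-like total: %u\n", len32);               /* wrapped */
        printf("off_t-like total:  %lld\n", (long long) len64); /* correct */

        return 0;
    }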
The "fd" field should be after 3 pointers for ngx_event_ident() to use it.
This was broken by ccad84a174e0. While it does not seem to be currently used
for aio-related events, it seems a good idea to preserve the correct
layout nevertheless.
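For reference, ngx_event_ident() reads the descriptor through an
ngx_connection_t-shaped cast, which is why the layout matters (abridged from
ngx_event.h and ngx_connection.h):

    #define ngx_event_ident(p)  ((ngx_connection_t *) (p))->fd

    struct ngx_connection_s {
        void               *data;
        ngx_event_t        *read;
        ngx_event_t        *write;

        ngx_socket_t        fd;
        ...
    };

An ngx_event_aio_t used as the event's data must therefore also keep its
"fd" after three pointer-sized members for the cast to land on the right
offset.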
Pass NGX_FILE_OPEN to ngx_open_file() to fix "The parameter is incorrect"
error on win32 when using the ssl_session_ticket_key directive or loading
a binary geo base. On UNIX, this change is a no-op.
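A sketch of the corrected call (the surrounding error handling is
illustrative); on win32, ngx_open_file() passes its third argument to
CreateFile() as the creation disposition, so opening an existing file needs
NGX_FILE_OPEN rather than 0:

    file.fd = ngx_open_file(file.name.data, NGX_FILE_RDONLY, NGX_FILE_OPEN, 0);

    if (file.fd == NGX_INVALID_FILE) {
        ngx_log_error(NGX_LOG_EMERG, log, ngx_errno,
                      ngx_open_file_n " \"%s\" failed", file.name.data);
        return NGX_ERROR;
    }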
On Windows, a worker process does not call ngx_slab_init() from
ngx_init_zone_pool(), so ngx_slab_max_size, ngx_slab_exact_size,
and ngx_slab_exact_shift were left uninitialized.
The variable was considered non-existent in the absence of any
valid_referers directives.
Given the following config snippet:
    location / {
        return 200 $invalid_referer;
    }
    location /referer {
        valid_referers server_names;
    }
"location /" should work identically and independently of
"location /referer".
The fix is to always add the $invalid_referer variable as long
as the module is compiled in, as is done by other modules.
The shared objects should generally be allocated from shared memory.
While allocating peers->name and the data it points to from cf->pool
happened to work on UNIX, it broke on Windows. On UNIX this worked
only because the shared memory zone for upstreams is re-created for
every new configuration.
But on Windows, a worker process does not inherit the address space
of the master process, so peers->name pointed to data allocated
from cf->pool by the master process and was invalid.
The phase is added instead of the try_files phase. Unlike the old phase, the
new one supports registering multiple handlers. The try_files implementation is
moved to a separate ngx_http_try_files_module, which now registers a precontent
phase handler.
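A typical registration of a precontent phase handler from a module's
postconfiguration callback looks like this (the ngx_http_foo_* names are
placeholders):

    static ngx_int_t
    ngx_http_foo_init(ngx_conf_t *cf)
    {
        ngx_http_handler_pt        *h;
        ngx_http_core_main_conf_t  *cmcf;

        cmcf = ngx_http_conf_get_module_main_conf(cf, ngx_http_core_module);

        h = ngx_array_push(&cmcf->phases[NGX_HTTP_PRECONTENT_PHASE].handlers);
        if (h == NULL) {
            return NGX_ERROR;
        }

        *h = ngx_http_foo_handler;

        return NGX_OK;
    }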
The new request flag "preserve_body" indicates that the request body file should
not be removed by the upstream module because it may be used later by a
subrequest. The flag is set by the SSI (ticket #585), addition and slice
modules. It is also set by the upstream module when a background cache
update subrequest is started, to prevent removal of the request body file
after an internal redirect. Only the main request is now allowed to remove the
file.
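A sketch of how a module that starts such a subrequest uses the flag
(placement and names are illustrative):

    r->preserve_body = 1;          /* keep the body file for the subrequest */

    if (ngx_http_subrequest(r, &uri, &args, &sr, NULL, 0) != NGX_OK) {
        return NGX_ERROR;
    }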