The problem occurred if the first URI in try_files was shorter than the request URI, resulting in reserve being 0 and hence the allocation being skipped. The bug was introduced in r4584 (1.1.19).
Instead of checking for the presence of the events{} section in the configuration in the init_module handler, we now do the same in the init_conf handler. This allows the master process to detect an incorrect configuration early and reject it.
We now stop at IOV_MAX iovec entries only if we are going to add a new one, i.e. the next buffer can't be coalesced into the last iovec.
This also fixes incorrect checks for trailer creation on FreeBSD and Mac OS X: header.nelts was checked instead of trailer.nelts.
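For illustration, a minimal sketch of this coalescing rule (hypothetical buf_t type and add_buf() helper, not nginx's actual code):

    #include <limits.h>
    #include <sys/uio.h>

    typedef struct {
        unsigned char  *pos;    /* first byte of data */
        unsigned char  *last;   /* one past the last byte */
    } buf_t;

    static int
    add_buf(struct iovec *iov, int *n, buf_t *b)
    {
        if (*n > 0
            && (unsigned char *) iov[*n - 1].iov_base + iov[*n - 1].iov_len
               == b->pos)
        {
            /* contiguous with the previous entry: coalesce */
            iov[*n - 1].iov_len += b->last - b->pos;
            return 0;
        }

        if (*n == IOV_MAX) {
            return -1;          /* stop only here: a new entry is needed */
        }

        iov[*n].iov_base = b->pos;
        iov[*n].iov_len = b->last - b->pos;
        (*n)++;

        return 0;
    }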
The "complete" flag wasn't cleared on loop iteration start, resulting in
broken behaviour if there were more than IOV_MAX buffers and first
iteration was fully completed (and hence the "complete" flag was set
to 1).
The previous (incorrect) behaviour was to inherit IPv6 rules separately from IPv4 ones. Now all rules are either inherited (if there are no rules defined at the current level) or not (if there are any rules defined).
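For example, under the new behaviour (illustrative configuration):

    server {
        allow 192.0.2.0/24;
        allow 2001:db8::/32;
        deny all;

        location /inherited {
            # no rules defined here: all of the rules above
            # are inherited, both IPv4 and IPv6
        }

        location /own {
            # any rule defined here means nothing is inherited
            deny all;
        }
    }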
Integer overflow is undefined behaviour in C, and this indeed caused problems on Solaris/SPARC (at least in some cases). The fix is to subtract unsigned integers instead, and then cast the result to a signed one; this is implementation-defined behaviour and used to work.
Strictly speaking, we should instead compare the (unsigned) result with the maximum value of the corresponding signed integer type, which would be defined behaviour. This would require many more changes though, and is considered overkill for now.
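A sketch of the trick in isolation (illustrative function, not taken from nginx; assumes 32-bit counters):

    #include <stdint.h>

    /* Signed subtraction of wrapping counters may overflow, which is
     * undefined behaviour.  Unsigned subtraction is well-defined
     * (wraps modulo 2^32), and casting the result back to a signed
     * type is implementation-defined, which works in practice. */
    static int
    timer_expired(uint32_t now, uint32_t deadline)
    {
        return (int32_t) (now - deadline) >= 0;
    }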
Such upstreams cause a CPU hog later in the code, as the number of peers isn't expected to be 0. Currently this may happen either if there are only backup servers defined in an upstream block, or if a server with an IPv6 address is used in an upstream block.
Note that "ctxt->loadsubset = 1" previously used isn't really correct as
ctxt->loadsubset is a bitfield now. The use of xmlCtxtUseOptions() with
XML_PARSE_DTDLOAD is believed to be a better way to do the same thing.
Patch by Laurence Rowe.
POSIX doesn't require PATH_MAX to be defined, and Debian GNU/Hurd doesn't define it. Note that if PATH_MAX is not defined we have to use realpath() with a NULL argument and free() the result.
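A sketch of the resulting logic, assuming plain POSIX.1-2008 (hypothetical helper name):

    #include <limits.h>
    #include <stdlib.h>

    /* Returns a malloc()ed resolved path; the caller must free() it.
     * Without PATH_MAX we have to let realpath() allocate the buffer
     * itself by passing a NULL second argument. */
    static char *
    resolve_path(const char *path)
    {
    #ifdef PATH_MAX
        char  *buf = malloc(PATH_MAX);

        if (buf == NULL) {
            return NULL;
        }

        if (realpath(path, buf) == NULL) {
            free(buf);
            return NULL;
        }

        return buf;
    #else
        return realpath(path, NULL);
    #endif
    }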
Most systems have it included due to namespace pollution, but relying on this is a bad idea. An explicit include is required for at least Debian GNU/Hurd.
The problem was introduced in 0.7.44 (r2589) during the conversion to complex values. Previously string.len included space for the terminating NUL, but with complex values it doesn't.
The bug in question is likely already fixed (though unfortunately we have no information available, as Apple's bugtracker isn't open), and the workaround seems to be too pessimistic for modern versions of Safari, as well as for other WebKit-based browsers pretending to be Safari.
- Removed "hash" element from ngx_http_header_val_t which was always 1.
- Replaced NGX_HTTP_EXPIRES_* with ngx_http_expires_t enum type.
- Added prototype for ngx_http_add_header()
- Simplified ngx_http_set_last_modified().
This resulted in a disclosure of previously freed memory if the upstream server returned a specially crafted response, potentially exposing sensitive information.
Reported by Matthew Daley.
The embedded perl module assumes there is space for a terminating NUL character; make sure to provide it in all situations by allocating one extra byte for the value buffer. The default ssi_value_length is reduced accordingly to preserve 256-byte allocations.
While here, fixed another one-byte value buffer overrun possible in ssi_quoted_symbol_state.
Reported by Matthew Daley.
It wasn't enforced for a long time, and there are reports of people now using up to 100 simultaneous subrequests. As this is a safety limit to prevent loops, it's raised accordingly.
ZFS reports an incorrect st_blocks value until the file settles on disk, and this may take a while (i.e. just after creation of a file the st_blocks value is incorrect). As a workaround we now use st_blocks only if st_blocks * 512 > st_size; this should fix the ZFS problems while still preserving accuracy for other filesystems.
The problem had appeared in r3900 (1.0.1).
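The heuristic, as a sketch (hypothetical helper, not nginx's actual code):

    #include <sys/stat.h>

    /* Trust st_blocks only when it accounts for more than st_size;
     * otherwise fall back to st_size (ZFS may report too few blocks
     * until the file settles on disk). */
    static off_t
    file_fs_size(const struct stat *st)
    {
        if (st->st_blocks * 512 > st->st_size) {
            return st->st_blocks * 512;
        }

        return st->st_size;
    }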
The previous code incorrectly assumed that nodes with identical keys are linked together. This might not be true after a tree rebalance.
Patch by Lanshun Zhou.
The cycle->new_log.file may not be set before config parsing has finished if there is no error_log directive defined at the global level. The fix is to copy it after config parsing.
Patch by Roman Arutyunyan.
With the previous code a raw buffer might be lost if p->input_filter() was called on a buffer without any data and used ngx_event_pipe_add_free_buf() to return it to the free list. This eventually might cause the "all buffers busy" problem, resulting in a segmentation fault due to a null pointer dereference in ngx_event_pipe_write_chain_to_temp_file().
In ngx_event_pipe_add_free_buf() the buffer was added to the start of the list due to pos == last, and then "p->free_raw_bufs = cl->next" in ngx_event_pipe_read_upstream() dropped both chain links to the buffer from the p->free_raw_bufs list.
The fix is to move "p->free_raw_bufs = cl->next" before the call to p->input_filter().
If the "if_not_owner" parameter is used, symlinks get special treatment for the last path component: to prevent a race condition we have to open a file before checking its owner, and since there's no way to change access flags for an already opened file descriptor, we disable symlinks for the last path component altogether if the flags allow creating or truncating the file.
Solaris has AT_FDCWD defined to an unsigned value, and comparing a file descriptor with it causes warnings in modern versions of gcc. Explicitly cast AT_FDCWD to ngx_fd_t to resolve these warnings.
To completely disable symlinks (disable_symlinks on) we use openat(O_NOFOLLOW) for each path component to avoid races.
To allow symlinks with the same owner (disable_symlinks if_not_owner), we use openat() (followed by fstat()) and fstatat(AT_SYMLINK_NOFOLLOW), and then compare the uids between fstat() and fstatat().
As there is a race between openat() and fstatat(), we don't know whether openat() in fact opened a symlink or not. Therefore, we have to compare the uids even if fstatat() reports the opened component isn't a symlink (as we don't know whether it was a symlink during openat() or not).
The default value is off, i.e. symlinks are allowed.
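The "disable_symlinks on" case can be illustrated with a simplified sketch (relative paths only, not nginx's actual code):

    #include <fcntl.h>
    #include <string.h>
    #include <unistd.h>

    /* Walk a relative path one component at a time, refusing to follow
     * symlinks: each component is opened with openat(O_NOFOLLOW)
     * relative to the previous one, so a symlink swapped in
     * concurrently fails with ELOOP instead of being followed.
     * Note: modifies "path" in place via strtok_r(). */
    static int
    open_no_symlinks(char *path)
    {
        int    fd, nfd;
        char  *name, *save;

        fd = AT_FDCWD;

        for (name = strtok_r(path, "/", &save);
             name != NULL;
             name = strtok_r(NULL, "/", &save))
        {
            nfd = openat(fd, name, O_RDONLY | O_NOFOLLOW);

            if (fd != AT_FDCWD) {
                close(fd);
            }

            if (nfd == -1) {
                return -1;          /* ELOOP: a symlink was found */
            }

            fd = nfd;
        }

        return fd;
    }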
Nuke NGX_PARSE_LARGE_TIME: it hasn't been used since 0.6.30. The only error ngx_parse_time() can currently return is NGX_ERROR; check for it explicitly and make sure to cast it to the appropriate type (either time_t or ngx_msec_t) to avoid signedness warnings on platforms with an unsigned time_t (notably QNX).
Now redirects to named locations are counted against the normal URI changes limit, and post_action respects this limit as well. As a result, at least the following (bad) configurations no longer trigger infinite cycles:

1. A post action which recursively triggers a post action:

    location / {
        post_action /index.html;
    }

2. A post action pointing to a nonexistent named location:

    location / {
        post_action @nonexistent;
    }

3. A recursive error page for 500 (Internal Server Error) pointing to a nonexistent named location:

    location / {
        recursive_error_pages on;
        error_page 500 @nonexistent;
        return 500;
    }
Without the protection, a subrequest loop results in an r->count overflow and a SIGSEGV. The protection was broken in 0.7.25.
Note that this also limits the number of parallel subrequests. This wasn't exactly the case before 0.7.25, as local subrequests were completed directly.
See here for details:
http://nginx.org/pipermail/nginx-ru/2010-February/032184.html
Variables with the "not_found" flag set follow the same rules as ones with the "valid" flag set. Make sure ngx_http_get_flushed_variable() flushes non-cacheable variables with the "not_found" flag set.
This fixes at least one known problem: $args was not available in a subrequest (with args) when there were no args in the main request and the $args variable was queried in the main request (reported by Laurence Rowe aka elro on irc).
This also eliminates an unneeded call to ngx_http_get_indexed_variable() in the cacheable case (as it will return the cached value anyway).
Temporary files might not be removed if the "proxy_store" or "fastcgi_store" directives were used for subrequests (e.g. SSI includes) and the client closed the connection prematurely.
Non-active subrequests are finalized out of the control of the upstream module when the client closes the connection. As a result, the code to remove unfinished temporary files in ngx_http_upstream_process_request() wasn't executed.
The fix is to move the relevant code into ngx_http_upstream_finalize_request(), which is called in all cases, either directly or via the cleanup handler.
Empty flush buffers are legitimate and may happen, e.g. due to $r->flush() calls in embedded perl. If there is no data buffered in zlib, deflate() will return Z_BUF_ERROR (i.e. no progress possible) without adding anything to the output. Don't treat Z_BUF_ERROR as fatal, and correctly send an empty flush buffer if we have no data in the output at all.
See this thread for details:
http://mailman.nginx.org/pipermail/nginx/2010-November/023693.html
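The intended handling, as a sketch (illustrative, not the actual filter code):

    #include <zlib.h>

    /* Perform a flush; Z_BUF_ERROR merely means deflate() could make
     * no progress because nothing is buffered, which is harmless for
     * a flush and must not be treated as fatal. */
    static int
    flush_is_ok(z_stream *strm)
    {
        int  rc;

        rc = deflate(strm, Z_SYNC_FLUSH);

        return (rc == Z_OK || rc == Z_BUF_ERROR);
    }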
If a header filter postponed processing of a header by returning NGX_AGAIN without moving u->buffer->pos, the previous check incorrectly assumed there was additional space and did another recv() with a zero-size buffer. This resulted in an "upstream prematurely closed connection" error instead of the correct "upstream sent too big header" one.
Patch by Feibo Li.
Limits are again applied in every location along the way: this was unintentionally changed in r4272, so that only the first location where the processing of the request reached the PREACCESS phase could be limited.
The PCRE JIT compiler uses mmap() to allocate memory for its executable code, so we have to explicitly call pcre_free_study() to free this memory.
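For reference, the required cleanup pattern (minimal example, error handling elided):

    #include <pcre.h>

    int
    main(void)
    {
        const char  *err;
        int          erroff;
        pcre        *re;
        pcre_extra  *sd;

        re = pcre_compile("^a+b", 0, &err, &erroff, NULL);
        sd = pcre_study(re, PCRE_STUDY_JIT_COMPILE, &err);

        /* ... pcre_exec(re, sd, ...) ... */

        pcre_free_study(sd);    /* unmaps the JIT-compiled code */
        pcre_free(re);

        return 0;
    }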
Example configuration to reproduce:
    server {
        proxy_redirect off;

        location / {
            proxy_pass http://localhost:8000;
            proxy_redirect http://localhost:8000/ /;

            location ~ \.php$ {
                proxy_pass http://localhost:8000;
                # proxy_redirect must be inherited from the level
                # above, but instead it was switched off here
            }
        }
    }
Previously, if ngx_add_event() failed, the connection was freed twice (once in ngx_event_connect_peer(), and again by the caller), as pc->connection was left set. The fix is to always use ngx_close_connection() to close the connection properly and to set pc->connection to NULL on errors.
Patch by Piotr Sikora.
Doing a cleanup before every lookup seems to be too aggressive. It can lead to premature removal of nodes that are still usable, which increases the amount of work done under a mutex lock and therefore decreases performance.
To improve the cleanup behaviour, the cleanup function call has been moved to right before the allocation of a new node.
"limit_req_zone" directive; minimum size of zone is increased.
Previously, an unsigned variable was used to keep the return value of the ngx_parse_size() function, which led to an incorrect zone size if NGX_ERROR was returned.
The new code has been taken from the "limit_conn_zone" directive.
aio_return() must be called regardless of the error returned by aio_error(). Not calling it resulted in various problems, up to segmentation faults (as AIO events are level-triggered and were reported again and again).
Additionally, in the "aio sendfile" case, r->blocked was incremented if ngx_file_aio_read() returned an error, thus causing request hangs.
It's already called by OPENSSL_config(). Calling it again causes some OpenSSL engines (notably GOST) to corrupt memory, as they don't expect to be created more than once.
The ngx_hash_init() function did not expect to be called with a zero element count, which caused an FPE error on configs with an empty "types" block in the http context and "types_hash_max_size" > 10000.
Example configuration to reproduce:
    events { }

    http {
        types_hash_max_size 10001;
        types {}
        server {}
    }
The second argument (cpusetsize) is a size in bytes, not in bits. The previously used constant 32 resulted in reading uninitialized memory and caused EINVAL to be returned on some Linux kernels.
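A minimal example of the correct call:

    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>

    int
    main(void)
    {
        cpu_set_t  set;

        /* cpusetsize is a byte count: pass sizeof(cpu_set_t),
         * not the number of bits */
        if (sched_getaffinity(0, sizeof(cpu_set_t), &set) == -1) {
            perror("sched_getaffinity");
            return 1;
        }

        printf("CPUs in affinity mask: %d\n", CPU_COUNT(&set));
        return 0;
    }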
Support for the TLSv1.1 and TLSv1.2 protocols was introduced in OpenSSL 1.0.1 (-beta1 was recently released). This change makes it possible to disable these protocols and/or to enable them without other protocols.
The problem was localized in the ngx_http_proxy_rewrite_redirect_regex() handler function, which did not take the prefix into account when overwriting the header value.
New directives: proxy_cache_lock on/off and proxy_cache_lock_timeout. With proxy_cache_lock set to on, only one request at a time will be allowed to go to the upstream for a particular cache item; other requests will wait for a response to appear in the cache (or for the cache lock to be released) for up to proxy_cache_lock_timeout. Waiting requests recheck every 500ms whether they have a cached response ready (or are allowed to run).
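A minimal configuration using the new directives (hypothetical zone and backend names):

    proxy_cache_path /var/cache/nginx keys_zone=one:10m;

    server {
        location / {
            proxy_pass                http://127.0.0.1:8000;
            proxy_cache               one;
            proxy_cache_lock          on;
            proxy_cache_lock_timeout  5s;
        }
    }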
Note: we intentionally don't intercept NGX_DECLINED as possibly returned by ngx_http_file_cache_read(). This needs more work (it is possibly safe, but needs further investigation); anyway, it's an exceptional situation.
Note: there probably should be a way to disable caching of a response if there is already another request fetching the resource to the cache (without waiting at all). Two possible ways include another cache lock option ("no_cache") or using proxy_no_cache with some supplied variable.
Note: there probably should be a way to lock updating requests as well. For now, "proxy_cache_use_stale updating" is available.
It's possible that the configured limit_rate will permit more bytes per single operation than sendfile_max_chunk. To protect the disk from takeover by a single client, it is necessary to apply sendfile_max_chunk as a limit regardless of the configured limit_rate.
See here for the report (in Russian):
http://mailman.nginx.org/pipermail/nginx-ru/2010-March/032806.html
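The resulting clamping logic, as a sketch (hypothetical names):

    #include <stddef.h>

    /* A single operation sends whatever limit_rate currently allows,
     * but never more than sendfile_max_chunk (0 means no chunk limit). */
    static size_t
    chunk_size(size_t rate_allowance, size_t sendfile_max_chunk)
    {
        if (sendfile_max_chunk && rate_allowance > sendfile_max_chunk) {
            return sendfile_max_chunk;
        }

        return rate_allowance;
    }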
If proxy_pass was used with variables and there was no URI component, nginx always used the unparsed URI. This isn't consistent with the "no variables" case, where e.g. rewrites are applied even if there is no URI component.
The fix is to use the same logic in both cases, i.e. only use the unparsed URI if it's valid and the request is the main one.
This resolves an issue with try_files (see ticket #70): a configuration like

    location / { try_files $uri /index.php; }
    location /index.php { proxy_pass http://backend; }

caused nginx to use the original request URI in the request to the backend.
Historically, not clearing r->valid_unparsed_uri on an internal redirect was a feature: it allowed passing the same request to (another) upstream server via an error_page redirection. Since then named locations have appeared though, and it's time to start resetting r->valid_unparsed_uri on internal redirects. Configurations still using this feature should be converted to use named locations instead.
Patch by Lanshun Zhou.
The SCGI specification doesn't specify the format of the response, and assuming the CGI specs are used there is no reason to complain: RFC 3875 explicitly states that "A Status header field is optional, and status 200 'OK' is assumed if it is omitted".
The r->http_version is the version of the client's request, and modules must not set it unless they are really willing to downgrade the protocol version used for the response (i.e. to HTTP/0.9 if no response headers are available). In no case may r->http_version be upgraded.
The former code downgraded responses from HTTP/1.1 to HTTP/1.0 for no reason, causing various problems (see ticket #66). It was also possible for HTTP/0.9 requests to be upgraded to HTTP/1.0.
There have been multiple reports of cases where an actually locked entry was removed, resulting in a segmentation fault later in the worker which had locked the entry. It looks like the default inactivity timeout isn't enough in real life.
For now, just ignore such locked entries and move them to the top of the inactive queue to allow processing of other entries.
There are two possible situations which can lead to this: the response was cached with a bigger proxy_buffer_size value (and nginx was restarted since then, i.e. the shared memory zone content was lost), or, due to the race in the cache update code (see [1]), we've ended up with fcn->body_start from a different response stored in the shared memory zone.
[1] http://mailman.nginx.org/pipermail/nginx-devel/2011-September/001287.html
The ngx_http_cache() and ngx_http_no_cache_set_slot() functions were replaced by ngx_http_test_predicates() and ngx_http_set_predicate_slot() in 0.8.46 and have not been used since then.
The FreeBSD kernel checks the headers/trailers pointers against NULL, not the corresponding counts. Passing NULL if there are no headers/trailers helps to avoid unneeded work in the kernel, as well as unexpected 0-byte GIOs in traces.
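A sketch of the resulting call on FreeBSD (simplified, hypothetical wrapper):

    #include <sys/types.h>
    #include <sys/socket.h>
    #include <sys/uio.h>

    /* The kernel tests hdtr->headers and hdtr->trailers against NULL,
     * so pass NULL pointers (not just zero counts) when empty. */
    static int
    send_file(int fd, int s, off_t off, size_t n,
              struct iovec *hdr, int hcnt,
              struct iovec *trl, int tcnt, off_t *sent)
    {
        struct sf_hdtr  hdtr;

        hdtr.headers = hcnt ? hdr : NULL;
        hdtr.hdr_cnt = hcnt;
        hdtr.trailers = tcnt ? trl : NULL;
        hdtr.trl_cnt = tcnt;

        return sendfile(fd, s, off, n, &hdtr, sent, 0);
    }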
Response header entries which haven't been sent to a client are now ignored: the ngx_http_variable_headers() and ngx_http_variable_unknown_header() functions did not ignore response header entries with a zero "hash" field.
Thanks to Yichun Zhang (agentzh).
It was working for nginx's own 206 replies, as they are seen as 200 in the headers filter module (the range filter runs later in the headers filter chain), but not for proxied replies.
Additional parsing logic was added to correctly handle RFC 3986 compliant IPv6 and IPvFuture characters enclosed in square brackets.
The host validation was completely rewritten. The behaviour for non-IP-literal hosts was changed in a more proper and safer way:
- The host part is now delimited either by the first colon or by the end of the string if there's no colon. Previously the last colon was used as the delimiter, which allowed substitution of a port number in the $host variable (e.g. "Host: 127.0.0.1:9000:80").
- Fixed stripping of the trailing dot in the Host header when the host was also followed by a port number (e.g. "Host: nginx.com.:80").
- Fixed upper case character detection. Previously it was broken, which led to wasted memory and CPU.
If a process exited abnormally while holding a lock on some shared memory zone, unlock it. This may not be a safe thing to do (as a crash with the lock held may result in a corrupted shared memory structure, and other processes will subsequently crash while trying to access the shared data), therefore complain loudly if the unlock succeeds.
It is currently used from the master process on abnormal worker termination to unlock the accept mutex (unlocking of the accept mutex was broken in 1.0.2). It is expected to be used in the future to unlock other mutexes as well.
The shared mutex code was rewritten to make this possible in a safe way, i.e. with a check whether the lock was actually held by the exited process. We again use the pid to lock the mutex, and use a separate atomic variable for the count of processes waiting in sem_wait().
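The pid-based locking is what makes a safe forced unlock possible, along these lines (a sketch using GCC atomic builtins, not nginx's actual implementation):

    #include <sys/types.h>

    typedef volatile unsigned long  atomic_t;   /* illustrative */

    /* Lock by storing our pid; 0 means unlocked. */
    static int
    shmtx_trylock(atomic_t *lock, pid_t pid)
    {
        return (*lock == 0
                && __sync_bool_compare_and_swap(lock, 0,
                                                (unsigned long) pid));
    }

    /* Forced unlock on abnormal termination: succeeds only if the
     * lock was actually held by the exited process. */
    static int
    shmtx_force_unlock(atomic_t *lock, pid_t pid)
    {
        return __sync_bool_compare_and_swap(lock, (unsigned long) pid, 0);
    }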
A stale write event may happen if epoll_wait() reported both read and write events, and processing of the read event closed the descriptor.
Patch by Yichun Zhang (agentzh).
Check whether the length of the received data matches the Content-Length header (if present), and don't cache the response if it doesn't. This prevents caching of a corrupted response in case of a premature connection close by the upstream.
Used "\x5" in 5th byte to claim presence of both audio and video. Used
previous tag size 0 in the beginning of the flv body (bytes 10 .. 13) as
required by specification (see http://www.adobe.com/devnet/f4v.html).
Patch by Piotr Sikora.
Previously a hardcoded value of 300 seconds was used. Also added the "valid=" parameter to the "resolver" directive, which can be used to override the cache validity time.
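For example, to cache resolved names for 30 seconds regardless of the records' TTL:

    resolver 127.0.0.1 valid=30s;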
Patch by Kirill A. Korinskiy with minor changes.
The following problems were fixed:
1. The fastcgi_cache directive affected headers sent to backends in unrelated servers/locations (see ticket #45).
2. The If-Unmodified-Since, If-Match and If-Range headers were sent to backends if fastcgi_cache was used.
3. Cache-related headers were sent to backends if there were no fastcgi_param directives and fastcgi_cache was used at the server level.
Headers cleared with the cache enabled (If-Modified-Since etc.) might be cleared in unrelated servers/locations without proxy_cache enabled, if proxy_cache was used in some other server/location.
Example config which triggered the problem:

    proxy_set_header X-Test "test";
    server { location /1 { proxy_cache name; proxy_pass ... } }
    server { location /2 { proxy_pass ... } }

Another one:

    server {
        proxy_cache name;
        location /1 { proxy_pass ... }
        location /2 { proxy_cache off; proxy_pass ... }
    }

In both cases the If-Modified-Since header wasn't sent to the backend in location /2.
The fix is to not modify conf->headers_source, but instead to merge the user-supplied headers from conf->headers_source and the default headers (either cache-aware or not) into a separate headers_merged array.
The following config caused a segmentation fault due to conf->file not being properly set if "ssl on" was inherited from the http level:

    http {
        ssl on;

        server {
        }
    }