If proxy_pass was used with variables and there was no URI component,
nginx always used the unparsed URI. This isn't consistent with the "no
variables" case, where e.g. rewrites are applied even if there is no URI
component. The fix is to use the same logic in both cases, i.e. to use the
unparsed URI only if it's valid and the request is the main one.
This resolves an issue with try_files (see ticket #70): a configuration like
    location / { try_files $uri /index.php; }
    location /index.php { proxy_pass http://backend; }
caused nginx to use the original request URI in the request to the backend.
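A minimal sketch of the resulting selection logic, mirroring the "no variables"
code path (the helper function and the elided re-escaping step are
illustrative, not the actual source):

    #include <ngx_config.h>
    #include <ngx_core.h>
    #include <ngx_http.h>

    /* Sketch: which URI to send upstream. */
    static ngx_str_t
    choose_upstream_uri(ngx_http_request_t *r)    /* hypothetical helper */
    {
        if (r->valid_unparsed_uri && r == r->main) {
            /* main request, URI untouched by rewrites: pass it as received */
            return r->unparsed_uri;
        }

        /* subrequest or rewritten URI (e.g. after try_files): use the
         * processed r->uri; the real code also re-escapes it as needed */
        return r->uri;
    }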
Historically, not clearing r->valid_unparsed_uri on an internal redirect
was a feature: it allowed passing the same request to (another) upstream
server via an error_page redirection. Since then, however, named locations
have appeared, and it's time to start resetting r->valid_unparsed_uri on
internal redirects. Configurations still using this feature should be
converted to use named locations instead.
Patch by Lanshun Zhou.
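A minimal sketch of the change, assuming the reset is done in
ngx_http_internal_redirect() (the exact placement in the source may differ):

    /* Sketch: once the request is redirected to a new URI, the original
     * request line no longer describes what is being processed and must
     * not be reused for upstream requests. */
    r->valid_unparsed_uri = 0;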
The SCGI specification doesn't define the format of the response, and
assuming the CGI specification applies, there is no reason to complain
about a missing Status header. RFC 3875 explicitly states that "A Status
header field is optional, and status 200 'OK' is assumed if it is omitted".
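A minimal sketch of the intended fallback, using the upstream status fields as
seen elsewhere in nginx (u is r->upstream; the placement after header parsing
is an assumption):

    /* Sketch: if the SCGI backend sent no Status header, assume 200 OK
     * instead of treating the response as invalid. */
    if (u->headers_in.status_n == 0) {
        u->headers_in.status_n = 200;
        ngx_str_set(&u->headers_in.status_line, "200 OK");
    }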
The r->http_version is the protocol version of the client's request, and
modules must not set it unless they really intend to downgrade the protocol
version used for the response (i.e. to HTTP/0.9 if no response headers are
available). In no case may r->http_version be upgraded.
The former code downgraded the response from HTTP/1.1 to HTTP/1.0 for no
reason, causing various problems (see ticket #66). It was also possible for
HTTP/0.9 requests to be upgraded to HTTP/1.0.
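A minimal sketch of the rule, using the version constants from
ngx_http_request.h (the no_response_headers flag is illustrative):

    /* Sketch: a module may only ever lower r->http_version, and only when
     * it really has no response headers to send (HTTP/0.9 style output). */
    if (no_response_headers && r->http_version > NGX_HTTP_VERSION_9) {
        r->http_version = NGX_HTTP_VERSION_9;
    }

    /* Never do this unconditionally: it may upgrade an HTTP/0.9 request
     * and needlessly downgrades HTTP/1.1 responses. */
    /* r->http_version = NGX_HTTP_VERSION_10; */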
There have been multiple reports of cases where a genuinely locked entry was
removed, resulting in a segmentation fault later in the worker that had
locked the entry. It looks like the default inactive timeout isn't enough
in real life.
For now, just ignore such locked entries and move them to the top of the
inactive queue so that other entries can still be processed.
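A minimal sketch of the approach, assuming fcn is an ngx_http_file_cache_node_t
on the cache's LRU queue and that fcn->count reflects requests still holding
(locking) the entry:

    /* Sketch: while expiring inactive entries, skip anything still in use
     * instead of freeing it, and requeue it as recently used so the scan
     * can continue with other entries. */
    if (fcn->count) {
        ngx_queue_remove(&fcn->queue);
        ngx_queue_insert_head(&cache->sh->queue, &fcn->queue);
        continue;
    }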
There are two possible situations which can lead to this: either the response
was cached with a bigger proxy_buffer_size value (and nginx was restarted
since then, i.e. the shared memory zone content was lost), or, due to the
race in the cache update code (see [1]), we've ended up with fcn->body_start
from a different response stored in the shared memory zone.
[1] http://mailman.nginx.org/pipermail/nginx-devel/2011-September/001287.html
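A minimal sketch of a defensive check along these lines, expressing the buffer
bound via the ngx_buf_t start/end pointers (the exact condition, log message
and return value in the source may differ):

    /* Sketch: if the stored body_start does not fit into a buffer sized by
     * the current proxy_buffer_size, treat the cache entry as unusable
     * rather than reading past the buffer. */
    if (c->body_start > (size_t) (c->buf->end - c->buf->start)) {
        ngx_log_error(NGX_LOG_CRIT, r->connection->log, 0,
                      "cache file \"%s\" has too long header",
                      c->file.name.data);
        return NGX_DECLINED;
    }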
The ngx_http_cache() and ngx_http_no_cache_set_slot() functions were replaced
by ngx_http_test_predicates() and ngx_http_set_predicate_slot() in 0.8.46 and
have not been used since then.
The FreeBSD kernel checks the headers/trailers pointers against NULL, not
the corresponding counts. Passing NULL when there are no headers/trailers
helps to avoid unneeded work in the kernel, as well as unexpected 0-byte
GIO in traces.
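A minimal sketch of the calling convention on FreeBSD (the iovec arrays,
counts and descriptors are illustrative):

    #include <sys/types.h>
    #include <sys/socket.h>
    #include <sys/uio.h>

    /* Sketch: only pass the header/trailer pointers when there is something
     * to send, so the kernel skips those code paths entirely. */
    struct sf_hdtr  hdtr;

    hdtr.headers = header_count ? header_iovs : NULL;
    hdtr.hdr_cnt = header_count;
    hdtr.trailers = trailer_count ? trailer_iovs : NULL;
    hdtr.trl_cnt = trailer_count;

    if (sendfile(file_fd, socket_fd, offset, nbytes, &hdtr, &sent, 0) == -1) {
        /* handle EAGAIN, EINTR, etc. */
    }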
haven't been sent to a client.
The ngx_http_variable_headers() and ngx_http_variable_unknown_header() functions
did not ignore response header entries with a zero "hash" field.
Thanks to Yichun Zhang (agentzh).
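A minimal sketch of the iteration these functions perform over
r->headers_out.headers, with the added skip of deleted entries:

    ngx_uint_t        i;
    ngx_list_part_t  *part;
    ngx_table_elt_t  *header;

    part = &r->headers_out.headers.part;
    header = part->elts;

    for (i = 0; /* void */; i++) {

        if (i >= part->nelts) {
            if (part->next == NULL) {
                break;
            }

            part = part->next;
            header = part->elts;
            i = 0;
        }

        if (header[i].hash == 0) {
            /* the entry was deleted by a filter; ignore it */
            continue;
        }

        /* ... use header[i].key and header[i].value ... */
    }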
It was working for nginx's own 206 replies, as they are still seen as 200 in
the headers filter module (the range filter runs later in the headers filter
chain), but not for proxied replies.
Additional parsing logic was added to correctly handle RFC 3986 compliant IPv6
and IPvFuture literals enclosed in square brackets.
The host validation was completely rewritten. The behavior for non-IP literals
was changed in a more proper and safer way (a simplified sketch of the parsing
follows the list):
- The host part is now delimited either by the first colon, or by the end of
  the string if there is no colon. Previously the last colon was used as the
  delimiter, which allowed substitution of a port number in the $host variable.
  (e.g. Host: 127.0.0.1:9000:80)
- Fixed stripping of the trailing dot in the Host header when the host was
  also followed by a port number.
  (e.g. Host: nginx.com.:80)
- Fixed detection of upper case characters. Previously it was broken, which
  led to wasted memory and CPU time.
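A simplified, illustrative sketch of such parsing (not the actual
ngx_http_validate_host() code; u_char is nginx's unsigned char typedef):

    /* Sketch: normalize a Host header value; returns the host length,
     * or 0 for a malformed host. */
    static size_t
    validate_host(u_char *host, size_t len, unsigned *has_upper)
    {
        size_t  i, host_len;
        u_char  c;

        host_len = len;
        *has_upper = 0;

        if (len && host[0] == '[') {
            /* IPv6 or IPvFuture literal: the host ends at the closing
             * bracket, colons inside it are not delimiters */
            for (i = 1; i < len && host[i] != ']'; i++) { /* void */ }

            if (i == len) {
                return 0;                 /* unbalanced bracket */
            }

            host_len = i + 1;

        } else {
            for (i = 0; i < len; i++) {
                c = host[i];

                if (c == ':') {           /* first colon ends the host part */
                    host_len = i;
                    break;
                }

                if (c >= 'A' && c <= 'Z') {
                    *has_upper = 1;       /* lowercase later, only if needed */
                }
            }
        }

        if (host_len && host[host_len - 1] == '.') {
            host_len--;                   /* strip a single trailing dot */
        }

        return host_len;
    }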
If a process exited abnormally while holding a lock on some shared memory
zone, unlock it. This may not be a safe thing to do (a crash with the lock
held may have left the shared memory structures corrupted, and other
processes will subsequently crash while trying to access the shared data),
therefore complain loudly if the unlock succeeds.
It is currently used from the master process on abnormal worker termination
to unlock the accept mutex (unlocking of the accept mutex was broken in
1.0.2). It is expected to be used in the future to unlock other mutexes as
well.
The shared mutex code was rewritten to make this possible in a safe way,
i.e. with a check whether the lock was actually held by the exited process.
We again use the pid to lock the mutex, and use a separate atomic variable
for the count of processes waiting in sem_wait().
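A minimal sketch of the forced unlock, assuming the rewritten mutex stores the
owner's pid in its atomic lock word (the function name is illustrative):

    /* Sketch: called from the master process for an abnormally exited
     * worker; it only succeeds if that worker actually held the lock. */
    static ngx_uint_t
    shmtx_force_unlock(ngx_shmtx_t *mtx, ngx_pid_t pid)
    {
        if (ngx_atomic_cmp_set(mtx->lock, pid, 0)) {
            /* the lock word held the dead worker's pid, so it was the
             * owner; if the waiter counter shows processes blocked in
             * sem_wait(), one of them should additionally be woken up */
            return 1;
        }

        return 0;
    }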
A stale write event may happen if epoll_wait() reported both read and write
events, and processing of the read event closed the descriptor.
Patch by Yichun Zhang (agentzh).
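A minimal sketch of the problematic sequence in a generic epoll event loop,
with the guard that avoids acting on the stale write event (variable names are
illustrative; nginx additionally uses its event instance flag for this):

    n = epoll_wait(ep, events, nevents, timeout);

    for (i = 0; i < n; i++) {
        c = events[i].data.ptr;            /* connection attached to the fd */

        if (events[i].events & EPOLLIN) {
            c->read->handler(c->read);     /* may close the connection */
        }

        if ((events[i].events & EPOLLOUT) && c->fd != -1) {
            /* handle the write event only if the read handler above
             * did not close the descriptor */
            c->write->handler(c->write);
        }
    }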