Previously, ngx_http_map_uri_to_path() errors were not checked in
ngx_http_upstream_store(). Moreover, in case of errors, temporary
files were not deleted, as u->store was set to 0, preventing the
cleanup code in ngx_http_upstream_finalize_request() from removing
them. With
this patch, u->store is set to 0 only if there were no errors.
Reported by Feng Gu.
Previously, nginx closed the client connection in cases when a response
body from the upstream needed to be cached or stored but shouldn't be
sent to the client. While this is normal for HTTP, it is unacceptable
for SPDY. The fix is to instead use the p->downstream_error flag to
prevent nginx from sending anything downstream. To make this work, the
event pipe code was modified to properly cache empty responses with
the flag set.
The ngx_http_upstream_dummy_handler() must be set regardless of
the read event state. This prevents a possible additional call of
ngx_http_upstream_send_request_handler().
Previously, last_modified_time was tested against -1 to check whether
the not modified filter should be skipped. Notably, this prevented
nginx from performing additional If-Modified-Since (et al.) checks on
proxied responses. Such behaviour is suboptimal in some cases though,
as checks are always skipped for responses served from a cache with an
ETag only (without Last-Modified), resulting in If-None-Match being
ignored in such cases. Additionally, it was not possible to return 412
from the If-Unmodified-Since check if the last modification time was
not known for some reason.
This change introduces an explicit r->disable_not_modified flag
instead, which is set by ngx_http_upstream_process_headers().
The previous code in ngx_http_upstream_send_response() used the last
modified time from r->headers_out.last_modified_time after the header
filter chain was already called. At this point, last_modified_time may
already have been cleared, e.g., with SSI, resulting in an incorrect
last modified time being stored in a cache file. The fix is to
introduce u->headers_in.last_modified_time and use it instead.
Clearing of the r->headers_out.last_modified_time field if a response
isn't cacheable in ngx_http_upstream_send_response() was introduced
in 3b6afa999c2f, the commit that enabled the not modified filter for
cacheable responses. It doesn't make sense though, as at this point
the header has already been sent and the not modified filter has
already been executed. Therefore, the line was removed to simplify
the code.
The 7022564a9e0e changeset made the workaround from 2464ccebdb52,
which avoided a NULL pointer dereference with "if", ineffective. It is
now restored by moving the u->ssl_name initialization after the check.
Found by Coverity (CID 1210408).
These directives make it possible to switch on Server Name Indication
(SNI) when connecting to upstream servers.
By default, proxy_ssl_server_name is currently off (that is, no SNI)
and proxy_ssl_name is set to the host used in the proxy_pass directive.
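For illustration, a minimal configuration enabling SNI for proxied TLS
connections might look like the following (the upstream host name is
a placeholder):

    location / {
        proxy_pass https://backend.example.com;

        # Send the server name to the upstream server during the TLS
        # handshake; without an explicit proxy_ssl_name the name
        # defaults to the host from proxy_pass.
        proxy_ssl_server_name on;
        proxy_ssl_name backend.example.com;
    }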
If set, it means that the response body is going to be in more than
one buffer, hence only range requests with a single range should be
honored.
The flag is now used by mp4 and cacheable upstream responses, thus
allowing range requests of mp4 files with start/end, as well as range
processing on the first request to a not-yet-cached file with
proxy_cache.
Notably, this makes it possible to play mp4 files (with proxy_cache,
or with the mp4 module) on iOS devices, as byte-range support is
required by Apple.
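As a sketch of the proxy_cache case (cache path, zone name and
upstream host below are placeholders), a configuration like this can
now honour a single-range request even on the first request for a
not-yet-cached file:

    proxy_cache_path /var/cache/nginx keys_zone=media:10m;

    server {
        listen 80;

        location /video/ {
            proxy_pass http://backend.example.com;
            proxy_cache media;

            # A request with "Range: bytes=0-1023" no longer has to
            # wait for the whole response to be fetched and cached
            # first.
        }
    }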
If a request is finalized in the first call to the
ngx_http_upstream_process_upgraded() function, e.g., because the
upstream server closed the connection for some reason, then in the
second call the u->peer.connection pointer will be null, resulting in
a segmentation fault.
The fix is to avoid the second direct call and to post an event
instead. This ensures that ngx_http_upstream_process_upgraded() won't
be called again if the request is finalized.
If "proxy_pass" is specified with variables, the resulting
hostname is looked up in the list of upstreams defined in
configuration. The search was case-sensitive, as opposed
to the case of "proxy_pass" specified without variables.
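A minimal sketch with placeholder names: the host evaluated from the
variable is matched against the upstream group defined below, and with
this change the lookup no longer depends on the case used, just as for
a "proxy_pass" given without variables:

    upstream backend {
        server 127.0.0.1:8080;
    }

    server {
        location / {
            # Differs from the upstream name only in case.
            set $target "Backend";
            proxy_pass http://$target;
        }
    }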
The read event on a client connection might have been disabled during
previous processing, and we at least need to handle events. Calling
ngx_http_upstream_process_upgraded() is the simplest way to do it.
Notably, this change is needed for the select, poll and /dev/poll
event methods.
A previous version of this patch was posted here:
http://mailman.nginx.org/pipermail/nginx/2014-January/041839.html
Not really a strict check (as X-Accel-Expires might be ignored or
contain an invalid value), but quite simple to implement and better
than what we have now.
The following new directives are introduced: proxy_cache_revalidate,
fastcgi_cache_revalidate, scgi_cache_revalidate, uwsgi_cache_revalidate.
The default is off. When set to on, they enable cache revalidation
using conditional requests with If-Modified-Since for expired cache
items.
As of now, no attempts are made to merge headers given in a 304 response
during cache revalidation with headers previously stored in a cache item.
Headers in a 304 response are only used to calculate the new validity
time of a cache item.
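For example (cache path, zone name and upstream host are
placeholders), revalidation for a proxy cache is enabled like this;
once a cached entry expires, nginx sends a conditional request with
If-Modified-Since and, on a 304 response, reuses the stored body with
a refreshed validity time:

    proxy_cache_path /var/cache/nginx keys_zone=cache_zone:10m;

    server {
        location / {
            proxy_pass http://backend.example.com;
            proxy_cache cache_zone;
            proxy_cache_revalidate on;
        }
    }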
With the previous code, only part of u->buffer might be emptied in
case of special responses, resulting in partial responses being seen
by SSI includes using the "set" parameter in case of simple protocols,
or in spurious errors like "upstream sent invalid chunked response"
in case of complex ones.
Without u->header_sent set, a special response might be generated
following an upgraded connection. The problem appeared in
1ccdda1f37f3 (1.5.3).
Caught by "header already sent" alerts in 1.5.4 after upstream
timeouts.
A missing call to ngx_http_run_posted_request() resulted in a main
request hang if a subrequest's SSL handshake with an upstream server
failed for some reason.
Reported by Aviram Cohen.
Previously, after sending a header we always sent a last buffer and
finalized the request with code 0, even in case of errors. In some
cases this resulted in losing the ability to detect that the response
wasn't complete (e.g. if Content-Length was removed from a response by
the gzip filter).
This change tries to propagate to the client the information that a
response isn't complete in such cases. In particular, with this change
we no longer pretend a returned response is complete if we weren't
able to create a temporary file.
If an error code suggests the error wasn't fatal, we flush buffered
data and disable keepalive, then finalize the request normally. This
allows information about a problem to be propagated to the client,
while still sending all the data we've got from the upstream.
No semantic changes expected, though some checks are done differently.
In particular, the r->cached flag is no longer explicitly checked. Instead,
we rely on u->header_sent not being set if a response is sent from
a cache.
The NGX_HTTP_CLIENT_CLOSED_REQUEST code is allowed to happen after we
started sending a response (much like NGX_HTTP_REQUEST_TIME_OUT), so there
is no need to reset the response code to 0 in this case.
Checks were added to both buffered and unbuffered code paths to detect
and complain if a response is incomplete. Appropriate error codes are
now passed to ngx_http_upstream_finalize_request().
With this change, in unbuffered mode we now use u->length set to -1 as
an indicator that EOF is allowed by the protocol and is used to
indicate the end of the response (much like it is with p->length in
buffered mode). The proxy module was changed to set u->length to 1
(instead of the previously used -1) in case of chunked transfer
encoding, to comply with the above.
That is, by default we assume that the end of the response is
signalled by a connection close. This seems to be a better default,
and it is in line with the u->pipe->length behaviour.
The memcached module was modified accordingly.
In case of an upstream EOF, only responses with u->pipe->length == -1
are now cached/stored. This ensures that unfinished chunked responses
are not cached.
Note well: the previously used checks on
u->headers_in.content_length_n are preserved. This provides an
additional level of protection if the protocol data disagree with the
Content-Length header provided (e.g., a FastCGI response is sent with
a wrong Content-Length, or an incomplete SCGI or uwsgi response), and
it also protects against storing responses to HEAD requests. This
should be reconsidered if we ever consider caching responses to HEAD
requests.
There is no real difference from the previously used 0, as NGX_HTTP_*
codes become 0 in ngx_http_upstream_finalize_request(), but the change
preserves information about a timeout a bit longer. The previous use
of ETIMEDOUT in one place was just wrong.
Note well that with cacheable responses there will be a difference
(the code in ngx_http_upstream_finalize_request() will store the error
in the cache), though this change doesn't touch the cacheable case.
Previously, ngx_http_upstream_finalize_request(0) was used in most
cases after errors. While with the current code there is no
difference, using NGX_ERROR makes it possible to pass a bit more
information into ngx_http_upstream_finalize_request().
In all cases, ngx_http_upstream_finalize_request() is now called with
NGX_ERROR. The NGX_HTTP_INTERNAL_SERVER_ERROR previously used in the
subrequest in memory case didn't cause any harm, but was inconsistent
with other uses.