For normal cached responses ngx_http_cache_send() sends the last buffer and
the request is then finalized via the ngx_http_finalize_request() call, i.e.
everything is fine.
But for stale responses (i.e. when the upstream died, but we have something
in the cache) the same ngx_http_cache_send() sends the last buffer, and then
ngx_http_upstream_finalize_request() sends another last buffer. This causes a
duplicate final chunk to appear if chunked transfer encoding is used (with
resulting problems with keepalive connections and so on).
Fix this by not sending another last buffer in
ngx_http_upstream_finalize_request() if we know the response was served from
the cache.
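
A minimal sketch of the idea, assuming the r->cached flag marks responses
served by ngx_http_cache_send() (the exact condition in the real code may
differ):

    static void
    ngx_http_upstream_finalize_request(ngx_http_request_t *r,
        ngx_http_upstream_t *u, ngx_int_t rc)
    {
        ...

        if (rc == 0 && !r->cached) {
            /* for cached responses the last buffer was already sent by
             * ngx_http_cache_send(); only send it here for responses
             * read from the upstream */
            rc = ngx_http_send_special(r, NGX_HTTP_LAST);
        }

        ngx_http_finalize_request(r, rc);
    }
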
This fixes crashes observed with some 3rd party balancer modules. Standard
balancer modules (round-robin and ip hash) explicitly set pc->connection
(aka u->peer.connection) to NULL and aren't affected.
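
The defensive handling implied here looks roughly like this (a sketch; the
exact place in the upstream module where the check is performed is not shown
in this log):

    if (u->peer.connection) {
        /* only touch the upstream connection if the balancer
         * actually left one behind */
        ngx_close_connection(u->peer.connection);
    }

    u->peer.connection = NULL;
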
As long as ngx_event_pipe() has read more data from the upstream than
specified in p->length, the data is passed to the input filter even if the
buffer isn't yet full. This allows processing data of known length without
relying on connection close to signal the end of the data.
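
A simplified sketch of the idea (the real ngx_event_pipe() buffer bookkeeping
is more involved):

    /* after reading data from the upstream into cl->buf */

    if (p->length != -1
        && cl->buf->last - cl->buf->pos >= p->length)
    {
        /* we already have all the data the protocol handler announced:
         * pass the buffer to the input filter instead of waiting for
         * the buffer to fill up or for the connection to close */

        if (p->input_filter(p, cl->buf) == NGX_ERROR) {
            return NGX_ABORT;
        }
    }
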
By default p->length is set to -1 in the upstream module, i.e. the end of
data is indicated by connection close. To let per-protocol handlers set it,
the upstream input_filter_init() is now called in buffered mode as well (it
was previously only called in unbuffered mode).
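
For example, a per-protocol handler can now set the pipe length from the
parsed response headers in its input_filter_init() callback (a hypothetical
handler, shown for illustration only):

    static ngx_int_t
    ngx_http_example_input_filter_init(void *data)
    {
        ngx_http_request_t   *r = data;
        ngx_http_upstream_t  *u = r->upstream;

        /* with a known length, ngx_event_pipe() can detect the end of
         * the response without waiting for a connection close */
        u->pipe->length = u->headers_in.content_length_n;

        return NGX_OK;
    }
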
The previous use of size_t may cause weird effects on 32-bit platforms with
certain big responses transferred in unbuffered mode.
Nuke "if (size > u->length)" check as it's not usefull anyway (preread
body data isn't subject to this check) and now requires additional check
for u->length being positive.
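
The underlying type change being described, as a sketch (the field lives in
ngx_http_upstream_t; shown for illustration):

    /* before: size_t is 32 bits on 32-bit platforms, so big responses
     * overflow and there is no room for a "length unknown" marker */
    size_t    length;

    /* after: off_t is 64 bits (with large file support) and signed,
     * so big responses and -1 are represented correctly */
    off_t     length;
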
We no longer use r->headers_out.content_length_n as the primary source of
the backend's response length. Instead, we parse the response length into
u->headers_in.content_length_n and copy it to r->headers_out.content_length_n
when needed.
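
A sketch of the resulting flow (simplified; in the real code the parsing
lives in the per-protocol modules and the copy happens while forwarding
headers to the client, error handling omitted):

    /* while processing the backend's Content-Length header */
    u->headers_in.content_length_n = ngx_atoof(h->value.data, h->value.len);

    ...

    /* later, when the response headers are sent to the client */
    r->headers_out.content_length_n = u->headers_in.content_length_n;
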
Just doing another connect isn't safe, as peer.get() may expect peer.tries
to be strictly positive (this is the case e.g. with round-robin and multiple
upstream servers). Increment peer.tries to at least avoid a CPU hog in the
round-robin balancer (with the patch an alert will be seen instead).
This is not enough to fully address the problem though, hence the TODO. We
should be able to inform the balancer that the error wasn't considered fatal
and that it may make sense to retry the same peer.
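
A minimal sketch of the workaround, assuming the retry is issued around the
upstream's next-peer handling (names and placement are illustrative):

    /* TODO: tell the balancer the error wasn't fatal, so it may
     * legitimately offer the same peer again */

    u->peer.tries++;

    ngx_http_upstream_connect(r, u);
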
ngx_chain_update_chains() needs a pool to free chain links used for buffers
with non-matching tags. Providing one helps to reduce memory consumption for
long-lived requests.
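
A call now passes the pool as the first argument; here ctx and
ngx_http_example_module stand in for a module's own context and tag:

    /* ctx and ngx_http_example_module are placeholders for a module's
     * own context structure and module object */
    ngx_chain_update_chains(r->pool, &ctx->free, &ctx->busy, &out,
                            (ngx_buf_tag_t) &ngx_http_example_module);
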
*) now ngx_http_file_cache_cleanup() uses ngx_http_file_cache_free()
*) ngx_http_file_cache_free() interface has been changed to accept r->cache
ngx_http_file_cache_cleanup() must use r->cache, not r, because there can be
several r->cache's during request processing, r->cache may be NULL at request
finalization, etc. (see the sketch below the list)
*) test the case when an updating request does not complete correctly
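
A sketch of the changed interface and its use from the cleanup handler
(simplified):

    void ngx_http_file_cache_free(ngx_http_cache_t *c, ngx_temp_file_t *tf);

    static void
    ngx_http_file_cache_cleanup(void *data)
    {
        ngx_http_cache_t  *c = data;

        /* operate on the saved cache node, not on r->cache:
         * r->cache may have been replaced or reset to NULL
         * by the time the request is finalized */

        ngx_http_file_cache_free(c, NULL);
    }
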