A stale write event may happen if both read and write events were
reported, and processing of the read event closed the descriptor.
In practice this might result in "sendfilev() failed (134: ..." or
"writev() failed (134: ..." errors when switching to the next
upstream server.
See report here:
http://mailman.nginx.org/pipermail/nginx/2013-April/038421.html
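Below is a minimal sketch of the failure mode and the guard, using
hypothetical types; nginx itself keeps connections in preallocated
slots whose fd is reset to -1 on close, which is what makes a check
like this safe there.

    typedef struct connection_s  connection_t;

    struct connection_s {
        int    fd;
        void (*read_handler)(connection_t *c);
        void (*write_handler)(connection_t *c);
    };

    static void
    process_reported_events(connection_t *c, int readable, int writable)
    {
        if (readable) {
            c->read_handler(c);          /* may close the descriptor */
        }

        /* Guard against the stale write event: the read handler may
         * have closed the connection, in which case the write event
         * must be skipped rather than acted upon. */
        if (writable && c->fd != -1) {
            c->write_handler(c);
        }
    }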
To avoid further breakage it's now done properly: all the
dependencies are now passed to Makefile.PL. While here, fixed the
include list passed to Makefile.PL to use Makefile variables rather
than a list expanded during configure.
The filename extension used for dynamically loaded perl modules isn't
necessarily ".so" (e.g., it's ".bundle" on Mac OS X).
This fixes "make" after "make" unnecessarily rebuilding perl module.
Added missing dependencies for perl module's Makefile.
Simplified dependencies for the perl module nginx.so: it depends on
the Makefile, which in turn depends on the other perl bits.
Problems with setsockopt(TCP_NODELAY) and setsockopt(TCP_NOPUSH), as
well as with the sendfile() syscall on Solaris, are specific to
UNIX-domain sockets. Other address families, i.e. AF_INET and
AF_INET6, are fine.
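A hedged sketch of the resulting guard: skip the TCP-level option for
UNIX-domain peers. The wrapper below is illustrative, not nginx's
actual code.

    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <netinet/tcp.h>

    /* TCP options make no sense for AF_UNIX peers and even fail on
     * Solaris, so apply them only for the other address families. */
    static int
    set_tcp_nodelay(int fd, const struct sockaddr *sa)
    {
        int  flag = 1;

        if (sa->sa_family == AF_UNIX) {
            return 0;  /* nothing to do for UNIX-domain sockets */
        }

        return setsockopt(fd, IPPROTO_TCP, TCP_NODELAY,
                          (const void *) &flag, sizeof(flag));
    }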
On Win32 platforms 0 is used to indicate errors in file operations,
so comparing against -1 is not portable. This was not much of an
issue in the code being patched, since only the ngx_fd_info() test is
actually reachable on Win32, and in the worst case it might result in
a bogus error log entry.
Patch by Piotr Sikora.
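The portable pattern, sketched below with illustrative names, is to
compare against a per-platform error value instead of a hard-coded
-1; the values mirror the convention nginx uses, and file_info() is a
hypothetical stand-in for ngx_fd_info().

    #include <sys/types.h>
    #include <sys/stat.h>

    #ifdef _WIN32
    #define FILE_ERROR  0    /* e.g. GetFileInformationByHandle() */
    #else
    #define FILE_ERROR -1    /* e.g. fstat() */
    #endif

    static int
    file_info(int fd, struct stat *fi)
    {
        /* on win32 this would wrap GetFileInformationByHandle(),
         * which returns 0 on failure; on POSIX it is just fstat() */
        return fstat(fd, fi);
    }

    static int
    check_file(int fd)
    {
        struct stat  fi;

        /* portable: compare against the platform's own error value */
        if (file_info(fd, &fi) == FILE_ERROR) {
            return -1;
        }

        return 0;
    }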
Sorting of upstream servers by their weights is not required by
current balancing algorithms.
This will likely change the mapping to backends in weighted upstreams
served by ip_hash.
While exporting parts of the tree might be better in some cases, it
is awfully slow overseas, and it also requires unlocking the ssh key
multiple times. Exporting the whole repo and removing the directories
not needed for the zip is faster here.
It is also a required step before we can switch to Mercurial.
A corresponding variable, $connections_waiting, was added as well.
Previously, waiting connections were counted as the difference
between active connections and the sum of reading and writing
connections. That made it impossible to count more than one request
in a single connection as reading or writing (as is the case for
SPDY).
Also, we no longer count connections in the handshake state as
waiting.
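The sketch below contrasts the two accounting schemes with
hypothetical names; the derived formula breaks once a single
connection can be counted as both reading and writing, while an
explicit counter updated on idle/resume transitions does not.

    static long  active, reading, writing, waiting;

    /* old scheme: waiting derived on demand for the status page */
    static long
    derived_waiting(void)
    {
        return active - (reading + writing);
    }

    /* new scheme: waiting is its own counter, updated whenever a
     * connection goes idle or resumes work */
    static void
    connection_becomes_idle(void)
    {
        waiting++;
    }

    static void
    connection_resumes_work(void)
    {
        waiting--;
    }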
This should improve behavior under a shortage of connections.
Since an SSL handshake usually takes a significant amount of time, we
exclude connections from the reusable queue during this period to
avoid flushing them prematurely.
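A sketch of the approach, assuming nginx's ngx_reusable_connection()
API; the handler names are illustrative.

    #include <ngx_config.h>
    #include <ngx_core.h>

    static void
    ssl_handshake_start(ngx_connection_t *c)
    {
        /* take the connection out of the reusable queue so it is not
         * flushed to free up connections while the handshake runs */
        ngx_reusable_connection(c, 0);
    }

    static void
    ssl_handshake_done(ngx_connection_t *c)
    {
        /* handshake finished: the connection may be reclaimed again
         * once it goes idle */
        ngx_reusable_connection(c, 1);
    }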
If proxy_pass to a host with dynamic resolution was used to handle a
subrequest, and host resolution failed, the main request wasn't run
until something else happened on the connection. E.g., a request to
"/zzz" with the following configuration hung:
    addition_types *;
    resolver 8.8.8.8;

    location /test {
        set $ihost xxx;
        proxy_pass http://$ihost;
    }

    location /zzz {
        add_after_body /test;
        return 200 "test";
    }
Report and original version of the patch by Lanshun Zhou,
http://mailman.nginx.org/pipermail/nginx-devel/2013-March/003476.html.
The code that reuses r->request_body->buf in the upstream module
assumes it's a dedicated buffer, hence after 1.3.9 (r4931) it might
reuse r->header_in if client_body_in_file_only was set, resulting in
corruption of the original request. It is considered safer to always
create a dedicated buffer for rb->bufs to avoid such problems.
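A sketch of the safer pattern, using real nginx buffer APIs but a
hypothetical helper: copy the preread bytes into a dedicated buffer
instead of linking r->header_in into rb->bufs.

    #include <ngx_config.h>
    #include <ngx_core.h>
    #include <ngx_http.h>

    static ngx_int_t
    copy_preread_body(ngx_http_request_t *r, size_t preread)
    {
        ngx_buf_t    *b;
        ngx_chain_t  *cl;

        /* a dedicated buffer: reuse by the upstream module can no
         * longer touch r->header_in */
        b = ngx_create_temp_buf(r->pool, preread);
        if (b == NULL) {
            return NGX_ERROR;
        }

        b->last = ngx_cpymem(b->pos, r->header_in->pos, preread);

        cl = ngx_alloc_chain_link(r->pool);
        if (cl == NULL) {
            return NGX_ERROR;
        }

        cl->buf = b;
        cl->next = NULL;
        r->request_body->bufs = cl;

        return NGX_OK;
    }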
After the introduction of chunked request body handling in 1.3.9
(r4931), r->request_body->bufs buffers have b->start pointing to the
original buffer start (and b->pos pointing to the real data of this
particular buffer). While this is ok per se, it caused bad things
(usually the original request headers included before the request
body) after reinitialization of the request chain in
ngx_http_upstream_reinit() while sending the request to the next
upstream server (which used to do b->pos = b->start for each buffer
in the request chain).
Patch by Piotr Sikora.
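Here is a toy demonstration of that failure mode (hypothetical code,
not nginx's): once start no longer points at this buffer's own data,
rewinding with pos = start resends the headers as part of the body.

    #include <stdio.h>
    #include <string.h>

    typedef struct {
        char  *start;  /* start of the underlying buffer  */
        char  *pos;    /* start of this buffer's own data */
        char  *last;   /* end of data                     */
    } buf_t;

    int
    main(void)
    {
        char   mem[] = "GET / HTTP/1.1\r\n\r\nBODY";
        buf_t  b = { mem, mem + 18, mem + strlen(mem) };

        /* correct resend: only the body, from pos to last */
        printf("ok:  %.*s\n", (int) (b.last - b.pos), b.pos);

        /* old reinit: rewind to start, resending the headers too */
        b.pos = b.start;
        printf("bad: %.*s\n", (int) (b.last - b.pos), b.pos);

        return 0;
    }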
If c->recv() returns 0 there is no sense in using ngx_socket_errno
for logging: its value is meaningless. (The code in question was
copied from ngx_http_keepalive_handler(), but there ngx_socket_errno
makes sense, as it's used as a part of ECONNRESET handling and the
c->recv() call is preceded by an ngx_set_socket_errno(0) call.)
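An illustrative handler (not the patched code itself) showing when
ngx_socket_errno is and is not meaningful:

    #include <ngx_config.h>
    #include <ngx_core.h>

    static void
    read_handler(ngx_connection_t *c)
    {
        u_char   buf[1];
        ssize_t  n;

        n = c->recv(c, buf, 1);

        if (n == 0) {
            /* orderly close by the peer: errno carries nothing useful */
            ngx_log_error(NGX_LOG_INFO, c->log, 0,
                          "client closed connection");
            return;
        }

        if (n == NGX_ERROR) {
            /* a real error: ngx_socket_errno is meaningful here */
            ngx_log_error(NGX_LOG_INFO, c->log, ngx_socket_errno,
                          "recv() failed");
        }
    }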
In r2411 the setting of NGX_HTTP_GZIP_BUFFERED in c->buffered was
moved from ngx_http_gzip_filter_deflate_start() to
ngx_http_gzip_filter_buffer(), since the latter was always called
first. But in r2543 the "postpone_gzipping" directive was introduced,
and if postponed gzipping is disabled (the default setting),
ngx_http_gzip_filter_buffer() is not called at all.
We must always set NGX_HTTP_GZIP_BUFFERED after the start of compression
since there is always a trailer that is buffered.
There are no known cases where this leads to any problems with the
current code, but we have already had trouble with it in the upcoming
SPDY implementation.
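A sketch of the invariant (the function name is illustrative, the
flag is nginx's): raise NGX_HTTP_GZIP_BUFFERED as soon as compression
starts, because the gzip trailer is buffered even when no input is.

    #include <ngx_config.h>
    #include <ngx_core.h>
    #include <ngx_http.h>

    static void
    gzip_deflate_start(ngx_http_request_t *r)
    {
        /* always set: the trailer is buffered even when input is
         * passed through without postponed gzipping */
        r->connection->buffered |= NGX_HTTP_GZIP_BUFFERED;

        /* ... deflateInit2() and the rest of the setup ... */
    }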
Not only is this useful for the upcoming SPDY support, but it can
also help to improve HTTPS performance by enabling TLS False Start in
Chrome/Chromium browsers [1]. So, we now always enable NPN for HTTPS
if it is supported by OpenSSL.
[1] http://www.imperialviolet.org/2012/04/11/falsestart.html
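A minimal sketch of unconditionally advertising NPN, assuming an
OpenSSL built with NPN support; the callback and the protocol list
are illustrative.

    #include <openssl/ssl.h>

    static int
    npn_advertised_cb(SSL *ssl, const unsigned char **out,
        unsigned int *outlen, void *arg)
    {
        /* length-prefixed protocol list: here just "http/1.1" */
        static const unsigned char  protos[] = "\x08http/1.1";

        (void) ssl;
        (void) arg;

        *out = protos;
        *outlen = sizeof(protos) - 1;

        return SSL_TLSEXT_ERR_OK;
    }

    static void
    enable_npn(SSL_CTX *ctx)
    {
    #ifdef TLSEXT_TYPE_next_proto_neg
        SSL_CTX_set_next_protos_advertised_cb(ctx, npn_advertised_cb,
                                              NULL);
    #endif
    }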
The c->single_connection flag was intended to be used as a lock
mechanism to serialize modifications of the request object from
several threads working with the client and upstream connections. The
flag is redundant, since threads in nginx have never been used that
way.