Now ngx_http_spdy_state_protocol_error() is able to close a stream,
so there is no need for a separate call to do this.
Also fixed a zero status code in logs for some cases.
The 7022564a9e0e changeset rendered the workaround from 2464ccebdb52,
which avoids a NULL pointer dereference with "if", ineffective. It is
now restored by moving the u->ssl_name initialization after the check.
Found by Coverity (CID 1210408).
Previous code failed to properly restore cf->conf_file in case of
ngx_close_file() errors, potentially resulting in a double free of
cf->conf_file->buffer->start.
Found by Coverity (CID 1087507).
While managing big caches, it is possible that expiring old cache items
in ngx_http_file_cache_expire() will take a while. Added a check for
ngx_quit / ngx_terminate to make sure the cache manager can be terminated
while in ngx_http_file_cache_expire().
The ngx_http_proxy_rewrite_cookie() function expects the value of the
"Set-Cookie" header to be null-terminated, and for headers obtained
from a proxied server this is usually true.
Now the ngx_http_proxy_rewrite() function preserves the null character
while rewriting headers.
This fixes accessing memory outside of the rewritten value if both the
"proxy_cookie_path" and "proxy_cookie_domain" directives are used in
the same location.
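For reference, a minimal configuration sketch combining both rewrites in one
location (the upstream host and paths are illustrative):

    location / {
        proxy_pass          http://backend.example.com;
        # both Set-Cookie rewrites apply to the same header value
        proxy_cookie_domain backend.example.com $host;
        proxy_cookie_path   /app/ /;
    }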
There's a race condition between closing a stream by one endpoint
and sending a WINDOW_UPDATE frame by another. So it would be better
to just skip such frames for unknown streams, as is already done
for DATA frames.
These directives make it possible to switch on Server Name Indication (SNI)
while connecting to upstream servers.
By default, proxy_ssl_server_name is currently off (that is, no SNI) and
proxy_ssl_name is set to the host used in the proxy_pass directive.
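For example, SNI to an upstream can be enabled as follows (the upstream host
is illustrative):

    location / {
        proxy_pass             https://backend.example.com;
        proxy_ssl_server_name  on;
        # defaults to the host from proxy_pass; shown here for clarity
        proxy_ssl_name         backend.example.com;
    }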
SSL_CTX_set_cipher_list() may fail if there are no valid ciphers
specified in proxy_ssl_ciphers / uwsgi_ssl_ciphers, resulting in
an SSL context leak.
In theory, ngx_pool_cleanup_add() may fail too, but this case is
intentionally left out for now, as it's almost impossible and a proper
fix would require changes to the http ssl and mail ssl code as well.
This should prevent attempts to use the pointer before it is checked,
since all modern compilers are able to spot access to an uninitialized
variable.
No functional changes.
Previously, an empty frame object was created for an output chain that contains
only sync or flush empty buffers. But since 39d7eef2e332 every DATA frame has
the flush flag set on its last buffer, so there is no longer any need for
additional flush buffers in the output queue, and they can be skipped.
Note that such flush frames caused an incorrect $body_bytes_sent value.
The SPDY draft 2 specification requires that if an endpoint receives a
control frame for a type it does not recognize, it must ignore the frame.
But the 3 and 3.1 drafts don't seem to declare any behavior for such a case,
so sticking with the previous draft in this matter looks right.
Previously, however, only the 8 least significant bits of the type field were
parsed, while the rest of the 16-bit field was checked against zero.
Though there are no known frame types bigger than 255, this resulted in
inconsistent handling of such frames: they were not recognized as valid
frames at all, and the connection was closed.
If the start time is within the track but the end time is out of it, the
"end time is out mp4 stts samples" error is generated. However, it's
better to ignore the error and output the track until its end.
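For illustration, with a minimal mp4 setup (the path is illustrative), a request
such as "/video/file.mp4?start=10&end=100000" whose end time falls beyond the
track now plays the track to its end instead of failing:

    location /video/ {
        mp4;
    }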
The ngx_cacheline_size may be too low on some platforms, resulting
in unexpected hash build problems (as no collisions are tolerated due
to low bucket_size, and max_size isn't big enough to build a hash without
collisions). These problems aren't fatal anymore but nevertheless
need to be addressed.
The flag makes it possible to suppress "ngx_slab_alloc() failed: no memory"
messages from the slab allocator, e.g., if LRU expiration is used by a consumer
and allocation failures aren't fatal.
The flag is now used in the SSL session cache code, and in the limit_req
module.
The "ngx_quit" may be reset by the worker thread before it's seen
by a ngx_cache_manager_thread(), resulting in an infinite loop. Make
sure to test ngx_exiting as well.
This brings Cygwin compilation in line with other case-insensitive
systems (notably win32 and OS X) where one can force case sensitivity
using -DNGX_HAVE_CASELESS_FILESYSTEM=0.
Despite the introduction of start and end crop operations, existing log
messages still mostly refer only to start. Logging is improved
to cover both cases.
New debug logging is added to track the entry count in atoms after
cropping.
Two format type mismatches are fixed as well.
When "start" value is equal to a track duration the request
fails with "time is out mp4 stts" like it did before track
duration check was added. Now such tracks are considered
short and skipped.
The SPDY/3.1 specification requires that the server must respond with
a 400 "Bad request" error if the sum of the data frame payload lengths
does not equal the size of the Content-Length header.
This also fixes "zero size buf in output" alert, that might be triggered
if client sends a greater than zero Content-Length header and closes
stream using the FIN flag with an empty request body.
There are a few cases in ngx_http_spdy_state_read_data() related to error
handling when ngx_http_spdy_state_skip() might be called with an inconsistent
state between *pos and sc->length, which violates frame layout
parsing and results in corruption of the SPDY connection.
Based on a patch by Xiaochen Wang.
Previously, only one case was checked: whether there is more data to parse
in the r->header_in buffer. But the buffer can be filled to the end by
the last parsed entry, so we also need to check that there's no more
data to inflate.
If set, it means that the response body is going to be in more than one buffer,
hence only range requests with a single range should be honored.
The flag is now used by mp4 and cacheable upstream responses, thus allowing
range requests of mp4 files with start/end, as well as range processing
on the first request to not-yet-cached files with proxy_cache.
Notably this makes it possible to play mp4 files (with proxy_cache, or with
the mp4 module) on iOS devices, as byte-range support is required by Apple.
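As a rough sketch of both setups that benefit from the flag (hostnames, paths,
and the cache zone name are illustrative):

    proxy_cache_path  /var/cache/nginx/media  keys_zone=media:10m;

    server {
        listen 80;

        # mp4 with start/end now keeps byte-range support
        location /video/ {
            mp4;
        }

        # ranges are honored even on the first, not-yet-cached request
        location /media/ {
            proxy_pass   http://backend.example.com;
            proxy_cache  media;
        }
    }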
The client address specified in the PROXY protocol header is now
saved in the $proxy_protocol_addr variable and can be used in
the realip module.
This is currently not implemented for mail.
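A configuration sketch combining both (the load balancer address is
illustrative):

    server {
        listen 80 proxy_protocol;

        # trust the PROXY protocol header only from the load balancer
        set_real_ip_from  192.0.2.1;
        real_ip_header    proxy_protocol;

        location / {
            # $proxy_protocol_addr holds the address from the PROXY header
            return 200 $proxy_protocol_addr;
        }
    }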
Additionally, make sure to check for errors from the ngx_http_parse_header_line()
call after joining saved parts. There shouldn't be any errors, though
the check may help to catch bugs like a missing f->split_parts reset.
Reported by Lucas Molas.
Previously, r->header_size was used to store the length of the part of a
value that represents an individual, already parsed HTTP header,
while r->header_end pointed to the end of the whole value.
Instead of storing the length of a following name or value as a pointer
to a potential end address (r->header_name_end and r->header_end),
which might be overflowed, the r->lowercase_index counter is now used
to store the remaining length of a following unparsed field.
This also fixes an incorrect $body_bytes_sent value if a request is
closed while parsing the request header. Since r->header_size
is intended for counting the header size, abusing it for header
parsing purposes was certainly a bad idea.
If something like "error_page 400 @name" is used in a configuration,
a request could be passed to a named location without URI set, and this
in turn might result in segmentation faults or other bad effects
as most of the code assumes URI is set.
With this change nginx will catch such configuration problems in
ngx_http_named_location() and will stop request processing if URI
is not set, returning 500.
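A configuration of the kind that is now caught (the location name and upstream
are illustrative):

    # a 400 error may be generated before the URI is parsed,
    # so passing it to a named location is now rejected with 500
    error_page 400 @fallback;

    location @fallback {
        proxy_pass http://backend.example.com;
    }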
If a request is finalized in the first call to the
ngx_http_upstream_process_upgraded() function, e.g., because the upstream
server closed the connection for some reason, then in the second call
the u->peer.connection pointer will be null, resulting in a segmentation
fault.
The fix is to avoid the second direct call and post an event instead. This
ensures that ngx_http_upstream_process_upgraded() won't be called again if
a request is finalized.
The fix removes a useless stsc entry in the resulting mp4.
If start_sample == n, the current stsc entry should be skipped
and the resulting stsc should start with the next entry.
The reason is that start_sample starts from 0, not 1.
Previously, the upstream's status code was overwritten with the
cached response's status code when a STALE or REVALIDATED
response was sent to the client.
Signed-off-by: Piotr Sikora <piotr@cloudflare.com>
The size of the priority field is increased by one bit in spdy/3,
and now it's a 3-bit field followed by 5 bits of unused space.
But the shift of these bits wasn't adjusted accordingly in
39d7eef2e332.
Linux returns EOPNOTSUPP for non-TCP sockets and ENOPROTOOPT for TCP
sockets, because getsockopt(TCP_FASTOPEN) is not implemented so far.
While there, lower the log level from ALERT to NOTICE to match other
getsockopt() failures.
Signed-off-by: Piotr Sikora <piotr@cloudflare.com>
If "proxy_pass" is specified with variables, the resulting
hostname is looked up in the list of upstreams defined in
configuration. The search was case-sensitive, as opposed
to the case of "proxy_pass" specified without variables.
If the seek position is within the last track chunk
and that chunk is standalone (the stsc entry describes only
this chunk), such a seek generates an stsc seek error. The
problem is that chunk numbers start with 1, not with 0.
The mp4 module does not check movie and track durations when reading a
file. Instead it generates errors when track metadata is shorter
than the seek position. Now such tracks are skipped and the movie
duration check is performed at the file read stage.
The mp4 module does not allow seeks after the last key frame. Since
the stss atom only contains key frames, it's usually shorter than
other track atoms. That leads to an stss seek error when the seek
position is close to the end of the file. The fix outputs an empty
stss atom instead of generating an error.
Backed out 05a56ebb084a, as it turns out that the kernel can return connections
without any delay if syncookies are used. This basically means we can't
assume anything about connections returned with deferred accept set.
To solve the original problem that 05a56ebb084a tried to address, i.e. to avoid
waiting longer than needed if a connection was accepted after the deferred
accept timeout, this patch changes the timeout set with
setsockopt(TCP_DEFER_ACCEPT) to 1 second, unconditionally. This is believed
to be enough for speed improvements, and doesn't imply major changes to the
timeouts used.
Note that before Linux 2.6.32 connections were dropped after the timeout. It is
believed that 1s is still appropriate for kernels before 2.6.32, as previously
tcp_synack_retries controlled the actual timeout, and with its default setting
a 1s value still results in an actual timeout of more than a minute.
If there is no SSI context in a given request at a given time,
the $date_local and $date_gmt variables used the "%s" format instead
of "%A, %d-%b-%Y %H:%M:%S %Z", which is documented as the default and used
if there is an SSI module context and timefmt wasn't modified using
the "config" SSI command.
While use of these variables outside of SSI evaluation isn't strictly
valid, the previous behaviour is certainly inconsistent, hence the fix.
Even during execution of a request it is possible that there will be
no session available, notably in the case of renegotiation. As a result,
logging of $ssl_session_id in some cases caused a NULL pointer dereference
after revision 97e3769637a7 (1.5.9). The added check returns an empty
string if there is no session available.
A read event on a client connection might have been disabled during
previous processing, and we at least need to handle events. Calling
ngx_http_upstream_process_upgraded() is the simplest way to do it.
Notably this change is needed for select, poll and /dev/poll event
methods.
Previous version of this patch was posted here:
http://mailman.nginx.org/pipermail/nginx/2014-January/041839.html
Previously, it used to contain the full serialized session instead of just
the session id, making it almost impossible to use the variable in a safe
way.
Thanks to Ivan Ristić.
It simplifies the code and allows easy reuse of the same queue pointer to store
streams in various queues with different requirements. The future implementation
of SPDY/3.1 will take advantage of this quality.
The "length" value better corresponds with the specification and reduces
confusion about whether the frame's header is included in "size" or not.
Also, this change simplifies some parts of the code, since the length of a
frame is more often useful than its actual size, especially considering
that the size of the frame header is constant.
There is no need for a separate "free" pointer; as with ngx_chain_t,
the "next" pointer can be used. But after this change a successfully handled
frame should not be accessed, so the frame handling cycle was improved to
store a pointer to the next frame before processing.
Also worth noting that initializing the "free" pointer to NULL in the original
code was superfluous.
That code was based on a misunderstanding of the spdy specification regarding
configuration applicability in SETTINGS frames. The original
interpretation was that the configuration is assigned to the whole
SPDY connection, while in fact it applies only to the endpoint.
Moreover, the strange thing is that the specification forbids multiple
entries in a SETTINGS frame with the same ID even if the flags are
different. As a result, Chrome sends two SETTINGS frames: one with
its own configuration, and another one with the configuration stored
for the server (when the FLAG_SETTINGS_PERSIST_VALUE flag was used
by the server).
To simplify the implementation we refuse to use the persistent settings
feature and thereby avoid all the complexity related to supporting it
properly.
The "headers" is not a good term, since it is used not only to count
name/value pairs in the HEADERS block but to count SETTINGS entries too.
Moreover, one name/value pair in HEADERS can contain multiple http headers
with the same name.
No functional changes.
While processing a DATA frame, the link to the related stream is stored in the
spdy connection object as part of the connection state. But this stream can be
closed between receiving parts of the frame.
Previously, pool->current wasn't moved back to the pool, resulting in blocks
not being used for further allocations if pool->current had already been moved
at the time of ngx_reset_pool(). Additionally, to preserve the logic of moving
pool->current, the p->d.failed counters are now properly cleared. While
here, pool->chain is also cleared.
This change is essentially a nop with the current code, but generally improves
things.
During the processing of input, some control frames can be added to the queue.
And if there were no writing streams at the moment, these control frames might
be left unsent for a long time (or even forever).
This long delay is especially critical for PING replies, since a client may
consider the connection broken and then resend exactly the same request over
a new connection, which is not safe in the case of non-idempotent HTTP methods.
There is no reason to allocate it from the connection pool; that is more like
just a bug, especially since ngx_http_spdy_settings_frame_handler() already
uses sc->pool to free the chain.
The frame->stream pointer should always be initialized for control frames since
the check against it can be performed in ngx_http_spdy_filter_cleanup().
Parameters of ngx_http_spdy_filter_get_shadow() are changed from size_t to off_t,
since the last call of the function may get the size and offset from the rest of
a file buffer. This fixes possible data loss, rightfully complained about by MSVC
on 32-bit systems where off_t is 8 bytes long while size_t is only 4 bytes.
The other two type casts are needed just to suppress warnings about possible
data loss, also raised by MSVC, which are false positives in these cases.
A false positive warning that the "cl" variable may be uninitialized in
the ngx_http_spdy_filter_get_data_frame() call was suppressed.
It is always initialized either in the "while" cycle or in the following
"if" condition, since frame_size cannot be zero.
The "delayed" flag always should be set if there are unsent frames,
but this might not be the case if ngx_http_spdy_body_filter() was
called with NULL chain.
As a result, the "send_timeout" timer could be set on a stream in
ngx_http_writer(). And if the timeout occurred before all the stream
data has been sent, then the request was finalized with the "client
timed out" error.
It was used to prevent destroying the request object when there are unsent
frames queued for the stream. But since it was incremented for each frame
and is only 8 bits long, it was not very hard to overflow the counter.
Now the stream->queued counter is checked instead.
This adds support for explicitly disabling SSL Session Tickets. In order to
have good Forward Secrecy support, the session ticket key either has to be
reloaded using nginx's binary upgrade process, or an external key file has to
be used and the configuration reloaded.
This directive adds another possibility: disabling session tickets altogether.
If session tickets are enabled and the process lives for a long time,
an attacker can grab the session ticket key from the process and use it to
decrypt any traffic that occurred during the entire lifetime of the
process.
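A minimal sketch of the new directive (certificate file names are illustrative):

    server {
        listen              443 ssl;
        ssl_certificate     example.com.crt;
        ssl_certificate_key example.com.key;

        # disable TLS Session Tickets entirely
        ssl_session_tickets off;
    }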
If a request had an empty request body (with Content-Length: 0), and there
were preread data available (e.g., due to a pipelined request in the buffer),
the "zero size buf in output" alert might be logged while proxying the
request to an upstream.
Similar alerts appeared with client_body_in_file_only if a request had an
empty request body.
Not really a strict check (as X-Accel-Expires might be ignored or
contain an invalid value), but quite simple to implement and better
than what we have now.
Fallback to synchronous sendfile() is now only done on the 3rd EBUSY in a row
without any progress. Not falling back is believed to be better
in the case of an occasional EBUSY, though protection is still needed to
make sure there will be no infinite loop.
This fixes the content type set in stub_status and autoindex responses
so that it is usable in content type checks made by filter modules, such
as the charset and sub filters.
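For instance, a sketch where the sub filter is applied to stub_status output,
assuming the sub module is built in (the replacement strings are illustrative);
this relies on the content type now being set properly:

    location = /status {
        stub_status      on;
        # stub_status responses are text/plain
        sub_filter_types text/plain;
        sub_filter       "Active connections" "Open connections";
    }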
There is no need to pass FLAG_FIN as a separate argument since it can always be
detected from the last_buf flag of the last frame buffer.
No functional changes.
Processing events from an upstream connection can result in sending queued frames
from other streams. In this case such streams were not added to the handling
queue and thus not properly handled.
The global per-connection flag was replaced by a per-stream flag that indicates
the stream currently being sent, while all other streams can be added to the
handling queue.
Conditions for skipping ineligible peers are rewritten to make adding new
conditions simpler and to be in line with the "round_robin" and "least_conn"
modules. No functional changes.
Tightened response header checks: ensure that reserved bits are zeroes,
and that the opcode is "standard query".
Fixed the "zero-length domain name in DNS response" condition.
Renamed ngx_resolver_query_t to ngx_resolver_hdr_t as it describes
the header that is common to DNS queries and answers.
Replaced the magic number 12 with the size of the header structure.
The other changes are self-explanatory.
This flag in SPDY fake write events serves the same purpose as the "ready"
flag in real events, and it must be dropped if the request needs to be handled.
Otherwise, it can prevent the request from being finalized if ngx_http_writer()
was set, which results in a connection leak.
Found by Xiaochen Wang.
If c->read->ready was reset, but later some data were read from a socket
buffer due to a call to ngx_ssl_recv(), the c->read->ready flag should
be restored if not all data were read from the OpenSSL buffers (as the kernel
won't notify us about the data anymore).
More details are available here:
http://mailman.nginx.org/pipermail/nginx/2013-November/041178.html
The following new directives are introduced: proxy_cache_revalidate,
fastcgi_cache_revalidate, scgi_cache_revalidate, uwsgi_cache_revalidate.
The default is off. When set to on, they enable cache revalidation using
conditional requests with the If-Modified-Since header for expired cache items.
As of now, no attempts are made to merge headers given in a 304 response
during cache revalidation with headers previously stored in a cache item.
Headers in a 304 response are only used to calculate new validity time
of a cache item.
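A minimal sketch of cache revalidation with the proxy module (the cache path,
zone name, and upstream are illustrative):

    proxy_cache_path  /var/cache/nginx  keys_zone=cache:10m;

    server {
        listen 80;

        location / {
            proxy_pass              http://backend.example.com;
            proxy_cache             cache;
            # expired items are revalidated with If-Modified-Since
            proxy_cache_revalidate  on;
        }
    }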
We should just call post_handler() when a subrequest wants to read the body, as
happens for HTTP since rev. f458156fd46a. An attempt to init the request body
for subrequests results in a hang if the body was not already read.
Recent Linux versions started to return EOPNOTSUPP to getsockopt() calls
on unix sockets, resulting in log pollution on binary upgrade. Such errors
are silently ignored now.
The accept_filter and deferred options were not applied to sockets
that were added to configuration during binary upgrade cycle.
Signed-off-by: Piotr Sikora <piotr@cloudflare.com>
A 403 (Forbidden) should not overwrite a 401 (Unauthorized), as the
latter should be returned with the WWW-Authenticate header to request
authentication by the client.
The problem could be triggered with 3rd party modules and the "deny"
directive, or with auth_basic and auth_request, which returns 403
(in 1.5.4+).
Patch by Jan Marc Hoffmann.
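One possible reproduction sketch, assuming the auth_request module is built in
(the auth endpoint and user file are illustrative): the 401 challenge from
auth_basic is no longer masked by the 403 from the subrequest.

    location /protected/ {
        auth_basic           "restricted";
        auth_basic_user_file /etc/nginx/htpasswd;

        # the subrequest answers 403; the client still gets
        # 401 with WWW-Authenticate when credentials are missing
        auth_request         /auth;
    }

    location = /auth {
        internal;
        return 403;
    }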
Much like with other headers, "add_header Cache-Control $value;" no longer
results in anything added to response headers if $value evaluates to an
empty string.
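For example, with a map that yields an empty value for most requests (the map
and paths are illustrative), the header is only added where the value is
non-empty:

    map $uri $cache_control {
        default     "";
        ~\.html$    "no-cache";
    }

    server {
        listen 80;

        location / {
            root /var/www/html;
            # nothing is added when $cache_control is an empty string
            add_header Cache-Control $cache_control;
        }
    }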
In order to support key rollover, ssl_session_ticket_key can be defined
multiple times. The first key will be used to issue and resume Session
Tickets, while the rest will be used only to resume them.
ssl_session_ticket_key session_tickets/current.key;
ssl_session_ticket_key session_tickets/prev-1h.key;
ssl_session_ticket_key session_tickets/prev-2h.key;
Please note that nginx supports Session Tickets even without explicit
configuration of the keys, and this feature should only be used in setups
where SSL traffic is distributed across multiple nginx servers.
Signed-off-by: Piotr Sikora <piotr@cloudflare.com>
The configured timeout is used by OpenSSL as a hint for clients in TLS Session
Tickets. The previous code resulted in the default timeout (5m) being used for
TLS Session Tickets if there was no session cache configured.
Prodded by Piotr Sikora.
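A sketch of the affected case (certificate file names are illustrative): no
session cache is configured, yet the timeout is now used as the ticket
lifetime hint.

    server {
        listen              443 ssl;
        ssl_certificate     example.com.crt;
        ssl_certificate_key example.com.key;

        # used as the lifetime hint in TLS Session Tickets
        # even though no ssl_session_cache is configured
        ssl_session_timeout 1h;
    }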