OCSP responses may contain no nextUpdate. As per RFC 6960, this means
that nextUpdate checks should be bypassed. Handle this gracefully by
using NGX_MAX_TIME_T_VALUE as "valid" in such a case.
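For illustration only, the handling might look like this minimal sketch (not
the actual nginx code; the helper name and the placeholder definition of
NGX_MAX_TIME_T_VALUE are assumptions):

    #include <time.h>

    #ifndef NGX_MAX_TIME_T_VALUE
    #define NGX_MAX_TIME_T_VALUE  ((time_t) 0x7fffffff)   /* placeholder */
    #endif

    /* Hypothetical helper: pick the validity time of an OCSP response.
     * A missing nextUpdate means the check is bypassed, so the response
     * is treated as valid "forever". */
    static time_t
    ocsp_valid_until(int has_next_update, time_t next_update)
    {
        if (!has_next_update) {
            return NGX_MAX_TIME_T_VALUE;
        }

        return next_update;
    }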
The problem was introduced by 6893a1007a7c (1.9.2).
Reported by Matthew Baldwin.
Broken by 6893a1007a7c (1.9.2) during the introduction of strict OCSP response
validity checks. As the stapling file is expected to be returned
unconditionally, the fix is to set its validity to the maximum supported time.
Reported by Faidon Liambotis.
Once upstream is connected, the upstream buffer is allocated. Previously, the
proxy module used the buffer allocation status to check if upstream is
connected. Now it's enough to check the flag.
If the -T option is passed, then in addition to the configuration test, the
configuration files are output to stdout.
In the debug mode, configuration files are kept in memory and can be accessed
using a debugger.
The function is now called ngx_parse_http_time(), and can be used by
any code to parse HTTP-style date and time. In particular, it will be
used for OCSP stapling.
For compatibility, a macro mapping ngx_http_parse_time() to the new name
is provided for a while.
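For illustration, such a compatibility shim could be as simple as the
following (a sketch; the exact definition lives in the nginx headers):

    #define ngx_http_parse_time(value, len)  ngx_parse_http_time(value, len)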
With this change it is no longer necessary to pass -D_GNU_SOURCE manually,
and -D_FILE_OFFSET_BITS=64 is set to use a 64-bit off_t.
Note that nginx currently fails to work properly with master process
enabled on GNU Hurd, as fcntl(F_SETOWN) returns EOPNOTSUPP for sockets
as of GNU Hurd 0.6. Additionally, our strerror() preloading doesn't
work well with GNU Hurd, as it uses large numbers for most errors.
When configured, an individual listen socket on a given address is
created for each worker process. This makes it possible to reduce in-kernel
lock contention on configurations with high accept rates, resulting in
better performance. As of now it works on Linux and DragonFly BSD.
Note that on Linux incoming connection requests are currently tied
to a specific listen socket, and if some sockets are closed, connection
requests will be reset, see https://lwn.net/Articles/542629/. With
nginx, this may happen if the number of worker processes is reduced.
There is no such problem on DragonFly BSD.
Based on previous work by Sepherosa Ziehau and Yingqi Lu.
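The kernel feature behind this is SO_REUSEPORT; a minimal standalone sketch
(not nginx code) of a listening socket opened with it:

    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>
    #include <string.h>
    #include <unistd.h>

    /* Each worker opens its own listening socket on the same address/port
     * with SO_REUSEPORT set; the kernel distributes connections among them. */
    static int
    open_reuseport_listener(in_port_t port)
    {
        int                 fd, on = 1;
        struct sockaddr_in  sin;

        fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd == -1) {
            return -1;
        }

        if (setsockopt(fd, SOL_SOCKET, SO_REUSEPORT, &on, sizeof(on)) == -1) {
            close(fd);
            return -1;
        }

        memset(&sin, 0, sizeof(sin));
        sin.sin_family = AF_INET;
        sin.sin_addr.s_addr = htonl(INADDR_ANY);
        sin.sin_port = htons(port);

        if (bind(fd, (struct sockaddr *) &sin, sizeof(sin)) == -1
            || listen(fd, 511) == -1)
        {
            close(fd);
            return -1;
        }

        return fd;
    }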
There is no need to set "i" to 0, as it's expected to be 0 assuming
the bindings are properly sorted, and we already rely on this when we
explicitly set hport->naddrs to 1. The remaining conditional code is
replaced with the equivalent "hport->naddrs = i + 1".
Identical modifications are done in the mail and stream modules,
in the ngx_mail_optimize_servers() and ngx_stream_optimize_servers()
functions, respectively.
No functional changes.
This may happen if eventfd() returns ENOSYS, notably seen on CentOS 5.4.
Such a failure will now just disable the notification mechanism and let
the callers cope with it, instead of failing to start worker processes.
If thread pools are not configured, this can safely be ignored.
Two mechanisms are implemented to make it possible to store pointers
in shared memory on Windows, in particular on Windows Vista and later
versions with ASLR:
- The ngx_shm_remap() function was added to allow remapping of a shared memory
zone to the address originally used for it in the master process. While
important, it doesn't solve the problem by itself as in many cases it's
not possible to use the address because of conflicts with other
allocations.
- We now create mappings at the same address in all processes by starting
mappings at predefined addresses normally unused by newborn processes.
These two mechanisms combined make it possible to use shared memory on
Windows almost without problems, including reloads.
Based on the patch by Sergey Brester:
http://mailman.nginx.org/pipermail/nginx-devel/2015-April/006836.html
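A rough sketch of the first mechanism in terms of the Win32 API (illustrative
of the approach only, not the nginx implementation):

    #include <windows.h>

    /* Try to map the shared memory view at the address the master process
     * used, so absolute pointers stored inside the zone stay valid in the
     * worker; returns NULL if that address is already taken. */
    static void *
    remap_shared_zone(HANDLE mapping, void *requested_addr, size_t size)
    {
        return MapViewOfFileEx(mapping, FILE_MAP_WRITE, 0, 0, size,
                               requested_addr);
    }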
It's now enough to specify the proxy_protocol option in one listen directive to
enable it in all servers listening on the same address/port. Previously,
the setting from the first directive was always used.
When a client or upstream connection was closed, the level-triggered read
event remained active until the end of the session, leading to a CPU hog.
Now the NGX_CLOSE_EVENT flag is used to unschedule the event.
If a peer was initially skipped due to max_fails, there's no reason
not to try it again if enough time has passed, and the next_upstream
logic is in action.
This also reduces diffs with NGINX Plus.
Similar to ngx_http_file_cache_set_slot(), the last component of file->name
with a fixed length of 10 bytes, as generated in ngx_create_temp_path(), is
used as a source for the names of intermediate subdirectories with each one
taking its own part. Ensure that the sum of specified levels with slashes
fits into the length (ticket #731).
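A rough sketch of the kind of check described, assuming 10-byte temporary
file names (the names and the exact boundary here are illustrative, not the
nginx source):

    #define TEMP_NAME_LEN  10

    /* The sum of the configured levels plus one '/' per level must fit
     * into the 10-byte last component of the temporary file name. */
    static int
    levels_fit(const int *level, int nlevels)
    {
        int  i, len = 0;

        for (i = 0; i < nlevels; i++) {
            if (level[i] == 0) {
                break;
            }

            len += level[i] + 1;
        }

        return len <= TEMP_NAME_LEN;
    }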
Missing call to X509_STORE_CTX_free when X509_STORE_CTX_init fails.
Missing call to OCSP_CERTID_free when OCSP_request_add0_id fails.
Possible leaks in very particular scenarios of memory shortage.
This helps to avoid suboptimal behavior when a client waits for a control
frame or more data to increase window size, but the frames have been delayed
in the socket buffer.
The delays can be caused by bad interaction between Nagle's algorithm on the
nginx side and delayed ACK on the client side, or by TCP_CORK/TCP_NOPUSH
if SPDY was working without SSL and sendfile() was used.
The pushing code is now very similar to ngx_http_set_keepalive().
If any preread body bytes were sent in the first chain, chunk size was
incorrectly added before the whole chain, including header, resulting in
an invalid request sent to upstream. Fixed to properly add chunk size
after the header.
The r->request_body_no_buffering flag was introduced. It instructs
client request body reading code to avoid reading the whole body, and
to call post_handler early instead. The caller should use the
ngx_http_read_unbuffered_request_body() function to read remaining
parts of the body.
The upstream module is now able to use this mode, if configured with
the proxy_request_buffering directive.
If the last header evaluation resulted in an empty header, the e.skip flag
was set and was not reset when we switched to evaluating body_values.
This incorrectly resulted in body values being skipped instead of producing
a correct body as set by proxy_set_body. The fix is to properly reset
the e.skip flag.
As the problem only appeared if the last potentially non-empty header
happened to be empty, it only manifested itself if proxy_set_body was used
with proxy_cache.
The SSL_MODE_NO_AUTO_CHAIN mode prevents OpenSSL from automatically
building a certificate chain on the fly if there is no certificate chain
explicitly provided. Before this change, certificates provided via the
ssl_client_certificate and ssl_trusted_certificate directives were
used by OpenSSL to automatically build certificate chains, resulting
in unexpected (and in some cases unneeded) chains being sent to clients.
LibreSSL removed support for export ciphers and a call to
SSL_CTX_set_tmp_rsa_callback() results in an error left in the error
queue. This caused alerts "ignoring stale global SSL error (...called
a function you should not call) while SSL handshaking" on the first connection
in each worker process.
LibreSSL 2.1.1+ started to set the SSL_OP_NO_SSLv3 option by default on
new contexts. This change clears it to make it possible to use SSLv3
with LibreSSL if enabled in the nginx configuration.
Prodded by Kuramoto Eiji.
Example of usage:
error_log memory:16m debug;
This makes it possible to configure debug logging with minimal impact on performance.
It's especially useful when rare crashes are experienced under high load.
The log can be extracted from a coredump using the following gdb script:
set $log = ngx_cycle->log
while $log->writer != ngx_log_memory_writer
set $log = $log->next
end
set $buf = (ngx_log_memory_buf_t *) $log->wdata
dump binary memory debug_log.txt $buf->start $buf->end
Keeping the ready flag in this case might result in a missed notification of
a broken connection until nginx tries to use the connection again.
While there, a stale comment about a stale event was removed, since this
function can also be called directly.
The code that calls sendfile() was cut into a separate function.
This simplifies EINTR processing and is needed for the following
changes that add threads support.
In case of filter finalization, r->upstream might be changed during
the ngx_event_pipe() call. Added an argument to preserve it while
calling the ngx_http_upstream_process_request() function.
A request may be already finalized when ngx_http_upstream_finalize_request()
is called, due to filter finalization: after filter finalization upstream
can be finalized via ngx_http_upstream_cleanup(), either from
ngx_http_terminate_request(), or because a new request was initiated
to an upstream. Then the upstream code will see an error returned from
the filter chain and will call the ngx_http_upstream_finalize_request()
function again.
To prevent corruption of various upstream data in this situation, make sure
to do nothing but merely call ngx_http_finalize_request().
Prodded by Yichun Zhang, for details see the thread at
http://nginx.org/pipermail/nginx-devel/2015-February/006539.html.
Previously, connection hung after calling ngx_http_ssl_handshake() with
rev->ready set and no bytes in socket to read. It's possible in at least the
following cases:
- when processing a connection with expired TCP_DEFER_ACCEPT on Linux
- after parsing PROXY protocol header if it arrived in a separate TCP packet
Thanks to James Hamlin.
When replacing a stale cache entry, its last_modified and etag could be
inherited from the old entry if the response code is not 200 or 206. Moreover,
etag could be inherited with any response code if it's missing in the new
response. As a result, the cache entry is left with invalid last_modified or
etag which could lead to broken revalidation.
For example, when a file is deleted from backend, its last_modified is copied to
the new 404 cache entry and is used later for revalidation. Once the old file
appears again with its original timestamp, revalidation succeeds and the cached
404 response is sent to client instead of the file.
The problem appeared with etags in 44b9ab7752e3 (1.7.3) and affected
last_modified in 1573fc7875fa (1.7.9).
Repeatedly calling ngx_http_upstream_add_chash_point() to create
the points array in sorted order is O(n^2) in the total weight.
This can cause nginx startup and reconfiguration to be substantially
delayed. For example, when the total weight is 1000, startup takes
5s on a modern laptop.
Replace this with linear insertion followed by a quicksort and removal of
duplicates. Startup for a total weight of 1000 is reduced to 40ms.
Based on a patch by Wai Keen Woon.
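A minimal sketch of the sort-then-deduplicate approach (the type and names
are illustrative, not the nginx structures):

    #include <stdint.h>
    #include <stdlib.h>

    typedef struct {
        uint32_t  hash;
        void     *server;
    } chash_point_t;

    static int
    cmp_points(const void *a, const void *b)
    {
        const chash_point_t  *pa = a, *pb = b;

        return (pa->hash < pb->hash) ? -1 : (pa->hash > pb->hash) ? 1 : 0;
    }

    /* Append points in any order elsewhere, then sort once and drop
     * duplicate hashes, instead of keeping the array sorted on every
     * insertion (which is O(n^2) in the total weight). */
    static size_t
    sort_and_dedupe(chash_point_t *points, size_t n)
    {
        size_t  i, j;

        qsort(points, n, sizeof(chash_point_t), cmp_points);

        for (i = 0, j = 0; i < n; i++) {
            if (j == 0 || points[i].hash != points[j - 1].hash) {
                points[j++] = points[i];
            }
        }

        return j;   /* new number of points */
    }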
Previously, the Auth-SSL-Verify header with the "NONE" value was always passed
to the auth_http script if verification of client certificates was disabled.
The "ssl_verify_client", "ssl_verify_depth", "ssl_client_certificate",
"ssl_trusted_certificate", and "ssl_crl" directives were introduced to control
SSL client certificate verification in the mail proxy module.
If there is a certificate, details of the certificate are passed to
the auth_http script via the Auth-SSL-Verify, Auth-SSL-Subject,
Auth-SSL-Issuer, Auth-SSL-Serial, and Auth-SSL-Fingerprint headers. If
the auth_http_pass_client_cert directive is set, the client certificate
in PEM format will be passed in the Auth-SSL-Cert header (urlencoded).
If a required certificate is not provided during the SSL handshake, or
certificate verification fails, a protocol-specific error is returned
after the SSL handshake and the connection is closed.
Based on previous work by Sven Peter, Franck Levionnois and Filipe Da Silva.
Initial size as calculated from the number of elements may be bigger
than max_size. If this happens, make sure to set size to max_size.
Reported by Chris West.
Previously, this function checked that the connection's local address existed
and returned an error if it was missing. Now a new address is assigned in this
case, making it possible to call this function not only for accepted
connections.
It appeared that the NGX_HAVE_AIO_SENDFILE macro was defined regardless of
the "--with-file-aio" configure option and the NGX_HAVE_FILE_AIO macro.
Now they are related.
Additionally, fixed one macro.
This reduces layering violation and simplifies the logic of AIO preread, since
it's now triggered by the send chain function itself without falling back to
the copy filter. The context of AIO operation is now stored per file buffer,
which makes it possible to properly handle cases when multiple buffers come
from different locations, each with its own configuration.
If fastcgi_pass (or any look-alike that doesn't imply a default
port) is specified as an IP literal (as opposed to a hostname),
port absence was not detected at configuration time and could
result in EADDRNOTAVAIL at run time.
Fixed this in such a way that configs like
    http {
        server {
            location / {
                fastcgi_pass 127.0.0.1;
            }
        }

        upstream 127.0.0.1 {
            server 10.0.0.1:12345;
        }
    }

still work. That is, port absence check is delayed until after
we make sure there's no explicit upstream with such a name.
The mtx->wait counter was not decremented if we were able to obtain the lock
right after incrementing it. This resulted in unneeded sem_post() calls,
eventually leading to EOVERFLOW errors being logged, "sem_post() failed
while wake shmtx (75: Value too large for defined data type)".
To close the race, mtx->wait is now decremented if we obtain the lock right
after incrementing it in ngx_shmtx_lock(). The result can become -1 if a
concurrent ngx_shmtx_unlock() decrements mtx->wait before the added code does.
However, that only leads to one extra iteration in the next call of
ngx_shmtx_lock().
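A simplified sketch of the locking path using C11 atomics (not the nginx
source, which uses its own atomic primitives and semaphores):

    #include <stdatomic.h>

    typedef struct {
        atomic_long  lock;
        atomic_long  wait;   /* may transiently become -1, as noted above */
    } shmtx_t;

    /* A waiter increments "wait", but if it then wins the lock immediately
     * it undoes the increment, so the unlocking side will not post the
     * semaphore for a waiter that no longer exists. */
    static int
    shmtx_try_lock_as_waiter(shmtx_t *mtx, long pid)
    {
        long  expected = 0;

        atomic_fetch_add(&mtx->wait, 1);

        if (atomic_compare_exchange_strong(&mtx->lock, &expected, pid)) {
            atomic_fetch_sub(&mtx->wait, 1);
            return 1;                     /* got the lock without waiting */
        }

        return 0;                         /* caller proceeds to sem_wait() */
    }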
The use_temp_path http cache feature is now implemented using a separate temp
hierarchy in cache directory. Prefix-based temp files are no longer needed.
If use_temp_path is set to off, a subdirectory "temp" is created in the cache
directory. It's used instead of proxy_temp_path and friends for caching
upstream response.
The trailer.count variable was not initialized if there was a header,
resulting in "sendfile() failed (22: Invalid argument)" alerts on OS X
if the "sendfile" directive was used. The bug was introduced
in 8e903522c17a (1.7.8).
Some parts of the code related to handling variants of a resource were moved
into a separate function that is called earlier. This makes it possible to use
the cache file name as a prefix for the temporary file in the following patch.
The original check for NGX_AGAIN was superfluous, since the function returns
only NGX_OK or NGX_ERROR. Now it looks similar to other places.
No functional changes.
The configuration handling code has changed to look similar to the proxy_store
directive and friends. This simplifies adding variable support in the following
patch.
No functional changes.
Currently, storing and caching mechanisms cannot work together, and a
configuration error is thrown when the proxy_store and proxy_cache
directives (as well as their friends) are configured on the same level.
But configurations like the one in the example below were allowed and could
result in critical errors in the error log:

    proxy_store on;

    location / {
        proxy_cache one;
    }

Only proxy_store worked in this case.
For more predictable and less error-prone behavior, these directives now
prevent each other from being inherited from the previous level.
This changes internal API related to handling of the "store"
flag in ngx_http_upstream_conf_t. Previously, a non-null value
of "store_lengths" was enough to enable store functionality with
custom path. Now, the "store" flag is also required to be set.
No functional changes.
The proxy_store, fastcgi_store, scgi_store, and uwsgi_store directives were
inherited incorrectly if a directive with variables was defined and then
redefined to the "on" value, i.e. in configurations like:

    proxy_store /data/www$upstream_http_x_store;

    location / {
        proxy_store on;
    }
In the following configuration, a request was sent to the backend without
the URI changed to '/' because of the "if" block:

    location /proxy-pass-uri {
        proxy_pass http://127.0.0.1:8080/;

        set $true 1;

        if ($true) {
            # nothing
        }
    }

The fix is to inherit conf->location from the location where proxy_pass was
configured, much like it's done with conf->vars.
The proxy_pass directive and other handlers are not expected to be inherited
into nested locations, but there is special code to inherit upstream
handlers into limit_except blocks, as well as the configuration into if{}
blocks. This caused incorrect behaviour in configurations with nested
locations and limit_except blocks, like this:

    location / {
        proxy_pass http://u;

        location /inner/ {
            # no proxy_pass here

            limit_except GET {
                # nothing
            }
        }
    }

In such a configuration, the limit_except block inside "location /inner/"
unexpectedly used the proxy_pass defined in "location /", while it shouldn't
have.
The fix is to avoid inheritance of conf->upstream.upstream (and
conf->proxy_lengths) into locations which don't have the noname flag.
Instead of independent inheritance of conf->upstream.upstream (proxy_pass
without variables) and conf->proxy_lengths (proxy_pass with variables),
we now test them both and inherit only if neither is set. Additionally,
the SSL context is now also inherited only in this case.
Based on the patch by Alexey Radkov.
RFC 7232 says:

    The 304 (Not Modified) status code indicates that a conditional GET
    or HEAD request has been received and would have resulted in a 200
    (OK) response if it were not for the fact that the condition
    evaluated to false.

which means that there is no reason to send requests with "If-None-Match"
and/or "If-Modified-Since" headers for responses cached with other status
codes.
Also, sending conditional requests for responses cached with other status
codes could result in strange behavior, e.g. an upstream server returning
304 Not Modified for cached 404 Not Found responses, etc.
Signed-off-by: Piotr Sikora <piotr@cloudflare.com>
In case of a cache lock timeout and in the aio handler, we now call
r->write_event_handler() instead of a connection write handler,
to make sure the appropriate subrequest is run. The previous code failed
to run inactive subrequests and hence resulted in suboptimal behaviour; see
the report by Yichun Zhang:
http://mailman.nginx.org/pipermail/nginx-devel/2013-October/004435.html
(Infinite hang claimed in the report seems impossible without 3rd party
modules, as subrequests will be eventually woken up by the postpone filter.)
To ensure proper logging, make sure current_request is set in all event
handlers, including the resolve, SSL handshake, cache lock wait timer, and
AIO read handlers. The ngx_http_set_log_request() macro was introduced to
simplify this.
The alert was introduced in 03ff14058272 (1.5.4), and was triggered on each
post_action invocation.
There is no real need to call header filters in case of post_action,
so return NGX_OK from ngx_http_send_header() if r->post_action is set.
This helps to avoid delays in sending the last chunk of data because
of bad interaction between Nagle's algorithm on the nginx side and
delayed ACK on the client side.
Delays could also be caused by TCP_CORK/TCP_NOPUSH if SPDY was
working without SSL and sendfile() was used.
In 954867a2f0a6, we switched to using the resolver node as the timer event
data. This broke debug event logging.
The now unused ngx_resolver_ctx_t.ident was replaced with
ngx_resolver_node_t.ident, so that ngx_event_ident() extracts something
sensible when accessing ngx_resolver_node_t as ngx_connection_t.
In 954867a2f0a6, we switched to using the resolver node as the
timer event data, so make sure we do not free the resolver node's
memory until the corresponding timer is deleted.
There was no real problem, since the number of bytes that can be sent is
limited by NGX_SENDFILE_MAXSIZE to less than 2G, but that can change in
the future.
Although ngx_solaris_sendfilev_chain() shouldn't suffer from the problem
mentioned in d1bde5c3c5d2, since IOV_MAX on Solaris is currently 16, this
follows the change from 3d5717550371 in order to make the code look similar
to other systems and to eliminate the potential problem in the future.
The upstream modules remove and alter a number of client headers
before sending the request to upstream. This set of headers is
smaller or even empty when cache is disabled.
It's still possible that a request in a cache-enabled location is
uncached, for example, if cache entry counter is below min_uses.
In this case it's better to alter a smaller set of headers and
pass more client headers to backend unchanged. One of the benefits
is enabling server-side byte ranges in such requests.
Once this age is reached, the cache lock is discarded and another
request can acquire the lock. Requests which failed to acquire
the lock are not allowed to cache the response.
For further progress a new buffer must be at least two bytes larger than
the remaining unparsed data. One more byte is needed for null-termination
and another one for further progress. Otherwise inflate() fails with
Z_BUF_ERROR.
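A tiny sketch of the sizing rule (names are illustrative, not the nginx
code):

    #include <stddef.h>

    /* The new buffer must exceed the leftover unparsed data by at least two
     * bytes: one for null-termination and one so inflate() can make
     * progress; otherwise inflate() returns Z_BUF_ERROR. */
    static size_t
    next_buffer_size(size_t remaining, size_t preferred)
    {
        size_t  min_size = remaining + 2;

        return (preferred > min_size) ? preferred : min_size;
    }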
Instead of collecting the various possible SSL_CTX_use_PrivateKey_file()
error codes, which becomes more and more difficult with the rising variety
of OpenSSL versions and derivatives, just continue with the next password.
Support for multiple passwords in a single ssl_password_file was broken by
recent OpenSSL changes (commit 4aac102f75b517bdb56b1bcfd0a856052d559f6e).
Affected OpenSSL releases: 0.9.8zc, 1.0.0o, 1.0.1j and 1.0.2-beta3.
Reported by Piotr Sikora.
Previously, nginx would emit empty values in a header with multiple,
NULL-separated values.
This is forbidden by the SPDY specification, which requires headers to
have either a single (possibly empty) value or multiple, NULL-separated
non-empty values.
Signed-off-by: Piotr Sikora <piotr@cloudflare.com>
When multiple upstream IP addresses were obtained via DNS resolving, the
number of upstream tries and the maximum time spent on these tries were not
affected. This patch fixes that.
Spaces in Accept-Charset, Accept-Encoding, and Accept-Language headers
are now ignored. As per the syntax of these headers, spaces can only appear
in places where they are optional.
If a variant stored can't be used to respond to a request, the variant
hash is used as a secondary key.
Additionally, if we previously switched to a secondary key, then while storing
a response to cache we check whether the variant hash still applies. If not,
we switch back to the original key, to handle cases when Vary changes.
To cache responses with Vary, we now calculate hash of headers listed
in Vary, and return the response from cache only if new request headers
match.
As of now, only one variant of the same resource can be stored in cache.
Previous code resulted in transfer stalls when the client happened
to read all the data in the buffers at once, while all gzip buffers
were exhausted (but ctx->nomem wasn't set). Make sure to call the
next body filter at least once per call if there are busy buffers.
Additionally, handling of calls with NULL chain was changed to follow
the same logic, i.e., next body filter is only called with NULL chain
if there are busy buffers. This is expected to fix "output chain is empty"
alerts as reported by some users after c52a761a2029 (1.5.7).
The GetQueuedCompletionStatus() documentation on MSDN gives the
following signature:
http://msdn.microsoft.com/en-us/library/windows/desktop/aa364986.aspx
BOOL WINAPI GetQueuedCompletionStatus(
_In_ HANDLE CompletionPort,
_Out_ LPDWORD lpNumberOfBytes,
_Out_ PULONG_PTR lpCompletionKey,
_Out_ LPOVERLAPPED *lpOverlapped,
_In_ DWORD dwMilliseconds
);
In the latest specification, the type of the third argument
(lpCompletionKey) is PULONG_PTR, not LPDWORD.
Due to the u->headers_in.last_modified_time not being correctly initialized,
this variable was evaluated to "Thu, 01 Jan 1970 00:00:00 GMT" for responses
cached without the "Last-Modified" header which resulted in subsequent proxy
requests being sent with "If-Modified-Since: Thu, 01 Jan 1970 00:00:00 GMT"
header.
Signed-off-by: Piotr Sikora <piotr@cloudflare.com>
Previously, the value of the "send" variable wasn't properly adjusted
in the rare case when the syscall was interrupted by a signal. As a result,
these functions could send less data than the limit allows.
The c->sent counter is reset to 0 on each request by the server-side HTTP
code, so do the same on the client side. This makes it possible to count
the number of bytes sent in a particular request.
One intentional side effect of this change is that key is allowed only
in the first position. Previously, it was possible to specify the key
variable at any position, but that was never documented, and is contrary
to nginx configuration practice for positional parameters.
If a syslog daemon is restarted and the unix socket is used, further logging
might stop working. In case of a send error, the socket is now closed,
forcing a reconnection at the next logging attempt.
The ngx_cycle->log is used when sending the message. This makes it possible
to log syslog send errors to another log.
Logging to syslog after its cleanup handler has been executed was prohibited.
Previously, this was possible from ngx_destroy_pool(), which resulted in error
messages caused by attempts to write into the closed socket.
The "processing" flag is renamed to "busy" to better match its semantics.
Previously, a file buffer start position was reset to the file start.
Now it's reset to the previous file buffer end. This fixes
reinitialization of requests having multiple successive parts of a
single file. Such requests are generated by fastcgi module.
This prevents inappropriate session reuse in unrelated server{}
blocks, while preserving ability to restore sessions on other servers
when using TLS Session Tickets.
Additionally, session context is now set even if there is no session cache
configured. This is needed as it's also used for TLS Session Tickets.
Thanks to Antoine Delignat-Lavaud and Piotr Sikora.
The new directives {proxy,fastcgi,scgi,uwsgi,memcached}_next_upstream_tries
and {proxy,fastcgi,scgi,uwsgi,memcached}_next_upstream_timeout limit
the number of upstreams tried and the maximum time spent for these tries
when searching for a valid upstream.
When memory allocation failed in ngx_http_upstream_cache(), the connection
would be terminated directly in ngx_http_upstream_init_request().
Return an INTERNAL_SERVER_ERROR response instead.
The ngx_init_setproctitle() function, as used on systems without
setproctitle(3), may fail due to memory allocation errors, and
therefore its return code needs to be checked.
Reported by Markus Linnala.
The etag->hash must be set to 0 to avoid an empty ETag header being
returned with the 500 Internal Server Error page after the memory
allocation failure.
Reported by Markus Linnala.
Some of the OpenSSL forks (read: BoringSSL) started removing unused,
no longer necessary and/or not really working bug workarounds along
with the SSL options and defines for them.
Instead of fixing nginx build after each removal, be proactive
and guard use of all SSL options for bug workarounds.
Signed-off-by: Piotr Sikora <piotr@cloudflare.com>
The messages "ngx_slab_alloc() failed: no memory in cache keys zone"
from the file cache slab allocator are suppressed since the allocation
is likely to succeed after the forced expiration of cache nodes.
The second allocation failure is reported.
In theory, this can provide a bit better distribution of latencies.
Also it simplifies the code, since ngx_queue_t is now used instead
of custom implementation.
Previously, a configuration like
    location / {
        ssi on;
        ssi_types *;
        set $http_foo "bar";
        return 200 '<!--#echo var="http_foo" -->\n';
    }
resulted in NULL pointer dereference in ngx_http_get_variable() as
the variable was explicitly added to the variables hash, but its
get_handler wasn't properly set in the hash. Fix is to make sure
that get_handler is properly set by ngx_http_variables_init_vars().
The SPDY module doesn't expect that timers can be set on stream events for
reasons other than delaying output. But ngx_http_writer() could add a timer
on the write event if the delayed flag wasn't set and nginx was waiting for
AIO completion.
That could cause delays in sending response over SPDY when file AIO was used.
The perl_parse() function expects an argv/argc-style argument list,
which according to the C standard must be NULL-terminated,
that is: argv[argc] == NULL.
This change fixes a crash (SIGSEGV) that could happen because
of the buffer overrun during perl module initialization.
Signed-off-by: Piotr Sikora <piotr@cloudflare.com>
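For illustration, an argument list that satisfies the requirement (the
values here are only examples):

    char *embed_argv[] = { "nginx-perl", "-e", "0", NULL };   /* argv[argc] == NULL */
    int   embed_argc   = 3;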
LibreSSL developers decided that LibreSSL is OpenSSL-2.0.0, so tests
for OpenSSL-1.0.2+ are now passing, even though the library doesn't
provide functions that are expected from that version of OpenSSL.
Signed-off-by: Piotr Sikora <piotr@cloudflare.com>
This change adds support for using BoringSSL as a drop-in replacement
for OpenSSL without adding support for any of the BoringSSL-specific
features.
Signed-off-by: Piotr Sikora <piotr@cloudflare.com>
This is really just a prerequisite for building against BoringSSL,
which doesn't provide either of those features.
Signed-off-by: Piotr Sikora <piotr@cloudflare.com>
Timeout may not be set on an upstream connection when we call
ngx_ssl_handshake() in ngx_http_upstream_ssl_init_connection(),
so make sure to arm it if it's not set.
Based on a patch by Yichun Zhang.
The ngx_http_geoip_city_float_variable and
ngx_http_geoip_city_int_variable functions did not always initialize
all variable fields like "not_found", which could randomly lead to empty
values for the corresponding nginx variables.
RFC 3986 says that, for consistency, URI producers and normalizers
should use uppercase hexadecimal digits for all percent-encodings.
This is also what modern web browsers and other tools use.
Using lowercase hexadecimal digits makes it harder to interact with
those tools in cases when use of the percent-encoded URI is required,
for example when $request_uri is part of the cache key.
Signed-off-by: Piotr Sikora <piotr@cloudflare.com>
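A minimal sketch of an escaping helper following that convention
(illustrative, not the nginx escape tables):

    /* Percent-encode a single byte using uppercase hexadecimal digits,
     * as RFC 3986 recommends for URI producers. */
    static void
    escape_byte(unsigned char c, char out[3])
    {
        static const char  hex[] = "0123456789ABCDEF";

        out[0] = '%';
        out[1] = hex[c >> 4];
        out[2] = hex[c & 0x0f];
    }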
Previously, ngx_http_map_uri_to_path() errors were not checked in
ngx_http_upstream_store(). Moreover, in case of errors temporary
files were not deleted, as u->store was set to 0, preventing cleanup
code in ngx_http_upstream_finalize_request() from removing them. With
this patch, u->store is set to 0 only if there were no errors.
Reported by Feng Gu.
This ensures that debug logging and the $uri variable (if used in
400 Bad Request processing) will not try to access uninitialized
memory.
Found by Sergey Bobrov.