Instead of checking for each label whether a dot needs to be placed,
the code now always adds a dot after a label, and reduces the
resulting length afterwards.
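A minimal sketch of the idea (illustrative helper, not the actual
ngx_resolver_copy() code):

    #include <string.h>

    /* Append every label unconditionally followed by a dot, then
     * drop the trailing dot by reducing the resulting length. */

    static size_t
    copy_labels(unsigned char *dst, unsigned char **labels,
        size_t *lens, size_t n)
    {
        unsigned char  *p = dst;
        size_t          i;

        for (i = 0; i < n; i++) {
            memcpy(p, labels[i], lens[i]);
            p += lens[i];
            *p++ = '.';                       /* always add a dot */
        }

        return (p > dst) ? (size_t) (p - dst) - 1 : 0;
    }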
Previously, anything with either of the two high bits set was
interpreted as a compression pointer.  This is incorrect, as RFC 1035
clearly states that "The 10 and 01 combinations are reserved for
future use".  Further, the 01 combination is actually allocated for
the EDNS extended label type (see RFC 2671 and RFC 6891), though it
is not really used.
Fix is to reject unrecognized label types rather than misinterpreting
them as compression pointers.
The previous behaviour is believed to be harmless; in the worst case
some uninitialized memory was used as part of the compression pointer
length, eventually leading to the "name is out of DNS response"
error.
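For reference, a hedged sketch of the check on the label length
octet (simplified, not the exact resolver code):

    #include <stdint.h>

    enum label_type { LABEL_PLAIN, LABEL_POINTER, LABEL_INVALID };

    static enum label_type
    dns_label_type(uint8_t len)
    {
        if ((len & 0xc0) == 0xc0) {
            return LABEL_POINTER;     /* 11: compression pointer */
        }

        if (len & 0xc0) {
            return LABEL_INVALID;     /* 01, 10: reject */
        }

        return LABEL_PLAIN;           /* 00: ordinary label */
    }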
When variables are used in ssl_certificate or ssl_certificate_key, a request
is created in the certificate callback to evaluate the variables, and then
freed. Freeing it, however, updates c->log->action to "closing request",
resulting in confusing error messages like "client timed out ... while
closing request" when a client times out during the SSL handshake.
Fix is to restore c->log->action after calling ngx_http_free_request().
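The fix might look like this (a sketch, simplified from the
certificate callback):

    /* save and restore c->log->action around freeing the temporary
     * request, so later timeouts are logged with the right action */

    char  *action = c->log->action;

    ngx_http_free_request(r, 0);

    c->log->action = action;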
Similarly to smtpd_hard_error_limit in Postfix and
smtp_max_unknown_commands in Exim, the new directive specifies the
number of errors after which the connection is closed.
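A hedged sketch of the resulting check (hypothetical names; the
actual directive and fields may differ):

    if (rc == NGX_MAIL_PARSE_INVALID_COMMAND) {
        s->errors++;

        if (s->errors >= cscf->max_errors) {
            ngx_log_error(NGX_LOG_INFO, c->log, 0,
                          "client sent too many invalid commands");
            s->quit = 1;
        }
    }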
The change is mostly the same as the SMTP one (04e43d03e153 and
3f5d0af4e40a), and ensures that nginx is able to properly handle or
reject multiple IMAP commands.  The s->cmd field is not really used
and is set only for consistency.
Handling of non-synchronizing literals in invalid/unknown commands
is limited, so when a non-synchronizing literal is detected at the
end of a discarded line, the connection is closed.
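A hedged illustration of the detection (hypothetical helper, not
nginx code):

    #include <stddef.h>

    /* Check whether a discarded line ends with a non-synchronizing
     * literal such as "{42+}"; if it does, the connection is closed,
     * as the following literal bytes cannot be parsed reliably. */

    static int
    ends_with_nonsync_literal(const unsigned char *line, size_t len)
    {
        size_t  i;

        if (len < 4 || line[len - 1] != '}' || line[len - 2] != '+') {
            return 0;
        }

        for (i = len - 3; i > 0 && line[i] >= '0' && line[i] <= '9'; i--) {
            /* skip the digits of the literal length */
        }

        return line[i] == '{' && i + 1 < len - 2;
    }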
Only "A-Za-z0-9-._" characters now allowed (which is stricter than what
RFC 3501 requires, but expected to be enough for all known clients),
and tags shouldn't be longer than 32 characters.
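A hedged sketch of the validation (hypothetical helper):

    #include <stddef.h>

    static int
    imap_tag_valid(const unsigned char *tag, size_t len)
    {
        size_t  i;

        if (len == 0 || len > 32) {
            return 0;
        }

        for (i = 0; i < len; i++) {
            unsigned char  ch = tag[i];

            if ((ch >= 'A' && ch <= 'Z')
                || (ch >= 'a' && ch <= 'z')
                || (ch >= '0' && ch <= '9')
                || ch == '-' || ch == '.' || ch == '_')
            {
                continue;
            }

            return 0;
        }

        return 1;
    }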
Previously, s->backslash was set if any of the arguments was a quoted
string with a backslash character. After successful command parsing
this resulted in all arguments being filtered to remove backslashes.
This is, however, incorrect, as backslashes should not be removed from
IMAP literals. For example:
S: * OK IMAP4 ready
C: a01 login {9}
S: + OK
C: user\name "pass\"word"
S: * BAD internal server error
resulted in "Auth-User: username" instead of "Auth-User: user\name"
as it should.
Fix is to apply backslash filtering on a per-argument basis during
parsing.
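A hedged sketch of per-argument filtering (hypothetical helper; the
actual parser works on s->buffer in place):

    #include <stddef.h>

    /* Unescape a quoted-string argument as it is added to the
     * arguments list; literals are copied verbatim elsewhere and
     * never go through this function. */

    static size_t
    unescape_quoted(unsigned char *dst, const unsigned char *src,
        size_t len)
    {
        size_t  i, n;

        for (i = 0, n = 0; i < len; i++) {
            if (src[i] == '\\' && i + 1 < len) {
                i++;          /* drop backslash, keep escaped char */
            }

            dst[n++] = src[i];
        }

        return n;
    }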
As discussed in the previous change, s->arg_start handling in the "done"
labels of ngx_mail_pop3_parse_command(), ngx_mail_imap_parse_command(),
and ngx_mail_smtp_parse_command() is wrong: s->arg_start cannot be
set there, as it is handled and cleared on all code paths where the
"done" labels are reached. The relevant code is dead and now removed.
Previously, s->arg_start was left intact after invalid IMAP commands,
and this might result in an argument incorrectly added to the following
command. Similarly, s->backslash was left intact as well, leading
to unneeded backslash removal.
For example (LFs from the client are explicitly shown as "<LF>"):
S: * OK IMAP4 ready
C: a01 login "\<LF>
S: a01 BAD invalid command
C: a0000000000\2 authenticate <LF>
S: a00000000002 BAD invalid command
The backslash followed by LF generates an invalid command with
s->arg_start and s->backslash set; the following command incorrectly
treats anything from the old s->arg_start to the space after the
command as an argument, and removes the backslash from the tag.  If
there is no space, s->arg_end will be NULL.
Both things seem to be harmless though. In particular:
- This can be used to provide an incorrect argument to a command without
arguments. The only command which seems to look at the single argument
is AUTHENTICATE, and it checks the argument length before trying to
access it.
- Backslash removal uses the "end" pointer, and stops due to "src < end"
condition instead of scanning all the process memory if s->arg_end is
NULL (and arg[0].len is huge).
- There should be no backslashes in unquoted strings.
An obvious fix is to clear s->arg_start and s->backslash on invalid commands,
similarly to how it is done in POP3 parsing (added in 810:e3aa8f305d21) and
SMTP parsing.
This, however, makes it clear that s->arg_start handling in the "done"
label is wrong: s->arg_start cannot be legitimately set there, as it
is expected to be cleared in all possible cases when the "done" label is
reached. The relevant code is dead and will be removed by the following
change.
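The fix might look like this (a sketch of the "invalid" path; the
actual control flow in ngx_mail_imap_parse_command() differs):

    invalid:

        s->state = sw_invalid;
        s->arg_start = NULL;
        s->backslash = 0;

        return NGX_MAIL_PARSE_INVALID_COMMAND;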
The change is mostly the same as the SMTP one (04e43d03e153 and
3f5d0af4e40a), and ensures that nginx is able to properly handle or
reject multiple POP3 commands, as required by the PIPELINING
capability (RFC 2449).  The s->cmd field is not really used and is
set only for consistency.
There is no need to scan the buffer from s->buffer->pos, as we have
already scanned the buffer up to "p" and were not able to find an LF.
There is no real need for this change in SMTP, since it is at most a
micro-optimization of an uncommon code path.  Similar code in IMAP,
however, will have to start scanning from "p" to be correct, since
there can be newlines in IMAP literals.
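A sketch of the resulting scan (simplified; "p" is the current
parsing position):

    /* look for LF continuing from "p"; previously the loop restarted
     * from s->buffer->pos, which had already been scanned */

    for ( /* void */ ; p < s->buffer->last; p++) {
        if (*p == LF) {
            break;
        }
    }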
Previously, if an invalid SMTP command was split between reads, nginx failed
to wait for LF before returning an error, and interpreted the rest of the
command received later as a separate command.
The sw_invalid state in ngx_mail_smtp_parse_command(), introduced in
04e43d03e153, did not work, since ngx_mail_smtp_auth_state() clears
s->state when returning an error due to NGX_MAIL_PARSE_INVALID_COMMAND.
Yet simply not clearing s->state would introduce another problem:
the rest of the command would trigger a duplicate error when it is
received.
Fix is to return NGX_AGAIN from ngx_mail_smtp_parse_command() until
the full command is received.
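A hedged sketch of the fix (simplified from
ngx_mail_smtp_parse_command()):

    invalid:

        /* skip invalid command till LF */

        for ( /* void */ ; p < s->buffer->last; p++) {
            if (*p == LF) {
                s->state = sw_start;
                s->buffer->pos = p + 1;

                return NGX_MAIL_PARSE_INVALID_COMMAND;
            }
        }

        /* no LF yet: keep the sw_invalid state and wait for more data */

        s->state = sw_invalid;
        s->buffer->pos = p;

        return NGX_AGAIN;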
Previously, if there were some pipelined SMTP data in the buffer when
a proxied connection with the backend was established, nginx called
ngx_mail_proxy_handler() to send these data, and did not try to send
the response to the last command.  In most cases, this response was
later sent along with the response to the pipelined command, but if
for some reason the client decided to wait for the response before
finishing the next command, this might have resulted in a connection
hang.
Fix is to always call ngx_mail_proxy_handler() to send the response, and
additionally post an event to send the pipelined data if needed.
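A hedged sketch of the resulting logic (simplified from the mail
proxy code):

    /* always send the response to the last command first; pipelined
     * commands left in the buffer are handled by a posted event */

    if (s->buffer->pos < s->buffer->last) {
        ngx_post_event(c->read, &ngx_posted_events);
    }

    ngx_mail_proxy_handler(s->connection->write);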
With this change, it is now possible to use ngx_conf_merge_ptr_value()
to merge complex values. This change follows much earlier changes in
ngx_conf_merge_ptr_value() and ngx_conf_set_str_array_slot()
in 1452:cd586e963db0 (0.6.10) and 1701:40d004d95d88 (0.6.22), and the
change in ngx_conf_set_keyval_slot() (7728:485dba3e2a01, 1.19.4).
To preserve compatibility with existing 3rd party modules, both NULL
and NGX_CONF_UNSET_PTR are accepted for now.
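For example, a pointer-based slot can now be merged directly (a
hedged example with hypothetical conf fields):

    ngx_conf_merge_ptr_value(conf->params, prev->params, NULL);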
Due to the structure's alignment, some uninitialized memory contents
may have been passed between processes.  Zeroing was removed in
0215ec9aaa8a.
Reported by Johnny Wang.
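A hedged sketch of the fix (assuming the channel structure used for
inter-process messages):

    ngx_channel_t  ch;

    /* zero the whole structure, so padding bytes introduced by
     * alignment cannot carry uninitialized memory */

    ngx_memzero(&ch, sizeof(ngx_channel_t));

    ch.command = NGX_CMD_OPEN_CHANNEL;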
With SMTP pipelining, ngx_mail_read_command() can be called with
s->buffer without any space available, to parse additional commands
received into the buffer on previous calls.  Previously, this
resulted in recv() being called with zero length and thus returning
zero, which was interpreted as a connection close by the client, so
nginx silently closed the connection.
Fix is to avoid calling c->recv() if there is no free space in the buffer,
but continue parsing of the already received commands.
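A hedged sketch of the fix (simplified from
ngx_mail_read_command()):

    if (s->buffer->last < s->buffer->end) {

        /* there is free space, so it is safe to call c->recv() */

        n = c->recv(c, s->buffer->last,
                    s->buffer->end - s->buffer->last);

        /* error, close and again handling omitted in this sketch */

        if (n > 0) {
            s->buffer->last += n;
        }
    }

    /* proceed to parsing the already received commands */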
It turns out no browsers implement HTTP/2 GOAWAY handling properly,
and a large enough number of resources on a page results in failures
to load some resources.  In particular, Chrome seems to experience
errors if loading of all resources requires more than 1 connection
(while it is usually able to retry requests at least once, even with
2 connections there are occasional failures for some reason), Safari
if loading requires more than 3 connections, and Firefox if loading
requires more than 10 connections (which can be configured with
network.http.request.max-attempts, defaulting to 10).
It does not seem to be possible to resolve this on nginx's side; even
strict limiting of maximum concurrency does not help, and loading
issues seem to be triggered by merely queueing a request for a
particular connection.  The only available mitigation seems to be
using a higher keepalive_requests value.
The new default is 1000 and matches previously used default for
http2_max_requests. It is expected to be enough for 99.98% of the pages
(https://httparchive.org/reports/state-of-the-web?start=latest#reqTotal)
even in Chrome.
Similarly to lingering_time, it limits the total connection lifetime
before keepalive is switched off.  The default is 1 hour, which is
close to the maximum total connection lifetime possible with the
default keepalive_requests and keepalive_timeout.
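A hedged sketch of the resulting check (hypothetical field names,
alongside the existing keepalive_requests logic):

    if (r->connection->requests >= clcf->keepalive_requests
        || ngx_current_msec - r->connection->start_time
           > clcf->keepalive_time)
    {
        r->keepalive = 0;
    }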
Firefox uses several idle streams for PRIORITY frames[1], and
"http2_max_concurrent_streams 1;" results in "client sent too many
PRIORITY frames" errors when a connection is established by Firefox.
Fix is to relax the PRIORITY frames limit, using at least 100 as the
initial value (this is the minimum limit on the number of concurrent
streams recommended by the HTTP/2 protocol, so it is not unreasonable
for clients to assume that a similar number of idle streams can be
used for prioritization).
[1] https://hg.mozilla.org/mozilla-central/file/32a9e6e145d6e3071c3993a20bb603a2f388722b/netwerk/protocol/http/Http2Stream.cpp#l1270
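A hedged sketch of the fix:

    /* base the PRIORITY frames limit on the configured concurrency,
     * but never use less than 100 */

    h2c->priority_limit = ngx_max(h2scf->concurrent_streams, 100);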
In FreeBSD 13, eventfd(2) was added, and this breaks the build with
--test-build-epoll and without --with-file-aio.  Fix is to move
eventfd(2) detection to auto/os/linux, as it is used only on Linux as
a notification mechanism for epoll().
In current versions (all versions based on zlib 1.2.11, at least
since 2018) it no longer uses a 64K hash and no longer forces window
bits to 13 if they are less than 13.  That is, it needs just 16 bytes
more memory than normal zlib, so these bytes are simply added to the
normal size calculation.
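A hedged sketch of the calculation (constants as in the usual gzip
filter estimate; only the extra 16 bytes are specific to this
variant):

    ctx->allocated = 8192 + 16 + (1 << (wbits + 2))
                     + (1 << (memlevel + 9));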
In limit_req, auth_delay, and the upstream code that checks for
broken connections, tests for a possible connection close by the
client did not work if the connection was already closed when the
relevant event handler was set.  This happened because edge-triggered
event methods produced no additional events, and read events were
disabled with level-triggered ones.
Fix is to explicitly post a read event if the c->read->ready flag
is set.
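A hedged sketch of the fix:

    /* if the connection is already readable when the handler is
     * installed, post a read event explicitly, since no further
     * kernel event may arrive */

    if (c->read->ready) {
        ngx_post_event(c->read, &ngx_posted_events);
    }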
For new data to be reported with eventport on Solaris,
ngx_handle_read_event() needs to be called after reading response
headers.  To do so, ngx_http_upstream_process_non_buffered_upstream()
is now called unconditionally if there are no preread data.  This
won't cause any read() syscalls as long as the upstream connection is
not ready for reading (c->read->ready is not set), but will result in
proper handling of all events.
If we need to be notified about further events, ngx_handle_read_event()
needs to be called after a read event is processed. Without this,
an event can be removed from the kernel and won't be reported again,
notably when using oneshot event methods, such as eventport on Solaris.
While here, error handling is also added, similar to the one present
in ngx_resolver_tcp_read().  This is not expected to make a
difference and is mostly added for consistency.
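A hedged sketch of the re-arming with error handling:

    /* re-arm the read event after processing, so oneshot event
     * methods such as eventport report further data */

    if (ngx_handle_read_event(c->read, 0) != NGX_OK) {
        ngx_close_connection(c);
        return;
    }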
If an attempt is made to delete an event which was already reported,
port_dissociate() returns an error.  Fix is to avoid doing anything
if ev->active is not set.
A possible alternative approach would be to avoid calling
ngx_del_event() at all if ev->active is not set.  This approach,
however, would require something else to re-add the other event of
the connection, since both read and write events are dissociated if
an event is reported on a file descriptor.  Currently,
ngx_eventport_del_event() re-associates the write event if called to
delete the read event, and vice versa.
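A hedged sketch of the fix (simplified from
ngx_eventport_del_event()):

    /* an event that was already reported is no longer associated
     * with the port, so there is nothing to delete */

    if (!ev->active) {
        return NGX_OK;
    }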
If, at the start of an event loop iteration, there are any timers
in the past (including timers expiring now), the ngx_process_events()
function is called with zero timeout, and returns immediately even
if there are no events.  But the following code only calls
ngx_event_expire_timers() if the time actually changed, so this
results in nginx spinning in the event loop until the current time
changes.
While such timers are not expected to appear under normal conditions,
as all such timers should be removed on previous event loop iterations,
they still can appear due to bugs, zero timeouts set in the configuration
(if this is not explicitly handled by the code), or due to external
time changes on systems without clock_gettime(CLOCK_MONOTONIC).
Fix is to call ngx_event_expire_timers() unconditionally.  Calling
it on each event loop iteration is not expected to be significant
from a performance point of view, especially compared to a syscall
in ngx_process_events().
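A hedged sketch of the event loop change (posted event processing
omitted):

    delta = ngx_current_msec;
    (void) ngx_process_events(cycle, timer, flags);
    delta = ngx_current_msec - delta;

    ngx_event_expire_timers();  /* previously: if (delta) { ... } */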
Without explicit handling, a zero timer was actually added, leading
to multiple unneeded syscalls.  Further, sending a GOAWAY frame early
might be beneficial for clients.
Reported by Sergey Kandaurov.
Unlike in 75e908236701, which added the logic to ngx_http_finalize_request(),
this change moves it to a more generic routine ngx_http_finalize_connection()
to cover cases when a request is finalized with NGX_DONE.
In particular, this fixes unwanted connection transition into the keepalive
state after receiving EOF while discarding request body. With edge-triggered
event methods that means the connection will last for extra seconds as set in
the keepalive_timeout directive.
The response size check introduced in 39501ce97e29 did not take into
account possible padding on DATA frames, resulting in incorrect
"upstream sent response body larger than indicated content length"
errors if the upstream server used padding in responses with known
length.
Fix is to check the actual size of response buffers produced by the code,
similarly to how it is done in other protocols, instead of checking
the size of DATA frames.
Reported at:
http://mailman.nginx.org/pipermail/nginx-devel/2021-March/013907.html
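A hedged sketch of the idea (illustrative names):

    /* sum the sizes of the buffers actually produced, so DATA frame
     * padding does not count against the announced content length */

    for (cl = out; cl != NULL; cl = cl->next) {
        len += ngx_buf_size(cl->buf);
    }

    if (u->length != -1 && (off_t) len > u->length) {
        /* upstream sent response body larger than indicated
         * content length */
    }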