| Commit message | Author | Age | Files | Lines |
|
Suppress warnings in resolvectl about --type=
|
People don't generally type the trailing dot by mistake, so let's treat this as an
indication that they want to resolve this particular hostname.
|
https://github.com/systemd/systemd/pull/17535#discussion_r534005801
|
As noted in https://github.com/systemd/systemd/pull/17535#discussion_r534129256,
"raw" is misleading in this context. Let's use a more descriptive term.
|
config files: recommend systemd-analyze cat-config
|
This adds the same line to most of our .conf files.
Not for systemd/user.conf though, since we can't correctly display it right
now:
  $ systemd-analyze cat-config --user systemd/user.conf
  Option --user is not supported for cat-config right now.
For sysusers.d, tmpfiles.d, rules.d, etc., there is no single file. Maybe
we should add short READMEs in /usr/lib/sysusers.d, /usr/lib/tmpfiles.d, etc.?
Inspired by #19118.
|
let's make sure we set the "aa" bit in the stub only if we answer with
fully authoritative data. For this, ensure that:
1. Either all data is synthetic, including all CNAME/DNAME redirects
2. Or all data comes from the local trust anchor or the local zones
(i.e. not the network or the cache)
Follow-up for 4ad017cda57b04b9d65e7da962806cfcc50b5f0c
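As an aside, here is a minimal C sketch of that rule; the struct and flag names
are invented for illustration and are not the systemd-resolved data structures:

    #include <stdbool.h>

    /* Hypothetical per-reply bookkeeping; all field names are illustrative. */
    struct reply_sources {
            bool all_synthetic;           /* every RR, incl. CNAME/DNAME steps, was synthesized locally */
            bool all_local_authoritative; /* every RR came from the trust anchor or the local zones */
            bool any_from_network;        /* at least one RR came from an upstream server */
            bool any_from_cache;          /* at least one RR came from the cache */
    };

    /* Set the "aa" bit only if the reply is fully authoritative: either all
     * synthetic, or all from local authoritative data, and nothing from the
     * network or the cache. */
    static bool stub_reply_authoritative(const struct reply_sources *s) {
            if (s->any_from_network || s->any_from_cache)
                    return false;

            return s->all_synthetic || s->all_local_authoritative;
    }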
|
When following CNAME/DNAME redirects in the stub we currently first
iterate through the packet and pick up what we can use (in
dns_stub_collect_answer_by_question() and friends), following all
CNAMEs/DNAMEs, and would then issue dns_query_process_cname() to move
the DnsQuery object forward too, where we'd then possibly restart
the query and pick things up again, as above.
There's one flaw in this approach though: dns_query_process_cname()
tries to be smart and will internally follow not just a single
CNAME/DNAME redirect, but a chain of them if they are contained inside
the same packet until we reach the point where the answer is not
included in the packet anymore, where we'd restart the query. This was
great as long as we only focussed on the D-Bus and Varlink resolver
APIs, since there the CNAME/DNAME chain in the middle doesn't actually
matter, we just return information about the final name of the RR and
its content, and aren't interested in the chain to it. For the DNS stub
this is different however: there we need to place the full CNAME/DNAME
chain (and all the appropriate metadata RRs) in the stub reply.
Hence rework this so that we build on the fact that the previous commit
split dns_query_process_cname() in two:
1. dns_query_process_cname_one() will do exactly one CNAME/DNAME
redirect step. This will be called by the stub, so that we can pick
up matching RRs for every single step along the way.
2. dns_query_process_cname_many() will follow a chain as long as that's
possible within the same packet. It's thus pretty much identical to
the old dns_query_process_cname() call. This is what we now use in
the D-Bus and Varlink APIs. dns_query_process_cname_many() is
basically just a loop around dns_query_process_cname_one().
Any logic to follow and pick up RRs manually in the stub along the
CNAME/DNAME path is now dropped (i.e.
dns_stub_collect_answer_by_question() becomes trivially simple again);
we now rely solely on dns_query_process_cname_one() to follow
CNAME/DNAME redirects: each step is followed by a full call of
dns_stub_assign_sections() to copy out the RRs that matter.
Net result: things are a bit simpler again, as the only place we follow
CNAME/DNAME redirects is DnsQuery again, and stub answers are always
complete: they contain all CNAME/DNAME RRs on the way including all
their metadata we might pick up in the other sections.
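As an illustration of the resulting per-step flow in the stub, here is a
self-contained toy sketch; the table, names and signatures are invented
stand-ins, not the systemd-internal API:

    #include <stdbool.h>
    #include <stdio.h>
    #include <string.h>

    /* A tiny CNAME table stands in for the redirects found in the upstream packet. */
    static const struct { const char *from, *to; } cname_table[] = {
            { "www.example.com",  "edge.example.net"  },
            { "edge.example.net", "host1.example.net" },
    };

    /* Stand-in for dns_stub_assign_sections(): copy the RRs matching 'name'
     * (and their metadata) into the reply; here we just log the step. */
    static void toy_assign_sections(const char *name) {
            printf("collecting RRs for %s\n", name);
    }

    /* Stand-in for dns_query_process_cname_one(): follow exactly one
     * CNAME step, if any; returns true if 'name' was rewritten. */
    static bool toy_process_cname_one(const char **name) {
            for (size_t i = 0; i < sizeof(cname_table) / sizeof(cname_table[0]); i++)
                    if (strcmp(*name, cname_table[i].from) == 0) {
                            *name = cname_table[i].to;
                            return true;
                    }
            return false;
    }

    int main(void) {
            const char *name = "www.example.com";

            /* Collect RRs for the initial name and for every redirect step,
             * so the stub reply ends up containing the full CNAME chain. */
            do
                    toy_assign_sections(name);
            while (toy_process_cname_one(&name));

            return 0;
    }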
|
This does some refactoring: the dns_query_process_cname() function
becomes two: dns_query_process_cname_one() and
dns_query_process_cname_many(). The former will process exactly one
CNAME chain element, the latter will follow a chain for as long as
possible within the current packet.
dns_query_process_cname_many() is mostly identical to the old
dns_query_process_cname(), and all existing code is moved over to using
that.
This is mostly preparation for the next commit, where we make direct use
of dns_query_process_cname_one().
This also renames the DNS_QUERY_RESTARTED return value to
DNS_QUERY_CNAME. That's because in the dns_query_process_cname_many()
case, returning this value means we restarted the query whenever we
reached the end of the chain without a conclusive answer, as before. But
dns_query_process_cname_one() only goes one step anyway and leaves
restarting, if needed, to the caller. Hence DNS_QUERY_RESTARTED would be
a bit of a misnomer in that case.
This also gets rid of the weird tail recursion in
dns_query_process_cname() and replaces it with an explicit loop in
dns_query_process_cname_many(). The old recursion wasn't a security
issue since we put a limit on the number of CNAMEs we follow anyway, but
it's still icky to scale stack use by that.
|
Previously we'd stick all answer section RRs we acquired into
the authoritative section if we didn't find them directly answering our
question. Let's put them into the additional section instead. The
authoritative section should hence only include what comes from the
upstream authoritative section, and nothing else.
|
Previously we'd iterate through the RRs of an mDNS reply and then find
exactly one matching transaction on our scope for it, and pass it as a
reply to that. If multiple RRs of the same packet matched, we'd even pass
the packet to the transaction multiple times.
This all doesn't really work anymore since there can be multiple open
transactions for the same key (with different flags), and it's kinda
ugly anyway. Hence let's turn this around: let's iterate through the
transactions and check if any of the included RRs match it, and if so
pass the packet to that transaction exactly once.
This speeds up mDNS a bit, since previously we'd oftentimes fail to find
all suitable transactions for an mDNS reply (because there can be
multiple transactions for the same RR key with different flags, and we
checked exactly one flag combination). Which would then mean the
transaction would time out, and be retried – at which point the cache
would be populated and thus it would still succeed, but only after this
timeout. With this fix this is corrected: every transaction that matches
will get the reply, instantly as we get it.
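As an aside, a toy sketch of the inverted matching; the structs are invented
simplifications of the transaction and packet objects, not the real types:

    #include <stdbool.h>
    #include <stddef.h>
    #include <string.h>

    typedef struct { const char *name; int type; } Key;       /* toy RR key */
    typedef struct { const Key *rrs; size_t n_rrs; } Packet;  /* toy mDNS reply */
    typedef struct { Key key; bool got_reply; } Transaction;  /* toy transaction */

    static bool key_equal(const Key *a, const Key *b) {
            return a->type == b->type && strcmp(a->name, b->name) == 0;
    }

    /* Iterate through the open transactions (there may be several with the
     * same key but different flags) and hand the packet to every transaction
     * that any RR of the packet matches, exactly once per transaction. */
    static void deliver_mdns_reply(Transaction *transactions, size_t n_transactions, const Packet *p) {
            for (size_t t = 0; t < n_transactions; t++)
                    for (size_t i = 0; i < p->n_rrs; i++)
                            if (key_equal(&transactions[t].key, &p->rrs[i])) {
                                    transactions[t].got_reply = true;  /* i.e. process the reply */
                                    break;                             /* at most once per transaction */
                            }
    }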
|
(or back)
This is inspired by a recent thread on fedora-devel: it's noteworthy
when we switch to the fallback servers, since it might (or might not)
indicate some configuration problem.
Fixes: #18788
|
This is inspired by #18917. It suppresses a misleading log message about
suppressing OPT where we might not actually have OPT.
|
questions
When we check whether the responses we collected for a DnsQuery are
sufficient to complete it, we previously only checked whether one of the
collected response RRs matches at least one of the question RR keys.
This changes the logic to require that there is at least one
response RR matching *each* of the question RR keys before considering
the answer complete.
Otherwise we might end up accepting an A reply as a complete answer for an
A/AAAA query and vice versa, but we want to make sure we wait until we
get a reply for both types before returning this to the user in all
cases.
This has been broken for basically forever, but didn't surface until
b1eea703e01da1e280e179fb119449436a0c9b8e since until then we'd basically
ignore the auxiliary RRs included in CNAME/DNAME replies. Once that
commit was made we'd start using the auxiliary RRs included in
CNAME/DNAME replies, but those typically included only A or only AAAA
records, which we then took as complete.
Fixes: #19049
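For illustration, a minimal sketch of the stricter completeness check, with toy
key types standing in for the real question/answer objects:

    #include <stdbool.h>
    #include <stddef.h>
    #include <string.h>

    typedef struct { const char *name; int type; } Key;   /* toy RR key */

    static bool key_match(const Key *q, const Key *a) {
            return q->type == a->type && strcmp(q->name, a->name) == 0;
    }

    /* Old logic: the answer was "complete" as soon as any question key had a
     * matching response RR. New logic: every question key must be matched by
     * at least one response RR, so an A/AAAA query is not considered answered
     * by an A-only (or AAAA-only) reply. */
    static bool answer_complete(const Key *question, size_t n_question,
                                const Key *answer, size_t n_answer) {
            for (size_t q = 0; q < n_question; q++) {
                    bool matched = false;

                    for (size_t a = 0; a < n_answer; a++)
                            if (key_match(&question[q], &answer[a])) {
                                    matched = true;
                                    break;
                            }

                    if (!matched)
                            return false;
            }
            return true;
    }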
|
Fixes #19065.
|
a bunch of small fixes and cleanups based on an initial RHEL-9 covscan run
|
rr is asserted to be non-null a few lines above, so there is no need to check it for null.
Coverity-found issue, CID 1450844
CID 1450844: Null pointer dereferences (REVERSE_INULL)
Null-checking "rr" suggests that it may be null, but it has already
been dereferenced on all paths leading to the check.
|
When doing a CNAME/DNAME redirect let's first check if the answer we
already have fully answers the redirected question. If so, let's
use that. If not, let's properly restart things.
This simply removes one call to dns_answer_reset() that was placed too
early: instead of resetting when we detect a CNAME/DNAME redirect, do so
only after checking that the answer we already have doesn't answer the
redirected question, and after deciding to *actually* follow the redirect.
Or in other words: rely on the dns_answer_reset() call in dns_query_go()
which we'll call to actually begin with the redirected question.
This fixes an optimization path which was broken back in 7820b320eaa608748f66f8105621640cf80e483a.
(This doesn't really matter as much as one might think, since our cache
stepped in anyway and answered the questions before going back to the
network. However, this adds noise if RRs with very short TTLs are cached
– which some CDNs do – and is of course relevant when people turn off
the local cache.)
|
Previously by mistake we'd always match every single reply we get in a
CNAME chain to the original question from the stub client. That's
broken, we need to test it against the CNAME query we are currently
looking at.
The effect of this incorrect matching was that we'd assign the RRs to
the wrong section since we'd assume they'd be auxiliary answers instead
of primary answers.
Fixes: #18972
|
DnsAnswer
|
When responding from the DNS cache, let's slightly tweak how the TTL is
lowered: as before, let's round down when converting from our internal µs
to the external seconds. (This is preferable, since it is better for records
to be cached too short than too long.) Let's avoid rounding
down to zero though, since that has special semantics in many cases (in
particular mDNS). Let's just use 1s in that case.
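For illustration, the rounding rule as a small helper; the function name is
invented and the µs-per-second constant is assumed, this is not the actual code:

    #include <stdint.h>

    #define USEC_PER_SEC UINT64_C(1000000)

    /* Convert a cached TTL from internal microseconds to whole seconds,
     * rounding down (better to cache too short than too long), but never
     * returning 0, since a zero TTL has special semantics, e.g. in mDNS. */
    static uint32_t ttl_usec_to_seconds(uint64_t ttl_usec) {
            uint64_t s = ttl_usec / USEC_PER_SEC;    /* rounds down */

            if (s == 0)
                    return 1;
            if (s > UINT32_MAX)
                    return UINT32_MAX;

            return (uint32_t) s;
    }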
|
We nowadays cache full answer RRset combinations instead of just the
exactly matching RRset. This means we should not cache RRs that are not
immediate answers to our question for longer than their own TTLs allow. Or
in other words: let's determine the shortest TTL of all RRs in the whole
answer, and use that as the cache lifetime.
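A toy sketch of that lifetime computation; the RR type here is an invented
stand-in, the real code walks the answer object:

    #include <stddef.h>
    #include <stdint.h>

    typedef struct { uint32_t ttl; /* ...owner, type, rdata... */ } RR;  /* toy resource record */

    /* Use the shortest TTL of all RRs in the whole answer as the cache
     * lifetime, so auxiliary RRs cached alongside the direct answer are
     * never kept beyond their own TTL. */
    static uint32_t answer_cache_ttl(const RR *rrs, size_t n) {
            uint32_t min_ttl = UINT32_MAX;

            for (size_t i = 0; i < n; i++)
                    if (rrs[i].ttl < min_ttl)
                            min_ttl = rrs[i].ttl;

            return n > 0 ? min_ttl : 0;
    }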
|
The flag enables "relaxed mode" for all kinds of unescaping, not just c-unescaping.
|
let's avoid RR duplicates
If we synthesize a stub reply from multiple upstream packets (i.e. a
series of CNAME/DNAME redirects), it might happen that we add the same
RR to a different reply section at a different CNAME/DNAME redirect
chain element. Let's clean this up once we are about to send the reply
message to the client: let's remove RRs from "lower-priority"
sections when they are already listed in a "higher-priority" section.
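As an aside, a toy sketch of that de-duplication pass; plain string keys stand
in for full RRs and the priority ordering of the sections is assumed:

    #include <stdbool.h>
    #include <stddef.h>
    #include <string.h>

    /* Drop from 'lower' every entry already present in 'higher', compacting
     * the array in place and returning the new count. Run just before the
     * reply is sent, for each lower-priority section against each
     * higher-priority one, so an RR picked up at several CNAME/DNAME chain
     * elements only appears in the highest-priority section. */
    static size_t drop_duplicates(const char **higher, size_t n_higher,
                                  const char **lower, size_t n_lower) {
            size_t kept = 0;

            for (size_t i = 0; i < n_lower; i++) {
                    bool dup = false;

                    for (size_t j = 0; j < n_higher; j++)
                            if (strcmp(lower[i], higher[j]) == 0) {
                                    dup = true;
                                    break;
                            }

                    if (!dup)
                            lower[kept++] = lower[i];
            }

            return kept;
    }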
|
In 2f4d8e577ca7bc51fb054b8c2c8dd57c2e188a41 I argued that following
CNAMEs in the stub is not necessary anymore. However, I think it's better
to revert to the status quo ante and follow them after all, given that it
is easy for us to do, makes our D-Bus/Varlink replies more similar to our
DNS stub replies, and saves clients potential roundtrips.
Hence, whenever we hit a CNAME/DNAME redirect, let's restart the query
like we do for the D-Bus/Varlink case, and collect replies as we go.
|
Just some refactoring, no actual code changes.
|
www.netflix.com responds with a chain of CNAMEs in the same packet.
Let's handle that properly (so far we followed CNAMEs only a single step
within the same packet).
Fixes: #18819
|
Let's refuse to consider CNAME/DNAME replies matching for RR types where
that is not really conceptually allowed (i.e. on CNAME/DNAME lookups
themselves).
(And add a similar check to dns_resource_key_match_cname_or_dname() too,
which implements a similar match.)
|
Let's rename it a bit, to be more explanatory while exporting it.
(And let's bump the CNAME limit to 16 — 8 just seemed too low.)
|
while IPv6 is off in the kernel
Fixes: #18812
|
We generally operate on the assumption that a source is "gone" as soon
as we unref it. This is generally true because we have the only reference.
But if something else holds the reference, our unref doesn't really stop
the source and it could fire again.
In particular, on_query_timeout() is called with DnsQuery* as userdata, and
it calls dns_query_stop() which invalidates that pointer. If it was ever
called again, we'd be accessing already-freed memory.
I don't see what would hold the reference. sd-event takes a temporary reference,
but on the sd_event object, not on the individual sources. And our sources
are non-floating, so there is no reference from the sd_event object to the
sources.
For #18427.
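As an aside, the general sd-event pattern for avoiding this class of problem is
to disable a source at the same time as dropping the reference, so a stray
reference held elsewhere can no longer fire the callback. A minimal sketch with
an invented helper name, not necessarily what this commit ended up doing:

    #include <systemd/sd-event.h>

    /* Tear down the per-query timeout source so its callback (which receives
     * the DnsQuery as userdata) can never run against freed memory:
     * sd_event_source_disable_unref() turns the source off before dropping
     * our reference, so it cannot fire again even if some other reference to
     * it is still around, whereas a plain unref only guarantees that when
     * ours was the last reference. */
    static void stop_query_timeout(sd_event_source **timeout_event_source) {
            *timeout_event_source = sd_event_source_disable_unref(*timeout_event_source);
    }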
|
It shouldn't matter because of all the refcounting, but it looks unclean.
|
Even though RFC 6762 specifies these bits MUST be zero, it also says they MUST
be ignored on reception.
|
ip6.arpa is also a valid domain name to put in mDNS packets.
|