| Commit message | Author | Age | Files | Lines |
| |
|
|
|
|
|
|
|
| |
If we change the watchdog device we should disarm the old one first.
Similarly, if we open the watchdog but then fail to set it up, disarm it
before closing it again.
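A minimal sketch of such a disarm step, assuming the Linux watchdog ioctl API (a real implementation would also log failures):

#include <errno.h>
#include <linux/watchdog.h>
#include <sys/ioctl.h>
#include <unistd.h>

static int disarm_watchdog(int fd) {
        int flags = WDIOS_DISABLECARD;

        /* Stop the timer... */
        if (ioctl(fd, WDIOC_SETOPTIONS, &flags) < 0)
                return -errno;

        /* ...and write the magic character so the driver treats the
         * subsequent close() as an orderly shutdown. */
        if (write(fd, "V", 1) < 0)
                return -errno;

        return 0;
}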
|
| |
|
|
|
|
|
|
|
|
| |
We default to quiet operation everywhere except for repart, where
we disable quiet mode and have the mkfs tools write to stdout.
We also make sure --quiet or an equivalent option is implemented for
all mkfs tools.
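As an illustration only (the actual flag differs per mkfs tool; mkfs.ext4 uses -q), the idea is to append the tool's quiet switch unless output was explicitly requested:

#include <stdbool.h>
#include <stddef.h>
#include <unistd.h>

/* Hypothetical helper, not the actual implementation. */
static void exec_mkfs(const char *device, bool quiet) {
        const char *argv[4];
        size_t n = 0;

        argv[n++] = "mkfs.ext4";
        if (quiet)
                argv[n++] = "-q";      /* suppress mkfs output on stdout */
        argv[n++] = device;
        argv[n] = NULL;

        execvp(argv[0], (char *const *) argv);
}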
|
| |
|
|
|
|
|
|
|
|
| |
The previous error code -ERANGE is slightly ambiguous, so use a more
specific one. This also drops unnecessary error handling.
Follow-up for 754d8b9c330150fdb3767491e24975f7dfe2a203 and
e652663a043cb80936bb12ad5c87766fc5150c24.
|
|\
| |
| | |
Tpm2 policy session
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
This retains the use of policy sessions instead of trial sessions
in most cases, based on the code comment that some TPMs do not
implement trial sessions correctly. However, it's likely that the
issue was not the TPMs, but our code's incorrect use of PolicyPCR
inside a trial session; we are not providing expected PCR values
with our call to PolicyPCR inside a trial session, but the spec
indicates that in a trial session, the TPM *may* return an error if
the expected PCR value(s) are not provided. That may have been the
source of the original confusion about trial sessions.
More details:
https://github.com/systemd/systemd/pull/26357#pullrequestreview-1409983694
Also, future commits will replace the use of trial sessions with
policy calculations, which avoids the problem entirely.
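A sketch of a corrected PolicyPCR call against the tpm2-tss ESAPI, assuming an ESYS_CONTEXT and a trial/policy session handle already exist; the point is that the expected PCR digest is passed explicitly rather than left empty (PCR 7 and SHA-256 below are illustrative):

#include <tss2/tss2_esys.h>

/* pcr_digest.buffer must hold the digest of the expected PCR values for the
 * selection, computed by the caller; size 32 matches SHA-256. */
static TSS2_RC add_pcr_policy(ESYS_CONTEXT *c, ESYS_TR session, const TPM2B_DIGEST *pcr_digest) {
        TPML_PCR_SELECTION pcrs = {
                .count = 1,
                .pcrSelections[0] = {
                        .hash = TPM2_ALG_SHA256,
                        .sizeofSelect = 3,
                        .pcrSelect[0] = 1 << 7,      /* PCR 7 */
                },
        };

        return Esys_PolicyPCR(c, session,
                              ESYS_TR_NONE, ESYS_TR_NONE, ESYS_TR_NONE,
                              pcr_digest, &pcrs);
}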
|
| | |
|
|/
|
|
|
|
|
| |
We do `FD_TO_PTR(fd)`, which expands to `INT_TO_PTR(fd) + 1`, so an fd
that has not been checked for validity can trigger an integer overflow.
Resolves: #27522
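A minimal sketch of the kind of validation meant here (plain strtol instead of the project's own parsing helpers):

#include <errno.h>
#include <limits.h>
#include <stdlib.h>

/* Parse an fd from serialized text; reject anything negative or so large
 * that adding 1 for the pointer encoding would overflow. */
static int parse_fd(const char *s) {
        char *end;
        long v;

        errno = 0;
        v = strtol(s, &end, 10);
        if (errno != 0 || end == s || *end != '\0')
                return -EINVAL;
        if (v < 0 || v >= INT_MAX)
                return -EBADF;

        return (int) v;
}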
|
|\
| |
| | |
Always check parsed fds for validity
|
| | |
|
|/ |
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
There's no need to fchdir() out of the rootfs and back into it around
the umount2(), hence don't.
This brings the logic closer to what the pivot_root() man page suggests.
While we are at it, always operate based on fds once we have opened the
original dir, and pass the path string along only for generating
messages (i.e. as "decoration").
Add tests for both code paths: the pivot_root() one and the MS_MOVE one.
|
|
|
|
|
|
|
| |
The error handling and fchmodat() invocation are pretty much the same in
the directory and symlink branches, hence make them the same.
No real change in behaviour. Just refactoring.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
an fd, instead of a path
This also changes the open flags from
O_RDONLY|O_NONBLOCK|O_DIRECTORY|O_CLOEXEC|O_NOFOLLOW to
O_DIRECTORY|O_CLOEXEC. O_RDONLY is redundant, since O_RDONLY is zero
anyway, and O_DIRECTORY pins the access mode enough: it doesn't allow
read()/write() anyway when specified. O_NONBLOCK is also pointless given
that O_DIRECTORY is specified, since it has no meaning for directories.
(It is useful if we don't know much about the inode we are opening, which
could be a device node or FIFO, but O_DIRECTORY excludes that case.)
O_NOFOLLOW is dropped since there's really no point in blocking the
initial entrypoint from being a symlink. Once we have pinned the root of
the tree it might make sense to restrict symlink use below it, but for
the entrypoint itself it doesn't matter.
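In other words, something like the following is enough (helper name made up for illustration):

#include <errno.h>
#include <fcntl.h>

/* O_DIRECTORY rules out non-directories and read()/write(), so the other
 * flags add nothing here. */
static int open_tree_root(const char *path) {
        int fd = open(path, O_DIRECTORY | O_CLOEXEC);
        return fd < 0 ? -errno : fd;
}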
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
So far, we invoked pivot_root() specifying /mnt/ as the second argument,
which we then unmounted right after. We'd create /mnt/ if needed. This
sucks, because it means /mnt/ must strictly be pre-created on immutable
images.
Remove this limitation by using pivot_root() with "." as both source and
target, which results in two stacked mounts afterwards: the new one
underneath, the old one on top. We can then simply unmount the top one,
and have what we want without needing any extra /mnt/ dir.
Since we don't need /mnt/ anymore we can get rid of the extra
unmount_old_root parameter and simply specify it as NULL if we don't
want the old mount to stick around.
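A minimal sketch of the pattern, matching the example in pivot_root(2) (the new-root path here is a placeholder):

#include <errno.h>
#include <sys/mount.h>
#include <sys/syscall.h>
#include <unistd.h>

static int switch_to(const char *new_root) {
        if (chdir(new_root) < 0)
                return -errno;

        /* The old root ends up mounted on top of "." in the same directory. */
        if (syscall(SYS_pivot_root, ".", ".") < 0)
                return -errno;

        /* Drop the old root that is now stacked on top of us. */
        if (umount2(".", MNT_DETACH) < 0)
                return -errno;

        return chdir("/") < 0 ? -errno : 0;
}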
|
|\
| |
| | |
test: add a simple fuzzer for manager serialization
|
| | |
|
| | |
|
|\ \
| | |
| | | |
base-filesystem: create /proc, /sys, /dev mount points as 555
|
| | | |
|
| |/
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
These inodes are going to be overmounted anyway, hence let's create them
with access mode 555, so that they are as close to being immutable as
regular UNIX access modes allow them to be. In other words: this takes
the "w" mode away from root. This of course usually has little effect --
unless CAP_DAC_OVERRIDE is dropped. But at the very least it makes the
point clear that these inodes should be considered immutable.
(I originally intended to make this 0000, but that doesn't work, as many
tools – including our own – have fallback paths so that, when they see
ENOENT in /proc/, they can handle it gracefully. But changing the mode
to 000 would turn this into EACCES – something they usually have no
fallback path for.)
|
|/
|
|
|
| |
Journal corruption is not only indicated by EBADMSG but also by
EADDRNOTAVAIL, so treat that as corruption in a few more cases.
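In code, the check boils down to something like this (helper name is illustrative):

#include <errno.h>
#include <stdbool.h>

static bool is_journal_corruption(int r) {
        return r == -EBADMSG || r == -EADDRNOTAVAIL;
}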
|
| |
|
| |
|
|
|
|
|
| |
The btrfs name and the generic name have the same values, hence there's
no point in bothering with the former.
|
|\
| |
| | |
copy: follow ups for reflink()
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
The commit b640e274a7c363a2b6394c9dce5671d9404d2e2a introduced reflink()
and reflink_full(). We usually name a function xyz_full() for the fully
parameterized version of xyz(), and xyz() is typically an inline alias of
xyz_full(). But in this case, reflink() and reflink_full() call
different ioctls.
Moreover, reflink_full() does a partial reflink, while reflink() reflinks
the whole file. That's super confusing.
Let's rename reflink_full() to reflink_range(); the new name is
consistent with the ioctl name, and should be fine.
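For reference, a sketch of what a range clone looks like at the ioctl level (FICLONERANGE), which is where the new name comes from:

#include <errno.h>
#include <linux/fs.h>
#include <stdint.h>
#include <sys/ioctl.h>

static int clone_range(int src_fd, uint64_t src_off, int dst_fd, uint64_t dst_off, uint64_t length) {
        struct file_clone_range fcr = {
                .src_fd = src_fd,
                .src_offset = src_off,
                .src_length = length,
                .dest_offset = dst_off,
        };

        /* Clone only the given byte range, unlike FICLONE which clones it all. */
        return ioctl(dst_fd, FICLONERANGE, &fcr) < 0 ? -errno : 0;
}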
|
|\ \
| |/
|/| |
More automatic cleanup
|
| | |
|
| |
| |
| |
| |
| |
| | |
The kernel has had filesystem-independent reflink ioctls for a
while now; let's try to use them and fall back to the btrfs-specific
ones if they're not supported.
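A sketch of the fallback idea for whole-file clones (generic FICLONE first, then the older btrfs ioctl):

#include <errno.h>
#include <linux/btrfs.h>
#include <linux/fs.h>
#include <sys/ioctl.h>

static int clone_file(int dst_fd, int src_fd) {
        if (ioctl(dst_fd, FICLONE, src_fd) >= 0)
                return 0;
        if (errno != ENOTTY && errno != EOPNOTSUPP)
                return -errno;

        /* Older setups may only know the btrfs-specific ioctl. */
        return ioctl(dst_fd, BTRFS_IOC_CLONE, src_fd) < 0 ? -errno : 0;
}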
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
This adds support for systematically destroying connections in
pam_sm_open_session() even on failure, so that under no circumstances
are unserved dbus connections left around while the invoking process
waits for the session to end. Previously we'd only do this on success;
now do it in all cases.
This matters since so far we suggested people hook pam_systemd into
their PAM stacks prefixed with "-", so that login proceeds even if
pam_systemd fails. This however means that in an error case our
cached connection doesn't get disconnected even though the session then
proceeds anyway. This fixes that.
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
Let's systematically avoid sharing cached buses between processes (i.e.
between parent and child after fork()), by including the PID in the field
name.
With that, we're never tempted to use, in the child, a bus object that
the parent created.
(Note this is about *use*, not about *destruction*. Destruction needs to
be checked by other means.)
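A sketch of the idea (the field name and helper are illustrative, not the actual pam_systemd code):

#include <security/pam_modules.h>
#include <stdio.h>
#include <unistd.h>

/* Key the cached bus by PID, so a fork()ed child never looks up (and thus
 * never uses) the bus object its parent cached. */
static int cache_bus(pam_handle_t *handle, void *bus,
                     void (*cleanup)(pam_handle_t *h, void *data, int error_status)) {
        char field[64];

        snprintf(field, sizeof field, "systemd-system-bus-%d", (int) getpid());
        return pam_set_data(handle, field, bus, cleanup);
}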
|
|\ \
| | |
| | | |
tmpfiles: add conditionalized execute permission (X) support
|
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
According to setfacl(1), "the character X stands for
the execute permission if the file is a directory
or already has execute permission for some user."
After this commit, parse_acl() returns three ACL
objects. The newly added acl_exec object contains
entries that are subject to conditionalized execute-bit
mangling. In tmpfiles, we iterate over the acl_exec
object, check the permissions of the target files,
and remove the execute bit if necessary.
Here's an example entry:
A /tmp/test - - - - u:test:rwX
Closes #25114
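The condition itself boils down to a check like this (sketch, helper name made up):

#include <stdbool.h>
#include <sys/stat.h>

/* "X": keep/grant execute only for directories or for files that already
 * carry some execute bit. */
static bool x_applies(const struct stat *st) {
        return S_ISDIR(st->st_mode) || (st->st_mode & 0111) != 0;
}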
|
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
by the user delegated to
If we create a subcgroup (regardless of whether it's the '.control'
subgroup we always created or one configured via DelegateSubgroup=), it's
inside the delegated territory of the cgroup tree, and so it should be
fully owned by the unit's user. Hence do so.
|
|/ /
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
This implements a minimal subset of #24961, but in a much more
restrictive way: we only allow one level of subcgroup (as that's enough
to address the no-processes-in-inner-cgroups rule), and it does not change
anything about the threaded cgroup logic or similar, nor make any of this
new behaviour mandatory.
All this does is the following: every non-control process we invoke for a
unit is invoked in a subgroup of the specified name.
We'll later port all our current services that use cgroup delegation
over to this, i.e. user@.service, systemd-nspawn@.service and
systemd-udevd.service.
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
flags
When encoding partition policy flags we allow parts of the flags to be
"unspecified" (i.e. entirely zeros), which, when actually checking the
policy, we'll automatically consider equivalent to "any" (i.e. entirely
ones). This "extension" of the flags was so far done as part of
partition_policy_normalized_flags(). Let's split this logic out into a
new function partition_policy_flags_extend() that simply sets all bits
in a specific part of the flags field if they were entirely zeroes so
far.
When comparing policy objects for equivalence we so far used
partition_policy_normalized_flags() to compare the per-designator flags,
which thus meant that "underspecified" flags and fully specified ones
that are set to "any" were considered equivalent. Which is great.
However, we forgot to do that for the fallback policy flags, i.e. the
flags that apply to all partitions for which no explicit policy flags
are specified.
Let's use the new partition_policy_flags_extend() call to compare them
in extended form, so that there too we can hide the difference between
"underspecified" and "any" flags.
|
|\ \
| | |
| | | |
CoredumpFilter: fix stack overflow and invalid assignment with 'all'
|
| | |
| | |
| | |
| | |
| | |
| | | |
The kernel returns ERANGE when UINT64_MAX is passed. Create a mask
and use UINT32_MAX, which is accepted, so that future bits will also
be set.
|
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
We translate 'all' to UINT64_MAX, which has a lot more 'f's. Use the
helper macro, since the decimal representation of a uint64_t is always
longer than the hex one.
root@image:~# systemd-run -t --property CoredumpFilter=all ls /tmp
Running as unit: run-u13.service
Press ^] three times within 1s to disconnect TTY.
*** stack smashing detected ***: terminated
[137256.320511] systemd[1]: run-u13.service: Main process exited, code=dumped, status=6/ABRT
[137256.320850] systemd[1]: run-u13.service: Failed with result 'core-dump'.
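A sketch combining both fixes from this PR (not the actual helper; the buffer is sized for the hex form):

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

static int set_coredump_filter(uint64_t mask) {
        char buf[2 + 16 + 1];           /* "0x" + 16 hex digits + NUL */
        FILE *f;

        if (mask == UINT64_MAX)
                mask = UINT32_MAX;      /* UINT64_MAX itself is rejected with ERANGE */

        snprintf(buf, sizeof buf, "0x%" PRIx64, mask);

        f = fopen("/proc/self/coredump_filter", "we");
        if (!f)
                return -1;
        fputs(buf, f);
        return fclose(f);
}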
|
|/ / |
|
|\ \
| | |
| | | |
Adjust messages when credentials are missing
|
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
Realistically, the only thing that the caller can do is ignore failures related
to missing credentials. If the caller requires some credentials to be present,
they should just check which output variables are not NULL. One of the callers
was already doing that, and the other wanted to, but missed -ENOENT. By
suppressing -ENOENT and -ENXIO, both callers are simplified.
Fixes a warning at boot:
systemd-vconsole-setup[221]: Failed to import credentials, ignoring: No such file or directory
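The suppression described above amounts to something like this (sketch; helper name made up):

#include <errno.h>

/* Map "credential not found" to success; callers then just check which
 * output variables stayed NULL. */
static int ignore_missing_credential(int r) {
        return (r == -ENOENT || r == -ENXIO) ? 0 : r;
}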
|
|\ \ \
| | | |
| | | |
| | | |
| | | | |
aafeijoo-suse/systemd-network-generator-initrd-fix
network-generator: do not parse kernel command line more than once
|
| | | |
| | | |
| | | |
| | | |
| | | | |
This function is like `generator_open_unit_file`, but if `ret_temp_path` is
passed, a temporary unit is created instead.
|
| |/ /
|/| |
| | |
| | |
| | |
| | |
| | |
| | | |
This will make things a bit longer for now, but more powerful, as we can
reuse the userns fd between calls to remount_idmap() if we need to
adjust multiple mounts.
No change in behaviour, just some minor refactoring.
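A rough sketch of why a reusable userns fd is handy: the same namespace fd can be attached to several mounts via mount_setattr(). Simplified; real code would also prepare the mounts (e.g. with open_tree()), and the raw syscall() use here is an assumption about available glibc support:

#define _GNU_SOURCE
#include <errno.h>
#include <fcntl.h>
#include <linux/mount.h>
#include <sys/syscall.h>
#include <unistd.h>

static int idmap_one_mount(int mount_fd, int userns_fd) {
        struct mount_attr attr = {
                .attr_set = MOUNT_ATTR_IDMAP,
                .userns_fd = userns_fd,   /* can be reused for further mounts */
        };

        if (syscall(SYS_mount_setattr, mount_fd, "", AT_EMPTY_PATH, &attr, sizeof attr) < 0)
                return -errno;
        return 0;
}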
|
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
When pam_end() is called after a fork, and it cleans up caches, it sets
PAM_DATA_SILENT in error_status. FDs will be shared with the parent, so
we do not want to attempt to close them from a child process, or we'll
hit assertions. Complain loudly and skip.
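A sketch of a pam_set_data() cleanup callback honouring this (a real module would also log loudly, as described above):

#include <security/pam_modules.h>
#include <systemd/sd-bus.h>

static void bus_cleanup(pam_handle_t *handle, void *data, int error_status) {
        /* pam_end(..., PAM_DATA_SILENT) in a fork()ed child: the fds still
         * belong to the parent, so don't touch the connection here. */
        if (error_status & PAM_DATA_SILENT)
                return;

        sd_bus_flush_close_unref(data);
}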
|
| | |
| | |
| | |
| | |
| | | |
If we only want to know whether some user ID/user name is already
allocated, we don't care about the returned data.
|