With cgroup v1, a zombie process is migrated to the root cgroup in all
hierarchies. This was changed for the unified hierarchy, where
/proc/PID/cgroup reports the cgroup to which the process belonged before
it exited.
Be more suspicious about the cgroup path reported by the kernel, and use
the unit_id provided by the log client if the kernel reports that the
process is running in the root cgroup.
Users care most about the log->unit_id mapping, so that systemctl status
can correctly report the last log lines. Besides, we wouldn't be able to
infer anything useful from a "/" path anyway.
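As an illustration, a minimal sketch of the fallback (names are made up;
the actual journald code differs):

    #include <string.h>

    /* A "/" path carries no unit information: on cgroup v1 zombies are
     * migrated to the root cgroup, and on the unified hierarchy the
     * reported path may be stale once the process has exited. */
    static const char* pick_unit_id(const char *cgroup_path,
                                    const char *unit_from_cgroup,
                                    const char *unit_from_client) {
            if (cgroup_path && strcmp(cgroup_path, "/") != 0)
                    return unit_from_cgroup;
            return unit_from_client;  /* fall back to the client's claim */
    }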
See: https://github.com/torvalds/linux/commit/2e91fa7f6d451e3ea9fec999065d2fd199691f9d
(cherry picked from commit 672773b63a4ebf95242b27e63071b93073ebc1f5)
Resolves: #1658115
---
Existing use of E2BIG is replaced with ENOBUFS (entry too long), and E2BIG is
reused for the new error condition (too many fields).
This matches the change done for systemd-journald, hence forming the second
part of the fix for CVE-2018-16865
(https://bugzilla.redhat.com/show_bug.cgi?id=1653861).
(cherry-picked from commit ef4d6abe7c7fab6cbff975b32e76b09feee56074)
Resolves: #1664977
---
Calling mhd_respond(), which ultimately calls MHD_queue_response(), is
ineffective at this point: MHD_queue_response() immediately returns
MHD_NO, signifying an error, because the connection is in state
MHD_CONNECTION_CONTINUE_SENT.
As Christian Grothoff kindly explained:
> You are likely calling MHD_queue_response() too late: once you are
> receiving upload_data, HTTP forces you to process it all. At this time,
> MHD has already sent "100 continue" and cannot take it back (hence you
> get MHD_NO!).
>
> In your request handler, the first time when you are called for a
> connection (and when hence *upload_data_size == 0 and upload_data ==
> NULL) you must check the content-length header and react (with
> MHD_queue_response) based on this (to prevent MHD from automatically
> generating 100 continue).
If we ever encounter this kind of error, print a warning and immediately
abort the connection. (The alternative would be to keep reading the data,
but ignore it, and return an error after we get to the end of the data.
That is possible, but it puts additional load on both the sender and
receiver, and doesn't seem important enough just to return a good error
message.)
Note that sending the error does not work with libµhttpd 0.59 (the
connection is always aborted when MHD_queue_response() is used with
MHD_RESPMEM_MUST_FREE, as in this case), but works with 0.61:
https://src.fedoraproject.org/rpms/libmicrohttpd/pull-request/1
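A sketch of the pattern Christian describes, rejecting based on
Content-Length on the first callback, before MHD has sent "100 Continue"
(uses the int-returning handler signature of the libmicrohttpd versions
mentioned above; the limit and error text are illustrative, not the
actual journal-remote code):

    #include <microhttpd.h>
    #include <stdlib.h>
    #include <string.h>

    #define ENTRY_SIZE_MAX (1024U*1024U)

    static int request_handler(void *cls, struct MHD_Connection *conn,
                               const char *url, const char *method,
                               const char *version, const char *upload_data,
                               size_t *upload_data_size, void **con_cls) {
            if (!*con_cls) {
                    /* First call: no upload data yet, and "100 Continue"
                     * has not been sent, so we may still refuse here. */
                    const char *cl = MHD_lookup_connection_value(
                                    conn, MHD_HEADER_KIND,
                                    MHD_HTTP_HEADER_CONTENT_LENGTH);
                    if (cl && strtoull(cl, NULL, 10) > ENTRY_SIZE_MAX) {
                            static char msg[] = "Payload too large\n";
                            struct MHD_Response *resp =
                                    MHD_create_response_from_buffer(
                                            strlen(msg), msg,
                                            MHD_RESPMEM_PERSISTENT);
                            int r = MHD_queue_response(
                                            conn, 413 /* Payload Too Large */,
                                            resp);
                            MHD_destroy_response(resp);
                            return r;
                    }
                    *con_cls = (void*) 1;  /* mark connection as seen */
                    return MHD_YES;
            }
            /* ... later calls consume upload_data ... */
            *upload_data_size = 0;
            return MHD_YES;
    }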
(cherry-picked from commit 7fdb237f5473cb8fc2129e57e8a0039526dcb4fd)
Related: #1664977
---
(cherry-picked from commit d101fb24eb1c58c97f2adce1f69f4b61a788933a)
Related: #1664977
---
We immediately read the whole contents into memory, which makes things
much more expensive. Sealed fds should be used instead, since they are
more efficient on our side.
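For reference, a minimal sketch of producing a sealed fd with the Linux
memfd API (not the actual systemd code):

    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <sys/mman.h>
    #include <unistd.h>

    static int make_sealed_fd(const void *data, size_t size) {
            int fd = memfd_create("entry", MFD_ALLOW_SEALING | MFD_CLOEXEC);
            if (fd < 0)
                    return -1;
            if (write(fd, data, size) != (ssize_t) size ||
                fcntl(fd, F_ADD_SEALS,
                      F_SEAL_SHRINK | F_SEAL_GROW | F_SEAL_WRITE) < 0) {
                    close(fd);
                    return -1;
            }
            /* The receiver can mmap() the fd read-only, knowing the
             * sender can no longer modify the contents. */
            return fd;
    }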
(cherry-picked from commit 6670c9de196c8e2d5e84a8890cbb68f70c4db6e3)
Related: #1664977
---
messages
We'd first parse all or most of the message, and only then consider
whether it is too large. Also, when encountering a single field over the
limit, we'd still process the preceding part of the message. Let's be
stricter: check the size limits early, and refuse the whole message if it
exceeds any of them.
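Tying this to the E2BIG/ENOBUFS split above, a simplified sketch of
checking the limits up front (the constants are illustrative, and the
native protocol's binary fields are ignored for brevity):

    #include <errno.h>
    #include <string.h>

    #define ENTRY_SIZE_MAX (4U*1024U*1024U)
    #define FIELD_SIZE_MAX (1024U*1024U)
    #define N_FIELDS_MAX   1024U

    /* Scan "field\nfield\n..." and refuse the whole message as soon as
     * any limit is crossed, before processing any part of it. */
    static int check_limits(const char *p, size_t size) {
            size_t n_fields = 0;

            if (size > ENTRY_SIZE_MAX)
                    return -ENOBUFS;           /* entry too long */
            while (size > 0) {
                    const char *nl = memchr(p, '\n', size);
                    size_t flen = nl ? (size_t)(nl - p) : size;

                    if (flen > FIELD_SIZE_MAX)
                            return -ENOBUFS;   /* single field too long */
                    if (++n_fields > N_FIELDS_MAX)
                            return -E2BIG;     /* too many fields */
                    flen += nl ? 1 : 0;        /* also skip the newline */
                    p += flen;
                    size -= flen;
            }
            return 0;
    }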
(cherry-picked from commit 964ef920ea6735d39f856b05fd8ef451a09a6a1d)
Related: #1664977
---
We allocate an iovec entry for each field, so with many short entries,
our memory usage and processing time can be large, even with a relatively
small message size. Let's refuse overly long entries.
CVE-2018-16865
https://bugzilla.redhat.com/show_bug.cgi?id=1653861
From what I can see, the problem is not from an alloca, despite what the
CVE description says, but from the attack amplification that comes from
creating many very small iovecs: a (void*, size_t) pair for every three
bytes of input message.
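To put numbers on that amplification (illustrative, assuming 64-bit
pointers): a 1 MiB message made of three-byte "a=\n" entries yields
1048576 / 3 ~ 349525 iovecs, and at 16 bytes (void* + size_t) each that
is ~5.3 MiB of iovec metadata alone, more than five times the input.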
(cherry-picked from commit 052c57f132f04a3cf4148f87561618da1a6908b4)
Resolves: #1664977
---
Fixes #9829.
(cherry-picked from commit a6aadf4ae0bae185dc4c414d492a4a781c80ffe5)
Resolves: #1664978
---
Allocate a new string as the return value and free our "scratch pad"
buffer, which is potentially much larger than needed (up to
_SC_ARG_MAX).
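The pattern, as a minimal sketch (names are made up):

    #include <stdlib.h>
    #include <string.h>

    /* Return a right-sized copy instead of the scratch buffer itself,
     * which may be up to _SC_ARG_MAX bytes. */
    static char* shrink_to_fit(char *scratch, size_t used) {
            char *result = strndup(scratch, used);

            free(scratch);
            return result;  /* NULL on allocation failure */
    }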
Fixes #11502
(cherry-picked from commit eb1ec489eef8a32918bbfc56a268c9d10464584d)
Related: #1664976
---
In normal use, this allows us to drop dead entries from the cache, and it
keeps the cache size down so that we don't evict entries unnecessarily.
The time limit is there mostly to serve as a guard against malicious
logging from many different PIDs.
(cherry-picked from commit 91714a7f427a6c9c5c3be8b3819fee45050028f3)
Related: #1664976
---
This is far from perfect, but should give mostly reasonable values. My
assumption is that if somebody has a few hundred MB of memory, they are
unlikely to have thousands of processes logging. A hundred would already be a
lot. So let's scale the cache size proportionally to the total memory size,
with clamping on both ends.
The formula gives 64 cache entries for each GB of RAM.
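The stated formula, sketched in C (the clamp bounds here are
illustrative, not necessarily the committed values):

    #include <stdint.h>

    #define CLAMP(x, lo, hi) ((x) < (lo) ? (lo) : (x) > (hi) ? (hi) : (x))

    static uint64_t cache_max_entries(uint64_t ram_bytes) {
            /* 64 cache entries per GiB of RAM, clamped on both ends */
            return CLAMP(ram_bytes / (UINT64_C(1) << 30) * 64,
                         UINT64_C(64), UINT64_C(16384));
    }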
(cherry-picked from commit b12a480829c5ca8f4d4fa9cde8716b5f2f12a3ad)
Related: #1664976
---
(cherry-picked from commit ef21b3b5bf824e652addf850bcfd9374c7b33ce8)
Related: #1664976
---
procfs_memory_get_current is renamed to procfs_memory_get_used, because
"current" can mean anything, including total memory, used memory, and free
memory, as long as the value is up to date.
No functional change.
(cherry-picked from commit c482724aa5c5d0b1391fcf958a9a3ea6ce73a085)
Related: #1664976
---
If creation of the message failed, we'd write a bogus entry:
systemd-coredump[1400]: Cannot store coredump of 416 (systemd-journal): No space left on device
systemd-coredump[1400]: MESSAGE=Process 416 (systemd-journal) of user 0 dumped core.
systemd-coredump[1400]: Coredump diverted to
(cherry-picked from commit f0136e09221364f931c3a3b715da4e4d3ee9f2ac)
Related: #1664976
---
This affects systemd-journald and systemd-coredump.
Example entry:
$ journalctl -o export -n1 'MESSAGE=Something logged'
__CURSOR=s=976542d120c649f494471be317829ef9;i=34e;b=4871e4c474574ce4a462dfe3f1c37f06;m=c7d0c37dd2;t=57c4ac58f3b98;x=67598e942bd23dc0
__REALTIME_TIMESTAMP=1544035467475864
__MONOTONIC_TIMESTAMP=858200964562
_BOOT_ID=4871e4c474574ce4a462dfe3f1c37f06
PRIORITY=6
_UID=1000
_GID=1000
_CAP_EFFECTIVE=0
_SELINUX_CONTEXT=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023
_AUDIT_SESSION=1
_AUDIT_LOGINUID=1000
_SYSTEMD_OWNER_UID=1000
_SYSTEMD_UNIT=user@1000.service
_SYSTEMD_SLICE=user-1000.slice
_SYSTEMD_USER_SLICE=-.slice
_SYSTEMD_INVOCATION_ID=1c4a469986d448719cb0f9141a10810e
_MACHINE_ID=08a5690a2eed47cf92ac0a5d2e3cf6b0
_HOSTNAME=krowka
_TRANSPORT=syslog
SYSLOG_FACILITY=17
SYSLOG_IDENTIFIER=syslog-caller
MESSAGE=Something logged
_COMM=poc
_EXE=/home/zbyszek/src/systemd-work3/poc
_SYSTEMD_CGROUP=/user.slice/user-1000.slice/user@1000.service/gnome-terminal-server.service
_SYSTEMD_USER_UNIT=gnome-terminal-server.service
SYSLOG_PID=4108
SYSLOG_TIMESTAMP=Dec 5 19:44:27
_PID=4108
_CMDLINE=./poc AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA>
_SOURCE_REALTIME_TIMESTAMP=1544035467475848
$ journalctl -o export -n1 'MESSAGE=Something logged' --output-fields=_CMDLINE|wc
6 2053 2097410
2MB might be hard for some clients to use meaningfully, but OTOH, it is
important to log the full commandline sometimes. For example, when the program
is crashing, the exact argument list is useful.
(cherry-picked from commit 2d5d2e0cc5171c6795d2a485841474345d9e30ab)
Related: #1664976
---
This fixes a crash: we would read the command line, whose length is under
the control of the sending program, and then crash when trying to create
a stack allocation for it.
CVE-2018-16864
https://bugzilla.redhat.com/show_bug.cgi?id=1653855
The message actually doesn't get written to disk, because
journal_file_append_entry() returns -E2BIG.
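A sketch of the hazard and the guard (the limit and names are
illustrative, not the journald code):

    #include <errno.h>
    #include <string.h>

    #define CMDLINE_MAX 4096U  /* illustrative cap */

    /* Never make a stack allocation whose size the peer controls: a huge
     * length moves the stack pointer into unrelated memory. Bound the
     * size and copy to the heap instead. */
    static int safe_copy_cmdline(const char *cmdline, char **ret) {
            if (strlen(cmdline) > CMDLINE_MAX)
                    return -E2BIG;
            *ret = strdup(cmdline);
            return *ret ? 0 : -ENOMEM;
    }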
(cherry-picked from commit 084eeb865ca63887098e0945fb4e93c852b91b0f)
Resolves: #1664976
---
(cherry-picked from commit bc2762a309132a34db1797d8b5792d5747a94484)
Related: #1664976
---
systemd-coredump[9982]: MESSAGE=Process 771 (systemd-journal) of user 0 dumped core.
systemd-coredump[9982]: Coredump diverted to /var/lib/systemd/coredump/core...
log_dispatch() calls log_dispatch_internal() which calls write_to_journal()
which appends MESSAGE= on its own.
(cherry-picked from commit 4f62556d71206ac814a020a954b397d4940e14c3)
Related: #1664976
---
Let's better be safe than sorry, and put a limit on what we receive.
(cherry picked from commit 3eac1bcae9284fb8b18f4b82156c0e85ddb004e5)
Related: CVE-2018-15686
---
This should be much better than fgets(), as we can read substantially
longer lines and overly long lines result in proper errors.
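A minimal sketch of what a bounded reader does differently from fgets()
(simplified; systemd's actual helper handles more edge cases):

    #include <errno.h>
    #include <stdio.h>
    #include <stdlib.h>

    static int read_bounded_line(FILE *f, size_t limit, char **ret) {
            size_t n = 0, allocated = 0;
            char *buf = NULL;
            int c;

            while ((c = fgetc(f)) != EOF && c != '\n') {
                    if (n >= limit) {
                            free(buf);
                            return -ENOBUFS;  /* proper error instead of
                                               * silent truncation */
                    }
                    if (n + 2 > allocated) {
                            size_t na = allocated ? allocated * 2 : 64;
                            char *t = realloc(buf, na);

                            if (!t) {
                                    free(buf);
                                    return -ENOMEM;
                            }
                            buf = t;
                            allocated = na;
                    }
                    buf[n++] = (char) c;
            }
            if (!buf && !(buf = calloc(1, 1)))
                    return -ENOMEM;
            buf[n] = '\0';
            *ret = buf;
            return (n > 0 || c == '\n') ? 1 : 0;  /* 0 on immediate EOF */
    }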
Fixes a vulnerability discovered by Jann Horn at Google.
CVE-2018-15686
LP: #1796402
https://bugzilla.redhat.com/show_bug.cgi?id=1639071
(cherry picked from commit 8948b3415d762245ebf5e19d80b97d4d8cc208c1)
Resolves: CVE-2018-15686
---
Docker's default capability set has the inherited flag already
set - that breaks tests which expect otherwise. Let's just
drop the check and run the test anyway.
Fixes #10663
Cherry-picked from: c446b8486d9ed18d1bc780948ae9ee8a53fa4c3f
---
rhel-only
Resolves: #1619292
---
Fixes: #10544
Cherry-picked from: 6619ad889da260cf83079cc74a85d571acd1df5a
---
For example, <luks.uuid>=/keyfile:LABEL="KEYFILE FS" previously wouldn't
work, because we truncated the label at the first whitespace character,
i.e. to LABEL="KEYFILE".
(cherry-picked from commit 7949dfa73a44ae6524779689483d12243dfbcfdf)
Related: #1656869
---
(cherry-picked from commit 579875bc4a59b917fa32519e3d96d56dc591ad1e)
Related: #1656869
---
We are not the ones receiving an error here, but the ones generating it,
hence we shouldn't show it with %m; that's just confusing, as it suggests
we received an error from some other call.
(cherry-picked from commit 2abe64666e544be6499f870618185f8819b4c152)
Related: #1656869
---
Dracut has support for unlocking encrypted drives with a keyfile stored
on an external drive. This support is included in the generated initrd
only if the systemd module is not included.
When systemd is used in the initrd, attachment of encrypted drives is
handled by the systemd-cryptsetup tools. Our generator has support for
keyfiles; however, it didn't support a keyfile on an external block
device (keydev).
This commit introduces basic keydev support. A keydev can be specified
per luks.uuid on the kernel command line. The keydev is automatically
mounted during boot, and we look for the keyfile under the keydev mount
point (i.e. the keyfile path is prefixed with the keydev mount point
path). After the crypt device is attached, we automatically unmount the
device where the keyfile resides.
Example:
rd.luks.key=70bc876b-f627-4038-9049-3080d79d2165=/key:LABEL=KEYDEV
(cherry-picked from commit 70f5f48eb891b12e969577b464de61e15a2593da)
Resolves: #1656869
---
Fixes a SIGSEGV introduced by commit 38a5315a3a6fab745d8c86ff9e486faaf50b28d1.
The same problem doesn't exist upstream, as the container structure
there is initialized using a compound literal, which is zeroed out by
default.
Related: #1635435
---
We didn't free one of the fields in two of the places.
$ valgrind --show-leak-kinds=all --leak-check=full \
build/fuzz-bus-message \
test/fuzz/fuzz-bus-message/leak-c09c0e2256d43bc5e2d02748c8d8760e7bc25d20
...
==14457== HEAP SUMMARY:
==14457== in use at exit: 3 bytes in 1 blocks
==14457== total heap usage: 509 allocs, 508 frees, 51,016 bytes allocated
==14457==
==14457== 3 bytes in 1 blocks are definitely lost in loss record 1 of 1
==14457== at 0x4C2EBAB: malloc (vg_replace_malloc.c:299)
==14457== by 0x53AFE79: strndup (in /usr/lib64/libc-2.27.so)
==14457== by 0x4F52EB8: free_and_strndup (string-util.c:1039)
==14457== by 0x4F8E1AB: sd_bus_message_peek_type (bus-message.c:4193)
==14457== by 0x4F76CB5: bus_message_dump (bus-dump.c:144)
==14457== by 0x108F12: LLVMFuzzerTestOneInput (fuzz-bus-message.c:24)
==14457== by 0x1090F7: main (fuzz-main.c:34)
==14457==
==14457== LEAK SUMMARY:
==14457== definitely lost: 3 bytes in 1 blocks
(cherry picked from commit 6d1e0f4fcba8d6f425da3dc91805db95399b3c8b)
Resolves: #1635435
---
Quoting https://github.com/systemd/systemd/issues/10074:
> detect_vm_uml() reads /proc/cpuinfo with read_full_file()
> read_full_file() has a file max limit size of READ_FULL_BYTES_MAX=(4U*1024U*1024U)
> Unfortunately, the size of my /proc/cpuinfo is bigger, approximately:
> echo $(( 4* $(cat /proc/cpuinfo | wc -c)))
> 9918072
> This causes read_full_file() to fail and the Condition test fallout.
Let's just read line by line until we find an interesting line. This also
helps if not running under UML, because we avoid reading as much data.
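A sketch of the line-by-line approach (the match string follows what
/proc/cpuinfo reports under User Mode Linux; this is not the exact
upstream helper):

    #define _GNU_SOURCE
    #include <stdbool.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    static bool running_under_uml(void) {
            FILE *f = fopen("/proc/cpuinfo", "re");
            char *line = NULL;
            size_t len = 0;
            bool uml = false;

            if (!f)
                    return false;
            /* Stop at the first interesting line instead of slurping a
             * file that can exceed any fixed size limit. */
            while (getline(&line, &len, f) >= 0)
                    if (strstr(line, "User Mode Linux")) {
                            uml = true;
                            break;
                    }
            free(line);
            fclose(f);
            return uml;
    }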
(cherry picked from commit 6058516a14ada1748313af6783f5b4e7e3006654)
Resolves: #1631532
---
[msekleta: I removed the call to log_test_skipped() and replaced it with the older construct log_info() + return EXIT_TEST_SKIP]
(cherry-picked from commit cb9e44db36caefcbb8ee7a12e14217305ed69ff2)
Related: #1643368
---
(cherry-picked from commit cd6b7d50c337b3676a3d5fc2188ff298dcbdb939)
Related: #1643368
---
Let's better be safe than sorry and also drop ACLs.
(cherry-picked from commit f89bc84f3242449cbc308892c87573b131f121df)
Related: #1643368
---
That way we can pin a specific inode and analyze it and manipulate it
without it being swapped out beneath our hands.
Fixes a vulnerability originally found by Jann Horn from Google.
CVE-2018-15687
LP: #1796692
https://bugzilla.redhat.com/show_bug.cgi?id=1639076
(cherry-picked from commit 5de6cce58b3e8b79239b6e83653459d91af6e57c)
Resolves: #1643368
---
(cherry picked from commit a7dd6d04b07f58df5c0294743d76df0be0b4b928)
Resolves: #1643429
---
Our current set of flags allows an option to be used either just in the
initrd, or both in the initrd and in the normal system. This new flag is
intended for cases where you want to apply a setting just in the initrd,
or just in the normal system.
(cherry picked from commit ed58820d7669971762dd887dc117d922c23f2543)
Related: #1643429
---
pending
Fixes: #10627
(cherry picked from commit b8d381c47776ea0440af175cbe0c02cb743bde08)
Resolves: #1647359
---
warning we log
No change in behaviour, just better wording.
(cherry picked from commit 4b66bccab004221b903b43b4c224442bfa3e9ac7)
Resolves: #1647359
---
This field is only used for pending Reload() replies, hence let's rename
it to be more descriptive and precise.
No change in behaviour.
(cherry picked from commit 209de5256b7ba8600c3e73a85a43b86708998d65)
Resolves: #1647359
---
Fixes a vulnerability originally discovered by Felix Wilhelm from
Google.
CVE-2018-15688
LP: #1795921
https://bugzilla.redhat.com/show_bug.cgi?id=1639067
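A sketch of the kind of bounds check such a fix needs before appending an
option (struct layout per RFC 8415; names are illustrative, not the exact
networkd code):

    #include <arpa/inet.h>
    #include <errno.h>
    #include <stdint.h>
    #include <string.h>

    typedef struct DHCP6Option {
            uint16_t code;
            uint16_t len;
            uint8_t data[];
    } __attribute__((packed)) DHCP6Option;

    static int option_append(uint8_t *buf, size_t buflen, size_t *offset,
                             uint16_t code, const void *data, size_t datalen) {
            /* Refuse up front if header + payload don't fit in the
             * remaining space, instead of trusting the caller. */
            if (datalen > UINT16_MAX ||
                buflen < *offset ||
                buflen - *offset < sizeof(DHCP6Option) + datalen)
                    return -ENOBUFS;

            DHCP6Option *o = (DHCP6Option*) (buf + *offset);
            o->code = htons(code);
            o->len = htons((uint16_t) datalen);
            memcpy(o->data, data, datalen);
            *offset += sizeof(DHCP6Option) + datalen;
            return 0;
    }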
(cherry-picked from commit 4dac5eaba4e419b29c97da38a8b1f82336c2c892)
Resolves: #1643363
---
This can happen if journal_file_close is called from the failure
handling code of journal_file_open before f->fd was established.
(cherry picked from commit c52368509f48e556be5a4c7a171361b656a25e02)
Resolves: #1602706
---
... like commit f28501279d2c28fdbb31d8273b723e9bf71d3b98 does for
out_interface.
(cherry picked from commit 0b777d20e9a3868b12372ffce8040d1be063cec7)
Resolves: #1602706
---
(cherry picked from commit e99742ef3e9d847da04e71fec0eb426063b25068)
Resolves: #1602706
---
fstype can be NULL here.
(cherry picked from commit 4db1879acdc0b853e1a7e6e650b6feb917175fac)
Resolves: #1602706
---
If the symlink doesn't exist, and we are being started, let's create it
to provide name resolution.
If it exists, do nothing. In particular, if it is a broken symlink, we
cannot really know whether the administrator configured it to point to a
location used by some service that hasn't started yet, so we don't touch
it in that case either.
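A sketch of that logic (the stub path is an assumption here; error
handling trimmed):

    #include <errno.h>
    #include <unistd.h>

    static int maybe_create_resolv_conf_symlink(void) {
            /* Assumed target: the file systemd-resolved maintains. */
            if (symlink("/run/systemd/resolve/resolv.conf",
                        "/etc/resolv.conf") >= 0)
                    return 1;   /* created */
            if (errno == EEXIST)
                    return 0;   /* exists (even dangling): leave it alone */
            return -errno;
    }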
https://bugzilla.redhat.com/show_bug.cgi?id=1313085
---
Related: #1635428
---
v2: fix error in free_and_strndup()
When the original and the copied message were the same, but shorter than
the specified length l, a memory read past the end of the buffer would be
performed. A test case is included: a string with an embedded NUL ("q\0")
is used to replace "q".
v3: Fix one more bug in free_and_strndup() and add tests.
v4: Some style fixes based on review, one more use of free_and_replace(),
and make the tests more comprehensive.
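A sketch of the corrected helper's semantics (simplified; the upstream
version lives in string-util.c, per the valgrind trace below):

    #include <errno.h>
    #include <stdlib.h>
    #include <string.h>

    /* Replace *p with at most l bytes of s. strndup() stops at l bytes
     * or the first NUL, whichever comes first, so "q\0" with l == 2
     * yields "q" -- without reading past the end of the old string,
     * which is the case the buggy comparison got wrong. */
    static int free_and_strndup(char **p, const char *s, size_t l) {
            char *t;

            if (!s) {
                    if (!*p)
                            return 0;
                    free(*p);
                    *p = NULL;
                    return 1;
            }
            t = strndup(s, l);
            if (!t)
                    return -ENOMEM;
            free(*p);
            *p = t;
            return 1;
    }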
(cherry picked from commit 7f546026abbdc56c453a577e52d57159458c3e9c)
Resolves: #1635428
---
We'd calculate the "real" length of the string as 'item_size - 1', which does
not work out well when item_size == 0.
(cherry picked from commit 81b6e63029eefcb0ec03a3a7c248490e38106073)
Resolves: #1635439
---
Follow-up for #9936.
(cherry picked from commit 645461f0cf6ec91e5b0b571559fb4cc4898192bc)
Related: #1572563
---
Bug-Ubuntu: https://launchpad.net/bugs/1776626
Closes #8881.
(cherry picked from commit a9fc640671ef60ac949f1ace6fa687ff242fc233)
Resolves: #1572563