| Commit message | Author | Age | Files | Lines |
Hide some false positives.
Note: there must not be a blank line after the coverity hiding comment.
Syscall 186 is specific to x86 64-bit. Since the syscall number differs
from arch to arch, and between different word sizes of the same arch, we
will only grab the thread ID using built-in Python support when it is
available.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=2166931
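
A minimal sketch of the portable approach the commit describes: prefer the
interpreter's built-in TID support (`threading.get_native_id`, Python 3.8+)
over a hard-coded syscall number. The helper name is illustrative, not
lvmdbusd's actual code.

```python
import threading


def get_thread_id():
    """Return a per-thread ID without hard-coding syscall 186.

    threading.get_native_id() (Python 3.8+) asks the OS for the kernel
    thread ID portably; syscall numbers like 186 are only correct on
    x86 64-bit.
    """
    if hasattr(threading, "get_native_id"):
        return threading.get_native_id()
    # Older interpreters: fall back to the Python-level thread ident,
    # which is not the kernel TID but is still unique per live thread.
    return threading.get_ident()
```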
Reduce the lock time and include the flush in the lock.
Reported by: Zbigniew Jędrzejewski-Szmek <zbyszek@in.waw.pl>
Signed-off-by: Adam Williamson <awilliam@redhat.com>
Signed-off-by: Vojtech Trefny <vtrefny@redhat.com>
When the daemon is starting we do an initial fetch of lvm state. If we
happened to hit some type of lvm failure during this time we would exit.
During error injection testing this happened often enough that the unit
tests were unable to finish. Add retries to ensure we can get started
during error injection testing.
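
The retry idea can be sketched as a small wrapper around the initial fetch;
`fetch_fn`, the exception type, and the backoff values are placeholders, not
lvmdbusd's actual implementation.

```python
import time


def fetch_with_retry(fetch_fn, retries=5, delay=0.2):
    """Call fetch_fn(), retrying on failure instead of exiting the daemon.

    RuntimeError stands in for whatever error the lvm fetch raises; a
    simple linear backoff gives lvm time to recover between attempts.
    """
    last_error = None
    for attempt in range(retries):
        try:
            return fetch_fn()
        except RuntimeError as e:
            last_error = e
            time.sleep(delay * attempt)  # 0 on the first retry
    raise last_error
```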
When we sort the LVs, we can stumble on a missing key; protect against
this as well.
Seen in error injection testing:
Traceback (most recent call last):
File "/home/tasleson/projects/lvm2/daemons/lvmdbusd/fetch.py", line 198, in update_thread
num_changes = load(*_load_args(queued_requests))
File "/home/tasleson/projects/lvm2/daemons/lvmdbusd/fetch.py", line 83, in load
rc = MThreadRunner(_main_thread_load, refresh, emit_signal).done()
File "/home/tasleson/projects/lvm2/daemons/lvmdbusd/utils.py", line 726, in done
raise self.exception
File "/home/tasleson/projects/lvm2/daemons/lvmdbusd/utils.py", line 732, in _run
self.rc = self.f(*self.args)
File "/home/tasleson/projects/lvm2/daemons/lvmdbusd/fetch.py", line 40, in _main_thread_load
(lv_changes, remove) = load_lvs(
File "/home/tasleson/projects/lvm2/daemons/lvmdbusd/lv.py", line 148, in load_lvs
return common(
File "/home/tasleson/projects/lvm2/daemons/lvmdbusd/loader.py", line 37, in common
objects = retrieve(search_keys, cache_refresh=False)
File "/home/tasleson/projects/lvm2/daemons/lvmdbusd/lv.py", line 72, in lvs_state_retrieve
lvs = sorted(cfg.db.fetch_lvs(selection), key=get_key)
File "/home/tasleson/projects/lvm2/daemons/lvmdbusd/lv.py", line 35, in get_key
pool = i['pool_lv']
KeyError: 'pool_lv'
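
The guard against the `KeyError` above amounts to using `dict.get()` with a
default in the sort key. This is a sketch of the idea; the real `get_key` in
lv.py builds its key differently.

```python
def get_key(lv):
    """Sort key that tolerates a missing 'pool_lv' column.

    dict.get() returns a default instead of raising KeyError, so an LV
    row without 'pool_lv' (as seen in error injection testing) sorts
    with an empty pool name rather than crashing the load path.
    """
    pool = lv.get('pool_lv', '')
    return (pool, lv.get('lv_name', ''))


lvs = [{'lv_name': 'b', 'pool_lv': 'p'}, {'lv_name': 'a'}]
lvs_sorted = sorted(lvs, key=get_key)
```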
Instead of using an assert, we will raise an LvmBug exception.
It was hard to see whether the fclose calls were correct, and coverity
couldn't figure it out either, so make it clear.
When running lvmdb.py by itself for testing, we need these.
Move this to the cfg file itself, so that initialization runs when it
gets processed.
There is a window of time where the following can occur:
1. An API request to the lvm shell is in flight: we have written a
   command to the shell and the calling thread is blocked waiting for
   the response.
2. A signal arrives that causes the daemon to exit. The signal
   handling code path goes directly to the lvm shell and writes
   "exit\n", which causes the lvm shell to simply exit.
3. The thread that was waiting for a response gets an EIO because the
   child process has exited, and this bubbles up a failure.
This is addressed by placing a lock in the lvm shell to prevent
concurrent access to it. We also gather additional debug data when we
get an error in the lvm shell read path, which should help if the lvm
shell exits or crashes on its own.
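
The fix can be sketched as one lock serializing every access to the child
shell, so the exit path can no longer interleave with an in-flight command.
Class and method names here are illustrative, not lvmdbusd's actual API.

```python
import threading


class ShellProxy:
    """Serialize access to a child shell with a single lock."""

    def __init__(self):
        self._lock = threading.RLock()
        self.log = []  # stands in for writes to the child's stdin

    def run_cmd(self, cmd):
        # Write the command and read the response while holding the
        # lock, so no other thread can write to the shell mid-command.
        with self._lock:
            self.log.append(cmd)
            return "ok"

    def exit_shell(self):
        # The signal path takes the same lock, so "exit" can only be
        # written once any in-flight command has completed.
        with self._lock:
            self.log.append("exit")
```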
Add support for specifying STATIC_LDFLAGS when linking static binaries.
Check `pkg-config --libs libdlm_lt` and test whether the returned value
contains the word 'pthread'. If it does, it's likely a buggy result from
an incorrect config file, so use -ldlm_lt directly in that case.
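
The heuristic is simple enough to sketch in a few lines (the configure
check itself is shell/m4; `dlm_libs` is a hypothetical helper that mirrors
its logic on an already-captured pkg-config output string):

```python
def dlm_libs(pkg_config_output):
    """Mirror the configure heuristic for `pkg-config --libs libdlm_lt`.

    A correct .pc file should not drag in pthread here; if 'pthread'
    appears, distrust the output and link -ldlm_lt directly.
    """
    if 'pthread' in pkg_config_output:
        return '-ldlm_lt'
    return pkg_config_output.strip()
```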
Convert lvmlockd to use the configure-discovered _LIBS and _CFLAGS for
its libraries.
TODO: ATM we ignore the discovered libdlm and use libdlm_lt instead.
Also, libseagate_ilm is a unicorn that is hard to find for testing.
Convert the naming SYSTEMD_CFLAGS/LIB -> LIBSYSTEMD_CFLAGS/LIBS
to better fit the library check for libsystemd.
Build lvmlockd with SD_NOTIFY when LIBSYSTEMD_LIBS is defined.
Use discovered/selected systemd library from configure.
Patch aec5e573afe610070eb2c6bed675d2a7c0efc7e8 fixed some typos, but
only in a generated file; they need to be fixed in the source files.
Keep the conversion 64-bit, as on the x32 arch time_t is a 64-bit value
and we may lose precision (y2038).
TODO: use a universal string for time printing, as in log/log.c
_set_time_prefix().
More y2038 fixes - the calculations can handle a 64-bit time_t.
in get_local_nodeid from recent lock purge feature:
lvmlockd: purge the lock resources left in previous lockspace
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=2145114
Moving this so we can re-use it outside of lvm_shell_proxy.
We can't assume that strerror_r returns char* just because _GNU_SOURCE is
defined. We already run the appropriate autoconf test, so let's use its
result (STRERROR_R_CHAR_P).
Note that in configure, _GNU_SOURCE is always set, but we add a defined
guard just in case, for future-proofing.
Bug: https://bugs.gentoo.org/869404
If something manually copies a PV signature to a block device we will
miss it. Handle this case too.
Previously we utilized udev only until we got a dbus notification from
the lvm command line tools. This, however, misses the case where
something outside of lvm clears the signatures on a block device, so we
failed to refresh the daemon's state. Change the behavior so that we
always monitor udev events, but ignore those udev events that pertain to
lvm members.
Note: the --udev command line option no longer does anything and simply
outputs a message that it's no longer used.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1967171
If lvmlockd in a cluster is killed accidentally or for any other reason,
the lock resources become orphaned in the VG lockspace. When the cluster
manager tries to restart the daemon, the LVs will probably become
inactive because of the resource schedule policy, and thus the lock
resource will be omitted during the adoption process. This patch tries
to purge the lock resources left in the previous lockspace, so the
following actions can work again.
Make this match the unit test expectation and the form we use for
other env. variables.
Previously, when the __del__ method ran on LVMShellProxy, we would
blindly call terminate(). This was a race condition, as the underlying
process may or may not still be present; when the process is still
present, the SIGTERM ends up being seen by lvmdbusd too. Re-work the
code so that we first try to wait for the child process to exit, and
only if it hasn't exited do we send it a SIGTERM. We also ensure that
while this executes the daemon briefly ignores a SIGTERM that arrives
for it.
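
The wait-then-terminate ordering can be sketched as below; `reap_child` is
a hypothetical helper, not the actual LVMShellProxy code, and it must run
in the main thread because it installs a signal handler.

```python
import signal
import subprocess
import sys


def reap_child(proc, timeout=2.0):
    """Wait for the child first; SIGTERM it only if it is still alive.

    While we do this, the daemon ignores SIGTERM so a stray signal from
    the dying child cannot take the daemon down with it.
    """
    previous = signal.signal(signal.SIGTERM, signal.SIG_IGN)
    try:
        try:
            proc.wait(timeout=timeout)
        except subprocess.TimeoutExpired:
            proc.terminate()  # the child didn't exit on its own
            proc.wait()
    finally:
        signal.signal(signal.SIGTERM, previous)  # restore the handler
    return proc.returncode


child = subprocess.Popen([sys.executable, "-c", "pass"])
rc = reap_child(child)
```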
Ensure we log that we are exiting on this signal too.
When checking to see if the PV is missing, we incorrectly checked that
path_create was equal to the PV creation function. However, there are
cases where we do a lookup with path_create == None. In this case we
failed to set lvm_id == None, which caused a problem when more than one
PV was missing: the second lookup matched the first missing PV that had
been added to the object manager. This resulted in the following:
Traceback (most recent call last):
File "/usr/lib/python3.9/site-packages/lvmdbusd/utils.py", line 667, in _run
self.rc = self.f(*self.args)
File "/usr/lib/python3.9/site-packages/lvmdbusd/fetch.py", line 25, in _main_thread_load
(changes, remove) = load_pvs(
File "/usr/lib/python3.9/site-packages/lvmdbusd/pv.py", line 46, in load_pvs
return common(
File "/usr/lib/python3.9/site-packages/lvmdbusd/loader.py", line 55, in common
del existing_paths[dbus_object.dbus_object_path()]
This happens because we expect to find the object in existing_paths if
we found it in the lookup.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=2085078
The latest upstream build of lvm results in the following error when
trying to use the lvm shell:
"Argument --reportformat cannot be used in interactive mode.,
Error during parsing of command line."
Move the option to add the debug file into lvm_full_report_json so that
we collect the debug data when we fork & exec lvm and when we use lvm
shell.
Better to drain everything we have, now that our IO is line-oriented
when using a pty.
We end up in a bad state if we simply eat IOErrors here. Exit the lvm
shell process and raise the IOError.
Bubble up an LvmBug if we get a KeyError on an lvm column name.
Useful for testing `exit_shell` when running interactively.
When lvm is compiled with editline, if the file descriptors don't look
like a tty then no "lvm> " prompt is printed. Having lvm output the
shell prompt when we are consuming JSON on a report file descriptor is
very useful for determining when an lvm command is complete.
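
A minimal sketch of using the prompt as an end-of-command marker on the
buffered pty output (`cmd_complete` is an illustrative name; the daemon's
real response parsing is more involved):

```python
PROMPT = b"lvm> "


def cmd_complete(buffered_output):
    """A command is done once the buffered output ends with the prompt.

    editline only prints the prompt when stdin looks like a tty, which
    is why the daemon allocates a pty for the child lvm shell.
    """
    return buffered_output.endswith(PROMPT)
```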
Previously the daemon would output PID:TID. If it's running under
systemd, it skips outputting the PID, as systemd already records it.