| Commit message | Author | Age | Files | Lines |
|
|
|
Use a more descriptive name.
|
|
|
|
toollib rework hadn't enabled "all vgs" for vgimport.
|
|
|
|
to avoid conflict with any global definition.
|
Process pvs by iterating through vgs, then iterating through
devs if the command wants to process non-pv devices. The
process_single function can always use the vg and pv args.
|
|
|
|
|
|
The ENABLE_ALL_DEVS flag is added to the command structure
for commands that should process all devs (pvs and non-pvs)
when they call process_each_pv and the command includes the
--all arg. This will be used in a later process_each_pv patch.
|
|
|
|
The failed_lvnames arg is no longer used since the
cmd_vg replicator wrapper was removed.
|
|
|
Include in the error message the lv name args that were not found.
|
|
|
|
|
- Copy the same form as the new process_each_vg.
- Replace unused struct cmd_vg and cmd_vg_read() replicator
code with struct vg and vg_read() directly.
|
|
|
|
|
|
|
- Split the collecting of arguments from processing them.
- The split allows the two different loops through vgs to
be replaced by a single loop.
- Replace unused struct cmd_vg and cmd_vg_read() replicator
code with struct vg and vg_read() directly.
|
|
|
|
|
|
The ENABLE_ALL_VGS flag is added to the command structure
for commands that should process all vgs when they call
process_each_vg or process_each_lv with no args.
This will be used in later patches to process_each functions.
|
|
|
|
|
|
|
|
|
|
|
|
This is the sort of info we always ask people to retrieve when
inspecting problems in a systemd environment, so let's have it
as part of lvmdump directly.
The -s option does not need to be bound to systemd only. We could
add support for initscripts or any other system-wide/service-tracking
info that can help us with debugging problems.
|
|
|
|
|
|
|
|
|
|
By default, the thin_pool_chunk_size is calculated automatically.
When defined, it disables the automatic calculation, so to be more
precise here, we should comment it out in default.profile.
Also, "lvm dumpconfig --type profilable" was used here to generate
the default.profile content. This will be done automatically in the
future once we have the infrastructure for it in place (see also
https://bugzilla.redhat.com/show_bug.cgi?id=1073415).
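As a sketch, the relevant default.profile fragment would then look like this (the value 128 is purely illustrative):

```
allocation {
	# thin_pool_chunk_size=128
}
```

Leaving the setting commented out preserves the automatic chunk size calculation; uncommenting it pins the value.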
|
|
|
|
|
|
|
|
Perform two allocation attempts with cling if maximise_cling is set,
first with and then without positional fill.
Avoid segfaults from confusion between positional and sorted sequential
allocation when the number of stripes varies, as reported here:
https://www.redhat.com/archives/linux-lvm/2014-March/msg00001.html
|
|
|
|
|
|
Set A_POSITIONAL_FILL if the array of areas is being filled
positionally (with a slot corresponding to each 'leg') rather
than sequentially (with all suitable areas found, to be sorted
and selected from).
|
|
|
alloc_parms is constant while allocating.
|
|
|
Abort loop when PIDFILE is gone
|
|
|
|
|
|
|
|
|
|
Since the kill may take a varying amount of time
(especially when running with valgrind),
check that it is really a pvmoved LV.
Restore the initial restart of clvmd - it's currently
broken at various moments - basically a killed lvm2
command may leave clvmd in a confusing state, leading
to reports of internal errors.
|
|
|
Add an easy check function for checking lv_attr bits.
|
Move the !node_up check to the front and reindent
the rest of the function to the left.
|
|
|
|
Use the thread-friendly version of ctime.
TODO: should probably be replaced with strftime().
|
|
|
|
|
|
Before adding a new reply to the list, check
whether the reply thread has already finished.
In that case, discard the message
(which would otherwise be leaked).
|
|
|
|
|
|
|
Use a mutex to access localsock values, so that
num_replies is checked only while the thread is not yet finished.
Check threadid before taking the mutex
(though this check is probably not really needed).
|
|
|
|
|
|
The added complexity of an extra reply mutex is not worth the trouble.
The only place which may slightly benefit from this mutex is the timeout
path, and since this is rather an error case, let's convert it to
localsock.mutex and keep it simple.
|
|
|
Setting this variable needs to be protected with a mutex.
|
|
|
|
|
|
|
|
|
Move the pthread mutex and condition creation and destruction
to the correct place: right after the client memory is allocated
and right before it is going to be released.
In the original place they were in a race with the lvm thread,
which could still unlock the mutex after it had already been
destroyed.
|
|
|
|
|
|
|
|
|
When the TEST_MODE flag is passed around the cluster,
it has been used in a thread-unprotected way, so it may have
influenced the behaviour of other lvm commands running in parallel
(activation/deactivation/suspend/resume).
Fix it by doing the set/query only under the lvm mutex.
For the hold_lock/hold_unlock function calls, check the lock_flags bits directly.
|
|
|
|
Extend the list of ignored libraries. Since we do not
use those libraries during suspend, skip their locking.
|
|
|
|
|
|
|
When pvmove0 is finished, it is temporarily replaced
with an error segment; however, in this case pvmove0 remains
unremovable if pvmove --abort is interrupted at this
moment - since it's not a pvmove anymore, and a normal
lvremove can't be used to remove a LOCKED LV.
|
|
|
Negative intervals are not supported.
|
|
|
No functional changes intended to be included in this patch.
|
--restorefile compatibility
Also, avoid division by zero in pvcreate's param validation
in case someone supplies "pvcreate --dataalignment 0".
|
|
|
|
Enhance the bootloader area test to check whether restoring values from
backup works correctly.
|
--restorefile
There were two bugs before when using pvcreate --restorefile together
with data alignment and its offset specified:
- the --dataalignment was always ignored due to missing braces in the
  code validating the divisibility of the supplied --dataalignment
  argument with the pe_start which we're just restoring:
	if (pp->rp.pe_start % pp->data_alignment)
		log_warn("WARNING: Ignoring data alignment %" PRIu64
			 " incompatible with --restorefile value (%"
			 PRIu64").", pp->data_alignment, pp->rp.pe_start);
		pp->data_alignment = 0;
  The pp->data_alignment should be zeroed only if pe_start is not
  divisible by data_alignment.
- the check for compatibility of the restored pe_start was incorrect
  too, since it did not properly account for the dataalignmentoffset
  that could be supplied together with dataalignment.
  The proper formula is:
	X * dataalignment + dataalignmentoffset == pe_start
  So it should be:
	if ((pp->rp.pe_start % pp->data_alignment) != pp->data_alignment_offset) {
		...ignore supplied dataalignment and dataalignment offset...
	}
|
|
|
|
|
|
This was caused by the refactoring in 732859d21f3b41bdb188f92b60f25d5c94dcee8a.
The former "ea" was not renamed to "ba", and we used an
incorrect tree node name to search for the value.
|
|
|
|
Drop zeroing of zalloc-ed memory.
|
|
|
Since the return here is the only path, reindent for readability.
|
|
|
Fix cut&paste comments
|
|
|
|
Relocate some defines from lvm headers to those
few shared between libdm and lib code.
|
|
|
More use of libdevmapper macro
|
|
|
|
|
|
|
Split apply_lvname_restrictions into two internal
functions:
_lvname_has_reserved_prefix()
_lvname_has_reserved_string()
|
|
|
|
|
In general, for non-toplevel LVs we shouldn't allow any _tree_action.
For now, error on a request for cache_pool activation, which
doesn't even exist in the dm table.