| Commit message | Author | Age | Files | Lines |
To write a new/repaired pv_header and label_header:
  pvck --repairtype pv_header --file <file> <device>
This uses the metadata input file to find the PV UUID,
device size, and data offset.

To write new/repaired metadata text and an mda_header:
  pvck --repairtype metadata --file <file> <device>
This requires a good pv_header that points to one or two
metadata areas. Any metadata areas referenced by the
pv_header are updated with the specified metadata and a
new mda_header. "--settings mda_num=1|2" can be used to
select one mda to repair.

To combine all header and metadata repairs:
  pvck --repair --file <file> <device>

It's best to use a raw metadata file as input, extracted
from another PV in the same VG (or from another metadata
area on the same PV). pvck will also accept a metadata
backup file, but that will produce metadata that is not
identical to the other metadata copies on other PVs and
areas. So, when using a backup file, consider using it
to update the metadata on all PVs/areas.

To get a raw metadata file to use for the repair, see
the pvck --dump commands.

List all instances of metadata from the metadata area
(by default mda1 is searched; --settings "mda_num=2"
will search the second):
  pvck --dump metadata_search <device>

Save one instance of metadata at the given offset to
the specified file (this file can be used for repair):
  pvck --dump metadata_search --file <file>
    --settings "metadata_offset=<off>" <device>
using --settings:
mda_offset=<offset> mda_size=<size> can be used
in place of the offset/size that normally come
from headers.
metadata_offset=<offset> prints/saves one instance
of metadata text at the given offset, in
metadata_all or metadata_search.
Add cmd/fmt args to import functions so that
they can be used without the fid arg.
Avoid making more dbus calls to get information we already have. This
also avoids an error where a dbus object representation is being
deleted by another process while we are trying to gather information
about it across the wire.
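
A minimal Python sketch of the caching idea (illustrative only, not the
lvmdbusd code; the PropertyCache name and paths are made up):

```python
# Sketch: remember object properties we have already fetched so a
# repeated lookup does not trigger another round trip over the bus.
class PropertyCache:
    def __init__(self, fetch):
        self._fetch = fetch          # callable doing the remote call
        self._cache = {}             # object path -> properties dict

    def get(self, path):
        # Reuse information we already have instead of calling again.
        if path not in self._cache:
            self._cache[path] = self._fetch(path)
        return self._cache[path]

calls = []
def fake_fetch(path):
    calls.append(path)
    return {"Name": path.rsplit("/", 1)[-1]}

cache = PropertyCache(fake_fetch)
cache.get("/com/redhat/lvmdbus1/Vg/0")
cache.get("/com/redhat/lvmdbus1/Vg/0")  # served from cache, no 2nd call
assert len(calls) == 1
```

Serving the second lookup from the cache also sidesteps the race where
the remote object disappears between our calls.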
Filter out LVs too, so that we can run more than one instance of the
unit test at the same time.
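
A small sketch of the filtering idea (the prefix scheme here is an
assumption, not the actual test code):

```python
# Sketch: each unit-test instance tags its LVs/VGs with a unique
# prefix and only operates on names carrying that prefix, so several
# instances can run against the same system at once.
import os

PREFIX = "lvdbustest_%d_" % os.getpid()   # hypothetical naming scheme

def filter_ours(names, prefix=PREFIX):
    # Ignore anything another test instance (or the admin) created.
    return [n for n in names if n.startswith(prefix)]

names = [PREFIX + "lv0", "someone_elses_lv", PREFIX + "lv1"]
assert filter_ours(names) == [PREFIX + "lv0", PREFIX + "lv1"]
```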
Add tests for all the different LV types with the standard LV dbus
interface. These tests shook out a couple of new bugs.
When an LV loses an interface it ends up getting removed and
recreated. This happens after the VGs have been processed and
updated, so when it happens we need to re-check the VGs.
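
The re-check logic can be sketched like this (names are illustrative,
not the daemon's actual functions):

```python
# Sketch: if processing the LVs removed and recreated any objects,
# the VGs processed earlier may be stale, so scan them again.
def refresh(vgs, lvs, process_vg, process_lv):
    for vg in vgs:
        process_vg(vg)
    # process_lv returns True when it had to recreate the object
    changed = any([process_lv(lv) for lv in lvs])
    if changed:
        for vg in vgs:      # LV recreation invalidated earlier VG state
            process_vg(vg)

vg_calls = []
def pvg(v):
    vg_calls.append(v)
def plv(l):
    return l == "lv_recreated"

refresh(["vg0"], ["lv0", "lv_recreated"], pvg, plv)
assert vg_calls == ["vg0", "vg0"]   # VG checked again after recreation
```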
Prevent the daemon from stalling when it gets stuck on a y/n prompt.
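
One common way to avoid that kind of stall, sketched in Python (a
generic pattern, not necessarily how the daemon does it; `cat` stands
in for the prompting command):

```python
# Sketch: run the external tool with stdin closed so an unexpected
# y/n prompt reads EOF instead of blocking the daemon forever, and
# add a timeout as a backstop.
import subprocess

def run_no_prompt(cmd, timeout=10):
    return subprocess.run(
        cmd,
        stdin=subprocess.DEVNULL,   # any prompt sees EOF, not a tty
        stdout=subprocess.PIPE,
        stderr=subprocess.PIPE,
        timeout=timeout)            # backstop if it still hangs

r = run_no_prompt(["cat"])          # would block on a tty; exits here
assert r.returncode == 0
```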
This allows us to fully verify introspection data matches what we are
getting.
VDO pool LVs are represented by a new dbus interface VgVdo. Currently
the interface only has additional VDO properties, but when the
ability to support additional LV creation is added we can add a method
to the interface.
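
The shape of such an on-demand extra interface can be sketched as
follows (plain Python, no dbus bindings; the VgVdo interface name and
its placement are assumptions for illustration):

```python
# Sketch: a VDO pool exposes the extra VgVdo interface alongside the
# standard Vg interface; non-VDO VGs never advertise it.
class Vg:
    INTERFACE = "com.redhat.lvmdbus1.Vg"
    def interfaces(self):
        return [self.INTERFACE]

class VgVdo(Vg):
    VDO_INTERFACE = "com.redhat.lvmdbus1.VgVdo"
    def interfaces(self):
        # additional VDO properties (and later, methods) live here
        return super().interfaces() + [self.VDO_INTERFACE]

assert VgVdo().interfaces() == ["com.redhat.lvmdbus1.Vg",
                                "com.redhat.lvmdbus1.VgVdo"]
```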
When VDO support is available we will create VG object instances
which will allow the API user to create VDO pool LVs.
Will be used to add vdo interfaces on demand.
This issue has been resolved; sizes > 2**32-1 are not supported.
This is needed in a number of places.
Remove the duplicated copy-and-pasted code which simply creates a VG
to use.
Added for vdo support.
When developing and testing on a local system, editing the test conf
is another way to get the test_nesting unit test to pass.
These were added for vdo integration.
We can use tuple expansion from the command handler functions
directly.
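
The idea, as a minimal sketch (names are illustrative):

```python
# Sketch: handlers are queued as (function, args) pairs and the
# dispatcher applies the tuple directly with * expansion, so no
# per-call unpacking glue is needed.
def vg_rename(uuid, new_name):
    return "renamed %s -> %s" % (uuid, new_name)

def dispatch(job):
    func, args = job
    return func(*args)          # tuple expansion straight into the call

result = dispatch((vg_rename, ("abc-123", "vg_new")))
assert result == "renamed abc-123 -> vg_new"
```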
The vg, lv, and pv code each had the same function for handling
command execution. Move it to a utility function and abstract the
difference.
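
A sketch of that refactor (the function names and argument layout are
made up for illustration):

```python
# Sketch: one shared utility replaces the three near-identical
# vg/lv/pv versions; the per-type difference is a parameter.
def execute_cmd(obj_type, cmd, args, runner):
    # runner is whatever actually invokes lvm; injected here so the
    # sketch is testable without the real binary
    return runner([cmd, obj_type] + list(args))

log = []
def fake_runner(argv):
    log.append(argv)
    return 0

assert execute_cmd("vg", "rename", ["vg0", "vg1"], fake_runner) == 0
assert log == [["rename", "vg", "vg0", "vg1"]]
```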
It broke some unit tests, for very little benefit.
Author: Heming Zhao
abort-forces-read
The return value from bcache_invalidate_fd() was not being checked.
So I've introduced a little function, _invalidate_fd(), that always
calls bcache_abort_fd() if the write fails.
This gives us a way to cope with write failures.
Check merging of old snapshot of thin LV.
The resume of the 'released' 'COW' should precede the resume of the
origin. The fact that we needed to do the sequence differently for a
merge was caused by bugs fixed in the 2 previous commits - so we no
longer need to recognize 'merging' and we should always go with the
single sequence.

The importance of this order is to properly remove the '-real' device
from the origin LV. When the COW is activated 2nd, the '-real' device
is kept in the table, as it cannot be removed during the 1st resume
of the origin, and later activation of the COW LV no longer builds
the tree associated with the origin LV.