* writecache: allow snapshot of LV with writecache [dev-dct-writecache83-1] (David Teigland, 2020-05-21; 1 file, -2/+0)
* tests: cachevol-cachedevice (David Teigland, 2020-05-21; 1 file, -0/+121)
* fix bad result from _cache_min_metadata_size (David Teigland, 2020-05-21; 1 file, -0/+4)

  Fixes a regression from switching to use _cache_min_metadata_size
  (commit c08704cee7e34a96fdaa453faf900683283e8691), which returns a bogus
  value when the cachevol size is 8MB.

* lvcreate: new cache or writecache lv with single command (David Teigland, 2020-05-21; 8 files, -127/+341)

  To create a new cache or writecache LV with a single command:

  lvcreate --type cache|writecache -n Name -L Size --cachedevice PVfast VG [PVslow ...]

  - A new main linear|striped LV is created as usual, using the specified
    -n Name and -L Size, and using the optionally specified PVslow
    devices.

  - Then, a new cachevol LV is created internally, using PVfast specified
    by the cachedevice option.

  - Then, the cachevol is attached to the main LV, converting the main LV
    to type cache|writecache.

  Include --cachesize Size to specify the size of cache|writecache to
  create from the specified --cachedevice PVs, otherwise the entire
  cachedevice PV is used. The --cachedevice option can be repeated to
  create the cache from multiple devices, or the cachedevice option can
  contain a tag name specifying a set of PVs to allocate the cache from.

  To create a new cache or writecache LV with a single command using an
  existing cachevol LV:

  lvcreate --type cache|writecache -n Name -L Size --cachevol LVfast VG [PVslow ...]

  - A new main linear|striped LV is created as usual, using the specified
    -n Name and -L Size, and using the optionally specified PVslow
    devices.

  - Then, the cachevol LVfast is attached to the main LV, converting the
    main LV to type cache|writecache.

  In cases where more advanced types (for the main LV or cachevol LV) are
  needed, they should be created independently and then combined with
  lvconvert.

  Example
  -------

  A user creates a new VG with one slow device and one fast device:

  $ vgcreate vg /dev/slow1 /dev/fast1

  The user creates a new 8G main LV on /dev/slow1 that uses all of
  /dev/fast1 as a writecache:

  $ lvcreate --type writecache --cachedevice /dev/fast1 -n main -L 8G vg /dev/slow1

  Example
  -------

  A user creates a new VG with two slow devs and two fast devs:

  $ vgcreate vg /dev/slow1 /dev/slow2 /dev/fast1 /dev/fast2

  The user creates a new 8G main LV on /dev/slow1 and /dev/slow2 that
  uses all of /dev/fast1 and /dev/fast2 as a writecache:

  $ lvcreate --type writecache --cachedevice /dev/fast1 --cachedevice /dev/fast2 -n main -L 8G vg /dev/slow1 /dev/slow2

  Example
  -------

  A user has several slow devices and several fast devices in their VG;
  the slow devs have tag @slow, the fast devs have tag @fast.

  The user creates a new 8G main LV on the slow devs with a 2G writecache
  on the fast devs:

  $ lvcreate --type writecache -n main -L 8G --cachedevice @fast --cachesize 2G vg @slow

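  For illustration, the layout produced by the single-command form can be
  inspected with standard lvs reporting; this is a sketch reusing the
  vg/main names from the examples above, and the exact output will vary:

  $ lvs -a -o name,size,segtype,devices vg
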
* lvconvert: single step cachevol creation and attachment (David Teigland, 2020-05-21; 4 files, -53/+203)

  To add a cache or writecache to a main LV with a single command:

  lvconvert --type cache|writecache --cachedevice /dev/ssd vg/main

  A cachevol LV will be allocated from the specified cache device, then
  attached to the main LV. Include --cachesize to specify the size of
  cachevol to create, otherwise the entire cachedevice is used. The
  cachedevice option can be repeated to create a cachevol from multiple
  devices.

  Example
  -------

  A user has an existing main LV that they want to speed up using a new
  ssd. The user adds the new ssd to the VG:

  $ vgextend vg /dev/ssd

  The user attaches the new ssd to their main LV:

  $ lvconvert --type writecache --cachedevice /dev/ssd vg/main

  Example
  -------

  A user has two existing main LVs that they want to speed up with a new
  ssd. The user adds the new 16G ssd to the VG:

  $ vgextend vg /dev/ssd

  The user attaches half of the new ssd to the first main LV:

  $ lvconvert --type writecache --cachedevice /dev/ssd --cachesize 8G vg/main1

  The user attaches the other half of the ssd to the second main LV:

  $ lvconvert --type writecache --cachedevice /dev/ssd --cachesize 8G vg/main2

  Example
  -------

  A user has an existing main LV that they want to speed up using two new
  ssds. The user adds the two new ssds to the VG:

  $ vgextend vg /dev/ssd1
  $ vgextend vg /dev/ssd2

  The user attaches both ssds to their main LV:

  $ lvconvert --type writecache --cachedevice /dev/ssd1 --cachedevice /dev/ssd2 vg/main

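  For completeness, a cache added this way can later be detached again
  with lvconvert --splitcache (a sketch reusing vg/main from the examples
  above; for a writecache the cache is flushed before being detached):

  $ lvconvert --splitcache vg/main
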
* tests: writecache-blocksize (David Teigland, 2020-05-21; 1 file, -0/+343)
* writecache: use two stage detach (David Teigland, 2020-05-21; 8 files, -130/+734)

  Avoid flushing, and potentially blocking for a long time, in suspend by
  using the cleaner setting. To detach the writecache, first set the
  cleaner option on the writecache LV without detaching the writecache.
  Then return to the top level of the command, releasing the VG and the
  VG lock. From there, periodically check the progress of the cleaner by
  locking/reading the VG and checking kernel status. Once the cleaner has
  finished flushing, detach the writecache from the LV.

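  The cleaner mechanism can also be driven by hand; a rough sketch,
  assuming a writecache LV vg/main and the cleaner setting exposed via
  --cachesettings (see the following commit):

  $ lvchange --cachesettings 'cleaner=1' vg/main
  $ lvs -o+cache_settings vg/main   # repeat until flushing completes
  $ lvconvert --splitcache vg/main
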
* writecache: add settings cleaner and max_age (David Teigland, 2020-05-21; 7 files, -1/+98)

  Available in dm-writecache 1.2.

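  A hedged usage sketch (vg/main is hypothetical, and per the
  dm-writecache documentation max_age is given in milliseconds):

  $ lvchange --cachesettings 'max_age=10000' vg/main
  $ lvs -o+cache_settings vg/main
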
* writecache: attach while active using fs block size (David Teigland, 2020-05-21; 1 file, -13/+164)

  Use libblkid to detect the sector/block size of the fs on the LV. Use
  this to choose a compatible writecache block size. Enable attaching
  writecache to an active LV.

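  The block size actually chosen can be confirmed from the kernel table;
  assuming the table format documented for dm-writecache, the block size
  appears after the origin and cache devices in the table line (the
  device name below is hypothetical):

  $ dmsetup table vg-main
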
* writecache: cachesettings in lvchange and lvs (David Teigland, 2020-05-21; 8 files, -153/+311)

  lvchange --cachesettings
  lvs -o+cache_settings

* writecache: show error in lv_health_status and lv_attr (David Teigland, 2020-05-21; 3 files, -0/+13)

  lv_attr is 'E' and lv_health_status is 'error' when dm-writecache
  status reports error.

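  A minimal reporting sketch (the LV name is hypothetical):

  $ lvs -o name,attr,health_status vg/main
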
* writecache: remove from an active lv (David Teigland, 2020-05-21; 5 files, -91/+294)
* tests: also udev wait on clean-up path (Zdenek Kabelac, 2020-05-21; 1 file, -4/+10)
* test: Use printf to generate data (Marian Csontos, 2020-05-21; 4 files, -20/+12)

  ...to avoid an unnecessary dependency on python.

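  One printf-only idiom for generating test data, not necessarily the
  exact form used by these tests; this writes one MiB of '0' characters:

  $ printf "%0*d" 1048576 0 > testdata
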
* tests: Use python single liner to generate data (Marian Csontos, 2020-05-21; 4 files, -12/+20)
* build: make generate (Marian Csontos, 2020-05-21; 2 files, -0/+116)
* tests: add wait on udev processing (Zdenek Kabelac, 2020-05-20; 1 file, -1/+3)

  Trying to avoid a collision with the udev watch rule, which prevents
  'dmsetup remove' from succeeding because it keeps the device open.

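  In scripts, the usual way to avoid this race is to let udev finish
  processing events before removing the device; a sketch with a
  hypothetical device name:

  $ udevadm settle --timeout=10
  $ dmsetup remove vg-test
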
* list: use container_of (Zdenek Kabelac, 2020-05-20; 2 files, -6/+4)

  Reuse the macro.

* pvck: set dump on one call (Zdenek Kabelac, 2020-05-20; 1 file, -6/+2)

  arg_str_value() has a built-in arg_is_set(). This also makes it obvious
  to coverity that 'dump != NULL' and 'repair != NULL' on the branch code
  path.

* cov: lvconvert: missing check for function failure (Zdenek Kabelac, 2020-05-20; 1 file, -1/+2)
* cov: check strdup for NULL (Zdenek Kabelac, 2020-05-20; 1 file, -4/+8)
* cov: check for deactivation failure (Zdenek Kabelac, 2020-05-20; 1 file, -2/+7)
* lvmcache: free vginfo lock_type (David Teigland, 2020-05-14; 1 file, -0/+2)
* hints: free hint structs on exit (David Teigland, 2020-05-13; 2 files, -0/+4)

  And free on a couple of error paths.

* devs: add some checks for a dev with no path name (David Teigland, 2020-05-13; 2 files, -0/+6)

  It's possible for a dev-cache entry to remain after all paths for it
  have been removed, and other parts of the code expect that a dev always
  has a name. A better fix may be to remove a device from dev-cache after
  all paths to it have been removed.

* lvmlockd: use 4K sector size when any dev is 4K (David Teigland, 2020-05-11; 1 file, -10/+4)

  When either the logical block size or the physical block size is 4K,
  lvmlockd creates sanlock leases based on 4K sectors, but the lvm client
  side would create the internal lvmlock LV based on the first logical
  block size it saw in the VG, which could be 512. This could cause the
  lvmlock LV to be too small to hold all the sanlock leases. Make the lvm
  client side use the same sizing logic as lvmlockd.

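  The block sizes that drive this logic can be checked per device with
  standard blockdev flags (the device name is hypothetical); --getss
  prints the logical sector size and --getpbsz the physical block size:

  $ blockdev --getss --getpbsz /dev/sda
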
* spec: Enable integrity (Marian Csontos, 2020-05-05; 1 file, -0/+1)
* lvmlockd: replace lock adopt info source (David Teigland, 2020-05-04; 5 files, -187/+207)

  The lock adopt feature was disabled since it had used lvmetad as a
  source of info. This replaces the lvmetad info with a local file and
  enables the adopt feature again (enabled with lvmlockd --adopt 1).

* remove vg_read_error (David Teigland, 2020-04-24; 4 files, -18/+2)

  It once converted results to error numbers but is now just a null
  check.

* use refresh_filters only where needed (David Teigland, 2020-04-22; 2 files, -11/+3)

  Filters are changed and need a refresh in only one place
  (vgimportclone), so avoid doing the refresh for every other command
  that doesn't need it.

* Fix scripts/lvmlocks.service.in using nonexistent --lock-opt autowait (Maxim Plotnikov, 2020-04-21; 1 file, -1/+1)

  The --lock-opt autowait was dropped back in 9ab6bdce01, and attempting
  to specify it has quite the opposite effect: no waiting is done, which
  makes the unit almost useless.

* lvmcache: rework handling of VGs with duplicate vgnames (David Teigland, 2020-04-21; 8 files, -327/+1264)

  The previous method of managing duplicate vgnames prevented vgreduce
  from working if a foreign vg with the same name existed.

* pass cmd struct through more functions (David Teigland, 2020-04-21; 10 files, -33/+39)

  No functional change.

* lvmcache_get_mda: remove unused function (David Teigland, 2020-04-21; 2 files, -35/+0)
* vgrename: fix error value when name exists (David Teigland, 2020-04-21; 1 file, -1/+1)
* WHATS_NEW: integrity with raid (David Teigland, 2020-04-15; 1 file, -0/+1)
* Allow dm-integrity to be used for raid images (David Teigland, 2020-04-15; 45 files, -37/+3790)

  dm-integrity stores checksums of the data written to an LV, and returns
  an error if data read from the LV does not match the previously saved
  checksum. When used on raid images, dm-raid will correct the error by
  reading the block from another image, and the device user sees no
  error.

  The integrity metadata (checksums) are stored on an internal LV
  allocated by lvm for each linear image. The internal LV is allocated on
  the same PV as the image.

  Create a raid LV with an integrity layer over each raid image (for raid
  levels 1,4,5,6,10):

  lvcreate --type raidN --raidintegrity y [options]

  Add an integrity layer to images of an existing raid LV:

  lvconvert --raidintegrity y LV

  Remove the integrity layer from images of a raid LV:

  lvconvert --raidintegrity n LV

  Settings
  --------

  Use --raidintegritymode journal|bitmap (journal is default) to
  configure the method used by dm-integrity to ensure crash consistency.

  Initialization
  --------------

  When integrity is added to an LV, the kernel needs to initialize the
  integrity metadata/checksums for all blocks in the LV. The data
  corruption checking performed by dm-integrity will only operate on
  areas of the LV that are already initialized. The progress of integrity
  initialization is reported by the "syncpercent" LV reporting field (and
  under the Cpy%Sync lvs column.)

  Example: create a raid1 LV with integrity:

  $ lvcreate --type raid1 -m1 --raidintegrity y -n rr -L1G foo
    Creating integrity metadata LV rr_rimage_0_imeta with size 12.00 MiB.
    Logical volume "rr_rimage_0_imeta" created.
    Creating integrity metadata LV rr_rimage_1_imeta with size 12.00 MiB.
    Logical volume "rr_rimage_1_imeta" created.
    Logical volume "rr" created.
  $ lvs -a foo
    LV                  VG  Attr       LSize  Origin              Cpy%Sync
    rr                  foo rwi-a-r---  1.00g                     4.93
    [rr_rimage_0]       foo gwi-aor---  1.00g [rr_rimage_0_iorig] 41.02
    [rr_rimage_0_imeta] foo ewi-ao---- 12.00m
    [rr_rimage_0_iorig] foo -wi-ao----  1.00g
    [rr_rimage_1]       foo gwi-aor---  1.00g [rr_rimage_1_iorig] 39.45
    [rr_rimage_1_imeta] foo ewi-ao---- 12.00m
    [rr_rimage_1_iorig] foo -wi-ao----  1.00g
    [rr_rmeta_0]        foo ewi-aor---  4.00m
    [rr_rmeta_1]        foo ewi-aor---  4.00m

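  Initialization progress and detected mismatches can be watched from
  reporting; the syncpercent field is named in this message, while the
  integritymismatches field is an assumption based on the same patchset:

  $ lvs -a -o name,sync_percent,integritymismatches foo
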
* move pv_list code into lib (David Teigland, 2020-04-13; 5 files, -279/+296)
* blkdeactivate: add support for VDO in blkdeactivate script (Peter Rajnoha, 2020-04-09; 3 files, -1/+59)

  Make it possible to tear down VDO volumes with blkdeactivate if VDO is
  part of a device stack (and if the VDO binary is installed). Also,
  support the optional -o|--vdooptions configfile=file.

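  A hedged usage sketch, with a hypothetical VDO config file path:

  $ blkdeactivate -u -o configfile=/etc/vdoconf.yml
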
* WHATS_NEWS: update (Zdenek Kabelac, 2020-04-08; 1 file, -0/+1)
* test: repair of thin-pool used by foreign apps (Zdenek Kabelac, 2020-04-08; 1 file, -0/+72)
* lvconvert: no validation for thin-pools not used by lvm2 (Zdenek Kabelac, 2020-04-08; 1 file, -1/+2)

  lvm2 supports a thin-pool that is later used by other tools managing
  the virtual volumes themselves (e.g. docker). In this case we shall not
  validate the transaction Id, since it is maintained by the other tools
  while lvm2 keeps the value 0, so the transactionId validation needs to
  be skipped in this case.

* post-release (Marian Csontos, 2020-03-26; 4 files, -2/+8)
* pre-release [v2_03_09] (Marian Csontos, 2020-03-26; 4 files, -6/+6)
* vdo: make vdopool wrapping device read-only (Zdenek Kabelac, 2020-03-23; 1 file, -1/+1)

  When a vdopool is activated standalone, we use a wrapping linear device
  to hold the actual vdo device active. For this we can set up a
  read-only device to ensure no writes can be made through it to the
  actual pool device.

* test: Fix previous commit (Marian Csontos, 2020-03-18; 1 file, -1/+1)
* test: Can not attach writecache to active volume (Marian Csontos, 2020-03-18; 1 file, -1/+4)
* reduce device path error messages (David Teigland, 2020-03-12; 3 files, -6/+13)

  When /dev entries or sysfs entries are changing due to concurrent lvm
  commands, it can cause warning/error messages about missing paths.

* man: lvm2-activation-generator fix vgchange comment (David Teigland, 2020-03-10; 1 file, -1/+1)

  Generated services use vgchange -aay (not -ay).

* lvmlockd: use transient LV lock when creating snapshot (David Teigland, 2020-03-09; 1 file, -1/+1)

  Creating a snapshot was using a persistent LV lock on the origin, so if
  the origin LV was inactive at the time of the snapshot, the LV lock
  would remain. (Running lvchange -an on the inactive LV would clear the
  LV lock.) Use a transient LV lock so it will be dropped if it was not
  locked previously.

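  A minimal sketch of the affected sequence in a shared VG (names
  hypothetical); with the transient lock, no LV lock lingers on the
  inactive origin after the snapshot is created:

  $ lvchange -an vg/origin
  $ lvcreate -s -n snap -L 1G vg/origin
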