Commit message (Author, Date, Files changed, Lines -/+)
* Allow dm-integrity to be used for raid images [dev-dct-integrity33] (David Teigland, 2020-03-30, 45 files, -39/+3761)
dm-integrity stores checksums of the data written to an LV, and returns an error if data read from the LV does not match the previously saved checksum. When used on raid images, dm-raid will correct the error by reading the block from another image, and the device user sees no error.

The integrity metadata (checksums) are stored on an internal LV allocated by lvm for each linear image. The internal LV is allocated on the same PV as the image.

Create a raid LV with an integrity layer over each raid image (for raid levels 1,4,5,6,10):
  lvcreate --type raidN --raidintegrity y [options]

Add an integrity layer to images of an existing raid LV:
  lvconvert --raidintegrity y LV

Remove the integrity layer from images of a raid LV:
  lvconvert --raidintegrity n LV

Settings

Use --raidintegritymode journal|bitmap (journal is default) to configure the method used by dm-integrity to ensure crash consistency.

Initialization

When integrity is added to an LV, the kernel needs to initialize the integrity metadata/checksums for all blocks in the LV. The data corruption checking performed by dm-integrity will only operate on areas of the LV that are already initialized. The progress of integrity initialization is reported by the "syncpercent" LV reporting field (and under the Cpy%Sync lvs column.)

Examples

create a raid1 LV with integrity:

$ lvcreate --type raid1 -m1 --raidintegrity y -n rr -L1G foo
  Creating integrity metadata LV rr_rimage_0_imeta with size 12.00 MiB.
  Logical volume "rr_rimage_0_imeta" created.
  Creating integrity metadata LV rr_rimage_1_imeta with size 12.00 MiB.
  Logical volume "rr_rimage_1_imeta" created.
  Logical volume "rr" created.
$ lvs -a foo
  LV                  VG  Attr       LSize  Origin              Cpy%Sync
  rr                  foo rwi-a-r---  1.00g                     4.93
  [rr_rimage_0]       foo gwi-aor---  1.00g [rr_rimage_0_iorig] 41.02
  [rr_rimage_0_imeta] foo ewi-ao---- 12.00m
  [rr_rimage_0_iorig] foo -wi-ao----  1.00g
  [rr_rimage_1]       foo gwi-aor---  1.00g [rr_rimage_1_iorig] 39.45
  [rr_rimage_1_imeta] foo ewi-ao---- 12.00m
  [rr_rimage_1_iorig] foo -wi-ao----  1.00g
  [rr_rmeta_0]        foo ewi-aor---  4.00m
  [rr_rmeta_1]        foo ewi-aor---  4.00m

create a raid1 LV, then add integrity:

$ lvcreate --type raid1 -m1 -n rr -L1G foo
  Creating integrity metadata LV rr_rimage_0_imeta with size 12.00 MiB.
  Logical volume "rr_rimage_0_imeta" created.
  Creating integrity metadata LV rr_rimage_1_imeta with size 12.00 MiB.
  Logical volume "rr_rimage_1_imeta" created.
  Logical volume "rr" created.
$ lvs -a foo
  LV                  VG  Attr       LSize  Origin              Cpy%Sync
  rr                  foo rwi-aor---  1.00g                     0.00
  [rr_rimage_0]       foo gwi-aor---  1.00g [rr_rimage_0_iorig] 17.58
  [rr_rimage_0_imeta] foo ewi-ao---- 12.00m
  [rr_rimage_0_iorig] foo -wi-ao----  1.00g
  [rr_rimage_1]       foo gwi-aor---  1.00g [rr_rimage_1_iorig] 15.62
  [rr_rimage_1_imeta] foo ewi-ao---- 12.00m
  [rr_rimage_1_iorig] foo -wi-ao----  1.00g
  [rr_rmeta_0]        foo ewi-aor---  4.00m
  [rr_rmeta_1]        foo ewi-aor---  4.00m
$ lvconvert --raidintegrity y foo/rr
  Logical volume foo/rr has added integrity.
$ lvs -a foo
  LV                  VG  Attr       LSize  Origin              Cpy%Sync
  rr                  foo rwi-a-r---  1.00g                     9.18
  [rr_rimage_0]       foo gwi-aor---  1.00g [rr_rimage_0_iorig] 65.62
  [rr_rimage_0_imeta] foo ewi-ao---- 12.00m
  [rr_rimage_0_iorig] foo -wi-ao----  1.00g
  [rr_rimage_1]       foo gwi-aor---  1.00g [rr_rimage_1_iorig] 64.84
  [rr_rimage_1_imeta] foo ewi-ao---- 12.00m
  [rr_rimage_1_iorig] foo -wi-ao----  1.00g
  [rr_rmeta_0]        foo ewi-aor---  4.00m
  [rr_rmeta_1]        foo ewi-aor---  4.00m
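As a further sketch using only the options documented above (same foo/rr names; illustrative only, output omitted), the bitmap mode can be selected at creation time and the integrity layer removed again later:

$ lvcreate --type raid1 -m1 --raidintegrity y --raidintegritymode bitmap -n rr -L1G foo
$ lvconvert --raidintegrity n foo/rr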
* move pv_list code into lib (David Teigland, 2020-03-17, 5 files, -279/+296)
* reduce device path error messages (David Teigland, 2020-03-12, 3 files, -6/+13)
When /dev entries or sysfs entries are changing due to concurrent lvm commands, it can cause warning/error messages about missing paths.
* man: lvm2-activation-generator fix vgchange comment (David Teigland, 2020-03-10, 1 file, -1/+1)
generated services use vgchange -aay (not -ay)
* lvmlockd: use transient LV lock when creating snapshot (David Teigland, 2020-03-09, 1 file, -1/+1)
Creating a snapshot was using a persistent LV lock on the origin, so if the origin LV was inactive at the time of the snapshot the LV lock would remain. (Running lvchange -an on the inactive LV would clear the LV lock.) Use a transient LV lock so it will be dropped if it was not locked previously.
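A minimal sketch of the scenario above, assuming a shared VG 'vg' managed by lvmlockd and an inactive origin 'lv0' (names are hypothetical); with a transient lock, the LV lock taken for the snapshot is released when the command finishes instead of lingering:

$ lvchange -an vg/lv0
$ lvcreate -s -n lv0snap -L100M vg/lv0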
* writecache: require inactive LV to attach (David Teigland, 2020-03-09, 1 file, -9/+10)
Prevent attaching writecache to an active LV until we can determine the block size of the fs on the LV, and use that to enforce an appropriate writecache block size. Changing the block size under a mounted fs can cause panic/corruption.
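A sketch of the attach flow this change expects, using hypothetical names (LV 'main' carrying a filesystem, cachevol LV 'fast' already created in 'vg'); the LV is unmounted and deactivated first so the fs block size can be checked before the writecache is attached:

$ umount /mnt/main
$ lvchange -an vg/main
$ lvconvert --type writecache --cachevol fast vg/main
$ lvchange -ay vg/main
$ mount /dev/vg/main /mnt/main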
* WHATS_NEW_DM: update (Zdenek Kabelac, 2020-03-05, 1 file, -0/+2)
* container_of: use offsetof from stddef (Zdenek Kabelac, 2020-03-05, 2 files, -10/+5)
Use the standardized offsetof() macro from stddef.h. Helps to build valid code with the latest gcc 10 at -O2.
* libdm: fix dm_list pointer arithmetic for new gcc 10 optimization (Zdenek Kabelac, 2020-03-05, 2 files, -4/+7)
* dmeventd: enhance time waiting loop (Zdenek Kabelac, 2020-03-05, 1 file, -1/+10)
dmeventd scans statuses in a loop (usually at 10-second intervals) and meanwhile sleeps within pthread_cond_timedwait(). However, this call sometimes wakes up a short amount of time early, and the code then believed the 'right time' had not yet arrived and briefly busy-looped on calling this function again. So, on systems where clock_gettime() is present, we obtain the current time and aim 10ms past the target second; this avoids unneeded repeated invocation of our time scheduling loop.

TODO: monitoring during 1 hour 'time-change'...
* pvck: use dm_config_parse_without_dup_node_check (David Teigland, 2020-03-04, 1 file, -2/+2)
Use it instead of dm_config_parse. Some strange case could cause dm_config_parse to print duplicate warnings about all the metadata fields.
* tests: reduce sizes in pvck-dump and improve checks (David Teigland, 2020-03-04, 1 file, -16/+50)
Smaller devs can be used so tests can be run on small vms. Improve checks.
* tests: pvck dump from larger metadata areas (David Teigland, 2020-03-03, 1 file, -1/+34)
* pvck: allow dump from file (David Teigland, 2020-03-03, 1 file, -43/+111)
* pvck: fix reading large mda1 (David Teigland, 2020-03-03, 1 file, -0/+7)
When mda_size is larger than io_memory_size, reading the entire mda fails unless the previous read of the label has been invalidated.
* pvck: improve mda_offset mda_size choices (David Teigland, 2020-03-03, 1 file, -17/+182)
Attempt to calculate an offset or size if only one value was specified in the settings. Use header values when available.
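A hypothetical invocation illustrating such settings (the --settings syntax is assumed from pvck(8); the offset and size values here are made up, and with this change one of them could be derived from the header or from the other value):

$ pvck --dump metadata --settings "mda_offset=4096 mda_size=1044480" /dev/sdb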
* pvck: print longer command description (David Teigland, 2020-03-03, 1 file, -8/+11)
* pvck: ensure text lines are terminated (David Teigland, 2020-03-03, 1 file, -7/+7)
* hints: free hint list in error exit path (David Teigland, 2020-03-03, 3 files, -12/+16)
* man: lvmcache raid1 references (Jonathan Brassow, 2020-02-27, 1 file, -4/+4)
* tests: validate vdo slab_size (Zdenek Kabelac, 2020-02-26, 3 files, -5/+10)
New vdoformat can print this size, so check that we pass a proper bit count matching the preset value.
* vdo: fix slab size bits calculation (Zdenek Kabelac, 2020-02-25, 1 file, -1/+1)
When formatting a VDO volume, the calculated number of bits for the 'vdoformat --slab-bits' parameter was shifted by 2 bits (the calculation made a 2MiB vdo_slab_size_mb value appear as if the user had specified only 512KiB). Fixed by properly converting the internal size_mb value to KiB.
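A rough illustration of the conversion involved, assuming --slab-bits expresses the slab size as a power-of-two count of 4 KiB blocks (an assumption, not stated above), with made-up names and sizes:

$ lvcreate --type vdo -n vdo0 -L 10G -V 100G \
      --config 'allocation/vdo_slab_size_mb=2048' vg
# 2048 MiB = 524288 four-KiB blocks = 2^19, so vdoformat should be passed
# --slab-bits 19; the bug above produced a value 2 bits too small, as if
# only a quarter of the configured slab size had been requested.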
* writecache: check watermark value (David Teigland, 2020-02-25, 1 file, -0/+4)
* writecache: allow removing wcorig lv (David Teigland, 2020-02-21, 1 file, -1/+1)
like removing corig
* writecache: fix watermark error message (David Teigland, 2020-02-21, 1 file, -1/+1)
* writecache: working real dm uuid suffix for wcorig lv (David Teigland, 2020-02-20, 6 files, -10/+17)
* writecache: drop real dm suffix (David Teigland, 2020-02-17, 3 files, -2/+37)
fixes the problem of adding writecache to an active LV
* thin: don't use writecache for poolmetadata (David Teigland, 2020-02-13, 1 file, -1/+2)
* writecache: check if cachevol is writable (David Teigland, 2020-02-11, 1 file, -0/+5)
before trying to initialize it (since wipe_lv does not return an error if it fails to write).
* cachevol: stop dm errors with uncaching cache with cachevol (Zdenek Kabelac, 2020-02-11, 2 files, -7/+8)
Fix the annoying kernel message reported while a cachevol was being removed:

  device-mapper: cache: 253:2: metadata operation 'dm_cache_commit' failed: error = -5

It happened via a confusing variable, so switch the variable to the commonly used '_size', which holds a value in sector units, and avoid scaling it as an extent length by the VG extent size when placing the 'error' target on the removal path.

The patch shouldn't have an impact on actual user data, since at this moment of removal all data should have already been flushed to the origin device.
* post-release (Marian Csontos, 2020-02-11, 4 files, -2/+8)
* pre-release [v2_03_08] (Marian Csontos, 2020-02-11, 4 files, -4/+8)
* vdo: fix vdoformat when -V is specified (Zdenek Kabelac, 2020-02-10, 1 file, -8/+8)
The previous patch improved reading of the pipe when lvm2 was looking for the default logical size, but we clearly must read the pipe also for the -V case, when the logical size is already defined.
* writecache: skip zeroing in test mode (David Teigland, 2020-02-07, 1 file, -0/+3)
* writecache: check for invalid cachevol (David Teigland, 2020-02-07, 1 file, -0/+5)
* writecache: fix return value (David Teigland, 2020-02-07, 1 file, -4/+4)
* raid: better place for blocking reshapes (Zdenek Kabelac, 2020-02-07, 1 file, -6/+7)
The place could still be better, blocking only the particular reshape operations which ATM cause kernel problems. We check if the new number of images is higher, and prevent the conversion if the volume is in use (i.e. a thin-pool's data LV).
* writecache: prevent snapshots (David Teigland, 2020-02-06, 1 file, -1/+4)
There appear to be problems with taking a snapshot of an LV with a writecache, so block it until that is understood or fixed.
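For reference, a sketch (hypothetical names) of the sequence that is now rejected while a writecache is attached:

$ lvconvert --type writecache --cachevol fast vg/main
$ lvcreate -s -n mainsnap -L1G vg/main   # refused while vg/main has a writecache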
* writecache: fix splitcache when origin is raid (David Teigland, 2020-02-04, 2 files, -4/+26)
* WHATS_NEW: update (Zdenek Kabelac, 2020-02-04, 1 file, -0/+4)
* generate: remake (Zdenek Kabelac, 2020-02-04, 1 file, -27/+82)
Regen man page.
* lv_manip: add extra check for existing origin_lv (Zdenek Kabelac, 2020-02-04, 1 file, -1/+2)
clang: it's a supposedly impossible path to hit, as we should always have origin_lv defined when running this path, but adding protection isn't a big issue and makes this obvious to the analyzer.
* raid: add internal error for no segment (Zdenek Kabelac, 2020-02-04, 1 file, -0/+5)
clang: capture internal error when data_seg would not be defined (invalid LV with no areas).
* lv_manip: add error handling for _reserve_area (Zdenek Kabelac, 2020-02-04, 1 file, -10/+19)
Since _reserve_area() may fail due to an allocation failure, add support to report this already reported failure upward.

FIXME: it's log_error() without causing direct command failure.
* command: validate reporting of previous argument (Zdenek Kabelac, 2020-02-04, 1 file, -8/+8)
When reporting a parsing error, report the 'previous' argument only when there is one.
* dmeventd: nicer error path for reading pipe (Zdenek Kabelac, 2020-02-04, 2 files, -35/+39)
When _daemon_read()/_client_read() fails during the read, ensure memory allocated within the function is also released here (so the caller does not need to care). Also improve code readability a bit and use more similar code for the same functionality.
* lvmlockctl: use inline initializers (Zdenek Kabelac, 2020-02-04, 1 file, -4/+3)
clang: ensure r_name[] is defined in all possible paths.
* lvmlockctl: ensure result value is always defined (Zdenek Kabelac, 2020-02-04, 1 file, -1/+3)
Ensure passed pointer gets predefined value (instead of random stack value).
* lvmlockd: move eval of ENOENT (Zdenek Kabelac, 2020-02-04, 1 file, -5/+2)
To avoid logging 'errors' for no real error state (ENOENT), move this evaluation upward in the code.
* cov: check error code from mutex init (Zdenek Kabelac, 2020-02-04, 1 file, -1/+2)