path: root/tools/lvcreate.c
Commit history (most recent first); each entry lists the commit subject, author, date, and line changes.
* vdo: fix and enhance vdo constraint checking (Zdenek Kabelac, 2023-01-16, 1 file, -2/+6)
  Enhance VDO constraint checking so it also handles changes of active VDO LVs,
  where only the added difference is now considered. The reported informational
  message about used memory was also improved to list only the RAM-consuming blocks.
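  An illustrative change to an active VDO LV of the kind the enhanced check now
  covers, where only the size increment is validated against available memory
  (names and sizes are hypothetical):

    $ lvextend -L+2T vg/vpool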
* lvcreate: fix error path return values (Zdenek Kabelac, 2022-11-08, 1 file, -3/+3)
  Return a failing error code on the error return path, as 'return 0' in this
  case was reporting success.
* gcc: eliminate warnings (Zdenek Kabelac, 2022-09-07, 1 file, -1/+1)
  Gcc started to show a new warning; although the case is unlikely to be hit,
  initialize the variables to 0.
* vdo: enhance lvcreate validation (Zdenek Kabelac, 2022-07-11, 1 file, -8/+40)
  When creating a VDO pool based on % values, lvm2 is now more clever and avoids
  creating 'unsupportable' sizes of physical backend volumes, as 16TiB is the
  maximum size supported by the VDO target (also limited by the maximum of 8192
  supportable slabs, depending on slab size). If the requested virtual size
  approaches the maximum supported size of 4PiB, the header size is switched to 0.
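  An illustrative %-based creation that the new validation would cap at the
  16TiB physical limit (hypothetical names; assumes a VG with more than 16TiB
  of free space):

    $ lvcreate --type vdo -l 100%FREE -V 50T -n vlv vg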
* vdo: check vdo memory constraints (Zdenek Kabelac, 2022-07-11, 1 file, -0/+4)
  Add a function to check for available memory for a particular VDO
  configuration, to avoid unnecessary machine swapping for configs that will
  not fit into memory (possibly in a locked section).
  The formula tries to estimate the RAM size the machine can use, including
  swapping, for the kernel target, while still leaving some amount of usable
  RAM. The estimation is based on the documented RAM usage of the VDO target.
  If /proc/meminfo is theoretically unavailable, try the 'sysinfo()' function;
  however, this gives only free RAM, without knowledge of how much RAM could
  eventually be swapped.
  TODO: move _get_memory_info() into a generic lvm2 API function usable by
  other targets with non-trivial memory requirements.
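  A minimal sketch of the inputs such a check draws on (the actual estimation
  formula in lvm2 follows the documented VDO target RAM usage and is not
  reproduced here):

    # fields available in /proc/meminfo for estimating usable RAM plus swap;
    # sysinfo(2) is the fallback when this file cannot be read
    $ grep -E '^(MemTotal|MemAvailable|SwapTotal|SwapFree):' /proc/meminfo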
* vdo: support --vdosettings (Zdenek Kabelac, 2022-05-03, 1 file, -14/+20)
  Allow the use of --vdosettings with lvcreate, lvconvert and lvchange. Support
  settings currently configurable only via lvm.conf. With lvchange we require
  an inactive LV for the changes to be applied.
  The setting block_map_era_length has the supported alias block_map_period.
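  Illustrative usage (names and sizes are hypothetical; block_map_era_length
  and its alias are named in the message above, the rest is assumed):

    $ lvcreate --vdo -L 10G -V 100G --vdosettings 'block_map_era_length=8192' vg/vpool -n vlv
    $ lvchange --vdosettings 'block_map_period=8192' vg/vpool    # LV must be inactive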
* lvcreate: code move (Zdenek Kabelac, 2022-01-26, 1 file, -26/+22)
* lvcreate: cachesettings works also with writecache (Zdenek Kabelac, 2022-01-26, 1 file, -2/+2)
* lvcreate: fix crash for unspecified LV name for writecache (Zdenek Kabelac, 2022-01-26, 1 file, -1/+5)
  Fix an application crash when creating a writecached LV with an 'automatic' name.
* lvcreate: include recent options (David Teigland, 2021-12-13, 1 file, -0/+4)
  The permitted option list in lvcreate has not kept up with command-lines.in.
* cleanup: use first parameter uint (Zdenek Kabelac, 2021-09-27, 1 file, -1/+1)
  Easier with struct zeroing and a matching assignment of type uint.
* vdo: support vdo_pool_header_size (Zdenek Kabelac, 2021-06-28, 1 file, -1/+1)
  Add a profilable, configurable setting for the vdo-pool header size, used as
  'extra' empty space at the front and end of the vdo-pool device, to avoid
  having a disk in the system that may carry the same data as the real VDO LV.
  For some conversion cases, however, we may need to allow using a '0' header
  size.
  TODO: in this case we may eventually avoid adding the 'linear' mapping layer
  in the future, but this requires further modification across the lvm code base.
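  A sketch of how the setting could be applied through a profile (the profile
  name and the placement under the allocation section are assumptions; the
  setting name comes from the commit above):

    # hypothetical /etc/lvm/profile/vdoconv.profile
    allocation {
        vdo_pool_header_size = 0
    }
    $ lvcreate --vdo -L 10G -V 100G --metadataprofile vdoconv vg -n vlv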
* Add metadata-based autoactivation property for VG and LV (David Teigland, 2021-04-07, 1 file, -0/+3)
  The autoactivation property can be specified in lvcreate or vgcreate for new
  LVs/VGs, and the property can be changed by lvchange or vgchange for existing
  LVs/VGs.
  --setautoactivation y|n enables|disables autoactivation of a VG or LV.
  Autoactivation is enabled by default, which is consistent with past behavior.
  The disabled state is stored as a new flag in the VG metadata, and the
  absence of the flag allows autoactivation.
  If autoactivation is disabled for the VG, then no LVs in the VG will be
  autoactivated (the LV autoactivation property will have no effect). When
  autoactivation is enabled for the VG, autoactivation can be controlled on
  individual LVs.
  The state of this property can be reported for LVs/VGs using the
  "-o autoactivation" option in lvs/vgs commands, which will report "enabled",
  or "" for the disabled state.
  Previous versions of lvm do not recognize this property. Since autoactivation
  is enabled by default, the disabled setting will have no effect in older lvm
  versions. If the VG is modified by older lvm versions, the disabled state
  will also be dropped from the metadata.
  The autoactivation property is an alternative to using the lvm.conf
  auto_activation_volume_list, which is still applied to VGs/LVs in addition to
  the new property. If VG or LV autoactivation is disabled either in metadata
  or in auto_activation_volume_list, it will not be autoactivated.
  An autoactivation command will silently skip activating an LV when the
  autoactivation property is disabled.
  To determine the effective autoactivation behavior for a specific LV,
  multiple settings would need to be checked: the VG autoactivation property,
  the LV autoactivation property, and the auto_activation_volume_list. The
  "activation skip" property would also be relevant, since it applies to both
  normal and auto activation.
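  Illustrative commands built from the options named above (VG/LV names are
  hypothetical):

    $ vgchange --setautoactivation n vg       # disable autoactivation for the whole VG
    $ lvchange --setautoactivation y vg/lv    # per-LV control when the VG allows it
    $ vgs -o name,autoactivation vg
    $ lvs -o name,autoactivation vg/lv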
* cache: reuse code for metadata min_max (Zdenek Kabelac, 2021-02-01, 1 file, -0/+1)
  Use update_pool_metadata_min_max(), which is shared with thin-pool metadata
  min-max updating. Gives improved messages when converting volumes to metadata.
* thin: improve 16g support for thin pool metadata (Zdenek Kabelac, 2021-02-01, 1 file, -0/+2)
  Initial support for thin pools used a slightly smaller maximum size of
  15.81GiB for thin-pool metadata. However, the real limit later settled at
  15.88GiB (a difference of ~64MiB, i.e. 16448 4K blocks). lvm2 could not
  simply increase the size, as it had been using hard cropping of the loaded
  metadata device to avoid the kernel printing warnings when the size was
  bigger (e.g. due to a bigger extent_size).
  This patch adds the new configurable lvm.conf setting
  allocation/thin_pool_crop_metadata, which defaults to 0: no cropping of
  metadata beyond 15.81GiB. Only users with these sizes of metadata will be
  affected.
  Without cropping, lvm2 now limits the metadata allocation size to 15.88GiB.
  Any space beyond that is currently not used by the thin-pool target, even if
  a bigger LV is used for metadata via lvconvert, or is allocated bigger
  because of a too-large extent size.
  With cropping enabled (=1), lvm2 preserves the old 15.81GiB limitation and
  should allow working in environments with older lvm2 tools (i.e. older
  distributions).
  Thin-pool metadata bigger than 15.81GiB now uses the CROP_METADATA flag
  within lvm2 metadata, so older lvm2 recognizes an incompatible thin pool and
  cannot activate such a pool! Users should use the uncropped version, as it
  does not suffer from various mismatches between thin_repair results and the
  allocated metadata LV (the thin_repair limit is 15.88GiB). Users should use
  cropping only when really needed!
  The patch also better handles resizing of thin-pool metadata and prevents
  resizing beyond the usable size of 15.88GiB. Resizing beyond 15.81GiB
  automatically switches the pool to the no-crop version; even with existing
  bigger thin-pool metadata, the command 'lvextend -l+1 vg/pool_tmeta' makes
  the change.
  The patch gives better control over the 'converted' metadata LV and reports
  a less confusing message during conversion.
  The patch set also moves the code for updating min/max into pool_manip.c for
  better sharing with the cache_pool code.
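  The configuration and resize behaviour described above, spelled out (the
  setting name and command are quoted from the message; the pool name is
  hypothetical):

    # lvm.conf: keep the old 15.81GiB hard crop only when compatibility
    # with older lvm2 tools is required
    allocation {
        thin_pool_crop_metadata = 0
    }
    # growing cropped metadata past 15.81GiB switches the pool to the
    # uncropped 15.88GiB limit
    $ lvextend -l+1 vg/pool_tmeta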
* cleanup: use force_t enums instead of ints (Zdenek Kabelac, 2020-09-01, 1 file, -2/+2)
* lvcreate: new cache or writecache lv with single command (David Teigland, 2020-06-16, 1 file, -1/+152)
  To create a new cache or writecache LV with a single command:

    lvcreate --type cache|writecache -n Name -L Size --cachedevice PVfast VG [PVslow ...]

  - A new main linear|striped LV is created as usual, using the specified
    -n Name and -L Size, and using the optionally specified PVslow devices.
  - Then, a new cachevol LV is created internally, using PVfast specified by
    the cachedevice option.
  - Then, the cachevol is attached to the main LV, converting the main LV to
    type cache|writecache.

  Include --cachesize Size to specify the size of the cache|writecache to
  create from the specified --cachedevice PVs; otherwise the entire cachedevice
  PV is used. The --cachedevice option can be repeated to create the cache from
  multiple devices, or the cachedevice option can contain a tag name specifying
  a set of PVs to allocate the cache from.

  To create a new cache or writecache LV with a single command using an
  existing cachevol LV:

    lvcreate --type cache|writecache -n Name -L Size --cachevol LVfast VG [PVslow ...]

  - A new main linear|striped LV is created as usual, using the specified
    -n Name and -L Size, and using the optionally specified PVslow devices.
  - Then, the cachevol LVfast is attached to the main LV, converting the main
    LV to type cache|writecache.

  In cases where more advanced types (for the main LV or cachevol LV) are
  needed, they should be created independently and then combined with
  lvconvert.

  Example: a user creates a new VG with one slow device and one fast device:

    $ vgcreate vg /dev/slow1 /dev/fast1

  then creates a new 8G main LV on /dev/slow1 that uses all of /dev/fast1 as a
  writecache:

    $ lvcreate --type writecache --cachedevice /dev/fast1 -n main -L 8G vg /dev/slow1

  Example: a user creates a new VG with two slow devs and two fast devs:

    $ vgcreate vg /dev/slow1 /dev/slow2 /dev/fast1 /dev/fast2

  then creates a new 8G main LV on /dev/slow1 and /dev/slow2 that uses all of
  /dev/fast1 and /dev/fast2 as a writecache:

    $ lvcreate --type writecache --cachedevice /dev/fast1 --cachedevice /dev/fast2 -n main -L 8G vg /dev/slow1 /dev/slow2

  Example: a user has several slow devices and several fast devices in their
  VG; the slow devs have tag @slow, the fast devs have tag @fast. The user
  creates a new 8G main LV on the slow devs with a 2G writecache on the fast
  devs:

    $ lvcreate --type writecache -n main -L 8G --cachedevice @fast --cachesize 2G vg @slow
* Allow dm-integrity to be used for raid images (David Teigland, 2020-04-15, 1 file, -1/+14)
  dm-integrity stores checksums of the data written to an LV, and returns an
  error if data read from the LV does not match the previously saved checksum.
  When used on raid images, dm-raid will correct the error by reading the block
  from another image, and the device user sees no error. The integrity metadata
  (checksums) are stored on an internal LV allocated by lvm for each linear
  image. The internal LV is allocated on the same PV as the image.

  Create a raid LV with an integrity layer over each raid image (for raid
  levels 1,4,5,6,10):

    lvcreate --type raidN --raidintegrity y [options]

  Add an integrity layer to images of an existing raid LV:

    lvconvert --raidintegrity y LV

  Remove the integrity layer from images of a raid LV:

    lvconvert --raidintegrity n LV

  Settings: use --raidintegritymode journal|bitmap (journal is the default) to
  configure the method used by dm-integrity to ensure crash consistency.

  Initialization: when integrity is added to an LV, the kernel needs to
  initialize the integrity metadata/checksums for all blocks in the LV. The
  data corruption checking performed by dm-integrity will only operate on
  areas of the LV that are already initialized. The progress of integrity
  initialization is reported by the "syncpercent" LV reporting field (and
  under the Cpy%Sync lvs column).

  Example: create a raid1 LV with integrity:

    $ lvcreate --type raid1 -m1 --raidintegrity y -n rr -L1G foo
      Creating integrity metadata LV rr_rimage_0_imeta with size 12.00 MiB.
      Logical volume "rr_rimage_0_imeta" created.
      Creating integrity metadata LV rr_rimage_1_imeta with size 12.00 MiB.
      Logical volume "rr_rimage_1_imeta" created.
      Logical volume "rr" created.
    $ lvs -a foo
      LV                  VG  Attr       LSize  Origin              Cpy%Sync
      rr                  foo rwi-a-r---  1.00g                     4.93
      [rr_rimage_0]       foo gwi-aor---  1.00g [rr_rimage_0_iorig] 41.02
      [rr_rimage_0_imeta] foo ewi-ao---- 12.00m
      [rr_rimage_0_iorig] foo -wi-ao----  1.00g
      [rr_rimage_1]       foo gwi-aor---  1.00g [rr_rimage_1_iorig] 39.45
      [rr_rimage_1_imeta] foo ewi-ao---- 12.00m
      [rr_rimage_1_iorig] foo -wi-ao----  1.00g
      [rr_rmeta_0]        foo ewi-aor---  4.00m
      [rr_rmeta_1]        foo ewi-aor---  4.00m
* vdo: avoid running initialization of cache pool vars (Zdenek Kabelac, 2020-01-13, 1 file, -7/+9)
  Since a VDO pool is also a pool, the old if() case missed this and
  unnecessarily executed initialization of cache pool variables. This was
  usually harmless when using 'smaller' sizes of VDO pools, but for big VDO
  pool sizes we were reporting senseless messages about big cache chunk sizes.
* lvcreate: ensure striped raid region size is at least stripe size (Heinz Mauelshagen, 2019-11-26, 1 file, -0/+7)
  The kernel MD runtime requires the region size to be larger than the stripe
  size on striped raid layouts, thus the dm-raid target's constructor rejects
  such a request. This causes e.g. 'lvcreate --type raid10 -i3 -I4096 -R2048
  -n lv vg' to fail. Avoid failing late in the kernel by enforcing the region
  size to be larger than or equal to the stripe size.
  Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1698225
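  Side by side, the rejected and an accepted variant of the command quoted
  above (names hypothetical):

    # fails: region size (2048) smaller than stripe size (4096)
    $ lvcreate --type raid10 -i3 -I4096 -R2048 -n lv vg
    # passes the enforced constraint: region size >= stripe size
    $ lvcreate --type raid10 -i3 -I4096 -R4096 -n lv vg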
* vdo: complete matching with thin syntax (Zdenek Kabelac, 2019-01-28, 1 file, -1/+2)
  Just like the thin-pool syntax we support:
    lvcreate --thinpool new_tpoolname -L Size vg
  add the same support logic for vdo-pool:
    lvcreate --vdopool new_vpoolname -L Size vg
  Also move the description of the syntax below thin-pool, so it is correctly
  ordered in the generated man page.
* lv_manip: better work with PERCENT_VG modifier (Zdenek Kabelac, 2019-01-21, 1 file, -0/+6)
  When using 'lvcreate -l100%VG' and there is a big disproportion between the
  real available space and the requested setting, automatically fall back to
  100%FREE.
  The difference shows when the VG is big and most of its space is already
  allocated: the requested 100%VG can end up (correctly, by the spec for the
  % modifier) as an LV with the size of 1%VG. Usually this is not a big
  problem, but in some cases, like cache-pool allocation, it can make a big
  difference for chunk-size selection.
  With this patch the behaviour more closely matches common-sense logic,
  without needing to reiterate too-big changes in the lvm2 core ATM.
  TODO: in the future there should be an allocator solving all allocations in
  a single call.
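  A worked instance of the disproportion (numbers hypothetical): in a 100TiB VG
  with only 1TiB unallocated, '-l100%VG' requests extents equal to the whole VG
  but the allocator can only deliver the free 1TiB, i.e. 1%VG. With the
  fallback, the request behaves as:

    $ lvcreate -l100%FREE -n lv bigvg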
* lvcreate: vdo support (Zdenek Kabelac, 2018-07-09, 1 file, -6/+70)
  Supports the basic form:
    lvcreate --vdo -LXXXG -VYYYG vg/vdoname -n lvname
  Allows creating a basic VDO pool volume and a virtual VDO volume.
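  A concrete instantiation of the template above (sizes and names are
  hypothetical):

    $ lvcreate --vdo -L 10G -V 100G vg/vpool -n vlv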
* Remove unused clvm variations for active LVs (David Teigland, 2018-06-07, 1 file, -2/+2)
  Different flavors of activate_lv() and lv_is_active(), which are meaningful
  in a clustered VG, can be eliminated and replaced with whatever that flavor
  already falls back to in a local VG.
  e.g. lv_is_active_exclusive_locally() is distinct from lv_is_active() in a
  clustered VG, but in a local VG they are equivalent. So, all instances of
  the variant are replaced with the basic local equivalent.
  For local VGs, the same behavior remains as before. For shared VGs, lvmlockd
  was written with the explicit requirement of local behavior from these
  functions (lvmlockd requires locking_type 1), so the behavior in shared VGs
  also remains the same.
* lvmlockd: primarily use vg_is_shared (David Teigland, 2018-06-01, 1 file, -2/+2)
  Use it to check whether a VG uses an lvmlockd lock_type, instead of the
  equivalent but longer is_lockd_type.
* lvmlockd: enable lvcreate -H -L LV (David Teigland, 2018-05-31, 1 file, -2/+9)
  Allow this command in a shared VG; it had previously been disallowed.
* lvmlockd: enable lvcreate of new LV plus existing cache pool (David Teigland, 2018-05-30, 1 file, -2/+1)
  In this command, lvcreate creates a new LV and then combines it with an
  existing cache pool, producing a cache LV. This command was previously not
  allowed in a shared VG.
* lvmlockd: enable creation of cache pool with lvcreate (David Teigland, 2018-05-30, 1 file, -2/+1)
  Previously, cache pools needed to be created with lvconvert.
* lvmlockd: enable lvcreate of thin pool and thin lv in one command (David Teigland, 2018-05-30, 1 file, -2/+1)
  Previously, thin pools and thin LVs needed to be created with separate
  commands; now the combined command is permitted.
* lvcreate: fix activation of cached LV (Zdenek Kabelac, 2018-03-06, 1 file, -0/+2)
  Since the LV being cached can already be a stacked LV, proper activation
  needs to use the lock-holding LV.
* pool: drop created spare on error path (Zdenek Kabelac, 2017-10-30, 1 file, -0/+7)
  When thin/cache pool creation fails and the command created a _pmspare
  volume, that volume is now removed on the error path.
* lvcreate: error message with dot (Heinz Mauelshagen, 2017-10-26, 1 file, -1/+1)
* lvcreate: skip checking for name restriction for caching (Zdenek Kabelac, 2017-10-23, 1 file, -1/+1)
  lvcreate supports a 'conversion' when caching an LV. This normally worked
  fine; however, when the passed LV was a thin-pool's data LV with the suffix
  _tdata, we failed too early. As the easiest fix, drop the validation of the
  name when a caching type is selected: the name check will happen later, once
  the VG is opened again, and will properly detect whether an LV with the
  protected name already exists and can be converted, or reject the operation
  as ambiguous, requiring the user to specify --type cache | --type cache-pool.
* lvcreate: use cmd defs to deny unsupported lockd cases (David Teigland, 2017-09-14, 1 file, -1/+11)
  In a shared VG, lvconvert must be used to create thin pools and cache pools,
  not the lvcreate variants of those commands. Deny these cases early in
  lvcreate using the new command defs. Denying these cases deeper in the code
  was missing some cleanup of the partially completed command.
* tidy: prefer not using else after return (Zdenek Kabelac, 2017-07-20, 1 file, -1/+3)
  clang-tidy: avoiding 'else' after return gives more readable code and also
  saves an indentation level.
* cache: lvcreate --cachepool checks for cache pool (Zdenek Kabelac, 2017-06-09, 1 file, -0/+7)
  The code path missed validation of the lvcreate --cachepool argument. If a
  non-cache-pool LV was passed in, the code still continued further work and
  failed later with an internal error. Validate this condition in the right
  place now.
* lvcreate: Fix last commit for virtual sizes (Alasdair G Kergon, 2017-05-12, 1 file, -1/+1)
  Don't stop when extents is 0 if a virtual size parameter was supplied instead.
* lvcreate: Fix mirror percentage size calculations (Alasdair G Kergon, 2017-05-12, 1 file, -7/+30)
  Trap cases where the percentage calculation currently leads to an empty LV
  and the message:
    Internal error: Unable to create new logical volume with no extents
  Additionally, convert the calculated number of extents from physical to
  logical when creating a mirror using a percentage that is based on Physical
  Extents. Otherwise a command like 'lvcreate -m3 -l80%FREE' can never leave
  any free space.
  This brings the behaviour closer to that of lvresize. (A further patch is
  needed to cover all the raid types.)
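  A worked instance of the conversion described above (numbers hypothetical):

    # -m3 means 4 mirror images in total; with 1000 free PE, 80%FREE gives a
    # physical budget of 800 PE, so the LV gets 800 / 4 = 200 logical extents
    # and the remaining 200 PE (20%) stay genuinely free
    $ lvcreate -m3 -l80%FREE -n lv vg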
* cache: enable usage of --cachemetadataformat (Zdenek Kabelac, 2017-03-10, 1 file, -0/+2)
  lvcreate and lvconvert may select the cache metadata format when caching an
  LV. By default lvm2 picks the best available format.
* pool: rework handling of passed args (Zdenek Kabelac, 2017-03-10, 1 file, -6/+17)
  As we can now properly recognize all parameters for pool creation, we may
  drop the PASS_ARG_ defines and rely on '_UNSELECTED' or 0 entries as being
  those without user-given args.
  When settings are not given on the command line, the 'update' function fills
  them in from profiles or configuration. For this the 'profile' arg needed to
  be passed around, and since 'VG' itself is not needed, it has all been
  replaced with 'cmd, profile, extents_size' args.
* lvcreate: respecting profile settings (Zdenek Kabelac, 2017-03-10, 1 file, -3/+0)
* cache: get and set cache params (Zdenek Kabelac, 2017-03-10, 1 file, -0/+1)
* thin: add new ZERO/DISCARDS_UNSELECTED (Zdenek Kabelac, 2017-03-10, 1 file, -1/+1)
  To more easily distinguish the unselected state from the selected '0' state,
  add a new 'THIN_ZERO_UNSELECTED' enum. The same applies to
  THIN_DISCARDS_UNSELECTED. With those we no longer need to use PASS_ARG_ZERO
  or PASS_ARG_DISCARDS.
* lvcreate: avoid rejecting --metadataprofile (Zdenek Kabelac, 2017-03-10, 1 file, -0/+1)
  Users have likely normally used the 'shortcut' --profile option, which is
  (for lvcreate) decoded as metadataprofile; however, the full option was
  being rejected.
* lvcreate: fix "striped" limit (Heinz Mauelshagen, 2017-03-10, 1 file, -1/+3)
  Fix a regression limiting the number of stripes to 8. Raise it back to 128
  as before.
  Resolves: rhbz1389546
* args: use arg parsing function for region size (David Teigland, 2017-02-13, 1 file, -14/+0)
  Consolidate the validation of the region size arg in a new arg parsing
  function.
* cleanup: add some dots and use display_lvname (Zdenek Kabelac, 2016-11-25, 1 file, -1/+1)
  Just some more VG/LV printing.
* debug: more stacktrace corrections (Zdenek Kabelac, 2016-11-25, 1 file, -1/+1)
  Continue the previous patch, dropping some unneeded stack traces after
  printed log_error/warn messages.
* lvchange/vgchange/lvconvert: prevent raid4 creation/activation/conversion on non-supporting raid targets (Heinz Mauelshagen, 2016-10-27, 1 file, -0/+6)
  Check for dm-raid target versions with the non-standard raid4 mapping that
  expects the dedicated parity device in the last rather than the first slot,
  and prohibit creating, activating or converting to such LVs from
  striped/raid0* or vice versa, in order to avoid data corruption.
  Add related tests to lvconvert-raid-takeover.sh.
  Resolves: rhbz1388962
* raid10: Fix #stripes in lvcreate message when too many (Alasdair G Kergon, 2016-08-30, 1 file, -1/+1)