Commit message | Author | Date | Files | Lines
* apply obtain_device_list_from_udev to all libudev usage [2018-06-01-stable] | David Teigland | 2019-02-05 | 1 | -0/+6
  udev_dev_is_md_component and udev_dev_is_mpath_component are not used for
  obtaining the device list, but they still use libudev for device info. When
  there are problems with udev, these functions can get stuck. So, use the
  existing obtain_device_list_from_udev config setting to also control whether
  these "is component" functions are used, which gives us a way to avoid using
  libudev entirely when it's causing problems.
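  For reference, obtain_device_list_from_udev lives in the devices section of
  lvm.conf; a minimal, illustrative snippet for turning it off (and, with this
  change, also skipping the udev-based "is component" checks):

      # /etc/lvm/lvm.conf
      devices {
          # 0 = do not use libudev for the device list; after this commit it
          # also disables the udev-backed md/mpath component checks
          obtain_device_list_from_udev = 0
      }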
* lvmlockd: fix make lockstart wait | David Teigland | 2019-01-31 | 1 | -1/+1
  Fix the previous "make lockstart wait" change when building without lvmlockd.
* lvmlockd: make lockstart wait for existing start | David Teigland | 2019-01-31 | 5 | -9/+21
  If there are two independent scripts doing:
      vgchange --lockstart vg
      lvchange -ay vg/lv
  The first vgchange to do the lockstart will wait for the lockstart to
  complete before returning. The second vgchange to do the lockstart will see
  that the start is already in progress (from the first) and will do nothing.
  This means the second does not wait for any lockstart to complete, and moves
  on to the lvchange, which may find the lockspace still starting and fail.
  To fix this, make the vgchange lockstart command wait for any lockstart in
  progress to complete.
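  In effect, the commands behave like this after the fix (the commands are the
  ones quoted in the message; the comments describe the new behavior):

      vgchange --lockstart vg   # first caller: starts the lockspace and waits for it
      vgchange --lockstart vg   # second caller: now also waits for the in-progress start
      lvchange -ay vg/lv        # runs once the lockspace is ready, so it no longer fails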
* spec: Use python3 setuptools with python3 | Marian Csontos | 2019-01-03 | 1 | -1/+1
* lvmanip: uninitialized members in struct pv_list (#10) | Ming-Hung Tsai | 2018-12-19 | 2 | -1/+2
  Scenario: Given an existing LV `lvol0`, I want to create another LV on the
  PVs used by `lvol0`. I use `build_parallel_areas_from_lv()` to obtain the
  `pv_list` of each segment. However, the returned `pv_list` is not properly
  initialized, which causes a segfault in subsequent operations.
  (cherry picked from commit 859feb81e5b61ac2109b1d7850844ccf1ce3e5bf)
  (cherry picked from commit 219ba4f54a462c175f5e9acaa0558afac94d5ff7)
  Conflicts: WHATS_NEW
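  The bug class is easier to see in a stripped-down form; this sketch uses a
  made-up struct and plain malloc(), not the lvm2 types or allocator:

      #include <stdio.h>
      #include <stdlib.h>
      #include <string.h>

      struct item_list {
          struct item_list *next;
          void *item;
          void *extra;   /* the member nobody remembered to set */
      };

      static struct item_list *alloc_item_list(void)
      {
          struct item_list *il = malloc(sizeof(*il));
          if (!il)
              return NULL;
          /* malloc() returns uninitialized memory; without this memset,
           * later code walking ->extra dereferences garbage and can crash */
          memset(il, 0, sizeof(*il));
          return il;
      }

      int main(void)
      {
          struct item_list *il = alloc_item_list();
          printf("extra = %p\n", il ? il->extra : NULL);
          free(il);
          return 0;
      }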
* post-release | Marian Csontos | 2018-12-07 | 4 | -2/+8
* pre-release [v2_02_183] | Marian Csontos | 2018-12-07 | 4 | -6/+9
* build: make generate | Marian Csontos | 2018-12-07 | 1 | -0/+5
* libdm: do not add params for resume and remove | Zdenek Kabelac | 2018-12-06 | 2 | -0/+3
  DM_DEVICE_CREATE with a table performs several ioctl operations, but only
  some of them take parameters. Since _create_and_load_v4() reused the already
  existing dm task from DM_DEVICE_RELOAD, it also kept passing its table
  parameters to the DM_DEVICE_RESUME ioctl - but this ioctl is supposed to
  take no argument, so the passed data were not wiped - and since the kernel
  returns a buffer and shortens dmi->data_size accordingly, anything past the
  returned data size remained uncleared by the zfree() function.
  This is a problem if the user used dm_task_secure_data (i.e. cryptsetup), as
  in that case the binary expects secured data to be erased from main memory
  after use, but they may have been left in place.
  This patch also closes a possible hole in the error path, which reuses the
  same dm task structure for DM_DEVICE_REMOVE.
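  For context, this is the caller-side API the fix protects; a minimal, hedged
  sketch of a secure removal using the public libdevmapper calls (error
  handling trimmed, not the lvm2 code itself):

      #include <libdevmapper.h>

      int remove_device_securely(const char *name)
      {
          struct dm_task *dmt;
          int r = 0;

          if (!(dmt = dm_task_create(DM_DEVICE_REMOVE)))
              return 0;
          if (!dm_task_set_name(dmt, name))
              goto out;
          /* ask libdm to wipe ioctl buffers that may hold sensitive data */
          if (!dm_task_secure_data(dmt))
              goto out;
          r = dm_task_run(dmt);
      out:
          dm_task_destroy(dmt);
          return r;
      }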
* pvscan lvmetad: use udev info to improve md component detection | David Teigland | 2018-12-03 | 5 | -15/+71
  When no md devs are started, pvscan will only scan the start of an md
  component, and if it has a superblock at the end it may not be excluded.
  udev may already have info identifying it as an md component, so use that.
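  A hedged sketch of the kind of libudev lookup involved; the property name is
  the one blkid exports for md members, not necessarily the exact field lvm
  checks:

      #include <sys/types.h>
      #include <libudev.h>
      #include <string.h>

      /* returns 1 if udev already identifies the device as an md component */
      int udev_says_md_component(dev_t devnum)
      {
          struct udev *udev = udev_new();
          struct udev_device *dev;
          const char *fstype;
          int ret = 0;

          if (!udev)
              return 0;
          if ((dev = udev_device_new_from_devnum(udev, 'b', devnum))) {
              fstype = udev_device_get_property_value(dev, "ID_FS_TYPE");
              if (fstype && !strcmp(fstype, "linux_raid_member"))
                  ret = 1;
              udev_device_unref(dev);
          }
          udev_unref(udev);
          return ret;
      }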
* lvmetad: fix disabling in previous commit | David Teigland | 2018-11-30 | 1 | -4/+14
  It broke the case where a connection already exists.
* lvmetad: only disable if repair will do something | David Teigland | 2018-11-30 | 3 | -6/+42
  lvconvert --repair would disable lvmetad at the start of the command. This
  would leave lvmetad disabled even if the command did nothing. Move the step
  to disable lvmetad until later, just before some actual repair is done.
  There are now numerous cases where nothing is actually done and lvmetad is
  not disabled.
* pvscan lvmetad: use full md filter when md 1.0 devices are present | David Teigland | 2018-11-29 | 1 | -0/+19
  Apply the same logic to pvscan/lvmetad that was added to the non-lvmetad
  label_scan in commit 3fd75d1b "scan: use full md filter when md 1.0 devices
  are present":
  Before scanning, check if any of the devs on the system are md 0.90/1.0, and
  if so make the scan read both the start and the end of the device so that
  the components of those md versions can be ignored.
* scan: md metadata version 0.90 is at the end of disk | Peter Rajnoha | 2018-11-29 | 2 | -4/+4
  Commit de28637 "scan: use full md filter when md 1.0 devices are present"
  missed the fact that md superblock version 0.90 also puts metadata at the
  end of the device, so the full md filter needs to be used when either 0.90
  or 1.0 is present.
* WHATS_NEW: sync io | David Teigland | 2018-11-20 | 1 | -0/+1
* bcache: sync io fixes | David Teigland | 2018-11-20 | 1 | -22/+47
  fix lseek error check
  fix read/write error checks
  handle zero return from read and write
  don't return an error for short io
  fix partial read/write loop
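  These are the classic pitfalls of raw read()/write() loops; a generic, hedged
  sketch of a full-read helper that retries EINTR, treats EOF as a short read
  rather than an error, and loops over partial reads (illustrative, not the
  lvm bcache code):

      #include <errno.h>
      #include <unistd.h>

      /* read up to 'len' bytes, looping over short reads; returns bytes read
       * (possibly short at EOF) or -1 on a real error */
      ssize_t read_full(int fd, void *buf, size_t len)
      {
          size_t done = 0;

          while (done < len) {
              ssize_t n = read(fd, (char *)buf + done, len - done);
              if (n < 0) {
                  if (errno == EINTR)
                      continue;       /* interrupted: retry, not an error */
                  return -1;          /* real error */
              }
              if (n == 0)
                  break;              /* EOF: short io, not an error */
              done += (size_t)n;
          }
          return (ssize_t)done;
      }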
* io: use sync io if aio fails | David Teigland | 2018-11-20 | 7 | -4/+74
  io_setup() for aio may fail if a system has reached the aio request limit.
  In this case, fall back to using sync io. Also, lvm's use of aio can be
  disabled entirely with the config setting global/use_aio=0.
  The system limit for aio requests can be seen in /proc/sys/fs/aio-max-nr.
  The current usage of aio requests can be seen in /proc/sys/fs/aio-nr.
  The system limit for aio requests can be increased by setting fs.aio-max-nr
  using sysctl.
  Also add a last-byte limit to the sync io code.
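  The knobs named above, shown as commands (the limit value is only an
  example):

      # current usage and ceiling for kernel aio requests
      cat /proc/sys/fs/aio-nr /proc/sys/fs/aio-max-nr

      # raise the ceiling
      sysctl -w fs.aio-max-nr=1048576

      # or turn aio off in lvm entirely, in lvm.conf:
      #   global { use_aio = 0 }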
* update WHATS_NEW | David Teigland | 2018-11-06 | 1 | -0/+1
* devices: reuse bcache fd when getting block size | David Teigland | 2018-11-06 | 1 | -8/+19
  This avoids an unnecessary open() on the device.
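  A hedged illustration of querying the block size through a descriptor that
  is already open instead of re-opening the device; the ioctl is the standard
  Linux one, not necessarily the call lvm uses:

      #include <sys/ioctl.h>
      #include <linux/fs.h>

      /* logical sector size of the open block device, or -1 on error */
      int device_block_size(int fd)
      {
          int ssize = 0;

          if (ioctl(fd, BLKSSZGET, &ssize) < 0)
              return -1;
          return ssize;
      }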
* dmsetup: fix stats report command output | Bryn M. Reeves | 2018-11-01 | 1 | -7/+3
  Since the stats handle is neither bound nor listed before the attempt to
  call dm_stats_get_nr_regions(), it will always return zero: this prevents
  reporting of any dmstats regions on any device.
  Remove the dm_stats_get_nr_regions() check and instead rely on the correct
  return status from dm_stats_populate(), which only returns 0 in the case
  that there are regions to inspect (and which logs a specific error for all
  other cases).
  Reported-by: Bryan Gurney <bgurney@redhat.com>
* libdm-stats: move no regions warning after dm_stats_list() | Bryn M. Reeves | 2018-11-01 | 1 | -5/+5
  It doesn't make sense to test or warn about the region count until the stats
  handle has been listed: at this point it may or may not contain valid
  information (but is guaranteed to be correct after the list).
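  The ordering both of these commits rely on, sketched with the public
  libdm-stats calls (simplified; "dmstats" is assumed here as the program_id,
  and region reporting is omitted):

      #include <stdint.h>
      #include <libdevmapper.h>

      /* count the dmstats regions on a dm device; returns 0 on failure */
      int count_regions(const char *dm_name, uint64_t *nr_regions)
      {
          struct dm_stats *dms;
          int r = 0;

          if (!(dms = dm_stats_create("dmstats")))
              return 0;
          if (!dm_stats_bind_name(dms, dm_name))
              goto out;
          /* the handle only holds a valid region count after a list */
          if (!dm_stats_list(dms, "dmstats"))
              goto out;
          *nr_regions = dm_stats_get_nr_regions(dms);
          r = 1;
      out:
          dm_stats_destroy(dms);
          return r;
      }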
* post-release | Marian Csontos | 2018-10-30 | 4 | -2/+8
* pre-release [v2_02_182] | Marian Csontos | 2018-10-30 | 4 | -6/+6
* Update WHATS_NEW | Marian Csontos | 2018-10-30 | 1 | -0/+2
* metadata: prevent writing beyond metadata area | David Teigland | 2018-10-29 | 6 | -3/+130
  lvm uses a bcache block size of 128K. A bcache block at the end of the
  metadata area will overlap the PEs from which LVs are allocated. How much
  depends on alignments. When lvm reads and writes one of these bcache blocks
  to update VG metadata, it can also be reading and writing PEs that belong to
  an LV.
  If these overlapping PEs are being written to by the LV user (e.g. a
  filesystem) at the same time that lvm is modifying VG metadata in the
  overlapping bcache block, then the user's updates to the PEs can be lost.
  This patch is a quick hack to prevent lvm from writing past the end of the
  metadata area.
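  The essential guard is a clamp on the write length; a simplified, hedged
  sketch (the 128K figure comes from the message, the function and variable
  names are made up):

      #include <stdint.h>

      /* trim a metadata write so it cannot spill past the end of the
       * metadata area into PEs owned by LVs */
      uint64_t clamp_metadata_write(uint64_t offset, uint64_t len,
                                    uint64_t mda_end)
      {
          if (offset >= mda_end)
              return 0;
          if (offset + len > mda_end)
              len = mda_end - offset;   /* e.g. a 128K bcache block straddling mda_end */
          return len;
      }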
* spec: Fix python and applib interactions | Marian Csontos | 2018-10-29 | 1 | -2/+4
  When python3 is not present, the macro expands to --disable-applib.
* tests: add new test for lvm on md devices | David Teigland | 2018-10-18 | 1 | -0/+87
* scan: enable full md filter when md 1.0 devices are present | David Teigland | 2018-10-18 | 2 | -27/+19
  The previous commit de2863739f2ea17d89d0e442379109f967b5919d "scan: use full
  md filter when md 1.0 devices are present" needs the use_full_md_check flag
  in the md filter, but the cmd struct is not available when the filter is
  run, so that commit wasn't working. Fix this by setting the flag in a global
  variable.
  (This was fixed in the master branch with commit 8eab37593 in which the cmd
  struct was passed to the filters, but it was an intrusive change, so this
  commit is using the less intrusive global variable.)
* scan: use full md filter when md 1.0 devices are present | David Teigland | 2018-10-17 | 6 | -46/+75
  The md filter can operate in two native modes:
  - normal: reads only the start of each device
  - full: reads both the start and end of each device
  md 1.0 devices place the superblock at the end of the device, so components
  of this version will only be identified and excluded when lvm uses the full
  md filter.
  Previously, the full md filter was only used in commands that could write to
  the device. Now, the full md filter is also applied when there is an md 1.0
  device present on the system. This means the 'pvs' command can avoid
  displaying md 1.0 components (at the cost of doubling the i/o to every
  device on the system).
  (The md filter can operate in a third mode, using udev, but this is disabled
  by default because there have been problems with reliability of the info
  returned from udev.)
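  To make the end-of-device point concrete: the md superblock magic is
  0xa92b4efc, and for versions 0.90 and 1.0 the superblock sits near the end
  of the component, so only a scan that reads the tail of the device can see
  it. A rough, illustrative check (a blunt 512-byte-step scan of the last
  128KiB, not lvm's filter code, which computes the exact superblock offsets):

      #include <stdint.h>
      #include <string.h>
      #include <unistd.h>

      #define MD_SB_MAGIC 0xa92b4efcU

      /* crude check: does an md superblock magic appear in the last 128KiB? */
      int md_magic_near_end(int fd, uint64_t dev_size)
      {
          static unsigned char buf[128 * 1024];
          uint64_t start = dev_size > sizeof(buf) ? dev_size - sizeof(buf) : 0;
          ssize_t n = pread(fd, buf, sizeof(buf), (off_t)start);

          for (ssize_t off = 0; n > 0 && off + 4 <= n; off += 512) {
              uint32_t magic;
              memcpy(&magic, buf + off, sizeof(magic));
              if (magic == MD_SB_MAGIC)   /* assumes the common little-endian layout */
                  return 1;
          }
          return 0;
      }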
* lvconvert: fix interim segtype regression on raid6 conversions | Heinz Mauelshagen | 2018-09-10 | 3 | -7/+16
  When converting from striped/raid0/raid0_meta to raid6 with > 2 stripes,
  allow possible direct conversion (to raid6_n_6).
  In the case of 2 stripes, first convert to raid5_n to restripe to at least 3
  data stripes (the raid6 minimum in lvm2) in a second conversion before
  finally converting to raid6_n_6.
  As before, raid6_n_6 can then be converted to any other raid6 layout.
  Enhance lvconvert-raid-takeover.sh to test the 2-stripe conversions to raid6.
  Resolves: rhbz1624038
  (cherry picked from commit e2e30a64ab10602951443dfbd3481bd6b32f5459)
  Conflicts: WHATS_NEW
* lvconvert: avoid superfluous interim raid type | Heinz Mauelshagen | 2018-09-05 | 1 | -5/+4
  When converting striped/raid0*/raid6_n_6 <-> raid4, avoid superfluous
  interim raid5_n layout.
  Related: rhbz1447809
  (cherry picked from commit 22a13043683a5647e8cc4e3aead911e5269ffd2f)
* scripts: add After=rbdmap.service to {lvm2-activation-net,blk-availability}.service | Peter Rajnoha | 2018-09-05 | 3 | -2/+3
  We need to have Ceph RBD devices mapped before use in a stack where LVM is
  on top, so make sure rbdmap.service is called before the generated
  lvm2-activation-net.service. On shutdown, we need to stop blk-availability
  before we stop rbdmap.service.
  Resolves: rhbz1623479
  (cherry picked from commit cb17ef221bdefea3625a22c19c6d8f5504441771)
  Conflicts: WHATS_NEW
* tests: check activation of many thin-pool | Zdenek Kabelac | 2018-09-05 | 1 | -0/+64
  Artificial testing of monitoring of many thin-pools with a low number of
  resources in use (only a few pools are needed to actually hit the race).
* dmeventd: lvm2 plugin uses envvar registry | Zdenek Kabelac | 2018-09-05 | 2 | -11/+39
  The thin plugin started to use a configurable setting to allow configuring
  the use of external scripts - however, to read this value it needed to
  execute an internal command, as dmeventd itself has no access to lvm.conf
  and the API for dmeventd plugins has been kept stable.
  The call of the command itself was normally not 'a big issue' until users
  started to use a higher number of monitored LVs and execution of the command
  got stuck because another monitored resource had already started to execute
  some other lvm2 command and became blocked waiting on the VG lock.
  This scenario revealed the necessity to somehow avoid calling an lvm2
  command during resource registration - but this requires bigger changes - so
  meanwhile this patch tries to minimize the possibility of hitting this race
  by obtaining any configurable setting just once - such a patch is small and
  covers the majority of the problem - yet a better solution needs to be
  introduced, likely with a bigger rework of dmeventd.
  TODO: Avoid blocking registration of a resource with execution of lvm2
  commands, since those can get stuck waiting on mutexes.
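  The workaround boils down to the usual "fetch once, cache the result"
  pattern; a generic, hedged sketch (the names are invented, this is not the
  dmeventd plugin code):

      #include <pthread.h>

      static pthread_once_t _once = PTHREAD_ONCE_INIT;
      static int _use_external_scripts;

      /* placeholder for the expensive query that can block on a VG lock */
      static int _query_setting_via_lvm_command(void)
      {
          return 1;
      }

      static void _init_setting(void)
      {
          _use_external_scripts = _query_setting_via_lvm_command();
      }

      /* every registration after the first one reuses the cached value */
      int use_external_scripts(void)
      {
          pthread_once(&_once, _init_setting);
          return _use_external_scripts;
      }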
* Update WHATS_NEW | Marian Csontos | 2018-08-28 | 1 | -0/+1
* WHATS_NEW: recent fixes | David Teigland | 2018-08-27 | 1 | -1/+3
* lvmetad: fix pvs for many devices | David Teigland | 2018-08-27 | 1 | -1/+59
  When using lvmetad, 'pvs' still evaluates full filters on all devices
  (lvmetad only provides info about PVs, but pvs needs to report info about
  all devices, at least sometimes). Because some filters read the devices, pvs
  still reads every device, even with lvmetad (i.e. lvmetad is no help for the
  pvs command).
  Because the device reads are not being managed by the standard label scan
  layer, but only happen incidentally through the filters, there is nothing to
  control and limit the bcache content and the open file descriptors for the
  devices. When there are a lot of devs on the system, the number of open fds
  exceeds the limit and all opens begin failing.
  The proper solution for this would be for pvs to really use lvmetad and not
  scan devs, or for pvs to do a proper label scan even when lvmetad is
  enabled. To avoid any major changes to the way this has worked, just work
  around this problem by dropping bcache and closing the fd after pvs
  evaluates the filter on each device.
* lvmetad: improve scan for pvscan all | David Teigland | 2018-08-27 | 3 | -17/+91
  For 'pvscan --cache', avoid using dev_iter in the loop after the label_scan
  by passing the necessary devs back from the label_scan for the continued
  pvscan. The dev_iter functions reapply the filters, which will trigger more
  io when we don't need or want it. With many devs, incidental opens from the
  filters (not controlled by the label scan) can lead to too many open files.
* spec: Disable python bindings on newer versions | Marian Csontos | 2018-08-27 | 2 | -11/+12
* bcache: reduce MAX_IO to 256 | David Teigland | 2018-08-24 | 2 | -1/+10
  This is the number of concurrent async io requests that the scan layer will
  submit to the bcache layer. There will be an open fd for each of these, so
  it is best to keep this well below the default limit for max open files
  (1024), otherwise lvm may get EMFILE from open(2) when there are around 1024
  devices to scan on the system.
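  The ceiling referred to is the per-process RLIMIT_NOFILE; a hedged sketch of
  inspecting it so a cap like MAX_IO can be sized against it (the sizing rule
  below is invented for illustration; the commit itself just lowers MAX_IO to
  a fixed 256):

      #include <stdio.h>
      #include <sys/resource.h>

      int main(void)
      {
          struct rlimit rl;

          if (getrlimit(RLIMIT_NOFILE, &rl) != 0)
              return 1;
          /* keep concurrent async io (one open fd each) well below the limit */
          unsigned long cap = rl.rlim_cur > 1024 ? 256 : (unsigned long)rl.rlim_cur / 4;
          printf("nofile soft limit %lu -> cap concurrent io at %lu\n",
                 (unsigned long)rl.rlim_cur, cap);
          return 0;
      }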
* test: add striped -> raid0 test script | Heinz Mauelshagen | 2018-08-23 | 1 | -0/+25
  (cherry picked from commit 3c966e637fe1bec587ceb9ad13aa009db64b4f8e)
* lvconvert: fix conversion attempts to linear | Heinz Mauelshagen | 2018-08-23 | 3 | -80/+68
  "lvconvert --type linear RaidLV" on striped and raid4/5/6/10 has to provide
  the convenient interim layouts. The fix involves a cleanup to the
  convenience type function.
  As a result of testing, add missing sync waits to
  lvconvert-raid-reshape-linear_to_raid6-single-type.sh.
  Resolves: rhbz1447809
  (cherry picked from commit e83c4f07ca4a84808178d5d22cba655e5e370cd8)
  Conflicts: WHATS_NEW
* spec: Add vdo plugin for dmeventd | Marian Csontos | 2018-08-23 | 1 | -0/+2
* lvconvert: fix regression preventing direct striped conversion | Heinz Mauelshagen | 2018-08-21 | 3 | -0/+29
  Conversion to striped from raid0/raid0_meta is directly possible. Fix a
  regression setting superfluous interim raid5_n conversion type introduced by
  commit bd7cdd0b09ba123b064937fddde08daacbed7dab.
  Add new test script lvconvert-raid0-striped.sh.
  Resolves: rhbz1608067
  (cherry picked from commit 4578411633a40c8c9068ff439ef3c33cbe78d25a)
  Conflicts: WHATS_NEW
* tests: check policy mq can be used with format2 | Zdenek Kabelac | 2018-08-07 | 1 | -0/+6
* tests: splitmirror for mirror type | Zdenek Kabelac | 2018-08-07 | 1 | -0/+32
* mirror: fix splitmirrors for mirror type | Zdenek Kabelac | 2018-08-07 | 3 | -1/+7
  With the improved mirror activation code, a --splitmirrors issue popped up,
  since proper preload code and deactivation for the split mirror leg were
  missing.
* cache: drop metadata_format validation | Zdenek Kabelac | 2018-08-07 | 2 | -5/+1
  Allow using any combination of cache metadata format and policy.
* mirrors: fix read_only_volume_list | David Teigland | 2018-08-02 | 1 | -0/+2
  If a mirror LV is listed in read_only_volume_list, it would still be
  activated rw. The activation would initially be readonly, but the monitoring
  function would immediately change it to rw.
  This was a regression from commit fade45b1d14c "mirror: improve table
  update". The monitoring function needs to copy the read_only setting into
  the new set of mirror activation options it uses.
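  The setting involved is activation/read_only_volume_list in lvm.conf; an
  illustrative entry (the LV name and tag are placeholders):

      activation {
          # LVs matching these entries must only ever be activated read-only
          read_only_volume_list = [ "vg00/mirrorlv", "@ro_only" ]
      }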
* Merge branch '2018-06-01-stable' of git://sourceware.org/git/lvm2 into 2018-06-01-stable | Marian Csontos | 2018-08-02 | 2 | -3/+10
  * '2018-06-01-stable' of git://sourceware.org/git/lvm2:
    vgcreate: close exclusive fd after pvcreate