| Commit message (Collapse) | Author | Age | Files | Lines |
|
A reporting command that is run concurrently with another
command modifying a VG may report either the old or new
VG state. This flexibility means the reporting command
could be optimized to report metadata that was read prior
to taking the VG lock.
Using lock file timestamps, that window can be closed so
that metadata reported is always consistent with the held
VG lock. In some cases, this additional consistency will
avoid warnings that could be produced when the command
compares the metadata with the dm kernel state.
The end result is that the optimization is used (to read
disks only once) and the reported metadata is consistent
with the dm kernel state, even if a concurrent command
is making changes.
A reporting command will now save the VG lock file
timestamps prior to scanning disks. The VG metadata that
is read while scanning disks is saved in memory.
After the scan, when reporting each VG, the command will
lock the VG, and then check the lock file timestamp again.
If the timestamp is unchanged, then the metadata saved
from the scan is unchanged and is reused to report the VG.
If the timestamp has changed, then another command has
modified the metadata since the scan, and the metadata is
reread from disk prior to reporting it.
Changes to lock file handling to support this:
- lock files are no longer unlinked and recreated by every
lvm command, but are left in place.
- a command modifying a VG (holding an exclusive flock)
will update the lock file timestamp before unlocking it.
|
The scanning optimization can produce warnings from
'lvs' when run concurrently with commands modifying LVs,
so disable the optimization until it can be improved.
Without the scanning optimization, lvs will always
read all PVs twice:
1. read metadata from all PVs, saving it in memory
2. for each VG
3. lock VG
4. reread metadata from all PVs in VG, replacing metadata
saved from step 1
5. run command on VG
6. unlock VG
The optimization would usually cause step 4 to be skipped,
and PVs would be read only once.
Running the command in step 5 using metadata that was not
read under the VG lock is usually fine, except for the
fact that lvs attempts to validate the metadata by comparing
it to current dm state. If other commands are modifying dm
state while lvs is running, lvs may see differences between
metadata from step 1 and dm state checked during step 5,
and print warnings.
(A better fix may be to detect the concurrent change and
fall back to rereading metadata in step 4 only when needed.)
|
Since we reduced the created LV to 4M, dd also copies just 4M.
|
This reverts commit cbabdf2fca6131660cfb5525ed9edb3f7a41525a
and adds an extra comment explaining why this code may look
unused but is necessary at runtime.
|
This reverts commit 70fb31b5d6863248b5adfb2581b706cbb158b30e.
|
This reverts commit e92d3bd1f75d335fba5303c433516ea4ebe5cab1.
|
strncpy() itself zero-pads the rest of the buffer.
|
free() itself checks for NULL.
|
Replace malloc() + memset() with zalloc().
|
We don't need to check for any error result codes here.
|
Since we fixed linking of the proper version of 'libdevmapper'
by linking the lvm2 plugin correctly, the correct function is
already available from the internal lvm library.
So drop the unneeded include of the parsing function.
|
Embed the function into the code, since it is actually simpler
written this way: there are no memleak troubles in the
failing-allocation error path.
|
Exit when !_touch_hints().
|
Avoid leaking 'hint' on every loop continue by allocating
'hint' only when it is going to be added to the list.
Switch to 'dm_strncpy()' and validate sizes.
|
Don't use a garbage value for later computations.
|
Free the allocated buffer on the function's exit.
Also check fwrite() results.
|
When 'str1' is NULL, there is no point in running the 2nd strstr().
|
When dev_in_device_list() != 0, the allocated 'devl' was
actually leaking - so instead allocate 'devl' only when
!dev_in_device_list(), and reindent the surrounding code.
|
Since we check for NULL pointers earlier, we need to be
consistent across the function - a NULL would apply across
the whole function.
As for dropping the 'mda' check: we already dereference it
earlier, so it cannot be NULL at those places (and it is
validated before entering _read_mda_header_and_metadata).
|
Update the code to a simpler form and check the fclose() result.
|
Reapply 23cc7ddc50e2800a6dc248de897a4c88c1514160 to internal version
of libdm.
|
dev_unset_last_byte() must be called while the fd is still valid.
After a write error, dev_unset_last_byte() must be called before
closing the dev and resetting the fd.
In the write error path, dev_unset_last_byte() was being called
after label_scan_invalidate() which meant that it would not unset
the last_byte values.
After a write error, dev_unset_last_byte() is now called in
dev_write_bytes() before label_scan_invalidate(), instead of by
the caller of dev_write_bytes().
In the common case of a successful write, the sequence is still:
dev_set_last_byte(); dev_write_bytes(); dev_unset_last_byte();
Signed-off-by: Zhao Heming <heming.zhao@suse.com>
|
When resizing 2 volumes, like a thin-pool and its metadata,
that are of different types, the command was actually expecting
both LVs to be of the same segtype, and would throw an error
when they differed.
This patch fixes it by setting a new segtype from the last
segment of the 2nd extended device.
It also fixes the possible 'percentage' extension setup that
might have been used for the 'primary' volume - while the
'secondary' LV always goes with a direct size, as we do not
support the 'percentage' setup for it.
This mainly affects usage of thin-pool, where extension of the
thin-pool data size may also lead to extension of the metadata
size.
|
To avoid removing while 'add' might not have been processed yet
(when emulating reboot in pvmove-restart).
|
If 'remove' was successful, we can break the loop immediately.
|
Do not call pthread_join() if thread_id is 0.
|
Report errors for open() in a better order.
Ensure descriptors are not leaked.
|
Make sure the read_ahead pointer is not NULL when querying for RA.
|
Check for sigprocmask() errors.
|
dev_name is global in device.h
|
Thin metadata evolves between kernel versions, so its usage is
not always precisely predictable - so let's make the test happy
when it gets below 90%.
|
Slow down 'delay' more.
|
We can 'cache' only an exclusively active LV in a cluster.
|
When running a cluster test with clvmd, the actual 'monitoring'
happens in the cluster - so the 'already monitored' message is
also logged within the clvmd code and the command cannot see
its effect.
clvmd was incapable of reporting this information back to the
command, so it cannot be displayed this way.
Add 'lvs -o+seg_monitor' validation, which also works in
clustered mode.
|