New reporting fields related to cache device status:
- cache_total_blocks
- cache_used_blocks
- cache_dirty_blocks
- cache_read_hits
- cache_read_misses
- cache_write_hits
- cache_write_misses
|
|
|
requested, add lv_info_with_seg_status fn
|
Similar to the LVSINFO type, which gathers an LV together with its
DM_DEVICE_INFO, the new LVSSTATUS/SEGSSTATUS report types gather an
LV/segment together with its DM_DEVICE_STATUS.
Since status can be reported only for a particular segment, in the
case of LVSSTATUS we need to choose which of the LV's segments should
be processed to represent the "LV status". In the case of the
SEGSSTATUS type the choice is clear - the status is reported for the
segment just processed.
|
lv_with_info_and_seg_status
The former struct lv_with_info is renamed to lv_with_info_and_seg_status,
as it can hold more than just "info"; in addition there is now the LV's
segment status:

    struct lv_with_info_and_seg_status {
            struct logical_volume *lv;
            struct lvinfo *info;
            struct lv_seg_status *seg_status;
    };

Where struct lv_seg_status is:

    struct lv_seg_status {
            struct dm_pool *mem;
            struct lv_segment *lv_seg;
            lv_seg_status_type_t type;
            void *status; /* struct dm_status_* */
    };

Here lv_seg points to the LV's segment that is being reported or
otherwise processed.
The new struct lv_seg_status keeps the information about segment
status - the status retrieved via the DM_DEVICE_STATUS ioctl. This
information will be used to report the dm device target status for
the specified LV segment.
So this patch introduces a third level of LV information that is
kept for reuse while reporting fields within one reporting line,
causing only one DM_DEVICE_STATUS ioctl call per reported LV segment
line (otherwise we would need to call DM_DEVICE_STATUS for each
segment status field in one LV segment/reporting line, which is not
efficient).
This follows exactly the same principle as already introduced
by commit ecb2be5d1642aa0142d216f9e52f64fd3e8c3fc8.
So we currently have three levels of information that can be used
to report an LV/LV segment:
- the LV metadata itself (struct logical_volume *lv)
- the LV's DM_DEVICE_INFO ioctl result (struct lvinfo *info)
- the LV segment's DM_DEVICE_STATUS ioctl result (this status must be
  bound to a segment, not the whole LV, as the whole LV may of course
  be composed of several segments)
  (this is the new struct lv_seg_status *seg_status)
|
|
|
|
Calculate dm_list_size() only when there is not just a single
segment in the list - so it is only counted on the error path.
|
|
|
|
|
|
|
|
When deactivating an origin, we may possibly have left the table in a
broken state, where the origin is not active but a snapshot volume is
still present.
Ensure that deactivation of the origin also checks that all associated
snapshots are inactive - otherwise do not skip deactivation.
(so e.g. 'vgchange -an' would detect errors)
|
Let's use this function for more activations in the code.
'needs_exclusive' will enforce the exclusive type for any given LV.
We may want to activate an LV in exclusive mode even when we know
the LV (as is) supports non-exclusive activation as well.
lvcreate -ay  -> exclusive & local
lvcreate -aay -> exclusive & local
lvcreate -aly -> exclusive & local
lvcreate -aey -> exclusive (might be on any node).
|
|
|
Unsupported as of now.
|
|
|
understand this properly
|
LVSINFO is just a subtype of the LVS report type with an extra "info"
ioctl called for each reported LV (per output line), so include its
processing within the "case LVS" switch, not as a completely different
kind of reporting, which may be misleading when reading the code.
There's already the "lv_info_needed" flag set in the _report fn, so
call the appropriate reporting function based on this flag within the
"case LVS" switch line.
The same is actually already done when an LV is reported per segment
within the "case SEGS" switch line. So this patch makes the code more
consistent, processing all cases the same way.
Also, this is a preparation for other new subtypes that will
be introduced later - the "LVSSTATUS" and "SEGSSTATUS" report types.
|
|
|
Only with -DDEBUG.
|
When responding to DM_EVENT_CMD_GET_REGISTERED_DEVICE no longer
ignore threads that have already been unregistered but which
are still present.
This means the caller can unregister a device and poll dmeventd
to ensure the monitoring thread has gone away before removing
the device. If a device was registered and unregistered in quick
succession and then removed, WAITEVENT could run in parallel with
the REMOVE.
Threads are moved to the _thread_registry_unused list when they
are unregistered.
|
|
|
|
|
|
The status of threads in _thread_registry is always DM_THREAD_RUNNING
(zero).
DM_EVENT_REGISTRATION_PENDING is never stored in thread->events.
|
Activation of a new/unused/empty thin-pool volume skips the 'overlay'
part and directly provides the 'visible' thin-pool LV to the user.
Such a thin pool still gets the 'private' -tpool UUID suffix for easier
udev detection of protected lvm2 devices, and also gets udev flags to
avoid any scan.
Such a pool device is a 'public' LV with a regular /dev/vgname/poolname
link, but it is still a udev-hidden device for any other use.
To display the proper active state we need a few explicit tests
for this condition.
Before it is used for any lvm2 thin volume, deactivation is
now needed to avoid any 'race' with external usage.
|
Call check_new_thin_pool() to detect an in-use thin-pool.
Save an extra reactivation of the thin-pool when the thin pool is not
active.
(It's now a bit more expensive to invoke thin_check for new pools.)
For new pools:
We now activate the thin-pool locally and exclusively as a 'public' LV.
Validate that transaction_id is still 0.
Deactivate.
Prepare the create message for the thin-pool and exclusively activate
the pool.
Activate the new thin LV.
And deactivate the thin pool if it used to be inactive.
|
|
|
The function tests that the given new thin pool is still unused.
|
|
|
|
|
|
|
Allowing 'external' use of thin-pools requires validating even
so-far 'unused' new thin pools.
Later we may have a 'smarter' way to resolve which thin-pools are
owned by lvm2 and which are external.
|
|
|
|
Recognize a 'new' (and never used) lvm2 thin pool - it has
'transaction_id' == 0.
(lv_is_used_thin_pool() has a slightly different meaning.)
|
When transaction_id is set to 0 for a thin-pool, libdm avoids
validation of the thin-pool unless there are real messages to be sent
to it.
This relaxes the strict policy which always required knowing the
transaction_id for the kernel target up front.
It now allows activating a thin-pool with any transaction_id
(when a transaction_id is passed in).
It is now up to the application to validate the transaction_id from
the live thin-pool volume against the transaction_id within its own
metadata.
|
|
|
|
|
|
After the initial 'size' usage is converted to extents, continue to
use only extents.
(in-release fix).
|
|
|
|
|
Test that -m0 passes with types.
Check --readahead and thins.
|
|
|
Use lv_is_pool() to detect both pool versions.
|
|
|
|
Make the dm_info type clearer.
|
|
|
Pass lvconvert_params as the last arg.
|
|
|
Use struct initializer instead of memset().
|
|
|
Use log_error() for a real error.
|
Show some stats with 'lvs'.
Display the same info for an active cache volume and cache-pool.
data% - #used cache blocks / #total cache blocks
meta% - #used metadata blocks / #total metadata blocks
copy% - #dirty / #used cache blocks
TODO: maybe there is a better mapping
- should be seen as first-try-and-see.
|
|
|
|
Before we reuse a cache-pool, we need to ensure the metadata volume
has a wiped header.
|
|
|
|
|
|
|
When the cache pool is unused, lvm2 code will internally
allow activating such a cache-pool.
The cache-pool is activated as its metadata LV, so lvm2 can easily
wipe such a volume before the cache-pool is reused.
|
|
|
|
|
|
|
|
|
Replace lv_cache_block_info() and lv_cache_policy_info()
with lv_cache_status(), which directly returns the
dm_status_cache structure together with some calculated
values.
After use, the mem pool stored inside the lv_status_cache
structure needs to be destroyed.
|
|
|
|
|
|
|
Add initialization of no_open_count into _setup_task().
Report the problem as a warning (it cannot happen anyway).
Also drop some duplicated debug messages - we have already
printed the info about the operation, so make the log a bit shorter.
|
|
|
|
Use the standard 'virtual_extents' naming.
Move virtual_size out of lvcreate_params into the 'lcp' struct.
|
|
|
The lib takes sizes in extents - do the same for pool_metadata.
|
|
|
|
Add a function for wiping a cache pool volume.
Only an unused cache-pool can be wiped.
|
|
|
|
|
|
|
The tool will use internal activation of an unused cache pool to
clear the metadata area before the next use of the cache-pool.
So allow deactivation of an unused pool in case some error
happened and we were not able to deactivate the pool
right after the metadata wipe.
|
|
|
|
|
|
Support caching of a thin-pool.
lvresize needs to be resolved - so far, the user
has to manually drop the cache-pool before resizing.
|
|
|
When the pool is not used, allow changing its chunksize.
|
|
|
|
|
|
Simplify reporting of percentage.
This allows easier support for more types.
Move testing of device availability into activate.c.
|
|
|
Sizes of 0 are not supported, and neither are negative sizes.
|
|
|
|
|
No data for writing should be seen as 'dump' success.
(This reduces one <backtrace> in the log; it has no other
effect.)
|
|
|
|
lvcreate -m0 and -Mn go with anything.
Read-ahead works either with pools or with thin/cache, but not with
both.
|
|
|
|
|
When a non-root user calls dm_check_version(), it has been printing
some uninitialized values from the stack. So always initialize those
vars.