The VG lock is a simple read/write lock that protects a VG's
metadata. The VG lock is used in shared mode to read the VG
and in exclusive mode to modify the VG.
The function to acquire/release the VG lock is dlock_vg():
. To acquire the vg lock in ex mode for writing:
dlock_vg(cmd, vg_name, "ex", 0);
. To acquire the vg lock in sh mode for reading:
dlock_vg(cmd, vg_name, "sh", 0);
. To release the vg lock:
dlock_vg(cmd, vg_name, "un", 0);
The dlock_vg() function sends a message to lvmlockd, asking for
the lock in the specified mode. lvmlockd acquires the lock from
the underlying lock manager, and sends the result back to the command.
When the command exits, or calls dlock_vg("un"), lvmlockd releases
the lock in the underlying lock manager.
When lvm is compiled without lvmlockd support, all the dlock_vg calls
simply compile to success (1).
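As a minimal sketch of that stub (the macro form and guard name here are
assumptions for illustration, not the actual lvm source):

  /* Hypothetical stub: without lvmlockd support, the dlock calls
   * reduce to constant success, so callers need no #ifdefs. */
  #ifndef LVMLOCKD_SUPPORT
  #define dlock_vg(cmd, vg_name, mode, flags) (1)
  #endif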
Using the vg lock in commands is simple:
. A command that wants to read the VG should acquire the vg lock
in the sh mode, then read the VG:
dlock_vg(cmd, vg_name, "sh", 0);
vg = vg_read(cmd, vg_name);
dlock_vg(cmd, vg_name, "un", 0);
. A command that wants to write the VG should acquire the vg lock
in the ex mode, then make changes to the VG and write it:
dlock_vg(cmd, vg_name, "ex", 0);
vg = vg_read(cmd, vg_name);
...
vg_write(vg);
vg_commit(vg);
dlock_vg(cmd, vg_name, "un", 0);
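For illustration, a hedged sketch of checking the lock result in that
sequence (log_error() and ECMD_FAILED are standard lvm names, but their
use here is an assumption, not taken from the patch):

  if (!dlock_vg(cmd, vg_name, "ex", 0)) {
          log_error("Failed to acquire ex VG lock for %s.", vg_name);
          return ECMD_FAILED;   /* report the conflict and fail */
  }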
When a command processes multiple VGs, e.g. using toollib process_each,
the VG lock should be explicitly unlocked when the command is done
processing each VG, i.e. dlock_vg(cmd, vg_name, "un", 0) as shown above.
When a command processes a single VG or pair of VGs, then the command can
simply exit and lvmlockd will automatically unlock the VG lock(s) that the
command had acquired.
Locking conflicts:
When a command calls dlock_vg(), the lock request is passed to lvmlockd.
lvmlockd makes the corresponding lock request in the lock manager using a
non-blocking request. If another command on another host holds the vg
lock in a conflicting mode, the lock request fails, and lvmlockd returns
the failure to the command. The command reports the lock conflict and
fails.
A future option may enable lvmlockd to automatically retry lock requests
that fail due to conflicts with locks of commands running concurrently on
other hosts. (These retries could be disabled or limited to a certain
number via a command or config option.) This way, simple, transient
command locking conflicts would be hidden.
Caching:
lvmlockd uses the lvb in the VG lock to hold the VG seqno. When a command
writes a VG with a new seqno (under an ex lock), it sends lvmlockd the new
VG seqno by calling lvmlockd_vg_update(vg). When lvmlockd unlocks the ex
VG lock, it saves the latest seqno in the vg lock's lvb. When other hosts
next acquire the VG lock, they will read the lvb, see that the seqno is
higher than the last seqno they saw, and know that their cached copy of
the VG is stale. When lvmlockd sees this, it invalidates the cached copy
of the VG in lvmetad. When a command next reads the VG from lvmetad, it
will see that it's stale, will reread the latest VG from disk, and update
the cached copy in lvmetad.
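A minimal sketch of the staleness test this describes (the struct and
names are illustrative assumptions, not the lvmlockd code):

  #include <stdint.h>

  struct vg_lock_lvb {
          uint32_t vg_seqno;      /* seqno saved by the last ex unlock */
  };

  /* The cached VG is stale if another host wrote a newer seqno. */
  static int vg_cache_is_stale(const struct vg_lock_lvb *lvb,
                               uint32_t last_seen_seqno)
  {
          return lvb->vg_seqno > last_seen_seqno;
  }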
These commands do not yet work with lvmlockd lock_types (sanlock|dlm):
. vgsplit
. vgmerge
. vgrename
. lvrename
The global lock (gl) is a simple read/write lock that protects
global lvm metadata, i.e. metadata or information that is not
isolated to a single VG. This global information includes:
A) The VG name space.
B) PVs or devices that do not belong to a VG.
The function to acquire/release the global lock is dlock_gl():
. To acquire the gl in exclusive mode for writing/changing A or B:
dlock_gl(cmd, "ex", 0);
. To acquire the gl in shared mode for reading/listing A or B:
dlock_gl(cmd, "sh", 0);
. To release the gl:
dlock_gl(cmd, "un", 0);
The dlock_gl() function sends a message to lvmlockd, asking for
the lock in the specified mode. lvmlockd acquires the lock from
the underlying lock manager, and sends the result back to the command.
When the command exits, or calls dlock_gl("un"), lvmlockd releases
the lock in the underlying lock manager.
When lvm is compiled without lvmlockd support, all the dlock_gl() calls
simply compile to success (1).
Using the global lock in commands is simple:
. A command that wants to get an accurate list of all VG names
should acquire the gl in the sh mode, then read the list.
. A command that wants to add or remove a VG name should acquire
the gl in the ex mode, then add/remove the name.
. A command that wants to get an accurate list of all PVs/devices
should acquire the gl in the sh mode, then read the list.
. A command that wants to change a device to/from a PV or add/remove
a PV to/from a VG should acquire the gl in the ex mode, then make
the change.
. A command that wants to read the properties of an orphan PV
should acquire the gl in the sh mode, then read the properties.
. A command that wants to change the properties of an orphan PV
should acquire the gl in the ex mode, then change the properties.
. The gl is acquired at the start of a command before any processing.
This is necessary so that the cached information used by the command
is valid and up to date (see caching below).
. A command generally knows at the outset which of the things above
it is going to do, so it knows which lock mode to acquire.
. If a command is given a tag, the tag matching requires a complete
and accurate search of all VGs, and therefore implies that the global
shared lock is needed.
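As a concrete illustration of these rules, a sketch of a command listing
VG names under the shared gl (error handling simplified; get_vgnames()
is an existing lvm helper, but its use here is illustrative):

  if (!dlock_gl(cmd, "sh", 0))
          return ECMD_FAILED;     /* lock conflict or lvmlockd error */
  /* the global cache is now valid, so the name list is accurate */
  vgnames = get_vgnames(cmd, 0);
  /* ... report the names ... */
  dlock_gl(cmd, "un", 0);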
Locking conflicts:
When a command calls dlock_gl(), the lock request is passed to lvmlockd.
lvmlockd makes the corresponding lock request in the lock manager using a
non-blocking request. If another command on another host holds the gl in
a conflicting mode, the lock request fails, and lvmlockd returns the
failure to the command. The command reports the lock conflict and fails.
If a reporting command (sh lock) conflicts with another command using an
ex lock (like vgextend), then the reporting command can simply be rerun.
A future option may enable retrying sh lock requests within lvmlockd,
making simple, incidental conflicts invisible. (These retries could be
disabled or limited to a certain number via a command or config option.)
If a command is changing global state (using an ex lock), the conflict
could be with another sh lock (e.g. reporting command) or another ex lock
(another command changing global state.) The lock manager does not say
which type of conflict it was. In the case of ex/sh conflict, a retry of
the ex request could be automatic, but an ex/ex conflict would generally
want inspection before retrying. Uncoordinated commands concurrently
changing the same global state would be uncommon, and a warning of the
conflict with failure is probably preferred, so the state can be
inspected. Still, the same retry options as above could be applied if
needed.
Caching:
In addition to ex/sh mutual exclusion, the lock manager allows an
application to stash a chunk of its own data within a lock. This chunk of
data is called the "lock value block" (lvb). This opaque data does not
affect the locking behavior, but offers a convenient way for applications
to pass around extra information related to the lock. The lvb can be read
and written by any host using the lock (the application can read the lvb
content when acquiring the lock, and write it when releasing an ex lock.)
(The dlm lvb is only 32 bytes. The sanlock lvb is up to 512 bytes.)
An application can use the lvb for anything it wishes, and it's often used
to help manage the cache associated with the lock. lvmlockd stores a
counter/version number in the lvb. It's incremented when anything
protected by the gl (A or B above) is changed. When other hosts see the
version number has increased, they know that A or B have changed.
lvmlockd refines this further by using a second version number that is
incremented only when the VG namespace (A) changes. The dlock_gl() flag
UPDATE_NAMES tells lvmlockd that the VG namespace is being changed.
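A sketch of the lvb layout this implies (the struct itself is an
assumption; only the two-counter scheme comes from this message):

  #include <stdint.h>

  struct gl_lvb {
          uint32_t global_version;  /* bumped when anything under the gl (A or B) changes */
          uint32_t names_version;   /* bumped only when the VG namespace (A) changes */
  };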
When lvmlockd acquires the gl and sees that the counters in the lvb have
been incremented, it knows that the objects protected by the gl have been
changed by another host. This implies that the local host's cache of this
global information is likely out of date, and needs to be refreshed from
disk. When this happens, lvmlockd sends a message to lvmetad to
invalidate the cached global information in lvmetad. When a command sees
that the data from lvmetad is stale, it reads from disk and updates
lvmetad.
All users of the global lock want to use valid information from lvmetad,
so they always check that the lvmetad cache is valid before using it, and
refresh it if needed. This check and refresh is done by
lvmetad_validate_global_cache(). Instead of always calling
dlock_gl()+lvmetad_validate_global_cache() back to back, dlock_gl() calls
lvmetad_validate_global_cache() as the final step before returning.
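An illustrative sketch of that final step (the internal helper name and
the exact signatures are assumptions):

  int dlock_gl(struct cmd_context *cmd, const char *mode, uint32_t flags)
  {
          if (!_dlock_gl_request(cmd, mode, flags))   /* hypothetical helper */
                  return 0;
          /* final step: make the lvmetad cache valid before returning */
          lvmetad_validate_global_cache(cmd);         /* arguments simplified */
          return 1;
  }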
In the future, more optimizations can be made related to global cache
updating. Similar to the UPDATE_NAMES method, commands can tell lvmlockd
more details about what they are changing under the global lock. lvmlockd
can propagate these details to others using the gl lvb. Knowing these
details, other hosts can limit their rescanning to only what's necessary
given the specific changes.
vgcreate with sanlock:
With the sanlock lock_type, the sanlock locks are stored on disk, on a
hidden LV within the VG. The first sanlock VG created will hold the
global lock. Creating the first sanlock VG is a special case because no
global lock will exist until after the VG is created on disk. vgcreate
calls dlock_gl_create() to handle this special case. The comments in that
function explain the details of how it works.
command gl requirements:
Some commands are listed twice if they have two different
behaviors (depending on args) that need different gl usage.
As listed above (in A,B), the reasons for using the gl are:
A) reading or changing the VG name space
B) reading or changing PV orphans
(orphan properties or assignment to VGs)
command: gl mode used, reason(s) gl is needed
vgsplit: ex, add vg name
vgrename: ex, add vg name
vgcreate: ex, add vg name, rem pv orphan
vgremove: ex, rem vg name, add pv orphan
vgmerge: ex, rem vg name
vgextend: ex, rem pv orphan
vgreduce: ex, add pv orphan
vgscan: sh, get vg names
vgs: sh, get vg names (only if tags used or no args)
vgchange: sh, get vg names (only if tags used or no args)
vgchange: ex, change vg system_id/uuid/lock_type (equivalent to name)
pvcreate: ex, add pv orphan
pvremove: ex, rem pv orphan
pvdisplay: sh, get vg names
pvscan: sh, get vg names
pvresize: sh, get vg names
pvresize: ex, change pv orphan (only if pv is an orphan)
pvchange: sh, get vg names
pvchange: ex, change pv orphan (only if pv is an orphan)
lvchange: sh, get vg names (only if tags used)
lvscan: sh, get vg names
Commands detect the invalid flag and update the lvmetad copy after the VG
is reread from disk.
To test this:
- Create VG foo that is visible to two hosts and usable by both.
- On both run 'lvmeta vg_lookup_name foo' to see the cached copy of
foo and its seqno. Say the seqno is 8.
- On host1 run 'lvcreate -n lv1 -L1G foo'.
- On host1 run 'lvmeta vg_lookup_name foo' to see the new version
of foo in lvmetad. It should have seqno 9.
- On host2 run 'lvmeta vg_lookup_name foo' to see the old cached
version of foo. It should have seqno 8.
- On host2 run 'lvmeta set_vg_version <uuid of foo> 9'.
- On host2 run 'lvmeta vg_lookup_name foo' to see that the vg_invalid
config node is reported along with the old cached version of foo.
- On host2 run 'lvs foo'. It should reread foo from disk and display lv1.
- On host2 run 'lvmeta vg_lookup_name foo' to see that the cached version
of foo is now updated to seqno 9, and the vg_invalid node is not reported.
Will be used in later patches to check and update the
local lvmetad global cache when needed.
Useful for debugging.
Add the ability to invalidate global or individual VG metadata.
The invalid state is returned to lvm commands along with the metadata.
This allows lvm commands to detect stale metadata from the cache and
reread the latest metadata from disk (in a subsequent patch.)
These changes do not change the protocol or compatibility between
lvm commands and lvmetad.
Global information
------------------
Global information refers to metadata that is not isolated
to a single VG, e.g. the list of vg names, or the list of pvs.
When an external system, e.g. a locking system, detects that global
information has been changed from another host (e.g. a new vg has been
created) it sends lvmetad the message: set_global_info: global_invalid=1.
lvmetad sets the global invalid flag to indicate that its cached data is
stale.
When lvm commands request information from lvmetad, lvmetad returns the
cached information, along with an additional top-level config node called
"global_invalid". This new info tells the lvm command that the cached
information is stale.
When an lvm command sees global_invalid from lvmetad, it knows it should
rescan devices and update lvmetad with the latest information. When this
is complete, it sends lvmetad the message: set_global_info:
global_invalid=0, and lvmetad clears the global invalid flag. Further lvm
commands will use the lvmetad cache until it is invalidated again.
The most common commands that cause global invalidation are vgcreate and
vgextend. These are uncommon compared to commands that report global
information, e.g. vgs. So, the percentage of lvmetad replies containing
global_invalid should be very small.
VG information
--------------
VG information refers to metadata that is isolated to a single VG,
e.g. an LV or the size of an LV.
When an external system determines that VG information has been changed
from another host (e.g. an lvcreate or lvresize), it sends lvmetad the
message: set_vg_info: uuid=X version=N. X is the VG uuid, and N is the
latest VG seqno that was written. lvmetad checks the seqno of its cached
VG, and if the version from the message is newer, it sets an invalid flag
for the cached VG. The invalid flag, along with the newer seqno, is saved
in a new vg_info struct.
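A sketch of the per-VG state this could keep (field names are
assumptions for illustration):

  #include <stdint.h>

  struct vg_info {
          int64_t external_version;   /* seqno N from the last set_vg_info */
          uint32_t flags;             /* e.g. an INVALID bit, set while N is
                                         newer than the cached VG seqno */
  };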
When lvm commands request VG metadata from lvmetad, lvmetad includes the
invalid flag along with the VG metadata. The lvm command checks for this
flag, and rereads the VG from disk if set. The VG read from disk is sent
to lvmetad. lvmetad sees that the seqno in the new version matches the
seqno from the last set_vg_info message, and clears the vg invalid flag.
Further lvm commands will use the VG metadata from lvmetad until it is
next invalidated.
A lock manager requires an application to "start" or "join"
a lockspace before using locks from it. Start is the point
at which the lock manager on a host begins interacting with
other hosts to coordinate access to locks in the lockspace.
Similarly, an application needs to "stop" or "leave" a
lockspace when it's done using locks from the lockspace so
the lock manager can shut down and clean up the lockspace.
lvmlockd uses a lockspace for each sanlock|dlm VG, and the
lockspace for each VG needs to be started before lvm can use
it. These commands tell lvmlockd to start or stop the
lockspace for a VG:
vgchange --lock-start vg_name
vgchange --lock-stop vg_name
To start the lockspace for a VG, lvmlockd needs to know which
lock manager (sanlock or dlm) to use, and this is stored in the
VG metadata as lock_type = "sanlock|dlm", along with data that
is specific to the lock manager for the VG, saved as lock_args.
For sanlock, lock_args is the location of the locks on disk.
For dlm, lock_args is the name of the cluster the dlm should use.
So, the process for starting a VG includes:
- Reading the VG without a lock (no lock can be acquired
because the lockspace is not started).
- Taking the lock_type and lock_args strings from the
VG metadata.
- Asking lvmlockd to start the VG lockspace, providing
the lock_type and lock_args strings which tell lvmlockd
exactly which lock manager is needed.
- lvmlockd will ask the specific lock manager to join the
lockspace.
The VG read in the first step, without a lock, is not used for
anything except getting the lock information needed to start
the lockspace. Subsequent use of the VG would use the VG lock.
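A condensed sketch of the start sequence (the helper name is
hypothetical; only the order of the steps comes from this message):

  vg = vg_read(cmd, vg_name);   /* no lock: the lockspace is not started */
  /* pass lock_type ("sanlock"|"dlm") and lock_args to lvmlockd,
   * which asks the matching lock manager to join the lockspace */
  lvmlockd_start_vg(cmd, vg->name, vg->lock_type, vg->lock_args);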
In the case of a sanlock VG, there is an additional step in the
sequence. Between the second and third steps, the vgchange
lock-start command needs to activate the internal LV in the VG
that holds the sanlock locks. This LV must be active before
sanlock can join the lockspace.
Starting and stopping VG's would typically be done automatically
by the system, similar to the way LV's are automatically activated
by the system. But, it is always possible to directly start/stop VG
lockspaces, as it is always possible to directly activate/deactivate
LVs. Automatic VG start/stop will be added by a later patch, using
the basic functionality from this patch.
vgcreate calls lvmlockd_init_vg() to do any create/initialize
steps that are needed in lvmlockd for the given lock_type.
vgremove calls lvmlockd_free_vg_before() to do any removal/freeing
steps that are needed in lvmlockd for the given lock_type
before the VG is removed on disk.
vgremove calls lvmlockd_free_vg_final() to do any removal/freeing
steps that are needed in lvmlockd for the given lock_type
after the VG is removed on disk.
When the lock_type is sanlock, the init/free also include
lvm client side steps to create/remove an internal LV on
which sanlock will store the locks for the VG.
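A sketch of where these hooks fall (the ordering is from this message;
the surrounding call names are assumptions):

  /* vgcreate */
  lvmlockd_init_vg(cmd, vg);          /* lock-manager setup for the lock_type */
  /* ... write and commit the new VG to disk ... */

  /* vgremove */
  lvmlockd_free_vg_before(cmd, vg);   /* before the VG is removed on disk */
  /* ... remove the VG on disk ... */
  lvmlockd_free_vg_final(cmd, vg);    /* after the VG is removed on disk */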
The locking required to access a VG is a property of the VG,
and is specified in the VG metadata as the "lock_type".
When lvm sees a VG, it looks at the VG's lock_type to determine
if locks are needed and from where:
- If the VG has no lock_type, or lock_type "none", then no locks
are needed.
This a "local VG". If the VG is visible to multiple hosts,
the VG system_id provides basic protection. A VG with an
unmatching system_id is inaccessible.
- If the VG has lock_type "sanlock" or "dlm", then locks
are needed from lvmlockd, which acquires locks from either
sanlock or dlm respectively.
This is a "dlock VG". If lvmlockd or the supporting lock
manager are not running, then the dlock VG is inaccessible.
- If the VG has the CLUSTERED status flag (or lock_type "clvm"),
then locks are needed from clvmd.
This is a "clvm VG". If clvmd or the supporting clustering or
locking are not running, then the clvm VG is inaccessible.
Settings in lvm.conf tell lvm commands which locking daemon to use:
- global/use_lvmlockd=1: tells lvm to use lvmlockd when accessing
VGs with lock_type sanlock|dlm.
- global/locking_type=3: tells lvm to use clvmd when accessing
VGs with CLUSTERED flag (or lock_type clvm).
LVM commands cannot use both lvmlockd and clvmd at the same time:
- use_lvmlockd=1 should be combined with locking_type=1
- locking_type=3 (clvmd) should be combined with use_lvmlockd=0
So, different configurations allow access to different VG's:
- When configured to use lvmlockd, lvm commands can access VG's
with lock_type sanlock|dlm, and VG's with CLUSTERED are ignored.
- When configured to use clvmd (locking_type 3), lvm commands
can access VG's with the CLUSTERED flag, and VG's with
lock_type sanlock|dlm are ignored.
- When configured to use neither lvmlockd nor clvmd, lvm commands
can access only local VG's. lvm will ignore VG's with lock_type
sanlock|dlm, and will ignore VG's with CLUSTERED (or lock_type clvm).
A VG is created with a specific lock_type:
- vgcreate --lock_type <arg> is a new syntax that can specify the
lock_type directly. <arg> may be: none, clvm, sanlock, dlm.
sanlock|dlm require lvmlockd to be configured (in lvm.conf) and running.
clvm requires clvmd to be configured (in lvm.conf) and running.
- vgcreate --clustered y (or -cy) is the old syntax that still works,
but it is not preferred because the lock_type is not explicit.
When clvmd is configured, -cy creates a VG with lock_type clvm.
When lvmlockd is configured, -cy creates a VG with lock_type sanlock,
but this can be changed to dlm with lvm.conf vgcreate_cy_lock_type.
Notes:
The LOCK_TYPE status flag is not strictly necessary, but is an
attempt to prevent old versions of lvm (pre-lvmlockd) from using
a VG with a lock_type.
In the VG metadata, the lock_type string is accompanied by
a lock_args string. The lock_args string is lock-manager-specific
data associated with the VG: for sanlock, the location on disk
of the locks; for dlm, the cluster name.
In a VG with lock_type sanlock|dlm, each LV also has a lock_type
and lock_args in the metadata. The LV lock_type currently always
matches the lock_type of the VG. For sanlock, the LV lock_args
specify the disk location of the LV lock.
If a foreign VG is ignored when it's included by "all vgs",
then the command shouldn't fail.
If a foreign VG is ignored when it's named explicitly as
a command arg, then the command should fail.
Also, remove ignore_vg from the reporter functions, because it
repeats what is already done in process_each, given the recent
new version of process_each.
See the included lvmsystemid(7) for a full description.
Calculate dm_list_size() only when there is not just a single
segment in the list - so it's only counted on the error path.
When deactivating an origin, we may have possibly left the table in a
broken state, where the origin is not active but a snapshot volume is
still present. Let's ensure that deactivation of the origin also detects
that all associated snapshots are inactive - otherwise do not skip
deactivation (so e.g. 'vgchange -an' would detect errors).
Let's use this function for more activations in the code.
'needs_exclusive' will enforce the exclusive type for any given LV.
We may want to activate an LV in exclusive mode even when we know
the LV (as is) supports non-exclusive activation as well.
lvcreate -ay  -> exclusive & local
lvcreate -aay -> exclusive & local
lvcreate -aly -> exclusive & local
lvcreate -aey -> exclusive (might be on any node).
Unsupported as of now.
understand this properly
LVSINFO is just a subtype of LVS report type with extra "info" ioctl
called for each LV reported (per output line) so include its processing
within "case LVS" switch, not as completely different kind of reporting
which may be misleading when reading the code.
There's already the "lv_info_needed" flag set in the _report fn, so
call the appropriate reporting function based on this flag within the
"case LVS" switch line.
Actually, the same is already done when the LV is reported per segment
within the "case SEGS" switch line. So this patch makes the code more
consistent, processing all these cases the same way.
Also, this is a preparation for another and new subtype that will
be introduced later - the "LVSSTATUS" and "SEGSSTATUS" report type.
Only with -DDEBUG.
When responding to DM_EVENT_CMD_GET_REGISTERED_DEVICE no longer
ignore threads that have already been unregistered but which
are still present.
This means the caller can unregister a device and poll dmeventd
to ensure the monitoring thread has gone away before removing
the device. If a device was registered and unregistered in quick
succession and then removed, WAITEVENT could run in parallel with
the REMOVE.
Threads are moved to the _thread_registry_unused list when they
are unregistered.
The status of threads in _thread_registry is always DM_THREAD_RUNNING
(zero).
DM_EVENT_REGISTRATION_PENDING is never stored in thread->events.
Activation of a new/unused/empty thin pool volume skips
the 'overlay' part and directly provides a 'visible' thin-pool LV to the user.
Such a thin pool still gets the 'private' -tpool UUID suffix for easier
udev detection of protected lvm2 devices, and also gets udev flags to
avoid any scan.
Such a pool device is a 'public' LV with a regular /dev/vgname/poolname link,
but it's still a 'udev'-hidden device for any other use.
To display the proper active state we need to do a few explicit tests
for this condition.
Before it's used for any lvm2 thin volume, deactivation is
now needed to avoid any 'race' with external usage.
Call check_new_thin_pool() to detect an in-use thin-pool.
Avoid extra reactivation of the thin-pool when the thin pool is not active.
(It's now a bit more expensive to invoke thin_check for new pools.)
For new pools:
We now activate the thin-pool locally and exclusively as a 'public' LV.
Validate that the transaction_id is still 0.
Deactivate.
Prepare the create message for the thin-pool and exclusively activate the pool.
Activate the new thin LV.
And deactivate the thin pool if it used to be inactive.
The function tests that a given new thin pool is still unused.
Allowing 'external' use of thin-pools requires validating even
so-far 'unused' new thin pools.
Later we may have a 'smarter' way to resolve which thin-pools are
owned by lvm2 and which are external.
Recognize a 'new' (and never used) lvm2 thin pool - it has 'transaction_id' == 0
(lv_is_used_thin_pool() has a slightly different meaning).
When transaction_id is set to 0 for a thin-pool, libdm avoids validation
of the thin-pool, unless there are real messages to be sent to the thin-pool.
This relaxes the strict policy which always required knowing the
transaction_id for the kernel target up front.
It now allows activating a thin-pool with any transaction_id
(when transaction_id is passed in).
It is now up to the application to validate the transaction_id from the live
thin-pool volume against the transaction_id within its own metadata.
After the initial 'size' usage is converted to extents, continue to use
only extents.
(In-release fix.)
Test -m0 passed with types.
Check --readahead and thins.
Use lv_is_pool() to detect both pool versions.
Make the dm_info type more clear.
Pass lvconvert_params as last arg.
Use struct initializer instead of memset().
Use log_error for real error.
Show some stats with 'lvs'.
Display the same info for an active cache volume and a cache-pool.
data% - #used cache blocks/#total cache blocks
meta% - #used metadata blocks/#total metadata blocks
copy% - #dirty/#used cache blocks
TODO: maybe there is a better mapping
- should be seen as first-try-and-see.
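A sketch of those three percentages computed from struct dm_status_cache
(field names per libdevmapper; treat the snippet as illustrative):

  struct dm_status_cache *s;  /* filled by dm_get_status_cache() */
  /* ... obtain status from the kernel target ... */
  dm_percent_t data_pct = dm_make_percent(s->used_blocks, s->total_blocks);
  dm_percent_t meta_pct = dm_make_percent(s->metadata_used_blocks,
                                          s->metadata_total_blocks);
  dm_percent_t copy_pct = dm_make_percent(s->dirty_blocks, s->used_blocks);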
Before we reuse a cache-pool we need to ensure the metadata volume
has a wiped header.
When the cache pool is unused, lvm2 code will internally
allow activating such a cache-pool.
The cache-pool is activated as a metadata LV, so lvm2 can easily
wipe such a volume before the cache-pool is reused.
Replace lv_cache_block_info() and lv_cache_policy_info()
with lv_cache_status(), which directly returns the
dm_status_cache structure together with some calculated
values.
After use, the mem pool stored inside the lv_status_cache structure
needs to be destroyed.
Add init of no_open_count into _setup_task().
Report the problem as a warning (it cannot happen anyway).
Also drop some duplicated debug messages - we have already
printed the info about the operation, so make the log a bit shorter.
Use standard 'virtual_extents' naming.
Move virtual_size into 'lcp' struct out of lvcreate_params.
The lib takes sizes in extents - do the same for pool_metadata.
Add a function for wiping a cache pool volume.
Only an unused cache-pool can be wiped.
The tool will use internal activation of an unused cache pool to
clear the metadata area before the next use of the cache-pool.
So allow deactivation of an unused pool in case some error
happened and we were not able to deactivate the pool
right after the metadata wipe.