author     Julia Kreger <juliaashleykreger@gmail.com>  2021-09-13 09:46:08 -0700
committer  Julia Kreger <juliaashleykreger@gmail.com>  2021-09-13 10:05:37 -0700
commit     4fc1abf91fe622b3eca52547dca086396acf9436 (patch)
tree       fbc6b126bf8909e3300a6fb4770874c626648cb9 /ironic/drivers/modules/irmc
parent     fa1c60cbce725bb113472a79d9d594e0f6584f94 (diff)
download   ironic-4fc1abf91fe622b3eca52547dca086396acf9436.tar.gz
Fix driver task pattern to reduce periodic db load
Previously, a pattern of periodic tasks existed where nodes, and in many cases all nodes not actively locked and not in maintenance, were pulled in by a periodic task. These periodic tasks would then create task objects, which generated additional database queries in order to populate each task object. With the task object populated, the driver would then evaluate whether the node used the driver interface in question and *then* evaluate whether any work had to be performed. However, the field indicating whether work needed to be performed was often already queried from the database in the very first query that generated the list of nodes to evaluate.

In essence, we've moved this check up in the sequence so we evaluate the field in question prior to creating the task. This applies potentially across every conductor, depending on the query and ultimately which drivers are enabled, and saves potentially hundreds of thousands of needless database queries per day on a medium-sized deployment, depending on which drivers and driver interfaces are in use.

Change-Id: I409e87de2808d442d39e4d0ae6e995668230cbba
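The shape of the change is easy to see outside of Ironic. Below is a minimal, runnable sketch in which fetch_node_list and acquire_task are hypothetical stand-ins for manager.iter_nodes() and task_manager.acquire(), with a counter standing in for actual database load; it illustrates the pattern, not the driver code itself.

    # Illustrative sketch only: fetch_node_list, acquire_task, and the node
    # data below are hypothetical stand-ins used to show why filtering on the
    # already-fetched field avoids per-node task-object DB queries.

    db_queries = 0

    def fetch_node_list():
        # One query returns (uuid, raid_config) for every candidate node,
        # so raid_config is already in hand before any task is created.
        global db_queries
        db_queries += 1
        return [('node-1', None),                                # nothing to do
                ('node-2', {'fgi_status': 'done'}),              # nothing to do
                ('node-3', {'logical_disks': [{'size_gb': 100}]})]  # needs work

    def acquire_task(node_uuid):
        # Creating a task object populates node/driver info: extra queries.
        global db_queries
        db_queries += 3
        return object()

    # Post-patch ordering: check the already-fetched field first, and create
    # a task only when there is actually work to do.
    for node_uuid, raid_config in fetch_node_list():
        if not raid_config or raid_config.get('fgi_status'):
            continue                    # skipped with zero extra queries
        task = acquire_task(node_uuid)  # only node-3 reaches this point
        # ... query the BMC and update raid_config here ...

    print(db_queries)

Running this prints 4; the pre-patch ordering, which acquired a task for every node before checking raid_config, would have cost 10 queries for the same three nodes.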
Diffstat (limited to 'ironic/drivers/modules/irmc')
-rw-r--r--  ironic/drivers/modules/irmc/raid.py  9
1 file changed, 7 insertions(+), 2 deletions(-)
diff --git a/ironic/drivers/modules/irmc/raid.py b/ironic/drivers/modules/irmc/raid.py
index 901695632..34d1c3f38 100644
--- a/ironic/drivers/modules/irmc/raid.py
+++ b/ironic/drivers/modules/irmc/raid.py
@@ -434,6 +434,13 @@ class IRMCRAID(base.RAIDInterface):
         node_list = manager.iter_nodes(fields=fields, filters=filters)
         for (node_uuid, driver, conductor_group, raid_config) in node_list:
             try:
+                # NOTE(TheJulia): Evaluate based upon presence of raid
+                # configuration before triggering a task, as opposed to after
+                # so we don't create excess node task objects with related
+                # DB queries.
+                if not raid_config or raid_config.get('fgi_status'):
+                    continue
+
                 lock_purpose = 'checking async RAID configuration tasks'
                 with task_manager.acquire(context, node_uuid,
                                           purpose=lock_purpose,
@@ -444,8 +451,6 @@ class IRMCRAID(base.RAIDInterface):
                         continue
                     if task.node.target_raid_config is None:
                         continue
-                    if not raid_config or raid_config.get('fgi_status'):
-                        continue
                     task.upgrade_lock()
                     if node.provision_state != states.CLEANWAIT:
                         continue
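The second hunk also shows the conductor's shared-lock idiom that the relocated check now protects: the task is acquired with shared=True so the remaining checks stay read-only, and task.upgrade_lock() escalates to an exclusive lock only once work is confirmed. A toy, self-contained sketch of that idiom follows; the Task class here is a hypothetical stand-in, not Ironic's task_manager.

    import threading

    class Task:
        # Hypothetical stand-in for an Ironic task: starts shared
        # (read-only) and can be upgraded to exclusive on demand.
        _lock = threading.Lock()

        def __init__(self):
            self.shared = True

        def upgrade_lock(self):
            # Real Ironic blocks/retries until the node reservation is won;
            # a process-local lock mirrors the shape of that behavior.
            if self.shared:
                Task._lock.acquire()
                self.shared = False

        def release(self):
            if not self.shared:
                Task._lock.release()

    task = Task()              # shared: safe for cheap read-only checks
    needs_work = True          # e.g. raid_config shows FGI still running
    if needs_work:
        task.upgrade_lock()    # exclusive only when mutation is needed
        # ... update the node's raid_config under the exclusive lock ...
    task.release()

The design point is the same one the patch makes at the query level: defer the expensive step (an exclusive lock there, a task object here) until the cheap check has proven it necessary.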