author     Huang Ying <ying.huang@intel.com>       2017-04-05 09:21:02 +1000
committer  Stephen Rothwell <sfr@canb.auug.org.au> 2017-04-05 09:21:02 +1000
commit     1f1b7dc4acb469c145631c136453e01d00a8a91f (patch)
tree       fd52a1f5d41f61cd3c556f5d6d051bdf4700d965 /mm/swapfile.c
parent     b988b7693347b8ea32bfc26b3d43e35451fe7429 (diff)
download   linux-next-1f1b7dc4acb469c145631c136453e01d00a8a91f.tar.gz
mm, swap: avoid lock swap_avail_lock when held cluster lock
The cluster lock protects swap_cluster_info and the corresponding elements of
swap_info_struct->swap_map[]. However, in scan_swap_map_slots(),
swap_avail_lock may currently be acquired while the cluster lock is held.
This does no good: it only makes the locking more complex and increases the
potential lock contention, because swap_info_struct->lock already protects
the data structures operated on in that code. Fix this by moving the
corresponding operations in scan_swap_map_slots() out of the cluster lock
(a simplified sketch of the before/after lock ordering follows the diff below).
Link: http://lkml.kernel.org/r/20170317064635.12792-3-ying.huang@intel.com
Signed-off-by: "Huang, Ying" <ying.huang@intel.com>
Acked-by: Tim Chen <tim.c.chen@intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Diffstat (limited to 'mm/swapfile.c')
-rw-r--r--  mm/swapfile.c  6
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/mm/swapfile.c b/mm/swapfile.c
index 42fd620dcf4c..53b5881ee0d6 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -672,6 +672,9 @@ checks:
 		else
 			goto done;
 	}
+	si->swap_map[offset] = usage;
+	inc_cluster_info_page(si, si->cluster_info, offset);
+	unlock_cluster(ci);
 
 	if (offset == si->lowest_bit)
 		si->lowest_bit++;
@@ -685,9 +688,6 @@ checks:
 			plist_del(&si->avail_list, &swap_avail_head);
 		spin_unlock(&swap_avail_lock);
 	}
-	si->swap_map[offset] = usage;
-	inc_cluster_info_page(si, si->cluster_info, offset);
-	unlock_cluster(ci);
 	si->cluster_next = offset + 1;
 	slots[n_ret++] = swp_entry(si->type, offset);
 
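The following is a minimal user-space sketch of the lock-ordering change, not
the kernel code: pthread mutexes stand in for swap_info_struct->lock, the
per-cluster lock, and swap_avail_lock, and all names (si_lock, cluster_lock,
avail_lock, swap_map, inuse_pages, on_avail_list, claim_slot_old/new, NSLOTS)
and the "last slot used" trigger are hypothetical simplifications chosen only
to show the nesting before and after the patch.

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t si_lock      = PTHREAD_MUTEX_INITIALIZER; /* stand-in for swap_info_struct->lock */
static pthread_mutex_t cluster_lock = PTHREAD_MUTEX_INITIALIZER; /* stand-in for the per-cluster lock   */
static pthread_mutex_t avail_lock   = PTHREAD_MUTEX_INITIALIZER; /* stand-in for swap_avail_lock        */

#define NSLOTS 64

static unsigned char swap_map[NSLOTS]; /* protected by cluster_lock */
static int inuse_pages;                /* protected by si_lock      */
static int on_avail_list = 1;          /* protected by avail_lock   */

/* Pre-patch shape: avail_lock ends up nested inside cluster_lock, even
 * though the bookkeeping it guards here is already serialized by si_lock. */
static void claim_slot_old(int offset)
{
	pthread_mutex_lock(&si_lock);
	pthread_mutex_lock(&cluster_lock);

	if (++inuse_pages == NSLOTS) {
		pthread_mutex_lock(&avail_lock);   /* taken while cluster_lock is held */
		on_avail_list = 0;
		pthread_mutex_unlock(&avail_lock);
	}

	swap_map[offset] = 1;                      /* the cluster_lock-protected update */
	pthread_mutex_unlock(&cluster_lock);
	pthread_mutex_unlock(&si_lock);
}

/* Post-patch shape: do the cluster_lock-protected update first, drop the
 * cluster lock, then run the si_lock-protected bookkeeping that may also
 * need avail_lock. */
static void claim_slot_new(int offset)
{
	pthread_mutex_lock(&si_lock);

	pthread_mutex_lock(&cluster_lock);
	swap_map[offset] = 1;
	pthread_mutex_unlock(&cluster_lock);

	if (++inuse_pages == NSLOTS) {
		pthread_mutex_lock(&avail_lock);   /* no longer nested under cluster_lock */
		on_avail_list = 0;
		pthread_mutex_unlock(&avail_lock);
	}

	pthread_mutex_unlock(&si_lock);
}

int main(void)
{
	claim_slot_old(0);
	claim_slot_new(1);
	printf("inuse_pages=%d on_avail_list=%d\n", inuse_pages, on_avail_list);
	return 0;
}

As in the patch, the observable state changes are identical in both versions;
only the nesting differs, so the cluster lock is never held across an
acquisition of the availability-list lock.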