author | Jan Kara <jack@suse.cz> | 2013-10-04 09:29:12 -0400
---|---|---
committer | Ben Hutchings <ben@decadent.org.uk> | 2014-01-03 04:33:19 +0000
commit | 434700a29d1e1b0a31dd03dc8b5d0f5424a09bca (patch) |
tree | 73639844c3083d8c41f9ab7076e935d789db9243 /drivers/infiniband/hw |
parent | 0ecfc95da4f159e3a773bf1c0554e5f064bda491 (diff) |
download | linux-rt-434700a29d1e1b0a31dd03dc8b5d0f5424a09bca.tar.gz |
IB/qib: Convert qib_user_sdma_pin_pages() to use get_user_pages_fast()
commit 603e7729920e42b3c2f4dbfab9eef4878cb6e8fa upstream.
qib_user_sdma_queue_pkts() gets called with mmap_sem held for
writing. Except for get_user_pages() deep down in
qib_user_sdma_pin_pages(), we don't seem to need mmap_sem at all.
Even more interestingly, qib_user_sdma_queue_pkts() (and also
qib_user_sdma_coalesce(), called somewhat later) calls
copy_from_user(), which can hit a page fault, and we then deadlock
trying to get mmap_sem while handling that fault.
So just make qib_user_sdma_pin_pages() use get_user_pages_fast() and
leave the mmap_sem locking to the mm layer.
This deadlock has actually been observed in the wild when the node
is under memory pressure.
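
For illustration, here is a minimal, hypothetical kernel-style sketch of the
locking pattern this patch removes (the function and buffer names below are
invented for the example, not taken from the qib driver): holding mmap_sem
for writing while calling copy_from_user() can self-deadlock, because the
page fault taken inside copy_from_user() tries to acquire the same mmap_sem
for reading in the same task.

/*
 * Hypothetical sketch of the deadlock pattern, assuming a 3.2-era kernel
 * where mm->mmap_sem is a plain rw_semaphore.  Not actual qib driver code.
 */
#include <linux/mm.h>
#include <linux/sched.h>
#include <linux/uaccess.h>

static int sketch_copy_under_mmap_sem(void __user *uptr, void *kbuf, size_t len)
{
	int ret = 0;

	down_write(&current->mm->mmap_sem);	/* write lock held ... */

	/*
	 * If the user page backing uptr is not resident, copy_from_user()
	 * faults; the fault handler then does
	 * down_read(&current->mm->mmap_sem) in this same task and blocks
	 * forever behind our own write lock.
	 */
	if (copy_from_user(kbuf, uptr, len))
		ret = -EFAULT;

	up_write(&current->mm->mmap_sem);
	return ret;
}

get_user_pages_fast() sidesteps the problem because the caller does not hold
mmap_sem at all; the mm layer takes whatever locking it needs internally.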
Reviewed-by: Mike Marciniszyn <mike.marciniszyn@intel.com>
Signed-off-by: Jan Kara <jack@suse.cz>
Signed-off-by: Roland Dreier <roland@purestorage.com>
[bwh: Backported to 3.2:
- Adjust context
- Adjust indentation and nr_pages argument in qib_user_sdma_pin_pages()]
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
Diffstat (limited to 'drivers/infiniband/hw')
-rw-r--r-- | drivers/infiniband/hw/qib/qib_user_sdma.c | 6
1 file changed, 1 insertion(+), 5 deletions(-)
diff --git a/drivers/infiniband/hw/qib/qib_user_sdma.c b/drivers/infiniband/hw/qib/qib_user_sdma.c
index 82442085cbe6..573b4601d5b9 100644
--- a/drivers/infiniband/hw/qib/qib_user_sdma.c
+++ b/drivers/infiniband/hw/qib/qib_user_sdma.c
@@ -284,8 +284,7 @@ static int qib_user_sdma_pin_pages(const struct qib_devdata *dd,
 	int j;
 	int ret;
 
-	ret = get_user_pages(current, current->mm, addr,
-			     npages, 0, 1, pages, NULL);
+	ret = get_user_pages_fast(addr, npages, 0, pages);
 
 	if (ret != npages) {
 		int i;
@@ -830,10 +829,7 @@ int qib_user_sdma_writev(struct qib_ctxtdata *rcd,
 	while (dim) {
 		const int mxp = 8;
 
-		down_write(&current->mm->mmap_sem);
 		ret = qib_user_sdma_queue_pkts(dd, pq, &list, iov, dim, mxp);
-		up_write(&current->mm->mmap_sem);
-
 		if (ret <= 0)
 			goto done_unlock;
 		else {
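
For context, a simplified sketch of what the converted pinning path looks
like after this change (paraphrased from the hunk above plus the usual
cleanup on partial pinning; the helper name and the exact error value are
illustrative, not the literal driver code):

/*
 * Illustrative only.  Assumes the 3.2-era signature
 * get_user_pages_fast(start, nr_pages, write, pages), as used by the
 * backported hunk above.
 */
#include <linux/mm.h>

static int sketch_pin_user_pages(unsigned long addr, int npages,
				 struct page **pages)
{
	int ret;

	/* write == 0 matches the converted call in the hunk above */
	ret = get_user_pages_fast(addr, npages, 0, pages);
	if (ret != npages) {
		int i;

		/* drop the references we did manage to take */
		for (i = 0; i < ret; i++)
			put_page(pages[i]);
		return -ENOMEM;		/* illustrative error value */
	}
	return 0;
}

Note that no mmap_sem acquisition appears in the caller any more, which is
exactly why the down_write()/up_write() pair in qib_user_sdma_writev() could
be deleted.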