author    Arjun Roy <arjunroy@google.com>    2020-03-27 09:02:41 +1100
committer Stephen Rothwell <sfr@canb.auug.org.au>    2020-03-27 17:45:51 +1100
commit    7c08d73739eb5b48ff0d12006b5504fb93aa6819 (patch)
tree      04d6562d70b390cb7cc56c9501e17c5445f2aebc /mm
parent    7b0b8ab44ed454c4aac67c6aaa08d1bc0e7c2570 (diff)
download  linux-next-7c08d73739eb5b48ff0d12006b5504fb93aa6819.tar.gz
add missing page_count() check to vm_insert_pages().
Add the missing page_count() check to vm_insert_pages(), specifically inside
insert_page_in_batch_locked(). This was accidentally forgotten in the
original patchset.

See: https://marc.info/?l=linux-mm&m=158156166403807&w=2

The intention of this patchset is to reduce atomic ops for TCP zerocopy
receive, which normally hits the same spinlock multiple times
consecutively.

Link: http://lkml.kernel.org/r/20200214005929.104481-1-arjunroy.kdev@gmail.com
Signed-off-by: Arjun Roy <arjunroy@google.com>
Cc: Eric Dumazet <edumazet@google.com>
Cc: Soheil Hassas Yeganeh <soheil@google.com>
Cc: David Miller <davem@davemloft.net>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Jason Gunthorpe <jgg@ziepe.ca>
Cc: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
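For context, vm_insert_pages() maps a batch of pages into a user VMA while
taking the PTE spinlock once per batch rather than once per page, which is
where the atomic-op saving for TCP zerocopy receive comes from. Below is a
minimal sketch of a hypothetical caller, assuming the vm_insert_pages()
signature this series introduces; map_page_batch() and its error policy are
illustrative, not part of the patch:

    #include <linux/mm.h>

    /*
     * Hypothetical helper: map 'nr' caller-referenced pages into 'vma'
     * starting at 'addr'. vm_insert_pages() batches the PTE-lock
     * acquisition across the array instead of locking per page.
     */
    static int map_page_batch(struct vm_area_struct *vma, unsigned long addr,
                              struct page **pages, unsigned long nr)
    {
            unsigned long remaining = nr;   /* in: count; out: pages not mapped */
            int err;

            err = vm_insert_pages(vma, addr, pages, &remaining);
            if (err)
                    return err;
            /* Illustrative policy: treat a partial insert as retryable. */
            return remaining ? -EAGAIN : 0;
    }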
Diffstat (limited to 'mm')
-rw-r--r--    mm/memory.c    5
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/mm/memory.c b/mm/memory.c
index bb0d3ebcde1f..6c1bd65af03f 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -1475,8 +1475,11 @@ static int insert_page_into_pte_locked(struct mm_struct *mm, pte_t *pte,
static int insert_page_in_batch_locked(struct mm_struct *mm, pmd_t *pmd,
unsigned long addr, struct page *page, pgprot_t prot)
{
- const int err = validate_page_before_insert(page);
+ int err;
+ if (!page_count(page))
+ return -EINVAL;
+ err = validate_page_before_insert(page);
return err ? err : insert_page_into_pte_locked(
mm, pte_offset_map(pmd, addr), addr, page, prot);
}
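The added check mirrors the !page_count() test that vm_insert_page() already
performs: only pages the caller actually holds a reference on may be mapped.
A hedged illustration of the invariant the check enforces, using standard
kernel APIs; alloc_insertable_page() is a made-up name for illustration:

    #include <linux/gfp.h>
    #include <linux/mm.h>

    /*
     * Illustrative only: a page passed to the insert path must carry a
     * reference. A freshly allocated page has page_count() == 1, so it
     * passes the new check; a page with a zero refcount (e.g. already
     * freed) is now rejected with -EINVAL before any PTE is touched.
     */
    static struct page *alloc_insertable_page(void)
    {
            struct page *page = alloc_page(GFP_KERNEL);

            if (page)
                    WARN_ON(!page_count(page));     /* we hold the reference */
            return page;
    }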