path: root/mm/slub.c
Commit message | Author | Age | Files | Lines
* Merge branch 'akpm' (patches from Andrew) | Linus Torvalds | 2021-11-06 | 1 | -46/+63
|\
| * mm: remove HARDENED_USERCOPY_FALLBACK | Stephen Kitt | 2021-11-06 | 1 | -14/+0
| * mm, slub: use prefetchw instead of prefetch | Hyeonggon Yoo | 2021-11-06 | 1 | -1/+1
| * mm/slub: increase default cpu partial list sizes | Vlastimil Babka | 2021-11-06 | 1 | -4/+4
| * mm, slub: change percpu partial accounting from objects to pages | Vlastimil Babka | 2021-11-06 | 1 | -30/+59
| * slub: add back check for free nonslab objects | Kefeng Wang | 2021-11-06 | 1 | -1/+3
* | Merge tag 'printk-for-5.16' of git://git.kernel.org/pub/scm/linux/kernel/git/... | Linus Torvalds | 2021-11-02 | 1 | -2/+2
|\ \
| |/
|/|
| * vsprintf: Make %pGp print the hex value | Matthew Wilcox (Oracle) | 2021-10-27 | 1 | -2/+2
* | mm, slub: fix incorrect memcg slab count for bulk free | Miaohe Lin | 2021-10-18 | 1 | -1/+3
* | mm, slub: fix potential use-after-free in slab_debugfs_fops | Miaohe Lin | 2021-10-18 | 1 | -2/+4
* | mm, slub: fix potential memoryleak in kmem_cache_open() | Miaohe Lin | 2021-10-18 | 1 | -1/+1
* | mm, slub: fix mismatch between reconstructed freelist depth and cnt | Miaohe Lin | 2021-10-18 | 1 | -2/+9
* | mm, slub: fix two bugs in slab_debug_trace_open() | Miaohe Lin | 2021-10-18 | 1 | -1/+7
|/
* mm, slub: convert kmem_cpu_slab protection to local_lock | Vlastimil Babka | 2021-09-04 | 1 | -35/+111
* mm, slub: use migrate_disable() on PREEMPT_RT | Vlastimil Babka | 2021-09-04 | 1 | -9/+30
* mm, slub: protect put_cpu_partial() with disabled irqs instead of cmpxchg | Vlastimil Babka | 2021-09-04 | 1 | -37/+44
* mm, slub: make slab_lock() disable irqs with PREEMPT_RT | Vlastimil Babka | 2021-09-04 | 1 | -17/+41
* mm: slub: make object_map_lock a raw_spinlock_t | Sebastian Andrzej Siewior | 2021-09-04 | 1 | -3/+3
* mm: slub: move flush_cpu_slab() invocations __free_slab() invocations out of ... | Sebastian Andrzej Siewior | 2021-09-04 | 1 | -16/+78
* mm, slab: split out the cpu offline variant of flush_slab() | Vlastimil Babka | 2021-09-04 | 1 | -2/+10
* mm, slub: don't disable irqs in slub_cpu_dead() | Vlastimil Babka | 2021-09-04 | 1 | -5/+1
* mm, slub: only disable irq with spin_lock in __unfreeze_partials() | Vlastimil Babka | 2021-09-04 | 1 | -8/+4
* mm, slub: separate detaching of partial list in unfreeze_partials() from unfr... | Vlastimil Babka | 2021-09-04 | 1 | -22/+51
* mm, slub: detach whole partial list at once in unfreeze_partials() | Vlastimil Babka | 2021-09-04 | 1 | -3/+7
* mm, slub: discard slabs in unfreeze_partials() without irqs disabled | Vlastimil Babka | 2021-09-04 | 1 | -1/+2
* mm, slub: move irq control into unfreeze_partials() | Vlastimil Babka | 2021-09-04 | 1 | -6/+7
* mm, slub: call deactivate_slab() without disabling irqs | Vlastimil Babka | 2021-09-04 | 1 | -5/+19
* mm, slub: make locking in deactivate_slab() irq-safe | Vlastimil Babka | 2021-09-04 | 1 | -4/+5
* mm, slub: move reset of c->page and freelist out of deactivate_slab() | Vlastimil Babka | 2021-09-04 | 1 | -13/+18
* mm, slub: stop disabling irqs around get_partial() | Vlastimil Babka | 2021-09-04 | 1 | -14/+8
* mm, slub: check new pages with restored irqs | Vlastimil Babka | 2021-09-04 | 1 | -5/+3
* mm, slub: validate slab from partial list or page allocator before making it ... | Vlastimil Babka | 2021-09-04 | 1 | -8/+9
* mm, slub: restore irqs around calling new_slab() | Vlastimil Babka | 2021-09-04 | 1 | -6/+2
* mm, slub: move disabling irqs closer to get_partial() in ___slab_alloc() | Vlastimil Babka | 2021-09-04 | 1 | -9/+25
* mm, slub: do initial checks in ___slab_alloc() with irqs enabled | Vlastimil Babka | 2021-09-04 | 1 | -9/+45
* mm, slub: move disabling/enabling irqs to ___slab_alloc() | Vlastimil Babka | 2021-09-04 | 1 | -12/+24
* mm, slub: simplify kmem_cache_cpu and tid setup | Vlastimil Babka | 2021-09-04 | 1 | -13/+9
* mm, slub: restructure new page checks in ___slab_alloc() | Vlastimil Babka | 2021-09-04 | 1 | -6/+22
* mm, slub: return slab page from get_partial() and set c->page afterwards | Vlastimil Babka | 2021-09-04 | 1 | -10/+11
* mm, slub: dissolve new_slab_objects() into ___slab_alloc() | Vlastimil Babka | 2021-09-04 | 1 | -32/+18
* mm, slub: extract get_partial() from new_slab_objects() | Vlastimil Babka | 2021-09-04 | 1 | -6/+6
* mm, slub: remove redundant unfreeze_partials() from put_cpu_partial() | Vlastimil Babka | 2021-09-03 | 1 | -7/+0
* mm, slub: don't disable irq for debug_check_no_locks_freed() | Vlastimil Babka | 2021-09-03 | 1 | -13/+1
* mm, slub: allocate private object map for validate_slab_cache() | Vlastimil Babka | 2021-09-03 | 1 | -9/+15
* mm, slub: allocate private object map for debugfs listings | Vlastimil Babka | 2021-09-03 | 1 | -15/+29
* mm, slub: don't call flush_all() from slab_debug_trace_open() | Vlastimil Babka | 2021-09-03 | 1 | -3/+0
* mm: slub: fix slub_debug disabling for list of slabs | Vlastimil Babka | 2021-08-13 | 1 | -5/+8
* slub: fix kmalloc_pagealloc_invalid_free unit test | Shakeel Butt | 2021-08-13 | 1 | -4/+4
* kasan, slub: reset tag when printing address | Kuan-Ying Lee | 2021-08-13 | 1 | -2/+2
* slub: fix unreclaimable slab stat for bulk free | Shakeel Butt | 2021-07-30 | 1 | -10/+12