| | | |
|---|---|---|
| author | Christoph Lameter <cl@linux.com> | 2013-02-13 17:12:05 +0100 |
| committer | Steven Rostedt <rostedt@goodmis.org> | 2013-04-24 21:37:45 -0400 |
| commit | a59ac04e27f7a509883b80b24fb64cb2e68f71cc (patch) | |
| tree | f6179f9900811f156f3412422ffb2b8a571cd751 | |
| parent | d04e8f919b8cc04d7f0a2d7a815e6084bf2f74c8 (diff) | |
| download | linux-rt-a59ac04e27f7a509883b80b24fb64cb2e68f71cc.tar.gz | |
FIX [2/2] slub: Tid must be retrieved from the percpu area of the current processor
As Steven Rostedt has pointed out, rescheduling could occur on a different
processor after the determination of the per-cpu pointer and before the
tid is retrieved. This could result in allocation from the wrong node in
slab_alloc().

The effect is much more severe in slab_free(), where we could free to the
freelist of the wrong page.

The window for something like that occurring is pretty small, but it is
possible.
Signed-off-by: Christoph Lameter <cl@linux.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Pekka Enberg <penberg@kernel.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
-rw-r--r-- | mm/slub.c | 12 |
1 file changed, 9 insertions(+), 3 deletions(-)
```diff
diff --git a/mm/slub.c b/mm/slub.c
index 08eb4c1f94bf..78d27566f9b9 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -2286,13 +2286,18 @@ static __always_inline void *slab_alloc(struct kmem_cache *s,
 		return NULL;
 
 redo:
-
 	/*
 	 * Must read kmem_cache cpu data via this cpu ptr. Preemption is
 	 * enabled. We may switch back and forth between cpus while
 	 * reading from one cpu area. That does not matter as long
 	 * as we end up on the original cpu again when doing the cmpxchg.
+	 *
+	 * Preemption is disabled for the retrieval of the tid because that
+	 * must occur from the current processor. We cannot allow rescheduling
+	 * on a different processor between the determination of the pointer
+	 * and the retrieval of the tid.
 	 */
+	preempt_disable();
 	c = __this_cpu_ptr(s->cpu_slab);
 
 	/*
@@ -2302,7 +2307,7 @@ redo:
 	 * linked list in between.
 	 */
 	tid = c->tid;
-	barrier();
+	preempt_enable();
 
 	object = c->freelist;
 	if (unlikely(!object || !node_match(c, node)))
@@ -2544,10 +2549,11 @@ redo:
 	 * data is retrieved via this pointer. If we are on the same cpu
 	 * during the cmpxchg then the free will succedd.
 	 */
+	preempt_disable();
 	c = __this_cpu_ptr(s->cpu_slab);
 
 	tid = c->tid;
-	barrier();
+	preempt_enable();
 
 	if (likely(page == c->page)) {
 		set_freepointer(s, object, c->freelist);
```