author     Juergen Gross <jgross@suse.com>    2018-10-01 07:57:42 +0200
committer  Juergen Gross <jgross@suse.com>    2018-10-24 10:18:04 +0200
commit     a856531951dc8094359dfdac21d59cee5969c18e (patch)
tree       3e87ca7c1afe1453d3aaac82b1ef4eae3114bf88 /arch/x86
parent     2ac2a7d4d9ff4e01e36f9c3d116582f6f655ab47 (diff)
download   linux-next-a856531951dc8094359dfdac21d59cee5969c18e.tar.gz
xen: make xen_qlock_wait() nestable
xen_qlock_wait() isn't safe for nested calls due to interrupts. A call
of xen_qlock_kick() might be ignored if a deeper nesting level was
active right before the call of xen_poll_irq() (a simplified model of
this sequence follows the diagram):
CPU 1:                                  CPU 2:
spin_lock(lock1)
                                        spin_lock(lock1)
                                        -> xen_qlock_wait()
                                           -> xen_clear_irq_pending()
                                              Interrupt happens
spin_unlock(lock1)
-> xen_qlock_kick(CPU 2)
spin_lock_irqsave(lock2)
                                        spin_lock_irqsave(lock2)
                                        -> xen_qlock_wait()
                                           -> xen_clear_irq_pending()
                                              clears kick for lock1
                                           -> xen_poll_irq()
spin_unlock_irqrestore(lock2)
-> xen_qlock_kick(CPU 2)
                                              wakes up
                                        spin_unlock_irqrestore(lock2)
                                        IRET
                                          resumes in xen_qlock_wait()
                                          -> xen_poll_irq()
                                          never wakes up
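
The kick is a one-shot event on the waiter's per-CPU lock_kicker_irq
channel: whichever xen_clear_irq_pending() runs next consumes it, no
matter which lock the kick was meant for. The following single-threaded
user-space sketch models that with one bool standing in for the event
channel; the function names mirror the kernel's, but this is only an
illustration of the sequence above, not the real implementation.

/*
 * Single-threaded model of the lost-wakeup sequence above.  One bool
 * stands in for CPU 2's per-CPU event channel; a kick is a one-shot
 * event, so clearing it while pending loses the wakeup for good.
 */
#include <stdbool.h>
#include <stdio.h>

static bool pending;			/* models xen_test_irq_pending() */

static void xen_qlock_kick(void)	{ pending = true; }
static void xen_clear_irq_pending(void)	{ pending = false; }

/*
 * xen_poll_irq() makes progress only if the event is pending; the event
 * is consumed on wakeup (simplified: in the kernel the dummy irq
 * handler does the clearing once interrupts are re-enabled).
 */
static void xen_poll_irq(const char *caller)
{
	if (pending) {
		printf("%s: xen_poll_irq() wakes up\n", caller);
		pending = false;
	} else {
		printf("%s: xen_poll_irq() never wakes up\n", caller);
	}
}

int main(void)
{
	xen_clear_irq_pending();	/* CPU 2: outer wait for lock1 */
	xen_qlock_kick();		/* CPU 1: unlocks lock1, kicks CPU 2 */

	/* Interrupt hits CPU 2 before the outer xen_poll_irq() runs: */
	xen_clear_irq_pending();	/* nested wait for lock2 eats the lock1 kick */
	xen_qlock_kick();		/* CPU 1: unlocks lock2, kicks CPU 2 */
	xen_poll_irq("nested wait");	/* wakes up; lock2 taken, released, IRET */

	xen_poll_irq("outer wait");	/* lock1 kick is gone: hangs forever */
	return 0;
}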
The solution is to disable interrupts in xen_qlock_wait() and not to
poll for the irq at all in case xen_qlock_wait() is called in NMI
context, since local_irq_save() cannot keep an NMI from nesting.
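
For reference, stitching the '+' lines of the hunk below back together,
xen_qlock_wait() ends up as follows (reassembled here for readability;
the diff further down is authoritative):

static void xen_qlock_wait(u8 *byte, u8 val)
{
	unsigned long flags;
	int irq = __this_cpu_read(lock_kicker_irq);

	/* If kicker interrupts not initialized yet, just spin */
	if (irq == -1 || in_nmi())
		return;

	/* Guard against reentry. */
	local_irq_save(flags);

	/* If irq pending already clear it. */
	if (xen_test_irq_pending(irq)) {
		xen_clear_irq_pending(irq);
	} else if (READ_ONCE(*byte) == val) {
		/* Block until irq becomes pending (or a spurious wakeup) */
		xen_poll_irq(irq);
	}

	local_irq_restore(flags);
}

Interrupts stay disabled from before xen_clear_irq_pending() until
local_irq_restore(), so no interrupt-driven nested xen_qlock_wait() can
slip in and consume the kick. Only an NMI could still nest, which is
why NMI-context callers now fall back to plain spinning.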
Cc: stable@vger.kernel.org
Cc: Waiman.Long@hp.com
Cc: peterz@infradead.org
Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
Signed-off-by: Juergen Gross <jgross@suse.com>
Diffstat (limited to 'arch/x86')
-rw-r--r--  arch/x86/xen/spinlock.c | 24
1 file changed, 10 insertions(+), 14 deletions(-)
diff --git a/arch/x86/xen/spinlock.c b/arch/x86/xen/spinlock.c
index 290a69ec7d4d..441c88262169 100644
--- a/arch/x86/xen/spinlock.c
+++ b/arch/x86/xen/spinlock.c
@@ -39,29 +39,25 @@ static void xen_qlock_kick(int cpu)
  */
 static void xen_qlock_wait(u8 *byte, u8 val)
 {
+	unsigned long flags;
 	int irq = __this_cpu_read(lock_kicker_irq);
 
 	/* If kicker interrupts not initialized yet, just spin */
-	if (irq == -1)
+	if (irq == -1 || in_nmi())
 		return;
 
-	/* If irq pending already clear it and return. */
+	/* Guard against reentry. */
+	local_irq_save(flags);
+
+	/* If irq pending already clear it. */
 	if (xen_test_irq_pending(irq)) {
 		xen_clear_irq_pending(irq);
-		return;
+	} else if (READ_ONCE(*byte) == val) {
+		/* Block until irq becomes pending (or a spurious wakeup) */
+		xen_poll_irq(irq);
 	}
 
-	if (READ_ONCE(*byte) != val)
-		return;
-
-	/*
-	 * If an interrupt happens here, it will leave the wakeup irq
-	 * pending, which will cause xen_poll_irq() to return
-	 * immediately.
-	 */
-
-	/* Block until irq becomes pending (or perhaps a spurious wakeup) */
-	xen_poll_irq(irq);
+	local_irq_restore(flags);
 }
 
 static irqreturn_t dummy_handler(int irq, void *dev_id)