25 Aug, 2021 2 commits
    • locking/rtmutex: Dequeue waiter on ww_mutex deadlock · 37e8abff
      Thomas Gleixner authored
      The rt_mutex based ww_mutex variant queues the new waiter first in the
      lock's rbtree before evaluating the ww_mutex specific conditions which
      might decide that the waiter should back out. This check and conditional
      exit happens before the waiter is enqueued into the PI chain.
      
      The failure handling at the call site assumes that the waiter, if it is the
      topmost waiter on the lock, is queued in the PI chain, and then proceeds to
      adjust the unmodified PI chain, which results in RB tree corruption.
      
      Dequeue the waiter from the lock waiter list in the ww_mutex error exit
      path to prevent this.
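      
      A minimal sketch of the fix's shape, reconstructed from the description
      above (the surrounding function and the exact helpers used here, e.g.
      __ww_mutex_add_waiter() and rt_mutex_dequeue(), are assumptions about the
      rtmutex internals, not the verbatim diff):
      
      	if (build_ww_mutex() && ww_ctx) {
      		struct rt_mutex *rtm;
      
      		/* ww_mutex specific check: should this waiter back out? */
      		rtm = container_of(lock, struct rt_mutex, rtmutex);
      		res = __ww_mutex_add_waiter(waiter, rtm, ww_ctx);
      		if (res) {
      			/*
      			 * The waiter is already queued in lock->waiters but
      			 * not yet in the PI chain. Dequeue it here so the
      			 * caller's failure path never "adjusts" a PI chain
      			 * that was never modified for this waiter.
      			 */
      			raw_spin_lock(&task->pi_lock);
      			rt_mutex_dequeue(lock, waiter);
      			task->pi_blocked_on = NULL;
      			raw_spin_unlock(&task->pi_lock);
      			return res;
      		}
      	}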
      
      Fixes: add46132 ("locking/rtmutex: Extend the rtmutex core to support ww_mutex")
      Reported-by: Sebastian Siewior <bigeasy@linutronix.de>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Link: https://lkml.kernel.org/r/20210825102454.042280541@linutronix.de
    • locking/rtmutex: Dont dereference waiter lockless · c3123c43
      Thomas Gleixner authored
      The new rt_mutex_spin_on_owner() loop checks whether the spinning waiter is
      still the top waiter on the lock by utilizing rt_mutex_top_waiter(), which
      is broken because that function contains a sanity check which dereferences
      the top waiter pointer to check whether the waiter belongs to the
      lock. That's wrong in the lockless spinwait case:
      
       CPU 0							CPU 1
       rt_mutex_lock(lock)					rt_mutex_lock(lock);
         queue(waiter0)
         waiter0 == rt_mutex_top_waiter(lock)
         rt_mutex_spin_on_owner(lock, waiter0) {		queue(waiter1)
         					 		waiter1 == rt_mutex_top_waiter(lock)
         							...
           top_waiter = rt_mutex_top_waiter(lock)
             leftmost = rb_first_cached(&lock->waiters);
      							-> signal
      							dequeue(waiter1)
      							destroy(waiter1)
             w = rb_entry(leftmost, ....)
             BUG_ON(w->lock != lock)	 <- UAF
      
      The BUG_ON() is correct for the case where the caller holds lock->wait_lock
      which guarantees that the leftmost waiter entry cannot vanish. For the
      lockless spinwait case it's broken.
      
      Create a new helper function which avoids the pointer dereference and just
      compares the leftmost entry pointer with current's waiter pointer to
      validate that current is still eligible for spinning.
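      
      A minimal sketch of such a helper, assuming the rbtree-cached waiter list
      and the waiter/field names of the rtmutex core (rt_mutex_base, tree_entry)
      as described above; an illustration, not the verbatim patch:
      
      	static __always_inline bool
      	rt_mutex_waiter_is_top_waiter(struct rt_mutex_base *lock,
      				      struct rt_mutex_waiter *waiter)
      	{
      		struct rb_node *leftmost = rb_first_cached(&lock->waiters);
      
      		/*
      		 * rb_entry() is plain pointer arithmetic; nothing is loaded
      		 * from the entry itself, so a concurrently dequeued and
      		 * freed top waiter cannot cause a use-after-free here.
      		 */
      		return rb_entry(leftmost, struct rt_mutex_waiter, tree_entry) == waiter;
      	}
      
      The spinwait loop can then bail out as soon as this returns false, without
      ever dereferencing the leftmost entry.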
      
      Fixes: 992caf7f ("locking/rtmutex: Add adaptive spinwait mechanism")
      Reported-by: Sebastian Siewior <bigeasy@linutronix.de>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Link: https://lkml.kernel.org/r/20210825102453.981720644@linutronix.de