1. 21 May, 2019 2 commits
    • locking/rwsem: Prevent decrement of reader count before increment · e85fab7c
      Waiman Long authored
      [ Upstream commit a9e9bcb4 ]
      
      During my rwsem testing, it was found that after a down_read(), the
      reader count may occasionally become 0 or even negative. Consequently,
      a writer may steal the lock at that time and execute in parallel with
      the reader, breaking the mutual exclusion guarantee of the write lock.
      In other words, both readers and a writer can become rwsem owners
      simultaneously.
      
      The current reader wakeup code does everything in one pass: it clears
      waiter->task and puts the waiters into wake_q before fully incrementing
      the reader count. Once waiter->task is cleared, the corresponding reader
      may see it, finish its critical section, and decrement the count on
      unlock before the count has been incremented. This is not a problem
      when there is only one reader to wake up, as the count has been
      pre-incremented by 1. With more than one reader to wake up, however,
      the count can drop to 0 or below and a writer can steal the lock.
      
      The wakeup was actually done in two passes before the following v4.9 commit:
      
        70800c3c ("locking/rwsem: Scan the wait_list for readers only once")
      
      To fix this problem, the wakeup is again done in two passes. In the
      first pass, the readers are collected and counted, and the reader
      count is fully incremented. In the second pass, waiter->task is
      cleared and the readers are put into wake_q to be woken up later,
      as sketched below.
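      A minimal sketch of the two-pass scheme (the function name
      rwsem_mark_wake_sketch and the exact bias accounting are illustrative
      assumptions, not the verbatim kernel code; the real logic lives in
      __rwsem_mark_wake() in kernel/locking/rwsem-xadd.c):

        /* Sketch only: simplified two-pass reader wakeup. */
        static void rwsem_mark_wake_sketch(struct rw_semaphore *sem,
                                           struct wake_q_head *wake_q)
        {
                struct rwsem_waiter *waiter, *tmp;
                long adjustment = 0;
                LIST_HEAD(wlist);

                /* Pass 1: collect the readers at the front of the wait
                 * list and count them; waiter->task is left untouched. */
                list_for_each_entry_safe(waiter, tmp, &sem->wait_list, list) {
                        if (waiter->type != RWSEM_WAITING_FOR_READ)
                                break;
                        adjustment += RWSEM_ACTIVE_READ_BIAS;
                        list_move_tail(&waiter->list, &wlist);
                }

                /* Every reader about to be woken is now reflected in the
                 * count, so a racing up_read() can no longer drive it to
                 * 0 or below and let a writer steal the lock. */
                atomic_long_add(adjustment, &sem->count);

                /* Pass 2: only now hand the lock to each reader. */
                list_for_each_entry_safe(waiter, tmp, &wlist, list) {
                        struct task_struct *tsk = waiter->task;

                        get_task_struct(tsk);   /* keep tsk alive past the store */
                        /* Once ->task is NULL the reader may return from
                         * down_read() and unlock at any time. */
                        smp_store_release(&waiter->task, NULL);
                        wake_q_add(wake_q, tsk); /* takes its own reference */
                        put_task_struct(tsk);
                }
        }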
      Signed-off-by: Waiman Long <longman@redhat.com>
      Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Davidlohr Bueso <dave@stgolabs.net>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Tim Chen <tim.c.chen@linux.intel.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: huang ying <huang.ying.caritas@gmail.com>
      Fixes: 70800c3c ("locking/rwsem: Scan the wait_list for readers only once")
      Link: http://lkml.kernel.org/r/20190428212557.13482-2-longman@redhat.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      Signed-off-by: Sasha Levin <sashal@kernel.org>
    • net: core: another layer of lists, around PF_MEMALLOC skb handling · fc7fab70
      Sasha Levin authored
      [ Upstream commit 78ed8cc25986ac5c21762eeddc1e86e94d422e36 ]
      
      This is the first example of a layer splitting the list (rather than
      merely taking individual packets off it). It involves a new list.h
      function, list_cut_before(), which is like list_cut_position() but
      cuts on the other side of the given entry.
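      For reference, a sketch of the new helper's behavior (this mirrors the
      upstream list.h implementation; the net/core skb-list call sites that
      motivated it are not shown here):

        /* Move the initial portion of @head, up to but excluding @entry,
         * onto @list; @entry remains the first element left on @head.
         * list_cut_position() instead cuts *after* its entry, so the
         * entry ends up as the last element of @list. */
        static inline void list_cut_before(struct list_head *list,
                                           struct list_head *head,
                                           struct list_head *entry)
        {
                if (head->next == entry) {
                        /* Nothing precedes @entry: @list becomes empty. */
                        INIT_LIST_HEAD(list);
                        return;
                }
                list->next = head->next;
                list->next->prev = list;
                list->prev = entry->prev;
                list->prev->next = list;
                head->next = entry;
                entry->prev = head;
        }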
      Signed-off-by: Edward Cree <ecree@solarflare.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      [sl: cut out non list.h bits, we only want list_cut_before]
      Signed-off-by: Sasha Levin <sashal@kernel.org>
  2. 16 May, 2019 38 commits