1. 11 Dec, 2011 5 commits
    • rcu: Track idleness independent of idle tasks · 9b2e4f18
      Paul E. McKenney authored
      Earlier versions of RCU used the scheduling-clock tick to detect idleness
      by checking for the idle task, but handled idleness differently for
      CONFIG_NO_HZ=y.  However, there are now a number of uses of RCU read-side
      critical sections in the idle task, for example, for tracing.  A more
      fine-grained detection of idleness is therefore required.
      
      This commit presses the old dyntick-idle code into full-time service,
      so that rcu_idle_enter(), previously known as rcu_enter_nohz(), is
      always invoked at the beginning of an idle loop iteration.  Similarly,
      rcu_idle_exit(), previously known as rcu_exit_nohz(), is always invoked
      at the end of an idle-loop iteration.  This allows the idle task to
      use RCU everywhere except between consecutive rcu_idle_enter() and
      rcu_idle_exit() calls, in turn allowing architecture maintainers to
      specify exactly where in the idle loop RCU may be used.
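      
      As a hedged illustration, a simplified architecture idle loop might
      bracket its low-power wait as follows (cpu_idle_sketch() and the use
      of arch_safe_halt() are illustrative stand-ins, not any particular
      architecture's actual loop):
      
          static void cpu_idle_sketch(void)
          {
                  while (1) {
                          while (!need_resched()) {
                                  rcu_idle_enter();  /* RCU treats CPU as idle. */
                                  /* No RCU read-side critical sections here. */
                                  arch_safe_halt();  /* Wait for an interrupt. */
                                  rcu_idle_exit();   /* RCU usable, e.g. tracing. */
                          }
                          schedule();
                  }
          }
      
      Tracing hooks elsewhere in the idle loop may then use RCU freely;
      only the region between the two calls is off limits.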
      
      Because some of the userspace upcall uses can result in what looks
      to RCU like half of an interrupt, it is not possible to expect that
      the irq_enter() and irq_exit() hooks will give exact counts.  This
      patch therefore expands the ->dynticks_nesting counter to 64 bits
      and uses two separate bitfields to count process/idle transitions
      and interrupt entry/exit transitions.  It is presumed that userspace
      upcalls do not happen in the idle loop or from usermode execution
      (though usermode might do a system call that results in an upcall).
      The counter is hard-reset on each process/idle transition, which
      prevents the interrupt entry/exit error from accumulating.  Overflow
      is avoided by the 64-bitness of the ->dynticks_nesting counter.
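      
      A hedged sketch of the counter scheme just described; the constant
      and field names here are illustrative rather than the kernel's
      actual identifiers:
      
          /* Large process-level bias in the upper bits; low bits count irqs. */
          #define TASK_NEST_BIAS  (LLONG_MAX / 2 - 1)
      
          struct dynticks_sketch {
                  long long dynticks_nesting;  /* 64 bits: cannot overflow. */
          };
      
          static void idle_enter_sketch(struct dynticks_sketch *d)
          {
                  d->dynticks_nesting = 0;  /* Hard reset discards irq miscount. */
          }
      
          static void idle_exit_sketch(struct dynticks_sketch *d)
          {
                  d->dynticks_nesting = TASK_NEST_BIAS;  /* Hard reset: non-idle. */
          }
      
          static void irq_enter_sketch(struct dynticks_sketch *d)
          {
                  d->dynticks_nesting++;  /* Inexact under half-interrupts... */
          }
      
          static void irq_exit_sketch(struct dynticks_sketch *d)
          {
                  d->dynticks_nesting--;  /* ...but bounded between hard resets. */
          }
      
      Because the bias dwarfs any plausible interrupt-nesting depth, a
      bounded miscount in the low bits can never make a non-idle CPU
      appear idle before the next hard reset.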
      
      This commit also adds warnings if a non-idle task asks RCU to enter
      idle state (these checks will need some adjustment before applying
      Frederic's OS-jitter patches, http://lkml.org/lkml/2011/10/7/246).
      In addition, validation of ->dynticks and ->dynticks_nesting is added.
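      
      The added sanity checking might look roughly like the following (a
      sketch only; the exact checks and identifiers in the commit differ):
      
          static void rcu_idle_enter_check_sketch(long long oldval)
          {
                  /* Warn if a non-idle task claims this CPU is going idle. */
                  WARN_ON_ONCE(!idle_cpu(smp_processor_id()));
                  /* The pre-reset nesting value must be at process level. */
                  WARN_ON_ONCE(oldval <= 0);
          }
      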
      Signed-off-by: Paul E. McKenney <paul.mckenney@linaro.org>
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Reviewed-by: Josh Triplett <josh@joshtriplett.org>
    • lockdep: Update documentation for lock-class leak detection · b804cb9e
      Paul E. McKenney authored
      There are a number of bugs that can leak or overuse lock classes,
      which can cause the maximum number of lock classes (currently 8191)
      to be exceeded.  However, the documentation does not tell you how to
      track down these problems.  This commit addresses this shortcoming.
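      
      As a hedged illustration of the kind of tracking the documentation
      now describes, one can watch the class count grow over time; this
      userspace sketch assumes CONFIG_LOCKDEP and the usual
      /proc/lockdep_stats layout:
      
          #include <stdio.h>
          #include <string.h>
      
          /* Print the "lock-classes" line, e.g. "lock-classes: 748 [max: 8191]". */
          int main(void)
          {
                  char line[256];
                  FILE *f = fopen("/proc/lockdep_stats", "r");
      
                  if (!f) {
                          perror("/proc/lockdep_stats");
                          return 1;
                  }
                  while (fgets(line, sizeof(line), f))
                          if (strstr(line, "lock-classes"))
                                  fputs(line, stdout);
                  fclose(f);
                  return 0;
          }
      
      If the count climbs steadily under a repeated workload (for example,
      a module load/unload cycle), diffing successive dumps of
      /proc/lockdep points at the classes being created.
      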
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
    • rcu: Make synchronize_sched_expedited() better at work sharing · 7077714e
      Paul E. McKenney authored
      When synchronize_sched_expedited() takes its second and subsequent
      snapshots of sync_sched_expedited_started, it subtracts 1.  As a
      result, the concurrent caller of synchronize_sched_expedited() that
      incremented the counter to that value cannot take advantage of our
      successful completion, even though it can see it.  This restriction
      is pointless, given that our full expedited grace period began after
      that other caller started and can therefore serve as a proxy for
      that caller's successful execution of try_stop_cpus().
      
      This commit therefore removes the subtraction of 1.
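      
      In outline, the change affects the re-snapshot step of the retry
      loop.  The following is a simplified, hedged sketch of that logic,
      not the actual function (try_stop_cpus_succeeded_sketch() stands in
      for the real try_stop_cpus() call):
      
          static void expedited_sharing_sketch(void)
          {
                  int firstsnap, s, snap;
      
                  firstsnap = snap = atomic_inc_return(&sync_sched_expedited_started);
                  while (!try_stop_cpus_succeeded_sketch()) {
                          /* Check to see if someone else did our work for us. */
                          s = atomic_read(&sync_sched_expedited_done);
                          if (UINT_CMP_GE((u32)s, (u32)firstsnap))
                                  return;  /* Their completion covers us. */
      
                          /*
                           * Re-snapshot WITHOUT the old "- 1": the caller that
                           * incremented ->started to exactly this value began
                           * before our grace period completed, so our completion
                           * can stand in for its try_stop_cpus().
                           */
                          snap = atomic_read(&sync_sched_expedited_started);
                  }
      
                  /* Advance ->done so callers up to snap can piggyback. */
                  do {
                          s = atomic_read(&sync_sched_expedited_done);
                          if (UINT_CMP_GE((u32)s, (u32)snap))
                                  break;
                  } while (atomic_cmpxchg(&sync_sched_expedited_done, s, snap) != s);
          }
      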
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Reviewed-by: Josh Triplett <josh@joshtriplett.org>
    • rcu: Avoid RCU-preempt expedited grace-period botch · 389abd48
      Paul E. McKenney authored
      Because rcu_read_unlock_special() samples rcu_preempted_readers_exp(rnp)
      after dropping rnp->lock, the following sequence of events is possible:
      
      1.	Task A exits its RCU read-side critical section, and removes
      	itself from the ->blkd_tasks list, releases rnp->lock, and is
      	then preempted.  Task B remains on the ->blkd_tasks list, and
      	blocks the current expedited grace period.
      
      2.	Task B exits from its RCU read-side critical section and removes
      	itself from the ->blkd_tasks list.  Because it is the last task
      	blocking the current expedited grace period, it ends that
      	expedited grace period.
      
      3.	Task A resumes, and samples rcu_preempted_readers_exp(rnp),
      	which of course indicates that nothing is blocking the nonexistent
      	expedited grace period.  Task A is again preempted.
      
      4.	Some other CPU starts an expedited grace period.  There are several
      	tasks blocking this expedited grace period queued on the
      	same rcu_node structure that Task A was using in step 1 above.
      
      5.	Task A examines its state and incorrectly concludes that it was
      	the last task blocking the expedited grace period on the current
      	rcu_node structure.  It therefore reports completion up the
      	rcu_node tree.
      
      6.	The expedited grace period can then incorrectly complete before
      	the tasks blocked on this same rcu_node structure exit their
      	RCU read-side critical sections.  Arbitrarily bad things happen.
      
      This commit therefore takes a snapshot of rcu_preempted_readers_exp(rnp)
      prior to dropping the lock, so that only the last task thinks that it is
      the last task, thus avoiding the failure scenario laid out above.
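      
      A hedged sketch of the shape of the fix (empty_exp_now and the calls
      shown follow the commit's terminology; the surrounding function is
      simplified):
      
          raw_spin_lock_irqsave(&rnp->lock, flags);
          empty_exp = !rcu_preempted_readers_exp(rnp);     /* before removal */
          list_del_init(&t->rcu_node_entry);               /* leave ->blkd_tasks */
          empty_exp_now = !rcu_preempted_readers_exp(rnp); /* snapshot under lock */
          raw_spin_unlock_irqrestore(&rnp->lock, flags);
      
          /*
           * The old code re-sampled rcu_preempted_readers_exp(rnp) here
           * and could observe a LATER expedited grace period (steps 3-5
           * above).  Acting on the under-lock snapshot instead ensures
           * that only the genuinely last task reports completion.
           */
          if (!empty_exp && empty_exp_now)
                  rcu_report_exp_rnp(rsp, rnp);
      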
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Reviewed-by: Josh Triplett <josh@joshtriplett.org>
    • rcu: ->signaled better named ->fqs_state · af446b70
      Paul E. McKenney authored
      The ->signaled field was named before complications arose in the form
      of dyntick-idle mode and offlined CPUs.  These complications have
      required that force_quiescent_state() be implemented as a state
      machine rather than simply sending reschedule IPIs unconditionally.
      Therefore, this commit renames ->signaled to ->fqs_state to catch up
      with the new force_quiescent_state() reality.
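      
      For reference, a hedged sketch of the state machine behind the
      rename; the state names mirror this era's kernel/rcutree.h, but the
      handler body is simplified:
      
          #define RCU_GP_IDLE       0  /* No grace period in progress. */
          #define RCU_GP_INIT       1  /* Grace period being initialized. */
          #define RCU_SAVE_DYNTICK  2  /* Need to scan dyntick state. */
          #define RCU_FORCE_QS      3  /* Need to force quiescent states. */
      
          static void force_quiescent_state_sketch(struct rcu_state *rsp)
          {
                  switch (rsp->fqs_state) {  /* formerly rsp->signaled */
                  case RCU_SAVE_DYNTICK:
                          /* First pass: credit CPUs already in dyntick idle. */
                          rsp->fqs_state = RCU_FORCE_QS;
                          break;
                  case RCU_FORCE_QS:
                          /* Later passes: resched IPIs only to holdouts. */
                          break;
                  }
          }
      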
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Reviewed-by: Josh Triplett <josh@joshtriplett.org>
  2. 09 Dec, 2011 34 commits
  3. 08 Dec, 2011 1 commit