    rcu: Process offlining and onlining only at grace-period start · 0aa04b05
    Paul E. McKenney authored
    Races between CPU hotplug and grace periods can be difficult to resolve,
    so the ->onoff_mutex is used to exclude the two events.  Unfortunately,
    this means that it is impossible for an outgoing CPU to perform the
    last bits of its offlining from its last pass through the idle loop,
    because sleeplocks cannot be acquired in that context.
    
    This commit avoids these problems by buffering online and offline events
    in a new ->qsmaskinitnext field in the leaf rcu_node structures.  When a
    grace period starts, the events accumulated in this mask are applied to
    the ->qsmaskinit field, and, if needed, up the rcu_node tree.  The special
    case of all CPUs corresponding to a given leaf rcu_node structure being
    offline while there are still elements in that structure's ->blkd_tasks
    list is handled using a new ->wait_blkd_tasks field.  In this case,
    propagating the offline bits up the tree is deferred until the beginning
    of the grace period after all of the tasks have exited their RCU read-side
    critical sections and removed themselves from the list, at which point
    the ->wait_blkd_tasks flag is cleared.  If one of that leaf rcu_node
    structure's CPUs comes back online before the list empties, then the
    ->wait_blkd_tasks flag is simply cleared.
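
    As a rough illustration, the following stand-alone model captures the
    buffering scheme.  The field names ->qsmaskinit, ->qsmaskinitnext, and
    ->wait_blkd_tasks come from this commit, but the struct layout, the
    helper names, and the nr_blkd_tasks counter standing in for the
    ->blkd_tasks list are simplifying assumptions rather than the kernel's
    actual code:

    #include <stdbool.h>
    #include <stdio.h>

    /*
     * Simplified stand-in for a leaf rcu_node structure.  Only the masks
     * and the deferral flag relevant to this commit are modeled.
     */
    struct rcu_node {
        unsigned long qsmaskinit;      /* CPUs the current GP must wait on. */
        unsigned long qsmaskinitnext;  /* Buffered online/offline events. */
        bool wait_blkd_tasks;          /* Blocked readers outlived the CPUs? */
        int nr_blkd_tasks;             /* Stand-in for the ->blkd_tasks list. */
    };

    /* Hotplug side: touch only the buffer, no coordination with GPs. */
    static void model_cpu_online(struct rcu_node *rnp, int cpu)
    {
        rnp->qsmaskinitnext |= 1UL << cpu;
    }

    static void model_cpu_offline(struct rcu_node *rnp, int cpu)
    {
        rnp->qsmaskinitnext &= ~(1UL << cpu);
    }

    /* Grace-period start: fold the buffered events into ->qsmaskinit. */
    static void model_gp_start(struct rcu_node *rnp)
    {
        unsigned long oldmask = rnp->qsmaskinit;

        rnp->qsmaskinit = rnp->qsmaskinitnext;

        if (oldmask && !rnp->qsmaskinit && rnp->nr_blkd_tasks) {
            /* Last CPU left with readers still queued: defer. */
            rnp->wait_blkd_tasks = true;
        } else if (rnp->wait_blkd_tasks &&
                   (rnp->qsmaskinit || !rnp->nr_blkd_tasks)) {
            /*
             * Either a CPU came back online or the last queued reader
             * left the list; stop deferring.  In the kernel, an empty
             * list here is also where the offline state would be
             * propagated up the rcu_node tree.
             */
            rnp->wait_blkd_tasks = false;
        }
    }

    int main(void)
    {
        struct rcu_node rnp = { 0 };

        model_cpu_online(&rnp, 0);
        model_cpu_online(&rnp, 1);
        model_gp_start(&rnp);        /* GP 1 waits on CPUs 0 and 1. */

        rnp.nr_blkd_tasks = 1;       /* A preempted reader queues on this leaf. */
        model_cpu_offline(&rnp, 0);
        model_cpu_offline(&rnp, 1);  /* Leaf now empty, reader still queued. */

        model_gp_start(&rnp);        /* GP 2: offline seen, propagation deferred. */
        printf("wait_blkd_tasks after GP 2 start: %d\n", rnp.wait_blkd_tasks);

        rnp.nr_blkd_tasks = 0;       /* Reader exits its read-side critical section. */
        model_gp_start(&rnp);        /* GP 3: deferral ends. */
        printf("wait_blkd_tasks after GP 3 start: %d\n", rnp.wait_blkd_tasks);
        return 0;
    }

    Compiled and run, this model prints 1 after the second grace-period start
    (offline propagation deferred) and 0 after the third (the list has
    emptied), matching the behavior described above.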
    
    This of course means that RCU's notion of which CPUs are offline can be
    out of date.  This is OK because RCU need only wait on CPUs that were
    online at the time that the grace period started.  In addition, RCU's
    force-quiescent-state actions will handle the case where a CPU goes
    offline after the grace period starts.
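
    The force-quiescent-state side can be modeled in the same spirit.  Only
    the ->qsmaskinitnext name below comes from this commit; the struct,
    function name, and mask values are illustrative assumptions:

    #include <stdio.h>

    struct leaf_model {
        unsigned long qsmask;          /* CPUs this GP is still waiting on. */
        unsigned long qsmaskinitnext;  /* CPUs currently online (buffered view). */
    };

    /*
     * A CPU that went offline after the grace period started cannot report
     * a quiescent state itself, so force-quiescent-state processing stops
     * waiting on it.
     */
    static void model_force_quiescent_states(struct leaf_model *rnp)
    {
        rnp->qsmask &= rnp->qsmaskinitnext;
    }

    int main(void)
    {
        struct leaf_model rnp = { .qsmask = 0x3, .qsmaskinitnext = 0x3 };

        rnp.qsmaskinitnext &= ~0x2UL;     /* CPU 1 goes offline mid grace period. */
        model_force_quiescent_states(&rnp);
        printf("still waiting on mask %#lx\n", rnp.qsmask);  /* prints 0x1 */
        return 0;
    }

    Here the grace period started out waiting on CPUs 0 and 1; after CPU 1
    goes offline, forcing quiescent states leaves only CPU 0 in the mask.
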
    Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
tree.c 127 KB