Commit 2cdb54c9 authored by Mauro Carvalho Chehab, committed by Paul E. McKenney

docs: RCU: Convert rculist_nulls.txt to ReST

- Add a SPDX header;
- Adjust document title;
- Some whitespace fixes and new line breaks;
- Mark literal blocks as such;
- Add it to RCU/index.rst.
Signed-off-by: Mauro Carvalho Chehab <mchehab+huawei@kernel.org>
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
parent 058cc23b
Documentation/RCU/index.rst:

@@ -17,6 +17,7 @@ RCU concepts
    rcu_dereference
    whatisRCU
    rcu
+   rculist_nulls
    listRCU
    NMI-RCU
    UP
Documentation/RCU/rculist_nulls.txt → Documentation/RCU/rculist_nulls.rst:

-Using hlist_nulls to protect read-mostly linked lists and
+.. SPDX-License-Identifier: GPL-2.0
+
+=================================================
+Using RCU hlist_nulls to protect list and objects
+=================================================
+
+This section describes how to use hlist_nulls to
+protect read-mostly linked lists and
 objects using SLAB_TYPESAFE_BY_RCU allocations.
 Please read the basics in Documentation/RCU/listRCU.rst
@@ -12,10 +19,13 @@ use following algos :

 1) Lookup algo
 --------------
-rcu_read_lock()
-begin:
-obj = lockless_lookup(key);
-if (obj) {
+
+::
+
+  rcu_read_lock()
+  begin:
+  obj = lockless_lookup(key);
+  if (obj) {
     if (!try_get_ref(obj)) // might fail for free objects
       goto begin;
     /*
@@ -27,14 +37,16 @@ if (obj) {
     put_ref(obj);
     goto begin;
   }
 }
 rcu_read_unlock();
 Beware that lockless_lookup(key) cannot use traditional hlist_for_each_entry_rcu()
 but a version with an additional memory barrier (smp_rmb())
-lockless_lookup(key)
-{
+::
+
+  lockless_lookup(key)
+  {
     struct hlist_node *node, *next;
     for (pos = rcu_dereference((head)->first);
          pos && ({ next = pos->next; smp_rmb(); prefetch(next); 1; }) &&
@@ -43,8 +55,9 @@ lockless_lookup(key)
     if (obj->key == key)
       return obj;
     return NULL;
   }
-And note the traditional hlist_for_each_entry_rcu() misses this smp_rmb() :
+And note the traditional hlist_for_each_entry_rcu() misses this smp_rmb()::
 struct hlist_node *node;
 for (pos = rcu_dereference((head)->first);
@@ -54,11 +67,10 @@ And note the traditional hlist_for_each_entry_rcu() misses this smp_rmb() :
   if (obj->key == key)
     return obj;
   return NULL;
 }
-Quoting Corey Minyard :
+Quoting Corey Minyard::
 "If the object is moved from one list to another list in-between the
 time the hash is calculated and the next field is accessed, and the
 object has moved to the end of a new list, the traversal will not
 complete properly on the list it should have, since the object will
@@ -67,8 +79,8 @@ Quoting Corey Minyard :
 solved by pre-fetching the "next" field (with proper barriers) before
 checking the key."
-2) Insert algo :
-----------------
+2) Insert algo
+--------------
 We need to make sure a reader cannot read the new 'obj->obj_next' value
 and previous value of 'obj->key'. Or else, an item could be deleted
@@ -76,21 +88,23 @@ from a chain, and inserted into another chain. If new chain was empty
 before the move, 'next' pointer is NULL, and lockless reader can
 not detect it missed following items in original chain.
-/*
+::
+
+  /*
   * Please note that new inserts are done at the head of list,
   * not in the middle or end.
   */
  obj = kmem_cache_alloc(...);
  lock_chain(); // typically a spin_lock()
  obj->key = key;
  /*
   * we need to make sure obj->key is updated before obj->next
   * or obj->refcnt
   */
  smp_wmb();
  atomic_set(&obj->refcnt, 1);
  hlist_add_head_rcu(&obj->obj_node, list);
  unlock_chain(); // typically a spin_unlock()
 3) Remove algo
@@ -99,16 +113,19 @@ Nothing special here, we can use a standard RCU hlist deletion.
 But thanks to SLAB_TYPESAFE_BY_RCU, beware a deleted object can be reused
 very very fast (before the end of RCU grace period)
-if (put_last_reference_on(obj) {
+
+::
+
+  if (put_last_reference_on(obj) {
     lock_chain(); // typically a spin_lock()
     hlist_del_init_rcu(&obj->obj_node);
     unlock_chain(); // typically a spin_unlock()
     kmem_cache_free(cachep, obj);
   }
--------------------------------------------------------------------------
 With hlist_nulls we can avoid extra smp_rmb() in lockless_lookup()
 and extra smp_wmb() in insert function.
@@ -124,10 +141,13 @@ scan the list again without harm.

 1) lookup algo
+--------------
+
+::
+
   head = &table[slot];
   rcu_read_lock();
   begin:
   hlist_nulls_for_each_entry_rcu(obj, node, head, member) {
     if (obj->key == key) {
       if (!try_get_ref(obj)) // might fail for free objects
@@ -138,7 +158,7 @@ begin:
       }
       goto out;
     }
   /*
    * if the nulls value we got at the end of this lookup is
    * not the expected one, we must restart lookup.
    * We probably met an item that was moved to another chain.
@@ -147,26 +167,28 @@ begin:
   goto begin;
   obj = NULL;
 out:
   rcu_read_unlock();
-2) Insert function :
---------------------
+2) Insert function
+------------------
-/*
+::
+
+  /*
   * Please note that new inserts are done at the head of list,
   * not in the middle or end.
   */
  obj = kmem_cache_alloc(cachep);
  lock_chain(); // typically a spin_lock()
  obj->key = key;
  /*
   * changes to obj->key must be visible before refcnt one
   */
  smp_wmb();
  atomic_set(&obj->refcnt, 1);
  /*
   * insert obj in RCU way (readers might be traversing chain)
   */
  hlist_nulls_add_head_rcu(&obj->obj_node, list);
  unlock_chain(); // typically a spin_unlock()
include/linux/list_nulls.h:

@@ -162,7 +162,7 @@ static inline void hlist_nulls_add_fake(struct hlist_nulls_node *n)
  * The barrier() is needed to make sure compiler doesn't cache first element [1],
  * as this loop can be restarted [2]
  * [1] Documentation/core-api/atomic_ops.rst around line 114
- * [2] Documentation/RCU/rculist_nulls.txt around line 146
+ * [2] Documentation/RCU/rculist_nulls.rst around line 146
  */
 #define hlist_nulls_for_each_entry_rcu(tpos, pos, head, member) \
 	for (({barrier();}), \
net/core/sock.c:

@@ -1973,7 +1973,7 @@ struct sock *sk_clone_lock(const struct sock *sk, const gfp_t priority)
 	/*
 	 * Before updating sk_refcnt, we must commit prior changes to memory
-	 * (Documentation/RCU/rculist_nulls.txt for details)
+	 * (Documentation/RCU/rculist_nulls.rst for details)
 	 */
 	smp_wmb();
 	refcount_set(&newsk->sk_refcnt, 2);
@@ -3035,7 +3035,7 @@ void sock_init_data(struct socket *sock, struct sock *sk)
 	sk_rx_queue_clear(sk);
 	/*
 	 * Before updating sk_refcnt, we must commit prior changes to memory
-	 * (Documentation/RCU/rculist_nulls.txt for details)
+	 * (Documentation/RCU/rculist_nulls.rst for details)
 	 */
 	smp_wmb();
 	refcount_set(&sk->sk_refcnt, 1);