author    Dipankar Sarma <dipankar@in.ibm.com>    2005-09-09 16:04:09 -0400
committer Linus Torvalds <torvalds@g5.osdl.org>   2005-09-09 16:57:54 -0400
commit    c0dfb2905126e9e94edebbce8d3e05001301f52d (patch)
tree      db31f15f7e5dea0c812992c2ca87a1151507ed9c /Documentation/RCU/rcuref.txt
parent    8b6490e5faafb3a16ea45654fb55f9ff086f1495 (diff)
[PATCH] files: rcuref APIs

Adds a set of primitives to do reference counting for objects that are
looked up without locks using RCU.

Signed-off-by: Ravikiran Thirumalai <kiran_th@gmail.com>
Signed-off-by: Dipankar Sarma <dipankar@in.ibm.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Diffstat (limited to 'Documentation/RCU/rcuref.txt')
-rw-r--r--  Documentation/RCU/rcuref.txt  74
1 files changed, 74 insertions, 0 deletions
diff --git a/Documentation/RCU/rcuref.txt b/Documentation/RCU/rcuref.txt
new file mode 100644
index 000000000000..a23fee66064d
--- /dev/null
+++ b/Documentation/RCU/rcuref.txt
@@ -0,0 +1,74 @@
Refcounter framework for elements of lists/arrays protected by
RCU.

Refcounting on elements of lists which are protected by traditional
reader/writer spinlocks or semaphores is straightforward, as in:

1.                                      2.
add()                                   search_and_reference()
{                                       {
    alloc_object                            read_lock(&list_lock);
    ...                                     search_for_element
    atomic_set(&el->rc, 1);                 atomic_inc(&el->rc);
    write_lock(&list_lock);                 ...
    add_element                             read_unlock(&list_lock);
    ...                                     ...
    write_unlock(&list_lock);           }
}

3.                                      4.
release_referenced()                    delete()
{                                       {
    ...                                     write_lock(&list_lock);
    if (atomic_dec_and_test(&el->rc))       ...
        kfree(el);                          delete_element
    ...                                     write_unlock(&list_lock);
}                                           ...
                                            if (atomic_dec_and_test(&el->rc))
                                                kfree(el);
                                            ...
                                        }

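For concreteness, the locked pattern above can also be written out as
ordinary kernel C.  The following is only an illustrative sketch: the
struct element, el_list, list_lock and el_* names are made up here and
are not part of any existing API.

#include <linux/list.h>
#include <linux/slab.h>
#include <linux/spinlock.h>
#include <asm/atomic.h>

struct element {
	struct list_head list;
	atomic_t rc;			/* reference count */
	int key;
};

static LIST_HEAD(el_list);
static DEFINE_RWLOCK(list_lock);

/* 1. Allocate, set rc to 1 (the list's own reference) and publish. */
static struct element *el_add(int key)
{
	struct element *el = kmalloc(sizeof(*el), GFP_KERNEL);

	if (!el)
		return NULL;
	el->key = key;
	atomic_set(&el->rc, 1);
	write_lock(&list_lock);
	list_add(&el->list, &el_list);
	write_unlock(&list_lock);
	return el;
}

/* 2. Search under the read lock, take a reference before dropping it. */
static struct element *el_search_and_reference(int key)
{
	struct element *el;

	read_lock(&list_lock);
	list_for_each_entry(el, &el_list, list) {
		if (el->key == key) {
			atomic_inc(&el->rc);
			read_unlock(&list_lock);
			return el;
		}
	}
	read_unlock(&list_lock);
	return NULL;
}

/* 3. Drop a reference; whoever drops the last one frees the element. */
static void el_release(struct element *el)
{
	if (atomic_dec_and_test(&el->rc))
		kfree(el);
}

/* 4. Unlink under the write lock, then drop the list's reference. */
static void el_delete(struct element *el)
{
	write_lock(&list_lock);
	list_del(&el->list);
	write_unlock(&list_lock);
	el_release(el);
}
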
If this list/array is made lock-free using RCU, as in changing the
write_lock() in add() and delete() to spin_lock() and changing the
read_lock() in search_and_reference() to rcu_read_lock(), the
atomic_inc() in search_and_reference() could potentially hold a
reference to an element which has already been deleted from the
list/array.  rcuref_inc_lf() takes care of this scenario by failing
when the refcount has already dropped to zero.  search_and_reference()
should look as follows:

1.                                      2.
add()                                   search_and_reference()
{                                       {
    alloc_object                            rcu_read_lock();
    ...                                     search_for_element
    atomic_set(&el->rc, 1);                 if (!rcuref_inc_lf(&el->rc)) {
    spin_lock(&list_lock);                      rcu_read_unlock();
                                                return FAIL;
    add_element                             }
    ...                                     ...
    spin_unlock(&list_lock);                rcu_read_unlock();
}                                       }

3.                                      4.
release_referenced()                    delete()
{                                       {
    ...                                     spin_lock(&list_lock);
    if (rcuref_dec_and_test(&el->rc))       ...
        call_rcu(&el->head, el_free);       delete_element
    ...                                     spin_unlock(&list_lock);
}                                           ...
                                            if (rcuref_dec_and_test(&el->rc))
                                                call_rcu(&el->head, el_free);
                                            ...
                                        }

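To see how the pieces fit together, here is a comparable sketch in plain
kernel C.  As before, struct element, el_list, list_lock and the el_*
helpers are illustrative names only; the sketch also assumes the rcuref
primitives are available via <linux/rcuref.h> and that rcuref_inc_lf()
returns non-zero when it succeeds in taking a reference.  The el_free()
callback shows how call_rcu() defers the kfree() until all pre-existing
RCU readers are done with the unlinked element.

#include <linux/kernel.h>
#include <linux/list.h>
#include <linux/rcupdate.h>
#include <linux/rcuref.h>
#include <linux/slab.h>
#include <linux/spinlock.h>
#include <asm/atomic.h>

struct element {
	struct list_head list;
	struct rcu_head head;		/* for call_rcu() */
	atomic_t rc;
	int key;
};

static LIST_HEAD(el_list);
static DEFINE_SPINLOCK(list_lock);	/* serialises updaters only */

/* RCU callback: runs after a grace period, when no reader can see el. */
static void el_free(struct rcu_head *head)
{
	kfree(container_of(head, struct element, head));
}

/* 1. Set rc to 1 (the list's reference) and publish with list_add_rcu(). */
static void el_add(struct element *el, int key)
{
	el->key = key;
	atomic_set(&el->rc, 1);
	spin_lock(&list_lock);
	list_add_rcu(&el->list, &el_list);
	spin_unlock(&list_lock);
}

/* 2. Lock-free lookup: the reference is taken only if rc is non-zero. */
static struct element *el_search_and_reference(int key)
{
	struct element *el;

	rcu_read_lock();
	list_for_each_entry_rcu(el, &el_list, list) {
		if (el->key == key) {
			if (!rcuref_inc_lf(&el->rc))
				break;		/* element is being deleted */
			rcu_read_unlock();
			return el;
		}
	}
	rcu_read_unlock();
	return NULL;
}

/* 3. Drop a reference; the last one schedules an RCU-deferred free. */
static void el_release(struct element *el)
{
	if (rcuref_dec_and_test(&el->rc))
		call_rcu(&el->head, el_free);
}

/* 4. Unlink under the update-side lock, then drop the list's reference. */
static void el_delete(struct element *el)
{
	spin_lock(&list_lock);
	list_del_rcu(&el->list);
	spin_unlock(&list_lock);
	el_release(el);
}
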
Sometimes a reference to the element needs to be obtained in the
update (write) stream.  In such cases, rcuref_inc_lf might be overkill,
since the spinlock serialising list updates is already held; rcuref_inc
is to be used in such cases (see the sketch below).
On arches which do not have cmpxchg, the rcuref_inc_lf API uses a
hashed-spinlock implementation, and the same hashed spinlock is
acquired in all rcuref_xxx primitives to preserve atomicity.
Note: Use the rcuref_inc API only if you need to use rcuref_inc_lf on
the refcounter in at least one place.  Mixing the rcuref_inc and
atomic_xxx APIs might lead to races.  rcuref_inc_lf() must be used in
lock-free RCU critical sections only.
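
To illustrate the update-stream case, a short sketch follows, reusing
the illustrative struct element, el_list and list_lock names from the
earlier sketches.  While the update-side spinlock is held the element
cannot be unlinked, so its refcount cannot reach zero; the unconditional
rcuref_inc() is therefore sufficient, and it is preferred over a plain
atomic_inc() so that every modification of the refcounter goes through
the rcuref (possibly hashed-spinlock) primitives.

/* Take a reference from the update (write) stream.  No failure check is
 * needed: while list_lock is held the element cannot be unlinked, so its
 * refcount cannot drop to zero.  rcuref_inc() is used instead of
 * atomic_inc() so that, on arches without cmpxchg, the same hashed
 * spinlock guards every modification of el->rc.
 */
static struct element *el_get_for_update(int key)
{
	struct element *el;

	spin_lock(&list_lock);
	list_for_each_entry(el, &el_list, list) {
		if (el->key == key) {
			rcuref_inc(&el->rc);
			spin_unlock(&list_lock);
			return el;
		}
	}
	spin_unlock(&list_lock);
	return NULL;
}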