author	Eric W. Biederman <ebiederm@xmission.com>	2006-03-31 05:31:37 -0500
committer	Linus Torvalds <torvalds@g5.osdl.org>	2006-03-31 15:18:59 -0500
commit	8c7904a00b06d2ee51149794b619e07369fcf9d4 (patch)
tree	c995150254e17dfda6e1679c3249343586e178cb /kernel
parent	e4e5d3fc80d26ed26ebe42907b224f08d7eccfbf (diff)
[PATCH] task: RCU protect task->usage
A big problem with rcu protected data structures that are also reference
counted is that you must jump through several hoops to increase the
reference count.  I think someone finally implemented
atomic_inc_not_zero(&count) to automate the common case.  Unfortunately
this means you must special case the rcu access case.

When data structures are only visible via rcu in a manner that is not
determined by the reference count on the object (i.e. tasks are visible
until their zombies are reaped) there is a much simpler technique we can
employ: simply delay the decrement of the reference count until the rcu
interval is over.

What that means is that the proc code that looks up a task and later
wants to sleep can now do:

	rcu_read_lock();
	task = find_task_by_pid(some_pid);
	if (task) {
		get_task_struct(task);
	}
	rcu_read_unlock();

The effect on the rest of the kernel is that put_task_struct becomes
cheaper and immediate, and in the case where the task has been reaped it
frees the task immediately instead of unnecessarily waiting until the rcu
interval is over.

Cleanup of task_struct does not happen when its reference count drops to
zero; instead cleanup happens when release_task is called.  Tasks can only
be looked up via rcu before release_task is called.  All rcu protected
members of task_struct are freed by release_task.

Therefore we can move call_rcu from put_task_struct into release_task,
and we can modify release_task to not immediately release the reference
count but instead have it call put_task_struct from the function it gives
to call_rcu.

The end result:

- get_task_struct is safe in an rcu context where we have just looked up
  the task.

- put_task_struct() simplifies into its old pre-rcu self.

This reorganization also makes put_task_struct uncallable from modules as
it is not exported, but it does not appear to be called from any modules,
so this should not be an issue, and it is trivially fixed.

Signed-off-by: Eric W. Biederman <ebiederm@xmission.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
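As an aside, the difference between the two reference-taking schemes the
message describes can be sketched as follows.  This is illustration only,
not part of the patch: struct foo, foo_get_rcu() and foo_get() are
hypothetical names, and the sketch assumes a 2.6-era kernel environment
(<linux/rcupdate.h>, <asm/atomic.h>).

	/*
	 * Illustrative sketch only (not from this patch): two ways for an
	 * rcu reader to take a reference on an object that is both rcu
	 * protected and reference counted.
	 */
	#include <linux/rcupdate.h>
	#include <asm/atomic.h>

	struct foo {
		atomic_t usage;
		struct rcu_head rcu;
	};

	/*
	 * Scheme 1: the object may be freed as soon as usage hits zero,
	 * so a reader inside rcu_read_lock() must use
	 * atomic_inc_not_zero() and handle the failure case.
	 */
	static inline struct foo *foo_get_rcu(struct foo *f)
	{
		if (f && !atomic_inc_not_zero(&f->usage))
			f = NULL;	/* raced with the final put */
		return f;
	}

	/*
	 * Scheme 2 (what this patch arranges for task_struct): the final
	 * decrement is deferred past the rcu grace period, so a plain
	 * atomic_inc() is always safe while rcu_read_lock() is held.
	 */
	static inline void foo_get(struct foo *f)
	{
		atomic_inc(&f->usage);
	}

Under the second scheme a reader holding rcu_read_lock() never observes a
half-dead object, which is why the find_task_by_pid()/get_task_struct()
pattern above needs no failure handling.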
Diffstat (limited to 'kernel')
-rw-r--r--	kernel/exit.c	7
1 file changed, 6 insertions, 1 deletion
diff --git a/kernel/exit.c b/kernel/exit.c
index bc0ec674d3f4..6c2eeb8f6390 100644
--- a/kernel/exit.c
+++ b/kernel/exit.c
@@ -127,6 +127,11 @@ static void __exit_signal(struct task_struct *tsk)
 	}
 }
 
+static void delayed_put_task_struct(struct rcu_head *rhp)
+{
+	put_task_struct(container_of(rhp, struct task_struct, rcu));
+}
+
 void release_task(struct task_struct * p)
 {
 	int zap_leader;
@@ -168,7 +173,7 @@ repeat:
 	spin_unlock(&p->proc_lock);
 	proc_pid_flush(proc_dentry);
 	release_thread(p);
-	put_task_struct(p);
+	call_rcu(&p->rcu, delayed_put_task_struct);
 
 	p = leader;
 	if (unlikely(zap_leader))
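For completeness, a rough sketch of the put side after this change,
reconstructed from the commit message rather than copied from the kernel
source (the real definition lives outside kernel/, so it is not part of
the diff above); treat the exact form as an assumption:

	/*
	 * Sketch based on the commit message, not a hunk from this patch:
	 * with the call_rcu() deferral moved into release_task(), dropping
	 * a reference is again an immediate decrement-and-test.
	 */
	static inline void put_task_struct(struct task_struct *t)
	{
		if (atomic_dec_and_test(&t->usage))
			__put_task_struct(t);	/* free right away, no rcu wait */
	}

The delayed_put_task_struct() callback added above is the other half of
the scheme: call_rcu(&p->rcu, ...) queues the task's embedded rcu_head,
and once the grace period has elapsed the callback recovers the
task_struct with container_of() and drops the reference that
release_task() used to drop synchronously.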