author		Hugh Dickins <hugh@veritas.com>	2005-11-07 17:09:01 -0500
committer	David S. Miller <davem@davemloft.net>	2005-11-07 17:09:01 -0500
commit		dedeb0029b9c83420fc1337d4ee53daa7b2a0ad4
tree		d87e66e1d6240cd412c20ecbc12f5b810c9807e4 /kernel
parent		b8ae48656db860d4c83a29aa7b0588fc89361935
[SPARC64] mm: context switch ptlock
sparc64 is unique among architectures in taking the page_table_lock in
its context switch (well, cris does too, but erroneously, and it's not
yet SMP anyway).

This seems to be a private affair between switch_mm and activate_mm,
using page_table_lock as a per-mm lock, without any relation to its
uses elsewhere. That's fine, but comment it as such; and unlock sooner
in switch_mm, more like in activate_mm (preemption is disabled here).

There is a block of "if (0)"ed code in smp_flush_tlb_pending which
would have liked to rely on the page_table_lock, in switch_mm and
elsewhere; but its comment explains how dup_mmap's flush_tlb_mm
defeated it. And though that could have been changed at any time over
the past few years, now the chance vanishes as we push the
page_table_lock downwards, and perhaps split it per page table page.
Just delete that block of code.

Which leaves the mysterious spin_unlock_wait(&oldmm->page_table_lock)
in kernel/fork.c copy_mm. Textual analysis (supported by Nick Piggin)
suggests that the comment was written by DaveM, and that it relates to
the defeated approach in the sparc64 smp_flush_tlb_pending. Just
delete this block too.

Signed-off-by: Hugh Dickins <hugh@veritas.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
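[Editor's note] To make the locking pattern described above concrete, here is a
rough sketch of a switch_mm that takes page_table_lock purely as a per-mm lock
and drops it early, much as activate_mm does. This is not the actual sparc64
code; CTX_VALID, get_new_mmu_context and load_secondary_context are stand-ins
for the architecture-specific work.

/*
 * Illustrative sketch only, not the real sparc64 switch_mm: the point
 * is that mm->page_table_lock is used here purely as a per-mm lock,
 * private to switch_mm/activate_mm, with no relation to page tables,
 * and can be dropped as soon as the new context is set up.
 */
static inline void switch_mm(struct mm_struct *old_mm, struct mm_struct *mm,
			     struct task_struct *tsk)
{
	spin_lock(&mm->page_table_lock);	/* per-mm lock, no page-table meaning */
	if (!CTX_VALID(mm->context))
		get_new_mmu_context(mm);	/* allocate a fresh context number */
	spin_unlock(&mm->page_table_lock);	/* unlock sooner, as activate_mm does */

	/*
	 * The remaining per-CPU MMU context load needs no lock:
	 * preemption is disabled across a context switch.
	 */
	load_secondary_context(mm);
}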
Diffstat (limited to 'kernel')
-rw-r--r--	kernel/fork.c	7
1 file changed, 0 insertions(+), 7 deletions(-)
diff --git a/kernel/fork.c b/kernel/fork.c
index efac2c58ec7d..158710d22566 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -470,13 +470,6 @@ static int copy_mm(unsigned long clone_flags, struct task_struct * tsk)
 	if (clone_flags & CLONE_VM) {
 		atomic_inc(&oldmm->mm_users);
 		mm = oldmm;
-		/*
-		 * There are cases where the PTL is held to ensure no
-		 * new threads start up in user mode using an mm, which
-		 * allows optimizing out ipis; the tlb_gather_mmu code
-		 * is an example.
-		 */
-		spin_unlock_wait(&oldmm->page_table_lock);
 		goto good_mm;
 	}
 
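[Editor's note] As background for the removed call: spin_unlock_wait() never
acquires the lock, it only spins until the lock is observed to be free, which
is why it could act as a one-shot barrier against a concurrent holder of
oldmm->page_table_lock. A rough sketch of its effect follows; the kernel's
real primitive is architecture-specific and includes the required memory
barriers, so this is illustration only.

/*
 * Rough sketch only: wait until nobody holds the lock, without
 * taking it ourselves.
 */
static inline void sketch_spin_unlock_wait(spinlock_t *lock)
{
	while (spin_is_locked(lock))
		cpu_relax();
}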