author		Hugh Dickins <hugh@veritas.com>	2005-11-07 17:09:01 -0500
committer	David S. Miller <davem@davemloft.net>	2005-11-07 17:09:01 -0500
commit		dedeb0029b9c83420fc1337d4ee53daa7b2a0ad4
tree		d87e66e1d6240cd412c20ecbc12f5b810c9807e4
parent		b8ae48656db860d4c83a29aa7b0588fc89361935
[SPARC64] mm: context switch ptlock
sparc64 is unique among architectures in taking the page_table_lock in
its context switch (well, cris does too, but erroneously, and it's not
yet SMP anyway).
This seems to be a private affair between switch_mm and activate_mm,
using page_table_lock as a per-mm lock, without any relation to its uses
elsewhere. That's fine, but comment it as such; and unlock sooner in
switch_mm, more like in activate_mm (preemption is disabled here).
There is a block of "if (0)"ed code in smp_flush_tlb_pending which would
have liked to rely on the page_table_lock, in switch_mm and elsewhere;
but its comment explains how dup_mmap's flush_tlb_mm defeated it. And
though that could have been changed at any time over the past few years,
now the chance vanishes as we push the page_table_lock downwards, and
perhaps split it per page table page. Just delete that block of code.
Which leaves the mysterious spin_unlock_wait(&oldmm->page_table_lock)
in kernel/fork.c copy_mm. Textual analysis (supported by Nick Piggin)
suggests that the comment was written by DaveM, and that it relates to
the defeated approach in the sparc64 smp_flush_tlb_pending. Just delete
this block too.
Signed-off-by: Hugh Dickins <hugh@veritas.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Diffstat (limited to 'arch')
-rw-r--r--	arch/sparc64/kernel/smp.c	31
1 file changed, 5 insertions(+), 26 deletions(-)
diff --git a/arch/sparc64/kernel/smp.c b/arch/sparc64/kernel/smp.c
index b137fd63f5e1..a9089e2140e9 100644
--- a/arch/sparc64/kernel/smp.c
+++ b/arch/sparc64/kernel/smp.c
@@ -883,34 +883,13 @@ void smp_flush_tlb_pending(struct mm_struct *mm, unsigned long nr, unsigned long
 	u32 ctx = CTX_HWBITS(mm->context);
 	int cpu = get_cpu();
 
-	if (mm == current->active_mm && atomic_read(&mm->mm_users) == 1) {
+	if (mm == current->active_mm && atomic_read(&mm->mm_users) == 1)
 		mm->cpu_vm_mask = cpumask_of_cpu(cpu);
-		goto local_flush_and_out;
-	} else {
-		/* This optimization is not valid.  Normally
-		 * we will be holding the page_table_lock, but
-		 * there is an exception which is copy_page_range()
-		 * when forking.  The lock is held during the individual
-		 * page table updates in the parent, but not at the
-		 * top level, which is where we are invoked.
-		 */
-		if (0) {
-			cpumask_t this_cpu_mask = cpumask_of_cpu(cpu);
-
-			/* By virtue of running under the mm->page_table_lock,
-			 * and mmu_context.h:switch_mm doing the same, the
-			 * following operation is safe.
-			 */
-			if (cpus_equal(mm->cpu_vm_mask, this_cpu_mask))
-				goto local_flush_and_out;
-		}
-	}
-
-	smp_cross_call_masked(&xcall_flush_tlb_pending,
-			      ctx, nr, (unsigned long) vaddrs,
-			      mm->cpu_vm_mask);
+	else
+		smp_cross_call_masked(&xcall_flush_tlb_pending,
+				      ctx, nr, (unsigned long) vaddrs,
+				      mm->cpu_vm_mask);
 
-local_flush_and_out:
 	__flush_tlb_pending(ctx, nr, vaddrs);
 
 	put_cpu();