path: root/arch/powerpc/include/asm/mmu-fsl-booke.h
author	Benjamin Herrenschmidt <benh@kernel.crashing.org>	2008-12-18 14:13:29 -0500
committer	Paul Mackerras <paulus@samba.org>	2008-12-20 22:21:15 -0500
commit	2ca8cf738907180e7fbda90f25f32b86feda609f (patch)
tree	60d8af9b53a78ae9300ef7e68f222b02fe3be542 /arch/powerpc/include/asm/mmu-fsl-booke.h
parent	5e696617c425eb97bd943d781f3941fb1e8f0e5b (diff)
powerpc/mm: Rework context management for CPUs with no hash table
This reworks the context management code used by 4xx, 8xx and Freescale BookE. It adds support for SMP by implementing a concept of stale context map to lazily flush the TLB on processors where a context may have been invalidated. This also contains the ground work for generalizing such lazy TLB flushing by just picking up a new PID and marking the old one stale. This will be implemented later.

This is a first implementation that uses a global spinlock.

Ideally, we should try to get at least the fast path (context ID already assigned) lockless or limited to a per-context lock, but for now this will do.

I tried to keep the UP case reasonably simple to avoid adding too much overhead to 8xx, which does a lot of context stealing since it effectively has only 16 PIDs available.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Acked-by: Kumar Gala <galak@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
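The following is a simplified, self-contained C sketch of the "stale context map" idea described in the commit message, not the actual arch/powerpc/mm code: each CPU keeps a bitmap of context IDs whose TLB entries may be stale on that CPU, a newly assigned ID is marked stale everywhere, and the flush happens lazily the first time that ID is next switched in on a given CPU. All identifiers and sizes here (stale_map, get_new_context, switch_mmu_context, NR_CPUS, LAST_CONTEXT) are illustrative assumptions, and the locking, PID wrap and context-stealing logic are omitted.

/*
 * Hedged sketch of lazy TLB flushing via per-CPU stale-context bitmaps.
 * Plain user-space C for illustration only; names are not the real
 * kernel identifiers.
 */
#include <stdio.h>

#define NR_CPUS       4
#define LAST_CONTEXT  255          /* e.g. an 8-bit PID space */

typedef struct {
	unsigned int  id;           /* hardware PID, 0 = not yet assigned */
	unsigned int  active;       /* CPUs currently running this mm (sketch only) */
	unsigned long vdso_base;
} mm_context_t;

struct mm {
	mm_context_t context;
};

/* One "may be stale" bitmap per CPU, one bit per context ID. */
static unsigned char stale_map[NR_CPUS][LAST_CONTEXT / 8 + 1];
static unsigned int next_context = 1;   /* next PID to hand out; wrap/steal omitted */

static void local_flush_tlb_pid(unsigned int pid)
{
	printf("flushing TLB entries for PID %u on this CPU\n", pid);
}

static int test_and_clear_stale(int cpu, unsigned int id)
{
	unsigned char mask = 1u << (id & 7);
	int was_stale = stale_map[cpu][id >> 3] & mask;

	stale_map[cpu][id >> 3] &= (unsigned char)~mask;
	return was_stale != 0;
}

/* Allocation path: mark the new ID stale on every CPU so each one
 * flushes it lazily the next time the ID is used there. */
static unsigned int get_new_context(void)
{
	unsigned int id = next_context++;
	int cpu;

	for (cpu = 0; cpu < NR_CPUS; cpu++)
		stale_map[cpu][id >> 3] |= 1u << (id & 7);
	return id;
}

/* Simplified context switch: assign a PID if needed, then flush only
 * if this CPU still has possibly-stale entries for that PID.
 * (Switch-out bookkeeping and locking are left out of the sketch.) */
static void switch_mmu_context(struct mm *next, int cpu)
{
	if (next->context.id == 0)
		next->context.id = get_new_context();

	next->context.active++;
	if (test_and_clear_stale(cpu, next->context.id))
		local_flush_tlb_pid(next->context.id);
}

int main(void)
{
	struct mm a = { { 0, 0, 0 } };

	switch_mmu_context(&a, 0);   /* flushes: ID was freshly marked stale */
	switch_mmu_context(&a, 0);   /* no flush: stale bit already cleared here */
	return 0;
}

In the real patch, as the commit message notes, the allocation path is serialized by a global spinlock and contexts can be stolen when the PID space is exhausted; the sketch above only shows the lazy-flush bookkeeping.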
Diffstat (limited to 'arch/powerpc/include/asm/mmu-fsl-booke.h')
 arch/powerpc/include/asm/mmu-fsl-booke.h | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/arch/powerpc/include/asm/mmu-fsl-booke.h b/arch/powerpc/include/asm/mmu-fsl-booke.h
index 5588a41f439c..3f941c0f7e8e 100644
--- a/arch/powerpc/include/asm/mmu-fsl-booke.h
+++ b/arch/powerpc/include/asm/mmu-fsl-booke.h
@@ -76,8 +76,9 @@
 #ifndef __ASSEMBLY__
 
 typedef struct {
-	unsigned long id;
-	unsigned long vdso_base;
+	unsigned int	id;
+	unsigned int	active;
+	unsigned long	vdso_base;
 } mm_context_t;
 #endif /* !__ASSEMBLY__ */
 