path: root/include/asm-ia64/mmu_context.h
* [IA64] Move include/asm-ia64 to arch/ia64/include/asm  (Tony Luck, 2008-08-01)
    After moving the include files there were a few clean-ups:
    1) Some files used #include <asm-ia64/xyz.h>; changed to <asm/xyz.h>.
    2) Some comments alerted maintainers to look at various header files to make
       matching updates if certain code were to be changed. Updated these
       comments to use the new include paths.
    3) Some header files mentioned their own names in initial comments. Just
       deleted these self-references.
    Signed-off-by: Tony Luck <tony.luck@intel.com>
* [IA64] pvops: preparation: introduce ia64_set_rr0_to_rr4() to make kernel paravirtualization friendly  (Isaku Yamahata, 2008-05-27)
    Make the kernel paravirtualization friendly by introducing
    ia64_set_rr0_to_rr4(). ia64/Xen will replace setting rr[0-4] with a single
    hypercall later.
    Signed-off-by: Isaku Yamahata <yamahata@valinux.co.jp>
    Signed-off-by: Tony Luck <tony.luck@intel.com>
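    A minimal sketch of what such a wrapper might look like, assuming the
    existing ia64_set_rr() intrinsic; on Xen the body would later be replaced
    by one hypercall that programs all five region registers at once
    (illustrative only, not the exact patch):

        /*
         * Sketch: set region registers rr0-rr4 through a single helper so a
         * paravirtualized kernel can override the whole operation at once.
         */
        static inline void
        ia64_set_rr0_to_rr4(unsigned long val0, unsigned long val1,
                            unsigned long val2, unsigned long val3,
                            unsigned long val4)
        {
                ia64_set_rr(0x0000000000000000UL, val0);
                ia64_set_rr(0x2000000000000000UL, val1);
                ia64_set_rr(0x4000000000000000UL, val2);
                ia64_set_rr(0x6000000000000000UL, val3);
                ia64_set_rr(0x8000000000000000UL, val4);
        }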
* [PATCH] x86: PARAVIRT: add hooks to intercept mm creation and destruction  (Jeremy Fitzhardinge, 2007-05-02)
    Add hooks to allow a paravirt implementation to track the lifetime of an
    mm. Paravirtualization requires three hooks, but only two are needed in
    common code. They are:
      - arch_dup_mmap, which is called when a new mmap is created at fork
      - arch_exit_mmap, which is called when the last process reference to an
        mm is dropped, which typically happens on exit and exec
    The third hook is activate_mm, which is called from the arch-specific
    activate_mm() macro/function, and so doesn't need stub versions for other
    architectures. It's called when an mm is first used.
    Signed-off-by: Jeremy Fitzhardinge <jeremy@xensource.com>
    Signed-off-by: Andi Kleen <ak@suse.de>
    Cc: linux-arch@vger.kernel.org
    Cc: James Bottomley <James.Bottomley@SteelEye.com>
    Acked-by: Ingo Molnar <mingo@elte.hu>
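    For an architecture that (at the time) had nothing to track at these
    points, the two common-code hooks can simply be empty inline stubs in
    mmu_context.h, roughly along these lines (a sketch, not the exact hunk
    applied to this file):

        /* No-op stubs: nothing to do when an mm is duplicated or torn down. */
        static inline void
        arch_dup_mmap(struct mm_struct *oldmm, struct mm_struct *mm)
        {
        }

        static inline void
        arch_exit_mmap(struct mm_struct *mm)
        {
        }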
* [IA64] make mmu_context.h and tlb.c 80-column friendly  (Chen, Kenneth W, 2005-11-03)
    wrap_mmu_context(), delayed_tlb_flush(), and get_mmu_context() all have an
    extra { } block which causes one extra level of indentation.
    get_mmu_context() is particularly bad, with 5 indentations to the innermost
    "if". It finally gets on my nerves that I can't keep the code within 80
    columns. Remove the extra { } block and, while at it, reformat all the
    comments to be 80-column friendly. No functional change at all with this
    patch.
    Signed-off-by: Ken Chen <kenneth.w.chen@intel.com>
    Signed-off-by: Tony Luck <tony.luck@intel.com>
* [IA64] Use bitmaps for efficient context allocation/free  (Peter Keilty, 2005-10-31)
    Corrects the very inefficient method of finding free context_ids in
    get_mmu_context(). Instead of walking the task_list of all processes, two
    bitmaps are used to efficiently store and look up state: which ids are in
    use and which need flushing. The entire rid address space is now used
    before calling wrap_mmu_context() and doing a global tlb flush. Special
    thanks to Ken and Rohit for their review and modifications in using a bit
    flushmap.
    Signed-off-by: Peter Keilty <peter.keilty@hp.com>
    Signed-off-by: Tony Luck <tony.luck@intel.com>
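    A simplified sketch of the bitmap approach. The names ctx_inuse, ctx_next,
    MAX_CTX and wrap_and_flush() are hypothetical; the real get_mmu_context()
    also maintains the separate needs-flushing bitmap and holds ia64_ctx.lock
    around all of this:

        #define MAX_CTX (1UL << 18)     /* placeholder; real width comes from hardware */

        static DECLARE_BITMAP(ctx_inuse, MAX_CTX);     /* one bit per context id */
        static unsigned long ctx_next = 1;             /* context 0 means "none" */

        static unsigned long alloc_context_id(void)
        {
                unsigned long ctx;

                ctx = find_next_zero_bit(ctx_inuse, MAX_CTX, ctx_next);
                if (ctx >= MAX_CTX) {
                        /*
                         * Id space exhausted: recycle unused ids, flush stale
                         * TLB entries and reset ctx_next (hypothetical helper).
                         */
                        wrap_and_flush();
                        ctx = find_next_zero_bit(ctx_inuse, MAX_CTX, ctx_next);
                }
                __set_bit(ctx, ctx_inuse);
                ctx_next = ctx + 1;
                return ctx;
        }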
* Pull rationalise-regions into release branch  (Tony Luck, 2005-08-29)
* [IA64] Rationalise Region Definitions  (Peter Chubb, 2005-08-24)
    Currently, region numbers are defined in several files, with several names.
    For example, we have REGION_KERNEL in asm/page.h and RGN_KERNEL in
    pgtable.h. We also have address definitions that should depend on the
    RGN_XXX macros, but are currently just long constants. The following patch
    reorganises all the definitions so that they have the same form (RGN_XXX),
    are in one place, and that addresses that depend on RGN_XXX are derived
    from them. (This is a necessary but not sufficient patch to allow UML-like
    operation on IA64). Thanks to David Mosberger for catching the change I
    missed in mmu_context.h.
    Signed-off-by: Peter Chubb <peterc@gelato.unsw.edu.au>
    Signed-off-by: Tony Luck <tony.luck@intel.com>
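    The consolidated definitions end up looking roughly like the sketch below;
    treat the exact values and the derived PAGE_OFFSET example as assumptions
    rather than a quote of the resulting header:

        #define RGN_SHIFT       61
        #define RGN_BASE(r)     (__IA64_UL_CONST(r) << RGN_SHIFT)

        #define RGN_KERNEL      7       /* identity-mapped kernel region */
        #define RGN_UNCACHED    6       /* identity-mapped, uncached I/O  */
        #define RGN_GATE        5       /* gate page, kernel text, etc.   */
        #define RGN_HPAGE       4       /* huge TLB pages                 */

        /* Addresses are then derived from the region numbers, e.g.: */
        #define PAGE_OFFSET     RGN_BASE(RGN_KERNEL)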
* [IA64] Fix race in mm-context wrap-around logic.  (David Mosberger-Tang, 2005-08-12)
    The patch below should fix a race which could cause stale TLB entries.
    Specifically, when two CPUs ended up racing for entrance to
    wrap_mmu_context(), the losing CPU would find that, by the time it acquired
    ctx.lock, mm->context already had a valid value, but it then failed to
    (re-)check the delayed TLB flushing logic and hence could end up using a
    context number while there were still stale entries in its TLB. The fix is
    to check for delayed TLB flushes only after mm->context is valid
    (non-zero). The patch also makes GCC v4.x happier by defining a
    non-volatile variant of mm_context_t called nv_mm_context_t.
    Signed-off-by: David Mosberger-Tang <David.Mosberger@acm.org>
    Signed-off-by: Tony Luck <tony.luck@intel.com>
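    A simplified sketch of the fixed ordering in get_mmu_context(), assuming a
    hypothetical alloc_new_context() helper for the allocation path; the point
    is that delayed_tlb_flush() runs only once a valid context is in hand:

        static inline mm_context_t
        get_mmu_context(struct mm_struct *mm)
        {
                unsigned long flags;
                nv_mm_context_t context = mm->context;  /* non-volatile copy */

                if (unlikely(!context)) {
                        spin_lock_irqsave(&ia64_ctx.lock, flags);
                        /* re-check under the lock: another CPU may have raced us */
                        context = mm->context;
                        if (!context)
                                /* hypothetical helper for the allocation path */
                                context = mm->context = alloc_new_context(mm);
                        spin_unlock_irqrestore(&ia64_ctx.lock, flags);
                }

                /*
                 * Only now, with a valid context, handle any delayed TLB flush;
                 * checking earlier is what allowed stale entries to survive.
                 */
                delayed_tlb_flush();

                return context;
        }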
* [IA64] Fix another IA64 preemption problem  (Peter Chubb, 2005-06-28)
    There's another problem shown up by Ingo's recent patch to make
    smp_processor_id() complain if it's called with preemption enabled.
    local_finish_flush_tlb_mm() calls activate_context() in a situation where
    it could be rescheduled to another processor. This patch disables
    preemption around the call.
    Signed-off-by: Peter Chubb <peterc@gelato.unsw.edu.au>
    Signed-off-by: Tony Luck <tony.luck@intel.com>
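    An illustrative sketch of the shape of the fix (not the exact hunk): pin
    the task to its current CPU while the per-CPU context state is updated.

        static void
        local_finish_flush_tlb_mm(struct mm_struct *mm)
        {
                preempt_disable();      /* must not migrate mid-update */
                if (mm == current->active_mm)
                        activate_context(mm);
                preempt_enable();
        }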
* Linux-2.6.12-rc2 (tag: v2.6.12-rc2)  (Linus Torvalds, 2005-04-16)
    Initial git repository build. I'm not bothering with the full history,
    even though we have it. We can create a separate "historical" git archive
    of that later if we want to, and in the meantime it's about 3.2GB when
    imported into git - space that would just make the early git days
    unnecessarily complicated, when we don't have a lot of good infrastructure
    for it. Let it rip!