author     Jeremy Fitzhardinge <jeremy@xensource.com>   2007-07-17 21:37:04 -0400
committer  Jeremy Fitzhardinge <jeremy@goop.org>        2007-07-18 11:47:42 -0400
commit     5ead97c84fa7d63a6a7a2f4e9f18f452bd109045
tree       26f6bc55dce0f119f7d3c8d6b40d2f287601db36 /include/xen/page.h
parent     a42089dd358a7673a0a23126589a9029e57c2049
xen: Core Xen implementation
This patch is a rollup of all the core pieces of the Xen
implementation, including:
- booting and setup
- pagetable setup
- privileged instructions
- segmentation
- interrupt flags
- upcalls
- multicall batching
BOOTING AND SETUP
The vmlinux image is decorated with ELF notes which tell the Xen
domain builder what the kernel's requirements are; the domain builder
then constructs the address space accordingly and starts the kernel.
Xen has its own entrypoint for the kernel (contained in an ELF note).
The ELF notes are set up by xen-head.S, which is included into head.S.
In principle it could be linked separately, but it seems to provoke
lots of binutils bugs.
Because the domain builder starts the kernel in a fairly sane state
(32-bit protected mode, paging enabled, flat segments set up), there's
not a lot of setup needed before starting the kernel proper. The main
steps are:
1. Install the Xen paravirt_ops, which is simply a matter of a
structure assignment (see the sketch after this list).
2. Set init_mm to use the Xen-supplied pagetables (analogous to the
head.S generated pagetables in a native boot).
3. Reserve address space for Xen, since it takes a chunk at the top
of the address space for its own use.
4. Call start_kernel()
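
A minimal sketch of step 1, assuming the single paravirt_ops structure of
this kernel generation; the hook names filled in below are illustrative
examples rather than the exact set this patch installs:

/* Illustrative: install the Xen paravirt_ops by structure assignment. */
static const struct paravirt_ops xen_paravirt_ops = {
        .paravirt_enabled = 1,
        .name = "Xen",
        .cpuid = xen_cpuid,
        .write_cr3 = xen_write_cr3,
        .set_pte = xen_set_pte,
        .irq_disable = xen_irq_disable,
        .irq_enable = xen_irq_enable,
        /* ... remaining hooks ... */
};

asmlinkage void __init xen_start_kernel(void)
{
        paravirt_ops = xen_paravirt_ops;        /* step 1: structure assignment */
        /* steps 2-4: adopt the Xen-built pagetables, reserve the hypervisor
         * hole at the top of the address space, then enter start_kernel() */
        start_kernel();
}
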
PAGETABLE SETUP
Once we hit the main kernel boot sequence, it will end up calling back
via paravirt_ops to set up various pieces of Xen-specific state. One
of the critical things which requires a bit of extra care is the
construction of the initial init_mm pagetable. Because Xen places
tight constraints on pagetables (an active pagetable must always be
valid, and must always be mapped read-only to the guest domain), we
need to be careful when constructing the new pagetable to keep these
constraints in mind. It turns out that the easiest way to do this is
to use the initial Xen-provided pagetable as a template, and then just
insert new mappings for memory where a mapping doesn't already exist.
This means that during pagetable setup, the kernel uses a special version of
xen_set_pte which ignores any attempt to remap a read-only page as
read-write (since Xen will map its own initial pagetable as RO), but
lets other changes to the ptes happen, so that things like NX are set
properly.
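
A sketch of that boot-time pte setter, under the assumption that it only
needs to preserve an existing read-only mapping; the names mask_rw_pte and
xen_set_pte_init are illustrative, not necessarily those used by the patch:

/* If Xen already maps this page read-only (it is part of the initial
 * pagetable), don't let the new mapping flip it to read-write; other
 * attribute changes, such as NX, pass through untouched. */
static pte_t mask_rw_pte(pte_t *ptep, pte_t pte)
{
        if (pte_present(*ptep) && !pte_write(*ptep))
                pte = pte_wrprotect(pte);
        return pte;
}

static void xen_set_pte_init(pte_t *ptep, pte_t pte)
{
        *ptep = mask_rw_pte(ptep, pte);
}
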
PRIVILEGED INSTRUCTIONS AND SEGMENTATION
When the kernel runs under Xen, it runs in ring 1 rather than ring 0.
This means that it is more privileged than user-mode in ring 3, but it
still can't run privileged instructions directly. Non-performance-critical
instructions are dealt with by taking a privilege exception, trapping into
the hypervisor, and having it emulate the instruction, but
more performance-critical instructions have their own specific
paravirt_ops. In many cases we can avoid having to do any hypercalls
for these instructions, or the Xen implementation is quite different
from the normal native version.
The privileged instructions fall into the broad classes of:
Segmentation: setting up the GDT and the GDT entries, LDT,
TLS and so on. Xen doesn't allow the GDT to be directly
modified; all GDT updates are done via hypercalls where the new
entries can be validated. This is important because Xen uses
segment limits to prevent the guest kernel from damaging the
hypervisor itself.
Traps and exceptions: Xen uses a special format for trap entrypoints,
so when the kernel wants to set an IDT entry, it needs to be
converted to the form Xen expects. Xen sets int 0x80 up specially
so that the trap goes straight from userspace into the guest kernel
without going via the hypervisor. sysenter isn't supported.
Kernel stack: The esp0 entry is extracted from the TSS and provided to
Xen.
TLB operations: the various TLB calls are mapped into corresponding
Xen hypercalls.
Control registers: all the control registers are privileged. The most
important is cr3, which points to the base of the current pagetable,
and we handle it specially.
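
As an example of the control-register class, a cr3 load can be turned into a
request that Xen switch to the machine frame backing the new pagetable. This
is only a sketch built on the mmuext_op hypercall from the Xen interface
headers; the real patch also funnels such updates through the multicall
batching described below:

static void xen_write_cr3(unsigned long cr3)
{
        struct mmuext_op op = {
                .cmd      = MMUEXT_NEW_BASEPTR,
                .arg1.mfn = pfn_to_mfn(PFN_DOWN(cr3)),
        };

        /* Ask Xen to validate and switch to the new pagetable base
         * instead of executing a privileged "mov %eax, %cr3". */
        if (HYPERVISOR_mmuext_op(&op, 1, NULL, DOMID_SELF) < 0)
                BUG();
}
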
Another instruction we treat specially is CPUID, even though it's not
privileged. We want to control what CPU features are visible to the
rest of the kernel, and so CPUID ends up going into a paravirt_op.
Xen implements this mainly to disable the ACPI and APIC subsystems.
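
A sketch of the idea, assuming the feature bits are simply cleared from
CPUID leaf 1; the specific bits masked here (APIC, ACPI) illustrate the
intent rather than reproduce the patch's exact list:

static void xen_cpuid(unsigned int *eax, unsigned int *ebx,
                      unsigned int *ecx, unsigned int *edx)
{
        unsigned int leaf = *eax;

        /* Run the real instruction... */
        asm volatile("cpuid"
                     : "=a" (*eax), "=b" (*ebx), "=c" (*ecx), "=d" (*edx)
                     : "0" (*eax), "2" (*ecx));

        /* ...then hide features the rest of the kernel must not use. */
        if (leaf == 1) {
                *edx &= ~(1u << 9);     /* X86_FEATURE_APIC */
                *edx &= ~(1u << 22);    /* X86_FEATURE_ACPI */
        }
}
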
INTERRUPT FLAGS
Xen maintains its own separate flag for masking events, which is
contained within the per-cpu vcpu_info structure. Because the guest
kernel runs in ring 1 and not 0, the IF flag in EFLAGS is completely
ignored (and must be, because even if a guest domain disables
interrupts for itself, it can't disable them overall).
(A note on terminology: "events" and interrupts are effectively
synonymous. However, rather than using an "enable flag", Xen uses a
"mask flag", which blocks event delivery when it is non-zero.)
There are paravirt_ops for each of cli/sti/save_fl/restore_fl, which
are implemented to manage the Xen event mask state. The only thing
worth noting is that when events are unmasked, we need to explicitly
see if there's a pending event and call into the hypervisor to make
sure it gets delivered.
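
Roughly, and assuming a per-cpu pointer into the shared vcpu_info area
(called xen_vcpu here for illustration), those ops look like this;
force_evtchn_callback stands for a cheap hypercall that makes Xen deliver
anything that became pending while events were masked:

static void xen_irq_disable(void)
{
        /* "cli": just set the per-vcpu event mask. */
        x86_read_percpu(xen_vcpu)->evtchn_upcall_mask = 1;
}

static void xen_irq_enable(void)
{
        struct vcpu_info *vcpu = x86_read_percpu(xen_vcpu);

        /* "sti": clear the mask... */
        vcpu->evtchn_upcall_mask = 0;
        barrier();
        /* ...and if an event arrived while we were masked, ask the
         * hypervisor to deliver it now. */
        if (vcpu->evtchn_upcall_pending)
                force_evtchn_callback();
}
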
UPCALLS
Xen needs a couple of upcall (or callback) functions to be implemented
by each guest. One is the event upcall, which is how events
(interrupts, effectively) are delivered to the guest. The other is
the failsafe callback, which is used to report errors caused either by
reloading a segment register or by iret. These are
implemented in i386/kernel/entry.S so they can jump into the normal
iret_exc path when necessary.
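
Registering them with the hypervisor might look roughly like the following;
the callback symbols are the entry.S entry points, the 32-bit set_callbacks
interface takes a code selector plus an address for each, and the wrapper
name is illustrative (a sketch, not necessarily the patch's exact call site):

extern void xen_hypervisor_callback(void);   /* event upcall, in entry.S */
extern void xen_failsafe_callback(void);     /* failsafe upcall, in entry.S */

static void __init xen_setup_callbacks(void)
{
        HYPERVISOR_set_callbacks(__KERNEL_CS,
                                 (unsigned long)xen_hypervisor_callback,
                                 __KERNEL_CS,
                                 (unsigned long)xen_failsafe_callback);
}
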
MULTICALL BATCHING
Xen provides a multicall mechanism, which allows multiple hypercalls
to be issued at once in order to mitigate the cost of trapping into
the hypervisor. This is particularly useful for context switches,
since the 4-5 hypercalls they would normally need (reload cr3, update
TLS, maybe update LDT) can be reduced to one. This patch implements a
generic batching mechanism for hypercalls, which gets used in many
places in the Xen code.
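
The shape of such a batcher, sketched with illustrative names and a fixed
per-cpu buffer (the patch's real interface lives in multicalls.[ch]):

#define MC_BATCH 32

struct mc_buffer {
        struct multicall_entry entries[MC_BATCH];
        unsigned int count;
};
static DEFINE_PER_CPU(struct mc_buffer, mc_buffer);

/* Issue every queued hypercall with a single trap into the hypervisor. */
static void xen_mc_flush(void)
{
        struct mc_buffer *b = &__get_cpu_var(mc_buffer);

        if (b->count && HYPERVISOR_multicall(b->entries, b->count) != 0)
                BUG();
        b->count = 0;
}

/* Reserve the next slot in the batch, flushing first if it is full.
 * Callers fill in the op and args, then flush when convenient. */
static struct multicall_entry *xen_mc_entry(void)
{
        struct mc_buffer *b = &__get_cpu_var(mc_buffer);

        if (b->count == MC_BATCH)
                xen_mc_flush();
        return &b->entries[b->count++];
}

A context switch can then queue its cr3 reload and TLS updates as separate
entries and pay for only one trap into Xen when it flushes.
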
Signed-off-by: Jeremy Fitzhardinge <jeremy@xensource.com>
Signed-off-by: Chris Wright <chrisw@sous-sol.org>
Cc: Ian Pratt <ian.pratt@xensource.com>
Cc: Christian Limpach <Christian.Limpach@cl.cam.ac.uk>
Cc: Adrian Bunk <bunk@stusta.de>
Diffstat (limited to 'include/xen/page.h')
-rw-r--r--   include/xen/page.h   179
1 file changed, 179 insertions, 0 deletions
diff --git a/include/xen/page.h b/include/xen/page.h
new file mode 100644
index 000000000000..1df6c1930578
--- /dev/null
+++ b/include/xen/page.h
@@ -0,0 +1,179 @@
#ifndef __XEN_PAGE_H
#define __XEN_PAGE_H

#include <linux/pfn.h>

#include <asm/uaccess.h>

#include <xen/features.h>

#ifdef CONFIG_X86_PAE
/* Xen machine address */
typedef struct xmaddr {
        unsigned long long maddr;
} xmaddr_t;

/* Xen pseudo-physical address */
typedef struct xpaddr {
        unsigned long long paddr;
} xpaddr_t;
#else
/* Xen machine address */
typedef struct xmaddr {
        unsigned long maddr;
} xmaddr_t;

/* Xen pseudo-physical address */
typedef struct xpaddr {
        unsigned long paddr;
} xpaddr_t;
#endif

#define XMADDR(x)       ((xmaddr_t) { .maddr = (x) })
#define XPADDR(x)       ((xpaddr_t) { .paddr = (x) })

/**** MACHINE <-> PHYSICAL CONVERSION MACROS ****/
#define INVALID_P2M_ENTRY       (~0UL)
#define FOREIGN_FRAME_BIT       (1UL<<31)
#define FOREIGN_FRAME(m)        ((m) | FOREIGN_FRAME_BIT)

extern unsigned long *phys_to_machine_mapping;

static inline unsigned long pfn_to_mfn(unsigned long pfn)
{
        if (xen_feature(XENFEAT_auto_translated_physmap))
                return pfn;

        return phys_to_machine_mapping[(unsigned int)(pfn)] &
                ~FOREIGN_FRAME_BIT;
}

static inline int phys_to_machine_mapping_valid(unsigned long pfn)
{
        if (xen_feature(XENFEAT_auto_translated_physmap))
                return 1;

        return (phys_to_machine_mapping[pfn] != INVALID_P2M_ENTRY);
}

static inline unsigned long mfn_to_pfn(unsigned long mfn)
{
        unsigned long pfn;

        if (xen_feature(XENFEAT_auto_translated_physmap))
                return mfn;

#if 0
        if (unlikely((mfn >> machine_to_phys_order) != 0))
                return max_mapnr;
#endif

        pfn = 0;
        /*
         * The array access can fail (e.g., device space beyond end of RAM).
         * In such cases it doesn't matter what we return (we return garbage),
         * but we must handle the fault without crashing!
         */
        __get_user(pfn, &machine_to_phys_mapping[mfn]);

        return pfn;
}

static inline xmaddr_t phys_to_machine(xpaddr_t phys)
{
        unsigned offset = phys.paddr & ~PAGE_MASK;
        return XMADDR(PFN_PHYS((u64)pfn_to_mfn(PFN_DOWN(phys.paddr))) | offset);
}

static inline xpaddr_t machine_to_phys(xmaddr_t machine)
{
        unsigned offset = machine.maddr & ~PAGE_MASK;
        return XPADDR(PFN_PHYS((u64)mfn_to_pfn(PFN_DOWN(machine.maddr))) | offset);
}

/*
 * We detect special mappings in one of two ways:
 *  1. If the MFN is an I/O page then Xen will set the m2p entry
 *     to be outside our maximum possible pseudophys range.
 *  2. If the MFN belongs to a different domain then we will certainly
 *     not have MFN in our p2m table. Conversely, if the page is ours,
 *     then we'll have p2m(m2p(MFN))==MFN.
 * If we detect a special mapping then it doesn't have a 'struct page'.
 * We force !pfn_valid() by returning an out-of-range pointer.
 *
 * NB. These checks require that, for any MFN that is not in our reservation,
 * there is no PFN such that p2m(PFN) == MFN. Otherwise we can get confused if
 * we are foreign-mapping the MFN, and the other domain has m2p(MFN) == PFN.
 * Yikes! Various places must poke in INVALID_P2M_ENTRY for safety.
 *
 * NB2. When deliberately mapping foreign pages into the p2m table, you *must*
 *      use FOREIGN_FRAME(). This will cause pte_pfn() to choke on it, as we
 *      require. In all the cases we care about, the FOREIGN_FRAME bit is
 *      masked (e.g., pfn_to_mfn()) so behaviour there is correct.
 */
static inline unsigned long mfn_to_local_pfn(unsigned long mfn)
{
        extern unsigned long max_mapnr;
        unsigned long pfn = mfn_to_pfn(mfn);
        if ((pfn < max_mapnr)
            && !xen_feature(XENFEAT_auto_translated_physmap)
            && (phys_to_machine_mapping[pfn] != mfn))
                return max_mapnr; /* force !pfn_valid() */
        return pfn;
}

static inline void set_phys_to_machine(unsigned long pfn, unsigned long mfn)
{
        if (xen_feature(XENFEAT_auto_translated_physmap)) {
                BUG_ON(pfn != mfn && mfn != INVALID_P2M_ENTRY);
                return;
        }
        phys_to_machine_mapping[pfn] = mfn;
}

/* VIRT <-> MACHINE conversion */
#define virt_to_machine(v)      (phys_to_machine(XPADDR(__pa(v))))
#define virt_to_mfn(v)          (pfn_to_mfn(PFN_DOWN(__pa(v))))
#define mfn_to_virt(m)          (__va(mfn_to_pfn(m) << PAGE_SHIFT))

#ifdef CONFIG_X86_PAE
#define pte_mfn(_pte) (((_pte).pte_low >> PAGE_SHIFT) | \
                       (((_pte).pte_high & 0xfff) << (32-PAGE_SHIFT)))

static inline pte_t mfn_pte(unsigned long page_nr, pgprot_t pgprot)
{
        pte_t pte;

        pte.pte_high = (page_nr >> (32 - PAGE_SHIFT)) |
                (pgprot_val(pgprot) >> 32);
        pte.pte_high &= (__supported_pte_mask >> 32);
        pte.pte_low = ((page_nr << PAGE_SHIFT) | pgprot_val(pgprot));
        pte.pte_low &= __supported_pte_mask;

        return pte;
}

static inline unsigned long long pte_val_ma(pte_t x)
{
        return ((unsigned long long)x.pte_high << 32) | x.pte_low;
}
#define pmd_val_ma(v) ((v).pmd)
#define pud_val_ma(v) ((v).pgd.pgd)
#define __pte_ma(x)     ((pte_t) { .pte_low = (x), .pte_high = (x)>>32 } )
#define __pmd_ma(x)     ((pmd_t) { (x) } )
#else  /* !X86_PAE */
#define pte_mfn(_pte) ((_pte).pte_low >> PAGE_SHIFT)
#define mfn_pte(pfn, prot)      __pte_ma(((pfn) << PAGE_SHIFT) | pgprot_val(prot))
#define pte_val_ma(x)   ((x).pte_low)
#define pmd_val_ma(v)   ((v).pud.pgd.pgd)
#define __pte_ma(x)     ((pte_t) { (x) } )
#endif  /* CONFIG_X86_PAE */

#define pgd_val_ma(x)   ((x).pgd)


xmaddr_t arbitrary_virt_to_machine(unsigned long address);
void make_lowmem_page_readonly(void *vaddr);
void make_lowmem_page_readwrite(void *vaddr);

#endif /* __XEN_PAGE_H */