Diffstat (limited to 'Documentation/virtual/kvm')
-rw-r--r--  Documentation/virtual/kvm/api.txt                1479
-rw-r--r--  Documentation/virtual/kvm/cpuid.txt                45
-rw-r--r--  Documentation/virtual/kvm/locking.txt              25
-rw-r--r--  Documentation/virtual/kvm/mmu.txt                 348
-rw-r--r--  Documentation/virtual/kvm/msr.txt                 187
-rw-r--r--  Documentation/virtual/kvm/ppc-pv.txt              196
-rw-r--r--  Documentation/virtual/kvm/review-checklist.txt     38
-rw-r--r--  Documentation/virtual/kvm/timekeeping.txt         612
8 files changed, 2930 insertions, 0 deletions
diff --git a/Documentation/virtual/kvm/api.txt b/Documentation/virtual/kvm/api.txt
new file mode 100644
index 000000000000..42542eb802ca
--- /dev/null
+++ b/Documentation/virtual/kvm/api.txt
@@ -0,0 +1,1479 @@
1The Definitive KVM (Kernel-based Virtual Machine) API Documentation
2===================================================================
3
41. General description
5
6The kvm API is a set of ioctls that are issued to control various aspects
7of a virtual machine. The ioctls belong to three classes:
8
9 - System ioctls: These query and set global attributes which affect the
10 whole kvm subsystem. In addition a system ioctl is used to create
11 virtual machines
12
13 - VM ioctls: These query and set attributes that affect an entire virtual
14 machine, for example memory layout. In addition a VM ioctl is used to
15 create virtual cpus (vcpus).
16
17 Only run VM ioctls from the same process (address space) that was used
18 to create the VM.
19
20 - vcpu ioctls: These query and set attributes that control the operation
21 of a single virtual cpu.
22
23 Only run vcpu ioctls from the same thread that was used to create the
24 vcpu.
25
262. File descriptors
27
28The kvm API is centered around file descriptors. An initial
29open("/dev/kvm") obtains a handle to the kvm subsystem; this handle
30can be used to issue system ioctls. A KVM_CREATE_VM ioctl on this
31handle will create a VM file descriptor which can be used to issue VM
32ioctls. A KVM_CREATE_VCPU ioctl on a VM fd will create a virtual cpu
33and return a file descriptor pointing to it. Finally, ioctls on a vcpu
34fd can be used to control the vcpu, including the important task of
35actually running guest code.
36
37In general file descriptors can be migrated among processes by means
38of fork() and the SCM_RIGHTS facility of unix domain socket. These
39kinds of tricks are explicitly not supported by kvm. While they will
40not cause harm to the host, their actual behavior is not guaranteed by
41the API. The only supported use is one virtual machine per process,
42and one vcpu per thread.
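
For illustration only, a minimal sketch in C of obtaining the three kinds of
file descriptors described above (error handling is omitted and only
<linux/kvm.h> plus the usual libc headers are assumed):

/* Sketch: obtain system, VM and vcpu file descriptors (no error handling). */
#include <fcntl.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

int create_vcpu_fd(void)
{
        int kvm = open("/dev/kvm", O_RDWR);             /* system fd */

        if (ioctl(kvm, KVM_GET_API_VERSION, 0) != 12)   /* see section 4.1 */
                return -1;

        int vm = ioctl(kvm, KVM_CREATE_VM, 0);          /* VM fd */
        int vcpu = ioctl(vm, KVM_CREATE_VCPU, 0);       /* vcpu fd, vcpu id 0 */

        return vcpu;    /* issue vcpu ioctls only from the thread that runs it */
}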
43
443. Extensions
45
46As of Linux 2.6.22, the KVM ABI has been stabilized: no backward
47incompatible change are allowed. However, there is an extension
48facility that allows backward-compatible extensions to the API to be
49queried and used.
50
51The extension mechanism is not based on the Linux version number.
52Instead, kvm defines extension identifiers and a facility to query
53whether a particular extension identifier is available. If it is, a
54set of ioctls is available for application use.
55
564. API description
57
58This section describes ioctls that can be used to control kvm guests.
59For each ioctl, the following information is provided along with a
60description:
61
62 Capability: which KVM extension provides this ioctl. Can be 'basic',
63 which means that it will be provided by any kernel that supports
64 API version 12 (see section 4.1), or a KVM_CAP_xyz constant, which
65 means availability needs to be checked with KVM_CHECK_EXTENSION
66 (see section 4.4).
67
68 Architectures: which instruction set architectures provide this ioctl.
69 x86 includes both i386 and x86_64.
70
71 Type: system, vm, or vcpu.
72
73 Parameters: what parameters are accepted by the ioctl.
74
75 Returns: the return value. General error numbers (EBADF, ENOMEM, EINVAL)
76 are not detailed, but errors with specific meanings are.
77
784.1 KVM_GET_API_VERSION
79
80Capability: basic
81Architectures: all
82Type: system ioctl
83Parameters: none
84Returns: the constant KVM_API_VERSION (=12)
85
86This identifies the API version as the stable kvm API. It is not
87expected that this number will change. However, Linux 2.6.20 and
882.6.21 report earlier versions; these are not documented and not
89supported. Applications should refuse to run if KVM_GET_API_VERSION
90returns a value other than 12. If this check passes, all ioctls
91described as 'basic' will be available.
92
934.2 KVM_CREATE_VM
94
95Capability: basic
96Architectures: all
97Type: system ioctl
98Parameters: none
99Returns: a VM fd that can be used to control the new virtual machine.
100
101The new VM has no virtual cpus and no memory. An mmap() of a VM fd
102will access the virtual machine's physical address space; offset zero
103corresponds to guest physical address zero. Use of mmap() on a VM fd
104is discouraged if userspace memory allocation (KVM_CAP_USER_MEMORY) is
105available.
106
1074.3 KVM_GET_MSR_INDEX_LIST
108
109Capability: basic
110Architectures: x86
111Type: system
112Parameters: struct kvm_msr_list (in/out)
113Returns: 0 on success; -1 on error
114Errors:
115 E2BIG: the msr index list is too big to fit in the array specified by
116 the user.
117
118struct kvm_msr_list {
119 __u32 nmsrs; /* number of msrs in entries */
120 __u32 indices[0];
121};
122
123This ioctl returns the guest msrs that are supported. The list varies
124by kvm version and host processor, but does not change otherwise. The
125user fills in the size of the indices array in nmsrs, and in return
126kvm adjusts nmsrs to reflect the actual number of msrs and fills in
127the indices array with their numbers.
128
129Note: if kvm indicates support for MCE (KVM_CAP_MCE), then the MCE bank MSRs are
130not returned in the MSR list, as different vcpus can have a different number
131of banks, as set via the KVM_X86_SETUP_MCE ioctl.
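
A possible userspace calling pattern for KVM_GET_MSR_INDEX_LIST is sketched
below; the initial guess of 64 entries is arbitrary, and the retry assumes
nmsrs is updated even when E2BIG is returned:

#include <errno.h>
#include <stdlib.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

struct kvm_msr_list *get_msr_index_list(int kvm_fd)
{
        __u32 n = 64;                   /* arbitrary initial guess */

        for (;;) {
                struct kvm_msr_list *list =
                        malloc(sizeof(*list) + n * sizeof(__u32));

                list->nmsrs = n;
                if (ioctl(kvm_fd, KVM_GET_MSR_INDEX_LIST, list) == 0)
                        return list;    /* nmsrs is now the real count */

                n = list->nmsrs;        /* assumed to be updated on E2BIG */
                free(list);
                if (errno != E2BIG)
                        return NULL;
        }
}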
132
1334.4 KVM_CHECK_EXTENSION
134
135Capability: basic
136Architectures: all
137Type: system ioctl
138Parameters: extension identifier (KVM_CAP_*)
139Returns: 0 if unsupported; 1 (or some other positive integer) if supported
140
141The API allows the application to query about extensions to the core
142kvm API. Userspace passes an extension identifier (an integer) and
143receives an integer that describes the extension availability.
144Generally 0 means no and 1 means yes, but some extensions may report
145additional information in the integer return value.
146
1474.5 KVM_GET_VCPU_MMAP_SIZE
148
149Capability: basic
150Architectures: all
151Type: system ioctl
152Parameters: none
153Returns: size of vcpu mmap area, in bytes
154
155The KVM_RUN ioctl (cf.) communicates with userspace via a shared
156memory region. This ioctl returns the size of that region. See the
157KVM_RUN documentation for details.
158
1594.6 KVM_SET_MEMORY_REGION
160
161Capability: basic
162Architectures: all
163Type: vm ioctl
164Parameters: struct kvm_memory_region (in)
165Returns: 0 on success, -1 on error
166
167This ioctl is obsolete and has been removed.
168
1694.7 KVM_CREATE_VCPU
170
171Capability: basic
172Architectures: all
173Type: vm ioctl
174Parameters: vcpu id (apic id on x86)
175Returns: vcpu fd on success, -1 on error
176
177This API adds a vcpu to a virtual machine. The vcpu id is a small integer
178in the range [0, max_vcpus). You can query the KVM_CAP_NR_VCPUS extension with
179the KVM_CHECK_EXTENSION ioctl() to determine the value of max_vcpus at run-time.
180If KVM_CAP_NR_VCPUS is not available, you should assume that max_vcpus is at
181most 4.
182
1834.8 KVM_GET_DIRTY_LOG (vm ioctl)
184
185Capability: basic
186Architectures: x86
187Type: vm ioctl
188Parameters: struct kvm_dirty_log (in/out)
189Returns: 0 on success, -1 on error
190
191/* for KVM_GET_DIRTY_LOG */
192struct kvm_dirty_log {
193 __u32 slot;
194 __u32 padding;
195 union {
196 void __user *dirty_bitmap; /* one bit per page */
197 __u64 padding;
198 };
199};
200
201Given a memory slot, return a bitmap containing any pages dirtied
202since the last call to this ioctl. Bit 0 is the first page in the
203memory slot. Ensure the entire structure is cleared to avoid padding
204issues.
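
As a sketch, a caller that knows the slot number and its size (for example
from its own KVM_SET_USER_MEMORY_REGION bookkeeping) might fetch the bitmap
like this; error handling is omitted:

#include <stdlib.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

/* One bit per 4 KiB page in the slot, rounded up to whole bytes. */
void *fetch_dirty_bitmap(int vm_fd, __u32 slot, __u64 memory_size)
{
        size_t bitmap_bytes = (memory_size / 4096 + 7) / 8;
        void *bitmap = calloc(1, bitmap_bytes);
        struct kvm_dirty_log log;

        memset(&log, 0, sizeof(log));   /* clear the whole struct, as advised */
        log.slot = slot;
        log.dirty_bitmap = bitmap;

        ioctl(vm_fd, KVM_GET_DIRTY_LOG, &log);
        return bitmap;
}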
205
2064.9 KVM_SET_MEMORY_ALIAS
207
208Capability: basic
209Architectures: x86
210Type: vm ioctl
211Parameters: struct kvm_memory_alias (in)
212Returns: 0 (success), -1 (error)
213
214This ioctl is obsolete and has been removed.
215
2164.10 KVM_RUN
217
218Capability: basic
219Architectures: all
220Type: vcpu ioctl
221Parameters: none
222Returns: 0 on success, -1 on error
223Errors:
224 EINTR: an unmasked signal is pending
225
226This ioctl is used to run a guest virtual cpu. While there are no
227explicit parameters, there is an implicit parameter block that can be
228obtained by mmap()ing the vcpu fd at offset 0, with the size given by
229KVM_GET_VCPU_MMAP_SIZE. The parameter block is formatted as a 'struct
230kvm_run' (see below).
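
Combining KVM_GET_VCPU_MMAP_SIZE and KVM_RUN, a skeleton run loop could look
like the sketch below; only one exit reason is handled and errors are ignored:

#include <sys/ioctl.h>
#include <sys/mman.h>
#include <linux/kvm.h>

void run_vcpu(int kvm_fd, int vcpu_fd)
{
        long size = ioctl(kvm_fd, KVM_GET_VCPU_MMAP_SIZE, 0);
        struct kvm_run *run = mmap(NULL, size, PROT_READ | PROT_WRITE,
                                   MAP_SHARED, vcpu_fd, 0);

        for (;;) {
                ioctl(vcpu_fd, KVM_RUN, 0);     /* returns on a guest exit or a signal */

                switch (run->exit_reason) {
                case KVM_EXIT_HLT:
                        return;                 /* guest executed HLT */
                default:
                        /* handle KVM_EXIT_IO, KVM_EXIT_MMIO, ... (see section 5) */
                        break;
                }
        }
}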
231
2324.11 KVM_GET_REGS
233
234Capability: basic
235Architectures: all
236Type: vcpu ioctl
237Parameters: struct kvm_regs (out)
238Returns: 0 on success, -1 on error
239
240Reads the general purpose registers from the vcpu.
241
242/* x86 */
243struct kvm_regs {
244 /* out (KVM_GET_REGS) / in (KVM_SET_REGS) */
245 __u64 rax, rbx, rcx, rdx;
246 __u64 rsi, rdi, rsp, rbp;
247 __u64 r8, r9, r10, r11;
248 __u64 r12, r13, r14, r15;
249 __u64 rip, rflags;
250};
251
2524.12 KVM_SET_REGS
253
254Capability: basic
255Architectures: all
256Type: vcpu ioctl
257Parameters: struct kvm_regs (in)
258Returns: 0 on success, -1 on error
259
260Writes the general purpose registers into the vcpu.
261
262See KVM_GET_REGS for the data structure.
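
As an example of the usual read-modify-write pattern, a sketch that only sets
the initial instruction pointer (the rflags value is just the architectural
minimum, not something this API requires):

#include <sys/ioctl.h>
#include <linux/kvm.h>

void set_entry_point(int vcpu_fd, __u64 entry)
{
        struct kvm_regs regs;

        ioctl(vcpu_fd, KVM_GET_REGS, &regs);    /* read-modify-write */
        regs.rip = entry;
        regs.rflags = 0x2;                      /* bit 1 of rflags is always set */
        ioctl(vcpu_fd, KVM_SET_REGS, &regs);
}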
263
2644.13 KVM_GET_SREGS
265
266Capability: basic
267Architectures: x86, ppc
268Type: vcpu ioctl
269Parameters: struct kvm_sregs (out)
270Returns: 0 on success, -1 on error
271
272Reads special registers from the vcpu.
273
274/* x86 */
275struct kvm_sregs {
276 struct kvm_segment cs, ds, es, fs, gs, ss;
277 struct kvm_segment tr, ldt;
278 struct kvm_dtable gdt, idt;
279 __u64 cr0, cr2, cr3, cr4, cr8;
280 __u64 efer;
281 __u64 apic_base;
282 __u64 interrupt_bitmap[(KVM_NR_INTERRUPTS + 63) / 64];
283};
284
285/* ppc -- see arch/powerpc/include/asm/kvm.h */
286
287interrupt_bitmap is a bitmap of pending external interrupts. At most
288one bit may be set. This interrupt has been acknowledged by the APIC
289but not yet injected into the cpu core.
290
2914.14 KVM_SET_SREGS
292
293Capability: basic
294Architectures: x86, ppc
295Type: vcpu ioctl
296Parameters: struct kvm_sregs (in)
297Returns: 0 on success, -1 on error
298
299Writes special registers into the vcpu. See KVM_GET_SREGS for the
300data structures.
301
3024.15 KVM_TRANSLATE
303
304Capability: basic
305Architectures: x86
306Type: vcpu ioctl
307Parameters: struct kvm_translation (in/out)
308Returns: 0 on success, -1 on error
309
310Translates a virtual address according to the vcpu's current address
311translation mode.
312
313struct kvm_translation {
314 /* in */
315 __u64 linear_address;
316
317 /* out */
318 __u64 physical_address;
319 __u8 valid;
320 __u8 writeable;
321 __u8 usermode;
322 __u8 pad[5];
323};
324
3254.16 KVM_INTERRUPT
326
327Capability: basic
328Architectures: x86, ppc
329Type: vcpu ioctl
330Parameters: struct kvm_interrupt (in)
331Returns: 0 on success, -1 on error
332
333Queues a hardware interrupt vector to be injected. This is only
334useful if in-kernel local APIC or equivalent is not used.
335
336/* for KVM_INTERRUPT */
337struct kvm_interrupt {
338 /* in */
339 __u32 irq;
340};
341
342X86:
343
344Note 'irq' is an interrupt vector, not an interrupt pin or line.
345
346PPC:
347
348Queues an external interrupt to be injected. This ioctl is overloaded
349with 3 different irq values:
350
351a) KVM_INTERRUPT_SET
352
353 This injects an edge type external interrupt into the guest once it's ready
354 to receive interrupts. When injected, the interrupt is done.
355
356b) KVM_INTERRUPT_UNSET
357
358 This unsets any pending interrupt.
359
360 Only available with KVM_CAP_PPC_UNSET_IRQ.
361
362c) KVM_INTERRUPT_SET_LEVEL
363
364 This injects a level type external interrupt into the guest context. The
365 interrupt stays pending until a specific ioctl with KVM_INTERRUPT_UNSET
366 is triggered.
367
368 Only available with KVM_CAP_PPC_IRQ_LEVEL.
369
370Note that any value for 'irq' other than the ones stated above is invalid
371and incurs unexpected behavior.
372
3734.17 KVM_DEBUG_GUEST
374
375Capability: basic
376Architectures: none
377Type: vcpu ioctl
378Parameters: none
379Returns: -1 on error
380
381Support for this has been removed. Use KVM_SET_GUEST_DEBUG instead.
382
3834.18 KVM_GET_MSRS
384
385Capability: basic
386Architectures: x86
387Type: vcpu ioctl
388Parameters: struct kvm_msrs (in/out)
389Returns: 0 on success, -1 on error
390
391Reads model-specific registers from the vcpu. Supported msr indices can
392be obtained using KVM_GET_MSR_INDEX_LIST.
393
394struct kvm_msrs {
395 __u32 nmsrs; /* number of msrs in entries */
396 __u32 pad;
397
398 struct kvm_msr_entry entries[0];
399};
400
401struct kvm_msr_entry {
402 __u32 index;
403 __u32 reserved;
404 __u64 data;
405};
406
407Application code should set the 'nmsrs' member (which indicates the
408size of the entries array) and the 'index' member of each array entry.
409kvm will fill in the 'data' member.
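
A sketch of reading a single MSR; the wrapper struct that appends one entry
after the kvm_msrs header is an idiom of this example, and error handling is
omitted:

#include <sys/ioctl.h>
#include <linux/kvm.h>

__u64 read_one_msr(int vcpu_fd, __u32 index)
{
        struct {
                struct kvm_msrs header;
                struct kvm_msr_entry entry;
        } msrs = {
                .header = { .nmsrs = 1 },
                .entry  = { .index = index },
        };

        ioctl(vcpu_fd, KVM_GET_MSRS, &msrs.header);     /* returns number of msrs read */
        return msrs.entry.data;
}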
410
4114.19 KVM_SET_MSRS
412
413Capability: basic
414Architectures: x86
415Type: vcpu ioctl
416Parameters: struct kvm_msrs (in)
417Returns: 0 on success, -1 on error
418
419Writes model-specific registers to the vcpu. See KVM_GET_MSRS for the
420data structures.
421
422Application code should set the 'nmsrs' member (which indicates the
423size of the entries array), and the 'index' and 'data' members of each
424array entry.
425
4264.20 KVM_SET_CPUID
427
428Capability: basic
429Architectures: x86
430Type: vcpu ioctl
431Parameters: struct kvm_cpuid (in)
432Returns: 0 on success, -1 on error
433
434Defines the vcpu responses to the cpuid instruction. Applications
435should use the KVM_SET_CPUID2 ioctl if available.
436
437
438struct kvm_cpuid_entry {
439 __u32 function;
440 __u32 eax;
441 __u32 ebx;
442 __u32 ecx;
443 __u32 edx;
444 __u32 padding;
445};
446
447/* for KVM_SET_CPUID */
448struct kvm_cpuid {
449 __u32 nent;
450 __u32 padding;
451 struct kvm_cpuid_entry entries[0];
452};
453
4544.21 KVM_SET_SIGNAL_MASK
455
456Capability: basic
457Architectures: x86
458Type: vcpu ioctl
459Parameters: struct kvm_signal_mask (in)
460Returns: 0 on success, -1 on error
461
462Defines which signals are blocked during execution of KVM_RUN. This
463signal mask temporarily overrides the thread's signal mask. Any
464unblocked signal received (except SIGKILL and SIGSTOP, which retain
465their traditional behaviour) will cause KVM_RUN to return with -EINTR.
466
467Note the signal will only be delivered if not blocked by the original
468signal mask.
469
470/* for KVM_SET_SIGNAL_MASK */
471struct kvm_signal_mask {
472 __u32 len;
473 __u8 sigset[0];
474};
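
A sketch of typical usage; the len value of 8 assumes the kernel sigset size
of a 64-bit host, which is an assumption of the example rather than something
the API mandates:

#include <signal.h>
#include <stdlib.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

/* Allow only SIGUSR1 to interrupt KVM_RUN; everything else stays blocked. */
void set_run_sigmask(int vcpu_fd)
{
        sigset_t allowed;
        struct kvm_signal_mask *mask;

        sigfillset(&allowed);
        sigdelset(&allowed, SIGUSR1);

        mask = malloc(sizeof(*mask) + 8);
        mask->len = 8;                          /* kernel sigset size, assumed 8 bytes */
        memcpy(mask->sigset, &allowed, 8);      /* only the kernel-sized prefix */

        ioctl(vcpu_fd, KVM_SET_SIGNAL_MASK, mask);
        free(mask);
}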
475
4764.22 KVM_GET_FPU
477
478Capability: basic
479Architectures: x86
480Type: vcpu ioctl
481Parameters: struct kvm_fpu (out)
482Returns: 0 on success, -1 on error
483
484Reads the floating point state from the vcpu.
485
486/* for KVM_GET_FPU and KVM_SET_FPU */
487struct kvm_fpu {
488 __u8 fpr[8][16];
489 __u16 fcw;
490 __u16 fsw;
491 __u8 ftwx; /* in fxsave format */
492 __u8 pad1;
493 __u16 last_opcode;
494 __u64 last_ip;
495 __u64 last_dp;
496 __u8 xmm[16][16];
497 __u32 mxcsr;
498 __u32 pad2;
499};
500
5014.23 KVM_SET_FPU
502
503Capability: basic
504Architectures: x86
505Type: vcpu ioctl
506Parameters: struct kvm_fpu (in)
507Returns: 0 on success, -1 on error
508
509Writes the floating point state to the vcpu.
510
511/* for KVM_GET_FPU and KVM_SET_FPU */
512struct kvm_fpu {
513 __u8 fpr[8][16];
514 __u16 fcw;
515 __u16 fsw;
516 __u8 ftwx; /* in fxsave format */
517 __u8 pad1;
518 __u16 last_opcode;
519 __u64 last_ip;
520 __u64 last_dp;
521 __u8 xmm[16][16];
522 __u32 mxcsr;
523 __u32 pad2;
524};
525
5264.24 KVM_CREATE_IRQCHIP
527
528Capability: KVM_CAP_IRQCHIP
529Architectures: x86, ia64
530Type: vm ioctl
531Parameters: none
532Returns: 0 on success, -1 on error
533
534Creates an interrupt controller model in the kernel. On x86, creates a virtual
535ioapic, a virtual PIC (two PICs, nested), and sets up future vcpus to have a
536local APIC. IRQ routing for GSIs 0-15 is set to both PIC and IOAPIC; GSI 16-23
537only go to the IOAPIC. On ia64, an IOSAPIC is created.
538
5394.25 KVM_IRQ_LINE
540
541Capability: KVM_CAP_IRQCHIP
542Architectures: x86, ia64
543Type: vm ioctl
544Parameters: struct kvm_irq_level
545Returns: 0 on success, -1 on error
546
547Sets the level of a GSI input to the interrupt controller model in the kernel.
548Requires that an interrupt controller model has been previously created with
549KVM_CREATE_IRQCHIP. Note that edge-triggered interrupts require the level
550to be set to 1 and then back to 0.
551
552struct kvm_irq_level {
553 union {
554 __u32 irq; /* GSI */
555 __s32 status; /* not used for KVM_IRQ_LEVEL */
556 };
557 __u32 level; /* 0 or 1 */
558};
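
Following the note above, delivering one edge-triggered interrupt means
raising and then lowering the line; a sketch, with error handling omitted:

#include <string.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

/* Deliver one edge-triggered interrupt on the given GSI. */
void pulse_gsi(int vm_fd, __u32 gsi)
{
        struct kvm_irq_level irq;

        memset(&irq, 0, sizeof(irq));
        irq.irq = gsi;

        irq.level = 1;
        ioctl(vm_fd, KVM_IRQ_LINE, &irq);

        irq.level = 0;
        ioctl(vm_fd, KVM_IRQ_LINE, &irq);
}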
559
5604.26 KVM_GET_IRQCHIP
561
562Capability: KVM_CAP_IRQCHIP
563Architectures: x86, ia64
564Type: vm ioctl
565Parameters: struct kvm_irqchip (in/out)
566Returns: 0 on success, -1 on error
567
568Reads the state of a kernel interrupt controller created with
569KVM_CREATE_IRQCHIP into a buffer provided by the caller.
570
571struct kvm_irqchip {
572 __u32 chip_id; /* 0 = PIC1, 1 = PIC2, 2 = IOAPIC */
573 __u32 pad;
574 union {
575 char dummy[512]; /* reserving space */
576 struct kvm_pic_state pic;
577 struct kvm_ioapic_state ioapic;
578 } chip;
579};
580
5814.27 KVM_SET_IRQCHIP
582
583Capability: KVM_CAP_IRQCHIP
584Architectures: x86, ia64
585Type: vm ioctl
586Parameters: struct kvm_irqchip (in)
587Returns: 0 on success, -1 on error
588
589Sets the state of a kernel interrupt controller created with
590KVM_CREATE_IRQCHIP from a buffer provided by the caller.
591
592struct kvm_irqchip {
593 __u32 chip_id; /* 0 = PIC1, 1 = PIC2, 2 = IOAPIC */
594 __u32 pad;
595 union {
596 char dummy[512]; /* reserving space */
597 struct kvm_pic_state pic;
598 struct kvm_ioapic_state ioapic;
599 } chip;
600};
601
6024.28 KVM_XEN_HVM_CONFIG
603
604Capability: KVM_CAP_XEN_HVM
605Architectures: x86
606Type: vm ioctl
607Parameters: struct kvm_xen_hvm_config (in)
608Returns: 0 on success, -1 on error
609
610Sets the MSR that the Xen HVM guest uses to initialize its hypercall
611page, and provides the starting address and size of the hypercall
612blobs in userspace. When the guest writes the MSR, kvm copies one
613page of a blob (32- or 64-bit, depending on the vcpu mode) to guest
614memory.
615
616struct kvm_xen_hvm_config {
617 __u32 flags;
618 __u32 msr;
619 __u64 blob_addr_32;
620 __u64 blob_addr_64;
621 __u8 blob_size_32;
622 __u8 blob_size_64;
623 __u8 pad2[30];
624};
625
6264.29 KVM_GET_CLOCK
627
628Capability: KVM_CAP_ADJUST_CLOCK
629Architectures: x86
630Type: vm ioctl
631Parameters: struct kvm_clock_data (out)
632Returns: 0 on success, -1 on error
633
634Gets the current timestamp of kvmclock as seen by the current guest. In
635conjunction with KVM_SET_CLOCK, it is used to ensure monotonicity on scenarios
636such as migration.
637
638struct kvm_clock_data {
639 __u64 clock; /* kvmclock current value */
640 __u32 flags;
641 __u32 pad[9];
642};
643
6444.30 KVM_SET_CLOCK
645
646Capability: KVM_CAP_ADJUST_CLOCK
647Architectures: x86
648Type: vm ioctl
649Parameters: struct kvm_clock_data (in)
650Returns: 0 on success, -1 on error
651
652Sets the current timestamp of kvmclock to the value specified in its parameter.
653In conjunction with KVM_GET_CLOCK, it is used to ensure monotonicity on scenarios
654such as migration.
655
656struct kvm_clock_data {
657 __u64 clock; /* kvmclock current value */
658 __u32 flags;
659 __u32 pad[9];
660};
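
A sketch of how the two clock ioctls pair up across a migration; flags are
left at zero and errors are ignored:

#include <string.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

/* Source host: read kvmclock before stopping the guest. */
__u64 save_kvmclock(int vm_fd)
{
        struct kvm_clock_data data;

        memset(&data, 0, sizeof(data));
        ioctl(vm_fd, KVM_GET_CLOCK, &data);
        return data.clock;
}

/* Destination host: restore the saved value before resuming the guest. */
void restore_kvmclock(int vm_fd, __u64 clock)
{
        struct kvm_clock_data data;

        memset(&data, 0, sizeof(data));
        data.clock = clock;
        ioctl(vm_fd, KVM_SET_CLOCK, &data);
}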
661
6624.31 KVM_GET_VCPU_EVENTS
663
664Capability: KVM_CAP_VCPU_EVENTS
665Extended by: KVM_CAP_INTR_SHADOW
666Architectures: x86
667Type: vcpu ioctl
668Parameters: struct kvm_vcpu_events (out)
669Returns: 0 on success, -1 on error
670
671Gets currently pending exceptions, interrupts, and NMIs as well as related
672states of the vcpu.
673
674struct kvm_vcpu_events {
675 struct {
676 __u8 injected;
677 __u8 nr;
678 __u8 has_error_code;
679 __u8 pad;
680 __u32 error_code;
681 } exception;
682 struct {
683 __u8 injected;
684 __u8 nr;
685 __u8 soft;
686 __u8 shadow;
687 } interrupt;
688 struct {
689 __u8 injected;
690 __u8 pending;
691 __u8 masked;
692 __u8 pad;
693 } nmi;
694 __u32 sipi_vector;
695 __u32 flags;
696};
697
698KVM_VCPUEVENT_VALID_SHADOW may be set in the flags field to signal that
699interrupt.shadow contains a valid state. Otherwise, this field is undefined.
700
7014.32 KVM_SET_VCPU_EVENTS
702
703Capability: KVM_CAP_VCPU_EVENTS
704Extended by: KVM_CAP_INTR_SHADOW
705Architectures: x86
706Type: vcpu ioctl
707Parameters: struct kvm_vcpu_events (in)
708Returns: 0 on success, -1 on error
709
710Set pending exceptions, interrupts, and NMIs as well as related states of the
711vcpu.
712
713See KVM_GET_VCPU_EVENTS for the data structure.
714
715Fields that may be modified asynchronously by running VCPUs can be excluded
716from the update. These fields are nmi.pending and sipi_vector. Keep the
717corresponding bits in the flags field cleared to suppress overwriting the
718current in-kernel state. The bits are:
719
720KVM_VCPUEVENT_VALID_NMI_PENDING - transfer nmi.pending to the kernel
721KVM_VCPUEVENT_VALID_SIPI_VECTOR - transfer sipi_vector
722
723If KVM_CAP_INTR_SHADOW is available, KVM_VCPUEVENT_VALID_SHADOW can be set in
724the flags field to signal that interrupt.shadow contains a valid state and
725shall be written into the VCPU.
726
7274.33 KVM_GET_DEBUGREGS
728
729Capability: KVM_CAP_DEBUGREGS
730Architectures: x86
731Type: vcpu ioctl
732Parameters: struct kvm_debugregs (out)
733Returns: 0 on success, -1 on error
734
735Reads debug registers from the vcpu.
736
737struct kvm_debugregs {
738 __u64 db[4];
739 __u64 dr6;
740 __u64 dr7;
741 __u64 flags;
742 __u64 reserved[9];
743};
744
7454.34 KVM_SET_DEBUGREGS
746
747Capability: KVM_CAP_DEBUGREGS
748Architectures: x86
749Type: vcpu ioctl
750Parameters: struct kvm_debugregs (in)
751Returns: 0 on success, -1 on error
752
753Writes debug registers into the vcpu.
754
755See KVM_GET_DEBUGREGS for the data structure. The flags field is not yet
756used and must be cleared on entry.
757
7584.35 KVM_SET_USER_MEMORY_REGION
759
760Capability: KVM_CAP_USER_MEMORY
761Architectures: all
762Type: vm ioctl
763Parameters: struct kvm_userspace_memory_region (in)
764Returns: 0 on success, -1 on error
765
766struct kvm_userspace_memory_region {
767 __u32 slot;
768 __u32 flags;
769 __u64 guest_phys_addr;
770 __u64 memory_size; /* bytes */
771 __u64 userspace_addr; /* start of the userspace allocated memory */
772};
773
774/* for kvm_memory_region::flags */
775#define KVM_MEM_LOG_DIRTY_PAGES 1UL
776
777This ioctl allows the user to create or modify a guest physical memory
778slot. When changing an existing slot, it may be moved in the guest
779physical memory space, or its flags may be modified. It may not be
780resized. Slots may not overlap in guest physical address space.
781
782Memory for the region is taken starting at the address denoted by the
783field userspace_addr, which must point at user addressable memory for
784the entire memory slot size. Any object may back this memory, including
785anonymous memory, ordinary files, and hugetlbfs.
786
787It is recommended that the lower 21 bits of guest_phys_addr and userspace_addr
788be identical. This allows large pages in the guest to be backed by large
789pages in the host.
790
791The flags field supports just one flag, KVM_MEM_LOG_DIRTY_PAGES, which
792instructs kvm to keep track of writes to memory within the slot. See
793the KVM_GET_DIRTY_LOG ioctl.
794
795When the KVM_CAP_SYNC_MMU capability is available, changes in the backing of the memory
796region are automatically reflected into the guest. For example, an mmap()
797that affects the region will be made visible immediately. Another example
798is madvise(MADV_DROP).
799
800It is recommended to use this API instead of the KVM_SET_MEMORY_REGION ioctl.
801The KVM_SET_MEMORY_REGION does not allow fine grained control over memory
802allocation and is deprecated.
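
A sketch of creating a single slot backed by anonymous memory; the slot
number, guest physical address and size are caller-supplied example values,
and error handling is omitted:

#include <string.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <linux/kvm.h>

/* Back guest physical range [gpa, gpa + size) with fresh anonymous memory. */
void *add_memory_slot(int vm_fd, __u32 slot, __u64 gpa, __u64 size)
{
        void *mem = mmap(NULL, size, PROT_READ | PROT_WRITE,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        struct kvm_userspace_memory_region region;

        memset(&region, 0, sizeof(region));
        region.slot = slot;
        region.guest_phys_addr = gpa;
        region.memory_size = size;
        region.userspace_addr = (unsigned long)mem;

        ioctl(vm_fd, KVM_SET_USER_MEMORY_REGION, &region);
        return mem;
}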
803
8044.36 KVM_SET_TSS_ADDR
805
806Capability: KVM_CAP_SET_TSS_ADDR
807Architectures: x86
808Type: vm ioctl
809Parameters: unsigned long tss_address (in)
810Returns: 0 on success, -1 on error
811
812This ioctl defines the physical address of a three-page region in the guest
813physical address space. The region must be within the first 4GB of the
814guest physical address space and must not conflict with any memory slot
815or any mmio address. The guest may malfunction if it accesses this memory
816region.
817
818This ioctl is required on Intel-based hosts. This is needed on Intel hardware
819because of a quirk in the virtualization implementation (see the internals
820documentation when it pops into existence).
821
8224.37 KVM_ENABLE_CAP
823
824Capability: KVM_CAP_ENABLE_CAP
825Architectures: ppc
826Type: vcpu ioctl
827Parameters: struct kvm_enable_cap (in)
828Returns: 0 on success; -1 on error
829
830Not all extensions are enabled by default. Using this ioctl the application
831can enable an extension, making it available to the guest.
832
833On systems that do not support this ioctl, it always fails. On systems that
834do support it, it only works for extensions that are supported for enablement.
835
836To check if a capability can be enabled, the KVM_CHECK_EXTENSION ioctl should
837be used.
838
839struct kvm_enable_cap {
840 /* in */
841 __u32 cap;
842
843The capability that is supposed to get enabled.
844
845 __u32 flags;
846
847A bitfield indicating future enhancements. Has to be 0 for now.
848
849 __u64 args[4];
850
851Arguments for enabling a feature. If a feature needs initial values to
852function properly, this is the place to put them.
853
854 __u8 pad[64];
855};
856
8574.38 KVM_GET_MP_STATE
858
859Capability: KVM_CAP_MP_STATE
860Architectures: x86, ia64
861Type: vcpu ioctl
862Parameters: struct kvm_mp_state (out)
863Returns: 0 on success; -1 on error
864
865struct kvm_mp_state {
866 __u32 mp_state;
867};
868
869Returns the vcpu's current "multiprocessing state" (though also valid on
870uniprocessor guests).
871
872Possible values are:
873
874 - KVM_MP_STATE_RUNNABLE: the vcpu is currently running
875 - KVM_MP_STATE_UNINITIALIZED: the vcpu is an application processor (AP)
876 which has not yet received an INIT signal
877 - KVM_MP_STATE_INIT_RECEIVED: the vcpu has received an INIT signal, and is
878 now ready for a SIPI
879 - KVM_MP_STATE_HALTED: the vcpu has executed a HLT instruction and
880 is waiting for an interrupt
881 - KVM_MP_STATE_SIPI_RECEIVED: the vcpu has just received a SIPI (vector
882 accessible via KVM_GET_VCPU_EVENTS)
883
884This ioctl is only useful after KVM_CREATE_IRQCHIP. Without an in-kernel
885irqchip, the multiprocessing state must be maintained by userspace.
886
8874.39 KVM_SET_MP_STATE
888
889Capability: KVM_CAP_MP_STATE
890Architectures: x86, ia64
891Type: vcpu ioctl
892Parameters: struct kvm_mp_state (in)
893Returns: 0 on success; -1 on error
894
895Sets the vcpu's current "multiprocessing state"; see KVM_GET_MP_STATE for
896arguments.
897
898This ioctl is only useful after KVM_CREATE_IRQCHIP. Without an in-kernel
899irqchip, the multiprocessing state must be maintained by userspace.
900
9014.40 KVM_SET_IDENTITY_MAP_ADDR
902
903Capability: KVM_CAP_SET_IDENTITY_MAP_ADDR
904Architectures: x86
905Type: vm ioctl
906Parameters: unsigned long identity (in)
907Returns: 0 on success, -1 on error
908
909This ioctl defines the physical address of a one-page region in the guest
910physical address space. The region must be within the first 4GB of the
911guest physical address space and must not conflict with any memory slot
912or any mmio address. The guest may malfunction if it accesses this memory
913region.
914
915This ioctl is required on Intel-based hosts. This is needed on Intel hardware
916because of a quirk in the virtualization implementation (see the internals
917documentation when it pops into existence).
918
9194.41 KVM_SET_BOOT_CPU_ID
920
921Capability: KVM_CAP_SET_BOOT_CPU_ID
922Architectures: x86, ia64
923Type: vm ioctl
924Parameters: unsigned long vcpu_id
925Returns: 0 on success, -1 on error
926
927Define which vcpu is the Bootstrap Processor (BSP). Values are the same
928as the vcpu id in KVM_CREATE_VCPU. If this ioctl is not called, the default
929is vcpu 0.
930
9314.42 KVM_GET_XSAVE
932
933Capability: KVM_CAP_XSAVE
934Architectures: x86
935Type: vcpu ioctl
936Parameters: struct kvm_xsave (out)
937Returns: 0 on success, -1 on error
938
939struct kvm_xsave {
940 __u32 region[1024];
941};
942
943This ioctl copies the current vcpu's xsave struct to userspace.
944
9454.43 KVM_SET_XSAVE
946
947Capability: KVM_CAP_XSAVE
948Architectures: x86
949Type: vcpu ioctl
950Parameters: struct kvm_xsave (in)
951Returns: 0 on success, -1 on error
952
953struct kvm_xsave {
954 __u32 region[1024];
955};
956
957This ioctl copies the userspace-provided xsave struct into the kernel.
958
9594.44 KVM_GET_XCRS
960
961Capability: KVM_CAP_XCRS
962Architectures: x86
963Type: vcpu ioctl
964Parameters: struct kvm_xcrs (out)
965Returns: 0 on success, -1 on error
966
967struct kvm_xcr {
968 __u32 xcr;
969 __u32 reserved;
970 __u64 value;
971};
972
973struct kvm_xcrs {
974 __u32 nr_xcrs;
975 __u32 flags;
976 struct kvm_xcr xcrs[KVM_MAX_XCRS];
977 __u64 padding[16];
978};
979
980This ioctl copies the current vcpu's xcrs to userspace.
981
9824.45 KVM_SET_XCRS
983
984Capability: KVM_CAP_XCRS
985Architectures: x86
986Type: vcpu ioctl
987Parameters: struct kvm_xcrs (in)
988Returns: 0 on success, -1 on error
989
990struct kvm_xcr {
991 __u32 xcr;
992 __u32 reserved;
993 __u64 value;
994};
995
996struct kvm_xcrs {
997 __u32 nr_xcrs;
998 __u32 flags;
999 struct kvm_xcr xcrs[KVM_MAX_XCRS];
1000 __u64 padding[16];
1001};
1002
1003This ioctl sets the vcpu's xcrs to the values userspace specified.
1004
10054.46 KVM_GET_SUPPORTED_CPUID
1006
1007Capability: KVM_CAP_EXT_CPUID
1008Architectures: x86
1009Type: system ioctl
1010Parameters: struct kvm_cpuid2 (in/out)
1011Returns: 0 on success, -1 on error
1012
1013struct kvm_cpuid2 {
1014 __u32 nent;
1015 __u32 padding;
1016 struct kvm_cpuid_entry2 entries[0];
1017};
1018
1019#define KVM_CPUID_FLAG_SIGNIFCANT_INDEX 1
1020#define KVM_CPUID_FLAG_STATEFUL_FUNC 2
1021#define KVM_CPUID_FLAG_STATE_READ_NEXT 4
1022
1023struct kvm_cpuid_entry2 {
1024 __u32 function;
1025 __u32 index;
1026 __u32 flags;
1027 __u32 eax;
1028 __u32 ebx;
1029 __u32 ecx;
1030 __u32 edx;
1031 __u32 padding[3];
1032};
1033
1034This ioctl returns x86 cpuid features which are supported by both the hardware
1035and kvm. Userspace can use the information returned by this ioctl to
1036construct cpuid information (for KVM_SET_CPUID2) that is consistent with
1037hardware, kernel, and userspace capabilities, and with user requirements (for
1038example, the user may wish to constrain cpuid to emulate older hardware,
1039or for feature consistency across a cluster).
1040
1041Userspace invokes KVM_GET_SUPPORTED_CPUID by passing a kvm_cpuid2 structure
1042with the 'nent' field indicating the number of entries in the variable-size
1043array 'entries'. If the number of entries is too low to describe the cpu
1044capabilities, an error (E2BIG) is returned. If the number is too high,
1045the 'nent' field is adjusted and an error (ENOMEM) is returned. If the
1046number is just right, the 'nent' field is adjusted to the number of valid
1047entries in the 'entries' array, which is then filled.
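
A sketch of the common pattern, using an arbitrary guess of 100 entries and
passing the result straight to KVM_SET_CPUID2; error handling is omitted:

#include <stdlib.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

/* Fetch the supported cpuid entries and apply them unmodified to a vcpu. */
void copy_supported_cpuid(int kvm_fd, int vcpu_fd)
{
        int nent = 100;         /* arbitrary upper bound for this sketch */
        struct kvm_cpuid2 *cpuid;

        cpuid = calloc(1, sizeof(*cpuid) + nent * sizeof(struct kvm_cpuid_entry2));
        cpuid->nent = nent;

        ioctl(kvm_fd, KVM_GET_SUPPORTED_CPUID, cpuid);  /* adjusts nent */
        ioctl(vcpu_fd, KVM_SET_CPUID2, cpuid);
        free(cpuid);
}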
1048
1049The entries returned are the host cpuid as returned by the cpuid instruction,
1050with unknown or unsupported features masked out. Some features (for example,
1051x2apic), may not be present in the host cpu, but are exposed by kvm if it can
1052emulate them efficiently. The fields in each entry are defined as follows:
1053
1054 function: the eax value used to obtain the entry
1055 index: the ecx value used to obtain the entry (for entries that are
1056 affected by ecx)
1057 flags: an OR of zero or more of the following:
1058 KVM_CPUID_FLAG_SIGNIFCANT_INDEX:
1059 if the index field is valid
1060 KVM_CPUID_FLAG_STATEFUL_FUNC:
1061 if cpuid for this function returns different values for successive
1062 invocations; there will be several entries with the same function,
1063 all with this flag set
1064 KVM_CPUID_FLAG_STATE_READ_NEXT:
1065 for KVM_CPUID_FLAG_STATEFUL_FUNC entries, set if this entry is
1066 the first entry to be read by a cpu
1067 eax, ebx, ecx, edx: the values returned by the cpuid instruction for
1068 this function/index combination
1069
10704.47 KVM_PPC_GET_PVINFO
1071
1072Capability: KVM_CAP_PPC_GET_PVINFO
1073Architectures: ppc
1074Type: vm ioctl
1075Parameters: struct kvm_ppc_pvinfo (out)
1076Returns: 0 on success, !0 on error
1077
1078struct kvm_ppc_pvinfo {
1079 __u32 flags;
1080 __u32 hcall[4];
1081 __u8 pad[108];
1082};
1083
1084This ioctl fetches PV specific information that needs to be passed to the guest
1085using the device tree or other means from vm context.
1086
1087For now the only implemented piece of information distributed here is an array
1088of 4 instructions that make up a hypercall.
1089
1090If any additional field gets added to this structure later on, a bit for that
1091additional piece of information will be set in the flags bitmap.
1092
10934.48 KVM_ASSIGN_PCI_DEVICE
1094
1095Capability: KVM_CAP_DEVICE_ASSIGNMENT
1096Architectures: x86 ia64
1097Type: vm ioctl
1098Parameters: struct kvm_assigned_pci_dev (in)
1099Returns: 0 on success, -1 on error
1100
1101Assigns a host PCI device to the VM.
1102
1103struct kvm_assigned_pci_dev {
1104 __u32 assigned_dev_id;
1105 __u32 busnr;
1106 __u32 devfn;
1107 __u32 flags;
1108 __u32 segnr;
1109 union {
1110 __u32 reserved[11];
1111 };
1112};
1113
1114The PCI device is specified by the triple segnr, busnr, and devfn.
1115Identification in succeeding service requests is done via assigned_dev_id. The
1116following flags are specified:
1117
1118/* Depends on KVM_CAP_IOMMU */
1119#define KVM_DEV_ASSIGN_ENABLE_IOMMU (1 << 0)
1120
11214.49 KVM_DEASSIGN_PCI_DEVICE
1122
1123Capability: KVM_CAP_DEVICE_DEASSIGNMENT
1124Architectures: x86 ia64
1125Type: vm ioctl
1126Parameters: struct kvm_assigned_pci_dev (in)
1127Returns: 0 on success, -1 on error
1128
1129Ends PCI device assignment, releasing all associated resources.
1130
1131See KVM_CAP_DEVICE_ASSIGNMENT for the data structure. Only assigned_dev_id is
1132used in kvm_assigned_pci_dev to identify the device.
1133
11344.50 KVM_ASSIGN_DEV_IRQ
1135
1136Capability: KVM_CAP_ASSIGN_DEV_IRQ
1137Architectures: x86 ia64
1138Type: vm ioctl
1139Parameters: struct kvm_assigned_irq (in)
1140Returns: 0 on success, -1 on error
1141
1142Assigns an IRQ to a passed-through device.
1143
1144struct kvm_assigned_irq {
1145 __u32 assigned_dev_id;
1146 __u32 host_irq;
1147 __u32 guest_irq;
1148 __u32 flags;
1149 union {
1150 struct {
1151 __u32 addr_lo;
1152 __u32 addr_hi;
1153 __u32 data;
1154 } guest_msi;
1155 __u32 reserved[12];
1156 };
1157};
1158
1159The following flags are defined:
1160
1161#define KVM_DEV_IRQ_HOST_INTX (1 << 0)
1162#define KVM_DEV_IRQ_HOST_MSI (1 << 1)
1163#define KVM_DEV_IRQ_HOST_MSIX (1 << 2)
1164
1165#define KVM_DEV_IRQ_GUEST_INTX (1 << 8)
1166#define KVM_DEV_IRQ_GUEST_MSI (1 << 9)
1167#define KVM_DEV_IRQ_GUEST_MSIX (1 << 10)
1168
1169It is not valid to specify multiple types per host or guest IRQ. However, the
1170IRQ type of host and guest can differ or can even be null.
1171
11724.51 KVM_DEASSIGN_DEV_IRQ
1173
1174Capability: KVM_CAP_ASSIGN_DEV_IRQ
1175Architectures: x86 ia64
1176Type: vm ioctl
1177Parameters: struct kvm_assigned_irq (in)
1178Returns: 0 on success, -1 on error
1179
1180Ends an IRQ assignment to a passed-through device.
1181
1182See KVM_ASSIGN_DEV_IRQ for the data structure. The target device is specified
1183by assigned_dev_id, flags must correspond to the IRQ type specified on
1184KVM_ASSIGN_DEV_IRQ. Partial deassignment of host or guest IRQ is allowed.
1185
11864.52 KVM_SET_GSI_ROUTING
1187
1188Capability: KVM_CAP_IRQ_ROUTING
1189Architectures: x86 ia64
1190Type: vm ioctl
1191Parameters: struct kvm_irq_routing (in)
1192Returns: 0 on success, -1 on error
1193
1194Sets the GSI routing table entries, overwriting any previously set entries.
1195
1196struct kvm_irq_routing {
1197 __u32 nr;
1198 __u32 flags;
1199 struct kvm_irq_routing_entry entries[0];
1200};
1201
1202No flags are specified so far; the corresponding field must be set to zero.
1203
1204struct kvm_irq_routing_entry {
1205 __u32 gsi;
1206 __u32 type;
1207 __u32 flags;
1208 __u32 pad;
1209 union {
1210 struct kvm_irq_routing_irqchip irqchip;
1211 struct kvm_irq_routing_msi msi;
1212 __u32 pad[8];
1213 } u;
1214};
1215
1216/* gsi routing entry types */
1217#define KVM_IRQ_ROUTING_IRQCHIP 1
1218#define KVM_IRQ_ROUTING_MSI 2
1219
1220No flags are specified so far; the corresponding field must be set to zero.
1221
1222struct kvm_irq_routing_irqchip {
1223 __u32 irqchip;
1224 __u32 pin;
1225};
1226
1227struct kvm_irq_routing_msi {
1228 __u32 address_lo;
1229 __u32 address_hi;
1230 __u32 data;
1231 __u32 pad;
1232};
1233
12344.53 KVM_ASSIGN_SET_MSIX_NR
1235
1236Capability: KVM_CAP_DEVICE_MSIX
1237Architectures: x86 ia64
1238Type: vm ioctl
1239Parameters: struct kvm_assigned_msix_nr (in)
1240Returns: 0 on success, -1 on error
1241
1242Set the number of MSI-X interrupts for an assigned device. This service can
1243only be called once in the lifetime of an assigned device.
1244
1245struct kvm_assigned_msix_nr {
1246 __u32 assigned_dev_id;
1247 __u16 entry_nr;
1248 __u16 padding;
1249};
1250
1251#define KVM_MAX_MSIX_PER_DEV 256
1252
12534.54 KVM_ASSIGN_SET_MSIX_ENTRY
1254
1255Capability: KVM_CAP_DEVICE_MSIX
1256Architectures: x86 ia64
1257Type: vm ioctl
1258Parameters: struct kvm_assigned_msix_entry (in)
1259Returns: 0 on success, -1 on error
1260
1261Specifies the routing of an MSI-X assigned device interrupt to a GSI. Setting
1262the GSI vector to zero means disabling the interrupt.
1263
1264struct kvm_assigned_msix_entry {
1265 __u32 assigned_dev_id;
1266 __u32 gsi;
1267 __u16 entry; /* The index of entry in the MSI-X table */
1268 __u16 padding[3];
1269};
1270
12714.55 KVM_SET_TSC_KHZ
1272
1273Capability: KVM_CAP_TSC_CONTROL
1274Architectures: x86
1275Type: vcpu ioctl
1276Parameters: virtual tsc_khz
1277Returns: 0 on success, -1 on error
1278
1279Specifies the tsc frequency for the virtual machine. The unit of the
1280frequency is KHz.
1281
12824.56 KVM_GET_TSC_KHZ
1283
1284Capability: KVM_CAP_GET_TSC_KHZ
1285Architectures: x86
1286Type: vcpu ioctl
1287Parameters: none
1288Returns: virtual tsc-khz on success, negative value on error
1289
1290Returns the tsc frequency of the guest. The unit of the return value is
1291KHz. If the host has an unstable tsc, this ioctl returns -EIO as an
1292error.
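
A sketch of carrying the TSC frequency across a migration, assuming both
KVM_CAP_GET_TSC_KHZ and KVM_CAP_TSC_CONTROL are available (capability checks
omitted):

#include <sys/ioctl.h>
#include <linux/kvm.h>

/* Source side: remember the guest's virtual TSC frequency in KHz. */
int save_tsc_khz(int vcpu_fd)
{
        return ioctl(vcpu_fd, KVM_GET_TSC_KHZ, 0);      /* negative on error */
}

/* Destination side: reprogram the same frequency before resuming. */
void restore_tsc_khz(int vcpu_fd, int tsc_khz)
{
        if (tsc_khz > 0)
                ioctl(vcpu_fd, KVM_SET_TSC_KHZ, tsc_khz);
}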
1293
12945. The kvm_run structure
1295
1296Application code obtains a pointer to the kvm_run structure by
1297mmap()ing a vcpu fd. From that point, application code can control
1298execution by changing fields in kvm_run prior to calling the KVM_RUN
1299ioctl, and obtain information about the reason KVM_RUN returned by
1300looking up structure members.
1301
1302struct kvm_run {
1303 /* in */
1304 __u8 request_interrupt_window;
1305
1306Request that KVM_RUN return when it becomes possible to inject external
1307interrupts into the guest. Useful in conjunction with KVM_INTERRUPT.
1308
1309 __u8 padding1[7];
1310
1311 /* out */
1312 __u32 exit_reason;
1313
1314When KVM_RUN has returned successfully (return value 0), this informs
1315application code why KVM_RUN has returned. Allowable values for this
1316field are detailed below.
1317
1318 __u8 ready_for_interrupt_injection;
1319
1320If request_interrupt_window has been specified, this field indicates
1321an interrupt can be injected now with KVM_INTERRUPT.
1322
1323 __u8 if_flag;
1324
1325The value of the current interrupt flag. Only valid if in-kernel
1326local APIC is not used.
1327
1328 __u8 padding2[2];
1329
1330 /* in (pre_kvm_run), out (post_kvm_run) */
1331 __u64 cr8;
1332
1333The value of the cr8 register. Only valid if in-kernel local APIC is
1334not used. Both input and output.
1335
1336 __u64 apic_base;
1337
1338The value of the APIC BASE msr. Only valid if in-kernel local
1339APIC is not used. Both input and output.
1340
1341 union {
1342 /* KVM_EXIT_UNKNOWN */
1343 struct {
1344 __u64 hardware_exit_reason;
1345 } hw;
1346
1347If exit_reason is KVM_EXIT_UNKNOWN, the vcpu has exited due to unknown
1348reasons. Further architecture-specific information is available in
1349hardware_exit_reason.
1350
1351 /* KVM_EXIT_FAIL_ENTRY */
1352 struct {
1353 __u64 hardware_entry_failure_reason;
1354 } fail_entry;
1355
1356If exit_reason is KVM_EXIT_FAIL_ENTRY, the vcpu could not be run due
1357to unknown reasons. Further architecture-specific information is
1358available in hardware_entry_failure_reason.
1359
1360 /* KVM_EXIT_EXCEPTION */
1361 struct {
1362 __u32 exception;
1363 __u32 error_code;
1364 } ex;
1365
1366Unused.
1367
1368 /* KVM_EXIT_IO */
1369 struct {
1370#define KVM_EXIT_IO_IN 0
1371#define KVM_EXIT_IO_OUT 1
1372 __u8 direction;
1373 __u8 size; /* bytes */
1374 __u16 port;
1375 __u32 count;
1376 __u64 data_offset; /* relative to kvm_run start */
1377 } io;
1378
1379If exit_reason is KVM_EXIT_IO, then the vcpu has
1380executed a port I/O instruction which could not be satisfied by kvm.
1381data_offset describes where the data is located (KVM_EXIT_IO_OUT) or
1382where kvm expects application code to place the data for the next
1383KVM_RUN invocation (KVM_EXIT_IO_IN). Data format is a packed array.
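
A sketch of handling this exit from the run loop; only single-byte,
single-count OUT accesses are decoded, and a real device model would dispatch
on the port instead of printing:

#include <stdio.h>
#include <linux/kvm.h>

/* Called when KVM_RUN returns with exit_reason == KVM_EXIT_IO. */
void handle_io_exit(struct kvm_run *run)
{
        __u8 *data = (__u8 *)run + run->io.data_offset;

        if (run->io.direction == KVM_EXIT_IO_OUT && run->io.size == 1)
                printf("out port 0x%x: 0x%02x\n", run->io.port, data[0]);

        /* for KVM_EXIT_IO_IN, fill 'data' here before the next KVM_RUN */
}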
1384
1385 struct {
1386 struct kvm_debug_exit_arch arch;
1387 } debug;
1388
1389Unused.
1390
1391 /* KVM_EXIT_MMIO */
1392 struct {
1393 __u64 phys_addr;
1394 __u8 data[8];
1395 __u32 len;
1396 __u8 is_write;
1397 } mmio;
1398
1399If exit_reason is KVM_EXIT_MMIO, then the vcpu has
1400executed a memory-mapped I/O instruction which could not be satisfied
1401by kvm. The 'data' member contains the written data if 'is_write' is
1402true, and should be filled by application code otherwise.
1403
1404NOTE: For KVM_EXIT_IO, KVM_EXIT_MMIO and KVM_EXIT_OSI, the corresponding
1405operations are complete (and guest state is consistent) only after userspace
1406has re-entered the kernel with KVM_RUN. The kernel side will first finish
1407incomplete operations and then check for pending signals. Userspace
1408can re-enter the guest with an unmasked signal pending to complete
1409pending operations.
1410
1411 /* KVM_EXIT_HYPERCALL */
1412 struct {
1413 __u64 nr;
1414 __u64 args[6];
1415 __u64 ret;
1416 __u32 longmode;
1417 __u32 pad;
1418 } hypercall;
1419
1420Unused. This was once used for 'hypercall to userspace'. To implement
1421such functionality, use KVM_EXIT_IO (x86) or KVM_EXIT_MMIO (all except s390).
1422Note KVM_EXIT_IO is significantly faster than KVM_EXIT_MMIO.
1423
1424 /* KVM_EXIT_TPR_ACCESS */
1425 struct {
1426 __u64 rip;
1427 __u32 is_write;
1428 __u32 pad;
1429 } tpr_access;
1430
1431To be documented (KVM_TPR_ACCESS_REPORTING).
1432
1433 /* KVM_EXIT_S390_SIEIC */
1434 struct {
1435 __u8 icptcode;
1436 __u64 mask; /* psw upper half */
1437 __u64 addr; /* psw lower half */
1438 __u16 ipa;
1439 __u32 ipb;
1440 } s390_sieic;
1441
1442s390 specific.
1443
1444 /* KVM_EXIT_S390_RESET */
1445#define KVM_S390_RESET_POR 1
1446#define KVM_S390_RESET_CLEAR 2
1447#define KVM_S390_RESET_SUBSYSTEM 4
1448#define KVM_S390_RESET_CPU_INIT 8
1449#define KVM_S390_RESET_IPL 16
1450 __u64 s390_reset_flags;
1451
1452s390 specific.
1453
1454 /* KVM_EXIT_DCR */
1455 struct {
1456 __u32 dcrn;
1457 __u32 data;
1458 __u8 is_write;
1459 } dcr;
1460
1461powerpc specific.
1462
1463 /* KVM_EXIT_OSI */
1464 struct {
1465 __u64 gprs[32];
1466 } osi;
1467
1468MOL uses a special hypercall interface it calls 'OSI'. To enable it, we catch
1469hypercalls and exit with this exit struct that contains all the guest gprs.
1470
1471If exit_reason is KVM_EXIT_OSI, then the vcpu has triggered such a hypercall.
1472Userspace can now handle the hypercall and when it's done modify the gprs as
1473necessary. Upon guest entry all guest GPRs will then be replaced by the values
1474in this struct.
1475
1476 /* Fix the size of the union. */
1477 char padding[256];
1478 };
1479};
diff --git a/Documentation/virtual/kvm/cpuid.txt b/Documentation/virtual/kvm/cpuid.txt
new file mode 100644
index 000000000000..882068538c9c
--- /dev/null
+++ b/Documentation/virtual/kvm/cpuid.txt
@@ -0,0 +1,45 @@
1KVM CPUID bits
2Glauber Costa <glommer@redhat.com>, Red Hat Inc, 2010
3=====================================================
4
5A guest running on a kvm host can check some of its features using
6cpuid. This is not always guaranteed to work, since userspace can
7mask out some, or even all, KVM-related cpuid features before launching
8a guest.
9
10KVM cpuid functions are:
11
12function: KVM_CPUID_SIGNATURE (0x40000000)
13returns : eax = 0,
14 ebx = 0x4b4d564b,
15 ecx = 0x564b4d56,
16 edx = 0x4d.
17Note that this value in ebx, ecx and edx corresponds to the string "KVMKVMKVM".
18This function queries the presence of KVM cpuid leaves.
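
From inside a guest, the presence of this leaf can be probed roughly as
sketched below (x86 inline assembly; a real guest would normally first check
the hypervisor-present bit in cpuid leaf 1, which this sketch skips):

#include <string.h>

/* Returns 1 when cpuid leaf 0x40000000 carries the KVM signature. */
static int running_on_kvm(void)
{
        unsigned int eax, ebx, ecx, edx;
        char signature[13];

        asm volatile("cpuid"
                     : "=a"(eax), "=b"(ebx), "=c"(ecx), "=d"(edx)
                     : "a"(0x40000000));
        (void)eax;      /* eax is the maximum KVM leaf; unused here */

        memcpy(signature + 0, &ebx, 4);
        memcpy(signature + 4, &ecx, 4);
        memcpy(signature + 8, &edx, 4);
        signature[12] = '\0';

        return strcmp(signature, "KVMKVMKVM") == 0;
}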
19
20
21function: KVM_CPUID_FEATURES (0x40000001)
22returns : ebx, ecx, edx = 0
23 eax = an OR'ed group of (1 << flag), where each flag is:
24
25
26flag || value || meaning
27=============================================================================
28KVM_FEATURE_CLOCKSOURCE || 0 || kvmclock available at msrs
29 || || 0x11 and 0x12.
30------------------------------------------------------------------------------
31KVM_FEATURE_NOP_IO_DELAY || 1 || not necessary to perform delays
32 || || on PIO operations.
33------------------------------------------------------------------------------
34KVM_FEATURE_MMU_OP || 2 || deprecated.
35------------------------------------------------------------------------------
36KVM_FEATURE_CLOCKSOURCE2 || 3 || kvmclock available at msrs
37 || || 0x4b564d00 and 0x4b564d01
38------------------------------------------------------------------------------
39KVM_FEATURE_ASYNC_PF || 4 || async pf can be enabled by
40 || || writing to msr 0x4b564d02
41------------------------------------------------------------------------------
42KVM_FEATURE_CLOCKSOURCE_STABLE_BIT || 24 || host will warn if no guest-side
43 || || per-cpu warps are expected in
44 || || kvmclock.
45------------------------------------------------------------------------------
diff --git a/Documentation/virtual/kvm/locking.txt b/Documentation/virtual/kvm/locking.txt
new file mode 100644
index 000000000000..3b4cd3bf5631
--- /dev/null
+++ b/Documentation/virtual/kvm/locking.txt
@@ -0,0 +1,25 @@
1KVM Lock Overview
2=================
3
41. Acquisition Orders
5---------------------
6
7(to be written)
8
92. Reference
10------------
11
12Name: kvm_lock
13Type: raw_spinlock
14Arch: any
15Protects: - vm_list
16 - hardware virtualization enable/disable
17Comment: 'raw' because hardware enabling/disabling must be atomic w.r.t.
18 migration.
19
20Name: kvm_arch::tsc_write_lock
21Type: raw_spinlock
22Arch: x86
23Protects: - kvm_arch::{last_tsc_write,last_tsc_nsec,last_tsc_offset}
24 - tsc offset in vmcb
25Comment: 'raw' because updating the tsc offsets must not be preempted.
diff --git a/Documentation/virtual/kvm/mmu.txt b/Documentation/virtual/kvm/mmu.txt
new file mode 100644
index 000000000000..f46aa58389ca
--- /dev/null
+++ b/Documentation/virtual/kvm/mmu.txt
@@ -0,0 +1,348 @@
1The x86 kvm shadow mmu
2======================
3
4The mmu (in arch/x86/kvm, files mmu.[ch] and paging_tmpl.h) is responsible
5for presenting a standard x86 mmu to the guest, while translating guest
6physical addresses to host physical addresses.
7
8The mmu code attempts to satisfy the following requirements:
9
10- correctness: the guest should not be able to determine that it is running
11 on an emulated mmu except for timing (we attempt to comply
12 with the specification, not emulate the characteristics of
13 a particular implementation such as tlb size)
14- security: the guest must not be able to touch host memory not assigned
15 to it
16- performance: minimize the performance penalty imposed by the mmu
17- scaling: need to scale to large memory and large vcpu guests
18- hardware: support the full range of x86 virtualization hardware
19- integration: Linux memory management code must be in control of guest memory
20 so that swapping, page migration, page merging, transparent
21 hugepages, and similar features work without change
22- dirty tracking: report writes to guest memory to enable live migration
23 and framebuffer-based displays
24- footprint: keep the amount of pinned kernel memory low (most memory
25 should be shrinkable)
26- reliability: avoid multipage or GFP_ATOMIC allocations
27
28Acronyms
29========
30
31pfn host page frame number
32hpa host physical address
33hva host virtual address
34gfn guest frame number
35gpa guest physical address
36gva guest virtual address
37ngpa nested guest physical address
38ngva nested guest virtual address
39pte page table entry (used also to refer generically to paging structure
40 entries)
41gpte guest pte (referring to gfns)
42spte shadow pte (referring to pfns)
43tdp two dimensional paging (vendor neutral term for NPT and EPT)
44
45Virtual and real hardware supported
46===================================
47
48The mmu supports first-generation mmu hardware, which allows an atomic switch
49of the current paging mode and cr3 during guest entry, as well as
50two-dimensional paging (AMD's NPT and Intel's EPT). The emulated hardware
51it exposes is the traditional 2/3/4 level x86 mmu, with support for global
52pages, pae, pse, pse36, cr0.wp, and 1GB pages. Work is in progress to support
53exposing NPT capable hardware on NPT capable hosts.
54
55Translation
56===========
57
58The primary job of the mmu is to program the processor's mmu to translate
59addresses for the guest. Different translations are required at different
60times:
61
62- when guest paging is disabled, we translate guest physical addresses to
63 host physical addresses (gpa->hpa)
64- when guest paging is enabled, we translate guest virtual addresses, to
65 guest physical addresses, to host physical addresses (gva->gpa->hpa)
66- when the guest launches a guest of its own, we translate nested guest
67 virtual addresses, to nested guest physical addresses, to guest physical
68 addresses, to host physical addresses (ngva->ngpa->gpa->hpa)
69
70The primary challenge is to encode between 1 and 3 translations into hardware
71that support only 1 (traditional) and 2 (tdp) translations. When the
72number of required translations matches the hardware, the mmu operates in
73direct mode; otherwise it operates in shadow mode (see below).
74
75Memory
76======
77
78Guest memory (gpa) is part of the user address space of the process that is
79using kvm. Userspace defines the translation between guest addresses and user
80addresses (gpa->hva); note that two gpas may alias to the same hva, but not
81vice versa.
82
83These hvas may be backed using any method available to the host: anonymous
84memory, file backed memory, and device memory. Memory might be paged by the
85host at any time.
86
87Events
88======
89
90The mmu is driven by events, some from the guest, some from the host.
91
92Guest generated events:
93- writes to control registers (especially cr3)
94- invlpg/invlpga instruction execution
95- access to missing or protected translations
96
97Host generated events:
98- changes in the gpa->hpa translation (either through gpa->hva changes or
99 through hva->hpa changes)
100- memory pressure (the shrinker)
101
102Shadow pages
103============
104
105The principal data structure is the shadow page, 'struct kvm_mmu_page'. A
106shadow page contains 512 sptes, which can be either leaf or nonleaf sptes. A
107shadow page may contain a mix of leaf and nonleaf sptes.
108
109A nonleaf spte allows the hardware mmu to reach the leaf pages and
110is not related to a translation directly. It points to other shadow pages.
111
112A leaf spte corresponds to either one or two translations encoded into
113one paging structure entry. These are always the lowest level of the
114translation stack, with optional higher level translations left to NPT/EPT.
115Leaf ptes point at guest pages.
116
117The following table shows translations encoded by leaf ptes, with higher-level
118translations in parentheses:
119
120 Non-nested guests:
121 nonpaging: gpa->hpa
122 paging: gva->gpa->hpa
123 paging, tdp: (gva->)gpa->hpa
124 Nested guests:
125 non-tdp: ngva->gpa->hpa (*)
126 tdp: (ngva->)ngpa->gpa->hpa
127
128(*) the guest hypervisor will encode the ngva->gpa translation into its page
129 tables if npt is not present
130
131Shadow pages contain the following information:
132 role.level:
133 The level in the shadow paging hierarchy that this shadow page belongs to.
134 1=4k sptes, 2=2M sptes, 3=1G sptes, etc.
135 role.direct:
136 If set, leaf sptes reachable from this page are for a linear range.
137 Examples include real mode translation, large guest pages backed by small
138 host pages, and gpa->hpa translations when NPT or EPT is active.
139 The linear range starts at (gfn << PAGE_SHIFT) and its size is determined
140 by role.level (2MB for first level, 1GB for second level, 0.5TB for third
141 level, 256TB for fourth level)
142 If clear, this page corresponds to a guest page table denoted by the gfn
143 field.
144 role.quadrant:
145 When role.cr4_pae=0, the guest uses 32-bit gptes while the host uses 64-bit
146 sptes. That means a guest page table contains more ptes than the host,
147 so multiple shadow pages are needed to shadow one guest page.
148 For first-level shadow pages, role.quadrant can be 0 or 1 and denotes the
149 first or second 512-gpte block in the guest page table. For second-level
150 page tables, each 32-bit gpte is converted to two 64-bit sptes
151 (since each first-level guest page is shadowed by two first-level
152 shadow pages) so role.quadrant takes values in the range 0..3. Each
153 quadrant maps 1GB virtual address space.
154 role.access:
155 Inherited guest access permissions in the form uwx. Note execute
156 permission is positive, not negative.
157 role.invalid:
158 The page is invalid and should not be used. It is a root page that is
159 currently pinned (by a cpu hardware register pointing to it); once it is
160 unpinned it will be destroyed.
161 role.cr4_pae:
162 Contains the value of cr4.pae for which the page is valid (e.g. whether
163 32-bit or 64-bit gptes are in use).
164 role.nxe:
165 Contains the value of efer.nxe for which the page is valid.
166 role.cr0_wp:
167 Contains the value of cr0.wp for which the page is valid.
168 gfn:
169 Either the guest page table containing the translations shadowed by this
170 page, or the base page frame for linear translations. See role.direct.
171 spt:
172 A pageful of 64-bit sptes containing the translations for this page.
173 Accessed by both kvm and hardware.
174 The page pointed to by spt will have its page->private pointing back
175 at the shadow page structure.
176 sptes in spt point either at guest pages, or at lower-level shadow pages.
177 Specifically, if sp1 and sp2 are shadow pages, then sp1->spt[n] may point
178 at __pa(sp2->spt). sp2 will point back at sp1 through parent_pte.
179 The spt array forms a DAG structure with the shadow page as a node, and
180 guest pages as leaves.
181 gfns:
182 An array of 512 guest frame numbers, one for each present pte. Used to
183 perform a reverse map from a pte to a gfn. When role.direct is set, any
184 element of this array can be calculated from the gfn field when used; in
185 this case, the array of gfns is not allocated. See role.direct and gfn.
186 slot_bitmap:
187 A bitmap containing one bit per memory slot. If the page contains a pte
188 mapping a page from memory slot n, then bit n of slot_bitmap will be set
189 (if a page is aliased among several slots, then it is not guaranteed that
190 all slots will be marked).
191 Used during dirty logging to avoid scanning a shadow page if none of its
192 pages need tracking.
193 root_count:
194 A counter keeping track of how many hardware registers (guest cr3 or
195 pdptrs) are now pointing at the page. While this counter is nonzero, the
196 page cannot be destroyed. See role.invalid.
197 multimapped:
198 Whether there exist multiple sptes pointing at this page.
199 parent_pte/parent_ptes:
200 If multimapped is zero, parent_pte points at the single spte that points at
201 this page's spt. Otherwise, parent_ptes points at a data structure
202 with a list of parent_ptes.
203 unsync:
204 If true, then the translations in this page may not match the guest's
205 translation. This is equivalent to the state of the tlb when a pte is
206 changed but before the tlb entry is flushed. Accordingly, unsync ptes
207 are synchronized when the guest executes invlpg or flushes its tlb by
208 other means. Valid for leaf pages.
209 unsync_children:
210 How many sptes in the page point at pages that are unsync (or have
211 unsynchronized children).
212 unsync_child_bitmap:
213 A bitmap indicating which sptes in spt point (directly or indirectly) at
214 pages that may be unsynchronized. Used to quickly locate all unsynchronized
215 pages reachable from a given page.
216
217Reverse map
218===========
219
220The mmu maintains a reverse mapping whereby all ptes mapping a page can be
221reached given its gfn. This is used, for example, when swapping out a page.
222
223Synchronized and unsynchronized pages
224=====================================
225
226The guest uses two events to synchronize its tlb and page tables: tlb flushes
227and page invalidations (invlpg).
228
229A tlb flush means that we need to synchronize all sptes reachable from the
230guest's cr3. This is expensive, so we keep all guest page tables write
231protected, and synchronize sptes to gptes when a gpte is written.
232
233A special case is when a guest page table is reachable from the current
234guest cr3. In this case, the guest is obliged to issue an invlpg instruction
235before using the translation. We take advantage of that by removing write
236protection from the guest page, and allowing the guest to modify it freely.
237We synchronize modified gptes when the guest invokes invlpg. This reduces
238the amount of emulation we have to do when the guest modifies multiple gptes,
239or when a guest page is no longer used as a page table and is used for
240random guest data.
241
242As a side effect we have to resynchronize all reachable unsynchronized shadow
243pages on a tlb flush.
244
245
246Reaction to events
247==================
248
249- guest page fault (or npt page fault, or ept violation)
250
251This is the most complicated event. The cause of a page fault can be:
252
253 - a true guest fault (the guest translation won't allow the access) (*)
254 - access to a missing translation
255 - access to a protected translation
256 - when logging dirty pages, memory is write protected
257 - synchronized shadow pages are write protected (*)
258 - access to untranslatable memory (mmio)
259
260 (*) not applicable in direct mode
261
262Handling a page fault is performed as follows:
263
264 - if needed, walk the guest page tables to determine the guest translation
265 (gva->gpa or ngpa->gpa)
266 - if permissions are insufficient, reflect the fault back to the guest
267 - determine the host page
268 - if this is an mmio request, there is no host page; call the emulator
269 to emulate the instruction instead
270 - walk the shadow page table to find the spte for the translation,
271 instantiating missing intermediate page tables as necessary
272 - try to unsynchronize the page
273 - if successful, we can let the guest continue and modify the gpte
274 - emulate the instruction
275 - if failed, unshadow the page and let the guest continue
276 - update any translations that were modified by the instruction
277
278invlpg handling:
279
280 - walk the shadow page hierarchy and drop affected translations
281 - try to reinstantiate the indicated translation in the hope that the
282 guest will use it in the near future
283
284Guest control register updates:
285
286- mov to cr3
287 - look up new shadow roots
288 - synchronize newly reachable shadow pages
289
290- mov to cr0/cr4/efer
291 - set up mmu context for new paging mode
292 - look up new shadow roots
293 - synchronize newly reachable shadow pages
294
295Host translation updates:
296
297 - mmu notifier called with updated hva
298 - look up affected sptes through reverse map
299 - drop (or update) translations
300
301Emulating cr0.wp
302================
303
304If tdp is not enabled, the host must keep cr0.wp=1 so page write protection
305works for the guest kernel, not guest userspace. When the guest
306cr0.wp=1, this does not present a problem. However when the guest cr0.wp=0,
307we cannot map the permissions for gpte.u=1, gpte.w=0 to any spte (the
308semantics require allowing any guest kernel access plus user read access).
309
310We handle this by mapping the permissions to two possible sptes, depending
311on fault type:
312
313- kernel write fault: spte.u=0, spte.w=1 (allows full kernel access,
314 disallows user access)
315- read fault: spte.u=1, spte.w=0 (allows full read access, disallows kernel
316 write access)
317
318(user write faults generate a #PF)
319
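The choice above can be pictured with a small C sketch. The struct and function
below are purely illustrative stand-ins, not KVM's actual data structures:

    #include <stdbool.h>

    /* Illustrative only; KVM's real spte handling does not look like this. */
    struct spte_perms {
            bool user;      /* spte.u */
            bool write;     /* spte.w */
    };

    /*
     * For a guest running with cr0.wp=0 and a gpte with u=1, w=0, pick the
     * spte permissions according to the faulting access, as described above.
     * User write faults are not mapped at all; the caller injects a #PF.
     */
    static struct spte_perms choose_spte_perms(bool write_fault, bool user_fault)
    {
            struct spte_perms p = { .user = false, .write = false };

            if (write_fault && !user_fault) {
                    /* kernel write fault: full kernel access, no user access */
                    p.user = false;
                    p.write = true;
            } else if (!write_fault) {
                    /* read fault: full read access, no kernel write access */
                    p.user = true;
                    p.write = false;
            }
            return p;
    }
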
320Large pages
321===========
322
323The mmu supports all combinations of large and small guest and host pages.
324Supported page sizes include 4k, 2M, 4M, and 1G. 4M pages are treated as
325two separate 2M pages, on both guest and host, since the mmu always uses PAE
326paging.
327
328To instantiate a large spte, four constraints must be satisfied:
329
330- the spte must point to a large host page
331- the guest pte must be a large pte of at least equivalent size (if tdp is
332 enabled, there is no guest pte and this condition is satisfied)
333- if the spte will be writeable, the large page frame may not overlap any
334 write-protected pages
335- the guest page must be wholly contained by a single memory slot
336
337To check the last two conditions, the mmu maintains a ->write_count set of
338arrays for each memory slot and large page size. Every write protected page
339causes its write_count to be incremented, thus preventing instantiation of
340a large spte. The frames at the end of an unaligned memory slot have
341artificially inflated ->write_counts so they can never be instantiated.
342
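The effect of the write_count bookkeeping can be sketched roughly as follows;
the structures and names are invented for illustration and do not mirror KVM's
real memslot code:

    /* Illustrative stand-ins for the per-slot, per-large-page-size counters. */
    struct lpage_info {
            int write_count;        /* > 0 forbids a large spte for this frame */
    };

    struct memslot {
            unsigned long base_gfn;
            struct lpage_info *lpage_info;  /* one entry per large page frame */
    };

    /*
     * A large spte covering the large page frame that contains 'gfn' may only
     * be instantiated when no page in that frame is write protected; unaligned
     * slot ends are excluded by the artificially inflated write_count.
     */
    static int can_map_large_page(struct memslot *slot, unsigned long gfn,
                                  unsigned long pages_per_large_page)
    {
            unsigned long idx = (gfn - slot->base_gfn) / pages_per_large_page;

            return slot->lpage_info[idx].write_count == 0;
    }
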
343Further reading
344===============
345
346- NPT presentation from KVM Forum 2008
347 http://www.linux-kvm.org/wiki/images/c/c8/KvmForum2008%24kdf2008_21.pdf
348
diff --git a/Documentation/virtual/kvm/msr.txt b/Documentation/virtual/kvm/msr.txt
new file mode 100644
index 000000000000..d079aed27e03
--- /dev/null
+++ b/Documentation/virtual/kvm/msr.txt
@@ -0,0 +1,187 @@
1KVM-specific MSRs.
2Glauber Costa <glommer@redhat.com>, Red Hat Inc, 2010
3=====================================================
4
5KVM makes use of some custom MSRs to service some requests.
6
7Custom MSRs have a range reserved for them, that goes from
80x4b564d00 to 0x4b564dff. There are MSRs outside this area,
9but they are deprecated and their use is discouraged.
10
11Custom MSR list
12--------
13
14The current supported Custom MSR list is:
15
16MSR_KVM_WALL_CLOCK_NEW: 0x4b564d00
17
18 data: 4-byte aligned physical address of a memory area which must be
19 in guest RAM. This memory is expected to hold a copy of the following
20 structure:
21
22 struct pvclock_wall_clock {
23 u32 version;
24 u32 sec;
25 u32 nsec;
26 } __attribute__((__packed__));
27
28 whose data will be filled in by the hypervisor. The hypervisor is only
29 guaranteed to update this data at the moment of MSR write.
30 Users that want to reliably query this information more than once have
31 to write more than once to this MSR. Fields have the following meanings:
32
33 version: guest has to check version before and after grabbing
34 time information and check that they are both equal and even.
35 An odd version indicates an in-progress update.
36
37 sec: number of seconds for wallclock.
38
39 nsec: number of nanoseconds for wallclock.
40
41 Note that although MSRs are per-CPU entities, the effect of this
42 particular MSR is global.
43
44 Availability of this MSR must be checked via bit 3 in 0x40000001 cpuid
45 leaf prior to usage.
46
47MSR_KVM_SYSTEM_TIME_NEW: 0x4b564d01
48
49 data: 4-byte aligned physical address of a memory area which must be in
50 guest RAM, plus an enable bit in bit 0. This memory is expected to hold
51 a copy of the following structure:
52
53 struct pvclock_vcpu_time_info {
54 u32 version;
55 u32 pad0;
56 u64 tsc_timestamp;
57 u64 system_time;
58 u32 tsc_to_system_mul;
59 s8 tsc_shift;
60 u8 flags;
61 u8 pad[2];
62 } __attribute__((__packed__)); /* 32 bytes */
63
64 whose data will be filled in by the hypervisor periodically. Only one
65 write, or registration, is needed for each VCPU. The interval between
66 updates of this structure is arbitrary and implementation-dependent.
67 The hypervisor may update this structure at any time it sees fit until
68 anything with bit0 == 0 is written to it.
69
70 Fields have the following meanings:
71
72 version: guest has to check version before and after grabbing
73 time information and check that they are both equal and even.
74 An odd version indicates an in-progress update.
75
76 tsc_timestamp: the tsc value at the current VCPU at the time
77 of the update of this structure. Guests can subtract this value
78 from current tsc to derive a notion of elapsed time since the
79 structure update.
80
81 system_time: a host notion of monotonic time, including sleep
82 time at the time this structure was last updated. Unit is
83 nanoseconds.
84
85 tsc_to_system_mul: a function of the tsc frequency. One has
86 to multiply any tsc-related quantity by this value to get
87 a value in nanoseconds, besides dividing by 2^tsc_shift
88
89 tsc_shift: cycle to nanosecond divider, as a power of two, to
90 allow for shift rights. One has to shift right any tsc-related
91 quantity by this value to get a value in nanoseconds, besides
92 multiplying by tsc_to_system_mul.
93
94 With this information, guests can derive per-CPU time by
95 doing:
96
97 time = (current_tsc - tsc_timestamp)
98 time = (time * tsc_to_system_mul) >> tsc_shift
99 time = time + system_time
100
101 flags: bits in this field indicate extended capabilities
102 coordinated between the guest and the hypervisor. Availability
103 of specific flags has to be checked in 0x40000001 cpuid leaf.
104 Current flags are:
105
106 flag bit | cpuid bit | meaning
107 -------------------------------------------------------------
108 | | time measures taken across
109 0 | 24 | multiple cpus are guaranteed to
110 | | be monotonic
111 -------------------------------------------------------------
112
113 Availability of this MSR must be checked via bit 3 in 0x40000001 cpuid
114 leaf prior to usage.
115
116
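Putting the version protocol and the conversion above together, a guest-side
read could look roughly like the sketch below. It transcribes the formula
exactly as given in this document (and so assumes a non-negative tsc_shift);
the kernel's real pvclock code performs a slightly more careful fixed-point
conversion, so treat this as an illustration rather than a drop-in
implementation. The current TSC value is passed in by the caller.

    #include <stdint.h>

    struct pvclock_vcpu_time_info {         /* layout as shown above */
            uint32_t version;
            uint32_t pad0;
            uint64_t tsc_timestamp;
            uint64_t system_time;
            uint32_t tsc_to_system_mul;
            int8_t   tsc_shift;
            uint8_t  flags;
            uint8_t  pad[2];
    } __attribute__((__packed__));

    /*
     * Retry while the hypervisor is updating the structure (odd version, or
     * version changed under us), then apply the conversion described above.
     */
    static uint64_t pvclock_read_ns(volatile struct pvclock_vcpu_time_info *ti,
                                    uint64_t current_tsc)
    {
            uint32_t version;
            uint64_t delta, ns;

            do {
                    version = ti->version;
                    __sync_synchronize();   /* keep the field reads inside the check */
                    delta = current_tsc - ti->tsc_timestamp;
                    ns = (delta * ti->tsc_to_system_mul) >> ti->tsc_shift;
                    ns += ti->system_time;
                    __sync_synchronize();
            } while ((version & 1) || version != ti->version);

            return ns;
    }
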
117MSR_KVM_WALL_CLOCK: 0x11
118
119 data and functioning: same as MSR_KVM_WALL_CLOCK_NEW. Use that instead.
120
121 This MSR falls outside the reserved KVM range and may be removed in the
122 future. Its usage is deprecated.
123
124 Availability of this MSR must be checked via bit 0 in 0x40000001 cpuid
125 leaf prior to usage.
126
127MSR_KVM_SYSTEM_TIME: 0x12
128
129 data and functioning: same as MSR_KVM_SYSTEM_TIME_NEW. Use that instead.
130
131 This MSR falls outside the reserved KVM range and may be removed in the
132 future. Its usage is deprecated.
133
134 Availability of this MSR must be checked via bit 0 in 0x40000001 cpuid
135 leaf prior to usage.
136
137 The suggested algorithm for detecting kvmclock presence is then:
138
139 if (!kvm_para_available()) /* refer to cpuid.txt */
140 return NON_PRESENT;
141
142 flags = cpuid_eax(0x40000001);
143 if (flags & (1 << 3)) {
144 msr_kvm_system_time = MSR_KVM_SYSTEM_TIME_NEW;
145 msr_kvm_wall_clock = MSR_KVM_WALL_CLOCK_NEW;
146 return PRESENT;
147 } else if (flags & (1 << 0)) {
148 msr_kvm_system_time = MSR_KVM_SYSTEM_TIME;
149 msr_kvm_wall_clock = MSR_KVM_WALL_CLOCK;
150 return PRESENT;
151 } else
152 return NON_PRESENT;
153
154MSR_KVM_ASYNC_PF_EN: 0x4b564d02
155 data: Bits 63-6 hold 64-byte aligned physical address of a
156 64 byte memory area which must be in guest RAM and must be
157 zeroed. Bits 5-2 are reserved and should be zero. Bit 0 is 1
158 when asynchronous page faults are enabled on the vcpu, 0 when
159 disabled. Bit 1 is 1 if asynchronous page faults can be injected
160 when the vcpu is in cpl == 0.
161
162 The first 4 bytes of the 64 byte memory area will be written by
163 the hypervisor at the time of asynchronous page fault (APF)
164 injection to indicate the type of asynchronous page fault. A value
165 of 1 means that the page referred to by the page fault is not
166 present. A value of 2 means that the page is now available. Disabling
167 interrupts inhibits APFs. The guest must not enable interrupts
168 before the reason is read, or it may be overwritten by another
169 APF. Since APF uses the same exception vector as a regular page
170 fault, the guest must reset the reason to 0 before it does
171 anything that can generate a normal page fault. If the APF
172 reason is 0 during a page fault, then this is a regular page
173 fault.
174
175 During delivery of a type 1 APF, cr2 contains a token that will
176 be used to notify the guest when the missing page becomes
177 available. When the page becomes available, a type 2 APF is sent
178 with cr2 set to the token associated with the page. There is a
179 special token, 0xffffffff, which tells the vcpu that it should
180 wake up all processes waiting for APFs and that no individual
181 type 2 APFs will be sent.
182
183 If APF is disabled while there are outstanding APFs, they will
184 not be delivered.
185
186 Currently a type 2 APF will always be delivered on the same vcpu as
187 the type 1 APF was, but the guest should not rely on that.
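
The bit layout above can be made concrete with a short guest-side sketch. The
constant names, the enable helper and the #PF-handler fragment below are
illustrative (and only meaningful inside a guest kernel running at cpl 0); they
are not the Linux guest implementation.

    #include <stdint.h>

    #define MSR_KVM_ASYNC_PF_EN     0x4b564d02
    #define ASYNC_PF_ENABLED        (1ULL << 0)     /* bit 0: enable APF */
    #define ASYNC_PF_SEND_ALWAYS    (1ULL << 1)     /* bit 1: inject at cpl == 0 */

    static inline void wrmsr(uint32_t msr, uint64_t value)
    {
            __asm__ __volatile__("wrmsr" : : "c"(msr),
                                 "a"((uint32_t)value), "d"((uint32_t)(value >> 32)));
    }

    /* Register a 64-byte aligned, zeroed area at guest physical address 'gpa';
     * OR in ASYNC_PF_SEND_ALWAYS if cpl 0 injection is wanted. */
    static void enable_async_pf(uint64_t gpa)
    {
            wrmsr(MSR_KVM_ASYNC_PF_EN, (gpa & ~0x3fULL) | ASYNC_PF_ENABLED);
    }

    /*
     * From the #PF handler, before interrupts are re-enabled: read and clear
     * the reason word. 0 = regular page fault, 1 = page not present (type 1
     * APF), 2 = page now available (type 2 APF).
     */
    static uint32_t read_and_clear_apf_reason(volatile uint32_t *reason_word)
    {
            uint32_t reason = *reason_word;

            *reason_word = 0;
            return reason;
    }
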
diff --git a/Documentation/virtual/kvm/ppc-pv.txt b/Documentation/virtual/kvm/ppc-pv.txt
new file mode 100644
index 000000000000..3ab969c59046
--- /dev/null
+++ b/Documentation/virtual/kvm/ppc-pv.txt
@@ -0,0 +1,196 @@
1The PPC KVM paravirtual interface
2=================================
3
4The basic execution principle by which KVM on PowerPC works is to run all kernel
5space code in PR=1 which is user space. This way we trap all privileged
6instructions and can emulate them accordingly.
7
8Unfortunately that is also its downfall. There are quite a few privileged
9instructions that needlessly return us to the hypervisor even though they
10could be handled differently.
11
12This is what the PPC PV interface helps with. It takes privileged instructions
13and transforms them into unprivileged ones with some help from the hypervisor.
14This cuts down virtualization costs by about 50% on some of my benchmarks.
15
16The code for that interface can be found in arch/powerpc/kernel/kvm*
17
18Querying for existence
19======================
20
21To find out if we're running on KVM or not, we leverage the device tree. When
22Linux is running on KVM, a node /hypervisor exists. That node contains a
23compatible property with the value "linux,kvm".
24
25Once you have determined that you're running under a PV capable KVM, you can use
26hypercalls as described below.
27
28KVM hypercalls
29==============
30
31Inside the device tree's /hypervisor node there's a property called
32'hypercall-instructions'. This property contains at most 4 opcodes that make
33up the hypercall. To call a hypercall, just call these instructions.
34
35The parameters are as follows:
36
37 Register IN OUT
38
39 r0 - volatile
40 r3 1st parameter Return code
41 r4 2nd parameter 1st output value
42 r5 3rd parameter 2nd output value
43 r6 4th parameter 3rd output value
44 r7 5th parameter 4th output value
45 r8 6th parameter 5th output value
46 r9 7th parameter 6th output value
47 r10 8th parameter 7th output value
48 r11 hypercall number 8th output value
49 r12 - volatile
50
51Hypercall definitions are shared in generic code, so the same hypercall numbers
52apply for x86 and powerpc alike with the exception that each KVM hypercall
53also needs to be ORed with the KVM vendor code which is (42 << 16).
54
55Return codes can be as follows:
56
57 Code Meaning
58
59 0 Success
60 12 Hypercall not implemented
61 <0 Error
62
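A tiny C sketch of the numbering rule above (the helper name is made up; the
register assignments are the ones listed in the table):

    #include <stdint.h>

    #define KVM_PPC_VENDOR_CODE     (42 << 16)      /* vendor code from the text above */

    /*
     * Hypothetical helper: turn a generic (x86-shared) KVM hypercall number
     * into the value the guest loads into r11 before executing the opcodes
     * from the 'hypercall-instructions' property. r3-r10 carry up to eight
     * parameters, and r3 holds the return code afterwards.
     */
    static inline uint32_t ppc_kvm_hcall_nr(uint32_t generic_nr)
    {
            return generic_nr | KVM_PPC_VENDOR_CODE;
    }
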
63The magic page
64==============
65
66To enable communication between the hypervisor and guest there is a new shared
67page that contains parts of supervisor visible register state. The guest can
68map this shared page using the KVM hypercall KVM_HC_PPC_MAP_MAGIC_PAGE.
69
70With this hypercall issued the guest always gets the magic page mapped at the
71desired location in effective and physical address space. For now, we always
72map the page to -4096. This way we can access it using absolute load and store
73functions. The following instruction reads the first field of the magic page:
74
75 ld rX, -4096(0)
76
77The interface is designed to be extensible should there be need later to add
78additional registers to the magic page. If you add fields to the magic page,
79also define a new hypercall feature to indicate that the host can give you more
80registers. Only if the host supports the additional features, make use of them.
81
82The magic page has the following layout as described in
83arch/powerpc/include/asm/kvm_para.h:
84
85struct kvm_vcpu_arch_shared {
86 __u64 scratch1;
87 __u64 scratch2;
88 __u64 scratch3;
89 __u64 critical; /* Guest may not get interrupts if == r1 */
90 __u64 sprg0;
91 __u64 sprg1;
92 __u64 sprg2;
93 __u64 sprg3;
94 __u64 srr0;
95 __u64 srr1;
96 __u64 dar;
97 __u64 msr;
98 __u32 dsisr;
99 __u32 int_pending; /* Tells the guest if we have an interrupt */
100};
101
102Additions to the page must only occur at the end. Struct fields are always 32
103or 64 bit aligned, depending on them being 32 or 64 bit wide respectively.
104
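Since the page is always mapped at -4096, a guest can reach it with an absolute
pointer. The accessors below are a hedged C sketch (the real guest code lives
in arch/powerpc/kernel/kvm*); the struct layout is the one shown above:

    #include <stdint.h>

    struct kvm_vcpu_arch_shared {   /* layout copied from above */
            uint64_t scratch1, scratch2, scratch3;
            uint64_t critical;
            uint64_t sprg0, sprg1, sprg2, sprg3;
            uint64_t srr0, srr1;
            uint64_t dar;
            uint64_t msr;
            uint32_t dsisr;
            uint32_t int_pending;
    };

    /* The magic page is always mapped at effective address -4096. */
    static inline volatile struct kvm_vcpu_arch_shared *kvm_magic_page(void)
    {
            return (volatile struct kvm_vcpu_arch_shared *)(intptr_t)-4096;
    }

    /* e.g. the paravirtual replacement for mfmsr becomes a simple load: */
    static inline uint64_t pv_mfmsr(void)
    {
            return kvm_magic_page()->msr;
    }
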
105Magic page features
106===================
107
108When mapping the magic page using the KVM hypercall KVM_HC_PPC_MAP_MAGIC_PAGE,
109a second return value is passed to the guest. This second return value contains
110a bitmap of available features inside the magic page.
111
112The following enhancements to the magic page are currently available:
113
114 KVM_MAGIC_FEAT_SR Maps SR registers r/w in the magic page
115
116For enhanced features in the magic page, please check for the existence of the
117feature before using them!
118
119MSR bits
120========
121
122The MSR contains bits that require hypervisor intervention and bits that do
123not require direct hypervisor intervention because they only get interpreted
124when entering the guest or don't have any impact on the hypervisor's behavior.
125
126The following bits are safe to be set inside the guest:
127
128 MSR_EE
129 MSR_RI
130 MSR_CR
131 MSR_ME
132
133If any other bit changes in the MSR, please still use mtmsr(d).
134
135Patched instructions
136====================
137
138The "ld" and "std" instructions are transormed to "lwz" and "stw" instructions
139respectively on 32 bit systems with an added offset of 4 to accommodate for big
140endianness.
141
142The following is a list of the mappings the Linux kernel performs when running
143as a guest. Implementing any of those mappings is optional, as the instruction traps
144also act on the shared page. So calling privileged instructions still works as
145before.
146
147From To
148==== ==
149
150mfmsr rX ld rX, magic_page->msr
151mfsprg rX, 0 ld rX, magic_page->sprg0
152mfsprg rX, 1 ld rX, magic_page->sprg1
153mfsprg rX, 2 ld rX, magic_page->sprg2
154mfsprg rX, 3 ld rX, magic_page->sprg3
155mfsrr0 rX ld rX, magic_page->srr0
156mfsrr1 rX ld rX, magic_page->srr1
157mfdar rX ld rX, magic_page->dar
158mfdsisr rX lwz rX, magic_page->dsisr
159
160mtmsr rX std rX, magic_page->msr
161mtsprg 0, rX std rX, magic_page->sprg0
162mtsprg 1, rX std rX, magic_page->sprg1
163mtsprg 2, rX std rX, magic_page->sprg2
164mtsprg 3, rX std rX, magic_page->sprg3
165mtsrr0 rX std rX, magic_page->srr0
166mtsrr1 rX std rX, magic_page->srr1
167mtdar rX std rX, magic_page->dar
168mtdsisr rX stw rX, magic_page->dsisr
169
170tlbsync nop
171
172mtmsrd rX, 0 b <special mtmsr section>
173mtmsr rX b <special mtmsr section>
174
175mtmsrd rX, 1 b <special mtmsrd section>
176
177[Book3S only]
178mtsrin rX, rY b <special mtsrin section>
179
180[BookE only]
181wrteei [0|1] b <special wrteei section>
182
183
184Some instructions require more logic to determine what's going on than a load
185or store instruction can deliver. To enable patching of those, we keep some
186RAM around where we can live translate instructions to. What happens is the
187following:
188
189 1) copy emulation code to memory
190 2) patch that code to fit the emulated instruction
191 3) patch that code to return to the original pc + 4
192 4) patch the original instruction to branch to the new code
193
194That way we can inject an arbitrary amount of code as replacement for a single
195instruction. This allows us to check for pending interrupts when setting EE=1
196for example.
diff --git a/Documentation/virtual/kvm/review-checklist.txt b/Documentation/virtual/kvm/review-checklist.txt
new file mode 100644
index 000000000000..a850986ed684
--- /dev/null
+++ b/Documentation/virtual/kvm/review-checklist.txt
@@ -0,0 +1,38 @@
1Review checklist for kvm patches
2================================
3
41. The patch must follow Documentation/CodingStyle and
5 Documentation/SubmittingPatches.
6
72. Patches should be against kvm.git master branch.
8
93. If the patch introduces or modifies a new userspace API:
10 - the API must be documented in Documentation/virtual/kvm/api.txt
11 - the API must be discoverable using KVM_CHECK_EXTENSION
12
134. New state must include support for save/restore.
14
155. New features must default to off (userspace should explicitly request them).
16 Performance improvements can and should default to on.
17
186. New cpu features should be exposed via KVM_GET_SUPPORTED_CPUID2.
19
207. Emulator changes should be accompanied by unit tests in the qemu-kvm.git
21 kvm/test directory.
22
238. Changes should be vendor neutral when possible. Changes to common code
24 are better than duplicating changes to vendor code.
25
269. Similarly, prefer changes to arch independent code over arch dependent
27 code.
28
2910. User/kernel interfaces and guest/host interfaces must be 64-bit clean
30 (all variables and sizes naturally aligned on 64-bit; use specific types
31 only - u64 rather than ulong).
32
3311. New guest visible features must either be documented in a hardware manual
34 or be accompanied by documentation.
35
3612. Features must be robust against reset and kexec - for example, shared
37 host/guest memory must be unshared to prevent the host from writing to
38 guest memory that the guest has not reserved for this purpose.
diff --git a/Documentation/virtual/kvm/timekeeping.txt b/Documentation/virtual/kvm/timekeeping.txt
new file mode 100644
index 000000000000..df8946377cb6
--- /dev/null
+++ b/Documentation/virtual/kvm/timekeeping.txt
@@ -0,0 +1,612 @@
1
2 Timekeeping Virtualization for X86-Based Architectures
3
4 Zachary Amsden <zamsden@redhat.com>
5 Copyright (c) 2010, Red Hat. All rights reserved.
6
71) Overview
82) Timing Devices
93) TSC Hardware
104) Virtualization Problems
11
12=========================================================================
13
141) Overview
15
16One of the most complicated parts of the X86 platform, and specifically of
17its virtualization, is the plethora of timing devices available
18and the complexity of emulating those devices. In addition, virtualization of
19time introduces a new set of challenges because it introduces a multiplexed
20division of time beyond the control of the guest CPU.
21
22First, we will describe the various timekeeping hardware available, then
23present some of the problems which arise and solutions available, giving
24specific recommendations for certain classes of KVM guests.
25
26The purpose of this document is to collect data and information relevant to
27timekeeping which may be difficult to find elsewhere, specifically,
28information relevant to KVM and hardware-based virtualization.
29
30=========================================================================
31
322) Timing Devices
33
34First we discuss the basic hardware devices available. TSC and the related
35KVM clock are special enough to warrant a full exposition and are described in
36the following section.
37
382.1) i8254 - PIT
39
40One of the first timer devices available is the programmable interrupt timer,
41or PIT. The PIT has a fixed frequency 1.193182 MHz base clock and three
42channels which can be programmed to deliver periodic or one-shot interrupts.
43These three channels can be configured in different modes and have individual
44counters. Channels 1 and 2 were not available for general use in the original
45IBM PC, and historically were connected to control RAM refresh and the PC
46speaker. Now the PIT is typically integrated as part of an emulated chipset
47and a separate physical PIT is not used.
48
49The PIT uses I/O ports 0x40 - 0x43. Access to the 16-bit counters is done
50using single or multiple byte access to the I/O ports. There are 6 modes
51available, but not all modes are available to all timers, as only timer 2
52has a connected gate input, required for modes 1 and 5. The gate line is
53controlled by port 61h, bit 0, as illustrated in the following diagram.
54
55 -------------- ----------------
56| | | |
57| 1.1932 MHz |---------->| CLOCK OUT | ---------> IRQ 0
58| Clock | | | |
59 -------------- | +->| GATE TIMER 0 |
60 | ----------------
61 |
62 | ----------------
63 | | |
64 |------>| CLOCK OUT | ---------> 66.3 KHZ DRAM
65 | | | (aka /dev/null)
66 | +->| GATE TIMER 1 |
67 | ----------------
68 |
69 | ----------------
70 | | |
71 |------>| CLOCK OUT | ---------> Port 61h, bit 5
72 | | |
73Port 61h, bit 0 ---------->| GATE TIMER 2 | \_.---- ____
74 ---------------- _| )--|LPF|---Speaker
75 / *---- \___/
76Port 61h, bit 1 -----------------------------------/
77
78The timer modes are now described.
79
80Mode 0: Single Timeout. This is a one-shot software timeout that counts down
81 when the gate is high (always true for timers 0 and 1). When the count
82 reaches zero, the output goes high.
83
84Mode 1: Triggered One-shot. The output is initially set high. When the gate
85 line is set high, a countdown is initiated (which does not stop if the gate is
86 lowered), during which the output is set low. When the count reaches zero,
87 the output goes high.
88
89Mode 2: Rate Generator. The output is initially set high. When the countdown
90 reaches 1, the output goes low for one count and then returns high. The value
91 is reloaded and the countdown automatically resumes. If the gate line goes
92 low, the count is halted. If the output is low when the gate is lowered, the
93 output automatically goes high (this only affects timer 2).
94
95Mode 3: Square Wave. This generates a high / low square wave. The count
96 determines the length of the pulse, which alternates between high and low
97 when zero is reached. The count only proceeds when gate is high and is
98 automatically reloaded on reaching zero. The count is decremented twice at
99 each clock to generate a full high / low cycle at the full periodic rate.
100 If the count is even, the output remains high for N/2 counts and low for N/2
101 counts; if the count is odd, the output is high for (N+1)/2 counts and low
102 for (N-1)/2 counts. Only even values are latched by the counter, so odd
103 values are not observed when reading. This is the intended mode for timer 2,
104 which generates sine-like tones by low-pass filtering the square wave output.
105
106Mode 4: Software Strobe. After programming this mode and loading the counter,
107 the output remains high until the counter reaches zero. Then the output
108 goes low for 1 clock cycle and returns high. The counter is not reloaded.
109 Counting only occurs when gate is high.
110
111Mode 5: Hardware Strobe. After programming and loading the counter, the
112 output remains high. When the gate is raised, a countdown is initiated
113 (which does not stop if the gate is lowered). When the counter reaches zero,
114 the output goes low for 1 clock cycle and then returns high. The counter is
115 not reloaded.
116
117In addition to normal binary counting, the PIT supports BCD counting. The
118command port, 0x43 is used to set the counter and mode for each of the three
119timers.
120
121PIT commands are issued to port 0x43 using the following bit encoding:
122
123Bit 7-4: Command (See table below)
124Bit 3-1: Mode (000 = Mode 0, 101 = Mode 5, 11X = undefined)
125Bit 0 : Binary (0) / BCD (1)
126
127Command table:
128
1290000 - Latch Timer 0 count for port 0x40
130 sample and hold the count to be read in port 0x40;
131 additional commands ignored until counter is read;
132 mode bits ignored.
133
1340001 - Set Timer 0 LSB mode for port 0x40
135 set timer to read LSB only and force MSB to zero;
136 mode bits set timer mode
137
1380010 - Set Timer 0 MSB mode for port 0x40
139 set timer to read MSB only and force LSB to zero;
140 mode bits set timer mode
141
1420011 - Set Timer 0 16-bit mode for port 0x40
143 set timer to read / write LSB first, then MSB;
144 mode bits set timer mode
145
1460100 - Latch Timer 1 count for port 0x41 - as described above
1470101 - Set Timer 1 LSB mode for port 0x41 - as described above
1480110 - Set Timer 1 MSB mode for port 0x41 - as described above
1490111 - Set Timer 1 16-bit mode for port 0x41 - as described above
150
1511000 - Latch Timer 2 count for port 0x42 - as described above
1521001 - Set Timer 2 LSB mode for port 0x42 - as described above
1531010 - Set Timer 2 MSB mode for port 0x42 - as described above
1541011 - Set Timer 2 16-bit mode for port 0x42 as described above
155
1561101 - General counter latch
157 Latch combination of counters into corresponding ports
158 Bit 3 = Counter 2
159 Bit 2 = Counter 1
160 Bit 1 = Counter 0
161 Bit 0 = Unused
162
1631110 - Latch timer status
164 Latch combination of counter mode into corresponding ports
165 Bit 3 = Counter 2
166 Bit 2 = Counter 1
167 Bit 1 = Counter 0
168
169 The output of ports 0x40-0x42 following this command will be:
170
171 Bit 7 = Output pin
172 Bit 6 = Count loaded (0 if timer has expired)
173 Bit 5-4 = Read / Write mode
174 01 = MSB only
175 10 = LSB only
176 11 = LSB / MSB (16-bit)
177 Bit 3-1 = Mode
178 Bit 0 = Binary (0) / BCD mode (1)
179
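As a concrete example of the command encoding above, the sketch below programs
timer 0 as a mode 2 rate generator at roughly 100 Hz. It uses the Linux
<sys/io.h> outb() helper purely for illustration; it needs I/O port privileges
(ioperm/iopl) and should not be pointed at hardware the operating system is
already driving.

    #include <sys/io.h>     /* outb(value, port) */

    #define PIT_CH0_PORT    0x40
    #define PIT_CMD_PORT    0x43
    #define PIT_BASE_HZ     1193182         /* 1.193182 MHz base clock */

    static void pit_set_periodic(unsigned int hz)
    {
            unsigned int count = PIT_BASE_HZ / hz;  /* 100 Hz -> 11931 */

            /*
             * Command byte, per the encoding above:
             *   bits 7-4 = 0011  set timer 0, 16-bit (LSB then MSB)
             *   bits 3-1 = 010   mode 2, rate generator
             *   bit  0   = 0     binary counting
             */
            outb(0x34, PIT_CMD_PORT);
            outb(count & 0xff, PIT_CH0_PORT);               /* LSB first */
            outb((count >> 8) & 0xff, PIT_CH0_PORT);        /* then MSB */
    }
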
1802.2) RTC
181
182The second device which was available in the original PC was the MC146818 real
183time clock. The original device is now obsolete, and usually emulated by the
184system chipset, sometimes by an HPET and some frankenstein IRQ routing.
185
186The RTC is accessed through CMOS variables, which use an index register to
187control which bytes are read. Since there is only one index register, reads
188of the CMOS and reads of the RTC require lock protection (in addition, it is
189dangerous to allow userspace utilities such as hwclock to have direct RTC
190access, as they could corrupt kernel reads and writes of CMOS memory).
191
192The RTC generates an interrupt which is usually routed to IRQ 8. The interrupt
193can function as a periodic timer, an additional once a day alarm, and can issue
194interrupts after an update of the CMOS registers by the MC146818 is complete.
195The type of interrupt is signalled in the RTC status registers.
196
197The RTC will update the current time fields by battery power even while the
198system is off. The current time fields should not be read while an update is
199in progress, as indicated in the status register.
200
201The clock uses a 32.768kHz crystal, so bits 6-4 of register A should be
202programmed to a 32kHz divider if the RTC is to count seconds.
203
204This is the RAM map originally used for the RTC/CMOS:
205
206Location Size Description
207------------------------------------------
20800h byte Current second (BCD)
20901h byte Seconds alarm (BCD)
21002h byte Current minute (BCD)
21103h byte Minutes alarm (BCD)
21204h byte Current hour (BCD)
21305h byte Hours alarm (BCD)
21406h byte Current day of week (BCD)
21507h byte Current day of month (BCD)
21608h byte Current month (BCD)
21709h byte Current year (BCD)
2180Ah byte Register A
219 bit 7 = Update in progress
220 bit 6-4 = Divider for clock
221 000 = 4.194 MHz
222 001 = 1.049 MHz
223 010 = 32 kHz
224 10X = test modes
225 110 = reset / disable
226 111 = reset / disable
227 bit 3-0 = Rate selection for periodic interrupt
228 000 = periodic timer disabled
229 001 = 3.90625 mS
230 010 = 7.8125 mS
231 011 = .122070 mS
232 100 = .244141 mS
233 ...
234 1101 = 125 mS
235 1110 = 250 mS
236 1111 = 500 mS
2370Bh byte Register B
238 bit 7 = Run (0) / Halt (1)
239 bit 6 = Periodic interrupt enable
240 bit 5 = Alarm interrupt enable
241 bit 4 = Update-ended interrupt enable
242 bit 3 = Square wave interrupt enable
243 bit 2 = BCD calendar (0) / Binary (1)
244 bit 1 = 12-hour mode (0) / 24-hour mode (1)
245 bit 0 = 0 (DST off) / 1 (DST enabled)
2460Ch byte Register C (read only)
247 bit 7 = interrupt request flag (IRQF)
248 bit 6 = periodic interrupt flag (PF)
249 bit 5 = alarm interrupt flag (AF)
250 bit 4 = update interrupt flag (UF)
251 bit 3-0 = reserved
2520Dh byte Register D (read only)
253 bit 7 = RTC has power
254 bit 6-0 = reserved
25532h byte Current century BCD (*)
256 (*) location vendor specific and now determined from ACPI global tables
257
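A minimal read of the current second, using the map above, might look like the
sketch below. The index/data ports 0x70/0x71 are the conventional PC CMOS ports
(not part of the table above), and the <sys/io.h> helpers are again used only
for illustration; as noted earlier, poking the RTC from userspace behind the
kernel's back is a bad idea.

    #include <sys/io.h>     /* inb()/outb(); requires ioperm()/iopl() */

    #define CMOS_INDEX      0x70    /* conventional PC CMOS index port */
    #define CMOS_DATA       0x71    /* conventional PC CMOS data port */
    #define RTC_SECONDS     0x00    /* current second (BCD), see map above */
    #define RTC_REG_A       0x0A    /* bit 7 = update in progress */

    static unsigned int bcd_to_bin(unsigned int v)
    {
            return (v & 0x0f) + (v >> 4) * 10;
    }

    static unsigned int rtc_read_second(void)
    {
            /* Wait out an in-progress update, as described above. */
            do {
                    outb(RTC_REG_A, CMOS_INDEX);
            } while (inb(CMOS_DATA) & 0x80);

            outb(RTC_SECONDS, CMOS_INDEX);
            return bcd_to_bin(inb(CMOS_DATA));      /* assumes BCD mode */
    }
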
2582.3) APIC
259
260On Pentium and later processors, an on-board timer is available to each CPU
261as part of the Advanced Programmable Interrupt Controller. The APIC is
262accessed through memory-mapped registers and provides interrupt service to each
263CPU, used for IPIs and local timer interrupts.
264
265Although in theory the APIC is a safe and stable source for local interrupts,
266in practice, many bugs and glitches have occurred due to the special nature of
267the APIC CPU-local memory-mapped hardware. Beware that CPU errata may affect
268the use of the APIC and that workarounds may be required. In addition, some of
269these workarounds pose unique constraints for virtualization - requiring either
270extra overhead incurred from extra reads of memory-mapped I/O or additional
271functionality that may be more computationally expensive to implement.
272
273Since the APIC is documented quite well in the Intel and AMD manuals, we will
274avoid repetition of the detail here. It should be pointed out that the APIC
275timer is programmed through the LVT (local vector timer) register, is capable
276of one-shot or periodic operation, and is based on the bus clock divided down
277by the programmable divider register.
278
2792.4) HPET
280
281HPET is quite complex, and was originally intended to replace the PIT / RTC
282support of the X86 PC. It remains to be seen whether that will be the case, as
283the de facto standard of PC hardware is to emulate these older devices. Some
284systems designated as legacy free may support only the HPET as a hardware timer
285device.
286
287The HPET spec is rather loose and vague, requiring at least 3 hardware timers,
288but allowing implementation freedom to support many more. It also imposes no
289fixed rate on the timer frequency, but does impose some extremal values on
290frequency, error and slew.
291
292In general, the HPET is recommended as a high precision (compared to PIT /RTC)
293time source which is independent of local variation (as there is only one HPET
294in any given system). The HPET is also memory-mapped, and its presence is
295indicated through ACPI tables by the BIOS.
296
297Detailed specification of the HPET is beyond the current scope of this
298document, as it is also very well documented elsewhere.
299
3002.5) Offboard Timers
301
302Several cards, both proprietary (watchdog boards) and commonplace (e1000) have
303timing chips built into the cards which may have registers which are accessible
304to kernel or user drivers. To the author's knowledge, using these to generate
305a clocksource for a Linux or other kernel has not yet been attempted and is in
306general frowned upon as not playing by the agreed rules of the game. Such a
307timer device would require additional support to be virtualized properly and is
308not considered important at this time as no known operating system does this.
309
310=========================================================================
311
3123) TSC Hardware
313
314The TSC or time stamp counter is relatively simple in theory; it counts
315instruction cycles issued by the processor, which can be used as a measure of
316time. In practice, due to a number of problems, it is the most complicated
317timekeeping device to use.
318
319The TSC is represented internally as a 64-bit MSR which can be read with the
320RDMSR, RDTSC, or RDTSCP (when available) instructions. In the past, hardware
321limitations made it possible to write the TSC, but generally on old hardware it
322was only possible to write the low 32-bits of the 64-bit counter, and the upper
32332-bits of the counter were cleared. Now, however, on Intel processors family
3240Fh, for models 3, 4 and 6, and family 06h, models e and f, this restriction
325has been lifted and all 64-bits are writable. On AMD systems, the ability to
326write the TSC MSR is not an architectural guarantee.
327
328The TSC is accessible from CPL 0 and, conditionally, from CPL > 0 software by
329means of the CR4.TSD bit, which, when set, disables CPL > 0 TSC access.
330
331Some vendors have implemented an additional instruction, RDTSCP, which returns
332atomically not just the TSC, but an indicator which corresponds to the
333processor number. This can be used to index into an array of TSC variables to
334determine offset information in SMP systems where TSCs are not synchronized.
335The presence of this instruction must be determined by consulting CPUID feature
336bits.
337
338Both VMX and SVM provide extension fields in the virtualization hardware which
339allow the guest-visible TSC to be offset by a constant. Newer implementations
340promise to allow the TSC to additionally be scaled, but this hardware is not
341yet widely available.
342
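A hedged sketch of sampling the TSC, with and without the processor indicator,
using the GCC/Clang <x86intrin.h> intrinsics; RDTSCP must first be confirmed
via the CPUID feature bits as noted above:

    #include <stdint.h>
    #include <x86intrin.h>  /* __rdtsc(), __rdtscp() */

    static uint64_t read_tsc(void)
    {
            return __rdtsc();
    }

    /*
     * RDTSCP additionally returns an indicator corresponding to the processor
     * number, which the caller can use to index per-CPU offset data.
     */
    static uint64_t read_tsc_and_cpu(unsigned int *cpu)
    {
            return __rdtscp(cpu);
    }
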
3433.1) TSC synchronization
344
345The TSC is a CPU-local clock in most implementations. This means, on SMP
346platforms, the TSCs of different CPUs may start at different times depending
347on when the CPUs are powered on. Generally, CPUs on the same die will share
348the same clock, however, this is not always the case.
349
350The BIOS may attempt to resynchronize the TSCs during the poweron process and
351the operating system or other system software may attempt to do this as well.
352Several hardware limitations make the problem worse - if it is not possible to
353write the full 64-bits of the TSC, it may be impossible to match the TSC in
354newly arriving CPUs to that of the rest of the system, resulting in
355unsynchronized TSCs. This may be done by BIOS or system software, but in
356practice, getting a perfectly synchronized TSC will not be possible unless all
357values are read from the same clock, which generally only is possible on single
358socket systems or those with special hardware support.
359
3603.2) TSC and CPU hotplug
361
362As touched on already, CPUs which arrive later than the boot time of the system
363may not have a TSC value that is synchronized with the rest of the system.
364Either system software, BIOS, or SMM code may actually try to establish the TSC
365to a value matching the rest of the system, but a perfect match is usually not
366a guarantee. This can have the effect of bringing a system from a state where
367TSC is synchronized back to a state where TSC synchronization flaws, however
368small, may be exposed to the OS and any virtualization environment.
369
3703.3) TSC and multi-socket / NUMA
371
372Multi-socket systems, especially large multi-socket systems are likely to have
373individual clocksources rather than a single, universally distributed clock.
374Since these clocks are driven by different crystals, they will not have
375perfectly matched frequency, and temperature and electrical variations will
376cause the CPU clocks, and thus the TSCs to drift over time. Depending on the
377exact clock and bus design, the drift may or may not be fixed in absolute
378error, and may accumulate over time.
379
380In addition, very large systems may deliberately slew the clocks of individual
381cores. This technique, known as spread-spectrum clocking, reduces EMI at the
382clock frequency and harmonics of it, which may be required to pass FCC
383standards for telecommunications and computer equipment.
384
385It is recommended not to trust the TSCs to remain synchronized on NUMA or
386multiple socket systems for these reasons.
387
3883.4) TSC and C-states
389
390C-states, or idling states of the processor, especially C1E and deeper sleep
391states may be problematic for TSC as well. The TSC may stop advancing in such
392a state, resulting in a TSC which is behind that of other CPUs when execution
393is resumed. Such CPUs must be detected and flagged by the operating system
394based on CPU and chipset identifications.
395
396The TSC in such a case may be corrected by catching it up to a known external
397clocksource.
398
3993.5) TSC frequency change / P-states
400
401To make things slightly more interesting, some CPUs may change frequency. They
402may or may not run the TSC at the same rate, and because the frequency change
403may be staggered or slewed, at some points in time, the TSC rate may not be
404known other than falling within a range of values. In this case, the TSC will
405not be a stable time source, and must be calibrated against a known, stable,
406external clock to be a usable source of time.
407
408Whether the TSC runs at a constant rate or scales with the P-state is model
409dependent and must be determined by inspecting CPUID, chipset or vendor
410specific MSR fields.
411
412In addition, some vendors have known bugs where the P-state is actually
413compensated for properly during normal operation, but when the processor is
414inactive, the P-state may be raised temporarily to service cache misses from
415other processors. In such cases, the TSC on halted CPUs could advance faster
416than that of non-halted processors. AMD Turion processors are known to have
417this problem.
418
4193.6) TSC and STPCLK / T-states
420
421External signals given to the processor may also have the effect of stopping
422the TSC. This is typically done for thermal emergency power control to prevent
423an overheating condition, and typically, there is no way to detect that this
424condition has happened.
425
4263.7) TSC virtualization - VMX
427
428VMX provides conditional trapping of RDTSC, RDMSR, WRMSR and RDTSCP
429instructions, which is enough for full virtualization of TSC in any manner. In
430addition, VMX allows passing through the host TSC plus an additional TSC_OFFSET
431field specified in the VMCS. Special instructions must be used to read and
432write the VMCS field.
433
4343.8) TSC virtualization - SVM
435
436SVM provides conditional trapping of RDTSC, RDMSR, WRMSR and RDTSCP
437instructions, which is enough for full virtualization of TSC in any manner. In
438addition, SVM allows passing through the host TSC plus an additional offset
439field specified in the SVM control block.
440
4413.9) TSC feature bits in Linux
442
443In summary, there is no way to guarantee the TSC remains in perfect
444synchronization unless it is explicitly guaranteed by the architecture. Even
445if so, the TSCs in multi-sockets or NUMA systems may still run independently
446despite being locally consistent.
447
448The following feature bits are used by Linux to signal various TSC attributes,
449but they can only be taken to be meaningful for UP or single node systems.
450
451X86_FEATURE_TSC : The TSC is available in hardware
452X86_FEATURE_RDTSCP : The RDTSCP instruction is available
453X86_FEATURE_CONSTANT_TSC : The TSC rate is unchanged with P-states
454X86_FEATURE_NONSTOP_TSC : The TSC does not stop in C-states
455X86_FEATURE_TSC_RELIABLE : TSC sync checks are skipped (VMware)
456
4574) Virtualization Problems
458
459Timekeeping is especially problematic for virtualization because a number of
460challenges arise. The most obvious problem is that time is now shared between
461the host and, potentially, a number of virtual machines. Thus the virtual
462operating system does not run with 100% usage of the CPU, despite the fact that
463it may very well make that assumption. It may expect it to remain true to very
464exacting bounds when interrupt sources are disabled, but in reality only its
465virtual interrupt sources are disabled, and the machine may still be preempted
466at any time. This causes problems as the passage of real time, the injection
467of machine interrupts and the associated clock sources are no longer completely
468synchronized with real time.
469
470This same problem can occur to a degree on native hardware, as SMM mode may
471steal cycles from the operating system on X86 systems when SMM mode is used by
472the BIOS, but not in such an extreme fashion. However, the fact that SMM mode may
473cause similar problems to virtualization makes it a good justification for
474solving many of these problems on bare metal.
475
4764.1) Interrupt clocking
477
478One of the most immediate problems that occurs with legacy operating systems
479is that the system timekeeping routines are often designed to keep track of
480time by counting periodic interrupts. These interrupts may come from the PIT
481or the RTC, but the problem is the same: the host virtualization engine may not
482be able to deliver the proper number of interrupts per second, and so guest
483time may fall behind. This is especially problematic if a high interrupt rate
484is selected, such as 1000 HZ, which is unfortunately the default for many Linux
485guests.
486
487There are three approaches to solving this problem; first, it may be possible
488to simply ignore it. Guests which have a separate time source for tracking
489'wall clock' or 'real time' may not need any adjustment of their interrupts to
490maintain proper time. If this is not sufficient, it may be necessary to inject
491additional interrupts into the guest in order to increase the effective
492interrupt rate. This approach leads to complications in extreme conditions,
493where host load or guest lag is too much to compensate for, and thus another
494solution to the problem has risen: the guest may need to become aware of lost
495ticks and compensate for them internally. Although promising in theory, the
496implementation of this policy in Linux has been extremely error prone, and a
497number of buggy variants of lost tick compensation are distributed across
498commonly used Linux systems.
499
500Windows uses periodic RTC clocking as a means of keeping time internally, and
501thus requires interrupt slewing to keep proper time. It does use a low enough
502rate (ed: is it 18.2 Hz?) however that it has not yet been a problem in
503practice.
504
5054.2) TSC sampling and serialization
506
507As the highest precision time source available, the cycle counter of the CPU
508has aroused much interest from developers. As explained above, this timer has
509many problems unique to its nature as a local, potentially unstable and
510potentially unsynchronized source. One issue which is not unique to the TSC,
511but is highlighted because of its very precise nature is sampling delay. By
512definition, the counter, once read, is already old. However, it is also
513possible for the counter to be read ahead of the actual use of the result.
514This is a consequence of the superscalar execution of the instruction stream,
515which may execute instructions out of order. Such execution is called
516non-serialized. Forcing serialized execution is necessary for precise
517measurement with the TSC, and requires a serializing instruction, such as CPUID
518or an MSR read.
519
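A common idiom for the serialization described above is to issue CPUID
immediately before reading the TSC; a small sketch using the GCC <cpuid.h> and
<x86intrin.h> helpers:

    #include <stdint.h>
    #include <cpuid.h>      /* __get_cpuid() */
    #include <x86intrin.h>  /* __rdtsc() */

    /* Serialize the instruction stream with CPUID, then sample the TSC. */
    static uint64_t serialized_rdtsc(void)
    {
            unsigned int eax, ebx, ecx, edx;

            __get_cpuid(0, &eax, &ebx, &ecx, &edx);
            return __rdtsc();
    }
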
520Since CPUID may actually be virtualized by a trap and emulate mechanism, this
521serialization can pose a performance issue for hardware virtualization. An
522accurate time stamp counter reading may therefore not always be available, and
523it may be necessary for an implementation to guard against "backwards" reads of
524the TSC as seen from other CPUs, even in an otherwise perfectly synchronized
525system.
526
5274.3) Timespec aliasing
528
529Additionally, this lack of serialization from the TSC poses another challenge
530when using results of the TSC when measured against another time source. As
531the TSC is much higher precision, many possible values of the TSC may be read
532while another clock is still expressing the same value.
533
534That is, you may read (T,T+10) while external clock C maintains the same value.
535Due to non-serialized reads, you may actually end up with a range which
536fluctuates - from (T-1.. T+10). Thus, any time calculated from a TSC, but
537calibrated against an external value may have a range of valid values.
538Re-calibrating this computation may actually cause time, as computed after the
539calibration, to go backwards, compared with time computed before the
540calibration.
541
542This problem is particularly pronounced with an internal time source in Linux,
543the kernel time, which is expressed in the theoretically high resolution
544timespec - but which advances in much larger granularity intervals, sometimes
545at the rate of jiffies, and possibly in catchup modes, at a much larger step.
546
547This aliasing requires care in the computation and recalibration of kvmclock
548and any other values derived from TSC computation (such as TSC virtualization
549itself).
550
5514.4) Migration
552
553Migration of a virtual machine raises problems for timekeeping in two ways.
554First, the migration itself may take time, during which interrupts cannot be
555delivered, and after which, the guest time may need to be caught up. NTP may
556be able to help to some degree here, as the clock correction required is
557typically small enough to fall in the NTP-correctable window.
558
559An additional concern is that timers based off the TSC (or HPET, if the raw bus
560clock is exposed) may now be running at different rates, requiring compensation
561in some way in the hypervisor by virtualizing these timers. In addition,
562migrating to a faster machine may preclude the use of a passthrough TSC, as a
563faster clock cannot be made visible to a guest without the potential of time
564advancing faster than usual. A slower clock is less of a problem, as it can
565always be caught up to the original rate. KVM clock avoids these problems by
566simply storing multipliers and offsets against the TSC for the guest to convert
567back into nanosecond resolution values.
568
5694.5) Scheduling
570
571Since scheduling may be based on precise timing and firing of interrupts, the
572scheduling algorithms of an operating system may be adversely affected by
573virtualization. In theory, the effect is random and should be universally
574distributed, but in contrived as well as real scenarios (guest device access,
575causes of virtualization exits, possible context switch), this may not always
576be the case. The effect of this has not been well studied.
577
578In an attempt to work around this, several implementations have provided a
579paravirtualized scheduler clock, which reveals the true amount of CPU time for
580which a virtual machine has been running.
581
5824.6) Watchdogs
583
584Watchdog timers, such as the lockup detector in Linux, may fire accidentally when
585running under hardware virtualization due to timer interrupts being delayed or
586misinterpretation of the passage of real time. Usually, these warnings are
587spurious and can be ignored, but in some circumstances it may be necessary to
588disable such detection.
589
5904.7) Delays and precision timing
591
592Precise timing and delays may not be possible in a virtualized system. This
593can happen if the system is controlling physical hardware, or issues delays to
594compensate for slower I/O to and from devices. The first issue is not solvable
595in general for a virtualized system; hardware control software can't be
596adequately virtualized without a full real-time operating system, which would
597require an RT aware virtualization platform.
598
599The second issue may cause performance problems, but this is unlikely to be a
600significant issue. In many cases these delays may be eliminated through
601configuration or paravirtualization.
602
6034.8) Covert channels and leaks
604
605In addition to the above problems, time information will inevitably leak to the
606guest about the host in anything but a perfect implementation of virtualized
607time. This may allow the guest to infer the presence of a hypervisor (as in a
608red-pill type detection), and it may allow information to leak between guests
609by using CPU utilization itself as a signalling channel. Preventing such
610problems would require completely isolated virtual time which may not track
611real time any longer. This may be useful in certain security or QA contexts,
612but in general isn't recommended for real-world deployment scenarios.