path: root/virt/kvm/kvm_main.c
Commit message    Author    Age
* anonfd: Allow making anon files read-only    Roland Dreier    2009-12-22
It seems a couple of places such as arch/ia64/kernel/perfmon.c and drivers/infiniband/core/uverbs_main.c could use anon_inode_getfile() instead of a private pseudo-fs + alloc_file(), if only there were a way to get a read-only file. So provide this by having anon_inode_getfile() create a read-only file if we pass O_RDONLY in flags.
Signed-off-by: Roland Dreier <rolandd@cisco.com>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
* Merge commit 'origin/master' into next    Benjamin Herrenschmidt    2009-12-09
Conflicts: include/linux/kvm.h
| * KVM: Allow internal errors reported to userspace to carry extra data    Avi Kivity    2009-12-03
Usually userspace will freeze the guest so we can inspect it, but some internal state is not available. Add extra data to internal error reporting so we can expose it to the debugger. Extra data is specific to the suberror.
Signed-off-by: Avi Kivity <avi@redhat.com>
| * KVM: Enable 32bit dirty log pointers on 64bit host    Arnd Bergmann    2009-12-03
With big endian userspace, we can't quite figure out if a pointer is 32 bit (shifted >> 32) or 64 bit when we read a 64 bit pointer. This is what happens with dirty logging. To get the pointer interpreted correctly, we thus need Arnd's patch to implement a compat layer for the ioctl: a better way to do this is to add a separate compat_ioctl() method that converts this for you.
Based on an initial patch from Arnd Bergmann.
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
| * KVM: introduce kvm_vcpu_on_spin    Zhai, Edwin    2009-12-03
Introduce kvm_vcpu_on_spin, to be used by VMX/SVM to yield processing once the cpu detects pause-based looping.
Signed-off-by: "Zhai, Edwin" <edwin.zhai@intel.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
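For illustration only, a minimal sketch of what such a yield helper can look like in kvm_main.c context: a brief interruptible sleep on the vcpu wait queue so the lock holder gets a chance to run (the field names and the 100 us value are assumptions, not the exact patch):

    void kvm_vcpu_on_spin(struct kvm_vcpu *vcpu)
    {
            ktime_t expires;
            DEFINE_WAIT(wait);

            prepare_to_wait(&vcpu->wq, &wait, TASK_INTERRUPTIBLE);

            /* Sleep for roughly 100 us and hope the lock holder gets scheduled. */
            expires = ktime_add_ns(ktime_get(), 100000UL);
            schedule_hrtimeout(&expires, HRTIMER_MODE_ABS);

            finish_wait(&vcpu->wq, &wait);
    }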
| * KVM: Activate Virtualization On Demand    Alexander Graf    2009-12-03
X86 CPUs need some magic to happen to enable the virtualization extensions on them. This magic can result in unpleasant consequences for users, like blocking other VMMs from working (vmx) or using invalid TLB entries (svm).
Currently KVM activates virtualization when the respective kernel module is loaded. This blocks us from autoloading KVM modules without breaking other VMMs.
To circumvent this problem at least a bit, this patch introduces on-demand activation of virtualization. This means that virtualization is instead enabled on creation of the first virtual machine and disabled on destruction of the last one.
Using this, KVM can be easily autoloaded while keeping other hypervisors usable.
Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
| * KVM: Move assigned device code to own file    Avi Kivity    2009-12-03
Signed-off-by: Avi Kivity <avi@redhat.com>
| * KVM: Drop kvm->irq_lock lock from irq injection path    Gleb Natapov    2009-12-03
The only thing it protects now is interrupt injection into the lapic, and this can work locklessly. Even now, with kvm->irq_lock in place, access to the lapic is not entirely serialized, since vcpu access doesn't take kvm->irq_lock.
Signed-off-by: Gleb Natapov <gleb@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
| * KVM: Move irq ack notifier list to arch independent code    Gleb Natapov    2009-12-03
The mask irq notifier list is already there.
Signed-off-by: Gleb Natapov <gleb@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
| * KVM: Change irq routing table to use gsi indexed array    Gleb Natapov    2009-12-03
Use a gsi-indexed array instead of scanning all entries on each interrupt injection.
Signed-off-by: Gleb Natapov <gleb@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
| * KVM: Don't wrap schedule() with vcpu_put()/vcpu_load()    Avi Kivity    2009-12-03
Preemption notifiers will do that for us automatically.
Signed-off-by: Avi Kivity <avi@redhat.com>
* | Use Little Endian for Dirty Bitmap    Alexander Graf    2009-11-05
We currently use host-endian long types to store information in the dirty bitmap. This works reasonably well on Little Endian targets, because the u32 after the first contains the next 32 bits. On Big Endian this breaks completely though, forcing us to be inventive here.
So Ben suggested always using Little Endian, which looks reasonable. We only have the dirty bitmap implemented on Little Endian targets so far, and since PowerPC would be the first Big Endian platform, we can just as well switch to Little Endian everywhere with little effort and without breaking existing targets.
Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
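For illustration, a minimal sketch of marking a page dirty in a bitmap stored as little-endian 64-bit words regardless of host endianness; the helper name and word size are assumptions, not taken from the patch:

    static void mark_page_dirty_le(u64 *dirty_bitmap, unsigned long page_nr)
    {
            u64 mask = 1ULL << (page_nr % 64);

            /* The bitmap is defined to be little endian in memory, so the
             * mask is byte-swapped on big-endian hosts before being OR-ed
             * in; on little-endian hosts cpu_to_le64() is a no-op. */
            dirty_bitmap[page_nr / 64] |= cpu_to_le64(mask);
    }

Since OR distributes over the byte permutation, the stored words stay identical on LE and BE hosts, which is the point of the change described above.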
* KVM: Prevent kvm_init from corrupting debugfs structures    Darrick J. Wong    2009-10-16
I'm seeing an oops condition when kvm-intel and kvm-amd are modprobe'd during boot (say on an Intel system) and then rmmod'd:

    # modprobe kvm-intel
      kvm_init()
      kvm_init_debug()
      kvm_arch_init()   <-- stores debugfs dentries internally
      (success, etc)

    # modprobe kvm-amd
      kvm_init()
      kvm_init_debug()  <-- second initialization clobbers kvm's internal pointers to dentries
      kvm_arch_init()
      kvm_exit_debug()  <-- and frees them

    # rmmod kvm-intel
      kvm_exit()
      kvm_exit_debug()  <-- double free of debugfs files!  *BOOM*

If execution gets to the end of kvm_init(), then the calling module has been established as the kvm provider. Move the debugfs initialization to the end of the function, and remove the now-unnecessary call to kvm_exit_debug() from the error path. That way we avoid trampling on the debugfs entries and freeing them twice.
Cc: stable@kernel.org
Signed-off-by: Darrick J. Wong <djwong@us.ibm.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
* KVM: add support for change_pte mmu notifiers    Izik Eidus    2009-10-04
This is needed if KVM wants KSM to directly map pages into its shadow page tables.
[marcelo: cast pfn assignment to u64]
Signed-off-by: Izik Eidus <ieidus@redhat.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
* const: constify remaining file_operations    Alexey Dobriyan    2009-10-01
[akpm@linux-foundation.org: fix KVM]
Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
Acked-by: Mike Frysinger <vapier@gentoo.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* const: mark struct vm_struct_operations    Alexey Dobriyan    2009-09-27
* mark struct vm_area_struct::vm_ops as const
* mark vm_ops in AGP code
But leave TTM code alone; something is fishy there with global vm_ops being used.
Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* cpumask: use zalloc_cpumask_var() where possible    Li Zefan    2009-09-23
Remove open-coded zalloc_cpumask_var() and zalloc_cpumask_var_node().
Signed-off-by: Li Zefan <lizf@cn.fujitsu.com>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
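For illustration, the kind of open-coded pattern this conversion removes, shown as a before/after fragment (variable name is a placeholder):

    cpumask_var_t cpus;

    /* Before: allocate, then clear the mask by hand. */
    if (!alloc_cpumask_var(&cpus, GFP_ATOMIC))
            return;
    cpumask_clear(cpus);

    /* After: zalloc_cpumask_var() hands back an already-cleared mask. */
    if (!zalloc_cpumask_var(&cpus, GFP_ATOMIC))
            return;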
* KVM: fix compile warnings on s390    Heiko Carstens    2009-09-10
    CC      arch/s390/kvm/../../../virt/kvm/kvm_main.o
    arch/s390/kvm/../../../virt/kvm/kvm_main.c: In function '__kvm_set_memory_region':
    arch/s390/kvm/../../../virt/kvm/kvm_main.c:485: warning: unused variable 'j'
    arch/s390/kvm/../../../virt/kvm/kvm_main.c:484: warning: unused variable 'lpages'
    arch/s390/kvm/../../../virt/kvm/kvm_main.c:483: warning: unused variable 'ugfn'
Cc: Carsten Otte <cotte@de.ibm.com>
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
* KVM: Move #endif KVM_CAP_IRQ_ROUTING to correct place    Avi Kivity    2009-09-10
The symbol only controls irq routing, not MSI-X.
Signed-off-by: Avi Kivity <avi@redhat.com>
* KVM: fix kvm_init() error handling    Xiao Guangrong    2009-09-10
Remove the debugfs files if kvm_arch_init() returns an error.
Signed-off-by: Xiao Guangrong <xiaoguangrong@cn.fujitsu.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
* KVM: Drop obsolete cpu_get/put in make_all_cpus_request    Jan Kiszka    2009-09-10
spin_lock disables preemption, so we can simply read the current cpu.
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
* KVM: Reduce runnability interface with arch support code    Gleb Natapov    2009-09-10
Remove kvm_cpu_has_interrupt() and kvm_arch_interrupt_allowed() from the interface between general code and arch code. kvm_arch_vcpu_runnable() checks for interrupts instead.
Signed-off-by: Gleb Natapov <gleb@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
* KVM: add ioeventfd support    Gregory Haskins    2009-09-10
ioeventfd is a mechanism to register PIO/MMIO regions to trigger an eventfd signal when written to by a guest. Host userspace can register any arbitrary IO address with a corresponding eventfd and then pass the eventfd to a specific end-point of interest for handling.
Normal IO requires a blocking round-trip since the operation may cause side-effects in the emulated model or may return data to the caller. Therefore, an IO in KVM traps from the guest to the host, causes a VMX/SVM "heavy-weight" exit back to userspace, and is ultimately serviced by qemu's device model synchronously before returning control back to the vcpu.
However, there is a subclass of IO which acts purely as a trigger for other IO (such as to kick off an out-of-band DMA request, etc). For these patterns, the synchronous call is particularly expensive, since we really only want to get our notification transmitted asynchronously and return as quickly as possible. All the synchronous infrastructure that ensures proper data dependencies are met in the normal IO case is just unnecessary overhead for signalling. It adds computational load on the system, as well as latency to the signalling path.
Therefore, we provide a mechanism for registration of an in-kernel trigger point that allows the VCPU to only require a very brief, lightweight exit just long enough to signal an eventfd. This also means that any clients compatible with the eventfd interface (which includes userspace and kernelspace equally well) can now register to be notified. The end result should be a more flexible and higher performance notification API for the backend KVM hypervisor and peripheral components.
To test this theory, we built a test harness called "doorbell". This module has a function called "doorbell_ring()" which simply increments a counter each time the doorbell is signaled. It supports signalling from either an eventfd or an ioctl(). We then wired up two paths to the doorbell: one via QEMU through a registered io region and the doorbell ioctl(), and the other directly via ioeventfd. You can download this test harness here: ftp://ftp.novell.com/dev/ghaskins/doorbell.tar.bz2
The measured results are as follows:

    qemu-mmio:       110000 iops, 9.09us rtt
    ioeventfd-mmio:  200100 iops, 5.00us rtt
    ioeventfd-pio:   367300 iops, 2.72us rtt

I didn't measure qemu-pio, because I have to figure out how to register a PIO region with qemu's device model, and I got lazy. However, for now we can extrapolate based on the data from the NULLIO runs of +2.56us for MMIO and -350ns for HC, and we get:

    qemu-pio:        153139 iops, 6.53us rtt
    ioeventfd-hc:    412585 iops, 2.37us rtt

These are just for fun, for now, until I can gather more data. Here is a graph for your convenience: http://developer.novell.com/wiki/images/7/76/Iofd-chart.png
The conclusion to draw is that we save about 4us by skipping the userspace hop.
Signed-off-by: Gregory Haskins <ghaskins@novell.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
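For illustration, a minimal userspace sketch of the interface this adds: registering an eventfd that fires when the guest writes 4 bytes to a given MMIO address (assumes an already-created VM fd; error handling trimmed):

    #include <sys/eventfd.h>
    #include <sys/ioctl.h>
    #include <linux/kvm.h>

    /* Returns an eventfd that is signalled on each guest write to mmio_addr. */
    int register_doorbell(int vm_fd, __u64 mmio_addr)
    {
            int efd = eventfd(0, EFD_CLOEXEC);
            struct kvm_ioeventfd ioefd = {
                    .addr  = mmio_addr,
                    .len   = 4,
                    .fd    = efd,
                    .flags = 0,   /* MMIO; set KVM_IOEVENTFD_FLAG_PIO for port IO */
            };

            if (efd < 0 || ioctl(vm_fd, KVM_IOEVENTFD, &ioefd) < 0)
                    return -1;
            return efd;           /* read()/poll() this fd to consume notifications */
    }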
* KVM: make io_bus interface more robust    Gregory Haskins    2009-09-10
Today kvm_io_bus_register_dev() returns void and will internally BUG_ON if it fails. We want to create dynamic MMIO/PIO entries driven from userspace later in the series, so we need to enhance the code to be more robust with the following changes:

   1) Add a return value to the registration function
   2) Fix up all the callsites to check the return code, handle any failures, and percolate the error up to the caller
   3) Add an unregister function that collapses holes in the array

Signed-off-by: Gregory Haskins <ghaskins@novell.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
* KVM: document lock nesting rule    Michael S. Tsirkin    2009-09-10
Document kvm->lock nesting within kvm->slots_lock.
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
* KVM: remove in_range from io devices    Michael S. Tsirkin    2009-09-10
This changes bus accesses to use the high-level kvm_io_bus_read/kvm_io_bus_write functions. in_range now becomes unused, so it is removed from the device ops in favor of read/write callbacks that perform range checks internally. This allows aliasing (mostly for in-kernel virtio), as well as better error handling by making it possible to pass errors up to userspace.
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
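For illustration, a sketch of a write callback in this style: the device itself decides whether the address is in range and tells the bus to keep looking otherwise. The wrapper struct, field names, and the -EOPNOTSUPP convention here are assumptions, not quoted from the patch:

    struct my_dev {
            struct kvm_io_device iodev;
            gpa_t base;
            int size;
    };

    static int my_dev_write(struct kvm_io_device *this, gpa_t addr,
                            int len, const void *val)
    {
            struct my_dev *d = container_of(this, struct my_dev, iodev);

            /* The range check lives in the callback now, replacing in_range(). */
            if (addr < d->base || addr + len > d->base + d->size)
                    return -EOPNOTSUPP;   /* not ours; the bus tries the next device */

            /* ... handle the MMIO write ... */
            return 0;
    }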
* KVM: convert bus to slots_lock    Michael S. Tsirkin    2009-09-10
Use slots_lock to protect the device list on the bus. slots_lock is already taken for read everywhere, so we only need to take it for write when registering devices. This is in preparation for removing in_range and kvm->lock around it.
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
* KVM: remove old KVMTRACE support code    Marcelo Tosatti    2009-09-10
Return EOPNOTSUPP for KVM_TRACE_ENABLE/PAUSE/DISABLE ioctls.
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
* KVM: x86: missing locking in PIT/IRQCHIP/SET_BSP_CPU ioctl paths    Marcelo Tosatti    2009-09-10
Correct missing locking in a few places in x86's vm_ioctl handling path.
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
* KVM: Prepare memslot data structures for multiple hugepage sizes    Joerg Roedel    2009-09-10
[avi: fix build on non-x86]
Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
* KVM: s390: Fix memslot initialization for userspace_addr != 0    Christian Borntraeger    2009-09-10
Since commit 854b5338196b1175706e99d63be43a4f8d8ab607 ("KVM: s390: streamline memslot handling") by Christian Ehrhardt <ehrhardt@linux.vnet.ibm.com>, s390 uses the values of the memslot instead of doing everything in the arch ioctl handler of KVM_SET_USER_MEMORY_REGION. Unfortunately we failed to set the userspace_addr of our memslot due to our s390 ifdef in __kvm_set_memory_region.
Old s390 userspace launchers did not notice, since they started the guest at userspace address 0. Because of CONFIG_DEFAULT_MMAP_MIN_ADDR we now put the guest at 1M userspace, which does not work. This patch makes sure that new.userspace_addr is set on s390.
This fix should go in quickly. Nevertheless, looking at the code we should clean up that ifdef in the long term. Any kernel janitors?
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
* KVM: convert custom marker based tracing to event traces    Marcelo Tosatti    2009-09-10
This allows use of the powerful ftrace infrastructure. See Documentation/trace/ for usage information.
[avi, stephen: various build fixes]
[sheng: fix control register breakage]
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Sheng Yang <sheng@linux.intel.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
* KVM: VMX: conditionally disable 2M pages    Marcelo Tosatti    2009-09-10
Disable usage of 2M pages if VMX_EPT_2MB_PAGE_BIT (bit 16) is clear in MSR_IA32_VMX_EPT_VPID_CAP and EPT is enabled.
[avi: s/largepages_disabled/largepages_enabled/ to avoid negative logic]
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
* KVM: Use macro to iterate over vcpus.    Gleb Natapov    2009-09-10
[christian: remove unused variables on s390]
Signed-off-by: Gleb Natapov <gleb@redhat.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
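For illustration, the open-coded loop such a macro replaces, assuming the usual kvm_for_each_vcpu(idx, vcpu, kvm) form (a before/after fragment, not the exact patch):

    struct kvm_vcpu *vcpu;
    int i;

    /* Before: open-coded walk over the vcpus array with NULL checks. */
    for (i = 0; i < KVM_MAX_VCPUS; ++i) {
            vcpu = kvm->vcpus[i];
            if (!vcpu)
                    continue;
            kvm_vcpu_kick(vcpu);
    }

    /* After: the iterator macro hides the array walk and NULL checks. */
    kvm_for_each_vcpu(i, vcpu, kvm)
            kvm_vcpu_kick(vcpu);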
* KVM: Break dependency between vcpu index in vcpus array and vcpu_id.    Gleb Natapov    2009-09-10
Archs are free to use vcpu_id as they see fit. For x86 it is used as the vcpu's apic id. A new ioctl is added to configure the boot vcpu id, which was assumed to be 0 until now.
Signed-off-by: Gleb Natapov <gleb@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
* KVM: Introduce kvm_vcpu_is_bsp() function.    Gleb Natapov    2009-09-10
Use it instead of open-coding the "vcpu_id zero is BSP" assumption.
Signed-off-by: Gleb Natapov <gleb@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
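For illustration, one way such a predicate can be written once the boot vcpu id is configurable (the bsp_vcpu_id field name is an assumption, not taken from the patch):

    static inline bool kvm_vcpu_is_bsp(struct kvm_vcpu *vcpu)
    {
            /* Compare against the configured boot vcpu id instead of
             * hard-coding "vcpu_id == 0" at every call site. */
            return vcpu->kvm->bsp_vcpu_id == vcpu->vcpu_id;
    }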
* KVM: switch irq injection/acking data structures to irq_lock    Marcelo Tosatti    2009-09-10
Protect irq injection/acking data structures with a separate irq_lock mutex. This fixes the following deadlock:

    CPU A                                     CPU B
    kvm_vm_ioctl_deassign_dev_irq()
      mutex_lock(&kvm->lock);                 worker_thread()
      -> kvm_deassign_irq()                     -> kvm_assigned_dev_interrupt_work_handler()
         -> deassign_host_irq()                      mutex_lock(&kvm->lock);
            -> cancel_work_sync() [blocked]

[gleb: fix ia64 path]
Reported-by: Alex Williamson <alex.williamson@hp.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Gleb Natapov <gleb@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
* KVM: introduce irq_lock, use it to protect ioapic    Marcelo Tosatti    2009-09-10
Introduce irq_lock, and use it to protect ioapic data structures.
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
* KVM: cleanup io_device code    Gregory Haskins    2009-09-10
We modernize the io_device code so that we use container_of() instead of dev->private, and move the vtable to a separate ops structure (which theoretically allows better caching for multiple instances of the same ops structure).
Signed-off-by: Gregory Haskins <ghaskins@novell.com>
Acked-by: Chris Wright <chrisw@sous-sol.org>
Signed-off-by: Avi Kivity <avi@redhat.com>
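For illustration, the container_of() idiom this describes: the wrapper device embeds the generic struct kvm_io_device and recovers itself from the embedded member instead of chasing a void *private pointer (wrapper names are illustrative):

    struct my_pit {
            struct kvm_io_device dev;   /* embedded generic io device */
            u32 channel_state;          /* ... device-specific state ... */
    };

    static inline struct my_pit *to_pit(struct kvm_io_device *dev)
    {
            /* Replaces the old dev->private back-pointer. */
            return container_of(dev, struct my_pit, dev);
    }

A single const ops table can then be shared by every instance, since the callback recovers its instance from the kvm_io_device argument.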
* KVM: No disable_irq for MSI/MSI-X interrupt on device assignment    Sheng Yang    2009-09-10
Disable the interrupt in the interrupt handler and re-enable it on guest ack, but only for level-triggered interrupts, to prevent re-injection. MSI/MSI-X don't need it.
One possible problem is that multiple interrupts on the same vector injected between the irq handler and the scheduled work handler would be merged into one for MSI/MSI-X. But AFAIK the drivers handle it well.
The patch fixed the Oplin card performance issue (MSI-X performance was half of MSI/INTx).
Signed-off-by: Sheng Yang <sheng@linux.intel.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
* KVM: irqfd    Gregory Haskins    2009-09-10
KVM provides a complete virtual system environment for guests, including support for injecting interrupts modeled after the real exception/interrupt facilities present on the native platform (such as the IDT on x86). Virtual interrupts can come from a variety of sources (emulated devices, pass-through devices, etc), but all must be injected into the guest via the KVM infrastructure.
This patch adds a new mechanism to inject a specific interrupt into a guest using a decoupled eventfd mechanism: any legal signal on the irqfd (using eventfd semantics from either userspace or kernel) will translate into an injected interrupt in the guest at the next available interrupt window.
Signed-off-by: Gregory Haskins <ghaskins@novell.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
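For illustration, a minimal userspace sketch of wiring an eventfd to a guest interrupt line via this interface (assumes an already-created VM fd with an in-kernel irqchip; error handling trimmed):

    #include <sys/eventfd.h>
    #include <sys/ioctl.h>
    #include <linux/kvm.h>

    /* Returns an eventfd; writing to it injects the given GSI into the guest. */
    int attach_irqfd(int vm_fd, unsigned int gsi)
    {
            int efd = eventfd(0, EFD_CLOEXEC);
            struct kvm_irqfd irqfd = {
                    .fd  = efd,
                    .gsi = gsi,
            };

            if (efd < 0 || ioctl(vm_fd, KVM_IRQFD, &irqfd) < 0)
                    return -1;
            return efd;
    }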
* KVM: protect concurrent make_all_cpus_request    Marcelo Tosatti    2009-06-28
make_all_cpus_request contains a race condition which can trigger false request completed status, as follows:

    CPU0                                         CPU1
    if (test_and_set_bit(req, &vcpu->requests))
       ....                                      if (test_and_set_bit(req, &vcpu->requests))
                                                    ..
                                                 return
    proceed to smp_call_function_many(wait=1)

Use a spinlock to serialize concurrent CPUs.
Cc: stable@kernel.org
Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
* KVM: Fix dirty bit tracking for slots with large pages    Izik Eidus    2009-06-28
When a slot is already allocated and is asked to be tracked, we need to break the large pages. This code flushes the mmu when someone asks a slot to start dirty bit tracking.
Cc: stable@kernel.org
Signed-off-by: Izik Eidus <ieidus@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
* kvm: remove the duplicated cpumask_clear    Yinghai Lu    2009-06-11
zalloc_cpumask_var already cleared it.
Signed-off-by: Yinghai Lu <yinghai@kernel.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* KVM: Prevent overflow in largepages calculation    Avi Kivity    2009-06-10
If userspace specifies a memory slot that is larger than 8 petabytes, it could overflow the largepages variable.
Cc: stable@kernel.org
Signed-off-by: Avi Kivity <avi@redhat.com>
* KVM: Disable large pages on misaligned memory slots    Avi Kivity    2009-06-10
If a slot's guest physical address and host virtual address are unequal (mod large page size), then we would erroneously try to back guest large pages with host large pages. Detect this misalignment and disable large page support for the troublesome slot.
Cc: stable@kernel.org
Signed-off-by: Avi Kivity <avi@redhat.com>
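For illustration, the shape of the alignment test being described: a slot can only be backed by host large pages if its gfn and its host virtual address are congruent modulo the large-page size. The field uses below follow common memslot conventions, and the helper is hypothetical, not taken from the patch:

    unsigned long ugfn = new.userspace_addr >> PAGE_SHIFT;

    /* If base_gfn and ugfn differ within a large page, guest and host
     * large-page boundaries do not line up, so large pages must be
     * disabled for this slot. */
    if ((new.base_gfn ^ ugfn) & (KVM_PAGES_PER_HPAGE - 1))
            disable_largepages_for_slot(&new);   /* hypothetical helper */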
* KVM: take mmu_lock when updating a deleted slot    Marcelo Tosatti    2009-06-10
kvm_handle_hva relies on mmu_lock protection to safely access the memslot structures.
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
* KVM: protect assigned dev workqueue, int handler and irq acker    Marcelo Tosatti    2009-06-10
kvm_assigned_dev_ack_irq is vulnerable to a race condition with the interrupt handler function. It does:

    if (dev->host_irq_disabled) {
            enable_irq(dev->host_irq);
            dev->host_irq_disabled = false;
    }

If an interrupt triggers before the dev->host_irq_disabled = false assignment, it will disable the interrupt and set dev->host_irq_disabled to true. On return to kvm_assigned_dev_ack_irq, dev->host_irq_disabled is set to false, and the next kvm_assigned_dev_ack_irq call will fail to re-enable it.
Other than that, having the interrupt handler and work handlers run in parallel sounds like asking for trouble (could not spot any obvious problem, but better not have to; it's fragile).
CC: sheng.yang@intel.com
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
* KVM: VMX: Disable VMX when system shutdown    Sheng Yang    2009-06-10
Intel TXT (Trusted Execution Technology) requires VMX to be off on all cpus for it to work when the system shuts down.
CC: Joseph Cihula <joseph.cihula@intel.com>
Signed-off-by: Sheng Yang <sheng@linux.intel.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
* KVM: Fix interrupt unhalting a vcpu when it shouldn't    Gleb Natapov    2009-06-10
kvm_vcpu_block() unhalts the vcpu on an interrupt/timer without checking whether the interrupt window is actually open.
Signed-off-by: Gleb Natapov <gleb@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>