path: root/arch/arm/kvm
Commit message (Author, Age)
...
| * kvm-arm: Remove kvm_pud_huge() (Suzuki K Poulose, 2016-04-21)

    Get rid of kvm_pud_huge() which falls back to pud_huge. Use pud_huge instead.

    Acked-by: Christoffer Dall <christoffer.dall@linaro.org>
    Acked-by: Marc Zyngier <marc.zyngier@arm.com>
    Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>

| * kvm-arm: Replace kvm_pmd_huge with pmd_thp_or_huge (Suzuki K Poulose, 2016-04-21)

    Both arm and arm64 now provide a helper, pmd_thp_or_huge(), to check whether the given pmd represents a huge page. Use that instead of our own custom check.

    Suggested-by: Mark Rutland <mark.rutland@arm.com>
    Cc: Marc Zyngier <marc.zyngier@arm.com>
    Acked-by: Marc Zyngier <marc.zyngier@arm.com>
    Acked-by: Christoffer Dall <christoffer.dall@linaro.org>
    Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>

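    A stage-2 fault handler only cares whether a pmd maps a huge page at all, not whether it came from hugetlbfs or from THP. As an illustration (a sketch built on the usual pmd_huge()/pmd_trans_huge() predicates, not necessarily the exact arch definition), such a helper can simply combine the two checks:

        /* Illustrative: true if the pmd maps a hugetlbfs or THP huge page. */
        #define pmd_thp_or_huge(pmd)    (pmd_huge(pmd) || pmd_trans_huge(pmd))
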
| * kvm arm: Move fake PGD handling to arch specific files (Suzuki K Poulose, 2016-04-21)

    Rearrange the code for fake pgd handling, which is applicable only to arm64. This will later be removed once we introduce the stage2 page table walker macros.

    Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
    Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
    Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>

* | Merge tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux (Linus Torvalds, 2016-05-16)

    Pull arm64 updates from Will Deacon:

     - virt_to_page/page_address optimisations
     - support for NUMA systems described using device-tree
     - support for hibernate/suspend-to-disk
     - proper support for maxcpus= command line parameter
     - detection and graceful handling of AArch64-only CPUs
     - miscellaneous cleanups and non-critical fixes

    * tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux: (92 commits)
      arm64: do not enforce strict 16 byte alignment to stack pointer
      arm64: kernel: Fix incorrect brk randomization
      arm64: cpuinfo: Missing NULL terminator in compat_hwcap_str
      arm64: secondary_start_kernel: Remove unnecessary barrier
      arm64: Ensure pmd_present() returns false after pmd_mknotpresent()
      arm64: Replace hard-coded values in the pmd/pud_bad() macros
      arm64: Implement pmdp_set_access_flags() for hardware AF/DBM
      arm64: Fix typo in the pmdp_huge_get_and_clear() definition
      arm64: mm: remove unnecessary EXPORT_SYMBOL_GPL
      arm64: always use STRICT_MM_TYPECHECKS
      arm64: kvm: Fix kvm teardown for systems using the extended idmap
      arm64: kaslr: increase randomization granularity
      arm64: kconfig: drop CONFIG_RTC_LIB dependency
      arm64: make ARCH_SUPPORTS_DEBUG_PAGEALLOC depend on !HIBERNATION
      arm64: hibernate: Refuse to hibernate if the boot cpu is offline
      arm64: kernel: Add support for hibernate/suspend-to-disk
      PM / Hibernate: Call flush_icache_range() on pages restored in-place
      arm64: Add new asm macro copy_page
      arm64: Promote KERNEL_START/KERNEL_END definitions to a header file
      arm64: kernel: Include _AC definition in page.h
      ...

| * | arm64: kvm: allows kvm cpu hotplug (AKASHI Takahiro, 2016-04-28)

    The current kvm implementation on arm64 does cpu-specific initialization at system boot, and has no way to gracefully shut down a core in terms of kvm. This prevents kexec from rebooting the system at EL2.

    This patch adds a cpu tear-down function and also puts the existing cpu-init code into a separate function, kvm_arch_hardware_disable() and kvm_arch_hardware_enable() respectively. We don't need the arm64-specific cpu hotplug hook any more.

    Since this patch modifies common code between arm and arm64, one stub definition, __cpu_reset_hyp_mode(), is added on the arm side to avoid compilation errors.

    Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org>
    [Rebase, added separate VHE init/exit path, changed resets use of kvm_call_hyp() to the __version, en/disabled hardware in init_subsystems(), added icache maintenance to __kvm_hyp_reset() and removed lr restore, removed guest-enter after teardown handling]
    Signed-off-by: James Morse <james.morse@arm.com>
    Acked-by: Marc Zyngier <marc.zyngier@arm.com>
    Signed-off-by: Will Deacon <will.deacon@arm.com>

* | | mm: thp: kvm: fix memory corruption in KVM with THP enabled (Andrea Arcangeli, 2016-05-05)

    After the THP refcounting change, obtaining a compound page from get_user_pages() no longer allows us to assume the entire compound page is immediately mappable from a secondary MMU.

    A secondary MMU doesn't want to call get_user_pages() more than once for each compound page, in order to know if it can map the whole compound page. So a secondary MMU needs to know from a single get_user_pages() invocation when it can map immediately the entire compound page, to avoid a flood of unnecessary secondary MMU faults and spurious atomic_inc()/atomic_dec() (pages don't have to be pinned by MMU notifier users).

    Ideally, instead of the page->_mapcount < 1 check, get_user_pages() should return the granularity of the "page" mapping in the "mm" passed to get_user_pages(). However it's a non-trivial change to pass the "pmd" status belonging to the "mm" walked by get_user_pages up the stack (up to the caller of get_user_pages). So the fix just checks if there is not a single pte mapping on the page returned by get_user_pages, and in turn if the caller can assume that the whole compound page is mapped in the current "mm" (in a pmd_trans_huge()). In such a case the entire compound page is safe to map into the secondary MMU without additional get_user_pages() calls on the surrounding tail/head pages. In addition to being faster, not having to run other get_user_pages() calls also reduces the memory footprint of the secondary MMU fault in case the pmd split happened as a result of memory pressure.

    Without this fix, after a MADV_DONTNEED (like invoked by QEMU during postcopy live migration or ballooning) or after generic swapping (with a failure in split_huge_page() that would only result in pmd splitting and not a physical page split), KVM would map the whole compound page into the shadow pagetables, even though regular faults or userfaults (like UFFDIO_COPY) may map regular pages into the primary MMU as a result of the pte faults, leading to the guest mode and userland mode going out of sync and not working on the same memory at all times.

    Any other secondary MMU notifier manager (KVM is just one of the many MMU notifier users) will need the same information if it doesn't want to run a flood of get_user_pages_fast and it can support multiple granularity in the secondary MMU mappings, so I think it is justified to be exposed not just to KVM.

    The other option would be to move transparent_hugepage_adjust to mm/huge_memory.c, but that currently has all kinds of KVM data structures in it, so it's definitely not cut-and-paste work, and I couldn't do a cleaner fix than this one for 4.6.

    Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
    Cc: "Dr. David Alan Gilbert" <dgilbert@redhat.com>
    Cc: "Kirill A. Shutemov" <kirill@shutemov.name>
    Cc: "Li, Liang Z" <liang.z.li@intel.com>
    Cc: Amit Shah <amit.shah@redhat.com>
    Cc: Paolo Bonzini <pbonzini@redhat.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

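    For KVM/ARM, the consumer of this information is the stage-2 fault path that decides whether a faulting pfn may be given a huge stage-2 mapping. A much-abridged sketch of that decision, assuming the test exposed by this fix is the PageTransCompoundMap() page-flag helper (the wrapper function name below is only illustrative):

        #include <linux/kvm_host.h>
        #include <linux/mm.h>
        #include <linux/page-flags.h>

        /* Illustrative wrapper, not the actual KVM helper. */
        static bool whole_compound_page_is_mapped(kvm_pfn_t pfn)
        {
                /*
                 * True only when the page is a compound (THP) page that the
                 * primary MMU still maps with a single pmd, i.e. no pte-level
                 * split has happened.  Only then is it safe for the secondary
                 * MMU to install a huge (stage-2 pmd) mapping without calling
                 * get_user_pages() again for the surrounding pages.
                 */
                return PageTransCompoundMap(pfn_to_page(pfn));
        }
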
* | arm64: KVM: unregister notifiers in hyp mode teardown path (Sudeep Holla, 2016-04-06)

    Commit 1e947bad0b63 ("arm64: KVM: Skip HYP setup when already running in HYP") re-organized the hyp init code and ended up leaving the CPU hotplug and PM notifiers registered even if hyp mode initialization fails. Since KVM is not yet supported with ACPI, the above mentioned commit breaks CPU hotplug in ACPI boot.

    This patch fixes teardown_hyp_mode to properly unregister both the CPU hotplug and PM notifiers in the teardown path.

    Fixes: 1e947bad0b63 ("arm64: KVM: Skip HYP setup when already running in HYP")
    Cc: Christoffer Dall <christoffer.dall@linaro.org>
    Cc: Marc Zyngier <marc.zyngier@arm.com>
    Signed-off-by: Sudeep Holla <sudeep.holla@arm.com>
    Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>

* arm64: KVM: Register CPU notifiers when the kernel runs at HYP (James Morse, 2016-03-31)

    When the kernel is running at EL2, it doesn't need init_hyp_mode() to configure page tables for HYP. This function also registers the CPU hotplug and lower power notifiers that cause HYP to be re-initialised after the CPU has been reset. To avoid losing the register state that controls stage2 translation, move the registering of these notifiers into init_subsystems(), and add an is_kernel_in_hyp_mode() path to each callback.

    Acked-by: Marc Zyngier <marc.zyngier@arm.com>
    Acked-by: Christoffer Dall <christoffer.dall@linaro.org>
    Fixes: 1e947bad0b6 ("arm64: KVM: Skip HYP setup when already running in HYP")
    Signed-off-by: James Morse <james.morse@arm.com>
    Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>

* KVM: arm/arm64: disable preemption when calling smp_call_function_many (Eric Auger, 2016-03-21)

    Preemption must be disabled when calling smp_call_function_many.

    Reported-by: bartosz.wawrzyniak@tieto.com
    Signed-off-by: Eric Auger <eric.auger@linaro.org>
    Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>

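    The shape of the fix is plain kernel API usage; a minimal sketch (the function below is illustrative, not the actual KVM code path):

        #include <linux/cpumask.h>
        #include <linux/preempt.h>
        #include <linux/smp.h>

        /* Cross-call into a set of CPUs without racing against migration. */
        static void kick_cpus(const struct cpumask *cpus, smp_call_func_t func)
        {
                /*
                 * smp_call_function_many() must be called with preemption
                 * disabled: the calling CPU has to stay put while the IPIs
                 * are queued and, with wait=true, while they complete.
                 */
                preempt_disable();
                smp_call_function_many(cpus, func, NULL, true);
                preempt_enable();
        }
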
* Merge tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux (Linus Torvalds, 2016-03-17)

    Pull arm64 updates from Catalin Marinas:
     "Here are the main arm64 updates for 4.6. There are some relatively intrusive changes to support KASLR, the reworking of the kernel virtual memory layout and initial page table creation.

      Summary:
       - Initial page table creation reworked to avoid breaking large block mappings (huge pages) into smaller ones. The ARM architecture requires break-before-make in such cases to avoid TLB conflicts but that's not always possible on live page tables
       - Kernel virtual memory layout: the kernel image is no longer linked to the bottom of the linear mapping (PAGE_OFFSET) but at the bottom of the vmalloc space, allowing the kernel to be loaded (nearly) anywhere in physical RAM
       - Kernel ASLR: position independent kernel Image and modules being randomly mapped in the vmalloc space, with the randomness provided by UEFI (efi_get_random_bytes() patches merged via the arm64 tree, acked by Matt Fleming)
       - Implement relative exception tables for arm64, required by KASLR (initial code for ARCH_HAS_RELATIVE_EXTABLE added to lib/extable.c but actual x86 conversion deferred to 4.7 because of the merge dependencies)
       - Support for the User Access Override feature of ARMv8.2: this allows uaccess functions (get_user etc.) to be implemented using LDTR/STTR instructions. Such instructions, when run by the kernel, perform unprivileged accesses, adding an extra level of protection. The set_fs() macro is used to "upgrade" such instructions to privileged accesses via the UAO bit
       - Half-precision floating point support (part of ARMv8.2)
       - Optimisations for CPUs with or without a hardware prefetcher (using run-time code patching)
       - copy_page performance improvement to deal with 128 bytes at a time
       - Sanity checks on the CPU capabilities (via CPUID) to prevent incompatible secondary CPUs from being brought up (e.g. weird big.LITTLE configurations)
       - valid_user_regs() reworked for better sanity check of the sigcontext information (restored pstate information)
       - ACPI parking protocol implementation
       - CONFIG_DEBUG_RODATA enabled by default
       - VDSO code marked as read-only
       - DEBUG_PAGEALLOC support
       - ARCH_HAS_UBSAN_SANITIZE_ALL enabled
       - Erratum workaround for the Cavium ThunderX SoC
       - set_pte_at() fix for PROT_NONE mappings
       - Code clean-ups"

    * tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux: (99 commits)
      arm64: kasan: Fix zero shadow mapping overriding kernel image shadow
      arm64: kasan: Use actual memory node when populating the kernel image shadow
      arm64: Update PTE_RDONLY in set_pte_at() for PROT_NONE permission
      arm64: Fix misspellings in comments.
      arm64: efi: add missing frame pointer assignment
      arm64: make mrs_s prefixing implicit in read_cpuid
      arm64: enable CONFIG_DEBUG_RODATA by default
      arm64: Rework valid_user_regs
      arm64: mm: check at build time that PAGE_OFFSET divides the VA space evenly
      arm64: KVM: Move kvm_call_hyp back to its original localtion
      arm64: mm: treat memstart_addr as a signed quantity
      arm64: mm: list kernel sections in order
      arm64: lse: deal with clobbered IP registers after branch via PLT
      arm64: mm: dump: Use VA_START directly instead of private LOWEST_ADDR
      arm64: kconfig: add submenu for 8.2 architectural features
      arm64: kernel: acpi: fix ioremap in ACPI parking protocol cpu_postboot
      arm64: Add support for Half precision floating point
      arm64: Remove fixmap include fragility
      arm64: Add workaround for Cavium erratum 27456
      arm64: mm: Mark .rodata as RO
      ...

| * arm64: kvm: deal with kernel symbols outside of linear mapping (Ard Biesheuvel, 2016-02-18)

    KVM on arm64 uses a fixed offset between the linear mapping at EL1 and the HYP mapping at EL2. Before we can move the kernel virtual mapping out of the linear mapping, we have to make sure that references to kernel symbols that are accessed via the HYP mapping are translated to their linear equivalent.

    Reviewed-by: Mark Rutland <mark.rutland@arm.com>
    Acked-by: Marc Zyngier <marc.zyngier@arm.com>
    Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
    Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

* | Merge tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm (Linus Torvalds, 2016-03-16)

    Pull KVM updates from Paolo Bonzini:
     "One of the largest releases for KVM... Hardly any generic changes, but lots of architecture-specific updates.

      ARM:
       - VHE support so that we can run the kernel at EL2 on ARMv8.1 systems
       - PMU support for guests
       - 32bit world switch rewritten in C
       - various optimizations to the vgic save/restore code

      PPC:
       - enabled KVM-VFIO integration ("VFIO device")
       - optimizations to speed up IPIs between vcpus
       - in-kernel handling of IOMMU hypercalls
       - support for dynamic DMA windows (DDW)

      s390:
       - provide the floating point registers via sync regs
       - separated instruction vs. data accesses
       - dirty log improvements for huge guests
       - bugfixes and documentation improvements

      x86:
       - Hyper-V VMBus hypercall userspace exit
       - alternative implementation of lowest-priority interrupts using vector hashing (for better VT-d posted interrupt support)
       - fixed guest debugging with nested virtualizations
       - improved interrupt tracking in the in-kernel IOAPIC
       - generic infrastructure for tracking writes to guest memory - currently its only use is to speed up the legacy shadow paging (pre-EPT) case, but in the future it will be used for virtual GPUs as well
       - much cleanup (LAPIC, kvmclock, MMU, PIT), including ubsan fixes"

    * tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm: (217 commits)
      KVM: x86: remove eager_fpu field of struct kvm_vcpu_arch
      KVM: x86: disable MPX if host did not enable MPX XSAVE features
      arm64: KVM: vgic-v3: Only wipe LRs on vcpu exit
      arm64: KVM: vgic-v3: Reset LRs at boot time
      arm64: KVM: vgic-v3: Do not save an LR known to be empty
      arm64: KVM: vgic-v3: Save maintenance interrupt state only if required
      arm64: KVM: vgic-v3: Avoid accessing ICH registers
      KVM: arm/arm64: vgic-v2: Make GICD_SGIR quicker to hit
      KVM: arm/arm64: vgic-v2: Only wipe LRs on vcpu exit
      KVM: arm/arm64: vgic-v2: Reset LRs at boot time
      KVM: arm/arm64: vgic-v2: Do not save an LR known to be empty
      KVM: arm/arm64: vgic-v2: Move GICH_ELRSR saving to its own function
      KVM: arm/arm64: vgic-v2: Save maintenance interrupt state only if required
      KVM: arm/arm64: vgic-v2: Avoid accessing GICH registers
      KVM: s390: allocate only one DMA page per VM
      KVM: s390: enable STFLE interpretation only if enabled for the guest
      KVM: s390: wake up when the VCPU cpu timer expires
      KVM: s390: step the VCPU timer while in enabled wait
      KVM: s390: protect VCPU cpu timer with a seqcount
      KVM: s390: step VCPU cpu timer during kvm_run ioctl
      ...

| * | KVM: arm/arm64: timer: Add active state caching (Marc Zyngier, 2016-02-29)

    Programming the active state in the (re)distributor can be an expensive operation, so it makes some sense to try and reduce the number of accesses as much as possible. So far, we program the active state on each VM entry, but there is some opportunity to do less.

    An obvious solution is to cache the active state in memory, and only program it in the HW when conditions change. But because the HW can also change things under our feet (the active state can transition from 1 to 0 when the guest does an EOI), some precautions have to be taken, which amount to only caching an "inactive" state, and always programming it otherwise.

    With this in place, we observe a reduction of around 700 cycles on a 2GHz GICv2 platform for a NULL hypercall.

    Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
    Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>

| * | ARM: KVM: Switch the CP reg search to be a binary search (Marc Zyngier, 2016-02-29)

    Doing a linear search is a bit silly when we can do a binary search. Not that we trap so many things that it has become a burden yet, but it makes sense to align it with the arm64 code.

    Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
    Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>

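    One way to do this is with the kernel's library bsearch() over tables kept sorted by register encoding. A self-contained sketch of the idea (struct layout, key packing and names are illustrative, not the actual coproc.c code):

        #include <linux/bsearch.h>
        #include <linux/types.h>

        struct cp_reg {
                u8 CRn, Op1, CRm, Op2;
                /* handler, reset value, ... elided */
        };

        /* Pack a coprocessor access into a single sortable key. */
        static u32 cp_key(const struct cp_reg *r)
        {
                return ((u32)r->CRn << 24) | ((u32)r->Op1 << 16) |
                       ((u32)r->CRm << 8)  |  r->Op2;
        }

        /* bsearch() comparator; the table must be sorted by the same key. */
        static int cmp_cp_reg(const void *key, const void *elt)
        {
                u32 a = cp_key(key), b = cp_key(elt);

                return (a < b) ? -1 : (a > b) ? 1 : 0;
        }

        static const struct cp_reg *find_cp_reg(const struct cp_reg *key,
                                                const struct cp_reg *table,
                                                size_t num)
        {
                return bsearch(key, table, num, sizeof(*table), cmp_cp_reg);
        }
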
| * | ARM: KVM: Rename struct coproc_reg::is_64 to is_64bit (Marc Zyngier, 2016-02-29)

    As we're going to play some tricks on the struct coproc_reg, make sure its 64bit indicator field matches that of coproc_params.

    Acked-by: Christoffer Dall <christoffer.dall@linaro.org>
    Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>

| * | ARM: KVM: Enforce sorting of all CP tables (Marc Zyngier, 2016-02-29)

    Since we're obviously terrible at sorting the CP tables, make sure we're going to do it properly (or fail to boot). arm64 has had the same mechanism for a while, and nobody ever broke it...

    Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
    Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>

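    The enforcement itself amounts to a one-pass walk over each table at init time, failing loudly if any neighbouring pair is out of order. A sketch, reusing the cmp_cp_reg() comparator from the sketch above:

        #include <linux/errno.h>
        #include <linux/printk.h>

        static int check_cp_reg_table(const struct cp_reg *table, size_t num)
        {
                size_t i;

                /* Every entry must strictly sort after its predecessor. */
                for (i = 1; i < num; i++) {
                        if (cmp_cp_reg(&table[i - 1], &table[i]) >= 0) {
                                pr_err("cp reg table unsorted at entry %zu\n", i);
                                return -EINVAL;
                        }
                }
                return 0;
        }
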
| * | ARM: KVM: Properly sort the invariant table (Marc Zyngier, 2016-02-29)

    Not having the invariant table properly sorted is an oddity, and may get in the way of future optimisations. Let's fix it.

    Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
    Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>

| * | arm64: KVM: Add a new vcpu device control group for PMUv3 (Shannon Zhao, 2016-02-29)

    To configure the virtual PMUv3 overflow interrupt number, we use the vcpu kvm_device ioctl, encapsulating the KVM_ARM_VCPU_PMU_V3_IRQ attribute within the KVM_ARM_VCPU_PMU_V3_CTRL group. After configuring the PMUv3, call the vcpu ioctl with attribute KVM_ARM_VCPU_PMU_V3_INIT to initialize the PMUv3.

    Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
    Acked-by: Peter Maydell <peter.maydell@linaro.org>
    Reviewed-by: Andrew Jones <drjones@redhat.com>
    Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
    Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>

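    From userspace, the sequence is driven with the vcpu-fd device-attribute ioctl; a sketch of what that looks like (error handling trimmed, the PPI number 23 is only an example):

        #include <linux/kvm.h>
        #include <stdint.h>
        #include <sys/ioctl.h>

        static int vcpu_setup_pmu(int vcpu_fd)
        {
                int irq = 23;                   /* example PMU overflow PPI */
                struct kvm_device_attr attr = {
                        .group = KVM_ARM_VCPU_PMU_V3_CTRL,
                        .attr  = KVM_ARM_VCPU_PMU_V3_IRQ,
                        .addr  = (uint64_t)(uintptr_t)&irq,
                };

                /* First tell KVM which interrupt the virtual PMU should raise... */
                if (ioctl(vcpu_fd, KVM_SET_DEVICE_ATTR, &attr))
                        return -1;

                /* ...then finalize the PMUv3 for this vcpu. */
                attr.attr = KVM_ARM_VCPU_PMU_V3_INIT;
                attr.addr = 0;
                return ioctl(vcpu_fd, KVM_SET_DEVICE_ATTR, &attr);
        }
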
| * | arm64: KVM: Introduce per-vcpu kvm device controls (Shannon Zhao, 2016-02-29)

    In some cases userspace needs to get/set attributes specific to a vcpu, and so needs something other than ONE_REG. Let's copy the KVM_DEVICE approach, and define the respective ioctls for the vcpu file descriptor.

    Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
    Reviewed-by: Andrew Jones <drjones@redhat.com>
    Acked-by: Peter Maydell <peter.maydell@linaro.org>
    Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>

| * | arm64: KVM: Free perf event of PMU when destroying vcpu (Shannon Zhao, 2016-02-29)

    When KVM frees a VCPU, it needs to free the perf_event of the PMU.

    Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
    Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
    Reviewed-by: Andrew Jones <drjones@redhat.com>
    Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>

| * | arm64: KVM: Add PMU overflow interrupt routing (Shannon Zhao, 2016-02-29)

    When calling perf_event_create_kernel_counter to create a perf_event, assign an overflow handler. Then when the perf event overflows, set the corresponding bit of the guest PMOVSSET register. If this counter is enabled and its interrupt is enabled as well, kick the vcpu to sync the interrupt.

    On VM entry, if a counter has overflowed and the interrupt level has changed, inject the interrupt with the corresponding level. On VM exit, sync the interrupt level as well if it has been changed.

    Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
    Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
    Reviewed-by: Andrew Jones <drjones@redhat.com>
    Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
    Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>

| * | ARM: KVM: Use common version of timer-sr.c (Marc Zyngier, 2016-02-29)

    Using the common HYP timer code is a bit more tricky, since we use system register names. Nothing a set of macros cannot work around...

    Acked-by: Christoffer Dall <christoffer.dall@linaro.org>
    Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>

| * | ARM: KVM: Use common version of vgic-v2-sr.c (Marc Zyngier, 2016-02-29)

    No need to keep our own private version, the common one is strictly identical.

    Acked-by: Christoffer Dall <christoffer.dall@linaro.org>
    Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>

| * | ARM: KVM: Move kvm/hyp/hyp.h to include/asm/kvm_hyp.h (Marc Zyngier, 2016-02-29)

    In order to be able to use the code located in virt/kvm/arm/hyp, we need to make the global hyp.h file accessible from include/asm, similar to what we did for arm64.

    Acked-by: Christoffer Dall <christoffer.dall@linaro.org>
    Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>

| * | arm64: KVM: Skip HYP setup when already running in HYP (Marc Zyngier, 2016-02-29)

    With the kernel running at EL2, there is no point trying to configure page tables for HYP, as the kernel is already mapped. Take this opportunity to refactor the whole init a bit, allowing the various parts of the hypervisor bringup to be split across multiple functions.

    Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
    Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>

| * | arm/arm64: KVM: Handle out-of-RAM cache maintenance as a NOP (Marc Zyngier, 2016-02-29)

    So far, our handling of cache maintenance by VA has been pretty simple: either the access is in the guest RAM and generates an S2 fault, which results in the page being mapped RW, or we go down the io_mem_abort() path, and nuke the guest.

    The first one is fine, but the second one is extremely weird. Treating the CM as an I/O is wrong, and nothing in the ARM ARM indicates that we should generate a fault for something that cannot end up in the cache anyway (even if the guest maps it, it will keep on faulting at stage-2 for emulation).

    So let's just skip this instruction, and let the guest get away with it.

    Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
    Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>

| * | ARM: KVM: Remove handling of ARM_EXCEPTION_DATA/PREF_ABORT (Marc Zyngier, 2016-02-29)

    These are now handled as a panic, so there is little point in keeping them around.

    Acked-by: Christoffer Dall <christoffer.dall@linaro.org>
    Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>

| * | ARM: KVM: Remove unused hyp_pc field (Marc Zyngier, 2016-02-29)

    This field was never populated, and the panic code already does something similar. Delete the related code.

    Acked-by: Christoffer Dall <christoffer.dall@linaro.org>
    Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>

| * | ARM: KVM: Turn CP15 defines to an enum (Marc Zyngier, 2016-02-29)

    Just like on arm64, having the CP15 registers expressed as a set of #defines has been very conflict-prone. Let's turn it into an enum, which should make it more manageable.

    Acked-by: Christoffer Dall <christoffer.dall@linaro.org>
    Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>

| * | ARM: KVM: Remove __weak attributes (Marc Zyngier, 2016-02-29)

    Now that the old code is long gone, we can remove all the weak attributes, as there is only one version of the code.

    Acked-by: Christoffer Dall <christoffer.dall@linaro.org>
    Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>

| * | ARM: KVM: Switch to C-based stage2 init (Marc Zyngier, 2016-02-29)

    As we now have hooks to set up VTCR from C code, let's drop the original VTCR setup and reimplement it as part of the HYP code.

    Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
    Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>

| * | ARM: KVM: Remove the old world switch (Marc Zyngier, 2016-02-29)

    As we now have a full reimplementation of the world switch, it is time to kiss the old stuff goodbye. I'm not sure we'll miss it.

    Acked-by: Christoffer Dall <christoffer.dall@linaro.org>
    Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>

| * | ARM: KVM: Change kvm_call_hyp return type to unsigned long (Marc Zyngier, 2016-02-29)

    Having u64 as the kvm_call_hyp return type is problematic, as it forces all kinds of tricks for the return values from HYP to be promoted to 64bit (LE has the LSB in r0, and BE has it in r1).

    Since the only user of the return value is perfectly happy with a 32bit value, let's make kvm_call_hyp return an unsigned long, which is 32bit on ARM. This solves yet another headache.

    Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
    Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>

| * | ARM: KVM: Add panic handling code (Marc Zyngier, 2016-02-29)

    Instead of spinning forever, let's "properly" handle any unexpected exception ("properly" meaning "print a splat on the console and die"). This has proved useful quite a few times...

    Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
    Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>

| * | ARM: KVM: Add HYP mode entry code (Marc Zyngier, 2016-02-29)

    This part is almost entirely borrowed from the existing code, just slightly simplifying the HYP function call (as we now save SPSR_hyp in the world switch).

    Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
    Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>

| * | ARM: KVM: Add populating of fault data structure (Marc Zyngier, 2016-02-29)

    On guest exit, we must take care of populating our fault data structure so that the host code can handle it. This includes resolving the IPA for permission faults, which can result in restarting the guest.

    Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
    Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>

| * | ARM: KVM: Add the new world switch implementation (Marc Zyngier, 2016-02-29)

    The new world switch implementation is modeled after the arm64 one, calling the various save/restore functions in turn, and having as little state as possible.

    Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
    Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>

| * | ARM: KVM: Add VFP lazy save/restore handler (Marc Zyngier, 2016-02-29)

    Similar to the arm64 version, add the code that deals with VFP traps: re-enabling VFP, saving/restoring the registers and resuming the guest.

    Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
    Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>

| * | ARM: KVM: Add guest entry code (Marc Zyngier, 2016-02-29)

    Add the very minimal piece of code that is now required to jump into the guest (and return from it). This code is only concerned with saving/restoring the USR registers (r0-r12+lr for the guest, r4-r12+lr for the host), as everything else is dealt with in C (VFP is another matter though).

    Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
    Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>

| * | ARM: KVM: Add banked registers save/restore (Marc Zyngier, 2016-02-29)

    Banked registers are one of the many perks of the 32bit architecture, and the world switch needs to cope with it. This requires some "special" accessors, as these are not accessed using a standard coprocessor instruction.

    Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
    Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>

| * | ARM: KVM: Add VFP save/restore (Marc Zyngier, 2016-02-29)

    This is almost a copy/paste of the existing version, with a couple of subtle differences:
    - Only write to FPEXC once on the save path
    - Add an isb when enabling VFP access

    The patch also defines a few sysreg accessors and a __vfp_enabled predicate that tests the VFP trapping state.

    Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
    Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>

| * | ARM: KVM: Add vgic v2 save/restore (Marc Zyngier, 2016-02-29)

    This patch shouldn't exist, as we should be able to reuse the arm64 version for free. I'll get there eventually, but in the meantime I need an interrupt controller.

    Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
    Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>

| * | ARM: KVM: Add timer save/restore (Marc Zyngier, 2016-02-29)

    This patch shouldn't exist, as we should be able to reuse the arm64 version for free. I'll get there eventually, but in the meantime I need a timer ticking.

    Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
    Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>

| * | ARM: KVM: Add CP15 save/restore code (Marc Zyngier, 2016-02-29)

    Convert the CP15 save/restore code to C.

    Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
    Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>

| * | ARM: KVM: Add TLB invalidation code (Marc Zyngier, 2016-02-29)

    Convert the TLB invalidation code to C, hooking it into the build system whilst we're at it.

    Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>

| * | ARM: KVM: Add system register accessor macros (Marc Zyngier, 2016-02-29)

    In order to move system register (CP15, mostly) access to C code, add a few macros to facilitate this, and minimize the difference between 32 and 64bit CP15 registers. This will get heavily used in the following patches.

    Acked-by: Christoffer Dall <christoffer.dall@linaro.org>
    Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>

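    The general flavour of such accessors (illustrative only, not the exact macros added here) is a thin stringifying wrapper around mrc/mcr, so C code can name a CP15 register by its encoding:

        /* Illustrative CP15 accessors built on GCC inline assembly. */
        #define read_cp15(op1, crn, crm, op2)                                   \
        ({                                                                      \
                u32 __val;                                                      \
                asm volatile("mrc p15, " #op1 ", %0, " #crn ", " #crm ", " #op2 \
                             : "=r" (__val));                                   \
                __val;                                                          \
        })

        #define write_cp15(val, op1, crn, crm, op2)                             \
                asm volatile("mcr p15, " #op1 ", %0, " #crn ", " #crm ", " #op2 \
                             : : "r" (val))

        /* Example use: u32 sctlr = read_cp15(0, c1, c0, 0);   (SCTLR) */
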
| * | ARM: KVM: Add a HYP-specific header file (Marc Zyngier, 2016-02-29)

    In order to expose the various HYP services that are private to the hypervisor, add a new hyp.h file. So far, it only contains mundane things such as section annotation and VA manipulation.

    Acked-by: Christoffer Dall <christoffer.dall@linaro.org>
    Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>

| * | ARM: KVM: Move GP registers into the CPU context structure (Marc Zyngier, 2016-02-29)

    Continuing our rework of the CPU context, we now move the GP registers into the CPU context structure.

    Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
    Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>

| * | ARM: KVM: Move CP15 array into the CPU context structure (Marc Zyngier, 2016-02-29)

    Continuing our rework of the CPU context, we now move the CP15 array into the CPU context structure. As this causes quite a bit of churn, we introduce the vcpu_cp15() macro that abstracts the location of the actual array. This will probably help next time we have to revisit that code.

    Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
    Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>

| * | ARM: KVM: Move VFP registers to a CPU context structure (Marc Zyngier, 2016-02-29)

    In order to turn the WS code into something that looks a bit more like the arm64 version, move the VFP registers into a CPU context container for both the host and the guest.

    Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
    Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
