path: root/arch/powerpc/include/asm
Commit message | Author | Age
...
| | * | | KVM: PPC: Book3S: 64-bit CONFIG_RELOCATABLE support for interrupts | Nicholas Piggin | 2017-01-31
    64-bit Book3S exception handlers must find the dynamic kernel base to add to
    the target address when branching beyond __end_interrupts, in order to
    support the kernel running at a non-zero physical address. Support this in
    KVM by branching with CTR, similarly to regular interrupt handlers. The
    guest CTR is saved in HSTATE_SCRATCH1 and restored after the branch.
    Without this, the host kernel hangs and crashes randomly when it is running
    at a non-zero address and a KVM guest is started.
    Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
    Acked-by: Paul Mackerras <paulus@ozlabs.org>
    Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
| | * | | KVM: PPC: Book3S: Move 64-bit KVM interrupt handler out from alt section | Nicholas Piggin | 2017-01-26
    A subsequent patch to make KVM handlers relocation-safe makes them unusable
    from within alt section "else" cases (due to the way fixed addresses are
    taken from within fixed section head code). Stop open-coding the KVM
    handlers, and add them both as normal. A more optimal fix may be to allow
    some level of alternate feature patching in the exception macros themselves,
    but for now this will do. The TRAMP_KVM handlers must be moved to the "virt"
    fixed section area (name is arbitrary) in order to be closer to .text and
    avoid the dreaded "relocation truncated to fit" error.
    Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
    Acked-by: Paul Mackerras <paulus@ozlabs.org>
    Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
| | * | | KVM: PPC: Book3S: Change interrupt call to reduce scratch space use on HV | Nicholas Piggin | 2017-01-26
    Change the calling convention to put the trap number together with CR in
    two halves of r12, which frees up HSTATE_SCRATCH2 in the HV handler. The
    64-bit PR handler entry translates the calling convention back to match the
    previous call convention (i.e., shared with 32-bit), for simplicity.
    Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
    Acked-by: Paul Mackerras <paulus@ozlabs.org>
    Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
| * | | powerpc/mm: Fix build break with RADIX=y & HUGETLBFS=n | Michael Ellerman | 2017-02-14
    If we enable RADIX but disable HUGETLBFS, the build breaks with:
      arch/powerpc/mm/pgtable-radix.c:557:7: error: implicit declaration of function 'pmd_huge'
      arch/powerpc/mm/pgtable-radix.c:588:7: error: implicit declaration of function 'pud_huge'
    Fix it by stubbing those functions when HUGETLBFS=n.
    Fixes: 4b5d62ca17a1 ("powerpc/mm: add radix__remove_section_mapping()")
    Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
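    A minimal sketch of what such stubs can look like when hugetlbfs support is
    compiled out; the guard symbol and placement are assumptions rather than the
    exact hunk from the patch:

      /* Sketch only: with hugetlbfs disabled, no page table entry is ever a
       * huge entry, so the predicates can simply return false. */
      #ifndef CONFIG_HUGETLB_PAGE
      static inline int pmd_huge(pmd_t pmd) { return 0; }
      static inline int pud_huge(pud_t pud) { return 0; }
      #endif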
| * | | powerpc: Add a prototype for mcount() so it can be versioned | Michael Ellerman | 2017-02-09
    Currently we get a warning that _mcount() can't be versioned:
      WARNING: EXPORT symbol "_mcount" [vmlinux] version generation failed, symbol will not be versioned.
    Add a prototype to asm-prototypes.h to fix it. The prototype is not really
    correct; mcount() is not a normal function and it has a special ABI. But
    for the purpose of versioning it doesn't matter.
    Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
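    For illustration, the declaration the commit describes adding would be on
    the order of the following (a sketch, not the exact hunk):

      /* _mcount() has a special ABI; this C prototype exists only so that
       * modversions can generate a CRC for the exported symbol. */
      extern void _mcount(void);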
| * | | powerpc/kprobes: Implement Optprobes | Anju T | 2017-02-09
    The current kprobe infrastructure uses an unconditional trap instruction to
    probe a running kernel. Optprobes allow kprobe to replace the trap with a
    branch instruction to a detour buffer. The detour buffer contains
    instructions to create an in-memory pt_regs. It also has a call to
    optimized_callback(), which in turn calls the pre_handler(). After the
    execution of the pre-handler, a call is made for instruction emulation. The
    NIP is determined in advance through dummy instruction emulation and a
    branch instruction is created to the NIP at the end of the trampoline. To
    address the branch-range limitation of the POWER architecture, the detour
    buffer slot is allocated from a reserved area. For the time being, 64KB is
    reserved in memory for this purpose. Instructions which can be emulated
    using analyse_instr() are the candidates for optimization. Before
    optimization, ensure that the address range between the allocated detour
    buffer and the instruction being probed is within +/- 32MB.
    Signed-off-by: Anju T Sudhakar <anju@linux.vnet.ibm.com>
    Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
    Acked-by: Masami Hiramatsu <mhiramat@kernel.org>
    Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
| * | | powerpc/kprobes: Fixes for kprobe_lookup_name() on BE | Naveen N. Rao | 2017-02-09
    Fix two issues with kprobes.h on BE which were exposed with the optprobes
    work:
    - one, having to do with a missing include for linux/module.h for
      MODULE_NAME_LEN -- this didn't show up previously since the only users of
      kprobe_lookup_name were in kprobes.c, which included linux/module.h
      through other headers, and
    - two, with a missing const qualifier for a local variable which ends up
      referring to a string literal. Again, this is unique to how
      kprobe_lookup_name is being invoked in optprobes.c
    Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
    Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
| * | | powerpc: Add helper to check if offset is within relative branch range | Anju T | 2017-02-09
    To permit the use of a relative branch instruction on powerpc, the target
    address has to be relatively nearby, since the address is specified in an
    immediate field (a 24-bit field) in the instruction opcode itself. Here
    nearby refers to 32MB on either side of the current instruction. This patch
    verifies whether the target address is within the +/- 32MB range or not.
    Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
    Signed-off-by: Anju T Sudhakar <anju@linux.vnet.ibm.com>
    Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
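    A minimal C sketch of such a range check, assuming the usual I-form branch
    encoding (24-bit field, shifted left by 2 for word alignment, sign
    extended); the helper name here is illustrative:

      /* A relative branch encodes a signed, word-aligned displacement of
       * roughly +/- 32MB from the branch instruction itself. */
      static bool offset_in_branch_range(long offset)
      {
              return (offset >= -0x2000000 && offset <= 0x1fffffc &&
                      !(offset & 0x3));
      }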
| * | | powerpc/bpf: Introduce __PPC_SH64() | Naveen N. Rao | 2017-02-09
    Introduce __PPC_SH64() as a 64-bit variant to encode the shift field in
    some of the shift and rotate instructions operating on double-words.
    Convert some of the BPF instruction macros to use the same.
    Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
    Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
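    As a hedged sketch of what such an encoding helper can look like, assuming
    the usual split of the six-bit shift amount in the 64-bit rotate forms
    (sh[0:4] next to the register fields, sh[5] in a low bit); the exact macro
    body is not shown in this log:

      /* sh[0:4] sits alongside the register fields; sh[5] is a separate bit */
      #define __PPC_SH(s)     (((s) & 0x1f) << 11)
      #define __PPC_SH64(s)   (__PPC_SH(s) | (((s) & 0x20) >> 4))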
| * | | powerpc/pseries: Automatically resize HPT for memory hot add/remove | David Gibson | 2017-02-09
    We've now implemented code in the pseries platform to use the new PAPR
    interface to allow resizing the hash page table (HPT) at runtime. This
    patch uses that interface to automatically attempt to resize the HPT when
    memory is hot added or removed. This tries to always keep the HPT at a
    reasonable size for our current memory size.
    Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
    Reviewed-by: Paul Mackerras <paulus@samba.org>
    Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
| * | | powerpc/pseries: Advertise HPT resizing support via CAS | David Gibson | 2017-02-09
    The hypervisor needs to know a guest is capable of using the HPT resizing
    PAPR extension in order to take full advantage of it for memory hotplug. If
    the hypervisor knows the guest is HPT resize aware, it can size the initial
    HPT based on the initial guest RAM size, relying on the guest to resize the
    HPT when more memory is hot-added. Without this, the hypervisor must size
    the HPT for the maximum possible guest RAM, which can lead to a huge waste
    of space if the guest never actually expands to that maximum size. This
    patch advertises the guest's support for HPT resizing via the
    ibm,client-architecture-support OF interface. We use bit 5 of byte 6 of
    option vector 5 for this purpose, as defined in the PAPR ACR "HPT resizing
    option".
    Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
    Reviewed-by: Anshuman Khandual <khandual@linux.vnet.ibm.com>
    Reviewed-by: Paul Mackerras <paulus@samba.org>
    Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
| * | | powerpc/pseries: Add support for hash table resizing | David Gibson | 2017-02-09
    This adds support for using two hypercalls to change the size of the main
    hash page table while running as a PAPR guest. For now these hypercalls are
    only in experimental qemu versions. The interface is two part: first,
    H_RESIZE_HPT_PREPARE is used to allocate and prepare the new hash table.
    This may be slow, but can be done asynchronously. Then, H_RESIZE_HPT_COMMIT
    is used to switch to the new hash table. This requires that no CPUs be
    concurrently updating the HPT, and so must be run under stop_machine().
    This also adds a debugfs file which can be used to manually control HPT
    resizing or for testing purposes.
    Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
    Reviewed-by: Paul Mackerras <paulus@samba.org>
    [mpe: Rename the debugfs file to "hpt_order"]
    Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
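    The two-phase flow described above could be sketched roughly as follows;
    error handling and the commit callback are elided, and the wrapper and
    helper names are illustrative rather than the ones used by the patch:

      /* Phase 1 (slow, may be retried) prepares the new HPT; phase 2 switches
       * to it while stop_machine() keeps all CPUs out of the old HPT. */
      static int resize_hpt(unsigned long new_shift)
      {
              long rc;

              do {
                      rc = plpar_resize_hpt_prepare(0, new_shift);  /* H_RESIZE_HPT_PREPARE */
              } while (rc == H_BUSY || H_IS_LONG_BUSY(rc));

              if (rc != H_SUCCESS)
                      return -EIO;

              /* H_RESIZE_HPT_COMMIT is issued from the stop_machine() callback */
              return stop_machine(resize_hpt_commit, &new_shift, NULL);
      }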
| * | | powerpc/pseries: Add hypercall wrappers for hash page table resizing | David Gibson | 2017-02-09
    This adds the hypercall numbers and wrapper functions for the hash page
    table resizing hypercalls. These hypercall numbers are defined in the PAPR
    ACR "HPT resizing option". It also adds a new firmware feature flag to
    track the presence of the HPT resizing calls.
    Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
    Reviewed-by: Paul Mackerras <paulus@samba.org>
    Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
| * | | powerpc/mm: Add MMU_FTR_KERNEL_RO to possible feature mask | Aneesh Kumar K.V | 2017-02-07
    Without this we will always find the feature disabled.
    Fixes: 984d7a1ec6 ("powerpc/mm: Fixup kernel read only mapping")
    Cc: stable@vger.kernel.org # v4.7+
    Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
    Acked-by: Balbir Singh <bsingharora@gmail.com>
    Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
| * | | powerpc/64s: Use (start, size) rather than (start, end) for exception handlers | Nicholas Piggin | 2017-02-06
    start,size has the benefit of being easier to search for (start,end usually
    gives you the preceding vector from the one you want, as the first result).
    Suggested-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
    Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
    Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
| * | | powerpc/64s: Tidy up after exception handler rework | Nicholas Piggin | 2017-02-06
    Somewhere along the line, search/replace left some naming garbled, and
    untidy alignment (aka. mpe stuffed it up). Might as well fix them all up
    now while git blame history doesn't extend too far.
    Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
    Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
| * | | powerpc: Add new cache geometry aux vectors | Benjamin Herrenschmidt | 2017-02-06
    This adds AUX vectors for the L1 I/D, L2 and L3 cache levels, providing for
    each cache level the size of the cache in bytes and the geometry (line size
    and number of ways). We chose not to use the existing alpha/sh definition,
    which packs all the information in a single entry per cache level, as it is
    too restricted to represent some of the geometries used on POWER.
    Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
    Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
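    One plausible way to pack a level's geometry into a single aux-vector
    value, with the cache size reported separately, is shown below; the exact
    field layout used by the patch is not visible in this log, so treat it as
    an assumption:

      /* Geometry word: associativity in the upper 16 bits, line size in bytes
       * in the lower 16 bits; the size in bytes goes in its own aux vector. */
      #define CACHE_GEOMETRY(assoc, line_size) \
              (((u32)(assoc) << 16) | ((u32)(line_size) & 0xffff))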
| * | | powerpc/64: Add L2 and L3 cache shape info | Benjamin Herrenschmidt | 2017-02-06
    Retrieved from device-tree when available.
    Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
    Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
| * | | powerpc/64: Clean up ppc64_caches using a struct per cache | Benjamin Herrenschmidt | 2017-02-06
    We have two sets of identical struct members for the I and D sides, and
    mostly identical bunches of code to parse the device-tree to populate them.
    Instead, make a ppc_cache_info structure with one copy for I and one for D.
    Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
    Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
| * | | powerpc/64: Retrieve number of L1 cache sets from device-tree | Benjamin Herrenschmidt | 2017-02-06
    It will be used to calculate the associativity.
    Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
    Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
| * | | powerpc/64: Fix naming of cache block vs. cache line | Benjamin Herrenschmidt | 2017-02-06
    In a number of places we called "cache line size" what is actually the
    cache block size, which in the powerpc architecture means the effective
    size to use with cache management instructions (it can be different from
    the actual cache line size). We fix the naming across the board and
    properly retrieve both pieces of information when available in the
    device-tree.
    Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
    Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
| * | | powerpc: Move ARCH_DLINFO out of uapi | Benjamin Herrenschmidt | 2017-02-06
    It's a kernel-private macro; it doesn't belong there.
    Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
    Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
| * | | powerpc/pseries: Introduce memory hotplug READD operation | John Allen | 2017-02-02
    Currently, memory must be hot removed and subsequently re-added in order to
    dynamically update the affinity of LMBs specified by a PRRN event. Earlier
    implementations of the PRRN event handler ran into issues in which the hot
    remove would occur successfully, but a hotplug event would be initiated
    from another source and grab the hotplug lock preventing the hot add from
    occurring. To prevent this situation, this patch introduces the notion of a
    hot "readd" action for memory which atomizes a hot remove and a hot add
    into a single, serialized operation on the hotplug queue.
    Signed-off-by: John Allen <jallen@linux.vnet.ibm.com>
    Reviewed-by: Nathan Fontenot <nfont@linux.vnet.ibm.com>
    Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
| * | | powerpc/sparse: Constify the address pointer in __get_user_nosleep() | Daniel Axtens | 2017-02-02
    In __get_user_nosleep, we create an intermediate pointer for the user
    address we're about to fetch. We currently don't tag this pointer as const.
    Make it const, as we are simply dereferencing it, and its scope is limited
    to the __get_user_nosleep macro.
    Signed-off-by: Daniel Axtens <dja@axtens.net>
    Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
| * | | powerpc/sparse: Constify the address pointer in __get_user_nocheck() | Daniel Axtens | 2017-02-02
    In __get_user_nocheck, we create an intermediate pointer for the user
    address we're about to fetch. We currently don't tag this pointer as const.
    Make it const, as we are simply dereferencing it, and its scope is limited
    to the __get_user_nocheck macro.
    Signed-off-by: Daniel Axtens <dja@axtens.net>
    Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
| * | | powerpc/sparse: Constify the address pointer in __get_user_check() | Daniel Axtens | 2017-02-02
    In __get_user_check, we create an intermediate pointer for the user address
    we're about to fetch. We currently don't tag this pointer as const. Make it
    const, as we are simply dereferencing it, and its scope is limited to the
    __get_user_check macro.
    Signed-off-by: Daniel Axtens <dja@axtens.net>
    Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
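    The change in all three macros follows the same one-word pattern; a
    simplified illustration (not the kernel macros themselves, and with the
    surrounding access-checking machinery elided):

      /* The temporary that aliases the passed-in pointer is only ever read,
       * so it can be const-qualified, letting sparse catch stray writes. */
      #define get_val(x, ptr)                                 \
      ({                                                      \
              const __typeof__(*(ptr)) *__gu_addr = (ptr);    \
              (x) = *__gu_addr;                               \
              0;                                              \
      })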
| * | | powerpc/mm: add radix__remove_section_mapping() | Reza Arbab | 2017-01-30
    Tear down and free the four-level page tables of physical mappings during
    memory hotremove. Borrow the basic structure of remove_pagetable() and
    friends from the identically-named x86 functions. Reduce the frequency of
    tlb flushes and page_table_lock spinlocks by only doing them in the
    outermost function. There was some question as to whether the locking is
    needed at all. Leave it for now, but we could consider dropping it. Memory
    must be offline to be removed, thus not in use. So there shouldn't be the
    sort of concurrent page walking activity here that might prompt us to use
    RCU.
    Signed-off-by: Reza Arbab <arbab@linux.vnet.ibm.com>
    Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
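    A rough sketch of the outermost walk this describes, with the lock taken
    and the TLB flushed only at this level; the per-level helper name is
    illustrative:

      static void remove_pagetable(unsigned long start, unsigned long end)
      {
              unsigned long addr, next;
              pgd_t *pgd;

              spin_lock(&init_mm.page_table_lock);

              for (addr = start; addr < end; addr = next) {
                      next = pgd_addr_end(addr, end);
                      pgd = pgd_offset_k(addr);
                      if (!pgd_present(*pgd))
                              continue;
                      /* descend into PUD/PMD/PTE levels without extra flushes */
                      remove_pud_table((pud_t *)pgd_page_vaddr(*pgd), addr, next);
              }

              spin_unlock(&init_mm.page_table_lock);
              radix__flush_tlb_kernel_range(start, end);
      }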
| * | | powerpc/mm: add radix__create_section_mapping() | Reza Arbab | 2017-01-30
    Wire up memory hotplug page mapping for radix. Share the mapping function
    already used by radix_init_pgtable().
    Signed-off-by: Reza Arbab <arbab@linux.vnet.ibm.com>
    Acked-by: Balbir Singh <bsingharora@gmail.com>
    Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
| * | | powerpc: Add support for non-PCI ISA bridges | Benjamin Herrenschmidt | 2017-01-30
    The POWER9 chip supports an LPC bus that isn't hanging off a PCI bus, so
    let's add support for that, mapping it to the reserved space at
    ISA_IO_BASE.
    Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
    Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
| * | | powerpc: Move isa bridge definitions to separate include | Benjamin Herrenschmidt | 2017-01-30
    We'll be adding non-PCI isa bridge support, so let's not have all the
    definitions in pci-bridge.h.
    Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
    Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
| * | | powernv: Pass PSSCR value and mask to power9_idle_stop | Gautham R. Shenoy | 2017-01-30
    The power9_idle_stop method currently takes only the requested stop level
    as a parameter and picks up the rest of the PSSCR bits from a hand-coded
    macro. This is not a very flexible design, especially when the firmware has
    the capability to communicate the psscr value and the mask associated with
    a particular stop state via device tree. This patch modifies the
    power9_idle_stop API to take as parameters the PSSCR value and the PSSCR
    mask corresponding to the stop state that needs to be set. The PSSCR value
    and mask are respectively obtained by parsing the
    "ibm,cpu-idle-state-psscr" and "ibm,cpu-idle-state-psscr-mask" fields from
    the device tree. In addition to this, the patch adds support for handling
    stop states for which the ESL and EC bits in the PSSCR are zero. As per the
    architecture, a wakeup from these stop states resumes execution from the
    subsequent instruction, as opposed to waking up at the System Vector. The
    older firmware sets only the Requested Level (RL) field in the psscr and
    psscr-mask exposed in the device tree. For older firmware where
    psscr-mask=0xf, this patch will set sane default values for the remaining
    PSSCR fields (i.e. PSLL, MTL, ESL, EC, and TR). For the new firmware, the
    patch will validate that the invariants required by the ISA for the psscr
    values are maintained by the firmware. The skiboot patch that exports fully
    populated PSSCR values and the mask for all the stop states can be found
    here: https://lists.ozlabs.org/pipermail/skiboot/2016-September/004869.html
    [Optimize the number of instructions before entering STOP with ESL=EC=0,
    and validate that the PSSCR values provided by the firmware maintain the
    invariants required by the ISA, as suggested by Balbir Singh]
    Acked-by: Balbir Singh <bsingharora@gmail.com>
    Signed-off-by: Gautham R. Shenoy <ego@linux.vnet.ibm.com>
    Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
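    A hedged sketch of how the requested PSSCR could be composed from the
    device-tree value and mask, falling back to kernel defaults for fields an
    older firmware does not describe; the default macro name here is an
    assumption:

      /* Bits covered by the firmware mask come from the device tree; the
       * remaining fields (PSLL, MTL, ESL, EC, TR, ...) use kernel defaults. */
      static u64 compute_requested_psscr(u64 dt_val, u64 dt_mask)
      {
              return (dt_val & dt_mask) | (PSSCR_DEFAULT_VAL & ~dt_mask);
      }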
| * | | powernv:idle: Add IDLE_STATE_ENTER_SEQ_NORET macro | Gautham R. Shenoy | 2017-01-30
    Currently all the low-power idle states are expected to wake up at reset
    vector 0x100, which is why the macro IDLE_STATE_ENTER_SEQ, which puts the
    CPU into an idle state, never returns. On ISA v3.0, when the ESL and EC
    bits in the PSSCR are zero, the CPU is expected to wake up at the
    instruction following the idle instruction. This patch adds a new macro
    named IDLE_STATE_ENTER_SEQ_NORET for the no-return variant and reuses the
    name IDLE_STATE_ENTER_SEQ for a variant that allows resuming operation at
    the instruction next to the idle-instruction.
    Acked-by: Balbir Singh <bsingharora@gmail.com>
    Signed-off-by: Gautham R. Shenoy <ego@linux.vnet.ibm.com>
    Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
| * | | powerpc/powernv: Initialise nest mmu | Alistair Popple | 2017-01-30
    POWER9 contains an off-core mmu called the nest mmu (NMMU). This is used by
    other hardware units on the chip to translate virtual addresses into real
    addresses. The unit attempting an address translation provides the majority
    of the context required for the translation request, except for the base
    address of the partition table (ie. the PTCR) which needs to be programmed
    into the NMMU. This patch adds a call to OPAL to set the PTCR for the nest
    mmu in opal_init().
    Signed-off-by: Alistair Popple <alistair@popple.id.au>
    Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
| * | | powerpc/mm/hash: Properly mask the ESID bits when building proto VSID | Aneesh Kumar K.V | 2017-01-30
    The proto VSID is built using both the MMU context id and effective segment
    ID (ESID). We should not have overlapping bits between those. That could
    result in us having a VSID collision. With the current code we missed
    masking the top bits of the ESID. This implies that for kernel addresses we
    ended up using the top 4 bits of the ESID as part of the proto VSID, which
    is wrong. The current code uses the top 4 context values (0x7fffc - 0x7ffff)
    for the kernel. With those context IDs used for the kernel, we don't run
    into VSID collisions because we get the same proto VSID irrespective of
    whether we mask the ESID bits or not. eg:
      ea      = 0xf000000000000000
      context = 0x7ffff

      w/out masking:
        proto_vsid = (0x7ffff << 6) | (0xf000000000000000 >> 40)
                   = 0x1ffffc0 | 0xf00000
                   = 0x1ffffc0

      with masking:
        proto_vsid = (0x7ffff << 6) | ((0xf000000000000000 >> 40) & 0x3f)
                   = 0x1ffffc0 | (0xf00000 & 0x3f)
                   = 0x1ffffc0 | 0
                   = 0x1ffffc0
    So although there is no bug, the code is still overly subtle, so fix it to
    save ourselves pain in future.
    Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
    Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
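    In C terms, the masked computation for the 1TB-segment case used in the
    example looks roughly like this (a sketch using the usual hash-MMU macro
    names, not the exact hunk from the patch):

      static unsigned long proto_vsid_1t(unsigned long context, unsigned long ea)
      {
              /* keep only the ESID bits that belong in the proto VSID */
              return (context << ESID_BITS_1T) |
                     ((ea >> SID_SHIFT_1T) & ESID_BITS_1T_MASK);
      }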
| * | | powerpc/mm: Fixup wrong LPCR_VRMASD value | Aneesh Kumar K.V | 2017-01-24
    In commit a4b349540a26af ("powerpc/mm: Cleanup LPCR defines") we updated
    LPCR_VRMASD wrongly as below.
      -#define LPCR_VRMASD     (0x1ful << (63-16))
      +#define LPCR_VRMASD_SH  47
      +#define LPCR_VRMASD     (ASM_CONST(1) << LPCR_VRMASD_SH)
    We initialize the VRMA bits in LPCR to 0x00 in kvm. Hence using a different
    mask value as above while updating lpcr should not have any impact. This
    patch updates it to the correct value.
    Fixes: a4b349540a26 ("powerpc/mm: Cleanup LPCR defines")
    Reported-by: Ram Pai <linuxram@us.ibm.com>
    Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
    Signed-off-by: Jia He <hejianet@gmail.com>
    Acked-by: Paul Mackerras <paulus@ozlabs.org>
    Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
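    Given the diff above, the corrected definition presumably restores the
    five-bit field at the same shift, i.e. something along the lines of:

      #define LPCR_VRMASD_SH  47
      #define LPCR_VRMASD     (ASM_CONST(0x1f) << LPCR_VRMASD_SH)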
| * | | powerpc/64: Use optimized checksum routines on little-endian | Paul Mackerras | 2017-01-24
    Currently we have optimized hand-coded assembly checksum routines for
    big-endian 64-bit systems, but for little-endian we use the generic C
    routines. This modifies the optimized routines to work for little-endian.
    With this, we no longer need to enable CONFIG_GENERIC_CSUM. This also fixes
    a couple of comments in checksum_64.S so they accurately reflect what the
    associated instruction does.
    Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
    [mpe: Use the more common __BIG_ENDIAN__]
    Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
| * | | powerpc/64: Fix checksum folding in csum_tcpudp_nofold and ip_fast_csum_nofold | Paul Mackerras | 2017-01-24
    These functions compute an IP checksum by computing a 64-bit sum and
    folding it to 32 bits (the "nofold" in their names refers to folding down
    to 16 bits). However, doing (u32) (s + (s >> 32)) is not sufficient to fold
    a 64-bit sum to 32 bits correctly. The addition can produce a carry out
    from bit 31, which needs to be added in to the sum to produce the correct
    result. To fix this, we copy the from64to32() function from lib/checksum.c
    and use that.
    Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
    Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
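    For reference, the from64to32() helper in lib/checksum.c folds twice so
    that a carry out of bit 31 in the first fold is itself folded back in:

      static inline u32 from64to32(u64 x)
      {
              /* add up 32-bit and 32-bit for 32+c bit */
              x = (x & 0xffffffff) + (x >> 32);
              /* add up carry.. */
              x = (x & 0xffffffff) + (x >> 32);
              return (u32)x;
      }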
| * | | powerpc/mm/4k: don't allocate larger pmd page table for 4k | Aneesh Kumar K.V | 2017-01-23
    We now support THP with both 64K and 4K page size configurations for radix
    (hash only supports THP with 64K page size). Hence we will have
    CONFIG_TRANSPARENT_HUGEPAGE enabled for both PPC_64K and PPC_4K configs.
    Since we only need the larger pmd page table with the hash configuration
    (to store the slot information in the second half of the table), restrict
    the large pmd page table to THP and 64K configs.
    Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
    Reviewed-by: Anshuman Khandual <khandual@linux.vnet.ibm.com>
    Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
| * | | powerpc/mm: Remove the debug hugepd_ok check | Aneesh Kumar K.V | 2017-01-23
    We don't do this for other page table entries. So let's keep this simple
    and always return false for the hugepd check on a 64K page size config.
    Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
    Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
* | | Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/jikos/livepatching | Linus Torvalds | 2017-02-20
    Pull livepatching updates from Jiri Kosina:
    - removal of dead code (Kamalesh Babulal)
    - documentation update (Miroslav Benes)
    * 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/jikos/livepatching:
      livepatch: doc: remove the limitation for schedule() patching
      powerpc/livepatch: Remove klp_write_module_reloc() stub
| * | | powerpc/livepatch: Remove klp_write_module_reloc() stub | Kamalesh Babulal | 2016-12-19
    commit 425595a7fc20 ("livepatch: reuse module loader code to write
    relocations") offloads livepatch module relocation write to arch specific
    module loader code. Remove unused klp_write_module_reloc() function stub.
    Signed-off-by: Kamalesh Babulal <kamalesh@linux.vnet.ibm.com>
    Acked-by: Balbir Singh <bsingharora@gmail.com>
    Reviewed-by: Petr Mladek <pmladek@suse.com>
    Acked-by: Josh Poimboeuf <jpoimboe@redhat.com>
    Signed-off-by: Jiri Kosina <jkosina@suse.cz>
* | | | Merge branch 'sched-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip | Linus Torvalds | 2017-02-20
    Pull scheduler updates from Ingo Molnar:
    "The main changes in this (fairly busy) cycle were:
     - There was a class of scheduler bugs related to forgetting to update the
       rq-clock timestamp which can cause weird and hard to debug problems, so
       there's a new debug facility for this: which uncovered a whole lot of
       bugs which convinced us that we want to keep the debug facility.
       (Peter Zijlstra, Matt Fleming)
     - Various cputime related updates: eliminate cputime and use u64
       nanoseconds directly, simplify and improve the arch interfaces,
       implement delayed accounting more widely, etc. (Frederic Weisbecker)
     - Move code around for better structure plus cleanups (Ingo Molnar)
     - Move IO schedule accounting deeper into the scheduler plus related
       changes to improve the situation (Tejun Heo)
     - ... plus a round of sched/rt and sched/deadline fixes, plus other fixes,
       updates and cleanups"
    * 'sched-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (85 commits)
      sched/core: Remove unlikely() annotation from sched_move_task()
      sched/autogroup: Rename auto_group.[ch] to autogroup.[ch]
      sched/topology: Split out scheduler topology code from core.c into topology.c
      sched/core: Remove unnecessary #include headers
      sched/rq_clock: Consolidate the ordering of the rq_clock methods
      delayacct: Include <uapi/linux/taskstats.h>
      sched/core: Clean up comments
      sched/rt: Show the 'sched_rr_timeslice' SCHED_RR timeslice tuning knob in milliseconds
      sched/clock: Add dummy clear_sched_clock_stable() stub function
      sched/cputime: Remove generic asm headers
      sched/cputime: Remove unused nsec_to_cputime()
      s390, sched/cputime: Remove unused cputime definitions
      powerpc, sched/cputime: Remove unused cputime definitions
      s390, sched/cputime: Make arch_cpu_idle_time() to return nsecs
      ia64, sched/cputime: Remove unused cputime definitions
      ia64: Convert vtime to use nsec units directly
      ia64, sched/cputime: Move the nsecs based cputime headers to the last arch using it
      sched/cputime: Remove jiffies based cputime
      sched/cputime, vtime: Return nsecs instead of cputime_t to account
      sched/cputime: Complete nsec conversion of tick based accounting
      ...
| * | | | sched/cputime: Remove generic asm headers | Frederic Weisbecker | 2017-02-01
    cputime_t is now only used by two architectures:
    * powerpc (when CONFIG_VIRT_CPU_ACCOUNTING_NATIVE=y)
    * s390
    And since the core doesn't use it anymore, we don't need any arch support
    from the others. So we can remove their stub implementations. A final
    cleanup would be to provide an efficient pure arch implementation of
    cputime_to_nsec() for s390 and powerpc and finally remove
    include/linux/cputime.h.
    Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
    Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
    Cc: Fenghua Yu <fenghua.yu@intel.com>
    Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
    Cc: Linus Torvalds <torvalds@linux-foundation.org>
    Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
    Cc: Michael Ellerman <mpe@ellerman.id.au>
    Cc: Paul Mackerras <paulus@samba.org>
    Cc: Peter Zijlstra <peterz@infradead.org>
    Cc: Rik van Riel <riel@redhat.com>
    Cc: Stanislaw Gruszka <sgruszka@redhat.com>
    Cc: Thomas Gleixner <tglx@linutronix.de>
    Cc: Tony Luck <tony.luck@intel.com>
    Cc: Wanpeng Li <wanpeng.li@hotmail.com>
    Link: http://lkml.kernel.org/r/1485832191-26889-36-git-send-email-fweisbec@gmail.com
    Signed-off-by: Ingo Molnar <mingo@kernel.org>
| * | | | powerpc, sched/cputime: Remove unused cputime definitions | Frederic Weisbecker | 2017-02-01
    Since the core doesn't deal with cputime_t anymore, most of these APIs have
    been left unused. Let's remove them.
    Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
    Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
    Cc: Fenghua Yu <fenghua.yu@intel.com>
    Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
    Cc: Linus Torvalds <torvalds@linux-foundation.org>
    Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
    Cc: Michael Ellerman <mpe@ellerman.id.au>
    Cc: Paul Mackerras <paulus@samba.org>
    Cc: Peter Zijlstra <peterz@infradead.org>
    Cc: Rik van Riel <riel@redhat.com>
    Cc: Stanislaw Gruszka <sgruszka@redhat.com>
    Cc: Thomas Gleixner <tglx@linutronix.de>
    Cc: Tony Luck <tony.luck@intel.com>
    Cc: Wanpeng Li <wanpeng.li@hotmail.com>
    Link: http://lkml.kernel.org/r/1485832191-26889-33-git-send-email-fweisbec@gmail.com
    Signed-off-by: Ingo Molnar <mingo@kernel.org>
| * | | | Merge branch 'linus' into sched/core, to pick up fixes and refresh the branch | Ingo Molnar | 2017-02-01
    Signed-off-by: Ingo Molnar <mingo@kernel.org>
| * | | | sched/cputime, powerpc: Migrate stolen_time field to the accounting structure | Frederic Weisbecker | 2017-01-14
    This is in order to gather all cputime accumulation in the same place.
    Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
    Acked-by: Thomas Gleixner <tglx@linutronix.de>
    Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
    Cc: Christian Borntraeger <borntraeger@de.ibm.com>
    Cc: Fenghua Yu <fenghua.yu@intel.com>
    Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
    Cc: Linus Torvalds <torvalds@linux-foundation.org>
    Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
    Cc: Michael Ellerman <mpe@ellerman.id.au>
    Cc: Paul Mackerras <paulus@samba.org>
    Cc: Peter Zijlstra <peterz@infradead.org>
    Cc: Rik van Riel <riel@redhat.com>
    Cc: Stanislaw Gruszka <sgruszka@redhat.com>
    Cc: Tony Luck <tony.luck@intel.com>
    Cc: Wanpeng Li <wanpeng.li@hotmail.com>
    Link: http://lkml.kernel.org/r/1483636310-6557-7-git-send-email-fweisbec@gmail.com
    Signed-off-by: Ingo Molnar <mingo@kernel.org>
| * | | | sched/cputime, powerpc: Prepare accounting structure for cputime flush on tick | Frederic Weisbecker | 2017-01-14
    In order to prepare for CONFIG_VIRT_CPU_ACCOUNTING_NATIVE=y to delay
    cputime accounting to the tick, provide finegrained accumulators to powerpc
    in order to store the cputime until flushing. While at it, normalize the
    name of several fields according to common cputime naming.
    Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
    Acked-by: Thomas Gleixner <tglx@linutronix.de>
    Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
    Cc: Christian Borntraeger <borntraeger@de.ibm.com>
    Cc: Fenghua Yu <fenghua.yu@intel.com>
    Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
    Cc: Linus Torvalds <torvalds@linux-foundation.org>
    Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
    Cc: Michael Ellerman <mpe@ellerman.id.au>
    Cc: Paul Mackerras <paulus@samba.org>
    Cc: Peter Zijlstra <peterz@infradead.org>
    Cc: Rik van Riel <riel@redhat.com>
    Cc: Stanislaw Gruszka <sgruszka@redhat.com>
    Cc: Tony Luck <tony.luck@intel.com>
    Cc: Wanpeng Li <wanpeng.li@hotmail.com>
    Link: http://lkml.kernel.org/r/1483636310-6557-6-git-send-email-fweisbec@gmail.com
    Signed-off-by: Ingo Molnar <mingo@kernel.org>
* | | | Merge tag 'powerpc-4.10-4' of git://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux | Linus Torvalds | 2017-02-10
    Pull powerpc fixes from Michael Ellerman:
    "Apologies for the late pull request, but Ben has been busy finding bugs.
     - Userspace was semi-randomly segfaulting on radix due to us incorrectly
       handling a fault triggered by autonuma, caused by a patch we merged
       earlier in v4.10 to prevent the kernel executing userspace.
     - We weren't marking host IPIs properly for KVM in the OPAL ICP backend.
     - The ERAT flushing on radix was missing an isync and was incorrectly
       marked as DD1 only.
     - The powernv CPU hotplug code was missing a wakeup type and failing to
       flush the interrupt correctly when using OPAL ICP
     Thanks to Benjamin Herrenschmidt"
    * tag 'powerpc-4.10-4' of git://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux:
      powerpc/powernv: Properly set "host-ipi" on IPIs
      powerpc/powernv: Fix CPU hotplug to handle waking on HVI
      powerpc/mm/radix: Update ERAT flushes when invalidating TLB
      powerpc/mm: Fix spurrious segfaults on radix with autonuma
| * | | | powerpc/powernv: Fix CPU hotplug to handle waking on HVI | Benjamin Herrenschmidt | 2017-02-08
    The IPIs come in as HVI not EE, so we need to test the appropriate SRR1
    bits. The encoding is such that it won't have false positives on P7 and P8
    so we can just test it like that. We also need to handle the icp-opal
    variant of the flush.
    Fixes: d74361881f0d ("powerpc/xics: Add ICP OPAL backend")
    Cc: stable@vger.kernel.org # v4.8+
    Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
    Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
* | | | | Merge tag 'powerpc-4.10-3' of git://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux | Linus Torvalds | 2017-02-03
    Pull powerpc fixes from Michael Ellerman:
    "The main change is we're reverting the initial stack protector support we
     merged this cycle. It turns out to not work on toolchains built with libc
     support, and fixing it will need to wait for another release.
     And the rest are all fairly minor:
     - Some pasemi machines were not booting due to a missing error check in
       prom_find_boot_cpu()
     - In EEH we were checking a pointer rather than the bool it pointed to
     - The clang build was broken by a BUILD_BUG_ON() we added.
     - The radix (Power9 only) version of map_kernel_page() was broken if our
       memory size was a multiple of 2MB, which it generally isn't
     Thanks to: Darren Stevens, Gavin Shan, Reza Arbab"
    * tag 'powerpc-4.10-3' of git://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux:
      powerpc/mm: Use the correct pointer when setting a 2MB pte
      powerpc: Fix build failure with clang due to BUILD_BUG_ON()
      powerpc: Revert the initial stack protector support
      powerpc/eeh: Fix wrong flag passed to eeh_unfreeze_pe()
      powerpc: Add missing error check to prom_find_boot_cpu()