path: root/arch/mips/kernel
Commit message    Author    Age
...
| * | MIPS: pm-cps: Use MIPS standard lightweight ordering barrier    Matt Redfearn    2016-10-04
    Since R2 of the MIPS architecture, SYNC(0x10) has been an optional but
    architecturally defined ordering barrier. If a CPU does not implement it,
    the arch specifies that it must fall back to SYNC(0). In places where we
    require that the instruction stream not be reordered, but do not require
    that loads / stores are globally completed, use the defined standard sync
    stype.

    Signed-off-by: Matt Redfearn <matt.redfearn@imgtec.com>
    Reviewed-by: Paul Burton <paul.burton@imgtec.com>
    Cc: Adam Buchbinder <adam.buchbinder@gmail.com>
    Cc: Masahiro Yamada <yamada.masahiro@socionext.com>
    Cc: linux-mips@linux-mips.org
    Cc: linux-kernel@vger.kernel.org
    Patchwork: https://patchwork.linux-mips.org/patch/14221/
    Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
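    As a hedged illustration of the barrier choice described above (the wrapper
    name is made up; this is not the pm-cps.c code itself): SYNC 0x10 orders the
    instruction stream, and CPUs that do not implement the optional stype treat
    it as SYNC 0, so it is always safe to emit.

        /* Illustrative only: a hypothetical wrapper around the lightweight
         * ordering barrier. CPUs without SYNC(0x10) execute it as SYNC(0). */
        static inline void ordering_barrier(void)
        {
                __asm__ __volatile__(
                "       .set    push            \n"
                "       .set    mips32r2        \n"
                "       sync    0x10            \n"
                "       .set    pop             \n"
                : : : "memory");
        }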
| * | MIPS: pm-cps: Update comments on barrier instructions    Matt Redfearn    2016-10-04
    This code makes extensive use of barriers, which previously had quite vague
    descriptions. Update the comments to make the choice of barrier, and the
    reason for it, clearer.

    Signed-off-by: Matt Redfearn <matt.redfearn@imgtec.com>
    Reviewed-by: Paul Burton <paul.burton@imgtec.com>
    Cc: Adam Buchbinder <adam.buchbinder@gmail.com>
    Cc: Masahiro Yamada <yamada.masahiro@socionext.com>
    Cc: Kees Cook <keescook@chromium.org>
    Cc: linux-mips@linux-mips.org
    Cc: linux-kernel@vger.kernel.org
    Patchwork: https://patchwork.linux-mips.org/patch/14220/
    Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
| * | MIPS: pm-cps: Change FSB workaround to CPU blacklist    Matt Redfearn    2016-10-04
    The check for whether a CPU requires the FSB flush workaround previously
    required every CPU that does not need it to be whitelisted. That approach
    does not scale well as new CPUs are introduced, so change the default from
    a WARN and returning an error to just returning 0. Any CPUs requiring the
    workaround can then be added to the blacklist.

    Signed-off-by: Matt Redfearn <matt.redfearn@imgtec.com>
    Reviewed-by: Paul Burton <paul.burton@imgtec.com>
    Cc: Adam Buchbinder <adam.buchbinder@gmail.com>
    Cc: Masahiro Yamada <yamada.masahiro@socionext.com>
    Cc: linux-mips@linux-mips.org
    Cc: linux-kernel@vger.kernel.org
    Patchwork: https://patchwork.linux-mips.org/patch/14218/
    Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
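    A hedged sketch of the whitelist-to-blacklist inversion described above;
    the function name and the core listed are illustrative, not taken from
    pm-cps.c:

        static bool cpu_needs_fsb_flush(void)          /* hypothetical name */
        {
                switch (current_cpu_type()) {
                case CPU_INTERAPTIV:                   /* example blacklist entry */
                        return true;
                default:
                        /* Previously: WARN() and return an error for any core
                         * not explicitly whitelisted. Now: assume new cores do
                         * not need the workaround. */
                        return false;
                }
        }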
| * | MIPS: CPC: Avoid lock when MIPS CM >= 3 is present    Matt Redfearn    2016-10-04
    MIPS CM version 3 removed the CPC_CL_OTHER register and instead the
    CM_CL_OTHER register is used to redirect the CPC_OTHER region. As such, we
    should not write the unimplemented register and can avoid the spinlock as
    well. These lock functions should already be called within the context of
    a mips_cm_{lock,unlock}_other pair, ensuring the correct CPC_OTHER region
    will be accessed.

    Signed-off-by: Matt Redfearn <matt.redfearn@imgtec.com>
    Reviewed-by: Paul Burton <paul.burton@imgtec.com>
    Cc: linux-mips@linux-mips.org
    Cc: linux-kernel@vger.kernel.org
    Patchwork: https://patchwork.linux-mips.org/patch/14219/
    Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
| * | MIPS: CPC: Convert bare 'unsigned' to 'unsigned int'    Matt Redfearn    2016-10-04
    Checkpatch complains about use of bare unsigned type.

    Signed-off-by: Matt Redfearn <matt.redfearn@imgtec.com>
    Reviewed-by: Paul Burton <paul.burton@imgtec.com>
    Cc: linux-mips@linux-mips.org
    Cc: linux-kernel@vger.kernel.org
    Patchwork: https://patchwork.linux-mips.org/patch/14217/
    Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
| * | MIPS: Migrate exception table users off module.h and onto extable.h    Paul Gortmaker    2016-10-04
    These files were only including module.h for exception table related
    functions. We've now separated that content out into its own file
    "extable.h" so now move over to that and avoid all the extra header content
    in module.h that we don't really need to compile these files.

    In the case of traps.c we can't dump the module.h include since it is also
    used to provide "print_modules".

    Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
    Cc: linux-mips@linux-mips.org
    Cc: linux-kernel@vger.kernel.org
    Patchwork: https://patchwork.linux-mips.org/patch/13934/
    Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
| * | MIPS: Delete unused file smp-gic.c    Matt Redfearn    2016-10-04
    Commit 7eb8c99db26c ("MIPS: Delete smp-gic.c") removed the file from the
    Makefile and the option to build it from KConfig, but left the file itself
    floating in the tree. Remove the unused source file.

    Signed-off-by: Matt Redfearn <matt.redfearn@imgtec.com>
    Cc: Qais Yousef <qsyousef@gmail.com>
    Cc: paul.burton@imgtec.com
    Cc: linux-mips@linux-mips.org
    Patchwork: https://patchwork.linux-mips.org/patch/13883/
    Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
| * | MIPS: Move identification of VP(E) into proc.c from smp-mt.c    Matt Redfearn    2016-10-04
    The addition of VPE information to /proc/cpuinfo used to be in smp-mt.c.
    This file is not used by MIPS r6 kernels, so the Virtual Processor
    information was not present for these CPU types. Move the code to print VPE
    information into proc.c, add a case for MIPS r6 CPUs, and remove the block
    from smp-mt.c.

    Signed-off-by: Matt Redfearn <matt.redfearn@imgtec.com>
    Reviewed-by: Paul Burton <paul.burton@imgtec.com>
    Cc: Qais Yousef <qsyousef@gmail.com>
    Cc: Zubair Lutfullah Kakakhel <Zubair.Kakakhel@imgtec.com>
    Cc: Thomas Gleixner <tglx@linutronix.de>
    Cc: linux-mips@linux-mips.org
    Cc: linux-kernel@vger.kernel.org
    Patchwork: https://patchwork.linux-mips.org/patch/13847/
    Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
* | | MIPS: Wire up new pkey_{mprotect,alloc,free} syscalls    Ralf Baechle    2016-10-14
    Signed-off-by: Marcin Nowakowski <marcin.nowakowski@imgtec.com>
    Cc: linux-mips@linux-mips.org
    Patchwork: https://patchwork.linux-mips.org/patch/14380/
    Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* | mips/panic: replace smp_send_stop() with kdump friendly version in panic path    Hidehiro Kawai    2016-10-11
    Daniel Walker reported problems which happen when the
    crash_kexec_post_notifiers kernel option is enabled
    (https://lkml.org/lkml/2015/6/24/44).

    In that case, smp_send_stop() is called before entering kdump routines
    which assume other CPUs are still online. As a result, kdump routines fail
    to save other CPUs' registers. Additionally, for MIPS OCTEON, it fails to
    stop the watchdog timer.

    To fix this problem, call a new kdump friendly function,
    crash_smp_send_stop(), instead of smp_send_stop() when
    crash_kexec_post_notifiers is enabled. crash_smp_send_stop() is a weak
    function, and it just calls smp_send_stop(). Architecture code should
    override it so that kdump can work appropriately. This patch provides the
    MIPS version.

    Fixes: f06e5153f4ae (kernel/panic.c: add "crash_kexec_post_notifiers" option)
    Link: http://lkml.kernel.org/r/20160810080950.11028.28000.stgit@sysi4-13.yrl.intra.hitachi.co.jp
    Signed-off-by: Hidehiro Kawai <hidehiro.kawai.ez@hitachi.com>
    Reported-by: Daniel Walker <dwalker@fifo99.com>
    Cc: Dave Young <dyoung@redhat.com>
    Cc: Baoquan He <bhe@redhat.com>
    Cc: Vivek Goyal <vgoyal@redhat.com>
    Cc: Eric Biederman <ebiederm@xmission.com>
    Cc: Masami Hiramatsu <mhiramat@kernel.org>
    Cc: Daniel Walker <dwalker@fifo99.com>
    Cc: Xunlei Pang <xpang@redhat.com>
    Cc: Thomas Gleixner <tglx@linutronix.de>
    Cc: Ingo Molnar <mingo@redhat.com>
    Cc: "H. Peter Anvin" <hpa@zytor.com>
    Cc: Borislav Petkov <bp@suse.de>
    Cc: David Vrabel <david.vrabel@citrix.com>
    Cc: Toshi Kani <toshi.kani@hpe.com>
    Cc: Ralf Baechle <ralf@linux-mips.org>
    Cc: David Daney <david.daney@cavium.com>
    Cc: Aaro Koskinen <aaro.koskinen@iki.fi>
    Cc: "Steven J. Hill" <steven.hill@cavium.com>
    Cc: Corey Minyard <cminyard@mvista.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
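    The weak-default pattern the commit relies on, as described in the message
    above (a simplified sketch; the MIPS override that saves the other CPUs'
    registers is not shown):

        /* kernel/panic.c, simplified: the generic fallback just stops the
         * other CPUs. An architecture overrides this symbol with a version
         * that also collects their register state for kdump. */
        void __weak crash_smp_send_stop(void)
        {
                smp_send_stop();
        }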
* | nmi_backtrace: generate one-line reports for idle cpus    Chris Metcalf    2016-10-07
    When doing an nmi backtrace of many cores, most of which are idle, the
    output is a little overwhelming and very uninformative. Suppress messages
    for cpus that are idling when they are interrupted and just emit one line,
    "NMI backtrace for N skipped: idling at pc 0xNNN".

    We do this by grouping all the cpuidle code together into a new
    .cpuidle.text section, and then checking the address of the interrupted PC
    to see if it lies within that section.

    This commit suitably tags x86 and tile idle routines, and only adds in the
    minimal framework for other architectures.

    Link: http://lkml.kernel.org/r/1472487169-14923-5-git-send-email-cmetcalf@mellanox.com
    Signed-off-by: Chris Metcalf <cmetcalf@mellanox.com>
    Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
    Tested-by: Peter Zijlstra (Intel) <peterz@infradead.org>
    Tested-by: Daniel Thompson <daniel.thompson@linaro.org> [arm]
    Tested-by: Petr Mladek <pmladek@suse.com>
    Cc: Aaron Tomlin <atomlin@redhat.com>
    Cc: Peter Zijlstra (Intel) <peterz@infradead.org>
    Cc: "Rafael J. Wysocki" <rjw@rjwysocki.net>
    Cc: Russell King <linux@arm.linux.org.uk>
    Cc: Thomas Gleixner <tglx@linutronix.de>
    Cc: Ingo Molnar <mingo@elte.hu>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
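    A hedged sketch of the mechanism: idle routines are collected into a
    dedicated .cpuidle.text section so the backtrace code can compare the
    interrupted PC against the section bounds (the helper name below is
    illustrative):

        #define __cpuidle  __attribute__((__section__(".cpuidle.text")))

        extern char __cpuidle_text_start[], __cpuidle_text_end[];

        static bool pc_is_in_cpuidle(unsigned long pc)  /* illustrative helper */
        {
                return pc >= (unsigned long)__cpuidle_text_start &&
                       pc <  (unsigned long)__cpuidle_text_end;
        }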
* | nmi_backtrace: add more trigger_*_cpu_backtrace() methods    Chris Metcalf    2016-10-07
    Patch series "improvements to the nmi_backtrace code" v9.

    This patch series modifies the trigger_xxx_backtrace() NMI-based remote
    backtracing code to make it more flexible, and makes a few small
    improvements along the way.

    The motivation comes from the task isolation code, where there are
    scenarios where we want to be able to diagnose a case where some cpu is
    about to interrupt a task-isolated cpu. It can be helpful to see both where
    the interrupting cpu is, and also an approximation of where the cpu that is
    being interrupted is. The nmi_backtrace framework allows us to discover the
    stack of the interrupted cpu.

    I've tested that the change works as desired on tile, and build-tested x86,
    arm, mips, and sparc64. For x86 I confirmed that the generic cpuidle stuff
    as well as the architecture-specific routines are in the new cpuidle
    section. For arm, mips, and sparc I just build-tested it and made sure the
    generic cpuidle routines were in the new cpuidle section, but I didn't
    attempt to figure out what the platform-specific idle routines might be.
    That might be more usefully done by someone with platform experience in
    follow-up patches.

    This patch (of 4):

    Currently you can only request a backtrace of either all cpus, or all cpus
    but yourself. It can also be helpful to request a remote backtrace of a
    single cpu, and since we want that, the logical extension is to support a
    cpumask as the underlying primitive.

    This change modifies the existing lib/nmi_backtrace.c code to take a
    cpumask as its basic primitive, and modifies the linux/nmi.h code to use
    the new "cpumask" method instead.

    The existing clients of nmi_backtrace (arm and x86) are converted to using
    the new cpumask approach in this change.

    The other users of the backtracing API (sparc64 and mips) are converted to
    use the cpumask approach rather than the all/allbutself approach.

    The mips code ignored the "include_self" boolean but with this change it
    will now also dump a local backtrace if requested.

    Link: http://lkml.kernel.org/r/1472487169-14923-2-git-send-email-cmetcalf@mellanox.com
    Signed-off-by: Chris Metcalf <cmetcalf@mellanox.com>
    Tested-by: Daniel Thompson <daniel.thompson@linaro.org> [arm]
    Reviewed-by: Aaron Tomlin <atomlin@redhat.com>
    Reviewed-by: Petr Mladek <pmladek@suse.com>
    Cc: "Rafael J. Wysocki" <rjw@rjwysocki.net>
    Cc: Russell King <linux@arm.linux.org.uk>
    Cc: Thomas Gleixner <tglx@linutronix.de>
    Cc: Ingo Molnar <mingo@elte.hu>
    Cc: Ralf Baechle <ralf@linux-mips.org>
    Cc: David Miller <davem@davemloft.net>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
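    A hedged sketch of how the all/all-but-self requests reduce to the new
    cpumask primitive (helper bodies simplified, return types omitted):

        static inline void trigger_all_cpu_backtrace(void)
        {
                arch_trigger_cpumask_backtrace(cpu_online_mask, false);
        }

        static inline void trigger_allbutself_cpu_backtrace(void)
        {
                /* exclude_self = true: skip the requesting CPU */
                arch_trigger_cpumask_backtrace(cpu_online_mask, true);
        }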
* | Merge branch 'x86-asm-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip    Linus Torvalds    2016-10-03
    Pull low-level x86 updates from Ingo Molnar:
     "In this cycle this topic tree has become one of those 'super topics' that
      accumulated a lot of changes:

      - Add CONFIG_VMAP_STACK=y support to the core kernel and enable it on
        x86 - preceded by an array of changes. v4.8 saw preparatory changes in
        this area already - this is the rest of the work. Includes the thread
        stack caching performance optimization. (Andy Lutomirski)

      - switch_to() cleanups and all around enhancements. (Brian Gerst)

      - A large number of dumpstack infrastructure enhancements and an unwinder
        abstraction. The secret long term plan is safe(r) live patching plus
        maybe another attempt at debuginfo based unwinding - but all these
        current bits are standalone enhancements in a frame pointer based debug
        environment as well. (Josh Poimboeuf)

      - More __ro_after_init and const annotations. (Kees Cook)

      - Enable KASLR for the vmemmap memory region. (Thomas Garnier)"

    [ The virtually mapped stack changes are pretty fundamental, and not
      x86-specific per se, even if they are only used on x86 right now. ]

    * 'x86-asm-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (70 commits)
      x86/asm: Get rid of __read_cr4_safe()
      thread_info: Use unsigned long for flags
      x86/alternatives: Add stack frame dependency to alternative_call_2()
      x86/dumpstack: Fix show_stack() task pointer regression
      x86/dumpstack: Remove dump_trace() and related callbacks
      x86/dumpstack: Convert show_trace_log_lvl() to use the new unwinder
      oprofile/x86: Convert x86_backtrace() to use the new unwinder
      x86/stacktrace: Convert save_stack_trace_*() to use the new unwinder
      perf/x86: Convert perf_callchain_kernel() to use the new unwinder
      x86/unwind: Add new unwind interface and implementations
      x86/dumpstack: Remove NULL task pointer convention
      fork: Optimize task creation by caching two thread stacks per CPU if CONFIG_VMAP_STACK=y
      sched/core: Free the stack early if CONFIG_THREAD_INFO_IN_TASK
      lib/syscall: Pin the task stack in collect_syscall()
      x86/process: Pin the target stack in get_wchan()
      x86/dumpstack: Pin the target stack when dumping it
      kthread: Pin the stack via try_get_task_stack()/put_task_stack() in to_live_kthread() function
      sched/core: Add try_get_task_stack() and put_task_stack()
      x86/entry/64: Fix a minor comment rebase error
      iommu/amd: Don't put completion-wait semaphore on stack
      ...
| * Merge branch 'x86/urgent' into x86/asm    Thomas Gleixner    2016-09-30
    Get the cr4 fixes so we can apply the final cleanup
| * | ftrace: Add return address pointer to ftrace_ret_stack    Josh Poimboeuf    2016-08-24
    Storing this value will help prevent unwinders from getting out of sync
    with the function graph tracer ret_stack. Now instead of needing a stateful
    iterator, they can compare the return address pointer to find the right
    ret_stack entry.

    Note that an array of 50 ftrace_ret_stack structs is allocated for every
    task. So when an arch implements this, it will add either 200 or 400 bytes
    of memory usage per task (depending on whether it's a 32-bit or 64-bit
    platform).

    Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
    Acked-by: Steven Rostedt <rostedt@goodmis.org>
    Cc: Andy Lutomirski <luto@amacapital.net>
    Cc: Andy Lutomirski <luto@kernel.org>
    Cc: Borislav Petkov <bp@alien8.de>
    Cc: Brian Gerst <brgerst@gmail.com>
    Cc: Byungchul Park <byungchul.park@lge.com>
    Cc: Denys Vlasenko <dvlasenk@redhat.com>
    Cc: Frederic Weisbecker <fweisbec@gmail.com>
    Cc: H. Peter Anvin <hpa@zytor.com>
    Cc: Kees Cook <keescook@chromium.org>
    Cc: Linus Torvalds <torvalds@linux-foundation.org>
    Cc: Nilay Vaish <nilayvaish@gmail.com>
    Cc: Peter Zijlstra <peterz@infradead.org>
    Cc: Thomas Gleixner <tglx@linutronix.de>
    Link: http://lkml.kernel.org/r/a95cfcc39e8f26b89a430c56926af0bb217bc0a1.1471607358.git.jpoimboe@redhat.com
    Signed-off-by: Ingo Molnar <mingo@kernel.org>
* | | MIPS: Fix detection of unsupported highmem with cache aliases    Paul Burton    2016-09-29
    The paging_init() function contains code which detects that highmem is in
    use but unsupported due to dcache aliasing. However this code was
    ineffective because it was being run before the caches are probed, meaning
    that cpu_has_dc_aliases would always evaluate to false (unless a platform
    overrides it to a compile-time constant) and the detection of the
    unsupported case is never triggered. The kernel would then go on to attempt
    to use highmem & either hit coherency issues or trigger the BUG_ON in
    flush_kernel_dcache_page().

    Fix this by running paging_init() later than cpu_cache_init(), such that
    the cpu_has_dc_aliases macro will evaluate correctly & the unsupported
    highmem case will be detected successfully.

    This then leads to a formerly hidden issue in that mem_init_free_highmem()
    will attempt to free all highmem pages, even though we're avoiding use of
    them & don't have valid page structs for them. This leads to an invalid
    pointer dereference & a TLB exception. Avoid this by skipping the loop in
    mem_init_free_highmem() if cpu_has_dc_aliases evaluates true.

    Signed-off-by: Paul Burton <paul.burton@imgtec.com>
    Cc: Rabin Vincent <rabinv@axis.com>
    Cc: Matt Redfearn <matt.redfearn@imgtec.com>
    Cc: Jerome Marchand <jmarchan@redhat.com>
    Cc: Alexander Sverdlin <alexander.sverdlin@gmail.com>
    Cc: Aurelien Jarno <aurelien@aurel32.net>
    Cc: Jaedon Shin <jaedon.shin@gmail.com>
    Cc: Toshi Kani <toshi.kani@hpe.com>
    Cc: James Hogan <james.hogan@imgtec.com>
    Cc: Sergey Ryazanov <ryazanov.s.a@gmail.com>
    Cc: Jonas Gorski <jogo@openwrt.org>
    Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
    Cc: linux-mips@linux-mips.org
    Cc: linux-kernel@vger.kernel.org
    Patchwork: https://patchwork.linux-mips.org/patch/14184/
    Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
* | | MIPS: Fix BUILD_ROLLBACK_PROLOGUE for microMIPS    Paul Burton    2016-09-29
    When the kernel is built for microMIPS, branch targets need to be known to
    be microMIPS code in order to result in bit 0 of the PC being set. The
    branch target in the BUILD_ROLLBACK_PROLOGUE macro was simply the end of
    the macro, which may be pointing at padding rather than at code. This
    results in recent enough GNU linkers complaining like so:

        mips-img-linux-gnu-ld: arch/mips/built-in.o: .text+0x3e3c: Unsupported branch between ISA modes.
        mips-img-linux-gnu-ld: final link failed: Bad value
        Makefile:936: recipe for target 'vmlinux' failed
        make: *** [vmlinux] Error 1

    Fix this by changing the branch target to be the start of the appropriate
    handler, skipping over any padding.

    Signed-off-by: Paul Burton <paul.burton@imgtec.com>
    Cc: linux-mips@linux-mips.org
    Patchwork: https://patchwork.linux-mips.org/patch/14019/
    Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
* | | MIPS: clear execution hazard after changing FTLB enable    Paul Burton    2016-09-29
    On current P-series cores from Imagination the FTLB can be enabled or
    disabled via a bit in the Config6 register, and an execution hazard is
    created by changing the value of that bit. The ftlb_disable function
    already cleared that hazard, but that does no good for other callers.
    Clear the hazard in the set_ftlb_enable function that creates it, and only
    for the cores where it applies.

    This has the effect of reverting c982c6d6c48b ("MIPS: cpu-probe: Remove cp0
    hazard barrier when enabling the FTLB") which was incorrect.

    Signed-off-by: Paul Burton <paul.burton@imgtec.com>
    Fixes: c982c6d6c48b ("MIPS: cpu-probe: Remove cp0 hazard barrier when enabling the FTLB")
    Cc: linux-mips@linux-mips.org
    Patchwork: https://patchwork.linux-mips.org/patch/14023/
    Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
* | | MIPS: Configure FTLB after probing TLB sizes from config4    Paul Burton    2016-09-29
    On some cores (proAptiv, P5600) we make use of the sizes of the TLBs to
    determine the desired FTLB:VTLB write ratio. However set_ftlb_enable & thus
    calculate_ftlb_probability are called before decode_config4. This results
    in us calculating a probability based on zero sizes, and we end up setting
    FTLBP=3 for a 3:1 FTLB:VTLB write ratio in all cases. This will make
    abysmal use of the available FTLB resources in the affected cores.

    Fix this by configuring the FTLB probability after having decoded config4.
    However we do need to have enabled the FTLB before that point such that
    fields in config4 actually reflect that an FTLB is present. So
    set_ftlb_enable is now called twice, with flags indicating that it should
    configure the write probability only the second time.

    Signed-off-by: Paul Burton <paul.burton@imgtec.com>
    Fixes: cf0a8aa0226d ("MIPS: cpu-probe: Set the FTLB probability bit on supported cores")
    Cc: linux-mips@linux-mips.org
    Patchwork: https://patchwork.linux-mips.org/patch/14022/
    Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
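    A hedged sketch of the resulting probe ordering (the flag names and the
    wrapper function are illustrative, not the exact cpu-probe.c code):

        #define FTLB_EN        BIT(0)   /* illustrative flag names */
        #define FTLB_SET_PROB  BIT(1)

        static void decode_configs_sketch(struct cpuinfo_mips *c)
        {
                set_ftlb_enable(c, FTLB_EN);        /* enable so config4 shows the FTLB */
                decode_config4(c);                  /* probe VTLB/FTLB sizes */
                set_ftlb_enable(c, FTLB_EN | FTLB_SET_PROB); /* now set the write ratio */
        }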
* | | MIPS: Stop setting I6400 FTLBP    Paul Burton    2016-09-29
    The FTLBP field in Config7 for the I6400 is intended as chicken bits for
    debugging rather than as a field that software actually makes use of. For
    best performance, FTLBP should be left at its default value of 0 with all
    TLB writes hitting the FTLB by default.

    Additionally, since set_ftlb_enable is called from decode_configs before
    decode_config4 which determines the size of the TLBs, this was previously
    always setting FTLBP=3 for a 3:1 FTLB:VTLB write ratio which makes abysmal
    use of the available FTLB resources.

    This effectively reverts b0c4e1b79d8a ("MIPS: Set up FTLB probability for
    I6400").

    Signed-off-by: Paul Burton <paul.burton@imgtec.com>
    Fixes: b0c4e1b79d8a ("MIPS: Set up FTLB probability for I6400")
    Cc: linux-mips@linux-mips.org
    Patchwork: https://patchwork.linux-mips.org/patch/14021/
    Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
* | | MIPS: uprobes: fix use of uninitialised variable    Marcin Nowakowski    2016-09-29
    arch_uprobe_pre_xol needs to emulate a branch if a branch instruction has
    been replaced with a breakpoint, but in fact an uninitialised local
    variable was passed to the emulator routine instead of the original
    instruction.

    Signed-off-by: Marcin Nowakowski <marcin.nowakowski@imgtec.com>
    Fixes: 40e084a506eb ('MIPS: Add uprobes support.')
    Cc: linux-mips@linux-mips.org
    Patchwork: https://patchwork.linux-mips.org/patch/14300/
    Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
* | | MIPS: uprobes: remove incorrect set_orig_insn    Marcin Nowakowski    2016-09-29
    Generic kernel code implements a weak version of set_orig_insn that moves
    cached 'insn' from arch_uprobe to the original code location when the trap
    is removed.

    The MIPS variant used arch_uprobe->orig_inst, which was never initialised
    properly, so this code only inserted a nop instead of the original
    instruction. With that change, orig_inst can also be safely removed.

    Signed-off-by: Marcin Nowakowski <marcin.nowakowski@imgtec.com>
    Fixes: 40e084a506eb ('MIPS: Add uprobes support.')
    Cc: linux-mips@linux-mips.org
    Patchwork: https://patchwork.linux-mips.org/patch/14299/
    Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
* | | MIPS: fix uretprobe implementation    Marcin Nowakowski    2016-09-29
    arch_uretprobe_hijack_return_addr should replace the return address for a
    call with a trampoline address.

    Signed-off-by: Marcin Nowakowski <marcin.nowakowski@imgtec.com>
    Fixes: 40e084a506eb ('MIPS: Add uprobes support.')
    Cc: linux-mips@linux-mips.org
    Patchwork: https://patchwork.linux-mips.org/patch/14298/
    Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
* | | MIPS: smp-cps: Avoid BUG() when offlining pre-r6 CPUs    Matt Redfearn    2016-09-29
    Commit 0d2808f338c7 ("MIPS: smp-cps: Add support for CPU hotplug of MIPSr6
    processors") added a call to mips_cm_lock_other in order to lock the CPC in
    CPUs containing a version 3 or higher Coherence Manager, which use the
    general CM core other register, where previous CMs had a dedicated core
    other register for the CPC.

    A kernel BUG() is triggered, however, if mips_cm_lock_other is called with
    a VP other than 0 on a CPU with CM < 3, a condition introduced by
    0d2808f338c7.

    Avoid the BUG() by always locking VP0 when locking the CPC, since the
    required register, cpc_stat_conf, is shared by all vps in a core.

    Fixes: 0d2808f338c7 ("MIPS: smp-cps: Add support for CPU hotplug...)
    Signed-off-by: Matt Redfearn <matt.redfearn@imgtec.com>
    Cc: Qais Yousef <qsyousef@gmail.com>
    Cc: Masahiro Yamada <yamada.masahiro@socionext.com>
    Cc: James Hogan <james.hogan@imgtec.com>
    Cc: Paul Burton <paul.burton@imgtec.com>
    Cc: Andrew Morton <akpm@linux-foundation.org>
    Cc: linux-mips@linux-mips.org
    Cc: linux-kernel@vger.kernel.org
    Patchwork: https://patchwork.linux-mips.org/patch/14297/
    Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
* | MIPS: SMP: Fix possibility of deadlock when bringing CPUs online    Matt Redfearn    2016-09-24
    This patch fixes the possibility of a deadlock when bringing up secondary
    CPUs. The deadlock occurs because set_cpu_online() is called before
    synchronise_count_slave(). This can cause a deadlock if the boot CPU,
    having scheduled another thread, attempts to send an IPI to the secondary
    CPU, which it sees has been marked online. The secondary is blocked in
    synchronise_count_slave() waiting for the boot CPU to enter
    synchronise_count_master(), but the boot cpu is blocked in
    smp_call_function_many() waiting for the secondary to respond to its IPI
    request.

    Fix this by marking the CPU online in cpu_callin_map and synchronising
    counters before declaring the CPU online and calculating the maps for IPIs.

    Signed-off-by: Matt Redfearn <matt.redfearn@imgtec.com>
    Reported-by: Justin Chen <justinpopo6@gmail.com>
    Tested-by: Justin Chen <justinpopo6@gmail.com>
    Cc: Florian Fainelli <f.fainelli@gmail.com>
    Cc: stable@vger.kernel.org # v4.1+
    Cc: linux-mips@linux-mips.org
    Patchwork: https://patchwork.linux-mips.org/patch/14302/
    Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
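    A hedged sketch of the corrected ordering on the secondary CPU, condensed
    from the description above (a fragment, not the full start-secondary path):

        cpumask_set_cpu(cpu, &cpu_callin_map);  /* tell the boot CPU we arrived */
        synchronise_count_slave(cpu);           /* sync counters while still "offline" */
        set_cpu_online(cpu, true);              /* only now may other CPUs IPI us */
        calculate_cpu_foreign_map();            /* recompute the IPI target maps */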
* | MIPS: Fix pre-r6 emulation FPU initialisation    Paul Burton    2016-09-23
    In the mipsr2_decoder() function, used to emulate pre-MIPSr6 instructions
    that were removed in MIPSr6, the init_fpu() function is called if a removed
    pre-MIPSr6 floating point instruction is the first floating point
    instruction used by the task. However, init_fpu() performs various actions
    that rely upon not being migrated. For example in the most basic case it
    sets the coprocessor 0 Status.CU1 bit to enable the FPU & then loads FP
    register context into the FPU registers. If the task were to migrate during
    this time, it may end up attempting to load FP register context on a
    different CPU where it hasn't set the CU1 bit, leading to errors such as:

        do_cpu invoked from kernel context![#2]:
        CPU: 2 PID: 7338 Comm: fp-prctl Tainted: G D 4.7.0-00424-g49b0c82 #2
        task: 838e4000 ti: 88d38000 task.ti: 88d38000
        $ 0 : 00000000 00000001 ffffffff 88d3fef8
        $ 4 : 838e4000 88d38004 00000000 00000001
        $ 8 : 3400fc01 801f8020 808e9100 24000000
        $12 : dbffffff 807b69d8 807b0000 00000000
        $16 : 00000000 80786150 00400fc4 809c0398
        $20 : 809c0338 0040273c 88d3ff28 808e9d30
        $24 : 808e9d30 00400fb4
        $28 : 88d38000 88d3fe88 00000000 8011a2ac
        Hi : 0040273c
        Lo : 88d3ff28
        epc : 80114178 _restore_fp+0x10/0xa0
        ra : 8011a2ac mipsr2_decoder+0xd5c/0x1660
        Status: 1400fc03 KERNEL EXL IE
        Cause : 1080002c (ExcCode 0b)
        PrId : 0001a920 (MIPS I6400)
        Modules linked in:
        Process fp-prctl (pid: 7338, threadinfo=88d38000, task=838e4000, tls=766527d0)
        Stack : 00000000 00000000 00000000 88d3fe98 00000000 00000000 809c0398 809c0338
                808e9100 00000000 88d3ff28 00400fc4 00400fc4 0040273c 7fb69e18 004a0000
                004a0000 004a0000 7664add0 8010de18 00000000 00000000 88d3fef8 88d3ff28
                808e9100 00000000 766527d0 8010e534 000c0000 85755000 8181d580 00000000
                00000000 00000000 004a0000 00000000 766527d0 7fb69e18 004a0000 80105c20
                ...
        Call Trace:
        [<80114178>] _restore_fp+0x10/0xa0
        [<8011a2ac>] mipsr2_decoder+0xd5c/0x1660
        [<8010de18>] do_ri+0x90/0x6b8
        [<80105c20>] ret_from_exception+0x0/0x10

    Fix this by disabling preemption around the call to init_fpu(), ensuring
    that it starts & completes on one CPU.

    Signed-off-by: Paul Burton <paul.burton@imgtec.com>
    Fixes: b0a668fb2038 ("MIPS: kernel: mips-r2-to-r6-emul: Add R2 emulator for MIPS R6")
    Cc: linux-mips@linux-mips.org
    Cc: stable@vger.kernel.org # v4.0+
    Patchwork: https://patchwork.linux-mips.org/patch/14305/
    Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
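    The fix boils down to pinning the task to one CPU for the duration of the
    per-CPU FPU setup; a simplified sketch of the call site described above:

        preempt_disable();      /* CU1 enable + FP context load must stay on one CPU */
        init_fpu();
        preempt_enable();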
* | MIPS: vDSO: Fix Malta EVA mapping to vDSO page structs    James Hogan    2016-09-21
    The page structures associated with the vDSO pages in the kernel image are
    calculated using virt_to_page(), which uses __pa() under the hood to find
    the pfn associated with the virtual address. The vDSO data pointers however
    point to kernel symbols, so __pa_symbol() should really be used instead.

    Since there is no equivalent to virt_to_page() which uses __pa_symbol(),
    fix init_vdso_image() to work directly with pfns, calculated with
    __phys_to_pfn(__pa_symbol(...)).

    This issue broke the Malta Enhanced Virtual Addressing (EVA) configuration
    which has a non-default implementation of __pa_symbol(). This is because it
    uses a physical alias so that the kernel executes from KSeg0
    (VA 0x80000000 -> PA 0x00000000), while RAM is provided to the kernel in
    the KUSeg range (VA 0x00000000 -> PA 0x80000000) which uses the same
    underlying RAM.

    Since there are no page structures associated with the low physical address
    region, some arbitrary kernel memory would be interpreted as a page
    structure for the vDSO pages and badness ensues.

    Fixes: ebb5e78cc634 ("MIPS: Initial implementation of a VDSO")
    Signed-off-by: James Hogan <james.hogan@imgtec.com>
    Cc: Leonid Yegoshin <leonid.yegoshin@imgtec.com>
    Cc: linux-mips@linux-mips.org
    Cc: <stable@vger.kernel.org> # 4.4.x-
    Patchwork: https://patchwork.linux-mips.org/patch/14229/
    Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
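    A hedged sketch of the pfn calculation the fix switches to, versus the
    virt_to_page() form it replaces (the vdso_start symbol name is illustrative):

        /* Before (wrong for kernel symbols under Malta EVA, since it goes
         * through __pa()): */
        struct page *pg = virt_to_page(vdso_start);

        /* After (sketch): derive the pfn via __pa_symbol() instead. */
        unsigned long pfn = __phys_to_pfn(__pa_symbol(vdso_start));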
* | MIPS: Avoid a BUG warning during prctl(PR_SET_FP_MODE, ...)    Marcin Nowakowski    2016-09-19
    The cpu_has_fpu macro uses smp_processor_id() and is currently executed
    with preemption enabled, which triggers the warning at runtime.

    It is assumed throughout the kernel that if any CPU has an FPU, then all
    CPUs have an FPU as well, so it is safe to perform the check with
    preemption enabled - change the code to use the raw_ variant of the check
    to avoid the warning.

    Signed-off-by: Marcin Nowakowski <marcin.nowakowski@imgtec.com>
    Cc: linux-mips@linux-mips.org
    Cc: stable@vger.kernel.org # 4.0+
    Patchwork: https://patchwork.linux-mips.org/patch/14125/
    Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
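    A hedged sketch of the difference, with the macro definitions paraphrased
    from cpu-features.h rather than quoted:

        /* Per-CPU form: involves smp_processor_id(), so it warns when run
         * with preemption enabled. */
        #define cpu_has_fpu      (current_cpu_data.options & MIPS_CPU_FPU)

        /* Raw form: same information, no preemption check. Safe here because
         * either every CPU has an FPU or none does. */
        #define raw_cpu_has_fpu  (raw_current_cpu_data.options & MIPS_CPU_FPU)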
* | MIPS: Fix memory regions reaching top of physical    James Hogan    2016-09-13
    Memory regions added with add_memory_region() at the top of the physical
    address space will have their end address overflow to 0. This causes them
    to be rejected as invalid, and would cause various other issues later on.

    This causes issues on Malta and Boston platforms when wanting to use all
    2GB of RAM on a 32-bit kernel, either via highmem (using physical addresses
    0x90000000..0xFFFFFFFF), or with the Malta Enhanced Virtual Addressing
    (EVA) layout which exposes the whole 0x80000000..0xFFFFFFFF physical
    address range to kernel mode at 0x00000000..0x7FFFFFFF.

    Due to the abundance of these non-overflow assumptions and the fact that
    memblock already avoids the arithmetic overflow by limiting the size of new
    memory regions without the arch code knowing it (in particular
    mem_init_free_highmem() will trigger a page dump due to nonzero mapcount on
    the last page), it is simpler and safer to just limit the size of the
    region in a similar way to memblock but at the arch level to allow most of
    the RAM to be used without arithmetic overflows.

    Therefore we detect this case specifically and reduce the size of the
    region slightly to avoid the arithmetic overflows and cause the last page
    to be ignored.

    Signed-off-by: James Hogan <james.hogan@imgtec.com>
    Cc: linux-mips@linux-mips.org
    Patchwork: https://patchwork.linux-mips.org/patch/13857/
    Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
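    One way to express the clamp described above, as a hedged sketch rather
    than the exact code added to add_memory_region():

        /* If start + size would wrap past the top of the physical address
         * space, shrink the region so its end stays representable; the final
         * page is sacrificed, matching the description above. */
        if (start + size < start)
                size = -start - PAGE_SIZE;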
* | MIPS: uprobes: fix incorrect uprobe brk handling    Marcin Nowakowski    2016-09-13
    When a uprobe-replacement breakpoint instruction is handled, a notifier is
    called with DIE_UPROBE argument, but a corresponding exception notify
    handler for MIPS attempts to handle DIE_BREAK instead. As a result the
    breakpoint instruction isn't handled by the uprobe code and the probed
    application terminates with SIGTRAP.

    Fix this by changing arch_uprobe_exception_notify code to handle DIE_UPROBE
    as a pre-singlestep condition instead of DIE_BREAK.

    Signed-off-by: Marcin Nowakowski <marcin.nowakowski@imgtec.com>
    Cc: linux-mips@linux-mips.org
    Patchwork: https://patchwork.linux-mips.org/patch/13884/
    Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
* Merge branch 'upstream' of git://git.linux-mips.org/pub/scm/ralf/upstream-linus    Linus Torvalds    2016-08-06
    Pull MIPS updates from Ralf Baechle:
     "This is the main pull request for MIPS for 4.8. Also included is a minor
      SSB cleanup as SSB code traditionally is merged through the MIPS tree:

      ATH25:
      - MIPS: Add default configuration for ath25

      Boot:
      - For zboot, copy appended dtb to the end of the kernel
      - store the appended dtb address in a variable

      BPF:
      - Fix off by one error in offset allocation

      Cobalt code:
      - Fix typos

      Core code:
      - debugfs_create_file returns NULL on error, so don't use IS_ERR for
        testing for errors.
      - Fix double locking issue in RM7000 S-cache code. This would only
        affect RM7000 ARC systems on reboot.
      - Fix page table corruption on THP permission changes.
      - Use compat_sys_keyctl for 32 bit userspace on 64 bit kernels. David
        says, there are no compatibility issues raised by this fix.
      - Move some signal code around.
      - Rewrite r4k count/compare clockevent device registration such that
        min_delta_ticks/max_delta_ticks files are guaranteed to be initialized.
      - Only register r4k count/compare as clockevent device if we can assume
        the clock to be constant.
      - Fix MSA asm warnings in control reg accessors
      - uasm and tlbex fixes and tweaking.
      - Print segment physical address when EU=1.
      - Define AT_VECTOR_SIZE_ARCH for ARCH_DLINFO.
      - CP: Allow booting by VP other than VP 0
      - Cache handling fixes and optimizations for r4k class caches
      - Add hotplug support for R6 processors
      - Cleanup hotplug bits in kconfig
      - traps: return correct si code for accessing nonmapped addresses
      - Remove cpu_has_safe_index_cacheops

      Lantiq:
      - Register IRQ handler for virtual IRQ number
      - Fix EIU interrupt loading code
      - Use the real EXIN count
      - Fix build error.

      Loongson 3:
      - Increase HPET_MIN_PROG_DELTA and decrease HPET_MIN_CYCLES

      Octeon:
      - Delete built-in DTB pruning code for D-Link DSR-1000N.
      - Clean up GPIO definitions in dlink_dsr-1000n.dts.
      - Add more LEDs to the DSR-100n DTS
      - Fix off by one in octeon_irq_gpio_map()
      - Typo fixes
      - Enable SATA by default in cavium_octeon_defconfig
      - Support readq/writeq()
      - Remove forced mappings of USB interrupts.
      - Ensure DMA descriptors are always in the low 4GB
      - Improve USB reset code for OCTEON II.

      Pistachio:
      - Add maintainers entry for pistachio SoC Support
      - Remove plat_setup_iocoherency

      Ralink:
      - Fix pwm UART in spis group pinmux.

      SSB:
      - Change bare unsigned to unsigned int to suit coding style

      Tools:
      - Fix reloc tool compiler warnings.

      Other:
      - Delete use of ARCH_WANT_OPTIONAL_GPIOLIB"

    * 'upstream' of git://git.linux-mips.org/pub/scm/ralf/upstream-linus: (61 commits)
      MIPS: mm: Fix definition of R6 cache instruction
      MIPS: tools: Fix relocs tool compiler warnings
      MIPS: Cobalt: Fix typo
      MIPS: Octeon: Fix typo
      MIPS: Lantiq: Fix build failure
      MIPS: Use CPHYSADDR to implement mips32 __pa
      MIPS: Octeon: Dlink_dsr-1000n.dts: add more leds.
      MIPS: Octeon: Clean up GPIO definitions in dlink_dsr-1000n.dts.
      MIPS: Octeon: Delete built-in DTB pruning code for D-Link DSR-1000N.
      MIPS: store the appended dtb address in a variable
      MIPS: ZBOOT: copy appended dtb to the end of the kernel
      MIPS: ralink: fix spis group pinmux
      MIPS: Factor o32 specific code into signal_o32.c
      MIPS: non-exec stack & heap when non-exec PT_GNU_STACK is present
      MIPS: Use per-mm page to execute branch delay slot instructions
      MIPS: Modify error handling
      MIPS: c-r4k: Use SMP calls for CM indexed cache ops
      MIPS: c-r4k: Avoid small flush_icache_range SMP calls
      MIPS: c-r4k: Local flush_icache_range cache op override
      MIPS: c-r4k: Split r4k_flush_kernel_vmap_range()
      ...
| * Merge branch '4.7-fixes' into mips-for-linux-next    Ralf Baechle    2016-08-03
| | * MIPS: Don't register r4k sched clock when CPUFREQ enabled    Huacai Chen    2016-07-24
    Don't register the r4k sched clock when CPUFREQ is enabled, because a sched
    clock needs a constant frequency.

    Signed-off-by: Huacai Chen <chenhc@lemote.com>
    Cc: John Crispin <john@phrozen.org>
    Cc: Steven J . Hill <Steven.Hill@caviumnetworks.com>
    Cc: Fuxin Zhang <zhangfx@lemote.com>
    Cc: Zhangjin Wu <wuzhangjin@gmail.com>
    Cc: linux-mips@linux-mips.org
    Cc: stable@vger.kernel.org
    Patchwork: https://patchwork.linux-mips.org/patch/13820/
    Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
| | * MIPS: Fix r4k clockevents registration    Huacai Chen    2016-07-24
    CPUFreq needs min_delta_ticks/max_delta_ticks to be initialized, and this
    can be done by clockevents_config_and_register().

    Cc: stable@vger.kernel.org
    Signed-off-by: Heiher <r@hev.cc>
    Signed-off-by: Huacai Chen <chenhc@lemote.com>
    Cc: John Crispin <john@phrozen.org>
    Cc: Steven J . Hill <Steven.Hill@imgtec.com>
    Cc: Fuxin Zhang <zhangfx@lemote.com>
    Cc: Zhangjin Wu <wuzhangjin@gmail.com>
    Cc: stable@vger.kernel.org
    Cc: linux-mips@linux-mips.org
    Patchwork: https://patchwork.linux-mips.org/patch/13817/
    Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
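    A hedged sketch of the registration change (the delta bounds shown are
    illustrative values, not necessarily those used by cevt-r4k.c):

        /* Before: fields were configured by hand and the device registered
         * with clockevents_register_device(), leaving min_delta_ticks and
         * max_delta_ticks unset.
         * After: let the core derive them from the frequency and bounds. */
        clockevents_config_and_register(cd, mips_hpt_frequency,
                                        0x300, 0x7fffffff);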
| | * MIPS: traps: return correct si code for accessing nonmapped addresses    Petar Jovanovic    2016-07-21
    find_vma() returns the first VMA which satisfies fault_addr < vm_end, but
    it does not guarantee that fault_addr is actually within the VMA.
    Therefore, the kernel has to check that before it chooses the correct si
    code on return.

    Signed-off-by: Petar Jovanovic <petar.jovanovic@rt-rk.com>
    Cc: linux-mips@linux-mips.org
    Patchwork: https://patchwork.linux-mips.org/patch/13808/
    Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
| * | MIPS: store the appended dtb address in a variable    Jonas Gorski    2016-08-02
    Instead of rewriting the arguments to match the UHI spec, store the address
    of an appended or UHI-supplied dtb in fw_supplied_dtb. That way the
    original bootloader arguments are kept intact while still making the use of
    an appended dtb invisible for mach code.

    Mach code can still find out if it is an appended dtb by comparing fw_arg1
    with fw_supplied_dtb.

    Signed-off-by: Jonas Gorski <jogo@openwrt.org>
    Cc: Kevin Cernekee <cernekee@gmail.com>
    Cc: Florian Fainelli <f.fainelli@gmail.com>
    Cc: John Crispin <john@phrozen.org>
    Cc: Paul Burton <paul.burton@imgtec.com>
    Cc: James Hogan <james.hogan@imgtec.com>
    Cc: Alban Bedel <albeu@free.fr>
    Cc: Daniel Gimpelevich <daniel@gimpelevich.san-francisco.ca.us>
    Cc: Antony Pavlov <antonynpavlov@gmail.com>
    Cc: linux-mips@linux-mips.org
    Patchwork: https://patchwork.linux-mips.org/patch/13699/
    Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
| * | MIPS: Factor o32 specific code into signal_o32.c    Harvey Hunt    2016-08-02
    The commit ebb5e78cc634 ("MIPS: Initial implementation of a VDSO") caused
    building a 64 bit kernel with support for n32 and not o32 to produce a
    build error:

        arch/mips/kernel/signal32.c:415:11: error: ‘vdso_image_o32’ undeclared here (not in a function)
         .vdso = &vdso_image_o32,

    Fix this by moving the o32 specific code into signal_o32.c and updating the
    Makefile accordingly.

    Signed-off-by: Harvey Hunt <harvey.hunt@imgtec.com>
    Cc: Leonid Yegoshin <Leonid.Yegoshin@imgtec.com>
    Cc: Alex Smith <alex@alex-smith.me.uk>
    Cc: linux-mips@linux-mips.org
    Cc: linux-kernel@vger.kernel.org
    Patchwork: https://patchwork.linux-mips.org/patch/13690/
    Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
| * | MIPS: non-exec stack & heap when non-exec PT_GNU_STACK is present    Paul Burton    2016-08-02
    The stack and heap have both been executable by default on MIPS until now.
    This patch changes the default to be non-executable, but only for ELF
    binaries with a non-executable PT_GNU_STACK header present. This does apply
    to both the heap & the stack, despite the name PT_GNU_STACK, and this
    matches the behaviour of other architectures like ARM & x86.

    Current MIPS toolchains do not produce the PT_GNU_STACK header, which means
    that we can rely upon this patch not changing the behaviour of existing
    binaries. The new default will only take effect for newly compiled binaries
    once toolchains are updated to support PT_GNU_STACK, and since those
    binaries are newly compiled they can be compiled expecting the change in
    default behaviour. Again this matches the way in which the ARM & x86
    architectures handled their implementations of non-executable memory.

    Signed-off-by: Paul Burton <paul.burton@imgtec.com>
    Cc: Leonid Yegoshin <leonid.yegoshin@imgtec.com>
    Cc: Maciej Rozycki <maciej.rozycki@imgtec.com>
    Cc: Faraz Shahbazker <faraz.shahbazker@imgtec.com>
    Cc: Raghu Gandham <raghu.gandham@imgtec.com>
    Cc: Matthew Fortune <matthew.fortune@imgtec.com>
    Cc: linux-mips@linux-mips.org
    Patchwork: https://patchwork.linux-mips.org/patch/13765/
    Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
| * | MIPS: Use per-mm page to execute branch delay slot instructions    Paul Burton    2016-08-02
    In some cases the kernel needs to execute an instruction from the delay
    slot of an emulated branch instruction. These cases include:

      - Emulated floating point branch instructions (bc1[ft]l?) for systems
        which don't include an FPU, or upon which the kernel is run with the
        "nofpu" parameter.

      - MIPSr6 systems running binaries targeting older revisions of the
        architecture, which may include branch instructions whose encodings are
        no longer valid in MIPSr6.

    Executing instructions from such delay slots is done by writing the
    instruction to memory followed by a trap, as part of an "emuframe", and
    executing it. This avoids the requirement of an emulator for the entire
    MIPS instruction set. Prior to this patch such emuframes are written to the
    user stack and executed from there.

    This patch moves FP branch delay emuframes off of the user stack and into a
    per-mm page. Allocating a page per-mm leaves userland with access to only
    what it had access to previously, and compared to other solutions is
    relatively simple.

    When a thread requires a delay slot emulation, it is allocated a frame. A
    thread may only have one frame allocated at any one time, since it may only
    ever be executing one instruction at any one time. In order to ensure that
    we can free up the allocated frame later, its index is recorded in struct
    thread_struct. In the typical case, after executing the delay slot
    instruction we'll execute a break instruction with the BRK_MEMU code. This
    traps back to the kernel & leads to a call to do_dsemulret which frees the
    allocated frame & moves the user PC back to the instruction that would have
    executed following the emulated branch.

    In some cases the delay slot instruction may be invalid, such as a branch,
    or may trigger an exception. In these cases the BRK_MEMU break instruction
    will not be hit. In order to ensure that frames are freed this patch
    introduces dsemul_thread_cleanup() and calls it to free any allocated frame
    upon thread exit. If the instruction generated an exception & leads to a
    signal being delivered to the thread, or indeed if a signal simply happens
    to be delivered to the thread whilst it is executing from the struct
    emuframe, then we need to take care to exit the frame appropriately. This
    is done by either rolling back the user PC to the branch or advancing it to
    the continuation PC prior to signal delivery, using
    dsemul_thread_rollback(). If this were not done then a sigreturn would
    return to the struct emuframe, and if that frame had meanwhile been used in
    response to an emulated branch instruction within the signal handler then
    we would execute the wrong user code.

    Whilst a user could theoretically place something like a compact branch to
    self in a delay slot and cause their thread to become stuck in an infinite
    loop with the frame never being deallocated, this would:

      - Only affect the user's single process.

      - Be architecturally invalid since there would be a branch in the delay
        slot, which is forbidden.

      - Be extremely unlikely to happen by mistake, and provide a program with
        no more ability to harm the system than a simple infinite loop would.

    If a thread requires a delay slot emulation & no frame is available to it
    (ie. the process has enough other threads that all frames are currently in
    use) then the thread joins a waitqueue. It will sleep until a frame is
    freed by another thread in the process.

    Since we now know whether a thread has an allocated frame due to our
    tracking of its index, the cookie field of struct emuframe is removed as we
    can be more certain whether we have a valid frame. Since a thread may only
    ever have a single frame at any given time, the epc field of struct
    emuframe is also removed & the PC to continue from is instead stored in
    struct thread_struct. Together these changes simplify & shrink struct
    emuframe somewhat, allowing twice as many frames to fit into the page
    allocated for them.

    The primary benefit of this patch is that we are now free to mark the user
    stack non-executable where that is possible.

    Signed-off-by: Paul Burton <paul.burton@imgtec.com>
    Cc: Leonid Yegoshin <leonid.yegoshin@imgtec.com>
    Cc: Maciej Rozycki <maciej.rozycki@imgtec.com>
    Cc: Faraz Shahbazker <faraz.shahbazker@imgtec.com>
    Cc: Raghu Gandham <raghu.gandham@imgtec.com>
    Cc: Matthew Fortune <matthew.fortune@imgtec.com>
    Cc: linux-mips@linux-mips.org
    Patchwork: https://patchwork.linux-mips.org/patch/13764/
    Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
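    A hedged sketch of the bookkeeping implied by the description above; the
    structure and field names are illustrative, not the actual mm_context or
    thread_struct members:

        struct dsemul_mm_state {                 /* per mm (illustrative) */
                struct page             *frame_page;  /* page holding the emuframes */
                unsigned long           alloc_map;    /* one bit per frame */
                wait_queue_head_t       wait;         /* threads waiting for a frame */
        };

        struct dsemul_thread_state {             /* per thread (illustrative) */
                int             frame_idx;            /* allocated frame, or -1 */
                unsigned long   continue_pc;          /* PC to resume after emulation */
        };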
| * | MIPS: c-r4k: Exclude sibling CPUs in SMP calls    James Hogan    2016-07-29
    When performing SMP calls to foreign cores, exclude sibling CPUs from the
    provided map, as we already handle the local core on the current CPU. This
    prevents an SMP call from, for example, core 0 VPE 1 to VPE 0 on the same
    core.

    In the process the cpu_foreign_map cpumask is turned into an array of
    cpumasks, so that each CPU has its own version of it which excludes sibling
    CPUs.

    r4k_op_needs_ipi() is also updated to reflect that cache management SMP
    calls are not needed when all CPUs are siblings (i.e. there are no foreign
    CPUs according to the new cpu_foreign_map[] semantics which exclude
    siblings).

    Signed-off-by: James Hogan <james.hogan@imgtec.com>
    Cc: Paul Burton <paul.burton@imgtec.com>
    Cc: Leonid Yegoshin <leonid.yegoshin@imgtec.com>
    Cc: Felix Fietkau <nbd@nbd.name>
    Cc: Jayachandran C. <jchandra@broadcom.com>
    Cc: linux-mips@linux-mips.org
    Patchwork: https://patchwork.linux-mips.org/patch/13801/
    Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
| * | MIPS: SMP: Drop stop_this_cpu() cpu_foreign_map hack    James Hogan    2016-07-29
    Commit cccf34e9411c ("MIPS: c-r4k: Fix cache flushing for MT cores") added
    the cpu_foreign_map cpumask containing a single VPE from each online core,
    and recalculated it when secondary CPUs are brought up.

    stop_this_cpu() was also updated to recalculate cpu_foreign_map, but with
    an additional hack before marking the CPU as offline to copy
    cpu_online_mask into cpu_foreign_map and perform an SMP memory barrier.

    This appears to have been intended to prevent cache management IPIs being
    missed when the VPE representing the core in cpu_foreign_map is taken
    offline while other VPEs remain online. Unfortunately there is nothing in
    this hack to prevent r4k_on_each_cpu() from reading the old
    cpu_foreign_map, and smp_call_function_many() from reading that new
    cpu_online_mask with the core's representative VPE marked offline. It then
    wouldn't send an IPI to any online VPEs of that core.

    stop_this_cpu() is only actually called in panic and system shutdown /
    halt / reboot situations, in which case all CPUs are going down and we
    don't really need to care about cache management, so drop this hack.

    Note that the __cpu_disable() case for CPU hotplug is handled in the
    previous commit, and no synchronisation is needed there due to the use of
    stop_machine() which prevents hotplug from taking place while any CPU has
    disabled preemption (as r4k_on_each_cpu() does).

    Signed-off-by: James Hogan <james.hogan@imgtec.com>
    Cc: Paul Burton <paul.burton@imgtec.com>
    Cc: Leonid Yegoshin <leonid.yegoshin@imgtec.com>
    Cc: linux-mips@linux-mips.org
    Patchwork: https://patchwork.linux-mips.org/patch/13796/
    Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
| * | MIPS: SMP: Update cpu_foreign_map on CPU disable    James Hogan    2016-07-29
    When a CPU is disabled via CPU hotplug, cpu_foreign_map is not updated.
    This could result in cache management SMP calls being sent to offline CPUs
    instead of online siblings in the same core.

    Add a call to calculate_cpu_foreign_map() in the various MIPS cpu disable
    callbacks after set_cpu_online(). All cases are updated for consistency and
    to keep cpu_foreign_map strictly up to date, not just those which may
    support hardware multithreading.

    Fixes: cccf34e9411c ("MIPS: c-r4k: Fix cache flushing for MT cores")
    Signed-off-by: James Hogan <james.hogan@imgtec.com>
    Cc: Paul Burton <paul.burton@imgtec.com>
    Cc: David Daney <david.daney@cavium.com>
    Cc: Kevin Cernekee <cernekee@gmail.com>
    Cc: Florian Fainelli <f.fainelli@gmail.com>
    Cc: Huacai Chen <chenhc@lemote.com>
    Cc: Hongliang Tao <taohl@lemote.com>
    Cc: Hua Yan <yanh@lemote.com>
    Cc: linux-mips@linux-mips.org
    Patchwork: https://patchwork.linux-mips.org/patch/13799/
    Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
| * | MIPS: SMP: Clear ASID without confusing has_valid_asid()    James Hogan    2016-07-29
    The SMP flush_tlb_*() functions may clear the memory map's ASIDs for other
    CPUs if the mm has only a single user (the current CPU) in order to avoid
    SMP calls. However this makes it appear to has_valid_asid(), which is used
    by various cache flush functions, as if the CPUs have never run in the mm,
    and therefore can't have cached any of its memory.

    For flush_tlb_mm() this doesn't sound unreasonable.

    flush_tlb_range() corresponds to flush_cache_range() which does do full
    indexed cache flushes, but only on the icache if the specified mapping is
    executable, otherwise it doesn't guarantee that there are no cache contents
    left for the mm.

    flush_tlb_page() corresponds to flush_cache_page(), which will perform
    address based cache ops on the specified page only, and also only touches
    the icache if the page is executable. It does not guarantee that there are
    no cache contents left for the mm.

    For example, this affects flush_cache_range() which uses the
    has_valid_asid() optimisation. It is required to flush the icache when
    mappings are made executable (e.g. using mprotect) so they are immediately
    usable. If some code is changed to non executable in order to be modified
    then it will not be flushed from the icache during that time, but the ASID
    on other CPUs may still be cleared for TLB flushing. When the code is
    changed back to executable, flush_cache_range() will assume the code hasn't
    run on those other CPUs due to the zero ASID, and won't invalidate the
    icache on them.

    This is fixed by clearing the other CPUs ASIDs to 1 instead of 0 for the
    above two flush_tlb_*() functions when the corresponding cache flushes are
    likely to be incomplete (non executable range flush, or any page flush).
    This ASID appears valid to has_valid_asid(), but still triggers ASID
    regeneration due to the upper ASID version bits being 0, which is less than
    the minimum ASID version of 1 and so always treated as stale.

    Signed-off-by: James Hogan <james.hogan@imgtec.com>
    Cc: Paul Burton <paul.burton@imgtec.com>
    Cc: Leonid Yegoshin <leonid.yegoshin@imgtec.com>
    Cc: linux-mips@linux-mips.org
    Patchwork: https://patchwork.linux-mips.org/patch/13795/
    Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
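    A hedged sketch of the change in the SMP flush_tlb_range() path ("exec" is
    the usual "is this range executable" flag in that function; accessor usage
    simplified):

        for_each_online_cpu(cpu) {
                if (cpu == smp_processor_id())
                        continue;
                /* Previously: cpu_context(cpu, mm) = 0;  -- looks like the mm
                 * never ran there. Now: keep a nonzero ASID when the cache
                 * flush may be incomplete, so has_valid_asid() still sees the
                 * mm, while the zero version bits force ASID regeneration. */
                cpu_context(cpu, mm) = !exec;
        }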
| * | KEYS: 64-bit MIPS needs to use compat_sys_keyctl for 32-bit userspaceDavid Howells2016-07-28
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | MIPS64 needs to use compat_sys_keyctl for 32-bit userspace rather than calling sys_keyctl. The latter will work in a lot of cases, thereby hiding the issue. Reported-by: Stephan Mueller <smueller@chronox.de> Signed-off-by: David Howells <dhowells@redhat.com> cc: stable@vger.kernel.org Cc: linux-mips@linux-mips.org Cc: linux-kernel@vger.kernel.org Cc: linux-security-module@vger.kernel.org Cc: keyrings@vger.kernel.org Patchwork: https://patchwork.linux-mips.org/patch/13832/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
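The actual change is simply which entry point the 64-bit kernel's 32-bit ABI syscall tables reference; the fragment below is only a conceptual sketch of why the compat entry matters, and the dispatch function is hypothetical:

    #include <linux/compat.h>
    #include <linux/syscalls.h>

    /*
     * Hypothetical dispatcher for illustration: 32-bit callers pass 32-bit
     * pointers and 32-bit struct/iovec layouts, so a 64-bit kernel must let
     * compat_sys_keyctl() translate them instead of calling sys_keyctl(),
     * which treats the arguments as native-width values.
     */
    static long keyctl_for_task(bool is_compat_task, unsigned int option,
                                unsigned long a2, unsigned long a3,
                                unsigned long a4, unsigned long a5)
    {
            if (is_compat_task)
                    return compat_sys_keyctl(option, a2, a3, a4, a5);

            return sys_keyctl(option, a2, a3, a4, a5);
    }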
| * | MIPS: Print segment physical address when EU=1James Hogan2016-07-28
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Currently the debugfs interface to print the segment configuration refuses to print the physical address of mapped segments. However if the EU bit is set these become unmapped at error level (when CP0_Status.ERL=1), so the physical address is still relevant. Update the logic to print the physical address of mapped segments when the EU bit is set, while still hiding the Cache Coherency Attribute (since EU overrides that to uncached when ERL=1 too). Signed-off-by: James Hogan <james.hogan@imgtec.com> Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/13833/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
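A rough sketch of the decision being changed, assuming a seq_file-based debugfs dump; the function, its parameters, and the output format are placeholders rather than the actual segment.c code:

    #include <linux/seq_file.h>

    static void show_segment_pa(struct seq_file *m, bool mapped,
                                unsigned int eu, unsigned long phys)
    {
            /*
             * Mapped segments normally have no fixed physical address to
             * show, but with EU=1 they become unmapped at error level
             * (CP0_Status.ERL=1), so the PA is meaningful again; the CCA
             * stays hidden since EU forces uncached in that state.
             */
            if (!mapped || eu)
                    seq_printf(m, " %#lx", phys);
            else
                    seq_puts(m, " <mapped>");
    }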
| * | MIPS: smp-cps: Add support for CPU hotplug of MIPSr6 processorsMatt Redfearn2016-07-24
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Introduce support for hotplug of Virtual Processors in MIPSr6 systems. The method is simpler than its VPE counterpart from the now-deprecated MT ASE: we now simply write the VP_STOP register with the mask of VPs to halt, and use the VP_RUNNING register to determine when the VP has halted. Signed-off-by: Matt Redfearn <matt.redfearn@imgtec.com> Reviewed-by: Paul Burton <paul.burton@imgtec.com> Cc: Matt Redfearn <matt.redfearn@imgtec.com> Cc: Qais Yousef <qais.yousef@imgtec.com> Cc: linux-mips@linux-mips.org Cc: linux-kernel@vger.kernel.org Patchwork: https://patchwork.linux-mips.org/patch/13752/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
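A sketch of the halt sequence as described, under the assumption that CPC accessors in the usual mips-cpc.h naming style exist for these registers; the accessor names and the bare polling loop are assumptions, not the exact smp-cps.c code:

    #include <asm/mips-cm.h>
    #include <asm/mips-cpc.h>
    #include <asm/processor.h>

    /* Illustrative: halt one Virtual Processor of @core and wait for it to stop. */
    static void example_halt_vp(unsigned int core, unsigned int vp)
    {
            mips_cm_lock_other(core, vp);           /* redirect the *_OTHER region */

            write_cpc_co_vp_stop(1 << vp);          /* request that the VP halt */

            while (read_cpc_co_vp_running() & (1 << vp))
                    cpu_relax();                    /* wait for hardware to report it stopped */

            mips_cm_unlock_other();
    }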
| * | MIPS: smp-cps: Allow booting of CPU other than VP0 within a coreMatt Redfearn2016-07-24
| |/ | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | The boot_core function was hardcoded to always start VP0 when starting a core via the CPC. When hotplugging a CPU this may not be the desired behaviour. Make boot_core receive the VP ID to start running on the core, such that alternate VPs can be started via CPU hotplug. Also ensure that all other VPs within the core are stopped before bringing the core out of reset so that only the desired VP starts. Signed-off-by: Matt Redfearn <matt.redfearn@imgtec.com> Reviewed-by: Paul Burton <paul.burton@imgtec.com> Cc: Matt Redfearn <matt.redfearn@imgtec.com> Cc: Qais Yousef <qais.yousef@imgtec.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: linux-mips@linux-mips.org Cc: linux-kernel@vger.kernel.org Patchwork: https://patchwork.linux-mips.org/patch/13750/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
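The boot-time half of the same idea, sketched under the same assumptions; the accessor names and the "all VPs" mask are illustrative, not the actual boot_core() code:

    /* Illustrative: ensure only @vp_id runs when @core is brought out of reset. */
    static void example_boot_core_vp(unsigned int core, unsigned int vp_id)
    {
            mips_cm_lock_other(core, 0);

            write_cpc_co_vp_stop(0xf);              /* assumed mask: stop every VP in the core */
            write_cpc_co_vp_run(1 << vp_id);        /* then allow only the requested VP to run */

            /* ...power up / release the core from reset via the CPC as before... */

            mips_cm_unlock_other();
    }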
* | tree-wide: replace config_enabled() with IS_ENABLED()Masahiro Yamada2016-08-04
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | The use of config_enabled() against config options is ambiguous. In practical terms, config_enabled() is equivalent to IS_BUILTIN(), but the author might have used it for the meaning of IS_ENABLED(). Using IS_ENABLED(), IS_BUILTIN(), IS_MODULE() etc. makes the intention clearer. This commit replaces config_enabled() with IS_ENABLED() where possible. This commit is only touching bool config options. I noticed two cases where config_enabled() is used against a tristate option: - config_enabled(CONFIG_HWMON) [ drivers/net/wireless/ath/ath10k/thermal.c ] - config_enabled(CONFIG_BACKLIGHT_CLASS_DEVICE) [ drivers/gpu/drm/gma500/opregion.c ] I did not touch them because they should be converted to IS_BUILTIN() in order to keep the logic, but I was not sure it was the authors' intention. Link: http://lkml.kernel.org/r/1465215656-20569-1-git-send-email-yamada.masahiro@socionext.com Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com> Acked-by: Kees Cook <keescook@chromium.org> Cc: Stas Sergeev <stsp@list.ru> Cc: Matt Redfearn <matt.redfearn@imgtec.com> Cc: Joshua Kinard <kumba@gentoo.org> Cc: Jiri Slaby <jslaby@suse.com> Cc: Bjorn Helgaas <bhelgaas@google.com> Cc: Borislav Petkov <bp@suse.de> Cc: Markos Chandras <markos.chandras@imgtec.com> Cc: "Dmitry V. Levin" <ldv@altlinux.org> Cc: yu-cheng yu <yu-cheng.yu@intel.com> Cc: James Hogan <james.hogan@imgtec.com> Cc: Brian Gerst <brgerst@gmail.com> Cc: Johannes Berg <johannes@sipsolutions.net> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Al Viro <viro@zeniv.linux.org.uk> Cc: Will Drewry <wad@chromium.org> Cc: Nikolay Martynov <mar.kolya@gmail.com> Cc: Huacai Chen <chenhc@lemote.com> Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Daniel Borkmann <daniel@iogearbox.net> Cc: Leonid Yegoshin <Leonid.Yegoshin@imgtec.com> Cc: Rafal Milecki <zajec5@gmail.com> Cc: James Cowgill <James.Cowgill@imgtec.com> Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: Alex Smith <alex.smith@imgtec.com> Cc: Adam Buchbinder <adam.buchbinder@gmail.com> Cc: Qais Yousef <qais.yousef@imgtec.com> Cc: Jiang Liu <jiang.liu@linux.intel.com> Cc: Mikko Rapeli <mikko.rapeli@iki.fi> Cc: Paul Gortmaker <paul.gortmaker@windriver.com> Cc: Denys Vlasenko <dvlasenk@redhat.com> Cc: Brian Norris <computersforpeace@gmail.com> Cc: Hidehiro Kawai <hidehiro.kawai.ez@hitachi.com> Cc: "Luis R. Rodriguez" <mcgrof@do-not-panic.com> Cc: Andy Lutomirski <luto@amacapital.net> Cc: Ingo Molnar <mingo@redhat.com> Cc: Dave Hansen <dave.hansen@linux.intel.com> Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com> Cc: Roland McGrath <roland@hack.frob.com> Cc: Paul Burton <paul.burton@imgtec.com> Cc: Kalle Valo <kvalo@qca.qualcomm.com> Cc: Viresh Kumar <viresh.kumar@linaro.org> Cc: Tony Wu <tung7970@gmail.com> Cc: Huaitong Han <huaitong.han@intel.com> Cc: Sumit Semwal <sumit.semwal@linaro.org> Cc: Alexei Starovoitov <ast@kernel.org> Cc: Juergen Gross <jgross@suse.com> Cc: Jason Cooper <jason@lakedaemon.net> Cc: "David S. 
Miller" <davem@davemloft.net> Cc: Oleg Nesterov <oleg@redhat.com> Cc: Andrea Gelmini <andrea.gelmini@gelma.net> Cc: David Woodhouse <dwmw2@infradead.org> Cc: Marc Zyngier <marc.zyngier@arm.com> Cc: Rabin Vincent <rabin@rab.in> Cc: "Maciej W. Rozycki" <macro@imgtec.com> Cc: David Daney <david.daney@cavium.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
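For reference, a small example of the distinction drawn above, using a made-up tristate option CONFIG_FOO; the kconfig.h helpers are real, while the option and function are illustrative:

    #include <linux/kconfig.h>

    /*
     * For a tristate CONFIG_FOO:
     *   IS_BUILTIN(CONFIG_FOO) is true only for =y,
     *   IS_MODULE(CONFIG_FOO)  is true only for =m,
     *   IS_ENABLED(CONFIG_FOO) is true for =y or =m.
     * config_enabled() behaved like IS_BUILTIN(), which is why tristate
     * users cannot be blindly switched to IS_ENABLED().
     */
    static bool foo_support_present(void)
    {
            return IS_ENABLED(CONFIG_FOO);  /* built-in or modular */
    }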
* | Merge tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvmLinus Torvalds2016-08-02
|\ \ | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Pull KVM updates from Paolo Bonzini: - ARM: GICv3 ITS emulation and various fixes. Removal of the old VGIC implementation. - s390: support for trapping software breakpoints, nested virtualization (vSIE), the STHYI opcode, initial extensions for CPU model support. - MIPS: support for MIPS64 hosts (32-bit guests only) and lots of cleanups, preliminary to this and the upcoming support for hardware virtualization extensions. - x86: support for execute-only mappings in nested EPT; reduced vmexit latency for TSC deadline timer (by about 30%) on Intel hosts; support for more than 255 vCPUs. - PPC: bugfixes. * tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm: (302 commits) KVM: PPC: Introduce KVM_CAP_PPC_HTM MIPS: Select HAVE_KVM for MIPS64_R{2,6} MIPS: KVM: Reset CP0_PageMask during host TLB flush MIPS: KVM: Fix ptr->int cast via KVM_GUEST_KSEGX() MIPS: KVM: Sign extend MFC0/RDHWR results MIPS: KVM: Fix 64-bit big endian dynamic translation MIPS: KVM: Fail if ebase doesn't fit in CP0_EBase MIPS: KVM: Use 64-bit CP0_EBase when appropriate MIPS: KVM: Set CP0_Status.KX on MIPS64 MIPS: KVM: Make entry code MIPS64 friendly MIPS: KVM: Use kmap instead of CKSEG0ADDR() MIPS: KVM: Use virt_to_phys() to get commpage PFN MIPS: Fix definition of KSEGX() for 64-bit KVM: VMX: Add VMCS to CPU's loaded VMCSs before VMPTRLD kvm: x86: nVMX: maintain internal copy of current VMCS KVM: PPC: Book3S HV: Save/restore TM state in H_CEDE KVM: PPC: Book3S HV: Pull out TM state save/restore into separate procedures KVM: arm64: vgic-its: Simplify MAPI error handling KVM: arm64: vgic-its: Make vgic_its_cmd_handle_mapi similar to other handlers KVM: arm64: vgic-its: Turn device_id validation into generic ID validation ...
| * | MIPS: inst.h: Rename cbcond{0,1}_op to pop{1,3}0_opPaul Burton2016-07-05
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | The opcodes currently defined in inst.h as cbcond0_op & cbcond1_op are actually defined in the MIPS base instruction set manuals as pop10 & pop30 respectively. Rename them as such, for consistency with the documentation. Signed-off-by: Paul Burton <paul.burton@imgtec.com> Signed-off-by: James Hogan <james.hogan@imgtec.com> Acked-by: Ralf Baechle <ralf@linux-mips.org> Cc: linux-mips@linux-mips.org Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
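For context, a fragment in the shape of the inst.h rename; the numeric values are the MIPSr6 major opcodes these groups reuse, stated here from memory rather than quoted from the header:

    /* Illustrative fragment, not the full enum from arch/mips/include/uapi/asm/inst.h. */
    enum example_major_op {
            pop10_op = 0x08,        /* formerly cbcond0_op: BOVC/BEQC/BEQZALC encodings */
            pop30_op = 0x18,        /* formerly cbcond1_op: BNVC/BNEC/BNEZALC encodings */
    };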