path: root/arch/arm/kernel
Commit message  Author  Age
...
| | * | ARM: perf: consistently use arm_pmu->name for PMU nameWill Deacon2012-11-09
    Perf has three ways to name a PMU: either by passing an explicit char *, reading arm_pmu->name or accessing arm_pmu->pmu.name. Just use arm_pmu->name consistently in the ARM backend.

    Signed-off-by: Will Deacon <will.deacon@arm.com>
| | * | ARM: perf: return NOTIFY_DONE from cpu notifier when no available PMUWill Deacon2012-11-09
    When attempting to reset the PMU state for either a NULL PMU or a PMU implementation without a reset function, return NOTIFY_DONE from the CPU notifier as we don't care about the hotplug event.

    Signed-off-by: Will Deacon <will.deacon@arm.com>
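
    A minimal sketch of that notifier behaviour (not the actual patch; cpu_pmu stands in for the per-CPU PMU pointer and the callback name is illustrative):

        #include <linux/cpu.h>
        #include <linux/notifier.h>

        /* Sketch: reset the PMU when a CPU comes up, but return NOTIFY_DONE
         * whenever there is nothing for this notifier to do. */
        static int cpu_pmu_notify(struct notifier_block *b, unsigned long action,
                                  void *hcpu)
        {
                if ((action & ~CPU_TASKS_FROZEN) != CPU_STARTING)
                        return NOTIFY_DONE;

                if (!cpu_pmu || !cpu_pmu->reset)
                        return NOTIFY_DONE;     /* no PMU or no reset hook */

                cpu_pmu->reset(cpu_pmu);
                return NOTIFY_OK;
        }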
| | * | ARM: perf: register cpu_notifier at driver initMark Rutland2012-11-09
    The current practice of registering the cpu hotplug notifier at PMU registration time won't be safe with multiple PMUs, as we'll repeatedly attempt to register the notifier. This has the unfortunate effect of silently corrupting the notifier list, leading to boot stalling.

    Instead, register the notifier at init time. Its sanity checks will prevent anything bad from happening if the notifier is called before we have any PMUs registered.

    Signed-off-by: Mark Rutland <mark.rutland@arm.com>
    Signed-off-by: Will Deacon <will.deacon@arm.com>
| | * | ARM: perf: check ARMv7 counter validity on a per-pmu basisSudeep KarkadaNagesha2012-11-09
    Multi-cluster ARMv7 systems may have CPU PMUs with different numbers of counters. This patch updates armv7_pmnc_counter_valid so that it takes a pmu argument and checks the counter validity against that. We also remove a number of redundant counter checks where the current PMU is not easily retrievable.

    Signed-off-by: Sudeep KarkadaNagesha <Sudeep.KarkadaNagesha@arm.com>
    Signed-off-by: Will Deacon <will.deacon@arm.com>
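
    In outline, the validity check becomes a property of the PMU instance rather than a global limit (a sketch only; the real macros and field names in perf_event_v7.c may differ):

        #include <asm/pmu.h>            /* struct arm_pmu */

        /* Sketch: a counter index is only valid for the PMU it belongs to,
         * since each cluster's PMU may expose a different number of counters. */
        static bool armv7_pmnc_counter_valid(struct arm_pmu *pmu, int idx)
        {
                return idx >= 0 && idx < pmu->num_events;
        }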
| | * | ARM: perf: consistently use struct perf_event in arm_pmu functionsSudeep KarkadaNagesha2012-11-09
    The arm_pmu functions have wildly varied parameters which can often be derived from struct perf_event. This patch changes the arm_pmu function prototypes so that struct perf_event pointers are passed in preference to fields that can be derived from the event.

    Signed-off-by: Sudeep KarkadaNagesha <Sudeep.KarkadaNagesha@arm.com>
    Signed-off-by: Will Deacon <will.deacon@arm.com>
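
    The shape of the prototype change, sketched on two illustrative callbacks (the real arm_pmu has more members than shown here):

        #include <linux/perf_event.h>

        /* Before: callers had to pass pieces already derivable from the event. */
        struct arm_pmu_before {
                void (*enable)(struct hw_perf_event *evt, int idx);
                void (*disable)(struct hw_perf_event *evt, int idx);
        };

        /* After: pass the event itself; evt, idx and the owning PMU can all be
         * derived from it (event->hw, event->hw.idx, event->pmu). */
        struct arm_pmu_after {
                void (*enable)(struct perf_event *event);
                void (*disable)(struct perf_event *event);
        };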
| | * | ARM: perf: allocate CPU PMU dynamically at probe timeSudeep KarkadaNagesha2012-11-09
    Supporting multiple, heterogeneous CPU PMUs requires us to allocate the arm_pmu structures dynamically as the devices are probed. This patch removes the static structure definitions for each CPU PMU type and instead passes pointers to the PMU-specific init functions.

    Signed-off-by: Sudeep KarkadaNagesha <Sudeep.KarkadaNagesha@arm.com>
    Signed-off-by: Will Deacon <will.deacon@arm.com>
| | * | ARM: perf: add guest vs host discriminationMarc Zyngier2012-11-09
    Add minimal guest support to perf, so it can distinguish whether the PMU interrupt was in the host or the guest, as well as collecting some very basic information (guest PC, user vs kernel mode). This is not feature complete though, as it doesn't support backtracing in the guest.

    Based on the x86 implementation, tested with KVM/ARM.

    Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
    Signed-off-by: Will Deacon <will.deacon@arm.com>
| | |
| \ \
| \ \
| \ \
*---. \ \ Merge branches 'cache-l2x0', 'fixes', 'hdrs', 'misc', 'mmci', 'vic' and 'warnings' into for-nextRussell King2012-12-10
| | * | | ARM: 7595/1: syscall: rework ordering in syscall_trace_exitWill Deacon2012-12-10
    syscall_trace_exit is currently doing things back-to-front; invoking the audit hook *after* signalling the debugger, which presents an opportunity for the registers to be re-written by userspace in order to bypass auditing constraints.

    This patch fixes the ordering by moving the audit code first and the tracehook code last. On the face of it, it looks like current_thread_info()->syscall may be incorrect for the sys_exit tracepoint, but that's actually not an issue because it will have been set during syscall entry and cannot have changed since then.

    Reported-by: Andrew Gabbasov <Andrew_Gabbasov@mentor.com>
    Tested-by: Mark Rutland <mark.rutland@arm.com>
    Signed-off-by: Will Deacon <will.deacon@arm.com>
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
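
    Sketched, the reordered exit hook ends up looking roughly like this (the helpers shown are the generic kernel ones; the ARM entry glue around them is elided):

        #include <linux/audit.h>
        #include <linux/tracehook.h>

        asmlinkage void syscall_trace_exit(struct pt_regs *regs)
        {
                /* Audit first, while the registers still hold the real result. */
                audit_syscall_exit(regs);

                if (test_thread_flag(TIF_SYSCALL_TRACEPOINT))
                        trace_sys_exit(regs, regs_return_value(regs));

                /* Signal the debugger last, so it can no longer rewrite the
                 * registers before the audit record has been emitted. */
                if (test_thread_flag(TIF_SYSCALL_TRACE))
                        tracehook_report_syscall_exit(regs, 0);
        }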
| | * | | ARM: 7591/1: nommu: Enable the strict alignment (CR_A) bit only if ARCH < v6Armando Visconti2012-12-07
    This patch keeps the strict alignment CP15 bit disabled for all ARMv6 and ARMv7 processors without an MMU. This behaviour is now the same as in the MMU case.

    Signed-off-by: Armando Visconti <armando.visconti@st.com>
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
| | * | | ARM: 7579/1: arch/allow a scno of -1 to not cause a SIGILLKees Cook2012-11-19
    On tracehook-friendly platforms, a system call number of -1 falls through without running much code or taking much action. ARM is different. This adds a short-circuit check in the trace path to avoid any additional work, as suggested by Russell King, to make sure that ARM behaves the same way as other platforms.

    Signed-off-by: Kees Cook <keescook@chromium.org>
    Acked-by: Will Drewry <wad@chromium.org>
    Reviewed-by: Will Deacon <will.deacon@arm.com>
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
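
    The short-circuit amounts to something like this at the top of the trace-entry path (a sketch; the exact function shape and the surrounding seccomp/tracehook/audit calls are elided):

        asmlinkage int syscall_trace_enter(struct pt_regs *regs, int scno)
        {
                current_thread_info()->syscall = scno;

                /* -1 means "skip this syscall": do no extra trace-path work. */
                if (scno == -1)
                        return scno;

                /* ... seccomp, tracehook and audit handling follows ... */
                return scno;
        }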
| | * | | ARM: 7578/1: arch/move secure_computing into traceKees Cook2012-11-19
    There is very little difference in the TIF_SECCOMP and TIF_SYSCALL_WORK path in entry-common.S, so merge TIF_SECCOMP into TIF_SYSCALL_WORK and move seccomp into the syscall_trace_enter() handler.

    Expanded some of the tracehook logic into the callers to make this code more readable. Since tracehook needs to do register changing, this portion is best left in its own function instead of copy/pasting into the callers.

    Additionally, the return value for secure_computing() is now checked and a -1 value will result in the system call being skipped.

    Signed-off-by: Kees Cook <keescook@chromium.org>
    Acked-by: Will Drewry <wad@chromium.org>
    Reviewed-by: Will Deacon <will.deacon@arm.com>
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
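
    The seccomp part of that, sketched (the helper name is illustrative and the secure_computing() return-value convention is assumed from the seccomp-filter work of that era):

        #include <linux/seccomp.h>

        /* Sketch: seccomp now runs from the C trace path instead of its own
         * assembly branch; a -1 result skips the system call entirely. */
        static int trace_enter_seccomp(struct pt_regs *regs, int scno)
        {
                if (secure_computing(scno) == -1)
                        return -1;      /* tell the entry assembly to skip it */
                return scno;
        }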
| | * | | ARM: 7574/1: kernel/process.c: include idmap.h instead of redeclaring setup_mm_for_reboot()Nicolas Pitre2012-11-13
    Signed-off-by: Nicolas Pitre <nico@linaro.org>
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
| | * | | ARM: 7571/1: SMP: add function arch_send_wakeup_ipi_mask()Shawn Guo2012-11-13
    Add function arch_send_wakeup_ipi_mask(), so that platform code can use it as an easy way to wake up cores that are in WFI.

    Signed-off-by: Shawn Guo <shawn.guo@linaro.org>
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
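
    A minimal sketch of such a helper, assuming the usual ARM smp_cross_call() plumbing and an IPI_WAKEUP message number (both are assumptions here, not quotes from the patch):

        #include <linux/cpumask.h>

        /* Sketch: send a wakeup IPI to each CPU in @mask so that cores sitting
         * in WFI see an event and resume execution. */
        void arch_send_wakeup_ipi_mask(const struct cpumask *mask)
        {
                smp_cross_call(mask, IPI_WAKEUP);
        }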
| | * | | ARM: 7568/1: Sort exception table at compile timeStephen Boyd2012-11-04
    Add the ARM machine identifier to sortextable and select the config option so that we can sort the exception table at compile time. sortextable relies on a section named __ex_table existing in the vmlinux, but ARM's linker script places the exception table in the data section. Give the exception table its own section so that sortextable can find it. This allows us to skip the sorting step during boot.

    Cc: David Daney <david.daney@cavium.com>
    Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
    Tested-by: Will Deacon <will.deacon@arm.com>
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
| | * | | ARM: 7563/1: SMP_TWD: make setup()/stop() reentrantLinus Walleij2012-11-04
    It has been brought to my knowledge that the .setup()/.stop() function pair in the SMP TWD is going to be called from atomic contexts for CPUs coming and going, and then the clk_prepare()/clk_unprepare() calls cannot be called on subsequent .setup()/.stop() iterations. This is however just the tip of an iceberg as the function pair is not designed to be reentrant at all.

    This change makes the SMP_TWD clock .setup()/.stop() pair reentrant by splitting the .setup() function in three parts:

    - One COMMON part that is executed the first time the first CPU in the TWD cluster is initialized. This will fetch the TWD clk for the cluster and prepare+enable it. If no clk is available it will calibrate the rate instead.

    - One part that is executed the FIRST TIME a certain CPU is brought on-line. This initializes and sets up the clock event for a certain CPU.

    - One part that is executed on every subsequent .setup() call. This will re-initialize the clock event. This is augmented to call the clk_enable()/clk_disable() pair properly.

    Cc: Shawn Guo <shawn.guo@linaro.org>
    Reported-by: Peter Chen <peter.chen@freescale.com>
    Reviewed-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
    Tested-by: Shawn Guo <shawn.guo@linaro.org>
    Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
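
    Roughly, the three-way split looks like this (a sketch only; names such as twd_get_clock() and percpu_setup_called are illustrative, and error handling plus the clockevent registration details are elided):

        #include <linux/clk.h>
        #include <linux/clockchips.h>
        #include <linux/percpu.h>
        #include <linux/smp.h>

        static struct clk *twd_clk;                     /* one per TWD cluster */
        static DEFINE_PER_CPU(bool, percpu_setup_called);

        static int twd_timer_setup(struct clock_event_device *evt)
        {
                int cpu = smp_processor_id();

                if (!twd_clk)                   /* COMMON part: first CPU only */
                        twd_clk = twd_get_clock();      /* prepare+enable, or calibrate */

                if (!per_cpu(percpu_setup_called, cpu)) {
                        /* FIRST TIME this CPU comes online: set up its clockevent. */
                        per_cpu(percpu_setup_called, cpu) = true;
                        /* ... register the per-cpu clock_event_device ... */
                        return 0;
                }

                /* Every later .setup() (CPU hotplug): atomic-safe re-enable only. */
                clk_enable(twd_clk);
                /* ... re-initialise the clockevent ... */
                return 0;
        }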
| | * | | ARM: 7561/1: SMP_TWD: use clk_prepare_enable()Linus Walleij2012-11-04
    A minor code refactoring saving a few lines by merging prepare() and enable() calls.

    Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
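
    The pattern being collapsed, as a sketch (the wrapper functions exist only for illustration):

        #include <linux/clk.h>

        /* Before: prepare and enable as two separate steps, two error paths. */
        static int twd_clk_on_before(struct clk *clk)
        {
                int err = clk_prepare(clk);

                if (err)
                        return err;
                return clk_enable(clk);
        }

        /* After: the combined common-clock helper does the same in one call. */
        static int twd_clk_on_after(struct clk *clk)
        {
                return clk_prepare_enable(clk);
        }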
| | * | | ARM: 7544/1: Add BUG_ON when hlt counter is wrongly usedfwu2012-10-18
    1. On the ARM platform, "nohlt" can be used to keep cores out of the idle process so that they return immediately.

    2. Two interfaces exported for other modules, "disable_hlt" and "enable_hlt", enable/disable the cpuidle mechanism by increasing/decreasing "hlt_counter". disable_hlt and enable_hlt are paired operations: when you first call disable_hlt and then enable_hlt, the semantics are right.

    3. There is no constraint preventing user (driver/module) code from calling enable_hlt before disable_hlt, which is a fatal misuse of this kernel state interface, and the current kernel code gives no WARNING or other notification when it happens.

    This patch reports a BUG when this case happens, just like the kernel does when enable_irq runs ahead of disable_irq.

    Link: https://patchwork.kernel.org/patch/1527881/
    Signed-off-by: fwu <fwu@marvell.com>
    Signed-off-by: YiLu Mao <ylmao@marvell.com>
    Signed-off-by: Ning Jiang <ning.jiang@marvell.com>
    Acked-by: Nicolas Pitre
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
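
    A minimal sketch of the check (the hlt_counter handling is simplified from the arch/arm/kernel/process.c code of that era):

        #include <linux/bug.h>

        static unsigned int hlt_counter;        /* non-zero: hlt/cpuidle disabled */

        void disable_hlt(void)
        {
                hlt_counter++;
        }

        void enable_hlt(void)
        {
                /* An enable_hlt() without a matching earlier disable_hlt() is a
                 * caller bug, so trip a BUG instead of silently underflowing. */
                BUG_ON(hlt_counter == 0);
                hlt_counter--;
        }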
| * | | ARM: 7590/1: /proc/interrupts: limit the display of IPIs to online CPUs onlyNicolas Pitre2012-12-07
    This is what is done for the regular interrupts in kernel/irqs/proc.c already, before calling arch_show_interrupts(). Not doing so for the IPIs causes the column headers not to match with the content whenever some CPUs are offline.

    Signed-off-by: Nicolas Pitre <nico@linaro.org>
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
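
    Sketch of the idea for the IPI rows (simplified; the per-IPI statistics macro and name table follow the arch/arm/kernel/smp.c code of that era but are not reproduced exactly):

        #include <linux/seq_file.h>
        #include <linux/cpumask.h>

        /* Sketch: print one IPI row, emitting a column only for online CPUs so
         * the row lines up with the header /proc/interrupts prints. */
        static void show_one_ipi(struct seq_file *p, int ipi, const char *name)
        {
                unsigned int cpu;

                seq_printf(p, "IPI%d: ", ipi);
                for_each_online_cpu(cpu)        /* was: for_each_present_cpu(cpu) */
                        seq_printf(p, "%10u ", __get_irq_stat(cpu, ipi_irqs[ipi]));
                seq_printf(p, " %s\n", name);
        }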
| * | Merge branch 'fixes' of git://git.linaro.org/people/rmk/linux-armLinus Torvalds2012-11-06
    Pull arm fixes from Russell King:
     "Not much here again. The two most notable things here are the sched_clock() fix, which was causing problems with the scheduling of threaded IRQs after a suspend event, and the vfp fix, which afaik has only been seen on some older OMAP boards. Nevertheless, both are fairly important fixes."

    * 'fixes' of git://git.linaro.org/people/rmk/linux-arm:
      ARM: 7569/1: mm: uninitialized warning corrections
      ARM: 7567/1: io: avoid GCC's offsettable addressing modes for halfword accesses
      ARM: 7566/1: vfp: fix save and restore when running on pre-VFPv3 and CONFIG_VFPv3 set
      ARM: 7565/1: sched: stop sched_clock() during suspend
| | * ARM: 7565/1: sched: stop sched_clock() during suspendFelipe Balbi 22012-10-29
    The scheduler imposes a requirement to sched_clock() which is to stop the clock during suspend, if we don't do that any RT thread will be rescheduled in the future which might cause any sort of problems.

    This became an issue on OMAP when we converted omap-i2c.c to use threaded IRQs, it turned out that depending on how much time we spent on suspend, the I2C IRQ thread would end up being rescheduled so far in the future that I2C transfers would timeout and, because omap_hsmmc depends on an I2C-connected device to detect if an MMC card is inserted in the slot, our rootfs would just vanish.

    arch/arm/kernel/sched_clock.c already had an optional implementation (sched_clock_needs_suspend()) which would handle scheduler's requirement properly, what this patch does is simply to make that implementation non-optional.

    Note that this has the side-effect that printk timings won't reflect the actual time spent on suspend so other methods to measure that will have to be used.

    This has been tested with beagleboard XM (OMAP3630) and pandaboard rev A3 (OMAP4430). Suspend to RAM is now working after this patch.

    Thanks to Kevin Hilman for helping out with debugging.

    Acked-by: Kevin Hilman <khilman@ti.com>
    Acked-by: Linus Walleij <linus.walleij@linaro.org>
    Signed-off-by: Felipe Balbi <balbi@ti.com>
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
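
    The mechanism is roughly a syscore suspend/resume pair that freezes the value sched_clock() reports (a simplified sketch; read_sched_clock_ns() stands in for the real counter read and is not a real kernel function):

        #include <linux/syscore_ops.h>
        #include <linux/types.h>

        static bool sched_clock_suspended;
        static u64 frozen_ns;                   /* value reported while suspended */

        unsigned long long notrace sched_clock(void)
        {
                if (sched_clock_suspended)
                        return frozen_ns;       /* clock appears stopped */
                return read_sched_clock_ns();   /* illustrative counter read */
        }

        static int sched_clock_suspend(void)
        {
                frozen_ns = read_sched_clock_ns();
                sched_clock_suspended = true;
                return 0;
        }

        static void sched_clock_resume(void)
        {
                sched_clock_suspended = false;
        }

        /* Registered with register_syscore_ops() at init. */
        static struct syscore_ops sched_clock_ops = {
                .suspend = sched_clock_suspend,
                .resume  = sched_clock_resume,
        };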
| * | Merge branch 'fixes' of git://git.linaro.org/people/rmk/linux-armLinus Torvalds2012-10-25
    Pull ARM fixes from Russell King:
     "A random collection of various fixes, mainly from Arnd and a few other people. Nothing really stands out here."

    * 'fixes' of git://git.linaro.org/people/rmk/linux-arm:
      ARM: drop experimental status for hotplug and Thumb2
      ARM: 7560/1: SMP_TWD: use DIV_ROUND_CLOSEST() for periodic mode
      ARM: 7559/1: smp: switch away from the idmap before updating init_mm.mm_count
      ARM: 7556/1: perf: fix updated event period in response to PERF_EVENT_IOC_PERIOD
      ARM: 7555/1: kexec: fix segment memory addresses check
      ARM: warnings in arch/arm/include/asm/uaccess.h
      ARM: binfmt_flat: unused variable 'persistent'
      ARM: be really quiet when building with 'make -s'
      ARM: pass -marm to gcc by default for both C and assembler
      ARM: Xen: fix initial build problems
      ARM: export default read_current_timer
      ARM: Fix another build warning in arch/arm/mm/alignment.c
      ARM: export set_irq_flags
      ARM: kprobes: make more tests conditional
| | * Merge tag 'fixes-for-rmk' of git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc into fixesRussell King2012-10-22
| | | * ARM: export set_irq_flagsArnd Bergmann2012-10-09
    The recently added Emma Mobile GPIO driver calls set_irq_flags and irq_set_chip_and_handler for the interrupts it exports and it can be built as a module, which currently fails with

        ERROR: "set_irq_flags" [drivers/gpio/gpio-em.ko] undefined!

    We either need to replace the call to set_irq_flags with something else or export that function. This patch does the latter.

    Signed-off-by: Arnd Bergmann <arnd@arndb.de>
    Acked-by: Thomas Gleixner <tglx@linutronix.de>
    Cc: Magnus Damm <damm@opensource.se>
    Cc: Linus Walleij <linus.walleij@linaro.org>
    Cc: Rafael J. Wysocki <rjw@sisk.pl>
    Cc: Russell King <rmk+kernel@arm.linux.org.uk>
| | | * ARM: kprobes: make more tests conditionalArnd Bergmann2012-10-09
    The mls instruction is not available in ARMv6K or below, so we should make the test conditional on at least ARMv7. ldrexd/strexd are available in ARMv6K or ARMv7, which we can test by checking the CONFIG_CPU_32v6K symbol.

        /tmp/ccuMTZ8D.s: Assembler messages:
        /tmp/ccuMTZ8D.s:22188: Error: selected processor does not support ARM mode `mls r0,r1,r2,r3'
        /tmp/ccuMTZ8D.s:22222: Error: selected processor does not support ARM mode `mlshi r7,r8,r9,r10'
        /tmp/ccuMTZ8D.s:22252: Error: selected processor does not support ARM mode `mls lr,r1,r2,r13'

    Signed-off-by: Arnd Bergmann <arnd@arndb.de>
    Acked-by: Jon Medhurst <tixy@yxit.co.uk>
    Acked-by: Nicolas Pitre <nicolas.pitre@linaro.org>
    Cc: Russell King <rmk+kernel@arm.linux.org.uk>
    Cc: Leif Lindholm <leif.lindholm@arm.com>
| | * | ARM: 7560/1: SMP_TWD: use DIV_ROUND_CLOSEST() for periodic modeLinus Walleij2012-10-22
    The periodic mode is currently calculated by a simple division but we should pay more attention to our integer arithmetics. Also delete a comment that does not make any sense.

    Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
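
    The arithmetic point, sketched (DIV_ROUND_CLOSEST() comes from <linux/kernel.h>; twd_timer_rate and the load-register write are illustrative):

        #include <linux/kernel.h>       /* DIV_ROUND_CLOSEST() */

        static void twd_set_periodic(unsigned long twd_timer_rate)
        {
                /* Before: truncating division, off by up to ~1 tick per period. */
                unsigned long ticks_trunc = twd_timer_rate / HZ;

                /* After: round to the nearest whole number of ticks per period. */
                unsigned long ticks = DIV_ROUND_CLOSEST(twd_timer_rate, HZ);

                /* ... program 'ticks' into the TWD load register ... */
        }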
| | * | ARM: 7559/1: smp: switch away from the idmap before updating init_mm.mm_countWill Deacon2012-10-22
    When booting a secondary CPU, the primary CPU hands two sets of page tables via the secondary_data struct:

    (1) swapper_pg_dir: a normal, cacheable, shared (if SMP) mapping of the kernel image (i.e. the tables used by init_mm).

    (2) idmap_pgd: an uncached mapping of the .idmap.text ELF section.

    The idmap is generally used when enabling and disabling the MMU, which includes early CPU boot. In this case, the secondary CPU switches to swapper as soon as it enters C code:

        struct mm_struct *mm = &init_mm;
        unsigned int cpu = smp_processor_id();

        /*
         * All kernel threads share the same mm context; grab a
         * reference and switch to it.
         */
        atomic_inc(&mm->mm_count);
        current->active_mm = mm;
        cpumask_set_cpu(cpu, mm_cpumask(mm));
        cpu_switch_mm(mm->pgd, mm);

    This causes a problem on ARMv7, where the identity mapping is treated as strongly-ordered leading to architecturally UNPREDICTABLE behaviour of exclusive accesses, such as those used by atomic_inc.

    This patch re-orders the secondary_start_kernel function so that we switch to swapper before performing any exclusive accesses.

    Cc: <stable@vger.kernel.org>
    Cc: David McKay <david.mckay@st.com>
    Reported-by: Gilles Chanteperdrix <gilles.chanteperdrix@xenomai.org>
    Signed-off-by: Will Deacon <will.deacon@arm.com>
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
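
    The fix, in outline (a sketch of the reordered secondary_start_kernel(); only the ordering of the two blocks matters, the rest of the bring-up is elided):

        asmlinkage void secondary_start_kernel(void)
        {
                struct mm_struct *mm = &init_mm;
                unsigned int cpu;

                /* Leave the strongly-ordered identity mapping first ... */
                cpu_switch_mm(mm->pgd, mm);
                local_flush_tlb_all();

                /* ... and only then perform exclusive-based accesses such as
                 * atomic_inc(), now running on the cacheable swapper mapping. */
                cpu = smp_processor_id();
                atomic_inc(&mm->mm_count);
                current->active_mm = mm;
                cpumask_set_cpu(cpu, mm_cpumask(mm));

                /* ... remainder of secondary CPU bring-up ... */
        }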
| | * | ARM: 7556/1: perf: fix updated event period in response to PERF_EVENT_IOC_PERIODWill Deacon2012-10-18
    The PERF_EVENT_IOC_PERIOD ioctl command can be used to change the sample period of a running perf_event. Consequently, when calculating the next event period, the new period will only be considered after the previous one has overflowed.

    This patch changes the calculation of the remaining event ticks so that they are offset if the period has changed.

    Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
    Reported-by: Andreas Sandberg <andreas.sandberg@arm.com>
    Signed-off-by: Will Deacon <will.deacon@arm.com>
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
| | * | ARM: 7555/1: kexec: fix segment memory addresses checkAaro Koskinen2012-10-18
    Commit c564df4db85aac8d1d65a56176a0a25f46138064 (ARM: 7540/1: kexec: Check segment memory addresses) added a safety check with accidentally reversed condition, and broke kexec functionality on ARM. Fix this.

    Acked-by: Will Deacon <will.deacon@arm.com>
    Signed-off-by: Aaro Koskinen <aaro.koskinen@iki.fi>
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
| * | ARM: fix oops on initial entry to userspace with Thumb2 kernelsRussell King2012-10-15
    Daniel Mack reports an oops at boot with the latest kernels:

        Internal error: Oops - undefined instruction: 0 [#1] SMP THUMB2
        Modules linked in:
        CPU: 0    Not tainted  (3.6.0-11057-g584df1d #145)
        PC is at cpsw_probe+0x45a/0x9ac
        LR is at trace_hardirqs_on_caller+0x8f/0xfc
        pc : [<c03493de>]    lr : [<c005e81f>]    psr: 60000113
        sp : cf055fb0  ip : 00000000  fp : 00000000
        r10: 00000000  r9 : 00000000  r8 : 00000000
        r7 : 00000000  r6 : 00000000  r5 : c0344555  r4 : 00000000
        r3 : cf057a40  r2 : 00000000  r1 : 00000001  r0 : 00000000
        Flags: nZCv  IRQs on  FIQs on  Mode SVC_32  ISA ARM  Segment user
        Control: 50c5387d  Table: 8f3f4019  DAC: 00000015
        Process init (pid: 1, stack limit = 0xcf054240)
        Stack: (0xcf055fb0 to 0xcf056000)
        5fa0: 00000001 00000000 00000000 00000000
        5fc0: cf055fb0 c000d1a8 00000000 00000000 00000000 00000000 00000000 00000000
        5fe0: 00000000 be9b3f10 00000000 b6f6add0 00000010 00000000 aaaabfaf a8babbaa

    The analysis of this is as follows. In init/main.c, we issue:

        kernel_thread(kernel_init, NULL, CLONE_FS | CLONE_SIGHAND);

    This creates a new thread, which falls through to the ret_from_fork assembly, with r4 set NULL and r5 set to kernel_init. You can see this in your oops dump register set - r5 is 0xc0344555, which is the address of kernel_init plus 1 which marks the function as Thumb code.

    Now, let's look at this code a little closer - this is what the disassembly looks like:

        c000d180 <ret_from_fork>:
        c000d180:  f03a fe08   bl      c0047d94 <schedule_tail>
        c000d184:  2d00        cmp     r5, #0
        c000d186:  bf1e        ittt    ne
        c000d188:  4620        movne   r0, r4
        c000d18a:  46fe        movne   lr, pc   <-- XXXXXXX
        c000d18c:  46af        movne   pc, r5
        c000d18e:  46e9        mov     r9, sp
        c000d190:  ea4f 3959   mov.w   r9, r9, lsr #13
        c000d194:  ea4f 3949   mov.w   r9, r9, lsl #13
        c000d198:  e7c8        b.n     c000d12c <ret_to_user>
        c000d19a:  bf00        nop
        c000d19c:  f3af 8000   nop.w

    This code was introduced in 9fff2fa0db911 (arm: switch to saner kernel_execve() semantics). I have marked one instruction, and it's the significant one - I'll come back to that later.

    Eventually, having had a successful call to kernel_execve(), kernel_init() returns zero. In returning, it uses the value in 'lr' which was set by the instruction I marked above. Unfortunately, this causes lr to contain 0xc000d18e - an even address. This switches the ISA to ARM on return but with a non word aligned PC value.

    So, what do we end up executing? Well, not the instructions above - yes the opcodes, but they don't mean the same thing in ARM mode. In ARM mode, it looks like this instead:

        c000d18c:  46e946af    strbtmi r4, [r9], pc, lsr #13
        c000d190:  3959ea4f    ldmdbcc r9, {r0, r1, r2, r3, r6, r9, fp, sp, lr, pc}^
        c000d194:  3949ea4f    stmdbcc r9, {r0, r1, r2, r3, r6, r9, fp, sp, lr, pc}^
        c000d198:  bf00e7c8    svclt   0x0000e7c8
        c000d19c:  8000f3af    andhi   pc, r0, pc, lsr #7
        c000d1a0:  e88db092    stm     sp, {r1, r4, r7, ip, sp, pc}
        c000d1a4:  46e81fff    ; <UNDEFINED> instruction: 0x46e81fff
        c000d1a8:  8a00f3ef    bhi     0xc004a16c
        c000d1ac:  0a0cf08a    beq     0xc03493dc

    I have included more above, because it's relevant. The PSR flags which we can see in the oops dump are nZCv, so Z and C are set.

    All the above ARM instructions are not executed, except for two. c000d1a0, which has no writeback, and writes below the current stack pointer (and that data is lost when we take the next exception.) The other instruction which is executed is c000d1ac, which takes us to... 0xc03493dc. However, remember that bit 1 of the PC got set. So that makes the PC value 0xc03493de. And that value is the value we find in the oops dump for PC. What is the instruction here when interpreted in ARM mode?

        0:  f71e150c    ; <UNDEFINED> instruction: 0xf71e150c

    and there we have our undefined instruction (remember that the 'never' condition code, 0xf, has been deprecated and is now always executed as it is now being used for additional instructions.)

    This path also nicely explains the state of the stack we see in the oops dump too.

    The above is a consistent and sane story for how we got to the oops dump, which all stems from the instruction at 0xc000d18a being wrong.

    Reported-by: Daniel Mack <zonque@gmail.com>
    Tested-by: Daniel Mack <zonque@gmail.com>
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* | Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/signalLinus Torvalds2012-10-12
    Pull third pile of kernel_execve() patches from Al Viro:
     "The last bits of infrastructure for kernel_thread() et.al., with alpha/arm/x86 use of those. Plus sanitizing the asm glue and do_notify_resume() on alpha, fixing the "disabled irq while running task_work stuff" breakage there.

      At that point the rest of kernel_thread/kernel_execve/sys_execve work can be done independently for different architectures. The only pending bits that do depend on having all architectures converted are restricted to fs/* and kernel/* - that'll obviously have to wait for the next cycle.

      I thought we'd have to wait for all of them done before we start eliminating the longjump-style insanity in kernel_execve(), but it turned out there's a very simple way to do that without flagday-style changes."

    * 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/signal:
      alpha: switch to saner kernel_execve() semantics
      arm: switch to saner kernel_execve() semantics
      x86, um: convert to saner kernel_execve() semantics
      infrastructure for saner ret_from_kernel_thread semantics
      make sure that kernel_thread() callbacks call do_exit() themselves
      make sure that we always have a return path from kernel_execve()
      ppc: eeh_event should just use kthread_run()
      don't bother with kernel_thread/kernel_execve for launching linuxrc
      alpha: get rid of switch_stack argument of do_work_pending()
      alpha: don't bother passing switch_stack separately from regs
      alpha: take SIGPENDING/NOTIFY_RESUME loop into signal.c
      alpha: simplify TIF_NEED_RESCHED handling
| * | arm: switch to saner kernel_execve() semanticsAl Viro2012-10-12
    Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
* | | Merge branch 'for-linus' of git://git.linaro.org/people/rmk/linux-armLinus Torvalds2012-10-11
    Pull second set of ARM updates from Russell King:
     "This is the second set of ARM updates for this merge window.

      Contained within are changes to allow the kernel to boot in hypervisor mode on CPUs supporting virtualization, and cache flushing support to the point of inner sharable unification, which are used by the suspend/resume code to avoid having to do a full cache flush.

      Also included is one fix for VFP code identified by Michael Olbrich."

    * 'for-linus' of git://git.linaro.org/people/rmk/linux-arm:
      ARM: vfp: fix saving d16-d31 vfp registers on v6+ kernels
      ARM: 7549/1: HYP: fix boot on some ARM1136 cores
      ARM: 7542/1: mm: fix cache LoUIS API for xscale and feroceon
      ARM: mm: update __v7_setup() to the new LoUIS cache maintenance API
      ARM: kernel: update __cpu_disable to use cache LoUIS maintenance API
      ARM: kernel: update cpu_suspend code to use cache LoUIS operations
      ARM: mm: rename jump labels in v7_flush_dcache_all function
      ARM: mm: implement LoUIS API for cache maintenance ops
      ARM: virt: arch_timers: enable access to physical timers
      ARM: virt: Add CONFIG_ARM_VIRT_EXT option
      ARM: virt: Add boot-time diagnostics
      ARM: virt: Update documentation for hyp mode entry support
      ARM: zImage/virt: hyp mode entry support for the zImage loader
      ARM: virt: allow the kernel to be entered in HYP mode
      ARM: opcodes: add __ERET/__MSR_ELR_HYP instruction encoding
| * \ \ Merge branch 'fixes' into for-linusRussell King2012-10-11
    Conflicts:
        arch/arm/kernel/smp.c
| * | | Merge branch 'hyp-boot-mode-rmk' of git://git.kernel.org/pub/scm/linux/kernel/git/maz/arm-platforms into devel-stableRussell King2012-09-30
| | * | | ARM: virt: arch_timers: enable access to physical timersMarc Zyngier2012-09-19
    If booting in HYP mode, it makes sense to enable the use of the physical timers, so the kernel can use them directly.

    Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
| | * | | ARM: virt: Add boot-time diagnosticsDave Martin2012-09-19
    In order to easily detect pathological cases, print some diagnostics when the kernel boots. This also provides helpers to detect that HYP mode is actually available, which can be used by other subsystems to enable HYP specific features.

    Signed-off-by: Dave Martin <dave.martin@linaro.org>
    Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
| | * | | ARM: zImage/virt: hyp mode entry support for the zImage loaderDave Martin2012-09-19
    The zImage loader needs to turn on the MMU in order to take advantage of caching while decompressing the zImage. Running this in hyp mode would require the LPAE pagetable format to be supported; to avoid this complexity, this patch switches out of hyp mode, and returns back to hyp mode just before booting the kernel.

    This implementation assumes that the Hyp mode view of memory and the PL1 view of memory are coherent, providing that the MMU and caches are off in both, as required by the boot protocol. The zImage decompression code must drain the write buffer on completion anyway, and entry into Hyp mode should flush any prefetch buffer, avoiding hazards associated with local write buffers and the pipeline.

    Signed-off-by: Dave Martin <dave.martin@linaro.org>
    Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
| | * | | ARM: virt: allow the kernel to be entered in HYP modeDave Martin2012-09-19
    This patch does two things:

    * Ensure that asynchronous aborts are masked at kernel entry. The bootloader should be masking these anyway, but this reduces the damage window just in case it doesn't.

    * Enter svc mode via exception return to ensure that CPU state is properly serialised. This does not matter when switching from an ordinary privileged mode ("PL1" modes in ARMv7-AR rev C parlance), but it potentially does matter when switching from another privileged mode such as hyp mode.

    This should allow the kernel to boot safely either from svc mode or hyp mode, even if no support for use of the ARM Virtualization Extensions is built into the kernel.

    Signed-off-by: Dave Martin <dave.martin@linaro.org>
    Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
| * | | | Merge branch 'cache-louis' of git://linux-arm.org/linux-2.6-lp into devel-stableRussell King2012-09-26
| |\ \ \ \
| | * | | | ARM: kernel: update __cpu_disable to use cache LoUIS maintenance APILorenzo Pieralisi2012-09-25
    When a CPU is hotplugged out caches that reside in its power domain lose their contents and so must be cleaned to the next memory level. Currently, __cpu_disable calls flush_cache_all() that for new generation processors like A15/A7 ends up cleaning and invalidating all cache levels up to Level of Coherency, which includes the unified L2. This ends up being a waste of cycles since the L2 cache contents are not lost on power down.

    This patch updates __cpu_disable to use the new LoUIS API cache operations.

    Acked-by: Nicolas Pitre <nico@linaro.org>
    Reviewed-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
    Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
    Tested-by: Shawn Guo <shawn.guo@linaro.org>
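
    The substance of the change in __cpu_disable(), sketched (flush_cache_louis() is the LoUIS helper this series introduces; the rest of the hotplug teardown is elided):

        int __cpu_disable(void)
        {
                /* ... migrate IRQs away, mark the CPU offline ... */

                /* Clean/invalidate only up to the Level of Unification Inner
                 * Shareable; the L2 keeps its contents across the power-down. */
                flush_cache_louis();    /* was: flush_cache_all() */

                /* ... flush TLBs, clear this CPU from mm_cpumask()s ... */
                return 0;
        }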
| | * | | | ARM: kernel: update cpu_suspend code to use cache LoUIS operationsLorenzo Pieralisi2012-09-25
    In processors like A15/A7 L2 cache is unified and integrated within the processor cache hierarchy, so that it is not considered an outer cache anymore. For processors like A15/A7 flush_cache_all() ends up cleaning all cache levels up to Level of Coherency (LoC) that includes the L2 unified cache.

    When a single CPU is suspended (CPU idle) a complete L2 clean is not required, so generic cpu_suspend code must clean the data cache using the newly introduced cache LoUIS function.

    The context and stack pointer (context pointer) are cleaned to main memory using cache area functions that operate on MVA and guarantee that the data is written back to main memory (perform cache cleaning up to the Point of Coherency - PoC) so that the processor can fetch the context when the MMU is off in the cpu_resume code path.

    outer_cache management remains unchanged.

    Reviewed-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
    Reviewed-by: Nicolas Pitre <nico@linaro.org>
    Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
    Tested-by: Shawn Guo <shawn.guo@linaro.org>
* | | | | | Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/signalLinus Torvalds2012-10-11
    Pull pile 2 of execve and kernel_thread unification work from Al Viro:
     "Stuff in there: kernel_thread/kernel_execve/sys_execve conversions for several more architectures plus assorted signal fixes and cleanups.

      There'll be more (in particular, real fixes for the alpha do_notify_resume() irq mess)..."

    * 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/signal: (43 commits)
      alpha: don't open-code trace_report_syscall_{enter,exit}
      Uninclude linux/freezer.h
      m32r: trim masks
      avr32: trim masks
      tile: don't bother with SIGTRAP in setup_frame
      microblaze: don't bother with SIGTRAP in setup_rt_frame()
      mn10300: don't bother with SIGTRAP in setup_frame()
      frv: no need to raise SIGTRAP in setup_frame()
      x86: get rid of duplicate code in case of CONFIG_VM86
      unicore32: remove pointless test
      h8300: trim _TIF_WORK_MASK
      parisc: decide whether to go to slow path (tracesys) based on thread flags
      parisc: don't bother looping in do_signal()
      parisc: fix double restarts
      bury the rest of TIF_IRET
      sanitize tsk_is_polling()
      bury _TIF_RESTORE_SIGMASK
      unicore32: unobfuscate _TIF_WORK_MASK
      mips: NOTIFY_RESUME is not needed in TIF masks
      mips: merge the identical "return from syscall" per-ABI code
      ...

    Conflicts:
        arch/arm/include/asm/thread_info.h
| * | | | | Uninclude linux/freezer.hRichard Weinberger2012-10-01
    This include is no longer needed. (seems to be a leftover from try_to_freeze())

    Signed-off-by: Richard Weinberger <richard@nod.at>
    Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
* | | | | | Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/signalLinus Torvalds2012-10-09
    Pull generic execve() changes from Al Viro:
     "This introduces the generic kernel_thread() and kernel_execve() functions, and switches x86, arm, alpha, um and s390 over to them."

    * 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/signal: (26 commits)
      s390: convert to generic kernel_execve()
      s390: switch to generic kernel_thread()
      s390: fold kernel_thread_helper() into ret_from_fork()
      s390: fold execve_tail() into start_thread(), convert to generic sys_execve()
      um: switch to generic kernel_thread()
      x86, um/x86: switch to generic sys_execve and kernel_execve
      x86: split ret_from_fork
      alpha: introduce ret_from_kernel_execve(), switch to generic kernel_execve()
      alpha: switch to generic kernel_thread()
      alpha: switch to generic sys_execve()
      arm: get rid of execve wrapper, switch to generic execve() implementation
      arm: optimized current_pt_regs()
      arm: introduce ret_from_kernel_execve(), switch to generic kernel_execve()
      arm: split ret_from_fork, simplify kernel_thread() [based on patch by rmk]
      generic sys_execve()
      generic kernel_execve()
      new helper: current_pt_regs()
      preparation for generic kernel_thread()
      um: kill thread->forking
      um: let signal_delivered() do SIGTRAP on singlestepping into handler
      ...
| * | | | arm: get rid of execve wrapper, switch to generic execve() implementationAl Viro2012-09-30
    Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
| * | | | arm: introduce ret_from_kernel_execve(), switch to generic kernel_execve()Al Viro2012-09-30
    Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
| * | | | arm: split ret_from_fork, simplify kernel_thread() [based on patch by rmk]Al Viro2012-09-30
    Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
| * | | | don't bother exporting kernel_execve()Al Viro2012-09-20
    most of the architectures don't and there's not a single caller outside of core kernel.

    Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
* | | | Merge branch 'for-linus' of git://git.linaro.org/people/rmk/linux-armLinus Torvalds2012-10-07
    Pull ARM updates from Russell King:
     "This is the first chunk of ARM updates for this merge window. Conflicts are expected in two files - asm/timex.h and mach-integrator/integrator_cp.c. Nothing particularly stands out more than anything else. Most of the growth is down to the opcodes stuff from Dave Martin, which is countered by Rob's patches to use more of the asm-generic headers on ARM."

    (A few more conflicts grew since then, but it all looked fairly trivial)

    * 'for-linus' of git://git.linaro.org/people/rmk/linux-arm: (44 commits)
      ARM: 7548/1: include linux/sched.h in syscall.h
      ARM: 7541/1: Add ARM ERRATA 775420 workaround
      ARM: ensure vm_struct has its phys_addr member filled in
      ARM: 7540/1: kexec: Check segment memory addresses
      ARM: 7539/1: kexec: scan for dtb magic in segments
      ARM: 7538/1: delay: add registration mechanism for delay timer sources
      ARM: 7536/1: smp: Formalize an IPI for wakeup
      ARM: 7525/1: ptrace: use updated syscall number for syscall auditing
      ARM: 7524/1: support syscall tracing
      ARM: 7519/1: integrator: convert platform devices to Device Tree
      ARM: 7518/1: integrator: convert AMBA devices to device tree
      ARM: 7517/1: integrator: initial device tree support
      ARM: 7516/1: plat-versatile: add DT support to FPGA IRQ
      ARM: 7515/1: integrator: check PL010 base address from resource
      ARM: 7514/1: integrator: call common init function from machine
      ARM: 7522/1: arch_timers: register a time/cycle counter
      ARM: 7523/1: arch_timers: enable the use of the virtual timer
      ARM: 7531/1: mark kernelmode mem{cpy,set} non-experimental
      ARM: 7520/1: Build dtb files in all target
      ARM: Fix build warning in arch/arm/mm/alignment.c
      ...