path: root/arch/arm/kernel
Commit message / Author / Age
...
| | | | * | | ARM: hw_breakpoint: disable preemption during debug exception handling (Will Deacon, 2010-12-06)
    On ARM, debug exceptions occur in the form of data or prefetch aborts. One difference is that debug exceptions require access to per-cpu banked registers and data structures which are not saved in the low-level exception code. For kernels built with CONFIG_PREEMPT, there is an unlikely scenario where the debug handler ends up running on a different CPU from the one that originally signalled the event, resulting in random data being read from the wrong registers. This patch adds a debug_entry macro to the low-level exception handling code which checks whether the taken exception is a debug exception. If it is, the preempt count for the faulting process is incremented. After the debug handler has finished, the count is decremented.
    Acked-by: Catalin Marinas <catalin.marinas@arm.com>
    Signed-off-by: Will Deacon <will.deacon@arm.com>
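    A conceptual C rendering of the pairing may help; the real code is an assembly macro in entry-armv.S, and is_debug_fault() is a hypothetical test on the fault status register:

        #include <linux/preempt.h>

        /* On a debug exception, bump the preempt count so the handler
         * cannot be migrated to another CPU; debug_exit undoes it. */
        static inline void debug_entry(unsigned int fsr)
        {
                if (is_debug_fault(fsr))        /* hypothetical helper */
                        preempt_disable();
        }

        static inline void debug_exit(unsigned int fsr)
        {
                if (is_debug_fault(fsr))
                        preempt_enable_no_resched();
        }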
| | | | * | | ARM: hw_breakpoint: correct and simplify alignment fixup code (Will Deacon, 2010-12-06)
    The current hw_breakpoint code tries to fix up the alignment of breakpoints so that we can make use of sparse byte-address-select bits in the control register and give the illusion that we can set breakpoints on unaligned addresses. Although this works on v6 cores, v7 forbids this behaviour, instead requiring breakpoints to be set on aligned addresses and have contiguous byte-address-select ranges depending on the instruction set in use. For ARM the only supported size is 4 bytes, whilst Thumb-2 also permits 2 byte breakpoints (watchpoints can be 1, 2, 4 or 8 bytes long). This patch simplifies the alignment fixup code so that we require addresses to be aligned to the size of the corresponding breakpoint. This allows us to handle the common case of breaking on a half-word aligned Thumb-2 instruction and also allows us to set byte watchpoints on arbitrary addresses.
    Signed-off-by: Will Deacon <will.deacon@arm.com>
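    A minimal sketch of the simplified policy (function name illustrative): an address must simply be aligned to the length of the breakpoint or watchpoint placed on it:

        #include <linux/errno.h>

        /* len is 1, 2, 4 or 8; reject addresses not aligned to it */
        static int check_bp_alignment(unsigned long addr, unsigned int len)
        {
                if (addr & (len - 1))
                        return -EINVAL;
                return 0;
        }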
| | | | * | | ARM: hw_breakpoint: reset control registers in hotplug path (Will Deacon, 2010-12-06)
    The ARMv7 debug architecture doesn't make any guarantees about the contents of debug control registers following a debug logic reset. This patch ensures that we reset the control registers when a cpu comes online (for example, with hotplug) so that when we enable monitor mode while inserting a breakpoint we won't exhibit random behaviour.
    Signed-off-by: Will Deacon <will.deacon@arm.com>
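    A sketch of the reset, assuming helpers shaped like those in hw_breakpoint.c (write_wb_reg(), ARM_BASE_BCR/ARM_BASE_WCR, core_num_brps/core_num_wrps); it would be run on each CPU as it comes online, e.g. from a CPU notifier:

        static void reset_ctrl_regs(void *unused)
        {
                int i;

                for (i = 0; i < core_num_brps; ++i)
                        write_wb_reg(ARM_BASE_BCR + i, 0UL);    /* disable */
                for (i = 0; i < core_num_wrps; ++i)
                        write_wb_reg(ARM_BASE_WCR + i, 0UL);    /* disable */
        }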
| | | | * | | ARM: hw_breakpoint: ensure OS lock is clear before writing to debug registers (Will Deacon, 2010-12-06)
    ARMv7 architects a system for saving and restoring the debug registers across low-power modes. At the heart of this system is a lock register which, when set, forbids writes to the debug registers. While locked, writes to debug registers via the co-processor interface will result in undefined instruction traps. Linux currently doesn't make use of this feature because we update the debug registers on context switch anyway, however the status of the lock is IMPLEMENTATION DEFINED on reset. This patch ensures that the lock is cleared during boot so that we can write to the debug registers safely.
    Signed-off-by: Will Deacon <will.deacon@arm.com>
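    A sketch of the unlock, assuming the ARMv7 cp14 debug interface: DBGOSLAR sits at p14, 0, c1, c0, 4, and writing any value other than the architected key 0xC5ACCE55 clears the lock (isb() is the kernel's instruction-sync barrier macro):

        static void clear_os_lock(void)
        {
                /* 0 is not the lock key, so this unlocks */
                asm volatile("mcr p14, 0, %0, c1, c0, 4" : : "r" (0));
                isb();  /* unlock takes effect before later debug writes */
        }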
| | | * | | | ARM: 6521/1: perf: use raw_spinlock_t for pmu_lock (Will Deacon, 2010-12-04)
    For kernels built with PREEMPT_RT, critical sections protected by standard spinlocks are preemptible. This is not acceptable for perf as (a) we may be scheduled onto a different CPU whilst reading/writing banked PMU registers and (b) the latency when reading the PMU registers becomes unpredictable. This patch upgrades the pmu_lock spinlock to a raw_spinlock instead.
    Reported-by: Jamie Iles <jamie@jamieiles.com>
    Signed-off-by: Will Deacon <will.deacon@arm.com>
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
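    The shape of the change, sketched (the caller is illustrative); a raw_spinlock_t stays a true busy-waiting spinlock even under PREEMPT_RT:

        #include <linux/spinlock.h>

        static DEFINE_RAW_SPINLOCK(pmu_lock);   /* was DEFINE_SPINLOCK() */

        static void pmu_update_reg(void)
        {
                unsigned long flags;

                raw_spin_lock_irqsave(&pmu_lock, flags);
                /* ... read/write banked PMU registers ... */
                raw_spin_unlock_irqrestore(&pmu_lock, flags);
        }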
| | | * | | | ARM: 6512/1: perf: fix warnings generated by sparse (Will Deacon, 2010-12-04)
    Russell reported a number of warnings coming from sparse when checking the ARM perf_event.c files:
    | perf_event.c seems to also have problems too:
    |
    |   CHECK   arch/arm/kernel/perf_event.c
    | arch/arm/kernel/perf_event.c:37:1: warning: symbol 'pmu_lock' was not declared. Should it be static?
    | arch/arm/kernel/perf_event.c:70:1: warning: symbol 'cpu_hw_events' was not declared. Should it be static?
    | arch/arm/kernel/perf_event.c:1006:1: warning: symbol 'armv6pmu_enable_event' was not declared. Should it be static?
    | arch/arm/kernel/perf_event.c:1113:1: warning: symbol 'armv6pmu_stop' was not declared. Should it be static?
    | arch/arm/kernel/perf_event.c:1956:6: warning: symbol 'armv7pmu_enable_event' was not declared. Should it be static?
    | arch/arm/kernel/perf_event.c:3072:14: warning: incorrect type in argument 1 (different address spaces)
    | arch/arm/kernel/perf_event.c:3072:14:    expected void const volatile [noderef] <asn:1>*<noident>
    | arch/arm/kernel/perf_event.c:3072:14:    got struct frame_tail *tail
    | arch/arm/kernel/perf_event.c:3074:49: warning: incorrect type in argument 2 (different address spaces)
    | arch/arm/kernel/perf_event.c:3074:49:    expected void const [noderef] <asn:1>*from
    | arch/arm/kernel/perf_event.c:3074:49:    got struct frame_tail *tail
    This patch resolves these issues so we can live in silence again.
    Reported-by: Russell King <rmk+kernel@arm.linux.org.uk>
    Signed-off-by: Will Deacon <will.deacon@arm.com>
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
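    The flavour of the fixes, sketched (prototype shape inferred from the warning text above, so treat the details as assumptions):

        /* file-local symbols gain static linkage ... */
        static DEFINE_SPINLOCK(pmu_lock);

        /* ... and user pointers gain the __user address-space
         * annotation so sparse can check accesses to them */
        static struct frame_tail __user *
        user_backtrace(struct frame_tail __user *tail,
                       struct perf_callchain_entry *entry);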
| | * | | | | Merge branch 'pgt' (early part) into devel (Russell King, 2011-01-06)
| | | * | | | ARM: pgtable: collect up identity mapping functions (Russell King, 2010-12-22)
    We have two places where we create identity mappings - one when we bring secondary CPUs online, and one where we set up some mappings for soft-reboot. Combine these two into a single implementation. Also collect the identity mapping deletion function.
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
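    A simplified sketch of what an identity (VA == PA) mapping amounts to; set_section_entry() is a hypothetical stand-in for the pmd manipulation in arch/arm/mm/idmap.c, and pre-ARMv6 quirks are glossed over:

        static void identity_map_range(pgd_t *pgd, unsigned long addr,
                                       unsigned long end)
        {
                unsigned long prot = PMD_TYPE_SECT | PMD_SECT_AP_WRITE;

                for (; addr < end; addr += SECTION_SIZE)
                        set_section_entry(pgd, addr, addr | prot); /* VA == PA */
        }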
| | | * | | | ARM: pgtable: remove L2 cache flushes for SMP page table bring-up (Russell King, 2010-12-22)
    The MMU is always configured to read page tables from the L2 cache so there's little point flushing them out of the L2 cache back to RAM. Remove these flushes.
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
| | | * | | | ARM: pgtable: directly pass pgd/pmd/pte to their error functions (Russell King, 2010-11-26)
    Rather than passing the pte value to __pte_error, pass the raw pte_t cookie instead. Do the same for the pmd and pgd functions.
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
| | * | | | Merge branch 'misc' into devel (Russell King, 2011-01-06)
    Conflicts:
        arch/arm/Kconfig
        arch/arm/common/Makefile
        arch/arm/kernel/Makefile
        arch/arm/kernel/smp.c
| | | * \ \ \ Merge branch 'smp' into misc (Russell King, 2011-01-06)
    Conflicts:
        arch/arm/kernel/entry-armv.S
        arch/arm/mm/ioremap.c
| | | | * | | | ARM: SMP: ensure frame pointer is reinitialized for soft-CPU hotplug (Russell King, 2010-12-20)
    When we soft-CPU hotplug a CPU, we reset the stack pointer and jump back to start_secondary(). This allows us to restart as if the CPU was actually reset. However, we weren't resetting the frame pointer, which could cause problems with backtracing. Reset the frame pointer to zero (which means no parent frame) just like the early assembly code also does.
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
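    A sketch of the resume path in the mach-level hotplug code; the exact asm shape here is an assumption based on the description, with "mov fp, #0" being the fix:

        static void reenter_secondary(void)
        {
                __asm__("mov    sp, %0\n"
                "       mov     fp, #0\n"       /* no parent frame */
                "       b       secondary_start_kernel"
                        :
                        : "r" (task_stack_page(current) + THREAD_SIZE - 8));
        }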
| | | | * | | | ARM: SMP: split out software TLB maintenance broadcasting (Russell King, 2010-12-20)
    smp.c is becoming too large, so split out the TLB maintenance broadcasting into a separate smp_tlb.c file.
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
| | | | * | | | ARM: localtimer: clean up local timer on hot unplug (Russell King, 2010-12-20)
    When a CPU is hot unplugged, the generic tick code cleans up the clock event device, but fails to call down to the device's set_mode function to actually shut the device down. To work around this, we've historically had a local_timer_stop() callback out of the hotplug code. However, this adds needless complexity when we have the clock event device itself available. Explicitly call the clock event device's set_mode function with CLOCK_EVT_MODE_UNUSED, so that the hardware can be cleanly shutdown without any special external callbacks. When/if the generic code is fixed, percpu_timer_stop() can be killed off.
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
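    A sketch of the explicit shutdown, assuming percpu_clockevent names the per-cpu tick device and the clock_event_device API of this era:

        static void percpu_timer_stop(void)
        {
                unsigned int cpu = smp_processor_id();
                struct clock_event_device *evt =
                        &per_cpu(percpu_clockevent, cpu);

                evt->set_mode(CLOCK_EVT_MODE_UNUSED, evt);
        }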
| | | | * | | | ARM: smp: improve CPU bringup failure diagnostics (Russell King, 2010-12-20)
    We used to print a bland error message which gave no clue as to the failure when we failed to bring up a secondary CPU. Resolve this by separating the two failure cases. If boot_secondary() fails, we print a message indicating the error code it returned:
        "CPU%u: failed to boot: %d\n", cpu, ret
    However, if boot_secondary() succeeded but the CPU did not appear to mark itself online within the timeout, indicate that it failed to come online:
        "CPU%u: failed to come online\n", cpu
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
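    An illustrative shape of the split reporting in the bringup path; the function name and timeout value are examples, not the exact code:

        #include <linux/delay.h>
        #include <linux/jiffies.h>

        static int example_cpu_up(unsigned int cpu, struct task_struct *idle)
        {
                unsigned long timeout;
                int ret = boot_secondary(cpu, idle);

                if (ret) {
                        pr_err("CPU%u: failed to boot: %d\n", cpu, ret);
                        return ret;
                }

                timeout = jiffies + HZ;         /* give it about a second */
                while (!cpu_online(cpu) && time_before(jiffies, timeout))
                        udelay(10);

                if (!cpu_online(cpu)) {
                        pr_err("CPU%u: failed to come online\n", cpu);
                        return -EIO;
                }
                return 0;
        }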
| | | | * | | | ARM: 6516/1: Allow SMP_ON_UP to work with Thumb-2 kernels. (Dave Martin, 2010-12-20)
    * __fixup_smp_on_up has been modified with support for the THUMB2_KERNEL case. For THUMB2_KERNEL only, fixups are split into halfwords in case of misalignment, since we can't rely on unaligned accesses working before turning the MMU on. No attempt is made to optimise the aligned case, since the number of fixups is typically small, and it seems best to keep the code as simple as possible.
    * Add a rotate in the fixup_smp code in order to support CPU_BIG_ENDIAN, as suggested by Nicolas Pitre.
    * Add an assembly-time sanity-check to ALT_UP() to ensure that the content really is the right size (4 bytes). (No check is done for ALT_SMP(). Possibly, this could be fixed by splitting the two uses of ALT_SMP() (ALT_SMP...SMP_UP versus ALT_SMP...SMP_UP_B) into two macros. In the first case, ALT_SMP needs to expand to >= 4 bytes, not == 4.)
    * smp_mpidr.h (which implements ALT_SMP()/ALT_UP() manually due to macro limitations) has not been modified: the affected instruction (mov) has no 16-bit encoding, so the correct instruction size is satisfied in this case.
    * A "mode" parameter has been added to smp_dmb:
        smp_dmb arm   @ assumes 4-byte instructions (for ARM code, e.g. kuser)
        smp_dmb       @ uses W() to ensure 4-byte instructions for ALT_SMP()
      This avoids assembly failures due to use of W() inside smp_dmb when assembling pure-ARM code in the vectors page. There might be a better way to achieve this.
    * Kconfig: make SMP_ON_UP depend on (!THUMB2_KERNEL || !BIG_ENDIAN), i.e. THUMB2_KERNEL is now supported, but only if !BIG_ENDIAN (the fixup code for Thumb-2 currently assumes little-endian order).
    Tested using a single generic realview kernel on:
        ARM RealView PB-A8 (CONFIG_THUMB2_KERNEL={n,y})
        ARM RealView PBX-A9 (SMP)
    Signed-off-by: Dave Martin <dave.martin@linaro.org>
    Acked-by: Nicolas Pitre <nicolas.pitre@linaro.org>
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
| | | | * | | | ARM: CPU hotplug: ensure correct ordering of unplug (Russell King, 2010-12-20)
    Don't call idle_task_exit() with interrupts disabled, and ensure that we have a memory barrier after interrupts are disabled but before signalling that this CPU has shut down.
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
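    A sketch of the corrected ordering; cpu_died is the completion described in the next entry, platform_cpu_die() is the ARM platform hook, and the function name is illustrative:

        static void example_cpu_die(void)
        {
                idle_task_exit();       /* still has IRQs enabled here */

                local_irq_disable();
                mb();                   /* prior stores visible before we
                                           advertise this CPU as down */

                complete(&cpu_died);
                platform_cpu_die(smp_processor_id());
        }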
| | | | * | | | ARM: CPU hotplug: move cpu_killed completion to core code (Russell King, 2010-12-20)
    We always need to wait for the dying CPU to reach a safe state before taking it down, irrespective of the requirements of the platform. Move the completion code into the ARM SMP hotplug code rather than having each platform re-implement this.
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
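    A sketch of the core-side wait; the completion name and the timeout are illustrative, platform_cpu_kill() is the existing per-platform hook:

        static DECLARE_COMPLETION(cpu_died);

        void __cpu_die(unsigned int cpu)
        {
                if (!wait_for_completion_timeout(&cpu_died,
                                                 msecs_to_jiffies(5000)))
                        pr_err("CPU%u: cpu didn't die\n", cpu);

                if (!platform_cpu_kill(cpu))
                        pr_err("CPU%u: unable to kill\n", cpu);
        }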
| | | | * | | | ARM: SMP: consolidate trace_hardirqs_off() into common SMP code (Russell King, 2010-12-20)
    All platforms call trace_hardirqs_off() in their secondary startup code, so move this into the core SMP code - it doesn't need to be in the per-platform code.
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
| | | | * | | | ARM: SMP: consolidate the common parts of smp_prepare_cpus() (Russell King, 2010-12-20)
    There is a certain amount of smp_prepare_cpus() which doesn't belong in the platform support code - that is, code which is invariant to the SMP implementation. Move this code into arch/arm/kernel/smp.c, and add a platform_ prefix to the original function.
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
| | | | * | | | ARM: SMP: ensure smp_send_stop() waits for CPUs to stop (Russell King, 2010-12-20)
    Wait for CPUs to indicate that they've stopped, after sending the stop IPI, rather than blindly continuing on and hoping that they've stopped in time. Print a warning if we fail to stop the other CPUs.
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
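    A sketch close to the description: send IPI_CPU_STOP to everyone else, then give them about a second to leave the online mask:

        #include <linux/delay.h>

        void smp_send_stop(void)
        {
                unsigned long timeout = USEC_PER_SEC;   /* ~1s bounded wait */
                struct cpumask mask;

                cpumask_copy(&mask, cpu_online_mask);
                cpumask_clear_cpu(smp_processor_id(), &mask);
                smp_cross_call(&mask, IPI_CPU_STOP);

                while (num_online_cpus() > 1 && timeout--)
                        udelay(1);

                if (num_online_cpus() > 1)
                        printk(KERN_WARNING
                               "SMP: failed to stop secondary CPUs\n");
        }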
| | | | * | | | ARM: SMP: use more sane register allocation for __fixup_smp_on_up (Russell King, 2010-12-20)
    Use r0,r3-r6 rather than r0,r3,r4,r6,r7, which makes it easier to understand which registers can be modified. Also document which registers hold values which must be preserved.
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
| | | | * | | | ARM: SMP: collect IPI and local timer IRQs for /proc/stat (Russell King, 2010-12-20)
    The IPI and local timer interrupts weren't being properly accounted for in /proc/stat. Collect them from the irq_stat structure, and return their sum.
    Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
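    A sketch of the summing, using the __get_irq_stat() accessor introduced further down this log (counter field names assumed from the related entries):

        u64 smp_irq_stat_cpu(unsigned int cpu)
        {
                u64 sum = 0;
                int i;

                for (i = 0; i < NR_IPI; i++)
                        sum += __get_irq_stat(cpu, ipi_irqs[i]);

        #ifdef CONFIG_LOCAL_TIMERS
                sum += __get_irq_stat(cpu, local_timer_irqs);
        #endif
                return sum;
        }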
| | | | * | | | ARM: SMP: provide individual IPI interrupt statistics (Russell King, 2010-12-20)
    This separates out the individual IPI interrupt counts from the total IPI count, which allows better visibility of what IPIs are being used for.
    Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
| | | | * | | | ARM: fix /proc/interrupts formatting (Russell King, 2010-12-20)
    As per x86, align the initial column according to how many IRQs we have. Also, provide an English explanation for the 'LOC:' and 'IPI:' lines.
    Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
| | | | * | | | ARM: SMP: move ipi_count into irq_stat structure (Russell King, 2010-12-20)
    Move the ipi_count into irq_stat, which allows the ipi_data structure to be entirely removed.
    Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
| | | | * | | | ARM: SMP: provide accessors for irq_stat data (Russell King, 2010-12-20)
    Provide __inc_irq_stat() and __get_irq_stat() to increment and read the irq stat counters.
    Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
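    The accessors are thin wrappers over the existing __IRQ_STAT() macro; a sketch of their likely shape in <asm/hardirq.h>:

        #define __inc_irq_stat(cpu, member)     __IRQ_STAT(cpu, member)++
        #define __get_irq_stat(cpu, member)     __IRQ_STAT(cpu, member)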
| | | | * | | | ARM: include local timer irq stats only when local timers configured (Russell King, 2010-12-20)
    Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
| | | | * | | | ARM: SMP: remove send_ipi_message() (Russell King, 2010-12-20)
    send_ipi_message() does nothing except call smp_cross_call(). As this is a static function, nothing external to this file calls it, so we can easily clean up this now unnecessary indirection.
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
| | | | * | | | ARM: SMP: remove IRQ-disabling for smp_cross_call() (Russell King, 2010-12-03)
    As we've now removed the spinlock and bitmask, we have nothing left which requires interrupts to be disabled when sending an IPI. All current IPI-sending implementations use the GIC, which also does not require interrupts disabled when calling gic_raise_softirq(). Remove the now unnecessary IRQ disable.
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
| | | | * | | | ARM: SMP: avoid using bitmasks and locks for IPIs, use hardware instead (Russell King, 2010-12-03)
    Avoid using bitmasks and locks in the percpu area for IPIs, and instead use individual software generated interrupts to identify the reason for the IPI. This avoids the problems of having spinlocks in the percpu area.
    Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
| | | | * | | | ARM: SMP: pass an ipi number to smp_cross_call() (Russell King, 2010-12-03)
    This allows us to use smp_cross_call() to trigger a number of different software generated interrupts, rather than combining them all on one SGI. Recover the SGI number via do_IPI.
    Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
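    A sketch of the scheme: per-purpose IPI numbers mapped straight onto hardware SGIs and recovered in do_IPI() (the exact enumerator values are illustrative):

        enum ipi_msg_type {
                IPI_TIMER = 2,
                IPI_RESCHEDULE,
                IPI_CALL_FUNC,
                IPI_CALL_FUNC_SINGLE,
                IPI_CPU_STOP,
        };

        void smp_send_reschedule(int cpu)
        {
                smp_cross_call(cpumask_of(cpu), IPI_RESCHEDULE);
        }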
| | | * | | | ARM: provide an early platform initialization hook (Russell King, 2010-12-24)
    This allows platforms to hook into the initialization early to set up things like scheduler clocks, etc.
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
| | | * | | | ARM: simplify early machine init hooks (Russell King, 2010-12-24)
    Rather than storing each machine init hook separately, store a pointer to the machine description record and dereference this instead. This pointer is only available while the init sections are present, which is not a problem as we only use it from init code.
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
| | | * | | | ARM: 6538/1: Subarch IRQ handler macros V3 (Magnus Damm, 2010-12-24)
    Per-subarch interrupt handler macros, V3. This patch breaks out code from the irq_handler macro into arch_irq_handler and arch_irq_handler_default. The macros are put in the header file "entry-macro-multi.S". The arch_irq_handler_default macro is designed to be used by irq_handler in entry-armv.S, while arch_irq_handler is suitable for per-subarch use.
    Signed-off-by: Magnus Damm <damm@opensource.se>
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
| | | * | | | ARM: 6532/1: Allow machine to specify its own IRQ handlers at run-time (eric miao, 2010-12-24)
    Normally, different ARM platforms have different ways to decode the IRQ hardware status and demultiplex to the corresponding IRQ handler. This is highly optimized by the macro irq_handler in entry-armv.S, and each machine defines its own macro to decode the IRQ number. However, this prevents multiple machine classes from being built into a single kernel. By allowing each machine to specify its own handler, and making the function pointer 'handle_arch_irq' point to it at run time, this can be solved. CONFIG_MULTI_IRQ_HANDLER is introduced to allow both solutions to work. Compared with the highly optimized macro irq_handler, the new function must be written with care not to lose too much performance, and the IPI stuff on SMP is expected to move to the provided arch IRQ handler as well. The assembly code to invoke handle_arch_irq is optimized by Russell King.
    Signed-off-by: Eric Miao <eric.miao@canonical.com>
    Acked-by: Nicolas Pitre <nicolas.pitre@linaro.org>
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
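    A sketch of the run-time hook under CONFIG_MULTI_IRQ_HANDLER; the setup helper below is illustrative, assuming the handler is carried in the machine description record:

        void (*handle_arch_irq)(struct pt_regs *);

        static void __init pick_irq_handler(const struct machine_desc *mdesc)
        {
                handle_arch_irq = mdesc->handle_irq;  /* per-machine handler */
        }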
| | | * | | | ARM: move high-usage mostly read variables in setup.c to __read_mostly (Russell King, 2010-12-05)
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
| | | * | | | ARM: implement support for read-mostly sections (Russell King, 2010-12-05)
    Our SMP implementation uses the MESI protocol, so grouping together data which is mostly only read means that we avoid unnecessary cache line bouncing when this data would otherwise share a cache line with frequently-written data. In other words, cache lines associated with read-mostly data are expected to spend most of their time in the shared state.
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
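    The mechanics, sketched (section name as conventionally used in <asm/cache.h>; the variable is illustrative):

        #define __read_mostly \
                __attribute__((__section__(".data..read_mostly")))

        static int boot_tunable __read_mostly;  /* written once at boot,
                                                   read on hot paths */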
| | | * | | | ARM: always build swp_emulate as ARMv7 (Russell King, 2010-11-26)
    swp_emulate is only used on ARMv7+, and includes ARMv7+ assembly instructions. Allow the assembler to accept ARMv7 instructions, but leave the compiler's code generation options alone.
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
| | | * | | | ARM: 6396/1: Add SWP/SWPB emulation for ARMv7 processors (Leif Lindholm, 2010-11-04)
    The SWP instruction was deprecated in the ARMv6 architecture, superseded by the LDREX/STREX family of instructions for load-linked/store-conditional operations. The ARMv7 multiprocessing extensions mandate that SWP/SWPB instructions are treated as undefined from reset, with the ability to enable them through the System Control Register SW bit. This patch adds the alternative solution to emulate the SWP and SWPB instructions using LDREX/STREX sequences, and log statistics to /proc/cpu/swp_emulation. To correctly deal with copy-on-write, it also modifies cpu_v7_set_pte_ext to change the mappings to privileged RO when user RO.
    Signed-off-by: Leif Lindholm <leif.lindholm@arm.com>
    Acked-by: Catalin Marinas <catalin.marinas@arm.com>
    Acked-by: Kirill A. Shutemov <kirill@shutemov.name>
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
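    A sketch of the emulation core: one SWP becomes an LDREX/STREX retry loop (user-pointer checking and fault fixup omitted for brevity):

        static unsigned int emulate_swp(unsigned int *mem, unsigned int newval)
        {
                unsigned int oldval, res;

                do {
                        asm volatile(
                                "ldrex  %0, [%3]\n\t"
                                "strex  %1, %2, [%3]"
                                : "=&r" (oldval), "=&r" (res)
                                : "r" (newval), "r" (mem)
                                : "memory");
                } while (res);          /* retry if exclusivity was lost */

                return oldval;          /* SWP returns the old value */
        }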
| | | * | | | ARM: 6384/1: Remove the domain switching on ARMv6k/v7 CPUs (Catalin Marinas, 2010-11-04)
    This patch removes the domain switching functionality via the set_fs and __switch_to functions on cores that have a TLS register. Currently, the ioremap and vmalloc areas share the same level 1 page tables and therefore have the same domain (DOMAIN_KERNEL). When the kernel domain is modified from Client to Manager (via the __set_fs or in the __switch_to function), the XN (eXecute Never) bit is overridden and newer CPUs can speculatively prefetch the ioremap'ed memory. Linux performs the kernel domain switching to allow user-specific functions (copy_to/from_user, get/put_user etc.) to access kernel memory. In order for these functions to work with the kernel domain set to Client, the patch modifies the LDRT/STRT and related instructions to the LDR/STR ones. The user pages access rights are also modified for kernel read-only access rather than read/write so that the copy-on-write mechanism still works. CPU_USE_DOMAINS gets disabled only if the hardware has a TLS register (CPU_32v6K is defined) since writing the TLS value to the high vectors page isn't possible. The user addresses passed to the kernel are checked by the access_ok() function so that they do not point to the kernel space.
    Tested-by: Anton Vorontsov <cbouatmailru@gmail.com>
    Cc: Tony Lindgren <tony@atomide.com>
    Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
| | * | | | | Merge branch 'clksrc' into devel (Russell King, 2011-01-05)
    Conflicts:
        arch/arm/mach-vexpress/v2m.c
        arch/arm/plat-omap/counter_32k.c
        arch/arm/plat-versatile/Makefile
| | | * | | | | ARM: sched_clock: provide common infrastructure for sched_clock() (Russell King, 2010-12-22)
    Provide common sched_clock() infrastructure for platforms to use to create a 64-bit ns based sched_clock() implementation from a counter running at a non-variable clock rate. This implementation is based upon maintaining an epoch for the counter and an epoch for the nanosecond time. When we desire a sched_clock() time, we calculate the number of counter ticks since the last epoch update, convert this to nanoseconds and add to the epoch nanoseconds. We regularly refresh these epochs within the counter wrap interval: we perform a similar calculation to the above, and store the new epochs. We read and write the epochs in such a way that sched_clock() can easily (and locklessly) detect when an update is in progress, and repeat the loading of these constants when they're known not to be stable. The one caveat is that sched_clock() must not be called in the middle of an update; we achieve that by disabling IRQs. Finally, if the clock rate is known at compile time, the counter to ns conversion factors can be specified, allowing sched_clock() to be tightly optimized. We ensure that these factors are correct by providing an initialization function which performs a run-time check.
    Acked-by: Peter Zijlstra <peterz@infradead.org>
    Tested-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
    Tested-by: Will Deacon <will.deacon@arm.com>
    Tested-by: Mikael Pettersson <mikpe@it.uu.se>
    Tested-by: Eric Miao <eric.y.miao@gmail.com>
    Tested-by: Olof Johansson <olof@lixom.net>
    Tested-by: Jamie Iles <jamie@jamieiles.com>
    Reviewed-by: Nicolas Pitre <nicolas.pitre@linaro.org>
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
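    The conversion at the heart of the scheme, sketched (mult/shift are the precomputed factors; the epoch handling is paraphrased in the trailing comment):

        static inline unsigned long long cyc_to_ns(unsigned long long cyc,
                                                   u32 mult, u32 shift)
        {
                return (cyc * mult) >> shift;
        }

        /* sched_clock() then returns, rereading the epochs if an update
         * is detected in progress:
         *      epoch_ns + cyc_to_ns((read_counter() - epoch_cyc) & mask,
         *                           mult, shift)
         */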
| | | * | | | Merge branch 'ftrace' of git://github.com/rabinv/linux-2.6 into devel-stable (Russell King, 2010-11-26)
| | | | * | | | ARM: ftrace: graph tracer + dynamic ftrace (Rabin Vincent, 2010-11-19)
    Support the graph tracer + dynamic ftrace combination on ARM.
    Signed-off-by: Rabin Vincent <rabin@rab.in>
| | | | * | | | ARM: ftrace: function graph tracer support (Tim Bird, 2010-11-19)
    Cc: Tim Bird <tim.bird@am.sony.com>
    [rabin@rab.in: rebase on top of latest code, keep code in ftrace.c instead of separate file, check for ftrace_graph_entry also]
    Signed-off-by: Rabin Vincent <rabin@rab.in>
| | | | * | | | ARM: ftrace: use gas macros to avoid code duplication (Rabin Vincent, 2010-11-19)
    Use assembler macros to avoid copy/pasting code between the implementations of the two variants of the mcount call.
    Signed-off-by: Rabin Vincent <rabin@rab.in>
| | | | * | | | ARM: place C irq handlers in IRQ_ENTRY for ftrace (Rabin Vincent, 2010-11-19)
    When FUNCTION_GRAPH_TRACER is enabled, place do_IRQ() and friends in the IRQ_ENTRY section so that the irq-related features of the function graph tracer work.
    Signed-off-by: Rabin Vincent <rabin@rab.in>
| | | * | | | ARM: perf: separate PMU backends into multiple files (Will Deacon, 2010-11-25)
    The ARM perf_event.c file contains all PMU backends and, as new PMUs are introduced, will continue to grow. This patch follows the example of x86 and splits the PMU implementations into separate files which are then #included back into the main file. Compile-time guards are added to each PMU file to avoid compiling in code that is not relevant for the version of the architecture which we are targeting.
    Acked-by: Jean Pihet <j-pihet@ti.com>
    Signed-off-by: Will Deacon <will.deacon@arm.com>
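    A sketch of the x86-style composition described above; each included file would guard itself with the relevant CONFIG_CPU_* option so irrelevant backends compile away:

        /* at the bottom of arch/arm/kernel/perf_event.c: */
        #include "perf_event_xscale.c"
        #include "perf_event_v6.c"
        #include "perf_event_v7.c"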
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | The ARM perf_event.c file contains all PMU backends and, as new PMUs are introduced, will continue to grow. This patch follows the example of x86 and splits the PMU implementations into separate files which are then #included back into the main file. Compile-time guards are added to each PMU file to avoid compiling in code that is not relevant for the version of the architecture which we are targetting. Acked-by: Jean Pihet <j-pihet@ti.com> Signed-off-by: Will Deacon <will.deacon@arm.com>