path: root/arch/arm/kernel
Commit message  Author  Age
...
| * | | ARM: 7587/1: implement optimized percpu variable access  Rob Herring  2012-12-03

    Use the previously unused TPIDRPRW register to store percpu offsets.
    TPIDRPRW is only accessible in PL1, so it can only be used in the kernel.

    This replaces 2 loads with a mrc instruction for each percpu variable
    access. With hackbench, the performance improvement is 1.4% on Cortex-A9
    (highbank). Taking an average of 30 runs of "hackbench -l 1000" yields:

    Before: 6.2191
    After:  6.1348

    Will Deacon reported a similar delta on v6 with 11MPCore.

    The asm "memory" clobber is needed here to ensure the percpu offset gets
    reloaded. Testing by Will found that this would not happen in
    __schedule(), which is a bit of a special case as preemption is disabled
    but execution can move cores.

    Signed-off-by: Rob Herring <rob.herring@calxeda.com>
    Acked-by: Will Deacon <will.deacon@arm.com>
    Acked-by: Nicolas Pitre <nico@linaro.org>
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
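    A minimal sketch of the accessor this describes, assuming the usual CP15
    encoding of TPIDRPRW (c13, c0, 4); the helper name is illustrative, not
    the exact patch:

        static inline unsigned long __my_cpu_offset(void)
        {
                unsigned long off;

                /* Read TPIDRPRW; the "memory" clobber forces the offset to be
                 * reloaded rather than cached across preemption points. */
                asm("mrc p15, 0, %0, c13, c0, 4" : "=r" (off) : : "memory");
                return off;
        }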
| * | | ARM: 7585/1: kernel: fix nr_cpu_ids check in DT logical map init  Lorenzo Pieralisi  2012-11-23

    If a kernel is configured with a DT containing more /cpu nodes than
    nr_cpu_ids, the number of cpus must be capped in the DT parsing code.
    Current code carries out the check, but fails to cap the value, and the
    check is executed after the cpu logical index is used, which can lead to
    memory corruption due to index overflow.

    This patch refactors the check against nr_cpu_ids and moves it before
    any computed index is used in the parsing code.

    Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
    Acked-by: Grant Likely <grant.likely@secretlab.ca>
    Reported-by: Mark Rutland <mark.rutland@arm.com>
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
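    A sketch of the kind of capping described above, performed before any
    computed index is used ("cpuidx" is a hypothetical running count of
    parsed /cpu nodes, not the exact variable name):

        if (cpuidx > nr_cpu_ids) {
                pr_warn("DT /cpu %u nodes greater than max cores %u, capping them\n",
                        cpuidx, nr_cpu_ids);
                cpuidx = nr_cpu_ids;    /* cap before indexing any array */
        }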
| * | | Merge branch 'bl-cpuinfo' of git://linux-arm.org/linux-2.6-lp into devel-stable  Russell King  2012-11-20
| |\ \ \
| | * | | ARM: kernel: update cpuinfo to print all online CPUs features  Lorenzo Pieralisi  2012-11-19

    Currently, reading /proc/cpuinfo provides userspace with the CPU ID of
    the CPU carrying out the read from the file. This is fine as long as all
    CPUs in the system are the same. With the advent of big.LITTLE and
    heterogeneous ARM systems this approach provides user space with
    incorrect bits of information, since CPU ids in the system might differ
    from the one provided by the CPU reading the file.

    This patch updates the cpuinfo show function so that a read from
    /proc/cpuinfo prints HW information for all online CPUs at once,
    mirroring x86 behaviour.

    Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
    Acked-by: Nicolas Pitre <nico@linaro.org>
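    A sketch of the reworked show function, assuming a per-CPU MIDR stash
    like the one described in the next entry; not the exact code:

        static int c_show(struct seq_file *m, void *v)
        {
                int i;

                for_each_online_cpu(i) {
                        u32 cpuid = per_cpu(cpu_data, i).cpuid;  /* stashed MIDR */

                        seq_printf(m, "processor\t: %d\n", i);
                        seq_printf(m, "CPU implementer\t: 0x%02x\n", cpuid >> 24);
                        seq_printf(m, "CPU part\t: 0x%03x\n", (cpuid >> 4) & 0xfff);
                        /* ... remaining feature lines derived from this CPU's ID ... */
                }
                return 0;
        }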
| | * | | ARM: kernel: add MIDR to per-CPU information data  Lorenzo Pieralisi  2012-11-19

    The advent of big.LITTLE ARM platforms requires the kernel to be able to
    identify the MIDRs of all online CPUs upon request. MIDRs are stashed at
    boot time so that kernel subsystems can detect the MIDR of online CPUs
    by simply retrieving per-CPU data updated by all booted CPUs.

    Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
    Acked-by: Nicolas Pitre <nico@linaro.org>
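    Conceptually, each CPU records its own MIDR as it boots; a hedged sketch
    with a hypothetical per-CPU slot and helper name:

        #include <linux/percpu.h>
        #include <asm/cputype.h>

        DEFINE_PER_CPU(unsigned int, cpu_midr);     /* hypothetical per-CPU slot */

        /* called by each CPU on itself during boot */
        static void store_cpu_midr(void)
        {
                per_cpu(cpu_midr, smp_processor_id()) = read_cpuid_id();
        }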
| * | | Merge branch 'cluster-boot-protocol' of git://linux-arm.org/linux-2.6-lp into devel-stable  Russell King  2012-11-20
| |\ \ \
| | * | | ARM: kernel: add cpu logical map DT init in setup_arch  Lorenzo Pieralisi  2012-11-19

    As soon as the device tree is unflattened, the cpu logical to physical
    mapping is carried out in setup_arch to build a proper array of MPIDRs
    and corresponding logical indexes.

    The mapping could have been carried out using the flattened DT blob and
    related primitives, but since the mapping is not needed by early boot
    code it can safely be executed when the device tree has been
    uncompressed to its tree data structure.

    This patch adds the arm_dt_init_cpu_maps() function call in setup_arch().
    If the kernel is not compiled with DT support the function is empty and
    no logical mapping takes place through it; the mapping carried out in
    smp_setup_processor_id() is left unchanged. If DT is supported, the
    mapping created in smp_setup_processor_id() is overridden. The DT
    mapping also sets the possible cpus mask, hence platform code need not
    set it again in the respective smp_init_cpus() functions.

    Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
    Acked-by: Will Deacon <will.deacon@arm.com>
    Acked-by: Nicolas Pitre <nico@linaro.org>
| | * | | ARM: kernel: add device tree init map function  Lorenzo Pieralisi  2012-11-19

    When booting through a device tree, the kernel cpu logical id map can be
    initialized using device tree data passed by FW or through an embedded
    blob.

    This patch adds a function that parses device tree "cpu" nodes and
    retrieves the corresponding CPUs hardware identifiers (MPIDR). It sets
    the possible cpus and the cpu logical map values according to the number
    of CPUs defined in the device tree and respective properties.

    The device tree HW identifiers are considered valid if all CPU nodes
    contain a "reg" property, there are no duplicate "reg" entries and the
    DT defines a CPU node whose "reg" property matches the MPIDR[23:0] of
    the boot CPU. The primary CPU is assigned cpu logical number 0 to keep
    the current convention valid.

    Current bindings documentation is included in the patch:
    Documentation/devicetree/bindings/arm/cpus.txt

    Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
    Acked-by: Nicolas Pitre <nico@linaro.org>
| | * | | ARM: kernel: smp_setup_processor_id() updates  Lorenzo Pieralisi  2012-11-19

    This patch applies some basic changes to the smp_setup_processor_id()
    ARM implementation to make the code that builds cpu_logical_map more
    uniform across the kernel.

    The function now prints the full extent of the boot CPU MPIDR[23:0] and
    initializes the cpu_logical_map for CPUs up to nr_cpu_ids.

    Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
    Acked-by: Nicolas Pitre <nico@linaro.org>
    Acked-by: Will Deacon <will.deacon@arm.com>
| | * | | ARM: kernel: update topology to use new MPIDR macros  Lorenzo Pieralisi  2012-11-19

    This patch updates the topology initialization code to use the newly
    defined accessors to retrieve the MPIDR affinity levels.

    Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
    Acked-by: Will Deacon <will.deacon@arm.com>
    Acked-by: Nicolas Pitre <nico@linaro.org>
| | * | | ARM: kernel: enhance MPIDR macro definitions  Lorenzo Pieralisi  2012-11-19

    Kernel subsystems other than the topology layer need the MPIDR mask
    definitions to access the MPIDR without relying on hardcoded masks. This
    patch moves the MPIDR register masks definition to a header file and
    defines a macro to simplify access to MPIDR bit fields representing
    affinity levels.

    Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
    Acked-by: Will Deacon <will.deacon@arm.com>
    Acked-by: Nicolas Pitre <nico@linaro.org>
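    The affinity-level accessor described above amounts to something like
    the following (a sketch of the pattern; Aff0/Aff1/Aff2 occupy MPIDR bits
    [7:0], [15:8] and [23:16]):

        #define MPIDR_HWID_BITMASK      0xFFFFFF
        #define MPIDR_LEVEL_BITS        8
        #define MPIDR_LEVEL_MASK        ((1 << MPIDR_LEVEL_BITS) - 1)

        /* extract affinity level 0, 1 or 2 from an MPIDR value */
        #define MPIDR_AFFINITY_LEVEL(mpidr, level) \
                (((mpidr) >> ((level) * MPIDR_LEVEL_BITS)) & MPIDR_LEVEL_MASK)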
| * | | Merge branch 'hw-breakpoint' of git://git.kernel.org/pub/scm/linux/kernel/git/will/linux into devel-stable  Russell King  2012-11-19
| |\ \ \
| | * | | ARM: hw_breakpoint: kill WARN_ONCE usage  Will Deacon  2012-11-09

    WARN_ONCE is a bit OTT for some of the simple failure cases encountered
    in hw_breakpoint, so use either pr_warning or pr_warn_once instead.

    Reported-by: Stephen Boyd <sboyd@codeaurora.org>
    Signed-off-by: Will Deacon <will.deacon@arm.com>
| | * | | ARM: hw_breakpoint: use CRn as argument for debug reg accessor macros  Dietmar Eggemann  2012-11-09

    The coprocessor register CRn for accesses to the debug registers can be
    a register other than c0. Take this into account in the ARM_DBG_READ and
    ARM_DBG_WRITE macros. The inline assembler calls which used a
    coprocessor register CRn other than c0 are replaced by the ARM_DBG_READ
    or ARM_DBG_WRITE macro.

    Tested-by: Stephen Boyd <sboyd@codeaurora.org>
    Signed-off-by: Dietmar Eggemann <dietmar.eggemann@arm.com>
    Signed-off-by: Will Deacon <will.deacon@arm.com>
| | * | | ARM: hw_breakpoint: check if monitor mode is enabled during validation  Will Deacon  2012-11-09

    Rather than attempt to enable monitor mode explicitly when scheduling in
    a breakpoint event (which could raise an undefined exception trap when
    accessing DBGDSCRext), instead check that DBGDSCRint.MDBGen is set
    during event validation and report an error to the caller if not.

    Signed-off-by: Will Deacon <will.deacon@arm.com>
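    A sketch of the kind of check described, assuming the ARMv7 DBGDSCRint
    encoding (cp14, c0, c1, 0) with MDBGen in bit 15; not the exact patch:

        #define ARM_DSCR_MDBGEN         (1 << 15)

        static int monitor_mode_enabled(void)
        {
                u32 dscr;

                /* internal (CP14) view of the DSCR; reading it does not
                 * require touching DBGDSCRext */
                asm volatile("mrc p14, 0, %0, c0, c1, 0" : "=r" (dscr));
                return !!(dscr & ARM_DSCR_MDBGEN);
        }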
| | * | | ARM: hw_breakpoint: make boot quieter without CPUID feature registers  Will Deacon  2012-11-09

    Booting on a v6 core without the CPUID feature registers (e.g. 1136)
    leads to a noisy dmesg complaining about their absence. This patch
    changes the pr_warning into a pr_warn_once to keep the log quieter.

    Signed-off-by: Will Deacon <will.deacon@arm.com>
| | * | | ARM: hw_breakpoint: don't try to clear v6 debug registers during boot  Will Deacon  2012-11-09

    v6 cores do not provide a way to clear the debug registers without first
    enabling monitor mode, meaning that we could take spurious debug
    exceptions. Instead, rely on the registers being in a sane state when we
    boot, as they are defined to be disabled out of reset anyway.

    Tested-by: Stephen Boyd <sboyd@codeaurora.org>
    Signed-off-by: Will Deacon <will.deacon@arm.com>
| | * | | ARM: hw_breakpoint: fix ordering of debug register reset sequence  Will Deacon  2012-11-09

    The debug register reset sequence for v7 and v7.1 is congruent with
    tap-dancing through a minefield. Rather than wait until we've blown
    ourselves to pieces, this patch instead checks the debug_err_mask after
    each potentially faulting operation. We also move the enabling of
    monitor_mode to the end of the sequence in order to prevent spurious
    debug events generated by UNKNOWN register values.

    Reported-by: Stephen Boyd <sboyd@codeaurora.org>
    Tested-by: Stephen Boyd <sboyd@codeaurora.org>
    Signed-off-by: Will Deacon <will.deacon@arm.com>
| | * | | ARM: hw_breakpoint: fix monitor mode detection with v7.1  Will Deacon  2012-11-09

    Detecting whether halting debug is enabled is no longer possible via the
    DBGDSCR in v7.1, which returns an UNKNOWN value for the HDBGen bit via
    CP14 when the OS lock is clear. This patch removes the halting mode
    check and ensures that accesses to the internal and external views of
    the DBGDSCR are serialised with an instruction barrier.

    Tested-by: Stephen Boyd <sboyd@codeaurora.org>
    Signed-off-by: Will Deacon <will.deacon@arm.com>
| | * | | ARM: hw_breakpoint: only clear OS lock when implemented on v7  Will Deacon  2012-11-09

    The OS save and restore registers are optional in debug architecture v7,
    so check the status register before attempting to clear the OS lock.

    Tested-by: Stephen Boyd <sboyd@codeaurora.org>
    Signed-off-by: Will Deacon <will.deacon@arm.com>
| * | | Merge branch 'perf/updates' of git://git.kernel.org/pub/scm/linux/kernel/git/will/linux into devel-stable  Russell King  2012-11-19
| |\ \ \
| | * | ARM: PMU: fix runtime PM enable  Jon Hunter  2012-11-09

    Commit 7be2958 (ARM: PMU: Add runtime PM Support) updated the ARM PMU
    code to use runtime PM, which was prototyped and validated on the OMAP
    devices. In this commit, there is no call to pm_runtime_enable(); for
    OMAP devices pm_runtime_enable() is currently being called from the OMAP
    PMU code when the PMU device is created. However, there are two problems
    with this:

    1. Any other ARM device wishing to use runtime PM for PMU will need to
       call pm_runtime_enable() for runtime PM to work.
    2. When booting with device-tree and using device-tree to create the PMU
       device, pm_runtime_enable() needs to be called from within the ARM
       PERF driver as we are no longer calling any device specific code to
       create the device. Hence, PMU does not work on OMAP devices that use
       the runtime PM callbacks when using device-tree to create the PMU
       device.

    Therefore, call pm_runtime_enable() directly from the ARM PMU driver
    when registering the device. For platforms that do not use runtime PM,
    pm_runtime_enable() does nothing; for platforms that do use runtime PM
    but may not require it specifically for PMU, this will just add a little
    overhead when initialising and uninitialising the PMU device.

    Tested with PERF on OMAP2420, OMAP3430 and OMAP4460.

    Acked-by: Kevin Hilman <khilman@ti.com>
    Acked-by: Tony Lindgren <tony@atomide.com>
    Signed-off-by: Jon Hunter <jon-hunter@ti.com>
    Signed-off-by: Will Deacon <will.deacon@arm.com>
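    In effect, the PMU platform driver enables runtime PM itself when the
    device is registered; a hedged sketch (probe function name illustrative):

        static int cpu_pmu_device_probe(struct platform_device *pdev)
        {
                /* ... match the PMU type from DT or platform data ... */

                /* Enable runtime PM here so DT-created PMU devices also work
                 * on platforms (e.g. OMAP) that rely on the runtime PM
                 * callbacks; a no-op on platforms without runtime PM. */
                pm_runtime_enable(&pdev->dev);

                /* ... probe counters and register the PMU with perf ... */
                return 0;
        }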
| | * | ARM: perf: consistently use arm_pmu->name for PMU name  Will Deacon  2012-11-09

    Perf has three ways to name a PMU: either by passing an explicit char *,
    reading arm_pmu->name or accessing arm_pmu->pmu.name. Just use
    arm_pmu->name consistently in the ARM backend.

    Signed-off-by: Will Deacon <will.deacon@arm.com>
| | * | ARM: perf: return NOTIFY_DONE from cpu notifier when no available PMU  Will Deacon  2012-11-09

    When attempting to reset the PMU state for either a NULL PMU or a PMU
    implementation without a reset function, return NOTIFY_DONE from the CPU
    notifier as we don't care about the hotplug event.

    Signed-off-by: Will Deacon <will.deacon@arm.com>
| | * | ARM: perf: register cpu_notifier at driver init  Mark Rutland  2012-11-09

    The current practice of registering the cpu hotplug notifier at PMU
    registration time won't be safe with multiple PMUs, as we'll repeatedly
    attempt to register the notifier. This has the unfortunate effect of
    silently corrupting the notifier list, leading to boot stalling.

    Instead, register the notifier at init time. Its sanity checks will
    prevent anything bad from happening if the notifier is called before we
    have any PMUs registered.

    Signed-off-by: Mark Rutland <mark.rutland@arm.com>
    Signed-off-by: Will Deacon <will.deacon@arm.com>
| | * | ARM: perf: check ARMv7 counter validity on a per-pmu basis  Sudeep KarkadaNagesha  2012-11-09

    Multi-cluster ARMv7 systems may have CPU PMUs with different numbers of
    counters. This patch updates armv7_pmnc_counter_valid so that it takes a
    pmu argument and checks the counter validity against that. We also
    remove a number of redundant counter checks where the current PMU is not
    easily retrievable.

    Signed-off-by: Sudeep KarkadaNagesha <Sudeep.KarkadaNagesha@arm.com>
    Signed-off-by: Will Deacon <will.deacon@arm.com>
| | * | ARM: perf: consistently use struct perf_event in arm_pmu functions  Sudeep KarkadaNagesha  2012-11-09

    The arm_pmu functions have wildly varied parameters which can often be
    derived from struct perf_event. This patch changes the arm_pmu function
    prototypes so that struct perf_event pointers are passed in preference
    to fields that can be derived from the event.

    Signed-off-by: Sudeep KarkadaNagesha <Sudeep.KarkadaNagesha@arm.com>
    Signed-off-by: Will Deacon <will.deacon@arm.com>
| | * | ARM: perf: allocate CPU PMU dynamically at probe time  Sudeep KarkadaNagesha  2012-11-09

    Supporting multiple, heterogeneous CPU PMUs requires us to allocate the
    arm_pmu structures dynamically as the devices are probed. This patch
    removes the static structure definitions for each CPU PMU type and
    instead passes pointers to the PMU-specific init functions.

    Signed-off-by: Sudeep KarkadaNagesha <Sudeep.KarkadaNagesha@arm.com>
    Signed-off-by: Will Deacon <will.deacon@arm.com>
| | * | ARM: perf: add guest vs host discrimination  Marc Zyngier  2012-11-09

    Add minimal guest support to perf, so it can distinguish whether the PMU
    interrupt was in the host or the guest, as well as collecting some very
    basic information (guest PC, user vs kernel mode). This is not feature
    complete though, as it doesn't support backtracing in the guest.

    Based on the x86 implementation, tested with KVM/ARM.

    Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
    Signed-off-by: Will Deacon <will.deacon@arm.com>
| | |
| \ \
| \ \
| \ \
*---. \ \ Merge branches 'cache-l2x0', 'fixes', 'hdrs', 'misc', 'mmci', 'vic' and 'warnings' into for-next  Russell King  2012-12-10
|\ \ \ \ \
| | * | | ARM: 7595/1: syscall: rework ordering in syscall_trace_exit  Will Deacon  2012-12-10

    syscall_trace_exit is currently doing things back-to-front; it invokes
    the audit hook *after* signalling the debugger, which presents an
    opportunity for the registers to be re-written by userspace in order to
    bypass auditing constraints.

    This patch fixes the ordering by moving the audit code first and the
    tracehook code last. On the face of it, it looks like
    current_thread_info()->syscall may be incorrect for the sys_exit
    tracepoint, but that's actually not an issue because it will have been
    set during syscall entry and cannot have changed since then.

    Reported-by: Andrew Gabbasov <Andrew_Gabbasov@mentor.com>
    Tested-by: Mark Rutland <mark.rutland@arm.com>
    Signed-off-by: Will Deacon <will.deacon@arm.com>
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
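    The corrected ordering looks roughly like this (a sketch using generic
    audit/tracehook helpers, not necessarily the exact patch):

        asmlinkage void syscall_trace_exit(struct pt_regs *regs)
        {
                /* Audit first, so a debugger rewriting registers afterwards
                 * cannot bypass the audit record. */
                audit_syscall_exit(regs);

                if (test_thread_flag(TIF_SYSCALL_TRACEPOINT))
                        trace_sys_exit(regs, regs_return_value(regs));

                /* Signal the debugger last. */
                if (test_thread_flag(TIF_SYSCALL_TRACE))
                        tracehook_report_syscall_exit(regs, 0);
        }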
| | * | | ARM: 7591/1: nommu: Enable the strict alignment (CR_A) bit only if ARCH < v6  Armando Visconti  2012-12-07

    This patch keeps the strict alignment CP15 bit disabled for all ARMv6
    and ARMv7 processors without an MMU. This behaviour is now the same as
    in the MMU case.

    Signed-off-by: Armando Visconti <armando.visconti@st.com>
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
| | * | | ARM: 7579/1: arch/allow a scno of -1 to not cause a SIGILL  Kees Cook  2012-11-19

    On tracehook-friendly platforms, a system call number of -1 falls
    through without running much code or taking much action. ARM is
    different. This adds a short-circuit check in the trace path to avoid
    any additional work, as suggested by Russell King, to make sure that ARM
    behaves the same way as other platforms.

    Signed-off-by: Kees Cook <keescook@chromium.org>
    Acked-by: Will Drewry <wad@chromium.org>
    Reviewed-by: Will Deacon <will.deacon@arm.com>
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
| | * | | ARM: 7578/1: arch/move secure_computing into trace  Kees Cook  2012-11-19

    There is very little difference in the TIF_SECCOMP and TIF_SYSCALL_WORK
    path in entry-common.S, so merge TIF_SECCOMP into TIF_SYSCALL_WORK and
    move seccomp into the syscall_trace_enter() handler.

    Expanded some of the tracehook logic into the callers to make this code
    more readable. Since tracehook needs to do register changing, this
    portion is best left in its own function instead of copy/pasting into
    the callers.

    Additionally, the return value for secure_computing() is now checked and
    a -1 value will result in the system call being skipped.

    Signed-off-by: Kees Cook <keescook@chromium.org>
    Acked-by: Will Drewry <wad@chromium.org>
    Reviewed-by: Will Deacon <will.deacon@arm.com>
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
| | * | | ARM: 7574/1: kernel/process.c: include idmap.h instead of redeclaring setup_mm_for_reboot()  Nicolas Pitre  2012-11-13

    Signed-off-by: Nicolas Pitre <nico@linaro.org>
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
| | * | | ARM: 7571/1: SMP: add function arch_send_wakeup_ipi_mask()  Shawn Guo  2012-11-13

    Add function arch_send_wakeup_ipi_mask(), so that platform code can use
    it as an easy way to wake up cores that are in WFI.

    Signed-off-by: Shawn Guo <shawn.guo@linaro.org>
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
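    The new hook is essentially a thin wrapper around the existing ARM
    cross-call machinery; a sketch assuming an IPI_WAKEUP message type:

        void arch_send_wakeup_ipi_mask(const struct cpumask *mask)
        {
                /* kick the selected cores out of WFI */
                smp_cross_call(mask, IPI_WAKEUP);
        }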
| | * | | ARM: 7568/1: Sort exception table at compile time  Stephen Boyd  2012-11-04

    Add the ARM machine identifier to sortextable and select the config
    option so that we can sort the exception table at compile time.
    sortextable relies on a section named __ex_table existing in the
    vmlinux, but ARM's linker script places the exception table in the data
    section. Give the exception table its own section so that sortextable
    can find it.

    This allows us to skip the sorting step during boot.

    Cc: David Daney <david.daney@cavium.com>
    Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
    Tested-by: Will Deacon <will.deacon@arm.com>
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
| | * | | ARM: 7563/1: SMP_TWD: make setup()/stop() reentrant  Linus Walleij  2012-11-04

    It has been brought to my knowledge that the .setup()/.stop() function
    pair in the SMP TWD is going to be called from atomic contexts for CPUs
    coming and going, and then the clk_prepare()/clk_unprepare() calls
    cannot be called on subsequent .setup()/.stop() iterations. This is
    however just the tip of an iceberg as the function pair is not designed
    to be reentrant at all.

    This change makes the SMP_TWD clock .setup()/.stop() pair reentrant by
    splitting the .setup() function in three parts:

    - One COMMON part that is executed the first time the first CPU in the
      TWD cluster is initialized. This will fetch the TWD clk for the
      cluster and prepare+enable it. If no clk is available it will
      calibrate the rate instead.
    - One part that is executed the FIRST TIME a certain CPU is brought
      on-line. This initializes and sets up the clock event for a certain
      CPU.
    - One part that is executed on every subsequent .setup() call. This will
      re-initialize the clock event. This is augmented to call the
      clk_enable()/clk_disable() pair properly.

    Cc: Shawn Guo <shawn.guo@linaro.org>
    Reported-by: Peter Chen <peter.chen@freescale.com>
    Reviewed-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
    Tested-by: Shawn Guo <shawn.guo@linaro.org>
    Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
| | * | | ARM: 7561/1: SMP_TWD: use clk_prepare_enable()  Linus Walleij  2012-11-04

    A minor code refactoring saving a few lines by merging prepare() and
    enable() calls.

    Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
| | * | | ARM: 7544/1: Add BUG_ON when hlt counter is wrongly used  fwu  2012-10-18

    1. On the ARM platform, "nohlt" can be used to prevent the core from
       entering the idle process, returning immediately.
    2. There are two interfaces exported for other modules, named
       "disable_hlt" and "enable_hlt", which are used to enable/disable the
       cpuidle mechanism by increasing/decreasing "hlt_counter".
       disable_hlt and enable_hlt are paired operations: when you first call
       disable_hlt and then enable_hlt, the semantics are right.
    3. There is no obvious constraint to prevent user (driver/module) code
       from calling enable_hlt ahead of disable_hlt, which is a fatal
       operation on kernel state change from user, and there is no WARNING
       or notification if that case happens in current kernel code. This
       patch aims to report a BUG when the case happens, just like what the
       kernel does when enable_irq is ahead of disable_irq.

    Link: https://patchwork.kernel.org/patch/1527881/

    Signed-off-by: fwu <fwu@marvell.com>
    Signed-off-by: YiLu Mao <ylmao@marvell.com>
    Signed-off-by: Ning Jiang <ning.jiang@marvell.com>
    Acked-by: Nicolas Pitre
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
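    The check amounts to catching a counter underflow, analogous to the
    enable_irq/disable_irq pairing; a minimal sketch:

        static int hlt_counter;

        void disable_hlt(void)
        {
                hlt_counter++;
        }

        void enable_hlt(void)
        {
                hlt_counter--;
                /* enable_hlt() called without a matching disable_hlt() */
                BUG_ON(hlt_counter < 0);
        }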
| * | | ARM: 7590/1: /proc/interrupts: limit the display of IPIs to online CPUs only  Nicolas Pitre  2012-12-07

    This is what is done for the regular interrupts in kernel/irqs/proc.c
    already, before calling arch_show_interrupts(). Not doing so for the
    IPIs causes the column headers not to match with the content whenever
    some CPUs are offline.

    Signed-off-by: Nicolas Pitre <nico@linaro.org>
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
| * | Merge branch 'fixes' of git://git.linaro.org/people/rmk/linux-arm  Linus Torvalds  2012-11-06
| |\ \

    Pull arm fixes from Russell King:
    "Not much here again. The two most notable things here are the
     sched_clock() fix, which was causing problems with the scheduling of
     threaded IRQs after a suspend event, and the vfp fix, which afaik has
     only been seen on some older OMAP boards. Nevertheless, both are fairly
     important fixes."

    * 'fixes' of git://git.linaro.org/people/rmk/linux-arm:
      ARM: 7569/1: mm: uninitialized warning corrections
      ARM: 7567/1: io: avoid GCC's offsettable addressing modes for halfword accesses
      ARM: 7566/1: vfp: fix save and restore when running on pre-VFPv3 and CONFIG_VFPv3 set
      ARM: 7565/1: sched: stop sched_clock() during suspend
| | * ARM: 7565/1: sched: stop sched_clock() during suspend  Felipe Balbi 2  2012-10-29

    The scheduler imposes a requirement on sched_clock(), which is to stop
    the clock during suspend; if we don't do that, any RT thread will be
    rescheduled in the future, which might cause any sort of problems.

    This became an issue on OMAP when we converted omap-i2c.c to use
    threaded IRQs: it turned out that depending on how much time we spent on
    suspend, the I2C IRQ thread would end up being rescheduled so far in the
    future that I2C transfers would timeout and, because omap_hsmmc depends
    on an I2C-connected device to detect if an MMC card is inserted in the
    slot, our rootfs would just vanish.

    arch/arm/kernel/sched_clock.c already had an optional implementation
    (sched_clock_needs_suspend()) which would handle the scheduler's
    requirement properly; what this patch does is simply make that
    implementation non-optional.

    Note that this has the side-effect that printk timings won't reflect the
    actual time spent on suspend, so other methods to measure that will have
    to be used.

    This has been tested with beagleboard XM (OMAP3630) and pandaboard rev
    A3 (OMAP4430). Suspend to RAM is now working after this patch.

    Thanks to Kevin Hilman for helping out with debugging.

    Acked-by: Kevin Hilman <khilman@ti.com>
    Acked-by: Linus Walleij <linus.walleij@linaro.org>
    Signed-off-by: Felipe Balbi <balbi@ti.com>
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
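    The behaviour described boils down to freezing the returned value across
    suspend; a sketch with hypothetical names (the real code lives in
    arch/arm/kernel/sched_clock.c):

        static bool suspended;          /* hypothetical flag */
        static u64 frozen_ns;           /* hypothetical: value latched at suspend */

        unsigned long long notrace sched_clock(void)
        {
                if (suspended)
                        return frozen_ns;       /* clock stands still in suspend */
                return read_raw_sched_clock_ns();       /* hypothetical reader */
        }

        static int arm_sched_clock_suspend(void)
        {
                frozen_ns = read_raw_sched_clock_ns();
                suspended = true;               /* always, no longer optional */
                return 0;
        }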
| * | Merge branch 'fixes' of git://git.linaro.org/people/rmk/linux-arm  Linus Torvalds  2012-10-25
| |\|

    Pull ARM fixes from Russell King:
    "A random collection of various fixes, mainly from Arnd and a few other
     people. Nothing really stands out here."

    * 'fixes' of git://git.linaro.org/people/rmk/linux-arm:
      ARM: drop experimental status for hotplug and Thumb2
      ARM: 7560/1: SMP_TWD: use DIV_ROUND_CLOSEST() for periodic mode
      ARM: 7559/1: smp: switch away from the idmap before updating init_mm.mm_count
      ARM: 7556/1: perf: fix updated event period in response to PERF_EVENT_IOC_PERIOD
      ARM: 7555/1: kexec: fix segment memory addresses check
      ARM: warnings in arch/arm/include/asm/uaccess.h
      ARM: binfmt_flat: unused variable 'persistent'
      ARM: be really quiet when building with 'make -s'
      ARM: pass -marm to gcc by default for both C and assembler
      ARM: Xen: fix initial build problems
      ARM: export default read_current_timer
      ARM: Fix another build warning in arch/arm/mm/alignment.c
      ARM: export set_irq_flags
      ARM: kprobes: make more tests conditional
| | * Merge tag 'fixes-for-rmk' of git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc into fixes  Russell King  2012-10-22
| | |\
| | | * ARM: export set_irq_flags  Arnd Bergmann  2012-10-09

    The recently added Emma Mobile GPIO driver calls set_irq_flags and
    irq_set_chip_and_handler for the interrupts it exports, and it can be
    built as a module, which currently fails with:

      ERROR: "set_irq_flags" [drivers/gpio/gpio-em.ko] undefined!

    We either need to replace the call to set_irq_flags with something else
    or export that function. This patch does the latter.

    Signed-off-by: Arnd Bergmann <arnd@arndb.de>
    Acked-by: Thomas Gleixner <tglx@linutronix.de>
    Cc: Magnus Damm <damm@opensource.se>
    Cc: Linus Walleij <linus.walleij@linaro.org>
    Cc: Rafael J. Wysocki <rjw@sisk.pl>
    Cc: Russell King <rmk+kernel@arm.linux.org.uk>
| | | * ARM: kprobes: make more tests conditional  Arnd Bergmann  2012-10-09

    The mls instruction is not available in ARMv6K or below, so we should
    make the test conditional on at least ARMv7. ldrexd/strexd are available
    in ARMv6K or ARMv7, which we can test by checking the CONFIG_CPU_32v6K
    symbol.

      /tmp/ccuMTZ8D.s: Assembler messages:
      /tmp/ccuMTZ8D.s:22188: Error: selected processor does not support ARM mode `mls r0,r1,r2,r3'
      /tmp/ccuMTZ8D.s:22222: Error: selected processor does not support ARM mode `mlshi r7,r8,r9,r10'
      /tmp/ccuMTZ8D.s:22252: Error: selected processor does not support ARM mode `mls lr,r1,r2,r13'

    Signed-off-by: Arnd Bergmann <arnd@arndb.de>
    Acked-by: Jon Medhurst <tixy@yxit.co.uk>
    Acked-by: Nicolas Pitre <nicolas.pitre@linaro.org>
    Cc: Russell King <rmk+kernel@arm.linux.org.uk>
    Cc: Leif Lindholm <leif.lindholm@arm.com>
| | * | ARM: 7560/1: SMP_TWD: use DIV_ROUND_CLOSEST() for periodic mode  Linus Walleij  2012-10-22

    The periodic mode is currently calculated by a simple division, but we
    should pay more attention to our integer arithmetic. Also delete a
    comment that does not make any sense.

    Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
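    The change boils down to rounding the periodic reload value to the
    nearest tick count rather than truncating it; a sketch (register and
    variable names as assumed for the TWD driver):

        /* periodic reload: round to nearest instead of truncating */
        writel_relaxed(DIV_ROUND_CLOSEST(twd_timer_rate, HZ),
                       twd_base + TWD_TIMER_LOAD);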
| | * | ARM: 7559/1: smp: switch away from the idmap before updating init_mm.mm_count  Will Deacon  2012-10-22

    When booting a secondary CPU, the primary CPU hands two sets of page
    tables via the secondary_data struct:

    (1) swapper_pg_dir: a normal, cacheable, shared (if SMP) mapping of the
        kernel image (i.e. the tables used by init_mm).
    (2) idmap_pgd: an uncached mapping of the .idmap.text ELF section.

    The idmap is generally used when enabling and disabling the MMU, which
    includes early CPU boot. In this case, the secondary CPU switches to
    swapper as soon as it enters C code:

        struct mm_struct *mm = &init_mm;
        unsigned int cpu = smp_processor_id();

        /*
         * All kernel threads share the same mm context; grab a
         * reference and switch to it.
         */
        atomic_inc(&mm->mm_count);
        current->active_mm = mm;
        cpumask_set_cpu(cpu, mm_cpumask(mm));
        cpu_switch_mm(mm->pgd, mm);

    This causes a problem on ARMv7, where the identity mapping is treated as
    strongly-ordered, leading to architecturally UNPREDICTABLE behaviour of
    exclusive accesses, such as those used by atomic_inc.

    This patch re-orders the secondary_start_kernel function so that we
    switch to swapper before performing any exclusive accesses.

    Cc: <stable@vger.kernel.org>
    Cc: David McKay <david.mckay@st.com>
    Reported-by: Gilles Chanteperdrix <gilles.chanteperdrix@xenomai.org>
    Signed-off-by: Will Deacon <will.deacon@arm.com>
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
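    The reordered sequence described above, sketched (not the exact patch):

        struct mm_struct *mm = &init_mm;
        unsigned int cpu = smp_processor_id();

        /* Switch away from the strongly-ordered idmap before any exclusive
         * (ldrex/strex) accesses such as atomic_inc(). */
        cpu_switch_mm(mm->pgd, mm);
        local_flush_tlb_all();

        atomic_inc(&mm->mm_count);
        current->active_mm = mm;
        cpumask_set_cpu(cpu, mm_cpumask(mm));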
| | * | ARM: 7556/1: perf: fix updated event period in response to PERF_EVENT_IOC_PERIOD  Will Deacon  2012-10-18

    The PERF_EVENT_IOC_PERIOD ioctl command can be used to change the sample
    period of a running perf_event. Consequently, when calculating the next
    event period, the new period will only be considered after the previous
    one has overflowed. This patch changes the calculation of the remaining
    event ticks so that they are offset if the period has changed.

    Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
    Reported-by: Andreas Sandberg <andreas.sandberg@arm.com>
    Signed-off-by: Will Deacon <will.deacon@arm.com>
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>