path: root/arch/mips/include/asm/cmpxchg.h
* MIPS: Get rid of branches to .subsections. (Ralf Baechle, 2010-10-29)

  It was a nice optimization - on paper at least. In practice it results in
  branches that may exceed the maximum legal range for a branch. We can fight
  that problem with -ffunction-sections, but -ffunction-sections in turn is
  incompatible with the -pg option used by the function tracer.

  By rewriting the loop around all simple LL/SC blocks in C we reduce the
  amount of inline assembler and at the same time allow GCC to often fill the
  branch delay slots with something sensible, or apply whatever other clever
  optimization it may have up its sleeve.

  With this optimization gone we also no longer need -ffunction-sections, so
  drop it.

  This optimization was originally introduced in 2.6.21, commit
  5999eca25c1fd4b9b9aca7833b04d10fe4bc877d (linux-mips.org) resp.
  f65e4fa8e0c6022ad58dc88d1b11b12589ed7f9f (kernel.org). The original fix for
  the issues that caused me to pull this optimization is by Paul Gortmaker
  <paul.gortmaker@windriver.com>.

  Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
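
  To illustrate the pattern this commit describes: the retry branch is no
  longer emitted inside the inline assembler; a single LL/SC attempt lives in
  the asm block and the loop around it is plain C, so the compiler itself
  generates (and can freely place) the backward branch and fill the delay
  slot. The sketch below is illustrative only: the function name
  sketch_atomic_add and the exact operand constraints are assumptions, not
  code taken from cmpxchg.h.

    /*
     * Minimal sketch: C loop around one LL/SC attempt.  The asm block
     * contains no branch, so GCC places the code, fills the delay slot
     * and generates the retry branch on its own.
     */
    static inline void sketch_atomic_add(int i, volatile int *v)
    {
            int temp;

            do {
                    __asm__ __volatile__(
                    "       ll      %0, %1          # load-linked   \n"
                    "       addu    %0, %2          # add           \n"
                    "       sc      %0, %1          # store-cond.   \n"
                    : "=&r" (temp), "+m" (*v)
                    : "Ir" (i));
            } while (!temp);        /* SC wrote 0: reservation lost, retry */
    }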
* MIPS: cmpxchg.h: Fix excessive indentation. (Ralf Baechle, 2010-04-30)

  Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
* MIPS: New macro smp_mb__before_llsc. (David Daney, 2010-02-27)

  Replace some instances of smp_llsc_mb() with a new macro
  smp_mb__before_llsc(). It is used before ll/sc sequences that are
  documented as needing write barrier semantics.

  The default implementation of smp_mb__before_llsc() is just smp_llsc_mb(),
  so there are no changes in semantics.

  Also simplify definition of smp_mb(), smp_rmb(), and smp_wmb() to be just
  barrier() in the non-SMP case.

  Signed-off-by: David Daney <ddaney@caviumnetworks.com>
  To: linux-mips@linux-mips.org
  Patchwork: http://patchwork.linux-mips.org/patch/851/
  Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
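
  As a point of reference for the description above, the default definition
  and a typical call site look roughly like the sketch below. The fallback
  definition mirrors the commit message (the default is just smp_llsc_mb());
  the surrounding skeleton is illustrative only, not the real barrier or
  cmpxchg code.

    /* Default: the new macro is simply the old barrier, so existing
     * callers keep exactly the same ordering guarantees.
     */
    #ifndef smp_mb__before_llsc
    #define smp_mb__before_llsc()   smp_llsc_mb()
    #endif

    /*
     * Typical use around an ll/sc sequence that is documented as
     * needing write barrier semantics (illustrative skeleton):
     *
     *      smp_mb__before_llsc();          barrier before the ll/sc loop
     *      ... ll/sc retry loop ...
     *      smp_llsc_mb();                  barrier after the loop
     */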
* MIPS: Allow kernel use of LL/SC to be separate from the presence of LL/SC. (David Daney, 2009-09-17)

  On some CPUs, it is more efficient to disable and enable interrupts in the
  kernel rather than use ll/sc for atomic operations. But if we were to set
  cpu_has_llsc to false, we would break the userspace futex interface (in
  asm/futex.h).

  We separate the two concepts, with a new predicate kernel_uses_llsc, that
  lets us disable the kernel's use of ll/sc while still allowing the futex
  code to use it.

  Also there were a couple of cases in bitops.h where we were using ll/sc
  unconditionally even if cpu_has_llsc were false.

  Signed-off-by: David Daney <ddaney@caviumnetworks.com>
  Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
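
  The effect of the split can be pictured with a small sketch like the one
  below: kernel-internal read-modify-write paths test kernel_uses_llsc, while
  asm/futex.h keeps testing cpu_has_llsc, so userspace futexes still get the
  ll/sc implementation even when the kernel itself prefers the interrupt-off
  fallback. The function name sketch_set_mask and its body are illustrative
  assumptions, not code from this commit.

    static inline void sketch_set_mask(unsigned int mask,
                                       volatile unsigned int *p)
    {
            if (kernel_uses_llsc) {
                    unsigned int temp;

                    /* The kernel has chosen the ll/sc path. */
                    do {
                            __asm__ __volatile__(
                            "       ll      %0, %1          \n"
                            "       or      %0, %2          \n"
                            "       sc      %0, %1          \n"
                            : "=&r" (temp), "+m" (*p)
                            : "r" (mask));
                    } while (!temp);
            } else {
                    unsigned long flags;

                    /* Kernel prefers the interrupt-off fallback even though
                     * the CPU may implement ll/sc (cpu_has_llsc can still be
                     * true, which is what keeps asm/futex.h working).
                     */
                    raw_local_irq_save(flags);
                    *p |= mask;
                    raw_local_irq_restore(flags);
            }
    }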
* MIPS: Move headfiles to new location below arch/mips/include (Ralf Baechle, 2008-10-11)
  Signed-off-by: Ralf Baechle <ralf@linux-mips.org>