Use "+m" rather than a combination of "=m" and "m" for improved
clarity and consistency.
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
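For illustration, here is a minimal sketch (a hypothetical helper, not
necessarily code this commit touched) of what the constraint change looks
like in powerpc inline asm; "+m" declares the memory location read-write in
one place instead of listing it once as an "=m" output and again as an "m"
input:

/* Sketch only: atomically increment a word.  The old constraint style
 * is shown in the trailing comments. */
static inline void incr_sketch(unsigned long *p)
{
        unsigned long tmp;

        __asm__ __volatile__(
"1:     lwarx   %0,0,%2\n"
"       addi    %0,%0,1\n"
"       stwcx.  %0,0,%2\n"
"       bne-    1b"
        : "=&r" (tmp), "+m" (*p)        /* was: "=&r" (tmp), "=m" (*p) */
        : "r" (p)                       /* was: "r" (p), "m" (*p)      */
        : "cc");
}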
There is already a big-endian-safe bitops implementation in
lib/find_next_bit.c. Its code has more than 90% in common with the
powerpc-specific version, so the powerpc version is redundant. This
patch makes the necessary changes to use the generic bitops on
powerpc and removes the powerpc-specific version.
Signed-off-by: Jon Mason <jdmason@us.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
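For context, the generic helpers are typically used like this - an
illustrative loop over a kernel bitmap via the standard
find_first_bit()/find_next_bit() interface that lib/find_next_bit.c backs:

#include <linux/bitops.h>

/* Illustrative only: visit every set bit in a bitmap using the
 * generic helpers. */
static void visit_set_bits(const unsigned long *bitmap, unsigned long nbits)
{
        unsigned long bit;

        for (bit = find_first_bit(bitmap, nbits);
             bit < nbits;
             bit = find_next_bit(bitmap, nbits, bit + 1)) {
                /* ... act on 'bit' ... */
        }
}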
- remove __{,test_and_}{set,clear,change}_bit() and test_bit()
- remove generic_fls64()
- remove generic_hweight{64,32,16,8}()
- remove sched_find_first_bit()
Signed-off-by: Akinobu Mita <mita@miraclelinux.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
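Roughly, each removed group is now picked up from the corresponding
generic header; a sketch of the resulting includes (the exact list in the
powerpc header may differ):

/* Sketch: generic replacements for the removed arch-local copies. */
#include <asm-generic/bitops/non-atomic.h>  /* __set_bit() and friends, test_bit() */
#include <asm-generic/bitops/fls64.h>       /* fls64() */
#include <asm-generic/bitops/hweight.h>     /* hweight{64,32,16,8}() */
#include <asm-generic/bitops/sched.h>       /* sched_find_first_bit() */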
eieio provides only store - store ordering. When it is used to order an
unlock operation, loads may leak out of the critical region. This is
potentially buggy; one example is a user who wants to atomically read a
couple of values.
We can solve this with an lwsync, which orders everything except
store - load.
I removed the (now unused) EIEIO_ON_SMP macros and the C versions
isync_on_smp and eieio_on_smp, since we no longer use them. I also removed
some old comments that were used to identify inline spinlocks in the
assembly; they don't make sense now that our locks are out of line.
Another interesting thing was that read_unlock was using an eieio even
though the rest of the spinlock code had already been converted to
use lwsync.
Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
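A minimal sketch (not the kernel's actual spinlock code) of the unlock
side, showing why lwsync is the right barrier here:

/* Sketch only: release a simple lock word.  lwsync orders all prior
 * loads and stores before the releasing store; eieio orders only
 * stores, so a load inside the critical section could still be
 * performed after the lock appears free. */
static inline void sketch_unlock(volatile unsigned int *lock)
{
        __asm__ __volatile__("lwsync" : : : "memory");
        *lock = 0;
}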
Signed-off-by: Stephen Hemminger <shemminger@osdl.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
This patch consolidates macros used to generate assembly for
compatibility across different CPUs or configs. A new header,
asm-powerpc/asm-compat.h, contains the main compatibility macros. It
uses some preprocessor magic to make the macros suitable both for use
in .S files and in inline asm in .c files. Headers (bitops.h,
uaccess.h, atomic.h, bug.h) which had their own such compatibility
macros are changed to use asm-compat.h.
ppc_asm.h is now for use in .S files *only*, and a #error enforces
that. As such, we're a lot more careless about namespace pollution
here than in asm-compat.h.
While we're at it, this patch adds a call to the PPC405_ERR77 macro in
futex.h which should have had it already, but didn't.
Built and booted on pSeries, Maple and iSeries (ARCH=powerpc). Built
for 32-bit powermac (ARCH=powerpc) and Walnut (ARCH=ppc).
Signed-off-by: David Gibson <dwg@au1.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
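As a rough, simplified sketch of that preprocessor trick: the same macro
expands to a bare mnemonic in .S files and to a string fragment that can
be pasted into an inline asm template in .c files.

/* Sketch of the asm-compat.h idea (simplified). */
#ifdef __ASSEMBLY__
#define stringify_in_c(...)     __VA_ARGS__
#else
/* In C, turn the tokens into a string literal so the macro can be
 * dropped straight into an inline asm template. */
#define __stringify_in_c(...)   #__VA_ARGS__
#define stringify_in_c(...)     __stringify_in_c(__VA_ARGS__) " "
#endif

/* Pick the load-and-reserve / store-conditional for the word size. */
#ifdef __powerpc64__
#define PPC_LLARX       stringify_in_c(ldarx)
#define PPC_STLCX       stringify_in_c(stdcx.)
#else
#define PPC_LLARX       stringify_in_c(lwarx)
#define PPC_STLCX       stringify_in_c(stwcx.)
#endif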
Here's a revised version. This re-introduces the set_bits() function
from ppc64, which I removed because I thought it was unused (it exists
on no other arch). In fact it is used in the powermac interrupt code
(but not on pSeries).
- We use LARXL/STCXL macros to generate the right (32 or 64 bit)
instructions, similar to LDL/STL from ppc_asm.h, used in fpu.S
- ppc32 previously used a full "sync" barrier at the end of
test_and_*_bit(), whereas ppc64 used an "isync". The merged version
uses "isync", since I believe that's sufficient.
- The ppc64 versions of the minix_*() bitmap functions have changed
semantics. Previously on ppc64, these functions were big-endian
(that is, bit 0 was the LSB in the first 64-bit, big-endian word).
On ppc32 (and x86, for that matter) they were little-endian. As far
as I can tell, the big-endian usage was simply wrong - I guess
no-one ever tried to use minixfs on ppc64.
- On ppc32 find_next_bit() and find_next_zero_bit() are no longer
inline (they were already out-of-line on ppc64).
- For ppc64, sched_find_first_bit() has moved from mmu_context.h to
the merged bitops. What it was doing in mmu_context.h in the first
place, I have no idea.
- The fls() function is now implemented using the cntlzw instruction
on ppc64, instead of generic_fls(), as it already was on ppc32.
- For ARCH=ppc, this patch requires adding arch/powerpc/lib to the
arch/ppc/Makefile. This in turn requires some changes to
arch/powerpc/lib/Makefile which didn't correctly handle ARCH=ppc.
Built and running on G5.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Paul Mackerras <paulus@samba.org>
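To illustrate the fls() change, a sketch of the cntlzw-based
implementation (the same shape on ppc32 and ppc64; not necessarily the
exact merged code):

/* Sketch: find-last-set via count-leading-zeros.  cntlzw yields 32
 * for an input of 0, so fls(0) comes out as 0, as required. */
static inline int fls_sketch(unsigned int x)
{
        int lz;

        __asm__ ("cntlzw %0,%1" : "=r" (lz) : "r" (x));
        return 32 - lz;
}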