author     David Gibson <david@gibson.dropbear.id.au>   2005-11-01 01:28:10 -0500
committer  Paul Mackerras <paulus@samba.org>            2005-11-01 05:49:02 -0500
commit     a0e60b2033b30a6bb8479629001cf98e58e4079a
tree       6386eeca340a25c4ae1876f2f9663f94628c8cc3   /include/asm-ppc64/mmu_context.h
parent     031ef0a72aa8f7ee63ae9f307c1bcff92b3ccc2c
[PATCH] powerpc: Merge bitops.h
Here's a revised version. This re-introduces the set_bits() function
from ppc64, which I removed because I thought it was unused (it exists
on no other arch). In fact it is used in the powermac interrupt code
(but not on pSeries).
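
For reference, set_bits() atomically ORs an entire mask into a word,
rather than setting a single bit. The following plain-C sketch shows
only the semantics; the helper name set_bits_sketch and the GCC
__sync builtin are illustrative stand-ins for the real larx/stcx.
inline-asm loop:

    /*
     * Semantic sketch only: atomically OR all bits of 'mask' into the
     * word at 'addr'. The real kernel routine does this with a
     * load-reserve/store-conditional retry loop in inline asm.
     */
    static inline void set_bits_sketch(unsigned long mask,
                                       volatile unsigned long *addr)
    {
        unsigned long old, new;

        do {
            old = *addr;            /* would be lwarx/ldarx */
            new = old | mask;
        } while (!__sync_bool_compare_and_swap(addr, old, new));
                                    /* would be stwcx./stdcx. */
    }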
- We use LARXL/STCXL macros to generate the right (32- or 64-bit)
instructions, similar to LDL/STL from ppc_asm.h, used in fpu.S (a
macro sketch follows this list).
- ppc32 previously used a full "sync" barrier at the end of
test_and_*_bit(), whereas ppc64 used an "isync". The merged version
uses "isync", since I believe that's sufficient.
- The ppc64 versions of the minix_*() bitmap functions have changed
semantics. Previously on ppc64, these functions were big-endian
(that is, bit 0 was the LSB in the first 64-bit, big-endian word).
On ppc32 (and x86, for that matter) they were little-endian. As far
as I can tell, the big-endian usage was simply wrong - I guess
no-one ever tried to use minixfs on ppc64.
- On ppc32 find_next_bit() and find_next_zero_bit() are no longer
inline (they were already out-of-line on ppc64).
- For ppc64, sched_find_first_bit() has moved from mmu_context.h to
the merged bitops. What it was doing in mmu_context.h in the first
place, I have no idea.
- The fls() function is now implemented using the cntlzw instruction
on ppc64, instead of generic_fls(), as it already was on ppc32 (a
cntlzw-based sketch follows this list).
- For ARCH=ppc, this patch requires adding arch/powerpc/lib to the
arch/ppc/Makefile. This in turn requires some changes to
arch/powerpc/lib/Makefile which didn't correctly handle ARCH=ppc.
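
As noted in the first bullet, the point of the LARXL/STCXL macros is
to pick the word-sized or doubleword-sized atomic primitives at
compile time. A minimal sketch of the idea, assuming the selector is
__powerpc64__ (the exact spelling in the merged header may differ):

    /*
     * Illustrative only: one copy of the bitops asm can then use
     * LARXL/STCXL and work for both ppc32 and ppc64.
     */
    #ifdef __powerpc64__
    #define LARXL   "ldarx"         /* load doubleword and reserve  */
    #define STCXL   "stdcx."        /* store doubleword conditional */
    #else
    #define LARXL   "lwarx"         /* load word and reserve        */
    #define STCXL   "stwcx."        /* store word conditional       */
    #endif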
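
The cntlzw-based fls() mentioned above comes down to "32 minus the
number of leading zero bits", so fls(0) == 0 and fls(0x80000000) == 32.
A hypothetical sketch (fls_sketch is not the real function name):

    /* Sketch: find last (most significant) set bit via cntlzw. */
    static inline int fls_sketch(unsigned int x)
    {
        int lz;

        asm ("cntlzw %0,%1" : "=r" (lz) : "r" (x));
        return 32 - lz;
    }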
Built and running on G5.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Diffstat (limited to 'include/asm-ppc64/mmu_context.h')
-rw-r--r--  include/asm-ppc64/mmu_context.h | 15 ---------------
1 file changed, 0 insertions(+), 15 deletions(-)
diff --git a/include/asm-ppc64/mmu_context.h b/include/asm-ppc64/mmu_context.h
index 77a743402db4..820dd729b895 100644
--- a/include/asm-ppc64/mmu_context.h
+++ b/include/asm-ppc64/mmu_context.h
@@ -16,21 +16,6 @@
  * 2 of the License, or (at your option) any later version.
  */
 
-/*
- * Every architecture must define this function. It's the fastest
- * way of searching a 140-bit bitmap where the first 100 bits are
- * unlikely to be set. It's guaranteed that at least one of the 140
- * bits is cleared.
- */
-static inline int sched_find_first_bit(unsigned long *b)
-{
-	if (unlikely(b[0]))
-		return __ffs(b[0]);
-	if (unlikely(b[1]))
-		return __ffs(b[1]) + 64;
-	return __ffs(b[2]) + 128;
-}
-
 static inline void enter_lazy_tlb(struct mm_struct *mm, struct task_struct *tsk)
 {
 }