path: root/include/linux/bitops.h
* Merge branch 'core-hweight-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip (Linus Torvalds, 2010-05-18)

    * 'core-hweight-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip:
        x86, hweight: Use a 32-bit popcnt for __arch_hweight32()
        arch, hweight: Fix compilation errors
        x86: Add optimized popcnt variants
        bitops: Optimize hweight() by making use of compile-time evaluation

* arch, hweight: Fix compilation errors (Borislav Petkov, 2010-05-04)

    Fix function prototype visibility issues when compiling for non-x86
    architectures. Tested with crosstool (ftp://ftp.kernel.org/pub/tools/crosstool/)
    with alpha, ia64 and sparc targets.

    Signed-off-by: Borislav Petkov <borislav.petkov@amd.com>
    LKML-Reference: <20100503130736.GD26107@aftab>
    Signed-off-by: H. Peter Anvin <hpa@zytor.com>

* bitops: Optimize hweight() by making use of compile-time evaluation (Peter Zijlstra, 2010-04-06)

    Rename the existing runtime hweight() implementations to
    __arch_hweight(), rename the compile-time versions to __const_hweight()
    and then have hweight() pick between them.

    Suggested-by: H. Peter Anvin <hpa@zytor.com>
    Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
    LKML-Reference: <20100318111929.GB11152@aftab>
    Acked-by: H. Peter Anvin <hpa@zytor.com>
    LKML-Reference: <1265028224.24455.154.camel@laptop>
    Signed-off-by: H. Peter Anvin <hpa@zytor.com>

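    The dispatch described above can be sketched roughly as follows; this is
    illustrative only, with __builtin_popcount standing in for the unrolled
    constant macro, and is not the exact in-tree code:

        /* Runtime implementation provided by the architecture (or a library fallback). */
        unsigned int __arch_hweight32(unsigned int w);

        /* Stand-in for the pure-macro, constant-folding form. */
        #define __const_hweight32(w)  __builtin_popcount(w)

        /* Pick the constant form when the argument is a compile-time constant. */
        #define hweight32(w)                                     \
                (__builtin_constant_p(w) ? __const_hweight32(w)  \
                                         : __arch_hweight32(w))
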
* bitops: remove temporary for_each_bit() (Andrew Morton, 2010-04-07)

    Migration has been completed so remove this now. There's one straggler
    in linux-next's drivers/mtd/sm_ftl.c. A patch has been sent.

    Cc: Akinobu Mita <akinobu.mita@gmail.com>
    Cc: Stephen Rothwell <sfr@canb.auug.org.au>
    Cc: David Woodhouse <dwmw2@infradead.org>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

* bitops: rename for_each_bit() to for_each_set_bit() (Akinobu Mita, 2010-03-06)

    Rename for_each_bit to for_each_set_bit in the kernel source tree. To
    permit for_each_clear_bit(), should that ever be added.

    The patch includes a macro to map the old for_each_bit() onto the new
    for_each_set_bit(). This is a (very) temporary thing to ease the
    migration.

    [akpm@linux-foundation.org: add temporary for_each_bit()]
    Suggested-by: Alexey Dobriyan <adobriyan@gmail.com>
    Suggested-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Akinobu Mita <akinobu.mita@gmail.com>
    Cc: "David S. Miller" <davem@davemloft.net>
    Cc: Russell King <rmk@arm.linux.org.uk>
    Cc: David Woodhouse <dwmw2@infradead.org>
    Cc: Artem Bityutskiy <dedekind@infradead.org>
    Cc: Stephen Rothwell <sfr@canb.auug.org.au>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

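    Typical use of the renamed iterator; the bitmap and loop body below are
    made up purely for illustration:

        #include <linux/bitops.h>
        #include <linux/printk.h>

        static void report_enabled_channels(const unsigned long *mask,
                                            unsigned int nbits)
        {
                unsigned int bit;

                /* Visits only the set bits in @mask, lowest bit first. */
                for_each_set_bit(bit, mask, nbits)
                        pr_info("channel %u enabled\n", bit);
        }
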
* bitops: Ensure the compile time HWEIGHT is only used for such (Peter Zijlstra, 2010-02-04)

    Avoid accidental misuse by failing to compile things.

    Suggested-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
    Cc: Linus Torvalds <torvalds@linux-foundation.org>
    LKML-Reference: <new-submission>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>

* bitops: Provide compile time HWEIGHT{8,16,32,64} (Peter Zijlstra, 2010-01-29)

    Provide compile time versions of hweight.

    Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
    Cc: Stephane Eranian <eranian@google.com>
    Cc: Linus Torvalds <torvalds@linux-foundation.org>
    Cc: Andrew Morton <akpm@linux-foundation.org>
    Cc: Thomas Gleixner <tglx@linutronix.de>
    LKML-Reference: <20100122155535.797688466@chello.nl>
    [ Remove some whitespace damage while we are at it ]
    Signed-off-by: Ingo Molnar <mingo@elte.hu>

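    The underlying technique is a population count written as a pure macro,
    so that for constant arguments the compiler folds the whole expression
    to a constant. A sketch for the 8-bit case; the in-tree macro names and
    wider variants differ in detail:

        /* Counts the set bits of a compile-time constant expression. */
        #define HWEIGHT8(w)             \
                ((!!((w) & (1 << 0))) + \
                 (!!((w) & (1 << 1))) + \
                 (!!((w) & (1 << 2))) + \
                 (!!((w) & (1 << 3))) + \
                 (!!((w) & (1 << 4))) + \
                 (!!((w) & (1 << 5))) + \
                 (!!((w) & (1 << 6))) + \
                 (!!((w) & (1 << 7))))

        /* The wider variants can be built from the narrower ones, e.g.: */
        #define HWEIGHT16(w)  (HWEIGHT8(w) + HWEIGHT8((w) >> 8))
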
* bitops: Add __ffs64 bitop (Steven Whitehouse, 2009-04-23)

    Finds the first set bit in a 64 bit word. This is required in order to
    fix a bug in GFS2, but I think it should be a generic function in case
    of future users.

    Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
    Reviewed-by: Christoph Lameter <cl@linux.com>
    Reviewed-by: Willy Tarreau <w@1wt.eu>

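    A sketch of how such a helper can be built on top of the existing
    __ffs(); as with __ffs(), the result is undefined for a zero word. This
    is illustrative, not necessarily the exact in-tree code:

        #include <linux/bitops.h>
        #include <linux/types.h>

        static inline unsigned long __ffs64(u64 word)
        {
        #if BITS_PER_LONG == 32
                /* On 32-bit, pick whichever half contains the first set bit. */
                if ((u32)word == 0)
                        return __ffs((u32)(word >> 32)) + 32;
        #endif
                return __ffs((unsigned long)word);
        }
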
* bitmap: find_last_bit() (Rusty Russell, 2008-12-31)

    Impact: New API

    As the name suggests. For the moment everyone uses the generic one.

    Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>

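    Illustrative usage; find_last_bit(addr, size) returns the index of the
    highest set bit, or size when no bit is set. The wrapper below is
    invented for the example:

        #include <linux/bitops.h>

        /* Returns the highest-numbered set bit, or -1 if the bitmap is empty. */
        static int highest_active(const unsigned long *map, unsigned long nbits)
        {
                unsigned long bit = find_last_bit(map, nbits);

                return (bit == nbits) ? -1 : (int)bit;
        }
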
* bitops: remove "optimizations" (Thomas Gleixner, 2008-04-29)

    The mapsize optimizations which were moved from x86 to the generic code
    in commit 64970b68d2b3ed32b964b0b30b1b98518fde388e increased the binary
    size on non x86 architectures.

    Looking into the real effects of the "optimizations" it turned out that
    they are not used in find_next_bit() and find_next_zero_bit(). The ones
    in find_first_bit() and find_first_zero_bit() are used in a couple of
    places but none of them is a real hot path.

    Remove the "optimizations" all together and call the library functions
    unconditionally.

    Boot-tested on x86 and compile tested on every cross compiler I have.

    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

* Avoid divides in BITS_TO_LONGS (Eric Dumazet, 2008-04-29)

    BITS_PER_LONG is a signed value (32 or 64).

    DIV_ROUND_UP(nr, BITS_PER_LONG) therefore performs signed arithmetic if
    "nr" is signed too.

    Converting BITS_TO_LONGS(nr) to
    DIV_ROUND_UP(nr, BITS_PER_BYTE * sizeof(long)) makes sure the compiler
    can perform a right shift, even if "nr" is a signed value, instead of
    an expensive integer divide.

    Applying this patch saves 141 bytes on x86 when
    CONFIG_CC_OPTIMIZE_FOR_SIZE=y and speeds up bitmap operations.

    Signed-off-by: Eric Dumazet <dada1@cosmosbay.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

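    The resulting definitions look roughly like this; sizeof() yields an
    unsigned size_t, which makes the whole divisor unsigned and lets the
    compiler emit a shift:

        #define BITS_PER_BYTE       8
        #define DIV_ROUND_UP(n, d)  (((n) + (d) - 1) / (d))

        /* Unsigned divisor (via sizeof), so the divide compiles to a shift
         * even when "nr" is a signed expression.
         */
        #define BITS_TO_LONGS(nr)   DIV_ROUND_UP(nr, BITS_PER_BYTE * sizeof(long))
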
* x86: optimize find_first_bit for small bitmaps (Alexander van Heukelum, 2008-04-26)

    Avoid a call to find_first_bit if the bitmap size is known at compile
    time and small enough to fit in a single long integer. Modeled after an
    optimization in the original x86_64-specific code.

    Signed-off-by: Alexander van Heukelum <heukelum@fastmail.fm>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>

* x86: generic versions of find_first_(zero_)bit, convert i386 (Alexander van Heukelum, 2008-04-26)

    Generic versions of __find_first_bit and __find_first_zero_bit are
    introduced as simplified versions of __find_next_bit and
    __find_next_zero_bit. Their compilation and use are guarded by a new
    config variable GENERIC_FIND_FIRST_BIT.

    The generic versions of find_first_bit and find_first_zero_bit are
    implemented in terms of the newly introduced __find_first_bit and
    __find_first_zero_bit.

    This patch does not remove the i386-specific implementation, but it
    does switch i386 to use the generic functions by setting
    GENERIC_FIND_FIRST_BIT=y for X86_32.

    Signed-off-by: Alexander van Heukelum <heukelum@fastmail.fm>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>

* x86, generic: optimize find_next_(zero_)bit for small constant-size bitmaps (Alexander van Heukelum, 2008-04-26)

    This moves an optimization for searching constant-sized small bitmaps
    from x86_64-specific to generic code.

    On an i386 defconfig (the x86#testing one), the size of vmlinux hardly
    changes with this applied. I have observed only four places where this
    optimization avoids a call into find_next_bit:

    In the functions return_unused_surplus_pages, alloc_fresh_huge_page,
    and adjust_pool_surplus, this patch avoids a call for a 1-bit bitmap.
    In __next_cpu a call is avoided for a 32-bit bitmap. That's it.

    On x86_64, 52 locations are optimized with a minimal increase in code
    size:

    Current #testing defconfig:
        146 x bsf, 27 x find_next_*bit
           text    data     bss     dec     hex filename
        5392637  846592  724424 6963653  6a41c5 vmlinux

    After removing the x86_64 specific optimization for find_next_*bit:
        94 x bsf, 79 x find_next_*bit
           text    data     bss     dec     hex filename
        5392358  846592  724424 6963374  6a40ae vmlinux

    After this patch (making the optimization generic):
        146 x bsf, 27 x find_next_*bit
           text    data     bss     dec     hex filename
        5392396  846592  724424 6963412  6a40d4 vmlinux

    [ tglx@linutronix.de: build fixes ]

    Signed-off-by: Ingo Molnar <mingo@elte.hu>

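    The pattern being made generic looks roughly like the sketch below:
    when the bitmap size is a compile-time constant that fits in one long,
    the search is done inline on a masked word instead of calling the
    out-of-line library routine. All names here are illustrative, not the
    in-tree ones:

        #include <linux/bitops.h>

        #define SMALL_CONST_NBITS(nbits) \
                (__builtin_constant_p(nbits) && (nbits) <= BITS_PER_LONG)

        static inline unsigned long sketch_find_next_bit(const unsigned long *addr,
                                                         unsigned long size,
                                                         unsigned long offset)
        {
                if (SMALL_CONST_NBITS(size)) {
                        unsigned long val;

                        if (offset >= size)
                                return size;
                        /* Drop bits below @offset; the whole bitmap is one word. */
                        val = *addr & (~0UL << offset);
                        /* Drop bits at or above @size. */
                        if (size < BITS_PER_LONG)
                                val &= (1UL << size) - 1;
                        return val ? __ffs(val) : size;
                }
                return find_next_bit(addr, size, offset);  /* library fallback */
        }
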
* kernel: add bit rotation helpers for 16 and 8 bit (Harvey Harrison, 2008-03-28)

    Will replace open-coded variants elsewhere. Done in the same style as
    the 32-bit versions.

    Signed-off-by: Harvey Harrison <harvey.harrison@gmail.com>
    Acked-by: Randy Dunlap <randy.dunlap@oracle.com>
    Cc: Johannes Berg <johannes@sipsolutions.net>
    Cc: John W. Linville <linville@tuxdriver.com>
    Cc: Joe Perches <joe@perches.com>
    Cc: Jiri Benc <jbenc@suse.cz>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

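    Following the style of the existing 32-bit rol32()/ror32(), the 16- and
    8-bit helpers presumably look along these lines (a sketch, not verified
    against the tree):

        #include <linux/types.h>

        /* Rotate a 16-bit value left by @shift bits. */
        static inline __u16 rol16(__u16 word, unsigned int shift)
        {
                return (word << shift) | (word >> (16 - shift));
        }

        /* Rotate an 8-bit value right by @shift bits. */
        static inline __u8 ror8(__u8 word, unsigned int shift)
        {
                return (word >> shift) | (word << (8 - shift));
        }
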
* remove BITS_TO_TYPE macro (Jiri Slaby, 2007-10-19)

    remove BITS_TO_TYPE macro

    I realized that it is actually the same as DIV_ROUND_UP, use it
    instead.

    [akpm@linux-foundation.org: build fix]
    Signed-off-by: Jiri Slaby <jirislaby@gmail.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

* define global BIT macro (Jiri Slaby, 2007-10-19)

    define global BIT macro

    Move all local BIT defines to the new globally defined macro.

    Signed-off-by: Jiri Slaby <jirislaby@gmail.com>
    Cc: Paul Mackerras <paulus@samba.org>
    Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
    Cc: Kumar Gala <galak@gate.crashing.org>
    Cc: Dmitry Torokhov <dtor@mail.ru>
    Cc: Jeff Garzik <jeff@garzik.org>
    Cc: James Bottomley <James.Bottomley@steeleye.com>
    Cc: "Antonino A. Daplas" <adaplas@pol.net>
    Cc: Russell King <rmk@arm.linux.org.uk>
    Acked-by: Ralf Baechle <ralf@linux-mips.org>
    Cc: "John W. Linville" <linville@tuxdriver.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

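    The macro itself is tiny; the register-field names below are invented
    only to show the intended use:

        #define BIT(nr)  (1UL << (nr))

        /* Hypothetical device register layout expressed with BIT(). */
        #define CTRL_ENABLE  BIT(0)
        #define CTRL_RESET   BIT(3)

        unsigned long ctrl = CTRL_ENABLE | CTRL_RESET;  /* == 0x9 */
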
* define first set of BIT* macros (Jiri Slaby, 2007-10-19)

    define first set of BIT* macros

    - move BITOP_MASK and BITOP_WORD from asm-generic/bitops/atomic.h to
      include/linux/bitops.h and rename them to BIT_MASK and BIT_WORD

    - move BITS_TO_LONGS and BITS_PER_BYTE to bitops.h too and allow easily
      defining another BITS_TO_something (e.g. in event.c) by the
      BITS_TO_TYPE macro

    The remaining (and common) BIT macro will be defined after all
    occurrences and conflicts are sorted out in the following patches.

    Signed-off-by: Jiri Slaby <jirislaby@gmail.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

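    For reference, BIT_MASK()/BIT_WORD() split a flat bit number into a
    word index and an in-word mask. A sketch of the idea and of how a
    bitmap helper uses it; the set-bit helper below is illustrative, the
    real ones are per-architecture and often atomic:

        #include <linux/bitops.h>

        #define BIT_MASK(nr)  (1UL << ((nr) % BITS_PER_LONG))
        #define BIT_WORD(nr)  ((nr) / BITS_PER_LONG)

        /* Non-atomic example: set bit @nr in a multi-word bitmap. */
        static inline void sketch_set_bit(unsigned int nr, unsigned long *addr)
        {
                addr[BIT_WORD(nr)] |= BIT_MASK(nr);
        }
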
* I/OAT: Add support for MSI and MSI-X (Shannon Nelson, 2007-10-16)

    Add support for MSI and MSI-X interrupt handling, including the ability
    to choose the desired interrupt method.

    Signed-off-by: Shannon Nelson <shannon.nelson@intel.com>
    Acked-by: David S. Miller <davem@davemloft.net>
    [bunk@kernel.org: drivers/dma/ioat_dma.c: make 3 functions static]
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

* [PATCH] fix various kernel-doc in header files (Robert P. J. Day, 2007-01-26)

    Fix a number of kernel-doc entries for header files in include/linux by
    making sure they begin with the appropriate '/**' notation and use @var
    notation.

    Signed-off-by: Robert P. J. Day <rpjday@mindspring.com>
    Signed-off-by: Randy Dunlap <randy.dunlap@oracle.com>
    Signed-off-by: Andrew Morton <akpm@osdl.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

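    For reference, the kernel-doc layout this brings the headers in line
    with looks like the following; the documented function is invented for
    the example:

        /**
         * example_set_bit - set a single bit in a bitmap
         * @nr: bit number to set
         * @addr: start of the bitmap
         *
         * The block opens with the '/**' marker and documents each
         * parameter with the '@name:' notation so scripts/kernel-doc
         * can extract it.
         */
        void example_set_bit(unsigned int nr, unsigned long *addr);
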
* [PATCH] bitops: remove unused generic bitops in include/linux/bitops.h (Akinobu Mita, 2006-03-26)

    generic_{ffs,fls,fls64,hweight{64,32,16,8}}() were moved into
    include/asm-generic/bitops.h, so no architecture uses them from here
    any more.

    Signed-off-by: Akinobu Mita <mita@miraclelinux.com>
    Signed-off-by: Andrew Morton <akpm@osdl.org>
    Signed-off-by: Linus Torvalds <torvalds@osdl.org>

* [PATCH] roundup_pow_of_two() 64-bit fix (Andrew Morton, 2006-03-25)

    fls() takes an integer, so roundup_pow_of_two() is busted for ulongs
    larger than 2^32-1. Fix this by implementing and using fls_long().

    (Why does roundup_pow_of_two() return a long?)

    (Why is roundup_pow_of_two() __attribute_const__ whereas long_log2() is
    __attribute_pure__?)

    (Why does long_log2() suck so much? Because we were missing
    fls_long()?)

    Cc: Roland Dreier <rdreier@cisco.com>
    Cc: "Chen, Kenneth W" <kenneth.w.chen@intel.com>
    Cc: John Hawkes <hawkes@sgi.com>
    Signed-off-by: Andrew Morton <akpm@osdl.org>
    Signed-off-by: Linus Torvalds <torvalds@osdl.org>

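    A sketch of what fls_long() amounts to, and how the rounding helper can
    then stay correct for 64-bit longs; illustrative, not the exact in-tree
    code:

        #include <linux/bitops.h>

        /* Dispatch to the fls variant that matches the width of long. */
        static inline unsigned int fls_long(unsigned long l)
        {
                if (sizeof(l) == 4)
                        return fls(l);
                return fls64(l);
        }

        /* roundup_pow_of_two(n) can then be expressed for n > 1 as
         *     1UL << fls_long(n - 1)
         * which stays correct even when n does not fit in 32 bits.
         */
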
* [PATCH] fix generic_fls64() (Akinobu Mita, 2006-02-03)

    Noticed by Rune Torgersen.

    Fix generic_fls64(). tcp_cubic is using fls64().

    Signed-off-by: Akinobu Mita <mita@miraclelinux.com>
    Signed-off-by: Andrew Morton <akpm@osdl.org>
    Signed-off-by: Linus Torvalds <torvalds@osdl.org>

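    The expected behaviour is straightforward: if any of the upper 32 bits
    are set, the answer is 32 plus fls() of the high half, otherwise fls()
    of the low half (and 0 for a zero input). A sketch of a correct generic
    version, with an illustrative name:

        #include <linux/bitops.h>
        #include <linux/types.h>

        static inline int sketch_fls64(__u64 x)
        {
                __u32 h = x >> 32;

                if (h)
                        return fls(h) + 32;
                return fls((__u32)x);
        }
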
* [FLS64]: generic version (Stephen Hemminger, 2006-01-03)

    Signed-off-by: Stephen Hemminger <shemminger@osdl.org>
    Signed-off-by: David S. Miller <davem@davemloft.net>

* [PATCH] x86-64/i386: Intel HT, Multi core detection fixes (Siddha, Suresh B, 2005-11-14)

    Fields obtained through cpuid vector 0x1 (ebx[16:23]) and vector 0x4
    (eax[14:25], eax[26:31]) indicate the maximum values and might not
    always be the same as what is available and what OS sees.

    So make sure "siblings" and "cpu cores" values in /proc/cpuinfo reflect
    the values as seen by OS instead of what cpuid instruction says. This
    will also fix the buggy BIOS cases (for example where cpuid on a single
    core cpu says there are "2" siblings, even when HT is disabled in the
    BIOS. http://bugzilla.kernel.org/show_bug.cgi?id=4359)

    Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
    Signed-off-by: Andi Kleen <ak@suse.de>
    Signed-off-by: Linus Torvalds <torvalds@osdl.org>

* Linux-2.6.12-rc2 (tag: v2.6.12-rc2) (Linus Torvalds, 2005-04-16)

    Initial git repository build. I'm not bothering with the full history,
    even though we have it. We can create a separate "historical" git
    archive of that later if we want to, and in the meantime it's about
    3.2GB when imported into git - space that would just make the early git
    days unnecessarily complicated, when we don't have a lot of good
    infrastructure for it.

    Let it rip!