author    David Woodhouse <David.Woodhouse@intel.com>  2012-12-03 11:25:40 -0500
committer David Woodhouse <David.Woodhouse@intel.com>  2012-12-05 20:22:31 -0500
commit    cf66bb93e0f75e0a4ba1ec070692618fa028e994
tree      0ae48658adb29f50bdd85a94cbb84670a234f441 /arch/Kconfig
parent    27d7c2a006a81c04fab00b8cd81b99af3b32738d
byteorder: allow arch to opt to use GCC intrinsics for byteswapping
Since GCC 4.4, there have been __builtin_bswap32() and __builtin_bswap64() intrinsics. A __builtin_bswap16() came a little later (4.6 for PowerPC, 4.8 for other platforms).

By using these instead of the inline assembler that most architectures have in their __arch_swabXX() macros, we let the compiler see what's actually happening. The resulting code should be at least as good, and much *better* in the cases where it can be combined with a nearby load or store, using a load-and-byteswap or store-and-byteswap instruction (e.g. lwbrx/stwbrx on PowerPC, movbe on Atom).

When GCC is sufficiently recent *and* the architecture opts in to using the intrinsics by setting CONFIG_ARCH_USE_BUILTIN_BSWAP, they will be used in preference to the __arch_swabXX() macros. An architecture which does not set ARCH_USE_BUILTIN_BSWAP will continue to use its own hand-crafted macros.

Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
Acked-by: H. Peter Anvin <hpa@linux.intel.com>
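To make the intended selection concrete, here is a minimal, self-contained C sketch. It is not the kernel's actual swab.h; the CONFIG_ARCH_USE_BUILTIN_BSWAP define and the shifts-and-masks fallback below are illustrative stand-ins for the real build-time plumbing.

#include <stdint.h>
#include <inttypes.h>
#include <stdio.h>

#define CONFIG_ARCH_USE_BUILTIN_BSWAP 1	/* assumption: the arch opted in */

static inline uint32_t swab32(uint32_t x)
{
#if CONFIG_ARCH_USE_BUILTIN_BSWAP && defined(__GNUC__) && \
	(__GNUC__ > 4 || (__GNUC__ == 4 && __GNUC_MINOR__ >= 4))
	/* The compiler sees a byteswap and can fuse it with an
	 * adjacent load or store (movbe, lwbrx/stwbrx, ...). */
	return __builtin_bswap32(x);
#else
	/* Generic C fallback: the compiler only sees shifts and masks. */
	return ((x & 0x000000ffU) << 24) |
	       ((x & 0x0000ff00U) <<  8) |
	       ((x & 0x00ff0000U) >>  8) |
	       ((x & 0xff000000U) >> 24);
#endif
}

int main(void)
{
	printf("%08" PRIx32 "\n", swab32(UINT32_C(0x12345678)));	/* prints 78563412 */
	return 0;
}

Compiled with, say, gcc -O2 -march=atom on x86, the intrinsic path can let a swab32() of a just-loaded value collapse into a single movbe, which the hand-written fallback cannot guarantee.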
Diffstat (limited to 'arch/Kconfig')
 arch/Kconfig | 19 +++++++++++++++++++
 1 file changed, 19 insertions(+), 0 deletions(-)
diff --git a/arch/Kconfig b/arch/Kconfig
index 366ec06a5185..c31416b10586 100644
--- a/arch/Kconfig
+++ b/arch/Kconfig
@@ -112,6 +112,25 @@ config HAVE_EFFICIENT_UNALIGNED_ACCESS
 	  See Documentation/unaligned-memory-access.txt for more
 	  information on the topic of unaligned memory accesses.
 
+config ARCH_USE_BUILTIN_BSWAP
+	bool
+	help
+	  Modern versions of GCC (since 4.4) have builtin functions
+	  for handling byte-swapping. Using these, instead of the old
+	  inline assembler that the architecture code provides in the
+	  __arch_bswapXX() macros, allows the compiler to see what's
+	  happening and offers more opportunity for optimisation. In
+	  particular, the compiler will be able to combine the byteswap
+	  with a nearby load or store and use load-and-swap or
+	  store-and-swap instructions if the architecture has them. It
+	  should almost *never* result in code which is worse than the
+	  hand-coded assembler in <asm/swab.h>.  But just in case it
+	  does, the use of the builtins is optional.
+
+	  Any architecture with load-and-swap or store-and-swap
+	  instructions should set this. And it shouldn't hurt to set it
+	  on architectures that don't have such instructions.
+
 config HAVE_SYSCALL_WRAPPERS
 	bool
 
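For completeness, an architecture opts in by selecting the new symbol from its own top-level Kconfig entry. The stanza below is a hypothetical sketch in the diff's own Kconfig language; the PPC entry shown is illustrative, not a hunk from this commit.

config PPC
	bool
	default y
	select ARCH_USE_BUILTIN_BSWAP

Per the help text above, this is worthwhile on any architecture with load-and-swap or store-and-swap instructions, and it shouldn't hurt on architectures without them.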