author	Linus Torvalds <torvalds@linux-foundation.org>	2012-12-19 10:52:48 -0500
committer	Linus Torvalds <torvalds@linux-foundation.org>	2012-12-19 10:52:48 -0500
commit	7f2de8171ddf28fdb2ca7f9a683ee1207849f718
tree	d89da981ac762de3fd32e1c08ddc8041f3c37519 /arch
parent	59771079c18c44e39106f0f30054025acafadb41
parent	cf66bb93e0f75e0a4ba1ec070692618fa028e994
Merge tag 'byteswap-for-linus-20121219' of git://git.infradead.org/users/dwmw2/byteswap
Pull preparatory gcc intrinsics bswap patch from David Woodhouse:
"This single patch is effectively a no-op for now. It enables
architectures to opt in to using GCC's __builtin_bswapXX() intrinsics
for byteswapping, and if we merge this now then the architecture
maintainers can enable it for their arch during the next cycle without
dependency issues.
It's worth making it a per-arch opt-in, because although in *theory*
the compiler should never do worse than hand-coded assembler (and of
course it also ought to do a lot better on platforms like Atom and
PowerPC which have load-and-swap or store-and-swap instructions), that
isn't always the case. See
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=46453
for example."
* tag 'byteswap-for-linus-20121219' of git://git.infradead.org/users/dwmw2/byteswap:
byteorder: allow arch to opt to use GCC intrinsics for byteswapping
Diffstat (limited to 'arch')
-rw-r--r--	arch/Kconfig	19
1 file changed, 19 insertions(+), 0 deletions(-)
diff --git a/arch/Kconfig b/arch/Kconfig
index 54ffd0f9df21..8e9e3246b2b4 100644
--- a/arch/Kconfig
+++ b/arch/Kconfig
@@ -113,6 +113,25 @@ config HAVE_EFFICIENT_UNALIGNED_ACCESS
 	  See Documentation/unaligned-memory-access.txt for more
 	  information on the topic of unaligned memory accesses.
 
+config ARCH_USE_BUILTIN_BSWAP
+	bool
+	help
+	  Modern versions of GCC (since 4.4) have builtin functions
+	  for handling byte-swapping. Using these, instead of the old
+	  inline assembler that the architecture code provides in the
+	  __arch_bswapXX() macros, allows the compiler to see what's
+	  happening and offers more opportunity for optimisation. In
+	  particular, the compiler will be able to combine the byteswap
+	  with a nearby load or store and use load-and-swap or
+	  store-and-swap instructions if the architecture has them. It
+	  should almost *never* result in code which is worse than the
+	  hand-coded assembler in <asm/swab.h>. But just in case it
+	  does, the use of the builtins is optional.
+
+	  Any architecture with load-and-swap or store-and-swap
+	  instructions should set this. And it shouldn't hurt to set it
+	  on architectures that don't have such instructions.
+
 config HAVE_SYSCALL_WRAPPERS
 	bool
 