| author | Avi Kivity <avi@redhat.com> | 2010-02-13 03:33:12 -0500 |
|---|---|---|
| committer | H. Peter Anvin <hpa@zytor.com> | 2010-02-13 16:37:56 -0500 |
| commit | 0d1622d7f526311d87d7da2ee7dd14b73e45d3fc (patch) | |
| tree | eb97e7b70d96faabbbd32cfea8fa34ac5e12eef5 | |
| parent | 1838ef1d782f7527e6defe87e180598622d2d071 (diff) | |
x86-64, rwsem: Avoid store forwarding hazard in __downgrade_write
The Intel Architecture Optimization Reference Manual states that a short
load that follows a long store to the same object will suffer a store
forwarding penalty, particularly if the two accesses use different addresses.
Trivially, a long load that follows a short store will also suffer a penalty.
__downgrade_write() in rwsem incurs both penalties: the increment operation
will not be able to reuse a recently-loaded rwsem value, and its result will
not be reused by any immediately following rwsem operation.
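To make the hazard concrete, here is a minimal user-space sketch of the mismatched
access pattern (a hypothetical illustration only; struct sem, wide_add() and
narrow_inc_high() are stand-ins, not kernel code). The other rwsem fast paths
update the full-width count, while the old x86_64 __downgrade_write() touched
only 4 bytes at offset 4, so neither access lines up with the other's store:

```c
#include <stdint.h>

/* Hypothetical stand-in for struct rw_semaphore's counter. */
struct sem { int64_t count; };

/* Full-width locked add, as the other rwsem fast paths do:
 * reads and writes all 8 bytes of the count. */
static inline void wide_add(struct sem *s, int64_t delta)
{
	asm volatile("lock addq %1,%0"
		     : "+m" (s->count)
		     : "er" (delta)
		     : "memory", "cc");
}

/* Narrow locked increment of the high 32 bits, as the old x86_64
 * __downgrade_write() did with "lock incl 4(%reg)": a 4-byte access
 * at a different address, which cannot be forwarded the result of a
 * preceding 8-byte store, and whose own 4-byte store cannot satisfy
 * a following 8-byte load. */
static inline void narrow_inc_high(struct sem *s)
{
	asm volatile("lock incl 4(%1)"
		     : "+m" (s->count)
		     : "r" (s)
		     : "memory", "cc");
}
```

When wide_add() and narrow_inc_high() hit the same counter back to back, the
later load must wait for the earlier store to reach the cache rather than
having the data forwarded to it directly.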
A comment in the code states that this is because 64-bit immediates are
special and expensive; but while they are slightly special (only a single
instruction allows them), they aren't expensive: a test shows that two loops,
one loading a 32-bit immediate and one loading a 64-bit immediate, both take
1.5 cycles per iteration.
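The commit does not include that test, but a user-space reconstruction of the
methodology might look like the sketch below (assumed approach: rdtsc timing
around two tight loops, one mov per immediate width; the loop count and exact
mnemonics are my choices, not the original benchmark):

```c
#include <stdio.h>
#include <stdint.h>

static inline uint64_t rdtsc(void)
{
	uint32_t lo, hi;
	asm volatile("rdtsc" : "=a" (lo), "=d" (hi));
	return ((uint64_t)hi << 32) | lo;
}

int main(void)
{
	enum { N = 100 * 1000 * 1000 };
	uint64_t t0, t1, t2;
	int dummy32;
	long dummy64;

	t0 = rdtsc();
	for (int i = 0; i < N; i++)	/* mov r32, imm32 */
		asm volatile("mov $0x12345678, %0" : "=r" (dummy32));
	t1 = rdtsc();
	for (int i = 0; i < N; i++)	/* mov r64, imm64 (movabs) */
		asm volatile("movabs $0x123456789abcdef0, %0" : "=r" (dummy64));
	t2 = rdtsc();

	printf("32-bit immediate: %.2f cycles/iter\n", (double)(t1 - t0) / N);
	printf("64-bit immediate: %.2f cycles/iter\n", (double)(t2 - t1) / N);
	return 0;
}
```

If both loops report the same figure, the loop overhead rather than the
immediate width is the bottleneck, which is what the quoted 1.5 cycles per
iteration result indicates.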
Fix this by changing __downgrade_write() to use the same add instruction on
i386 and on x86_64, so that it uses the same operand size as all the other
rwsem functions.
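As a worked example of what the shared add actually does, the transition in
the patch comments below can be checked directly. The bias values come from
the removed #error check (RWSEM_WAITING_BIAS == -0x100000000 on x86_64) plus
the usual RWSEM_ACTIVE_BIAS of 1; this is a user-space check, not kernel code:

```c
#include <stdio.h>
#include <stdint.h>

#define RWSEM_ACTIVE_BIAS	1LL
#define RWSEM_WAITING_BIAS	(-0x100000000LL)   /* x86_64; -0x10000 on i386 */

int main(void)
{
	/* Count held by one writer, no waiters: 0xffffffff00000001. */
	int64_t count = RWSEM_WAITING_BIAS + RWSEM_ACTIVE_BIAS;

	printf("before: 0x%016llx\n", (unsigned long long)count);

	/* __downgrade_write()'s add: drop the waiting bias, keeping one
	 * active (now reader) bias. The result is positive, so the jns
	 * in the patch skips the wakeup call. */
	count += -RWSEM_WAITING_BIAS;
	printf("after:  0x%016llx\n", (unsigned long long)count);
	return 0;
}
```

With waiters queued the count stays negative after the add, the jns falls
through, and call_rwsem_downgrade_wake runs.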
Signed-off-by: Avi Kivity <avi@redhat.com>
LKML-Reference: <1266049992-17419-1-git-send-email-avi@redhat.com>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
 arch/x86/include/asm/rwsem.h | 25 +++++--------------------
 1 file changed, 5 insertions(+), 20 deletions(-)
diff --git a/arch/x86/include/asm/rwsem.h b/arch/x86/include/asm/rwsem.h
index 10204a25bf93..606ede126972 100644
--- a/arch/x86/include/asm/rwsem.h
+++ b/arch/x86/include/asm/rwsem.h
@@ -232,34 +232,19 @@ static inline void __up_write(struct rw_semaphore *sem)
  */
 static inline void __downgrade_write(struct rw_semaphore *sem)
 {
-#ifdef CONFIG_X86_64
-# if RWSEM_WAITING_BIAS != -0x100000000
-# error "This code assumes RWSEM_WAITING_BIAS == -2^32"
-# endif
-
-	/* 64-bit immediates are special and expensive, and not needed here */
-	asm volatile("# beginning __downgrade_write\n\t"
-		     LOCK_PREFIX "incl 4(%1)\n\t"
-		     /* transitions 0xZZZZZZZZ00000001 -> 0xYYYYYYYY00000001 */
-		     "  jns       1f\n\t"
-		     "  call call_rwsem_downgrade_wake\n"
-		     "1:\n\t"
-		     "# ending __downgrade_write\n"
-		     : "+m" (sem->count)
-		     : "a" (sem)
-		     : "memory", "cc");
-#else
 	asm volatile("# beginning __downgrade_write\n\t"
 		     LOCK_PREFIX _ASM_ADD "%2,(%1)\n\t"
-		     /* transitions 0xZZZZ0001 -> 0xYYYY0001 */
+		     /*
+		      * transitions 0xZZZZ0001 -> 0xYYYY0001 (i386)
+		      *     0xZZZZZZZZ00000001 -> 0xYYYYYYYY00000001 (x86_64)
+		      */
 		     "  jns       1f\n\t"
 		     "  call call_rwsem_downgrade_wake\n"
 		     "1:\n\t"
 		     "# ending __downgrade_write\n"
 		     : "+m" (sem->count)
-		     : "a" (sem), "i" (-RWSEM_WAITING_BIAS)
+		     : "a" (sem), "er" (-RWSEM_WAITING_BIAS)
 		     : "memory", "cc");
-#endif
 }
 
 /*
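A final note on the constraint change from "i" to "er" in the patch: the
x86_64 add instruction encodes at most a sign-extended 32-bit immediate, and
-RWSEM_WAITING_BIAS is 0x100000000 on x86_64, which does not fit. The "e"
constraint matches exactly the constants that do fit, while the "r"
alternative lets GCC fall back to loading the constant into a register first.
A minimal user-space sketch of the constraint (hypothetical, not the kernel
code itself):

```c
#include <stdio.h>

int main(void)
{
	long count = (long)0xffffffff00000001UL;  /* writer-held count */

	/*
	 * "er": emit an immediate add when the constant fits in a
	 * sign-extended 32-bit immediate ("e"); otherwise GCC loads it
	 * into a register first ("r"). 0x100000000 takes the register
	 * path here.
	 */
	asm volatile("addq %1,%0"
		     : "+m" (count)
		     : "er" (0x100000000L)
		     : "memory", "cc");

	printf("count = 0x%016lx\n", count);  /* 0x0000000000000001 */
	return 0;
}
```

In the patch the one "er" constraint serves both builds: on i386,
-RWSEM_WAITING_BIAS is only 0x10000, which satisfies "e", so that build keeps
the immediate form of the add.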
