author     Linus Torvalds <torvalds@linux-foundation.org>   2017-11-13 15:38:26 -0500
committer  Linus Torvalds <torvalds@linux-foundation.org>   2017-11-13 15:38:26 -0500
commit     8e9a2dba8686187d8c8179e5b86640e653963889 (patch)
tree       a4ba543649219cbb28d91aab65b785d763f5d069 /net/ipv4/tcp_input.c
parent     6098850e7e6978f95a958f79a645a653228d0002 (diff)
parent     450cbdd0125cfa5d7bbf9e2a6b6961cc48d29730 (diff)
Merge branch 'locking-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull core locking updates from Ingo Molnar:
"The main changes in this cycle are:
- Another attempt at enabling cross-release lockdep dependency
tracking (automatically part of CONFIG_PROVE_LOCKING=y), this time
with better performance and fewer false positives. (Byungchul Park)
- Introduce lockdep_assert_irqs_enabled()/disabled() and convert
open-coded equivalents to lockdep variants. (Frederic Weisbecker)
- Add down_read_killable() and use it in the VFS's iterate_dir()
method. (Kirill Tkhai)
- Convert remaining uses of ACCESS_ONCE() to
READ_ONCE()/WRITE_ONCE(). Most of the conversion was Coccinelle
driven. (Mark Rutland, Paul E. McKenney)
- Get rid of lockless_dereference(), by strengthening Alpha atomics,
strengthening READ_ONCE() with smp_read_barrier_depends() and thus
being able to convert users of lockless_dereference() to
READ_ONCE(). (Will Deacon)
- Various micro-optimizations:
- better PV qspinlocks (Waiman Long),
- better x86 barriers (Michael S. Tsirkin)
- better x86 refcounts (Kees Cook)
- ... plus other fixes and enhancements. (Borislav Petkov, Juergen
Gross, Miguel Bernal Marin)"
* 'locking-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (70 commits)
locking/x86: Use LOCK ADD for smp_mb() instead of MFENCE
rcu: Use lockdep to assert IRQs are disabled/enabled
netpoll: Use lockdep to assert IRQs are disabled/enabled
timers/posix-cpu-timers: Use lockdep to assert IRQs are disabled/enabled
sched/clock, sched/cputime: Use lockdep to assert IRQs are disabled/enabled
irq_work: Use lockdep to assert IRQs are disabled/enabled
irq/timings: Use lockdep to assert IRQs are disabled/enabled
perf/core: Use lockdep to assert IRQs are disabled/enabled
x86: Use lockdep to assert IRQs are disabled/enabled
smp/core: Use lockdep to assert IRQs are disabled/enabled
timers/hrtimer: Use lockdep to assert IRQs are disabled/enabled
timers/nohz: Use lockdep to assert IRQs are disabled/enabled
workqueue: Use lockdep to assert IRQs are disabled/enabled
irq/softirqs: Use lockdep to assert IRQs are disabled/enabled
locking/lockdep: Add IRQs disabled/enabled assertion APIs: lockdep_assert_irqs_enabled()/disabled()
locking/pvqspinlock: Implement hybrid PV queued/unfair locks
locking/rwlocks: Fix comments
x86/paravirt: Set up the virt_spin_lock_key after static keys get initialized
block, locking/lockdep: Assign a lock_class per gendisk used for wait_for_completion()
workqueue: Remove now redundant lock acquisitions wrt. workqueue flushes
...
Diffstat (limited to 'net/ipv4/tcp_input.c')
 net/ipv4/tcp_input.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
index b6bb3cdfad09..887585045b27 100644
--- a/net/ipv4/tcp_input.c
+++ b/net/ipv4/tcp_input.c
@@ -816,12 +816,12 @@ static void tcp_update_pacing_rate(struct sock *sk)
 	if (likely(tp->srtt_us))
 		do_div(rate, tp->srtt_us);
 
-	/* ACCESS_ONCE() is needed because sch_fq fetches sk_pacing_rate
+	/* WRITE_ONCE() is needed because sch_fq fetches sk_pacing_rate
 	 * without any lock. We want to make sure compiler wont store
 	 * intermediate values in this location.
 	 */
-	ACCESS_ONCE(sk->sk_pacing_rate) = min_t(u64, rate,
-						sk->sk_max_pacing_rate);
+	WRITE_ONCE(sk->sk_pacing_rate, min_t(u64, rate,
+					     sk->sk_max_pacing_rate));
 }
 
 /* Calculate rto without backoff. This is the second half of Van Jacobson's
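As the in-code comment notes, the sch_fq scheduler fetches sk_pacing_rate without holding the socket lock, which is why the lockless store above needs WRITE_ONCE(). A minimal sketch of the matching reader side (not the actual sch_fq code; the function name is made up, and the field's exact width varies across kernel versions):

    #include <linux/compiler.h>
    #include <net/sock.h>

    /* Hypothetical lockless reader pairing with the WRITE_ONCE() writer above. */
    static u64 example_sample_pacing_rate(const struct sock *sk)
    {
    	/*
    	 * READ_ONCE() forces a single load and keeps the compiler from
    	 * re-reading or caching the value, so the reader never observes
    	 * an intermediate store made by tcp_update_pacing_rate().
    	 */
    	return READ_ONCE(sk->sk_pacing_rate);
    }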