author	Eric Dumazet <eric.dumazet@gmail.com>	2012-07-11 01:50:31 -0400
committer	David S. Miller <davem@davemloft.net>	2012-07-11 21:12:59 -0400
commit	46d3ceabd8d98ed0ad10f20c595ca784e34786c5 (patch)
tree	771200292431be56c6ebcb23af9206bc03d40e65 /include/linux/tcp.h
parent	2100844ca9d7055d5cddce2f8ed13af94c01f85b (diff)
tcp: TCP Small Queues
This introduces TSQ (TCP Small Queues).
TSQ's goal is to reduce the number of TCP packets in xmit queues (qdisc &
device queues), to reduce RTT and cwnd bias, part of the bufferbloat
problem.
sk->sk_wmem_alloc is not allowed to grow above a given limit,
allowing no more than ~128KB [1] per TCP socket in the qdisc/dev layers at a
given time.
TSO packets are sized/capped to half the limit, so that we have two
TSO packets in flight, allowing better bandwidth use.
As a side effect, setting the limit to 40000 automatically reduces the
standard gso max limit (65536) to 40000/2: smaller TSO packets can help
reduce latencies of high-prio packets.
This means we divert sock_wfree() to a tcp_wfree() handler, to
queue/send following frames when skb_orphan() [2] is called for the
already queued skbs.
Results on my dev machines (tg3/ixgbe nics) are really impressive,
using standard pfifo_fast, and with or without TSO/GSO.
Without reduction of nominal bandwidth, we have reduction of buffering
per bulk sender :
< 1ms on Gbit (instead of 50ms with TSO)
< 8ms on 100Mbit (instead of 132 ms)
I no longer have 4 MBytes backlogged in qdisc by a single netperf
session, and socket autotuning on both sides no longer uses 4 MBytes.
As the skb destructor cannot restart xmit itself (the qdisc lock might be
taken at this point), we delegate the work to a tasklet. We use one
tasklet per cpu for performance reasons.
If the tasklet finds a socket owned by the user, it sets the TSQ_OWNED flag.
This flag is tested in a new protocol method called from release_sock(),
to eventually send the new segments.
[1] New /proc/sys/net/ipv4/tcp_limit_output_bytes tunable
[2] skb_orphan() is usually called at TX completion time,
but some drivers call it in their start_xmit() handler.
These drivers should at least use BQL, or else a single TCP
session can still fill the whole NIC TX ring, since TSQ will
have no effect.
Signed-off-by: Eric Dumazet <edumazet@google.com>
Cc: Dave Taht <dave.taht@bufferbloat.net>
Cc: Tom Herbert <therbert@google.com>
Cc: Matt Mathis <mattmathis@google.com>
Cc: Yuchung Cheng <ycheng@google.com>
Cc: Nandita Dukkipati <nanditad@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Diffstat (limited to 'include/linux/tcp.h')
 include/linux/tcp.h | 9 +++++++++
 1 file changed, 9 insertions(+), 0 deletions(-)
diff --git a/include/linux/tcp.h b/include/linux/tcp.h
index 2de9cf46f9fc..1888169e07c7 100644
--- a/include/linux/tcp.h
+++ b/include/linux/tcp.h
@@ -339,6 +339,9 @@ struct tcp_sock {
 	u32	rcv_tstamp;	/* timestamp of last received ACK (for keepalives) */
 	u32	lsndtime;	/* timestamp of last sent data packet (for restart window) */
 
+	struct list_head tsq_node; /* anchor in tsq_tasklet.head list */
+	unsigned long	tsq_flags;
+
 	/* Data for direct copy to user */
 	struct {
 		struct sk_buff_head	prequeue;
@@ -494,6 +497,12 @@ struct tcp_sock {
 	struct tcp_cookie_values  *cookie_values;
 };
 
+enum tsq_flags {
+	TSQ_THROTTLED,
+	TSQ_QUEUED,
+	TSQ_OWNED,	/* tcp_tasklet_func() found socket was locked */
+};
+
 static inline struct tcp_sock *tcp_sk(const struct sock *sk)
 {
 	return (struct tcp_sock *)sk;