author    Linus Torvalds <torvalds@linux-foundation.org>  2013-11-13 03:40:34 -0500
committer Linus Torvalds <torvalds@linux-foundation.org>  2013-11-13 03:40:34 -0500
commit    42a2d923cc349583ebf6fdd52a7d35e1c2f7e6bd
tree      2b2b0c03b5389c1301800119333967efafd994ca /net/ipv4/tcp_ipv4.c
parent    5cbb3d216e2041700231bcfc383ee5f8b7fc8b74
parent    75ecab1df14d90e86cebef9ec5c76befde46e65f
Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net-next
Pull networking updates from David Miller:

 1) The addition of nftables. No longer will we need protocol-aware
    firewall filtering modules; it can all live in userspace.

    At the core of nftables is a, for lack of a better term, virtual
    machine that executes byte codes to inspect packets or metadata
    (arriving interface index, etc.) and make verdict decisions.

    Besides support for loading packet contents and comparing them, the
    interpreter supports lookups in various data structures as
    fundamental operations. For example, sets are supported, so one
    could create a set of whitelisted IP address entries, attach ACCEPT
    verdicts to them, and use the appropriate byte codes to do such
    lookups.

    Since the interpreted code is composed in userspace, userspace can
    do things like optimize it before giving it to the kernel.

    Another major improvement is the ability to atomically update
    portions of the ruleset. In the existing netfilter implementation,
    one has to update the entire rule set to make any change, which is
    very expensive.

    Userspace tools exist to create nftables rules from existing
    netfilter rule sets, but both kernel implementations will need to
    co-exist for quite some time as we transition from the old to the
    new.

    Kudos to Patrick McHardy, Pablo Neira Ayuso, and others who have
    worked so hard on this.

 2) Daniel Borkmann and Hannes Frederic Sowa made several improvements
    to our pseudo-random number generator, mostly used for things like
    UDP port randomization and netfilter. In particular, the taus88
    generator is upgraded to taus113, and test cases are added.

 3) Support 64-bit rates in the HTB and TBF schedulers, from Eric
    Dumazet and Yang Yingliang.

 4) Add support for new 577xx tigon3 chips to the tg3 driver, from
    Nithin Sujir.

 5) Fix two fatal flaws in TCP dynamic right sizing, from Eric Dumazet,
    Neal Cardwell, and Yuchung Cheng.

 6) Allow IP_TOS and IP_TTL to be specified in sendmsg() ancillary
    control message data, much like other socket option attributes.
    From Francesco Fusco. (A usage sketch follows the shortlog below.)

 7) Allow applications to specify a cap on the rate computed
    automatically by the kernel for pacing flows, via a new
    SO_MAX_PACING_RATE socket option. From Eric Dumazet. (See the
    sketch after the shortlog below.)

 8) Make the initial autotuned send buffer sizing in TCP more closely
    reflect actual needs, from Eric Dumazet.

 9) Currently early socket demux only happens for TCP sockets, but we
    can do it for connected UDP sockets too. Implementation from Shawn
    Bohrer. (See the sketch after the shortlog below.)

10) Refactor inet socket demux with the goal of improving hash demux
    performance for listening sockets. The main goals are to allow RCU
    lookups even on request sockets and to eliminate contention on the
    listening lock. From Eric Dumazet.

11) The bonding layer has many demuxes in its fast path, and an RCU
    conversion was started back in 3.11; several changes here extend
    the RCU usage to even more locations. From Ding Tianhong and Wang
    Yufen, based upon suggestions by Nikolay Aleksandrov and Veaceslav
    Falico.

12) Allow segmentation offloads to be stacked, in particular enabling
    segmentation offloading over tunnels. From Eric Dumazet.

13) Significantly improve the handling of the secret keys we feed into
    the various hash functions in the inet hashtables, TCP fast open,
    and syncookies. From Hannes Frederic Sowa. The key fundamental
    operation is net_get_random_once(), which uses static keys. Hannes
    even extended this to ipv4/ipv6 fragmentation handling and our
    generic flow dissector. (A usage sketch follows the shortlog
    below.)
14) The generic driver layer now takes care to set the driver data to
    NULL on device removal, so it is no longer necessary for drivers to
    set it to NULL explicitly. Many drivers have been cleaned up in
    this way, from Jingoo Han.

15) Add a BPF-based packet scheduler classifier, from Daniel Borkmann.

16) Improve the CRC32 interfaces and generic SKB checksum iterators so
    that SCTP's checksumming can be handled more cleanly. Also from
    Daniel Borkmann.

17) Add a new PMTU discovery mode, IP_PMTUDISC_INTERFACE, which forces
    use of the interface MTU value. This helps avoid PMTU attacks,
    particularly against DNS servers. From Hannes Frederic Sowa. (See
    the sketch after the shortlog below.)

18) Use generic XPS for transmit queue steering rather than an internal
    (re-)implementation in virtio-net. From Jason Wang.

* git://git.kernel.org/pub/scm/linux/kernel/git/davem/net-next: (1622 commits)
  random32: add test cases for taus113 implementation
  random32: upgrade taus88 generator to taus113 from errata paper
  random32: move rnd_state to linux/random.h
  random32: add prandom_reseed_late() and call when nonblocking pool becomes initialized
  random32: add periodic reseeding
  random32: fix off-by-one in seeding requirement
  PHY: Add RTL8201CP phy_driver to realtek
  xtsonic: add missing platform_set_drvdata() in xtsonic_probe()
  macmace: add missing platform_set_drvdata() in mace_probe()
  ethernet/arc/arc_emac: add missing platform_set_drvdata() in arc_emac_probe()
  ipv6: protect for_each_sk_fl_rcu in mem_check with rcu_read_lock_bh
  vlan: Implement vlan_dev_get_egress_qos_mask as an inline.
  ixgbe: add warning when max_vfs is out of range.
  igb: Update link modes display in ethtool
  netfilter: push reasm skb through instead of original frag skbs
  ip6_output: fragment outgoing reassembled skb properly
  MAINTAINERS: mv643xx_eth: take over maintainership from Lennart
  net_sched: tbf: support of 64bit rates
  ixgbe: deleting dfwd stations out of order can cause null ptr deref
  ixgbe: fix build err, num_rx_queues is only available with CONFIG_RPS
  ...
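As a rough illustration of item 6, here is a userspace sketch of setting
TTL and TOS per datagram via sendmsg() ancillary data instead of
socket-wide setsockopt(). The helper name send_with_ttl_tos and the
assumption that fd is an already-connect()ed UDP socket are mine, not
part of the patch; the cmsg types IP_TTL and IP_TOS at level IPPROTO_IP
are what the series wires up:

    #include <string.h>
    #include <netinet/in.h>
    #include <sys/socket.h>

    /* Hypothetical helper: send one datagram with a per-packet TTL and
     * TOS, leaving the socket-wide values untouched.
     */
    static ssize_t send_with_ttl_tos(int fd, const void *buf, size_t len,
                                     int ttl, int tos)
    {
            /* union guarantees alignment for struct cmsghdr */
            union {
                    char buf[CMSG_SPACE(sizeof(int)) * 2];
                    struct cmsghdr align;
            } u;
            struct iovec iov = { .iov_base = (void *)buf, .iov_len = len };
            struct msghdr msg = {
                    .msg_iov        = &iov,
                    .msg_iovlen     = 1,
                    .msg_control    = u.buf,
                    .msg_controllen = sizeof(u.buf),
            };
            struct cmsghdr *cm = CMSG_FIRSTHDR(&msg);

            cm->cmsg_level = IPPROTO_IP;            /* per-packet TTL */
            cm->cmsg_type  = IP_TTL;
            cm->cmsg_len   = CMSG_LEN(sizeof(int));
            memcpy(CMSG_DATA(cm), &ttl, sizeof(int));

            cm = CMSG_NXTHDR(&msg, cm);
            cm->cmsg_level = IPPROTO_IP;            /* per-packet TOS */
            cm->cmsg_type  = IP_TOS;
            cm->cmsg_len   = CMSG_LEN(sizeof(int));
            memcpy(CMSG_DATA(cm), &tos, sizeof(int));

            return sendmsg(fd, &msg, 0);
    }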
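For item 7, the new option is a plain setsockopt() knob. A minimal
sketch, assuming fd is an existing TCP socket and the 3.13-era 32-bit
value (in bytes per second) applies:

    #include <stdio.h>
    #include <sys/socket.h>

    /* Cap the kernel-computed pacing rate at ~1 MB/s for this flow.
     * Pacing is enforced when the egress qdisc supports it (e.g. fq).
     */
    unsigned int rate = 1000 * 1000;        /* bytes per second */

    if (setsockopt(fd, SOL_SOCKET, SO_MAX_PACING_RATE,
                   &rate, sizeof(rate)) < 0)
            perror("setsockopt(SO_MAX_PACING_RATE)");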
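Item 9's fast path only triggers for connected UDP sockets, so an
application opts in simply by calling connect() on the socket. A sketch;
the address 192.0.2.1, the port, and the payload/payload_len variables
are placeholders of mine:

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>

    /* A connected UDP socket lets the kernel resolve the owning socket
     * early in IP receive (one lookup), analogous to the existing TCP
     * early demux. Unconnected sendto()-style traffic does not benefit.
     */
    struct sockaddr_in dst = {
            .sin_family = AF_INET,
            .sin_port   = htons(4242),      /* placeholder port */
    };

    inet_pton(AF_INET, "192.0.2.1", &dst.sin_addr);
    connect(fd, (struct sockaddr *)&dst, sizeof(dst));
    send(fd, payload, payload_len, 0);      /* instead of sendto() */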
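Item 13's net_get_random_once() defers secret generation until first
use, after the entropy pool has had time to initialize. A kernel-side
sketch; my_table_secret and my_hashfn are invented names, and only
net_get_random_once() itself comes from this series:

    #include <linux/jhash.h>
    #include <linux/net.h>

    static u32 my_table_secret __read_mostly;

    static u32 my_hashfn(__be32 saddr, __be32 daddr)
    {
            /* Generates the secret exactly once, on the first call;
             * the static-key machinery makes later calls nearly free.
             */
            net_get_random_once(&my_table_secret, sizeof(my_table_secret));

            return jhash_2words((__force u32)saddr, (__force u32)daddr,
                                my_table_secret);
    }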
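And for item 17, a sketch of opting a socket into the new PMTU mode.
Note that IP_PMTUDISC_INTERFACE is new in this cycle, so older libc
headers may not define it yet (the fallback value below matches the
kernel uapi header):

    #include <stdio.h>
    #include <netinet/in.h>
    #include <sys/socket.h>

    #ifndef IP_PMTUDISC_INTERFACE
    #define IP_PMTUDISC_INTERFACE 4         /* new in this merge window */
    #endif

    /* Always use the interface MTU and ignore learned per-route PMTU
     * state, so spoofed ICMP fragmentation-needed messages cannot
     * shrink the path MTU under us (useful for a DNS server's socket).
     */
    int pmtu = IP_PMTUDISC_INTERFACE;

    if (setsockopt(fd, IPPROTO_IP, IP_MTU_DISCOVER,
                   &pmtu, sizeof(pmtu)) < 0)
            perror("setsockopt(IP_MTU_DISCOVER)");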
Diffstat (limited to 'net/ipv4/tcp_ipv4.c')
-rw-r--r--  net/ipv4/tcp_ipv4.c | 125
1 file changed, 35 insertions(+), 90 deletions(-)
diff --git a/net/ipv4/tcp_ipv4.c b/net/ipv4/tcp_ipv4.c
index b14266bb91eb..14bba8a1c5a7 100644
--- a/net/ipv4/tcp_ipv4.c
+++ b/net/ipv4/tcp_ipv4.c
@@ -288,6 +288,7 @@ static void tcp_v4_mtu_reduced(struct sock *sk)
 	mtu = dst_mtu(dst);
 
 	if (inet->pmtudisc != IP_PMTUDISC_DONT &&
+	    ip_sk_accept_pmtu(sk) &&
 	    inet_csk(sk)->icsk_pmtu_cookie > mtu) {
 		tcp_sync_mss(sk, mtu);
 
@@ -835,11 +836,11 @@ static int tcp_v4_send_synack(struct sock *sk, struct dst_entry *dst,
 	skb = tcp_make_synack(sk, dst, req, NULL);
 
 	if (skb) {
-		__tcp_v4_send_check(skb, ireq->loc_addr, ireq->rmt_addr);
+		__tcp_v4_send_check(skb, ireq->ir_loc_addr, ireq->ir_rmt_addr);
 
 		skb_set_queue_mapping(skb, queue_mapping);
-		err = ip_build_and_send_pkt(skb, sk, ireq->loc_addr,
-					    ireq->rmt_addr,
+		err = ip_build_and_send_pkt(skb, sk, ireq->ir_loc_addr,
+					    ireq->ir_rmt_addr,
 					    ireq->opt);
 		err = net_xmit_eval(err);
 		if (!tcp_rsk(req)->snt_synack && !err)
@@ -972,7 +973,7 @@ static struct tcp_md5sig_key *tcp_v4_reqsk_md5_lookup(struct sock *sk,
 {
 	union tcp_md5_addr *addr;
 
-	addr = (union tcp_md5_addr *)&inet_rsk(req)->rmt_addr;
+	addr = (union tcp_md5_addr *)&inet_rsk(req)->ir_rmt_addr;
 	return tcp_md5_do_lookup(sk, addr, AF_INET);
 }
 
@@ -1149,8 +1150,8 @@ int tcp_v4_md5_hash_skb(char *md5_hash, struct tcp_md5sig_key *key,
 		saddr = inet_sk(sk)->inet_saddr;
 		daddr = inet_sk(sk)->inet_daddr;
 	} else if (req) {
-		saddr = inet_rsk(req)->loc_addr;
-		daddr = inet_rsk(req)->rmt_addr;
+		saddr = inet_rsk(req)->ir_loc_addr;
+		daddr = inet_rsk(req)->ir_rmt_addr;
 	} else {
 		const struct iphdr *iph = ip_hdr(skb);
 		saddr = iph->saddr;
@@ -1366,8 +1367,8 @@ static int tcp_v4_conn_req_fastopen(struct sock *sk,
 		kfree_skb(skb_synack);
 		return -1;
 	}
-	err = ip_build_and_send_pkt(skb_synack, sk, ireq->loc_addr,
-				    ireq->rmt_addr, ireq->opt);
+	err = ip_build_and_send_pkt(skb_synack, sk, ireq->ir_loc_addr,
+				    ireq->ir_rmt_addr, ireq->opt);
 	err = net_xmit_eval(err);
 	if (!err)
 		tcp_rsk(req)->snt_synack = tcp_time_stamp;
@@ -1410,8 +1411,8 @@ static int tcp_v4_conn_req_fastopen(struct sock *sk,
 	inet_csk(child)->icsk_af_ops->rebuild_header(child);
 	tcp_init_congestion_control(child);
 	tcp_mtup_init(child);
-	tcp_init_buffer_space(child);
 	tcp_init_metrics(child);
+	tcp_init_buffer_space(child);
 
 	/* Queue the data carried in the SYN packet. We need to first
 	 * bump skb's refcnt because the caller will attempt to free it.
@@ -1502,8 +1503,8 @@ int tcp_v4_conn_request(struct sock *sk, struct sk_buff *skb)
 	tcp_openreq_init(req, &tmp_opt, skb);
 
 	ireq = inet_rsk(req);
-	ireq->loc_addr = daddr;
-	ireq->rmt_addr = saddr;
+	ireq->ir_loc_addr = daddr;
+	ireq->ir_rmt_addr = saddr;
 	ireq->no_srccheck = inet_sk(sk)->transparent;
 	ireq->opt = tcp_v4_save_options(skb);
 
@@ -1578,15 +1579,15 @@ int tcp_v4_conn_request(struct sock *sk, struct sk_buff *skb)
 			fastopen_cookie_present(&valid_foc) ? &valid_foc : NULL);
 
 	if (skb_synack) {
-		__tcp_v4_send_check(skb_synack, ireq->loc_addr, ireq->rmt_addr);
+		__tcp_v4_send_check(skb_synack, ireq->ir_loc_addr, ireq->ir_rmt_addr);
 		skb_set_queue_mapping(skb_synack, skb_get_queue_mapping(skb));
 	} else
 		goto drop_and_free;
 
 	if (likely(!do_fastopen)) {
 		int err;
-		err = ip_build_and_send_pkt(skb_synack, sk, ireq->loc_addr,
-				ireq->rmt_addr, ireq->opt);
+		err = ip_build_and_send_pkt(skb_synack, sk, ireq->ir_loc_addr,
+				ireq->ir_rmt_addr, ireq->opt);
 		err = net_xmit_eval(err);
 		if (err || want_cookie)
 			goto drop_and_free;
@@ -1644,9 +1645,9 @@ struct sock *tcp_v4_syn_recv_sock(struct sock *sk, struct sk_buff *skb,
 	newtp = tcp_sk(newsk);
 	newinet = inet_sk(newsk);
 	ireq = inet_rsk(req);
-	newinet->inet_daddr = ireq->rmt_addr;
-	newinet->inet_rcv_saddr = ireq->loc_addr;
-	newinet->inet_saddr = ireq->loc_addr;
+	newinet->inet_daddr = ireq->ir_rmt_addr;
+	newinet->inet_rcv_saddr = ireq->ir_loc_addr;
+	newinet->inet_saddr = ireq->ir_loc_addr;
 	inet_opt = ireq->opt;
 	rcu_assign_pointer(newinet->inet_opt, inet_opt);
 	ireq->opt = NULL;
@@ -2194,18 +2195,6 @@ EXPORT_SYMBOL(tcp_v4_destroy_sock);
 #ifdef CONFIG_PROC_FS
 /* Proc filesystem TCP sock list dumping. */
 
-static inline struct inet_timewait_sock *tw_head(struct hlist_nulls_head *head)
-{
-	return hlist_nulls_empty(head) ? NULL :
-		list_entry(head->first, struct inet_timewait_sock, tw_node);
-}
-
-static inline struct inet_timewait_sock *tw_next(struct inet_timewait_sock *tw)
-{
-	return !is_a_nulls(tw->tw_node.next) ?
-		hlist_nulls_entry(tw->tw_node.next, typeof(*tw), tw_node) : NULL;
-}
-
 /*
  * Get next listener socket follow cur. If cur is NULL, get first socket
  * starting from bucket given in st->bucket; when st->bucket is zero the
@@ -2309,10 +2298,9 @@ static void *listening_get_idx(struct seq_file *seq, loff_t *pos)
 	return rc;
 }
 
-static inline bool empty_bucket(struct tcp_iter_state *st)
+static inline bool empty_bucket(const struct tcp_iter_state *st)
 {
-	return hlist_nulls_empty(&tcp_hashinfo.ehash[st->bucket].chain) &&
-		hlist_nulls_empty(&tcp_hashinfo.ehash[st->bucket].twchain);
+	return hlist_nulls_empty(&tcp_hashinfo.ehash[st->bucket].chain);
 }
 
 /*
@@ -2329,7 +2317,6 @@ static void *established_get_first(struct seq_file *seq)
 	for (; st->bucket <= tcp_hashinfo.ehash_mask; ++st->bucket) {
 		struct sock *sk;
 		struct hlist_nulls_node *node;
-		struct inet_timewait_sock *tw;
 		spinlock_t *lock = inet_ehash_lockp(&tcp_hashinfo, st->bucket);
 
 		/* Lockless fast path for the common case of empty buckets */
@@ -2345,18 +2332,7 @@ static void *established_get_first(struct seq_file *seq)
 			rc = sk;
 			goto out;
 		}
-		st->state = TCP_SEQ_STATE_TIME_WAIT;
-		inet_twsk_for_each(tw, node,
-				   &tcp_hashinfo.ehash[st->bucket].twchain) {
-			if (tw->tw_family != st->family ||
-			    !net_eq(twsk_net(tw), net)) {
-				continue;
-			}
-			rc = tw;
-			goto out;
-		}
 		spin_unlock_bh(lock);
-		st->state = TCP_SEQ_STATE_ESTABLISHED;
 	}
 out:
 	return rc;
@@ -2365,7 +2341,6 @@ out:
 static void *established_get_next(struct seq_file *seq, void *cur)
 {
 	struct sock *sk = cur;
-	struct inet_timewait_sock *tw;
 	struct hlist_nulls_node *node;
 	struct tcp_iter_state *st = seq->private;
 	struct net *net = seq_file_net(seq);
@@ -2373,45 +2348,16 @@ static void *established_get_next(struct seq_file *seq, void *cur)
 	++st->num;
 	++st->offset;
 
-	if (st->state == TCP_SEQ_STATE_TIME_WAIT) {
-		tw = cur;
-		tw = tw_next(tw);
-get_tw:
-		while (tw && (tw->tw_family != st->family || !net_eq(twsk_net(tw), net))) {
-			tw = tw_next(tw);
-		}
-		if (tw) {
-			cur = tw;
-			goto out;
-		}
-		spin_unlock_bh(inet_ehash_lockp(&tcp_hashinfo, st->bucket));
-		st->state = TCP_SEQ_STATE_ESTABLISHED;
-
-		/* Look for next non empty bucket */
-		st->offset = 0;
-		while (++st->bucket <= tcp_hashinfo.ehash_mask &&
-				empty_bucket(st))
-			;
-		if (st->bucket > tcp_hashinfo.ehash_mask)
-			return NULL;
-
-		spin_lock_bh(inet_ehash_lockp(&tcp_hashinfo, st->bucket));
-		sk = sk_nulls_head(&tcp_hashinfo.ehash[st->bucket].chain);
-	} else
-		sk = sk_nulls_next(sk);
+	sk = sk_nulls_next(sk);
 
 	sk_nulls_for_each_from(sk, node) {
 		if (sk->sk_family == st->family && net_eq(sock_net(sk), net))
-			goto found;
+			return sk;
 	}
 
-	st->state = TCP_SEQ_STATE_TIME_WAIT;
-	tw = tw_head(&tcp_hashinfo.ehash[st->bucket].twchain);
-	goto get_tw;
-found:
-	cur = sk;
-out:
-	return cur;
+	spin_unlock_bh(inet_ehash_lockp(&tcp_hashinfo, st->bucket));
+	++st->bucket;
+	return established_get_first(seq);
 }
 
 static void *established_get_idx(struct seq_file *seq, loff_t pos)
@@ -2464,10 +2410,9 @@ static void *tcp_seek_last_pos(struct seq_file *seq)
 		if (rc)
 			break;
 		st->bucket = 0;
+		st->state = TCP_SEQ_STATE_ESTABLISHED;
 		/* Fallthrough */
 	case TCP_SEQ_STATE_ESTABLISHED:
-	case TCP_SEQ_STATE_TIME_WAIT:
-		st->state = TCP_SEQ_STATE_ESTABLISHED;
 		if (st->bucket > tcp_hashinfo.ehash_mask)
 			break;
 		rc = established_get_first(seq);
@@ -2524,7 +2469,6 @@ static void *tcp_seq_next(struct seq_file *seq, void *v, loff_t *pos)
 		}
 		break;
 	case TCP_SEQ_STATE_ESTABLISHED:
-	case TCP_SEQ_STATE_TIME_WAIT:
 		rc = established_get_next(seq, v);
 		break;
 	}
@@ -2548,7 +2492,6 @@ static void tcp_seq_stop(struct seq_file *seq, void *v)
 		if (v != SEQ_START_TOKEN)
 			spin_unlock_bh(&tcp_hashinfo.listening_hash[st->bucket].lock);
 		break;
-	case TCP_SEQ_STATE_TIME_WAIT:
 	case TCP_SEQ_STATE_ESTABLISHED:
 		if (v)
 			spin_unlock_bh(inet_ehash_lockp(&tcp_hashinfo, st->bucket));
@@ -2606,10 +2549,10 @@ static void get_openreq4(const struct sock *sk, const struct request_sock *req,
 	seq_printf(f, "%4d: %08X:%04X %08X:%04X"
 		" %02X %08X:%08X %02X:%08lX %08X %5u %8d %u %d %pK%n",
 		i,
-		ireq->loc_addr,
+		ireq->ir_loc_addr,
 		ntohs(inet_sk(sk)->inet_sport),
-		ireq->rmt_addr,
-		ntohs(ireq->rmt_port),
+		ireq->ir_rmt_addr,
+		ntohs(ireq->ir_rmt_port),
 		TCP_SYN_RECV,
 		0, 0, /* could print option size, but that is af dependent. */
 		1, /* timers active (only the expire timer) */
@@ -2707,6 +2650,7 @@ static void get_timewait4_sock(const struct inet_timewait_sock *tw,
 static int tcp4_seq_show(struct seq_file *seq, void *v)
 {
 	struct tcp_iter_state *st;
+	struct sock *sk = v;
 	int len;
 
 	if (v == SEQ_START_TOKEN) {
@@ -2721,14 +2665,14 @@ static int tcp4_seq_show(struct seq_file *seq, void *v)
 	switch (st->state) {
 	case TCP_SEQ_STATE_LISTENING:
 	case TCP_SEQ_STATE_ESTABLISHED:
-		get_tcp4_sock(v, seq, st->num, &len);
+		if (sk->sk_state == TCP_TIME_WAIT)
+			get_timewait4_sock(v, seq, st->num, &len);
+		else
+			get_tcp4_sock(v, seq, st->num, &len);
 		break;
 	case TCP_SEQ_STATE_OPENREQ:
 		get_openreq4(st->syn_wait_sk, v, seq, st->num, st->uid, &len);
 		break;
-	case TCP_SEQ_STATE_TIME_WAIT:
-		get_timewait4_sock(v, seq, st->num, &len);
-		break;
 	}
 	seq_printf(seq, "%*s\n", TMPSZ - 1 - len, "");
 out:
@@ -2806,6 +2750,7 @@ struct proto tcp_prot = {
 	.orphan_count		= &tcp_orphan_count,
 	.memory_allocated	= &tcp_memory_allocated,
 	.memory_pressure	= &tcp_memory_pressure,
+	.sysctl_mem		= sysctl_tcp_mem,
 	.sysctl_wmem		= sysctl_tcp_wmem,
 	.sysctl_rmem		= sysctl_tcp_rmem,
 	.max_header		= MAX_TCP_HEADER,