author     Yuchung Cheng <ycheng@google.com>      2013-03-20 09:32:58 -0400
committer  David S. Miller <davem@davemloft.net>  2013-03-21 11:47:50 -0400
commit     9b44190dc114c1720b34975b5bfc65aece112ced
tree       c1202e05d6a04fa1d31be2ad2942fbe32ffa3f76  /net/ipv4/tcp_output.c
parent     e306e2c13b8c214618af0c61acf62a6e42d486de
tcp: refactor F-RTO
The patch series refactors the F-RTO feature (RFC4138/5682).
This is to simplify loss recovery processing. The existing F-RTO code
was developed during the experimental stage (RFC4138) and has many
experimental features. It takes a separate code path from the
traditional timeout processing by overloading CA_Disorder instead of
using the CA_Loss state. This complicates CA_Disorder state handling
because that state is also used for handling dubious ACKs and undos.
While the algorithm in the RFC does not change the congestion control,
the implementation intercepts congestion control in various places
(e.g., frto_cwnd in tcp_ack()).
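For reference, the sender congestion states involved are listed in the
standalone sketch below (an illustration, not kernel code, with names
abbreviated as in this message): CA_Disorder already serves dubious-ACK
handling, which is why hosting F-RTO there mixed two unrelated recovery
paths, while CA_Loss is the natural home for timeout processing.

/*
 * Purely illustrative listing of the sender congestion states (mirroring
 * the kernel's enum tcp_ca_state, names abbreviated as in this message),
 * to show why overloading CA_Disorder for F-RTO was awkward.
 */
#include <stdio.h>

enum ca_state { CA_Open, CA_Disorder, CA_CWR, CA_Recovery, CA_Loss };

static const char *const ca_names[] = {
	[CA_Open]     = "Open: normal transmission",
	[CA_Disorder] = "Disorder: dubious/duplicate ACKs, nothing marked lost yet",
	[CA_CWR]      = "CWR: congestion window reduction (e.g. ECN)",
	[CA_Recovery] = "Recovery: fast retransmit / SACK-based loss recovery",
	[CA_Loss]     = "Loss: RTO recovery -- where the refactored F-RTO hooks in",
};

int main(void)
{
	for (int i = CA_Open; i <= CA_Loss; i++)
		printf("%s\n", ca_names[i]);
	return 0;
}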
The new code implements the newer F-RTO (RFC5682) using the CA_Loss
processing path. F-RTO becomes a small extension of the timeout
processing and interfaces with the congestion control and Eifel undo
modules. It lets the congestion control module determine how much to
send independently. F-RTO only chooses what to send in order to detect
spurious retransmission. If the timeout is found spurious, it invokes
the existing Eifel undo algorithms, such as DSACK or TCP timestamp
based detection.
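To make the "choose what to send, let the ACKs decide" idea concrete,
here is a rough standalone sketch of the RFC5682 basic algorithm. It is
not the kernel code: all names (frto_state, frto_on_ack, ...) are
invented, and several RFC cases (window-limited sends, ACKs covering all
outstanding data) are omitted.

/*
 * Rough sketch of the RFC 5682 "basic" F-RTO decision: after an RTO the
 * sender retransmits the first unacknowledged segment and then lets the
 * next two ACKs decide whether the timeout was spurious.
 */
#include <stdbool.h>
#include <stdio.h>

enum frto_step { FRTO_WAIT_FIRST_ACK, FRTO_WAIT_SECOND_ACK, FRTO_DONE };

struct frto_state {
	enum frto_step step;
	bool spurious;
};

/* Called when the retransmission timer fires. */
static void frto_on_rto(struct frto_state *s)
{
	s->step = FRTO_WAIT_FIRST_ACK;
	s->spurious = false;
	/* ...retransmit the first unacknowledged segment here... */
}

/*
 * Called for each ACK received after the RTO retransmission.
 * Returns true once a verdict (spurious or not) has been reached.
 */
static bool frto_on_ack(struct frto_state *s, bool advances_window,
			bool have_new_data)
{
	switch (s->step) {
	case FRTO_WAIT_FIRST_ACK:
		if (!advances_window || !have_new_data) {
			/* Duplicate ACK, or nothing new to probe with:
			 * fall back to conventional RTO recovery. */
			s->step = FRTO_DONE;
			return true;
		}
		/* First ACK advances the window: send previously unsent
		 * data instead of retransmitting more. */
		s->step = FRTO_WAIT_SECOND_ACK;
		return false;
	case FRTO_WAIT_SECOND_ACK:
		/* Second ACK: if it advances the window again, it was
		 * triggered by data that was never retransmitted, so the
		 * original segment was not lost -> the RTO was spurious
		 * and an Eifel-style undo can restore cwnd/ssthresh. */
		s->spurious = advances_window;
		s->step = FRTO_DONE;
		return true;
	default:
		return true;
	}
}

int main(void)
{
	struct frto_state s;

	frto_on_rto(&s);
	frto_on_ack(&s, true, true);   /* first ACK advances: send new data */
	frto_on_ack(&s, true, true);   /* advances again -> spurious RTO    */
	printf("timeout spurious: %s\n", s.spurious ? "yes" : "no");
	return 0;
}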
The first patch removes all F-RTO code except sysctl_tcp_frto, which is
kept for the new implementation. Since CA_EVENT_FRTO is removed, TCP
Westwood now computes ssthresh on the regular timeout event,
CA_EVENT_LOSS.
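The following simplified sketch (not the actual tcp_westwood.c)
illustrates how a Westwood-style module can derive ssthresh from its
bandwidth estimate when a timeout is reported only through the generic
loss event; it assumes the classic Westwood rule
ssthresh = BWE * RTTmin / MSS, and all names here are illustrative.

/*
 * Simplified standalone sketch of a Westwood-style cwnd_event hook that
 * reacts to a generic loss event now that no dedicated F-RTO event exists.
 */
#include <stdio.h>

enum ca_event { CA_EVENT_CWND_RESTART, CA_EVENT_COMPLETE_CWR, CA_EVENT_LOSS };

struct westwood_like {
	unsigned long bw_est;      /* estimated bandwidth, bytes/sec     */
	unsigned int  rtt_min_us;  /* minimum observed RTT, microseconds */
	unsigned int  mss;         /* sender MSS, bytes                  */
	unsigned int  ssthresh;    /* slow-start threshold, packets      */
};

static unsigned int westwood_bw_rttmin(const struct westwood_like *w)
{
	/* Bandwidth-delay product expressed in packets. */
	unsigned long long bdp = (unsigned long long)w->bw_est * w->rtt_min_us;

	return (unsigned int)(bdp / 1000000ULL / w->mss);
}

static void westwood_cwnd_event(struct westwood_like *w, enum ca_event ev)
{
	switch (ev) {
	case CA_EVENT_LOSS:
		/* Regular timeout: reset ssthresh to the estimated BDP.
		 * With the separate F-RTO event removed, this is the only
		 * place a timeout reaches the module. */
		w->ssthresh = westwood_bw_rttmin(w);
		break;
	default:
		break;
	}
}

int main(void)
{
	struct westwood_like w = {
		.bw_est = 1250000,   /* ~10 Mbit/s */
		.rtt_min_us = 50000, /* 50 ms      */
		.mss = 1460,
	};

	westwood_cwnd_event(&w, CA_EVENT_LOSS);
	printf("ssthresh after timeout: %u packets\n", w.ssthresh);
	return 0;
}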
Signed-off-by: Yuchung Cheng <ycheng@google.com>
Acked-by: Neal Cardwell <ncardwell@google.com>
Acked-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Diffstat (limited to 'net/ipv4/tcp_output.c')
-rw-r--r--  net/ipv4/tcp_output.c | 11
1 file changed, 2 insertions(+), 9 deletions(-)
diff --git a/net/ipv4/tcp_output.c b/net/ipv4/tcp_output.c
index e787ecec505e..163cf5fc0119 100644
--- a/net/ipv4/tcp_output.c
+++ b/net/ipv4/tcp_output.c
@@ -78,10 +78,6 @@ static void tcp_event_new_data_sent(struct sock *sk, const struct sk_buff *skb)
 	tcp_advance_send_head(sk, skb);
 	tp->snd_nxt = TCP_SKB_CB(skb)->end_seq;
 
-	/* Don't override Nagle indefinitely with F-RTO */
-	if (tp->frto_counter == 2)
-		tp->frto_counter = 3;
-
 	tp->packets_out += tcp_skb_pcount(skb);
 	if (!prior_packets || icsk->icsk_pending == ICSK_TIME_EARLY_RETRANS ||
 	    icsk->icsk_pending == ICSK_TIME_LOSS_PROBE)
@@ -1470,11 +1466,8 @@ static inline bool tcp_nagle_test(const struct tcp_sock *tp, const struct sk_buf
 	if (nonagle & TCP_NAGLE_PUSH)
 		return true;
 
-	/* Don't use the nagle rule for urgent data (or for the final FIN).
-	 * Nagle can be ignored during F-RTO too (see RFC4138).
-	 */
-	if (tcp_urg_mode(tp) || (tp->frto_counter == 2) ||
-	    (TCP_SKB_CB(skb)->tcp_flags & TCPHDR_FIN))
+	/* Don't use the nagle rule for urgent data (or for the final FIN). */
+	if (tcp_urg_mode(tp) || (TCP_SKB_CB(skb)->tcp_flags & TCPHDR_FIN))
 		return true;
 
 	if (!tcp_nagle_check(tp, skb, cur_mss, nonagle))
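Both hunks drop frto_counter special cases: sending new data no longer
advances the counter, and the Nagle test no longer exempts F-RTO probe
segments. For clarity, here is a minimal standalone sketch of the Nagle
decision left behind; field and function names are invented for
illustration and do not match tcp_nagle_test()/tcp_nagle_check().

/*
 * Minimal sketch of the remaining Nagle decision: pushed data, urgent
 * data, a FIN, or TCP_NODELAY bypass Nagle; a less-than-MSS segment is
 * otherwise held back while earlier data is still unacknowledged.
 */
#include <stdbool.h>

struct nagle_ctx {
	bool push;             /* caller asked for an immediate send */
	bool urg;              /* urgent mode                        */
	bool fin;              /* segment carries the final FIN      */
	bool nodelay;          /* TCP_NODELAY set                    */
	unsigned int len;      /* payload length of this segment     */
	unsigned int mss;      /* current MSS                        */
	unsigned int inflight; /* packets sent but not yet ACKed     */
};

static bool nagle_may_send_now(const struct nagle_ctx *c)
{
	if (c->push || c->urg || c->fin || c->nodelay)
		return true;
	/* Full-sized segments always go out immediately. */
	if (c->len >= c->mss)
		return true;
	/* Small segment: send only when nothing is outstanding. */
	return c->inflight == 0;
}

int main(void)
{
	struct nagle_ctx c = { .len = 100, .mss = 1460, .inflight = 3 };

	/* A 100-byte segment with unacked data in flight is deferred. */
	return nagle_may_send_now(&c) ? 1 : 0;
}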