author     Neal Cardwell <ncardwell@google.com>            2012-02-26 05:06:19 -0500
committer  Greg Kroah-Hartman <gregkh@linuxfoundation.org> 2012-03-19 11:57:45 -0400
commit     85526d578a0c7b6723c1a429b39870ce3bfec11c
tree       7a86e0ecba6e857e67a0f4c84a7901a88d2bde14 /net/ipv4/tcp_input.c
parent     2991ddd266470f77442bfb023b2737b6920f8715
tcp: fix false reordering signal in tcp_shifted_skb
[ Upstream commit 4c90d3b30334833450ccbb02f452d4972a3c3c3f ]
When tcp_shifted_skb() shifts bytes from the skb that is currently
pointed to by 'highest_sack' then the increment of
TCP_SKB_CB(skb)->seq implicitly advances tcp_highest_sack_seq(). This
implicit advancement, combined with the recent fix to pass the correct
SACKed range into tcp_sacktag_one(), caused tcp_sacktag_one() to think
that the newly SACKed range was before the tcp_highest_sack_seq(),
leading to a call to tcp_update_reordering() with a degree of
reordering matching the size of the newly SACKed range (typically just
1 packet, which is a NOP, but potentially larger).
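To make the ordering hazard concrete, here is a minimal userspace
sketch (not kernel code: toy_skb, toy_before() and the single-pointer
model of tp->highest_sack are illustrative stand-ins) showing how
advancing skb->seq before tagging makes the just-SACKed range look
like it sits below the highest SACKed sequence:

    #include <stdio.h>
    #include <stdint.h>

    /* Toy model of one skb's sequence range. */
    struct toy_skb { uint32_t seq, end_seq; };

    /* Models the kernel's before(): seq1 earlier than seq2, mod 2^32. */
    static int toy_before(uint32_t seq1, uint32_t seq2)
    {
            return (int32_t)(seq1 - seq2) < 0;
    }

    int main(void)
    {
            struct toy_skb skb = { .seq = 1000, .end_seq = 2000 };
            struct toy_skb *highest_sack = &skb; /* skb is the highest SACKed skb */
            uint32_t start_seq = 1000;           /* start of newly SACKed range */
            uint32_t shifted = 100;              /* bytes shifted into prev */

            /* Buggy order: the shift advances skb->seq first, implicitly
             * advancing the toy equivalent of tcp_highest_sack_seq(). */
            skb.seq += shifted;                  /* highest SACKed seq: 1000 -> 1100 */

            /* Tagging afterwards now sees a SACK below the highest SACKed
             * sequence, which is the classic signature of reordering. */
            if (toy_before(start_seq, highest_sack->seq))
                    printf("false reordering signal\n"); /* fires: 1000 < 1100 */

            /* Fixed order (this commit): tag while highest_sack->seq is
             * still 1000, so toy_before(1000, 1000) is false; only then
             * advance skb.seq. */
            return 0;
    }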
This commit fixes the problem by calling tcp_sacktag_one() before the
TCP_SKB_CB(skb)->seq advancement that can advance our notion of the
highest SACKed sequence.
Correspondingly, we can simplify the code a little now that
tcp_shifted_skb() should update the lost_cnt_hint in all cases where
skb == tp->lost_skb_hint.
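Condensed, the resulting order of operations in tcp_shifted_skb() is
as follows (a sketch assembled from the hunks below, not the complete
function; the final TCP_SKB_CB(skb)->seq increment sits outside the
visible hunks):

    /* 1. Tag the shifted range while tcp_highest_sack_seq() is unchanged. */
    tcp_sacktag_one(sk, state, TCP_SKB_CB(skb)->sacked,
                    start_seq, end_seq, dup_sack, pcount);

    /* 2. Update the lost hint for FACK and non-FACK alike. */
    if (skb == tp->lost_skb_hint)
            tp->lost_cnt_hint += pcount;

    /* 3. Only then move the bytes, advancing prev's and skb's bounds. */
    TCP_SKB_CB(prev)->end_seq += shifted;
    TCP_SKB_CB(skb)->seq += shifted;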
Signed-off-by: Neal Cardwell <ncardwell@google.com>
Acked-by: Yuchung Cheng <ycheng@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Diffstat (limited to 'net/ipv4/tcp_input.c')
-rw-r--r--  net/ipv4/tcp_input.c | 18 ++++++++++--------
1 file changed, 10 insertions(+), 8 deletions(-)
diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
index ee08f11ff21..8eb13020451 100644
--- a/net/ipv4/tcp_input.c
+++ b/net/ipv4/tcp_input.c
@@ -1385,8 +1385,16 @@ static int tcp_shifted_skb(struct sock *sk, struct sk_buff *skb,
 
         BUG_ON(!pcount);
 
-        /* Adjust hint for FACK. Non-FACK is handled in tcp_sacktag_one(). */
-        if (tcp_is_fack(tp) && (skb == tp->lost_skb_hint))
+        /* Adjust counters and hints for the newly sacked sequence
+         * range but discard the return value since prev is already
+         * marked. We must tag the range first because the seq
+         * advancement below implicitly advances
+         * tcp_highest_sack_seq() when skb is highest_sack.
+         */
+        tcp_sacktag_one(sk, state, TCP_SKB_CB(skb)->sacked,
+                        start_seq, end_seq, dup_sack, pcount);
+
+        if (skb == tp->lost_skb_hint)
                 tp->lost_cnt_hint += pcount;
 
         TCP_SKB_CB(prev)->end_seq += shifted;
@@ -1412,12 +1420,6 @@ static int tcp_shifted_skb(struct sock *sk, struct sk_buff *skb,
                 skb_shinfo(skb)->gso_type = 0;
         }
 
-        /* Adjust counters and hints for the newly sacked sequence range but
-         * discard the return value since prev is already marked.
-         */
-        tcp_sacktag_one(sk, state, TCP_SKB_CB(skb)->sacked,
-                        start_seq, end_seq, dup_sack, pcount);
-
         /* Difference in this won't matter, both ACKed by the same cumul. ACK */
         TCP_SKB_CB(prev)->sacked |= (TCP_SKB_CB(skb)->sacked & TCPCB_EVER_RETRANS);
 