author	Ilpo Järvinen <ilpo.jarvinen@helsinki.fi>	2009-03-14 10:23:07 -0400
committer	David S. Miller <davem@davemloft.net>	2009-03-15 23:09:55 -0400
commit	afece1c6587010cc81d1a43045c855774e8234a3
tree	00caa55dc0c2a86c44883154986885f0421d1251 /net/ipv4
parent	2a3a041c4e2c1685e668b280c121a5a40a029a03
tcp: make sure xmit goal size never becomes zero
It's not too likely to happen, would basically require crafted packets
(must hit the max guard in tcp_bound_to_half_wnd()). It seems that
nothing that bad would happen as there's tcp_mems and congestion window
that prevent runaway at some point from hurting all too much (I'm not
that sure what all those zero sized segments we would generate do
though in write queue). Preventing it regardless is certainly the best
way to go.

Signed-off-by: Ilpo Järvinen <ilpo.jarvinen@helsinki.fi>
Cc: Evgeniy Polyakov <zbr@ioremap.net>
Cc: Ingo Molnar <mingo@elte.hu>
Signed-off-by: David S. Miller <davem@davemloft.net>
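To make the one-line change concrete, below is a minimal standalone C
sketch, not the kernel code: the goal computation, the window
arithmetic, and the values are illustrative assumptions; only the
final max() against the current MSS mirrors the actual fix. Without
the clamp, a peer window small enough to trip the bound in
tcp_bound_to_half_wnd() could round the size goal down to zero; with
it, the goal is always at least one MSS.

	#include <stdio.h>

	static unsigned int max_u32(unsigned int a, unsigned int b)
	{
		return a > b ? a : b;
	}

	/* Hypothetical stand-in for the goal computation: round the
	 * advertised window down to a multiple of mss_now, which can
	 * yield 0 when the window is smaller than one MSS. */
	static unsigned int xmit_size_goal(unsigned int window,
					   unsigned int mss_now)
	{
		unsigned int goal = (window / mss_now) * mss_now;

		/* The fix: never let the goal drop below one MSS. */
		return max_u32(goal, mss_now);
	}

	int main(void)
	{
		printf("%u\n", xmit_size_goal(500, 1460));  /* 1460, not 0 */
		printf("%u\n", xmit_size_goal(4096, 1460)); /* 2920 */
		return 0;
	}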
Diffstat (limited to 'net/ipv4')
 net/ipv4/tcp.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
index 0db9f3b984f7..1c4d42ff72bd 100644
--- a/net/ipv4/tcp.c
+++ b/net/ipv4/tcp.c
@@ -689,7 +689,7 @@ static unsigned int tcp_xmit_size_goal(struct sock *sk, u32 mss_now,
 		}
 	}
 
-	return xmit_size_goal;
+	return max(xmit_size_goal, mss_now);
 }
 
 static int tcp_send_mss(struct sock *sk, int *size_goal, int flags)