author		Willem de Bruijn <willemb@google.com>	2011-09-30 06:38:28 -0400
committer	David S. Miller <davem@davemloft.net>	2011-10-03 14:18:26 -0400
commit		7091fbd82cd5686444ffe9935ed6a8190101fe9d
tree		8414bd888e6ef13f9338458ee94cf24b0d34bd2c /net/packet
parent		d0e5d83284dac15c015bb48115b6780f5a6413cd
make PACKET_STATISTICS getsockopt report consistently between ring and non-ring
This is a minor change.
Up until kernel 2.6.32, getsockopt(fd, SOL_PACKET, PACKET_STATISTICS,
...) would return total and dropped packets since its last invocation. The
introduction of socket queue overflow reporting [1] changed the drop
rate calculation in the normal packet socket path, but not when using a
packet ring. As a result, the getsockopt now returns different statistics
depending on the reception method used. With a ring, it still returns the
count since the last call, as counts are incremented in tpacket_rcv and
reset in getsockopt. Without a ring, it returns 0 if no drops occurred
since the last getsockopt and the total drops over the lifespan of
the socket otherwise. The culprit is this line in packet_rcv, executed
on a drop:
drop_n_acct:
	po->stats.tp_drops = atomic_inc_return(&sk->sk_drops);
As it shows, the new drop number is taken from the socket drop counter,
which is not reset at getsockopt. I put together a small example
that demonstrates the issue [2]. It runs for 10 seconds and overflows
the queue/ring on every odd second. The reported drop rates are:
  ring:     16, 0, 16, 0, 16, ...
  non-ring: 0, 15, 0, 30, 0, 46, 0, 60, 0, 74
Note how the even-numbered non-ring counts monotonically increase. Because
the getsockopt adds tp_drops to tp_packets, total counts are similarly
reported cumulatively.

Long story short, reinstating the original code, as the patch below does,
fixes the issue at the cost of additional per-packet cycles. Another
solution that does not introduce per-packet overhead would be to keep the
current data path, record the value of sk_drops at getsockopt() call N in
a new field in struct packet_sock, and subtract that when reporting at
call N+1. I'd be happy to code that instead; it's just messier.
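For illustration only, a rough sketch of that alternative (the field name
sk_drops_prev is hypothetical, not something that exists in the tree):

	/* hypothetical new member of struct packet_sock */
	unsigned int sk_drops_prev;

	/* in the PACKET_STATISTICS branch of packet_getsockopt() */
	unsigned int drops_now = atomic_read(&sk->sk_drops);

	st = po->stats;
	st.tp_drops = drops_now - po->sk_drops_prev;	/* drops since call N */
	po->sk_drops_prev = drops_now;			/* remembered for call N+1 */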
[1] http://patchwork.ozlabs.org/patch/35665/
[2] http://kernel.googlecode.com/files/test-packetsock-getstatistics.c
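The core of such a test is just a periodic PACKET_STATISTICS read; a
minimal sketch in that spirit (not the actual program from [2]; traffic
generation and error handling omitted, needs CAP_NET_RAW):

	#include <stdio.h>
	#include <unistd.h>
	#include <arpa/inet.h>
	#include <sys/socket.h>
	#include <linux/if_ether.h>
	#include <linux/if_packet.h>

	int main(void)
	{
		struct tpacket_stats st;
		socklen_t len;
		int i, fd = socket(AF_PACKET, SOCK_RAW, htons(ETH_P_ALL));

		for (i = 0; i < 10; i++) {
			sleep(1);
			len = sizeof(st);
			/* reads the counters; the kernel resets them on each call */
			getsockopt(fd, SOL_PACKET, PACKET_STATISTICS, &st, &len);
			printf("packets=%u drops=%u\n", st.tp_packets, st.tp_drops);
		}
		return 0;
	}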
Signed-off-by: Willem de Bruijn <willemb@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Diffstat (limited to 'net/packet')
 net/packet/af_packet.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/net/packet/af_packet.c b/net/packet/af_packet.c
index c698cec0a445..fabb4fafa281 100644
--- a/net/packet/af_packet.c
+++ b/net/packet/af_packet.c
@@ -961,7 +961,10 @@ static int packet_rcv(struct sk_buff *skb, struct net_device *dev,
 	return 0;
 
 drop_n_acct:
-	po->stats.tp_drops = atomic_inc_return(&sk->sk_drops);
+	spin_lock(&sk->sk_receive_queue.lock);
+	po->stats.tp_drops++;
+	atomic_inc(&sk->sk_drops);
+	spin_unlock(&sk->sk_receive_queue.lock);
 
 drop_n_restore:
 	if (skb_head != skb->data && skb_shared(skb)) {
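As a note on the lock choice: po->stats is already read and cleared under
the receive queue lock in the PACKET_STATISTICS branch of
packet_getsockopt(), roughly as below (quoted from memory, so treat as
approximate), which is why the drop path above takes the same lock; that
lock round-trip on every dropped packet is the per-packet cost mentioned
in the commit message.

	case PACKET_STATISTICS:
		spin_lock_bh(&sk->sk_receive_queue.lock);
		st = po->stats;
		memset(&po->stats, 0, sizeof(st));
		spin_unlock_bh(&sk->sk_receive_queue.lock);
		/* total = received + dropped, hence the cumulative totals above */
		st.tp_packets += st.tp_drops;
		break;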