author     Michael S. Tsirkin <mst@redhat.com>              2013-07-09 01:13:04 -0400
committer  Greg Kroah-Hartman <gregkh@linuxfoundation.org>  2013-07-28 19:29:56 -0400
commit     2b0e8a4ff83fd85afe5d0b84a2e1c6faa5fa56b8
tree       797347db0c30e89ff3fbb442e69cac79e1e5d269
parent     c23b1ece6112ed0f227fdc881db33c6427b65222
virtio_net: fix race in RX VQ processing
[ Upstream commit cbdadbbf0c790f79350a8f36029208944c5487d0 ]
virtio net called virtqueue_enable_cb on RX path after napi_complete, so
with NAPI_STATE_SCHED clear - outside the implicit napi lock.
This violates the requirement to synchronize virtqueue_enable_cb wrt
virtqueue_add_buf. In particular, used event can move backwards,
causing us to lose interrupts.
In a debug build, this can trigger panic within START_USE.
Jason Wang reports that he can trigger the races artificially,
by adding udelay() in virtqueue_enable_cb() after virtio_mb().
However, we must call napi_complete to clear NAPI_STATE_SCHED before
polling the virtqueue for used buffers, otherwise napi_schedule_prep in
a callback will fail, causing us to lose RX events.
To fix, call virtqueue_enable_cb_prepare with NAPI_STATE_SCHED
set (under napi lock), later call virtqueue_poll with
NAPI_STATE_SCHED clear (outside the lock).
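For illustration, the pattern this describes looks roughly as follows.
virtqueue_enable_cb_prepare() and virtqueue_poll() are the virtio core
helpers named above; the example_queue structure and the
process_used_buffers() helper are hypothetical stand-ins for a driver's
own queue handling, not part of this patch:

	static int example_napi_poll(struct napi_struct *napi, int budget)
	{
		struct example_queue *q =
			container_of(napi, struct example_queue, napi);
		int received = process_used_buffers(q, budget);
		unsigned r;

		if (received < budget) {
			/* Snapshot and publish the used event index while
			 * NAPI_STATE_SCHED is still set, i.e. under the
			 * implicit napi lock, so the write is serialized
			 * against virtqueue_add_buf. */
			r = virtqueue_enable_cb_prepare(q->vq);

			/* Clear NAPI_STATE_SCHED so that a virtqueue callback
			 * can now pass napi_schedule_prep() and reschedule us. */
			napi_complete(napi);

			/* Outside the lock, only check whether buffers raced
			 * in: virtqueue_poll() compares the ring against the
			 * snapshot but never moves the used event index. */
			if (unlikely(virtqueue_poll(q->vq, r)) &&
			    napi_schedule_prep(napi)) {
				virtqueue_disable_cb(q->vq);
				__napi_schedule(napi);
			}
		}
		return received;
	}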
Reported-by: Jason Wang <jasowang@redhat.com>
Tested-by: Jason Wang <jasowang@redhat.com>
Acked-by: Jason Wang <jasowang@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
 drivers/net/virtio_net.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index c9e00387d999..42d670a468f8 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -602,7 +602,7 @@ static int virtnet_poll(struct napi_struct *napi, int budget)
 		container_of(napi, struct receive_queue, napi);
 	struct virtnet_info *vi = rq->vq->vdev->priv;
 	void *buf;
-	unsigned int len, received = 0;
+	unsigned int r, len, received = 0;
 
 again:
 	while (received < budget &&
@@ -619,8 +619,9 @@ again:
 
 	/* Out of packets? */
 	if (received < budget) {
+		r = virtqueue_enable_cb_prepare(rq->vq);
 		napi_complete(napi);
-		if (unlikely(!virtqueue_enable_cb(rq->vq)) &&
+		if (unlikely(virtqueue_poll(rq->vq, r)) &&
 		    napi_schedule_prep(napi)) {
 			virtqueue_disable_cb(rq->vq);
 			__napi_schedule(napi);
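The flipped sense of the condition follows from how the virtio core splits
the old helper: per the companion core change that introduced these helpers,
virtqueue_enable_cb() is essentially virtqueue_enable_cb_prepare() followed
by virtqueue_poll(), with virtqueue_poll() returning true when used buffers
are pending. A sketch of that equivalence (the exact core implementation may
differ in detail):

	bool virtqueue_enable_cb(struct virtqueue *vq)
	{
		unsigned last_used_idx = virtqueue_enable_cb_prepare(vq);

		return !virtqueue_poll(vq, last_used_idx);
	}

So the old !virtqueue_enable_cb(rq->vq) and the new virtqueue_poll(rq->vq, r)
test the same condition; only the placement of the prepare step changes.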