author     Olaf Kirch <olaf.kirch@oracle.com>      2007-06-24 02:11:52 -0400
committer  David S. Miller <davem@davemloft.net>   2007-06-24 02:11:52 -0400
commit     5b5a60da281c767196427ce8144deae6ec46b389
tree       02ac728c14eb8fa0bd49ac8ede6f15e760ddc3f3 /net
parent     515e06c4556bd8388db6b2bb2cd8859126932946
[NET]: Make skb_seq_read unmap the last fragment
Having walked through the entire skbuff, skb_seq_read would leave the
last fragment mapped. As a consequence, the unwary caller would leak
kmaps, and proceed with preempt_count off by one. The only (kind of
non-intuitive) workaround is to use skb_abort_seq_read.
This patch makes sure skb_seq_read always unmaps frag_data after
having cycled through the skb's paged part.
Signed-off-by: Olaf Kirch <olaf.kirch@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
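
For context, a minimal sketch of the usual caller pattern for the sequential-read helpers (skb_prepare_seq_read(), skb_seq_read(), skb_abort_seq_read()). The scan_skb() wrapper and its local variable names are illustrative only and are not part of this patch:

/* Illustrative caller, not part of this patch. */
static void scan_skb(struct sk_buff *skb, unsigned int len)
{
	struct skb_seq_state st;
	const u8 *data;
	unsigned int consumed = 0;
	unsigned int avail;

	skb_prepare_seq_read(skb, 0, len, &st);

	/* skb_seq_read() returns 0 once the requested range is exhausted. */
	while ((avail = skb_seq_read(consumed, &data, &st)) != 0) {
		/* ... inspect 'avail' bytes at 'data' here ... */
		consumed += avail;
	}

	/*
	 * Before this patch, a fully paged skb could end the loop with the
	 * last fragment still kmapped (leaving preempt_count off by one);
	 * the caller had to call skb_abort_seq_read(&st) to release it.
	 * With this patch, the final skb_seq_read() that returns 0 unmaps
	 * frag_data itself.
	 */
}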
Diffstat (limited to 'net')
-rw-r--r--  net/core/skbuff.c | 5 +++++
1 files changed, 5 insertions(+), 0 deletions(-)
diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index 8d43ae6979e..27cfe5fe4bb 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -1706,6 +1706,11 @@ next_skb:
 		st->stepped_offset += frag->size;
 	}
 
+	if (st->frag_data) {
+		kunmap_skb_frag(st->frag_data);
+		st->frag_data = NULL;
+	}
+
 	if (st->cur_skb->next) {
 		st->cur_skb = st->cur_skb->next;
 		st->frag_idx = 0;