| field | value | |
|---|---|---|
| author | Szilveszter Ördög <slipszi@gmail.com> | 2010-08-05 21:26:38 -0400 |
| committer | Herbert Xu <herbert@gondor.apana.org.au> | 2010-08-05 21:26:38 -0400 |
| commit | 23a75eee070f1370bee803a34f285cf81eb5f331 | |
| tree | 6427c53a261840661f135b99d81062fc015dd571 | /crypto |
| parent | fc1caf6eafb30ea185720e29f7f5eccca61ecd60 | |
crypto: hash - Fix handling of small unaligned buffers
If a scatterwalk chain contains an entry with an unaligned offset then
hash_walk_next() will cut off the next step at the next alignment point.
However, if the entry ends before the next alignment point then we end up in
a loop, which leads to a kernel oops.
Fix this by checking whether the next alignment point is before the end of the
current entry.
Signed-off-by: Szilveszter Ördög <slipszi@gmail.com>
Acked-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Diffstat (limited to 'crypto')
| -rw-r--r-- | crypto/ahash.c | 7 |
1 file changed, 5 insertions, 2 deletions
```diff
diff --git a/crypto/ahash.c b/crypto/ahash.c
index b8c59b889c6e..f669822a7a44 100644
--- a/crypto/ahash.c
+++ b/crypto/ahash.c
@@ -47,8 +47,11 @@ static int hash_walk_next(struct crypto_hash_walk *walk)
 	walk->data = crypto_kmap(walk->pg, 0);
 	walk->data += offset;
 
-	if (offset & alignmask)
-		nbytes = alignmask + 1 - (offset & alignmask);
+	if (offset & alignmask) {
+		unsigned int unaligned = alignmask + 1 - (offset & alignmask);
+
+		if (nbytes > unaligned)
+			nbytes = unaligned;
+	}
 
 	walk->entrylen -= nbytes;
 	return nbytes;
```
