On overflow, the math-emu macro _FP_TO_INT_ROUND tries to saturate its
result (subject to the value of rsigned specifying the desired
overflow semantics). However, if the rounding step increases the
exponent enough to cause overflow (the rounded result is 1 larger than
the largest positive value with the given number of bits, allowing for
signedness), the overflow is not detected. For unsigned results, 0 is
then produced instead of the maximum unsigned integer with the given
number of bits, and for signed results the minimum (negative) value is
produced instead of the maximum (positive) value; in neither case is
an overflow exception raised. This patch makes the code check whether
rounding increased the exponent and adjusts the exponent value used
for the overflow check accordingly.
Signed-off-by: Joseph Myers <joseph@codesourcery.com>
Signed-off-by: Scott Wood <scottwood@freescale.com>
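
A minimal sketch of the intended semantics in ordinary C, not the kernel
macro (the function name and the round-half-up shortcut are mine): the
overflow check has to look at the value after rounding, since rounding
alone can push an in-range input past the top of the unsigned range.
--------------------
#include <stdint.h>
#include <stdio.h>

/* Saturating conversion of a double to a 32-bit unsigned integer.  The
 * interesting input is something like 4294967295.7: rounding bumps it
 * to 2^32, so a check done only on the pre-rounding value misses the
 * overflow (the bug described above produced 0 here). */
static uint32_t to_u32_saturating(double x)
{
	if (x <= 0.0)
		return 0;			/* negative: clamp to 0 */
	double rounded = x + 0.5;		/* round half up, for brevity */
	if (rounded >= 4294967296.0)		/* check AFTER rounding */
		return UINT32_MAX;		/* clamp to the maximum */
	return (uint32_t)rounded;
}

int main(void)
{
	printf("%u\n", (unsigned)to_u32_saturating(4294967295.7));	/* 4294967295 */
	printf("%u\n", (unsigned)to_u32_saturating(123.4));		/* 123 */
	return 0;
}
--------------------
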
The math-emu macros _FP_TO_INT and _FP_TO_INT_ROUND are supposed to
saturate their results for out-of-range arguments, except in the case
rsigned == 2 (when instead the low bits of the result are taken).
However, in the case rsigned == 0 (converting to unsigned integers),
they mistakenly produce 0 for positive results and the maximum
unsigned integer for negative results, the opposite of correct
unsigned saturation. This patch fixes the logic.
Signed-off-by: Joseph Myers <joseph@codesourcery.com>
Signed-off-by: Scott Wood <scottwood@freescale.com>
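
A hedged sketch of correct unsigned saturation (names and layout are
mine, not the _FP_TO_INT internals): negative inputs clamp to 0 and
positive overflow clamps to the all-ones value; the bug above had these
two directions swapped.
--------------------
#include <stdint.h>
#include <stdio.h>

/* Clamp a magnitude to an unsigned result of the given width. */
static uint64_t saturate_unsigned(int negative, int overflow,
				  uint64_t value, unsigned bits)
{
	uint64_t max = (bits == 64) ? ~0ULL : ((1ULL << bits) - 1);

	if (negative)
		return 0;		/* below the range of an unsigned type */
	if (overflow || value > max)
		return max;		/* above the range: clamp to maximum */
	return value;
}

int main(void)
{
	printf("%llu\n", (unsigned long long)saturate_unsigned(1, 0, 5, 32));	/* 0 */
	printf("%llu\n", (unsigned long long)saturate_unsigned(0, 1, 0, 32));	/* 4294967295 */
	printf("%llu\n", (unsigned long long)saturate_unsigned(0, 0, 42, 32));	/* 42 */
	return 0;
}
--------------------
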
The kernel's math-emu code contains a macro _FP_FROM_INT() which is
used to convert an integer to a raw normalized floating-point value.
It does this basically in three steps:
1. Compute the exponent from the number of leading zero bits.
2. Downshift large fractions to put the MSB in the right position
for normalized fractions.
3. Upshift small fractions to put the MSB in the right position.
There is a boundary error in step 2: a fraction with its MSB exactly
one bit above the normalized MSB position is not downshifted. This
results in a non-normalized raw float, which when
packed becomes a massively inaccurate representation for that input.
The impact of this depends on a number of arch-specific factors,
but it is known to have broken emulation of FXTOD instructions
on UltraSPARC III, which was originally reported as GCC bug 44631
<http://gcc.gnu.org/bugzilla/show_bug.cgi?id=44631>.
Any arch which uses math-emu to emulate conversions from integers to
same-size floats may be affected.
The fix is simple: the exponent comparison used to determine if the
fraction should be downshifted must be "<=" not "<".
I'm sending a kernel module to test this as a reply to this message.
There are also SPARC user-space test cases in the GCC bug entry.
Signed-off-by: Mikael Pettersson <mikpe@it.uu.se>
Signed-off-by: David S. Miller <davem@davemloft.net>
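
A simplified sketch of steps 2 and 3 with a made-up 24-bit fraction (my
own layout, not the _FP_FROM_INT macro): the analogue of the off-by-one
fixed here would be testing "msb > FRACBITS" instead of ">=", which
leaves an input whose MSB sits exactly one bit above the normalized
position untouched and therefore unnormalized.
--------------------
#include <stdint.h>
#include <stdio.h>

#define FRACBITS 24	/* hypothetical fraction width for this sketch */

/* Place the most significant set bit of v (v != 0) at bit FRACBITS - 1
 * and report the unbiased exponent. */
static uint32_t normalize(uint32_t v, int *exp)
{
	int msb = 31 - __builtin_clz(v);	/* highest set bit position */

	*exp = msb;
	if (msb >= FRACBITS)			/* MSB above target: downshift */
		return v >> (msb - (FRACBITS - 1));
	if (msb < FRACBITS - 1)			/* MSB below target: upshift */
		return v << ((FRACBITS - 1) - msb);
	return v;				/* already normalized */
}

int main(void)
{
	int exp;
	/* 1 << FRACBITS is the boundary case: its MSB is one bit above the
	 * normalized position, so it must be downshifted by one. */
	uint32_t frac = normalize(1u << FRACBITS, &exp);
	printf("frac=0x%08x exp=%d\n", frac, exp);	/* frac=0x00800000 exp=24 */
	return 0;
}
--------------------
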
Some comments misspell "truly"; this fixes them. No code changes.
Signed-off-by: Adam Buchbinder <adam.buchbinder@gmail.com>
Signed-off-by: Jiri Kosina <jkosina@suse.cz>
In commit 48d6c64311ddb6417b901639530ccbc47bdc7635 ("math-emu: Add
support for reporting exact invalid exception") code was added to
set the new FP_EX_INVALID_{IDI,ZDZ} exception flag bits.
However, the _FP_CLS_COMBINE(FP_CLS_INF,FP_CLS_INF) switch case is
missing a break statement, so the code falls through into the
_FP_CLS_COMBINE(FP_CLS_ZERO,FP_CLS_ZERO) case, which then overwrites
all of the settings just made.
Fix by adding the missing break.
Signed-off-by: David S. Miller <davem@davemloft.net>
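
A generic illustration of the failure mode (not the kernel switch
itself; the names are invented): without the break, the first case
falls through and the second case clobbers what was just set.
--------------------
#include <stdio.h>

enum { EX_IDI = 1, EX_ZDZ = 2 };	/* inf/inf and 0/0 markers */

static int classify(int inf_over_inf)
{
	int flags = 0;

	switch (inf_over_inf) {
	case 1:
		flags = EX_IDI;
		break;		/* the missing break: without it we fall through */
	case 0:
		flags = EX_ZDZ;
		break;
	}
	return flags;
}

int main(void)
{
	printf("inf/inf -> %d, 0/0 -> %d\n", classify(1), classify(0));
	return 0;
}
--------------------
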
I'm trying to move the powerpc math-emu code to use the include/math-emu bits.
In doing so I've been using TestFloat to see how good or bad we are
doing. For the most part the current math-emu code that PPC uses has
a number of issues that the code in include/math-emu seems to solve
(plus bugs we've had forever that no one ever realized).
Anyway, I've come across a case where we flag underflow and inexact
because we think we have a denormalized result from a double-precision
divide:
000.FFFFFFFFFFFFF / 3FE.FFFFFFFFFFFFE
soft: 001.0000000000000 ..... syst: 001.0000000000000 ...ux
What it looks like is that the result out of FP_DIV_D is:
D:
sign: 0
mantissa: 01000000 00000000
exp: -1023 (0)
The problem seems to be that we aren't normalizing the result and
bumping the exponent.
Now that I'm digging into this a bit I'm thinking my issue has to do with
the fix DaveM put in place from back in Aug 2007 (commit
405849610fd96b4f34cd1875c4c033228fea6c0f):
[MATH-EMU]: Fix underflow exception reporting.
2) we ended up rounding back up to normal (this is the case where
we set the exponent to 1 and set the fraction to zero), this
should set inexact too
...
Another example, "0x0.0000000000001p-1022 / 16.0", should signal both
inexact and underflow. The cpu implementations and ieee1754
literature is very clear about this. This is case #2 above.
Here is the distilled glibc test case from Jakub Jelinek which prompted that
commit:
--------------------
#include <float.h>
#include <fenv.h>
#include <stdio.h>

volatile double d = DBL_MIN;
volatile double e = 0x0.0000000000001p-1022;
volatile double f = 16.0;

int
main (void)
{
  printf ("%x\n", fetestexcept (FE_UNDERFLOW));
  d /= f;
  printf ("%x\n", fetestexcept (FE_UNDERFLOW));
  e /= f;
  printf ("%x\n", fetestexcept (FE_UNDERFLOW));
  return 0;
}
--------------------
It looks like in my case we are exact before rounding, but the code
treats it as the rounding case since it appears as if "overflow is set":
000.FFFFFFFFFFFFF / 3FE.FFFFFFFFFFFFE = 001.0000000000000
I think the following adds the check for my case and still works for the
issue your commit was trying to resolve.
Signed-off-by: David S. Miller <davem@davemloft.net>
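
For reference, a hedged host-side sketch of the quotient above (not part
of the patch): the largest subnormal divided by 0x1.ffffffffffffep-1 is
exactly 2^-1022, i.e. DBL_MIN, so a correct implementation normalizes
the result up to exponent 1 and raises neither inexact nor underflow.
--------------------
#include <fenv.h>
#include <float.h>
#include <stdio.h>

int main(void)
{
	volatile double num = 0x0.fffffffffffffp-1022;	/* largest subnormal */
	volatile double den = 0x1.ffffffffffffep-1;	/* just under 1.0 */
	volatile double q;

	feclearexcept(FE_ALL_EXCEPT);
	q = num / den;
	printf("q == DBL_MIN: %d\n", q == DBL_MIN);
	printf("inexact=%d underflow=%d\n",
	       fetestexcept(FE_INEXACT) != 0,
	       fetestexcept(FE_UNDERFLOW) != 0);
	return 0;
}
--------------------
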
Some architectures (like powerpc) provide status information on the exact
type of invalid exception. This is pretty straightforward, as we
already report invalid exceptions via FP_SET_EXCEPTION.
We add new flags (FP_EX_INVALID_*) that the architecture code can
define if it
wants the exact invalid exception reported.
We had to split out the INF/INF and 0/0 cases for divide to allow reporting
the two invalid forms properly.
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
Acked-by: David S. Miller <davem@davemloft.net>
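
A minimal sketch of how an architecture header might opt in; the flag
names come from this commit, but the bit values and surrounding layout
here are invented for illustration, not taken from any real
sfp-machine.h.
--------------------
/* Illustration only: bit positions are made up for this sketch. */
#define FP_EX_INVALID		(1 << 5)
#define FP_EX_INVALID_IDI	(1 << 6)	/* inf / inf */
#define FP_EX_INVALID_ZDZ	(1 << 7)	/* 0 / 0     */

/* The common divide code can then report the exact form, roughly:
 *	FP_SET_EXCEPTION(FP_EX_INVALID | FP_EX_INVALID_IDI);	for inf/inf
 *	FP_SET_EXCEPTION(FP_EX_INVALID | FP_EX_INVALID_ZDZ);	for 0/0
 * Architectures that leave the new flags undefined see no change. */
--------------------
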
Fix warnings of the form:
arch/powerpc/math-emu/fsubs.c:15: warning: 'R_f1' may be used uninitialized in this function
arch/powerpc/math-emu/fsubs.c:15: warning: 'R_f0' may be used uninitialized in this function
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
The underflow exception cases were wrong.
This is one weird area of ieee1754 handling in that the underflow
behavior changes based upon whether underflow is enabled in the trap
enable mask of the FPU control register. As a specific case the Sparc
V9 manual gives us the following description:
--------------------
If UFM = 0: Underflow occurs if a nonzero result is tiny and a
loss of accuracy occurs. Tininess may be detected
before or after rounding. Loss of accuracy may be
either a denormalization loss or an inexact result.
If UFM = 1: Underflow occurs if a nonzero result is tiny.
Tininess may be detected before or after rounding.
--------------------
What this amounts to in the packing case is if we go subnormal,
we set underflow if any of the following are true:
1) rounding sets inexact
2) we ended up rounding back up to normal (this is the case where
we set the exponent to 1 and set the fraction to zero), this
should set inexact too
3) underflow is set in FPU control register trap-enable mask
The initially discovered example was "DBL_MIN / 16.0" which
incorrectly generated an underflow. It should not, unless underflow
is set in the trap-enable mask of the FPU csr.
Another example, "0x0.0000000000001p-1022 / 16.0", should signal both
inexact and underflow. The cpu implementations and ieee1754
literature are very clear about this. This is case #2 above.
However, if underflow is set in the trap enable mask, only underflow
should be set and reported as a trap. That is handled properly by the
prioritization logic in
arch/sparc{,64}/math-emu/math.c:record_exception().
Based upon a report and test case from Jakub Jelinek.
Signed-off-by: David S. Miller <davem@davemloft.net>
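
A hedged demo of the two worked examples above, on a host with IEEE 754
doubles and non-trapping exceptions (the UFM = 0 situation); it mirrors
Jakub's test case but spells out the expected flags. DBL_MIN / 16.0 =
2^-1026 = 2^48 * 2^-1074 is an exactly representable subnormal, so it is
tiny but exact and must not raise underflow; 2^-1074 / 16.0 = 2^-1078 is
not representable, rounds to zero, and must raise both underflow and
inexact.
--------------------
#include <fenv.h>
#include <float.h>
#include <stdio.h>

int main(void)
{
	volatile double sixteen = 16.0;
	volatile double d = DBL_MIN;
	volatile double e = 0x0.0000000000001p-1022;	/* smallest subnormal */

	feclearexcept(FE_ALL_EXCEPT);
	d /= sixteen;
	printf("DBL_MIN/16:       underflow=%d inexact=%d\n",	/* 0 0 */
	       fetestexcept(FE_UNDERFLOW) != 0, fetestexcept(FE_INEXACT) != 0);

	feclearexcept(FE_ALL_EXCEPT);
	e /= sixteen;
	printf("min subnormal/16: underflow=%d inexact=%d\n",	/* 1 1 */
	       fetestexcept(FE_UNDERFLOW) != 0, fetestexcept(FE_INEXACT) != 0);
	return 0;
}
--------------------
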
Signed-off-by: Robert P. J. Day <rpjday@mindspring.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Initial git repository build. I'm not bothering with the full history,
even though we have it. We can create a separate "historical" git
archive of that later if we want to, and in the meantime it's about
3.2GB when imported into git - space that would just make the early
git days unnecessarily complicated, when we don't have a lot of good
infrastructure for it.
Let it rip!