An Experimental Floating-Point Scalar Evolution
The TL;DR - after a conversation at EuroLLVM with Steve Canon about how LLVM is missing scalar evolution analysis for floating-point, I’ve spent some spare time hacking on a new LLVM analysis pass - fpscev (Floating-Point SCalar EVolution) - available here at github. The pass will analyze floating-point operations in a function and work out if there are any constraints on the range of these values, information which can be used to better optimize code.
What is Scalar Evolution?⌗
Scalar evolution lets a compiler understand the scope of a value throughout a program. The classic place that scalar evolution is used in compiler optimizations is with respect to loop index variables to let a compiler infer the number of loop iterations that a loop will perform.
Scalar evolution can also be used to fold branches away. Imagine you have code like:
if (i < 4) {
// Do a million lines of awful code that will bloat your executable!
}
And let’s say that via scalar evolution we know that i could never be less than 4. That allows the compiler to remove the entire if branch as it will never be hit!
Why Floating-Point Scalar Evolution?⌗
Many floating-point algorithms can be vastly improved if we know the range of the inputs to math functions. Lots of high-performance code that game developers throw through compiler stacks uses operations that could benefit from the compiler knowing a little more about the value of the float.
A great use of this would be something like the following:
float f = ...; // definitely not NaN or Infinity
f = sin(f); // because f wasn't NaN or Infinity f is now in the range [-1..1]
if (!isfinite(f)) {
// Do a million lines of awful code that will bloat your executable!
}
In the example above that if check should always return false - but the compiler doesn’t have enough information to know that.
If we look at the latest (as of the time of writing) version of LLVM from trunk SVN (9.0.0svn) and how it handles the above:
define float @func(float %0) {
1:
%2 = tail call nnan ninf float @llvm.sin.f32(float %0)
%3 = tail call float @llvm.fabs.f32(float %2)
%4 = fcmp ueq float %3, 0x7FF0000000000000
br i1 %4, label %5, label %7
5:
%6 = ... ; do something complicated
br label %7
7:
%8 = phi float [ %2, %1 ], [ %6, %5 ]
ret float %8
}
Even with the additional fast-math flags ‘nnan’ (no NaNs) and ‘ninf’ (no Infinity) on the call stored into %2 - the compiler cannot deduce that the branch on %4 is redundant.
This is all because LLVM’s scalar evolution only cares about integers. This generally makes sense - scalar evolution came about because of loops, and loops have integer indices (for the most part…). What if we extended some of the techniques found in scalar evolution to floating-point - what optimization opportunities could we open up?
The Approach⌗
The overall approach that I’ve taken is to classify all operations that produce a floating-point result as having a result within a given range. We will keep track of the minimum (down to -NaN), the maximum (up to +NaN) and whether the value is an integer (whole number hiding in a floating-point) or not.
The analysis runs as a function pass that iterates on the basic blocks of that function in a reverse post-order traversal - which ensures that for graphs of basic blocks that are not loops, we will always have classified an input float before it is used.
To keep things simple - no attempt has been made to classify floating-point values that are persistent around a loop. Any phi node can refer to a float which has not been classified yet, and in that case we have to make a default worst-case assumption about the float.
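To make the walkthrough below concrete, here is a minimal sketch of the kind of per-value record the analysis keeps - the struct name FPRange and its exact shape are illustrative rather than the pass’s actual types:

#include "llvm/ADT/APFloat.h"

using llvm::APFloat;

// Illustrative per-value record - the real pass keeps something similar for
// each floating-point llvm::Value it has managed to classify.
struct FPRange {
  APFloat Min;    // smallest possible value, down to -NaN
  APFloat Max;    // largest possible value, up to +NaN
  bool IsInteger; // true if the value is a whole number hiding in a float

  // The worst-case assumption used when nothing is known about a value.
  static FPRange unknown(const llvm::fltSemantics &Sem) {
    return {APFloat::getNaN(Sem, /*Negative=*/true),
            APFloat::getNaN(Sem, /*Negative=*/false),
            /*IsInteger=*/false};
  }
};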
Let’s now run through the list of instructions that we will analyze, and I’ll explain what the analysis pass does for each.
Signed Integer to Floating-Point⌗
%f = sitofp i8 %i to float
The sitofp instruction converts a signed integer to a floating-point value. Given that integers have a constrained range with respect to their bit width, we can make some assumptions about the maximum and minimum values of any integer. To get an even better insight into the value, I use the existing scalar evolution analysis (which only works for integers) to query the range of the input integer and propagate this to the float. The smallest integer value is rounded towards negative when converting and the largest integer value is rounded towards positive, to ensure that even for integers that are not exactly representable in the floating-point type we account for the worst-case behaviour of any rounding mode the system supports.
In the above example, because %i has a signed range of [-128..127], we can infer that %f also has this range, and we record that %f is an integer (hiding in a float).
min: -128
max: 127
isInteger: 1
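Under the hood this boils down to converting the integer bounds with worst-case rounding. Here is a minimal sketch, assuming the signed range has already been pulled out of the integer scalar evolution (here just the full range of the i8) - the helper name signedRangeToFloat is mine, not the pass’s:

#include "llvm/ADT/APFloat.h"
#include "llvm/ADT/APInt.h"
#include <utility>

using namespace llvm;

// Convert a signed integer range [IMin..IMax] into float bounds. The lower
// bound is rounded towards negative and the upper bound towards positive, so
// whatever rounding the conversion actually performs stays inside our range.
static std::pair<APFloat, APFloat> signedRangeToFloat(const APInt &IMin,
                                                      const APInt &IMax) {
  APFloat FMin(APFloat::IEEEsingle());
  APFloat FMax(APFloat::IEEEsingle());
  FMin.convertFromAPInt(IMin, /*IsSigned=*/true, APFloat::rmTowardNegative);
  FMax.convertFromAPInt(IMax, /*IsSigned=*/true, APFloat::rmTowardPositive);
  return {FMin, FMax};
}

// With no extra SCEV information the i8 covers its whole signed range:
// signedRangeToFloat(APInt::getSignedMinValue(8), APInt::getSignedMaxValue(8))
// yields [-128..127], and the result is flagged as an integer.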
Unsigned Integer to Floating-Point⌗
%f = uitofp i3 %i to float
Much like for the signed equivalent above, we know that %i has an unsigned range of [0..7], so %f also has this range and is an integer.
min: 0
max: 7
isInteger: 1
Floating-Point Truncation⌗
%f = fptrunc double %d to float
For floating-point truncation - the range of the truncated value is the same as the range of the larger value we are truncating from. The only exception is when the larger value is not exactly representable in the smaller type.
If we assume that %d above came from an already constrained number, we can see that the constraints have passed from %d to %f.
min: -128
max: 127
isInteger: 1
Floating-Point Extension⌗
%f = fpext half %h to float
For floating-point extension - the approach mirrors that of truncation. We simply pass the original constrained range onto the larger type.
min: 4.5
max: 42.42
isInteger: 0
Selects⌗
%d = select i1 %b, double 0.0, double %in
Selecting between two values yields a range that is the union of the two original ranges.
In the above example %in is a value about which we could deduce no information when classifying it - think of values sourced from loads or passed in as function parameters. So the range of %in is:
min: -NaN
max: NaN
isInteger: 0
This means that the result of the select has the same range - the range of the 0.0 is merged with the range of %in, which just so happens to already contain 0.0.
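A minimal sketch of that merge - the helper name mergeRanges is mine, and the NaN bookkeeping the pass does at the ends of a range is elided:

#include "llvm/ADT/APFloat.h"
#include <utility>

using llvm::APFloat;

// Union of two ranges: the result of the select (or phi) can be anything
// either operand could be, so take the smaller minimum and the larger
// maximum. NaN handling is elided - the real analysis treats -NaN/+NaN as
// the widest possible ends of a range.
static std::pair<APFloat, APFloat> mergeRanges(const APFloat &AMin,
                                               const APFloat &AMax,
                                               const APFloat &BMin,
                                               const APFloat &BMax) {
  return {llvm::minnum(AMin, BMin), llvm::maxnum(AMax, BMax)};
}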
PHIs⌗
%f = phi float [%a, %true], [%b, %false]
PHIs merge their ranges together in a similar fashion to selects. So for the above phi the range of %f is the union of the ranges of %a and %b. To keep the algorithm simple though, I’ve purposefully not handled any arguments to a PHI that we haven’t seen before. Since we iterate through the basic blocks in a reverse post-order traversal, the only time we will see a value in a PHI that we haven’t seen before is in loops, where the PHI’s incoming value has yet to be classified:
loop:
%f = phi float [0.0, %entry], [%f2, %loop]
%f2 = fadd float %f, 1.0
%c = fcmp oge float %f2, 42.0
br i1 %c, label %merge, label %loop
In the example above %f2 has not been encountered when we are classifying the PHI node %f, and thus we have to assume the worst case for %f - that %f2 could be any floating-point number.
min: -NaN
max: NaN
isInteger: 0
Floating-Point Negate⌗
%f2 = fneg float %f
Negate is a fun one - in any case where we know that the input is all positive or all negative, we know for sure that the output is exactly the opposite range.
Let’s assume that %f is in the range [0..4000.12], and thus %f2 is the negative of this range:
min: -4000.12
max: -0
isInteger: 0
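A sketch of that flip - the helper name negateRange is mine:

#include "llvm/ADT/APFloat.h"
#include <utility>

using llvm::APFloat;

// Negating [Min..Max] gives [-Max..-Min]: flip both signs and swap the bounds.
static std::pair<APFloat, APFloat> negateRange(APFloat Min, APFloat Max) {
  Min.changeSign();
  Max.changeSign();
  return {Max, Min}; // the old maximum becomes the new minimum and vice versa
}

// negateRange applied to [0..4000.12] gives [-4000.12..-0] as above.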
Floating-Point Addition⌗
%f3 = fadd float %f, %f2
The addition of two numbers has a linear result across the range. So to evaluate the full range of the addition we need to perform eight adds - each of the min/max of the first operand added to each of the min/max of the second operand. We also need to be careful about worst-case rounding behaviour, so we perform all of these additions under both the round-towards-negative and round-towards-positive rounding modes (hence the eight adds).
LLVM’s APFloat class supports all of this for us - we just leverage it.
min: 0
max: 638.1
isInteger: 0
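Here is a sketch of those eight adds - the helper name addRanges is mine, and fast-math flag handling is left out:

#include "llvm/ADT/APFloat.h"

using llvm::APFloat;

// Bound an fadd: every bound of the first operand added to every bound of
// the second, under both of the most biased rounding modes (2 * 2 * 2 = 8
// adds), then take the extremes.
static void addRanges(const APFloat &AMin, const APFloat &AMax,
                      const APFloat &BMin, const APFloat &BMax,
                      APFloat &OutMin, APFloat &OutMax) {
  const APFloat::roundingMode Modes[] = {APFloat::rmTowardNegative,
                                         APFloat::rmTowardPositive};
  OutMin = APFloat::getInf(AMin.getSemantics(), /*Negative=*/false); // +inf
  OutMax = APFloat::getInf(AMin.getSemantics(), /*Negative=*/true);  // -inf
  for (const APFloat *A : {&AMin, &AMax})
    for (const APFloat *B : {&BMin, &BMax})
      for (APFloat::roundingMode RM : Modes) {
        APFloat Result = *A;
        Result.add(*B, RM);
        OutMin = llvm::minnum(OutMin, Result);
        OutMax = llvm::maxnum(OutMax, Result);
      }
}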
Floating-Point Subtraction⌗
%f3 = fsub float %f, %f2
Subtraction follows the same idea as addition - eight subtractions with the min/max of both operands.
min: -127
max: 511
isInteger: 1
Floating-Point Multiplication⌗
%f3 = fmul float %f, %f2
Multiplication is the same as addition and subtraction - use LLVM’s APFloat to multiply the min/max of both operands.
min: 0
max: 64897
isInteger: 1
Floating-Point Division⌗
%f3 = fdiv float %f, %f2
Division is an interesting one because there are three domains of interest outwith any NaNs:
1. Could the denominator be zero? This can produce infinities.
2. Could the denominator be between minus one and one? This grows the range of the output.
3. Could the denominator be exclusively outwith the range minus one to one? This shrinks the range.
For my evaluation I care most about 1. and 3. - if the denominator could be zero I need my range to encompass all finite and infinite numbers, and if the denominator will always cause the range to shrink I calculate the division on the bounds and use that as the new constrained range.
Let us assume that %f in the above example is in the range [0..511.0], and that %f2 is the single value 400000000.0. The resulting range for %f3 is:
min: 0
max: 1.27750002E-6
isInteger: 0
Another interesting point with division is the fast-math flag allow reciprocal (arcp). This flag lets a division operation be replaced with x * (1 / y) - i.e. you take the reciprocal of y (with intermediate rounding) and multiply that by x. Lots of hardware has efficient reciprocals, so this optimization can be very powerful for performance. The problem with this optimization for us is that the intermediate rounding of the reciprocal presents an issue for the range of the output. To combat this - if arcp is being used we have to perform four division operations for the reciprocal, one for each bound of the range of y under each of the two most biased rounding modes, and then these reciprocals are combined with the range of x using a further sixteen multiplies. We need sixteen multiplies because we multiply each of the four reciprocal calculations with the two bounds of x under each of the two most biased rounding modes. At the end of this we just take the min and max of all these values to get our final range.
If we apply arcp to the division operation above, our new resulting range for %f3 is:
min: 0
max: 1.27750013E-6
isInteger: 0
Which as you can see is slightly larger in the maximum resulting value because of the intermediate rounding.
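Here is a sketch of that arcp evaluation - the helper name divideRangesArcp is mine, and it assumes the zero-denominator case has already been handled separately:

#include "llvm/ADT/APFloat.h"
#include "llvm/ADT/SmallVector.h"

using llvm::APFloat;

// Model x * (1 / y), as arcp permits: four divides to bound the reciprocal of
// y under both biased rounding modes, then sixteen multiplies combining those
// reciprocals with the bounds of x.
static void divideRangesArcp(const APFloat &XMin, const APFloat &XMax,
                             const APFloat &YMin, const APFloat &YMax,
                             APFloat &OutMin, APFloat &OutMax) {
  const APFloat::roundingMode Modes[] = {APFloat::rmTowardNegative,
                                         APFloat::rmTowardPositive};
  llvm::SmallVector<APFloat, 4> Recips;
  for (const APFloat *Y : {&YMin, &YMax})
    for (APFloat::roundingMode RM : Modes) {
      APFloat R(YMin.getSemantics(), "1.0");
      R.divide(*Y, RM); // 1 / y with biased rounding: 2 * 2 = 4 divides
      Recips.push_back(R);
    }
  OutMin = APFloat::getInf(XMin.getSemantics(), /*Negative=*/false);
  OutMax = APFloat::getInf(XMin.getSemantics(), /*Negative=*/true);
  for (const APFloat &R : Recips)
    for (const APFloat *X : {&XMin, &XMax})
      for (APFloat::roundingMode RM : Modes) { // 4 * 2 * 2 = 16 multiplies
        APFloat Result = *X;
        Result.multiply(R, RM);
        OutMin = llvm::minnum(OutMin, Result);
        OutMax = llvm::maxnum(OutMax, Result);
      }
}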
Floating-Point Remainder⌗
Floating-point remainder is an operation that I’ve avoided - simply because LLVM’s APFloat does not contain a method for calculating the remainder with a provided rounding mode.
The only optimizations I propagate through remainder are the fast-math flags.
Arbitrary Calls⌗
LLVM lets you place fast-math flags on arbitrary calls.
%f = call nnan float @some_called_func()
%f2 = call nnan ninf float @some_called_func()
For the above two calls, the range of %f will not contain NaNs, and the range of %f2 is strictly finite - no NaNs or infinities.
Square Root⌗
%f2 = call float @llvm.sqrt.f32(float %f)
LLVM’s APFloat does not contain a sqrt operation. So all I could do with sqrt was to check whether the range was entirely negative, in which case a NaN is always produced. If the range contains both positives and negatives, then the maximum of the output range could be NaN, and the minimum could be zero for a sufficiently small input. Otherwise, if the range is entirely finite and positive, the best I could do for now was to say that the output range is at most as large as the input range, and I wipe all knowledge of whether the input was an integer, because the likelihood of an integer input being a perfect square is small enough that I didn’t care about it for now.
If %f in the above example is an integer in the range [0.0..511.0], then the range for %f2 is:
min: 0
max: 511
isInteger: 0
Or - if the range is entirely negative the result is:
min: NaN
max: NaN
isInteger: 0
X To the Power of Integer Y⌗
%f2 = call float @llvm.powi.f32(float %f, i32 %i)
For x to the power of an integer y, we again don’t have anything in LLVM’s APFloat to help us here. But we can make some assumptions about the output if %f or %i has a consistent sign. Firstly - I only optimize if x is entirely positive, because a negative x causes the sign of the result to oscillate depending on whether the integer y is odd or even. Next, if y is always negative and x is greater than or equal to one, we know that the output range will be at most as large as the input range. And lastly, if y is always positive then: if x is less than one the output is between zero and one; otherwise, if x is one or greater, the minimum of the result is the smaller of one and the original minimum of x (because x^0 == 1.0), and the maximum will be infinity.
For instance, let’s say that %i above is in the range [0..255] and %f is in the range [0..65535]; the range of %f2 is:
min: 0
max: +Inf
isInteger: 0
We could greatly reduce this range if we had an equivalent of pow for an APFloat in LLVM!
Cosine and Sine⌗
%f2 = call float @llvm.cos.f32(float %f)
%f3 = call float @llvm.sin.f32(float %f)
The two trigonometric functions cos and sin are great candidates for analysis. Both functions will return a result in the range [-1..1] if the input value is finite.
Exponentials⌗
%f2 = call float @llvm.exp.f32(float %f)
%f3 = call float @llvm.exp2.f32(float %f)
The two exponential functions LLVM provides are hard to give a maximum bound for because we don’t have functions in APFloat to perform the calculation. But we do know that an exponential always produces a positive result, so we set the minimum of the output range to zero, and the maximum to the largest value the fast-math flags let us support (so worst case NaN, otherwise infinity or the largest positive number).
Logarithms⌗
%f2 = call float @llvm.log.f32(float %f)
%f3 = call float @llvm.log10.f32(float %f)
%f4 = call float @llvm.log2.f32(float %f)
Logarithms have a similar problem - we don’t have functions in APFloat that can help us here. But there are some properties of logs that we can check:
- I check if the input is entirely negative, which always produces a NaN.
- If the input is always positive but could be zero, then the minimum value could be a negative infinity.
- If the input is less than one but not zero, then the minimum value could go as low as the largest negative representable number.
- Otherwise if the input is greater than or equal to one, the minimum value is zero.
Since I don’t have a way to calculate the log of the input, the maximum of the output range is poorly chosen as the maximum of the input - which will nearly always be hugely bigger than the actual maximum would be if we had a way to calculate it!
If we assume that %f is definitely not negative, then the output range is:
min: -Inf
max: 511
isInteger: 0
Fused Multiply and Add⌗
%f3 = call float @llvm.fma.f32(float %f, float %f2, float 4.0)
Fused multiply-add does not round the intermediate multiplication before performing the addition. Luckily APFloat has a function for calculating this that takes a rounding mode, so we can use a similar approach to the addition above, except that we now need sixteen operations to calculate the min and max - every combination of the bounds of the two multiplied operands and the addend, fused under each of the two most biased rounding modes.
If we assume that %f above has a range of [0..511] and %f2 has a range of [-511..0], then the resulting range of the fma is:
min: -261117
max: 4
isInteger: 0
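A sketch of those sixteen fused operations using APFloat’s fusedMultiplyAdd - the helper name fmaRanges is mine:

#include "llvm/ADT/APFloat.h"

using llvm::APFloat;

// Bound fma(a, b, c): every combination of the bounds of a, b and c, fused
// under both of the most biased rounding modes - 2 * 2 * 2 * 2 = 16 ops.
static void fmaRanges(const APFloat &AMin, const APFloat &AMax,
                      const APFloat &BMin, const APFloat &BMax,
                      const APFloat &CMin, const APFloat &CMax,
                      APFloat &OutMin, APFloat &OutMax) {
  const APFloat::roundingMode Modes[] = {APFloat::rmTowardNegative,
                                         APFloat::rmTowardPositive};
  OutMin = APFloat::getInf(AMin.getSemantics(), /*Negative=*/false);
  OutMax = APFloat::getInf(AMin.getSemantics(), /*Negative=*/true);
  for (const APFloat *A : {&AMin, &AMax})
    for (const APFloat *B : {&BMin, &BMax})
      for (const APFloat *C : {&CMin, &CMax})
        for (APFloat::roundingMode RM : Modes) {
          APFloat Result = *A;
          Result.fusedMultiplyAdd(*B, *C, RM); // a * b + c, rounded once
          OutMin = llvm::minnum(OutMin, Result);
          OutMax = llvm::maxnum(OutMax, Result);
        }
}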
Floating-Point Absolute Value⌗
%f2 = call float @llvm.fabs.f32(float %f)
The absolute value of a number is just the range of the input with its sign removed. This one took me an embarrassingly long time to get right, as I’ll illustrate with a fun little range. Let’s say that %f is in the range [-4..1.5]; what I did originally was to wipe the sign from the bounds of the range and then recalculate which of the new values was the min/max. The resulting range was incredibly wrong:
min: 1.5
max: 4
isInteger: 0
The fix was trivial though - if the original range contained positive and negative numbers it passed through zero, and so the minimum of the result must be zero too. After applying this fix the range was correctly:
min: 0
max: 4
isInteger: 0
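A sketch of the fixed logic - the helper name absRange is mine:

#include "llvm/ADT/APFloat.h"
#include <utility>

using llvm::APFloat;

// fabs of [Min..Max]: clear the signs and take the larger as the new maximum.
// The fix: if the original range straddled zero then the input can be zero,
// so the new minimum must be zero rather than the smaller cleared bound.
static std::pair<APFloat, APFloat> absRange(APFloat Min, APFloat Max) {
  const bool StraddlesZero = Min.isNegative() && !Max.isNegative();
  Min.clearSign();
  Max.clearSign();
  APFloat NewMax = llvm::maxnum(Min, Max);
  APFloat NewMin = StraddlesZero ? APFloat::getZero(NewMax.getSemantics())
                                 : llvm::minnum(Min, Max);
  return {NewMin, NewMax};
}

// absRange applied to [-4..1.5] now gives [0..4] rather than [1.5..4].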
Minimums and Maximums⌗
%f3 = call float @llvm.minnum.f32(float %f, float %f2)
%f4 = call float @llvm.maxnum.f32(float %f, float %f2)
%f5 = call float @llvm.minimum.f32(float %f, float %f2)
%f6 = call float @llvm.maximum.f32(float %f, float %f2)
LLVM provides two variants of minimum/maximum calculations - they only really differ in how they handle NaNs. Minnum and maxnum match libm’s fmin/fmax, whereas minimum and maximum match IEEE 754-2018.
Let’s assume that %f is in the range [0..511] and %f2 is in the range [-256..255].
Minnum’s range is:
min: -256
max: 255
isInteger: 1
Maxnum’s range is:
min: 0
max: 511
isInteger: 1
Minimum’s range is:
min: -256
max: 255
isInteger: 1
Maximum’s range is:
min: 0
max: 511
isInteger: 1
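A sketch of the minnum case - the helper name minnumRange is mine, and the NaN handling that distinguishes the four intrinsics is elided:

#include "llvm/ADT/APFloat.h"
#include <utility>

using llvm::APFloat;

// minnum is monotone in both arguments, so its range is just the pairwise
// minnum of the two lower bounds and the two upper bounds; maxnum works the
// same way with maxnum.
static std::pair<APFloat, APFloat> minnumRange(const APFloat &AMin,
                                               const APFloat &AMax,
                                               const APFloat &BMin,
                                               const APFloat &BMax) {
  return {llvm::minnum(AMin, BMin), llvm::minnum(AMax, BMax)};
}

// With %f in [0..511] and %f2 in [-256..255]:
// minnumRange gives [minnum(0, -256)..minnum(511, 255)] == [-256..255]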
Copy Sign⌗
%f3 = call float @llvm.copysign.f32(float %f, float %f2)
Copy sign just copies the sign of %f2 onto %f. The interesting cases are:
1. If %f2 contains positive and negative numbers, I wipe the sign from the minimum and maximum of %f, take the largest of these as the new maximum, and flip its sign for the new minimum.
2. If %f contains positive and negative numbers, I do the same as in 1.
3. If %f2 is all negative then the result is always negative.
4. If %f2 is all positive then the result is always positive.
Let us assume that %f is in the range [0..511] and %f2 is in the range [-256..255]. This would mean our output range would be:
min: -511
max: 511
isInteger: 0
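A sketch of just that first case - the helper name copySignUnknownSign is mine:

#include "llvm/ADT/APFloat.h"
#include <utility>

using llvm::APFloat;

// Case 1 above: the sign source (%f2) can be either positive or negative, so
// the magnitude comes from %f but the sign is unknown. The result lies in
// [-m..m] where m is the largest magnitude %f can take.
static std::pair<APFloat, APFloat> copySignUnknownSign(APFloat MagMin,
                                                       APFloat MagMax) {
  MagMin.clearSign();
  MagMax.clearSign();
  APFloat M = llvm::maxnum(MagMin, MagMax); // largest magnitude of %f
  APFloat NewMin = M;
  NewMin.changeSign(); // -m
  return {NewMin, M};  // [-m..m]
}

// With %f in [0..511]: copySignUnknownSign gives [-511..511] as above.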
Rounding⌗
%f2 = call float @llvm.floor.f32(float %f)
%f3 = call float @llvm.ceil.f32(float %f)
%f4 = call float @llvm.trunc.f32(float %f)
%f5 = call float @llvm.rint.f32(float %f)
%f6 = call float @llvm.nearbyint.f32(float %f)
%f7 = call float @llvm.round.f32(float %f)
LLVM has various functions that essentially boil down to “round this floating-point value to an integer with a particular rounding mode”. For each of these functions I use APFloat’s roundToIntegral to handle them.
Let us assume that %f is in the range [-32768.5039..32768.5039].
For floor the output range is:
min: -32769
max: 32768
isInteger: 1
For ceil the output range is:
min: -32768
max: 32769
isInteger: 1
For trunc the output range is:
min: -32768
max: 32768
isInteger: 1
For rint the output range is:
min: -32769
max: 32769
isInteger: 1
For nearbyint the output range is:
min: -32769
max: 32769
isInteger: 1
For round the output range is:
min: -32769
max: 32769
isInteger: 1
Note that all of the results are always integer too.
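A sketch of how roundToIntegral maps onto these intrinsics - the helper name roundRange is mine:

#include "llvm/ADT/APFloat.h"
#include <utility>

using llvm::APFloat;

// Round both bounds of a range with the APFloat rounding mode that matches
// the intrinsic: floor -> rmTowardNegative, ceil -> rmTowardPositive,
// trunc -> rmTowardZero, round -> rmNearestTiesToAway. rint/nearbyint depend
// on the runtime rounding mode, so the pass takes the extremes of both biased
// modes for those instead (not shown).
static std::pair<APFloat, APFloat> roundRange(APFloat Min, APFloat Max,
                                              APFloat::roundingMode RM) {
  Min.roundToIntegral(RM);
  Max.roundToIntegral(RM);
  return {Min, Max}; // the result range is always integer-valued
}

// roundRange([-32768.5039..32768.5039], APFloat::rmTowardNegative)
// gives [-32769..32768] - the floor result above.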
Fast-Math Flags⌗
LLVM supports fast-math flags that allow the compiler to assume certain properties about the input floating-point values. For each of the operations above, the fast-math flags no-NaNs, no-infinities, and no-signed-zeros are propagated onto the inputs before their ranges are calculated, and also propagated onto the output to constrain the range further.
All The Missing Parts⌗
Even though I’ve done quite an extensive analysis on the input, there are lots of missing parts. These can be categorized into a few main areas:
- Any function that is missing from APFloat (like exponentials) means I make a really poor guess at the range’s constraint. If these functions could be added this analysis would get better.
- I make no attempt to handle the fast-math flag approximate function (afn). In fact, given there are no range constraints on what an operation tagged with afn may return, I really should modify the analysis to set the range to the full worst case if afn is present. This would suck because afn is meant to allow more optimizations of the function, so making our range analysis much worse when it is present seems backwards. I think the only way to solve this would be to place some limitations on the range - like we could still require that afn on trig functions definitely returns a result in the range [-1..1].
- I almost wonder whether I would have been better off tracking NaN/infinity separately from the range itself - some functions like sin/cos return either NaN or something in the range [-1..1], and if this was fed into an operation with no-NaNs specified then we could assume a much tighter range for that function.
- Potentially track disjoint ranges to let us make a better guess - this means a lot more data per FPSCEV though.
- Integrate with the integer SCEV itself? I didn’t do this because SCEV is huge and scary and anything that changes it this drastically would probably break all of LLVM. But if you convert a floating-point to an integer and you have FPSCEV information, I could translate that back into some better SCEV information on the integer too.
Conclusions⌗
So doing this analysis wasn’t trivial - but the results are pretty cool! We can now perform lots more analysis on floating-point values in LLVM.
In my next blog I’ll use this analysis to start actually optimizing some fun little programs, and see what is now possible with this extra information.