2017/11/18
Rounding 1.15
When you round 1.15 to the nearest tenths, what do you get?
Elementary math gives 1.2. Many programming language implementations disagree, however:
$ cat tmp.c
#include <stdio.h>

int main()
{
    printf("%5.1f\n", 1.15);
    return 0;
}
$ gcc tmp.c && ./a.out
  1.1
Is it a bug? No, it's working as intended. The intention, however, is different from what people naturally expect.
* * *
Decimal 1.15 can't be represented exactly as a binary floating-point number, so internally the runtime picks the binary floating-point number closest to 1.15, which happens to be very slightly smaller than the true 1.15. So when you have to round it to either 1.1 or 1.2, and you look at the actual number you have, 1.1 is closer. (By the way, if you use 4.15 in the above example instead, you'll get 4.2. That's because the binary floating-point number closest to 4.15 is slightly greater than 4.15.)
You can use Gauche to check that this is really the case. The exact function tries to find the simplest rational number within the error boundary of the floating-point number, but using real->rational you can get the exact number represented internally by the floating-point number.
gosh> (exact 1.15)
23/20
gosh> (real->rational 1.15 0 0 #f)
2589569785738035/2251799813685248
And indeed, the exact one is smaller than the one you naturally expect from the notation:
gosh> (< 2589569785738035/2251799813685248 23/20)
#t
With 4.15, the exact value is greater than the one the notation suggests:
gosh> (exact 4.15)
83/20
gosh> (real->rational 4.15 0 0 #f)
2336242306698445/562949953421312
gosh> (> 2336242306698445/562949953421312 83/20)
#t
So, if you take the point of view that a binary floating-point number stands for the value it exactly represents (which is how it's treated inside the computer), you should round "the closest number to 1.15" to 1.1.
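To make that view concrete, here's a minimal sketch of the idea in Gauche (the helper name round-tenths-effective is mine, not Gauche's; note that Scheme's round rounds half to even):

;; Round a flonum to tenths based on the value it exactly represents.
;; real->rational with zero error bounds recovers that exact value.
(define (round-tenths-effective x)
  (/ (round (* (real->rational x 0 0 #f) 10)) 10))

(round-tenths-effective 1.15)  ; ⇒ 11/10  (i.e. 1.1)
(round-tenths-effective 4.15)  ; ⇒ 21/5   (i.e. 4.2)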
When users complain, we programmers tend to say "Floating-point numbers have error. Use arbitrary precision arithmetic!" Well, floating-point numbers themselves don't have error, per se. Each one has a precisely defined exact value, sign × mantissa × 2^exponent. It is operations that have error, and in this case, it is the conversion from the decimal notation 1.15 to a binary floating-point number.
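In fact, you can compute the error of that conversion exactly, by subtracting the mathematical 23/20 from the value the double actually holds:

gosh> (- (real->rational 1.15 0 0 #f) 23/20)
-1/11258999068426240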
* * *
But is it the only valid interpretation?
Another view is that when we handle a value written "1.15", it is intended to be exactly 1.15, and we take the closest floating-point number as its approximation. The distinction is subtle but important---in the previous view, the intended value is the floating-point number 2589569785738035/2251799813685248, and 1.15 is the approximation; in the current view, the intended value is 1.15, and the floating-point number 2589569785738035/2251799813685248 is the approximation.
In this view, rounding "1.15" to the nearest tenths should yield "1.2". (To be precise, we must also assume the round-half-up or round-half-to-even rule.) This usually fits users' expectations better. But it can be costly: we first have to obtain the optimal decimal representation of the given floating-point number in order to decide which way to round.
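As a rough sketch of this idea (not Gauche's actual implementation, which works from the decimal digits), we can lean on the fact that exact returns the simplest rational within the error boundary, which for 1.15 is exactly 23/20. The name round-tenths-notational is made up here, and the simplest rational doesn't always coincide with the shortest decimal, so treat this only as an illustration:

;; Approximate notational rounding to tenths: round the simplest
;; rational within the flonum's error bound (round-half-to-even).
(define (round-tenths-notational x)
  (/ (round (* (exact x) 10)) 10))

(round-tenths-notational 1.15)  ; ⇒ 6/5  (i.e. 1.2)
(round-tenths-notational 4.15)  ; ⇒ 21/5 (i.e. 4.2)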
* * *
We see that both views are useful depending on the circumstances, so we decided to support both.
The format procedure now supports the floating-point number output directive ~f. You can specify the field width and precision:
(format "~6,3f" 3.141592) ⇒ " 3.142"
When we need to round to the given precision, the default is to take the exact value of the floating-point number---the first view we discussed above. We call it effective rounding.
(format "~6,1f" 1.15) ⇒ " 1.1"
However, if you need the latter view---we call it notational rounding---you can have it with the : flag.
(format "~6,1:f" 1.15) ⇒ " 1.2"