One of my lecturers mentioned that a way they would get around this was to store all values as ints and then insert a '.' before the last two digits when displaying them.
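In code, that trick might look something like this minimal Python sketch (the format_cents helper is just an illustrative name, not anything standard):

    def format_cents(total_cents: int) -> str:
        """Render an integer number of cents as a decimal string."""
        sign = "-" if total_cents < 0 else ""
        cents = abs(total_cents)
        return f"{sign}{cents // 100}.{cents % 100:02d}"

    price_cents = 1999             # $19.99, stored as a plain int
    subtotal = 3 * price_cents     # 5997 -- exact integer arithmetic
    print(format_cents(subtotal))  # "59.97"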
Yeah, this works especially well for currencies (effectively doing all calculations in cents/pennies), as you do need perfect precision throughout the calculations, but the final result gets rounded to two-digit precision anyway.
Quite a horrible hack; most modern languages have a decimal type that handles the rounding. And if not, you should just use a rounding function to round to two digits when dealing with currency.
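For example, Python ships such a type in its standard decimal module; a rough sketch (the 8.25% tax rate is made up for illustration):

    from decimal import Decimal, ROUND_HALF_EVEN

    price = Decimal("19.99")
    subtotal = 3 * price                     # Decimal("59.97"), exact
    # round only the final result, not the intermediate values
    total = (subtotal * Decimal("1.0825")).quantize(
        Decimal("0.01"), rounding=ROUND_HALF_EVEN)
    print(total)  # 64.92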
Not sure what financial applications you develop, but what you suggest wouldn't pass code review in any finance-related project I've seen.
Using integers for currency-related calculations and formatting the output is no dirty hack, it's industry standard, because binary floating-point arithmetic on contemporary hardware cannot represent most decimal fractions exactly (see https://en.wikipedia.org/wiki/IEEE_754 ), whereas integer arithmetic (or integers used to represent fixed-point values) has the same precision across the whole range it can represent. You typically don't want to round the numbers you work with, you want to round the result ;-) .
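A quick illustration of the difference, again in Python just as a sketch:

    total_float = 0.10 + 0.10 + 0.10
    print(total_float)           # 0.30000000000000004
    print(total_float == 0.30)   # False -- error crept into the sum

    total_cents = 10 + 10 + 10   # the same sum, in integer cents
    print(total_cents)           # 30 -- exact
    print(f"{total_cents // 100}.{total_cents % 100:02d}")  # 0.30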
Fixed-point notation. Before floats were invented, that was the standard way of doing it. You needed to keep your equations within certain bounds.
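A rough sketch of what that looks like, assuming a scale factor of 10**4 (four fractional digits) and using Python purely for illustration:

    SCALE = 10**4                 # four fractional digits

    def fp_mul(a: int, b: int) -> int:
        # the intermediate a * b needs twice the digits, so with a
        # fixed-width integer type the operands must stay small enough
        # not to overflow -- hence the bounds on your equations
        return (a * b) // SCALE

    amount = 59_9700              # 59.9700 in fixed point
    rate = 1_0825                 # 1.0825 in fixed point
    print(fp_mul(amount, rate))   # 649175, i.e. 64.9175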