Feature #14362

use BigDecimal instead of Float by default

Added by AaronLasseigne (Aaron Lasseigne) over 6 years ago. Updated about 6 years ago.

Status: Rejected
Assignee: -
Target version: -
[ruby-core:84880]

Description

When writing a decimal literal, the default type assigned is Float:

> 1.2.class
=> Float

This is great for memory savings and for application speed but it comes with accuracy issues:

> 129.95 * 100
=> 12994.999999999998
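For comparison, the stdlib BigDecimal (which must be required explicitly today) gives the exact answer for the same calculation, since it works in base 10:

```ruby
require "bigdecimal"

# Float arithmetic accumulates binary rounding error:
float_result = 129.95 * 100         # => 12994.999999999998

# BigDecimal stores the value in base 10, so the same calculation is exact:
exact_result = BigDecimal("129.95") * 100

puts float_result                   # prints 12994.999999999998
puts exact_result.to_s("F")         # prints 12995.0
```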

Ruby's own BigDecimal docs say:

Decimal arithmetic is also useful for general calculation, because it provides the correct answers people expect–whereas normal binary floating point arithmetic often introduces subtle errors because of the conversion between base 10 and base 2.

What if BigDecimal was moved into the Ruby core and made the default for numbers like 1.2?

> 1.2.class
=> BigDecimal

I realize this goes against the 3x3 goal, but I think BigDecimal is preferable to Float for developer happiness. I've seen lots of developers stumble when first learning about the pitfalls of Float. I've seen test suites that assert on a range because calculations produce answers like 12994.999999999998 instead of 12995.0. At one point trading accuracy for performance made sense. I'm not sure that's still the case today.

Right now a decimal literal generates the faster and less accurate Float. Developers have to opt in to the slower but safer BigDecimal by manually constructing one. By flipping this, we would default to the safer version and ask developers to opt in to the faster but less accurate Float if needed.
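A sketch of what the opt-in looks like today, for contrast:

```ruby
require "bigdecimal"

# Today: the literal gives you a Float...
x = 1.2
puts x.class            # prints Float

# ...and opting in to BigDecimal means constructing one from a String
# (construction from a String avoids inheriting Float's rounding error):
y = BigDecimal("1.2")
puts y.class            # prints BigDecimal
```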

> 1.2.class
=> BigDecimal
> Float.new('1.2')
=> 1.2

There could also be a shorthand for Float where the number is followed by an f (similar to the r suffix for Rational).

1.2f # => Float
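The precedent for such a suffix already exists: Ruby's r literal suffix turns a decimal into a Rational today.

```ruby
# Existing literal suffix: `r` produces an exact Rational instead of a Float.
puts 1.2r.class      # prints Rational
puts 1.2r            # prints 6/5

# The proposed `f` suffix (hypothetical, not valid Ruby today) would do the
# reverse once BigDecimal became the default:
#   1.2f # => Float
```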

The change would help "provide the correct answers people expect". The change would be mostly seamless from an interface standpoint. The only Float methods missing from BigDecimal appear to be rationalize, next_float, and prev_float. I suspect those methods are rarely used. The increased accuracy seems unlikely to cause code issues for people.
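For reference, next_float and prev_float step to the adjacent representable IEEE 754 doubles, a notion that has no direct BigDecimal equivalent since BigDecimal is not a fixed-width binary format:

```ruby
# Float#next_float / #prev_float return the neighboring IEEE 754 doubles:
puts 1.0.next_float   # prints 1.0000000000000002
puts 1.0.prev_float   # prints 0.9999999999999999
```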

The two largest downsides that I can come up with are speed and display. I'm not sure what kind of hit is taken by handling all decimals as BigDecimal. Would an average Rails application see a large hit? Additionally, the display value of BigDecimal is engineering notation. This is also the default produced by to_s. It's harder to read and might mess up code by displaying things like "0.125e2" instead of "12.5". Certainly the default produced by to_s could change to the conventional floating point notation.
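BigDecimal#to_s already accepts a format argument, so the conventional notation is available today, it just isn't the default:

```ruby
require "bigdecimal"

n = BigDecimal("12.5")

# Default to_s uses exponential/engineering notation
# (exact casing may vary by bigdecimal version):
puts n.to_s          # prints something like "0.125e2"

# Passing "F" requests conventional floating point notation:
puts n.to_s("F")     # prints "12.5"
```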

A change this significant would likely target Ruby 3 so there would be time to make some changes like adding a BigDecimal#rationalize method or changing the default output of BigDecimal#to_s.

Thank you for considering this.
