Feature #8509


Use 128 bit integer type in Bignum

Added by akr (Akira Tanaka) over 8 years ago. Updated almost 8 years ago.


How about making Bignum use a 128 bit integer type?

I found that recent gcc (since gcc 4.6) supports a 128 bit integer type,
__int128, on some platforms.

It seems gcc supports it on x86_64 but not on i386.

Currently Ruby implements Bignum on top of a 32 bit integer type (BDIGIT)
and a 64 bit integer type (BDIGIT_DBL).
(Ruby uses two integer types because of multiplication:
BDIGIT_DBL can represent any value of BDIGIT * BDIGIT.)

Historically, Ruby supported platforms without a 64 bit integer type.
Ruby used a 16 bit integer type (BDIGIT) and a 32 bit integer type (BDIGIT_DBL)
on such platforms.
However, I guess no one uses such platforms today.

So with gcc 4.6 or later, we can use 64 bit integer type (BDIGIT) and
128 bit integer type (BDIGIT_DBL).

This may gain performance.

I implemented it. (int128-bignum.patch)

Simple benchmark on Debian GNU/Linux 7.0 (wheezy) x86_64:

trunk% time ./ruby -e 'v = 3**1000; u = 1; 1000.times { u *= v }'
./ruby -e 'v = 3**1000; u = 1; 1000.times { u *= v }'  1.64s user 0.00s system 99% cpu 1.655 total
128bit% time ./ruby -e 'v = 3**1000; u = 1; 1000.times { u *= v }'
./ruby -e 'v = 3**1000; u = 1; 1000.times { u *= v }'  1.21s user 0.01s system 99% cpu 1.222 total

I think the larger integer type reduces control overhead, and the compiler has more opportunities for optimization.

However, the patch introduces an API incompatibility.

BDIGIT, BDIGIT_DBL, and related definitions are defined in public headers,
so third party extensions may be broken by the change.

Note that since BDIGIT_DBL is a macro (not a typedef name), the compiler used for a third party extension
does not need to support __int128 unless the extension actually uses BDIGIT_DBL.

If a program tries to extract information from a Bignum and assumes BDIGIT is a 32 bit integer,
the result may be invalid.
In this situation rb_big_pack/rb_big_unpack or rb_integer_pack/rb_integer_unpack [ruby-core:55408] may help.

However, the BDIGIT size change itself may cause problems.

One example I patched is rb_big_pow.
int128-bignum.patch contains the following modification to rb_big_pow:

  -    const long BIGLEN_LIMIT = BITSPERDIG*1024*1024;
  +    const long BIGLEN_LIMIT = 32*1024*1024;

BIGLEN_LIMIT controls whether rb_big_pow generates a Bignum or a Float.
If it is not modified, a test causes a memory allocation failure.

Another problem is the bigdecimal tests.
The bigdecimal tests failed with int128-bignum.patch as follows.

1) Failure:
TestBigDecimal#test_power_of_three [/home/akr/tst1/ruby/test/bigdecimal/test_bigdecimal.rb:1006]:
<(1/81)> expected but was

2) Failure:
TestBigDecimal#test_power_with_prec [/home/akr/tst1/ruby/test/bigdecimal/test_bigdecimal.rb:1110]:
<#> expected but was

3) Failure:
TestBigDecimal#test_power_without_prec [/home/akr/tst1/ruby/test/bigdecimal/test_bigdecimal.rb:1103]:
<#> expected but was

4) Failure:
TestBigDecimal#test_sqrt_bigdecimal [/home/akr/tst1/ruby/test/bigdecimal/test_bigdecimal.rb:796]:
expected but was

5) Failure:
TestBigMath#test_atan [/home/akr/tst1/ruby/test/bigdecimal/test_bigmath.rb:60]:
<#> expected but was

I guess bigdecimal determines precision depending on sizeof(BDIGIT).
I think that is not a good way to use BDIGIT.

What do you think, mrkn?

Also, we cannot define PRI_BDIGIT_DBL_PREFIX because
there is no printf conversion specifier for __int128.

Anyway, is Bignum with __int128 worth supporting?
Any opinions?


int128-bignum.patch (2.69 KB) int128-bignum.patch akr (Akira Tanaka), 06/10/2013 09:59 PM
