Feature #8509 (closed)
Use 128 bit integer type in Bignum
Description
How about implementing Bignum on top of a 128 bit integer type?
I found that recent gcc (since gcc 4.6) supports a 128 bit integer type,
__int128, on some platforms.
http://gcc.gnu.org/gcc-4.6/changes.html
It seems gcc supports it on x86_64 but not on i386.
Currently Ruby implements Bignum on top of a 32 bit integer type (BDIGIT)
and a 64 bit integer type (BDIGIT_DBL).
(Ruby uses two integer types for multiplication:
BDIGIT_DBL can represent any value of BDIGIT * BDIGIT.)
Historically, Ruby supported platforms without a 64 bit integer type.
Ruby used a 16 bit integer type (BDIGIT) and a 32 bit integer type (BDIGIT_DBL)
on such platforms.
However I guess no one uses such platforms today.
So with gcc 4.6 or later, we can use a 64 bit integer type (BDIGIT) and
a 128 bit integer type (BDIGIT_DBL).
This may improve performance.
I implemented it. (int128-bignum.patch)
Simple benchmark on Debian GNU/Linux 7.0 (wheezy) x86_64:
trunk% time ./ruby -e 'v = 31000; u = 1; 1000.times { u *= v }'
./ruby -e 'v = 31000; u = 1; 1000.times { u *= v }' 1.64s user 0.00s system 99% cpu 1.655 total
128bit% time ./ruby -e 'v = 31000; u = 1; 1000.times { u *= v }'
./ruby -e 'v = 31000; u = 1; 1000.times { u *= v }' 1.21s user 0.01s system 99% cpu 1.222 total
I think the larger integer type reduces control overhead and gives the compiler more opportunities for optimization.
However the patch introduces an API incompatibility.
BDIGIT, BDIGIT_DBL and related definitions are defined in a public header,
ruby/defines.h.
So third party extensions may be broken by the change.
Note that since BDIGIT_DBL is a macro (not a typedef name), the compiler used for a third party
extension doesn't need to support __int128 unless the extension actually uses BDIGIT_DBL.
If a program tries to extract information from a Bignum and assumes BDIGIT is a 32 bit integer,
the result may be invalid.
In this situation rb_big_pack/rb_big_unpack or rb_integer_pack/rb_integer_unpack [ruby-core:55408] may help.
However the BDIGIT size change itself may cause problems.
One example I patched is rb_big_pow.
int128-bignum.patch contains the following modification for rb_big_pow:
-    const long BIGLEN_LIMIT = BITSPERDIG*1024*1024;
+    const long BIGLEN_LIMIT = 32*1024*1024;
BIGLEN_LIMIT controls whether rb_big_pow generates a Bignum or a Float.
If it is not modified, the limit doubles when BITSPERDIG grows from 32 to 64,
and a test causes a memory allocation failure.
Another problem is the bigdecimal tests.
The bigdecimal tests fail with int128-bignum.patch as follows:
- Failure:
  TestBigDecimal#test_power_of_three [/home/akr/tst1/ruby/test/bigdecimal/test_bigdecimal.rb:1006]:
  <(1/81)> expected but was
  <#<BigDecimal:2b72eab381d8,'0.1234567901 2345679012 3456790123 4567901234 5679012345 6790123456 7901234567 9012345679 0123456790 1234567901 2345679012 3456790123 4567901234 57E-1',133(133)>>.
- Failure:
  TestBigDecimal#test_power_with_prec [/home/akr/tst1/ruby/test/bigdecimal/test_bigdecimal.rb:1110]:
  <#<BigDecimal:2b72e939bf20,'0.2245915771 8361045473E2',38(57)>> expected but was
  <#<BigDecimal:2b72e93b57b8,'0.2061448331 0990090312E2',38(114)>>.
- Failure:
  TestBigDecimal#test_power_without_prec [/home/akr/tst1/ruby/test/bigdecimal/test_bigdecimal.rb:1103]:
  <#<BigDecimal:2b72e93b7ab8,'0.2245915771 8361045473 4271522045 4373502758 9315133996 7843873233 068E2',95(95)>> expected but was
  <#<BigDecimal:2b72eaa0d5d8,'0.2061448331 0990090311 8271522045 4373474226 6494929516 0232192991 9587472564 291161E2',95(171)>>.
- Failure:
  TestBigDecimal#test_sqrt_bigdecimal [/home/akr/tst1/ruby/test/bigdecimal/test_bigdecimal.rb:796]:
  <1267650600228229401496703205376> expected but was
  <#<BigDecimal:2b72eaaa25c0,'0.1267650600 2282294014 9670320537 5999999999 9999999999 9999999999 9999999995 234375E31',95(114)>>.
- Failure:
  TestBigMath#test_atan [/home/akr/tst1/ruby/test/bigdecimal/test_bigmath.rb:60]:
  [ruby-dev:41257].
  <#<BigDecimal:2b72eac54cd8,'0.8238407534 1863629176 9355073102 5140889593 4562402795 2954058347 0231225394 89E0',76(95)>> expected but was
  <#<BigDecimal:2b72eabf2ec0,'0.8238407534 1863629176 9355073102 5140889593 4562402795 2954062036 3719372813 99E0',76(228)>>.
I guess bigdecimal determines precision depending on sizeof(BDIGIT).
I don't think that is a good way to use BDIGIT.
What do you think, mrkn?
Also, we cannot define PRI_BDIGIT_DBL_PREFIX because
there is no printf conversion specifier for __int128.
Anyway, is Bignum with __int128 worth supporting?
Any opinions?