Feature #15589
closed
`Numeric#zero?` is much slower than `== 0`
Added by sawa (Tsuyoshi Sawada) almost 6 years ago.
Updated over 4 years ago.
Description
My understanding is that the predicate method Numeric#zero? is not only a shorthand for == 0, but is also optimized for frequent patterns. If zero? is not faster than == 0, then it loses its reason for existence.
However, according to benchmarks in my environment, number.zero? is around 1.23 to 1.64 times slower than number == 0 when number is an Integer, Rational, or Complex. It is faster only when number is a Float.
And with number.nonzero? it is even worse: about 1.88 to 4.35 times slower than number != 0.
I think something is wrong here: it should be possible to optimize these methods, and this seems to have been missed.
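For reference, the kind of comparison described above can be sketched with only the stdlib Benchmark module (the iteration count and sample values here are arbitrary illustrations, not the original benchmark script):

```ruby
require "benchmark"

N = 1_000_000

# Compare the two spellings on a nonzero Integer.
n = 5
t_eq   = Benchmark.realtime { N.times { n == 0 } }
t_zero = Benchmark.realtime { N.times { n.zero? } }
puts format("n == 0:  %.4fs", t_eq)
puts format("n.zero?: %.4fs", t_zero)

# The two expressions are semantically equivalent for every Numeric:
samples = [0, 1, -3, 0.0, 2.5, Rational(0, 1), Complex(0, 0)]
samples.each do |x|
  raise "mismatch for #{x.inspect}" unless x.zero? == (x == 0)
end
```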
- Description updated (diff)
- Assignee set to k0kubun (Takashi Kokubun)
Numeric#zero? is not slow, but normal. Rather, == is fast because of a specific VM instruction. I don't think that Numeric#zero? deserves such special handling. I hope that MJIT will implement method inlining and fix this issue in a more general way, so I am assigning this to k0kubun.
(Personally, I don't see any reason to use zero?. == 0 is more explicit, shorter, more consistent, and easier to understand. Anyway.)
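The point about the specific VM instruction can be seen directly in the bytecode. A small sketch, assuming CRuby (where RubyVM::InstructionSequence is available; the exact disassembly varies by version):

```ruby
# == 0 compiles to the specialized opt_eq instruction, while .zero?
# goes through generic method dispatch (opt_send_without_block).
eq_insns   = RubyVM::InstructionSequence.compile("n = 1; n == 0").disasm
zero_insns = RubyVM::InstructionSequence.compile("n = 1; n.zero?").disasm

puts eq_insns.lines.grep(/opt_eq/)
puts zero_insns.lines.grep(/zero\?/)
```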
Just FYI: at least, the reason why zero? was introduced is not that it is a frequent pattern. I have investigated the history of Numeric#zero? and nonzero? before:
http://blade.nagaokaut.ac.jp/cgi-bin/scat.rb/ruby/ruby-core/58498
In short, it was introduced for a very technical reason, and the motivating example is less significant now.
I was reading the old link - 1998 is indeed a long time ago. :-)
I don't want to write much about the .zero? versus .nonzero? method per se - I think
that is up to matz to determine how useful the methods are or not.
To me .zero? is intuitive and == 0 is understandable. I have no problem with either
variant, and I think it is largely a matter of personal/individual style.
But let's leave that alone for the moment, and not look at .zero? or any other
method names or such; sawa also specifically wrote about the difference in
speed.
I do not know how much work it would be to make .zero? as fast as == 0, but
from a syntactic/semantic point of view, I think it would make sense to be
able to treat both in the same way, speed-wise alone. At the least I don't
see why they should have to be treated in a dissimilar manner, so I agree
with sawa. If it can be changed, it would be great if it can be changed.
But I think it may not have the highest priority either; if possible, though,
I think it would be an improvement to have .zero? be as fast as, or at a
comparable speed to, == 0. (The syntax/semantics is of course for matz to have
a look at; it might be interesting to see if anything has changed since 1998 ...
that's a LONG time by the way now with 2019 ... :) )
I do not know how much work it would be to make .zero? as fast as == 0
In fact, it is easy to add a new specialized instruction, say opt_zero_p, for .zero?. And if we add only opt_zero_p, it will not cause a big problem.
However, there are many other methods that are much more frequent than .zero?. Adding specialized instructions for all of them would cause two problems:
- increases the maintenance cost of the VM
- increases the footprint of the interpreter, and may even degrade overall performance because of instruction cache misses
So, we need to carefully choose the set of the methods that deserve special handling.
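The trade-off is visible in the VM itself: CRuby exposes its instruction names, so the current set of specialized instructions can be listed. A sketch assuming CRuby (the list changes between versions, and opt_zero_p is the hypothetical name from the comment above, not a real instruction):

```ruby
# List the specialized instructions the VM currently defines.
specialized = RubyVM::INSTRUCTION_NAMES.grep(/\Aopt_/).sort
puts specialized

# Operators such as == already have one; zero? does not.
puts specialized.include?("opt_eq")
puts specialized.include?("opt_zero_p")
```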
I would like to thank mame for introducing the history. I am not too crazy about using zero? or nonzero?, especially after reading about the history; I now understand that there may be less motivation for these methods today. But it just does not make much sense to have a method that is slower than what it is shorthand for. So I think there are two ways to go. One: acknowledge that there are uses for these methods, and optimize them. Two: make them obsolete, in which case the original use cases should use == 0.
If zero? was written in Ruby (like it is in Rubinius & TruffleRuby), and Ruby inlining was implemented (TruffleRuby does), then there should be very little difference once #zero? is compiled by the JIT.
That said, I am unsure if the difference ever matters in real-world workloads: it seems unlikely for #zero? to be a significant bottleneck for an application.
MJIT already achieves around 24Mi/s for == 0 and 20Mi/s for #zero? (and TruffleRuby 193Mi/s for both zero? and == 0, illustrating my point) on my laptop:
v = 0
r = true
benchmark("== 0") do
  r = (v == 0)
end
benchmark("zero?") do
  r = v.zero?
end
using benchmark-interface --simple.
That's counting some block overhead and reading/writing to captured variables though, otherwise the benchmark optimizes away in >1 billion i/s.
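For those without the benchmark-interface gem, roughly the same loop can be run with the stdlib alone (iteration count arbitrary; absolute numbers will of course differ from the Mi/s figures above):

```ruby
require "benchmark"

v = 0
r = true
n = 5_000_000

Benchmark.bm(7) do |bm|
  bm.report("== 0")  { n.times { r = (v == 0) } }
  bm.report("zero?") { n.times { r = v.zero? } }
end

# Both loops leave r true, since v is 0.
puts r
```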
sawa (Tsuyoshi Sawada) wrote:
So I think there are two ways to go. One: Acknowledge that there are uses for these methods, and optimize them, or Two: Make them obsolete, in which case the original use cases should use == 0.
Don't be extreme. Both "One" and "Two" are unreasonable to me. My "Three": let it be. Speed consistency is too weak a reason to do anything.
Eregon (Benoit Daloze) wrote:
If zero? was written in Ruby (like it is in Rubinius & TruffleRuby), and Ruby inlining was implemented (TruffleRuby does), then there should be very little difference once #zero? is compiled by the JIT.
I think that this is the way to go. So I assigned this to k0kubun. (I think the priority is very low, though.)
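For illustration, the pure-Ruby definition Eregon mentions is essentially a one-liner. This hypothetical helper module (not the actual Rubinius/TruffleRuby source) mirrors the semantics without monkey-patching Numeric:

```ruby
# Hypothetical pure-Ruby equivalents of Numeric#zero? and #nonzero?.
module ZeroLike
  def self.zero?(n)
    n == 0
  end

  # nonzero? returns nil when the receiver is zero, else the receiver,
  # which makes it useful in chained comparisons like (a <=> b).nonzero?.
  def self.nonzero?(n)
    n == 0 ? nil : n
  end
end

puts ZeroLike.zero?(0)    # matches 0.zero?
puts ZeroLike.nonzero?(5) # matches 5.nonzero?
```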
- Tracker changed from Bug to Feature
- ruby -v deleted (2.6.1)
- Backport deleted (2.4: UNKNOWN, 2.5: UNKNOWN, 2.6: UNKNOWN)
- Status changed from Open to Closed