Feature #15881
Optimize deconstruct in pattern matching
Description
class A
  def deconstruct
    puts 'deconstruct called'
    [1]
  end
end

case A.new
in [2]
  2
in [1]
  1
else
end
Currently this outputs:
deconstruct called
deconstruct called
=> 1
Shouldn't 'deconstruct called' be printed only once, when the first deconstruction is needed?
Updated by mame (Yusuke Endoh) over 5 years ago
I talked with ktsj, the author of pattern matching. He had actually considered caching the result of deconstruct, but we found it difficult for a few reasons.
- If a destructive operation is applied to the object being matched (this is possible in a guard expression), the behavior of pattern matching would get messed up.
- ktsj investigated Scala's pattern matching, and it calls the unapply method each time without caching.
- We believe Array#deconstruct and Hash#deconstruct_keys would be called most often. They just return the receiver itself, so no object is generated, and caching is useless in the typical case.
- If the overhead of the method call itself matters, we can optimize it by adding a special instruction like opt_deconstruct.
- If you need to cache the result of your own deconstruct definition, it is not so difficult to manually memoize the result.
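As a rough illustration of that last point, here is a minimal sketch of manually memoizing a user-defined deconstruct (the Point class and its attributes are illustrative, not part of this ticket):

class Point
  attr_reader :x, :y

  def initialize(x, y)
    @x = x
    @y = y
  end

  # Build the array once; later array patterns reuse the memoized value.
  def deconstruct
    @deconstructed ||= [x, y]
  end
end

case Point.new(1, 2)
in [0, 0] then puts 'origin'
in [x, y] then puts "x=#{x}, y=#{y}"
end

Note that deconstruct itself is still called once per array pattern; memoization only avoids rebuilding the array.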
Updated by marcandre (Marc-Andre Lafortune) over 5 years ago
mame (Yusuke Endoh) wrote:
I talked with ktsj, the author of pattern matching. He had actually considered caching the result of deconstruct, but we found it difficult for a few reasons.
- If a destructive operation is applied to the object being matched (this is possible in a guard expression), the behavior of pattern matching would get messed up.
Is there a valid use case for this though? It seems more like an anti-pattern that would be better left unsupported.
- ktsj investigated Scala's pattern matching, and it calls the unapply method each time without caching.
Interesting. Do we know if there is a good reason for that? Scala is in general faster than Ruby, so it might not matter as much there...
- We believe Array#deconstruct and Hash#deconstruct_keys would be called most often. They just return the receiver itself, so no object is generated, and caching is useless in the typical case.
Well, unless I'm mistaken, it would not be completely useless, as it would avoid send(:respond_to?, :deconstruct_keys) and send(:deconstruct_keys), but the main issue really is for user-defined classes.
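For context, here is a rough plain-Ruby approximation of the per-pattern work being discussed; it is only a sketch of the repeated sends, not the actual VM implementation, and the helper name is made up:

# Roughly what each `in [...]` clause does today: a respond_to? check plus
# a deconstruct call, repeated for every array pattern that is tried.
def try_array_pattern(obj, expected_length)
  return nil unless obj.respond_to?(:deconstruct)
  values = obj.deconstruct
  raise TypeError, 'deconstruct must return an Array' unless values.is_a?(Array)
  values if values.length == expected_length
end

Caching the deconstructed value for the duration of one case expression would collapse these repeated sends into one.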
- If you need to cache the result of your own deconstruct definition, it is not so difficult to manually memoize the result.
I disagree. You can't simply use @cache ||= ...; you need to invalidate @cache if any dependency of the ... changes. That may be quite tricky. Let's remember:
There are only two hard things in Computer Science: cache invalidation and naming things.
-- Phil Karlton
I remain convinced that it would be better to cache this result.
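To make the invalidation problem concrete, here is a small sketch (with illustrative names, not from the ticket) of how a naive @cache ||= ... memoization goes stale once the object is mutated:

class MutablePoint
  attr_accessor :x, :y

  def initialize(x, y)
    @x = x
    @y = y
  end

  # Naive memoization: the cache is never invalidated.
  def deconstruct
    @cache ||= [x, y]
  end
end

pt = MutablePoint.new(1, 2)
pt.deconstruct  #=> [1, 2]
pt.x = 10
pt.deconstruct  #=> still [1, 2]; every writer would have to clear @cache

Correct caching would require clearing @cache whenever x or y changes, which is exactly the invalidation work being debated here.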
Updated by mame (Yusuke Endoh) over 5 years ago
marcandre (Marc-Andre Lafortune) wrote:
mame (Yusuke Endoh) wrote:
I talked with ktsj, the author of pattern matching. He had actually considered caching the result of deconstruct, but we found it difficult for a few reasons.
- If a destructive operation is applied to the object being matched (this is possible in a guard expression), the behavior of pattern matching would get messed up.
Is there a valid use case for this though? It seems more like an anti-pattern that would be better left unsupported.
ktsj and I prefer simplicity and robustness over performance when considering the design of the Ruby language. In Ruby, optimization is not primary; it is welcome only as long as it does not change the naive semantics. For example, if we disallowed the redefinition of built-in methods, including Integer#+, we could make the interpreter faster and the implementation much simpler. But we still allow and respect redefinition.
In this case, a more intelligent optimization (including proper cache invalidation) that does not affect the semantics would be needed, if the performance really matters.
- If you need to cache the result of your own deconstruct definition, it is not so difficult to manually memoize the result.
I disagree. You can't simply use @cache ||= ...; you need to invalidate @cache if any dependency of the ... changes. That may be quite tricky. Let's remember:
There are only two hard things in Computer Science: cache invalidation and naming things. -- Phil Karlton
Agreed. The same goes for the Ruby interpreter itself :-)
Updated by ktsj (Kazuki Tsujimoto) over 5 years ago
- Related to Feature #14912: Introduce pattern matching syntax added
Updated by ko1 (Koichi Sasada) over 5 years ago
- Assignee set to ktsj (Kazuki Tsujimoto)
Updated by hsbt (Hiroshi SHIBATA) 9 months ago
- Status changed from Open to Assigned