Feature #18589

Updated by kddnewton (Kevin Newton) over 2 years ago

This is related to https://github.com/ruby/ruby/pull/5433. 

 ## Current behavior 

 Caches depend on a global counter. All constant mutations cause all caches to be invalidated. 

 ```ruby 
 class A 
   B = 1 
 end 

 def foo 
   A::B # inline cache depends on global counter 
 end 

 foo # populate inline cache 
 foo # hit inline cache 

 C = 1 # global counter increments, all caches are invalidated 

 foo # misses inline cache due to `C = 1` 
 ``` 

 ## Proposed behavior 

Caches depend on the name components of the constant path. Only constant mutations involving one of those names invalidate the cache.

 ```ruby 
 class A 
   B = 1 
 end 

 def foo 
  A::B # inline cache depends on constants named "A" and "B"
 end 

 foo # populate inline cache 
 foo # hit inline cache 

 C = 1 # caches that depend on the name "C" are invalidated 

 foo # hits inline cache because IC only depends on "A" and "B" 
 ``` 

 Examples of breaking the new cache: 

 ```ruby 
 module C 
  # Breaks the `foo` cache because a constant named "A" is set and the cache
  # in `foo` depends on the names "A" and "B"
   class A; end 
 end 

B = 1 # also breaks the cache in foo, since it depends on the name "B"
 ``` 

We expect the new cache scheme to be invalidated less often because names aren't frequently reused. Because the cache is invalidated less often, we can rely more on its stability to keep constant references fast and reduce how often YJIT has to throw away generated code.
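
To make the bookkeeping concrete, here is a minimal Ruby sketch of the idea. The real implementation lives inside the VM in C; `CACHES_BY_NAME`, `ConstantCache`, and `constant_set` below are hypothetical names used only to model the scheme.

```ruby
# Hypothetical model of the VM-side table: constant name (as a Symbol) =>
# list of inline caches that depend on that name.
CACHES_BY_NAME = Hash.new { |hash, name| hash[name] = [] }

class ConstantCache
  def initialize(names)
    @names = names # e.g. [:A, :B] for the path A::B
    @valid = false
    names.each { |name| CACHES_BY_NAME[name] << self }
  end

  def fill!
    @valid = true
  end

  def valid?
    @valid
  end

  def invalidate!
    @valid = false
  end
end

# Setting a constant only invalidates the caches registered under that name.
def constant_set(name)
  CACHES_BY_NAME[name].each(&:invalidate!)
end

cache = ConstantCache.new([:A, :B]) # models the cache for A::B in `foo`
cache.fill!

constant_set(:C)
cache.valid? # => true, "C" is not one of this cache's name components

constant_set(:A)
cache.valid? # => false, the cache depends on the name "A"
```

In the same model, the current scheme would be a single counter that every cache compares against, so any `constant_set` call invalidates everything.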

 ## Performance benchmarks 

The following benchmark (included in the pull request) runs about 2x faster with this change than on master.

 ```ruby 
 CONSTANT1 = 1 
 CONSTANT2 = 1 
 CONSTANT3 = 1 
 CONSTANT4 = 1 
 CONSTANT5 = 1 

 def constants 
   [CONSTANT1, CONSTANT2, CONSTANT3, CONSTANT4, CONSTANT5] 
 end 

 500_000.times do 
   constants 
  INVALIDATE = true # busts every cache on master; here only caches depending on the name "INVALIDATE" (none)
 end 
 ``` 
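
For reference, one minimal way to time a run like this is the stdlib `Benchmark` module. This is only a sketch of a harness, not the one used in the PR; the condensed constant definitions and `$VERBOSE = nil` are conveniences for the snippet.

```ruby
require "benchmark"

# Same shape as the benchmark above, condensed so this snippet runs on its own.
CONSTANT1 = CONSTANT2 = CONSTANT3 = CONSTANT4 = CONSTANT5 = 1

def constants
  [CONSTANT1, CONSTANT2, CONSTANT3, CONSTANT4, CONSTANT5]
end

$VERBOSE = nil # silence "already initialized constant" warnings from the loop

elapsed = Benchmark.realtime do
  500_000.times do
    constants
    INVALIDATE = true # constant reassignment triggers cache invalidation
  end
end

puts "elapsed: #{elapsed.round(3)}s"
```

Running it once on master and once on this branch gives a rough like-for-like comparison.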

In terms of macro benchmarks, I ran railsbench with this change and there was no statistically significant difference in startup time or overall runtime performance.

 @byroot also ran performance benchmarks on our production application. He noticed that there were several cache busts related to Object#extend (from core libraries), ActiveRecord::Relation#extending (from Rails), and autoload (from various gems, both internal and external). After a lot of work, the cache busts went down: 

 ![Cache bust changes](https://user-images.githubusercontent.com/19192189/156726006-75aab77a-7fdf-47cf-88cb-1175f193c18a.png) 

but they're still frequent enough to be a problem. These changes made a measurable difference in request speed:

 ![Request speed changes](https://user-images.githubusercontent.com/19192189/156727814-adb0f8b5-9012-4d2c-ab9c-b29d80748a5c.png) 

 ## Memory benchmarks 

In terms of memory, this change increases VM size by about 500 KiB when running railsbench. This is because we now track cache associations (`{ ID => IC[] }`) on the VM so we know which specific caches to invalidate when a constant changes.

I booted Shopify's core monolith with this branch as well. It increased total retained memory from 1.23 GB to 1.3 GB (about a 0.7% increase). The memory increase is proportional to the number of constant caches found in the application. For each constant cache one level deep (e.g., `Foo`) the increase is about 33 bytes; for a constant cache two levels deep (e.g., `Foo::Bar`) it is about 67 bytes.
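
From those two data points, the overhead works out to roughly 33 bytes per name component in the cached constant path. A back-of-the-envelope estimate (the helper below is purely illustrative, not part of the implementation):

```ruby
# Purely illustrative: estimate the per-cache bookkeeping cost at roughly
# 33 bytes per name component, per the measurements above.
def estimated_overhead_bytes(constant_path)
  constant_path.split("::").length * 33
end

estimated_overhead_bytes("Foo")           # => 33
estimated_overhead_bytes("Foo::Bar")      # => 66 (measured: ~67)
estimated_overhead_bytes("Foo::Bar::Baz") # => 99 (extrapolated)
```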
