Feature #18875


Speed up ISeq marking by introducing a bitmap and rearranging inline caches

Added by tenderlovemaking (Aaron Patterson) about 2 years ago. Updated about 2 years ago.



A large percentage of major GC time is spent marking instruction sequence objects. This PR aims to speed up major GC by speeding up marking instruction sequence objects.

Marking ISeq objects

Today we have to disassemble instruction sequences in order to mark them. The disassembly process looks for GC allocated objects and marks them. To disassemble an iseq, we have to iterate over each instruction, convert the instruction from an address back to the original op code integer, then look up the parameters for the op code. Once we know the parameter types, we can iterate through them and mark "interesting" references. We can see this process in the iseq_extract_values function.

According to profile results, the biggest bottleneck in this function is converting addresses back to instruction ids.

Speeding up ISeq marking

To speed up ISeq marking, this PR introduces two changes. The first change is adding a bitmap, and the second change is rearranging inline caches to be more "convenient".


At compilation time, we allocate a bitmap alongside the iseq object. The bitmap indicates the offsets of VALUE objects inside the instruction sequences. When marking an instruction sequence, we can simply iterate over the bitmap to find VALUE objects that need to be marked.

Inline Cache Rearrangement

Inline cache types IC, IVC, ICVARC, and ISE are allocated from a buffer that is stored on the iseq constant body. These caches are a union type. Unfortunately, these union types don't have a "type" field, so they can only be distinguished by looking at the parameter types of an instruction.

Take the following Ruby code for example:

Foo =~ /#{foo}/o;

The instruction sequences for this code are as follows:

== disasm: #<ISeq:<main>@-e:1 (1,0)-(1,17)> (catch: FALSE)
0000 opt_getinlinecache                     9, <is:0>                 (   1)[Li]
0003 putobject                              true
0005 getconstant                            :Foo
0007 opt_setinlinecache                     <is:0>
0009 once                                   block in <main>, <is:1>
0012 opt_regexpmatch2                       <calldata!mid:=~, argc:1, ARGS_SIMPLE>[CcCr]
0014 leave

The ISeq object contains two entries in the is_entries buffer, one for the ISE cache associated with the once instruction, and one for the IC cache associated with the opt_getinlinecache and opt_setinlinecache instructions.

Unfortunately we cannot iterate through the caches in the is_entries list because the union types don't have the same layout. Marking an ISE is very different from marking an IC, and we can only differentiate them by disassembling and checking the instruction sequences themselves.

To solve this problem, this PR introduces 3 counters for the different types of inline caches. Then, we group inline cache types within the is_entries buffer.
Since the inline cache types are grouped, we can use the counters to iterate over the buffer and we know what type is being used.

Combining bitmap marking and inline cache rearrangement means that we can mark instruction sequences without disassembling the instructions.

Speed impact

I benchmarked this change with a basic Rails application using the following script:


require "benchmark/ips"

Benchmark.ips do |x|
  x.report("major GC") { GC.start }
end

Here are the results with the master version of Ruby:

$ RAILS_ENV=production gel exec bin/rails r test.rb
ruby 3.2.0dev (2022-06-22T12:30:39Z master 744d17ff6c) [arm64-darwin21]
Warming up --------------------------------------
            major GC     4.000  i/100ms
Calculating -------------------------------------
            major GC     47.748  (± 2.1%) i/s -    240.000  in   5.028520s

Here it is with these patches applied:

$ RAILS_ENV=production gel exec bin/rails r test.rb
ruby 3.2.0dev (2022-06-22T20:52:13Z iseq-bitmap 2ba736a7f9) [arm64-darwin21]
Warming up --------------------------------------
            major GC     7.000  i/100ms
Calculating -------------------------------------
            major GC     77.208  (± 1.3%) i/s -    392.000  in   5.079023s

With these patches applied, major GC is about 60% faster.

Memory impact

The memory increase is proportional to the number of instructions stored on an iseq. This works out to roughly a 1% increase in the size of an iseq (ceil(iseq_length / 64) words on 64 bit platforms).

Future work

This PR always mallocs a bitmap table. We can eliminate the malloc when:

  1. There is nothing to mark
  2. iseq_length is <= 64

I posted a pull request on GitHub, and I've attached a squashed version of the patches. The diff includes some unused code which I used to verify the bitmaps. If we decide to merge this I'll remove the unused code.


0001-Squashed-commit-of-the-following.patch (23.6 KB) 0001-Squashed-commit-of-the-following.patch tenderlovemaking (Aaron Patterson), 06/22/2022 10:55 PM

Updated by tenderlovemaking (Aaron Patterson) about 2 years ago

  • Status changed from Open to Closed

Applied in changeset git|e23540e5666664e23f2adecdc2cc591f3ff6fe2f.

Speed up ISeq by marking via bitmaps and IC rearranging

This commit adds a bitfield to the iseq body that stores offsets inside
the iseq buffer that contain values we need to mark. We can use this
bitfield to mark objects instead of disassembling the instructions.

This commit also groups inline storage entries and adds a counter for
each entry. This allows us to iterate and mark each entry without
disassembling instructions.

Since we have a bitfield and grouped inline caches, we can mark all
VALUE objects associated with instructions without actually
disassembling the instructions at mark time.

[Feature #18875] [ruby-core:109042]

