Feature #18339 (Closed): GVL instrumentation API
Description
GVL instrumentation API
Context
One of the most common, if not the most common, ways to deploy a Ruby application these days is through threaded runners (typically puma, sidekiq, etc.).
While threaded runners can offer more throughput per RAM than forking runners, they do require carefully setting the concurrency level (number of threads).
If you increase concurrency too much, you'll experience GVL contention and the latency will suffer.
The common way today is to start with a relatively low number of threads, and then increase it until CPU usage reaches an acceptably high level (generally ~75%).
But this method really isn't precise, requires saturating one process with a fake workload, and doesn't tell how long threads are waiting on the GVL, just how much the CPU is used.
Because of this, lots of threaded applications are not that well tuned, even more so because the ideal configuration is very dependent on the workload and can vary over time. So a decent setting might not be so good six months later.
Ideally, application owners should be able to continuously see the impact of the GVL contention on their latency metric, so they can more accurately decide what throughput vs latency tradeoff is best for them and regularly adjust it.
Existing instrumentation methods
Currently, if you want to measure how much GVL contention is happening, you have to use lower-level tracing tools such as bpftrace or dtrace. These are quite advanced tools and require either root access or compiling Ruby with different configuration flags, etc.
They're also external, so common Application Performance Monitoring (APM) tools can't really report it.
Proposal
I'd like to have a C-level hook API around the GVL, with 3 events:
RUBY_INTERNAL_EVENT_THREAD_READY
RUBY_INTERNAL_EVENT_THREAD_RESUME
RUBY_INTERNAL_EVENT_THREAD_PAUSE
Such an API would allow implementing C extensions that collect various metrics about the GVL's impact, such as median / p90 / p99 wait time, or even per-thread total wait time.
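For instance, a wait-time collector could be sketched roughly like this. This is only a sketch: the callback signature (an event flag plus an opaque data pointer) and the use of C11 thread-local storage are assumptions, since the proposal above only names the events.

#include <ruby/ruby.h>
#include <time.h>
#include <stdint.h>

/* Per-thread bookkeeping (assumed mechanism: C11 thread-local storage). */
static _Thread_local uint64_t ready_at_ns;
static _Thread_local uint64_t total_wait_ns;

static uint64_t
monotonic_ns(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (uint64_t)ts.tv_sec * 1000000000ULL + (uint64_t)ts.tv_nsec;
}

/* Hypothetical callback: the exact rb_thread_callback signature is an
 * assumption, not spelled out in the proposal. */
static void
gvl_wait_collector(rb_event_flag_t event, void *data)
{
    switch (event) {
      case RUBY_INTERNAL_EVENT_THREAD_READY:
        /* The thread starts waiting on the GVL. */
        ready_at_ns = monotonic_ns();
        break;
      case RUBY_INTERNAL_EVENT_THREAD_RESUME:
        /* The thread acquired the GVL: accumulate how long it waited. */
        if (ready_at_ns) {
            total_wait_ns += monotonic_ns() - ready_at_ns;
            ready_at_ns = 0;
        }
        break;
      default:
        break;
    }
    (void)data;
}

The accumulated per-thread totals (or a histogram fed from the same deltas) could then be exposed to Ruby or to an APM agent.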
Additionally, it would be very useful if the hook passed some metadata, most importantly the number of threads currently waiting.
People interested in a lower-overhead monitoring method that doesn't call clock_gettime could instrument that number of waiting threads instead. It would be less accurate, but enough to tell whether there might be a problem.
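A minimal sketch of that cheaper approach, under the same assumed callback signature, is just an atomic gauge:

#include <stdatomic.h>

/* Gauge of threads currently waiting on the GVL; sampling it
 * periodically avoids any clock_gettime call in the hot path. */
static atomic_long waiting_threads;

static void
gvl_waiters_gauge(rb_event_flag_t event, void *data)
{
    switch (event) {
      case RUBY_INTERNAL_EVENT_THREAD_READY:
        atomic_fetch_add(&waiting_threads, 1);
        break;
      case RUBY_INTERNAL_EVENT_THREAD_RESUME:
        atomic_fetch_sub(&waiting_threads, 1);
        break;
      default:
        break;
    }
    (void)data;
}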
With such metrics, application owners would be able to much more precisely tune their concurrency setting, and deliberately choose their own tradeoff between throughput and latency.
Implementation
I submitted a PR for it: https://github.com/ruby/ruby/pull/5500 (lacking Windows support for now).
The API is as follows:
rb_thread_hook_t * rb_thread_event_new(rb_thread_callback callback, rb_event_flag_t event)
bool rb_thread_event_delete(rb_thread_hook_t * hook)
The overhead when no hook is registered is just a single unprotected boolean check, so it is close to zero.
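For illustration, a monitoring extension could wire the collector sketched earlier to these functions roughly as follows. Whether a single hook can subscribe to several events at once, and the exact rb_thread_callback signature, are assumptions on my part.

static rb_thread_hook_t *ready_hook;
static rb_thread_hook_t *resume_hook;

void
Init_gvl_metrics(void)
{
    /* One hook per event of interest, pointing at the collector above. */
    ready_hook  = rb_thread_event_new(gvl_wait_collector, RUBY_INTERNAL_EVENT_THREAD_READY);
    resume_hook = rb_thread_event_new(gvl_wait_collector, RUBY_INTERNAL_EVENT_THREAD_RESUME);
}

static void
gvl_metrics_teardown(void)
{
    /* Unregister when the metrics are no longer needed. */
    rb_thread_event_delete(ready_hook);
    rb_thread_event_delete(resume_hook);
}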