Feature #17679

Ractor incoming channel can consume unlimited resources

Added by marcotc (Marco Costa) about 3 years ago. Updated 7 months ago.

Status: Assigned
Target version: -
[ruby-core:102777]
Tags: ractor

Description

Background

In the ddtrace gem, we want to move telemetry trace sending to a separate background Ractor. We're concerned that if something goes wrong or gets delayed in this background Ractor, data will keep accumulating in the send/receive channel until the Ruby VM runs out of memory and crashes.

How to reproduce (Ruby version & script)

receiver_ractor = Ractor.new do
  loop do
    message = Ractor.receive
    sleep 1
    puts "Processed #{message}"
  end
end

counter = 0
while true
  counter += 1
  receiver_ractor.send(counter)
end

Expectation and result

The result is that the Ruby VM crashes because it runs out of memory.
We expect the Ruby VM not to crash.

Suggested solutions

Some ideas on how this can be improved:

  • Having a way for the sender to detect if the receiver Ractor is falling behind (approximate size of its queue, timestamp of the last processed item, or similar).
  • Having a way to limit the size of a Ractor's incoming message buffer (a hypothetical sketch follows this list).
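
A minimal sketch of the second idea, purely for illustration: the limit: option on Ractor.new and Ractor::QueueFullError below are made-up names, not existing Ruby API.

# Hypothetical API sketch -- `limit:` and Ractor::QueueFullError do not exist today.
receiver_ractor = Ractor.new(limit: 10_000) do  # cap the incoming queue at 10k messages
  loop do
    message = Ractor.receive
    sleep 1
    puts "Processed #{message}"
  end
end

counter = 0
while true
  counter += 1
  begin
    receiver_ractor.send(counter)
  rescue Ractor::QueueFullError
    sleep 0.1  # back off instead of accumulating unbounded memory
    retry
  end
end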

Related issues: 1 (1 open, 0 closed)

Related to Ruby master - Feature #18814: Ractor: add method to query incoming message queue size (Open)

Updated by hsbt (Hiroshi SHIBATA) about 3 years ago

  • Tags set to ractor
  • Status changed from Open to Assigned
  • Assignee set to ko1 (Koichi Sasada)

Updated by marcandre (Marc-Andre Lafortune) about 3 years ago

It's not clear to me that this should be implemented at the Ractor level.

Both suggested approaches can be handled in Ruby, for example using an intermediary Ractor...

DONE = Object.new.freeze

MIDDLEMAN = Ractor.new do
  on_queue = 0
  loop do
    message = Ractor.receive
    if message == DONE
      on_queue -= 1
    else
      if (on_queue > 32_000)
        puts "Too many requests, skipping #{message}"
        next
      end
      on_queue += 1
      DOER.send(message)
    end
  end
end

DOER = Ractor.new do
  loop do
    message = Ractor.receive
    sleep 0.01
    puts "Processed #{message}"
    MIDDLEMAN.send(DONE)
  end
end

counter = 0
while true
  counter += 1
  MIDDLEMAN.send(counter)
end

If/when a non-blocking receive becomes available, the receiving Ractor could also handle its waiting queue internally.
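
For example, assuming a non-blocking variant such as Ractor.receive(blocking: false) that returns nil when nothing is waiting (a hypothetical signature; no such API exists today), the receiver could drain its channel and enforce its own cap:

# Hypothetical sketch: assumes Ractor.receive(blocking: false) returning nil when
# no message is waiting -- this is not current Ruby API.
worker = Ractor.new do
  backlog = []
  loop do
    # Drain whatever is currently waiting without blocking, keeping at most 1_000 items.
    while (message = Ractor.receive(blocking: false))
      backlog << message if backlog.size < 1_000  # drop the rest instead of growing forever
    end
    if (next_message = backlog.shift)
      puts "Processed #{next_message}"
    else
      sleep 0.01  # nothing queued; avoid busy-looping
    end
  end
end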

Updated by ivoanjo (Ivo Anjo) almost 3 years ago

That's a reasonable point, @marcandre (Marc-Andre Lafortune). I also tried something similar at https://ivoanjo.me/blog/2021/02/14/ractor-experiments-safe-async/ .

Our concern at Datadog (I'm a colleague of @marcotc (Marco Costa)) is that adding these middle-layer threads/queues/Ractors is error-prone, and this seems like something that every Ractor user may need, so it can probably be solved much more cleanly by Ruby itself.

For instance, it looks like during enqueueing of messages in https://github.com/ruby/ruby/blob/9143d21b1bf2f16b1e847d569a588510726d8860/ractor.c#L408 the sender already checks the size of the queue, so having the option to back out when the queue is at a given size seems to be only a couple of lines away.

Updated by phigrofi (Philipp Großelfinger) over 1 year ago

I created a separate issue that would make it possible to query the incoming queue size from outside of a Ractor: https://bugs.ruby-lang.org/issues/18814

Maybe this could help you.
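
As an illustration of how that could be used here (the method name below is only a placeholder for whatever #18814 ends up proposing, not existing API), the original reproduction script could throttle itself:

# Illustrative only: `incoming_queue_size` stands in for whatever method #18814
# ends up adding; it is not current Ruby API.
receiver_ractor = Ractor.new do
  loop do
    message = Ractor.receive
    sleep 1
    puts "Processed #{message}"
  end
end

counter = 0
while true
  counter += 1
  # Throttle the sender instead of letting the incoming queue grow without bound.
  sleep 0.1 while receiver_ractor.incoming_queue_size > 10_000
  receiver_ractor.send(counter)
end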

Updated by ivoanjo (Ivo Anjo) over 1 year ago

Thanks @phigrofi (Philipp Großelfinger) for the hint! Definitely looks like an interesting way out of this.

Updated by jeremyevans0 (Jeremy Evans) 7 months ago

  • Related to Feature #18814: Ractor: add method to query incoming message queue size added

Updated by jeremyevans0 (Jeremy Evans) 7 months ago

  • Tracker changed from Bug to Feature
  • ruby -v deleted (ruby 3.0.0p0 (2020-12-25 revision 95aff21468) [x86_64-linux])
  • Backport deleted (2.5: UNKNOWN, 2.6: UNKNOWN, 2.7: UNKNOWN, 3.0: UNKNOWN)