Feature #14759

[PATCH] set M_ARENA_MAX for glibc malloc

Added by normalperson (Eric Wong) 11 days ago. Updated 5 days ago.

Status:
Open
Priority:
Normal
Assignee:
-
Target version:
-
[ruby-core:87035]

Description

Not everybody benefits from jemalloc, and the extra download+install
time is not always worth it. Let's make the user experience for
glibc malloc users better, too.

Personally, I prefer using M_ARENA_MAX=1 (via MALLOC_ARENA_MAX
env) myself, but there is currently a performance penalty for
that.

gc.c (Init_GC): set M_ARENA_MAX=2 for glibc malloc

glibc malloc creates too many arenas and leads to fragmentation.
Given the existence of the GVL, clamping to two arenas seems
to be a reasonable trade-off for performance and memory usage.

Some users (including myself, for several years now) prefer only
one arena, so continue to respect users' wishes when
MALLOC_ARENA_MAX is set.

Thanks to Mike Perham for the reminder.

This doesn't seem to conflict with jemalloc, so it should be safe
for all glibc-using systems.
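As an illustration only, here is roughly what the proposed behavior amounts to, translated into Ruby via Fiddle (the actual patch is C code in gc.c's Init_GC, not Ruby; the parameter value -8 for M_ARENA_MAX is taken from glibc's malloc.h and is glibc-specific):

```ruby
# Sketch only: clamp glibc's malloc arena count to 2 unless the user has
# already chosen a value via the MALLOC_ARENA_MAX environment variable.
require 'fiddle'

M_ARENA_MAX = -8  # mallopt(3) parameter number, from glibc's malloc.h

libc = Fiddle.dlopen(nil)  # look up symbols in the current process
mallopt = Fiddle::Function.new(libc['mallopt'],
                               [Fiddle::TYPE_INT, Fiddle::TYPE_INT],
                               Fiddle::TYPE_INT)

ret = if ENV['MALLOC_ARENA_MAX']
        0  # respect the user's explicit setting; do nothing
      else
        mallopt.call(M_ARENA_MAX, 2)  # glibc returns 1 on success
      end
puts ret
```

Doing this from C at interpreter startup (as the patch does) avoids the race where arenas are already created before the limit is applied.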

History

#1 [ruby-core:87059] Updated by bluz71 (Dennis B) 11 days ago

normalperson (Eric Wong) wrote:

Personally, I prefer using M_ARENA_MAX=1 (via MALLOC_ARENA_MAX
env) myself, but there is currently a performance penalty for
that.

gc.c (Init_GC): set M_ARENA_MAX=2 for glibc malloc

This is not desirable in the longer term.

CRuby will likely get true concurrency in the future via ko1's Guild proposal. Reducing arenas will create new contention and serialisation at the memory-allocator level, thus negating the full benefit of Guilds.

Debate is currently occurring in feature #14718 about using jemalloc to solve Ruby's memory fragmentation issue on Linux. Resolution of that (one way or the other) should inform what to do here.

I believe this should be paused until #14718 is clarified.

#2 [ruby-core:87063] Updated by normalperson (Eric Wong) 10 days ago

dennisb55@hotmail.com wrote:

This is not desirable in the longer term.

I already had the following comment in the proposed patch:

  /*
   * Ruby doesn't benefit from many glibc malloc arenas due to GVL,
   * remove or increase when we get Guilds
   */

I believe this should be paused until #14718 is clarified.

I think there's room for both. This patch is a no-op when
jemalloc is linked.

#3 [ruby-core:87187] Updated by bluz71 (Dennis B) 8 days ago

As discussed in #14718 I am now a strong supporter of this change.

My prime concern for the future is what to do when Guilds land.

What do you think Eric? What should happen when Guilds become available?

My thinking for the arena count is maybe max(2, 0.5 * core-count). But that is just speculation; I think quite a bit of testing would need to occur with a proper Guild-enabled Ruby benchmark.
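That speculative heuristic could be sketched like this (`suggested_arena_max` is a made-up name for illustration, not anything in Ruby):

```ruby
# Sketch of the speculated heuristic: never fewer than two arenas,
# scaling up to half the core count on larger machines.
require 'etc'

def suggested_arena_max(cores = Etc.nprocessors)
  [2, cores / 2].max
end

puts suggested_arena_max(2)   # => 2
puts suggested_arena_max(16)  # => 8
```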

Also, on the side (as you suggest), should we continue to strive to improve glibc's memory fragmentation behaviour (I assume via test cases, issue trackers, and pull requests)?

#4 [ruby-core:87193] Updated by mame (Yusuke Endoh) 7 days ago

I tried to change Mike's script to use I/O, and I've created a script that works best with glibc with no MALLOC_ARENA_MAX specified.

# glibc (default)
$ time ./miniruby frag2.rb
VmRSS:    852648 kB
real    0m26.191s

# glibc with MALLOC_ARENA_MAX=2
$ time MALLOC_ARENA_MAX=2 ./miniruby frag2.rb
VmRSS:   1261032 kB
real    0m29.072s

# jemalloc 3.6.0 (shipped with Ubuntu)
$ time LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libjemalloc.so ./miniruby frag2.rb
VmRSS:    966624 kB
real    1m6.730s

$ ./miniruby -v
ruby 2.6.0dev (2018-05-19) [x86_64-linux]

As you can see, default glibc is the fastest and most memory-efficient.
Compared to that, glibc with MALLOC_ARENA_MAX=2 uses 1.5x the memory, and jemalloc 3.6.0 is twice as slow.

I'm unsure why this happens. I guess it is difficult to discuss this issue without glibc/jemalloc hackers.

# frag2.rb

THREAD_COUNT = (ARGV[0] || "10").to_i

File.write("tmp.txt", "x" * 1024 * 64)

Threads = []

THREAD_COUNT.times do
  Threads << Thread.new do
    a = []
    100_000.times do
      a << open("tmp.txt") {|f| f.read }
      a.shift if a.size >= 1600
    end
  end
end

Threads.each {|th| th.join }

IO.foreach("/proc/#{$$}/status") do |line|
  print line if line =~ /VmRSS/
end if RUBY_PLATFORM =~ /linux/

#5 [ruby-core:87199] Updated by mperham (Mike Perham) 6 days ago

Yusuke, your script doesn't create any memory fragmentation: it throws away everything after 1600 elements and reads the exact same amount of data each time. I don't believe this is how Rails apps behave; they fragment over time. My script creates random-sized data and holds onto 10% of the data to create "holes" in the heap and fragment memory quickly. I believe this better represents normal app conditions.

I've edited your script slightly to randomly keep some data; it better matches the results I posted earlier. I think changing the I/O to read random sizes would also exhibit worse memory usage:

$ time MALLOC_ARENA_MAX=2 /ruby/2.5.1/bin/ruby frag2.rb 
VmRSS:   1620356 kB

real    1m20.755s
user    0m38.057s
sys 1m2.881s

$ time /ruby/2.5.1/bin/ruby frag2.rb 
VmRSS:   1857284 kB

real    1m19.642s
user    0m36.645s
sys 1m4.480s
$ more frag2.rb 
THREAD_COUNT = (ARGV[0] || "10").to_i

File.write("/tmp/tmp.txt", "x" * 1024 * 64)

srand(1234)
Threads = []
Save = []

THREAD_COUNT.times do
  Threads << Thread.new do
    a = []
    100_000.times do
      a = open("/tmp/tmp.txt") {|f| f.read }
      Save << a if rand(100_000) < 1600
    end
  end
end

Threads.each {|th| th.join }
GC.start

IO.foreach("/proc/#{$$}/status") do |line|
  print line if line =~ /VmRSS/
end if RUBY_PLATFORM =~ /linux/

#6 [ruby-core:87209] Updated by bluz71 (Dennis B) 6 days ago

Mike,

Yusuke's script is still interesting for the data point that a Ruby script with MALLOC_ARENA_MAX=2 consumed more memory than the default arena count (usually 32 on 4-core machines).

Here are my results of Yusuke's script on my 4-core machine (Intel i5-4590 quad-core, 16GB RAM, Linux Mint 18.3 with kernel 4.15.0).

% time ruby frag2.rb 
VmRSS:   1,238,108 kB
real    0m38.792s

% time MALLOC_ARENA_MAX=2 ruby frag2.rb 
VmRSS:   1,561,624 kB
real    0m39.002s

% time MALLOC_ARENA_MAX=4 ruby frag2.rb 
VmRSS:   1,516,216 kB
real    0m36.614s

% time MALLOC_ARENA_MAX=32 ruby frag2.rb 
VmRSS:   1,218,180 kB
real    0m36.857s

This is perplexing. Clearly Ruby should not be changing defaults until we understand results like this.

Here are jemalloc results (3.6.0 first and 5.0.1 second):

% time LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libjemalloc.so ruby frag2.rb 
VmRSS:    933,328 kB
real    1m33.660s

% time LD_PRELOAD=/home/bluz71/.linuxbrew/Cellar/jemalloc/5.0.1/lib/libjemalloc.so ruby frag2.rb 
VmRSS:   1,613,252 kB
real    0m27.530s

jemalloc 3.6.0 is extremely slow, but with very low RSS.
jemalloc 5.0.1 is very fast (much faster than glibc), but also has the highest RSS.

Ruby cannot simply be linked against a system-supplied jemalloc, since different jemalloc versions behave extremely differently; jemalloc 3.6 and 5.0 are basically different allocators that share the same name.

But we need to always remember that long-lived Ruby processes on Linux suffer very bad memory fragmentation, as seen here:

[linked image: jemalloc]

What can the Ruby maintainers do?

I am less certain now what they should do than a week ago. But I still want to see the memory fragmentation issue reduced for default Ruby.

#7 [ruby-core:87211] Updated by normalperson (Eric Wong) 5 days ago

mame@ruby-lang.org wrote:

I tried to change Mike's script to use I/O, and I've created a
script that works best with glibc with no MALLOC_ARENA_MAX
specified.

Interesting: you found a corner case of some fixed sizes where
the glibc default appears best.

I tested 16K instead of 64K (since my computer is too slow, and
16K is the default buffer size for IO.copy_stream, net/protocol,
etc.), and the default was still best in that case.

So I wonder if there is a trim threshold where this happens,
and whether multiple arenas tickle that threshold more frequently.

However, I believe Mike's script with random sizes is more
representative of realistic memory use. Unfortunately,
srand+rand alone is not enough to give consistently reproducible
results when benchmarking with threads...

Maybe a single thread needs to generate all the random numbers
and feed them round-robin to per-thread SizedQueue for
deterministic results.
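The round-robin idea above could be sketched like this (all names, counts, and queue sizes here are illustrative, not part of any posted script):

```ruby
# One producer thread draws every random number from a single seeded
# PRNG and hands them out round-robin via per-worker SizedQueues, so
# each worker sees the same sequence on every run.
WORKER_COUNT = 4
rng = Random.new(1234)
queues = Array.new(WORKER_COUNT) { SizedQueue.new(64) }

producer = Thread.new do
  1_000.times { |i| queues[i % WORKER_COUNT] << rng.rand(100_000) }
  queues.each { |q| q << nil }  # nil sentinel: no more work
end

workers = queues.map do |q|
  Thread.new do
    total = 0
    while (n = q.pop)        # stops at the nil sentinel
      total += n             # stand-in for size-dependent allocation work
    end
    total
  end
end

producer.join
totals = workers.map(&:value)
puts totals.inspect          # identical on every run with the same seed
```

The key point is that thread scheduling no longer affects which numbers a given worker receives, only how fast it consumes them.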
