<p>Ruby Issue Tracking System, Ruby master - Feature #12020: Documenting Ruby memory model</p>
<p><a href="https://bugs.ruby-lang.org/issues/12020?journal_id=56662" class="external">Journal 56662</a>, 2016-01-25T21:02:49Z, normalperson (Eric Wong) &lt;normalperson@yhbt.net&gt;</p>
<ul></ul><p><a href="mailto:email@pitr.ch" class="email">email@pitr.ch</a> wrote:</p>
<blockquote>
<p>This issue proposes to document the Ruby memory model. The above mentioned memory model document which was created for concurrent-ruby can be used as a starting point: <a href="https://docs.google.com/document/d/1pVzU8w_QF44YzUCCab990Q_WZOdhpKolCIHaiXG-sPw/edit#" class="external">https://docs.google.com/document/d/1pVzU8w_QF44YzUCCab990Q_WZOdhpKolCIHaiXG-sPw/edit#</a></p>
</blockquote>
<p>Hello, I am interested in the topic but do not use JavaScript.<br>
Can you please provide a plain-text or basic HTML version?<br>
Thank you.</p>
<p>I tried changing the "/edit#" in the URL to "/pub" but could not see<br>
anything useful.</p>
<p>For reference, C Ruby programmers may find Linux memory-barriers.txt<br>
useful:</p>
<p><a href="https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/plain/Documentation/memory-barriers.txt" class="external">https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/plain/Documentation/memory-barriers.txt</a></p>
<p>AFAIK, we remain C89-compatible for old compilers, so we have many undefined<br>
behaviors to define and deal with.</p>
<p><a href="https://bugs.ruby-lang.org/issues/12020?journal_id=56665" class="external">Journal 56665</a>, 2016-01-25T22:19:05Z, Eregon (Benoit Daloze)</p>
<ul><li><strong>Related to</strong> <i><a class="issue tracker-2 status-1 priority-4 priority-default" href="/issues/12019">Feature #12019</a>: Better low-level support for writing concurrent libraries</i> added</li></ul>
<p><a href="https://bugs.ruby-lang.org/issues/12020?journal_id=56675" class="external">Journal 56675</a>, 2016-01-25T22:26:16Z, Eregon (Benoit Daloze)</p>
<ul><li><strong>File</strong> <i>RubyMemoryModel.rtf</i> added</li></ul><p>Eric Wong wrote:</p>
<blockquote>
<p><a href="mailto:email@pitr.ch" class="email">email@pitr.ch</a> wrote:</p>
<blockquote>
<p>This issue proposes to document the Ruby memory model. The above mentioned memory model document which was created for concurrent-ruby can be used as a starting point: <a href="https://docs.google.com/document/d/1pVzU8w_QF44YzUCCab990Q_WZOdhpKolCIHaiXG-sPw/edit#" class="external">https://docs.google.com/document/d/1pVzU8w_QF44YzUCCab990Q_WZOdhpKolCIHaiXG-sPw/edit#</a></p>
</blockquote>
<p>Hello, I am interested in the topic but do not use JavaScript.<br>
Can you please provide a plain-text or basic HTML version?<br>
Thank you.</p>
</blockquote>
<p>I attached an RTF version to this issue.</p>
<p><a href="https://bugs.ruby-lang.org/issues/12020?journal_id=56687" class="external">Journal 56687</a>, 2016-01-26T02:41:03Z, normalperson (Eric Wong) &lt;normalperson@yhbt.net&gt;</p>
<ul></ul><p><a href="mailto:eregontp@gmail.com" class="email">eregontp@gmail.com</a> wrote:</p>
<blockquote>
<p>I attached an RTF version to this issue.</p>
</blockquote>
<p>Thanks.</p>
<p>I'm not sure if shared memory is even a good model for Ruby (and not my<br>
decision). Anyways, my comments below if matz/ko1 decide to go down<br>
this route.</p>
<p>Background: I am only a simple C programmer with some familiarity with<br>
Userspace RCU and Linux kernel memory model. I have zero experience in<br>
Java, and I do not know any C++ beyond what is in C.</p>
<p>For those unfamiliar with RCU, it is basically a poor man's GC; and all<br>
Rubies have a GC implementation anyways. In fact, working with the<br>
quirks of our conservative GC is not much different from working with<br>
RCU and the relaxed memory-ordering model it favors.</p>
<blockquote>
<p>Core behavior<br>
The following sections cover the various storages in the Ruby language<br>
(e.g. local variables, instance variables, etc.). We consider the<br>
following operations:<br>
●read - reading a value from an already defined storage<br>
●write - writing a value to an already defined storage<br>
●define - creating a new storage and storing the default value or a<br>
supplied value<br>
●undefine - removing an existing storage<br>
Key properties are:<br>
●volatility (V) - A written value is immediately visible to any<br>
subsequent volatile read of the same variable on any Thread. It has the<br>
same meaning as in Java; it provides sequential consistency. A volatile<br>
write happens-before any subsequent volatile read of the same variable.</p>
</blockquote>
<p>Perhaps we call this "synchronous" or "coherent" instead.<br>
The word "volatile" is highly misleading and confusing to me<br>
as a C programmer. (Perhaps I am easily confused :x)</p>
<p>Anyways, I am not convinced (volatile|synchronous|coherent) access<br>
should happen anywhere by default for anything because of costs.</p>
<p>Those requiring synchronized data should use special method calls<br>
to ensure memory ordering.</p>
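<p>A minimal Ruby sketch of this "explicit methods" approach: memory ordering comes from an explicit <code>Mutex</code> (and from <code>Thread#join</code>), not from any default volatility. The shared hash and its contents are invented for the example.</p>

```ruby
# Instead of making shared state volatile by default, callers opt in to
# memory ordering with an explicit synchronization primitive.
# Mutex#synchronize both serializes the writes and establishes a
# happens-before edge, so the reader sees a fully initialized value.
lock   = Mutex.new
shared = nil

writer = Thread.new do
  lock.synchronize { shared = { status: :ready } }
end

writer.join  # Thread#join also establishes happens-before

value = lock.synchronize { shared }
puts value[:status]  # => ready
```

The cost of the ordering is paid only by code that asks for it, which is the point of Eric's suggestion.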
<blockquote>
<p>Constant variables<br>
●volatility - yes<br>
●atomicity - yes<br>
●serializability - yes<br>
●scope - a module<br>
A Module or a Class definition is actually a constant definition. The<br>
definition is atomic: it assigns the Module or the Class to the<br>
constant, then its methods are defined atomically one by one.<br>
It’s desirable that once a constant is defined, it and its value are<br>
immediately visible to all threads; therefore it’s volatile.</p>
</blockquote>
<p><snip (thread|fiber)-local, no objections there></p>
<blockquote>
<p>Method table<br>
●volatility - yes<br>
●atomicity - yes<br>
●serializability - yes<br>
●scope - a class<br>
Methods are also stored in storages, where the operations de facto are: read -> method<br>
lookup, write -> method redefinition, define -> method definition,<br>
undefine -> method removal. Operations over method tables have to be<br>
visible as soon as possible, otherwise Threads could execute different<br>
versions of methods leading to unpredictable behaviour; therefore they<br>
are marked volatile. When a method is updated while the method is being<br>
executed by a thread, the thread will finish the method body and it’ll<br>
use the updated method obtained on the next method lookup.</p>
</blockquote>
<p>I strongly disagree with volatility in method and constant tables. Any<br>
programs defining methods/constants in parallel threads and expecting<br>
them to be up-to-date deserve all the problems they get.</p>
<p>Maybe volatility for require/autoload is a special case only iff a<br>
method/constant is missing entirely; but hitting old methods/constants<br>
should be allowed by the implementation.</p>
<p>Methods (and all other objects) are already protected from memory<br>
corruption and use-after-free by the GC. There is no danger of segfaulting<br>
when old/stale methods get run.</p>
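<p>A small Ruby sketch of the stale-method behaviour described in the quoted text; the <code>Worker</code> class and the <code>Queue</code> handshake are invented to make the interleaving deterministic. At least on MRI, the in-flight call keeps running the old body, and only the next lookup observes the redefinition.</p>

```ruby
# A thread already inside a method body finishes the old body; the next
# method lookup sees the redefinition. The queues coordinate the threads
# so the redefinition is guaranteed to happen mid-call.
class Worker
  def greet(gate_in, gate_out)
    gate_in.push(:entered)   # tell the main thread we are inside the old body
    gate_out.pop             # wait until the redefinition has happened
    "old"
  end
end

entered, resume = Queue.new, Queue.new
w = Worker.new

t = Thread.new { w.greet(entered, resume) }
entered.pop                          # the thread is now executing the old body

Worker.class_eval do                 # redefine while the call is in flight
  def greet(*)
    "new"
  end
end
resume.push(:go)

puts t.value   # the in-flight call still returns "old"
puts w.greet   # the next lookup sees the redefinition: "new"
```

Nothing here segfaults or tears: the old method object stays alive until the GC proves it unreachable, which is the memory-safety point being made.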
<p>The inline, global (and perhaps, in the future, thread-specific)<br>
caches will all become expensive if we need to ensure read-after-write<br>
consistency by checking for changes on methods and constants made<br>
by other threads.</p>
<blockquote>
<p>Threads<br>
Threads have the same guarantees as in Java. Thread.new<br>
happens-before the execution of the new thread’s block. All operations<br>
done by the thread happen-before the thread is joined. In other words,<br>
when a thread is started it sees all changes made by its creator, and<br>
when a thread is joined, the joining thread will see all changes made<br>
by the joined thread.</p>
</blockquote>
<p>Good. For practical reasons, this should obviate the need for<br>
constant/method volatility specified above.</p>
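<p>The quoted Thread guarantees can be shown directly; this is a sketch of the model's two happens-before rules, with the variable names invented for the demo.</p>

```ruby
# Writes made before Thread.new are visible inside the block, and writes
# made inside the block are visible after Thread#join, with no extra
# synchronization needed under the proposed model.
config = { retries: 3 }          # written by the creator thread

result = nil
t = Thread.new do
  # Thread.new happens-before the block: the write to config is visible.
  result = config[:retries] * 2  # written by the child thread
end
t.join                           # everything the thread did happens-before this point

# After join, the child's write to result is guaranteed to be visible.
puts result   # => 6
```

This pair of edges is what Eric argues should carry most programs, making default volatility of constants and methods unnecessary.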
<blockquote>
<p>Beware of requiring and autoloading in concurrent programs; it's<br>
possible to see partially defined classes. Eager loading, or blocking<br>
until classes are fully loaded, should be used to mitigate this.</p>
</blockquote>
<p>No disagreement here :)</p>
<p><a href="https://bugs.ruby-lang.org/issues/12020?journal_id=56698" class="external">Journal 56698</a>, 2016-01-26T08:59:47Z, naruse (Yui NARUSE) &lt;naruse@airemix.jp&gt;</p>
<ul></ul><p>@Petr</p>
<p>You can "publish" the document and provide a simpler view with the <a href="https://support.google.com/docs/answer/37579?hl=en" class="external">following process</a>:</p>
<pre><code>To publish a file:
Open a document, spreadsheet, presentation, or drawing.
Click the File menu.
Select Publish to the Web.
While the entire file will be published, some file types have more publishing options:
Spreadsheet: Choose to publish the entire spreadsheet or individual sheets. You can also choose a publishing format (web page, .csv, .tsv, .pdf, .xlsx, .ods).
Presentation: Choose how quickly to advance the slides.
Drawing: Choose the image size for your drawing.
Click Publish.
Copy the URL and send it to anyone you’d like to see the file. Or, embed it into your website.
</code></pre>
<p><a href="https://bugs.ruby-lang.org/issues/12020?journal_id=56704" class="external">Journal 56704</a>, 2016-01-26T15:19:26Z, pitr.ch (Petr Chalupa)</p>
<ul></ul><p>Thanks, I've published the document at the following address; it'll be updated automatically and works without JS. Sorry I did not think about non-JS viewers. <a href="https://docs.google.com/document/d/1pVzU8w_QF44YzUCCab990Q_WZOdhpKolCIHaiXG-sPw/pub" class="external">https://docs.google.com/document/d/1pVzU8w_QF44YzUCCab990Q_WZOdhpKolCIHaiXG-sPw/pub</a></p>
<p><a href="https://bugs.ruby-lang.org/issues/12020?journal_id=56705" class="external">Journal 56705</a>, 2016-01-26T15:20:10Z, pitr.ch (Petr Chalupa)</p>
<ul><li><strong>File</strong> deleted (<del><i>RubyMemoryModel.rtf</i></del>)</li></ul>
<p><a href="https://bugs.ruby-lang.org/issues/12020?journal_id=56885" class="external">Journal 56885</a>, 2016-02-03T21:04:59Z, pitr.ch (Petr Chalupa)</p>
<ul></ul><p>Thank you for taking the time to read it and for your input. I apologise for the delayed answer; I was rather busy lately.</p>
<blockquote>
<blockquote>
<p>●volatility (V) - A written value is immediately visible to any<br>
subsequent volatile read of the same variable on any Thread. It has the<br>
same meaning as in Java; it provides sequential consistency. A volatile<br>
write happens-before any subsequent volatile read of the same variable.</p>
</blockquote>
<p>Perhaps we call this "synchronous" or "coherent" instead.<br>
The word "volatile" is highly misleading and confusing to me<br>
as a C programmer. (Perhaps I am easily confused :x)</p>
</blockquote>
<p>We can definitely consider a different name. I would defer it for later though, to avoid confusion now.</p>
<blockquote>
<p>Anyways, I am not convinced (volatile|synchronous|coherent) access<br>
should happen anywhere by default for anything because of costs.</p>
<p>Those requiring synchronized data should use special method calls<br>
to ensure memory ordering.</p>
</blockquote>
<p>I've added the following paragraph to the document explaining a little bit why volatility is preferred.</p>
<p>"The volatile property has noticeable impact on performance, on the other hand it’s often quite convenient property, since it simplifies reasoning about the program. Therefore unless it presents a performance issue volatility is preferred."</p>
<p>It tries to be in alignment with the rest of the Ruby language, to be user-friendly. Therefore the volatility behaviour is on Constants and similar. I've also elaborated in the document, in the Constants part, on why there is no performance loss in making them volatile: "Ruby implementations may take advantage of constancy of the variables to avoid doing volatile reads on each constant variable read. MRI can check a version number. JRuby can use SwitchPoint, and JRuby+Truffle can use Assumptions, where both allow to treat the values as real constants during compilation."</p>
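<p>A toy Ruby illustration of the version-number scheme mentioned for MRI (this is not MRI's actual implementation; the class and names are invented): reads stay on a fast cached path while a global serial is unchanged, and a rare constant write bumps the serial, invalidating every cache entry.</p>

```ruby
# Reads check a single serial number instead of doing a full (volatile)
# lookup; writes are the rare path and pay the invalidation cost.
class ConstantCache
  def initialize(table)
    @table  = table   # the "real" constant table
    @serial = 0       # global version number
    @cached = {}      # name => [serial_at_fill, value]
  end

  def read(name)
    entry = @cached[name]
    return entry[1] if entry && entry[0] == @serial  # fast path: serial matches
    value = @table.fetch(name)                       # slow path: full lookup
    @cached[name] = [@serial, value]
    value
  end

  def write(name, value)  # rare path: redefinition invalidates all caches
    @table[name] = value
    @serial += 1
  end
end

cache = ConstantCache.new("MAX" => 10)
puts cache.read("MAX")    # => 10 (fills the cache)
cache.write("MAX", 20)    # bumps the serial
puts cache.read("MAX")    # => 20 (stale entry detected, re-fetched)
```

The memory-model question in this thread is precisely what visibility guarantees the serial itself needs once threads run in parallel.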
<blockquote>
<blockquote>
<p>Constant variables<br>
●volatility - yes<br>
●atomicity - yes<br>
●serializability - yes<br>
●scope - a module<br>
A Module or a Class definition is actually a constant definition. The<br>
definition is atomic: it assigns the Module or the Class to the<br>
constant, then its methods are defined atomically one by one.<br>
It’s desirable that once a constant is defined, it and its value are<br>
immediately visible to all threads; therefore it’s volatile.</p>
</blockquote>
<p><snip (thread|fiber)-local, no objections there></p>
<blockquote>
<p>Method table<br>
●volatility - yes<br>
●atomicity - yes<br>
●serializability - yes<br>
●scope - a class<br>
Methods are also stored in storages, where the operations de facto are: read -> method<br>
lookup, write -> method redefinition, define -> method definition,<br>
undefine -> method removal. Operations over method tables have to be<br>
visible as soon as possible, otherwise Threads could execute different<br>
versions of methods leading to unpredictable behaviour; therefore they<br>
are marked volatile. When a method is updated while the method is being<br>
executed by a thread, the thread will finish the method body and it’ll<br>
use the updated method obtained on the next method lookup.</p>
</blockquote>
<p>I strongly disagree with volatility in method and constant tables. Any<br>
programs defining methods/constants in parallel threads and expecting<br>
them to be up-to-date deserve all the problems they get.</p>
</blockquote>
<p>I see that this approach would be easier for Ruby implementers; on the other hand it would create bugs that are very hard to debug for users. Even though I agree that they should not do parallel loading, I would still like to protect them. Making both volatile should have only a minor impact on code loading; if that's not the case, it should definitely be reconsidered.</p>
<blockquote>
<p>Maybe volatility for require/autoload is a special case only iff a<br>
method/constant is missing entirely; but hitting old methods/constants<br>
should be allowed by the implementation.</p>
</blockquote>
<p>The volatility of require/autoload, and the fact that they block when another thread is loading a given file/constant, are very useful in a parallel environment to make sure that some feature/class is fully loaded before using it. Both are usually used only on program paths which run only once, during loading or reloading, therefore they are not performance critical.</p>
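<p>A small demonstration of the blocking guarantee this paragraph relies on, assuming MRI's behaviour that concurrent <code>require</code>s of the same file block until loading finishes; the file name and constant are invented for the demo.</p>

```ruby
# Several threads require the same slow-loading file. Whichever thread
# loads it, the others block on the require; by the time require returns
# in any thread, the feature is fully loaded, never partially defined.
require 'tmpdir'

results = nil
Dir.mktmpdir do |dir|
  path = File.join(dir, 'demo_feature.rb')
  File.write(path, <<~RUBY)
    sleep 0.1                # simulate a slow, multi-step load
    DEMO_LOADED = :fully
  RUBY

  threads = 4.times.map do
    Thread.new do
      require path
      DEMO_LOADED            # defined by the time require returns
    end
  end
  results = threads.map(&:value)
end
puts results.uniq.inspect    # => [:fully]
```

Without the blocking behaviour, a second thread could observe the file half-evaluated, which is exactly the partially-defined-class hazard discussed earlier in the thread.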
<blockquote>
<p>Methods (and all other objects) are already protected from memory<br>
corruption and use-after-free by the GC. There is no danger of segfaulting<br>
when old/stale methods get run.</p>
<p>The inline, global (and perhaps, in the future, thread-specific)<br>
caches will all become expensive if we need to ensure read-after-write<br>
consistency by checking for changes on methods and constants made<br>
by other threads.</p>
</blockquote>
<p>I've tried to explain a little bit in the document why this should not have any overhead. MRI with a GIL does not have to ensure read-after-write consistency, and other, compiling implementations actively invalidate the compiled code if it depends on a constant (or a method) which was just redefined.</p>
<p>I did not entirely understand why you are against volatility on constants and methods; I tried to explain better why they are suggested to be volatile, though. Could you elaborate?</p>
<blockquote>
<blockquote>
<p>Threads<br>
Threads have the same guarantees as in Java. Thread.new<br>
happens-before the execution of the new thread’s block. All operations<br>
done by the thread happen-before the thread is joined. In other words,<br>
when a thread is started it sees all changes made by its creator, and<br>
when a thread is joined, the joining thread will see all changes made<br>
by the joined thread.</p>
</blockquote>
<p>Good. For practical reasons, this should obviate the need for<br>
constant/method volatility specified above.</p>
</blockquote>
<p>It would certainly help if they weren't volatile; the require and autoload guarantees would help as well.</p>
<blockquote>
<blockquote>
<p>Beware of requiring and autoloading in concurrent programs; it's<br>
possible to see partially defined classes. Eager loading, or blocking<br>
until classes are fully loaded, should be used to mitigate this.</p>
</blockquote>
<p>No disagreement here :)</p>
</blockquote>
<p><a href="https://bugs.ruby-lang.org/issues/12020?journal_id=56889" class="external">Journal 56889</a>, 2016-02-04T17:28:49Z, Eregon (Benoit Daloze)</p>
<ul></ul><p>Eric Wong wrote:</p>
<blockquote>
<p>I'm not sure if shared memory is even a good model for Ruby (and not my<br>
decision). Anyways, my comments below if matz/ko1 decide to go down<br>
this route.</p>
</blockquote>
<p>Shared memory at the user level is only one possibility indeed.<br>
But it is one that the current model supports, even if MRI prevents actual parallelism.<br>
Also, for a memory model we must take the bottom layer, which seems to be shared memory here.</p>
<blockquote>
<p>Anyways, I am not convinced (volatile|synchronous|coherent) access<br>
should happen anywhere by default for anything because of costs.</p>
</blockquote>
<blockquote>
<p>I strongly disagree with volatility in method and constant tables. Any<br>
programs defining methods/constants in parallel threads and expecting<br>
them to be up-to-date deserve all the problems they get.</p>
</blockquote>
<p>The idea is that it's only volatile/coherent for storages which are naturally "global"<br>
<em>and</em> where the performance overhead is very limited.</p>
<p>As Petr said, the impact on constants is only for the uncached case,<br>
which is already much slower than a cached constant lookup.</p>
<p>For methods it is very similar, as Ruby implementations invalidate<br>
the method caches when the method table is changed, which means<br>
there is only a minor overhead on method lookup for populating the cache.</p>
<p>On MRI, there is of course no overhead since the GIL guarantees these properties and much more.</p>
<blockquote>
<p>The inline, global (and perhaps, in the future, thread-specific)<br>
caches will all become expensive if we need to ensure read-after-write<br>
consistency by checking for changes on methods and constants made<br>
by other threads.</p>
</blockquote>
<p>You are right, inline caches would have overhead on some platforms,<br>
unless some form of safepoints/yieldpoints are available to the VM to clear the caches or ensure visibility<br>
(with a serial number check, it could just ensure visibility of the new serial to every thread).<br>
If the VM actually runs Ruby code in parallel, then it also most likely uses safepoints<br>
for the GC so I would guess Ruby VMs either have them or do not run Ruby code in parallel.</p>
<p>The global cache could use a similar approach to avoid overhead.</p>
<p>With this, the overhead would be limited to the slow-path method/constant lookup and the additional cost to invalidate.</p>
<p><a href="https://bugs.ruby-lang.org/issues/12020?journal_id=56907" class="external">Journal 56907</a>, 2016-02-05T21:51:32Z, normalperson (Eric Wong) &lt;normalperson@yhbt.net&gt;</p>
<ul></ul><p><a href="mailto:email@pitr.ch" class="email">email@pitr.ch</a> wrote:</p>
<blockquote>
<p>Thank you for taking the time to read it and for your input. I apologise<br>
for the delayed answer; I was rather busy lately.</p>
</blockquote>
<p>No worries, I've been busy, too.</p>
<blockquote>
<p>I've added the following paragraph to the document explaining a little bit<br>
why volatility is preferred.</p>
<p>"The volatile property has noticeable impact on performance, on the<br>
other hand it’s often quite convenient property, since it simplifies<br>
reasoning about the program. Therefore unless it presents a<br>
performance issue volatility is preferred."</p>
<p>It tries to be in alignment with the rest of the Ruby language, to be<br>
user-friendly. Therefore the volatility behaviour is on Constants and<br>
similar. I've also elaborated in the document, in the Constants part, on why<br>
there is no performance loss in making them volatile: "Ruby<br>
implementations may take advantage of constancy of the variables to<br>
avoid doing volatile reads on each constant variable read. MRI can<br>
check a version number.</p>
</blockquote>
<p>For MRI, checking a version number still requires a memory model of that<br>
version number to be defined. I'd rather not have consistency guarantees<br>
for the version number.</p>
<p>This goes for constants and methods at least, which are already<br>
versioned in MRI.</p>
<blockquote>
<blockquote>
<p>Maybe volatility for require/autoload is a special case only iff a<br>
method/constant is missing entirely; but hitting old methods/constants<br>
should be allowed by the implementation.</p>
</blockquote>
<p>The volatility of require/autoload, and the fact that they block when another<br>
thread is loading a given file/constant, are very useful in a parallel<br>
environment to make sure that some feature/class is fully loaded<br>
before using it. Both are usually used only on program paths which run<br>
only once, during loading or reloading, therefore they are not<br>
performance critical.</p>
</blockquote>
<p>Agreed. So perhaps missing constant/method falls back to<br>
(volatile|synchronous|coherent) checking.</p>
<p>However, redefined/included/extended existing constant/methods should<br>
only be eventually consistent; they may be cached locally per-thread.</p>
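<p>A toy sketch of this eventually-consistent alternative (the class and all names are invented): each thread keeps a private lookup cache that is refreshed only at explicit points, so a redefinition made by another thread may be observed late, but reads never tear.</p>

```ruby
# Each thread caches resolved lookups in thread-local storage and pays no
# synchronization cost on the fast path; staleness is permitted until the
# thread chooses to refresh (e.g. on a missing name, or at require time).
class PerThreadCache
  def initialize(table)
    @table = table
  end

  def read(name)
    local = (Thread.current[:lookup_cache] ||= {})
    local[name] ||= @table.fetch(name)   # may return a stale value
  end

  def refresh!                           # an explicit synchronization point
    Thread.current[:lookup_cache] = {}
  end
end

table = { "ANSWER" => 41 }
cache = PerThreadCache.new(table)
puts cache.read("ANSWER")   # => 41, now cached in this thread
table["ANSWER"] = 42        # "redefinition" made elsewhere
puts cache.read("ANSWER")   # => 41, stale but allowed under this scheme
cache.refresh!
puts cache.read("ANSWER")   # => 42, visible after the explicit refresh
```

This is the trade Eric is proposing: the reader's fast path needs no cross-thread consistency check at all, at the price of bounded staleness.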
<blockquote>
<blockquote>
<p>The inline, global (and perhaps, in the future, thread-specific)<br>
caches will all become expensive if we need to ensure read-after-write<br>
consistency by checking for changes on methods and constants made<br>
by other threads.</p>
</blockquote>
</blockquote>
<blockquote>
<p>I've tried to explain a little bit in the document why this should not have<br>
any overhead. MRI with a GIL does not have to ensure read-after-write<br>
consistency, and other, compiling implementations actively invalidate<br>
the compiled code if it depends on a constant (or a method) which was<br>
just redefined.</p>
</blockquote>
<p>MRI has a GIL today; I do not want MRI to have a GIL in the future.</p>
<p>To give us the most freedom in the future, I prefer we have as<br>
few guarantees as practical about consistency.</p>
<blockquote>
<p>I did not entirely understand why you are against volatility on<br>
constants and methods; I tried to explain better why they are<br>
suggested to be volatile, though. Could you elaborate?</p>
</blockquote>
<p>See above :> Any version checks for thread-specific caches would<br>
need to define the consistency of the version number itself.</p>
<p><a href="https://bugs.ruby-lang.org/issues/12020?journal_id=57219" class="external">Journal 57219</a>, 2016-02-29T23:06:51Z, pitr.ch (Petr Chalupa)</p>
<ul></ul><p>I understand your point; I would like to explore how it could be solved in MRI before relaxing the constant and method redefinition guarantees, though. The relaxation could lead to undesirable, unpredictable behaviour for users.</p>
<p>As you've mentioned, the version would have to be a volatile (Java) or an atomic (C++11) variable to guarantee that the value is up to date. That would mean a volatile read before each method call or constant read; volatile reads are not terribly expensive though. E.g. on x86 it's just a mov instruction (same as a regular load) (I am not sure what other platforms MRI targets). Volatile writes are more expensive, but that happens only on a rare path, the method or constant redefinition. Without a JIT and more optimisations it might have only small or no overhead in MRI, which could be measured in current MRI with the GIL just by making the version number atomic (in C terminology). (I am not capable of altering the MRI source code to measure it though.)</p>
<p>But as Benoit has suggested:</p>
<blockquote>
<p>You are right, inline caches would have overhead on some platforms,<br>
unless some form of safepoints/yieldpoints are available to the VM to clear the caches or ensure visibility<br>
(with a serial number check, it could just ensure visibility of the new serial to every thread).<br>
If the VM actually runs Ruby code in parallel, then it also most likely uses safepoints<br>
for the GC so I would guess Ruby VMs either have them or do not run Ruby code in parallel.</p>
</blockquote>
<p>when MRI has no GIL it will need some kind of safepoint to park threads, allowing the GC to run. That would allow removing any overhead on the fast path, the version checking. Roughly it would work as follows: a constant redefinition would change the constant, update the version number, wait for all threads to reach the safepoint to make sure that all threads will see the new version number on the next read, and finish the constant redefinition.</p>
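<p>A rough simulation of that safepoint protocol in plain Ruby (the class and method names are invented; a real VM would do this in C, polling at loop back-edges): the writer bumps an epoch and blocks until every registered thread has passed a safepoint, which guarantees all of them will observe the new version before the redefinition is considered complete.</p>

```ruby
# Mutator threads call safepoint() periodically; the writer bumps an
# epoch and waits until every registered thread has acknowledged it.
class SafepointRegistry
  def initialize
    @lock, @cv = Mutex.new, ConditionVariable.new
    @epoch = 0
    @seen  = {}   # thread => last epoch it acknowledged
  end

  def register(thread = Thread.current)
    @lock.synchronize { @seen[thread] = @epoch; @cv.broadcast }
  end

  # Called by mutator threads at regular points (loop back-edges, calls).
  def safepoint
    @lock.synchronize { @seen[Thread.current] = @epoch; @cv.broadcast }
  end

  # Writer side: bump the epoch, then block until all threads passed a
  # safepoint and therefore observed it.
  def wait_for_all
    @lock.synchronize do
      @epoch += 1
      @cv.wait(@lock) until @seen.values.all? { |e| e >= @epoch }
    end
  end
end

reg     = SafepointRegistry.new
value   = :old
running = true

workers = 2.times.map do
  Thread.new { reg.safepoint while running }   # keep hitting safepoints
end
workers.each { |t| reg.register(t) }           # known to the registry up front

value = :new        # 1. update the constant
reg.wait_for_all    # 2. wait until every thread passed a safepoint
running = false     # 3. redefinition complete; all threads now see :new
workers.each(&:join)
puts value          # => new
```

The fast path (the worker loop) never reads the epoch outside a safepoint, which is how the scheme removes the per-read version check Petr describes.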
<p>I feel silly for such a late answer; I did not get any email about the new comment even though I watch the issue.</p>
<p><a href="https://bugs.ruby-lang.org/issues/12020?journal_id=57519" class="external">Journal 57519</a>, 2016-03-17T06:39:28Z, shyouhei (Shyouhei Urabe) &lt;shyouhei@ruby-lang.org&gt;</p>
<ul><li><strong>Status</strong> changed from <i>Open</i> to <i>Assigned</i></li><li><strong>Assignee</strong> set to <i>ko1 (Koichi Sasada)</i></li></ul><p>Koichi has some opinions in this area and wants to dump them to this thread. Please go ahead.</p>
<p><a href="https://bugs.ruby-lang.org/issues/12020?journal_id=57531" class="external">Journal 57531</a>, 2016-03-17T08:59:47Z, pitr.ch (Petr Chalupa)</p>
<ul></ul><p>Great, thanks. I am looking forward to continuing the discussion.</p>
<p><a href="https://bugs.ruby-lang.org/issues/12020?journal_id=58008" class="external">Journal 58008</a>, 2016-04-11T07:18:19Z, ko1 (Koichi Sasada)</p>
<ul></ul><p>Sorry for being late to comment on this topic<br>
(and sorry, I have not read all of the comments on this topic).</p>
<p>First of all, I want to express my appreciation for such a great document.</p>
<p>However, my opinion is negative.<br>
Basically (at least for MRI), <em>I</em> am against this proposal because it is too difficult to understand and to implement.<br>
I believe we should introduce a memory consistency model at a higher level of abstraction, like the Go language does.</p>
<a name="Difficulty-of-understanding"></a>
<h3 >Difficulty of understanding<a href="#Difficulty-of-understanding" class="wiki-anchor">¶</a></h3>
<p>For example, JSR-133 is well documented by great people.<br>
But I'm not sure how many people understand its very details correctly (and the devils are in the details).</p>
<p>To make it easy, we need to introduce clearer, smaller rules with a higher level of abstraction.</p>
<a name="Difficulty-of-implementation"></a>
<h3 >Difficulty of implementation<a href="#Difficulty-of-implementation" class="wiki-anchor">¶</a></h3>
<p>As you know, there are various computer architectures enabling shared-memory parallel computing with different memory consistency models.<br>
I'm not sure we can implement this on all of them.</p>
<p>For example (a trivial example), all your "atomicity" fields are true,<br>
but I'm not sure how to implement them correctly for Float values (as you write in the middle of this document) on every computer architecture.</p>
<p>As you know, some computers reorder memory accesses.<br>
To serialize them, we need to issue extra instructions.<br>
Maybe we need to issue them many times if we need to satisfy all of the rules.</p>
<p>Also, strict rules will become hurdles for future optimizations.</p>
<p>On MRI, we don't need to care about such memory access reordering<br>
because MRI uses pthread_mutex (or a similar API) on switching.</p>
<a name="Note"></a>
<h3 >Note<a href="#Note" class="wiki-anchor">¶</a></h3>
<p>This is my opinion, and Matz has agreed with it.</p>
<p>However, it is only a personal opinion.<br>
We need to discuss it,<br>
so your contribution is valuable.</p>
<p><a href="https://bugs.ruby-lang.org/issues/12020?journal_id=58170" class="external">Journal 58170</a>, 2016-04-20T17:40:07Z, pitr.ch (Petr Chalupa)</p>
<ul></ul><p>Thank you for responding and for taking the time to read the proposal.</p>
<p>Let me start by elaborating more on the motivation behind all of the related<br>
proposals, since I did not really explain it in detail when I was opening<br>
them. I apologise for not doing that sooner.</p>
<a name="Motivation"></a>
<h3 >Motivation<a href="#Motivation" class="wiki-anchor">¶</a></h3>
<p>I would like to clear up a possible misunderstanding about the target users of<br>
this document and this memory model. It's not intended to be directly used by the<br>
majority of Ruby programmers. (Even though the document aims to be<br>
understandable, it will still be a difficult topic.) It's intended to be used by<br>
concurrency enthusiasts, giving them tools to build many different concurrency<br>
abstractions as gems.</p>
<p>At this point Ruby is a general-purpose language, with direct support for<br>
Threads and shared memory. As was announced in a few presentations, there are<br>
plans to add a new easy-to-use abstraction to Ruby in some future release and<br>
maybe deprecate Threads. Let's call this scenario A. (Block-quotes are used for<br>
better logical structure.)</p>
<blockquote>
<p>(A) I understand the need to add such an abstraction (actors, channels, other?)<br>
to Ruby to enable Ruby users to build concurrent applications with ease. For<br>
future reference, let's call that one future abstraction Red. The Red would then<br>
have well-documented and defined behaviour in concurrent and parallel execution.<br>
(This is what I think you are referring to.) However providing just one<br>
abstraction in the standard library (and deprecating Threads) will hurt the<br>
usability of the Ruby language.</p>
</blockquote>
<blockquote>
<p>The problem lies in the fact that there is no single concurrency abstraction which<br>
would fit all problems. Therefore providing just Red would leave the Ruby language<br>
suitable for just some subset of problems.</p>
</blockquote>
<blockquote>
<p>Assuming: only the Red would be documented and would provide high-level<br>
guarantees; threads would be deprecated; low-level concurrency would not be<br>
documented or guaranteed. Developers who would like to contribute a new<br>
abstraction to solve another group of problems would be left with the following (I<br>
think not very good) choices:</p>
</blockquote>
<blockquote>
<blockquote>
<p>(1) Implement the abstraction in the underlying language used for the<br>
particular Ruby implementation (in C for MRI, in Java for JRuby(+Truffle)),<br>
using guarantees provided by the underlying language. This means the author of the<br>
new abstraction has to understand 3 programming languages (C, Ruby, Java) and 3<br>
implementations to develop the implementation 3 times. That would discourage<br>
people and also make the whole process error-prone and difficult.</p>
</blockquote>
</blockquote>
<blockquote>
<blockquote>
<p>(2) Implement the abstraction using the Red. This approach gives users the<br>
desired abstraction (avoiding using different languages and understanding<br>
implementation details) but it will probably have bad performance since the Red<br>
is not suited to solve this problem. For example implementing ConcurrentHashMap<br>
(allowing parallel reads and writes) with actors would perform badly.<br>
(Admittedly this is a little extreme example, but it demonstrates the problem<br>
and I could not think of a better one.)</p>
</blockquote>
</blockquote>
<p>The above is, to the best of my knowledge, where Ruby is heading in the future; please<br>
correct me if I misunderstood and/or misrepresented it in any way.</p>
<p>To avoid the above-outlined difficulties, Ruby could take a different path,<br>
which is related to these proposals (or their evolved successors).</p>
<blockquote>
<p>(B) Ruby would stay a general-purpose language with direct thread support and<br>
shared memory with documented memory model. The low-level documentation would<br>
allow people (who are interested) to write different concurrent abstractions<br>
efficiently. One of them would become the standard and preferred way to<br>
deal with concurrency in Ruby. Let's call it Blue. The Blue abstraction would<br>
(as Red would) be part of the standard library. Same as Red it would have well<br>
documented and defined behaviour in concurrent and parallel execution, but in<br>
this case based on the lower-level model. The documentation would be directed<br>
at all Ruby users and made as easy to understand as possible.</p>
</blockquote>
<blockquote>
<p>The majority of Ruby users would use Blue as the go-to abstraction, just as they would<br>
use Red in scenario A. The key difference is that there is a low-level<br>
model to support the creation of new abstractions. Therefore, when Blue cannot<br>
solve a particular issue, a Ruby user can pick a different concurrency<br>
abstraction created by somebody else and provided as a gem, or create a new one.</p>
</blockquote>
<p>I believe this would make the Ruby language more flexible.</p>
<a name="Difficulty-of-understanding"></a>
<h3 >Difficulty of understanding<a href="#Difficulty-of-understanding" class="wiki-anchor">¶</a></h3>
<p>This is something I believe can be improved over time. Also, as mentioned above,<br>
it's not intended to be used by everyone. Could you point me to parts which are<br>
not understandable, or which lack explanation? I would like to improve them, to make<br>
the document more comprehensible.</p>
<p>The document is intentionally not as detailed and formal as JSR-133, to keep<br>
it understandable. The price, as you say (and I agree), is details and omissions<br>
which may be left unspecified. I believe the high-level documentation for<br>
Red will unfortunately suffer from the same problem of evil details.</p>
<p>If the memory model is reviewed by many people and given some time to mature, I<br>
believe it will cover the majority of situations; omitted corner cases can be<br>
fixed later. I think the current situation, where each<br>
implementation has different rules, is much worse, and any document will improve the situation<br>
greatly.</p>
<a name="Difficulty-of-implementation"></a>
<h3 >Difficulty of implementation<a href="#Difficulty-of-implementation" class="wiki-anchor">¶</a></h3>
<p>(Various architectures) I am not a C programmer, so I am not that well informed,<br>
but I believe that in C this is solved by the C11 standard and, before that, by<br>
various libraries. Can MRI use C11 in the future, when it drops the GIL?</p>
<p>(Atomic Float) I agree that it is more difficult when Floats are required to be<br>
atomic, but if they were not, it would be quite surprising to Ruby users that<br>
a simple reference assignment of a Float object (as it's represented in Ruby)<br>
is not atomic. Therefore this is chosen to be atomic purely so as not to surprise<br>
users, and to avoid having to educate users about torn reads/writes. The same applies to a<br>
Fixnum which is bigger than an int but fits into a long (using Java primitive names<br>
here). Even though this is more difficult, I think it makes sense to protect<br>
users from having to worry about torn reads/writes. The implementation itself should<br>
be trivial on all 64-bit platforms; only 32-bit platforms will require some<br>
tricks. This post [1] suggests that it can be done.</p>
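<p>The guarantee discussed above can be illustrated with a small sketch: readers of an unsynchronized shared variable should only ever observe values that some writer actually wrote, never a torn Float. On MRI this already holds thanks to the GIL; the proposal is to guarantee it on every implementation:</p>

```ruby
# Sketch of the "no torn reads/writes" guarantee discussed above.
# Writers repeatedly assign whole Float objects to a shared variable;
# readers snapshot it. Every observed value must be one of the written
# values -- never a half-written bit pattern. MRI's GIL makes this hold
# today; the memory model would require it of all implementations.
written  = [1.5, 2.5, 3.5, 4.5]
$shared  = written.first
observed = Queue.new

writers = 2.times.map do
  Thread.new { 1_000.times { $shared = written.sample } }
end
readers = 2.times.map do
  Thread.new { 1_000.times { observed << $shared } }
end
(writers + readers).each(&:join)

seen = []
seen << observed.pop until observed.empty?
torn = seen.reject { |v| written.include?(v) }
```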
<p>(Strict rules) The document tries to strike a balance between restricting<br>
optimisations and creating ugly surprises for users. I expect there will<br>
be more discussion about the rules:</p>
<ul>
<li>How to implement it on all Ruby implementations?</li>
<li>Will it prevent any optimisations?</li>
<li>Will it expose unexpected behaviour to users?</li>
</ul>
<p>The document is really just a first draft, and everything is open for discussion<br>
and improvement, both of which I hope for. It was prepared so as not to limit any of the<br>
Ruby implementations, but problems can be missed; if it turns out a rule is too<br>
strict, it can be relaxed.</p>
<p>(MRI with GIL) Yes, MRI already provides all of the specified guarantees, thanks<br>
to the GIL. It's even stronger. On the other hand, if I understand correctly, MRI is<br>
looking for ways to remove the GIL, and the fact that the GIL provides stronger,<br>
undocumented guarantees makes this difficult. Users rely on them (intentionally<br>
or unintentionally) even though they shouldn't. Having a document describing<br>
what is guaranteed and what is not may make the transition to a GIL-less MRI<br>
easier in the future.</p>
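<p>An example of the kind of (often unintentional) reliance on the GIL mentioned above: pushing to a shared Array from several threads happens to work on MRI, but nothing documents that it must. The explicitly synchronized version is the portable form; the sketch below contrasts the two (illustrative only):</p>

```ruby
# Relying on the GIL: this happens to produce 1000 elements on MRI,
# but no document guarantees Array#<< is thread-safe, and a GIL-less
# implementation could lose updates here.
unsafe = []
10.times.map { |i|
  Thread.new { 100.times { unsafe << i } }
}.each(&:join)

# Portable version: the synchronization is explicit, so correctness does
# not depend on implementation-specific (GIL) guarantees.
safe = []
lock = Mutex.new
10.times.map { |i|
  Thread.new { 100.times { lock.synchronize { safe << i } } }
}.each(&:join)
```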
<a name="In-Conclusion"></a>
<h3 >In Conclusion<a href="#In-Conclusion" class="wiki-anchor">¶</a></h3>
<p>I hope that maybe I've changed your mind a little bit about the B scenario and<br>
this proposal, and that we can further discuss the issues this model could bring for<br>
MRI. I would like to help solve them, or avoid them by relaxing the rules.</p>
<p>I believe that if this model (or its evolved successor) is accepted by all Ruby<br>
implementations over time, it will help the Ruby language a lot in being prepared<br>
for concurrency and parallelism, which is nowadays non-optional.</p>
<p>[1] <a href="http://shipilev.net/blog/2014/all-accesses-are-atomic/" class="external">http://shipilev.net/blog/2014/all-accesses-are-atomic/</a></p> Ruby master - Feature #12020: Documenting Ruby memory modelhttps://bugs.ruby-lang.org/issues/12020?journal_id=584012016-04-30T19:30:35Zko1 (Koichi Sasada)
<ul></ul><p>Sorry for the late response.</p>
<p>Petr Chalupa wrote:</p>
<blockquote>
<p>Let me start by elaborating more on the motivation behind all of the related<br>
proposals, since I did not really explain it in detail when I was opening<br>
them. I apologise for not doing that sooner.</p>
</blockquote>
<p>No problem. Thank you for your explanation.</p>
<blockquote>
<a name="Motivation"></a>
<h3 >Motivation<a href="#Motivation" class="wiki-anchor">¶</a></h3>
<p>I would like to clear up a possible misunderstanding about the target users of<br>
this document and this memory model. It's not intended to be directly used by the<br>
majority of Ruby programmers. (Even though the document aims to be<br>
understandable, it will still be a difficult topic.) It's intended to be used by<br>
concurrency enthusiasts, giving them tools to build many different concurrency<br>
abstractions as gems.</p>
</blockquote>
<p>I (may) understand what you want to say.<br>
As you wrote:</p>
<blockquote>
<p>Even though the document aims to be understandable, it will still be a difficult topic</p>
</blockquote>
<p>I agree with that, and I believe most of us can't understand and guarantee all of the specification.<br>
At least I don't believe I can implement it.<br>
(Of course, that is because of my low skill. Somebody should be able to implement it.)</p>
<blockquote>
<p>At this point Ruby is a general-purpose language, with direct support for<br>
threads and shared memory. As announced in a few presentations, there are<br>
plans to add a new easy-to-use abstraction to Ruby in some future release, and<br>
maybe to deprecate Threads. Let's call this scenario A. (Block-quotes are used for<br>
better logical structure.)</p>
<blockquote>
<p>(A) I understand the need to add such an abstraction (actors, channels, other?)<br>
to Ruby to enable Ruby users to build concurrent applications with ease. For<br>
future reference, let's call that one future abstraction Red. Red would then<br>
have well documented and defined behaviour in concurrent and parallel execution<br>
(this is what I think you are referring to). However, providing just one<br>
abstraction in the standard library (and deprecating Threads) would hurt the usability<br>
of the Ruby language.</p>
</blockquote>
<blockquote>
<p>The problem lies in the fact that there is no single concurrency abstraction which<br>
fits all problems. Therefore providing just Red would leave the Ruby language<br>
suitable for just some subset of problems.</p>
</blockquote>
<blockquote>
<p>Assuming: only Red would be documented and would provide high-level<br>
guarantees; threads would be deprecated; low-level concurrency would not be<br>
documented and guaranteed. Developers who would like to contribute a new<br>
abstraction to solve another group of problems would be left with the following (I<br>
think not very good) choices:</p>
</blockquote>
</blockquote>
<p>I agree the flexibility should be decreased.</p>
<blockquote>
<blockquote>
<blockquote>
<p>(1) Implement the abstraction in the underlying language used for the<br>
particular Ruby implementation (in C for MRI, in Java for JRuby(+Truffle)),<br>
using the guarantees provided by that language. This means the author of the<br>
new abstraction has to understand three programming languages (C, Ruby, Java) and three<br>
implementations, and develop the implementation three times. That would discourage<br>
people and also make the whole process error prone and difficult.</p>
</blockquote>
</blockquote>
<blockquote>
<blockquote>
<p>(2) Implement the abstraction using Red. This approach gives users the<br>
desired abstraction (avoiding different languages and<br>
implementation details), but it will probably have bad performance, since Red<br>
is not suited to solving this kind of problem. For example, implementing a ConcurrentHashMap<br>
(allowing parallel reads and writes) with actors would perform badly.<br>
(Admittedly this is a somewhat extreme example, but it demonstrates the problem,<br>
and I could not think of a better one.)</p>
</blockquote>
</blockquote>
<p>The above is, to the best of my knowledge, where Ruby is heading in the future; please<br>
correct me if I misunderstood and/or misrepresented it in any way.</p>
<p>To avoid the difficulties outlined above, Ruby could take a different path,<br>
which is related to these proposals (or their evolved successors).</p>
</blockquote>
<p>I understand your concerns. I agree there are such disadvantages.</p>
<p>However, I believe the productivity gained by avoiding shared-everything will help programmers.</p>
<p>For (1), I agree there are such difficulties.<br>
I don't have any comment on it.<br>
Yes, they exist.</p>
<p>For (2), you mentioned performance.<br>
However, I believe Ruby should contribute to programmers' happiness.<br>
I believe performance is not what matters most.</p>
<p>This may seem strange, because parallelism is for performance.<br>
I assume such drawbacks can be overcome with (a) design patterns and (b) parallelism (# of cores).</p>
<p>I would also raise a third problem, (3):<br>
we need more time to discuss introducing a new abstraction.<br>
(The biggest problem is that I couldn't propose a Red specification.)<br>
There is also a learning cost, and we need to invent efficient patterns for using Red.</p>
<p>The thread model is well known (I don't say the thread model is easy to use :p).<br>
This is clearly an advantage of the thread model.</p>
<p>I agree there are many issues (1 to 3, and more).<br>
But I believe productivity through simplicity is most important (for me, a Ruby programmer).</p>
<blockquote>
<blockquote>
<p>(B) Ruby would stay a general-purpose language with direct thread support and<br>
shared memory, with a documented memory model. The low-level documentation would<br>
allow people (who are interested) to write different concurrency abstractions<br>
efficiently. One of them would become the standard and preferred way to<br>
deal with concurrency in Ruby. Let's call it Blue. The Blue abstraction would<br>
(like Red) be part of the standard library. Like Red, it would have well<br>
documented and defined behaviour in concurrent and parallel execution, but in<br>
this case based on the lower-level model. The documentation would be directed<br>
at all Ruby users and made as easy to understand as possible.</p>
</blockquote>
<blockquote>
<p>The majority of Ruby users would use Blue as the go-to abstraction, just as they would<br>
use Red in scenario A. The key difference is that there is a low-level<br>
model to support the creation of new abstractions. Therefore, when Blue cannot<br>
solve a particular issue, a Ruby user can pick a different concurrency<br>
abstraction created by somebody else and provided as a gem, or create a new one.</p>
</blockquote>
<p>I believe this would make the Ruby language more flexible.</p>
</blockquote>
<p>I agree it is flexible.<br>
However, it will be error prone if a shared-everything model is allowed.</p>
<blockquote>
<a name="Difficulty-of-understanding"></a>
<h3 >Difficulty of understanding<a href="#Difficulty-of-understanding" class="wiki-anchor">¶</a></h3>
<p>This is something I believe can be improved over time. Also, as mentioned above,<br>
it's not intended to be used by everyone. Could you point me to parts which are<br>
not understandable, or which lack explanation? I would like to improve them, to make<br>
the document more comprehensible.</p>
<p>The document is intentionally not as detailed and formal as JSR-133, to keep<br>
it understandable. The price, as you say (and I agree), is details and omissions<br>
which may be left unspecified. I believe the high-level documentation for<br>
Red will unfortunately suffer from the same problem of evil details.</p>
<p>If the memory model is reviewed by many people and given some time to mature, I<br>
believe it will cover the majority of situations; omitted corner cases can be<br>
fixed later. I think the current situation, where each<br>
implementation has different rules, is much worse, and any document will improve the situation<br>
greatly.</p>
</blockquote>
<p>To point anything out, I need to read the document more carefully and try to implement it with parallel threads.<br>
(The evils will be in the implementation details.)</p>
<blockquote>
<a name="Difficulty-of-implementation"></a>
<h3 >Difficulty of implementation<a href="#Difficulty-of-implementation" class="wiki-anchor">¶</a></h3>
<p>(Various architectures) I am not a C programmer, so I am not that well informed,<br>
but I believe that in C this is solved by the C11 standard and, before that, by<br>
various libraries. Can MRI use C11 in the future, when it drops the GIL?</p>
</blockquote>
<p>Not sure, sorry.<br>
At the CPU-architecture level, there are several overheads associated with strong memory consistency.</p>
<blockquote>
<p>(Atomic Float) I agree that it is more difficult when Floats are required to be<br>
atomic, but if they were not, it would be quite surprising to Ruby users that<br>
a simple reference assignment of a Float object (as it's represented in Ruby)<br>
is not atomic. Therefore this is chosen to be atomic purely so as not to surprise<br>
users, and to avoid having to educate users about torn reads/writes. The same applies to a<br>
Fixnum which is bigger than an int but fits into a long (using Java primitive names<br>
here). Even though this is more difficult, I think it makes sense to protect<br>
users from having to worry about torn reads/writes. The implementation itself should<br>
be trivial on all 64-bit platforms; only 32-bit platforms will require some<br>
tricks. This post [1] suggests that it can be done.</p>
</blockquote>
<p>I assume that there are pros and cons regarding performance.</p>
<p>Shared-everything model (thread model):</p>
<ul>
<li>Pros: we can share everything easily.</li>
<li>Cons: it requires fine-grained consistency control for some data structures to guarantee the memory model.</li>
</ul>
<p>Shared-nothing model (Red):</p>
<ul>
<li>Pros: no need to care about fine-grained memory consistency.</li>
<li>Cons: we can't implement shared data structures in Ruby (sometimes this is a performance overhead).</li>
</ul>
<blockquote>
<p>(Strict rules) The document tries to strike a balance between restricting<br>
optimisations and creating ugly surprises for users. I expect there will<br>
be more discussion about the rules:</p>
<ul>
<li>How to implement it on all Ruby implementations?</li>
<li>Will it prevent any optimisations?</li>
<li>Will it expose unexpected behaviour to users?</li>
</ul>
<p>The document is really just a first draft, and everything is open for discussion<br>
and improvement, both of which I hope for. It was prepared so as not to limit any of the<br>
Ruby implementations, but problems can be missed; if it turns out a rule is too<br>
strict, it can be relaxed.</p>
<p>(MRI with GIL) Yes, MRI already provides all of the specified guarantees, thanks<br>
to the GIL. It's even stronger. On the other hand, if I understand correctly, MRI is<br>
looking for ways to remove the GIL, and the fact that the GIL provides stronger,<br>
undocumented guarantees makes this difficult. Users rely on them (intentionally<br>
or unintentionally) even though they shouldn't. Having a document describing<br>
what is guaranteed and what is not may make the transition to a GIL-less MRI<br>
easier in the future.</p>
</blockquote>
<p>I agree Ruby programmers can rely on the GIL's guarantees (and that this is not good for other implementations).</p>
<p>BTW, such a strong GIL guarantee helps protect people from some kinds of thread-safety bugs.<br>
("Helps" means it decreases the rate at which bugs appear. As you wrote, it is also a "bad" thing.)</p>
<blockquote>
<a name="In-Conclusion"></a>
<h3 >In Conclusion<a href="#In-Conclusion" class="wiki-anchor">¶</a></h3>
<p>I hope that maybe I've changed your mind a little bit about the B scenario and<br>
this proposal, and that we can further discuss the issues this model could bring for<br>
MRI. I would like to help solve them, or avoid them by relaxing the rules.</p>
<p>I believe that if this model (or its evolved successor) is accepted by all Ruby<br>
implementations over time, it will help the Ruby language a lot in being prepared<br>
for concurrency and parallelism, which is nowadays non-optional.</p>
<p>[1] <a href="http://shipilev.net/blog/2014/all-accesses-are-atomic/" class="external">http://shipilev.net/blog/2014/all-accesses-are-atomic/</a></p>
</blockquote>
<p>I haven't changed my mind.<br>
I believe simplicity is more important than flexibility.</p>
<p>However, your comments clear up many things.<br>
I accept that many people will agree with you.</p>
<p>Again, my comments are only my own thoughts.<br>
I am not against the B scenario for other implementations, or for MRI if someone contributes it.</p>
<p>Actually, Matz has sometimes said he wants to go with the B scenario.<br>
He proposed actors on top of threads (people would need to take care when modifying objects shared between actors (threads)).<br>
This is the same approach as Celluloid.<br>
But I'm against it :p (and Matz said he agreed with me when I asked; I'm not sure of his current thinking).</p> Ruby master - Feature #12020: Documenting Ruby memory model https://bugs.ruby-lang.org/issues/12020?journal_id=58750 2016-05-19T21:02:05Z pitr.ch (Petr Chalupa)
<ul></ul><p>Koichi Sasada wrote:</p>
<blockquote>
<p>Sorry for the late response.</p>
<p>Petr Chalupa wrote:</p>
<blockquote>
<p>Let me start by elaborating more on the motivation behind all of the related<br>
proposals, since I did not really explain it in detail when I was opening<br>
them. I apologise for not doing that sooner.</p>
</blockquote>
<p>No problem. Thank you for your explanation.</p>
<blockquote>
<a name="Motivation"></a>
<h3 >Motivation<a href="#Motivation" class="wiki-anchor">¶</a></h3>
<p>I would like to clear up a possible misunderstanding about the target users of<br>
this document and this memory model. It's not intended to be directly used by the<br>
majority of Ruby programmers. (Even though the document aims to be<br>
understandable, it will still be a difficult topic.) It's intended to be used by<br>
concurrency enthusiasts, giving them tools to build many different concurrency<br>
abstractions as gems.</p>
</blockquote>
<p>I (may) understand what you want to say.<br>
As you wrote:</p>
<blockquote>
<p>Even though the document aims to be understandable, it will still be a difficult topic</p>
</blockquote>
<p>I agree with that, and I believe most of us can't understand and guarantee all of the specification.<br>
At least I don't believe I can implement it.<br>
(Of course, that is because of my low skill. Somebody should be able to implement it.)</p>
</blockquote>
<p>Luckily there are other languages with their own memory models, where these<br>
guarantees are already provided. Their working solutions can be reused and<br>
applied in MRI.</p>
<blockquote>
<blockquote>
<p>At this point Ruby is a general-purpose language, with direct support for<br>
threads and shared memory. As announced in a few presentations, there are<br>
plans to add a new easy-to-use abstraction to Ruby in some future release, and<br>
maybe to deprecate Threads. Let's call this scenario A. (Block-quotes are used for<br>
better logical structure.)</p>
<blockquote>
<p>(A) I understand the need to add such an abstraction (actors, channels, other?)<br>
to Ruby to enable Ruby users to build concurrent applications with ease. For<br>
future reference, let's call that one future abstraction Red. Red would then<br>
have well documented and defined behaviour in concurrent and parallel execution<br>
(this is what I think you are referring to). However, providing just one<br>
abstraction in the standard library (and deprecating Threads) would hurt the usability<br>
of the Ruby language.</p>
</blockquote>
<blockquote>
<p>The problem lies in the fact that there is no single concurrency abstraction which<br>
fits all problems. Therefore providing just Red would leave the Ruby language<br>
suitable for just some subset of problems.</p>
</blockquote>
<blockquote>
<p>Assuming: only Red would be documented and would provide high-level<br>
guarantees; threads would be deprecated; low-level concurrency would not be<br>
documented and guaranteed. Developers who would like to contribute a new<br>
abstraction to solve another group of problems would be left with the following (I<br>
think not very good) choices:</p>
</blockquote>
</blockquote>
<p>I agree the flexibility should be decreased.</p>
</blockquote>
<p>Just to make sure we understand each other: I agree that the flexibility should<br>
be decreased for users, so they can write their concurrent code with ease. I<br>
believe we disagree on the level at which this should be achieved, though. I am<br>
advocating for the library level; you, I think, for the language level.</p>
<blockquote>
<blockquote>
<blockquote>
<blockquote>
<p>(1) Implement the abstraction in the underlying language used for the<br>
particular Ruby implementation (in C for MRI, in Java for JRuby(+Truffle)),<br>
using the guarantees provided by that language. This means the author of the<br>
new abstraction has to understand three programming languages (C, Ruby, Java) and three<br>
implementations, and develop the implementation three times. That would discourage<br>
people and also make the whole process error prone and difficult.</p>
</blockquote>
</blockquote>
<blockquote>
<blockquote>
<p>(2) Implement the abstraction using Red. This approach gives users the<br>
desired abstraction (avoiding different languages and<br>
implementation details), but it will probably have bad performance, since Red<br>
is not suited to solving this kind of problem. For example, implementing a ConcurrentHashMap<br>
(allowing parallel reads and writes) with actors would perform badly.<br>
(Admittedly this is a somewhat extreme example, but it demonstrates the problem,<br>
and I could not think of a better one.)</p>
</blockquote>
</blockquote>
<p>The above is, to the best of my knowledge, where Ruby is heading in the future; please<br>
correct me if I misunderstood and/or misrepresented it in any way.</p>
<p>To avoid the difficulties outlined above, Ruby could take a different path,<br>
which is related to these proposals (or their evolved successors).</p>
</blockquote>
<p>I understand your concerns. I agree there are such disadvantages.</p>
<p>However, I believe the productivity gained by avoiding shared-everything will help programmers.</p>
<p>For (1), I agree there are such difficulties.<br>
I don't have any comment on it.<br>
Yes, they exist.</p>
<p>For (2), you mentioned performance.<br>
However, I believe Ruby should contribute to programmers' happiness.<br>
I believe performance is not what matters most.</p>
<p>This may seem strange, because parallelism is for performance.<br>
I assume such drawbacks can be overcome with (a) design patterns and (b) parallelism (# of cores).</p>
</blockquote>
<p>I have been using Ruby for 10 years (thank you!) and I see and understand the big<br>
benefit of Ruby caring about programmer happiness. I care about it very much<br>
too, and I try to avoid any suggestions which would lead to sacrificing it. I<br>
think that so far all of the proposals have been shaped by both user happiness and<br>
performance. (For example: in the discussion about volatile constants in this<br>
issue, the current rules are harder to implement but better for users.) If that's not<br>
true, I would like to fix it.</p>
<p>Regarding (2): users may sacrifice some performance, but in this case it might<br>
perform quite badly. A few examples for consideration follow.</p>
<p>(Clojure agents) Implementing agents using actors: since an agent has to be<br>
able to report its value at any time, it would need to be modelled using at<br>
least two actors: one to hold and report the value, a second to process the updates.</p>
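<p>The two-actor structure described above can be sketched with plain threads and queues standing in for actors (all names and the message protocol are invented for illustration): one "value" actor that can always report the current value, and an "updater" actor that applies update functions. With shared memory this would be a single atomic or mutex-protected reference; the second actor exists only to keep the value readable while updates run:</p>

```ruby
# Illustrative two-actor agent: threads with mailboxes stand in for actors.
value_box  = Queue.new   # mailbox of the value-holding actor
update_box = Queue.new   # mailbox of the updating actor

# Holds the current value; can always answer :read messages.
value_actor = Thread.new do
  current = 0
  loop do
    op, arg = value_box.pop
    case op
    when :set  then current = arg
    when :read then arg << current
    when :stop then break
    end
  end
end

# Applies update functions: read the current value, then set the new one.
updater = Thread.new do
  loop do
    fn = update_box.pop
    break if fn == :stop
    reply = Queue.new
    value_box << [:read, reply]
    value_box << [:set, fn.call(reply.pop)]
  end
end

5.times { update_box << ->(v) { v + 1 } }
update_box << :stop
updater.join

reply = Queue.new
value_box << [:read, reply]
result = reply.pop
value_box << [:stop, nil]
value_actor.join
```

<p>Note how much machinery two mailboxes and reply queues add compared to a single synchronized reference.</p>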
<p>(Go channels) Implementing a Go sized channel using actors: the channel is<br>
blocking. The channel is represented by one or two actors; one is simpler but<br>
has higher contention, while using two actors avoids some contention between the head<br>
and tail of the channel. To simulate blocking, actors which are sending<br>
messages to the channel do not continue with other message processing until<br>
they receive confirmation from the channel that they can continue, i.e. that they<br>
are not blocked. Actors waiting on messages from the channel have to send a<br>
request to the channel saying that they want to receive a message, and then<br>
not process other messages until they receive one from the channel.</p>
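<p>For contrast, the conventional shared-memory implementation that the actor emulation above competes with is tiny: Ruby's stdlib <code>SizedQueue</code> already behaves like a Go-style sized channel, blocking the producer when the buffer is full and the consumer when it is empty:</p>

```ruby
# A Go-style sized (buffered) channel via shared memory: SizedQueue from
# Ruby's standard library. push blocks when 2 items are buffered; pop
# blocks when the queue is empty. No confirmation protocol is needed.
channel = SizedQueue.new(2)

producer = Thread.new do
  5.times { |i| channel << i }  # blocks whenever the buffer is full
  channel << :done
end

received = []
consumer = Thread.new do
  while (item = channel.pop) != :done
    received << item
  end
end

producer.join
consumer.join
```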
<p>My intuition is that the slowdown will be 2x or <em>more</em> (I'll do some tests).<br>
The outlined implementations are much more complex than the conventional<br>
implementation using shared memory.</p>
<p>This also touches on another issue: for some problems, a single abstraction will<br>
inevitably lead to awkward usage patterns and unnecessary complexity for<br>
users, where the Red abstraction does not provide any natural way of<br>
solving the problem.</p>
<p>(Actor future state) Staying with the hypothetical actors-as-Red example for one<br>
more paragraph to support the previous claim. Suppose there is an application<br>
with some state and some background processing. Actors naturally support state, and events<br>
generated based on state changes. However, they are not the best<br>
choice for modelling background processing. An actor doing a background job isn't<br>
responsive to any messages during the execution, so the first step is<br>
always to break up the state actor and its background processing into two actors. Then<br>
the actor responsible for background processing is just a wrapper around an<br>
asynchronously executed function without any state, which might be better<br>
modelled by just a block executed on a thread pool for IO tasks, or by a Future<br>
object. Another issue arises if one general actor is used to process all<br>
the background jobs (which looks like a good idea at first glance): the actor<br>
becomes a bottleneck, able to execute just one background job at a time (tasks<br>
with blocking IO can also easily deadlock it). An easy fix is to introduce a pool<br>
of actors to process background jobs, but then again they will be slower than a<br>
shared-memory thread-pool implementation.</p>
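<p>The alternative mentioned above, a background job as "just a block executed on a thread pool, or a Future object", is very small to express with shared memory. A minimal, illustrative Future might look like the sketch below (the <code>SimpleFuture</code> name is invented; concurrent-ruby provides a production-quality version):</p>

```ruby
# Minimal shared-memory Future, illustrative only. The caller stays
# responsive (unlike an actor busy with a job) because the work runs on
# its own thread; #value blocks only when the result is actually needed.
class SimpleFuture
  def initialize(&block)
    @queue = Queue.new
    Thread.new do
      result =
        begin
          [:ok, block.call]
        rescue => e
          [:error, e]
        end
      @queue << result
    end
  end

  # Blocks until the job finishes; re-raises the job's exception, if any.
  def value
    @result ||= @queue.pop
    kind, payload = @result
    raise payload if kind == :error
    payload
  end
end

f = SimpleFuture.new { 6 * 7 }
answer = f.value
```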
<p>Of course, all of these examples were for actors, not for Red. They are not directly<br>
applicable, but they show what kind of problems can be (I think unavoidably)<br>
anticipated for Red.</p>
<p>It's not always possible to just throw more cores at a problem; the algorithm<br>
has to support such scaling.</p>
<p>Going back to user happiness, scenario A sacrifices the happiness of some users:</p>
<p>(group1) Concurrency library implementers, because of (1) and (2). This is<br>
probably not the biggest group of users, but I think it's an important group, since<br>
their work will be used by <em>many</em> users.</p>
<p>(group2) The second group is larger: users who would like to use Ruby to<br>
solve a problem where Red would be of limited help. These users will look<br>
for alternative solutions and will be disappointed that the choice is<br>
severely limited, because group1 will not write new abstractions (admittedly<br>
this is just my projection).</p>
<p>Therefore scenario A does not have an exclusively positive impact on user happiness;<br>
it benefits mainly those users whose problems fit well with Red (probably a bigger<br>
group than group1 and group2 combined).</p>
<p>Since Ruby is nowadays mostly used in long-running processes, not in scripts,<br>
performance becomes more important. In my observation, performance is the most<br>
common reason why people leave Ruby: not because they are unhappy with the<br>
language, but because they pay too much for the servers that run their<br>
applications.</p>
<blockquote>
<p>I would also raise a third problem, (3):<br>
we need more time to discuss introducing a new abstraction.<br>
(The biggest problem is that I couldn't propose a Red specification.)<br>
There is also a learning cost, and we need to invent efficient patterns for using Red.</p>
<p>The thread model is well known (I don't say the thread model is easy to use :p).<br>
This is clearly an advantage of the thread model.</p>
<p>I agree there are many issues (1 to 3, and more).<br>
But I believe productivity through simplicity is most important (for me, a Ruby programmer).</p>
</blockquote>
<p>It looks like, for the purpose of this discussion, I should know more about what<br>
is being considered for Red. Later you mention sharing nothing; how would that<br>
work for classes, constants, method definitions, etc.? How would the isolated<br>
parts communicate with each other: by deep-freezing or by copying the messages? Are<br>
there any sources, like talks or issues, that I could read?</p>
<p>When I first heard that Red was planned, I was thinking of deep-freezing<br>
or deep-cloning to ensure that messages cannot lead to shared-memory issues, with Red<br>
being actors, channels, etc., and isolation achieved only by convention and user<br>
education.</p>
<p>Yeah, (3) will take time; that's a common problem for both the A and B scenarios. B<br>
might be in a better situation, though, because more people can get involved,<br>
writing more abstractions until a winning one is picked and becomes part of the<br>
Ruby standard library.</p>
<blockquote>
<blockquote>
<blockquote>
<p>(B) Ruby would stay a general-purpose language with direct thread support and<br>
shared memory, with a documented memory model. The low-level documentation would<br>
allow people (who are interested) to write different concurrency abstractions<br>
efficiently. One of them would become the standard and preferred way to<br>
deal with concurrency in Ruby. Let's call it Blue. The Blue abstraction would<br>
(like Red) be part of the standard library. Like Red, it would have well<br>
documented and defined behaviour in concurrent and parallel execution, but in<br>
this case based on the lower-level model. The documentation would be directed<br>
at all Ruby users and made as easy to understand as possible.</p>
</blockquote>
<blockquote>
<p>The majority of Ruby users would use Blue as the go-to abstraction, just as they would<br>
use Red in scenario A. The key difference is that there is a low-level<br>
model to support the creation of new abstractions. Therefore, when Blue cannot<br>
solve a particular issue, a Ruby user can pick a different concurrency<br>
abstraction created by somebody else and provided as a gem, or create a new one.</p>
</blockquote>
<p>I believe this would make the Ruby language more flexible.</p>
</blockquote>
<p>I agree it is flexible.<br>
However, it will be error prone if a shared-everything model is allowed.</p>
</blockquote>
<p>Currently Ruby has shared memory; how would that be taken away? It would be a<br>
<em>huge</em> incompatible change, I believe.</p>
<p>Yeah, it is difficult to use, but I would like to stress that this is only for<br>
group1 (mentioned above). Most users would not have to deal with it,<br>
since they'll use just one of the available abstractions (Blue being the most<br>
common one, and the one Ruby advises).</p>
<blockquote>
<blockquote>
<a name="Difficulty-of-understanding"></a>
<h3 >Difficulty of understanding<a href="#Difficulty-of-understanding" class="wiki-anchor">¶</a></h3>
<p>This is something I believe can be improved over time. Also, as mentioned<br>
above, it is not intended to be used by everyone. Could you point me to the<br>
parts which are not understandable or lack explanation? I would like to improve<br>
them, to make the document more comprehensible.</p>
<p>The document is intentionally not as detailed and formal as JSR-133, to keep<br>
it understandable. The price, as you say and I agree, is in the details and<br>
omissions which may be left unspecified. I believe the high-level documentation<br>
for Red will unfortunately suffer from the same problem of evil details.</p>
<p>If the memory model is reviewed by many people and given some time to mature,<br>
I believe it will cover the majority of situations; omitted corner cases can be<br>
fixed later. I think the current situation, where each implementation has<br>
different rules, is much worse, and any document will improve the situation<br>
greatly.</p>
</blockquote>
<p>To point them out, I need to read more carefully and try to implement it with parallel threads.<br>
(the evils will be in the implementation details)</p>
</blockquote>
<p>Thanks a lot, I really appreciate that you are looking at it in more detail and<br>
that you are willing to discuss this in length.</p>
<blockquote>
<blockquote>
<a name="Difficulty-of-implementation"></a>
<h3 >Difficulty of implementation<a href="#Difficulty-of-implementation" class="wiki-anchor">¶</a></h3>
<p>(Various architectures) I am not a C programmer, so I am not that well<br>
informed, but I believe that in C this is solved by the C11 standard, and<br>
before that by various libraries. Can MRI use C11 in the future when it drops the GIL?</p>
</blockquote>
<p>Not sure, sorry.<br>
From the CPU-architecture side, there are several overheads for strong memory consistency.</p>
</blockquote>
<p>Yes, there are, but comparing GIL vs. no GIL, running on all cores with some<br>
slight overhead is advantageous.</p>
<blockquote>
<blockquote>
<p>(Atomic Float) I agree that it is more difficult when Floats are required to<br>
be atomic, but if they were not, it would be quite surprising to Ruby users<br>
that a simple reference assignment of a Float object (as it is represented in<br>
Ruby) is not atomic. Therefore this is chosen to be atomic purely not to<br>
surprise users and to avoid having to educate users about torn reads/writes.<br>
The same applies to a Fixnum which is bigger than an int but fits into a long<br>
(using Java primitive names here). Even though this is more difficult, I think<br>
it makes sense to protect users from worrying about torn reads/writes. The<br>
implementation itself should be trivial on all 64-bit platforms; only 32-bit<br>
platforms will require some tricks. This [1] post suggests that it can be done.</p>
</blockquote>
</blockquote>
<p>Atomic Float and C11: as far as I know, if it is declared as an atomic float<br>
but the load and store operations are done with memory_order_relaxed, then it<br>
keeps the atomicity property without any ordering constraints, and therefore<br>
without performance overhead (there might be exceptions, but even 32-bit<br>
platforms can use some tricks, like SSE instructions, to make the float atomic<br>
without overhead).</p>
<blockquote>
<p>I assume that there are pros and cons about performance.</p>
<p>Shared everything model (thread-model)</p>
<ul>
<li>Pros: we can share everything easily.</li>
<li>Cons: requires fine-grained consistency control for some data structures to guarantee the memory model.</li>
</ul>
<p>Shared nothing model (Red):</p>
<ul>
<li>Pros: no need to care about fine-grained memory consistency.</li>
<li>Cons: we can't implement shared data structures in Ruby (sometimes this can be a performance overhead).</li>
</ul>
</blockquote>
<p>I am sorry, but I'm not sure how to interpret the comparison. It is important<br>
to distinguish where the pros and cons apply. In the thread-model case, the<br>
pros apply to users and the cons apply to Ruby implementers. In Red, the pros<br>
apply to implementers and the cons to Ruby users. Shared-everything comes out<br>
better in this comparison when emphasizing users.</p>
<p>Regarding the implementer's point of view, I appreciate the amount of work and<br>
complexity this will create. I am part of the JRuby+Truffle team, and we would<br>
have to comply with and deal with the RMM too. Still, I believe it is worth the<br>
effort.</p>
<blockquote>
<blockquote>
<p>(Strict rules) The document tries to strike a balance between restricting<br>
optimisations and creating ugly surprises for users. I am expecting there will<br>
be more discussion about the rules:</p>
<ul>
<li>How can it be implemented on all Ruby implementations?</li>
<li>Will it prevent any optimisations?</li>
<li>Will it expose unexpected behaviour to users?</li>
</ul>
<p>The document is really just a first draft, and everything is open for<br>
discussion and improvement, both of which I hope for. It was prepared so as not<br>
to limit any of the Ruby implementations, but problems can be missed; if it<br>
turns out a rule is too strict, it can be relaxed.</p>
<p>(MRI with GIL) Yes, MRI already provides all of the guarantees specified,<br>
thanks to the GIL. It is even stronger. On the other hand, if I understand<br>
correctly, MRI is looking for ways to remove the GIL, and the fact that the GIL<br>
provides stronger undocumented guarantees makes this difficult. Users rely on<br>
it (intentionally or unintentionally) even though they shouldn't. Having a<br>
document describing what is guaranteed and what is not may make a future<br>
transition to MRI without the GIL easier.</p>
</blockquote>
<p>I agree Ruby programmers can rely on GIL guarantees (and that it is not good for other implementations).</p>
</blockquote>
<p>Yes, alternative implementations may already suffer from the issue of users<br>
relying on the GIL. In practice it may not be that bad, though, at least in<br>
code which is meant to run concurrently or in parallel. These libraries tend to<br>
use slower Mutexes to stay safe, because instance variables do not have<br>
precisely defined behavior. (This is just my personal view; we should ask<br>
Charles and Tom how often this came up in their issues.)</p>
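<p>The defensive Mutex pattern mentioned above can be sketched as follows (the Counter class is illustrative, not from any particular library): every read and write of the shared instance variable goes through a lock, precisely because plain instance-variable visibility between threads is not specified today:</p>

```ruby
# A common defensive pattern in today's concurrent Ruby libraries:
# every access to shared instance state is wrapped in a Mutex, since
# the memory model does not (yet) define the visibility of plain
# instance-variable writes between threads.
class Counter
  def initialize
    @lock  = Mutex.new
    @value = 0
  end

  def increment
    @lock.synchronize { @value += 1 }
  end

  def value
    @lock.synchronize { @value }
  end
end

counter = Counter.new
threads = 8.times.map { Thread.new { 1000.times { counter.increment } } }
threads.each(&:join)
counter.value # => 8000
```

<p>On MRI the GIL would make the unlocked version appear to work too, which is exactly the kind of undocumented guarantee users end up relying on.</p>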
<blockquote>
<p>BTW, such a strong GIL guarantee helps protect people from some kinds of thread-safety bugs.<br>
("helps" meaning it decreases the bug appearance rate. As you wrote, it is also a "bad" thing.)</p>
<blockquote>
<a name="In-Conclusion"></a>
<h3 >In Conclusion<a href="#In-Conclusion" class="wiki-anchor">¶</a></h3>
<p>I hope that maybe I have changed your mind a little bit about the B scenario<br>
and this proposal, and that we can discuss further the issues this model could<br>
bring for MRI. I would like to help solve them, or avoid them by relaxing rules.</p>
<p>I believe that if this model (or its evolved successor) is accepted by all<br>
Ruby implementations over time, it will greatly help prepare the Ruby language<br>
for concurrency and parallelism, which is nowadays non-optional.</p>
<p>[1] <a href="http://shipilev.net/blog/2014/all-accesses-are-atomic/" class="external">http://shipilev.net/blog/2014/all-accesses-are-atomic/</a></p>
</blockquote>
<p>I have not changed my mind.<br>
I believe simplicity is more important than flexibility.</p>
</blockquote>
<p>I am of the same opinion that simplicity is important for users; however, I<br>
think we (the whole Ruby community, no matter the implementation) could have<br>
both simplicity and flexibility.</p>
<blockquote>
<p>However, your comments clear up many kinds of things.<br>
I agree that many people agree with you.</p>
<p>Again, my comment is only my own thoughts.<br>
I am not against the B scenario for other implementations, nor for MRI if someone contributes.</p>
</blockquote>
<p>To sum up regarding contribution: headius was so kind as to offer to work on<br>
the accompanying proposals, since he has experience with C, of which I have<br>
only a limited amount. I think the current form of the Ruby Memory Model fits<br>
MRI with the GIL, so no contribution should be needed.</p>
<p>I suppose you meant contributing work on removing the GIL and ensuring RMM<br>
compliance?</p>
<p>Benoit Daloze (eregon) and I will gladly help to find solutions if needed.</p>
<blockquote>
<p>Actually, Matz sometimes said he wants to go with the B scenario.<br>
He proposed Actors on top of threads (people should take care when modifying objects across actors (threads)).<br>
The same approach as Celluloid.<br>
But I'm against it :p (and Matz said he agreed with me when I asked; I'm not sure about his current idea)</p>
</blockquote>
<p>We could also scope the discussion down to just the most important parts of<br>
the RMM, which (I think) are local and instance variables; the rest of the<br>
related proposals are mostly related to them.</p>
<p>I've also posted a comment to <a href="https://bugs.ruby-lang.org/issues/12021" class="external">https://bugs.ruby-lang.org/issues/12021</a>, it<br>
provides an example of how the low-level model could be used to support a<br>
simple and nice high-level behavior of Proc.</p> Ruby master - Feature #12020: Documenting Ruby memory modelhttps://bugs.ruby-lang.org/issues/12020?journal_id=592132016-06-14T02:19:08Zheadius (Charles Nutter)headius@headius.com
<ul></ul><p>I have had a quick read through comments on this issue, and I have a few responses. Sorry for my late arrival...I had not realized there was this much discussion happening :-)</p>
<p>I think my position on this boils down to three things:</p>
<ol>
<li>Ruby currently has a shared-memory, thread-based concurrency system.</li>
<li>There's no memory model documented for that system.</li>
<li>There needs to be such a model.</li>
</ol>
<p>Anything outside these discussion points seems irrelevant to me. Yes, Ruby 3 may have some new concurrency model, some day. Not even matz knows what it will be. That <em>possible</em> future can't be used as a point against fixing specification gaps in the <em>current</em> model. And waiting until 2020 (most recent estimate for Ruby 3 that I've heard) to formally publish a memory model for Ruby seems unreasonable.</p>
<p>Ruby has needed this memory model formality since we started working on JRuby ten years ago. We had to unilaterally declare a memory model for JRuby in order to reconcile Ruby's lack of a memory model with Java's explicit and strict model.</p>
<p>We don't want to be the only Ruby 2.x implementation that has a memory model, especially if it may conflict with the realities of CRuby. Therefore we need to do something today.</p>
<p>As I understand it, the model described by this document does not force many (or any?) changes on CRuby. CRuby would be able to meet all or most of the requirements of the memory model by having the global lock, but other implementations without a lock would be able to run code in parallel without breaking user expectations.</p>
<p>Having an explicit memory model would actually <em>help</em> people avoid shared-memory problems. Right now, without a memory model, we can't safely build the sorts of concurrency primitives people need. If we can get some formality in Ruby <em>today</em> we can build Actors, Futures, Channels, concurrent lock-free collections, and more...using nothing but Ruby. That's what we want...to empower Ruby the language to solve the concurrency problems of today's Rubyists.</p>
<p>As an example...</p>
<p>I have done development on the JVM for the past 20 years, yet I have only had to write explicit threaded code since I started working on JRuby. Java users don't Thread.new...they spin up executors, wait on futures, message actors. All the utilities and patterns discussed here are supported by Java and the JDK's low-level, shared-memory threading primitives, backed by a strong memory model. If we had such a memory model for Ruby, we'd be able to provide the same features <em>efficiently</em> and make it <em>less likely</em> that people would be using Thread directly.</p>
<p>I think it's valuable to be able to implement Ruby's future concurrency models in Ruby, isn't it? That won't happen without a memory model.</p>
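<p>As a hedged illustration of what "concurrency models in Ruby" could look like, here is a minimal Future sketch (a hypothetical class, not an official or proposed API) built only from Mutex and ConditionVariable; a memory model is what would make the visibility of @value and @done between the two threads portable across implementations:</p>

```ruby
# A minimal Future built from Ruby's existing primitives. The Mutex
# and ConditionVariable provide the synchronization; a memory model
# is needed to guarantee that @value written in the worker thread is
# visible to the reader once @done is observed as true.
class SimpleFuture
  def initialize
    @lock = Mutex.new
    @cond = ConditionVariable.new
    @done = false
    Thread.new do
      result = yield               # run the user's block concurrently
      @lock.synchronize do
        @value = result
        @done  = true
        @cond.broadcast            # wake any waiting readers
      end
    end
  end

  def value
    @lock.synchronize do
      @cond.wait(@lock) until @done  # handles spurious wakeups
      @value
    end
  end
end

f = SimpleFuture.new { 21 * 2 }
f.value # => 42
```

<p>Libraries like concurrent-ruby implement far more robust versions of this; the point is that they can only be written portably and efficiently once the underlying visibility rules are specified.</p>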
<p>A few specific responses:</p>
<p>ko1:</p>
<blockquote>
<p>However it will be error prone if shared-everything model is allowed.</p>
</blockquote>
<p>Shared-everything is how Ruby works today. We want a solution for today's Ruby. (Petr said something similar above.) And to repeat my last point...shared-everything would be a whole lot easier for Ruby folks to deal with if they had the kinds of memory guarantees and data structures and concurrency APIs that Java folks take for granted. We're making things MUCH MUCH WORSE by not having a model in place.</p>
<blockquote>
<p>Shared everything model (thread-model)</p>
<ul>
<li>Pros. we can share everything easily.</li>
<li>Cons. requires fine-grain consistency control for some data structures to guarantee memory model.</li>
</ul>
</blockquote>
<p>I've already pointed out that this is what we have today, without any formality. But again, having a memory model means we could build those better abstractions <em>in Ruby</em> so people don't have to worry about fine-grained consistency themselves.</p>
<blockquote>
<p>I believe performance is not a matter.</p>
</blockquote>
<p>It is when it is. People leave CRuby (or Ruby the language) when it becomes important to use up all the cores or run straight-line code really fast. I think we want a language that makes programmers happy <em>both</em> when programming <em>and</em> when running that code, don't we? Programmer happiness is about more than just having a nice language...it's also about having that language's runtime meet your needs.</p>
<blockquote>
<p>Basically, (at least on MRI) I am against this proposal because it is too difficult to understand and to implement.</p>
</blockquote>
<p>We want to help you understand this proposal, and I want to help implement any changes that are needed! :-)</p>
<blockquote>
<p>I believe we should introduce memory consistency model on more higher-level abstraction, like Go language does.</p>
</blockquote>
<p>That sounds great, but until it happens we need to fix this gaping hole in Ruby's documentation/specification. We have threads today. We can't build better abstractions above threads without a memory model. But we <em>can</em> build better abstractions once a model is in place, making it <em>less</em> likely people will stumble over threads.</p>
<p>Eric Wong:</p>
<blockquote>
<p>To give us the most freedom in the future, I prefer we have as few guarantees as practical about consistency.</p>
</blockquote>
<p>This is probably the most dangerous way to go of all. Threads are here, they're exposed to Ruby, and they need a solution now. People already rely on Ruby's implicit memory model (whether they realize it or not), and they turn to JRuby's explicit memory model when they run on many cores. We're mostly asking that the implicit become explicit.</p>
<p>This doesn't limit any freedom in the future either...especially when people are talking about even more drastic solutions like removing Thread. Fixing Ruby 2.x threading and memory model does not mean Ruby 3 can't change.</p>
<blockquote>
<p>I strongly disagree with volatility in method and constant tables. Any programs defining methods/constants in parallel threads and expecting them to be up-to-date deserve all the problems they get.</p>
</blockquote>
<p>I agree to a point...and that point is autoload. With the presence of autoload in Ruby, there will <em>always</em> be concurrent modifications of classes happening. I have proposed previously that all requires should be using the SAME lock, so you can't have code running in one load that's affected by code running in another load. I have also proposed that reopening a class should not publish its changes until the class is closed again. Both solutions help the problem, neither was accepted.</p>
<p>Autoload is the problem here, not users loading code in parallel by themselves.</p>
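<p>A self-contained sketch of why autoload implies concurrent class modification (the file and class names are illustrative). The first thread to reference the constant triggers the require; without well-defined rules, another thread could in principle observe the class while it is still being defined:</p>

```ruby
# Autoload defers loading a class until its constant is first
# referenced. With threads, that first reference can happen on any
# thread, at any time -- so class definition is inherently concurrent.
require "tempfile"

lib = Tempfile.create(["lazy_widget", ".rb"])
lib.write(<<-RUBY)
  class LazyWidget
    def greeting
      "hello"
    end
  end
RUBY
lib.close

# Register the constant; the file is NOT loaded yet.
Object.autoload :LazyWidget, lib.path

# Four threads race to reference the constant; the winner triggers
# the require while the others may see the class mid-definition
# unless the implementation serializes loading.
threads = 4.times.map { Thread.new { LazyWidget.new.greeting } }
results = threads.map(&:value) # => ["hello", "hello", "hello", "hello"]
```

<p>MRI serializes requires internally, which is why this works in practice; the proposals mentioned above (one shared load lock, publishing a reopened class only when it closes) would make that behaviour a documented guarantee rather than an implementation accident.</p>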
<p>Eric also says:</p>
<blockquote>
<p>The inline, global (, and perhaps in the future: thread-specific) caches will all become expensive if we need to ensure read-after-write consistency by checking for changes on methods and constants made by other threads.</p>
</blockquote>
<p>JRuby does this now, and runs most code much faster than in CRuby. Cache coherency for VM-level structures does not mean you have to sacrifice performance.</p>
<p>...</p>
<p>Sorry my responses are out of order from the comments...and I apologize again for not getting involved earlier. I believe with all my heart we need to make this move for today's Ruby (and today's Rubyists), regardless of what tomorrow's Ruby may or may not do. Please help us!</p> Ruby master - Feature #12020: Documenting Ruby memory modelhttps://bugs.ruby-lang.org/issues/12020?journal_id=601762016-08-17T09:03:34Zpitr.ch (Petr Chalupa)
<ul></ul><p>I am going to RubyKaigi, I would be very interested to have a meeting face to face there to discuss this topic in depth. Koichi, would you be open to spend some time discussing it? Would anybody else be interested? Afaik Charles is not going, Tom Enebo is, I'll ask him if he would be interested, to bring JRuby's perspective to the discussion.</p> Ruby master - Feature #12020: Documenting Ruby memory modelhttps://bugs.ruby-lang.org/issues/12020?journal_id=601842016-08-17T23:55:55Zko1 (Koichi Sasada)
<ul></ul><blockquote>
<p>Koichi, would you be open to spend some time discussing it?</p>
</blockquote>
<p>Sure! Other than my presentation time, I'll be happy to discuss about it.</p>
<p>Thanks,<br>
Koichi</p> Ruby master - Feature #12020: Documenting Ruby memory modelhttps://bugs.ruby-lang.org/issues/12020?journal_id=606132016-09-23T10:23:59Zshyouhei (Shyouhei Urabe)shyouhei@ruby-lang.org
<ul></ul><p>Can someone summarize the off-line discussion mentioned earlier? I was not there.</p> Ruby master - Feature #12020: Documenting Ruby memory modelhttps://bugs.ruby-lang.org/issues/12020?journal_id=606152016-09-23T10:43:52Zpitr.ch (Petr Chalupa)
<ul></ul><p>Hi Shyouhei, sorry for not doing it so far; I took vacation to travel around Japan right after the conference. It is already on my todo list for when I get back.</p>
<ul></ul><p>Great. I'm looking forward to it.</p>
<ul></ul><p>As the previous comments mention, we had a meeting at RubyKaigi to discuss the<br>
memory model. There were about fifteen Ruby implementers sitting around the<br>
table, from MRI, JRuby, JRuby+Truffle, and OMR.</p>
<p>The first meeting started by exchanging a little bit of background information.<br>
We then freely discussed a few topics:</p>
<ul>
<li>why volatile variables are needed, with some examples</li>
<li>concurrency models, and why shared memory is sometimes convenient and needed</li>
<li>how volatile variables would be made available in Ruby (in every object vs.<br>
by a proxy)</li>
</ul>
<p>The topics were useful for exchanging background and understanding each other<br>
better; however, we did not reach agreement on any of them. We realised that we<br>
needed some real examples to discuss.</p>
<p>For the next meeting, Benoit had prepared code examples that better illustrate<br>
the problems faced by language implementers without a memory model in place. In<br>
particular, we focused on the problems that affect instance variables in shared<br>
Ruby objects. When operations like an instance variable update, a new instance<br>
variable definition, or an instance variable type-profile change are executed<br>
on an object concurrently and/or in parallel, the following must not happen:</p>
<ul>
<li>An update of an instance variable is lost<br>
(see <a href="https://docs.google.com/document/d/1pVzU8w_QF44YzUCCab990Q_WZOdhpKolCIHaiXG-sPw/edit#heading=h.eukpz1zhpmm2" class="external">https://docs.google.com/document/d/1pVzU8w_QF44YzUCCab990Q_WZOdhpKolCIHaiXG-sPw/edit#heading=h.eukpz1zhpmm2</a>)</li>
<li>An out-of-thin-air value is read</li>
</ul>
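<p>The first hazard can be made concrete with a short sketch (class and variable names are illustrative). Two threads define different instance variables on the same object in parallel; after both threads are joined, a conforming implementation must observe both writes:</p>

```ruby
# Two threads concurrently add *different* instance variables to the
# same object. Each write grows the object's instance-variable table,
# so a naive unsynchronized implementation could lose one of the
# definitions -- exactly the "lost update" the memory model forbids.
class Point; end

point = Point.new
t1 = Thread.new { point.instance_variable_set(:@x, 1) }
t2 = Thread.new { point.instance_variable_set(:@y, 2) }
[t1, t2].each(&:join)

# After the joins, neither definition may be lost:
point.instance_variable_get(:@x) # => 1
point.instance_variable_get(:@y) # => 2
```

<p>On MRI the GIL makes this trivially safe; the point of the minimal model is that implementations running such code in parallel must preserve the same outcome.</p>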
<p>This helped us reach agreement that <strong>we need a minimal memory model</strong> which<br>
would forbid exactly this type of surprising behaviour. Personally, I think<br>
this is great.</p>
<p>The same day we had a follow-up meeting with a reduced number of attendees:<br>
Koichi, Benoit, and me. We discussed in detail the currently proposed<br>
properties for the different types of variables. Koichi found them reasonable,<br>
but of course he will be evaluating them further and discussing the memory<br>
model with the other MRI developers.</p>
<p>The next steps are:</p>
<ul>
<li>Improve the memory model document to make it more understandable (e.g. by<br>
adding examples)</li>
<li>Continue the discussions around the properties of the variables to determine<br>
if the current proposal is fine or if it needs changes</li>
</ul> Ruby master - Feature #12020: Documenting Ruby memory modelhttps://bugs.ruby-lang.org/issues/12020?journal_id=702672018-02-08T09:41:39Zgooglefeud (google feud (spammer, locked))
<ul><li><strong>Subject</strong> changed from <i>Documenting Ruby memory model</i> to <i>Book my show</i></li></ul> Ruby master - Feature #12020: Documenting Ruby memory modelhttps://bugs.ruby-lang.org/issues/12020?journal_id=702712018-02-08T09:59:00Zusa (Usaku NAKAMURA)usa@garbagecollect.jp
<ul><li><strong>Subject</strong> changed from <i>Book my show</i> to <i>Documenting Ruby memory model</i></li></ul> Ruby master - Feature #12020: Documenting Ruby memory modelhttps://bugs.ruby-lang.org/issues/12020?journal_id=955342021-12-23T23:40:34Zhsbt (Hiroshi SHIBATA)hsbt@ruby-lang.org
<ul><li><strong>Project</strong> changed from <i>14</i> to <i>Ruby master</i></li></ul>