path: root/insns.def
Commit message | Author | Date | Files | Lines

* Lazily create singletons on instance_{exec,eval} (#5146) | John Hawthorn | 2021-12-02 | 1 | -1/+2

* Lazily create singletons on instance_{exec,eval}

Previously, when instance_exec or instance_eval was called on an object, that object would be given a singleton class so that method definitions inside the block would be added to the object rather than its class.

This commit aims to improve performance by delaying the creation of the singleton class unless/until one is needed for method definition. Most of the time instance_eval is used without any method definition.

This was implemented by adding a flag to the cref indicating that it represents a singleton of the object rather than a class itself. In that case CREF_CLASS returns the object's existing class, but when a method is actually being defined (either via definemethod or VM_SPECIAL_OBJECT_CBASE, which is used for undef and alias) the singleton class is created on demand.

This also happens to fix what I believe is a bug. Previously instance_eval behaved differently with regard to constant access for true/false/nil than for all other objects. I don't think this was intentional.

    String::Foo = "foo"
    "".instance_eval("Foo")    # => "foo"
    Integer::Foo = "foo"
    123.instance_eval("Foo")   # => "foo"
    TrueClass::Foo = "foo"
    true.instance_eval("Foo")  # NameError: uninitialized constant Foo

This also slightly changes the error message when trying to define a method through instance_eval on an object which can't have a singleton class.

Before:

    $ ruby -e '123.instance_eval { def foo; end }'
    -e:1:in `block in <main>': no class/module to add method (TypeError)

After:

    $ ./ruby -e '123.instance_eval { def foo; end }'
    -e:1:in `block in <main>': can't define singleton (TypeError)

IMO this error is a small improvement on the original and better matches the (both old and new) message when defining a method using `def self.`:

    $ ruby -e '123.instance_eval{ def self.foo; end }'
    -e:1:in `block in <main>': can't define singleton (TypeError)

Co-authored-by: Matthew Draper <matthew@trebex.net>

* Remove "under" argument from yield_under
* Move CREF_SINGLETON_SET into vm_cref_new
* Simplify vm_get_const_base
* Fix leaf VM_SPECIAL_OBJECT_CONST_BASE

Co-authored-by: Matthew Draper <matthew@trebex.net>

* Optimize dynamic string interpolation for symbol/true/false/nil/0-9 | Jeremy Evans | 2021-11-18 | 1 | -2/+18

This provides a significant speedup for symbol, true, false, nil, 0-9, and class/module, and a small speedup in most other cases.

Speedups (using included benchmarks):

    :symbol        :: 60%
    0-9            :: 50%
    Class/Module   :: 50%
    nil/true/false :: 20%
    integer        :: 10%
    []             :: 10%
    ""             :: 3%

One reason this approach is faster is that it reduces the number of VM instructions for each interpolated value.

Initial idea, approach, and benchmarks from Eric Wong. I applied the same approach against the master branch, updating it to handle the significant internal changes since this was first proposed 4 years ago (such as CALL_INFO/CALL_CACHE -> CALL_DATA). I also expanded it to optimize true/false/nil/0-9/class/module, and added handling of missing methods, refined methods, and RUBY_DEBUG.

This renames the tostring insn to anytostring, and adds an objtostring insn that implements the optimization. This requires making a few functions non-static, and adding some non-static functions.

This disables 4 YJIT tests. Those tests should be reenabled after YJIT optimizes the new objtostring insn.

Implements [Feature #13715]

Co-authored-by: Eric Wong <e@80x24.org>
Co-authored-by: Alan Wu <XrXr@users.noreply.github.com>
Co-authored-by: Yusuke Endoh <mame@ruby-lang.org>
Co-authored-by: Koichi Sasada <ko1@atdot.net>

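The commit references its included benchmarks without quoting them; the sketch below (mine, not the shipped benchmark files) shows one way to measure the interpolation cases it lists, assuming the benchmark-ips gem is available:

```ruby
require "benchmark/ips"

SYM = :symbol
NUM = 7

Benchmark.ips do |x|
  # These interpolations exercise the cases handled by the new objtostring insn.
  x.report("symbol") { "#{SYM}" }
  x.report("nil")    { "#{nil}" }
  x.report("0-9")    { "#{NUM}" }
  x.report("string") { "#{""}" }
  x.compare!
end
```
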
* Refactor setclassvariable (#5143) | Eileen M. Uchitelle | 2021-11-18 | 1 | -1/+1

We only need the cref when we have a cache miss, so don't look it up until we need it. This likely speeds up class variable writes in the interpreter but also simplifies the jit code.

Before:

```
Warming up --------------------------------------
        write a cvar   192.280k i/100ms
Calculating -------------------------------------
        write a cvar      1.915M (± 3.5%) i/s -      9.614M in   5.026694s
```

After:

```
Warming up --------------------------------------
        write a cvar   216.308k i/100ms
Calculating -------------------------------------
        write a cvar      2.140M (± 3.1%) i/s -     10.815M in   5.058079s
```

Followup to ruby/ruby#5137

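The benchmark code behind the numbers above is not quoted; a minimal sketch of a "write a cvar" benchmark (class and method names are mine), assuming benchmark-ips:

```ruby
require "benchmark/ips"

class CvarHolder
  @@cvar = nil

  # A plain class-variable write, compiled to setclassvariable.
  def self.write(value)
    @@cvar = value
  end
end

Benchmark.ips do |x|
  x.report("write a cvar") { CvarHolder.write(1) }
end
```
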
* Refactor getclassvariable (#5137) | Eileen M. Uchitelle | 2021-11-18 | 1 | -2/+1

* Refactor getclassvariable

We only need the cref when we have a cache miss, so don't look it up until we need it. This speeds up class variable reads in the interpreter but also simplifies the jit code.

Benchmarks for master vs this branch (without yjit):

Before:

```
Warming up --------------------------------------
         read a cvar     1.276M i/100ms
Calculating -------------------------------------
         read a cvar     12.596M (± 1.7%) i/s -     63.781M in   5.064902s
```

After:

```
Warming up --------------------------------------
         read a cvar     1.336M i/100ms
Calculating -------------------------------------
         read a cvar     13.114M (± 3.6%) i/s -     65.488M in   5.000584s
```

Co-authored-by: Aaron Patterson <tenderlove@ruby-lang.org>

* Clean up function signatures / remove dead code

The rb_vm_getclassvariable signature has changed and we don't need rb_vm_get_cref.

Co-authored-by: Aaron Patterson <tenderlove@ruby-lang.org>

* Eliminate some redundant checks on `num` in `newhash` | Aaron Patterson | 2021-10-18 | 1 | -2/+4

The `newhash` instruction was checking if `num` is greater than 0, but so is [`rb_hash_new_with_size`](https://github.com/ruby/ruby/blob/82e2443d8b1e3edd2607c78dddf5aac79a13492d/hash.c#L1564) as well as [`rb_hash_bulk_insert`](https://github.com/ruby/ruby/blob/82e2443d8b1e3edd2607c78dddf5aac79a13492d/hash.c#L4764). If we know the size is 0 in the instruction, we can just directly call `rb_hash_new` and only check the size once. Unfortunately, when num is greater than 0, it's still checked 3 times.

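For illustration (my own check, not from the commit): the size-0 case corresponds to an empty hash literal, and the operand can be inspected with RubyVM::InstructionSequence:

```ruby
# Prints the bytecode for an empty hash literal; on the builds I have
# checked, the listing contains a `newhash 0` instruction, which is the
# case the commit routes straight to rb_hash_new.
puts RubyVM::InstructionSequence.compile("h = {}").disasm
```
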
* Make Array#min/max optimization respect refined methods | Jeremy Evans | 2021-09-30 | 1 | -2/+2

Pass in ec to vm_opt_newarray_{max,min}. Avoids having to call GET_EC inside the functions, for better performance.

While here, add a test for Array#min/max being redefined to test_optimization.rb.

Fixes [Bug #18180]

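A small illustration (mine) of the behavior the new test guards: the literal-array fast path for min/max must defer to an active refinement rather than taking the optimized path:

```ruby
module RefinedMin
  refine Array do
    def min
      :refined_min
    end
  end
end

using RefinedMin

# With the fix, the opt_newarray_min fast path respects the refinement.
p [3, 1, 2].min  # => :refined_min
```
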
* Fix typo in insns.def [ci skip] | Alan Wu | 2021-09-23 | 1 | -1/+1

* Add a cache for class variables | eileencodes | 2021-06-18 | 1 | -4/+6

Redo of 34a2acdac788602c14bf05fb616215187badd504 and 931138b00696419945dc03e10f033b1f53cd50f3 which were reverted. GitHub PR #4340.

This change implements a cache for class variables. Previously there was no cache for cvars. Cvar access is slow due to needing to travel all the way up the ancestor tree before returning the cvar value. The deeper the ancestor tree, the slower cvar access will be.

The benefits of the cache are more visible with a higher number of included modules due to the way Ruby looks up class variables. The benchmark here includes 26 modules and shows that with the cache, this branch is 6.5x faster when accessing class variables.

```
compare-ruby: ruby 3.1.0dev (2021-03-15T06:22:34Z master 9e5105c) [x86_64-darwin19]
built-ruby: ruby 3.1.0dev (2021-03-15T12:12:44Z add-cache-for-clas.. c6be009) [x86_64-darwin19]

|         |compare-ruby|built-ruby|
|:--------|-----------:|---------:|
|vm_cvar  |      5.681M|   36.980M|
|         |           -|     6.51x|
```

Benchmark.ips calling `ActiveRecord::Base.logger` from within a Rails application. ActiveRecord::Base.logger has 71 ancestors. The more ancestors a tree has, the clearer the speed increase; i.e., if Base had only one ancestor we'd see no improvement. This benchmark is run on a vanilla Rails application.

Benchmark code:

```ruby
require "benchmark/ips"
require_relative "config/environment"

Benchmark.ips do |x|
  x.report "logger" do
    ActiveRecord::Base.logger
  end
end
```

Ruby 3.0 master / Rails 6.1:

```
Warming up --------------------------------------
              logger   155.251k i/100ms
Calculating -------------------------------------
```

Ruby 3.0 with cvar cache / Rails 6.1:

```
Warming up --------------------------------------
              logger     1.546M i/100ms
Calculating -------------------------------------
              logger     14.857M (± 4.8%) i/s -     74.198M in   5.006202s
```

Lastly, we ran a benchmark to demonstrate the difference between master and our cache when the number of modules increases. This benchmark measures 1 ancestor, 30 ancestors, and 100 ancestors.

Ruby 3.0 master:

```
Warming up --------------------------------------
            1 module     1.231M i/100ms
          30 modules   432.020k i/100ms
         100 modules   145.399k i/100ms
Calculating -------------------------------------
            1 module     12.210M (± 2.1%) i/s -     61.553M in   5.043400s
          30 modules      4.354M (± 2.7%) i/s -     22.033M in   5.063839s
         100 modules      1.434M (± 2.9%) i/s -      7.270M in   5.072531s

Comparison:
            1 module: 12209958.3 i/s
          30 modules:  4354217.8 i/s - 2.80x  (± 0.00) slower
         100 modules:  1434447.3 i/s - 8.51x  (± 0.00) slower
```

Ruby 3.0 with cvar cache:

```
Warming up --------------------------------------
            1 module     1.641M i/100ms
          30 modules     1.655M i/100ms
         100 modules     1.620M i/100ms
Calculating -------------------------------------
            1 module     16.279M (± 3.8%) i/s -     82.038M in   5.046923s
          30 modules     15.891M (± 3.9%) i/s -     79.459M in   5.007958s
         100 modules     16.087M (± 3.6%) i/s -     81.005M in   5.041931s

Comparison:
            1 module: 16279458.0 i/s
         100 modules: 16087484.6 i/s - same-ish: difference falls within error
          30 modules: 15891406.2 i/s - same-ish: difference falls within error
```

Co-authored-by: Aaron Patterson <tenderlove@ruby-lang.org>

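The vm_cvar micro-benchmark itself is not quoted above; a rough sketch of its shape (module count and names are mine), assuming benchmark-ips:

```ruby
require "benchmark/ips"

# A deep ancestor chain makes uncached cvar lookups expensive.
MODULES = Array.new(30) { Module.new }

class DeepCvar
  MODULES.each { |m| include m }
  @@cvar = :value

  def self.read
    @@cvar
  end
end

Benchmark.ips do |x|
  x.report("read cvar, 30 modules") { DeepCvar.read }
end
```
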
* [Bug #17880] Set leaf false on opt_setinlinecache (#4565) | Eileen M. Uchitelle | 2021-06-14 | 1 | -0/+1

This change fixes the bug described in https://bugs.ruby-lang.org/issues/17880.

Checking `ractor_shareable_p` will cause the method to call back into Ruby. Anything calling this method can't be a leaf instruction, otherwise it could crash. By adding `attr bool leaf = false` we no longer crash, because it marks the instruction as not a leaf.

Here's a simplified reproduction script:

```ruby
require "set"

class Id
  attr_reader :db_id

  def initialize(db_id)
    @db_id = db_id
  end

  def ==(other)
    other.class == self.class && other.db_id == db_id
  end
  alias_method :eql?, :==

  def hash
    10
  end

  def <=>(other)
    db_id <=> other.db_id if other.is_a?(self.class)
  end
end

class Namespace
  IDS = Set[
    Id.new(1).freeze,
    Id.new(2).freeze,
    Id.new(3).freeze,
    Id.new(4).freeze,
  ].freeze

  class << self
    def test?(id)
      IDS.include?(id)
    end
  end
end

p Namespace.test?(Id.new(1))
p Namespace.test?(Id.new(5))
```

Co-authored-by: Aaron Patterson <tenderlove@ruby-lang.org>

* Revert "Filling cache values on cvar write"Aaron Patterson2021-05-111-6/+4
| | | | | This reverts commit 08de37f9fa3469365e6b5c964689ae2bae0eb9f3. This reverts commit e8ae922b62adb00a80d3d4c49f7d7b0e6026eaba.
* Filling cache values on cvar write | eileencodes | 2021-05-11 | 1 | -2/+2

Fill cache values on write instead of on read. Once the value is in the inline cache we never have to make one again. We want to eventually put the value into the cache, and the best opportunity to do that is when the value is written.

* Add a cache for class variables | eileencodes | 2021-05-11 | 1 | -3/+5

This change implements a cache for class variables. Previously there was no cache for cvars. Cvar access is slow due to needing to travel all the way up the ancestor tree before returning the cvar value. The deeper the ancestor tree, the slower cvar access will be.

The benefits of the cache are more visible with a higher number of included modules due to the way Ruby looks up class variables. The benchmark here includes 26 modules and shows that with the cache, this branch is 6.5x faster when accessing class variables.

```
compare-ruby: ruby 3.1.0dev (2021-03-15T06:22:34Z master 9e5105ca45) [x86_64-darwin19]
built-ruby: ruby 3.1.0dev (2021-03-15T12:12:44Z add-cache-for-clas.. c6be0093ae) [x86_64-darwin19]

|         |compare-ruby|built-ruby|
|:--------|-----------:|---------:|
|vm_cvar  |      5.681M|   36.980M|
|         |           -|     6.51x|
```

Benchmark.ips calling `ActiveRecord::Base.logger` from within a Rails application. ActiveRecord::Base.logger has 71 ancestors. The more ancestors a tree has, the clearer the speed increase; i.e., if Base had only one ancestor we'd see no improvement. This benchmark is run on a vanilla Rails application.

Benchmark code:

```ruby
require "benchmark/ips"
require_relative "config/environment"

Benchmark.ips do |x|
  x.report "logger" do
    ActiveRecord::Base.logger
  end
end
```

Ruby 3.0 master / Rails 6.1:

```
Warming up --------------------------------------
              logger   155.251k i/100ms
Calculating -------------------------------------
```

Ruby 3.0 with cvar cache / Rails 6.1:

```
Warming up --------------------------------------
              logger     1.546M i/100ms
Calculating -------------------------------------
              logger     14.857M (± 4.8%) i/s -     74.198M in   5.006202s
```

Lastly, we ran a benchmark to demonstrate the difference between master and our cache when the number of modules increases. This benchmark measures 1 ancestor, 30 ancestors, and 100 ancestors.

Ruby 3.0 master:

```
Warming up --------------------------------------
            1 module     1.231M i/100ms
          30 modules   432.020k i/100ms
         100 modules   145.399k i/100ms
Calculating -------------------------------------
            1 module     12.210M (± 2.1%) i/s -     61.553M in   5.043400s
          30 modules      4.354M (± 2.7%) i/s -     22.033M in   5.063839s
         100 modules      1.434M (± 2.9%) i/s -      7.270M in   5.072531s

Comparison:
            1 module: 12209958.3 i/s
          30 modules:  4354217.8 i/s - 2.80x  (± 0.00) slower
         100 modules:  1434447.3 i/s - 8.51x  (± 0.00) slower
```

Ruby 3.0 with cvar cache:

```
Warming up --------------------------------------
            1 module     1.641M i/100ms
          30 modules     1.655M i/100ms
         100 modules     1.620M i/100ms
Calculating -------------------------------------
            1 module     16.279M (± 3.8%) i/s -     82.038M in   5.046923s
          30 modules     15.891M (± 3.9%) i/s -     79.459M in   5.007958s
         100 modules     16.087M (± 3.6%) i/s -     81.005M in   5.041931s

Comparison:
            1 module: 16279458.0 i/s
         100 modules: 16087484.6 i/s - same-ish: difference falls within error
          30 modules: 15891406.2 i/s - same-ish: difference falls within error
```

Co-authored-by: Aaron Patterson <tenderlove@ruby-lang.org>

* Fix type-o in insns.def | ebrohman | 2021-04-26 | 1 | -1/+1

"redefine" -> "redefined"

* Remove reverse VM instruction | Jeremy Evans | 2021-04-21 | 1 | -19/+0

This was previously only used by the multiple assignment code, but is no longer needed after the multiple assignment execution order fix.

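An illustrative way (mine) to confirm on a given build that multiple assignment no longer emits the removed instruction:

```ruby
# Disassemble a multiple assignment; after this change the listing
# contains no `reverse` instruction (the insn no longer exists).
puts RubyVM::InstructionSequence.compile("a, b = b, a").disasm
```
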
* Use rb_fstring for "defined" strings. | Aaron Patterson | 2021-03-17 | 1 | -2/+2

We can take advantage of fstrings to de-duplicate the defined strings. This means we don't need to keep the list of defined strings on the VM (or register them as mark objects).

* Refactor vm_defined to return a boolean | Aaron Patterson | 2021-03-17 | 1 | -3/+1

We just need this function to return whether or not the thing we're looking for is defined. If it's defined, return something true, otherwise false.

* Stop calling `rb_iseq_defined_string` in vm_defined | Aaron Patterson | 2021-03-17 | 1 | -1/+1

We already have access to the string from the iseqs, so we can stop calling this function.

* Store strings for `defined` in the iseqs | Aaron Patterson | 2021-03-17 | 1 | -1/+6

We can know the string used for "defined" calls at compile time, then store the string in the instruction sequences.

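For context, these are the kinds of strings involved: `defined?` returns a fixed, compile-time-known description for each kind of expression (example mine):

```ruby
@ivar = 1

p defined?(puts)    # => "method"
p defined?(String)  # => "constant"
p defined?(@ivar)   # => "instance-variable"
p defined?(1 + 1)   # => "expression"
```
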
* enable constant cache on ractors | Koichi Sasada | 2021-01-05 | 1 | -4/+5

The constant cache `IC` is accessed in a non-atomic manner and has thread-safety issues, so Ruby 3.0 disabled the constant cache on non-main Ractors.

This patch enables it by introducing `imemo_constcache` and allocating a fresh entry on every re-fill of the constant cache, like `imemo_callcache`. [Bug #17510]

Now `IC` only has one entry, `IC::entry`, and it points to an `iseq_inline_constant_cache_entry` managed by a T_IMEMO object. `IC` is now an atomic data structure, so `rb_mjit_before_vm_ic_update()` and `rb_mjit_after_vm_ic_update()` are no longer needed.

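An illustrative snippet (mine) of the case this enables: constant lookups performed inside a non-main Ractor, which previously could not use the inline constant cache:

```ruby
r = Ractor.new do
  # Constant accesses here run on a non-main Ractor; with this patch
  # they can be served from imemo_constcache-backed inline caches.
  Math::PI * 2
end

p r.take  # => 6.283185307179586
```
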
* Fix a cyclic explanation | Takashi Kokubun | 2020-12-25 | 1 | -1/+1

* encourage inlining for vm_sendish() | Koichi Sasada | 2020-12-17 | 1 | -4/+4

Some tunings:

* add `inline` for vm_sendish()
* pass an enum instead of a func ptr to vm_sendish()
* reorder the initial order of the `calling` struct
* add ALWAYS_INLINE for vm_search_method_fastpath()
* call vm_search_method_fastpath() from vm_sendish()

* Lazily move PC with RUBY_VM_CHECK_INTS | Takashi Kokubun | 2020-12-16 | 1 | -4/+4

```
$ benchmark-driver -v --rbenv 'before --jit;after --jit' --repeat-count=12 --alternate --output=all benchmark.yml
before --jit: ruby 3.0.0dev (2020-12-17T06:17:46Z master 3b4d698e0b) +JIT [x86_64-linux]
after --jit: ruby 3.0.0dev (2020-12-17T07:01:48Z master 843abb96f0) +JIT [x86_64-linux]
last_commit=Lazily move PC with RUBY_VM_CHECK_INTS
Calculating -------------------------------------
                                  before --jit          after --jit
Optcarrot Lan_Master.nes   80.29343646660429    83.15779723251525 fps
                           82.26755637885149    85.50197941326810
                           83.50682959728820    88.14657804306270
                           85.01236533133049    88.78201988978667
                           87.81799334561326    88.94841008936447
                           87.88228562393064    89.37925215601926
                           88.06695585889995    89.86143277214475
                           88.84730834922165    90.00773346420887
                           90.46317871213088    90.82603371104014
                           90.96308347148916    91.29797694822179
                           90.97945938504556    91.31086331868738
                           91.57127890154500    91.49949184318844
```

* Inline getconstant on JIT (#3906) | Takashi Kokubun | 2020-12-16 | 1 | -1/+1

* Inline getconstant on JIT
* Support USE_MJIT=0

* fix inline method cache sync bug | Koichi Sasada | 2020-12-15 | 1 | -5/+0

`cd` is passed from method call instructions to method invocation functions, but `cd` can be manipulated by other Ractors simultaneously, so it has a thread-safety issue.

To solve this issue, this patch stores `ci` and the found `cc` in `calling` and stops passing `cd`.

* Unfortunately getinstancevariable was still not leaf | Takashi Kokubun | 2020-12-10 | 1 | -0/+2

https://github.com/ruby/ruby/runs/1533401436

* Make getinstancevariable a leaf instruction | Jeremy Evans | 2020-12-10 | 1 | -2/+0

It can no longer issue a warning.

* tuning trial: newobj with current ec | Koichi Sasada | 2020-12-07 | 1 | -2/+2

Passing the current ec can improve the performance of newobj. This patch tries it for Array and String literals ([] and '').

* sync RClass::ext::iv_index_tbl | Koichi Sasada | 2020-10-17 | 1 | -2/+2

iv_index_tbl manages instance variable indexes (ID -> index). This data structure should be synchronized with other Ractors, so introduce some VM locks.

This patch also introduces an atomic ivar cache used by the set/getinlinecache instructions. To make ivar cache (IVC) updates atomic, we changed the iv_index_tbl data structure to manage (ID -> entry), where an entry points to a serial and an index. The IVC points to this entry, so cache updates become atomic.

* Interpolated strings are no longer frozen with frozen-string-literal: true | Benoit Daloze | 2020-09-15 | 1 | -10/+0

* Remove freezestring instruction since this was the only usage for it.
* [Feature #17104]

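A minimal illustration (mine) of the user-visible effect of [Feature #17104]:

```ruby
# frozen_string_literal: true

p "plain".frozen?            # => true  (plain literals are still frozen)
p "value: #{1 + 1}".frozen?  # => false (interpolated strings are no longer frozen)
```
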
* precalc invokebuiltin destinations | 卜部昌平 | 2020-07-13 | 1 | -4/+4

Noticed that struct rb_builtin_function is a purely compile-time constant. MJIT can eliminate some runtime calculations by statically generating a dedicated C code generator for each builtin function.

* Use ID instead of GENTRY for gvars. (#3278) | Koichi Sasada | 2020-07-03 | 1 | -8/+6

Use ID instead of GENTRY for gvars.

Global variables are compiled into GENTRY (a pointer to struct rb_global_entry). This patch replaces GENTRY with ID to make the code simple. We need to search for the GENTRY from the ID every time (st_lookup), so additional overhead will be introduced. However, the performance of accessing global variables is not important nowadays, and this simplicity helps Ractor development.

* Trace :return of builtin methods | Takashi Kokubun | 2020-06-23 | 1 | -1/+1

This traces them using the opt_invokebuiltin_delegate_leave insn. Since Ruby 2.7, :return events of methods using builtins have not been traced properly.

* Remove obsoleted opt_call_c_function insn (#3232) | Takashi Kokubun | 2020-06-17 | 1 | -1/+1

* Remove obsoleted opt_call_c_function insn
* Keep opt_call_c_function with DEFINE_INSN_IF

* vm_insnhelper.c: merge opt_eq_func / opt_eql_func | 卜部昌平 | 2020-06-02 | 1 | -1/+1

These two functions were almost identical, except in the case of T_STRING/T_FLOAT. Why not merge them into one, and let the difference be handled by normal method calls (the slow path)?

This does not improve runtime performance for me, but it at least reduces rb_eql_opt, for instance, from 653 bytes to 86 bytes on my machine, according to nm(1).

* Turn class variable warnings into exceptions | Jeremy Evans | 2020-04-10 | 1 | -2/+2

This changes the following warnings:

* warning: class variable access from toplevel
* warning: class variable @foo of D is overtaken by C

into RuntimeErrors.

Handle defined?(@@foo) at toplevel by returning nil instead of raising an exception (the previous behavior warned before returning nil when defined? was used).

Refactor the specs to avoid the warnings even in older versions. The specs were checking for the warnings, but the purpose of the related specs, as evidenced by their descriptions, is to test for behavior, not for warnings.

Fixes [Bug #14541]

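An illustration (mine) of the new toplevel behavior described above:

```ruby
# Run at the top level of a script.
begin
  @@toplevel_cvar = 1
rescue RuntimeError => e
  puts e.message  # => "class variable access from toplevel" (message as given in the commit)
end

# defined? no longer warns; it simply returns nil at toplevel.
p defined?(@@toplevel_cvar)  # => nil
```
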
* Introduce disposable call-cache. | Koichi Sasada | 2020-02-22 | 1 | -6/+7

This patch contains several ideas:

(1) Disposable inline method cache (IMC) for race-free inline method caching
    * Make the call-cache (CC) an RVALUE (GC-target object) and allocate a new CC on a cache miss.
    * This technique allows race-free access from parallel processing elements, like RCU.

(2) Introduce a per-class method cache (pCMC)
    * Instead of the fixed-size global method cache (GMC), pCMC allows a flexible cache size.
    * Caching CCs reduces CC allocation and allows sharing a CC's fast path between call sites with the same call-info (CI).

(3) Invalidate an inline method cache by invalidating the corresponding method entries (MEs)
    * Instead of using class serials, we set an "invalidated" flag on the method entry itself to represent cache invalidation.
    * Compared with using class serials, the impact of a method modification (add/overwrite/delete) is small.
    * Updating class serials invalidates all method caches of the class and its subclasses; the proposed approach only invalidates the method cache of the one affected ME.

See [Feature #16614] for more details.

* VALUE size packed callinfo (ci). | Koichi Sasada | 2020-02-22 | 1 | -6/+6

Now rb_call_info contains how to call the method, as a tuple of (mid, orig_argc, flags, kwarg). In most cases kwarg == NULL, and mid+argc+flags only require 64 bits. So this patch packs rb_call_info into a VALUE (1 word) in such cases. If it cannot be represented in a VALUE, then imemo_callinfo is used, which contains a conventional callinfo (rb_callinfo, renamed from rb_call_info).

iseq->body->ci_kw_size is removed because every callinfo is now VALUE size (a packed ci or a pointer to an imemo_callinfo).

To access ci information, we need to use these functions: vm_ci_mid(ci), _flag(ci), _argc(ci), _kwarg(ci).

struct rb_call_info_kw_arg is renamed to rb_callinfo_kwarg.

rb_funcallv_with_cc() and rb_method_basic_definition_p_with_cc() are temporarily removed because cd->ci should be marked.

* Fixed a typo, missing "i" [ci skip] | Nobuyoshi Nakada | 2020-01-27 | 1 | -1/+1

* Introduce an "Inline IVAR cache" structAaron Patterson2019-12-051-3/+3
| | | | | | | | | This commit introduces an "inline ivar cache" struct. The reason we need this is so compaction can differentiate from an ivar cache and a regular inline cache. Regular inline caches contain references to `VALUE` and ivar caches just contain references to the ivar index. With this new struct we can easily update references for inline caches (but not inline var caches as they just contain an int)
* check interrupts at each frame pop timing. | Koichi Sasada | 2019-11-29 | 1 | -3/+0

Asynchronous events such as signal traps, finalization timing, thread switching and so on are managed by "interrupt_flag". Ruby's threads check this flag periodically, and if a thread does not check this flag, the above events don't happen. This checking is done by the CHECK_INTS() (and related) macros, which are placed at certain points (the leave instruction and so on). However, at the end of C methods, C blocks (IMEMO_IFUNC) etc. there is no checking, and this can introduce an uninterruptible thread.

To remedy this situation, we decided to place CHECK_INTS() at vm_pop_frame(). It increases the number of interrupt checking points. [Bug #16366]

This patch can introduce unexpected events...

* Revert "export for MJIT"Koichi Sasada2019-11-291-0/+2
| | | | This reverts commit 2e6f1cf8b264f4c8499c4e5f18bf662fdade04ff.
* export for MJIT | Koichi Sasada | 2019-11-29 | 1 | -2/+0

* add casts. | Koichi Sasada | 2019-11-18 | 1 | -2/+2

Add casts to avoid a compile error. http://ci.rvm.jp/results/trunk_clang_39@silicon-docker/2402215

* vm_invoke_builtin_delegate with start index. | Koichi Sasada | 2019-11-18 | 1 | -5/+5

opt_invokebuiltin_delegate and opt_invokebuiltin_delegate_leave invoke builtin functions with the same parameters as the method. This technique eliminates stack push operations. However, the delegated parameters had to be exactly the same as the given parameters (e.g. `def foo(a, b, c) __builtin_foo(a, b, c)` is okay, but __builtin_foo(b, c) is not allowed).

This patch relaxes this restriction. An ISeq has a local variable table which includes the parameters. For example, for a method defined as `def foo(a, b, c) x=y=nil`, the local variable table contains [a, b, c, x, y]. If a builtin function is called with arguments that are a sub-array of the lvar table, the opt_invokebuiltin_delegate instruction is used with a start index. For example, `__builtin_foo(b, c)` and `__builtin_bar(c, x, y)` are okay, and so on.

* Revert "Method reference operator"Nobuyoshi Nakada2019-11-121-11/+0
| | | | | This reverts commit 67c574736912003c377218153f9d3b9c0c96a17b. [Feature #16275]
* use STACK_ADDR_FROM_TOP() | Koichi Sasada | 2019-11-09 | 1 | -1/+1

vm_invoke_builtin() accesses the VM stack via cfp->sp. However, MJIT can use its own stack. To access the stack appropriately, we need to use STACK_ADDR_FROM_TOP().

* support builtin features with Ruby and C. | Koichi Sasada | 2019-11-08 | 1 | -0/+48

Support loading builtin features written in Ruby, which are implemented with C builtin functions. [Feature #16254]

Several features:

(1) Load .rb files at boot time from the native binary.
    Now, prelude.rb is loaded at boot time. However, this file is contained in the interpreter as text and we need to compile it. This patch contains a feature to load it from a binary format.

(2) `__builtin_func()` in Ruby calls `func()` written in C.
    In a Ruby file, we can write `__builtin_func()` like a method call. However, this is not a method call but special syntax to call a function `func()` written in C. The C functions should be defined in a file (same compile unit) which loads this .rb file.
    Functions (`func` in the above example) should be defined with:
    (a) 1st parameter: rb_execution_context_t *ec
    (b) rest parameters (0 to 15)
    (c) VALUE return type
    These requirements are very similar to those for functions used by rb_define_method(), except that `rb_execution_context_t *ec` is a new requirement.

(3) Automatic C code generation from .rb files.
    tool/mk_builtin_loader.rb creates C code to load the .rb files needed by the miniruby and ruby commands. This script is run by BASERUBY, so *.rb should be written in BASERUBY-compatible syntax. The script loads a .rb file, finds all `__builtin_`-prefixed method calls, and generates a part of the C code to export the functions.
    tool/mk_builtin_binary.rb creates C code which contains the binary-compiled Ruby files needed by the ruby command.

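A hedged sketch of what this looks like in use; the file, class, method, and function names below are hypothetical, and only the calling convention (leading `rb_execution_context_t *ec`, up to 15 further arguments, `VALUE` return) is taken from the description above. This is not a standalone script: the `__builtin_` syntax only works for .rb files compiled into the interpreter by the tooling mentioned.

```ruby
# Hypothetical core file demo.rb, compiled in via tool/mk_builtin_loader.rb.
class Demo
  def self.answer(x)
    # Not a real method call: the build tooling turns this into an
    # invokebuiltin of a C function named demo_answer, defined in the
    # same compile unit, taking rb_execution_context_t *ec first,
    # then the call's arguments, and returning VALUE.
    __builtin_demo_answer(x)
  end
end
```
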
* Combine call info and cache to speed up method invocation | Alan Wu | 2019-10-24 | 1 | -48/+52

To perform a regular method call, the VM needs two structs, `rb_call_info` and `rb_call_cache`. At the moment, we allocate these two structures in separate buffers. In the worst case, the CPU needs to read 4 cache lines to complete a method call. Putting the two structures together reduces the maximum number of cache line reads to 2.

Combining the structures also saves 8 bytes per call site, as the current layout uses two separate pointers for the call info and the call cache. This saves about 2 MiB on Discourse.

This change improves the Optcarrot benchmark by at least 3%. For more details, see the attached bugs.ruby-lang.org ticket.

Complications:
- A new instruction attribute `comptime_sp_inc` is introduced to calculate the SP increase at compile time without using call caches. At compile time, a `TS_CALLDATA` operand points to a call info struct, but at runtime the same operand points to a call data struct. Instructions that explicitly define `sp_inc` also need to define `comptime_sp_inc`.
- MJIT code for copying call caches becomes slightly more complicated.
- This changes the bytecode format, which might break existing tools.

[Misc #16258]

* Revert https://github.com/ruby/ruby/pull/2486 | 卜部昌平 | 2019-10-03 | 1 | -1/+1

This reverts commits: 10d6a3aca7 8ba48c1b85 fba8627dc1 dd883de5ba 6c6a25feca 167e6b48f1 7cb96d41a5 3207979278 595b3c4fdd 1521f7cf89 c11c5e69ac cf33608203 3632a812c0 f56506be0d 86427a3219 .

The reason for the revert is that we observe an ABA problem around the inline method cache. When a cache misses, we search for a method entry. And if the entry is identical to what was cached before, we reuse the cache. But the commits we are reverting here introduced situations where a method entry is freed, then the identical memory region is used for another method entry. An inline method cache cannot detect that ABA.

Here is code that reproduces such a situation:

```ruby
require 'prime'

class << Integer
  alias org_sqrt sqrt
  def sqrt(n)
    raise
  end

  GC.stress = true
  Prime.each(7*37){} rescue nil # <- Here we populate CC
  class << Object.new; end

  # This adjacent remove-then-alias maneuver frees a method entry,
  # then immediately reuses it for another.
  remove_method :sqrt
  alias sqrt org_sqrt
end

Prime.each(7*37).to_a # <- SEGV
```

* delete unnecessary branch | 卜部昌平 | 2019-09-30 | 1 | -1/+1

At last, not only myself but also your compiler are fully confident that the method entries pointed to from call caches are immutable. We don't have to worry about silent updates. Just delete the branch that is now always false.

    Calculating -------------------------------------
                                       ours       trunk
    vm2_poly_same_method             2.142M      2.070M i/s - 6.000M times in 2.801148s 2.898994s

    Comparison:
                 vm2_poly_same_method
                          ours:   2141979.2 i/s
                         trunk:   2069683.8 i/s - 1.03x  slower