path: root/iseq.h
Commit log (each entry lists the commit message, author, date, files changed, and lines changed):
* Allow passing a Rust closure to rb_iseq_callback (#6575)
  Takashi Kokubun, 2022-10-18 (1 file changed, -1/+1)

* Make mjit_cont sharable with YJIT (#6556)
  Takashi Kokubun, 2022-10-17 (1 file changed, -0/+1)

  * Make mjit_cont sharable with YJIT
  * Update dependencies
  * Update YJIT binding

* Remove rb_iseq_each
  John Hawthorn, 2022-09-01 (1 file changed, -4/+0)
* New constant caching insn: opt_getconstant_path
  John Hawthorn, 2022-09-01 (1 file changed, -0/+1)

  Previously YARV bytecode implemented constant caching by having a pair of
  instructions, opt_getinlinecache and opt_setinlinecache, wrapping a series
  of getconstant calls (with putobject providing supporting arguments).

  This commit replaces that pattern with a new instruction,
  opt_getconstant_path, handling both getting/setting the inline cache and
  fetching the constant on a cache miss.

  This is implemented by storing the full constant path as a null-terminated
  array of IDs inside of the IC structure. idNULL is used to signal an
  absolute constant reference.

  ```
  $ ./miniruby --dump=insns -e '::Foo::Bar::Baz'

  == disasm: #<ISeq:<main>@-e:1 (1,0)-(1,13)> (catch: FALSE)
  0000 opt_getconstant_path      <ic:0 ::Foo::Bar::Baz>      (   1)[Li]
  0002 leave
  ```

  The motivation for this is that we had increasingly found the need to
  disassemble the instructions between the opt_getinlinecache and
  opt_setinlinecache in order to determine the constant we are fetching, or
  otherwise store metadata. This disassembly was done:

  * In opt_setinlinecache, to register the IC against the constant names it
    is using for granular invalidation.
  * In rb_iseq_free, to unregister the IC from the invalidation table.
  * In YJIT, to find the position of an opt_getinlinecache instruction so it
    can be invalidated when the cache is populated.
  * In YJIT, to register the constant names being used for invalidation.

  With this change we no longer need disassembly for these (in fact,
  rb_iseq_each is now unused), as the list of constant names being referenced
  is held in the IC. This should also make it possible to make more
  optimizations in the future.

  This may also reduce the size of iseqs, as previously each constant segment
  required 32 bytes (on 64-bit platforms); this implementation stores only
  one ID per segment.

  There should be no significant performance change between this and the
  previous implementation. Previously opt_getinlinecache was a "leaf"
  instruction, but it included a jump (almost always to a separate cache
  line). Now opt_getconstant_path is a non-leaf (it may raise/autoload/call
  const_missing) but it does not jump. These seem to even out.
* Add "rb_" prefixes to toplevel enum definitions
  Yusuke Endoh, 2022-07-22 (1 file changed, -2/+2)

  ... as per ko1's request.

* Move enum definitions out of struct definition
  Yusuke Endoh, 2022-07-22 (1 file changed, -21/+22)

* Expand tabs [ci skip]
  Takashi Kokubun, 2022-07-21 (1 file changed, -19/+19)

  [Misc #18891]
* Use `roomof` macro
  Nobuyoshi Nakada, 2022-07-08 (1 file changed, -1/+1)

  The masking is not only unnecessary, it also works only when the masking
  value is a power of 2. Also suppress "unary minus operator applied to
  unsigned type" warnings.
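  As an editorial aside, here is a minimal sketch of the idiom, assuming the
  conventional rounding-up definition of `roomof` (not copied from Ruby's
  headers; the second macro is purely illustrative):

  ```c
  #include <stddef.h>

  /* Assumed definition: the number of y-sized units needed to hold x, i.e.
   * ceil(x / y). Unlike mask-based rounding such as (x + y - 1) & -y, this
   * works for any y, not just powers of 2, and avoids applying unary minus
   * to an unsigned operand. */
  #define roomof(x, y) (((x) + (y) - 1) / (y))

  /* e.g. how many pointer-sized words are needed to hold n bytes */
  #define BYTES_TO_WORDS(n) roomof((n), sizeof(void *))
  ```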
* Remove ISEQ_MARKABLE_ISEQ flag
  Aaron Patterson, 2022-07-07 (1 file changed, -1/+0)

  We don't need this flag anymore. We have all the info we need via the
  bitmap and the is_entries list.

* Move function to `static inline` so we don't have leaked globals
  Aaron Patterson, 2022-06-29 (1 file changed, -1/+0)

  This function shouldn't leak and is only needed during instruction
  assembly.

* Fix ISeq dump / load in array cases
  Aaron Patterson, 2022-06-29 (1 file changed, -0/+1)

  We need to dump relative offsets for inline storage entries so that loading
  iseqs as an array works as well. This commit also has some minor
  refactoring to make computing relative ISE information easier.

  This should fix the iseq dump / load as array tests we're seeing fail in CI.

  Co-Authored-By: John Hawthorn <john@hawthorn.email>
* Speed up ISeq by marking via bitmaps and IC rearranging
  Aaron Patterson, 2022-06-23 (1 file changed, -0/+6)

  This commit adds a bitfield to the iseq body that stores offsets inside the
  iseq buffer that contain values we need to mark. We can use this bitfield
  to mark objects instead of disassembling the instructions.

  This commit also groups inline storage entries and adds a counter for each
  entry. This allows us to iterate and mark each entry without disassembling
  instructions.

  Since we have a bitfield and grouped inline caches, we can mark all VALUE
  objects associated with instructions without actually disassembling the
  instructions at mark time.

  [Feature #18875] [ruby-core:109042]
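  For illustration, a sketch of marking through a bitmap rather than by
  decoding instructions; the buffer/bitmap layout and the function are
  invented for the example, and only rb_gc_mark() is the real GC entry point:

  ```c
  #include <stdint.h>
  #include <stddef.h>

  typedef uintptr_t VALUE;   /* stand-in for Ruby's VALUE in this sketch */
  void rb_gc_mark(VALUE);    /* real GC marking entry point */

  /* One bit per iseq-buffer slot: a set bit means that slot holds a VALUE
   * the GC must mark, so no instruction decoding is needed at mark time. */
  static void
  mark_iseq_values(const VALUE *buffer, const uint64_t *bits, size_t nslots)
  {
      for (size_t i = 0; i < nslots; i++) {
          if (bits[i / 64] & ((uint64_t)1 << (i % 64))) {
              rb_gc_mark(buffer[i]);
          }
      }
  }
  ```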
* Finer-grained constant cache invalidation (take 2)
  Kevin Newton, 2022-04-01 (1 file changed, -0/+3)

  This commit reintroduces finer-grained constant cache invalidation. After
  8008fb7 got merged, it was causing issues on token-threaded builds (such as
  on Windows).

  The issue was that when you're iterating through instruction sequences and
  using the translator functions to get back the instruction structs, you're
  using either `rb_vm_insn_null_translator` or `rb_vm_insn_addr2insn2`,
  depending on whether it's a direct-threading build. `rb_vm_insn_addr2insn2`
  does some normalization to always return to you the non-trace version of
  whatever instruction you're looking at. `rb_vm_insn_null_translator` does
  not do that normalization.

  This means that when you're looping through the instructions, an opcode
  comparison can change depending on the type of threading that you're using.
  This can be very confusing. So, this commit creates a new translator
  function `rb_vm_insn_normalizing_translator` to always return the non-trace
  version so that opcode comparisons don't have to worry about different
  configurations.

  [Feature #18589]
* Revert "Finer-grained inline constant cache invalidation"
  Nobuyoshi Nakada, 2022-03-25 (1 file changed, -3/+0)

  This reverts commits for [Feature #18589]:

  * 8008fb7352abc6fba433b99bf20763cf0d4adb38
    "Update formatting per feedback"
  * 8f6eaca2e19828e92ecdb28b0fe693d606a03f96
    "Delete ID from constant cache table if it becomes empty on ISEQ free"
  * 629908586b4bead1103267652f8b96b1083573a8
    "Finer-grained inline constant cache invalidation"

  MSWin builds on AppVeyor have been crashing since the merger.
* Finer-grained inline constant cache invalidation
  Kevin Newton, 2022-03-24 (1 file changed, -0/+3)

  Current behavior - caches depend on a global counter. All constant
  mutations cause caches to be invalidated.

  ```ruby
  class A
    B = 1
  end

  def foo
    A::B # inline cache depends on global counter
  end

  foo # populate inline cache
  foo # hit inline cache

  C = 1 # global counter increments, all caches are invalidated

  foo # misses inline cache due to `C = 1`
  ```

  Proposed behavior - caches depend on name components. Only constant
  mutations with corresponding names will invalidate the cache.

  ```ruby
  class A
    B = 1
  end

  def foo
    A::B # inline cache depends on constants named "A" and "B"
  end

  foo # populate inline cache
  foo # hit inline cache

  C = 1 # caches that depend on the name "C" are invalidated

  foo # hits inline cache because IC only depends on "A" and "B"
  ```

  Examples of breaking the new cache:

  ```ruby
  module C
    # Breaks `foo` cache because "A" constant is set and the cache in foo
    # depends on "A" and "B"
    class A; end
  end

  B = 1
  ```

  We expect the new cache scheme to be invalidated less often because names
  aren't frequently reused. With the cache being invalidated less, we can
  rely on its stability more to keep our constant references fast and reduce
  the need to throw away generated code in YJIT.
* Add ISEQ_BODY macro
  Peter Zhu, 2022-03-24 (1 file changed, -11/+11)

  Use ISEQ_BODY macro to get the rb_iseq_constant_body of the ISeq. Using
  this macro will make it easier for us to change the allocation strategy of
  rb_iseq_constant_body when using Variable Width Allocation.
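  A small sketch of what the macro buys call sites; the helper function and
  the field it reads are chosen for illustration, not taken from this commit:

  ```c
  #include "iseq.h"   /* ISEQ_BODY(), rb_iseq_t (within the Ruby source tree) */

  /* Hypothetical helper showing the before/after access pattern. */
  static unsigned int
  iseq_buffer_len(const rb_iseq_t *iseq)
  {
      /* Direct access, as call sites did before this commit:
       *     return iseq->body->iseq_size;
       * Macro access, as after this commit, so the way the body is reached
       * can later change in one place (e.g. for Variable Width Allocation): */
      return ISEQ_BODY(iseq)->iseq_size;
  }
  ```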
* add `rb_iseq_type()` to return iseq type in Symbol
  Koichi Sasada, 2021-12-19 (1 file changed, -0/+1)

  It is shorthand for `ISeq#to_a[9]`.
* Rework tracing for blocks running as methods
  Alan Wu, 2021-12-01 (1 file changed, -1/+1)

  The main impetus for this change is to fix [Bug #13392]. Previously, we
  fired the "return" TracePoint event after popping the stack frame for the
  block running as a method (BMETHOD). This gave undesirable source location
  outputs, as the return event normally fires right before the frame goes
  away.

  The iseq for each block can run both as a block and as a method. To
  accommodate that, this commit makes vm_trace() fire call/return events for
  instructions that have b_call/b_return events attached when the iseq is
  running as a BMETHOD. The logic for rewriting to "trace_*" instructions is
  tweaked so that when the user listens to call/return events, instructions
  with b_call/b_return become trace variants.

  To continue to provide the return value for non-local returns done using
  the "return" or "break" keyword inside BMETHODs, the stack unwinding code
  is tweaked. b_return events now provide the same return value as return
  events for these non-local cases. A pre-existing test deemed not providing
  a return value for these b_return events a limitation.

  This commit removes the checks for call/return TracePoint events that
  happen when calling into BMETHODs when no TracePoints are active.
  Technically, migrating just the return event is enough to fix the bug, but
  migrating both call and return removes our reliance on
  `VM_FRAME_FLAG_FINISH` and re-entering the interpreter when the caller is
  already in the interpreter.
* `Primitive.mandatory_only?` for fast path
  Koichi Sasada, 2021-11-15 (1 file changed, -0/+1)

  Compared with C methods, a built-in method written in Ruby is slower if
  only mandatory parameters are given, because it needs to check the
  arguments and fill default values for optional and keyword parameters
  (C methods can check the number of parameters with `argc`, so there is no
  such overhead). Passing only mandatory arguments is common (optional
  arguments are exceptional, in many cases), so it is important to provide a
  fast path for such common cases.

  `Primitive.mandatory_only?` is a special builtin function used with an `if`
  expression like this:

  ```ruby
  def self.at(time, subsec = false, unit = :microsecond, in: nil)
    if Primitive.mandatory_only?
      Primitive.time_s_at1(time)
    else
      Primitive.time_s_at(time, subsec, unit, Primitive.arg!(:in))
    end
  end
  ```

  and it makes two ISeqs:

  ```
  # (1)
  def self.at(time, subsec = false, unit = :microsecond, in: nil)
    Primitive.time_s_at(time, subsec, unit, Primitive.arg!(:in))
  end

  # (2)
  def self.at(time)
    Primitive.time_s_at1(time)
  end
  ```

  and (2) is pointed to by (1). Note that `Primitive.mandatory_only?` should
  be used only in the condition of an `if` statement, and the `if` statement
  should be the whole method body (you can not put any expression before or
  after the `if` statement).

  A method entry with `mandatory_only?` (`Time.at` in the above case) is
  marked as `iseq_overload`. When the method is dispatched only with
  mandatory arguments (`Time.at(0)` for example), another method entry with
  ISeq (2) is made as the mandatory-only method entry and is cached in an
  inline method cache.

  The idea is similar to the one discussed in
  https://bugs.ruby-lang.org/issues/16254, but it only checks for mandatory
  parameters or more, because in many cases only mandatory parameters are
  given. If we find other cases (optional or keyword parameters are used
  frequently and it hurts performance), we can extend the feature.
* Cleanup diff against upstream. Add comments
  Alan Wu, 2021-10-20 (1 file changed, -2/+0)

  I did a `git diff --stat` against upstream and looked at all the files that
  are outside of YJIT to come up with these minor changes.

* Yet Another Ruby JIT!
  Jose Narvaez, 2021-10-20 (1 file changed, -1/+1)

  Renaming uJIT to YJIT. AKA s/ujit/yjit/g.
* Restore interpreter regs in ujit hook. Implement leave bytecode.
  Maxime Chevalier-Boisvert, 2021-10-20 (1 file changed, -3/+1)

* Add to the MicroJIT scraper an example that passes ec
  Alan Wu, 2021-10-20 (1 file changed, -0/+1)

* MicroJIT: Don't compile trace instructions
  Alan Wu, 2021-10-20 (1 file changed, -0/+2)

* Remove PC argument from ujit instructions
  Maxime Chevalier-Boisvert, 2021-10-20 (1 file changed, -1/+1)

* Yeah, this actually works!
  Alan Wu, 2021-10-20 (1 file changed, -1/+1)

* Add example handler for ujit and scrape it from vm.o
  Alan Wu, 2021-10-20 (1 file changed, -0/+3)
* Allow tracing of optimized methods
  Jeremy Evans, 2021-08-21 (1 file changed, -0/+2)

  This updates the trace instructions to directly dispatch to
  opt_send_without_block. So this should cause no slowdown in non-trace mode.

  To enable the tracing of the optimized methods, RUBY_EVENT_C_CALL and
  RUBY_EVENT_C_RETURN are added as events to the specialized instructions.

  Fixes [Bug #14870]

  Co-authored-by: Takashi Kokubun <takashikkbn@gmail.com>
* Make RubyVM::AbstractSyntaxTree.of raise for method/proc created in eval
  Jeremy Evans, 2021-07-29 (1 file changed, -0/+1)

  This changes Thread::Backtrace::Location#absolute_path to return nil for
  methods/procs defined in eval. If the realpath of an iseq is nil, that
  indicates it was defined in eval, in which case you cannot use
  RubyVM::AbstractSyntaxTree.of.

  Fixes [Bug #16983]

  Co-authored-by: Koichi Sasada <ko1@atdot.net>
* Enable USE_ISEQ_NODE_ID by default
  Yusuke Endoh, 2021-06-18 (1 file changed, -3/+5)

  ... which was formerly called EXPERIMENTAL_ISEQ_NODE_ID.

  See also ff69ef27b06eed1ba750e7d9cab8322f351ed245.

  https://bugs.ruby-lang.org/issues/17930
* compile.c: Pass node instead of nd_line(node) to ADD_INSN* functions
  Yusuke Endoh, 2021-05-07 (1 file changed, -0/+8)

  ... then, new_insn_core extracts nd_line(node).

  Also, if a macro "EXPERIMENTAL_ISEQ_NODE_ID" is defined, this changeset
  keeps nd_node_id(node) for each instruction. This is intended for TypeProf
  to identify which AST::Node corresponds to each instruction.

  This patch was originally authored by @yui-knk for showing in which column
  a NoMethodError occurred.

  https://github.com/ruby/ruby/compare/master...yui-knk:feature/node_id

  Co-Authored-By: Yuichiro Kaneko <yui-knk@ruby-lang.org>
* fix raise in exception with jump
  Koichi Sasada, 2021-04-22 (1 file changed, -0/+1)

  add_ensure_iseq() adds the ensure block to the end of a jump such as
  next/redo/return. However, if the rescue clause is in the body, this rescue
  catches the exception raised in the ensure clause.

  ```ruby
  iter do
    next
  rescue
    R
  ensure
    raise
  end
  ```

  In this case, R should not be executed, but it was executed without this
  patch.

  Fixes [Bug #13930]
  Fixes [Bug #16618]

  A part of the tests was written by @jeremyevans
  https://github.com/ruby/ruby/pull/4291
* Remove DEFINED_IVAR2 from enum
  John Hawthorn, 2021-03-10 (1 file changed, -1/+0)

  This version of defined? doesn't seem to be possible to emit anymore.
* check isolated Proc more strictly
  Koichi Sasada, 2020-10-29 (1 file changed, -0/+1)

  Isolated Procs are prohibited from accessing outer local variables, but
  this was violated by binding and so on, so such access should be an error.
* add #include guard hack
  卜部昌平, 2020-04-13 (1 file changed, -3/+2)

  According to the MSVC manual (*1), cl.exe can skip including a header file
  when that file:

  - contains #pragma once, or
  - starts with #ifndef, or
  - starts with #if ! defined.

  GCC has a similar trick (*2), but it is stricter (e.g. there must be
  _no tokens_ outside of #ifndef...#endif).

  Sun C lacked #pragma once for a looong time. Oracle Developer Studio 12.5
  finally implemented it, but we cannot assume such a recent version.

  This changeset modifies header files so that each of them includes exactly
  one #ifndef...#endif. I believe this is the most portable way to trigger
  compiler optimizations. [Bug #16770]

  *1: https://docs.microsoft.com/en-us/cpp/preprocessor/once
  *2: https://gcc.gnu.org/onlinedocs/cppinternals/Guard-Macros.html
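  A minimal sketch of the header layout being described; the guard name is
  illustrative rather than copied from iseq.h:

  ```c
  /* The entire file sits inside a single #ifndef...#endif, with no tokens
   * outside the guard, so both cl.exe and GCC can skip re-reading the file. */
  #ifndef EXAMPLE_ISEQ_H
  #define EXAMPLE_ISEQ_H 1

  /* ... declarations ... */

  #endif /* EXAMPLE_ISEQ_H */
  ```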
* Merge pull request #2991 from shyouhei/ruby.h
  卜部昌平, 2020-04-08 (1 file changed, -1/+1)

  Split ruby.h
* VALUE size packed callinfo (ci).
  Koichi Sasada, 2020-02-22 (1 file changed, -9/+0)

  Now, rb_call_info contains how to call the method with a tuple of
  (mid, orig_argc, flags, kwarg). In most cases, kwarg == NULL and
  mid+argc+flags only requires 64 bits, so this patch packs rb_call_info into
  a VALUE (1 word) in such cases. If we can not represent it in a VALUE, then
  imemo_callinfo is used, which contains a conventional callinfo
  (rb_callinfo, renamed from rb_call_info).

  iseq->body->ci_kw_size is removed because all callinfo is now VALUE size
  (a packed ci or a pointer to imemo_callinfo).

  To access ci information, we need to use these functions: vm_ci_mid(ci),
  _flag(ci), _argc(ci), _kwarg(ci).

  struct rb_call_info_kw_arg is renamed to rb_callinfo_kwarg.

  rb_funcallv_with_cc() and rb_method_basic_definition_p_with_cc() are
  temporarily removed because cd->ci should be marked.
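  A conceptual sketch of this style of packing; the bit layout, tag bit, and
  helper names below are invented for illustration and do not reproduce the
  actual rb_callinfo encoding:

  ```c
  #include <stdint.h>

  typedef uintptr_t VALUE;   /* stand-in for Ruby's VALUE on a 64-bit build */

  /* Hypothetical layout: bit 0 tags the word as "packed", bits 1..16 hold the
   * call flags, bits 17..31 the argc, and bits 32..63 the method id. A word
   * with bit 0 clear would instead be a pointer to a heap-allocated callinfo. */
  #define CI_PACKED 0x1

  static inline VALUE
  ci_pack(uint32_t mid, uint32_t argc, uint32_t flags)
  {
      return ((VALUE)mid << 32) | ((VALUE)(argc & 0x7fff) << 17)
           | ((VALUE)(flags & 0xffff) << 1) | CI_PACKED;
  }

  static inline int      ci_packed_p(VALUE ci) { return (int)(ci & CI_PACKED); }
  static inline uint32_t ci_mid(VALUE ci)      { return (uint32_t)(ci >> 32); }
  static inline uint32_t ci_argc(VALUE ci)     { return (uint32_t)((ci >> 17) & 0x7fff); }
  static inline uint32_t ci_flag(VALUE ci)     { return (uint32_t)((ci >> 1) & 0xffff); }
  ```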
* decouple internal.h headers
  卜部昌平, 2019-12-26 (1 file changed, -0/+2)

  Saves committers' daily life by avoiding #include-ing everything from
  internal.h and making each file do so instead. This would significantly
  speed up incremental builds.

  We take the following inclusion order in this changeset:

  1. "ruby/config.h", where _GNU_SOURCE is defined (must be the very first
     thing among everything).
  2. RUBY_EXTCONF_H if any.
  3. Standard C headers, sorted alphabetically.
  4. Other system headers, maybe guarded by #ifdef.
  5. Everything else, sorted alphabetically.

  Exceptions are those win32-related headers, which tend not to be
  self-contained (headers have inclusion order dependencies).
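  A sketch of what the top of a source file looks like under this convention;
  the file name, the particular headers chosen, and the HAVE_UNISTD_H guard
  are examples rather than taken from the commit:

  ```c
  /* hypothetical foo.c following the ordering described above */
  #include "ruby/config.h"     /* 1. first: defines _GNU_SOURCE */

  #ifdef RUBY_EXTCONF_H
  # include RUBY_EXTCONF_H     /* 2. extension config header, if any */
  #endif

  #include <stdlib.h>          /* 3. standard C headers, alphabetical */
  #include <string.h>

  #ifdef HAVE_UNISTD_H
  # include <unistd.h>         /* 4. other system headers, possibly guarded */
  #endif

  #include "internal/vm.h"     /* 5. everything else, alphabetical */
  #include "iseq.h"
  ```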
* vm_args.c (rb_warn_check): Use iseq_unique_id instead of its pointer
  Yusuke Endoh, 2019-12-09 (1 file changed, -0/+1)

  (This is the second try of 036bc1da6c6c9b0fa9b7f5968d897a9554dd770e.)

  If an iseq is GC'ed, the pointer of the iseq may be reused, which may hide
  a deprecation warning of the keyword argument change.

  http://ci.rvm.jp/results/trunk-test1@phosphorus-docker/2474221

  ```
  1) Failure:
  TestKeywordArguments#test_explicit_super_kwsplat [/tmp/ruby/v2/src/trunk-test1/test/ruby/test_keyword.rb:549]:
  --- expected
  +++ actual
  @@ -1 +1 @@
  -/The keyword argument is passed as the last hash parameter.* for `m'/m
  +""
  ```

  This change ad-hocly adds iseq_unique_id for each iseq and uses it instead
  of the iseq pointer. This covers the case where the caller is GC'ed.
  Still, the case where the callee is GC'ed is not covered.

  But anyway, it is very rare that an iseq is GC'ed. Even when it occurs, it
  just hides some warnings. It's no big deal.
* make functions static
  卜部昌平, 2019-11-19 (1 file changed, -2/+0)

  These functions are used from within a compilation unit so we can make them
  static, for better binary size. This changeset reduces the size of the
  generated ruby binary from 26,590,128 bytes to 26,584,472 bytes on my
  machine.
* delete unused functions
  卜部昌平, 2019-11-14 (1 file changed, -2/+0)

  Looking at the list of symbols inside of libruby-static.a, I found hundreds
  of functions that are defined, but used from nowhere. There can be reasons
  for each of them (e.g. some functions are specific to some platform, some
  are useful when debugging, etc). However it seems the functions deleted
  here exist for no reason.

  This changeset reduces the size of the ruby binary from 26,671,456 bytes to
  26,592,864 bytes on my machine.
* Avoid top-level search for nested constant reference from nil in defined?
  Dylan Thacker-Smith, 2019-11-13 (1 file changed, -1/+2)

  Fixes [Bug #16332]

  Constant access was changed to no longer allow top-level constant access
  through `nil`, but `defined?` wasn't changed at the same time to stay
  consistent.

  Use a separate defined type to distinguish between a constant referenced
  from the current lexical scope and one referenced from another namespace.
* cstr -> bytes
  Koichi Sasada, 2019-11-08 (1 file changed, -1/+1)

  rb_iseq_ibf_load_cstr() accepts bytes, not a NUL-terminated C string. To
  make this clear, rename it to _bytes.
* support builtin features with Ruby and C.
  Koichi Sasada, 2019-11-08 (1 file changed, -0/+2)

  Support loading builtin features written in Ruby, which are implemented
  with C builtin functions. [Feature #16254]

  Several features:

  (1) Load .rb files at boot time with the native binary.

      Now, prelude.rb is loaded at boot time. However, this file is contained
      in the interpreter as a text format and we need to compile it. This
      patch contains a feature to load it from a binary format.

  (2) __builtin_func() in Ruby calls func() written in C.

      In a Ruby file, we can write `__builtin_func()` like a method call.
      However, this is not a method call, but special syntax to call a
      function `func()` written in C. C functions should be defined in a file
      (same compile unit) which loads this .rb file.

      Functions (`func` in the above example) should be defined with:
      (a) 1st parameter: rb_execution_context_t *ec
      (b) rest parameters (0 to 15).
      (c) VALUE return type.

      This is very similar to the requirements for functions used by
      rb_define_method(), however `rb_execution_context_t *ec` is a new
      requirement.

  (3) automatic C code generation from .rb files.

      tool/mk_builtin_loader.rb creates C code to load the .rb files needed
      by the miniruby and ruby commands. This script is run by BASERUBY, so
      *.rb should be written in BASERUBY-compatible syntax. This script loads
      a .rb file, finds all of the __builtin_ prefix method calls, and
      generates a part of the C code to export the functions.

      tool/mk_builtin_binary.rb creates C code which contains the binary
      compiled Ruby files needed by the ruby command.
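  A hedged sketch of the C side of item (2) above, following only the
  requirements listed in the message; the function name and what it computes
  are invented, and the generated loader/registration code is not shown:

  ```c
  #include "ruby/ruby.h"   /* VALUE, NUM2LONG, LONG2NUM */
  #include "vm_core.h"     /* rb_execution_context_t (internal header) */

  /* Would back a hypothetical `__builtin_my_clamp(v, max)` call in a .rb file
   * compiled into the same unit. Per the commit message: first parameter is
   * the execution context, then the ordinary parameters, returning VALUE. */
  static VALUE
  my_clamp(rb_execution_context_t *ec, VALUE v, VALUE max)
  {
      long n = NUM2LONG(v), m = NUM2LONG(max);
      return LONG2NUM(n > m ? m : n);
  }
  ```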
* avoid overflow in integer multiplication
  卜部昌平, 2019-10-09 (1 file changed, -2/+4)

  This changeset basically replaces `ruby_xmalloc(x * y)` with
  `ruby_xmalloc2(x, y)`. Some convenient functions are also provided, for
  instance `rb_xmalloc_mul_add(x, y, z)`, which allocates x * y + z bytes.
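  To make the substitution concrete, a small sketch; the allocators are the
  ones named above, while the surrounding function, its parameters, and the
  repeated prototype are assumptions made for the example:

  ```c
  #include <stddef.h>
  #include "ruby/ruby.h"   /* ruby_xmalloc(), ruby_xmalloc2() */

  /* Prototype repeated from Ruby's internal headers for this sketch; the
   * exact declaring header is assumed. Allocates x * y + z bytes, checked. */
  void *rb_xmalloc_mul_add(size_t x, size_t y, size_t z);

  /* Hypothetical table allocation: a header followed by nentries slots. */
  static void *
  alloc_table(size_t nentries, size_t entry_size, size_t header_size)
  {
      /* Before: the multiplication (and addition) could silently overflow:
       *     return ruby_xmalloc(nentries * entry_size + header_size);
       * After: the checked helpers detect overflow before allocating. */
      if (header_size == 0) return ruby_xmalloc2(nentries, entry_size);
      return rb_xmalloc_mul_add(nentries, entry_size, header_size);
  }
  ```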
* Remove mark array
  Aaron Patterson, 2019-09-26 (1 file changed, -1/+0)

  We don't use this array anymore so we can remove it.

* Scan the ISEQ arena for markables and mark them
  Aaron Patterson, 2019-09-26 (1 file changed, -0/+1)

  This commit scans the ISEQ arena for objects that can be marked and marks
  them. This should make the mark array unnecessary.

* Introduce a secondary arena
  Aaron Patterson, 2019-09-26 (1 file changed, -2/+8)

  We'll scan the secondary arena during GC mark. So, we should only allocate
  "markable" instruction linked list nodes out of the secondary arena.
* Unify SUPPORT_JOKE and OPT_SUPPORT_JOKE
  Takashi Kokubun, 2019-09-03 (1 file changed, -1/+1)

  ... for simplicity and consistency. Now SUPPORT_JOKE needs to be prefixed
  with OPT_ to make the config visible in `RubyVM::VmOptsH`, and the
  inconsistency was introduced. As it has never been available for override
  in configure (no #ifndef guard), it should be fine to rename the config.

* decouple compile.c usage of imemo_ifunc
  卜部昌平, 2019-08-27 (1 file changed, -1/+1)

  After 5e86b005c0f2ef30df2f9906c7e2f3abefe286a2, I now think ANYARGS is
  dangerous and should be extinct. This commit deletes ANYARGS from struct
  vm_ifunc, but in doing so we also have to decouple the usage of this struct
  in compile.c, which (I think) is an abuse of ANYARGS.