path: root/gc.c
Commit message (Author, Date, Files, Lines)
...
* * expand tabs. [ci skip] (git, 2022-09-27, 1 file, -2/+2)
  Tabs were expanded because the file did not have any tab indentation in unedited lines. Please update your editor config, and use misc/expand_tabs.rb in the pre-commit hook.
* This commit implements the Object Shapes technique in CRuby. (Jemma Issroff, 2022-09-26, 1 file, -74/+145)
  Object Shapes is used for accessing instance variables and representing the "frozenness" of objects. Object instances have a "shape" and the shape represents some attributes of the object (currently which instance variables are set and the "frozenness"). Shapes form a tree data structure, and when a new instance variable is set on an object, that object "transitions" to a new shape in the shape tree. Each shape has an ID that is used for caching. The shape structure is independent of class, so objects of different types can have the same shape.

  For example:

  ```ruby
  class Foo
    def initialize
      # Starts with shape id 0
      @a = 1 # transitions to shape id 1
      @b = 1 # transitions to shape id 2
    end
  end

  class Bar
    def initialize
      # Starts with shape id 0
      @a = 1 # transitions to shape id 1
      @b = 1 # transitions to shape id 2
    end
  end

  foo = Foo.new # `foo` has shape id 2
  bar = Bar.new # `bar` has shape id 2
  ```

  Both `foo` and `bar` instances have the same shape because they both set instance variables of the same name in the same order.

  This technique can help to improve inline cache hits as well as generate more efficient machine code in JIT compilers.

  This commit also adds some methods for debugging shapes on objects. See `RubyVM::Shape` for more details.

  For more context on Object Shapes, see [Feature: #18776]

  Co-Authored-By: Aaron Patterson <tenderlove@ruby-lang.org>
  Co-Authored-By: Eileen M. Uchitelle <eileencodes@gmail.com>
  Co-Authored-By: John Hawthorn <john@hawthorn.email>
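  As a hedged aside (not part of the commit message; the concrete shape ids below are hypothetical, continuing the commit's own numbering): because shapes form a tree keyed by which instance variables have been set, setting the same variables in a different order walks a different branch of the tree and yields a different shape.

  ```ruby
  class Baz
    def initialize
      # Starts with shape id 0
      @b = 1 # transitions along the @b-first branch, a different shape id
      @a = 1 # transitions again along that branch
    end
  end

  baz = Baz.new # `baz` does not share the shape id 2 of `foo` and `bar`
  ```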
* Rework vm_core to use `int first_lineno` struct member. (Samuel Williams, 2022-09-26, 1 file, -3/+2)
* Skip poisoned regions (Nobuyoshi Nakada, 2022-08-09, 1 file, -1/+2)
  Poisoned regions cannot be accessed without unpoisoning outside gc.c. Specifically, debug.gem is terminated by AddressSanitizer.

  ```
  SUMMARY: AddressSanitizer: use-after-poison iseq_collector.c:39 in iseq_i
  ```
* Lock the VM for rb_gc_writebarrier_unprotect (Peter Zhu, 2022-07-28, 1 file, -13/+17)
  When using Ractors, rb_gc_writebarrier_unprotect requires a VM lock since it modifies the bitmaps.
* Make array slices views rather than copies (Peter Zhu, 2022-07-28, 1 file, -0/+14)
  Before this commit, if the slice fits in VWA, it would make a copy rather than a view. This is slower as it requires a memcpy of the contents.
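  A rough way to poke at this from Ruby, not taken from the commit (it assumes ObjectSpace.dump reports buffer sharing for arrays; the exact JSON fields are an implementation detail and may differ by version):

  ```ruby
  require "objspace"

  ary   = Array.new(64) { |i| i }
  slice = ary[8, 32]            # with this change, a view rather than a copy

  puts ObjectSpace.dump(slice)  # inspect whether the slice shares ary's buffer
  ```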
* Refactor gc_ref_update_array (Peter Zhu, 2022-07-28, 1 file, -20/+18)
* Suppress use-after-free warning by gcc-12 (Nobuyoshi Nakada, 2022-07-28, 1 file, -0/+1)
* Adjust styles [ci skip] (Nobuyoshi Nakada, 2022-07-27, 1 file, -2/+4)
* * expand tabs. [ci skip] (git, 2022-07-27, 1 file, -4/+4)
  Tabs were expanded because the file did not have any tab indentation in unedited lines. Please update your editor config, and use misc/expand_tabs.rb in the pre-commit hook.
* Refactored poisoning and unpoisoning freelist to simpler API (Jemma Issroff, 2022-07-26, 1 file, -22/+40)
* Rename rb_ary_tmp_new to rb_ary_hidden_new (Peter Zhu, 2022-07-26, 1 file, -2/+2)
  rb_ary_tmp_new suggests that the array is temporary in some way, but that's not true: it just creates an array that's hidden and not on the transient heap. This commit renames it to rb_ary_hidden_new.
* Fix format specifier (Nobuyoshi Nakada, 2022-07-25, 1 file, -1/+1)
  `uintptr_t` is not always `unsigned long`, but it can be cast to a void pointer safely.
* Expand tabs [ci skip] (Takashi Kokubun, 2022-07-21, 1 file, -1681/+1681)
  [Misc #18891]
* [Bug #18929] Fix heap creation thrashing in GC (Peter Zhu, 2022-07-21, 1 file, -0/+13)
  Before this commit, if we did not have enough slots after sweeping but had pages on the tomb heap, the GC would frequently allocate and deallocate pages. This is because after sweeping it would set allocatable pages (since there were not enough slots) but free the pages on the tomb heap. This commit reuses pages on the tomb heap if there are not enough slots after sweeping.
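  One hedged way to watch page counts around a collection from Ruby (not from the commit; the GC.stat key names are assumed from this era of CRuby and can change between versions):

  ```ruby
  before = GC.stat.slice(:heap_allocated_pages, :heap_tomb_pages, :total_freed_pages)
  GC.start
  after  = GC.stat.slice(:heap_allocated_pages, :heap_tomb_pages, :total_freed_pages)
  p before: before, after: after   # large swings across repeated GCs would suggest page churn
  ```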
* Refactor macros of array.c (Peter Zhu, 2022-07-21, 1 file, -19/+9)
  Move some macros in array.c to internal/array.h so that other files can also access these macros.
* Ensure _id2ref finds symbols with the correct type (Daniel Colson, 2022-07-20, 1 file, -1/+1)
  Prior to this commit it was possible to call `ObjectSpace._id2ref` with an offset static symbol object_id and get back a new, incorrectly tagged symbol:

  ```
  > sensible_sym = ObjectSpace._id2ref(:a.object_id)
  => :a
  > nonsense_sym = ObjectSpace._id2ref(:a.object_id + 40)
  => :a
  > sensible_sym == nonsense_sym
  => false
  ```

  `nonsense_sym` ends up tagged with `RUBY_ID_INSTANCE` instead of `RB_ID_LOCAL`. That means we can do silly things like:

  ```
  > foo = Object.new
  > foo.instance_variable_set(:a, 123)
  (irb):2:in `instance_variable_set': `a' is not allowed as an instance variable name (NameError)
  > foo.instance_variable_set(ObjectSpace._id2ref(:a.object_id + 40), 123)
  => 123
  > foo.instance_variables
  => [:a]
  ```

  This was happening because `get_id_entry` ignores the tag bits when looking up the symbol. So `rb_id2str(symid)` would return a value and then we'd continue on with the nonsense `symid`. This commit prevents the situation by checking that the `symid` actually matches what we get back from `get_id_entry`. Now we get a `RangeError` for the nonsense id:

  ```
  > ObjectSpace._id2ref(:a.object_id)
  => :a
  > ObjectSpace._id2ref(:a.object_id + 40)
  (irb):1:in `_id2ref': 0x000000000013f408 is not symbol id value (RangeError)
  ```

  Co-authored-by: John Hawthorn <jhawthorn@github.com>
* [Bug #18928] Fix crash in WeakMap (Peter Zhu, 2022-07-20, 1 file, -10/+11)
  In wmap_live_p, if is_pointer_to_heap returns false, then the page is either in the tomb or has already been freed, so the object is dead. In this case, wmap_live_p should return false.
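  For context, a minimal sketch of the Ruby-level API this code path backs (not from the commit; whether an entry has vanished by the time you look depends on when GC actually runs):

  ```ruby
  wmap = ObjectSpace::WeakMap.new
  key  = Object.new
  wmap[key] = "payload"

  p wmap[key]   # => "payload" while `key` is alive
  key = nil
  GC.start
  p wmap.size   # likely 0 once the key has been collected
  ```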
* Fix free objects count condition (Nobuyoshi Nakada, 2022-07-20, 1 file, -2/+3)
  Free objects have `T_NONE` as the builtin type. A pointer to a valid array element will never be `NULL`.
* Implement Objects on VWA (Peter Zhu, 2022-07-15, 1 file, -26/+94)
  This commit implements Objects on Variable Width Allocation. This allows Objects with more ivars to be embedded (i.e. contents directly follow the object header) which improves performance through better cache locality.
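  A hedged way to see the effect of embedding (not from the commit; exact byte counts depend on the build and the size pool, so treat the output as indicative only):

  ```ruby
  require "objspace"

  class Point
    def initialize
      @x = 1
      @y = 2
      @z = 3
    end
  end

  # With VWA, an object with several ivars can stay embedded in a larger slot
  # instead of spilling its ivars into a separately allocated buffer.
  p ObjectSpace.memsize_of(Point.new)
  ```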
* [Feature #18901] Support size pool movement for Arrays (Matt Valentine-House, 2022-07-12, 1 file, -7/+18)
  This commit enables Arrays to move between size pools during compaction. This can occur if the array is mutated such that it would fit in a different size pool when embedded.

  The move is carried out in two stages, as in the sketch after this entry:

  1. The RVALUE is moved to a destination heap during the object movement phase of compaction.
  2. The array data is re-embedded and the original buffer freed if required. This happens during the update references step.
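  A sketch that exercises this path (not from the commit; the shape of the hash returned by GC.compact is a version-dependent assumption here):

  ```ruby
  ary = Array.new(1000) { |i| i }
  ary.slice!(8..)       # now small enough to re-embed in a smaller slot
  stats = GC.compact    # compaction may move `ary` to a different size pool
  p stats
  ```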
* Add expand_heap option to GC.verify_compaction_references (Matt Valentine-House, 2022-07-11, 1 file, -4/+17)
  In order to reliably test compaction we need to be able to move objects between size pools. In order for this to happen there must be pages in a size pool into which we can allocate.

  The existing implementation of `double_heap` only doubled the existing number of pages in the heap, so if a size pool had a low number of pages (or 0) it's not guaranteed that enough space will be created to move objects into that size pool.

  This commit deprecates the `double_heap` option and replaces it with `expand_heap` instead. `expand_heap` will expand each heap by enough pages to hold a number of slots defined by `GC_HEAP_INIT_SLOTS` or by `heap->total_pages`, whichever is larger.

  If both `double_heap` and `expand_heap` are present, a deprecation warning will be shown for `double_heap` and the `expand_heap` behaviour will take precedence.

  Given that this is an API intended for debugging and testing GC compaction I'm not concerned about the extra memory usage or time taken to create the pages. However, for completeness, running the following `test.rb` and using `time` on my Macbook Pro shows the following memory usage and time impact:

      pp "RSS (kb): #{`ps -o rss #{Process.pid}`.lines.last.to_i}"
      GC.verify_compaction_references(double_heap: true, toward: :empty)
      pp "RSS (kb): #{`ps -o rss #{Process.pid}`.lines.last.to_i}"

      ❯ time make run
      ./miniruby -I./lib -I. -I.ext/common -r./arm64-darwin21-fake ./test.rb
      "RSS (kb): 24000"
      <internal:gc>:251: warning: double_heap is deprecated and will be removed
      "RSS (kb): 25232"
      ________________________________________________________
      Executed in  124.37 millis    fish           external
         usr time   82.22 millis    0.09 millis   82.12 millis
         sys time   28.76 millis    2.61 millis   26.15 millis

      ❯ time make run
      ./miniruby -I./lib -I. -I.ext/common -r./arm64-darwin21-fake ./test.rb
      "RSS (kb): 24000"
      "RSS (kb): 49040"
      ________________________________________________________
      Executed in  150.13 millis    fish           external
         usr time  103.32 millis    0.10 millis  103.22 millis
         sys time   35.73 millis    2.59 millis   33.14 millis
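  Usage of the new option as described above (`double_heap: true` still works for now but emits the deprecation warning):

  ```ruby
  GC.verify_compaction_references(expand_heap: true, toward: :empty)
  ```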
* Extract `atomic_inc_wraparound` function (Nobuyoshi Nakada, 2022-07-10, 1 file, -10/+12)
* Add `asan_unpoisoning_object` to execute the block with unpoisoning (Nobuyoshi Nakada, 2022-07-10, 1 file, -8/+19)
* Split `rb_raw_obj_info` (Nobuyoshi Nakada, 2022-07-10, 1 file, -12/+35)
* Cycle `obj_info_buffers_index` atomically (Nobuyoshi Nakada, 2022-07-10, 1 file, -7/+14)
* `APPEND_S` for no conversion formats (Nobuyoshi Nakada, 2022-07-10, 1 file, -6/+17)
* Rewrite `APPENDF` using variadic arguments (Nobuyoshi Nakada, 2022-07-10, 1 file, -42/+42)
* Use `size_t` for `rb_raw_obj_info` (Nobuyoshi Nakada, 2022-07-10, 1 file, -3/+3)
* Use `asan_unpoison_object_temporary` (Nobuyoshi Nakada, 2022-07-10, 1 file, -24/+12)
* Get rid of static buffer in `obj_info` (Nobuyoshi Nakada, 2022-07-10, 1 file, -3/+5)
* Gather heap page size conditions combination (Nobuyoshi Nakada, 2022-07-07, 1 file, -31/+38)
  When similar combinations of conditions are separated in two places, it is harder to make sure that the conditional blocks match each other.
* Improve error message for segv in read_barrier_handler (Peter Zhu, 2022-07-07, 1 file, -3/+12)
  If the page_body is a null pointer, then read_barrier_handler will crash with an unrelated message. This commit improves the error message.

  Before:

      test.rb:1: [BUG] Couldn't unprotect page 0x0000000000000000, errno: Cannot allocate memory

  After:

      test.rb:1: [BUG] read_barrier_handler: segmentation fault at 0x14
* Fix crash in compaction due to unlocked page (Peter Zhu, 2022-07-07, 1 file, -0/+5)
  The page of src could be partially compacted, so it may contain T_MOVED. Sweeping a page may read objects on this page, so we need to lock the page.
* Fix typo in gc_compact_move (Peter Zhu, 2022-07-07, 1 file, -1/+5)
  The page we're sweeping is on the destination heap `dheap`, not the source heap `heap`.
* Adjust indents [ci skip] (Nobuyoshi Nakada, 2022-07-06, 1 file, -15/+16)
* Extract `protect_page_body` to fix mismatched braces (Nobuyoshi Nakada, 2022-06-18, 1 file, -13/+15)
* Disable Mach exception handlers when read barriers in place (KJ Tsanaktsidis, 2022-06-18, 1 file, -1/+49)
  The GC compaction mechanism implements a kind of read barrier by marking some (OS) pages as unreadable, and installing a SIGBUS/SIGSEGV handler to detect when they're accessed and invalidate an attempt to move the object.

  Unfortunately, when a debugger is attached to the Ruby interpreter on Mac OS, the debugger will trap the EXC_BAD_ACCESS mach exception before the runtime can transform that into a SIGBUS signal and dispatch it. Thus, execution gets stuck; any attempt to continue from the debugger re-executes the line that caused the exception and no forward progress can be made. This makes it impossible to debug either the Ruby interpreter or a C extension whilst compaction is in use.

  To fix this, we disable the EXC_BAD_ACCESS handler when installing the SIGBUS/SIGSEGV handlers, and re-enable them once the compaction is done. The debugger will still trap on the attempt to read the bad page, but it will be trapping the SIGBUS signal, rather than the EXC_BAD_ACCESS mach exception. It's possible to continue from this in the debugger, which invokes the signal handler and allows forward progress to be made.
* Suppress code unused unless GC_CAN_COMPILE_COMPACTION (Nobuyoshi Nakada, 2022-06-17, 1 file, -0/+22)
* Include runtime checks for compaction support (Peter Zhu, 2022-06-16, 1 file, -48/+26)
  Commit 0c36ba53192c5a0d245c9b626e4346a32d7d144e changed GC compaction methods to not be implemented when not supported. However, that commit only does compile time checks (which currently only checks for WASM), but there are additional compaction support checks during run time. This commit changes it so that GC compaction methods aren't defined during run time if the platform does not support GC compaction. [Bug #18829]
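  With this change a plain run-time feature check suffices (a sketch, not from the commit; it assumes that on unsupported platforms the compaction methods are simply not defined, as the message describes):

  ```ruby
  if GC.respond_to?(:compact)
    GC.compact
  else
    warn "GC compaction is not supported on this platform"
  end
  ```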
* Rename GC_COMPACTION_SUPPORTED (Peter Zhu, 2022-06-16, 1 file, -14/+14)
  Naming this macro GC_COMPACTION_SUPPORTED is misleading because it only checks whether compaction is supported at compile time. [Bug #18829]
* Remove MJIT worker thread (#6006) (Takashi Kokubun, 2022-06-15, 1 file, -3/+0)
  [Misc #18830]
* Move String RVALUES between pools (Matt Valentine-House, 2022-06-13, 1 file, -27/+75)
  And re-embed any strings that can now fit inside the slot they've been moved to.
* Fix major GC thrashing (Peter Zhu, 2022-06-08, 1 file, -3/+5)
  Only growth heaps are allowed to start major GCs. Before this patch, growth heaps were defined as size pools that freed more slots than they had empty slots (i.e. there were more dead objects than empty space). But if the size pool is relatively stable and tightly packed with mostly old objects and has allocatable pages, then it would be incorrectly classified as a growth heap and trigger major GC. But since it's stable, it would not use any of the allocatable pages and would forever be classified as a growth heap, causing major GC thrashing. This commit changes the definition of growth heap to require that the size pool have no allocatable pages.
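  A hedged sketch for spotting this kind of thrashing from Ruby (not from the commit; the GC.stat key name is assumed from this era of CRuby):

  ```ruby
  before = GC.stat(:major_gc_count)
  100_000.times { +"workload" }  # stand-in workload; substitute the code under test
  after  = GC.stat(:major_gc_count)
  p major_gcs: after - before    # an unexpectedly high delta suggests thrashing
  ```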
* Fix compilation error when USE_RVARGC=0 (Peter Zhu, 2022-06-08, 1 file, -3/+1)
  force_major_gc_count was not defined when USE_RVARGC=0.
* Add key force_major_gc_count to GC.stat_heap (Peter Zhu, 2022-06-08, 1 file, -0/+3)
  force_major_gc_count is the number of times the size pool forced major GC to run.
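  Reading the new key per size pool (a sketch, not from the commit; it assumes GC.stat_heap with no argument returns one stats hash per size pool, as it does in this era of CRuby):

  ```ruby
  GC.stat_heap.each do |pool, stats|
    puts "size pool #{pool}: force_major_gc_count=#{stats[:force_major_gc_count]}"
  end
  ```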
* Remove while loop over heap_prepare (Peter Zhu, 2022-06-07, 1 file, -8/+52)
  Having a while loop over `heap_prepare` makes the GC logic difficult to understand (it is difficult to understand when and why `heap_prepare` yields a free page). It is also a source of bugs and can cause an infinite loop if `heap_page` never yields a free page.
* Typedef built-in function types (Nobuyoshi Nakada, 2022-06-02, 1 file, -1/+1)
* Move `GC.verify_compaction_references` [Bug #18779] (Nobuyoshi Nakada, 2022-06-02, 1 file, -38/+8)
  Define `GC.verify_compaction_references` as a built-in ruby method, according to GC compaction support via `GC::OPTS`.
* Adjust indent and nesting [ci skip] (Nobuyoshi Nakada, 2022-06-02, 1 file, -3/+1)