path: root/gc.c
Commit message  [Author, Date, Files, Lines changed]
* gc_rest can change the total pages, so we need to do that first  [Aaron Patterson, 2020-11-05, 1 file, -2/+2]
* add asserts to find crash  [Aaron Patterson, 2020-11-05, 1 file, -3/+3]
* Refactor verification method  [Aaron Patterson, 2020-11-05, 1 file, -43/+37]
  Combine everything into one C function.
* take VM lock when mutating the heap  [Aaron Patterson, 2020-11-05, 1 file, -19/+27]
* ensure T_OBJECT objects have internals initialized  [Aaron Patterson, 2020-11-04, 1 file, -1/+5]
* Add `GC.auto_compact= true/false` and `GC.auto_compact`  [Aaron Patterson, 2020-11-02, 1 file, -351/+642]
  `GC.auto_compact=` and `GC.auto_compact` can be used to control when compaction runs. Setting `auto_compact=` to true will cause compaction to occur during major collections. At the moment, compaction adds significant overhead to major collections, so please test first! [Feature #17176]
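A minimal sketch of how the new flag could be toggled from C embedding code, assuming only the public C API (`rb_mGC`, `rb_funcall`) and the method names quoted in the commit message; this is a hypothetical helper, not the gc.c change itself.

```c
#include <ruby.h>

/* Hypothetical helper: turn on auto-compaction and trigger a major GC so
 * compaction actually runs. Equivalent Ruby: GC.auto_compact = true; GC.start */
static void
enable_auto_compact_and_collect(void)
{
    rb_funcall(rb_mGC, rb_intern("auto_compact="), 1, Qtrue);
    rb_funcall(rb_mGC, rb_intern("start"), 0);   /* major GC; compaction runs here */
    /* rb_funcall(rb_mGC, rb_intern("auto_compact"), 0) now returns Qtrue */
}
```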
* support Ractor.send(move: true) for more data  [Koichi Sasada, 2020-11-02, 1 file, -12/+0]
  This patch allows moving more data types.
* Objects are born embedded, so we don't need to check ivptr  [Aaron Patterson, 2020-10-28, 1 file, -2/+1]
  It's not necessary to check ivptr because objects are allocated as "embedded" by default.
* Remove another unnecessary test  [Aaron Patterson, 2020-10-28, 1 file, -5/+3]
  Same as 5be42c1ef4f7ed0a8004cad750a9ce61869bd768.
* Remove unnecessary conditional  [Aaron Patterson, 2020-10-28, 1 file, -8/+6]
  As of 0b81a484f3453082d28a48968a063fd907daa5b5, `ROBJECT_IVPTR` will always return a value, so we don't need to test whether or not we got one. T_OBJECTs always come to life as embedded objects, so they will return an ivptr, and when they become "unembedded" they will have an ivptr at that point too.
* If an object isn't embedded it will have an ivptr  [Aaron Patterson, 2020-10-28, 1 file, -3/+2]
  We don't need to check for the existence of an ivptr because non-embedded objects will always have one.
* Use a lock level for a less granular lock.  [Aaron Patterson, 2020-10-22, 1 file, -0/+2]
  We are seeing an error where code that is generated with MJIT contains references to objects that have been moved. I believe this is due to a race condition in the compaction function.

  `gc_compact` has two steps:

  1. Run a full GC to pin objects
  2. Compact / update references

  Step one is executed with `garbage_collect`. `garbage_collect` calls `gc_enter` / `gc_exit`; these functions acquire and release a JIT lock, so a lock is held for the duration of step 1. Step two is executed by `gc_compact_after_gc`. It also holds a JIT lock.

  I believe the problem is that the JIT is free to execute between step 1 and step 2. It copies call cache values, but doesn't pin them when it copies them. So the compactor thinks it's OK to move the call cache even though it is not safe. We need to hold a lock for the duration of `garbage_collect` *and* `gc_compact_after_gc`. This patch introduces a lock level which increments and decrements. The compaction function can increment and decrement the lock level and prevent MJIT from executing during both steps.
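A small, self-contained sketch of the lock-level counter described here, assuming illustrative names (`jit_lock_level`, `jit_pause`, `jit_resume`) rather than the actual gc.c/MJIT identifiers: nested callers share one logical lock, and only the outermost enter/exit actually pauses or resumes the JIT, so the pause spans both compaction steps.

```c
#include <stdio.h>

static int jit_lock_level = 0;

static void jit_pause(void)  { printf("JIT paused\n"); }   /* stand-in for the real pause */
static void jit_resume(void) { printf("JIT resumed\n"); }  /* stand-in for the real resume */

/* Only the outermost enter/exit touches the JIT. */
static void lock_level_enter(void) { if (jit_lock_level++ == 0) jit_pause(); }
static void lock_level_exit(void)  { if (--jit_lock_level == 0) jit_resume(); }

int
main(void)
{
    lock_level_enter();                              /* held across BOTH steps */
    printf("step 1: full GC, pin objects\n");        /* garbage_collect */
    printf("step 2: compact / update references\n"); /* gc_compact_after_gc */
    lock_level_exit();
    return 0;
}
```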
* Ractor-safe rb_objspace_reachable_objects_from  [Koichi Sasada, 2020-10-21, 1 file, -76/+82]
  rb_objspace_reachable_objects_from(obj) is used to traverse all objects reachable from obj. This function modifies objspace but is not ractor-safe (thread-safe). This patch fixes the problem. Strategy: (1) call the GC mark process when during_gc, (2) call a Ractor-local custom mark func when !during_gc.
* ObjectSpace.each_object with Ractors  [Koichi Sasada, 2020-10-20, 1 file, -2/+4]
  Unshareable objects should not be touched from multiple ractors, so ObjectSpace.each_object should be restricted. In multi-ractor mode, ObjectSpace.each_object only iterates shareable objects. [Feature #17270]
* sync RClass::ext::iv_index_tbl  [Koichi Sasada, 2020-10-17, 1 file, -1/+29]
  iv_index_tbl manages instance variable indexes (ID -> index). This data structure should be synchronized with other ractors, so introduce some VM locks. This patch also introduces an atomic ivar cache used by the set/getinlinecache instructions. To make ivar cache (IVC) updates possible, the iv_index_tbl data structure was changed to manage (ID -> entry), where an entry holds a serial and an index. The IVC points to this entry so that cache updates become atomic.
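An illustrative shape of the (ID -> entry) scheme just described; the struct and field names are assumptions for explanation, not the actual VM definitions.

```c
#include <stdint.h>

/* One entry per instance-variable name per class (assumed layout). */
struct iv_index_tbl_entry {
    uint64_t class_serial;  /* lets the inline cache detect a stale class */
    uint32_t index;         /* slot of the ivar in the object's ivar array */
};

/* The inline cache (IVC) stores a pointer to the entry rather than copying
 * serial/index, so updating the entry under the VM lock updates every cache
 * that references it in one step. */
struct ivc_cache_example {
    const struct iv_index_tbl_entry *entry;
};
```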
* add NULL check.  [Koichi Sasada, 2020-10-03, 1 file, -1/+3]
  DATA_PTR(ractor) can be NULL just after creation.
* Fix ASAN and don't check SPECIAL_CONST_P  [Aaron Patterson, 2020-09-28, 1 file, -24/+21]
  Heap allocated objects are never special constants. Since we're walking the heap, we know none of these objects can be special. Also, adding the object to the freelist will poison the object, so we can't check that the type is T_NONE after poison.
* Fix ASAN errors when updating call cache  [Aaron Patterson, 2020-09-28, 1 file, -0/+8]
  Invalidating the call cache walks the heap, so we need to take care to un-poison objects when examining them.
* sync rb_gc_register_mark_object()  [Koichi Sasada, 2020-09-24, 1 file, -7/+11]
  rb_vm_t::mark_object_ary is a global resource, so we need to synchronize access to it.
* Add a comment about why we're checking the finalizer table  [Aaron Patterson, 2020-09-22, 1 file, -0/+5]
* Revert "Pin values in the finalizer table"  [Aaron Patterson, 2020-09-22, 1 file, -0/+6]
  If an object has a finalizer flag set on it, prevent it from moving. This partially reverts commit 1a9dd31910699c7cd69f2a84c94af20eacd5875c.
* Update heap_pages_himem after freeing pages  [Peter Zhu, 2020-09-20, 1 file, -0/+6]
* strip trailing spaces [ci skip]  [Nobuyoshi Nakada, 2020-09-19, 1 file, -3/+3]
* Pin values in the finalizer table  [Aaron Patterson, 2020-09-18, 1 file, -7/+9]
  When finalizers run (in `rb_objspace_call_finalizer`), the table is copied to a linked list that is not managed by the GC. If compaction runs, the references in the linked list can go bad. The finalizer table shouldn't be used frequently, so let's pin references in the table so that the linked list in `rb_objspace_call_finalizer` is safe.
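A sketch of the pinning idea, using a hypothetical table-marking callback rather than the actual gc.c code: marking a value with `rb_gc_mark` (instead of `rb_gc_mark_movable`) both marks and pins it, so compaction will not move it even while a raw pointer to it lives in a non-GC-managed list.

```c
#include <ruby.h>

/* Hypothetical st_foreach callback over a finalizer-style table. */
static int
pin_table_value_i(st_data_t key, st_data_t val, st_data_t arg)
{
    rb_gc_mark((VALUE)val);   /* mark AND pin: safe to keep in a raw linked list */
    return ST_CONTINUE;
}
```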
* rb_obj_info() shows more info for T_SYMBOL  [Koichi Sasada, 2020-09-18, 1 file, -0/+11]
* Warn on a finalizer that captures the object to be finalized  [Chris Seaton, 2020-09-16, 1 file, -3/+61]
  Also improve specs and documentation for finalizers and more clearly recommend a safe code pattern to use them.
* pointers on the stack need to be pinned  [Aaron Patterson, 2020-09-15, 1 file, -1/+1]
* Fix incorrect initialization of `rb_io_t::self`.  [Samuel Williams, 2020-09-15, 1 file, -0/+2]
* Fixed heap-use-after-free on Ractor  [Nobuyoshi Nakada, 2020-09-04, 1 file, -0/+3]
* Avoid potential for rb_raise() while crashing  [Alan Wu, 2020-09-03, 1 file, -1/+10]
  rb_obj_raw_info is called while printing out crash messages and sometimes during garbage collection. Calling rb_raise() in these situations is undesirable because it can start executing ensure blocks.
* Introduce Ractor mechanism for parallel execution  [Koichi Sasada, 2020-09-03, 1 file, -124/+223]
  This commit introduces the Ractor mechanism to run Ruby programs in parallel. See doc/ractor.md for more details about Ractor. See ticket [Feature #17100] for the implementation details and discussions. [Feature #17100]

  This commit does not complete the implementation. You can find many bugs when using Ractor. Also, the specification will be changed, as this feature is experimental. You will see a warning when you make the first Ractor with `Ractor.new`. I hope this feature can help free programmers from thread-safety issues.
* Initialize new T_OBJECT as ROBJECT_EMBED  [John Hawthorn, 2020-09-02, 1 file, -0/+7]
  Previously, when an object is first initialized, ROBJECT_EMBED isn't set. This means that for brand new objects, ROBJECT_NUMIV(obj) is 0 and ROBJECT_IV_INDEX_TBL(obj) is NULL.

  Previously, this combination meant that the inline cache would never be initialized when setting an ivar on an object for the first time since iv_index_tbl was NULL, and if it were it would never be used because ROBJECT_NUMIV was 0. Both cases always fell through to the generic rb_ivar_set which would then set the ROBJECT_EMBED flag and initialize the ivar array.

  This commit changes rb_class_allocate_instance to set the ROBJECT_EMBED flag on the object initially and to initialize all members of the embedded array to Qundef. This allows the inline cache to be set correctly on first use and to be used on future uses.

  This moves rb_class_allocate_instance to gc.c, so that it has access to newobj_of. This seems appropriate given that there are other allocating methods in this file (ex. rb_data_object_wrap, rb_imemo_new).
* Fix error message for wb unprotected objects count  [Peter Zhu, 2020-09-01, 1 file, -1/+1]
  This error is about wb unprotected objects, not old objects.
* Fixed argument types  [Nobuyoshi Nakada, 2020-09-02, 1 file, -4/+4]
* Format with proper conversion specifiers instead of casts  [Nobuyoshi Nakada, 2020-09-02, 1 file, -54/+74]
* Use RSTRING_LENINT for overflow check  [Nobuyoshi Nakada, 2020-09-01, 1 file, -1/+1]
* Don't read past the end of the Ruby string  [Peter Zhu, 2020-09-01, 1 file, -1/+1]
  Ruby strings don't always have a null terminator, so we can't use them as regular C strings. By reading only the first len bytes of the Ruby string, we won't read past its end.
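A hedged sketch of the (pointer, length) pattern the commit relies on; `copy_prefix` is a hypothetical helper, not the gc.c code that was changed. The point is to always pair `RSTRING_PTR` with `RSTRING_LEN` instead of assuming a terminating NUL byte.

```c
#include <ruby.h>

/* Copy at most `want` bytes out of a Ruby String without assuming a
 * terminating NUL byte. */
static VALUE
copy_prefix(VALUE rstr, long want)
{
    long len = RSTRING_LEN(rstr);            /* bytes actually in the string */
    if (want > len) want = len;              /* never read past the end */
    return rb_str_new(RSTRING_PTR(rstr), want);
}
```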
* include/ruby/backward/2/r_cast.h: deprecate  [卜部昌平, 2020-08-27, 1 file, -0/+2]
  Remove all usages of RCAST() so that the header file can be excluded from ruby/ruby.h's dependency.
* Correctly account for heap_pages_final_slots so it does not underflow  [Peter Zhu, 2020-08-25, 1 file, -3/+7]
  `rb_objspace_call_finalizer` creates zombies, but does not do the correct accounting (it should increment `heap_pages_final_slots` whenever it creates a zombie). When we do correct accounting, `heap_pages_final_slots` should never underflow (the check for underflow was introduced in 39725a4db6b121c7779b2b34f7da9d9339415a1c). The implementation moves the accounting from the functions that call `make_zombie` into `make_zombie` itself, which reduces code duplication.
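A minimal sketch of the "do the accounting where the thing is created" refactor described above, with illustrative names only (not the actual gc.c counters or structs).

```c
#include <stddef.h>

static size_t final_slots = 0;   /* stand-in for heap_pages_final_slots */

struct slot { int zombie; };

/* Every zombie is created here, so the counter is incremented here and only
 * here; callers cannot forget it, and the matching decrement in the sweep
 * path cannot underflow. */
static void
make_zombie_example(struct slot *s)
{
    s->zombie = 1;
    final_slots++;
}
```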
* Remove write barrier exemption for T_ICLASS  [Alan Wu, 2020-08-17, 1 file, -6/+3]
  Before this commit, iclasses were "shady", or not protected by write barriers. Because of that, the GC needs to spend more time marking these objects than otherwise. Applications that make heavy use of modules should see a reduction in GC time, as they have a significant number of live iclasses on the heap.

  - Put logic for iclass method table ownership into a function
  - Remove calls to WB_UNPROTECT and insert write barriers for iclasses

  This commit relies on the following invariant: for any non-origin iclass `I`, `RCLASS_M_TBL(I) == RCLASS_M_TBL(RBasic(I)->klass)`. This invariant did not hold prior to 98286e9 for classes and modules that have prepended modules. [Feature #16984]
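A hedged sketch of what "insert write barriers" means in practice: storing a reference through the public `RB_OBJ_WRITE` macro instead of a plain assignment preserves the generational GC invariants, so `WB_UNPROTECT` is no longer needed. The struct and function here are illustrative, not the iclass code.

```c
#include <ruby.h>

struct holder { VALUE cached_module; };

static void
store_reference(VALUE owner, struct holder *h, VALUE mod)
{
    /* A plain `h->cached_module = mod;` would hide the old->young edge from
     * the generational GC; RB_OBJ_WRITE records it. */
    RB_OBJ_WRITE(owner, &h->cached_module, mod);
}
```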
* Enable arm64 optimizations that exist for power/x86 (#3393)  [AGSaidi, 2020-08-14, 1 file, -0/+13]
  * Enable unaligned accesses on arm64

    64-bit Arm platforms support unaligned accesses. Running the string benchmarks, this change improves performance by an average of 1.04x (min 0.96x, max 1.21x, median 1.01x).

  * arm64 enable gc optimizations

    Similar to x86 and powerpc optimizations.

    |      |compare-ruby|built-ruby|
    |:-----|-----------:|---------:|
    |hash1 |       0.225|     0.237|
    |      |           -|     1.05x|
    |hash2 |       0.110|     0.110|
    |      |       1.00x|         -|

  * vm_exec.c: improve performance for arm64

    |                        |compare-ruby|built-ruby|
    |:-----------------------|-----------:|---------:|
    |vm_array                |     26.501M|   27.959M|
    |                        |           -|     1.06x|
    |vm_attr_ivar            |     21.606M|   31.429M|
    |                        |           -|     1.45x|
    |vm_attr_ivar_set        |     21.178M|   26.113M|
    |                        |           -|     1.23x|
    |vm_backtrace            |       6.621|     6.668|
    |                        |           -|     1.01x|
    |vm_bigarray             |     26.205M|   29.958M|
    |                        |           -|     1.14x|
    |vm_bighash              |    504.155k|  479.306k|
    |                        |       1.05x|         -|
    |vm_block                |     16.692M|   21.315M|
    |                        |           -|     1.28x|
    |block_handler_type_iseq |       5.083|     7.004|
    |                        |           -|     1.38x|
* Don't pin objects if we're just walking the heap  [Aaron Patterson, 2020-08-03, 1 file, -1/+3]
  Walking the heap can inadvertently pin objects. Only mark the object's pin bit if the mark_func_data pointer is NULL (similar to the mark bits).
* fix mark bit operation.  [Koichi Sasada, 2020-08-02, 1 file, -1/+5]
  To optimize the sweep phase, there is a bit operation that sets mark bits for the out-of-range bits in the last bit_t. However, if there are no out-of-range bits, it sets the entire last bit_t as mark bits, which breaks the assumption that unmarked objects will be swept. GC_DEBUG=1 makes sizeof(RVALUE)=64 on my machine and this condition happens. It took me one Saturday to debug this.
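A simplified, self-contained sketch of the guard the fix adds; the types and names are illustrative, not the actual gc.c bitmap code. Trailing bits past the used range of the last word are pre-marked so sweep skips them, but only when such bits actually exist.

```c
#include <stdint.h>
#include <stdio.h>

typedef uint64_t bits_t;
#define BITS_BITLENGTH 64

/* Pre-mark the bits past `used` in the final bitmap word so sweep ignores
 * them -- but only if there really are unused trailing bits. */
static void
mark_trailing_bits(bits_t *last_word, unsigned int used)
{
    unsigned int rest = used % BITS_BITLENGTH;
    if (rest != 0) {                           /* the buggy path skipped this check */
        *last_word |= ~(bits_t)0 << rest;      /* set only bits >= used */
    }
}

int
main(void)
{
    bits_t w = 0;
    mark_trailing_bits(&w, 60);   /* marks only the top 4 bits */
    printf("%016llx\n", (unsigned long long)w);
    mark_trailing_bits(&w, 64);   /* exact fit: nothing to pre-mark */
    printf("%016llx\n", (unsigned long long)w);
    return 0;
}
```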
* Add memsize support for the call cache table  [Alan Wu, 2020-07-20, 1 file, -0/+24]
  Each class/module/iclass can potentially have its own cc table. Count their malloc usage.
* Fix missing imemo cases in objspace_dump by refactoring  [Alan Wu, 2020-07-10, 1 file, -20/+25]
  imemo_callcache and imemo_callinfo were not handled by the `objspace` module and were showing up as "unknown" in the dump. Extract the code for naming imemos and use that in both the GC and the `objspace` module.
* gc.c: Cast int literal "1" to bits_t  [Yusuke Endoh, 2020-07-08, 1 file, -1/+1]
  ... because shifting by more than 31 bits has undefined behavior (depending upon platform). Coverity Scan found this issue.
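The one-line idea behind the fix, shown as a standalone sketch; the `bits_t` typedef here is an assumption mirroring the commit's description, not the gc.c definition.

```c
#include <stdint.h>

typedef uint64_t bits_t;

static bits_t
nth_bit(unsigned int n)            /* n can be >= 32 */
{
    return (bits_t)1 << n;         /* `1 << n` would shift a 32-bit int: undefined for large n */
}
```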
* Expand heap pages to be exactly 16KiB  [Aaron Patterson, 2020-07-06, 1 file, -5/+4]
  This commit expands heap pages to be exactly 16KiB and eliminates the `REQUIRED_SIZE_BY_MALLOC` constant. I believe the goal of `REQUIRED_SIZE_BY_MALLOC` was to make the heap pages consume some multiple of OS page size. 16KiB is convenient because OS page size is typically 4KiB, so one Ruby page is four OS pages.

  Do not guess how malloc works
  =============================

  We should not try to guess how `malloc` works and instead request (and use) four OS pages. Here is my reasoning:

  1. Not all mallocs will store metadata in the same region as user requested memory. jemalloc specifically states[1]:

     > Information about the states of the runs is stored as a page map at the beginning of each chunk.

  2. We're using `posix_memalign` to request memory. This means that the first address must be divisible by the alignment. Our allocation is page aligned, so if malloc is storing metadata *before* the page, then we've already crossed page boundaries.

  3. Some allocators like glibc will use the memory at the end of the page. I am able to demonstrate that glibc will return pointers within the page boundary that contains `heap_page_body`[2]. We *expected* the allocation to look like this:

     ![Expected alignment](https://user-images.githubusercontent.com/3124/85803661-8a81d600-b6fc-11ea-8cb6-7dbdb434a43b.png)

     But since `heap_page` is allocated immediately after `heap_page_body`[3], instead the layout looks like this:

     ![Actual alignment](https://user-images.githubusercontent.com/3124/85803714-a1c0c380-b6fc-11ea-8c17-8b37369e17ee.png)

     This is not optimal because `heap_page` gets allocated immediately after `heap_page_body`. We frequently write to `heap_page`, so the bottom OS page of `heap_page_body` is very likely to be copied.

  One more object per page
  ========================

  In jemalloc, allocation requests are rounded to the nearest boundary, which in this case is 16KiB[4], so `REQUIRED_SIZE_BY_MALLOC` space is just wasted on jemalloc. On glibc, the space is not wasted, but instead it is very likely to cause page faults.

  Instead of wasting space or causing page faults, let's just use the space to store one more Ruby object. Using the space to store one more Ruby object will prevent page faults, stop wasting space, decrease memory usage, decrease GC time, etc.

  1. https://people.freebsd.org/~jasone/jemalloc/bsdcan2006/jemalloc.pdf
  2. https://github.com/ruby/ruby/commit/33390d15e7a6f803823efcb41205167c8b126fbb
  3. https://github.com/ruby/ruby/blob/289a28e68f30e879760fd000833b512d506a0805/gc.c#L1757-L1763
  4. https://people.freebsd.org/~jasone/jemalloc/bsdcan2006/jemalloc.pdf page 4

  Co-authored-by: John Hawthorn <john@hawthorn.email>
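A small, self-contained sketch of the allocation pattern described above: ask `posix_memalign` for exactly 16 KiB, aligned to 16 KiB, so one Ruby heap page maps onto four 4 KiB OS pages. The constant names are illustrative, not the gc.c macros.

```c
#define _POSIX_C_SOURCE 200112L   /* for posix_memalign on strict compilers */
#include <stdio.h>
#include <stdlib.h>

#define HEAP_PAGE_ALIGN (1 << 14)       /* 16 KiB */
#define HEAP_PAGE_SIZE  HEAP_PAGE_ALIGN /* use the whole aligned block */

int
main(void)
{
    void *page_body = NULL;
    if (posix_memalign(&page_body, HEAP_PAGE_ALIGN, HEAP_PAGE_SIZE) != 0) {
        return 1;
    }
    printf("page body at %p: %d bytes, 16 KiB aligned\n", page_body, HEAP_PAGE_SIZE);
    free(page_body);
    return 0;
}
```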
* get_envparam_double: do not goto into a branch  [卜部昌平, 2020-06-29, 1 file, -4/+6]
  I'm not necessarily against every goto in general, but jumping into a branch is definitely a bad idea. Better refactor.
* gc_marks_finish: do not goto into a branch  [卜部昌平, 2020-06-29, 1 file, -3/+3]
  I'm not necessarily against every goto in general, but jumping into a branch is definitely a bad idea. Better refactor.
* Convert RMoved to a doubly linked list  [Aaron Patterson, 2020-06-22, 1 file, -35/+34]
  This commit converts RMoved slots to a doubly linked list. I want to convert this to a doubly linked list because the read barrier (currently in development) must remove nodes from the moved list sometimes. Removing nodes from the list is much easier if the list is doubly linked. In addition, we can reuse the list manipulation routines.
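A minimal doubly-linked-list sketch showing why removal gets easier (illustrative struct, not the actual RMoved layout): with `prev`/`next` pointers a node unlinks itself in O(1), without walking the list from the head.

```c
#include <stddef.h>
#include <stdio.h>

struct moved_node {
    struct moved_node *prev;
    struct moved_node *next;
};

/* Unlink `node` from whatever list it is on; no head pointer or traversal
 * needed, which is exactly what a read barrier wants to do cheaply. */
static void
moved_list_remove(struct moved_node *node)
{
    if (node->prev) node->prev->next = node->next;
    if (node->next) node->next->prev = node->prev;
    node->prev = node->next = NULL;
}

int
main(void)
{
    struct moved_node a = {NULL, NULL}, b = {NULL, NULL}, c = {NULL, NULL};
    a.next = &b; b.prev = &a; b.next = &c; c.prev = &b;   /* a <-> b <-> c */
    moved_list_remove(&b);                                /* now a <-> c   */
    printf("a.next == &c: %d\n", a.next == &c);
    return 0;
}
```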