| Commit message | Author | Age | Files | Lines |
| |
This reverts commit b9f030954a8a1572032f3548b39c5b8ac35792ce.
[Bug #18170]
|
| |
UTF-16/UTF-32
* And simplify callers of get_actual_encoding().
* See [Feature #18949].
* See https://github.com/ruby/ruby/pull/6322#issuecomment-1242758474
|
| |
rb_ary_tmp_new suggests that the array is temporary in some way, but
that's not true: it simply creates an array that is hidden and not on the
transient heap. This commit renames it to rb_ary_hidden_new.
|
| |
The RARRAY_LITERAL_FLAG was added in commit
5871ecf956711fcacad7c03f2aef95115ed25bc4 to improve CoW performance for
array literals by not keeping track of reference counts.
This commit reverts that commit and replaces it with an alternative
implementation that is more generic and works for all frozen arrays. Since
frozen arrays cannot be modified, we don't need to set the
RARRAY_SHARED_ROOT_FLAG and we don't need to do reference counting.
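As a rough illustration of the idea (a toy sketch, not CRuby's actual array
code or API), a copy of a frozen array can simply borrow the source's buffer
with no shared-root flag and no reference count, because that buffer can
never be mutated:
```c
#include <stdbool.h>
#include <stddef.h>
#include <stdlib.h>
#include <string.h>

/* Toy array type; "owns_buf" is false when borrowing a frozen buffer. */
typedef struct {
    long *buf;
    size_t len;
    bool frozen;
    bool owns_buf;
} toy_array;

toy_array toy_ary_copy(const toy_array *src)
{
    toy_array dst = { NULL, src->len, false, false };
    if (src->frozen) {
        /* No shared-root flag, no refcount: the buffer can never change. */
        dst.buf = src->buf;
    }
    else {
        dst.buf = malloc(src->len * sizeof(long));
        if (dst.buf) {
            memcpy(dst.buf, src->buf, src->len * sizeof(long));
            dst.owns_buf = true;
        }
    }
    return dst;
}
```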
|
| |
Move some macros in array.c to internal/array.h so that other files
can also access these macros.
|
| |
This commit prevents the stack from being marked twice: once via the
Fiber, and once via the Thread. It adds an assertion that the ec on the
thread is the same as the ec on the Fiber being marked via the thread.
|
| |
Prior to this commit it was possible to call `ObjectSpace._id2ref` with
the object_id of a static symbol offset by some amount and get back a new,
incorrectly tagged symbol:
```
> sensible_sym = ObjectSpace._id2ref(:a.object_id)
=> :a
> nonsense_sym = ObjectSpace._id2ref(:a.object_id + 40)
=> :a
> sensible_sym == nonsense_sym
=> false
```
`nonsense_sym` ends up tagged with `RUBY_ID_INSTANCE` instead of
`RUBY_ID_LOCAL`. That means we can do silly things like:
```
> foo = Object.new
> foo.instance_variable_set(:a, 123)
(irb):2:in `instance_variable_set': `a' is not allowed as an instance variable name (NameError)
> foo.instance_variable_set(ObjectSpace._id2ref(:a.object_id + 40), 123)
=> 123
> foo.instance_variables
=> [:a]
```
This was happening because `get_id_entry` ignores the tag bits when
looking up the symbol. So `rb_id2str(symid)` would return a value and
then we'd continue on with the nonsense `symid`.
This commit prevents the situation by checking that the `symid` actually
matches what we get back from `get_id_entry`. Now we get a `RangeError`
for the nonsense id:
```
> ObjectSpace._id2ref(:a.object_id)
=> :a
> ObjectSpace._id2ref(:a.object_id + 40)
(irb):1:in `_id2ref': 0x000000000013f408 is not symbol id value (RangeError)
```
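A minimal sketch of that kind of round-trip check (the names and tag layout
below are invented for illustration, not the real symbol.c code): look the id
up with the tag bits ignored, rebuild the canonical id, and reject the input
if it doesn't match:
```c
#include <stdbool.h>
#include <stdio.h>

#define TAG_BITS 4

/* Like get_id_entry, the lookup itself ignores the tag bits. */
static unsigned int lookup_serial(unsigned int id) { return id >> TAG_BITS; }

/* The canonical id originally registered for a given serial and tag. */
static unsigned int canonical_id(unsigned int serial, unsigned int tag)
{
    return (serial << TAG_BITS) | tag;
}

static bool valid_id_p(unsigned int id, unsigned int expected_tag)
{
    unsigned int serial = lookup_serial(id);
    return canonical_id(serial, expected_tag) == id;  /* must round-trip */
}

int main(void)
{
    unsigned int ok = canonical_id(42, 1);
    printf("%d\n", valid_id_p(ok, 1));      /* 1: tag intact */
    printf("%d\n", valid_id_p(ok + 2, 1));  /* 0: mis-tagged id rejected */
    return 0;
}
```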
Co-authored-by: John Hawthorn <jhawthorn@github.com>
|
| |
Arrays created as literals during iseq compilation don't need a
reference count since they can never be modified. The previous
implementation would mutate the hidden array's reference count,
causing copy-on-write invalidation.
This commit adds a RARRAY_LITERAL_FLAG for arrays created through
rb_ary_literal_new. Arrays created with this flag do not have a reference
count stored and are simply assumed to have an infinite number of references.
Co-authored-by: Jean Boussier <jean.boussier@gmail.com>
|
| |
This commit enables Arrays to move between size pools during compaction.
This can occur if the array is mutated such that it would fit in a
different size pool when embedded.
The move is carried out in two stages:
1. The RVALUE is moved to a destination heap during the object movement
phase of compaction.
2. The array data is re-embedded and the original buffer freed if
required. This happens during the update references step.
|
| |
This fixes invalid and inconsistent results for the Fixnum*Fixnum case
where the result of the multiplication does not fit in 64-bit
on OpenBSD/mips64. For example:
$ for x in 1 23; do ruby31 -e 'p(54306000000000*86400)'; done
14409380628474329524
11410664325873689790
Cases where an argument was Bignum, as well as cases where the result
of the multiplication fits in 64-bit are fine:
$ for x in 1 23; do ruby31 -e 'p(54306000*86400)'; done
4692038400000
4692038400000
$ for x in 1 23; do ruby31 -e 'p(5430600000000000000000*86400)'; done
469203840000000000000000000
469203840000000000000000000
This was originally discovered by running the tests for the openssl gem
on OpenBSD/mips64 and having one test fail for a date far in the future.
I eventually traced this to the generic multiplication issue.
The underlying cause is the use of the int128_t type. This commit avoids
the int128_t type in this case, falling back to the slower conversion code,
which, in the overflow case, turns the Fixnums into Bignums and then
performs the multiplication.
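To make the distinction concrete, here is a standalone sketch (not ruby's
actual numeric code) of the two ways to detect that a 64-bit product
overflowed: through a 128-bit intermediate, and through a portable checked
multiply:
```c
#include <stdint.h>
#include <stdio.h>

#ifdef __SIZEOF_INT128__
/* "Fast" path: compute the full product in 128 bits and check truncation. */
static int mul_overflows_i128(int64_t a, int64_t b, int64_t *out)
{
    __int128 p = (__int128)a * b;
    *out = (int64_t)p;
    return p != (__int128)*out;
}
#endif

/* Fallback path without int128_t (GCC/Clang builtin shown here). */
static int mul_overflows_portable(int64_t a, int64_t b, int64_t *out)
{
    return __builtin_mul_overflow(a, b, out);
}

int main(void)
{
    int64_t r;
    printf("%d\n", mul_overflows_portable(54306000LL, 86400LL, &r));          /* 0 */
    printf("%d\n", mul_overflows_portable(5430600000000000LL, 86400LL, &r));  /* 1 */
#ifdef __SIZEOF_INT128__
    printf("%d\n", mul_overflows_i128(5430600000000000LL, 86400LL, &r));      /* 1 */
#endif
    return 0;
}
```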
|
| |
This was a public method, so we should probably keep it.
|
| |
And re-embed any strings that can now fit inside the slot they've been
moved to
|
| |
rb_imemo_new is defined again later in the file.
|
| |
This reverts commit 9d927204e7b86eb00bfd07a060a6383139edf741.
|
| |
... only when the message string has a newline.
`p StandardError.new("foo\nbar")` now prints `#<StandardError: "foo\nbar">'
instead of:
#<StandardError: foo
bar>
[Bug #18170]
|
| |
Implements [Feature #12655]
Co-authored-by: Nobuyoshi Nakada <nobu@ruby-lang.org>
|
| |
Having more size pools will allow us to allocate larger objects
through Variable Width Allocation.
I have attached some benchmark results below.
Discourse:
On Discourse, we don't see much change in response times. We do see
a small reduction in RSS.
Branch RSS: 377.8 MB
Master RSS: 396.3 MB
railsbench:
On railsbench, we don't see a big change in RPS or p99 performance.
We see a small increase in RSS.
Branch RPS: 815.38
Master RPS: 811.73
Branch p99: 1.69 ms
Master p99: 1.68 ms
Branch RSS: 90.6 MB
Master RSS: 89.4 MB
liquid:
We don't see a significant change in liquid performance.
Branch parse & render: 29.041 I/s
Master parse & render: 29.211 I/s
|
| |
In December 2021, we opened an [issue] to solicit feedback regarding the
porting of the YJIT codebase from C99 to Rust. There were some
reservations, but this project was given the go ahead by Ruby core
developers and Matz. Since then, we have successfully completed the port
of YJIT to Rust.
The new Rust version of YJIT has reached parity with the C version, in
that it passes all the CRuby tests, is able to run all of the YJIT
benchmarks, and performs similarly to the C version (because it works
the same way and largely generates the same machine code). We've even
incorporated some design improvements, such as a more fine-grained
constant invalidation mechanism which we expect will make a big
difference in Ruby on Rails applications.
Because we want to be careful, YJIT is guarded behind a configure
option:
```shell
./configure --enable-yjit # Build YJIT in release mode
./configure --enable-yjit=dev # Build YJIT in dev/debug mode
```
By default, YJIT does not get compiled and cargo/rustc is not required.
If YJIT is built in dev mode, then `cargo` is used to fetch development
dependencies, but when building in release, `cargo` is not required,
only `rustc`. At the moment YJIT requires Rust 1.60.0 or newer.
The YJIT command-line options remain mostly unchanged, and more details
about the build process are documented in `doc/yjit/yjit.md`.
The CI tests have been updated and do not take any more resources than
before.
The development history of the Rust port is available at the following
commit for interested parties:
https://github.com/Shopify/ruby/commit/1fd9573d8b4b65219f1c2407f30a0a60e537f8be
Our hope is that Rust YJIT will be compiled and included as a part of
system packages and compiled binaries of the Ruby 3.2 release. We do not
anticipate any major problems as Rust is well supported on every
platform which YJIT supports, but to make sure that this process works
smoothly, we would like to reach out to those who take care of building
systems packages before the 3.2 release is shipped and resolve any
issues that may come up.
[issue]: https://bugs.ruby-lang.org/issues/18481
Co-authored-by: Maxime Chevalier-Boisvert <maximechevalierb@gmail.com>
Co-authored-by: Noah Gibbs <the.codefolio.guy@gmail.com>
Co-authored-by: Kevin Newton <kddnewton@gmail.com>
|
| |
This commit reintroduces finer-grained constant cache invalidation.
After 8008fb7 got merged, it was causing issues on token-threaded
builds (such as on Windows).
The issue was that when you're iterating through instruction sequences
and using the translator functions to get back the instruction structs,
you're either using `rb_vm_insn_null_translator` or
`rb_vm_insn_addr2insn2` depending on whether it's a direct-threading build.
`rb_vm_insn_addr2insn2` does some normalization to always return to
you the non-trace version of whatever instruction you're looking at.
`rb_vm_insn_null_translator` does not do that normalization.
This means that when you're looping through the instructions if you're
trying to do an opcode comparison, it can change depending on the type
of threading that you're using. This can be very confusing. So, this
commit creates a new translator function
`rb_vm_insn_normalizing_translator` to always return the non-trace
version so that opcode comparisons don't have to worry about different
configurations.
[Feature #18589]
|
| |
Currently, the number of incremental marking steps is calculated based
on the number of pooled pages available. This means that if we make Ruby
heap pages larger, it would run fewer incremental marking steps (which
would mean each incremental marking step takes longer).
This commit changes incremental marking to run after every
INCREMENTAL_MARK_STEP_ALLOCATIONS number of allocations. This means that
the behaviour of incremental marking remains the same regardless of the
Ruby heap page size.
I've run the Discourse benchmarks and did not see a
significant change in response times beyond the margin of error. This is
expected as this new incremental marking algorithm behaves very
similarly to the previous one.
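As a toy sketch of the allocation-driven cadence (the constant name matches
the message, everything else here is invented), the allocator simply counts
allocations and runs one bounded marking step whenever the counter hits the
threshold:
```c
#include <stdio.h>

#define INCREMENTAL_MARK_STEP_ALLOCATIONS 500   /* hypothetical value */

static unsigned int allocs_since_step;

static void incremental_mark_step(void)
{
    /* ...mark a bounded number of objects, then yield back... */
    printf("incremental mark step\n");
}

static void toy_allocate(void)
{
    if (++allocs_since_step >= INCREMENTAL_MARK_STEP_ALLOCATIONS) {
        allocs_since_step = 0;
        incremental_mark_step();
    }
    /* ...hand out a slot... */
}

int main(void)
{
    for (int i = 0; i < 2000; i++) toy_allocate();  /* prints 4 steps */
    return 0;
}
```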
|
| |
Currently it has only one function prototype.
|
| |
This reverts commits for [Feature #18589]:
* 8008fb7352abc6fba433b99bf20763cf0d4adb38
"Update formatting per feedback"
* 8f6eaca2e19828e92ecdb28b0fe693d606a03f96
"Delete ID from constant cache table if it becomes empty on ISEQ free"
* 629908586b4bead1103267652f8b96b1083573a8
"Finer-grained inline constant cache invalidation"
MSWin builds on AppVeyor have been crashing since the merge.
|
| |
Current behavior - caches depend on a global counter. All constant mutations cause caches to be invalidated.
```ruby
class A
B = 1
end
def foo
A::B # inline cache depends on global counter
end
foo # populate inline cache
foo # hit inline cache
C = 1 # global counter increments, all caches are invalidated
foo # misses inline cache due to `C = 1`
```
Proposed behavior - caches depend on name components. Only constant mutations with corresponding names will invalidate the cache.
```ruby
class A
B = 1
end
def foo
A::B # inline cache depends on constants named "A" and "B"
end
foo # populate inline cache
foo # hit inline cache
C = 1 # caches that depend on the name "C" are invalidated
foo # hits inline cache because IC only depends on "A" and "B"
```
Examples of breaking the new cache:
```ruby
module C
# Breaks `foo` cache because "A" constant is set and the cache in foo depends
# on "A" and "B"
class A; end
end
B = 1
```
We expect the new cache scheme to be invalidated less often because names aren't frequently reused. With the cache being invalidated less, we can rely on its stability more to keep our constant references fast and reduce the need to throw away generated code in YJIT.
|
| |
Previously, we would build a new `superclasses` array for each class,
even though for all immediate subclasses of a class, the array is
identical.
This avoids duplicating the arrays on leaf classes (those without
subclasses) by calculating and storing a "superclasses including self"
array on a class when it's first inherited and sharing that among all of
its subclasses.
An additional trick used is that the "superclasses array including self"
is valid as "self"'s superclasses array. It just has its own class at the
end. We can use this to avoid an extra pointer of storage and can use
one bit of a flag to track that we've "upgraded" the array.
|
| |
Previously when checking ancestors, we would walk all the way up the
ancestry chain checking each parent for a matching class or module.
I believe this was especially unfriendly to CPU cache since for each
step we need to check two cache lines (the class and class ext).
This check is used quite often in:
* case statements
* rescue statements
* Calling protected methods
* Class#is_a?
* Module#===
* Module#<=>
I believe it's most common to check a class against a parent class, so
this commit aims to improve that (unfortunately it does not help with
checking for an included Module).
This is done by storing on each class the number of parent classes and an
array of all of them, in order (BasicObject is at index 0). Using this we can
check whether a class is a subclass of another in constant time since we
know the location to expect it in the hierarchy.
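A self-contained sketch of that lookup (simplified data layout, not CRuby's
actual class structs): each class records its ancestors in order with
BasicObject at index 0, so the subclass check is a depth comparison plus one
array access:
```c
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

typedef struct toy_class {
    const char *name;
    size_t superclass_depth;                 /* number of ancestors below */
    const struct toy_class **superclasses;   /* [0] = root ... [depth-1] = parent */
} toy_class;

static bool toy_subclass_p(const toy_class *klass, const toy_class *maybe_ancestor)
{
    if (klass == maybe_ancestor) return true;
    if (maybe_ancestor->superclass_depth >= klass->superclass_depth) return false;
    /* An ancestor, if present at all, can only sit at its own depth. */
    return klass->superclasses[maybe_ancestor->superclass_depth] == maybe_ancestor;
}

int main(void)
{
    toy_class basic = { "BasicObject", 0, NULL };
    const toy_class *obj_supers[] = { &basic };
    toy_class object = { "Object", 1, obj_supers };
    const toy_class *str_supers[] = { &basic, &object };
    toy_class string = { "String", 2, str_supers };

    printf("%d\n", toy_subclass_p(&string, &object));  /* 1 */
    printf("%d\n", toy_subclass_p(&object, &string));  /* 0 */
    return 0;
}
```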
|
| |
Changes size and capacity of darray to size_t to support more
elements.
Adds functions to darray that use GC allocation functions.
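For illustration only (this is not the real darray.h interface), a minimal
dynamic array whose size and capacity are size_t, so element counts aren't
capped at what an int can hold:
```c
#include <stddef.h>
#include <stdlib.h>

typedef struct {
    size_t size;       /* elements in use */
    size_t capacity;   /* elements allocated */
    int *data;
} toy_darray;

/* Append one element, growing the backing store as needed.
 * Returns 0 on allocation failure. */
static int toy_darray_append(toy_darray *d, int value)
{
    if (d->size == d->capacity) {
        size_t new_cap = d->capacity ? d->capacity * 2 : 8;
        int *p = realloc(d->data, new_cap * sizeof(int));
        if (!p) return 0;
        d->data = p;
        d->capacity = new_cap;
    }
    d->data[d->size++] = value;
    return 1;
}

int main(void)
{
    toy_darray d = {0};
    for (int i = 0; i < 100; i++) toy_darray_append(&d, i);
    free(d.data);
    return 0;
}
```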
|
| |
In the past, many internal functions were declared in intern.h
under the include/ruby directory, because there were no headers for
internal use.
|
| |
Tabs were expanded because the file did not have any tab indentation in unedited lines.
Please update your editor config, and use misc/expand_tabs.rb in the pre-commit hook.
|
| |
On 32-bit systems, VWA causes class_serial to not be aligned (it only
guarantees 4 byte alignment but class_serial is 8 bytes and requires 8
byte alignment). This commit uses a hack to allocate class_serial
through malloc. Once VWA allocates with 8 byte alignment in the future,
we will revert this commit.
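A tiny sketch of that kind of workaround (struct and names simplified): when
the slot only guarantees 4-byte alignment, the 64-bit counter lives behind a
pointer to memory obtained from malloc, which is suitably aligned for any
built-in type:
```c
#include <stdint.h>
#include <stdlib.h>

typedef struct {
    /* ...fields that only need 4-byte alignment... */
    uint64_t *class_serial;   /* points at malloc'ed, 8-byte-aligned storage */
} toy_classext;

static int toy_classext_init(toy_classext *ext)
{
    ext->class_serial = malloc(sizeof(*ext->class_serial));
    if (!ext->class_serial) return 0;
    *ext->class_serial = 0;
    return 1;
}

static void toy_classext_free(toy_classext *ext)
{
    free(ext->class_serial);
    ext->class_serial = NULL;
}

int main(void)
{
    toy_classext ext;
    if (toy_classext_init(&ext)) toy_classext_free(&ext);
    return 0;
}
```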
|
| |
This check is needed to fix a bug in error_highlight when a NameError
occurs in eval'ed code.
https://github.com/ruby/error_highlight/pull/16
The same check for proc/method was already introduced in
64ac984129a7a4645efe5ac57c168ef880b479b2.
|
| |
This commit adds a Ractor cache for every size pool. Previously, all VWA
allocated objects used the slowpath and locked the VM.
On a micro-benchmark that benchmarks String allocation:
VWA turned off:
29.196591 0.889709 30.086300 ( 9.434059)
VWA before this commit:
29.279486 41.477869 70.757355 ( 12.527379)
VWA after this commit:
16.782903 0.557117 17.340020 ( 4.255603)
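A rough sketch of the shape of such a cache (names and sizes invented; the
real code refills from GC heap pages under the VM lock): each ractor holds
one free list per size pool, the hot path pops from it without locking, and
only an empty cache falls back to the slow, locked refill:
```c
#include <stddef.h>
#include <stdio.h>

#define SIZE_POOL_COUNT 5       /* hypothetical number of size pools */
#define REFILL_BATCH    4

typedef struct free_slot { struct free_slot *next; } free_slot;

typedef struct {
    free_slot *freelist[SIZE_POOL_COUNT];   /* one cache per size pool */
} toy_ractor_cache;

/* Slow path stand-in: in the real thing this takes the VM lock and
 * carves slots out of a heap page; here we just use a static arena. */
static free_slot arena[64];
static size_t arena_used;

static free_slot *refill_cache_locked(toy_ractor_cache *cache, int pool)
{
    for (int i = 0; i < REFILL_BATCH && arena_used < sizeof(arena) / sizeof(arena[0]); i++) {
        free_slot *s = &arena[arena_used++];
        s->next = cache->freelist[pool];
        cache->freelist[pool] = s;
    }
    return cache->freelist[pool];
}

/* Hot path: pop from the per-ractor cache with no global lock. */
static void *toy_alloc(toy_ractor_cache *cache, int pool)
{
    free_slot *slot = cache->freelist[pool];
    if (!slot) slot = refill_cache_locked(cache, pool);
    if (!slot) return NULL;
    cache->freelist[pool] = slot->next;
    return slot;
}

int main(void)
{
    toy_ractor_cache cache = {0};
    printf("%p\n", toy_alloc(&cache, 1));
    return 0;
}
```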
|
| |
A dumped iseq binary cannot contain unnamed symbols/IDs, so ID 0 is
stored instead. As `struct rb_id_table` disallows ID 0, and also to keep
the distinction, re-assign a new temporary ID based on the local
variable table index when loading from the binary, just as the parser
does.
|
| |
Updating RCLASS_PARENT_SUBCLASSES and RCLASS_MODULE_SUBCLASSES while
compacting can trigger the read barrier. This commit makes
RCLASS_SUBCLASSES a doubly linked list with a dedicated head object so
that we can add and remove entries from the list without having to touch
an object in the Ruby heap.
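Roughly, the shape is an intrusive doubly linked list whose nodes are
ordinary malloc'ed structs with a dedicated head, so insertions and removals
rewrite only those nodes (a simplified sketch, not the actual
rb_subclass_entry code):
```c
#include <stdlib.h>

typedef struct subclass_entry {
    struct subclass_entry *prev, *next;
    void *klass;   /* the class object; never read or written by list ops */
} subclass_entry;

typedef struct { subclass_entry head; } subclass_list;  /* dedicated head node */

static void list_init(subclass_list *list)
{
    list->head.prev = list->head.next = &list->head;
    list->head.klass = NULL;
}

static subclass_entry *list_push(subclass_list *list, void *klass)
{
    subclass_entry *e = malloc(sizeof(*e));
    if (!e) return NULL;
    e->klass = klass;
    e->prev = &list->head;
    e->next = list->head.next;
    list->head.next->prev = e;
    list->head.next = e;
    return e;
}

/* O(1) removal given the entry; no heap object is touched. */
static void list_remove(subclass_entry *e)
{
    e->prev->next = e->next;
    e->next->prev = e->prev;
    free(e);
}

int main(void)
{
    static int dummy_class;
    subclass_list list;
    list_init(&list);
    subclass_entry *e = list_push(&list, &dummy_class);
    if (e) list_remove(e);
    return 0;
}
```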
|
| |
With RVARGC we always store the rb_classext_t in the same slot as the
RClass struct that refers to it. So we don't need to store the pointer
or access through the pointer anymore and can switch the RCLASS_EXT
macro to use an offset.
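A small sketch of the difference (toy structs, not the real
RClass/rb_classext_t layout): with the ext data always in the same slot right
after the class struct, the accessor becomes pointer arithmetic instead of a
load through a stored pointer:
```c
#include <stddef.h>
#include <stdio.h>

typedef struct { long flags; } toy_rclass;
typedef struct { long serial; } toy_classext;

/* The ext struct is laid out in the same slot, directly after the class. */
typedef struct {
    toy_rclass base;
    toy_classext ext;
} toy_class_slot;

/* Instead of storing a toy_classext* inside toy_rclass and loading it,
 * compute the address from a fixed offset off the class pointer. */
#define TOY_RCLASS_EXT(c) \
    ((toy_classext *)((char *)(c) + offsetof(toy_class_slot, ext)))

int main(void)
{
    toy_class_slot slot = { { 0 }, { 42 } };
    printf("%ld\n", TOY_RCLASS_EXT(&slot.base)->serial);  /* prints 42 */
    return 0;
}
```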
|
| |
... to allow class.c to use the function
|
| |
This commit adds support for embedded strings with variable capacity and
uses Variable Width Allocation to allocate strings.
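As a simplified sketch (not the real RString layout), an embedded string
records how many bytes its slot can hold and keeps the bytes inline via a
flexible array member, so a bigger slot simply means a bigger embeddable
capacity:
```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

typedef struct {
    size_t len;
    size_t capa;      /* bytes available in the inline buffer */
    char bytes[];     /* flexible array member: fills the rest of the slot */
} toy_embedded_str;

static toy_embedded_str *toy_str_new(const char *src, size_t slot_size)
{
    toy_embedded_str *s = malloc(slot_size);          /* stands in for a GC slot */
    if (!s) return NULL;
    s->capa = slot_size - sizeof(*s);
    s->len = strlen(src);
    if (s->len >= s->capa) { free(s); return NULL; }  /* would need a heap buffer */
    memcpy(s->bytes, src, s->len + 1);
    return s;
}

int main(void)
{
    toy_embedded_str *s = toy_str_new("hello", 64);
    if (s) { printf("%s (capa %zu)\n", s->bytes, s->capa); free(s); }
    return 0;
}
```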
|
| |
The allocation functions no longer assume that one RVALUE needs to be
allocated.
|
| |
ast.c: Use kept script_lines data instead of re-opening the source file
|
| |
* process.c: Add Process._fork
This API is intended for application monitoring libraries to hook the
fork event.
[Feature #17795]
Co-authored-by: Nobuyoshi Nakada <nobu@ruby-lang.org>
|
| |
`RubyVM.keep_script_lines` makes it possible to keep script lines
for each ISeq and AST. This feature is for debugger/REPL
support.
```ruby
RubyVM.keep_script_lines = true
RubyVM::keep_script_lines = true
eval("def foo = nil\ndef bar = nil")
pp RubyVM::InstructionSequence.of(method(:foo)).script_lines
```
|