author    Sajan Karumanchi <sajan.karumanchi@amd.com>  2021-02-02 12:42:14 +0100
committer Florian Weimer <fweimer@redhat.com>          2021-02-02 12:42:15 +0100
commit    6e02b3e9327b7dbb063958d2b124b64fcb4bbe3f
tree      f5fa119e5c2db62c16cdbaaa01d856da390e607a
parent    caa60b79f8c98e97455078542a14b4c750e48ede
x86: Adding an upper bound for Enhanced REP MOVSB.
While optimizing memcpy for AMD machines, we found that vector move operations outperform Enhanced REP MOVSB for data transfers above the L2 cache size on the Zen3 architecture. To handle this case, we add an upper-bound parameter for Enhanced REP MOVSB: '__x86_rep_movsb_stop_threshold'.

Based on the large-bench results, this parameter is set to the L2 cache size on AMD machines, starting with the Zen3 architecture (which supports the ERMS feature). For architectures other than AMD, it is set to the computed value of the non-temporal threshold parameter.

Reviewed-by: Premachandra Mallappa <premachandra.mallappa@amd.com>
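As a rough illustration of the selection rule described above, here is a minimal C sketch. All names below are hypothetical stand-ins for exposition; glibc computes these values in its CPU-feature and tunables code, not in a function like this.

/* Hedged sketch of how __x86_rep_movsb_stop_threshold is chosen,
   per the commit message above.  Parameter names are illustrative.  */
static unsigned long
pick_rep_movsb_stop_threshold (int is_amd_zen3_or_later, int has_erms,
                               unsigned long l2_cache_size,
                               unsigned long non_temporal_threshold)
{
  /* On AMD machines from Zen3 onward that support ERMS, vector moves
     beat REP MOVSB past the L2 cache size, so cap REP MOVSB there.  */
  if (is_amd_zen3_or_later && has_erms)
    return l2_cache_size;
  /* On other architectures, reuse the computed non-temporal
     threshold as the upper bound.  */
  return non_temporal_threshold;
}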
Diffstat (limited to 'sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S')
-rw-r--r--  sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S | 7 +++++--
1 file changed, 5 insertions(+), 2 deletions(-)
diff --git a/sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S b/sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S
index 0980c95378..50bb1fccb2 100644
--- a/sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S
+++ b/sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S
@@ -30,7 +30,10 @@
       load and aligned store. Load the last 4 * VEC and first VEC
       before the loop and store them after the loop to support
       overlapping addresses.
-   6. If size >= __x86_shared_non_temporal_threshold and there is no
+   6. On machines with the ERMS feature, if size is greater than or
+      equal to __x86_rep_movsb_threshold and less than
+      __x86_rep_movsb_stop_threshold, then REP MOVSB will be used.
+   7. If size >= __x86_shared_non_temporal_threshold and there is no
       overlap between destination and source, use non-temporal store
       instead of aligned store.  */
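To make the numbered strategy above concrete, here is a hedged C rendering of how steps 6 and 7 partition the size range. The authoritative dispatch is the assembly in this file; the names below are illustrative only.

#include <stdbool.h>
#include <stddef.h>

enum copy_path { VEC_LOOP, REP_MOVSB, NON_TEMPORAL };

/* Sketch of steps 6 and 7 from the comment above (an assumed reading
   of the dispatch, not the actual implementation).  */
static enum copy_path
pick_path (size_t n, bool overlap,
           size_t movsb_threshold, size_t movsb_stop_threshold,
           size_t non_temporal_threshold)
{
  /* Step 6: the ERMS window [movsb_threshold, movsb_stop_threshold).  */
  if (n >= movsb_threshold && n < movsb_stop_threshold)
    return REP_MOVSB;
  /* Step 7: very large, non-overlapping copies go non-temporal.  */
  if (n >= non_temporal_threshold && !overlap)
    return NON_TEMPORAL;
  /* Otherwise: the aligned vector-loop paths (steps 1-5).  */
  return VEC_LOOP;
}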
@@ -240,7 +243,7 @@ L(return):
 	ret
 
 L(movsb):
-	cmp	__x86_shared_non_temporal_threshold(%rip), %RDX_LP
+	cmp	__x86_rep_movsb_stop_threshold(%rip), %RDX_LP
 	jae	L(more_8x_vec)
 	cmpq	%rsi, %rdi
 	jb	1f
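For readers unfamiliar with the L(movsb) path, the following is a minimal inline-asm sketch of the forward REP MOVSB copy it performs once the size check against the new stop threshold passes. This is for exposition only and omits the overlap and alignment handling the assembly does.

#include <stddef.h>

/* Copy n bytes forward with REP MOVSB (RDI = dst, RSI = src,
   RCX = count).  A sketch, not glibc's implementation.  */
static void
rep_movsb_copy (void *dst, const void *src, size_t n)
{
  __asm__ volatile ("rep movsb"
                    : "+D" (dst), "+S" (src), "+c" (n)
                    :
                    : "memory");
}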