author     Andreas Klebinger <klebinger.andreas@gmx.at>   2020-04-04 02:52:12 +0200
committer  Marge Bot <ben+marge-bot@smart-cactus.org>     2020-05-21 12:13:45 -0400
commit     13f6c9d0376214b22d4cd16bd3a8cd7b8d864990 (patch)
tree       58c8a3f7fa56f89112f3cd3182c8aa50eaeece5a /compiler/GHC/CmmToAsm.hs
parent     78c6523c5106fc56b653fc14fda5741913da8fdc (diff)
download   haskell-13f6c9d0376214b22d4cd16bd3a8cd7b8d864990.tar.gz
Refactor linear reg alloc to remember past assignments.
When assigning registers we now first try registers we assigned to in the past, instead of picking the "first" one. This is extremely helpful when dealing with loops in which variables are dead for part of the loop. It matters for patterns like this:

        foo = arg1
    loop:
        use(foo)
        ...
        foo = getVal()
        goto loop;

There we:
* assign foo to the register of arg1.
* use foo; it is dead after this use, as it is overwritten afterwards.
* do other things.
* look for a register to put foo in.

If we pick an arbitrary register it might differ from the one the start of the loop expects foo to be in. To fix this we simply look for past register assignments for the given variable. If we find one and that register is free, we use it (see the sketch below). This reduces the need for fixup blocks which match the register assignment between blocks; in the example above, between the end and the head of the loop.

This patch also moves branch weight estimation ahead of register allocation and adds a flag to control it (cmm-static-pred).
* It means the linear allocator is more likely to assign the hotter code paths first.
* If it assigns these first we are:
  + less likely to spill on the hot path.
  + less likely to introduce fixup blocks on the hot path.

These two measures combined are surprisingly effective. Based on nofib we get, on average:
* -0.9% instructions executed
* -0.1% reads/writes
* -0.2% code size
* -0.1% compiler allocations
* -0.9% compile time
* -0.8% runtime

Most of the benefits are simply the result of removing redundant moves and spills. The reduced compiler allocations are likely the result of less code being generated (the added lookup is mostly non-allocating).
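The "remember past assignments" lookup, as a minimal self-contained sketch. All type and function names here are illustrative stand-ins, not the ones used by GHC's actual linear register allocator, which is considerably more involved:

    -- Sketch of "prefer the previous assignment" register selection.
    module RegAllocSketch where

    import qualified Data.Map.Strict as M

    type VirtualReg = Int   -- a virtual (pre-allocation) register
    type RealReg    = Int   -- a machine register

    -- Remembers which real register each virtual register last lived in.
    type AssignHistory = M.Map VirtualReg RealReg

    -- Choose a real register for vreg: reuse its previous home if it is
    -- still free, otherwise fall back to an arbitrary free register
    -- (the old behaviour).
    pickReg :: AssignHistory -> [RealReg] -> VirtualReg -> Maybe RealReg
    pickReg history free vreg =
      case M.lookup vreg history of
        Just r | r `elem` free -> Just r
        _ -> case free of
               (r:_) -> Just r    -- arbitrary free register
               []    -> Nothing   -- no free register: caller would spill

Reusing the old register means the value is already where the loop head expects it, so no fixup move is needed on the back edge.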
Diffstat (limited to 'compiler/GHC/CmmToAsm.hs')
-rw-r--r--  compiler/GHC/CmmToAsm.hs  2
1 file changed, 1 insertion, 1 deletion
diff --git a/compiler/GHC/CmmToAsm.hs b/compiler/GHC/CmmToAsm.hs
index 544edc801e..f771d79254 100644
--- a/compiler/GHC/CmmToAsm.hs
+++ b/compiler/GHC/CmmToAsm.hs
@@ -728,7 +728,7 @@ cmmNativeGen dflags this_mod modLoc ncgImpl us fileIds dbgMap cmm count
let optimizedCFG :: Maybe CFG
optimizedCFG =
- optimizeCFG (cfgWeightInfo dflags) cmm <$!> postShortCFG
+ optimizeCFG (gopt Opt_CmmStaticPred dflags) (cfgWeightInfo dflags) cmm <$!> postShortCFG
maybeDumpCfg dflags optimizedCFG "CFG Weights - Final" proc_name
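For reference, a rough, hypothetical sketch of the flag-gating that the changed call site enables. None of these definitions are GHC's actual code; the point is only that Opt_CmmStaticPred, i.e. the cmm-static-pred flag, switches the static branch-prediction pass on or off:

    -- Self-contained sketch with stand-in types; not GHC's real definitions.
    module StaticPredSketch where

    data CFG     = CFG      -- stand-in for the real control-flow graph
    data Weights = Weights  -- stand-in for cfgWeightInfo's result

    -- Placeholder for the heuristic that assigns static branch weights.
    staticBranchPrediction :: Weights -> CFG -> CFG
    staticBranchPrediction _ cfg = cfg

    -- The Bool (driven by the cmm-static-pred flag) gates the prediction
    -- pass; other CFG optimisations (elided here) would run either way.
    optimizeCFG :: Bool -> Weights -> CFG -> CFG
    optimizeCFG doStaticPred weights cfg
      | doStaticPred = staticBranchPrediction weights cfg
      | otherwise    = cfg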