Multiple home units allow you to load several packages, which may depend on
each other, into one GHC session. This allows both GHCi and HLS to support
multi-component projects more naturally.
Public Interface
~~~~~~~~~~~~~~~~
To specify multiple units, the -unit @⟨filename⟩ flag is given once per unit,
each time with a response file containing the arguments for that unit.
A response file is a newline-separated list of arguments.
```
ghc -unit @unitLibCore -unit @unitLib
```
where the `unitLibCore` response file contains the normal arguments that cabal would pass to `--make` mode.
```
-this-unit-id lib-core-0.1.0.0
-i
-isrc
LibCore.Utils
LibCore.Types
```
The response file for lib can specify a dependency on lib-core, so that modules in lib can use modules from lib-core.
```
-this-unit-id lib-0.1.0.0
-package-id lib-core-0.1.0.0
-i
-isrc
Lib.Parse
Lib.Render
```
Then, when the compiler starts in --make mode, it will compile both units, lib and lib-core.
There is also very basic support for multiple home units in GHCi: at the
moment you can start a GHCi session with multiple units, but only the
:reload command is supported. Most commands in GHCi assume a single home
unit, so working out how to adapt the interface to multiple loaded home
units remains future work.
Options used when working with Multiple Home Units
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
A few extra flags have been introduced specifically for working with
multiple home units. They allow a home unit to behave more like an
installed package, for example by specifying the package name, module
visibility and reexported modules.
-working-dir ⟨dir⟩
It is common to assume that a package is compiled in the directory
where its cabal file resides, and thus that all paths used in the
compiler are relative to this directory. When there are multiple home
units the compiler is often not operating in that directory, but
rather in the directory where the cabal.project file is located. In
this case the -working-dir option can be passed, which specifies the
path from the current directory to the directory the unit assumes to
be its root, normally the directory which contains the cabal file.
When the flag is passed, any relative paths used by the compiler are
offset by the working directory. Notably this includes -i and
-I⟨dir⟩ flags.
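As an illustration, suppose the lib-core unit lives in a lib-core/ subdirectory of the project root (a hypothetical layout); its response file could then read:
```
-this-unit-id lib-core-0.1.0.0
-working-dir lib-core
-i
-isrc
LibCore.Utils
LibCore.Types
```
With this, -isrc resolves to lib-core/src relative to the directory in which GHC was invoked.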
-this-package-name ⟨name⟩
This flag papers over the awkward interaction between PackageImports
and multiple home units. When using PackageImports you can specify
the name of the package in an import to disambiguate between modules
which appear in multiple packages with the same name.
This flag allows a home unit to be given a package name so that you
can also disambiguate between multiple home units which provide
modules with the same name.
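As a sketch, the lib-core response file could add:
```
-this-package-name lib-core
```
after which a module in another home unit can disambiguate its imports:
```hs
{-# LANGUAGE PackageImports #-}
import "lib-core" LibCore.Utils
```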
-hidden-module ⟨module name⟩
This flag can be supplied multiple times in order to specify which
modules in a home unit should not be visible outside of the unit it
belongs to.
The main use of this flag is to be able to recreate the difference
between an exposed and hidden module for installed packages.
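For example, to keep an internal module of lib-core private to that unit (the module name is illustrative), its response file could add:
```
-hidden-module LibCore.Internal
```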
-reexported-module ⟨module name⟩
This flag can be supplied multiple times in order to specify which
modules are not defined in a unit but should be reexported. The
effect is that other units will see this module as if it were defined
in this unit.
This flag makes it possible to replicate the reexported-modules
feature of packages with multiple home units.
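For example, lib could re-expose a module of its dependency lib-core by adding to its response file (illustrative):
```
-reexported-module LibCore.Types
```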
Offsetting Paths in Template Haskell splices
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
When using Template Haskell to embed files into your program,
traditionally the paths have been interpreted relative to the directory
where the .cabal file resides. This causes problems for multiple home
units as we are compiling many different libraries at once which have
.cabal files in different directories.
For this purpose we have introduced a way to query the value of the
-working-dir flag from the Template Haskell API. Using this function we
can implement a makeRelativeToProject function, which offsets a path
relative to the original project root by the value of -working-dir.
```
{-# LANGUAGE TemplateHaskell #-}
import Data.FileEmbed (embedFile)  -- from the file-embed package
import Language.Haskell.TH.Syntax ( makeRelativeToProject )
foo = $(makeRelativeToProject "./relative/path" >>= embedFile)
```
> If you write a relative path in a Template Haskell splice you should use the makeRelativeToProject function so that your library works correctly with multiple home units.
A similar function already exists in the file-embed library, but the
template-haskell version is more robust: it honours the -working-dir
flag rather than searching the file system.
Closure Property for Home Units
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
For tools or libraries using the API there is one very important closure
property which must be adhered to:
> Any dependency which is not a home unit must not (transitively) depend
on a home unit.
For example, if you have three packages p, q and r, where p depends on
q and q depends on r, then it is illegal to load both p and r as home
units without also loading q: q would be a non-home dependency of the
home unit p which itself depends on the home unit r.
If you are using GHC from the command line then this property is
checked for you, but if you are using the API then you need to check
it yourself. If you get it wrong you will probably get some very
confusing errors about overlapping instances.
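Continuing the p/q/r example, and assuming response files unitP and unitR exist for p and r, an invocation such as
```
ghc -unit @unitP -unit @unitR
```
would be rejected, because the non-home package q would sit between the two home units.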
Limitations of Multiple Home Units
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
There are a few limitations of the initial implementation which will be smoothed out on user demand.
* Package thinning/renaming syntax is not supported
* More complicated reexports/renaming are not yet supported.
* It’s more common to run into existing linker bugs when loading a
large number of packages in a session (for example #20674, #20689)
* Backpack is not yet supported when using multiple home units.
* Dependency chasing can be quite slow with a large number of
modules and packages.
* Loading wired-in packages as home units is currently not supported
(this only really affects GHC developers attempting to load
template-haskell).
* Barely any normal GHCi features are supported; it would be good to
support enough for ghcid to work correctly.
Despite these limitations, the implementation already works for nearly
all packages. It has been tested on large dependency closures,
including the whole of head.hackage, a total of 4784 modules from 452
packages.
Internal Changes
~~~~~~~~~~~~~~~~
* The biggest change is that the HomePackageTable is replaced with the
HomeUnitGraph. The HomeUnitGraph is a map from UnitId to HomeUnitEnv,
which contains information specific to each home unit.
* The HomeUnitEnv contains (see the sketch after this list):
  - A unit state; each home unit can have different package db flags
  - A set of dynflags; each home unit can have different flags
  - A HomePackageTable
* LinkNode: a new node type is added to the ModuleGraph. It is used to
place the linking step into the build plan so that linking can proceed in
parallel with other packages being built.
* New invariant: the dependencies of a ModuleGraphNode can be completely
determined by looking at the value of the node. In order to achieve
this, downsweep now performs a more complete job, and the
dependencies are then recorded forever in the node rather than
being computed again from the ModSummary.
* Some transitive module calculations are rewritten to use the
ModuleGraph which is more efficient.
* There is always an active home unit, which simplifies adapting much
of the existing unit-agnostic API code (for example, in the driver).
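As a rough sketch of the shape of these types (field names follow GHC.Unit.Env but may differ in detail; the stand-in types make the snippet self-contained):
```hs
import Data.Map (Map)

-- Stand-ins for the real GHC types.
data UnitState; data DynFlags; data HomePackageTable
newtype UnitId = UnitId String deriving (Eq, Ord)

-- One environment per home unit, keyed by its UnitId.
type HomeUnitGraph = Map UnitId HomeUnitEnv

data HomeUnitEnv = HomeUnitEnv
  { homeUnitEnv_units  :: UnitState        -- per-unit package db state
  , homeUnitEnv_dflags :: DynFlags         -- per-unit dynflags
  , homeUnitEnv_hpt    :: HomePackageTable -- per-unit home package table
  }
```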
The road may be bumpy for a little while after this change, but the
basics are well-tested.
There is one small metric increase, which we accept, and also a
submodule update to haddock which removes ExtendedModSummary.
Closes #10827
-------------------------
Metric Increase:
MultiLayerModules
-------------------------
Co-authored-by: Fendor <power.walross@gmail.com>
The reqlib modifier was supposed to indicate that a test needed a certain
library in order to work. If the library happened to be installed then
the test would run as normal.
However, CI has never run these tests, as the packages have not been
installed, and we don't want our tests to depend on things which might
get externally broken by updating the compiler.
The new strategy is to run these tests in head.hackage, where the tests
have been cabalised as well as possible. Some tests couldn't be
transferred into the normal-style testsuite, but this is better than never
running any of the reqlib tests. https://gitlab.haskell.org/ghc/head.hackage/-/merge_requests/169
A few submodules also had reqlib tests and have been updated to remove
them.
Closes #16264 #20032 #17764 #16561
We use the parser generated by stack to ensure reproducibility
Fixes #20621
This test triggers the bad code path identified in #20509, where an entry
in the EPS caused by importing Control.Applicative retains a stale
HomePackageTable.
While investigating #20106, I made a few refactorings to the pattern-match
checker that I don't want to lose. Here are the changes:
* Some key functions of the checker now have SCC annotations
* Better `-ddump-ec-trace` diagnostics for easier debugging. I added
'traceWhenFailPm' to see *why* a particular `MaybeT` computation fails and
made use of it in `instCon`.
I also increased the acceptance threshold of T11545, which seems to fail
randomly lately due to ghc/max flukes.
This patch, provoked by regressions in the text package
(#19557), improves sharing of join points. This also fixes
the terrible behaviour in #20049.
See Note [Duplicating join points] in GHC.Core.Opt.Simplify.
* In the StrictArg case of mkDupableContWithDmds, don't
use Plan A for data constructors
* In postInlineUnconditionally, don't inline JoinIds. This avoids
  inlining the join point in
      join $j x = Just x
      in case blah of
           A -> $j x1
           B -> $j x2
           C -> $j x3
* In mkDupableStrictBind and mkDupableStrictAlt, create
join points (much) more often: exprIsTrivial rather than
exprIsDupable. This may be too much, but we'll see.
Metric Decrease:
T12545
T13253-spj
T13719
T18140
T18282
T18304
T18698a
T18698b
Metric Increase:
T16577
T18923
T9961
In a sequel of #19414, I wrote a script that measures min and max allocation
bounds of T12545 based on randomly modifying -dunique-increment. I got a spread
of as much as 4.8%. But instead of widening the acceptance window further (to
5%), I committed the script as part of this commit, so that false positive
increases can easily be diagnosed by comparing min and max bounds to HEAD.
Indeed, for !5814 we have seen T12545 go from -0.3% to 3.3% after a rebase.
I made sure that the min and max bounds actually stayed the same.
In the future, this kind of check can very easily be done in a matter of a
minute. Maybe we should increase the acceptance threshold if we need to check
often (leave a comment on #19414 if you had to check), but I've not been bitten
by it for half a year, which seems OK.
Metric Increase:
T12545
This patch comprises four different but closely related ideas. The
net result is fixing a large number of open issues with the driver
whilst making it simpler to understand.
1. Use the hash of the source file to determine whether the source file
has changed or not (see the sketch after this list). This makes the
recompilation checking more robust to modern build systems, which are
liable to copy files around, changing their modification times.
2. Remove the concept of a "stable module". A stable module was one
whose object file was newer than the source file, and all of whose
transitive dependencies were also stable. Now that we don't rely on the
modification time of the source file, the notion of stability is moot.
3. Fix TH/plugin recompilation after the removal of stable modules. The
TH recompilation check used to rely on stable modules. Now there is a
uniform and simple way: we directly track the linkables which were
loaded into the interpreter whilst compiling a module. This is an
over-approximation, but it is more robust wrt package dependencies changing.
4. Fix recompilation checking for dynamic object files. Now we actually
check if the dynamic object file exists when compiling with -dynamic-too
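As a minimal illustration of idea (1), base's GHC.Fingerprint module already provides a file-content hash that can be compared against a hash recorded at the previous compilation (a sketch of the principle only; the real check lives in GHC's recompilation checker):
```hs
import GHC.Fingerprint (Fingerprint, getFileHash)

-- True if the source file's current content hash matches the hash
-- recorded when the module was last compiled, i.e. the source text
-- itself gives no reason to recompile.
sourceUnchanged :: FilePath -> Fingerprint -> IO Bool
sourceUnchanged srcPath recordedHash = do
  currentHash <- getFileHash srcPath
  pure (currentHash == recordedHash)
```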
Fixes #19774 #19771 #19758 #17434 #11556 #9121 #8211 #16495 #7277 #16093
This makes it more robust to people running it with `quick` flavour and
so on.
The test's maximum memory usage improves dramatically with the fixes to
memory usage in the demand analyser from #15455.
Fixes #11545
As #19293 realises, this one keeps on flip-flopping by 2.5%
depending on how many modules there are within the GHC package.
We should revert this once we have figured out how to fix what's going on.
This test flip-flops by ±1% with arbitrary changes in CI.
While playing around with `-dunique-increment`, I could reproduce
variations of 3% in compiler allocations, so I set the acceptance window
accordingly.
Fixes #19414.
This reverts commit 4a9d856d21c67b3328e26aa68a071ec9a824a7bb.
As outlined in #18903, interleaving usage and strictness demands not
only means a more compact demand representation, but also allows us to
express demands that we weren't easily able to express before.
Call demands are *relative* in the sense that a call demand `Cn(cd)`
on `g` says "`g` is called `n` times. *Whenever `g` is called*, the
result is used according to `cd`". Example from #18903:
```hs
h :: Int -> Int
h m =
let g :: Int -> (Int,Int)
g 1 = (m, 0)
g n = (2 * n, 2 `div` n)
{-# NOINLINE g #-}
in case m of
1 -> 0
2 -> snd (g m)
_ -> uncurry (+) (g m)
```
Without the interleaved representation, we would just get `L` for the
strictness demand on `g`. Now we are able to express that whenever
`g` is called, its second component is used strictly, by giving `g`
the demand `1C1(P(1P(U),SP(U)))`. This would allow Nested CPR to unbox
the division, for example.
Fixes #18903.
While fixing regressions, I also discovered and fixed #18957.
Metric Decrease:
T13253-spj
These all have a maximum residency of over 2 GB.
Progress towards #18842. As @sgraf812 points out, widening the window is
dangerous until the exponential described in #17658 is fixed. But this
test has caused enough misery and is low stakes enough that we and
@bgamari think it's worth it in this one case for the time being.
Makes it possible for GHC to optimize away the intermediate Generic
representation for more types.
Metric Increase:
T12227
This patch fixes #18223, which made GHC generate an exponential
amount of code. There are three quite separate changes in here
1. Re-engineer eta-expansion (again). The eta-expander was
generating lots of intermediate stuff, which could be optimised
away, but which choked the simplifier meanwhile. Relatively
easy to kill it off at source.
See Note [The EtaInfo mechanism] in GHC.Core.Opt.Arity.
The main new thing is the use of pushCoArg in getArg_maybe.
2. Stop Specialise specialising DFuns. This is the cause of a huge
(and utterly unnecessary) blowup in program size in #18223.
See Note [Do not specialise DFuns] in GHC.Core.Opt.Specialise.
I also refactored the Specialise monad a bit... it was silly,
because it passed on unchanging values as if they were mutable
state.
3. Do an extra Simplifier run, after SpecConstr and before
late-Specialise. I found (investigating perf/compiler/T16473)
that failing to do this was crippling *both* SpecConstr *and*
Specialise. See Note [Simplify after SpecConstr] in
GHC.Core.Opt.Pipeline.
This change does mean an extra run of the Simplifier, but only
with -O2, and I think that's acceptable.
T16473 allocates *three* times less with this change. (I changed
it to check runtime rather than compile time.)
Some smaller consequences
* I moved pushCoercion, pushCoArg and friends from SimpleOpt
to Arity, because it was needed by the new etaInfoApp.
And pushCoValArg now returns a MCoercion rather than Coercion for
the argument Coercion.
* A minor, incidental improvement to Core pretty-printing
This does fix #18223 (which was otherwise uncompilable). Hooray! But
there is still a lot of intermediate code, because there are some very
deeply nested types in that program.
Modest reductions in compile-time allocation on a couple of benchmarks
T12425 -2.0%
T13253 -10.3%
Metric increase with -O2, due to extra simplifier run
T9233 +5.8%
T12227 +1.8%
T15630 +5.0%
There is a spurious apparent increase on heap residency on T9630,
on some architectures at least. I tried it with -G1 and the residency
is essentially unchanged.
Metric Increase:
T9233
T12227
T9630
Metric Decrease:
T12425
T13253
Previously it collected everything, including "max bytes used". This is
problematic since the test makes no attempt to control for deviations in
GC timing, resulting in high variability. Fix this by only collecting
"bytes allocated".
Specifically:
#13253 exponential inlining
#10421 ditto
#18140 strict constructors
#18282 another nested-function call case
This patch makes one really significant change: it alters the way that
mkDupableCont handles StrictArg. The details are explained in
GHC.Core.Opt.Simplify Note [Duplicating StrictArg].
Specific changes
* In mkDupableCont, when making auxiliary bindings for the other arguments
of a call, add extra plumbing so that we don't forget the demand on them.
Otherwise we have to wait for another round of strictness analysis. But
actually all the info is to hand. This change affects:
- Make the strictness list in ArgInfo be [Demand] instead of [Bool],
and rename it to ai_dmds.
- Add as_dmd to ValArg
- Simplify.makeTrivial takes a Demand
- mkDupableContWithDmds takes a [Demand]
There are a number of other small changes
1. For Ids that are used at most once in each branch of a case, make
the occurrence analyser record the total number of syntactic
occurrences. Previously we recorded just OneBranch or
MultipleBranches.
I thought this was going to be useful, but I ended up barely
using it; see Note [Suppress exponential blowup] in
GHC.Core.Opt.Simplify.Utils
Actual changes:
* See the occ_n_br field of OneOcc.
* postInlineUnconditionally
2. I found a small perf buglet in SetLevels; see the new
function GHC.Core.Opt.SetLevels.hasFreeJoin
3. Remove the sc_cci field of StrictArg. I found I could get
its information from the sc_fun field instead. Less to get
wrong!
4. In ArgInfo, arrange that ai_dmds and ai_discs have a simpler
invariant: they line up with the value arguments beyond ai_args
This allowed a bit of nice refactoring; see isStrictArgInfo,
lazyArgContext, strictArgContext
There is virtually no difference in nofib. (The runtime numbers
are bogus -- I tried a few manually.)
Program Size Allocs Runtime Elapsed TotalMem
--------------------------------------------------------------------------------
fft +0.0% -2.0% -48.3% -49.4% 0.0%
multiplier +0.0% -2.2% -50.3% -50.9% 0.0%
--------------------------------------------------------------------------------
Min -0.4% -2.2% -59.2% -60.4% 0.0%
Max +0.0% +0.1% +3.3% +4.9% 0.0%
Geometric Mean +0.0% -0.0% -33.2% -34.3% -0.0%
Test T18282 is an existing example of these deeply-nested strict calls.
We get a big decrease in compile time (-85%) because so much less
inlining takes place.
Metric Decrease:
T18282
This test is positively tiny and consequently the bytes-allocated
measurement will be relatively noisy. As a result, I have seen it
fail spuriously quite often.
Ticket #18282 showed that the result discount given by conSize
was massively too large. This patch reduces that discount to
a constant 10, which just balances the cost of the constructor
application itself.
Note [Constructor size and result discount] elaborates, as
does the ticket #18282.
Reducing result discount reduces inlining, which affects perf. I
found that I could increase the unfoldingUseThreshold from 80 to 90 in
compensation; in combination with the result discount change I get
these overall nofib numbers:
Program Size Allocs Runtime Elapsed TotalMem
--------------------------------------------------------------------------------
boyer -0.2% +5.4% -3.2% -3.4% 0.0%
cichelli -0.1% +5.9% -11.2% -11.7% 0.0%
compress2 -0.2% +9.6% -6.0% -6.8% 0.0%
cryptarithm2 -0.1% -3.9% -6.0% -5.7% 0.0%
gamteb -0.2% +2.6% -13.8% -14.4% 0.0%
genfft -0.1% -1.6% -29.5% -29.9% 0.0%
gg -0.0% -2.2% -17.2% -17.8% -20.0%
life -0.1% -2.2% -62.3% -63.4% 0.0%
mate +0.0% +1.4% -5.1% -5.1% -14.3%
parser -0.2% -2.1% +7.4% +6.7% 0.0%
primetest -0.2% -12.8% -14.3% -14.2% 0.0%
puzzle -0.2% +2.1% -10.0% -10.4% 0.0%
rsa -0.2% -11.7% -3.7% -3.8% 0.0%
simple -0.2% +2.8% -36.7% -38.3% -2.2%
wheel-sieve2 -0.1% -19.2% -48.8% -49.2% -42.9%
--------------------------------------------------------------------------------
Min -0.4% -19.2% -62.3% -63.4% -42.9%
Max +0.3% +9.6% +7.4% +11.0% +16.7%
Geometric Mean -0.1% -0.3% -17.6% -18.0% -0.7%
I'm ok with these numbers, remembering that this change removes
an *exponential* increase in code size in some in-the-wild cases.
I investigated compress2. The difference is entirely caused by this
function no longer inlining
WriteRoutines.$woutputCodes
= \ (w :: [CodeEvent]) ->
let result_s1Sr
= case WriteRoutines.outputCodes_$s$woutput w 0# 0# 8# 9# of
(# ww1, ww2 #) -> (ww1, ww2)
in (# case result_s1Sr of (x, _) ->
map @Int @Char WriteRoutines.outputCodes1 x
, case result_s1Sr of { (_, y) -> y } #)
It was right on the cusp before, driven by the excessive result
discount. Too bad!
Happily, the compiler/perf tests show a number of improvements:
T12227 compiler bytes-alloc -6.6%
T12545 compiler bytes-alloc -4.7%
T13056 compiler bytes-alloc -3.3%
T15263 runtime bytes-alloc -13.1%
T17499 runtime bytes-alloc -14.3%
T3294 compiler bytes-alloc -1.1%
T5030 compiler bytes-alloc -11.7%
T9872a compiler bytes-alloc -2.0%
T9872b compiler bytes-alloc -1.2%
T9872c compiler bytes-alloc -1.5%
Metric Decrease:
T12227
T12545
T13056
T15263
T17499
T3294
T5030
T9872a
T9872b
T9872c
Previously it wasn't uncommon to see +/-1% fluctuations in compiler
allocations on this test.
This test performs little work, so the most minor allocation
changes often cause the test to fail.
Increasing the threshold to 2% should help with this.
Just adding `{-# LANGUAGE BangPatterns #-}` makes the two other metrics
fluctuate by 13%.
Ticket #18304 showed that we need to be very careful
when exploring the demand (esp usage demand) on recursive
product types.
This patch solves the problem by trimming the demand on such types --
in effect, a form of "widening".
See the Note [Trimming a demand to a type] in DmdAnal, which explains
how I did this by piggy-backing on an existing mechanism for trimming
demands because of GADTs. The significant payload of this patch is
very small indeed:
* Make GHC.Core.Opt.WorkWrap.Utils.typeShape use RecTcChecker to
avoid looking through recursive types.
But on the way
* I found that ae_rec_tc was entirely inoperative and did nothing.
So I removed it altogether from DmdAnal.
* I moved some code around in DmdAnal and Demand.
(There are no actual changes in dmdFix.)
* I changed the API of DmdAnal.dmdAnalRhsLetDown to return
a StrictSig rather than a decorated Id
* I removed the dead function peelTsFuns from Demand
Performance effects:
Nofib: 0.0% changes. Not surprising, because they don't
use recursive products
Perf tests
T12227:
1% increase in compiler allocation, because $cto gets w/w'd.
It did not w/w before because it takes a deeply nested
argument, so the worker gets too many args, so we abandon w/w
altogether (see GHC.Core.Opt.WorkWrap.Utils.isWorkerSmallEnough)
With this patch we trim the demands. That is not strictly
necessary (since these Generic type constructors are like
tuples -- they can't cause a loop) but the net result is that
we now w/w $cto which is fine.
UniqLoop:
16% decrease in /runtime/ allocation. The UniqSupply is a
recursive product, so currently we abandon all strictness on
'churn'. With this patch 'churn' gets useful strictness, and
we w/w it. Hooray
Metric Decrease:
UniqLoop
Metric Increase:
T12227
T16190 is meant to test an NCG feature. It has already caused spurious
failures in other MRs (e.g. !2165) when LLVM is used.
- We don't want to benchmark linting so disable lints in hie002 perf
test
- Move no_lint to the top-level to be able to use it in tests other than
those in `testsuite/tests/perf/compiler`.
- Filter out -dstg-lint in no_lint.
- hie002 allocation numbers on 32-bit are unstable, so skip it on 32-bit
Metric Decrease:
hie002
ManyConstructors
T12150
T12234
T13035
T1969
T4801
T9233
T9961
We were mistakenly measuring program stats
Previously we'd override the existing {run,hc} opts in
extra_{run,hc}_opts, which caused flakiness in T1969; see #17712.
extra_{run,hc}_opts now extends {run,hc} opts, instead of overriding.
Also we shrank the allocation area for T1969 in order to increase
residency sampling frequency.
Fixes #17712
I have seen >20% fluctuations in this number, leading to spurious
failures.
As noted in #17624, it's quite unstable, especially, for some reason, on
i386 and armv7 (something about 32-bit platforms perhaps?).
Metric Increase:
T1969
|
I previously increased the size of the acceptance window from 2% to 5%
but this still isn't enough. Regardless, measuring bytes allocated
should be sufficient to catch any regressions.
This statistic is rather unstable. Hopefully fixes #17475.
As suggested in #17387; this helps reduce the variance in our residency
sampling.
Metric Increase:
T10370
T3586
lazy-bs-alloc
Metric Decrease 'compile_time/peak_megabytes_allocated':
T1969
Metric Decrease 'runtime/bytes allocated':
space_leak_001
Metric Increase 'compile_time/bytes allocated':
T1969
Metric Increase 'runtime/peak_megabytes_allocated':
space_leak_001
Metric Decrease:
T3064
T9675
This introduces a concurrent mark & sweep garbage collector to manage the old
generation. The concurrent nature of this collector typically results in
significantly reduced maximum and mean pause times in applications with large
working sets.
Due to the large and intricate nature of the change I have opted to
preserve the fully-buildable history, including merge commits, which is
described in the "Branch overview" section below.
Collector design
================
The full design of the collector implemented here is described in detail
in a technical note
> B. Gamari. "A Concurrent Garbage Collector For the Glasgow Haskell
> Compiler" (2018)
This document can be requested from @bgamari.
The basic heap structure used in this design is heavily inspired by
> K. Ueno & A. Ohori. "A fully concurrent garbage collector for
> functional programs on multicore processors." /ACM SIGPLAN Notices/
> Vol. 51. No. 9 (presented at ICFP 2016)
This design is intended to allow both marking and sweeping to proceed
concurrently with the execution of a multi-core mutator. Unlike the Ueno design,
which requires no global synchronization pauses, the collector
introduced here requires a stop-the-world pause at the beginning and end
of the mark phase.
To avoid heap fragmentation, the allocator consists of a number of
fixed-size /sub-allocators/. Each of these sub-allocators allocates into
its own set of /segments/, themselves allocated from the block
allocator. Each segment is broken into a set of fixed-size allocation
blocks (which back allocations), in addition to a bitmap (used to track
the liveness of blocks) and some additional metadata (also used
to track liveness).
This heap structure enables collection via mark-and-sweep, which can be
performed concurrently via a snapshot-at-the-beginning scheme (although
concurrent collection is not implemented in this patch).
Implementation structure
========================
The majority of the collector is implemented in a handful of files:
* `rts/Nonmoving.c` is the heart of the beast. It implements the entry-point
to the nonmoving collector (`nonmoving_collect`), as well as the allocator
(`nonmoving_allocate`) and a number of utilities for manipulating the heap.
* `rts/NonmovingMark.c` implements the mark queue functionality, update
remembered set, and mark loop.
* `rts/NonmovingSweep.c` implements the sweep loop.
* `rts/NonmovingScav.c` implements the logic necessary to scavenge the
nonmoving heap.
Branch overview
===============
```
* wip/gc/opt-pause:
| A variety of small optimisations to further reduce pause times.
|
* wip/gc/compact-nfdata:
| Introduce support for compact regions into the non-moving
|\ collector
| \
| \
| | * wip/gc/segment-header-to-bdescr:
| | | Another optimization that we are considering, pushing
| | | some segment metadata into the segment descriptor for
| | | the sake of locality during mark
| | |
| * | wip/gc/shortcutting:
| | | Support for indirection shortcutting and the selector optimization
| | | in the non-moving heap.
| | |
* | | wip/gc/docs:
| |/ Work on implementation documentation.
| /
|/
* wip/gc/everything:
| A roll-up of everything below.
|\
| \
| |\
| | \
| | * wip/gc/optimize:
| | | A variety of optimizations, primarily to the mark loop.
| | | Some of these are microoptimizations but a few are quite
| | | significant. In particular, the prefetch patches have
| | | produced a nontrivial improvement in mark performance.
| | |
| | * wip/gc/aging:
| | | Enable support for aging in major collections.
| | |
| * | wip/gc/test:
| | | Fix up the testsuite to more or less pass.
| | |
* | | wip/gc/instrumentation:
| | | A variety of runtime instrumentation including statistics
| | / support, the nonmoving census, and eventlog support.
| |/
| /
|/
* wip/gc/nonmoving-concurrent:
| The concurrent write barriers.
|
* wip/gc/nonmoving-nonconcurrent:
| The nonmoving collector without the write barriers necessary
| for concurrent collection.
|
* wip/gc/preparation:
| A merge of the various preparatory patches that aren't directly
| implementing the GC.
|
|
* GHC HEAD
.
.
.
```
The nonmoving collector doesn't support -G1