| Commit message (Collapse) | Author | Age | Files | Lines |
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
This test checks for #22744 by compiling 100 modules which each have
a dependency on 1000 distinct external files.
Previously, when loading these interfaces from disk, each individual
occurrence of a filepath in an interface would be allocated as a separate
object on the heap, meaning we had heap objects for 100*1000 files when
there are only 1000 distinct files we care about.
The test therefore first compiles the module normally, then measures the
peak memory usage of a no-op recompile, as the recompilation checking
forces the allocation of all these filepaths.
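The fix is about sharing. As a minimal sketch of the deduplication idea
(an illustrative intern table, not GHC's actual implementation):
```hs
import Data.IORef
import qualified Data.Map.Strict as M

-- An intern table: repeated lookups of equal FilePaths return the first
-- copy seen, so every occurrence shares one heap object.
newtype InternTable = InternTable (IORef (M.Map FilePath FilePath))

newInternTable :: IO InternTable
newInternTable = InternTable <$> newIORef M.empty

internFilePath :: InternTable -> FilePath -> IO FilePath
internFilePath (InternTable ref) fp =
  atomicModifyIORef' ref $ \m ->
    case M.lookup fp m of
      Just shared -> (m, shared)            -- reuse the shared copy
      Nothing     -> (M.insert fp fp m, fp) -- first occurrence: store it
```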
|
|
|
|
| |
This patch includes all wasm32-specific testsuite fixes.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
This patch moves the field-based logic for disambiguating record updates
to the renamer. The type-directed logic, scheduled for removal, remains
in the typechecker.
To do this properly (and fix the myriad of bugs surrounding the treatment
of duplicate record fields), we took the following main steps:
1. Create GREInfo, a renamer-level equivalent to TyThing which stores
information pertinent to the renamer.
This allows us to uniformly treat imported and local Names in the
renamer, as described in Note [GREInfo].
2. Remove GreName. Instead of a GlobalRdrElt storing GreNames, which
distinguished between normal names and field names, we now store
simple Names in GlobalRdrElt, along with the new GREInfo information
which allows us to recover the FieldLabel for record fields.
3. Add namespacing for record fields, within the OccNames themselves.
This allows us to remove the mangling of duplicate field selectors.
This change ensures we don't print mangled names to the user in
error messages, and allows us to handle duplicate record fields
in Template Haskell.
4. Move record disambiguation to the renamer, and operate on the
level of data constructors instead, to handle #21443.
The error message text for ambiguous record updates has also been
changed to reflect that type-directed disambiguation is on the way
out.
(3) means that OccEnv is now a bit more complex: we first key on the
textual name, which gives an inner map keyed on NameSpace:
OccEnv a ~ FastStringEnv (UniqFM NameSpace a)
Note that this change, along with (2), both increase the memory residency
of GlobalRdrEnv = OccEnv [GlobalRdrElt], which causes a few tests to
regress somewhat in compile-time allocation.
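A sketch of lookup in this two-level structure, using names from the GHC
API (the real definition may differ in details):
```hs
import GHC.Data.FastString.Env (FastStringEnv, lookupFsEnv)
import GHC.Types.Name.Occurrence (NameSpace, OccName, occNameFS, occNameSpace)
import GHC.Types.Unique.FM (UniqFM, lookupUFM)

newtype OccEnv a = MkOccEnv (FastStringEnv (UniqFM NameSpace a))

lookupOccEnv :: OccEnv a -> OccName -> Maybe a
lookupOccEnv (MkOccEnv env) occ = do
  inner <- lookupFsEnv env (occNameFS occ)  -- outer key: textual name
  lookupUFM inner (occNameSpace occ)        -- inner key: NameSpace
```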
Even though (3) simplified a lot of code (in particular the treatment of
field selectors within Template Haskell and in error messages), it came
with one important wrinkle: in the situation of
-- M.hs-boot
module M where { data A; foo :: A -> Int }
-- M.hs
module M where { data A = MkA { foo :: Int } }
we have that M.hs-boot exports a variable foo, which is supposed to match
with the record field foo that M exports. To solve this issue, we add a
new impedance-matching binding to M
foo{var} = foo{fld}
This mimics the logic that already existed for impedance-matching
DFunIds, but getting it right was a bit tricky.
See Note [Record field impedance matching] in GHC.Tc.Module.
We also needed to be careful to avoid introducing space leaks in GHCi.
So we dehydrate the GlobalRdrEnv before storing it anywhere, e.g. in
ModIface. This means stubbing out all the GREInfo fields, with the
function forceGlobalRdrEnv.
When we read it back in, we rehydrate with rehydrateGlobalRdrEnv.
This robustly avoids any space leaks caused by retaining old type
environments.
Fixes #13352 #14848 #17381 #17551 #19664 #21443 #21444 #21720 #21898 #21946 #21959 #22125 #22160 #23010 #23062 #23063
Updates haddock submodule
-------------------------
Metric Increase:
MultiComponentModules
MultiLayerModules
MultiLayerModulesDefsGhci
MultiLayerModulesNoCode
T13701
T14697
hard_hole_fits
-------------------------
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
This MR runs the testsuite for the JS backend. Note that this is a
temporary solution until !9515 is merged.
Key point: The CI runs hadrian on the built cross compiler _but not_ on
the bindist.
Other Highlights:
- stm submodule gets a bump to mark tests as broken
- several tests are marked as broken or are fixed by adding more
  conditions to their test runner instance.
List of working commit messages:
CI: test cross target _and_ emulator
CI: JS: Try run testsuite with hadrian
JS.CI: cleanup and simplify hadrian invocation
use single bracket, print info
JS CI: remove call to test_compiler from hadrian
don't build haddock
JS: mark more tests as broken
Tracked in https://gitlab.haskell.org/ghc/ghc/-/issues/22576
JS testsuite: don't skip sum_mod test
It's expected to fail, yet we skipped it, which automatically makes it
succeed, leading to an unexpected pass.
JS testsuite: don't mark T12035j as skip
leads to an unexpected pass
JS testsuite: remove broken on T14075
leads to unexpected pass
JS testsuite: mark more tests as broken
JS testsuite: mark T11760 in base as broken
JS testsuite: mark ManyUnbSums broken
submodules: bump process and hpc for JS tests
Both submodules needed tests skipped or marked broken for the JS
backend. This commit now adds these changes to GHC.
See:
HPC: https://gitlab.haskell.org/hpc/hpc/-/merge_requests/21
Process: https://github.com/haskell/process/pull/268
remove js_broken on now passing tests
separate wasm and js backend ci
test: T11760: add threaded, non-moving only_ways
test: T10296a add req_c
T13894: skip for JS backend
tests: jspace, T22333: mark as js_broken(22573)
test: T22513i mark as req_th
stm submodule: mark stm055, T16707 broken for JS
tests: js_broken(22374) on unpack_sums_6, T12010
dont run diff on JS CI, cleanup
fixup: More CI cleanup
fix: align text to master
fix: align exceptions submodule to master
CI: Bump DOCKER_REV
Bump to ci-images commit that has a deb11 build with node. Required for
!9552
testsuite: mark T22669 as js_skip
See #22669
This test checks that .o-boot files aren't created when running with the
interpreter backend, so it is not relevant for the JS backend.
testsuite: mark T22671 as broken on JS
See #22835
base.testsuite: mark Chan002 fragile for JS
see #22836
revert: submodule process bump
bump stm submodule
New hash includes skips for the JS backend.
testsuite: mark RnPatternSynonymFail broken on JS
Requires TH:
- see !9779
- and #22261
compiler: GHC.hs ifdef import Utils.Panic.Plain
|
|
|
|
|
|
|
|
|
| |
...the testsuite doesn't handle this properly since it
also collects run-time metrics. Compile-time metrics
for this test are already tracked via T21839c.
Metric Decrease:
T21839r
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Add JS backend adapted from the GHCJS project by Luite Stegeman.
Some features haven't been ported or implemented yet. Tests for these
features have been disabled with an associated gitlab ticket.
Bump array submodule
Work funded by IOG.
Co-authored-by: Jeffrey Young <jeffrey.young@iohk.io>
Co-authored-by: Luite Stegeman <stegeman@gmail.com>
Co-authored-by: Josh Meredith <joshmeredith2008@gmail.com>
|
| |
|
|
|
|
|
|
|
|
| |
Also add perf test for infinite list fusion.
In particular, in `GHC.Core` we often deal with infinite lists of roles, and in a few locations with infinite lists of names.
Thanks to simonpj for helping to write the Note [Fusion for `Infinite` lists].
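A minimal sketch of the stream type and the foldr form that the fusion
rules target (GHC's actual module is GHC.Data.List.Infinite; the names
here are illustrative):
```hs
-- An infinite cons-stream: there is no nil case, so consumers never
-- have to test for the end of the list.
data Infinite a = Inf a (Infinite a)

repeatInf :: a -> Infinite a
repeatInf x = xs where xs = Inf x xs

-- foldr over the stream; paired with a build-style producer, this is
-- the combinator that fusion RULES rewrite away.
foldrInf :: (a -> b -> b) -> Infinite a -> b
foldrInf f (Inf x xs) = f x (foldrInf f xs)

mapInf :: (a -> b) -> Infinite a -> Infinite b
mapInf f = foldrInf (Inf . f)
```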
|
|
|
|
|
|
|
|
|
| |
This code recently showed a strong shift between compile time (which got
worse) and run time (which got a lot better), which is perfectly
acceptable. However, it wasn't clear why the compile-time regression was
happening initially, so I'm adding this test to make it easier to track
such changes in the future.
|
|
|
|
|
|
|
|
|
| |
The theory is that on Windows there is some difference in the
environment between pipelines on master and merge requests which affects
all tests equally, but because T16875 barely allocates anything it is the
test which is affected the most.
See #21557
|
|
|
|
| |
This distills the essence of the Sigs.hs program found in the ticket.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
In #19644, we discovered that the ClassOp/DFun rules from
Note [ClassOp/DFun selection] inhibit transitive specialisation in a scenario
like
```
class C a where m :: Show b => a -> b -> ...; n :: ...
instance C Int where m = ... -- $cm :: Show b => Int -> b -> ...
f :: forall a b. (C a, Show b) => ...
f $dC $dShow = ... m @a $dC @b $dShow ...
main = ... f @Int @Bool ...
```
After we specialise `f` for `Int`, we'll see `m @a $dC @b $dShow` in the body of
`$sf`. But before this patch, Specialise doesn't apply the ClassOp/DFun rule to
rewrite to a call of the instance method for `C Int`, e.g., `$cm @Bool $dShow`.
As a result, Specialise couldn't further specialise `$cm` for `Bool`.
There's a better example in `Note [Specialisation modulo dictionary selectors]`.
This patch enables proper Specialisation, as follows:
1. In the App case of `specExpr`, try to apply the ClassOp/DictSel rule to
the head of the application
2. Attach an unfolding to freshly-bound dictionary ids such as `$dC` and
`$dShow` in `bindAuxiliaryDict`
NB: Without (2), (1) would be pointless, because `lookupRule` wouldn't be able
to look into the RHS of `$dC` to see the DFun.
(2) triggered #21332, because the Specialiser floats around dictionaries without
accounting for them in the `SpecEnv`'s `InScopeSet`, triggering a panic when
rewriting dictionary unfoldings.
Fixes #19644 and #21332.
|
|
|
|
|
|
|
|
|
|
|
| |
- Remove unused functions exprToCoercion_maybe, applyTypeToArg,
typeMonoPrimRep_maybe, runtimeRepMonoPrimRep_maybe.
- Replace orValid with a simpler check
- Use splitAtList in applyTysX
- Remove calls to extra_clean in the testsuite; it does not do anything.
Metric Decrease:
T18223
|
| |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
This patch introduces a new kind of metavariable, by adding the
constructor `ConcreteTv` to `MetaInfo`. A metavariable with
`ConcreteTv` `MetaInfo`, henceforth a concrete metavariable, can only
be unified with a type that is concrete (that is, a type that answers
`True` to `GHC.Core.Type.isConcrete`).
This solves the problem of dangling metavariables in `Concrete#`
constraints: instead of emitting `Concrete# ty`, which contains a
secret existential metavariable, we simply emit a primitive equality
constraint `ty ~# concrete_tv` where `concrete_tv` is a fresh concrete
metavariable.
This means we can avoid all the complexity of canonicalising
`Concrete#` constraints, as we can just re-use the existing machinery
for `~#`.
To finish things up, this patch then removes the `Concrete#` special
predicate, and instead introduces the special predicate `IsRefl#`
which enforces that a coercion is reflexive.
Such a constraint is needed because the canonicaliser is quite happy
to rewrite an equality constraint such as `ty ~# concrete_tv`, but
such a rewriting is not handled by the rest of the compiler currently,
as we need to make use of the resulting coercion, as outlined in the
FixedRuntimeRep plan.
The big upside of this approach (on top of simplifying the code)
is that we can now selectively implement PHASE 2 of FixedRuntimeRep,
by changing individual calls of `hasFixedRuntimeRep_MustBeRefl` to
`hasFixedRuntimeRep` and making use of the obtained coercion.
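As a hedged illustration of the check this machinery implements (an
example of ours, not code from the patch):
```hs
{-# LANGUAGE DataKinds, KindSignatures, RankNTypes #-}
import GHC.Exts (TYPE, RuntimeRep (IntRep))

-- OK: the representation is the concrete IntRep, so the emitted
-- equality  IntRep ~# concrete_tv  is solved by filling in concrete_tv.
okId :: forall (a :: TYPE 'IntRep). a -> a
okId x = x

-- Rejected if uncommented: r never unifies with anything concrete, so
-- the  r ~# concrete_tv  constraint is insoluble and binding x is
-- reported as representation-polymorphic.
-- badId :: forall (r :: RuntimeRep) (a :: TYPE r). a -> a
-- badId x = x
```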
|
|
|
|
|
|
|
|
| |
This patch adds some performance tests for programs that create
large coercions. This is useful because the existing test coverage
is not very representative of real-world situations. In particular,
this adds a test involving an extensible records library, a common
pain-point for users.
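A hedged sketch of the shape of such programs (illustrative, not the
actual test): type-family-indexed records force the compiler to build
coercions whose size tracks the length of the index list:
```hs
{-# LANGUAGE DataKinds, PolyKinds, TypeFamilies, TypeOperators #-}

-- Each reduction step of Append is witnessed by an axiom coercion, so
-- casting a record indexed by  Append xs ys  produces a coercion whose
-- size grows with the length of xs.
type family Append (xs :: [k]) (ys :: [k]) :: [k] where
  Append '[]       ys = ys
  Append (x ': xs) ys = x ': Append xs ys
```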
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Here we introduce a new data structure, RoughMap, inspired by the
previous `RoughTc` matching mechanism for checking instance matches.
This allows [Fam]InstEnv to be implemented as a trie indexed by these
RoughTc signatures, reducing the complexity of instance lookup and
FamInstEnv merging (done during the family instance conflict test)
from O(n) to O(log n).
The critical performance improvement currently realised by this patch is
in instance matching. In particular the RoughMap mechanism allows us to
discount many potential instances which will never match for constraints
involving type variables (see Note [Matching a RoughMap]). In realistic
code bases matchInstEnv was accounting for 50% of typechecker time due
to redundant work checking instances when simplifying instance contexts
when deriving instances. With this patch the cost is significantly
reduced.
The larger constants in InstEnv creation do mean that a few small
tests regress in allocations slightly. However, the runtime of T19703 is
reduced by a factor of 4. Moreover, the compilation time of the Cabal
library is slightly improved.
A couple of test cases are included which demonstrate significant
improvements in compile time with this patch.
This unfortunately does not fix the testcase provided in #19703, but it
does fix #20933.
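A rough sketch of the data structure (the real implementation is
GHC.Core.RoughMap; Name is stubbed as String to keep the sketch
self-contained):
```hs
import qualified Data.Map.Strict as M

type Name = String  -- stand-in for GHC's Name

-- A "rough" key approximates one argument of an instance head: either a
-- known TyCon, or a wildcard for anything a type variable could match.
data RoughMatchTc = RM_KnownTc Name | RM_WildCard

-- A trie over rough keys. Looking up under a known TyCon consults both
-- rm_known and rm_unknown (a tyvar-headed instance can still match),
-- but instances keyed on a *different* TyCon are never visited.
data RoughMap a
  = RMEmpty
  | RM { rm_empty   :: [a]                     -- key exhausted here
       , rm_known   :: M.Map Name (RoughMap a) -- next key: known TyCon
       , rm_unknown :: RoughMap a              -- next key: wildcard
       }
```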
-------------------------
Metric Decrease:
T12425
Metric Increase:
T13719
T9872a
T9872d
hard_hole_fits
-------------------------
Co-authored-by: Matthew Pickering <matthewtpickering@gmail.com>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
This test has been the scourge of contributors for a long time.
It has caused many failed CI runs and wasted hours debugging a test
which barely does anything. The fact that it does nothing is the reason
for the flakiness: it is very sensitive to small changes in
initialisation costs, and in particular adding wired-in things can cause
it to fluctuate quite a bit.
Therefore we admit defeat and just bump the threshold up to 10% to catch
very large regressions, but otherwise don't care what this test does.
Fixes #19414
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Multiple home units allow you to load different packages, which may
depend on each other, into one GHC session. This will allow both GHCi and
HLS to support multi-component projects more naturally.
Public Interface
~~~~~~~~~~~~~~~~
In order to specify multiple units, the -unit @⟨filename⟩ flag
is given multiple times with a response file containing the arguments for each unit.
The response file contains a newline-separated list of arguments.
```
ghc -unit @unitLibCore -unit @unitLib
```
where the `unitLibCore` response file contains the normal arguments that cabal would pass to `--make` mode.
```
-this-unit-id lib-core-0.1.0.0
-i
-isrc
LibCore.Utils
LibCore.Types
```
The response file for lib can specify a dependency on lib-core, so that modules in lib can use modules from lib-core.
```
-this-unit-id lib-0.1.0.0
-package-id lib-core-0.1.0.0
-i
-isrc
Lib.Parse
Lib.Render
```
Then when the compiler starts in --make mode it will compile both units,
lib and lib-core.
There is also very basic support for multiple home units in GHCi: at the
moment you can start a GHCi session with multiple units, but only :reload
is supported. Most commands in GHCi assume a single home unit, so it is
additional work to figure out how to modify the interface to support
multiple loaded home units.
Options used when working with Multiple Home Units
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
There are a few extra flags which have been introduced specifically for
working with multiple home units. The flags allow a home unit to pretend
it’s more like an installed package, for example, specifying the package
name, module visibility and reexported modules.
-working-dir ⟨dir⟩
It is common to assume that a package is compiled in the directory
where its cabal file resides, and thus that all paths used in the
compiler are relative to this directory. When there are multiple home
units the compiler is often not operating in that directory, but
instead where the cabal.project file is located. In this case the
-working-dir option can be passed, which specifies the path from the
current directory to the directory the unit assumes to be its root,
normally the directory which contains the cabal file.
When the flag is passed, any relative paths used by the compiler are
offset by the working directory. Notably this includes -i and
-I⟨dir⟩ flags.
-this-package-name ⟨name⟩
This flag papers over the awkward interaction of PackageImports
and multiple home units. When using PackageImports you can specify
the name of the package in an import to disambiguate between modules
which appear in multiple packages with the same name.
This flag allows a home unit to be given a package name so that you
can also disambiguate between multiple home units which provide
modules with the same name.
-hidden-module ⟨module name⟩
This flag can be supplied multiple times in order to specify which
modules in a home unit should not be visible outside of the unit it
belongs to.
The main use of this flag is to be able to recreate the difference
between an exposed and hidden module for installed packages.
-reexported-module ⟨module name⟩
This flag can be supplied multiple times in order to specify which
modules are not defined in a unit but should be reexported. The
effect is that other units will see this module as if it was defined
in this unit.
The use of this flag is to be able to replicate the reexported
modules feature of packages with multiple home units.
Offsetting Paths in Template Haskell splices
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
When using Template Haskell to embed files into your program,
traditionally the paths have been interpreted relative to the directory
where the .cabal file resides. This causes problems for multiple home
units as we are compiling many different libraries at once which have
.cabal files in different directories.
For this purpose we have introduced a way to query the value of the
-working-dir flag through the Template Haskell API. By using this function we
can implement a makeRelativeToProject function which offsets a path
which is relative to the original project root by the value of
-working-dir.
```
import Language.Haskell.TH.Syntax ( makeRelativeToProject )
foo = $(makeRelativeToProject "./relative/path" >>= embedFile)
```
> If you write a relative path in a Template Haskell splice you should use the makeRelativeToProject function so that your library works correctly with multiple home units.
A similar function already exists in the file-embed library. The
function in template-haskell implements this function in a more robust
manner by honouring the -working-dir flag rather than searching the file
system.
Closure Property for Home Units
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
For tools or libraries using the API there is one very important closure
property which must be adhered to:
> Any dependency which is not a home unit must not (transitively) depend
on a home unit.
For example, if you have three packages p, q and r, then if p depends on
q which depends on r then it is illegal to load both p and r as home
units but not q, because q is a dependency of the home unit p which
depends on another home unit r.
If you are using GHC from the command line then this property is checked,
but if you are using the API then you need to check this property
yourself. If you get it wrong you will probably get some very confusing
errors about overlapping instances.
Limitations of Multiple Home Units
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
There are a few limitations of the initial implementation which will be smoothed out on user demand.
* Package thinning/renaming syntax is not supported
* More complicated reexports/renaming are not yet supported.
* It’s more common to run into existing linker bugs when loading a
large number of packages in a session (for example #20674, #20689)
* Backpack is not yet supported when using multiple home units.
* Dependency chasing can be quite slow with a large number of
modules and packages.
* Loading wired-in packages as home units is currently not supported
(this only really affects GHC developers attempting to load
template-haskell).
* Barely any normal GHCi features are supported; it would be good to
support enough for ghcid to work correctly.
Despite these limitations, the implementation already works for nearly
all packages. It has been tested on large dependency closures, including
the whole of head.hackage, which is a total of 4784 modules from 452
packages.
Internal Changes
~~~~~~~~~~~~~~~~
* The biggest change is that the HomePackageTable is replaced with the
HomeUnitGraph. The HomeUnitGraph is a map from UnitId to HomeUnitEnv,
which contains information specific to each home unit.
* The HomeUnitEnv (see the sketch after this list) contains:
- A unit state, each home unit can have different package db flags
- A set of dynflags, each home unit can have different flags
- A HomePackageTable
* LinkNode: A new node type is added to the ModuleGraph, this is used to
place the linking step into the build plan so linking can proceed in
parallel with other packages being built.
* New invariant: Dependencies of a ModuleGraphNode can be completely
determined by looking at the value of the node. In order to achieve
this, downsweep now performs a more complete job of downsweeping and
then the dependencies are recorded forever in the node rather than
being computed again from the ModSummary.
* Some transitive module calculations are rewritten to use the
ModuleGraph which is more efficient.
* There is always an active home unit, which simplifies modifying a lot
of the existing API code which is unit agnostic (for example, in the
driver).
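The rough shape of the types mentioned above (a sketch with stand-in
definitions; the real ones live in the GHC API around GHC.Unit.Env):
```hs
import qualified Data.Map.Strict as M

-- Stand-ins for the GHC types named in this commit message.
type UnitId = String
data UnitState
data DynFlags
data HomePackageTable

-- The single HomePackageTable is replaced by a map keyed on UnitId.
type HomeUnitGraph = M.Map UnitId HomeUnitEnv

data HomeUnitEnv = HomeUnitEnv
  { homeUnitEnv_units  :: UnitState        -- per-unit package db state
  , homeUnitEnv_dflags :: DynFlags         -- per-unit flags
  , homeUnitEnv_hpt    :: HomePackageTable -- per-unit compiled modules
  }
```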
The road may be bumpy for a little while after this change but the
basics are well-tested.
There is one small metric increase, which we accept, and also a
submodule update to haddock which removes ExtendedModSummary.
Closes #10827
-------------------------
Metric Increase:
MultiLayerModules
-------------------------
Co-authored-by: Fendor <power.walross@gmail.com>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
The reqlib modifier was supposed to indicate that a test needed a certain
library in order to work. If the library happened to be installed then
the test would run as normal.
However, CI has never run these tests, as the packages have not been
installed, and we don't want our tests to depend on things which might
get externally broken by updating the compiler.
The new strategy is to run these tests in head.hackage, where the tests
have been cabalised as well as possible. Some tests couldn't be
transferred into the normal-style testsuite, but this is better than
never running any of the reqlib tests. https://gitlab.haskell.org/ghc/head.hackage/-/merge_requests/169
A few submodules also had reqlib tests and have been updated to remove
it.
Closes #16264 #20032 #17764 #16561
|
|
|
|
| |
We use the parser generated by stack to ensure reproducibility
|
|
|
|
| |
Fixes #20621
|
|
|
|
|
|
| |
This test triggers the bad code path identified by #20509 where an entry
into the EPS caused by importing Control.Applicative will retain a stale
HomePackageTable.
|
| |
|
| |
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
While investigating #20106, I made a few refactorings to the pattern-match
checker that I don't want to lose. Here are the changes:
* Some key functions of the checker now have SCC annotations
* Better `-ddump-ec-trace` diagnostics for easier debugging. I added
'traceWhenFailPm' to see *why* a particular `MaybeT` computation fails and
made use of it in `instCon`.
I also increased the acceptance threshold of T11545, which seems to fail
randomly lately due to ghc/max flukes.
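For reference, SCC annotations of this kind make the annotated functions
show up as cost centres in profiles (an illustrative example, not the
patch's code):
```hs
-- With profiling enabled, the "checkPm" cost centre attributes time
-- and allocation to this function in the profile output.
checkPm :: Int -> Int
checkPm n = {-# SCC "checkPm" #-} sum [1 .. n]
```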
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
This patch, provoked by regressions in the text package
(#19557), improves sharing of join points. This also fixes
the terrible behaviour in #20049.
See Note [Duplicating join points] in GHC.Core.Opt.Simplify.
* In the StrictArg case of mkDupableContWithDmds, don't
use Plan A for data constructors
* In postInlineUnconditionally, don't inline JoinIds
Avoids inlining join $j x = Just x
in case blah of
A -> $j x1
B -> $j x2
C -> $j x3
* In mkDupableStrictBind and mkDupableStrictAlt, create
join points (much) more often: exprIsTrivial rather than
exprIsDupable. This may be too much, but we'll see.
Metric Decrease:
T12545
T13253-spj
T13719
T18140
T18282
T18304
T18698a
T18698b
Metric Increase:
T16577
T18923
T9961
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
In a sequel to #19414, I wrote a script that measures min and max allocation
bounds of T12545 based on randomly modifying -dunique-increment. I got a spread
of as much as 4.8%. But instead of widening the acceptance window further (to
5%), I committed the script as part of this commit, so that false positive
increases can easily be diagnosed by comparing min and max bounds to HEAD.
Indeed, for !5814 we have seen T12545 go from -0.3% to 3.3% after a rebase.
I made sure that the min and max bounds actually stayed the same.
In the future, this kind of check can very easily be done in a matter of a
minute. Maybe we should increase the acceptance threshold if we need to check
often (leave a comment on #19414 if you had to check), but I've not been bitten
by it for half a year, which seems OK.
Metric Increase:
T12545
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
This patch comprises four different but closely related ideas. The
net result is fixing a large number of open issues with the driver
whilst making it simpler to understand.
1. Use the hash of the source file to determine whether the source file
has changed or not (see the sketch after this list). This makes the
recompilation checking more robust to modern build systems, which are
liable to copy files around, changing their modification times.
2. Remove the concept of a "stable module". A stable module was one
where the object file was newer than the source file and all transitive
dependencies were also stable. Now that we don't rely on the
modification time of the source file, the notion of stability is moot.
3. Fix TH/plugin recompilation after the removal of stable modules. The
TH recompilation check used to rely on stable modules. Now there is a
uniform and simple way: we directly track the linkables which were
loaded into the interpreter whilst compiling a module. This is an
over-approximation, but it is more robust wrt package dependencies
changing.
4. Fix recompilation checking for dynamic object files: we now actually
check whether the dynamic object file exists when compiling with
-dynamic-too.
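Returning to point 1, the mechanism is essentially a content-hash
comparison; a minimal sketch using base's GHC.Fingerprint:
```hs
import GHC.Fingerprint (Fingerprint, getFileHash)

-- Recompilation is keyed on the hash of the source contents rather than
-- its modification time, so copying a file around does not by itself
-- trigger a rebuild.
sourceChanged :: Fingerprint -> FilePath -> IO Bool
sourceChanged storedHash src = do
  h <- getFileHash src
  pure (h /= storedHash)
```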
Fixes #19774 #19771 #19758 #17434 #11556 #9121 #8211 #16495 #7277 #16093
|
|
|
|
|
| |
This makes it more robust to people running it with `quick` flavour and
so on.
|
|
|
|
|
| |
The test's max memory usage improves dramatically with the fixes to
memory usage in the demand analyser from #15455.
|
|
|
|
| |
Fixes #11545
|
| |
|
| |
|
|
|
|
|
|
|
| |
As #19293 realises, this one keeps flip-flopping by 2.5% depending on
how many modules there are within the GHC package.
We should revert this once we have figured out what's going on.
|
|
|
|
|
|
|
|
|
| |
This test flip-flops by +-1% in arbitrary changes in CI.
While playing around with `-dunique-increment`, I could reproduce
variations of 3% in compiler allocations, so I set the acceptance window
accordingly.
Fixes #19414.
|
|
|
|
| |
This reverts commit 4a9d856d21c67b3328e26aa68a071ec9a824a7bb.
|
| |
|
| |
|
| |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
As outlined in #18903, interleaving usage and strictness demands not
only means a more compact demand representation, but also allows us to
express demands that we weren't easily able to express before.
Call demands are *relative* in the sense that a call demand `Cn(cd)`
on `g` says "`g` is called `n` times. *Whenever `g` is called*, the
result is used according to `cd`". Example from #18903:
```hs
h :: Int -> Int
h m =
let g :: Int -> (Int,Int)
g 1 = (m, 0)
g n = (2 * n, 2 `div` n)
{-# NOINLINE g #-}
in case m of
1 -> 0
2 -> snd (g m)
_ -> uncurry (+) (g m)
```
Without the interleaved representation, we would just get `L` for the
strictness demand on `g`. Now we are able to express that whenever `g`
is called, its second component is used strictly, by denoting `g`'s
demand as `1C1(P(1P(U),SP(U)))`. This allows Nested CPR to unbox the
division, for example.
Fixes #18903.
While fixing regressions, I also discovered and fixed #18957.
Metric Decrease:
T13253-spj
|
|\ |
|
| |
| |
| |
| | |
These all have a maximum residency of over 2 GB.
|
|/
|
|
|
|
|
| |
Progress towards #18842. As @sgraf812 points out, widening the window is
dangerous until the exponential described in #17658 is fixed. But this
test has caused enough misery and is low stakes enough that we and
@bgamari think it's worth it in this one case for the time being.
|
|
|
|
|
|
|
|
| |
Makes it possible for GHC to optimize away intermediate Generic representation
for more types.
Metric Increase:
T12227
|
| |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
This patch fixes #18223, which made GHC generate an exponential
amount of code. There are three quite separate changes in here
1. Re-engineer eta-expansion (again). The eta-expander was
generating lots of intermediate stuff, which could be optimised
away, but which choked the simplifier meanwhile. Relatively
easy to kill it off at source.
See Note [The EtaInfo mechanism] in GHC.Core.Opt.Arity.
The main new thing is the use of pushCoArg in getArg_maybe.
2. Stop Specialise specialising DFuns. This is the cause of a huge
(and utterly unnecessary) blowup in program size in #18223.
See Note [Do not specialise DFuns] in GHC.Core.Opt.Specialise.
I also refactored the Specialise monad a bit... it was silly,
because it passed on unchanging values as if they were mutable
state.
3. Do an extra Simplifier run, after SpecConstr and before
late-Specialise. I found (investigating perf/compiler/T16473)
that failing to do this was crippling *both* SpecConstr *and*
Specialise. See Note [Simplify after SpecConstr] in
GHC.Core.Opt.Pipeline.
This change does mean an extra run of the Simplifier, but only
with -O2, and I think that's acceptable.
T16473 allocates *three* times less with this change. (I changed
it to check runtime rather than compile time.)
Some smaller consequences
* I moved pushCoercion, pushCoArg and friends from SimpleOpt
to Arity, because it was needed by the new etaInfoApp.
And pushCoValArg now returns a MCoercion rather than Coercion for
the argument Coercion.
* A minor, incidental improvement to Core pretty-printing
This does fix #18223 (which was otherwise uncompilable). Hooray. But
there is still a big intermediate program, because there are some very
deeply nested types in that program.
Modest reductions in compile-time allocation on a couple of benchmarks
T12425 -2.0%
T13253 -10.3%
Metric increase with -O2, due to extra simplifier run
T9233 +5.8%
T12227 +1.8%
T15630 +5.0%
There is a spurious apparent increase on heap residency on T9630,
on some architectures at least. I tried it with -G1 and the residency
is essentially unchanged.
Metric Increase:
T9233
T12227
T9630
Metric Decrease:
T12425
T13253
|
|
|
|
|
|
|
| |
Previously it collected everything, including "max bytes used". This is
problematic since the test makes no attempt to control for deviations in
GC timing, resulting in high variability. Fix this by only collecting
"bytes allocated".
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Specifically:
#13253 exponential inlining
#10421 ditto
#18140 strict constructors
#18282 another nested-function call case
This patch makes one really significant change: it alters the way that
mkDupableCont handles StrictArg. The details are explained in
GHC.Core.Opt.Simplify Note [Duplicating StrictArg].
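The problematic shape is deeply nested strict calls, roughly like this
hedged illustration (not the actual ticket code):
```hs
-- f is strict in both arguments, so the simplifier pushes a StrictArg
-- continuation at every level of nesting; duplicating those
-- continuations naively blows up exponentially in the depth.
f :: Int -> Int -> Int
f x y = x `seq` y `seq` x + y

nest :: Int -> Int
nest x = f (f (f x x) (f x x)) (f (f x x) (f x x))
```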
Specific changes
* In mkDupableCont, when making auxiliary bindings for the other arguments
of a call, add extra plumbing so that we don't forget the demand on them.
Otherwise we have to wait for another round of strictness analysis. But
actually all the info is to hand. This change affects:
- Make the strictness list in ArgInfo be [Demand] instead of [Bool],
and rename it to ai_dmds.
- Add as_dmd to ValArg
- Simplify.makeTrivial takes a Demand
- mkDupableContWithDmds takes a [Demand]
There are a number of other small changes
1. For Ids that are used at most once in each branch of a case, make
the occurrence analyser record the total number of syntactic
occurrences. Previously we recorded just OneBranch or
MultipleBranches.
I thought this was going to be useful, but I ended up barely
using it; see Note [Suppress exponential blowup] in
GHC.Core.Opt.Simplify.Utils
Actual changes:
* See the occ_n_br field of OneOcc.
* postInlineUnconditionally
2. I found a small perf buglet in SetLevels; see the new
function GHC.Core.Opt.SetLevels.hasFreeJoin
3. Remove the sc_cci field of StrictArg. I found I could get
its information from the sc_fun field instead. Less to get
wrong!
4. In ArgInfo, arrange that ai_dmds and ai_discs have a simpler
invariant: they line up with the value arguments beyond ai_args
This allowed a bit of nice refactoring; see isStrictArgInfo,
lazyArgContext, strictArgContext
There is virtually no difference in nofib. (The runtime numbers
are bogus -- I tried a few manually.)
Program Size Allocs Runtime Elapsed TotalMem
--------------------------------------------------------------------------------
fft +0.0% -2.0% -48.3% -49.4% 0.0%
multiplier +0.0% -2.2% -50.3% -50.9% 0.0%
--------------------------------------------------------------------------------
Min -0.4% -2.2% -59.2% -60.4% 0.0%
Max +0.0% +0.1% +3.3% +4.9% 0.0%
Geometric Mean +0.0% -0.0% -33.2% -34.3% -0.0%
Test T18282 is an existing example of these deeply-nested strict calls.
We get a big decrease in compile time (-85%) because so much less
inlining takes place.
Metric Decrease:
T18282
|
|
|
|
|
|
| |
This test is positively tiny, and consequently the bytes-allocated
measurement is relatively noisy. As a result, I have seen it fail
spuriously quite often.
|