| Commit message | Author | Age | Files | Lines |
| |
This will allow command line parsing to depend on the
diagnostic system (which depends on DynFlags).
|
| |
Towards #22530
|
| |
Create and use moduleGraphModulesBelow in GHC.Unit.Module.Graph, which
doesn't need anything from the driver in order to be used.
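For intuition, here is a small self-contained sketch (not the real implementation; toy String keys stand in for GHC's module types) of what a "modules below" computation does: collect everything reachable from a module in a plain dependency map.
```
import qualified Data.Map.Strict as M
import qualified Data.Set as S

-- Toy sketch, not GHC code: everything reachable below a module in a
-- dependency map, including the starting module itself.
modulesBelow :: M.Map String [String] -> String -> S.Set String
modulesBelow deps root = go S.empty [root]
  where
    go seen [] = seen
    go seen (m:ms)
      | m `S.member` seen = go seen ms
      | otherwise         = go (S.insert m seen) (M.findWithDefault [] m deps ++ ms)
```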
|
| |
This is helpful when debugging multiple component issues.
|
| |
Add JS backend adapted from the GHCJS project by Luite Stegeman.
Some features haven't been ported or implemented yet. Tests for these
features have been disabled with an associated gitlab ticket.
Bump array submodule
Work funded by IOG.
Co-authored-by: Jeffrey Young <jeffrey.young@iohk.io>
Co-authored-by: Luite Stegeman <stegeman@gmail.com>
Co-authored-by: Josh Meredith <joshmeredith2008@gmail.com>
|
| |
Pass FastStrings to functions directly, to make sure the rule
for fsLit "literal" fires.
Remove SDoc indirection in GHCi.UI.Tags and GHC.Unit.Module.Graph.
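As an illustration (the helper below is hypothetical; only fsLit and ftext are real functions, from the ghc package's GHC.Data.FastString and GHC.Utils.Outputable), taking a FastString argument keeps the literal wrapped directly in fsLit at the call site, which is what lets the rule fire:
```
import GHC.Data.FastString (FastString, fsLit)
import GHC.Utils.Outputable (SDoc, ftext)

-- Hypothetical helper: because it takes a FastString rather than a String,
-- call sites are written as `tagDoc (fsLit "literal")`, so the fsLit
-- rewrite rule can fire on the literal.
tagDoc :: FastString -> SDoc
tagDoc = ftext
```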
|
| |
This commit adds three new flags
* -fwrite-if-simplified-core: Writes the whole core program into an interface
file
* -fbyte-code-and-object-code: Generate both byte code and object code
when compiling a file
* -fprefer-byte-code: Prefer to use byte-code if it's available when
running TH splices.
The goal of including the core bindings in an interface file is to be able to restart the compiler pipeline
at the point just after simplification and before code generation. Once compilation is
restarted, code can be generated for the byte code backend.
This can significantly speed up
start times for projects in GHCi. HLS already implements its own version of these extended interface
files for this reason.
Preferring to use byte code means that we can avoid some potentially
expensive code generation steps (see #21700):
* Producing object code is much slower than producing byte code, and normally you
need to compile with `-dynamic-too` to produce code in the static and dynamic ways, the
dynamic way being needed just for Template Haskell execution when using a dynamically linked compiler.
* Linking many large object files, which happens once per splice, can be quite
expensive compared to linking byte code.
And you can get GHC to compile the necessary byte code so
`-fprefer-byte-code` has access to it by using
`-fbyte-code-and-object-code`.
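A hypothetical usage sketch (this module is illustrative and not part of the patch): a program that runs a TH splice, built with the three new flags so splices can be served from byte code.
```
{-# LANGUAGE TemplateHaskell #-}
-- Illustrative only. Build with something like:
--   ghc --make -fwrite-if-simplified-core -fbyte-code-and-object-code \
--       -fprefer-byte-code Main.hs
-- so that code needed to run splices can be loaded as byte code instead of
-- requiring freshly generated object code.
module Main where

import Language.Haskell.TH (litE, stringL)

main :: IO ()
main = putStrLn $(litE (stringL "evaluated at splice time"))
```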
Fixes #21067
|
| |
This patch fixes quite a tricky leak where we would end up retaining
stale ModDetails due to rehydrating modules against non-finalised
interfaces.
== Loops with multiple boot files
It is possible for a module graph to have a loop (SCC, when ignoring boot files)
which requires multiple boot files to break. In this case we must perform the
necessary hydration steps described above, before and after compiling modules which
have boot files, for correctness, but we must also perform an additional hydration step
at the end of the SCC to remove space leaks.
Consider the following example:
┌───────┐ ┌───────┐
│ │ │ │
│ A │ │ B │
│ │ │ │
└─────┬─┘ └───┬───┘
│ │
┌────▼─────────▼──┐
│ │
│ C │
└────┬─────────┬──┘
│ │
┌────▼──┐ ┌───▼───┐
│ │ │ │
│ A-boot│ │ B-boot│
│ │ │ │
└───────┘ └───────┘
A, B and C live together in an SCC. Say we compile the modules in the order
A-boot, B-boot, C, A, B. When we compile A we will perform the hydration steps
(because A has a boot file). Therefore C will be hydrated relative to A, and the
ModDetails for A will reference C/A. Then when B is compiled, C will be rehydrated again,
so B will reference C/A,B; its interface will be hydrated relative to both A and B.
Now there is a space leak: if C is a very big module, there are now two different copies of
its ModDetails kept alive by modules A and B.
The way to avoid this space leak is to rehydrate an entire SCC together at the
end of compilation so that all the ModDetails point to interfaces for .hs files.
In this example, when we hydrate A, B and C together then both A and B will refer to
C/A,B.
See #21900 for some more discussion.
-------------------------------------------------------
In addition to this simple case, there is also the potential for a leak
during parallel upsweep, which is also fixed by this patch. Transcribed below is
Note [ModuleNameSet, efficiency and space leaks]
Note [ModuleNameSet, efficiency and space leaks]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
During upsweep the results of compiling modules are placed into an MVar. To find
the environment a module needs to compile itself in, the MVar is consulted and
the HomeUnitGraph is set accordingly. The reason we do this is that precisely tracking
module dependencies and recreating the HUG from scratch each time is very expensive.
In serial mode (-j1), this all works out fine because a module can only be compiled after
its dependencies have finished compiling and not interleaved with compiling module loops.
Therefore when we create the finalised or no loop interfaces, the HUG only contains
finalised interfaces.
In parallel mode, we have to be more careful because the HUG variable can contain
non-finalised interfaces which have been started by another thread. In order to avoid
a space leak where a finalised interface is compiled against a HPT which contains a
non-finalised interface we have to restrict the HUG to only the visible modules.
The visible modules are recorded in the ModuleNameSet, which is propagated upwards
whilst compiling and records which transitive modules are visible from a certain point.
This set is then used to restrict the HUG, before the module is compiled, to only
the visible modules, thus avoiding this tricky space leak.
Efficiency of the ModuleNameSet is of utmost importance because a union occurs for
each edge in the module graph. Therefore the set is represented directly as an IntSet
which provides suitable performance; even using a UniqSet (which is backed by an IntMap) is
too slow. The crucial test of performance here is the time taken to do a no-op build in --make mode.
See test "jspace" for an example which used to trigger this problem.
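A rough sketch of the restriction step (toy Int keys standing in for the uniques of module names; not the real GHC code):
```
import qualified Data.IntMap.Strict as IM
import qualified Data.IntSet as IS

-- Toy sketch: the home package table as an IntMap keyed by a module's unique,
-- and the visible-module set as an IntSet. Restricting the table to the
-- visible modules before compiling a module is a single restrictKeys.
restrictToVisible :: IM.IntMap hmi -> IS.IntSet -> IM.IntMap hmi
restrictToVisible hpt visible = hpt `IM.restrictKeys` visible
```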
Fixes #21900
|
| |
We were retaining references to previous ModuleGraphs, in particular via the lazy
`mg_non_boot` field. This manifests in `extendMG`.
Solution: Delete `mg_non_boot` as it is only used for `mgLookupModule`, which
is only called in two places in the compiler, and should only be called at most
once for every home unit:
GHC.Driver.Make:
    mainModuleSrcPath :: Maybe String
    mainModuleSrcPath = do
      ms <- mgLookupModule mod_graph (mainModIs hue)
      ml_hs_file (ms_location ms)
GHCi.UI:
    listModuleLine modl line = do
      graph <- GHC.getModuleGraph
      let this = GHC.mgLookupModule graph modl
Instead `mgLookupModule` can be a linear function that looks through the entire
list of `ModuleGraphNodes`
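A minimal sketch of that linear lookup (stand-in types, not GHC's real ModuleGraph or node types):
```
import Data.List (find)

-- Stand-in types for illustration.
data Node = ModuleNode String | InstantiationNode String | LinkNode
newtype ModuleGraph = ModuleGraph [Node]

-- Walk the whole node list; acceptable because this is called at most once
-- per home unit.
lookupModuleLinear :: ModuleGraph -> String -> Maybe Node
lookupModuleLinear (ModuleGraph nodes) name = find isWanted nodes
  where
    isWanted (ModuleNode m) = m == name
    isWanted _              = False
```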
Fixes #21816
|
| |
We were attempting to rehydrate all dependencies of a particular module,
but we actually only needed to rehydrate those of the current package
(as those are the ones participating in the loop).
This fixes loading GHC into a multi-unit session.
Fixes #21814
|
| |
With this change, `Backend` becomes an abstract type
(there are no more exposed value constructors).
Decisions that were formerly made by asking "is the
current back end equal to (or different from) this named value
constructor?" are now made by interrogating the back end about
its properties, which are functions exported by `GHC.Driver.Backend`.
There is a description of how to migrate code using `Backend` in the
user guide.
Clients using the GHC API can find a backdoor to access the Backend
datatype in GHC.Driver.Backend.Internal.
Bumps haddock submodule.
Fixes #20927
|
| |
This function is used by API clients (HLS).
This supersedes !6922.
|
| |
As noted in #21071, we were missing adding this edge (from a module's .hs node
to its .hs-boot node), so there were situations where the .hs file would get
compiled before the .hs-boot file, which leads to issues with -j.
I fixed this properly by adding the edge in downsweep, so the definition
of nodeDependencies can be simplified and no longer needs to add this dummy
edge in.
There are plenty of tests which seem to have these redundant boot files
anyway so no new test. #21094 tracks the more general issue of
identifying redundant hs-boot and SOURCE imports.
|
| |
In the case when we tell moduleGraphNodes to drop hs-boot files the idea
is to collapse hs-boot files into their hs file nodes. In the old code
* nodeDependencies changed edges from IsBoot to NonBoot
* moduleGraphNodes just dropped boot file nodes
The net result is that any dependencies of the hs-boot files themselves
were dropped. The correct thing to do is
* nodeDependencies changes edges from IsBoot to NonBoot
* moduleGraphNodes merges dependencies of IsBoot and NonBoot nodes.
The result is a properly quotiented dependency graph which contains no
hs-boot files nor hs-boot file edges.
Why this didn't cause endless issues when compiling with boot files, we
will never know.
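A small sketch of the quotienting idea (toy types, not the real moduleGraphNodes): when boot nodes are collapsed, their dependency lists are merged into the corresponding .hs node rather than dropped.
```
import qualified Data.Map.Strict as M

-- Toy types: a node keyed by module name and boot-ness, mapped to its deps.
data IsBootInterface = IsBoot | NotBoot deriving (Eq, Ord, Show)
type Deps = M.Map (String, IsBootInterface) [String]

-- Collapse M.hs-boot into M.hs by merging the two dependency lists, so no
-- dependencies of the hs-boot file are silently lost.
collapseBootNodes :: Deps -> M.Map String [String]
collapseBootNodes deps =
  M.fromListWith (++) [ (name, ds) | ((name, _boot), ds) <- M.toList deps ]
```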
|
| |
It turns out this job hasn't been running for quite a while (perhaps
ever) so there are quite a few failures when running the linter locally.
|
| |
The idea of the needsTemplateHaskellOrQQ query is to check whether any of the
modules in a module graph need Template Haskell, and then enable -dynamic-too
if necessary. This is quite imprecise though, as it will enable
-dynamic-too for all modules in the module graph even if only one module
uses Template Haskell; with multiple home units this is obviously even
worse.
With -fno-code we already have similar logic to enable code generation
just for the modules which are depended on by Template Haskell modules,
so we use the same code path to decide whether to enable -dynamic-too
rather than using this big hammer.
This is part of the larger overall goal of moving as much statically
known configuration into the downsweep as possible in order to have
fully decided the build plan and all the options before starting to
build anything.
I also included a fix for #21095, a long-standing bug in the logic
which is supposed to enable the external interpreter if we don't have
the internal interpreter.
Fixes #20696 #21095
|
| |
It was unused in the compiler so I have removed it to streamline
ModuleGraph.
|
| |
Multiple home units allow you to load different packages, which may depend on
each other, into one GHC session. This will allow both GHCi and HLS to support
multi-component projects more naturally.
Public Interface
~~~~~~~~~~~~~~~~
In order to specify multiple units, the -unit @⟨filename⟩ flag
is given multiple times with a response file containing the arguments for each unit.
The response file contains a newline separated list of arguments.
```
ghc -unit @unitLibCore -unit @unitLib
```
where the `unitLibCore` response file contains the normal arguments that cabal would pass to `--make` mode.
```
-this-unit-id lib-core-0.1.0.0
-i
-isrc
LibCore.Utils
LibCore.Types
```
The response file for lib can specify a dependency on lib-core, so that modules in lib can use modules from lib-core.
```
-this-unit-id lib-0.1.0.0
-package-id lib-core-0.1.0.0
-i
-isrc
Lib.Parse
Lib.Render
```
Then when the compiler starts in --make mode it will compile both units lib and lib-core.
There is also very basic support for multiple home units in GHCi: at the
moment you can start a GHCi session with multiple units, but only
:reload is supported. Most commands in GHCi assume a single home unit,
so it is additional work to work out how to modify the interface to
support multiple loaded home units.
Options used when working with Multiple Home Units
There are a few extra flags which have been introduced specifically for
working with multiple home units. The flags allow a home unit to pretend
it’s more like an installed package, for example, specifying the package
name, module visibility and reexported modules.
-working-dir ⟨dir⟩
It is common to assume that a package is compiled in the directory
where its cabal file resides. Thus, all paths used in the compiler
are assumed to be relative to this directory. When there are
multiple home units the compiler is often not operating in the
standard directory, but instead where the cabal.project file is
located. In this case the -working-dir option can be passed, which
specifies the path from the current directory to the directory the
unit assumes to be its root, normally the directory which contains
the cabal file.
When the flag is passed, any relative paths used by the compiler are
offset by the working directory. Notably this includes -i and
-I⟨dir⟩ flags.
-this-package-name ⟨name⟩
This flag papers over the awkward interaction of the PackageImports
and multiple home units. When using PackageImports you can specify
the name of the package in an import to disambiguate between modules
which appear in multiple packages with the same name.
This flag allows a home unit to be given a package name so that you
can also disambiguate between multiple home units which provide
modules with the same name.
-hidden-module ⟨module name⟩
This flag can be supplied multiple times in order to specify which
modules in a home unit should not be visible outside of the unit it
belongs to.
The main use of this flag is to be able to recreate the difference
between an exposed and hidden module for installed packages.
-reexported-module ⟨module name⟩
This flag can be supplied multiple times in order to specify which
modules are not defined in a unit but should be reexported. The
effect is that other units will see this module as if it was defined
in this unit.
The use of this flag is to be able to replicate the reexported
modules feature of packages with multiple home units.
Offsetting Paths in Template Haskell splices
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
When using Template Haskell to embed files into your program,
traditionally the paths have been interpreted relative to the directory
where the .cabal file resides. This causes problems for multiple home
units as we are compiling many different libraries at once which have
.cabal files in different directories.
For this purpose we have introduced a way to query the value of the
-working-dir flag to the Template Haskell API. By using this function we
can implement a makeRelativeToProject function which offsets a path
which is relative to the original project root by the value of
-working-dir.
```
import Language.Haskell.TH.Syntax ( makeRelativeToProject )
foo = $(makeRelativeToProject "./relative/path" >>= embedFile)
```
> If you write a relative path in a Template Haskell splice you should use the makeRelativeToProject function so that your library works correctly with multiple home units.
A similar function already exists in the file-embed library. The
function in template-haskell implements this function in a more robust
manner by honouring the -working-dir flag rather than searching the file
system.
Closure Property for Home Units
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
For tools or libraries using the API there is one very important closure
property which must be adhered to:
> Any dependency which is not a home unit must not (transitively) depend
on a home unit.
For example, if you have three packages p, q and r, then if p depends on
q which depends on r then it is illegal to load both p and r as home
units but not q, because q is a dependency of the home unit p which
depends on another home unit r.
If you are using GHC by the command line then this property is checked,
but if you are using the API then you need to check this property
yourself. If you get it wrong you will probably get some very confusing
errors about overlapping instances.
Limitations of Multiple Home Units
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
There are a few limitations of the initial implementation which will be smoothed out on user demand.
* Package thinning/renaming syntax is not supported
* More complicated reexports/renaming are not yet supported.
* It’s more common to run into existing linker bugs when loading a
large number of packages in a session (for example #20674, #20689)
* Backpack is not yet supported when using multiple home units.
* Dependency chasing can be quite slow with a large number of
modules and packages.
* Loading wired-in packages as home units is currently not supported
(this only really affects GHC developers attempting to load
template-haskell).
* Barely any normal GHCi features are supported, it would be good to
support enough for ghcid to work correctly.
Despite these limitations, the implementation already works for nearly
all packages. It has been tested on large dependency closures,
including the whole of head.hackage, which is a total of 4784 modules
from 452 packages.
Internal Changes
~~~~~~~~~~~~~~~~
* The biggest change is that the HomePackageTable is replaced with the
HomeUnitGraph. The HomeUnitGraph is a map from UnitId to HomeUnitEnv,
which contains information specific to each home unit (see the sketch
after this list).
* The HomeUnitEnv contains:
- A unit state, each home unit can have different package db flags
- A set of dynflags, each home unit can have different flags
- A HomePackageTable
* LinkNode: A new node type is added to the ModuleGraph; this is used to
place the linking step into the build plan so linking can proceed in
parallel with other packages being built.
* New invariant: Dependencies of a ModuleGraphNode can be completely
determined by looking at the value of the node. In order to achieve
this, downsweep now performs a more complete job of downsweeping and
then the dependencies are recorded forever in the node rather than
being computed again from the ModSummary.
* Some transitive module calculations are rewritten to use the
ModuleGraph which is more efficient.
* There is always an active home unit, which simplifies modifying a lot
of the existing API code which is unit agnostic (for example, in the
driver).
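A rough sketch of the HomeUnitGraph shape described in the first bullet above (stand-in types; the real HomeUnitEnv and HomePackageTable are of course richer):
```
import qualified Data.Map.Strict as M

type UnitId           = String
type DynFlags         = [String]        -- stand-in for per-unit flags
type HomePackageTable = M.Map String () -- stand-in for compiled-module info

-- Each home unit gets its own environment.
data HomeUnitEnv = HomeUnitEnv
  { homeUnitFlags :: DynFlags -- each unit can have different flags
  , homeUnitState :: String   -- stand-in for the per-unit unit state
  , homeUnitHPT   :: HomePackageTable
  }

type HomeUnitGraph = M.Map UnitId HomeUnitEnv
```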
The road may be bumpy for a little while after this change but the
basics are well-tested.
One small metric increase, which we accept, and also a submodule update to
haddock which removes ExtendedModSummary.
Closes #10827
-------------------------
Metric Increase:
MultiLayerModules
-------------------------
Co-authored-by: Fendor <power.walross@gmail.com>
|
| |
Note [Hydrating Modules]
~~~~~~~~~~~~~~~~~~~~~~~~
What is hydrating a module?
* There are two versions of a module, the ModIface is the on-disk version and the ModDetails is a fleshed-out in-memory version.
* We can **hydrate** a ModIface in order to obtain a ModDetails.
Hydration happens in three different places
* When an interface file is initially loaded from disk, it has to be hydrated.
* When a module is finished compiling, we hydrate the ModIface in order to generate
the version of ModDetails which exists in memory (see Note)
* When dealing with boot files and module loops (see Note [Rehydrating Modules])
Note [Rehydrating Modules]
~~~~~~~~~~~~~~~~~~~~~~~~~~~
If a module has a boot file then it is critical to rehydrate the modules on
the path between the two.
Suppose we have ("R" for "recursive"):
```
R.hs-boot: module R where
data T
g :: T -> T
A.hs: module A( f, T, g ) where
import {-# SOURCE #-} R
data S = MkS T
f :: T -> S = ...g...
R.hs: module R where
data T = T1 | T2 S
g = ...f...
```
After compiling A.hs we'll have a TypeEnv in which the Id for `f` has a type
that uses the AbstractTyCon T; and a TyCon for `S` that also mentions that same
AbstractTyCon. (Abstract because it came from R.hs-boot; we know nothing about
it.)
When compiling R.hs, we build a TyCon for `T`. But that TyCon mentions `S`, and
it currently has an AbstractTyCon for `T` inside it. But we want to build a
fully cyclic structure, in which `S` refers to `T` and `T` refers to `S`.
Solution: **rehydration**. *Before compiling `R.hs`*, rehydrate all the
ModIfaces below it that depend on R.hs-boot. To rehydrate a ModIface, call
`typecheckIface` to convert it to a ModDetails. It's just a de-serialisation
step, no type inference, just lookups.
Now `S` will be bound to a thunk that, when forced, will "see" the final binding
for `T`; see [Tying the knot](https://gitlab.haskell.org/ghc/ghc/-/wikis/commentary/compiler/tying-the-knot).
But note that this must be done *before* compiling R.hs.
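As a tiny self-contained illustration of the kind of cyclic structure being built (not GHC code): two lazily knot-tied values that refer to each other, much as the final TyCons for `T` and `S` must.
```
data T = MkT { tName :: String, tOther :: S }
data S = MkS { sName :: String, sBack  :: T }

knot :: (T, S)
knot = (t, s)
  where
    t = MkT "T" s
    s = MkS "S" t

-- Forcing a field "sees" the other half of the knot: this evaluates to "T".
example :: String
example = tName (sBack (snd knot))
```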
When compiling R.hs, the knot-tying stuff above will ensure that `f`'s unfolding
mentions the `LocalId` for `g`. But when we finish R, we carefully ensure that
all those `LocalIds` are turned into completed `GlobalIds`, replete with
unfoldings etc. Alas, that will not apply to the occurrences of `g` in `f`'s
unfolding. And if we leave matters like that, they will stay that way, and *all*
subsequent modules that import A will see a crippled unfolding for `f`.
Solution: rehydrate both R and A's ModIface together, right after completing R.hs.
We only need to rehydrate modules that are
* Below R.hs
* Above R.hs-boot
There might be many unrelated modules (in the home package) that don't need to be
rehydrated.
This dark corner is the subject of #14092.
Suppose we add to our example
```
X.hs module X where
import A
data XT = MkX T
fx = ...g...
```
If in `--make` we compile R.hs-boot, then A.hs, then X.hs, we'll get a `ModDetails` for `X` that has an AbstractTyCon for `T` in the argument type of `MkX`. So:
* Either we should delay compiling X until after R has been compiled.
* Or we should rehydrate X after compiling R -- because it transitively depends on R.hs-boot.
Ticket #20200 has exposed some issues to do with the knot-tying logic in GHC.Make, in `--make` mode.
This particular issue starts [here](https://gitlab.haskell.org/ghc/ghc/-/issues/20200#note_385758).
The wiki page [Tying the knot](https://gitlab.haskell.org/ghc/ghc/-/wikis/commentary/compiler/tying-the-knot) is helpful.
Also closely related are
* #14092
* #14103
Fixes tickets #20200 #20561
|
| |
Two reasons for this change:
1. Avoid computing the transitive dependencies when compiling each
module; this can save a lot of repeated work.
2. More robust to forthcoming changes to support multiple home units.
|
| |
Before, we would print
    [1 of 3] Compiling T[boot] ( T.hs-boot, nothing, T.dyn_o )
which was clearly wrong for two reasons:
1. No dynamic object file was produced for T[boot]
2. The file would be called T.dyn_o-boot if it was produced.
Fixes #20300
|
| |
ModLocation is the data type which tells you the locations of all the
build products which can affect recompilation. It is now computed in one
place and not modified through the pipeline. Important locations will
now just consult ModLocation rather than construct the dynamic object
path incorrectly.
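A simplified sketch of the idea (an abbreviated version of the record, not the full definition): every build-product path, including the dynamic ones, is fixed once when the location is created and consulted thereafter.
```
-- Simplified sketch of a ModLocation-style record.
data Location = Location
  { ml_hs_file      :: Maybe FilePath
  , ml_hi_file      :: FilePath
  , ml_obj_file     :: FilePath
  , ml_dyn_hi_file  :: FilePath -- dynamic interface file
  , ml_dyn_obj_file :: FilePath -- dynamic object file
  }

-- All paths derived up front from one base name; nothing is recomputed later.
mkLocation :: FilePath -> Location
mkLocation basename = Location
  { ml_hs_file      = Just (basename ++ ".hs")
  , ml_hi_file      = basename ++ ".hi"
  , ml_obj_file     = basename ++ ".o"
  , ml_dyn_hi_file  = basename ++ ".dyn_hi"
  , ml_dyn_obj_file = basename ++ ".dyn_o"
  }
```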
* Add paths for dynamic object and dynamic interface files to
ModLocation.
* Always use the paths from mod location when looking for where to find
any interface or object file.
* Always use the paths in a ModLocation when deciding where to write an
interface and object file.
* Remove `dynamicOutputFile` and `dynamicOutputHi` functions which
*calculated* (incorrectly) the location of `dyn_o` and `dyn_hi` files.
* Don't set `outputFile_` and so on in `enableCodeGenWhen`; `-o` and
hence `outputFile_` should not affect the location of object files in
`--make` mode. It is now sufficient to just update the ModLocation with
the temporary paths.
* In `hscGenBackendPipeline` don't recompute the `ModLocation` to
account for `-dynamic-too`, the paths are now accurate from the start
of the run.
* Rename `getLocation` to `mkOneShotModLocation`, as that's the only
place it's used. Increase the locality of the definition by moving it
close to the use-site.
* Load the dynamic interface from ml_dyn_hi_file rather than attempting
to reconstruct it in load_dynamic_too.
* Add a variety of tests to check how -o -dyno etc interact with each
other.
Some other clean-ups
* DeIOify mkHomeModLocation and friends, they are all pure functions.
* Move FinderOpts into GHC.Driver.Config.Finder, next to initFinderOpts.
* Be more precise about whether we mean outputFile or outputFile_: there
were many places where outputFile was used but the result shouldn't have
been affected by `-dyno` (for example the filename of the resulting
executable). In these places dynamicNow would never be set but it's
still more precise to not allow for this possibility.
* Typo fixes: suffices -> suffixes in the appropriate places.
|
| |
This patch specifies and simplifies the module cycle compilation
in upsweep. How things work is described in the Note [Upsweep]
Note [Upsweep]
~~~~~~~~~~~~~~
Upsweep takes a 'ModuleGraph' as input, computes a build plan and then executes
the plan in order to compile the project.
The first step is computing the build plan from a 'ModuleGraph'.
The output of this step is a `[BuildPlan]`, which is a topologically sorted plan for
how to build all the modules.
```
data BuildPlan = SingleModule ModuleGraphNode -- A simple, single module all alone but *might* have an hs-boot file which isn't part of a cycle
| ResolvedCycle [ModuleGraphNode] -- A resolved cycle, linearised by hs-boot files
| UnresolvedCycle [ModuleGraphNode] -- An actual cycle, which wasn't resolved by hs-boot files
```
The plan is computed in two steps:
Step 1: Topologically sort the module graph without hs-boot files. This returns a [SCC ModuleGraphNode] which contains
cycles.
Step 2: For each cycle, topologically sort the modules in the cycle *with* the relevant hs-boot files. This should
result in an acyclic build plan if the hs-boot files are sufficient to resolve the cycle.
The `[BuildPlan]` is then interpreted by the `interpretBuildPlan` function.
* `SingleModule nodes` are compiled normally by either the upsweep_inst or upsweep_mod functions.
* `ResolvedCycles` need to be compiled "together" so that the information which ends up in
the interface files at the end is accurate (and doesn't contain temporary information from
the hs-boot files.)
- During the initial compilation, a `KnotVars` is created which stores an IORef TypeEnv for
each module of the loop. These IORefs are gradually updated as the loop completes and provide
the required laziness to typecheck the module loop.
- At the end of typechecking, all the interface files are typechecked again in
the retypecheck loop. This time, the knot-tying is done by the normal laziness
based tying, so the environment is run without the KnotVars.
* UnresolvedCycles are indicative of a proper cycle, unresolved by hs-boot files
and are reported as an error to the user.
The main trickiness of `interpretBuildPlan` is deciding which version of a dependency
is visible from each module. For modules which are not in a cycle, there is just
one version of a module, so that is always used. For modules in a cycle, there are two versions of
'HomeModInfo'.
1. Internal to loop: The version created whilst compiling the loop by upsweep_mod.
2. External to loop: The knot-tied version created by typecheckLoop.
Whilst compiling a module inside the loop, we need to use version (1). For a module which
is outside of the loop and depends on something from inside the loop, version (2)
is used.
As the plan is interpreted, the visible version of each HomeModInfo is tracked
in a map held in a state monad. So after a loop has finished being compiled,
the visible module is the one created by typecheckLoop and the internal version is not
used again.
This plan also ensures the most important invariant to do with module loops:
> If you depend on anything within a module loop, before you can use the dependency,
the whole loop has to finish compiling.
The end result of `interpretBuildPlan` is a `[MakeAction]`: a list of pairs
of an `IO a` action and an `MVar (Maybe a)`, somewhere to put the result of running
the action. This list is topologically sorted, so can be run in order to compute
the whole graph.
As well as this, `interpretBuildPlan` also outputs an `IO [Maybe (Maybe HomeModInfo)]` which
can be queried at the end to get the result of all modules, with their proper
visibility. For example, if any module in a loop fails then all modules in that loop will
report as failed because the visible node at the end will be the result of retypechecking
those modules together.
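A sketch of the shape of these actions (simplified; not the exact definitions in the driver):
```
{-# LANGUAGE ExistentialQuantification #-}
import Control.Concurrent.MVar (MVar, putMVar)

-- Each build step is an action paired with an MVar that receives its result.
data MakeAction = forall a. MakeAction (IO a) (MVar (Maybe a))

-- The list is topologically sorted, so running it in order (or handing it to
-- a parallel scheduler) computes the whole graph.
runSequentially :: [MakeAction] -> IO ()
runSequentially = mapM_ (\(MakeAction act out) -> act >>= putMVar out . Just)
```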
Along the way we also fix a number of other bugs in the driver:
* Unify upsweep and parUpsweep.
* Fix #19937 (static points, ghci and -j)
* Adds lots of module loop tests due to Divam.
Also related to #20030
Co-authored-by: Divam Narula <dfordivam@gmail.com>
-------------------------
Metric Decrease:
T10370
-------------------------
|
| |
mode backpack edges
Backpack instantiations need to be typechecked to make sure that the
arguments fit the parameters. `tcRnInstantiateSignature` checks
instantiations with concrete modules, while `tcRnCheckUnit` checks
instantiations with free holes (signatures in the current modules).
Before this change, it worked that `tcRnInstantiateSignature` was called
after typechecking the argument module, see `HscMain.hsc_typecheck`,
while `tcRnCheckUnit` was called in `upsweep'`, where-bound in
`GhcMake.upsweep`. `tcRnCheckUnit` was called once for each
instantiation once all the argument sigs were processed. This was done
with simple "to do" and "already done" accumulators in the fold.
`parUpsweep` did not implement the change.
With this change, `tcRnCheckUnit` instead is associated with its own
node in the `ModuleGraph`. Nodes are now:
```haskell
data ModuleGraphNode
-- | Instantiation nodes track the instantiation of other units
-- (backpack dependencies) with the holes (signatures) of the current package.
= InstantiationNode InstantiatedUnit
-- | There is a module summary node for each module, signature, and boot module being built.
| ModuleNode ExtendedModSummary
```
instead of just `ModSummary`; the `InstantiationNode` case is the
instantiation of a unit to be checked. The dependencies of such nodes
are the same "free holes" as was checked with the accumulator before.
Both versions of upsweep on such a node call `tcRnCheckUnit`.
There previously was an `implicitRequirements` function which would
crawl through every non-current-unit module dep to look for all free
holes (signatures) to add as dependencies in `GHC.Driver.Make`. But this
is no good: we shouldn't be looking for transitive anything when
building the graph: the graph should only have immediate edges and the
scheduler takes care that all transitive requirements are met.
So `GHC.Driver.Make` stopped using `implicitRequirements`, and instead
uses a new `implicitRequirementsShallow`, which just returns the
outermost instantiation node (or module name if the immediate dependency
is itself a signature). The signature dependencies are just treated like
any other imported module, but the module ones then go in a list stored
in the `ModuleNode` next to the `ModSummary` as the "extra backpack
dependencies". When `downsweep` creates the mod summaries, it adds this
information too.
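Roughly (illustrative field names and stand-in types, not the exact definitions), a module node now carries those extra instantiations alongside its summary:
```
-- Stand-in types and hypothetical field names, for illustration only.
type ModSummary       = String
type InstantiatedUnit = String

data ExtendedModSummary = ExtendedModSummary
  { emsModSummary     :: ModSummary         -- the ordinary summary
  , emsInstantiations :: [InstantiatedUnit] -- the "extra backpack dependencies"
  }
```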
------
There is one code quality, and possible correctness thing left: In
addition to `implicitRequirements` there is `findExtraSigImports`, which
says something like "if you are an instantiation argument (you are
substituted or a signature), you need to import its things too". This
is a little non-local so I am not quite sure how to get rid of it in
`GHC.Driver.Make`, but we probably should eventually.
First though, let's try to make a test case that observes that we don't
do this, lest it actually be unneeded. Until then, I'm happy to leave it
as is.
------
Besides the ability to use `-j`, the other major user-visible side
effect of this change is that the --make progress log now includes
"Instantiating" messages for these new nodes. These are also numbered
like module nodes and count towards the total.
------
Fixes #17188
Updates hackage submodule
Metric Increase:
T12425
T13035
|
I was working on making DynFlags stateless (#17957), especially by
storing loaded plugins into HscEnv instead of DynFlags. It turned out to
be complicated because HscEnv is in GHC.Driver.Types but LoadedPlugin
isn't: it is in GHC.Driver.Plugins which depends on GHC.Driver.Types. I
didn't feel like introducing yet another hs-boot file to break the loop.
Additionally I remember that while we introduced the module hierarchy
(#13009) we talked about splitting GHC.Driver.Types because it contained
various unrelated types and functions, but we never executed. I didn't
feel like making GHC.Driver.Types bigger with more unrelated Plugins
related types, so finally I bit the bullet and split GHC.Driver.Types.
As a consequence this patch moves a lot of things. I've tried to put
them into appropriate modules but nothing is set in stone.
Several other things moved to avoid loops.
* Removed Binary instances from GHC.Utils.Binary for random compiler
things
* Moved Typeable Binary instances into GHC.Utils.Binary.Typeable: they
import a lot of things that users of GHC.Utils.Binary don't want to
depend on.
* Put everything related to Units/Modules under GHC.Unit:
GHC.Unit.Finder, GHC.Unit.Module.{ModGuts,ModIface,Deps,etc.}
* Created several modules under GHC.Types: GHC.Types.Fixity, SourceText,
etc.
* Split GHC.Utils.Error (into GHC.Types.Error)
* Finally removed GHC.Driver.Types
Note that this patch doesn't put loaded plugins into HscEnv. It's left
for another patch.
Bump haddock submodule
|