author     Matthew Pickering <matthewtpickering@gmail.com>   2021-07-20 11:49:22 +0200
committer  Matthew Pickering <matthewtpickering@gmail.com>   2021-12-28 09:47:53 +0000
commit     fd42ab5fa1df847a6b595dfe4b63d9c7eecbf400 (patch)
tree       3bd7add640ee4e1340de079a16a05fd34548925f /testsuite/tests/perf
parent     3219610e3ba6cb6a5cd1f4e32e2b4befea5bd384 (diff)
download   haskell-fd42ab5fa1df847a6b595dfe4b63d9c7eecbf400.tar.gz
Multiple Home Units
Multiple home units allow you to load several packages, which may depend on
each other, into one GHC session. This will allow both GHCi and HLS to support
multi-component projects more naturally.
Public Interface
~~~~~~~~~~~~~~~~
In order to specify multiple units, the -unit @⟨filename⟩ flag is given
multiple times, once per unit, each with a response file containing the
arguments for that unit. A response file contains a newline-separated list of arguments.
```
ghc -unit @unitLibCore -unit @unitLib
```
where the `unitLibCore` response file contains the normal arguments that cabal would pass to `--make` mode.
```
-this-unit-id lib-core-0.1.0.0
-i
-isrc
LibCore.Utils
LibCore.Types
```
The response file for lib can specify a dependency on lib-core, so that modules in lib can use modules from lib-core.
```
-this-unit-id lib-0.1.0.0
-package-id lib-core-0.1.0.0
-i
-isrc
Lib.Parse
Lib.Render
```
Then, when the compiler starts in --make mode, it will compile both units, lib and lib-core.
There is also very basic support for multiple home units in GHCi: at the
moment you can start a GHCi session with multiple units, but only the
:reload command is supported. Most commands in GHCi assume a single home
unit, so working out how to modify the interface to support multiple
loaded home units remains additional work.
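For example, a multi-unit GHCi session over the two units above can be started by reusing the same response files (bearing in mind that, as noted, only :reload is supported once the session is up):
```
ghci -unit @unitLibCore -unit @unitLib
```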
Options used when working with Multiple Home Units
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
There are a few extra flags which have been introduced specifically for
working with multiple home units. The flags allow a home unit to pretend
it is more like an installed package, for example by specifying the
package name, module visibility, and reexported modules.
-working-dir ⟨dir⟩
It is common to assume that a package is compiled in the directory
where its cabal file resides, and thus that all paths used in the
compiler are relative to this directory. When there are multiple
home units, the compiler is often not operating in that directory
but rather in the one where the cabal.project file is located. In
this case, the -working-dir option can be passed, which specifies
the path from the current directory to the directory the unit
assumes to be its root, normally the directory which contains the
cabal file.
When the flag is passed, any relative paths used by the compiler are
offset by the working directory. Notably, this includes the -i and
-I⟨dir⟩ flags.
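As an illustrative sketch, suppose the cabal.project file sits at the project root and lib-core lives in the subdirectory libs/lib-core (both paths are hypothetical); its response file could then offset its source paths like so:
```
-this-unit-id lib-core-0.1.0.0
-working-dir libs/lib-core
-i
-isrc
LibCore.Utils
LibCore.Types
```
Here -isrc is interpreted as libs/lib-core/src relative to the directory where GHC was invoked.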
-this-package-name ⟨name⟩
This flag papers over the awkward interaction between PackageImports
and multiple home units. When using PackageImports, you can specify
the name of the package in an import to disambiguate between modules
which appear in multiple packages with the same name.
This flag allows a home unit to be given a package name so that you
can also disambiguate between multiple home units which provide
modules with the same name.
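As a sketch, if the lib-core unit above is additionally given -this-package-name lib-core (an illustrative name), a module in the lib unit could disambiguate its import like so:
```
{-# LANGUAGE PackageImports #-}
module Lib.Parse where

-- Select LibCore.Types from the home unit named lib-core, even if
-- another visible package exposes a module of the same name.
import "lib-core" LibCore.Types
```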
-hidden-module ⟨module name⟩
This flag can be supplied multiple times in order to specify which
modules in a home unit should not be visible outside of that unit.
The main use of this flag is to be able to recreate the difference
between an exposed and hidden module for installed packages.
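For example, the lib-core unit could keep a hypothetical LibCore.Internal module private to itself:
```
-this-unit-id lib-core-0.1.0.0
-hidden-module LibCore.Internal
-i
-isrc
LibCore.Utils
LibCore.Types
LibCore.Internal
```
Other home units can then import LibCore.Utils and LibCore.Types, but not LibCore.Internal.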
-reexported-module ⟨module name⟩
This flag can be supplied multiple times in order to specify which
modules are not defined in a unit but should be reexported from it.
The effect is that other units will see this module as if it were
defined in this unit.
This flag makes it possible to replicate the reexported-modules
feature of Cabal packages with multiple home units.
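As a sketch, the lib unit could re-export a module it only imports from lib-core, so that units depending on lib see LibCore.Types as one of lib's own modules:
```
-this-unit-id lib-0.1.0.0
-package-id lib-core-0.1.0.0
-reexported-module LibCore.Types
-i
-isrc
Lib.Parse
Lib.Render
```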
Offsetting Paths in Template Haskell splices
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
When using Template Haskell to embed files into your program,
traditionally the paths have been interpreted relative to the directory
where the .cabal file resides. This causes problems for multiple home
units as we are compiling many different libraries at once which have
.cabal files in different directories.
For this purpose, we have introduced a way to query the value of the
-working-dir flag from the Template Haskell API. Using it, we can
implement a makeRelativeToProject function which offsets a path
relative to the original project root by the value of -working-dir.
```
{-# LANGUAGE TemplateHaskell #-}
import Language.Haskell.TH.Syntax ( makeRelativeToProject )
import Data.FileEmbed ( embedFile ) -- embedFile comes from the file-embed package
foo = $(makeRelativeToProject "./relative/path" >>= embedFile)
```
> If you write a relative path in a Template Haskell splice, you should use the makeRelativeToProject function so that your library works correctly with multiple home units.
A similar function already exists in the file-embed library, but the
version in template-haskell is more robust: it honours the -working-dir
flag rather than searching the file system.
Closure Property for Home Units
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
For tools or libraries using the API, there is one very important
closure property which must be adhered to:
> Any dependency which is not a home unit must not (transitively) depend
on a home unit.
For example, if you have three packages p, q and r, where p depends on
q and q depends on r, then it is illegal to load both p and r as home
units but not q: q is not a home unit, yet it is a dependency of the
home unit p and it depends on the home unit r.
If you are using GHC from the command line, this property is checked
for you, but if you are using the API then you need to check it
yourself. If you get it wrong, you will probably get some very
confusing errors about overlapping instances.
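Concretely, assuming hypothetical response files unitP, unitQ and unitR for the three packages above:
```
ghc -unit @unitP -unit @unitR               # rejected: q depends on the home unit r
ghc -unit @unitP -unit @unitQ -unit @unitR  # fine: the home units are closed under dependency
```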
Limitations of Multiple Home Units
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
There are a few limitations of the initial implementation which will be smoothed out on user demand.
* Package thinning/renaming syntax is not supported.
* More complicated reexports/renamings are not yet supported.
* It's more common to run into existing linker bugs when loading a
  large number of packages in a session (for example #20674, #20689).
* Backpack is not yet supported when using multiple home units.
* Dependency chasing can be quite slow with a large number of
  modules and packages.
* Loading wired-in packages as home units is currently not supported
  (this only really affects GHC developers attempting to load
  template-haskell).
* Barely any normal GHCi features are supported; it would be good to
  support enough for ghcid to work correctly.
Despite these limitations, the implementation already works for nearly
all packages. It has been tested on large dependency closures,
including the whole of head.hackage, which is a total of 4784 modules
from 452 packages.
Internal Changes
~~~~~~~~~~~~~~~~
* The biggest change is that the HomePackageTable is replaced with the
HomeUnitGraph. The HomeUnitGraph is a map from UnitId to HomeUnitEnv,
which contains information specific to each home unit.
* The HomeUnitEnv contains (see the sketch after this list):
  - A unit state; each home unit can have different package db flags
  - A set of DynFlags; each home unit can have different flags
  - A HomePackageTable
* LinkNode: A new node type is added to the ModuleGraph. It is used to
  place the linking step into the build plan so that linking can proceed
  in parallel with other packages being built.
* New invariant: the dependencies of a ModuleGraphNode can be completely
  determined by looking at the value of the node. In order to achieve
  this, downsweep now performs a more complete job of downsweeping, and
  the dependencies are then recorded forever in the node rather than
  being computed again from the ModSummary.
* Some transitive module calculations are rewritten to use the
  ModuleGraph, which is more efficient.
* There is always an active home unit, which simplifies modifying a lot
  of the existing API code that is unit-agnostic (for example, in the
  driver).
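To give a rough idea of the new shape, here is a conceptual sketch in Haskell; the names mirror the GHC API but the definitions are illustrative, and the real types carry more fields and detail:
```
module HomeUnitGraphSketch where

import Data.Map (Map)

-- Stand-ins for the real GHC API types of the same names.
data UnitId
data UnitState
data DynFlags
data HomePackageTable

-- One environment per home unit, keyed by unit id.
type HomeUnitGraph = Map UnitId HomeUnitEnv

data HomeUnitEnv = HomeUnitEnv
  { homeUnitEnv_units  :: UnitState        -- per-unit package db state
  , homeUnitEnv_dflags :: DynFlags         -- per-unit flags
  , homeUnitEnv_hpt    :: HomePackageTable -- modules compiled in this unit
  }
```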
The road may be bumpy for a little while after this change, but the
basics are well-tested.
There is one small metric increase, which we accept, and also a
submodule update to haddock which removes ExtendedModSummary.
Closes #10827
-------------------------
Metric Increase:
MultiLayerModules
-------------------------
Co-authored-by: Fendor <power.walross@gmail.com>
Diffstat (limited to 'testsuite/tests/perf')
-rw-r--r--  testsuite/tests/perf/compiler/Makefile                            | 12
-rw-r--r--  testsuite/tests/perf/compiler/MultiLayerModulesTH_Make.stderr     |  8
-rw-r--r--  testsuite/tests/perf/compiler/MultiLayerModulesTH_OneShot.stderr  |  8
-rw-r--r--  testsuite/tests/perf/compiler/all.T                               | 41
-rwxr-xr-x  testsuite/tests/perf/compiler/genMultiComp.py                     | 78
-rwxr-xr-x  testsuite/tests/perf/compiler/genMultiLayerModulesTH              | 47
6 files changed, 194 insertions, 0 deletions
```
diff --git a/testsuite/tests/perf/compiler/Makefile b/testsuite/tests/perf/compiler/Makefile
index 20f5704450..0011c70710 100644
--- a/testsuite/tests/perf/compiler/Makefile
+++ b/testsuite/tests/perf/compiler/Makefile
@@ -16,3 +16,15 @@ T11068:
 MultiModulesRecomp:
 	./genMultiLayerModules
 	'$(TEST_HC)' $(TEST_HC_OPTS) -v0 MultiLayerModules.hs
+
+MultiComponentModulesRecomp:
+	'$(PYTHON)' genMultiComp.py
+	TEST_HC='$(TEST_HC)' TEST_HC_OPTS='$(TEST_HC_OPTS)' ./run
+
+MultiLayerModulesTH_Make_Prep:
+	./genMultiLayerModulesTH
+	"$(TEST_HC)" $(TEST_HC_OPTS) MultiLayerModulesPrep -dynamic-too -v0
+
+MultiLayerModulesTH_OneShot_Prep: MultiLayerModulesTH_Make_Prep
+	$(CP) MultiLayerModules.hs MultiLayerModulesTH_OneShot.hs
+
diff --git a/testsuite/tests/perf/compiler/MultiLayerModulesTH_Make.stderr b/testsuite/tests/perf/compiler/MultiLayerModulesTH_Make.stderr
new file mode 100644
index 0000000000..4a1b876638
--- /dev/null
+++ b/testsuite/tests/perf/compiler/MultiLayerModulesTH_Make.stderr
@@ -0,0 +1,8 @@
+
+MultiLayerModules.hs:334:8: error:
+    • Exception when trying to run compile-time code:
+        deliberate error
+      CallStack (from HasCallStack):
+        error, called at MultiLayerModules.hs:334:10 in main:MultiLayerModules
+      Code: (error "deliberate error")
+    • In the untyped splice: $(error "deliberate error")
diff --git a/testsuite/tests/perf/compiler/MultiLayerModulesTH_OneShot.stderr b/testsuite/tests/perf/compiler/MultiLayerModulesTH_OneShot.stderr
new file mode 100644
index 0000000000..a958aceeea
--- /dev/null
+++ b/testsuite/tests/perf/compiler/MultiLayerModulesTH_OneShot.stderr
@@ -0,0 +1,8 @@
+
+MultiLayerModulesTH_OneShot.hs:334:8: error:
+    • Exception when trying to run compile-time code:
+        deliberate error
+      CallStack (from HasCallStack):
+        error, called at MultiLayerModulesTH_OneShot.hs:334:10 in main:MultiLayerModules
+      Code: (error "deliberate error")
+    • In the untyped splice: $(error "deliberate error")
diff --git a/testsuite/tests/perf/compiler/all.T b/testsuite/tests/perf/compiler/all.T
index 2f52209d06..25672bf7e7 100644
--- a/testsuite/tests/perf/compiler/all.T
+++ b/testsuite/tests/perf/compiler/all.T
@@ -293,6 +293,29 @@ test('MultiLayerModulesRecomp',
      multimod_compile,
      ['MultiLayerModules', '-v0'])
 
+
+# A performance test for calculating link dependencies in --make mode.
+test('MultiLayerModulesTH_Make',
+     [ collect_compiler_stats('bytes allocated',3),
+       pre_cmd('$MAKE -s --no-print-directory MultiLayerModulesTH_Make_Prep'),
+       extra_files(['genMultiLayerModulesTH']),
+       unless(have_dynamic(),skip),
+       compile_timeout_multiplier(5)
+     ],
+     multimod_compile_fail,
+     ['MultiLayerModules', '-v0'])
+
+# A performance test for calculating link dependencies in -c mode.
+test('MultiLayerModulesTH_OneShot',
+     [ collect_compiler_stats('bytes allocated',3),
+       pre_cmd('$MAKE -s --no-print-directory MultiLayerModulesTH_OneShot_Prep'),
+       extra_files(['genMultiLayerModulesTH']),
+       unless(have_dynamic(),skip),
+       compile_timeout_multiplier(5)
+     ],
+     compile_fail,
+     ['-v0'])
+
 test('MultiLayerModulesDefsGhci',
      [ collect_compiler_residency(15),
       pre_cmd('./genMultiLayerModulesDefs'),
@@ -319,6 +342,24 @@ test('MultiLayerModulesNoCode',
      ghci_script,
      ['MultiLayerModulesNoCode.script'])
 
+test('MultiComponentModulesRecomp',
+     [ collect_compiler_stats('bytes allocated', 2),
+       pre_cmd('$MAKE -s --no-print-directory MultiComponentModulesRecomp'),
+       extra_files(['genMultiComp.py']),
+       compile_timeout_multiplier(5)
+     ],
+     multiunit_compile,
+     [['unitp%d' % n for n in range(20)], '-fno-code -fwrite-interface -v0'])
+
+test('MultiComponentModules',
+     [ collect_compiler_stats('bytes allocated', 2),
+       pre_cmd('$PYTHON ./genMultiComp.py'),
+       extra_files(['genMultiComp.py']),
+       compile_timeout_multiplier(5)
+     ],
+     multiunit_compile,
+     [['unitp%d' % n for n in range(20)], '-fno-code -fwrite-interface -v0'])
+
 test('ManyConstructors',
      [ collect_compiler_stats('bytes allocated',2),
       pre_cmd('./genManyConstructors'),
diff --git a/testsuite/tests/perf/compiler/genMultiComp.py b/testsuite/tests/perf/compiler/genMultiComp.py
new file mode 100755
index 0000000000..d069f77959
--- /dev/null
+++ b/testsuite/tests/perf/compiler/genMultiComp.py
@@ -0,0 +1,78 @@
+#! /usr/bin/env python
+
+# Generates a set of interdependent units for testing any obvious performance cliffs
+# with multiple component support.
+# The structure of each unit is:
+# * A Top module, which imports the rest of the modules in the unit
+# * A number of modules named Mod_<pid>_<mid>; each module imports all the top
+#   modules beneath it, and all the modules in the current unit beneath it.
+
+import os
+import stat
+
+modules_per = 20
+packages = 20
+total = modules_per * packages
+
+def unit_dir(p):
+    return "p" + str(p)
+
+def unit_fname(p):
+    return "unitp" + str(p)
+
+def top_fname(p):
+    return "Top" + str(p)
+
+def mod_name(p, k):
+    return "Mod_%d_%d" % (p, k)
+
+def flatten(t):
+    return [item for sublist in t for item in sublist]
+
+def mk_unit_file(p):
+    fname = top_fname(p)
+    deps = flatten([["-package-id", unit_dir(k)] for k in range(p)])
+    opts = ["-working-dir", unit_dir(p), "-this-unit-id", unit_dir(p), fname] + deps
+    with open(unit_fname(p), 'w') as fout:
+        fout.write(' '.join(opts))
+
+def mk_top_mod(p):
+    pdir = unit_dir(p)
+    topfname = os.path.join(pdir, top_fname(p) + '.hs')
+    header = 'module %s where' % top_fname(p)
+    imports = ['import %s' % mod_name(p, m) for m in range(modules_per)]
+    with open(topfname, 'w') as fout:
+        fout.write(header + '\n')
+        fout.write('\n'.join(imports))
+
+def mk_mod(p, k):
+    pdir = unit_dir(p)
+    fname = os.path.join(pdir, mod_name(p, k) + '.hs')
+    header = 'module %s where' % mod_name(p,k)
+    imports1 = ['import ' + top_fname(pn) for pn in range(p)]
+    imports2 = ['import ' + mod_name(p, kn) for kn in range(k)]
+    with open(fname, 'w') as fout:
+        fout.write(header + '\n')
+        fout.write('\n'.join(imports1))
+        fout.write('\n')
+        fout.write('\n'.join(imports2))
+
+def mk_run():
+    all_units = flatten([['-unit', '@'+unit_fname(pn)] for pn in range(packages)])
+    with open('run', 'w') as fout:
+        fout.write("$TEST_HC $TEST_HC_OPTS -fno-code -fwrite-interface ")
+        fout.write(" ".join(all_units))
+
+    st = os.stat('run')
+    os.chmod('run', st.st_mode | stat.S_IEXEC)
+
+
+for p in range(packages):
+    os.mkdir(unit_dir(p))
+    mk_unit_file(p)
+    mk_top_mod(p)
+    for k in range(modules_per):
+        mk_mod(p, k)
+mk_run()
+
+
diff --git a/testsuite/tests/perf/compiler/genMultiLayerModulesTH b/testsuite/tests/perf/compiler/genMultiLayerModulesTH
new file mode 100755
index 0000000000..2781871fa6
--- /dev/null
+++ b/testsuite/tests/perf/compiler/genMultiLayerModulesTH
@@ -0,0 +1,47 @@
+#!/usr/bin/env bash
+# Generate $DEPTH layers of modules with $WIDTH modules on each layer
+# Every module on layer N imports all the modules on layer N-1
+# MultiLayerModulesPrep.hs imports all the modules from the last layer and is used to
+# prepare all dependencies.
+# MultiLayerModules.hs imports all the modules from the last layer, and has NDEFS*WIDTH
+# top-level splices which stress some inefficient parts of link dependency calculation.
+# Lastly there is a splice which contains an error so that we don't benchmark code
+# generation as well.
+
+DEPTH=10
+WIDTH=30
+NDEFS=10
+for i in $(seq -w 1 $WIDTH); do
+  echo "module DummyLevel0M$i where" > DummyLevel0M$i.hs;
+done
+for l in $(seq 1 $DEPTH); do
+  for i in $(seq -w 1 $WIDTH); do
+    echo "module DummyLevel${l}M$i where" > DummyLevel${l}M$i.hs;
+    for j in $(seq -w 1 $WIDTH); do
+      echo "import DummyLevel$((l-1))M$j" >> DummyLevel${l}M$i.hs;
+    done
+    echo "def_${l}_${i} :: Int" >> DummyLevel${l}M$i.hs;
+    echo "def_${l}_${i} = ${l} * ${i}" >> DummyLevel${l}M${i}.hs;
+  done
+done
+# Gen the prep module, which can be compiled without running any TH splices
+# but forces the rest of the project to be built.
+echo "module MultiLayerModulesPrep where" > MultiLayerModulesPrep.hs
+for j in $(seq -w 1 $WIDTH); do
+  echo "import DummyLevel${DEPTH}M$j" >> MultiLayerModulesPrep.hs;
+done
+
+echo "{-# LANGUAGE TemplateHaskell #-}" > MultiLayerModules.hs
+echo "module MultiLayerModules where" >> MultiLayerModules.hs
+echo "import Language.Haskell.TH.Syntax" >> MultiLayerModules.hs
+for j in $(seq -w 1 $WIDTH); do
+  echo "import DummyLevel${DEPTH}M$j" >> MultiLayerModules.hs;
+done
+for j in $(seq -w 1 $WIDTH); do
+  for i in $(seq -w 1 $NDEFS); do
+    echo "defth_${j}_${i} = \$(lift def_${DEPTH}_${j})" >> MultiLayerModules.hs;
+  done
+done
+# Finally, a splice with an error so we stop before doing code generation.
+echo "last = \$(error \"deliberate error\")" >> MultiLayerModules.hs
```