This requires bumping the `exceptions` and `text` submodules to bring
in commits that bump their respective upper version bounds on
`template-haskell`.
Fixes #17645. Fixes #17696.
Note that the new `text` commit includes a fair number of additions
to the Haddocks in that library. As a result, Haddock has to do more
work during the `haddock.Cabal` test case, increasing the number of
allocations it requires. Therefore,
-------------------------
Metric Increase:
haddock.Cabal
-------------------------
|
This patch:
1. Writes up a specification for how the types of top-level field
selectors should be determined in a new section of the GHC User's
Guide, and
2. Makes GHC actually implement that specification by using
`conLikeUserTyVarBinders` in `mkOneRecordSelector` to preserve the
order and specificity of type variables written by the user.
Fixes #18023.
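To illustrate the specification, here is a hypothetical example (not taken
from the patch or the User's Guide): the selector should quantify its type
variables in the order the user wrote them on the data declaration, so that
visible type application behaves predictably.
```hs
{-# LANGUAGE TypeApplications #-}

-- 'b' is written before 'a', so under the new specification the selector
-- is expected to get the type
--   getPair :: forall b a. MkPair b a -> (a, b)
data MkPair b a = MkPair { getPair :: (a, b) }

-- Type applications then instantiate 'b' first, then 'a'.
example :: (Int, Bool)
example = getPair @Bool @Int (MkPair (3, True))
```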
|
This refactoring of Lint was triggered by #17923, which is
fixed by this patch.
The main change is this. Instead of
lintType :: Type -> LintM LintedKind
we now have
lintType :: Type -> LintM LintedType
Previously, all of typeKind was effectively duplicated in lintType.
Moreover, since we have an ambient substitution, we still had to
apply the substitution here and there, sometimes more than once. It
was all very tricky, in the end, and made my head hurt.
Now, lintType returns a fully linted type, with all substitutions
performed on it. This is much simpler.
The same thing is needed for Coercions. Instead of
lintCoercion :: OutCoercion
             -> LintM (LintedKind, LintedKind,
                       LintedType, LintedType, Role)
we now have
lintCoercion :: Coercion -> LintM LintedCoercion
Much simpler! The code is shorter and less bug-prone.
There are a lot of knock-on effects. But life is now better.
Metric Decrease:
T1969
|
In GHC.Types.Id.Make we were giving a strictness signature to every data
constructor wrapper Id that we weren't looking at in demand analysis
anyway. We used to use its CPR info, but that has its own CPR signature
now.
`Note [Data-con worker strictness]` then felt very out of place, so I
moved it to GHC.Core.DataCon.
|
Previously, the `tyConUnique` record selector would unfold into a huge
case expression that would be inlined in all call sites, such as the
`INLINE`-annotated `coreView`, see #18026. `constraintKindTyConKey` only
occurs as the `Unique` of an `AlgTyCon` anyway, so we can make the code
a lot more compact, but we have to move it to GHC.Core.TyCon.
Metric Decrease:
T12150
T12234
|
Instead of using `nlHsTyVar`, which hardcodes `NotPromoted`, have
`typeToLHsType` pick between `Promoted` and `NotPromoted` by checking
if a type constructor is promoted using `isPromotedDataCon`.
Fixes #18020.
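A small illustration of where the distinction matters (hypothetical code,
not the #18020 reproducer): a `Type` can mention promoted data
constructors, and the reconstructed source syntax must mark them as
promoted.
```hs
{-# LANGUAGE DataKinds, KindSignatures #-}

import GHC.TypeLits (Symbol)

-- 'Just below is the *promoted* data constructor Just, not a type
-- constructor of the same name, so the conversion back to HsType must
-- record the promotion.
data Tagged (tag :: Maybe Symbol) a = Tagged a

type Example = Tagged ('Just "label") Int
```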
|
This changes every unused TTG extension constructor to be strict in
its field so that the pattern-match coverage checker is smart enough to
see that any such constructors are unreachable in pattern matches. This lets
us remove nearly every use of `noExtCon` in the GHC API. The only
ones we cannot remove are ones underneath uses of `ghcPass`, but that
is only because GHC 8.8's and 8.10's coverage checkers weren't smart
enough to perform this kind of reasoning. GHC HEAD's coverage
checker, on the other hand, _is_ smart enough, so we guard these uses
of `noExtCon` with CPP for now.
Bumps the `haddock` submodule.
Fixes #17992.
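The idea in miniature (an illustrative sketch, not GHC's actual TTG
definitions): a constructor with a strict field of an uninhabited type can
never be constructed, so a sufficiently clever coverage checker accepts
matches that omit it.
```hs
data NoExt                         -- uninhabited extension type

data Exp
  = Lit Int
  | Add Exp Exp
  | XExp !NoExt                    -- strict field of an empty type

eval :: Exp -> Int
eval (Lit n)   = n
eval (Add a b) = eval a + eval b
-- No XExp equation (and no noExtCon-style absurdity call) is needed:
-- the strict, uninhabited field makes XExp unreachable.
```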
|
Update Haddock submodule
|
We used to have another factor, ufKeenessFactor, which would scale the
discounts before they were subtracted from the size. This was justified
with the following comment:
-- We multiply the raw discounts (args_discount and result_discount)
-- by opt_UnfoldingKeenessFactor because the former have to do with
-- *size* whereas the discounts imply that there's some extra
-- *efficiency* to be gained (e.g. beta reductions, case reductions)
-- by inlining.
However, this is highly suspect since it means that we subtract a
*scaled* size from an absolute size, resulting in crazy (e.g. negative)
scores in some cases (#15304). We consequently killed off
ufKeenessFactor and bumped up the ufUseThreshold to compensate.
Adjustment of unfolding use threshold
=====================================
Since this removes a discount from our inlining heuristic, I revisited our
default choice of -funfolding-use-threshold to minimize the change in
overall inlining behavior. Specifically, I measured runtime allocations
and executable size of nofib and the testsuite performance tests built
using compilers (and core libraries) built with several values of
-funfolding-use-threshold.
This comes as a result of a quantitative comparison of testsuite
performance and code size as a function of ufUseThreshold, comparing
GHC trees using values of 50, 60, 70, 80, 90, and 100. The test set
consisted of nofib and the testsuite performance tests.
A full summary of these measurements can be found in the description of
!2608.
Comparing executable sizes (relative to the base commit) across all
nofib tests, we see that sizes are similar to the baseline:
thresh      gmean     min      max   median
50         -6.36%  -7.04%   -4.82%   -6.46%
60         -5.04%  -5.97%   -3.83%   -5.11%
70         -2.90%  -3.84%   -2.31%   -2.92%
80         -0.75%  -2.16%   -0.42%   -0.73%
90         +0.24%  -0.41%   +0.55%   +0.26%
100        +1.36%  +0.80%   +1.64%   +1.37%
baseline   +0.00%  +0.00%   +0.00%   +0.00%
Likewise, looking at runtime allocations we see that 80 gives slightly
better optimisation than the baseline:
thresh      gmean     min      max   median
50         +0.16%  -0.16%   +4.43%   +0.00%
60         +0.09%  -0.00%   +3.10%   +0.00%
70         +0.04%  -0.09%   +2.29%   +0.00%
80         +0.02%  -1.17%   +2.29%   +0.00%
90         -0.02%  -2.59%   +1.86%   +0.00%
100        +0.00%  -2.59%   +7.51%   -0.00%
baseline   +0.00%  +0.00%   +0.00%   +0.00%
Finally, I had to add a NOINLINE in T4306 to ensure that `upd` is
worker-wrappered as the test expects. This makes me wonder whether the
inlining heuristic is now too liberal as `upd` is quite a large
function. The same measure was taken in T12600.
Wall clock time compiling Cabal with -O0
thresh 50 60 70 80 90 100 baseline
build-Cabal 93.88 89.58 92.59 90.09 100.26 94.81 89.13
Also, this change happens to avoid the spurious test output in
`plugin-recomp-change` and `plugin-recomp-change-prof` (see #17308).
Metric Decrease:
hie002
T12234
T13035
T13719
T14683
T4801
T5631
T5642
T9020
T9872d
T9961
Metric Increase:
T12150
T12425
T13701
T14697
T15426
T1969
T3064
T5837
T6048
T9203
T9872a
T9872b
T9872c
T9872d
haddock.Cabal
haddock.base
haddock.compiler
|
This refactors DictBinds into a data type rather than a pair.
No change in behaviour, just better code
|
Issue #17151 was a very tricky example of a bug in which the
specialiser accidentally constructs a recursive dictionary,
so that everything turns into bottom.
I have fixed variants of this bug at least twice before:
see Note [Avoiding loops]. It was a bit of a struggle
to isolate the problem, greatly aided by the work that
Alexey Kuleshevich did in distilling a test case.
Once I'd understood the problem, it was not difficult to fix,
though it did lead me to do a bit of refactoring in specImports.
|
Fixes #17947
When we have a ticky label for a proc, IdLabels for the ticky counter
and proc entry share the same Name. This caused the proc's CafInfo to be
overridden with the ticky counter's CafInfo (i.e. NoCafRefs) during SRT
analysis.
We now ignore the ticky labels when building SRTMaps. This makes sense
because:
- When building the current module they don't need to be in SRTMaps as
they're initialized as non-CAFFY (see mkRednCountsLabel), so they
don't take part in the dependency analysis and they're never added to
SRTs.
(Reminder: a "dependency" in the SRT analysis is a CAFFY dependency,
non-CAFFY uses are not considered as dependencies for the algorithm)
- They don't appear in the interfaces as they're not exported, so it
doesn't matter for cross-module concerns whether they're in the SRTMap
or not.
See also the new Note [Ticky labels in SRT analysis].
|
This is necessary for certain record selectors with higher-rank
types, such as the examples in #18005. See
`Note [Impredicative record selectors]` in `TcTyDecls`.
Fixes #18005.
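For reference, the shape of selector involved looks roughly like this
(an illustrative sketch, not the exact #18005 example):
```hs
{-# LANGUAGE RankNTypes #-}

-- The field's type is itself a polytype, so the selector
--   runWith :: With r -> forall a. (r -> IO a) -> IO a
-- has a higher-rank type and cannot be produced by naive eta-expansion.
newtype With r = With { runWith :: forall a. (r -> IO a) -> IO a }

useTwice :: With FilePath -> IO ()
useTwice w = runWith w putStrLn >> runWith w (print . length)
```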
|
[ci skip]
|
This patch is joint work of Alexis King and Simon PJ. It does some
significant refactoring of the type-class specialiser. Main highlights:
* We can specialise functions with types like
f :: Eq a => a -> Ord b => b -> blah
where the classes aren't all at the front (#16473). Here we can
correctly specialise 'f' based on a call like
f @Int @Bool dEqInt x dOrdBool
This change really happened in an earlier patch
commit 2d0cf6252957b8980d89481ecd0b79891da4b14b
Author: Sandy Maguire <sandy@sandymaguire.me>
Date: Thu May 16 12:12:10 2019 -0400
This new patch builds directly on that work, and refactors
it a bit.
* We can specialise functions with implicit parameters (#17930)
g :: (?foo :: Bool, Show a) => a -> String
Previously we could not, but now they behave just like a non-class
argument, as in 'f' above (see the sketch at the end of this message).
* We can specialise under-saturated calls, where some (but not all) of
the dictionary arguments are provided (#17966). For example, we can
specialise the above 'f' based on a call
map (f @Int dEqInt) xs
even though we don't (and can't) give an Ord dictionary.
This may sound exotic, but #17966 is a program from the wild, and it
showed significant perf loss for functions like f when saturation of
all dictionaries is required.
* We fix a buglet in which a floated dictionary had a bogus demand
(#17810), by using zapIdDemandInfo in the NonRec case of specBind.
* A tiny side benefit: we can drop dead arguments to specialised
functions; see Note [Drop dead args from specialisations]
* Fixed a bug in deciding what dictionaries are "interesting"; see
Note [Keep the old dictionaries interesting]
This is all achieved by building on Sandy Maguire's work in
defining SpecArg, which mkCallUDs uses to describe the arguments of
the call. Main changes:
* The main work is in specHeader, which marches down the [InBndr] from
the function definition and the [SpecArg] from the call site, together.
* specCalls no longer has an arity check; the entire mechanism now
handles under-saturated calls fine.
* mkCallUDs decides on an argument-by-argument basis whether to
specialise a particular dictionary argument; this is new.
See mk_spec_arg in mkCallUDs.
It looks as if there are many more lines of code, but I think that
all the extra lines are comments!
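As a source-level sketch of the implicit-parameter case (hypothetical
code, not from the patch):
```hs
{-# LANGUAGE ImplicitParams #-}

g :: (?verbose :: Bool, Show a) => a -> String
g x = if ?verbose then "value: " ++ show x else show x

caller :: String
caller = let ?verbose = True in g (42 :: Int)
-- With -O, the call to 'g' can now be specialised at Int even though
-- g's context mixes an implicit parameter with a class constraint.
```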
|
In !2959 we noticed that there was some redundant code (in GHC.Cmm.Utils
and GHC.Cmm.StgToCmm.Utils) used to deal with `CmmStatics` datatype
(before SRT generation) and `RawCmmStatics` datatype (after SRT
generation).
This patch removes this redundant code by using a single GADT for
(Raw)CmmStatics.
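The pattern, as a hedged sketch with invented names (not the actual GHC
definitions): one GADT indexed by a phase replaces two nearly identical
datatypes.
```hs
{-# LANGUAGE GADTs, DataKinds, KindSignatures #-}

data Phase = PreSRT | PostSRT

newtype Label  = Label String
newtype Static = Static String

data GenStatics (p :: Phase) where
  -- Only makes sense before SRT generation.
  Statics    :: Label -> [Static] -> GenStatics 'PreSRT
  -- The raw form, usable in either phase.
  RawStatics :: Label -> [Static] -> GenStatics p
```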
|
Move handling of big literal strings from CmmToAsm to StgToCmm. It
avoids the use of `sdocWithDynFlags` (cf #10143). We might need to move
this handling even higher in the pipeline in the future (cf #17960):
this patch will make it easier.
|
We now differentiate three cases of constructor bindings:
1) Bindings which we can "replace" with a reference to an existing
   closure. We reference the replacement closure when accessing the
   binding.
2) Bindings which we can "replace" as above, but for which we still
   generate a closure which will be referenced by modules importing
   this binding.
3) For any other binding, generate a closure, then reference it.
Before this patch, 1) applied only to local bindings, and we
didn't do 2) at all.
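Roughly, at the source level (an illustrative guess at how the cases fall
out, not code from the patch):
```hs
module M (sharedExported, ordinary) where

localOnly :: Maybe Int
localOnly = Nothing        -- (1) just reference the static Nothing closure

sharedExported :: Maybe Int
sharedExported = Nothing   -- (2) reference it locally, but still emit a
                           --     closure for importing modules to use

ordinary :: Maybe Int
ordinary = Just 5          -- (3) generate a closure and reference it
```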
|
Add a flag to dump the pretty-printed contents of the .hie file.
Metric Increase:
hie002
Because of the regression on i386:
compile_time/bytes allocated increased from i386-linux-deb9 baseline @ HEAD~10:
Expected hie002 (normal) compile_time/bytes allocated: 583014888.0 +/-10%
Lower bound hie002 (normal) compile_time/bytes allocated: 524713399
Upper bound hie002 (normal) compile_time/bytes allocated: 641316377
Actual hie002 (normal) compile_time/bytes allocated: 877986292
Deviation hie002 (normal) compile_time/bytes allocated: 50.6 %
*** unexpected stat test failure for hie002(normal)
|
Two `ASSERT`s in `reifyDataCon` were always using `arg_tys`, but
`arg_tys` is not meaningful for GADT constructors. In fact, it's
worse than non-meaningful, since using `arg_tys` when reifying a
GADT constructor can lead to failed `ASSERT`ions, as #17305
demonstrates.
This patch applies the simplest possible fix to the immediate
problem. The `ASSERT`s now use `r_arg_tys` instead of `arg_tys`, as
the former makes sure to give something meaningful for GADT
constructors. This makes the panic go away at the very least. There
is still an underlying issue with the way the internals of
`reifyDataCon` work, as described in
https://gitlab.haskell.org/ghc/ghc/issues/17305#note_227023, but we
leave that as future work, since fixing the underlying issue is
much trickier (see
https://gitlab.haskell.org/ghc/ghc/issues/17305#note_227087).
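For context, this code path is exercised by Template Haskell reification
of GADT constructors, along these lines (illustrative, not the #17305
reproducer):
```hs
{-# LANGUAGE GADTs, TemplateHaskell #-}

import Language.Haskell.TH

data G a where
  MkG :: Int -> G Bool

-- Reifying a GADT constructor goes through reifyDataCon; for GADTs the
-- reported argument types must come from the GADT-style signature, not
-- the Haskell98-style 'arg_tys'.
$(do info <- reify 'MkG
     runIO (print info)
     pure [])
```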
|
Not only is this a reasonable efficiency measure but it avoids making
reentrant calls into ncurses, which is not thread-safe. See #17922.
|
Fix #13380 and #17676 by
1. Changing `raiseIO#` to have `topDiv` instead of `botDiv`
2. Give it special treatment in `Simplifier.Util.mkArgInfo`, treating it
as if it still had `botDiv`, to recover dead code elimination.
This is the first commit of the plan outlined in
https://gitlab.haskell.org/ghc/ghc/-/merge_requests/2525#note_260886.
|
The binder-swap transformation is implemented by the occurrence
analyser -- see Note [Binder swap] in OccurAnal. However it had
a very nasty corner in it, for the case where the case scrutinee
was a GlobalId. This led to trouble and hacks, and ultimately
to #16296.
This patch re-engineers how the occurrence analyser implements
the binder-swap, by actually carrying out a substitution rather
than by adding a let-binding. It's all described in
Note [The binder-swap substitution].
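As a loose, source-level sketch of the transformation itself (the real
thing operates on Core, where every case has a binder):
```hs
-- Inside a case alternative, occurrences of the scrutinised variable...
before :: (Int, Int) -> Int
before x = case x of
             (a, _) -> a + fst x        -- 'x' still occurs here

-- ...are effectively replaced by the case binder (written here as an
-- as-pattern), exposing the known constructor at the occurrence:
after :: (Int, Int) -> Int
after x = case x of
            y@(a, _) -> a + fst y
```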
I did a few other things along the way
* Fix a bug in StgCse, which could allow a loop breaker to be CSE'd
away. See Note [Care with loop breakers] in StgCse. I think it can
only show up if the occurrence analyser sets up bad loop breakers, but
still.
* Better commenting in SimplUtils.prepareAlts
* A little refactoring in CoreUnfold; nothing significant
e.g. rename CoreUnfold.mkTopUnfolding to mkFinalUnfolding
* Renamed CoreSyn.isFragileUnfolding to hasCoreUnfolding
* Move mkRuleInfo to CoreFVs
We observed respectively 4.6% and 5.9% allocation decreases for the following
tests:
Metric Decrease:
T9961
haddock.base
|
In #17977, we ran into the reduction depth limit of the typechecker.
That was only a symptom of a much broader issue: The recursion depth
of the coverage checker for trying to instantiate strict fields in the
`nonVoid` test was far too high (100, the `defaultMaxTcBound`).
As a result, we were performing quite poorly on `T17977`.
Short of a proper termination analysis to prove emptiness of a type,
we just arbitrarily default to a much lower recursion limit of 3.
Fixes #17977.
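For intuition, an illustrative type (not the #17977 test case): deciding
whether a strict field can be instantiated at all may recurse through
further strict fields, which is why a small fixed depth limit suffices in
practice.
```hs
-- Asking whether MkTower is a possible match requires asking whether its
-- strict field is inhabited, which poses the same question again.
data Tower = MkTower !Tower
```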
|
Metric Decrease:
T13035
T1969
|
Before, there were two distinct Notes named
"Eta reduction for data families". This renames one of them to
"Implementing eta reduction for data families" to disambiguate the
two and fixes references in other parts of the codebase to ensure
that they are pointing to the right place.
Fixes #17313.
[ci skip]
|
The combinator andM is used only once, and the code is shorter and
simpler if you inline it.
|
This allows us to remove several bits of CPP that are either always
true or no longer reachable. As an added bonus, we no longer need to
worry about importing `Control.Monad.Fail.fail` qualified to avoid
clashing with `Control.Monad.fail`, since the latter is now the same
as the former.
|
- Simplify mkBuildExpr: the function newTyVars was called
only on a one-element list.
- TTG: use noExtCon in more places. This is more future-proof.
- In zonkExpr, panic instead of printing a warning.
|
Within `checkValidDataCon`, we used to run `checkValidType` on the
argument types of a newtype constructor before running
`checkNewDataCon`, which ensures that the user does not attempt
non-sensical things such as newtypes with multiple arguments or
constraints. This works out in most situations, but this falls over
on a corner case revealed in #17955:
```hs
newtype T = Coercible () T => T ()
```
`checkValidType`, among other things, performs an ambiguity check on
the context of a data constructor, and that in turn invokes the
constraint solver. It turns out that there is a special case in the
constraint solver for representational equalities (read: `Coercible`
constraints) that causes newtypes to be unwrapped (see
`Note [Unwrap newtypes first]` in `TcCanonical`). This special case
does not know how to cope with an ill-formed newtype like `T`, so
it ends up panicking.
The solution is surprisingly simple: just invoke `checkNewDataCon`
before `checkValidType` to ensure that the illicit newtype
constructor context is detected before the constraint solver can
run amok with it.
Fixes #17955.
|
As far as GHC is concerned, installed package components ("units") are
identified by an opaque ComponentId string provided by Cabal. But we
don't want to display it to users (as it contains a hash) so GHC queries
the database to retrieve some information about the original source
package (name, version, component name).
This patch caches this information in the ComponentId itself so that we
don't need to provide DynFlags (which contains installed package
information)
to print a ComponentId.
In the future we want GHC to support several independent package states
(e.g. for plugins and for target code), hence we need to avoid
implicitly querying a single global package state.
|
Ticket #17932 showed that we were using a stupid demand for the RHS
of a let-binding, when the result is a product. This was the result
of a "fix" in 2013, which (happily) turns out to no longer be
necessary.
So I just deleted the code, which simplifies the demand analyser,
and fixes #17932. That in turn uncovered that the anticipation
of worker/wrapper in CPR analysis was inaccurate, hence the logic
that decides whether to unbox an argument in WW was extracted into
a function `wantToUnbox`, now consulted by CPR analysis.
I tried nofib, and got 0.0% perf changes.
All this came up when messing about with !2873 (ticket #17917),
but is independent of it.
Unfortunately, this patch regresses #4267, and I realised that it is now
blocked on #16335.
|
This module isn't used anywhere in GHC.
|
Update Haddock submodule
Metric Increase:
haddock.compiler
|
Summary:
- There is no more use of the TABLES_NEXT_TO_CODE CPP macro in
`compiler/`. GHCI_TABLES_NEXT_TO_CODE is also removed entirely.
The field within `PlatformMisc` within `DynFlags` is used instead.
- The field is still not exposed as a CLI flag. We might consider some
way to ensure the right RTS / libraries are used before doing that.
Original reviewers:
Original subscribers: TerrorJack, rwbarton, carter
Original Differential Revision: https://phabricator.haskell.org/D5082
|
Use Platform instead of DynFlags when possible:
* `tARGET_MIN_INT` et al. replaced with `platformMinInt` et al.
* no more DynFlags in PreRules: added a new `RuleOpts` datatype
* don't use `wORD_SIZE` in the compiler
* make `wordAlignment` use `Platform`
* make `dOUBLE_SIZE` a constant
Metric Decrease:
T13035
T1969
|
They seem to be a benchmarking vestige of the Cardinality paper and
probably shouldn't have been merged to HEAD in the first place.
|
- Provide the export list of the `Main` module as a parameter to the
`compiler/typecheck/TcRnDriver.hs:check_main` function.
- Instead of `lookupOccRn_maybe` call the function `lookupInfoOccRn`.
It returns the list `mains_all` of all the main functions in scope.
- Select from this list `mains_all` all `main` functions that are in
the export list of the `Main` module.
- If this new list contains exactly one single `main` function, then
typechecking continues.
- Otherwise issue an appropriate error message.
|
A previous fix for #15344 made sure that monadic 'fail' is used properly
when translating ApplicativeDo. However, it didn't properly account
for when a 'fail' will be inserted, which resulted in some programs
failing with a type error.
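A minimal sketch of the kind of code affected (hypothetical, not the
original reproducer): a failable pattern in an ApplicativeDo block forces
a desugaring that uses monadic 'fail'.
```hs
{-# LANGUAGE ApplicativeDo #-}

pairUp :: [Maybe Int] -> [Int] -> [Int]
pairUp mxs ys = do
  Just x <- mxs      -- failable pattern: the desugaring must insert 'fail'
  y <- ys
  pure (x + y)
```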
|
This typo caused generating 'end' events without the corresponding 'begin' events.
|
Key changes:
* Adds a new rule for forall-coercions over coercion variables, which
was implemented but conspicuously missing from the spec.
* Adds treatment for FunCo.
* Adds treatment for ForAllTy over coercion variables.
* Improves commentary (including restoring a Note lost in
03d4852658e1b7407abb4da84b1b03bfa6f6db3b) in the source.
No changes to running code.
|
Previously, if we had a [W] (a :: k1) ~ (rhs :: k2), we would
spit out a [D] k1 ~ k2 and park the W as irreducible, hoping for
a unification. But we needn't do this. Instead, we now spit out
a [W] co :: k2 ~ k1 and then use co to cast the rhs of the original
Wanted. This means that we retain the connection between the
spat-out constraint and the original.
The problem with this new approach is that we cannot use the
casted equality for substitution; it's too much like
wanteds-rewriting-wanteds. So, we forbid CTyEqCans that mention coercion
holes.
All the details are in Note [Equalities with incompatible kinds]
in TcCanonical.
There are a few knock-on effects, documented where they occur.
While debugging an error in this patch, Simon and I ran into
infelicities in how patterns and matches are printed; we made
small improvements.
This patch includes mitigations for #17828, which causes spurious
pattern-match warnings. When #17828 is fixed, these lines should
be removed.
|
This read causes NULL dereferencing when len is 0.
Fixes #17909
In the reproducer in #17909 this bug is triggered as follows:
- SimplOpt.dealWithStringLiteral is called with a single-char string
("=" in #17909)
- tailFS gets called on the FastString of the single-char string.
- tailFS checks the length of the string, which is 1, and calls
mkFastStringByteString on the tail of the ByteString, which is an
empty ByteString as the original ByteString has only one char.
- ByteString's unsafeUseAsCStringLen returns (NULL, 0) for the empty
ByteString, which is passed to mkFastStringWith.
- mkFastStringWith gets hash of the NULL pointer via hashStr, which
fails on empty strings because of this bug.
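A minimal sketch of the invariant being restored (not GHC's actual
hashStr, which differs in details): a (pointer, length) hash must never
dereference the pointer when the length is 0.
```hs
{-# LANGUAGE BangPatterns #-}

import Data.Word (Word8)
import Foreign.Ptr (Ptr)
import Foreign.Storable (peekElemOff)

hashPtrLen :: Ptr Word8 -> Int -> IO Int
hashPtrLen _   0   = pure 0       -- empty string: never touch the pointer
hashPtrLen ptr len = go 0 0
  where
    go :: Int -> Int -> IO Int
    go !h !i
      | i >= len  = pure h
      | otherwise = do
          w <- peekElemOff ptr i
          go (h * 31 + fromIntegral w) (i + 1)
```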
|
Metric Decrease:
ManyConstructors
T12707
T13035
T1969
|
In #17911, Simon recognised many warnings stemming from over-long list
unions while coverage checking Cabal's `LicenseId` module.
This patch introduces a new `PmAltConSet` type which uses a `UniqDSet`
instead of an association list for `ConLike`s. For `PmLit`s, it will
still use an association list, though, because a similar map data
structure would entail a lot of busy work.
Fixes #17911.