It turned out that there were two bugs. First, we were getting an
exponential number of specialisations when we had a deep nest of
join points. See Note [Avoiding exponential blowup]. I fixed this
by dividing sc_count (in ScEnv) by the number of specialisations
when recursing. Crude but effective.
Second, when making specialisations I was looking at the result of
applying specExpr to the RHS of the function, whereas I should have
been looking at the original RHS. See Note [Specialise original
body].
There's a tantalising missed opportunity here, though. In this
example (recorded as a test simplCore/should_compile/T3831), each join
point has *exactly one* call pattern, so we should really just
specialise for that alone, in which case there's zero code-blow-up.
In particular, we don't need the *original* RHS at all. I need to think
more about how to exploit this.
But the blowup is now limited, so compiling terminfo with -O2 works again.
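The shape in question can be sketched at source level. The following is a hypothetical reconstruction (not the actual T3831 code): a nest of local functions that GHC's optimiser turns into join points, each called with exactly one constructor pattern. Naively, SpecConstr specialises every level for the call patterns of the level below, so the number of specialisations can grow exponentially with the depth of the nest.

```haskell
-- Hypothetical sketch of a deep nest of join points; names are
-- illustrative.  Each ji is called in exactly one place with a
-- (Just _) argument, so one specialisation per join point would
-- suffice -- the "missed opportunity" noted above.
nest :: Int -> Int
nest x =
  let j3 m = case m of
               Just c  -> c + 1
               Nothing -> 0
      j2 m = case m of
               Just b  -> j3 (Just (b * 2))  -- the only call to j3: pattern (Just _)
               Nothing -> 0
      j1 m = case m of
               Just a  -> j2 (Just (a - 1))  -- the only call to j2: pattern (Just _)
               Nothing -> 0
  in j1 (Just x)                             -- the only call to j1: pattern (Just _)
```

With the fix described above, sc_count is divided by the number of specialisations on each recursive step, so the total work stays bounded even for deep nests like this.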
----
This was a lot easier than I imagined.
----
In #2797, a program that ran in constant stack space when compiled
needed linear stack space when interpreted. It turned out to be
nothing more than stack-squeezing not happening. We have a heuristic
to avoid stack-squeezing when it would be too expensive (shuffling a
large amount of memory to save a few words), but in some cases even
expensive stack-squeezing is necessary to avoid linear stack usage.
One day we should implement stack chunks, which would make this less
expensive.
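The trade-off behind the heuristic can be sketched as follows. This is an illustrative Haskell rendering only; the real logic is C code in the RTS, and the names and the cost factor below are inventions for exposition. Squeezing collapses adjacent update frames, but the live stack above them must then be shuffled down, so it is only a clear win when the words saved justify the copying.

```haskell
-- Illustrative sketch of the squeeze/no-squeeze decision; the constant
-- is made up and is not the RTS's actual threshold.
squeezeIsWorthIt
  :: Int   -- stack words freed by collapsing the frames
  -> Int   -- stack words that must be shuffled down afterwards
  -> Bool
squeezeIsWorthIt wordsFreed wordsToShuffle =
  wordsToShuffle <= wordsFreed * shuffleCostFactor
  where
    shuffleCostFactor = 3  -- illustrative threshold only
```

The #2797 case fell on the "not worth it" side of a heuristic like this, even though squeezing was required to keep stack usage constant.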
----
After a bound thread has completed, its TSO remains in the heap until
it has been GC'd, although the associated Task is returned to the
caller, where it is freed and possibly re-used.
The bug was that GC was following the pointer to the Task and updating
the TSO field, meanwhile the Task had already been recycled (it was
being used by exitScheduler()). Confusion ensued, leading to a very
occasional deadlock at shutdown, but in principle it could result in
other crashes too.
The fix is to remove the link between the TSO and the Task when the
TSO has completed and the call to schedule() has returned; see
comments in Schedule.c.
----
This helps when the thread holding the lock has been descheduled,
which is the main cause of the "last-core slowdown" problem. With
this patch, I get much better results with -N8 on an 8-core box,
although some benchmarks are still worse than with 7 cores.
I also added a yieldThread() into the any_work() loop of the parallel
GC when it has no work to do. Oddly, this seems to improve performance
on the parallel GC benchmarks even when all the cores are busy.
Perhaps it is due to reducing contention on the memory bus.
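The spin-then-yield idea can be sketched in Haskell (the real code is the C spinlock in the RTS; the lock representation and spin budget here are illustrative): spin briefly, and if the lock is still held, likely because the holder has been descheduled, give up the CPU with yield instead of burning it.

```haskell
import Control.Concurrent (yield)
import Data.IORef

-- A toy spinlock: True means locked.  Illustrative only.
newtype SpinLock = SpinLock (IORef Bool)

newSpinLock :: IO SpinLock
newSpinLock = SpinLock <$> newIORef False

-- Atomically take the lock; returns True if we got it.
tryAcquire :: SpinLock -> IO Bool
tryAcquire (SpinLock r) =
  atomicModifyIORef' r (\locked -> (True, not locked))

acquire :: SpinLock -> IO ()
acquire l = go spinBudget
  where
    spinBudget = 1000 :: Int       -- illustrative; the RTS tunes its own count
    go 0 = yield >> go spinBudget  -- holder is probably descheduled: yield, retry
    go n = do
      got <- tryAcquire l
      if got then pure () else go (n - 1)

release :: SpinLock -> IO ()
release (SpinLock r) = atomicWriteIORef r False
```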
----
A recent patch ("Refactor CoreArity a bit") changed the arity of
GHC.Conc.runSparks such that it became a CAF, and the RTS was not
explicitly retaining it, which led to a crash when the CAF got GC'd.
While fixing this I found a couple of other closures that the RTS
refers to which weren't getting the correct CAF treatment.
----
In 6.14.1 we'll switch these primops to return the exact byte size,
but for 6.12.2 we need to fix the docs.
----
It looks like it was only needed on OSX, but it has a prototype in
assert.h which now gets #included.
----
Remove a prototype of a function that wasn't defined
----
This patch does not apply to Windows. It only applies to systems with
ELF binaries.
This is a patch to rts/Linker.c to recognize linker scripts in .so
files and find the real target .so shared library for loading.
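The idea can be sketched as follows. This is a hedged Haskell rendering (the real fix is C code in rts/Linker.c, and the parsing below is far cruder than ld's grammar): some ".so" files, e.g. glibc's libc.so, are not ELF objects but text linker scripts such as "GROUP ( /lib/libc.so.6 AS_NEEDED ( ... ) )". If the magic bytes are not ELF, pull the first shared-library path named in the script and load that instead.

```haskell
import Data.List (isInfixOf, isPrefixOf)

-- ELF objects start with the magic bytes 0x7F 'E' 'L' 'F'.
isElf :: String -> Bool
isElf bytes = "\x7F\&ELF" `isPrefixOf` bytes

-- Crude extraction of the first shared-library path from a linker
-- script: tokenise on whitespace and parentheses, keep ".so" tokens.
scriptTarget :: String -> Maybe FilePath
scriptTarget script =
  case [ t | t <- tokens, ".so" `isInfixOf` t ] of
    (t:_) -> Just t
    []    -> Nothing
  where
    tokens    = words (map unParen script)
    unParen c = if c `elem` "()" then ' ' else c

-- Given the file contents and its path, decide what to actually load.
resolveShared :: String -> FilePath -> Maybe FilePath
resolveShared contents path
  | isElf contents = Just path             -- a real ELF .so: load as-is
  | otherwise      = scriptTarget contents -- a linker script: follow it
```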
----
In a GHCi stmt we don't want to report unused variables,
because we don't know the scope of the binding, eg
Prelude> x <- blah
Fixing this needed a little more info about the context of the stmt,
thus the new constructor GhciStmt in the HsStmtContext type.
----
The immediate reason for this patch is to fix #3823. This was
rather easy: all the work was being done but I was returning
type_env2 rather than type_env3.
An unused-variable warning would have shown this up, so I fixed all
the other warnings in TcRnDriver. Doing so showed up at least two
genuine lurking bugs. Hurrah.
----
Patch contributed by asuffield@suffields.me.uk
----
We were printing the wrong value, so getting confusing messages like:
Function `$wa{v s17LO} [lid]'
has 2 call pattterns, but the limit is 3
----
It thought the ) needed to close something, but the $( hadn't
opened anything.
----
GHC.loadModule compiles a module after it has been parsed and
typechecked explicitly. If we are compiling to object code and there is
a valid object file already on disk, then we can skip the compilation
step. This is useful in Haddock, when processing a package that uses
Template Haskell and hence needs actual compilation, and the package
has already been compiled.
As usual, the recomp avoidance can be disabled with -fforce-recomp.
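The core of the check can be sketched like this. It is a hedged sketch only: GHC's real recompilation checker is considerably more involved (it consults interface hashes, not just timestamps), and the names and plain integer timestamps below are illustrative.

```haskell
-- Illustrative decision: skip code generation when a valid, up-to-date
-- object file is already on disk, unless -fforce-recomp was given.
data RecompDecision
  = UseExistingObject          -- valid .o on disk: skip compilation
  | MustCompile String         -- reason we cannot skip
  deriving (Eq, Show)

decideRecomp
  :: Bool        -- was -fforce-recomp given?
  -> Maybe Int   -- modification time of the .o, if present (epoch seconds)
  -> Int         -- modification time of the source (epoch seconds)
  -> RecompDecision
decideRecomp forceRecomp mObjTime srcTime
  | forceRecomp                  = MustCompile "-fforce-recomp"
  | Nothing <- mObjTime          = MustCompile "no object file on disk"
  | Just objTime <- mObjTime
  , objTime < srcTime            = MustCompile "object file out of date"
  | otherwise                    = UseExistingObject
```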
----
Partly this is cleaner as we only have to preprocess the source files
once, but also it is necessary to avoid Haddock recompiling source
files when Template Haskell is in use, saving some time in validate
and fixing a problem whereby when HADDOCK_DOCS=YES, make always
re-haddocks the DPH packages. This also needs an additional fix to
GHC.
HsColour support still uses Cabal, and hence preprocesses the source
files again. We could move this into the build system too, but there
is a version dependency that would mean adding extra autoconf stuff.