Most of the other users of the fptools build system have migrated to
Cabal, and with the move to darcs we can now flatten the source tree
without losing history, so here goes.
The main change is that the ghc/ subdir is gone, and most of what it
contained is now at the top level. The build system now makes no
pretense at being multi-project, it is just the GHC build system.
No doubt this will break many things, and there will be a period of
instability while we fix the dependencies. A straightforward build
should work, but I haven't yet fixed binary/source distributions.
Changes to the Building Guide will follow, too.
-fignore-breakpoints can be used to ignore breakpoints.
This gives some control over affinity, while we figure out the best
way to automatically schedule threads to make best use of the
available parallelism.
In addition to the primitive, there is also:
GHC.Conc.forkOnIO :: Int -> IO () -> IO ThreadId
where 'forkOnIO i m' creates a thread on Capability (i `rem` N), where
N is the number of available Capabilities set by +RTS -N.
Threads forked by forkOnIO do not automatically migrate when there are
free Capabilities, as normal threads do. Still, if you're using
forkOnIO exclusively, it's a good idea to use +RTS -qm to disable work
pushing anyway (work pushing takes too much time when the run queues
are large; this is something we need to fix).
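A sketch of how this might be used, assuming the GHC.Conc exports described above (forkOnIO and numCapabilities; in later GHCs forkOnIO was renamed to Control.Concurrent.forkOn). The MVar plumbing is just illustrative scaffolding:

```haskell
import GHC.Conc (forkOnIO, numCapabilities)
import Control.Concurrent.MVar (newEmptyMVar, putMVar, takeMVar)

-- One worker per Capability: thread i is created on Capability (i `rem` N),
-- and forkOnIO threads stay where they are put.
main :: IO ()
main = do
  dones <- mapM (const newEmptyMVar) [1 .. numCapabilities]
  sequence_ [ forkOnIO i (worker i >> putMVar d ())
            | (i, d) <- zip [0 ..] dones ]
  mapM_ takeMVar dones   -- wait for every pinned worker to finish
  where
    worker i = putStrLn ("worker " ++ show i ++ " on capability "
                         ++ show (i `rem` numCapabilities))
```

Run with +RTS -N (and, per the note above, -qm if everything is forked this way).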
loaded.
This is pretty important when using the linker/bytecode-compiler from binaries
other than GHCi.
On x86_64 we are using C argument registers for global registers in
the STG machine. This is always going to be problematic when it comes
to making C calls from STG and compiling via C. Prior to GCC 4.1.0
(approx) it was possible to just assign the argument expressions to
temporaries to avoid a clash. Now, we need to add an extra dummy
function call as a barrier between the temporary assignments and the
actual call. The dummy call is removed by the mangler.
I've removed -fno-code from Main to make it work
equally well with --make and -c.
I've also allowed it not to write hi files unless
-fwrite-iface is given.
In SMP mode a THUNK can change to an IND at any time. The generic
apply code (stg_ap_p etc.) examines a closure to determine how to
apply it to its arguments, if it is a THUNK it must enter it first in
order to evaluate it. The problem was that in order to enter the
THUNK, we were re-reading the info pointer, and possibly ending up
with an IND instead of the original THUNK. It isn't safe to enter the
IND, because it points to a function (functions are never "entered",
only applied). Solution: we must not re-read the info pointer.
Contributed by Duncan Coutts, with changes to merge into the HEAD.
This isn't the full deal, ghc-pkg still modifies files only, but it's
enough to help the Gentoo folk along.
Use the lexer to parse OPTIONS, LANGUAGE and INCLUDE pragmas.
This gives us greater flexibility and far better error
messages. However, I had to introduce a few quirks:
* The token parser is written manually since Happy doesn't
like lexer errors (we need to extract options before the
buffer is passed through 'cpp'). Still better than
manually parsing a String, though.
* The StringBuffer API has been extended so files can be
read in blocks.
I also made a new field in ModSummary called ms_hspp_opts
which stores the updated DynFlags. Oh, and I took the liberty
of moving 'getImports' into HeaderInfo together with
'getOptions'.
I'm not sure what really happens here, but this is how it's
done in the old HscMain code and it appears to work.
This seems to be required now that we're stealing more registers.
MkIface.writeIfaceFile doesn't check GhcMode anymore. All it does
is what the name says: write an interface to disk.
I've refactored HscMain so the logic is easier to manage. That means
we can avoid running the simplifier when typechecking (: And best of
all, HscMain doesn't use GhcMode at all, anymore!
The new HscMain intro looks like this:
It's the task of the compilation proper to compile Haskell, hs-boot and
core files to either byte-code, hard-code (C, asm, Java, etc.) or to
nothing at all (the module is still parsed and type-checked; this
feature is mostly used by IDEs and the like).
Compilation can happen in either 'one-shot', 'batch', 'nothing',
or 'interactive' mode. 'One-shot' mode targets hard-code, 'batch' mode
targets hard-code, 'nothing' mode targets nothing and 'interactive' mode
targets byte-code.
The modes are kept separate because of their different types and meanings.
In 'one-shot' mode, we're only compiling a single file and can therefore
discard the new ModIface and ModDetails. This is also the reason it only
targets hard-code; compiling to byte-code or nothing doesn't make sense
when we discard the result.
'Batch' mode is like 'one-shot' except that we keep the resulting ModIface
and ModDetails. 'Batch' mode doesn't target byte-code, since that would
require us to return the newly compiled byte-code.
'Nothing' mode has exactly the same type as 'batch' mode, but they're still
kept separate. This is because compiling to nothing is fairly special: we
don't output any interface files, we don't run the simplifier and we don't
generate any code.
'Interactive' mode is similar to 'batch' mode except that we return the
compiled byte-code together with the ModIface and ModDetails.
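The mode-to-target mapping described above can be sketched like this (hypothetical names for illustration only; GHC's real types differ):

```haskell
-- Hypothetical sketch of the mapping described in the commit message.
data Mode   = OneShot | Batch | NothingMode | Interactive
data Target = HardCode | NoCode | ByteCode

targetOf :: Mode -> Target
targetOf OneShot     = HardCode  -- new ModIface/ModDetails discarded
targetOf Batch       = HardCode  -- ModIface/ModDetails kept
targetOf NothingMode = NoCode    -- parse and type-check only
targetOf Interactive = ByteCode  -- byte-code returned to the caller
```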
I've changed the name to 'getGhcMode'. If someone changes
it back, please write an explanation above it.
None of the new code is in use yet.
The current Haskell compiler (HscMain.hscMain) isn't as typed
and as hack-free as we'd like. Here's a list of the things it
does wrong:
* In one shot mode, it returns the new interface as _|_,
when recompilation isn't required. It's then up to the
users of hscMain to keep their hands off the result.
* (Maybe ModIface) is passed around when it's known that it's
a Just. Hey, we've got a type system, let's use it.
* In one shot mode, the backend is returning _|_ for the
new interface. This is done to prevent space leaks since
we know that the result of a one shot compilation is never
used. Again, it's up to the users of hscMain to keep their
hands off the result.
* It is allowed to compile a hs-boot file to bytecode even
though that doesn't make sense (it always returns
Nothing::Maybe CompiledByteCode).
* Logic and grunt work are completely mixed. The frontend
and backend keep checking what kind of input they're handling.
This makes it very hard to get an idea of what the functions
actually do.
* Extra work is performed when using a null code generator.
The new code refactors out the frontends (Haskell, Core), the
backends (Haskell, boot) and the code generators (one-shot, make,
nothing, interactive) and allows them to be combined in type-safe ways.
A one-shot compilation doesn't return new interfaces at all so we
don't need the _|_ space-leak hack. In 'make' mode (when not
targeting bytecode) the result doesn't contain
Nothing::Maybe CompiledByteCode. In interactive mode, the result
is always a CompiledByteCode. The code gens are completely separate
so compiling to Nothing doesn't perform any extra work.
DriverPipeline needs a bit of work before it can use the new
API.
When the set of volatile regs attached to a CmmCall is Nothing, it means
"save everything", not "save nothing".
After a long hunt I discovered that the reason that GHC.Enum.eftIntFB
was being marked as a loop-breaker was the bizarre behaviour of exprFreeVars,
which returned not only the free variables of an expression but also the
free variables of RULES attached to variables occurring in the expression!
This was clearly deliberate (the comment was CoreFVs rev 1.1 in 1999) but
I've removed it; I've left the comment with further notes in case there
turns out to be a Deep Reason.
This turned out to be a lot easier than I thought. Just moving a few
bits of -split-objs support from the build system into the compiler
was enough. The only thing that Cabal needs to do in order to support
-split-objs now is to pass the names of the split objects rather than
the monolithic ones to 'ar'.