GHC Language Features
As with all known Haskell systems, GHC implements some extensions to
the language. They are all enabled by options; by default GHC
understands only plain Haskell 98.
Some of the Glasgow extensions serve to give you access to the
underlying facilities with which we implement Haskell. Thus, you can
get at the Raw Iron, if you are willing to write some non-portable
code at a more primitive level. You need not be “stuck”
on performance because of the implementation costs of Haskell's
“high-level” features—you can always code
“under” them. In an extreme case, you can write all your
time-critical code in C, and then just glue it together with Haskell!
Before you get too carried away working at the lowest level (e.g.,
sloshing MutableByteArray#s around your
program), you may wish to check if there are libraries that provide a
“Haskellised veneer” over the features you want. The
separate libraries
documentation describes all the libraries that come with GHC.
Language options

The language option flags control which variations of the language are
permitted. Leaving out all of them gives you standard Haskell
98.

Generally speaking, all the language options are introduced by "-X",
e.g. -XTemplateHaskell.
All the language options can be turned off by using the prefix "No";
e.g. "-XNoTemplateHaskell". Language options recognised by Cabal can also be enabled using the LANGUAGE pragma,
thus {-# LANGUAGE TemplateHaskell #-} (see the section on the LANGUAGE pragma). The flag -fglasgow-exts
is equivalent to enabling the following extensions:
-XForeignFunctionInterface,
-XUnliftedFFITypes,
-XGADTs,
-XImplicitParams,
-XScopedTypeVariables,
-XUnboxedTuples,
-XTypeSynonymInstances,
-XStandaloneDeriving,
-XDeriveDataTypeable,
-XFlexibleContexts,
-XFlexibleInstances,
-XConstrainedClassMethods,
-XMultiParamTypeClasses,
-XFunctionalDependencies,
-XMagicHash,
-XPolymorphicComponents,
-XExistentialQuantification,
-XUnicodeSyntax,
-XPostfixOperators,
-XPatternGuards,
-XLiberalTypeSynonyms,
-XRankNTypes,
-XImpredicativeTypes,
-XTypeOperators,
-XRecursiveDo,
-XParallelListComp,
-XEmptyDataDecls,
-XKindSignatures,
-XGeneralizedNewtypeDeriving,
-XTypeFamilies.
Enabling these options is the only
effect of -fglasgow-exts.
We are trying to move away from this portmanteau flag,
and towards enabling features individually.

Unboxed types and primitive operations

GHC is built on a raft of primitive data types and operations;
"primitive" in the sense that they cannot be defined in Haskell itself.
While you really can use this stuff to write fast code,
we generally find it a lot less painful, and more satisfying in the
long run, to use higher-level language features and libraries. With
any luck, the code you write will be optimised to the efficient
unboxed version in any case. And if it isn't, we'd like to know
about it.

All these primitive data types and operations are exported by the
library GHC.Prim, for which there is
detailed online documentation.
(This documentation is generated from the file compiler/prelude/primops.txt.pp.)
If you want to mention any of the primitive data types or operations in your
program, you must first import GHC.Prim to bring them
into scope. Many of them have names ending in "#", and to mention such
names you need the -XMagicHash extension (see the section on the magic hash below).
The primops make extensive use of unboxed types
and unboxed tuples, which
we briefly summarise here.

Unboxed types

Most types in GHC are boxed, which means
that values of that type are represented by a pointer to a heap
object. The representation of a Haskell Int, for
example, is a two-word heap object. An unboxed
type, however, is represented by the value itself, no pointers or heap
allocation are involved.
Unboxed types correspond to the “raw machine” types you
would use in C: Int# (long int),
Double# (double), Addr#
(void *), etc. The primitive operations
(PrimOps) on these types are what you might expect; e.g.,
(+#) is addition on
Int#s, and is the machine-addition that we all
know and love—usually one instruction.
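As a small illustration, here is a minimal sketch (the function name plusInt is our own, not from any library) that unpacks ordinary Ints, adds the underlying Int# values with (+#), and boxes the result again:

{-# LANGUAGE MagicHash #-}
import GHC.Exts (Int(I#), (+#))

-- Unbox both arguments, add the raw machine integers, and rebox the result.
plusInt :: Int -> Int -> Int
plusInt (I# x) (I# y) = I# (x +# y)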
Primitive (unboxed) types cannot be defined in Haskell, and are
therefore built into the language and compiler. Primitive types are
always unlifted; that is, a value of a primitive type cannot be
bottom. We use the convention (but it is only a convention)
that primitive types, values, and
operations have a # suffix (see the section on the magic hash).
For some primitive types we have special syntax for literals, also
described in the same section.
Primitive values are often represented by a simple bit-pattern, such
as Int#, Float#,
Double#. But this is not necessarily the case:
a primitive value might be represented by a pointer to a
heap-allocated object. Examples include
Array#, the type of primitive arrays. A
primitive array is heap-allocated because it is too big a value to fit
in a register, and would be too expensive to copy around; in a sense,
it is accidental that it is represented by a pointer. If a pointer
represents a primitive value, then it really does point to that value:
no unevaluated thunks, no indirections… nothing can be at the
other end of the pointer but the primitive value itself.
A numerically-intensive program using unboxed types can
go a lot faster than its “standard”
counterpart—we saw a threefold speedup on one example.
There are some restrictions on the use of primitive types:
The main restriction
is that you can't pass a primitive value to a polymorphic
function or store one in a polymorphic data type. This rules out
things like [Int#] (i.e. lists of primitive
integers). The reason for this restriction is that polymorphic
arguments and constructor fields are assumed to be pointers: if an
unboxed integer is stored in one of these, the garbage collector would
attempt to follow it, leading to unpredictable space leaks. Or a
seq operation on the polymorphic component may
attempt to dereference the pointer, with disastrous results. Even
worse, the unboxed value might be larger than a pointer
(Double# for instance).
You cannot define a newtype whose representation type
(the argument type of the data constructor) is an unboxed type. Thus,
this is illegal:
newtype A = MkA Int#
You cannot bind a variable with an unboxed type
in a top-level binding.
You cannot bind a variable with an unboxed type
in a recursive binding.
You may bind unboxed variables in a (non-recursive,
non-top-level) pattern binding, but any such variable causes the entire
pattern-match
to become strict. For example:
data Foo = Foo Int Int#
f x = let (Foo a b, w) = ..rhs.. in ..body..
Since b has type Int#, the entire pattern
match
is strict, and the program behaves as if you had written
data Foo = Foo Int Int#
f x = case ..rhs.. of { (Foo a b, w) -> ..body.. }
Unboxed Tuples

Unboxed tuples aren't really exported by GHC.Exts;
rather, they are enabled by the -XUnboxedTuples flag. An
unboxed tuple looks like this:
(# e_1, ..., e_n #)
where e_1..e_n are expressions of any
type (primitive or non-primitive). The type of an unboxed tuple looks
the same.
Unboxed tuples are used for functions that need to return multiple
values, but they avoid the heap allocation normally associated with
using fully-fledged tuples. When an unboxed tuple is returned, the
components are put directly into registers or on the stack; the
unboxed tuple itself does not have a composite representation. Many
of the primitive operations listed in primops.txt.pp return unboxed
tuples.
In particular, the IO and ST monads use unboxed
tuples to avoid unnecessary allocation during sequences of operations.
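For example, here is a minimal sketch (the names quotRem# and quotRemInt are our own) of a division function that returns quotient and remainder in a single unboxed tuple, using the quotInt# and remInt# primops:

{-# LANGUAGE MagicHash, UnboxedTuples #-}
import GHC.Exts

-- Return quotient and remainder at once, with no tuple allocated on the heap.
quotRem# :: Int# -> Int# -> (# Int#, Int# #)
quotRem# n d = (# n `quotInt#` d, n `remInt#` d #)

-- Consume the result by pattern matching, boxing the components.
quotRemInt :: Int -> Int -> (Int, Int)
quotRemInt (I# n) (I# d) = case quotRem# n d of
  (# q, r #) -> (I# q, I# r)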
There are some pretty stringent restrictions on the use of unboxed tuples:
Values of unboxed tuple types are subject to the same restrictions as
other unboxed types; i.e. they may not be stored in polymorphic data
structures or passed to polymorphic functions.
No variable can have an unboxed tuple type, nor may a constructor or function
argument have an unboxed tuple type. The following are all illegal:
data Foo = Foo (# Int, Int #)
f :: (# Int, Int #) -> (# Int, Int #)
f x = x
g :: (# Int, Int #) -> Int
g (# a,b #) = a
h x = let y = (# x,x #) in ...
The typical use of unboxed tuples is simply to return multiple values,
binding those multiple results with a case expression, thus:
f x y = (# x+1, y-1 #)
g x = case f x x of { (# a, b #) -> a + b }
You can have an unboxed tuple in a pattern binding, thus
f x = let (# p,q #) = h x in ..body..
If the types of p and q are not unboxed,
the resulting binding is lazy like any other Haskell pattern binding. The
above example desugars like this:
f x = let t = case h x of { (# p,q #) -> (p,q) }
          p = fst t
          q = snd t
      in ..body..
Indeed, the bindings can even be recursive.
Syntactic extensions

The magic hash

The -XMagicHash language extension allows "#" as a
postfix modifier to identifiers. Thus, "x#" is a valid variable, and "T#" is
a valid type constructor or data constructor.

The hash sign does not change semantics at all. We tend to use variable
names ending in "#" for unboxed values or types (e.g. Int#),
but there is no requirement to do so; they are just plain ordinary variables.
Nor does the extension bring anything into scope.
For example, to bring Int# into scope you must
import GHC.Prim (see the section on unboxed types and primitive operations);
the -XMagicHash extension
then allows you to refer to the Int#
that is now in scope.

The -XMagicHash extension also enables some new forms of literals:

'x'# has type Char#
"foo"# has type Addr#
3# has type Int#. In general, any Haskell 98 integer lexeme followed by a # is an Int# literal, e.g. -0x3A# as well as 32#.
3## has type Word#. In general, any non-negative Haskell 98 integer lexeme followed by ## is a Word#.
3.2# has type Float#.
3.2## has type Double#.

New qualified operator syntax

A new syntax for referencing qualified operators is
planned to be introduced by Haskell', and is enabled in GHC
with the -XNewQualifiedOperators
option. In the new syntax, the prefix form of a qualified
operator is
written module.(symbol)
(in Haskell 98 this would
be (module.symbol)),
and the infix form is
written `module.(symbol)`
(in Haskell 98 this would
be `module.symbol`).
For example:
add x y = Prelude.(+) x y
subtract y = (`Prelude.(-)` y)
The new form of qualified operators is intended to regularise
the syntax by eliminating odd cases
like Prelude... For example,
when NewQualifiedOperators is on, it is possible to
write the enumerated sequence [Monday..]
without spaces, whereas in Haskell 98 this would be a
reference to the operator ‘.’
from module Monday.

When -XNewQualifiedOperators is on, the old Haskell
98 syntax for qualified operators is not accepted, so this
option may cause existing Haskell 98 code to break.

Hierarchical Modules

GHC supports a small extension to the syntax of module
names: a module name is allowed to contain a dot
‘.’. This is also known as the
“hierarchical module namespace” extension, because
it extends the normally flat Haskell module namespace into a
more flexible hierarchy of modules.

This extension has very little impact on the language
itself; module names are always fully
qualified, so you can just think of the fully qualified module
name as the module name. In particular, this
means that the full module name must be given after the
module keyword at the beginning of the
module; for example, the module A.B.C must
begin

module A.B.C

It is a common strategy to use the as
keyword to save some typing when using qualified names with
hierarchical modules. For example:
import qualified Control.Monad.ST.Strict as ST
For details on how GHC searches for source and interface
files in the presence of hierarchical modules, see the section on the search path.

GHC comes with a large collection of libraries arranged
hierarchically; see the accompanying library
documentation. More libraries to install are available
from HackageDB.

Pattern guards

The discussion that follows is an abbreviated version of Simon Peyton Jones's original proposal. (Note that the proposal was written before pattern guards were implemented, so it refers to them as unimplemented.)
Suppose we have an abstract data type of finite maps, with a
lookup operation:
lookup :: FiniteMap -> Int -> Maybe Int
The lookup returns Nothing if the supplied key is not in the domain of the mapping, and (Just v) otherwise,
where v is the value that the key maps to. Now consider the following definition:
clunky env var1 var2 | ok1 && ok2 = val1 + val2
| otherwise = var1 + var2
where
m1 = lookup env var1
m2 = lookup env var2
ok1 = maybeToBool m1
ok2 = maybeToBool m2
val1 = expectJust m1
val2 = expectJust m2
The auxiliary functions are
maybeToBool :: Maybe a -> Bool
maybeToBool (Just x) = True
maybeToBool Nothing = False
expectJust :: Maybe a -> a
expectJust (Just x) = x
expectJust Nothing = error "Unexpected Nothing"
What is clunky doing? The guard ok1 &&
ok2 checks that both lookups succeed, using
maybeToBool to convert the Maybe
types to booleans. The (lazily evaluated) expectJust
calls extract the values from the results of the lookups, and binds the
returned values to val1 and val2
respectively. If either lookup fails, then clunky takes the
otherwise case and returns the sum of its arguments.
This is certainly legal Haskell, but it is a tremendously verbose and
un-obvious way to achieve the desired effect. Arguably, a more direct way
to write clunky would be to use case expressions:
clunky env var1 var2 = case lookup env var1 of
Nothing -> fail
Just val1 -> case lookup env var2 of
Nothing -> fail
Just val2 -> val1 + val2
where
fail = var1 + var2
This is a bit shorter, but hardly better. Of course, we can rewrite any set
of pattern-matching, guarded equations as case expressions; that is
precisely what the compiler does when compiling equations! The reason that
Haskell provides guarded equations is because they allow us to write down
the cases we want to consider, one at a time, independently of each other.
This structure is hidden in the case version. Two of the right-hand sides
are really the same (fail), and the whole expression
tends to become more and more indented.
Here is how I would write clunky:
clunky env var1 var2
| Just val1 <- lookup env var1
, Just val2 <- lookup env var2
= val1 + val2
...other equations for clunky...
The semantics should be clear enough. The qualifiers are matched in order.
For a <- qualifier, which I call a pattern guard, the
right hand side is evaluated and matched against the pattern on the left.
If the match fails then the whole guard fails and the next equation is
tried. If it succeeds, then the appropriate binding takes place, and the
next qualifier is matched, in the augmented environment. Unlike list
comprehensions, however, the type of the expression to the right of the
<- is the same as the type of the pattern to its
left. The bindings introduced by pattern guards scope over all the
remaining guard qualifiers, and over the right hand side of the equation.
Just as with list comprehensions, boolean expressions can be freely mixed
with the pattern guards. For example:
f x | [y] <- x
, y > 3
, Just z <- h y
= ...
Haskell's current guards therefore emerge as a special case, in which the
qualifier list has just one element, a boolean expression.
View patterns
View patterns are enabled by the flag -XViewPatterns.
More information and examples of view patterns can be found on the
Wiki
page.
View patterns are somewhat like pattern guards that can be nested inside
of other patterns. They are a convenient way of pattern-matching
against values of abstract types. For example, in a programming language
implementation, we might represent the syntax of the types of the
language as follows:
type Typ

data TypView = Unit
             | Arrow Typ Typ

view :: Typ -> TypView

-- additional operations for constructing Typ's ...
The representation of Typ is held abstract, permitting implementations
to use a fancy representation (e.g., hash-consing to manage sharing).
Without view patterns, using this signature is a little inconvenient:
size :: Typ -> Integer
size t = case view t of
Unit -> 1
Arrow t1 t2 -> size t1 + size t2
It is necessary to iterate the case, rather than using an equational
function definition. And the situation is even worse when the matching
against t is buried deep inside another pattern.
View patterns permit calling the view function inside the pattern and
matching against the result:
size (view -> Unit) = 1
size (view -> Arrow t1 t2) = size t1 + size t2
That is, we add a new form of pattern, written
expression->pattern that means "apply the expression to
whatever we're trying to match against, and then match the result of
that application against the pattern". The expression can be any Haskell
expression of function type, and view patterns can be used wherever
patterns are used.
The semantics of a pattern (exp->pat) are as follows:
Scoping:
The variables bound by the view pattern are the variables bound by
pat.
Any variables in exp are bound occurrences,
but variables bound "to the left" in a pattern are in scope. This
feature permits, for example, one argument to a function to be used in
the view of another argument. For example, the function
clunky from the pattern guards section above can be
written using view patterns as follows:
clunky env (lookup env -> Just val1) (lookup env -> Just val2) = val1 + val2
...other equations for clunky...
More precisely, the scoping rules are:
In a single pattern, variables bound by patterns to the left of a view
pattern expression are in scope. For example:
example :: Maybe ((String -> Integer,Integer), String) -> Bool
example (Just ((f,_), f -> 4)) = True
Additionally, in function definitions, variables bound by matching earlier curried
arguments may be used in view pattern expressions in later arguments:
example :: (String -> Integer) -> String -> Bool
example f (f -> 4) = True
That is, the scoping is the same as it would be if the curried arguments
were collected into a tuple.
In mutually recursive bindings, such as let,
where, or the top level, view patterns in one
declaration may not mention variables bound by other declarations. That
is, each declaration must be self-contained. For example, the following
program is not allowed:
let {(x -> y) = e1 ;
(y -> x) = e2 } in x
(We may lift this
restriction in the future; the only cost is that type checking patterns
would get a little more complicated.)
Typing: If exp has type
T1->T2 and pat matches
a T2, then the whole view pattern matches a
T1.
Matching: To the equations in Section 3.17.3 of the
Haskell 98
Report, add the following:
case v of { (e -> p) -> e1 ; _ -> e2 }
=
case (e v) of { p -> e1 ; _ -> e2 }
That is, to match a variable v against a pattern
(exp->pat), evaluate (exp v) and match the result against
pat.
Efficiency: When the same view function is applied in
multiple branches of a function definition or a case expression (e.g.,
in size above), GHC makes an attempt to collect these
applications into a single nested case expression, so that the view
function is only applied once. Pattern compilation in GHC follows the
matrix algorithm described in Chapter 4 of The
Implementation of Functional Programming Languages. When the
top rows of the first column of a matrix are all view patterns with the
"same" expression, these patterns are transformed into a single nested
case. This includes, for example, adjacent view patterns that line up
in a tuple, as in
f ((view -> A, p1), p2) = e1
f ((view -> B, p3), p4) = e2
The current notion of when two view pattern expressions are "the
same" is very restricted: it is not even full syntactic equality.
However, it does include variables, literals, applications, and tuples;
e.g., two instances of view ("hi", "there") will be
collected. However, the current implementation does not compare up to
alpha-equivalence, so two instances of (x, view x ->
y) will not be coalesced.
The recursive do-notation
The recursive do-notation (also known as mdo-notation) is implemented as described in
A recursive do for Haskell,
by Levent Erkok, John Launchbury,
Haskell Workshop 2002, pages: 29-37. Pittsburgh, Pennsylvania.
This paper is essential reading for anyone making non-trivial use of mdo-notation,
and we do not repeat it here.
The do-notation of Haskell does not allow recursive bindings,
that is, the variables bound in a do-expression are visible only in the textually following
code block. Compare this to a let-expression, where bound variables are visible in the entire binding
group. It turns out that several applications can benefit from recursive bindings in
the do-notation, and this extension provides the necessary syntactic support.
Here is a simple (yet contrived) example:
import Control.Monad.Fix
justOnes = mdo xs <- Just (1:xs)
return xs
As you can guess justOnes will evaluate to Just [1,1,1,...].

The Control.Monad.Fix library introduces the MonadFix class. Its definition is:
class Monad m => MonadFix m where
mfix :: (a -> m a) -> m a
The function mfix
dictates how the required recursion operation should be performed. For example,
justOnes desugars as follows:
justOnes = mfix (\xs' -> do { xs <- Just (1:xs'); return xs })
For full details of the way in which mdo is typechecked and desugared, see
the paper A recursive do for Haskell.
In particular, GHC implements the segmentation technique described in Section 3.2 of the paper.
If recursive bindings are required for a monad,
then that monad must be declared an instance of the MonadFix class.
The following instances of MonadFix are automatically provided: List, Maybe, IO.
Furthermore, the Control.Monad.ST and Control.Monad.ST.Lazy modules provide the instances of the MonadFix class
for Haskell's internal state monad (strict and lazy, respectively).
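For instance, here is a minimal sketch (the data type and names are illustrative) using the IO instance of MonadFix to build two mutually-referring mutable nodes:

{-# LANGUAGE RecursiveDo #-}
import Data.IORef

data Node = Node Int (IORef Node)

-- The two newIORef results refer to each other; only mdo makes this possible.
twoNodes :: IO (IORef Node)
twoNodes = mdo
  a <- newIORef (Node 0 b)
  b <- newIORef (Node 1 a)
  return a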
Here are some important points in using the recursive-do notation:
The recursive version of the do-notation uses the keyword mdo (rather
than do).
It is enabled with the flag -XRecursiveDo, which is in turn implied by
-fglasgow-exts.
Unlike ordinary do-notation, but like let and where bindings,
name shadowing is not allowed; that is, all the names bound in a single mdo must
be distinct (Section 3.3 of the paper).
Variables bound by a let statement in an mdo
are monomorphic in the mdo (Section 3.1 of the paper). However
GHC breaks the mdo into segments to enhance polymorphism,
and improve termination (Section 3.2 of the paper).
Historical note: The old implementation of the mdo-notation (and most
of the existing documents) used the name
MonadRec for the class and the corresponding library.
This name is not supported by GHC.
Parallel List Comprehensions

Parallel list comprehensions are a natural extension to list
comprehensions. List comprehensions can be thought of as a nice
syntax for writing maps and filters. Parallel comprehensions
extend this to include the zipWith family.

A parallel list comprehension has multiple independent
branches of qualifier lists, each separated by a `|' symbol. For
example, the following zips together two lists:
[ (x, y) | x <- xs | y <- ys ]
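For instance, a three-branch comprehension behaves like zipWith3; here is a small sketch (the names are illustrative):

-- Requires -XParallelListComp.
triples :: [Int]
triples = [ x + y + z | x <- [1,2,3] | y <- [10,20,30] | z <- [100,200,300] ]
-- triples evaluates to [111,222,333]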
The behavior of parallel list comprehensions follows that of
zip, in that the resulting list will have the same length as the
shortest branch.

We can define parallel list comprehensions by translation to
regular comprehensions. Here's the basic idea: given a parallel
comprehension of the form:
[ e | p1 <- e11, p2 <- e12, ...
| q1 <- e21, q2 <- e22, ...
...
]
This will be translated to:
[ e | ((p1,p2), (q1,q2), ...) <- zipN [(p1,p2) | p1 <- e11, p2 <- e12, ...]
[(q1,q2) | q1 <- e21, q2 <- e22, ...]
...
]
where zipN is the appropriate zip for the given number of
branches.

Generalised (SQL-Like) List Comprehensions

Generalised list comprehensions are a further enhancement to the
list comprehension syntactic sugar to allow operations such as sorting
and grouping which are familiar from SQL. They are fully described in the
paper
Comprehensive comprehensions: comprehensions with "order by" and "group by",
except that the syntax we use differs slightly from the paper.

Here is an example:
employees = [ ("Simon", "MS", 80)
, ("Erik", "MS", 100)
, ("Phil", "Ed", 40)
, ("Gordon", "Ed", 45)
, ("Paul", "Yale", 60)]
output = [ (the dept, sum salary)
| (name, dept, salary) <- employees
, then group by dept
, then sortWith by (sum salary)
, then take 5 ]
In this example, the list output would take on
the value:
[("Yale", 60), ("Ed", 85), ("MS", 180)]
There are three new keywords: group, by, and using.
(The function sortWith is not a keyword; it is an ordinary
function that is exported by GHC.Exts.)

There are five new forms of comprehension qualifier,
all introduced by the (existing) keyword then:
then f
This statement requires that f have the type
forall a. [a] -> [a]. You can see an example of its use in the
motivating example, as this form is used to apply take 5.
then f by e
This form is similar to the previous one, but allows you to create a function
which will be passed as the first argument to f. As a consequence f must have
the type forall a. (a -> t) -> [a] -> [a]. As you can see
from the type, this function lets f "project out" some information
from the elements of the list it is transforming.

An example is shown in the opening example, where sortWith
is supplied with a function that lets it find out the sum salary
for any item in the list comprehension it transforms.
then group by e using f
This is the most general of the grouping-type statements. In this form,
f is required to have type forall a. (a -> t) -> [a] -> [[a]].
As with the then f by e case above, the first argument
is a function supplied to f by the compiler which lets it compute e on every
element of the list being transformed. However, unlike the non-grouping case,
f additionally partitions the list into a number of sublists: this means that
at every point after this statement, binders occurring before it in the comprehension
refer to lists of possible values, not single values. To help understand
this, let's look at an example:
-- This works similarly to groupWith in GHC.Exts, but doesn't sort its input first
groupRuns :: Eq b => (a -> b) -> [a] -> [[a]]
groupRuns f = groupBy (\x y -> f x == f y)
output = [ (the x, y)
| x <- ([1..3] ++ [1..2])
, y <- [4..6]
, then group by x using groupRuns ]
This results in the variable output taking on the value below:
[(1, [4, 5, 6]), (2, [4, 5, 6]), (3, [4, 5, 6]), (1, [4, 5, 6]), (2, [4, 5, 6])]
Note that we have used the function the (from GHC.Exts) to change the type
of x from a list to its original numeric type. The variable y, in contrast, is left
unchanged from the list form introduced by the grouping.
then group by e
This form of grouping is essentially the same as the one described above. However,
since no function to use for the grouping has been supplied it will fall back on the
groupWith function defined in
GHC.Exts. This
is the form of the group statement that we made use of in the opening example.
then group using f
With this form of the group statement, f is required to simply have the type
forall a. [a] -> [[a]], which will be used to group up the
comprehension so far directly. An example of this form is as follows:
output = [ x
| y <- [1..5]
, x <- "hello"
, then group using inits]
This will yield a list containing every prefix of the word "hello" written out 5 times:
["","h","he","hel","hell","hello","helloh","hellohe","hellohel","hellohell","hellohello","hellohelloh",...]
Rebindable syntax and the implicit Prelude import

GHC normally imports
Prelude.hi files for you. If you'd
rather it didn't, then give it a
-XNoImplicitPrelude option. The idea is
that you can then import a Prelude of your own. (But don't
call it Prelude; the Haskell module
namespace is flat, and you must not conflict with any
Prelude module.)Suppose you are importing a Prelude of your own
in order to define your own numeric class
hierarchy. It completely defeats that purpose if the
literal "1" means "Prelude.fromInteger
1", which is what the Haskell Report specifies.
So the -XNoImplicitPrelude
flag also causes
the following pieces of built-in syntax to refer to
whatever is in scope, not the Prelude
versions:
An integer literal 368 means
"fromInteger (368::Integer)", rather than
"Prelude.fromInteger (368::Integer)".
Fractional literals are handled in just the same way,
except that the translation is
fromRational (3.68::Rational).
The equality test in an overloaded numeric pattern
uses whatever (==) is in scope.
The subtraction operation, and the
greater-than-or-equal test, in n+k patterns
use whatever (-) and (>=) are in scope.
Negation (e.g. "- (f x)")
means "negate (f x)", both in numeric
patterns, and expressions.
"Do" notation is translated using whatever
functions (>>=),
(>>), and fail,
are in scope (not the Prelude
versions). List comprehensions, mdo (see the section on the recursive do-notation), and parallel array
comprehensions, are unaffected.

Arrow
notation (see the section on arrow notation)
uses whatever arr,
(>>>), first,
app, (|||) and
loop functions are in scope. But unlike the
other constructs, the types of these functions must match the
Prelude types very closely. Details are in flux; if you want
to use this, ask!
In all cases (apart from arrow notation), the static semantics should be that of the desugared form,
even if that is a little unexpected. For example, the
static semantics of the literal 368
is exactly that of fromInteger (368::Integer); it's fine for
fromInteger to have any of the types:
fromInteger :: Integer -> Integer
fromInteger :: forall a. Foo a => Integer -> a
fromInteger :: Num a => a -> Integer
fromInteger :: Integer -> Bool -> Bool
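As a minimal sketch (the module and definitions are our own, not part of the manual), here is a module whose integer literals are rebound to produce Strings:

{-# LANGUAGE NoImplicitPrelude #-}
module LiteralDemo where

import Prelude (Integer, String, show, (++))

-- With -XNoImplicitPrelude, integer literals use whatever fromInteger is in scope.
fromInteger :: Integer -> String
fromInteger n = "lit:" ++ show n

greeting :: String
greeting = 368   -- desugars to fromInteger (368::Integer), i.e. "lit:368"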
Be warned: this is an experimental facility, with
fewer checks than usual. Use -dcore-lint
to typecheck the desugared program. If Core Lint is happy
you should be all right.

Postfix operators
GHC allows a small extension to the syntax of left operator sections, which
allows you to define postfix operators. The extension is this: the left section
(e !)
is equivalent (from the point of view of both type checking and execution) to the expression
((!) e)
(for any expression e and operator (!)).
The strict Haskell 98 interpretation is that the section is equivalent to
(\y -> (!) e y)
That is, the operator must be a function of two arguments. GHC allows it to
take only one argument, and that in turn allows you to write the function
postfix.
Since this extension goes beyond Haskell 98, it should really be enabled
by a flag; but in fact it is enabled all the time. (No Haskell 98 programs
change their behaviour, of course.)
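For example, here is a sketch of a postfix factorial operator (the operator name is our own choice); note that it is defined in prefix form, as required below:

-- A one-argument operator used postfix, as described above.
(!) :: Integer -> Integer
(!) n = product [1..n]

twentyFour :: Integer
twentyFour = (4 !)   -- the left section (4 !) means ((!) 4)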
The extension does not extend to the left-hand side of function
definitions; you must define such a function in prefix form.

Record field disambiguation
In record construction and record pattern matching
it is entirely unambiguous which field is referred to, even if there are two different
data types in scope with a common field name. For example:
module M where
data S = MkS { x :: Int, y :: Bool }
module Foo where
import M
data T = MkT { x :: Int }
ok1 (MkS { x = n }) = n+1 -- Unambiguous
ok2 n = MkT { x = n+1 } -- Unambiguous
bad1 k = k { x = 3 } -- Ambiguous
bad2 k = x k -- Ambiguous
Even though there are two x's in scope,
it is clear that the x in the pattern in the
definition of ok1 can only mean the field
x from type S. Similarly for
the function ok2. However, in the record update
in bad1 and the record selection in bad2
it is not clear which of the two types is intended.
Haskell 98 regards all four as ambiguous, but with the
-XDisambiguateRecordFields flag, GHC will accept
the former two. The rules are precisely the same as those for instance
declarations in Haskell 98, where the method names on the left-hand side
of the method bindings in an instance declaration refer unambiguously
to the method of that class (provided they are in scope at all), even
if there are other variables in scope with the same name.
This reduces the clutter of qualified names when you import two
records from different modules that use the same field name.
Record puns
Record puns are enabled by the flag -XNamedFieldPuns.
When using records, it is common to write a pattern that binds a
variable with the same name as a record field, such as:
data C = C {a :: Int}
f (C {a = a}) = a
Record punning permits the variable name to be elided, so one can simply
write
f (C {a}) = a
to mean the same pattern as above. That is, in a record pattern, the
pattern a expands into the pattern a =
a for the same name a.
Note that puns and other patterns can be mixed in the same record:
data C = C {a :: Int, b :: Int}
f (C {a, b = 4}) = a
and that puns can be used wherever record patterns occur (e.g. in
let bindings or at the top-level).
Record punning can also be used in an expression, writing, for example,
let a = 1 in C {a}
instead of
let a = 1 in C {a = a}
Note that this expansion is purely syntactic, so the record pun
expression refers to the nearest enclosing variable that is spelled the
same as the field name.
Record wildcards
Record wildcards are enabled by the flag -XRecordWildCards.
For records with many fields, it can be tiresome to write out each field
individually in a record pattern, as in
data C = C {a :: Int, b :: Int, c :: Int, d :: Int}
f (C {a = 1, b = b, c = c, d = d}) = b + c + d
Record wildcard syntax permits a (..) in a record
pattern, where each elided field f is replaced by the
pattern f = f. For example, the above pattern can be
written as
f (C {a = 1, ..}) = b + c + d
Note that wildcards can be mixed with other patterns, including puns
(see the record puns section above); for example, in a pattern C {a
= 1, b, ..}. Additionally, record wildcards can be used
wherever record patterns occur, including in let
bindings and at the top-level. For example, the top-level binding
C {a = 1, ..} = e
defines b, c, and
d.
Record wildcards can also be used in expressions, writing, for example,
let {a = 1; b = 2; c = 3; d = 4} in C {..}
in place of
let {a = 1; b = 2; c = 3; d = 4} in C {a=a, b=b, c=c, d=d}
Note that this expansion is purely syntactic, so the record wildcard
expression refers to the nearest enclosing variables that are spelled
the same as the omitted field names.
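For example, here is a minimal sketch (the data type and names are illustrative) using a wildcard pattern to bring all fields into scope at once:

{-# LANGUAGE RecordWildCards #-}

data Config = Config { host :: String, port :: Int, verbose :: Bool }

-- Config{..} binds host, port, and verbose from the matched record.
describe :: Config -> String
describe Config{..} =
  host ++ ":" ++ show port ++ (if verbose then " (verbose)" else "")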
Local Fixity Declarations
A careful reading of the Haskell 98 Report reveals that fixity
declarations (infix, infixl, and
infixr) are permitted to appear inside local bindings
such as those introduced by let and
where. However, the Haskell Report does not specify
the semantics of such bindings very precisely.
In GHC, a fixity declaration may accompany a local binding:
let f = ...
infixr 3 `f`
in
...
and the fixity declaration applies wherever the binding is in scope.
For example, in a let, it applies in the right-hand
sides of other let-bindings and the body of the
let. Or, in recursive do
expressions (see the section on the recursive do-notation), the local fixity
declarations of a let statement scope over other
statements in the group, just as the bound name does.
Moreover, a local fixity declaration must accompany a local binding of
that name: it is not possible to revise the fixity of a name bound
elsewhere, as in
let infixr 9 $ in ...
Because local fixity declarations are technically Haskell 98, no flag is
necessary to enable them.
Package-qualified imports

With the -XPackageImports flag, GHC allows
import declarations to be qualified by the package name that the
module is intended to be imported from. For example:
import "network" Network.Socket
would import the module Network.Socket from
the package network (any version). This may
be used to disambiguate an import when the same module is
available from multiple packages, or is present in both the
current package being built and an external package.

Note: you probably don't need to use this feature, it was
added mainly so that we can build backwards-compatible versions of
packages when APIs change. It can lead to fragile dependencies in
the common case: modules occasionally move from one package to
another, rendering any package-qualified imports broken.

Summary of stolen syntax

Turning on an option that enables special syntax
might cause working Haskell 98 code to fail
to compile, perhaps because it uses a variable name which has
become a reserved word. This section lists the syntax that is
"stolen" by language extensions.
We use
notation and nonterminal names from the Haskell 98 lexical syntax
(see the Haskell 98 Report).
We only list syntax changes here that might affect
existing working programs (i.e. "stolen" syntax). Many of these
extensions will also enable new context-free syntax, but in all
cases programs written to use the new syntax would not be
compilable without the option enabled.

There are two classes of special syntax:

New reserved words and symbols: character sequences
which are no longer available for use as identifiers in the
program.

Other special syntax: sequences of characters that have
a different meaning when this particular option is turned
on.

The following syntax is stolen:
forall
    Stolen (in types) by: -XScopedTypeVariables, -XLiberalTypeSynonyms, -XRankNTypes, -XPolymorphicComponents, -XExistentialQuantification

mdo
    Stolen by: -XRecursiveDo

foreign
    Stolen by: -XForeignFunctionInterface

rec, proc, -<, >-, -<<, >>-, and (|, |) brackets
    Stolen by: -XArrows

?varid, %varid
    Stolen by: -XImplicitParams

[|, [e|, [p|, [d|, [t|, $(, $varid
    Stolen by: -XTemplateHaskell

[:varid|
    Stolen by: -XQuasiQuotes

varid{#}, char#, string#, integer#, float#, float##, (#, #)
    Stolen by: -XMagicHash
Extensions to data types and type synonyms

Data types with no constructors

With the -XEmptyDataDecls flag, GHC lets you declare
a data type with no constructors. For example:
data S -- S :: *
data T a -- T :: * -> *
Syntactically, the declaration lacks the "= constrs" part. The
type can be parameterised over types of any kind, but if the kind is
not * then an explicit kind annotation must be used
(see the section on explicitly-kinded quantification).

Such data types have only one value, namely bottom.
Nevertheless, they can be useful when defining "phantom types", as in the sketch below.
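Here is a minimal sketch of phantom types (the names are our own): the empty types record a unit of measure in the type, at no runtime cost:

{-# LANGUAGE EmptyDataDecls #-}

data Metres  -- no constructors: used only at the type level
data Feet

newtype Length unit = Length Double

-- Lengths can only be added when their (phantom) units agree.
addLengths :: Length u -> Length u -> Length u
addLengths (Length a) (Length b) = Length (a + b)

-- addLengths (Length 1 :: Length Metres) (Length 2 :: Length Feet)  -- type error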
Infix type constructors, classes, and type variables

GHC allows type constructors, classes, and type variables to be operators, and
to be written infix, very much like expressions. More specifically:
A type constructor or class can be an operator, beginning with a colon; e.g. :*:.
The lexical syntax is the same as that for data constructors.
Data type and type-synonym declarations can be written infix, parenthesised
if you want further arguments. E.g.
data a :*: b = Foo a b
type a :+: b = Either a b
class a :=: b where ...
data (a :**: b) x = Baz a b x
type (a :++: b) y = Either (a,b) y
Types, and class constraints, can be written infix. For example
x :: Int :*: Bool
f :: (a :=: b) => a -> b
A type variable can be an (unqualified) operator e.g. +.
The lexical syntax is the same as that for variable operators, excluding "(.)",
"(!)", and "(*)". In a binding position, the operator must be
parenthesised. For example:
type T (+) = Int + Int
f :: T Either
f = Left 3
liftA2 :: Arrow (~>)
=> (a -> b -> c) -> (e ~> a) -> (e ~> b) -> (e ~> c)
liftA2 = ...
Back-quotes work
as for expressions, both for type constructors and type variables; e.g. Int `Either` Bool, or
Int `a` Bool. Similarly, parentheses work the same; e.g. (:*:) Int Bool.
Fixities may be declared for type constructors, or classes, just as for data constructors. However,
one cannot distinguish between the two in a fixity declaration; a fixity declaration
sets the fixity for a data constructor and the corresponding type constructor. For example:
infixl 7 T, :*:
sets the fixity for both type constructor T and data constructor T,
and similarly for :*:.
Function arrow is infixr with fixity 0. (This might change; I'm not sure what it should be.)
Liberalised type synonyms
Type synonyms are like macros at the type level, but Haskell 98 imposes many rules
on individual synonym declarations.
With the -XLiberalTypeSynonyms extension,
GHC does validity checking on types only after expanding type synonyms.
That means that GHC can be very much more liberal about type synonyms than Haskell 98.
You can write a forall (including overloading)
in a type synonym, thus:
type Discard a = forall b. Show b => a -> b -> (a, String)
f :: Discard a
f x y = (x, show y)
g :: Discard Int -> (Int,String) -- A rank-2 type
g f = f 3 True
If you also use -XUnboxedTuples,
you can write an unboxed tuple in a type synonym:
type Pr = (# Int, Int #)
h :: Int -> Pr
h x = (# x, x #)
You can apply a type synonym to a forall type:
type Foo a = a -> a -> Bool
f :: Foo (forall b. b->b)
After expanding the synonym, f has the legal (in GHC) type:
f :: (forall b. b->b) -> (forall b. b->b) -> Bool
You can apply a type synonym to a partially applied type synonym:
type Generic i o = forall x. i x -> o x
type Id x = x
foo :: Generic Id []
After expanding the synonym, foo has the legal (in GHC) type:
foo :: forall x. x -> [x]
GHC currently does kind checking before expanding synonyms (though even that
could be changed.)
After expanding type synonyms, GHC does validity checking on types, looking for
the following mal-formedness which isn't detected simply by kind checking:
Type constructor applied to a type involving for-alls.
Unboxed tuple on left of an arrow.
Partially-applied type synonym.
So, for example,
this will be rejected:
type Pr = (# Int, Int #)
h :: Pr -> Int
h x = ...
because GHC does not allow unboxed tuples on the left of a function arrow.
Existentially quantified data constructors
The idea of using existential quantification in data type declarations
was suggested by Perry, and implemented in Hope+ (Nigel Perry, The Implementation
of Practical Functional Programming Languages, PhD Thesis, University of
London, 1991). It was later formalised by Laufer and Odersky
(Polymorphic type inference and abstract data types,
TOPLAS, 16(5), pp1411-1430, 1994).
It's been in Lennart
Augustsson's hbc Haskell compiler for several years, and
proved very useful. Here's the idea. Consider the declaration:
data Foo = forall a. MkFoo a (a -> Bool)
| Nil
The data type Foo has two constructors with types:
MkFoo :: forall a. a -> (a -> Bool) -> Foo
Nil :: Foo
Notice that the type variable a in the type of MkFoo
does not appear in the data type itself, which is plain Foo.
For example, the following expression is fine:
[MkFoo 3 even, MkFoo 'c' isUpper] :: [Foo]
Here, (MkFoo 3 even) packages an integer with a function
even that maps an integer to Bool; and MkFoo 'c'
isUpper packages a character with a compatible function. These
two things are each of type Foo and can be put in a list.
What can we do with a value of type Foo? In particular,
what happens when we pattern-match on MkFoo?
f (MkFoo val fn) = ???
Since all we know about val and fn is that they
are compatible, the only (useful) thing we can do with them is to
apply fn to val to get a boolean. For example:
f :: Foo -> Bool
f (MkFoo val fn) = fn val
What this allows us to do is to package heterogeneous values
together with a bunch of functions that manipulate them, and then treat
that collection of packages in a uniform manner. You can express
quite a bit of object-oriented-like programming this way.
Why existential?
What has this to do with existential quantification?
Simply that MkFoo has the (nearly) isomorphic type
MkFoo :: (exists a . (a, a -> Bool)) -> Foo
But Haskell programmers can safely think of the ordinary
universally quantified type given above, thereby avoiding
adding a new existential quantification construct.
Existentials and type classes
An easy extension is to allow
arbitrary contexts before the constructor. For example:
data Baz = forall a. Eq a => Baz1 a a
| forall b. Show b => Baz2 b (b -> b)
The two constructors have the types you'd expect:
Baz1 :: forall a. Eq a => a -> a -> Baz
Baz2 :: forall b. Show b => b -> (b -> b) -> Baz
But when pattern matching on Baz1 the matched values can be compared
for equality, and when pattern matching on Baz2 the first matched
value can be converted to a string (as well as applying the function to it).
So this program is legal:
f :: Baz -> String
f (Baz1 p q) | p == q = "Yes"
| otherwise = "No"
f (Baz2 v fn) = show (fn v)
Operationally, in a dictionary-passing implementation, the
constructors Baz1 and Baz2 must store the
dictionaries for Eq and Show respectively, and
extract it on pattern matching.
Record Constructors
GHC allows existentials to be used with record syntax as well. For example:
data Counter a = forall self. NewCounter
{ _this :: self
, _inc :: self -> self
, _display :: self -> IO ()
, tag :: a
}
Here tag is a public field, with a well-typed selector
function tag :: Counter a -> a. The self
type is hidden from the outside; any attempt to apply _this,
_inc or _display as functions will raise a
compile-time error. In other words, GHC defines a record selector function
only for fields whose type does not mention the existentially-quantified variables.
(This example uses an underscore in the names of fields for which record selectors
will not be defined, but that is only programming style; GHC ignores them.)
To make use of these hidden fields, we need to create some helper functions:
inc :: Counter a -> Counter a
inc (NewCounter x i d t) = NewCounter
{ _this = i x, _inc = i, _display = d, tag = t }
display :: Counter a -> IO ()
display NewCounter{ _this = x, _display = d } = d x
Now we can define counters with different underlying implementations:
counterA :: Counter String
counterA = NewCounter
{ _this = 0, _inc = (1+), _display = print, tag = "A" }
counterB :: Counter String
counterB = NewCounter
{ _this = "", _inc = ('#':), _display = putStrLn, tag = "B" }
main = do
display (inc counterA) -- prints "1"
display (inc (inc counterB)) -- prints "##"
At the moment, record update syntax is only supported for Haskell 98 data types,
so the following function does not work:
-- This is invalid; use explicit NewCounter instead for now
setTag :: Counter a -> a -> Counter a
setTag obj t = obj{ tag = t }
Restrictions
There are several restrictions on the ways in which existentially-quantified
constructors can be used.
When pattern matching, each pattern match introduces a new,
distinct, type for each existential type variable. These types cannot
be unified with any other type, nor can they escape from the scope of
the pattern match. For example, these fragments are incorrect:
f1 (MkFoo a f) = a
Here, the type bound by MkFoo "escapes", because a
is the result of f1. One way to see why this is wrong is to
ask what type f1 has:
f1 :: Foo -> a -- Weird!
What is this "a" in the result type? Clearly we don't mean
this:
f1 :: forall a. Foo -> a -- Wrong!
The original program is just plain wrong. Here's another sort of error
f2 (Baz1 a b) (Baz1 p q) = a==q
It's ok to say a==b or p==q, but
a==q is wrong because it equates the two distinct types arising
from the two Baz1 constructors.
You can't pattern-match on an existentially quantified
constructor in a let or where group of
bindings. So this is illegal:
f3 x = a==b where { Baz1 a b = x }
Instead, use a case expression:
f3 x = case x of Baz1 a b -> a==b
In general, you can only pattern-match
on an existentially-quantified constructor in a case expression or
in the patterns of a function definition.
The reason for this restriction is really an implementation one.
Type-checking binding groups is already a nightmare without
existentials complicating the picture. Also an existential pattern
binding at the top level of a module doesn't make sense, because it's
not clear how to prevent the existentially-quantified type "escaping".
So for now, there's a simple-to-state restriction. We'll see how
annoying it is.
You can't use existential quantification for newtype
declarations. So this is illegal:
newtype T = forall a. Ord a => MkT a
Reason: a value of type T must be represented as a
pair of a dictionary for Ord t and a value of type
t. That contradicts the idea that
newtype should have no concrete representation.
You can get just the same efficiency and effect by using
data instead of newtype. If
there is no overloading involved, then there is more of a case for
allowing an existentially-quantified newtype,
because the data version does carry an
implementation cost, but single-field existentially quantified
constructors aren't much use. So the simple restriction (no
existential stuff on newtype) stands, unless there
are convincing reasons to change it.
You can't use deriving to define instances of a
data type with existentially quantified data constructors.
Reason: in most cases it would not make sense. For example:
data T = forall a. MkT [a] deriving( Eq )
To derive Eq in the standard way we would need to have equality
between the single component of two MkT constructors:
instance Eq T where
(MkT a) == (MkT b) = ???
But a and b have distinct types, and so can't be compared.
It's just about possible to imagine examples in which the derived instance
would make sense, but it seems altogether simpler simply to prohibit such
declarations. Define your own instances!
Declaring data types with explicit constructor signatures

GHC allows you to declare an algebraic data type by
giving the type signatures of constructors explicitly. For example:
data Maybe a where
Nothing :: Maybe a
Just :: a -> Maybe a
The form is called a "GADT-style declaration"
because Generalised Algebraic Data Types, described in the next section,
can only be declared using this form.

Notice that GADT-style syntax generalises existential types
(see the section on existentially quantified data constructors).
For example, these two declarations are equivalent:
data Foo = forall a. MkFoo a (a -> Bool)
data Foo' where { MkFoo :: a -> (a->Bool) -> Foo' }
Any data type that can be declared in standard Haskell-98 syntax
can also be declared using GADT-style syntax.
The choice is largely stylistic, but GADT-style declarations differ in one important respect:
they treat class constraints on the data constructors differently.
Specifically, if the constructor is given a type-class context, that
context is made available by pattern matching. For example:
data Set a where
MkSet :: Eq a => [a] -> Set a
makeSet :: Eq a => [a] -> Set a
makeSet xs = MkSet (nub xs)
insert :: a -> Set a -> Set a
insert a (MkSet as) | a `elem` as = MkSet as
| otherwise = MkSet (a:as)
A use of MkSet as a constructor (e.g. in the definition of makeSet)
gives rise to a (Eq a)
constraint, as you would expect. The new feature is that pattern-matching on MkSet
(as in the definition of insert) makes available an (Eq a)
context. In implementation terms, the MkSet constructor has a hidden field that stores
the (Eq a) dictionary that is passed to MkSet; so
when pattern-matching that dictionary becomes available for the right-hand side of the match.
In the example, the equality dictionary is used to satisfy the equality constraint
generated by the call to elem, so that the type of
insert itself has no Eq constraint.
For example, one possible application is to reify dictionaries:
data NumInst a where
MkNumInst :: Num a => NumInst a
intInst :: NumInst Int
intInst = MkNumInst
plus :: NumInst a -> a -> a -> a
plus MkNumInst p q = p + q
Here, a value of type NumInst a is equivalent
to an explicit (Num a) dictionary.
All this applies to constructors declared using the existential-quantification syntax described earlier.
For example, the NumInst data type above could equivalently be declared
like this:
data NumInst a
  = Num a => MkNumInst
Notice that, unlike the situation when declaring an existential, there is
no forall, because the Num constrains the
data type's universally quantified type variable a.
A constructor may have both universal and existential type variables: for example,
the following two declarations are equivalent:
data T1 a
= forall b. (Num a, Eq b) => MkT1 a b
data T2 a where
MkT2 :: (Num a, Eq b) => a -> b -> T2 a
All this behaviour contrasts with Haskell 98's peculiar treatment of
contexts on a data type declaration (Section 4.2.1 of the Haskell 98 Report).
In Haskell 98 the definition
data Eq a => Set' a = MkSet' [a]
gives MkSet' the same type as MkSet above. But instead of
making available an (Eq a) constraint, pattern-matching
on MkSet' requires an (Eq a) constraint!
GHC faithfully implements this behaviour, odd though it is. But for GADT-style declarations,
GHC's behaviour is much more useful, as well as much more intuitive.
The rest of this section gives further details about GADT-style data
type declarations.
The result type of each data constructor must begin with the type constructor being defined.
If the result type of all constructors
has the form T a1 ... an, where a1 ... an
are distinct type variables, then the data type is ordinary;
otherwise it is a generalised data type (see the GADTs section below).
The type signature of
each constructor is independent, and is implicitly universally quantified as usual.
Different constructors may have different universally-quantified type variables
and different type-class constraints.
For example, this is fine:
data T a where
T1 :: Eq b => b -> T b
T2 :: (Show c, Ix c) => c -> [c] -> T c
Unlike a Haskell-98-style
data type declaration, the type variable(s) in the "data Set a where" header
have no scope. Indeed, one can write a kind signature instead:
data Set :: * -> * where ...
or even a mixture of the two:
data Foo a :: (* -> *) -> * where ...
The type variables (if given) may be explicitly kinded, so we could also write the header for Foo
like this:
data Foo a (b :: * -> *) where ...
You can use strictness annotations, in the obvious places
in the constructor type:
data Term a where
Lit :: !Int -> Term Int
If :: Term Bool -> !(Term a) -> !(Term a) -> Term a
Pair :: Term a -> Term b -> Term (a,b)
You can use a deriving clause on a GADT-style data type
declaration. For example, these two declarations are equivalent
data Maybe1 a where {
Nothing1 :: Maybe1 a ;
Just1 :: a -> Maybe1 a
} deriving( Eq, Ord )
data Maybe2 a = Nothing2 | Just2 a
deriving( Eq, Ord )
You can use record syntax on a GADT-style data type declaration:
data Person where
Adult { name :: String, children :: [Person] } :: Person
Child { name :: String } :: Person
As usual, for every constructor that has a field f, the type of
field f must be the same (modulo alpha conversion).
At the moment, record updates are not yet possible with GADT-style declarations,
so support is limited to record construction, selection and pattern matching.
For example
aPerson = Adult { name = "Fred", children = [] }
hasChildren :: Person -> Bool
hasChildren (Adult { children = kids }) = not (null kids)
hasChildren (Child {}) = False
As in the case of existentials declared using the Haskell-98-like record syntax
(see the section on record constructors for existentials),
record-selector functions are generated only for those fields that have well-typed
selectors.
Here is the example of that section, in GADT-style syntax:
data Counter a where
NewCounter { _this :: self
, _inc :: self -> self
, _display :: self -> IO ()
, tag :: a
}
:: Counter a
As before, only one selector function is generated here, that for tag.
Nevertheless, you can still use all the field names in pattern matching and record construction.
Generalised Algebraic Data Types (GADTs)

Generalised Algebraic Data Types generalise ordinary algebraic data types
by allowing constructors to have richer return types. Here is an example:
data Term a where
Lit :: Int -> Term Int
Succ :: Term Int -> Term Int
IsZero :: Term Int -> Term Bool
If :: Term Bool -> Term a -> Term a -> Term a
Pair :: Term a -> Term b -> Term (a,b)
Notice that the return type of the constructors is not always Term a, as is the
case with ordinary data types. This generality allows us to
write a well-typed eval function
for these Terms:
eval :: Term a -> a
eval (Lit i) = i
eval (Succ t) = 1 + eval t
eval (IsZero t) = eval t == 0
eval (If b e1 e2) = if eval b then eval e1 else eval e2
eval (Pair e1 e2) = (eval e1, eval e2)
The key point about GADTs is that pattern matching causes type refinement.
For example, in the right hand side of the equation
eval :: Term a -> a
eval (Lit i) = ...
the type a is refined to Int. That's the whole point!
A precise specification of the type rules is beyond what this user manual aspires to,
but the design closely follows that described in
the paper Simple
unification-based type inference for GADTs,
(ICFP 2006).
The general principle is this: type refinement is only carried out
based on user-supplied type annotations.
So if no type signature is supplied for eval, no type refinement happens,
and lots of obscure error messages will
occur. However, the refinement is quite general. For example, if we had:
eval :: Term a -> a -> a
eval (Lit i) j = i+j
the pattern match causes the type a to be refined to Int (because of the type
of the constructor Lit), and that refinement also applies to the type of j, and
the result type of the case expression. Hence the addition i+j is legal.
These and many other examples are given in papers by Hongwei Xi, and
Tim Sheard. There is a longer introduction
on the wiki,
and Ralf Hinze's
Fun with phantom types also has a number of examples. Note that papers
may use different notation to that implemented in GHC.
The rest of this section outlines the extensions to GHC that support GADTs. The extension is enabled with
-XGADTs. The -XGADTs flag also sets -XRelaxedPolyRec.

A GADT can only be declared using GADT-style syntax (see the previous section);
the old Haskell-98 syntax for data declarations always declares an ordinary data type.
The result type of each constructor must begin with the type constructor being defined,
but for a GADT the arguments to the type constructor can be arbitrary monotypes.
For example, in the Term data
type above, the type of each constructor must end with Term ty, but
the ty need not be a type variable (e.g. the Lit
constructor).
It is permitted to declare an ordinary algebraic data type using GADT-style syntax.
What makes a GADT into a GADT is not the syntax, but rather the presence of data constructors
whose result type is not just T a b.
You cannot use a deriving clause for a GADT; only for
an ordinary data type.
As mentioned in the previous section, record syntax is supported.
For example:
data Term a where
    Lit    { val  :: Int }      :: Term Int
    Succ   { num  :: Term Int } :: Term Int
    Pred   { num  :: Term Int } :: Term Int
    IsZero { arg  :: Term Int } :: Term Bool
    Pair   { arg1 :: Term a
           , arg2 :: Term b
           }                    :: Term (a,b)
    If     { cnd  :: Term Bool
           , tru  :: Term a
           , fls  :: Term a
           }                    :: Term a
However, for GADTs there is the following additional constraint:
every constructor that has a field f must have
the same result type (modulo alpha conversion).
Hence, in the above example, we cannot merge the num
and arg fields into a
single name. Although their field types are both Term Int,
their selector functions actually have different types:
num :: Term Int -> Term Int
arg :: Term Bool -> Term Int
When pattern-matching against data constructors drawn from a GADT,
for example in a case expression, the following rules apply:
The type of the scrutinee must be rigid.
The type of the entire case expression must be rigid.
The type of any free variable mentioned in any of
the case alternatives must be rigid.
A type is "rigid" if it is completely known to the compiler at its binding site. The easiest
way to ensure that a variable has a rigid type is to give it a type signature.
For more precise details see
Simple unification-based type inference for GADTs.
The criteria implemented by GHC are given in the Appendix of that paper.
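As an illustrative sketch (the function depth is ours, not GHC's or the paper's): giving the function a type signature makes the scrutinee's type rigid, so every GADT match below type-checks:
depth :: Term a -> Int    -- the signature makes the scrutinee's type rigid
depth (Lit _)    = 1
depth (Succ t)   = 1 + depth t
depth (IsZero t) = 1 + depth t
depth (If b t e) = 1 + maximum [depth b, depth t, depth e]
depth (Pair a b) = 1 + max (depth a) (depth b)
Without the signature, no refinement happens and the definition is rejected with obscure errors, as described above.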
Extensions to the "deriving" mechanismInferred context for deriving clauses
The Haskell Report is vague about exactly when a deriving clause is
legal. For example:
data T0 f a = MkT0 a deriving( Eq )
data T1 f a = MkT1 (f a) deriving( Eq )
data T2 f a = MkT2 (f (f a)) deriving( Eq )
The natural generated Eq code would result in these instance declarations:
instance Eq a => Eq (T0 f a) where ...
instance Eq (f a) => Eq (T1 f a) where ...
instance Eq (f (f a)) => Eq (T2 f a) where ...
The first of these is obviously fine. The second is still fine, although less obviously.
The third is not Haskell 98, and risks losing termination of instances.
GHC takes a conservative position: it accepts the first two, but not the third. The rule is this:
each constraint in the inferred instance context must consist only of type variables,
with no repetitions.
This rule is applied regardless of flags. If you want a more exotic context, you can write
it yourself, using the standalone deriving mechanism.
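For example, the T2 instance rejected above can be written by hand with a standalone deriving declaration (a sketch; the exotic context also needs the extra flags shown):
{-# LANGUAGE StandaloneDeriving, FlexibleContexts, UndecidableInstances #-}
deriving instance Eq (f (f a)) => Eq (T2 f a)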
Stand-alone deriving declarations
GHC now allows stand-alone deriving declarations, enabled by -XStandaloneDeriving:
data Foo a = Bar a | Baz String
deriving instance Eq a => Eq (Foo a)
The syntax is identical to that of an ordinary instance declaration apart from (a) the keyword
deriving, and (b) the absence of the where part.
You must supply a context (in the example the context is (Eq a)),
exactly as you would in an ordinary instance declaration.
(In contrast the context is inferred in a deriving clause
attached to a data type declaration.)
A deriving instance declaration
must obey the same rules concerning form and termination as ordinary instance declarations,
controlled by the same flags; see .
Unlike a deriving
declaration attached to a data declaration, the instance can be more specific
than the data type (assuming you also use
-XFlexibleInstances, ). Consider
for example
data Foo a = Bar a | Baz String
deriving instance Eq a => Eq (Foo [a])
deriving instance Eq a => Eq (Foo (Maybe a))
This will generate a derived instance for (Foo [a]) and (Foo (Maybe a)),
but other types such as (Foo (Int,Bool)) will not be an instance of Eq.
The stand-alone syntax is generalised for newtypes in exactly the same
way that ordinary deriving clauses are generalised ().
For example:
newtype Foo a = MkFoo (State Int a)
deriving instance MonadState Int Foo
GHC always treats the last parameter of the instance
(Foo in this example) as the type whose instance is being derived.
Deriving clause for classes Typeable and Data
Haskell 98 allows the programmer to add "deriving( Eq, Ord )" to a data type
declaration, to generate a standard instance declaration for classes specified in the deriving clause.
In Haskell 98, the only classes that may appear in the deriving clause are the standard
classes Eq, Ord,
Enum, Ix, Bounded, Read, and Show.
GHC extends this list with two more classes that may be automatically derived
(provided the -XDeriveDataTypeable flag is specified):
Typeable, and Data. These classes are defined in the library
modules Data.Typeable and Data.Generics respectively, and the
appropriate class must be in scope before it can be mentioned in the deriving clause.
An instance of Typeable can only be derived if the
data type has seven or fewer type parameters, all of kind *.
The reason for this is that the Typeable class is derived using the scheme
described in
Scrap More Boilerplate: Reflection, Zips, and Generalised Casts
.
(Section 7.4 of the paper describes the multiple Typeable classes that
are used, and only Typeable1 up to
Typeable7 are provided in the library.)
In other cases, there is nothing to stop the programmer writing a TypeableX
class, whose kind suits that of the data type constructor, and
then writing the data type instance by hand.
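For example (a sketch with our own Shape type), deriving Typeable just requires the flag and an import:
{-# LANGUAGE DeriveDataTypeable #-}
import Data.Typeable

-- Shape has one type parameter of kind *, so the derived
-- instance uses the Typeable1 class behind the scenes
data Shape a = Circle a | Square a
    deriving Typeable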
Generalised derived instances for newtypes
When you define an abstract type using newtype, you may want
the new type to inherit some instances from its representation. In
Haskell 98, you can inherit instances of Eq, Ord,
Enum and Bounded by deriving them, but for any
other classes you have to write an explicit instance declaration. For
example, if you define
newtype Dollars = Dollars Int
and you want to use arithmetic on Dollars, you have to
explicitly define an instance of Num:
instance Num Dollars where
    Dollars a + Dollars b = Dollars (a+b)
    ...
All the instance does is apply and remove the newtype
constructor. It is particularly galling that, since the constructor
doesn't appear at run-time, this instance declaration defines a
dictionary which is wholly equivalent to the Int
dictionary, only slower!
Generalising the deriving clause
GHC now permits such instances to be derived instead,
using the flag -XGeneralizedNewtypeDeriving,
so one can write
newtype Dollars = Dollars Int deriving (Eq,Show,Num)
and the implementation uses the same Num dictionary
for Dollars as for Int. Notionally, the compiler
derives an instance declaration of the form
instance Num Int => Num Dollars
which just adds or removes the newtype constructor according to the type.
We can also derive instances of constructor classes in a similar
way. For example, suppose we have implemented state and failure monad
transformers, such that
instance Monad m => Monad (State s m)
instance Monad m => Monad (Failure m)
In Haskell 98, we can define a parsing monad by
type Parser tok m a = State [tok] (Failure m) a
which is automatically a monad thanks to the instance declarations
above. With the extension, we can make the parser type abstract,
without needing to write an instance of class Monad, via
newtype Parser tok m a = Parser (State [tok] (Failure m) a)
    deriving Monad
In this case the derived instance declaration is of the form
instance Monad (State [tok] (Failure m)) => Monad (Parser tok m)
Notice that, since Monad is a constructor class, the
instance is a partial application of the new type, not the
entire left hand side. We can imagine that the type declaration is
"eta-converted" to generate the context of the instance
declaration.
We can even derive instances of multi-parameter classes, provided the
newtype is the last class parameter. In this case, a "partial
application" of the class appears in the deriving
clause. For example, given the class
class StateMonad s m | m -> s where ...
instance Monad m => StateMonad s (State s m) where ...
then we can derive an instance of StateMonad for Parsers by
newtype Parser tok m a = Parser (State [tok] (Failure m) a)
    deriving (Monad, StateMonad [tok])
The derived instance is obtained by completing the application of the
class to the new type:
instance StateMonad [tok] (State [tok] (Failure m)) =>
         StateMonad [tok] (Parser tok m)
As a result of this extension, all derived instances in newtype
declarations are treated uniformly (and implemented just by reusing
the dictionary for the representation type), except Show and Read, which really behave differently for
the newtype and its representation.
A more precise specification
Derived instance declarations are constructed as follows. Consider the
declaration (after expansion of any type synonyms)
newtype T v1...vn = T' (t vk+1...vn) deriving (c1...cm)
where
The ci are partial applications of
classes of the form C t1'...tj', where the arity of C
is exactly j+1. That is, C lacks exactly one type argument.
The k is chosen so that ci (T v1...vk) is well-kinded.
The type t is an arbitrary type.
The type variables vk+1...vn do not occur in t,
nor in the ci, and
None of the ci is Read, Show,
Typeable, or Data. These classes
should not "look through" the type or its constructor. You can still
derive these classes for a newtype, but it happens in the usual way, not
via this new mechanism.
Then, for each ci, the derived instance
declaration is:
instance ci t => ci (T v1...vk)
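A positive example that fits this schema (a sketch with our own Wrap type, under -XGeneralizedNewtypeDeriving) is:
newtype Wrap m a = Wrap (m a) deriving Monad
Here t is m, k is 1, and the eta-converted variable a occurs neither in t nor in the class, so GHC notionally derives instance Monad m => Monad (Wrap m).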
As an example which does not work, consider
newtype NonMonad m s = NonMonad (State s m s) deriving Monad
Here we cannot derive the instance
instance Monad (State s m) => Monad (NonMonad m)
because the type variable s occurs in State s m,
and so cannot be "eta-converted" away. It is a good thing that this
deriving clause is rejected, because NonMonad m is
not, in fact, a monad --- for the same reason. Try defining
>>= with the correct type: you won't be able to.
Notice also that the order of class parameters becomes
important, since we can only derive instances for the last one. If the
StateMonad class above were instead defined as
class StateMonad m s | m -> s where ...
then we would not have been able to derive an instance for the
Parser type above. We hypothesise that multi-parameter
classes usually have one "main" parameter for which deriving new
instances is most interesting.
Lastly, all of this applies only for classes other than
Read, Show, Typeable,
and Data, for which the built-in derivation applies (section
4.3.3. of the Haskell Report).
(For the standard classes Eq, Ord,
Ix, and Bounded it is immaterial whether
the standard method is used or the one described here.)
Class and instance declarations
Class declarations
This section, and the next one, documents GHC's type-class extensions.
There's lots of background in the paper Type
classes: exploring the design space (Simon Peyton Jones, Mark
Jones, Erik Meijer).
All the extensions are enabled by the -fglasgow-exts flag.
Multi-parameter type classes
Multi-parameter type classes are permitted. For example:
class Collection c a where
    union :: c a -> c a -> c a
    ...etc.
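An instance might then look like this (our own sketch; the bare type variable a in the instance head additionally needs -XFlexibleInstances):
instance Eq a => Collection [] a where
    union xs ys = xs ++ filter (`notElem` xs) ys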
The superclasses of a class declaration
There are no restrictions on the context in a class declaration
(which introduces superclasses), except that the class hierarchy must
be acyclic. So these class declarations are OK:
class Functor (m k) => FiniteMap m k where
    ...

class (Monad m, Monad (t m)) => Transform t m where
    lift :: m a -> (t m) a
As in Haskell 98, the class hierarchy must be acyclic. However, the definition
of "acyclic" involves only the superclass relationships. For example,
this is OK:
class C a where {
op :: D b => a -> b -> b
}
class C a => D a where { ... }
Here, C is a superclass of D, but it's OK for a
class operation op of C to mention D. (It
would not be OK for D to be a superclass of C.)
Class method types
Haskell 98 prohibits class method types to mention constraints on the
class type variable, thus:
class Seq s a where
    fromList :: [a] -> s a
    elem     :: Eq a => a -> s a -> Bool
The type of elem is illegal in Haskell 98, because it
contains the constraint Eq a, which constrains only the
class type variable (in this case a).
GHC lifts this restriction (flag -XConstrainedClassMethods).
Functional dependencies
Functional dependencies are implemented as described by Mark Jones
in “Type Classes with Functional Dependencies”,
Proceedings of the 9th European Symposium on Programming,
ESOP 2000, Berlin, Germany, March 2000, Springer-Verlag LNCS 1782.
Functional dependencies are introduced by a vertical bar in the syntax of a
class declaration; e.g.
class (Monad m) => MonadState s m | m -> s where ...
class Foo a b c | a b -> c where ...
There should be more documentation, but there isn't (yet). Yell if you need it.
Rules for functional dependencies
In a class declaration, all of the class type variables must be reachable (in the sense
mentioned in )
from the free variables of each method type.
For example:
class Coll s a where
    empty  :: s
    insert :: s -> a -> s
is not OK, because the type of empty doesn't mention
a. Functional dependencies can make the type variable
reachable:
class Coll s a | s -> a where
    empty  :: s
    insert :: s -> a -> s
Alternatively Coll might be rewritten
class Coll s a where
    empty  :: s a
    insert :: s a -> a -> s a
which makes the connection between the type of a collection of
a's (namely (s a)) and the element type a.
Occasionally this really doesn't work, in which case you can split the
class like this:
class CollE s where
    empty :: s

class CollE s => Coll s a where
    insert :: s -> a -> s
Background on functional dependencies
The following description of the motivation and use of functional dependencies is taken
from the Hugs user manual, reproduced here (with minor changes) by kind
permission of Mark Jones.
Consider the following class, intended as part of a
library for collection types:
class Collects e ce where
    empty  :: ce
    insert :: e -> ce -> ce
    member :: e -> ce -> Bool
The type variable e used here represents the element type, while ce is the type
of the container itself. Within this framework, we might want to define
instances of this class for lists or characteristic functions (both of which
can be used to represent collections of any equality type), bit sets (which can
be used to represent collections of characters), or hash tables (which can be
used to represent any collection whose elements have a hash function). Omitting
standard implementation details, this would lead to the following declarations:
instance Eq e => Collects e [e] where ...
instance Eq e => Collects e (e -> Bool) where ...
instance Collects Char BitSet where ...
instance (Hashable e, Collects e ce)
         => Collects e (Array Int ce) where ...
All this looks quite promising; we have a class and a range of interesting
implementations. Unfortunately, there are some serious problems with the class
declaration. First, the empty function has an ambiguous type:
empty :: Collects e ce => ce
By "ambiguous" we mean that there is a type variable e that appears on the left
of the => symbol, but not on the right. The problem with
this is that, according to the theoretical foundations of Haskell overloading,
we cannot guarantee a well-defined semantics for any term with an ambiguous
type.
We can sidestep this specific problem by removing the empty member from the
class declaration. However, although the remaining members, insert and member,
do not have ambiguous types, we still run into problems when we try to use
them. For example, consider the following two functions:
f x y = insert x . insert y
g = f True 'a'
for which GHC infers the following types:
f :: (Collects a c, Collects b c) => a -> b -> c -> c
g :: (Collects Bool c, Collects Char c) => c -> c
Notice that the type for f allows the two parameters x and y to be assigned
different types, even though it attempts to insert each of the two values, one
after the other, into the same collection. If we're trying to model collections
that contain only one type of value, then this is clearly an inaccurate
type. Worse still, the definition for g is accepted, without causing a type
error. As a result, the error in this code will not be flagged at the point
where it appears. Instead, it will show up only when we try to use g, which
might even be in a different module.
An attempt to use constructor classes
Faced with the problems described above, some Haskell programmers might be
tempted to use something like the following version of the class declaration:
class Collects e c where
    empty  :: c e
    insert :: e -> c e -> c e
    member :: e -> c e -> Bool
The key difference here is that we abstract over the type constructor c that is
used to form the collection type c e, and not over that collection type itself,
represented by ce in the original class declaration. This avoids the immediate
problems that we mentioned above: empty has type Collects e c => c
e, which is not ambiguous.
The function f from the previous section has a more accurate type:
f :: (Collects e c) => e -> e -> c e -> c e
The function g from the previous section is now rejected with a type error as
we would hope because the type of f does not allow the two arguments to have
different types.
This, then, is an example of a multiple parameter class that does actually work
quite well in practice, without ambiguity problems.
There is, however, a catch. This version of the Collects class is nowhere near
as general as the original class seemed to be: only one of the four instances
for Collects
given above can be used with this version of Collects because only one of
them---the instance for lists---has a collection type that can be written in
the form c e, for some type constructor c, and element type e.
Adding functional dependencies
To get a more useful version of the Collects class, Hugs provides a mechanism
that allows programmers to specify dependencies between the parameters of a
multiple parameter class (For readers with an interest in theoretical
foundations and previous work: The use of dependency information can be seen
both as a generalization of the proposal for `parametric type classes' that was
put forward by Chen, Hudak, and Odersky, or as a special case of Mark Jones's
later framework for "improvement" of qualified types. The
underlying ideas are also discussed in a more theoretical and abstract setting
in a manuscript [implparam], where they are identified as one point in a
general design space for systems of implicit parameterization.).
To start with an abstract example, consider a declaration such as:
class C a b where ...
which tells us simply that C can be thought of as a binary relation on types
(or type constructors, depending on the kinds of a and b). Extra clauses can be
included in the definition of classes to add information about dependencies
between parameters, as in the following examples:
class D a b | a -> b where ...
class E a b | a -> b, b -> a where ...
The notation a -> b used here between the | and where
symbols --- not to be
confused with a function type --- indicates that the a parameter uniquely
determines the b parameter, and might be read as "a determines b." Thus D is
not just a relation, but actually a (partial) function. Similarly, from the two
dependencies that are included in the definition of E, we can see that E
represents a (partial) one-one mapping between types.
More generally, dependencies take the form x1 ... xn -> y1 ... ym,
where x1, ..., xn, and y1, ..., ym are type variables with n>0 and
m>=0, meaning that the y parameters are uniquely determined by the x
parameters. Spaces can be used as separators if more than one variable appears
on any single side of a dependency, as in t -> a b. Note that a class may be
annotated with multiple dependencies using commas as separators, as in the
definition of E above. Some dependencies that we can write in this notation are
redundant, and will be rejected because they don't serve any useful
purpose, and may instead indicate an error in the program. Examples of
dependencies like this include a -> a,
a -> a a,
a ->, etc. There can also be
some redundancy if multiple dependencies are given, as in
a -> b, b -> c, a -> c,
in which some subset implies the remaining dependencies. Examples like this are
not treated as errors. Note that dependencies appear only in class
declarations, and not in any other part of the language. In particular, the
syntax for instance declarations, class constraints, and types is completely
unchanged.
By including dependencies in a class declaration, we provide a mechanism for
the programmer to specify each multiple parameter class more precisely. The
compiler, on the other hand, is responsible for ensuring that the set of
instances that are in scope at any given point in the program is consistent
with any declared dependencies. For example, the following pair of instance
declarations cannot appear together in the same scope because they violate the
dependency for D, even though either one on its own would be acceptable:
instance D Bool Int where ...
instance D Bool Char where ...
Note also that the following declaration is not allowed, even by itself:
instance D [a] b where ...
The problem here is that this instance would allow one particular choice of [a]
to be associated with more than one choice for b, which contradicts the
dependency specified in the definition of D. More generally, this means that,
in any instance of the form:
instance D t s where ...
for some particular types t and s, the only variables that can appear in s are
the ones that appear in t, and hence, if the type t is known, then s will be
uniquely determined.
The benefit of including dependency information is that it allows us to define
more general multiple parameter classes, without ambiguity problems, and with
the benefit of more accurate types. To illustrate this, we return to the
collection class example, and annotate the original definition of Collects
with a simple dependency:
class Collects e ce | ce -> e where
    empty  :: ce
    insert :: e -> ce -> ce
    member :: e -> ce -> Bool
The dependency ce -> e here specifies that the type e of elements is uniquely
determined by the type of the collection ce. Note that both parameters of
Collects are of kind *; there are no constructor classes here. Note too that
all of the instances of Collects that we gave earlier can be used
together with this new definition.
What about the ambiguity problems that we encountered with the original
definition? The empty function still has type Collects e ce => ce, but it is no
longer necessary to regard that as an ambiguous type: Although the variable e
does not appear on the right of the => symbol, the dependency for class
Collects tells us that it is uniquely determined by ce, which does appear on
the right of the => symbol. Hence the context in which empty is used can still
give enough information to determine types for both ce and e, without
ambiguity. More generally, we need only regard a type as ambiguous if it
contains a variable on the left of the => that is not uniquely determined
(either directly or indirectly) by the variables on the right.
Dependencies also help to produce more accurate types for user defined
functions, and hence to provide earlier detection of errors, and less cluttered
types for programmers to work with. Recall the previous definition for a
function f:
f x y = insert x . insert y
for which we originally obtained a type:
f :: (Collects a c, Collects b c) => a -> b -> c -> c
Given the dependency information that we have for Collects, however, we can
deduce that a and b must be equal because they both appear as the second
parameter in a Collects constraint with the same first parameter c. Hence we
can infer a shorter and more accurate type for f:
f :: (Collects a c) => a -> a -> c -> c
In a similar way, the earlier definition of g will now be flagged as a type error.
Although we have given only a few examples here, it should be clear that the
addition of dependency information can help to make multiple parameter classes
more useful in practice, avoiding ambiguity problems, and allowing more general
sets of instance declarations.
Instance declarations
Relaxed rules for instance declarations
An instance declaration has the form
instance (assertion1, ..., assertionn) => class type1 ... typem where ...
The part before the "=>" is the
context, while the part after the
"=>" is the head of the instance declaration.
In Haskell 98 the head of an instance declaration
must be of the form C (T a1 ... an), where
C is the class, T is a type constructor,
and the a1 ... an are distinct type variables.
Furthermore, the assertions in the context of the instance declaration
must be of the form C a where a
is a type variable that occurs in the head.
The flag loosens these restrictions
considerably. Firstly, multi-parameter type classes are permitted. Secondly,
the context and head of the instance declaration can each consist of arbitrary
(well-kinded) assertions (C t1 ... tn) subject only to the
following rules:
The Paterson Conditions: for each assertion in the context,
(a) no type variable has more occurrences in the assertion than in the head, and
(b) the assertion has fewer constructors and variables (taken together
and counting repetitions) than the head.
The Coverage Condition: for each functional dependency,
tvs_left -> tvs_right, of the class,
every type variable in S(tvs_right) must appear in S(tvs_left), where S is the
substitution mapping each type variable in the class declaration to the
corresponding type in the instance declaration.
These restrictions ensure that context reduction terminates: each reduction
step makes the problem smaller by at least one
constructor. Both the Paterson Conditions and the Coverage Condition are lifted
if you give the -XUndecidableInstances flag
(see "Undecidable instances" below).
You can find lots of background material about the reason for these
restrictions in the paper
Understanding functional dependencies via Constraint Handling Rules.
For example, these are OK:
instance C Int [a] -- Multiple parameters
instance Eq (S [a]) -- Structured type in head
-- Repeated type variable in head
instance C4 a a => C4 [a] [a]
instance Stateful (ST s) (MutVar s)
-- Head can consist of type variables only
instance C a
instance (Eq a, Show b) => C2 a b
-- Non-type variables in context
instance Show (s a) => Show (Sized s a)
instance C2 Int a => C3 Bool [a]
instance C2 Int a => C3 [a] b
But these are not:
-- Context assertion no smaller than head
instance C a => C a where ...
-- (C b b) has more occurrences of b than the head
instance C b b => Foo [b] where ...
The same restrictions apply to instances generated by
deriving clauses. Thus the following is accepted:
data MinHeap h a = H a (h a)
    deriving (Show)
because the derived instance
instance (Show a, Show (h a)) => Show (MinHeap h a)
conforms to the above rules.
A useful idiom permitted by the above rules is as follows.
If one allows overlapping instance declarations then it's quite
convenient to have a "default instance" declaration that applies if
something more specific does not:
instance C a where
    op = ... -- Default
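Spelled out with a concrete class of our own (a sketch; the catch-all instance needs -XFlexibleInstances, and the overlap needs -XOverlappingInstances):
class Describe a where
    describe :: a -> String

instance Describe a where                -- the catch-all default
    describe _ = "<no description>"

instance Describe Bool where             -- more specific, so it wins for Bool
    describe b = if b then "yes" else "no"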
Undecidable instances
Sometimes even the rules of are too onerous.
For example, sometimes you might want to use the following to get the
effect of a "class synonym":
class (C1 a, C2 a, C3 a) => C a where { }
instance (C1 a, C2 a, C3 a) => C a where { }
This allows you to write shorter signatures:
f :: C a => ...
instead of
f :: (C1 a, C2 a, C3 a) => ...
The restrictions on functional dependencies () are particularly troublesome.
It is tempting to introduce type variables in the context that do not appear in
the head, something that is excluded by the normal rules. For example:
class HasConverter a b | a -> b where
    convert :: a -> b

data Foo a = MkFoo a

instance (HasConverter a b, Show b) => Show (Foo a) where
    show (MkFoo value) = show (convert value)
This is dangerous territory, however. Here, for example, is a program that would make the
typechecker loop:
class D a
class F a b | a->b
instance F [a] [[a]]
instance (D c, F a c) => D [a] -- 'c' is not mentioned in the head
Similarly, it can be tempting to lift the coverage condition:
class Mul a b c | a b -> c where
    (.*.) :: a -> b -> c

instance Mul Int Int Int where (.*.) = (*)
instance Mul Int Float Float where x .*. y = fromIntegral x * y
instance Mul a b c => Mul a [b] [c] where x .*. v = map (x.*.) v
The third instance declaration does not obey the coverage condition;
and indeed the (somewhat strange) definition:
f = \ b x y -> if b then x .*. [y] else y
makes instance inference go into a loop, because it requires the constraint
(Mul a [b] b).
Nevertheless, GHC allows you to experiment with more liberal rules. If you use
the experimental flag
-XUndecidableInstances,
both the Paterson Conditions and the Coverage Condition
(described in ) are lifted. Termination is ensured by having a
fixed-depth recursion stack. If you exceed the stack depth you get a
sort of backtrace, and the opportunity to increase the stack depth
with -fcontext-stack=N.
Overlapping instances
In general, GHC requires that it be unambiguous which instance
declaration
should be used to resolve a type-class constraint. This behaviour
can be modified by two flags:
-XOverlappingInstances
and
-XIncoherentInstances
, as this section discusses. Both these
flags are dynamic flags, and can be set on a per-module basis, using
an OPTIONS_GHC pragma if desired ().
When GHC tries to resolve, say, the constraint C Int Bool,
it tries to match every instance declaration against the
constraint,
by instantiating the head of the instance declaration. For example, consider
these declarations:
instance context1 => C Int a where ... -- (A)
instance context2 => C a Bool where ... -- (B)
instance context3 => C Int [a] where ... -- (C)
instance context4 => C Int [Int] where ... -- (D)
The instances (A) and (B) match the constraint C Int Bool,
but (C) and (D) do not. When matching, GHC takes
no account of the context of the instance declaration
(context1 etc).
GHC's default behaviour is that exactly one instance must match the
constraint it is trying to resolve.
It is fine for there to be a potential of overlap (by
including both declarations (A) and (B), say); an error is only reported if a
particular constraint matches more than one.
The -XOverlappingInstances flag instructs GHC to allow
more than one instance to match, provided there is a most specific one. For
example, the constraint C Int [Int] matches instances (A),
(C) and (D), but the last is more specific, and hence is chosen. If there is no
most-specific match, the program is rejected.
However, GHC is conservative about committing to an overlapping instance. For example:
f :: [b] -> [b]
f x = ...
Suppose that from the RHS of f we get the constraint
C Int [b]. But
GHC does not commit to instance (C), because in a particular
call of f, b might be instantiated
to Int, in which case instance (D) would be more specific still.
So GHC rejects the program.
(If you add the flag -XIncoherentInstances,
GHC will instead pick (C), without complaining about
the problem of subsequent instantiations.)
Notice that we gave a type signature to f, so GHC had to
check that f has the specified type.
Suppose instead we do not give a type signature, asking GHC to infer
it instead. In this case, GHC will refrain from
simplifying the constraint C Int [b] (for the same reason
as before) but, rather than rejecting the program, it will infer the type
f :: C Int [b] => [b] -> [b]
That postpones the question of which instance to pick to the
call site for f
by which time more is known about the type b.
You can write this type signature yourself if you use the
-XFlexibleContexts flag.
Exactly the same situation can arise in instance declarations themselves. Suppose we have
class Foo a where
    f :: a -> a

instance Foo [b] where
    f x = ...
and, as before, the constraint C Int [b] arises from f's
right hand side. GHC will reject the instance, complaining as before that it does not know how to resolve
the constraint C Int [b], because it matches more than one instance
declaration. The solution is to postpone the choice by adding the constraint to the context
of the instance declaration, thus:
instance C Int [b] => Foo [b] where
    f x = ...
(You need to do this.)
The willingness to be overlapped or incoherent is a property of
the instance declaration itself, controlled by the
presence or otherwise of the -XOverlappingInstances
and -XIncoherentInstances flags when that module is
being defined. Neither flag is required in a module that imports and uses the
instance declaration. Specifically, during the lookup process:
An instance declaration is ignored during the lookup process if (a) a more specific
match is found, and (b) the instance declaration was compiled with
-XOverlappingInstances. The flag setting for the
more-specific instance does not matter.
Suppose an instance declaration does not match the constraint being looked up, but
does unify with it, so that it might match when the constraint is further
instantiated. Usually GHC will regard this as a reason for not committing to
some other constraint. But if the instance declaration was compiled with
-XIncoherentInstances, GHC will skip the "does-it-unify?"
check for that declaration.
These rules make it possible for a library author to design a library that relies on
overlapping instances without the library client having to know.
If an instance declaration is compiled without
-XOverlappingInstances,
then that instance can never be overlapped. This could perhaps be
inconvenient. Perhaps the rule should instead say that the
overlapping instance declaration should be compiled in
this way, rather than the overlapped one. Perhaps overlap
at a usage site should be permitted regardless of how the instance declarations
are compiled, if the -XOverlappingInstances flag is
used at the usage site. (Mind you, the exact usage site can occasionally be
hard to pin down.) We are interested to receive feedback on these points.
The -XIncoherentInstances flag implies the -XOverlappingInstances
flag, but not vice versa.
Type synonyms in the instance head
Unlike Haskell 98, instance heads may use type
synonyms. (The instance "head" is the bit after the "=>" in an instance decl.)
As always, using a type synonym is just shorthand for
writing the RHS of the type synonym definition. For example:
type Point = (Int,Int)
instance C Point where ...
instance C [Point] where ...
is legal. However, if you added
instance C (Int,Int) where ...
as well, then the compiler will complain about the overlapping
(actually, identical) instance declarations. As always, type synonyms
must be fully applied. You cannot, for example, write:
type P a = [[a]]
instance Monad P where ...
This design decision is independent of all the others, and easily
reversed, but it makes sense to me.
Overloaded string literals
GHC supports overloaded string literals. Normally a
string literal has type String, but with overloaded string
literals enabled (with -XOverloadedStrings)
a string literal has type (IsString a) => a.
This means that the usual string syntax can be used, e.g., for packed strings
and other variations of string like types. String literals behave very much
like integer literals, i.e., they can be used in both expressions and patterns.
If used in a pattern the literal will be replaced by an equality test, in the same
way as an integer literal is.
The class IsString is defined as:
class IsString a where
    fromString :: String -> a
The only predefined instance is the obvious one to make strings work as usual:
instance IsString [Char] where
    fromString cs = cs
The class IsString is not in scope by default. If you want to mention
it explicitly (for example, to give an instance declaration for it), you can import it
from module GHC.Exts.
Haskell's defaulting mechanism is extended to cover string literals, when -XOverloadedStrings is specified.
Specifically:
Each type in a default declaration must be an
instance of Num or of IsString.
The standard defaulting rule (Haskell Report, Section 4.3.4)
is extended thus: defaulting applies when all the unresolved constraints involve standard classes
or IsString; and at least one is a numeric class
or IsString.
A small example:
module Main where

import GHC.Exts( IsString(..) )

newtype MyString = MyString String deriving (Eq, Show)

instance IsString MyString where
    fromString = MyString

greet :: MyString -> MyString
greet "hello" = "world"
greet other   = other

main = do
    print $ greet "hello"
    print $ greet "fool"
Note that deriving Eq is necessary for the pattern matching
to work since it gets translated into an equality comparison.
Other type system extensions
Type signatures
The context of a type signature
The -XFlexibleContexts flag lifts the Haskell 98 restriction
that the type-class constraints in a type signature must have the
form (class type-variable) or
(class (type-variable type-variable ...)).
With -XFlexibleContexts these type signatures are perfectly OK:
g :: Eq [a] => ...
g :: Ord (T a ()) => ...
GHC imposes the following restrictions on the constraints in a type signature.
Consider the type:
forall tv1...tvn. (c1, ..., cn) => type
(Here, we write the "foralls" explicitly, although the Haskell source
language omits them; in Haskell 98, all the free type variables of an
explicit source-language type signature are universally quantified,
except for the class type variables in a class declaration. However,
in GHC, you can give the foralls if you want. See ).
Each universally quantified type variable
tvi must be reachable from type.
A type variable a is "reachable" if it appears
in the same constraint as either a type variable free in
type, or another reachable type variable.
A value with a type that does not obey
this reachability restriction cannot be used without introducing
ambiguity; that is why the type is rejected.
Here, for example, is an illegal type:
forall a. Eq a => Int
When a value with this type was used, the constraint Eq tv
would be introduced where tv is a fresh type variable, and
(in the dictionary-translation implementation) the value would be
applied to a dictionary for Eq tv. The difficulty is that we
can never know which instance of Eq to use because we never
get any more information about tv.
Note
that the reachability condition is weaker than saying that a is
functionally dependent on a type variable free in
type (see ). The reason for this is there
might be a "hidden" dependency, in a superclass perhaps. So
"reachable" is a conservative approximation to "functionally dependent".
For example, consider:
class C a b | a -> b where ...
class C a b => D a b where ...
f :: forall a b. D a b => a -> a
This is fine, because in fact a does functionally determine b
but that is not immediately apparent from f's type.
Every constraint ci must mention at least one of the
universally quantified type variables tvi.
For example, this type is OK because C a b mentions the
universally quantified type variable b:
forall a. C a b => burble
The next type is illegal because the constraint Eq b does not
mention a:
forall a. Eq b => burble
The reason for this restriction is milder than the other one. The
excluded types are never useful or necessary (because the offending
context doesn't need to be witnessed at this point; it can be floated
out). Furthermore, floating them out increases sharing. Lastly,
excluding them is a conservative choice; it leaves a patch of
territory free in case we need it later.
Implicit parameters
Implicit parameters are implemented as described in
"Implicit parameters: dynamic scoping with static types",
J Lewis, MB Shields, E Meijer, J Launchbury,
27th ACM Symposium on Principles of Programming Languages (POPL'00),
Boston, Jan 2000.
(Most of the following, still rather incomplete, documentation is
due to Jeff Lewis.)
Implicit parameter support is enabled with the option -XImplicitParams.
A variable is called dynamically bound when it is bound by the calling
context of a function and statically bound when bound by the callee's
context. In Haskell, all variables are statically bound. Dynamic
binding of variables is a notion that goes back to Lisp, but was later
discarded in more modern incarnations, such as Scheme. Dynamic binding
can be very confusing in an untyped language, and unfortunately, typed
languages, in particular Hindley-Milner typed languages like Haskell,
only support static scoping of variables.
However, by a simple extension to the type class system of Haskell, we
can support dynamic binding. Basically, we express the use of a
dynamically bound variable as a constraint on the type. These
constraints lead to types of the form (?x::t') => t, which says "this
function uses a dynamically-bound variable ?x
of type t'". For
example, the following expresses the type of a sort function,
implicitly parameterized by a comparison function named cmp.
sort :: (?cmp :: a -> a -> Bool) => [a] -> [a]
The dynamic binding constraints are just a new form of predicate in the type class system.
An implicit parameter occurs in an expression using the special form ?x,
where x is
any valid identifier (e.g. ord ?x is a valid expression).
Use of this construct also introduces a new
dynamic-binding constraint in the type of the expression.
For example, the following definition
shows how we can define an implicitly parameterized sort function in
terms of an explicitly parameterized sortBy function:
sortBy :: (a -> a -> Bool) -> [a] -> [a]
sort :: (?cmp :: a -> a -> Bool) => [a] -> [a]
sort = sortBy ?cmp
Implicit-parameter type constraints
Dynamic binding constraints behave just like other type class
constraints in that they are automatically propagated. Thus, when a
function is used, its implicit parameters are inherited by the
function that called it. For example, our sort function might be used
to pick out the least value in a list:
least :: (?cmp :: a -> a -> Bool) => [a] -> a
least xs = head (sort xs)
Without lifting a finger, the ?cmp parameter is
propagated to become a parameter of least as well. With explicit
parameters, the default is that parameters must always be explicitly
propagated. With implicit parameters, the default is to always
propagate them.
An implicit-parameter type constraint differs from other type class constraints in the
following way: All uses of a particular implicit parameter must have
the same type. This means that the type of (?x, ?x)
is (?x::a) => (a,a), and not
(?x::a, ?x::b) => (a, b), as would be the case for type
class constraints.
You can't have an implicit parameter in the context of a class or instance
declaration. For example, both these declarations are illegal:
class (?x::Int) => C a where ...
instance (?x::a) => Foo [a] where ...
Reason: exactly which implicit parameter you pick up depends on exactly where
you invoke a function. But the "invocation" of instance declarations is done
behind the scenes by the compiler, so it's hard to figure out exactly where it is done.
Easiest thing is to outlaw the offending types.
Implicit-parameter constraints do not cause ambiguity. For example, consider:
f :: (?x :: [a]) => Int -> Int
f n = n + length ?x
g :: (Read a, Show a) => String -> String
g s = show (read s)
Here, g has an ambiguous type, and is rejected, but f
is fine. The binding for ?x at f's call site is
quite unambiguous, and fixes the type a.
Implicit-parameter bindings
An implicit parameter is bound using the standard
let or where binding forms.
For example, we define the min function by binding
cmp.
min :: [a] -> a
min = let ?cmp = (<=) in least
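In the same way we can obtain a descending sort simply by binding cmp differently (our own example):
sortDescending :: Ord a => [a] -> [a]
sortDescending xs = let ?cmp = (>=) in sort xs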
A group of implicit-parameter bindings may occur anywhere a normal group of Haskell
bindings can occur, except at top level. That is, they can occur in a let
(including in a list comprehension, or do-notation, or pattern guards),
or a where clause.
Note the following points:
An implicit-parameter binding group must be a
collection of simple bindings to implicit-style variables (no
function-style bindings, and no type signatures); these bindings are
neither polymorphic nor recursive.
You may not mix implicit-parameter bindings with ordinary bindings in a
single let
expression; use two nested lets instead.
(In the case of where you are stuck, since you can't nest where clauses.)
You may put multiple implicit-parameter bindings in a
single binding group; but they are not treated
as a mutually recursive group (as ordinary let bindings are).
Instead they are treated as a non-recursive group, simultaneously binding all the implicit
parameters. The bindings are not nested, and may be re-ordered without changing
the meaning of the program.
For example, consider:
f t = let { ?x = t; ?y = ?x+(1::Int) } in ?x + ?y
The use of ?x in the binding for ?y does not "see"
the binding for ?x, so the type of f is
f :: (?x::Int) => Int -> Int
Implicit parameters and polymorphic recursion
Consider these two definitions:
len1 :: [a] -> Int
len1 xs = let ?acc = 0 in len_acc1 xs
len_acc1 [] = ?acc
len_acc1 (x:xs) = let ?acc = ?acc + (1::Int) in len_acc1 xs
------------
len2 :: [a] -> Int
len2 xs = let ?acc = 0 in len_acc2 xs
len_acc2 :: (?acc :: Int) => [a] -> Int
len_acc2 [] = ?acc
len_acc2 (x:xs) = let ?acc = ?acc + (1::Int) in len_acc2 xs
The only difference between the two groups is that in the second group
len_acc is given a type signature.
In the former case, len_acc1 is monomorphic in its own
right-hand side, so the implicit parameter ?acc is not
passed to the recursive call. In the latter case, because len_acc2
has a type signature, the recursive call is made to the
polymorphic version, which takes ?acc
as an implicit parameter. So we get the following results in GHCi:
Prog> len1 "hello"
0
Prog> len2 "hello"
5
Adding a type signature dramatically changes the result! This is a rather
counter-intuitive phenomenon, worth watching out for.
Implicit parameters and monomorphism
GHC applies the dreaded Monomorphism Restriction (section 4.5.5 of the
Haskell Report) to implicit parameters. For example, consider:
f :: Int -> Int
f v = let ?x = 0 in
      let y = ?x + v in
      let ?x = 5 in
      y
Since the binding for y falls under the Monomorphism
Restriction it is not generalised, so the type of y is
simply Int, not (?x::Int) => Int.
Hence, (f 9) returns 9.
If you add a type signature for y, then y
will get type (?x::Int) => Int, so the occurrence of
y in the body of the let will see the
inner binding of ?x, so (f 9) will return
14.
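Spelled out, the variant that the previous paragraph describes looks like this (our rendering of the same example):
f :: Int -> Int
f v = let ?x = 0 in
      let y :: (?x::Int) => Int
          y = ?x + v
      in
      let ?x = 5 in
      y              -- y now sees the inner ?x, so (f 9) returns 14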
Explicitly-kinded quantification
Haskell infers the kind of each type variable. Sometimes it is nice to be able
to give the kind explicitly as (machine-checked) documentation,
just as it is nice to give a type signature for a function. On some occasions,
it is essential to do so. For example, in his paper "Restricted Data Types in Haskell" (Haskell Workshop 1999)
John Hughes had to define the data type:
data Set cxt a = Set [a]
               | Unused (cxt a -> ())
The only use for the Unused constructor was to force the correct
kind for the type variable cxt.
GHC now instead allows you to specify the kind of a type variable directly, wherever
a type variable is explicitly bound, with the flag -XKindSignatures.
This flag enables kind signatures in the following places:
data declarations:
data Set (cxt :: * -> *) a = Set [a]
type declarations:
type T (f :: * -> *) = f Int
class declarations:
class (Eq a) => C (f :: * -> *) a where ...
forall's in type signatures:
f :: forall (cxt :: * -> *). Set cxt Int
The parentheses are required. Some of the spaces are required too, to
separate the lexemes. If you write (f::*->*) you
will get a parse error, because "::*->*" is a
single lexeme in Haskell.
As part of the same extension, you can put kind annotations in types
as well. Thus:
f :: (Int :: *) -> Int
g :: forall a. a -> (a :: *)
The syntax is
atype ::= '(' ctype '::' kind ')'
The parentheses are required.
Arbitrary-rank polymorphism
Haskell type signatures are implicitly quantified. The new keyword forall
allows us to say exactly what this means. For example:
g :: b -> b
means this:
g :: forall b. (b -> b)
The two are treated identically.
However, GHC's type system supports arbitrary-rank
explicit universal quantification in
types.
For example, all the following types are legal:
f1 :: forall a b. a -> b -> a
g1 :: forall a b. (Ord a, Eq b) => a -> b -> a
f2 :: (forall a. a->a) -> Int -> Int
g2 :: (forall a. Eq a => [a] -> a -> Bool) -> Int -> Int
f3 :: ((forall a. a->a) -> Int) -> Bool -> Bool
f4 :: Int -> (forall a. a -> a)
Here, f1 and g1 are rank-1 types, and
can be written in standard Haskell (e.g. f1 :: a->b->a).
The forall makes explicit the universal quantification that
is implicitly added by Haskell.
The functions f2 and g2 have rank-2 types;
the forall is on the left of a function arrow. As g2
shows, the polymorphic type on the left of the function arrow can be overloaded.
The function f3 has a rank-3 type;
it has rank-2 types on the left of a function arrow.
GHC has three flags to control higher-rank types:
-XPolymorphicComponents: data constructors (only) can have polymorphic argument types.
-XRank2Types: any function (including data constructors) can have a rank-2 type.
-XRankNTypes: any function (including data constructors) can have an arbitrary-rank type.
That is, you can nest foralls
arbitrarily deep in function arrows.
In particular, a forall-type (also called a "type scheme"),
including an operational type class context, is legal:
On the left or right (see f4, for example) of a function arrow
As the argument of a constructor, or type of a field, in a data type declaration. For
example, any of the f1, f2, f3, g1, g2 above would be valid
field type signatures.
As the type of an implicit parameter
In a pattern type signature (see )
Of course forall becomes a keyword; you can't use forall as
a type variable any more!
Examples
In a data or newtype declaration one can quantify
the types of the constructor arguments. Here are several examples:
data T a = T1 (forall b. b -> b -> b) a
data MonadT m = MkMonad { return :: forall a. a -> m a,
                          bind   :: forall a b. m a -> (a -> m b) -> m b
                        }
newtype Swizzle = MkSwizzle (Ord a => [a] -> [a])
The constructors have rank-2 types:
T1 :: forall a. (forall b. b -> b -> b) -> a -> T a
MkMonad :: forall m. (forall a. a -> m a)
-> (forall a b. m a -> (a -> m b) -> m b)
-> MonadT m
MkSwizzle :: (Ord a => [a] -> [a]) -> Swizzle
Notice that you don't need to use a forall if there's an
explicit context. For example in the first argument of the
constructor MkSwizzle, an implicit "forall a." is
prefixed to the argument type. The implicit forall
quantifies all type variables that are not already in scope, and are
mentioned in the type quantified over.
As for type signatures, implicit quantification happens for non-overloaded
types too. So if you write this:
data T a = MkT (Either a b) (b -> b)
it's just as if you had written this:
data T a = MkT (forall b. Either a b) (forall b. b -> b)
That is, since the type variable b isn't in scope, it's
implicitly universally quantified. (Arguably, it would be better
to require explicit quantification on constructor arguments
where that is what is wanted. Feedback welcomed.)
You construct values of types T, MonadT, and Swizzle by applying
the constructor to suitable values, just as usual. For example,
a1 :: T Int
a1 = T1 (\x y -> x) 3
a2, a3 :: Swizzle
a2 = MkSwizzle sort
a3 = MkSwizzle reverse
a4 :: MonadT Maybe
a4 = let r x = Just x
         b m k = case m of
                   Just y  -> k y
                   Nothing -> Nothing
     in
     MkMonad r b
mkTs :: (forall b. b -> b -> b) -> a -> a -> [T a]
mkTs f x y = [T1 f x, T1 f y]
The type of the argument can, as usual, be more general than the type
required, as (MkSwizzle reverse) shows. (reverse
does not need the Ord constraint.)
When you use pattern matching, the bound variables may now have
polymorphic types. For example:
f :: T a -> a -> (a, Char)
f (T1 w k) x = (w k x, w 'c' 'd')
g :: (Ord a, Ord b) => Swizzle -> [a] -> (a -> b) -> [b]
g (MkSwizzle s) xs f = s (map f (s xs))
h :: MonadT m -> [m a] -> m [a]
h m []     = return m []
h m (x:xs) = bind m x $ \y ->
             bind m (h m xs) $ \ys ->
             return m (y:ys)
In the function h we use the record selectors return
and bind to extract the polymorphic bind and return functions
from the MonadT data structure, rather than using pattern
matching.
Type inference
In general, type inference for arbitrary-rank types is undecidable.
GHC uses an algorithm proposed by Odersky and Laufer ("Putting type annotations to work", POPL'96)
to get a decidable algorithm by requiring some help from the programmer.
We do not yet have a formal specification of "some help" but the rule is this:
For a lambda-bound or case-bound variable, x, either the programmer
provides an explicit polymorphic type for x, or GHC's type inference will assume
that x's type has no foralls in it.
What does it mean to "provide" an explicit type for x? You can do that by
giving a type signature for x directly, using a pattern type signature
(), thus:
\ (f :: forall a. a->a) -> (f True, f 'c')
Alternatively, you can give a type signature to the enclosing
context, which GHC can "push down" to find the type for the variable:
(\ f -> (f True, f 'c')) :: (forall a. a->a) -> (Bool,Char)
Here the type signature on the expression can be pushed inwards
to give a type signature for f. Similarly, and more commonly,
one can give a type signature for the function itself:
h :: (forall a. a->a) -> (Bool,Char)
h f = (f True, f 'c')
You don't need to give a type signature if the lambda bound variable
is a constructor argument. Here is an example we saw earlier:
f :: T a -> a -> (a, Char)
f (T1 w k) x = (w k x, w 'c' 'd')
Here we do not need to give a type signature to w, because
it is an argument of constructor T1 and that tells GHC all
it needs to know.
Implicit quantification
GHC performs implicit quantification as follows. At the top level (only) of
user-written types, if and only if there is no explicit forall,
GHC finds all the type variables mentioned in the type that are not already
in scope, and universally quantifies them. For example, the following pairs are
equivalent:
f :: a -> a
f :: forall a. a -> a
g (x::a) = let
             h :: a -> b -> b
             h x y = y
           in ...

g (x::a) = let
             h :: forall b. a -> b -> b
             h x y = y
           in ...
Notice that GHC does not find the innermost possible quantification
point. For example:
f :: (a -> a) -> Int
-- MEANS
f :: forall a. (a -> a) -> Int
-- NOT
f :: (forall a. a -> a) -> Int
g :: (Ord a => a -> a) -> Int
-- MEANS the illegal type
g :: forall a. (Ord a => a -> a) -> Int
-- NOT
g :: (forall a. Ord a => a -> a) -> Int
The latter produces an illegal type, which you might think is silly,
but at least the rule is simple. If you want the latter type, you
can write your for-alls explicitly. Indeed, doing so is strongly advised
for rank-2 types.
Impredicative polymorphism
GHC supports impredicative polymorphism,
enabled with -XImpredicativeTypes.
This means
that you can call a polymorphic function at a polymorphic type, and
parameterise data structures over polymorphic types. For example:
f :: Maybe (forall a. [a] -> [a]) -> Maybe ([Int], [Char])
f (Just g) = Just (g [3], g "hello")
f Nothing = Nothing
Notice here that the Maybe type is parameterised by the
polymorphic type (forall a. [a] ->
[a]).
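A call site can then apply f to a genuinely polymorphic argument. A small sketch of our own (some GHC versions may need the annotation shown to guide impredicative instantiation):
test :: Maybe ([Int], [Char])
test = f (Just (reverse :: forall a. [a] -> [a]))
-- test evaluates to Just ([3], "olleh")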
The technical details of this extension are described in the paper
Boxy types:
type inference for higher-rank types and impredicativity,
which appeared at ICFP 2006.
Lexically scoped type variables
GHC supports lexically scoped type variables, without
which some type signatures are simply impossible to write. For example:
f :: forall a. [a] -> [a]
f xs = ys ++ ys
     where
       ys :: [a]
       ys = reverse xs
The type signature for f brings the type variable a into scope; it scopes over
the entire definition of f.
In particular, it is in scope at the type signature for ys.
In Haskell 98 it is not possible to declare
a type for ys; a major benefit of scoped type variables is that
it becomes possible to do so.
Lexically-scoped type variables are enabled by
-XScopedTypeVariables. This flag implies -XRelaxedPolyRec.
Note: GHC 6.6 contains substantial changes to the way that scoped type
variables work, compared to earlier releases. Read this section
carefully!
Overview
The design follows these principles:
A scoped type variable stands for a type variable, and not for
a type. (This is a change from GHC's earlier design.)
Furthermore, distinct lexical type variables stand for distinct
type variables. This means that every programmer-written type signature
(including one that contains free scoped type variables) denotes a
rigid type; that is, the type is fully known to the type
checker, and no inference is involved.
Lexical type variables may be alpha-renamed freely, without
changing the program.
A lexically scoped type variable can be bound by:
A declaration type signature ()
An expression type signature ()
A pattern type signature ()
Class and instance declarations ()
In Haskell, a programmer-written type signature is implicitly quantified over
its free type variables (Section
4.1.2
of the Haskell Report).
Lexically scoped type variables affect these implicit quantification rules
as follows: any type variable that is in scope is not universally
quantified. For example, if type variable a is in scope,
then
(e :: a -> a) means (e :: a -> a)
(e :: b -> b) means (e :: forall b. b->b)
(e :: a -> b) means (e :: forall b. a->b)
Declaration type signatures
A declaration type signature that has explicit
quantification (using forall) brings into scope the
explicitly-quantified
type variables, in the definition of the named function. For example:
f :: forall a. [a] -> [a]
f (x:xs) = xs ++ [ x :: a ]
The "forall a" brings "a" into scope in
the definition of "f".
This only happens if:
The quantification in f's type
signature is explicit. For example:
g :: [a] -> [a]
g (x:xs) = xs ++ [ x :: a ]
This program will be rejected, because "a" does not scope
over the definition of "g", so "x::a"
means "x::forall a. a" by Haskell's usual implicit
quantification rules.
The signature gives a type for a function binding or a bare variable binding,
not a pattern binding.
For example:
f1 :: forall a. [a] -> [a]
f1 (x:xs) = xs ++ [ x :: a ] -- OK
f2 :: forall a. [a] -> [a]
f2 = \(x:xs) -> xs ++ [ x :: a ] -- OK
f3 :: forall a. [a] -> [a]
Just f3 = Just (\(x:xs) -> xs ++ [ x :: a ]) -- Not OK!
The binding for f3 is a pattern binding, and so its type signature
does not bring a into scope. However f1 is a
function binding, and f2 binds a bare variable; in both cases
the type signature brings a into scope.
Expression type signatures
An expression type signature that has explicit
quantification (using forall) brings into scope the
explicitly-quantified
type variables, in the annotated expression. For example:
f = runST ( (op >>= \(x :: STRef s Int) -> g x) :: forall s. ST s Bool )
Here, the type signature forall s. ST s Bool brings the
type variable s into scope, in the annotated expression
(op >>= \(x :: STRef s Int) -> g x).
Pattern type signatures
A type signature may occur in any pattern; this is a pattern type
signature.
For example:
-- f and g assume that 'a' is already in scope
f = \(x::Int, y::a) -> x
g (x::a) = x
h ((x,y) :: (Int,Bool)) = (y,x)
In the case where all the type variables in the pattern type signature are
already in scope (i.e. bound by the enclosing context), matters are simple: the
signature simply constrains the type of the pattern in the obvious way.
Unlike expression and declaration type signatures, pattern type signatures are not implicitly generalised.
The pattern in a pattern binding may only mention type variables
that are already in scope. For example:
f :: forall a. [a] -> (Int, [a])
f xs = (n, zs)
where
(ys::[a], n) = (reverse xs, length xs) -- OK
zs::[a] = xs ++ ys -- OK
Just (v::b) = ... -- Not OK; b is not in scope
Here, the pattern signatures for ys and zs
are fine, but the one for v is not because b is
not in scope.
However, in all patterns other than pattern bindings, a pattern
type signature may mention a type variable that is not in scope; in this case,
the signature brings that type variable into scope.
This is particularly important for existential data constructors. For example:
data T = forall a. MkT [a]
k :: T -> T
k (MkT [t::a]) = MkT t3
where
t3::[a] = [t,t,t]
Here, the pattern type signature (t::a) mentions a lexical type
variable that is not already in scope. Indeed, it cannot already be in scope,
because it is bound by the pattern match. GHC's rule is that in this situation
(and only then), a pattern type signature can mention a type variable that is
not already in scope; the effect is to bring it into scope, standing for the
existentially-bound type variable.
When a pattern type signature binds a type variable in this way, GHC insists that the
type variable is bound to a rigid, or fully-known, type variable.
This means that any user-written type signature always stands for a completely known type.
If all this seems a little odd, we think so too. But we must have
some way to bring such type variables into scope, else we
could not name existentially-bound type variables in subsequent type signatures.
This is (now) the only situation in which a pattern type
signature is allowed to mention a lexical variable that is not already in
scope.
For example, both f and g would be
illegal if a were not already in scope.
Class and instance declarations
The type variables in the head of a class or instance declaration
scope over the methods defined in the where part. For example:
class C a where
op :: [a] -> a
op xs = let ys::[a]
ys = reverse xs
in
head ys
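The same applies to instance declarations. For example, here is a small
sketch of an instance for the class C above, written purely for
illustration:
instance C [b] where
  op xss = let yss :: [[b]]   -- 'b' is bound by the instance head
               yss = reverse xss
           in head yss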
Generalised typing of mutually recursive bindings
The Haskell Report specifies that a group of bindings (at top level, or in a
let or where) should be sorted into
strongly-connected components, and then type-checked in dependency order
(Haskell
Report, Section 4.5.1).
As each group is type-checked, any binders of the group that
have
an explicit type signature are put in the type environment with the specified
polymorphic type,
and all others are monomorphic until the group is generalised
(Haskell Report, Section 4.5.2).
Following a suggestion of Mark Jones, in his paper
Typing Haskell in
Haskell,
GHC implements a more general scheme. If -XRelaxedPolyRec is
specified:
the dependency analysis ignores references to variables that have an explicit
type signature.
As a result of this refined dependency analysis, the dependency groups are smaller, and more bindings will
typecheck. For example, consider:
f :: Eq a => a -> Bool
f x = (x == x) || g True || g "Yes"
g y = (y <= y) || f True
This is rejected by Haskell 98, but under Jones's scheme the definition for
g is typechecked first, separately from that for
f,
because the reference to f in g's right
hand side is ignored by the dependency analysis. Then g's
type is generalised, to get
g :: Ord a => a -> Bool
Now, the definition for f is typechecked, with this type for
g in the type environment.
The same refined dependency analysis also allows the type signatures of
mutually-recursive functions to have different contexts, something that is illegal in
Haskell 98 (Section 4.5.2, last sentence). With -XRelaxedPolyRec,
GHC insists only that the type signatures within a refined dependency group
be identical; in practice this means that only variables bound by the same
pattern binding must have the same context. For example, this is fine:
f :: Eq a => a -> Bool
f x = (x == x) || g True
g :: Ord a => a -> Bool
g y = (y <= y) || f True
Type families
GHC supports the definition of type families indexed by types. They may be
seen as an extension of Haskell 98's class-based overloading of values to
types. When type families are declared in classes, they are also known as
associated types.
There are two forms of type families: data families and type synonym families.
Currently, only the former are fully implemented, while we are still working
on the latter. As a result, the specification of the language extension is
also still to some degree in flux. Hence, a more detailed description of
the language extension and its use is currently available
from the Haskell
wiki page on type families. The material will be moved to this user's
guide when it has stabilised.
Type families are enabled by the flag -XTypeFamilies.
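By way of illustration, here is a minimal sketch of a data family and an
associated data family declared in a class; the names Array, Collection,
and Elem are illustrative only, not library types:
data family Array e
data instance Array Int = IntArray [Int]
data instance Array Bool = BoolArray [Bool]

class Collection c where
  data Elem c
  empty :: c
  insert :: Elem c -> c -> c

instance Collection [e] where
  data Elem [e] = ListElem e
  empty = []
  insert (ListElem x) xs = x : xs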
Template Haskell
Template Haskell allows you to do compile-time meta-programming in
Haskell.
The background to
the main technical innovations is discussed in "
Template Meta-programming for Haskell" (Proc Haskell Workshop 2002).
There is a Wiki page about
Template Haskell at
http://www.haskell.org/haskellwiki/Template_Haskell, and that is the best place to look for
further details.
You may also
consult the online
Haskell library reference material
(look for module Language.Haskell.TH).
Many changes to the original design are described in
Notes on Template Haskell version 2.
Not all of these changes are in GHC, however.
The first example from that paper is set out below ()
as a worked example to help get you started.
The documentation here describes the realisation of Template Haskell in GHC. It is not detailed enough to
understand Template Haskell; see the
Wiki page.
Syntax
Template Haskell has the following new syntactic
constructions. You need to use the flag -XTemplateHaskell
to switch these syntactic extensions on
(-XTemplateHaskell is no longer implied by
-fglasgow-exts).
A splice is written $x, where x is an
identifier, or $(...), where the "..." is an arbitrary expression.
There must be no space between the "$" and the identifier or parenthesis. This use
of "$" overrides its meaning as an infix operator, just as "M.x" overrides the meaning
of "." as an infix operator. If you want the infix operator, put spaces around it.
A splice can occur in place of:
an expression; the spliced expression must have type Q Exp
a list of top-level declarations; the spliced expression must have type Q [Dec]
Inside a splice you can only call functions defined in imported modules,
not functions defined elsewhere in the same module.
An expression quotation is written in Oxford brackets, thus:
[| ... |], where the "..." is an expression;
the quotation has type Q Exp.
[d| ... |], where the "..." is a list of top-level declarations;
the quotation has type Q [Dec].
[t| ... |], where the "..." is a type;
the quotation has type Q Type.
A quasi-quotation can appear in either a pattern context or an
expression context and is also written in Oxford brackets:
[$varid| ... |],
where the "..." is an arbitrary string; a full description of the
quasi-quotation facility is given in .
A name can be quoted with either one or two prefix single quotes:
'f has type Name, and names the function f.
Similarly 'C has type Name and names the data constructor C.
In general 'thing interprets thing in an expression context.
''T has type Name, and names the type constructor T.
That is, ''thing interprets thing in a type context.
These Names can be used to construct Template Haskell expressions, patterns, declarations etc. They
may also be given as an argument to the reify function.
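For instance, a small sketch (assuming -XTemplateHaskell and an
unqualified import of Language.Haskell.TH):
nameOfMap :: Name
nameOfMap = 'map        -- map, quoted in an expression context
nameOfMaybe :: Name
nameOfMaybe = ''Maybe   -- Maybe, quoted in a type context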
(Compared to the original paper, there are many differences of detail.
The syntax for a declaration splice uses "$" not "splice".
The type of the enclosed expression must be Q [Dec], not [Q Dec].
Type splices are not implemented, and neither are pattern splices or quotations.)
Using Template Haskell
The data types and monadic constructor functions for Template Haskell are in the library
Language.Haskell.TH.Syntax (most of which is re-exported by Language.Haskell.TH).
You can only run a function at compile time if it is imported from another module. That is,
you can't define a function in a module, and call it from within a splice in the same module.
(It would make sense to do so, but it's hard to implement.)
You can only run a function at compile time if it is imported
from another module that is not part of a mutually-recursive group of modules
that includes the module currently being compiled. Furthermore, all of the modules of
the mutually-recursive group must be reachable by non-SOURCE imports from the module where the
splice is to be run.
For example, when compiling module A,
you can only run Template Haskell functions imported from B if B does not import A (directly or indirectly).
The reason should be clear: to run B we must compile and run A, but we are currently type-checking A.
The flag -ddump-splices shows the expansion of all top-level splices as they happen.
If you are building GHC from source, you need at least a stage-2 bootstrap compiler to
run Template Haskell. A stage-1 compiler will reject the TH constructs. Reason: TH
compiles and runs a program, and then looks at the result. So it's important that
the program it compiles produces results whose representations are identical to
those of the compiler itself.
Template Haskell works in any mode (--make, --interactive,
or file-at-a-time). There used to be a restriction to the former two, but that restriction
has been lifted.
A Template Haskell Worked Example
To help you get over the confidence barrier, try out this skeletal worked example.
First cut and paste the two modules below into "Main.hs" and "Printf.hs":
{- Main.hs -}
module Main where
-- Import our template "pr"
import Printf ( pr )
-- The splice operator $ takes the Haskell source code
-- generated at compile time by "pr" and splices it into
-- the argument of "putStrLn".
main = putStrLn ( $(pr "Hello") )
{- Printf.hs -}
module Printf where
-- Skeletal printf from the paper.
-- It needs to be in a separate module to the one where
-- you intend to use it.
-- Import some Template Haskell syntax
import Language.Haskell.TH
-- Describe a format string
data Format = D | S | L String
-- Parse a format string. This is left largely to you
-- as we are here interested in building our first ever
-- Template Haskell program and not in building printf.
parse :: String -> [Format]
parse s = [ L s ]
-- Generate Haskell source code from a parsed representation
-- of the format string. This code will be spliced into
-- the module which calls "pr", at compile time.
gen :: [Format] -> Q Exp
gen [D] = [| \n -> show n |]
gen [S] = [| \s -> s |]
gen [L s] = stringE s
-- Here we generate the Haskell code for the splice
-- from an input format string.
pr :: String -> Q Exp
pr s = gen (parse s)
Now run the compiler (here we are at a Cygwin prompt on Windows):
$ ghc --make -XTemplateHaskell Main.hs -o main.exe
Run "main.exe" and here is your output:
$ ./main
Hello
Using Template Haskell with Profiling
Template Haskell relies on GHC's built-in bytecode compiler and
interpreter to run the splice expressions. The bytecode interpreter
runs the compiled expression on top of the same runtime on which GHC
itself is running; this means that the compiled code referred to by
the interpreted expression must be compatible with this runtime, and
in particular this means that object code that is compiled for
profiling cannot be loaded and used by a splice
expression, because profiled object code is only compatible with the
profiling version of the runtime.
This causes difficulties if you have a multi-module program
containing Template Haskell code and you need to compile it for
profiling, because GHC cannot load the profiled object code and use it
when executing the splices. Fortunately GHC provides a workaround.
The basic idea is to compile the program twice:
Compile the program or library first the normal way, without -prof.
Then compile it again with -prof, and
additionally use -osuf p_o
to name the object files differently (you can choose any suffix
that isn't the normal object suffix here). GHC will automatically
load the object files built in the first step when executing splice
expressions. If you omit the -osuf flag when
building with -prof and Template Haskell is used,
GHC will emit an error message.
Template Haskell Quasi-quotation
Quasi-quotation allows patterns and expressions to be written using
programmer-defined concrete syntax; the motivation behind the extension and
several examples are documented in
"Why It's
Nice to be Quoted: Quasiquoting for Haskell" (Proc Haskell Workshop
2007). The example below shows how to write a quasiquoter for a simple
expression language.
In the example, the quasiquoter expr is bound to a value of
type Language.Haskell.TH.Quote.QuasiQuoter which contains two
functions for quoting expressions and patterns, respectively. The first argument
to each quoter is the (arbitrary) string enclosed in the Oxford brackets. The
context of the quasi-quotation statement determines which of the two parsers is
called: if the quasi-quotation occurs in an expression context, the expression
parser is called, and if it occurs in a pattern context, the pattern parser is
called.
Note that in the example we make use of an antiquoted
variable n, indicated by the syntax 'int:n
(this syntax for anti-quotation was defined by the parser's
author, not by GHC). This binds n to the
integer value argument of the constructor IntExpr when
pattern matching. Please see the referenced paper for further details regarding
anti-quotation as well as the description of a technique that uses SYB to
leverage a single parser of type String -> a to generate both
an expression parser that returns a value of type Q Exp and a
pattern parser that returns a value of type Q Pat.
In general, a quasi-quote has the form
[$quoter| string |].
The quoter must be the name of an imported quoter; it
cannot be an arbitrary expression. The quoted string
can be arbitrary, and may contain newlines.
Quasiquoters must obey the same stage restrictions as Template Haskell, e.g., in
the example, expr cannot be defined
in Main.hs where it is used, but must be imported.
{- Main.hs -}
module Main where
import Expr
main :: IO ()
main = do { print $ eval [$expr|1 + 2|]
; case IntExpr 1 of
{ [$expr|'int:n|] -> print n
; _ -> return ()
}
}
{- Expr.hs -}
module Expr where
import qualified Language.Haskell.TH as TH
import Language.Haskell.TH.Quote
import Data.Generics   -- for the Typeable and Data classes used below
data Expr = IntExpr Integer
| AntiIntExpr String
| BinopExpr BinOp Expr Expr
| AntiExpr String
deriving(Show, Typeable, Data)
data BinOp = AddOp
| SubOp
| MulOp
| DivOp
deriving(Show, Typeable, Data)
eval :: Expr -> Integer
eval (IntExpr n) = n
eval (BinopExpr op x y) = (opToFun op) (eval x) (eval y)
where
opToFun AddOp = (+)
opToFun SubOp = (-)
opToFun MulOp = (*)
opToFun DivOp = div
expr = QuasiQuoter parseExprExp parseExprPat
-- Parse an Expr, returning its representation as
-- either a Q Exp or a Q Pat. See the referenced paper
-- for how to use SYB to do this by writing a single
-- parser of type String -> Expr instead of two
-- separate parsers.
parseExprExp :: String -> TH.Q TH.Exp
parseExprExp ...
parseExprPat :: String -> TH.Q TH.Pat
parseExprPat ...
Now run the compiler:
$ ghc --make -XQuasiQuotes Main.hs -o main
Run "main" and here is your output:
$ ./main
3
1
Arrow notation
Arrows are a generalization of monads introduced by John Hughes.
For more details, see
“Generalising Monads to Arrows”,
John Hughes, in Science of Computer Programming 37,
pp. 67–111, May 2000.
The paper that introduced arrows: a friendly introduction, motivated with
programming examples.
“A New Notation for Arrows”,
Ross Paterson, in ICFP, Sep 2001.
Introduced the notation described here.
“Arrows and Computation”,
Ross Paterson, in The Fun of Programming,
Palgrave, 2003.
“Programming with Arrows”,
John Hughes, in 5th International Summer School on
Advanced Functional Programming,
Lecture Notes in Computer Science vol. 3622,
Springer, 2004.
This paper includes another introduction to the notation,
with practical examples.
“Type and Translation Rules for Arrow Notation in GHC”,
Ross Paterson and Simon Peyton Jones, September 16, 2004.
A terse enumeration of the formal rules used
(extracted from comments in the source code).
The arrows web page at
http://www.haskell.org/arrows/.
With the -XArrows flag, GHC supports the arrow
notation described in the second of these papers,
translating it using combinators from the
Control.Arrow
module.
What follows is a brief introduction to the notation;
it won't make much sense unless you've read Hughes's paper.
The extension adds a new kind of expression for defining arrows:
exp10 ::= ...
| proc apat -> cmd
where proc is a new keyword.
The variables of the pattern are bound in the body of the
proc-expression,
which is a new sort of thing called a command.
The syntax of commands is as follows:
cmd ::= exp10 -< exp
| exp10 -<< exp
| cmd0
with cmd0 up to
cmd9 defined using
infix operators as for expressions, and
cmd10 ::= \ apat ... apat -> cmd
| let decls in cmd
| if exp then cmd else cmd
| case exp of { calts }
| do { cstmt ; ... cstmt ; cmd }
| fcmd
fcmd ::= fcmd aexp
| ( cmd )
| (| aexp cmd ... cmd |)
cstmt ::= let decls
| pat <- cmd
| rec { cstmt ; ... cstmt [;] }
| cmd
where calts are like alts
except that the bodies are commands instead of expressions.
Commands produce values, but (like monadic computations)
may yield more than one value,
or none, and may do other things as well.
For the most part, familiarity with monadic notation is a good guide to
using commands.
However the values of expressions, even monadic ones,
are determined by the values of the variables they contain;
this is not necessarily the case for commands.
A simple example of the new notation is the expression
proc x -> f -< x+1
We call this a procedure or
arrow abstraction.
As with a lambda expression, the variable x
is a new variable bound within the proc-expression.
It refers to the input to the arrow.
In the above example, -< is not an identifier but an
new reserved symbol used for building commands from an expression of arrow
type and an expression to be fed as input to that arrow.
(The weird look will make more sense later.)
It may be read as an analogue of application for arrows.
The above example is equivalent to the Haskell expression
arr (\ x -> x+1) >>> f
That would make no sense if the expression to the left of
-< involves the bound variable x.
More generally, the expression to the left of -<
may not involve any local variable,
i.e. a variable bound in the current arrow abstraction.
For such a situation there is a variant -<<, as in
proc x -> f x -<< x+1
which is equivalent to
arr (\ x -> (f x, x+1)) >>> app
so in this case the arrow must belong to the ArrowApply
class.
Such an arrow is equivalent to a monad, so if you're using this form
you may find a monadic formulation more convenient.
do-notation for commands
Another form of command is a form of do-notation.
For example, you can write
proc x -> do
y <- f -< x+1
g -< 2*y
let z = x+y
t <- h -< x*z
returnA -< t+z
You can read this much like ordinary do-notation,
but with commands in place of monadic expressions.
The first line sends the value of x+1 as an input to
the arrow f, and matches its output against
y.
In the next line, the output is discarded.
The arrow returnA is defined in the
Control.Arrow
module as arr id.
The above example is treated as an abbreviation for
arr (\ x -> (x, x)) >>>
first (arr (\ x -> x+1) >>> f) >>>
arr (\ (y, x) -> (y, (x, y))) >>>
first (arr (\ y -> 2*y) >>> g) >>>
arr snd >>>
arr (\ (x, y) -> let z = x+y in ((x, z), z)) >>>
first (arr (\ (x, z) -> x*z) >>> h) >>>
arr (\ (t, z) -> t+z) >>>
returnA
Note that variables not used later in the composition are projected out.
After simplification using rewrite rules (see )
defined in the
Control.Arrow
module, this reduces to
arr (\ x -> (x+1, x)) >>>
first f >>>
arr (\ (y, x) -> (2*y, (x, y))) >>>
first g >>>
arr (\ (_, (x, y)) -> let z = x+y in (x*z, z)) >>>
first h >>>
arr (\ (t, z) -> t+z)
which is what you might have written by hand.
With arrow notation, GHC keeps track of all those tuples of variables for you.
Note that although the above translation suggests that
let-bound variables like z must be
monomorphic, the actual translation produces Core,
so polymorphic variables are allowed.
It's also possible to have mutually recursive bindings,
using the new rec keyword, as in the following example:
counter :: ArrowCircuit a => a Bool Int
counter = proc reset -> do
rec output <- returnA -< if reset then 0 else next
next <- delay 0 -< output+1
returnA -< output
The translation of such forms uses the loop combinator,
so the arrow concerned must belong to the ArrowLoop class.
Conditional commands
In the previous example, we used a conditional expression to construct the
input for an arrow.
Sometimes we want to conditionally execute different commands, as in
proc (x,y) ->
if f x y
then g -< x+1
else h -< y+2
which is translated to
arr (\ (x,y) -> if f x y then Left x else Right y) >>>
(arr (\x -> x+1) >>> g) ||| (arr (\y -> y+2) >>> h)
Since the translation uses |||,
the arrow concerned must belong to the ArrowChoice class.
There are also case commands, like
case input of
[] -> f -< ()
[x] -> g -< x+1
x1:x2:xs -> do
y <- h -< (x1, x2)
ys <- k -< xs
returnA -< y:ys
The syntax is the same as for case expressions,
except that the bodies of the alternatives are commands rather than expressions.
The translation is similar to that of if commands.
Defining your own control structures
As we've seen, arrow notation provides constructs,
modelled on those for expressions,
for sequencing, value recursion and conditionals.
But suitable combinators,
which you can define in ordinary Haskell,
may also be used to build new commands out of existing ones.
The basic idea is that a command defines an arrow from environments to values.
These environments assign values to the free local variables of the command.
Thus combinators that produce arrows from arrows
may also be used to build commands from commands.
For example, the ArrowPlus class includes a combinator
ArrowPlus a => (<+>) :: a e c -> a e c -> a e c
so we can use it to build commands:
expr' = proc x -> do
returnA -< x
<+> do
symbol Plus -< ()
y <- term -< ()
expr' -< x + y
<+> do
symbol Minus -< ()
y <- term -< ()
expr' -< x - y
(The do on the first line is needed to prevent the first
<+> ... from being interpreted as part of the
expression on the previous line.)
This is equivalent to
expr' = (proc x -> returnA -< x)
<+> (proc x -> do
symbol Plus -< ()
y <- term -< ()
expr' -< x + y)
<+> (proc x -> do
symbol Minus -< ()
y <- term -< ()
expr' -< x - y)
It is essential that this operator be polymorphic in e
(representing the environment input to the command
and thence to its subcommands)
and satisfy the corresponding naturality property
arr k >>> (f <+> g) = (arr k >>> f) <+> (arr k >>> g)
at least for strict k.
(This should be automatic if you're not using seq.)
This ensures that environments seen by the subcommands are environments
of the whole command,
and also allows the translation to safely trim these environments.
The operator must also not use any variable defined within the current
arrow abstraction.
We could define our own operator
untilA :: ArrowChoice a => a e () -> a e Bool -> a e ()
untilA body cond = proc x -> do
b <- cond -< x
if b then returnA -< ()
else do
body -< x
untilA body cond -< x
and use it in the same way.
Of course this infix syntax only makes sense for binary operators;
there is also a more general syntax involving special brackets:
proc x -> do
y <- f -< x+1
(|untilA (increment -< x+y) (within 0.5 -< x)|)
Primitive constructs
Some operators will need to pass additional inputs to their subcommands.
For example, in an arrow type supporting exceptions,
the operator that attaches an exception handler will wish to pass the
exception that occurred to the handler.
Such an operator might have a type
handleA :: ... => a e c -> a (e,Ex) c -> a e c
where Ex is the type of exceptions handled.
You could then use this with arrow notation by writing a command
body `handleA` \ ex -> handler
so that if an exception is raised in the command body,
the variable ex is bound to the value of the exception
and the command handler,
which typically refers to ex, is entered.
Though the syntax here looks like a functional lambda,
we are talking about commands, and something different is going on.
The input to the arrow represented by a command consists of values for
the free local variables in the command, plus a stack of anonymous values.
In all the prior examples, this stack was empty.
In the second argument to handleA,
this stack consists of one value, the value of the exception.
The command form of lambda merely gives this value a name.
More concretely,
the values on the stack are paired to the right of the environment.
So operators like handleA that pass
extra inputs to their subcommands can be designed for use with the notation
by pairing the values with the environment in this way.
More precisely, the type of each argument of the operator (and its result)
should have the form
a (...(e,t1), ... tn) t
where e is a polymorphic variable
(representing the environment)
and ti are the types of the values on the stack,
with t1 being the top.
The polymorphic variable e must not occur in
a, ti or
t.
However the arrows involved need not be the same.
Here are some more examples of suitable operators:
bracketA :: ... => a e b -> a (e,b) c -> a (e,c) d -> a e d
runReader :: ... => a e c -> a' (e,State) c
runState :: ... => a e c -> a' (e,State) (c,State)
We can supply the extra input required by commands built with the last two
by applying them to ordinary expressions, as in
proc x -> do
s <- ...
(|runReader (do { ... })|) s
which adds s to the stack of inputs to the command
built using runReader.
The command versions of lambda abstraction and application are analogous to
the expression versions.
In particular, the beta and eta rules describe equivalences of commands.
These three features (operators, lambda abstraction and application)
are the core of the notation; everything else can be built using them,
though the results would be somewhat clumsy.
For example, we could simulate do-notation by defining
bind :: Arrow a => a e b -> a (e,b) c -> a e c
u `bind` f = returnA &&& u >>> f
bind_ :: Arrow a => a e b -> a e c -> a e c
u `bind_` f = u `bind` (arr fst >>> f)
We could simulate if by defining
cond :: ArrowChoice a => a e b -> a e b -> a (e,Bool) b
cond f g = arr (\ (e,b) -> if b then Left e else Right e) >>> f ||| g
Differences with the paper
Instead of a single form of arrow application (arrow tail) with two
translations, the implementation provides two forms
-< (first-order)
and -<< (higher-order).
User-defined operators are flagged with banana brackets instead of
a new form keyword.
Portability
Although only GHC implements arrow notation directly,
there is also a preprocessor
(available from the
arrows web page)
that translates arrow notation into Haskell 98
for use with other Haskell systems.
You would still want to check arrow programs with GHC;
tracing type errors in the preprocessor output is not easy.
Modules intended for both GHC and the preprocessor must observe some
additional restrictions:
The module must import
Control.Arrow.
The preprocessor cannot cope with other Haskell extensions.
These would have to go in separate modules.
Because the preprocessor targets Haskell (rather than Core),
let-bound variables are monomorphic.
Bang patterns
GHC supports an extension of pattern matching called bang
patterns. Bang patterns are under consideration for Haskell Prime.
The Haskell
prime feature description contains more discussion and examples
than the material below.
Bang patterns are enabled by the flag -XBangPatterns.
Informal description of bang patterns
The main idea is to add a single new production to the syntax of patterns:
pat ::= !pat
Matching an expression e against a pattern !p is done by first
evaluating e (to WHNF) and then matching the result against p.
Example:
f1 !x = True
This definition makes f1 strict in x,
whereas without the bang it would be lazy.
Bang patterns can be nested of course:
f2 (!x, y) = [x,y]
Here, f2 is strict in x but not in
y.
A bang only really has an effect if it precedes a variable or wild-card pattern:
f3 !(x,y) = [x,y]
f4 (x,y) = [x,y]
Here, f3 and f4 are identical; putting a bang before a pattern that
forces evaluation anyway does nothing.
Bang patterns work in case expressions too, of course:
g5 x = let y = f x in body
g6 x = case f x of { y -> body }
g7 x = case f x of { !y -> body }
The functions g5 and g6 mean exactly the same thing.
But g7 evaluates (f x), binds y to the
result, and then evaluates body.
Bang patterns work in let and where
definitions too. For example:
let ![x,y] = e in b
is a strict pattern: operationally, it evaluates e, matches
it against the pattern [x,y], and then evaluates b.
The "!" should not be regarded as part of the pattern; after all,
in a function argument ![x,y] means the
same as [x,y]. Rather, the "!"
is part of the syntax of let bindings.
Syntax and semantics
We add a single new production to the syntax of patterns:
pat ::= !pat
There is one problem with syntactic ambiguity. Consider:
f !x = 3
Is this a definition of the infix function "(!)",
or of the "f" with a bang pattern? GHC resolves this
ambiguity in favour of the latter. If you want to define
(!) with bang-patterns enabled, you have to do so using
prefix notation:
(!) f x = 3
The semantics of Haskell pattern matching is described in
Section 3.17.2 of the Haskell Report. To this description add
one extra item 10, saying:
Matching
the pattern !pat against a value v behaves as follows:
if v is bottom, the match diverges
otherwise, pat is matched against
v
Similarly, in Figure 4 of
Section 3.17.3, add a new case (t):
case v of { !pat -> e; _ -> e' }
= v `seq` case v of { pat -> e; _ -> e' }
That leaves let expressions, whose translation is given in
Section
3.12
of the Haskell Report.
In the translation box, first apply
the following transformation: for each pattern pi that is of
form !qi = ei, transform it to (xi,!qi) = ((),ei), and replace e0
by (xi `seq` e0). Then, when none of the left-hand-side patterns
have a bang at the top, apply the rules in the existing box.
The effect of the let rule is to force complete matching of the pattern
qi before evaluation of the body is begun. The bang is
retained in the translated form in case qi is a variable,
thus:
let !y = f x in b
The let-binding can be recursive. However, it is much more common for
the let-binding to be non-recursive, in which case the following law holds:
(let !p = rhs in body)
is equivalent to
(case rhs of !p -> body)
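For instance, a small sketch of the operational difference (assuming
-XBangPatterns):
ex1, ex2 :: Int
ex1 = let ![x] = ([] :: [Int]) in 0  -- match is forced: fails immediately
ex2 = let  [x] = ([] :: [Int]) in 0  -- lazy: returns 0, x is never demanded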
A pattern with a bang at the outermost level is not allowed at the top level of
a module.
Assertions
If you want to make use of assertions in your standard Haskell code, you
could define a function like the following:
assert :: Bool -> a -> a
assert False x = error "assertion failed!"
assert _ x = x
which works, but gives you back a less than useful error message --
an assertion failed, but which and where?
One way out is to define an extended assert function which also
takes a descriptive string to include in the error message and
perhaps combine this with the use of a pre-processor which inserts
the source location where assert was used.
GHC offers a helping hand here, doing all of this for you. For every
use of assert in the user's source:
kelvinToC :: Double -> Double
kelvinToC k = assert (k >= 0.0) (k+273.15)
GHC will rewrite this to also include the source location where the
assertion was made,
assert pred val ==> assertError "Main.hs|15" pred val
The rewrite is only performed by the compiler when it spots
applications of Control.Exception.assert, so you
can still define and use your own versions of
assert, should you so wish. If not, import
Control.Exception to make use of
assert in your code.
GHC ignores assertions when optimisation is turned on with the
-O flag. That is, expressions of the form
assert pred e will be rewritten to
e. You can also disable assertions using the
-fignore-asserts option.
Assertion failures can be caught, see the documentation for the
Control.Exception library for the details.
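For instance, a small sketch of catching an assertion failure, assuming
the extensible-exceptions API of Control.Exception (GHC 6.10 and later):
import Control.Exception

main :: IO ()
main = handle (\e -> putStrLn ("caught: " ++ show (e :: SomeException)))
              (print (assert False (42 :: Int)))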
Pragmas
GHC supports several pragmas, or instructions to the
compiler placed in the source code. Pragmas don't normally affect
the meaning of the program, but they might affect the efficiency
of the generated code.
Pragmas all take the form
{-# word ... #-}
where word indicates the type of
pragma, and is followed optionally by information specific to that
type of pragma. Case is ignored in
word. The various values for
word that GHC understands are described
in the following sections; any pragma encountered with an
unrecognised word is (silently)
ignored. The layout rule applies in pragmas, so the closing #-}
should start in a column to the right of the opening {-#. Certain pragmas are file-header pragmas. A file-header
pragma must precede the module keyword in the file.
There can be as many file-header pragmas as you please, and they can be
preceded or followed by comments.
LANGUAGE pragma
The LANGUAGE pragma allows language extensions to be enabled
in a portable way.
It is the intention that all Haskell compilers support the
LANGUAGE pragma with the same syntax, although not
all extensions are supported by all compilers, of
course. The LANGUAGE pragma should be used instead
of OPTIONS_GHC, if possible.
For example, to enable the FFI and preprocessing with CPP:
{-# LANGUAGE ForeignFunctionInterface, CPP #-}
LANGUAGE is a file-header pragma (see ).
Every language extension can also be turned into a command-line flag
by prefixing it with "-X"; for example -XForeignFunctionInterface.
(Similarly, all "-X" flags can be written as LANGUAGE pragmas.)
A list of all supported language extensions can be obtained by invoking
ghc --supported-languages (see ).
Any extension from the Extension type defined in
Language.Haskell.Extension
may be used. GHC will report an error if any of the requested extensions are not supported.
OPTIONS_GHC pragma
The OPTIONS_GHC pragma is used to specify
additional options that are given to the compiler when compiling
this source file. See for
details.
Previous versions of GHC accepted OPTIONS rather
than OPTIONS_GHC, but that is now deprecated.
OPTIONS_GHC is a file-header pragma (see ).
INCLUDE pragma
The INCLUDE pragma is for specifying the names
of C header files that should be #include'd into
the C source code generated by the compiler for the current module (if
compiling via C). For example:
{-# INCLUDE "foo.h" #-}
{-# INCLUDE <stdio.h> #-}
INCLUDE is a file-header pragma (see ).
An INCLUDE pragma is the preferred alternative
to the -#include option (), because the
INCLUDE pragma is understood by other
compilers. Yet another alternative is to add the include file to each
foreign import declaration in your code, but we
don't recommend using this approach with GHC.
WARNING and DEPRECATED pragmas
The WARNING pragma allows you to attach an arbitrary warning
to a particular function, class, or type.
A DEPRECATED pragma lets you specify that
a particular function, class, or type is deprecated.
There are two ways of using these pragmas.
You can work on an entire module thus:
module Wibble {-# DEPRECATED "Use Wobble instead" #-} where
...
Or:
module Wibble {-# WARNING "This is an unstable interface." #-} where
...
When you compile any module that imports
Wibble, GHC will print the specified
message.
You can attach a warning to a function, class, type, or data constructor, with the
following top-level declarations:
{-# DEPRECATED f, C, T "Don't use these" #-}
{-# WARNING unsafePerformIO "This is unsafe; I hope you know what you're doing" #-}
When you compile any module that imports and uses any
of the specified entities, GHC will print the specified
message. You can only attach to entities declared at top level in the module
being compiled, and you can only use unqualified names in the list of
entities. A capitalised name, such as T
refers to either the type constructor T or the data constructor T, or both if
both are in scope. If both are in scope, there is currently no way to
specify one without the other (c.f. fixities
).
Warnings and deprecations are not reported for
(a) uses within the defining module, and
(b) uses in an export list.
The latter reduces spurious complaints within a library
in which one module gathers together and re-exports
the exports of several others.
You can suppress the warnings with the flag
.
INLINE and NOINLINE pragmas
These pragmas control the inlining of function
definitions.
INLINE pragma
GHC (with -O, as always) tries to
inline (or “unfold”) functions/values that are
“small enough,” thus avoiding the call overhead
and possibly exposing other more-wonderful optimisations.
Normally, if GHC decides a function is “too
expensive” to inline, it will not do so, nor will it
export that unfolding for other modules to use.
The sledgehammer you can bring to bear is the
INLINE pragma, used thusly:
key_function :: Int -> String -> (Bool, Double)
{-# INLINE key_function #-}
The major effect of an INLINE pragma
is to declare a function's “cost” to be very low.
The normal unfolding machinery will then be very keen to
inline it. However, an INLINE pragma for a
function "f" has a number of other effects:
No functions are inlined into f. Otherwise
GHC might inline a big function into f's right hand side,
making f big; and then inline f blindly.
The float-in, float-out, and common-sub-expression transformations are not
applied to the body of f.
An INLINE function is not worker/wrappered by strictness analysis.
It's going to be inlined wholesale instead.
All of these effects are aimed at ensuring that what gets inlined is
exactly what you asked for, no more and no less.
GHC ensures that inlining cannot go on forever: every mutually-recursive
group is cut by one or more loop breakers that are never inlined
(see
Secrets of the GHC inliner, JFP 12(4) July 2002).
GHC tries not to select a function with an INLINE pragma as a loop breaker, but
when there is no choice even an INLINE function can be selected, in which case
the INLINE pragma is ignored.
For example, for a self-recursive function, the loop breaker can only be the function
itself, so an INLINE pragma is always ignored.
Syntactically, an INLINE pragma for a
function can be put anywhere its type signature could be
put.
INLINE pragmas are a particularly
good idea for the
then/return (or
bind/unit) functions in
a monad. For example, in GHC's own
UniqueSupply monad code, we have:
{-# INLINE thenUs #-}
{-# INLINE returnUs #-}
See also the NOINLINE pragma ().
Note: the HBC compiler doesn't like INLINE pragmas,
so if you want your code to be HBC-compatible you'll have to surround
the pragma with C pre-processor directives
#ifdef __GLASGOW_HASKELL__...#endif.
NOINLINE pragma
The NOINLINE pragma does exactly what
you'd expect: it stops the named function from being inlined
by the compiler. You shouldn't ever need to do this, unless
you're very cautious about code size.
NOTINLINE is a synonym for
NOINLINE (NOINLINE is
specified by Haskell 98 as the standard way to disable
inlining, so it should be used if you want your code to be
portable).
Phase control
Sometimes you want to control exactly when in GHC's
pipeline the INLINE pragma is switched on. Inlining happens
only during runs of the simplifier. Each
run of the simplifier has a different phase
number; the phase number decreases towards zero.
If you use -dverbose-core2core you'll see the
sequence of phase numbers for successive runs of the
simplifier. In an INLINE pragma you can optionally specify a
phase number, thus:
"INLINE[k] f" means: do not inline
f
until phase k, but from phase
k onwards be very keen to inline it.
"INLINE[~k] f" means: be very keen to inline
f
until phase k, but from phase
k onwards do not inline it.
"NOINLINE[k] f" means: do not inline
f
until phase k, but from phase
k onwards be willing to inline it (as if
there was no pragma).
"NOINLINE[~k] f" means: be willing to inline
f
until phase k, but from phase
k onwards do not inline it.
The same information is summarised here:
-- Before phase 2 Phase 2 and later
{-# INLINE [2] f #-} -- No Yes
{-# INLINE [~2] f #-} -- Yes No
{-# NOINLINE [2] f #-} -- No Maybe
{-# NOINLINE [~2] f #-} -- Maybe No
{-# INLINE f #-} -- Yes Yes
{-# NOINLINE f #-} -- No No
By "Maybe" we mean that the usual heuristic inlining rules apply (if the
function body is small, or it is applied to interesting-looking arguments etc).
Another way to understand the semantics is this:
For both INLINE and NOINLINE, the phase number says
when inlining is allowed at all.
The INLINE pragma has the additional effect of making the
function body look small, so that when inlining is allowed it is very likely to
happen.
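For instance, a small sketch (the function and rule are illustrative
only): delaying the inlining of foo until phase 1 gives the rule below
a chance to fire in the earlier phases.
foo :: Int -> Int
foo x = x + 1
{-# NOINLINE [1] foo #-}
{-# RULES "foo/foo" forall x. foo (foo x) = x + 2 #-}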
The same phase-numbering control is available for RULES
().
LINE pragma
This pragma is similar to C's #line
pragma, and is mainly for use in automatically generated Haskell
code. It lets you specify the line number and filename of the
original code; for example
{-# LINE 42 "Foo.vhs" #-}
if you'd generated the current file from something called
Foo.vhs and this line corresponds to line
42 in the original. GHC will adjust its error messages to refer
to the line/file named in the LINE
pragma.
RULES pragma
The RULES pragma lets you specify rewrite rules. It is
described in .
SPECIALIZE pragma
(UK spelling also accepted.) For key overloaded
functions, you can create extra versions (NB: more code space)
specialised to particular types. Thus, if you have an
overloaded function:
hammeredLookup :: Ord key => [(key, value)] -> key -> value
If it is heavily used on lists with
Widget keys, you could specialise it as
follows:
{-# SPECIALIZE hammeredLookup :: [(Widget, value)] -> Widget -> value #-}
A SPECIALIZE pragma for a function can
be put anywhere its type signature could be put.
A SPECIALIZE pragma has the effect of generating
(a) a specialised version of the function and (b) a rewrite rule
(see ) that rewrites a call to the
un-specialised function into a call to the specialised one.
The type in a SPECIALIZE pragma can be any type that is less
polymorphic than the type of the original function. In concrete terms,
if the original function is f then the pragma
{-# SPECIALIZE f :: <type> #-}
is valid if and only if the definition
f_spec :: <type>
f_spec = f
is valid. Here are some examples (where we only give the type signature
for the original function, not its code):
f :: Eq a => a -> b -> b
{-# SPECIALISE f :: Int -> b -> b #-}
g :: (Eq a, Ix b) => a -> b -> b
{-# SPECIALISE g :: (Eq a) => a -> Int -> Int #-}
h :: Eq a => a -> a -> a
{-# SPECIALISE h :: (Eq a) => [a] -> [a] -> [a] #-}
The last of these examples will generate a
RULE with a somewhat-complex left-hand side (try it yourself), so it might not fire very
well. If you use this kind of specialisation, let us know how well it works.
A SPECIALIZE pragma can optionally be followed with an
INLINE or NOINLINE pragma, optionally
followed by a phase, as described in .
The INLINE pragma affects the specialised version of the
function (only), and applies even if the function is recursive. The motivating
example is this:
-- A GADT for arrays with type-indexed representation
data Arr e where
ArrInt :: !Int -> ByteArray# -> Arr Int
ArrPair :: !Int -> Arr e1 -> Arr e2 -> Arr (e1, e2)
(!:) :: Arr e -> Int -> e
{-# SPECIALISE INLINE (!:) :: Arr Int -> Int -> Int #-}
{-# SPECIALISE INLINE (!:) :: Arr (a, b) -> Int -> (a, b) #-}
(ArrInt _ ba) !: (I# i) = I# (indexIntArray# ba i)
(ArrPair _ a1 a2) !: i = (a1 !: i, a2 !: i)
Here, (!:) is a recursive function that indexes arrays
of type Arr e. Consider a call to (!:)
at type (Int,Int). The second specialisation will fire, and
the specialised function will be inlined. It has two calls to
(!:),
both at type Int. Both these calls fire the first
specialisation, whose body is also inlined. The result is a type-based
unrolling of the indexing function.
Warning: you can make GHC diverge by using SPECIALISE INLINE
on an ordinarily-recursive function.
Note: In earlier versions of GHC, it was possible to provide your own
specialised function for a given type:
{-# SPECIALIZE hammeredLookup :: [(Int, value)] -> Int -> value = intLookup #-}
This feature has been removed, as it is now subsumed by the
RULES pragma (see ).
SPECIALIZE instance pragma
Same idea, except for instance declarations. For example:
instance (Eq a) => Eq (Foo a) where {
{-# SPECIALIZE instance Eq (Foo [(Int, Bar)]) #-}
... usual stuff ...
}
The pragma must occur inside the where part
of the instance declaration.
Compatible with HBC, by the way, except perhaps in the placement
of the pragma.
UNPACK pragma
The UNPACK pragma indicates to the compiler
that it should unpack the contents of a constructor field into
the constructor itself, removing a level of indirection. For
example:
data T = T {-# UNPACK #-} !Float
{-# UNPACK #-} !Float
will create a constructor T containing
two unboxed floats. This may not always be an optimisation: if
the T constructor is scrutinised and the
floats passed to a non-strict function for example, they will
have to be reboxed (this is done automatically by the
compiler).
Unpacking constructor fields should only be used in
conjunction with -O, in order to expose
unfoldings to the compiler so the reboxing can be removed as
often as possible. For example:
f :: T -> Float
f (T f1 f2) = f1 + f2
The compiler will avoid reboxing f1
and f2 by inlining +
on floats, but only when -O is on.
Any single-constructor data type is eligible for unpacking; for
example
data T = T {-# UNPACK #-} !(Int,Int)
will store the two Ints directly in the
T constructor, by flattening the pair.
Multi-level unpacking is also supported:
data T = T {-# UNPACK #-} !S
data S = S {-# UNPACK #-} !Int {-# UNPACK #-} !Int
will store two unboxed Int#s
directly in the T constructor. The
unpacker can see through newtypes, too.
If a field cannot be unpacked, you will not get a warning,
so it might be an idea to check the generated code with
-ddump-simpl.
See also the -funbox-strict-fields flag,
which essentially has the effect of adding
{-# UNPACK #-} to every strict
constructor field.
SOURCE pragma
The {-# SOURCE #-} pragma is used only in import declarations,
to break a module loop. It is described in detail in .
Rewrite rules
The programmer can specify rewrite rules as part of the source program
(in a pragma).
Here is an example:
{-# RULES
"map/map" forall f g xs. map f (map g xs) = map (f.g) xs
#-}
Use the debug flag -ddump-simpl-stats to see what rules fired.
If you need more information, then -ddump-rule-firings shows you
each individual rule firing in detail.
Syntax
From a syntactic point of view:
There may be zero or more rules in a RULES pragma, separated by semicolons (which
may be generated by the layout rule).
The layout rule applies in a pragma.
Currently no new indentation level
is set, so if you put several rules in a single RULES pragma and wish to use layout to separate them,
you must lay out each rule starting in the same column as the enclosing definitions.
{-# RULES
"map/map" forall f g xs. map f (map g xs) = map (f.g) xs
"map/append" forall f xs ys. map f (xs ++ ys) = map f xs ++ map f ys
#-}
Furthermore, the closing #-}
should start in a column to the right of the opening {-#.
Each rule has a name, enclosed in double quotes. The name itself has
no significance at all. It is only used when reporting how many times the rule fired.
A rule may optionally have a phase-control number (see ),
immediately after the name of the rule. Thus:
{-# RULES
"map/map" [2] forall f g xs. map f (map g xs) = map (f.g) xs
#-}
The "[2]" means that the rule is active in Phase 2 and subsequent phases. The inverse
notation "[~2]" is also accepted, meaning that the rule is active up to, but not including,
Phase 2.
Each variable mentioned in a rule must either be in scope (e.g. map),
or bound by the forall (e.g. f, g, xs). The variables bound by
the forall are called the pattern variables. They are separated
by spaces, just like in a type forall.
A pattern variable may optionally have a type signature.
If the type of the pattern variable is polymorphic, it must have a type signature.
For example, here is the foldr/build rule:
"fold/build" forall k z (g::forall b. (a->b->b) -> b -> b) .
foldr k z (build g) = g k z
Since g has a polymorphic type, it must have a type signature.
The left hand side of a rule must consist of a top-level variable applied
to arbitrary expressions. For example, this is not OK:
"wrong1" forall e1 e2. case True of { True -> e1; False -> e2 } = e1
"wrong2" forall f. f True = True
In "wrong1", the LHS is not an application; in "wrong2", the LHS has a pattern variable
in the head.
A rule does not need to be in the same module as (any of) the
variables it mentions, though of course they need to be in scope.
All rules are implicitly exported from the module, and are therefore
in force in any module that imports the module that defined the rule, directly
or indirectly. (That is, if A imports B, which imports C, then C's rules are
in force when compiling A.) The situation is very similar to that for instance
declarations.
Inside a RULE "forall" is treated as a keyword, regardless of
any other flag settings. Furthermore, inside a RULE, the language extension
-XScopedTypeVariables is automatically enabled; see
.
Like other pragmas, RULE pragmas are always checked for scope errors, and
are typechecked. Typechecking means that the LHS and RHS of a rule are typechecked,
and must have the same type. However, rules are only enabled
if the -fenable-rewrite-rules flag is
on (see ).
Semantics
From a semantic point of view:
Rules are enabled (that is, used during optimisation)
by the -fenable-rewrite-rules flag.
This flag is implied by -O, and may be switched
off (as usual) by -fno-enable-rewrite-rules.
(NB: enabling -fenable-rewrite-rules without -O
may not do what you expect, though, because without -O GHC
ignores all optimisation information in interface files;
see , .)
Note that -fenable-rewrite-rules is an optimisation flag, and
has no effect on parsing or typechecking.
Rules are regarded as left-to-right rewrite rules.
When GHC finds an expression that is a substitution instance of the LHS
of a rule, it replaces the expression by the (appropriately-substituted) RHS.
By "a substitution instance" we mean that the LHS can be made equal to the
expression by substituting for the pattern variables.
GHC makes absolutely no attempt to verify that the LHS and RHS
of a rule have the same meaning. That is undecidable in general, and
infeasible in most interesting cases. The responsibility is entirely the programmer's!
GHC makes no attempt to make sure that the rules are confluent or
terminating. For example:
"loop" forall x y. f x y = f y x
This rule will cause the compiler to go into an infinite loop.
If more than one rule matches a call, GHC will choose one arbitrarily to apply.
GHC currently uses a very simple, syntactic, matching algorithm
for matching a rule LHS with an expression. It seeks a substitution
which makes the LHS and expression syntactically equal modulo alpha
conversion. The pattern (rule), but not the expression, is eta-expanded if
necessary. (Eta-expanding the expression can lead to laziness bugs.)
But it does not use beta conversion (that's called higher-order matching).
Matching is carried out on GHC's intermediate language, which includes
type abstractions and applications. So a rule only matches if the
types match too. See below.
GHC keeps trying to apply the rules as it optimises the program.
For example, consider:
let s = map f
t = map g
in
s (t xs)
The expression s (t xs) does not match the rule "map/map", but GHC
will substitute for s and t, giving an expression which does match.
If s or t was (a) used more than once, and (b) large or a redex, then it would
not be substituted, and the rule would not fire.
Ordinary inlining happens at the same time as rule rewriting, which may lead to unexpected
results. Consider this (artificial) example
f x = x
{-# RULES "f" f True = False #-}
g y = f y
h z = g True
Since f's right-hand side is small, it is inlined into g,
to give
g y = y
Now g is inlined into h, but f's RULE has
no chance to fire.
If instead GHC had first inlined g into h then there
would have been a better chance that f's RULE might fire.
The way to get predictable behaviour is to use a NOINLINE
pragma on f, to ensure
that it is not inlined until its RULEs have had a chance to fire.
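For instance, a minimal sketch of that advice applied to the example above:
f x = x
{-# NOINLINE f #-}
{-# RULES "f" f True = False #-}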
List fusion
The RULES mechanism is used to implement fusion (deforestation) of common list functions.
If a "good consumer" consumes an intermediate list constructed by a "good producer", the
intermediate list should be eliminated entirely.
The following are good producers:
List comprehensions
Enumerations of Int and Char (e.g. ['a'..'z']).
Explicit lists (e.g. [True, False])
The cons constructor (e.g. 3:4:[])
++
map
take, filter
iterate, repeat
zip, zipWith
The following are good consumers:
List comprehensions
array (on its second argument)
++ (on its first argument)
foldr
map
take, filter
concat
unzip, unzip2, unzip3, unzip4
zip, zipWith (but on one argument only; if both are good producers, zip
will fuse with one but not the other)
partition
head
and, or, any, all
sequence_
msum
sortBy
So, for example, the following should generate no intermediate lists:
array (1,10) [(i,i*i) | i <- map (+ 1) [0..9]]
This list could readily be extended; if there are Prelude functions that you use
a lot which are not included, please tell us.
If you want to write your own good consumers or producers, look at the
Prelude definitions of the above functions to see how to do so.
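For instance, here is a sketch of a good producer written with build
(assuming build is exported by GHC.Exts, as in recent versions of base):
import GHC.Exts (build)

myReplicate :: Int -> a -> [a]
myReplicate n x = build (\cons nil ->
    let go i | i <= 0    = nil
             | otherwise = x `cons` go (i-1)
    in go n)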
Specialisation
Rewrite rules can be used to get the same effect as a feature
present in earlier versions of GHC.
For example, suppose that:
genericLookup :: Ord a => Table a b -> a -> b
intLookup :: Table Int b -> Int -> b
where intLookup is an implementation of
genericLookup that works very fast for
keys of type Int. You might wish
to tell GHC to use intLookup instead of
genericLookup whenever the latter was called with
type Table Int b -> Int -> b.
It used to be possible to write
{-# SPECIALIZE genericLookup :: Table Int b -> Int -> b = intLookup #-}
This feature is no longer in GHC, but rewrite rules let you do the same thing:
{-# RULES "genericLookup/Int" genericLookup = intLookup #-}
This slightly odd-looking rule instructs GHC to replace
genericLookup by intLookup whenever the types match.
What is more, this rule does not need to be in the same
file as genericLookup, unlike the
SPECIALIZE pragmas which currently do (so that they
have an original definition available to specialise).
It is Your Responsibility to make sure that
intLookup really behaves as a specialised version
of genericLookup!!!
An example in which using RULES for
specialisation will Win Big:
toDouble :: Real a => a -> Double
toDouble = fromRational . toRational
{-# RULES "toDouble/Int" toDouble = i2d #-}
i2d (I# i) = D# (int2Double# i) -- uses Glasgow prim-op directly
The i2d function is virtually one machine
instruction; the default conversion—via an intermediate
Rational—is obscenely expensive by
comparison.
Controlling what's going on
Use -ddump-rules to see what transformation rules GHC is using.
Use -ddump-simpl-stats to see what rules are being fired.
If you add -dppr-debug you get a more detailed listing.
The definition of (say) build in GHC/Base.lhs looks like this:
build :: forall a. (forall b. (a -> b -> b) -> b -> b) -> [a]
{-# INLINE build #-}
build g = g (:) []
Notice the INLINE! That prevents (:) from being inlined when compiling
GHC.Base, so that an importing module will “see” the (:), and can
match it on the LHS of a rule. INLINE prevents any inlining happening
in the RHS of the INLINE thing. I regret the delicacy of this.
In libraries/base/GHC/Base.lhs look at the rules for map to
see how to write rules that will do fusion and yet give an efficient
program even if fusion doesn't happen. More rules in GHC/List.lhs.
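For flavour, the fusion rules for map look roughly like this (paraphrased;
the library source is authoritative):
mapFB :: (elt -> lst -> lst) -> (a -> elt) -> a -> lst -> lst
mapFB c f = \x ys -> c (f x) ys

{-# RULES
"map"     [~1] forall f xs. map f xs = build (\c n -> foldr (mapFB c f) n xs)
"mapList" [1]  forall f.    foldr (mapFB (:) f) [] = map f
"mapFB"   forall c f g.     mapFB (mapFB c f) g = mapFB c (f . g)
  #-}
The "map" rule rewrites map into build/foldr form so that fusion can happen;
if it has not happened by phase 1, the "mapList" rule rewrites the unfused
form back into a plain map, recovering an efficient program either way.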
CORE pragma
The external core format supports Note annotations;
the CORE pragma gives a way to specify what these
should be in your Haskell source code. Syntactically, core
annotations are attached to expressions and take a Haskell string
literal as an argument. The following function definition shows an
example:
f x = ({-# CORE "foo" #-} show) ({-# CORE "bar" #-} x)
Semantically, this is equivalent to:
g x = show x
However, when external core is generated (via
-fext-core), there will be Notes attached to the
expressions show and x.
The core function declaration for f is:
f :: %forall a . GHCziShow.ZCTShow a ->
     a -> GHCziBase.ZMZN GHCziBase.Char =
  \ @ a (zddShow::GHCziShow.ZCTShow a) (eta::a) ->
  (%note "foo"
   %case zddShow %of (tpl::GHCziShow.ZCTShow a)
    {GHCziShow.ZCDShow
     (tpl1::GHCziBase.Int ->
            a ->
            GHCziBase.ZMZN GHCziBase.Char -> GHCziBase.ZMZN GHCziBase.Char)
     (tpl2::a -> GHCziBase.ZMZN GHCziBase.Char)
     (tpl3::GHCziBase.ZMZN a ->
            GHCziBase.ZMZN GHCziBase.Char -> GHCziBase.ZMZN GHCziBase.Char) ->
       tpl2})
  (%note "bar"
   eta);
Here, we can see that the function show (which
has been expanded out to a case expression over the Show dictionary)
has a %note attached to it, as does the
expression eta (which used to be called
x).
Special built-in functions
GHC has a few built-in functions with special behaviour. These
are now described in the module GHC.Prim
in the library documentation.
Generic classes
The ideas behind this extension are described in detail in "Derivable type classes",
Ralf Hinze and Simon Peyton Jones, Haskell Workshop, Montreal, September 2000, pp. 94-105.
An example will give the idea:
import Generics
class Bin a where
toBin :: a -> [Int]
fromBin :: [Int] -> (a, [Int])
toBin {| Unit |} Unit = []
toBin {| a :+: b |} (Inl x) = 0 : toBin x
toBin {| a :+: b |} (Inr y) = 1 : toBin y
toBin {| a :*: b |} (x :*: y) = toBin x ++ toBin y
fromBin {| Unit |} bs = (Unit, bs)
fromBin {| a :+: b |} (0:bs) = (Inl x, bs') where (x,bs') = fromBin bs
fromBin {| a :+: b |} (1:bs) = (Inr y, bs') where (y,bs') = fromBin bs
fromBin {| a :*: b |} bs = (x :*: y, bs'') where (x,bs' ) = fromBin bs
(y,bs'') = fromBin bs'
This class declaration explains how toBin and fromBin
work for arbitrary data types. They do so by giving cases for unit, product, and sum,
which are defined thus in the library module Generics:
data Unit = Unit
data a :+: b = Inl a | Inr b
data a :*: b = a :*: b
Now you can make a data type into an instance of Bin like this:
instance (Bin a, Bin b) => Bin (a,b)
instance Bin a => Bin [a]
That is, just leave off the "where" clause. Of course, you can put in the
where clause and override whichever methods you please.
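As an illustrative sketch (the Bool instance below is ours, not part of the
Generics library), one hand-written base instance lets the generic defaults
do the rest:
instance Bin Bool where            -- hand-written base case
  toBin b        = [if b then 1 else 0]
  fromBin (b:bs) = (b == 1, bs)

example :: [Int]
example = toBin (True, [False])    -- pair and list cases come from the generic defaults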
Using generics
To use generics you need to:
Use the flags -fglasgow-exts (to enable the extra syntax),
-fgenerics (to generate extra per-data-type code),
and -package lang (to make the Generics library available).
Import the module Generics from the
lang package. This import brings into
scope the data types Unit,
:*:, and :+:. (You
don't need this import if you don't mention these types
explicitly; for example, if you are simply giving instance
declarations.)
Changes wrt the paper
Note that the type constructors :+: and :*:
can be written infix (indeed, you can now use
any operator starting with a colon as an infix type constructor). Also note that
the type constructors are not exactly as in the paper (Unit instead of 1, etc.).
Finally, note that the syntax of the type patterns in the class declaration
uses "{|" and "|}" brackets; curly braces
alone would be ambiguous when they appear on right-hand sides (an extension we
anticipate wanting).
Terminology and restrictions
Terminology. A "generic default method" in a class declaration
is one that is defined using type patterns as above.
A "polymorphic default method" is a default method defined as in Haskell 98.
A "generic class declaration" is a class declaration with at least one
generic default method.
Restrictions:
Alas, we do not yet implement the stuff about constructor names and
field labels.
A generic class can have only one parameter; you can't have a generic
multi-parameter class.
A default method must be defined entirely using type patterns, or entirely
without. So this is illegal:
class Foo a where
op :: a -> (a, Bool)
op {| Unit |} Unit = (Unit, True)
op x = (x, False)
However it is perfectly OK for some methods of a generic class to have
generic default methods and others to have polymorphic default methods.
The type variable(s) in the type pattern for a generic method declaration
scope over the right-hand side. So this is legal (note the use of the type variable p in a type signature on the right-hand side):
class Foo a where
op :: a -> Bool
op {| p :*: q |} (x :*: y) = op (x :: p)
...
The type patterns in a generic default method must take one of the forms:
a :+: b
a :*: b
Unit
where "a" and "b" are type variables. Furthermore, all the type patterns for
a single type constructor (:*:, say) must be identical; they
must use the same type variables. So this is illegal:
class Foo a where
op :: a -> Bool
op {| a :+: b |} (Inl x) = True
op {| p :+: q |} (Inr y) = False
The type patterns must be identical, even in equations for different methods of the class.
So this too is illegal:
class Foo a where
op1 :: a -> Bool
op1 {| a :*: b |} (x :*: y) = True
op2 :: a -> Bool
op2 {| p :*: q |} (x :*: y) = False
(The reason for this restriction is that we gather all the equations for a particular type constructor
into a single generic instance declaration.)
A generic method declaration must give a case for each of the three type constructors.
The type for a generic method can be built only from:
Function arrows
Type variables
Tuples
Arbitrary types not involving type variables
Here are some example type signatures for generic methods:
op1 :: a -> Bool
op2 :: Bool -> (a,Bool)
op3 :: [Int] -> a -> a
op4 :: [a] -> Bool
Here, op1, op2, op3 are OK, but op4 is rejected, because it has a type variable
inside a list.
This restriction is an implementation restriction: we just haven't got around to
implementing the necessary bidirectional maps over arbitrary type constructors.
It would be relatively easy to add specific type constructors, such as Maybe and list,
to the ones that are allowed.
In an instance declaration for a generic class, the idea is that the compiler
will fill in the methods for you, based on the generic templates. However, it can only
do so if:
The instance type is simple (a type constructor applied to type variables, as in Haskell 98).
No constructor of the instance type has unboxed fields.
(Of course, these things can only arise if you are already using GHC extensions.)
However, you can still give an instance declaration for a type which breaks these rules,
provided you give explicit code to override any generic default methods.
The -ddump-deriv option dumps incomprehensible stuff giving details of
what the compiler does with generic declarations.
Another example
Just to finish with, here's another example I rather like:
class Tag a where
nCons :: a -> Int
nCons {| Unit |} _ = 1
nCons {| a :*: b |} _ = 1
nCons {| a :+: b |} _ = nCons (bot::a) + nCons (bot::b)
tag :: a -> Int
tag {| Unit |} _ = 1
tag {| a :*: b |} _ = 1
tag {| a :+: b |} (Inl x) = tag x
tag {| a :+: b |} (Inr y) = nCons (bot::a) + tag y
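The example assumes a helper bot, used purely as a typed placeholder; since
every equation matches its argument with _, it is never evaluated. A minimal
definition:
bot :: a
bot = undefined   -- only its type matters; it is never forced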
Control over monomorphism
GHC supports two flags that control the way in which generalisation is
carried out at let and where bindings.
Switching off the dreaded Monomorphism Restriction
Haskell's monomorphism restriction (see
Section 4.5.5
of the Haskell Report)
can be completely switched off by
-XNoMonomorphismRestriction.
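A standard illustration (len below is a simple pattern binding, so the
restriction bites):
import Data.List (genericLength)

-- With the monomorphism restriction on, this binding is not
-- generalised; defaulting fixes the result type, giving
--   len :: [b] -> Integer
-- With the restriction switched off it keeps the general type
--   len :: Num i => [b] -> i
len = genericLength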
Monomorphic pattern bindings
As an experimental change, we are exploring the possibility of
making pattern bindings monomorphic; that is, not generalised at all.
A pattern binding is a binding whose LHS has no function arguments,
and is not a simple variable. For example:
f x = x -- Not a pattern binding
f = \x -> x -- Not a pattern binding
f :: Int -> Int = \x -> x -- Not a pattern binding
(g,h) = e -- A pattern binding
(f) = e -- A pattern binding
[x] = e -- A pattern binding
Experimentally, GHC now makes pattern bindings monomorphic by
default. Use -XNoMonoPatBinds to recover the
standard behaviour.
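A sketch of the consequence: with monomorphic pattern bindings, n below gets
one specific type for the whole program rather than a polymorphic type scheme:
(n, s) = (fromInteger 3, "three")   -- a pattern binding: not generalised

i :: Int
i = n        -- this use fixes n's type to Int ...

-- d :: Double
-- d = n     -- ... so a second use at Double would be rejected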