<?xml version="1.0" encoding="iso-8859-1"?>
<sect1 id="lang-parallel">
  <title>Concurrent and Parallel Haskell</title>
  <indexterm><primary>parallelism</primary>
  </indexterm>

  <para>GHC implements some major extensions to Haskell to support
  concurrent and parallel programming.  Let us first establish terminology:
  <itemizedlist>
	<listitem><para><emphasis>Parallelism</emphasis> means running
	  a Haskell program on multiple processors, with the goal of improving
	  performance.  Ideally, this should be done invisibly, and with no
	  semantic changes.
	    </para></listitem>
	<listitem><para><emphasis>Concurrency</emphasis> means implementing
	  a program by using multiple I/O-performing threads.  While a
	  concurrent Haskell program <emphasis>can</emphasis> run on a
	  parallel machine, the primary goal of using concurrency is not to gain
	  performance, but rather because that is the simplest and most
	  direct way to write the program.  Since the threads perform I/O,
	  the semantics of the program is necessarily non-deterministic.
	    </para></listitem>
  </itemizedlist>
  GHC supports both concurrency and parallelism.
  </para>

  <sect2 id="concurrent-haskell">
    <title>Concurrent Haskell</title>

  <para>Concurrent Haskell is the name given to GHC's concurrency extension.
  It is enabled by default, so no special flags are required.
   The <ulink
	      url="https://www.haskell.org/ghc/docs/papers/concurrent-haskell.ps.gz">
	      Concurrent Haskell paper</ulink> is still an excellent
	      resource, as is <ulink
	      url="http://research.microsoft.com/%7Esimonpj/papers/marktoberdorf/">Tackling
	      the awkward squad</ulink>.
  </para><para>
  To the programmer, Concurrent Haskell introduces no new language constructs;
  rather, it appears simply as a library, <ulink
  url="&libraryBaseLocation;/Control-Concurrent.html">
	      Control.Concurrent</ulink>.  The functions exported by this
	      library include:
  <itemizedlist>
<listitem><para>Forking and killing threads.</para></listitem>
<listitem><para>Sleeping.</para></listitem>
<listitem><para>Synchronised mutable variables, called <literal>MVars</literal>.</para></listitem>
<listitem><para>Support for bound threads; see the paper <ulink
url="http://research.microsoft.com/%7Esimonpj/Papers/conc-ffi/index.htm">Extending
the FFI with concurrency</ulink>.</para></listitem>
</itemizedlist>
</para>
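<para>
  As a small illustrative sketch (not taken from the papers above), the
  following program forks a lightweight thread with
  <literal>forkIO</literal>, sleeps in that thread with
  <literal>threadDelay</literal>, and hands a result back to the main
  thread through an <literal>MVar</literal>:
</para>

<programlisting>
import Control.Concurrent

main :: IO ()
main = do
  result &#60;- newEmptyMVar         -- synchronised mutable variable
  _ &#60;- forkIO $ do               -- fork a lightweight Haskell thread
         threadDelay 1000000     -- sleep for one second
         putMVar result "done"   -- pass a value back to the main thread
  msg &#60;- takeMVar result         -- block until the worker has finished
  putStrLn msg</programlisting>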
</sect2>

   <sect2><title>Software Transactional Memory</title>

    <para>GHC now supports a new way to coordinate the activities of Concurrent
    Haskell threads, called Software Transactional Memory (STM).  The
    <ulink
    url="http://research.microsoft.com/%7Esimonpj/papers/stm/index.htm">STM
    papers</ulink> are an excellent introduction to what STM is, and how to use
    it.</para>

   <para>The main library you need to use is the <ulink
  url="http://hackage.haskell.org/package/stm">
	      stm library</ulink>. The main features supported are these:
<itemizedlist>
<listitem><para>Atomic blocks.</para></listitem>
<listitem><para>Transactional variables.</para></listitem>
<listitem><para>Operations for composing transactions:
<literal>retry</literal>, and <literal>orElse</literal>.</para></listitem>
<listitem><para>Data invariants.</para></listitem>
</itemizedlist>
All these features are described in the papers mentioned earlier.
</para>
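<para>
  As a minimal sketch using the stm library, the transaction below reads
  and writes two transactional variables and uses <literal>retry</literal>
  to block until the source variable holds enough funds; the whole
  transfer is performed as one atomic block by
  <literal>atomically</literal>:
</para>

<programlisting>
import Control.Concurrent.STM

-- Move 'amount' from one transactional variable to another, blocking
-- (via retry) until the source holds enough.
transfer :: TVar Int -&#62; TVar Int -&#62; Int -&#62; STM ()
transfer from to amount = do
  balance &#60;- readTVar from
  if balance &#60; amount
     then retry                       -- abort; re-run when 'from' changes
     else do writeTVar from (balance - amount)
             toBalance &#60;- readTVar to
             writeTVar to (toBalance + amount)

main :: IO ()
main = do
  a &#60;- newTVarIO 100
  b &#60;- newTVarIO 0
  atomically (transfer a b 40)        -- the whole transfer runs atomically
  atomically (readTVar b) &#62;&#62;= print   -- prints 40</programlisting>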
</sect2>

<sect2><title>Parallel Haskell</title>

  <para>GHC includes support for running Haskell programs in parallel
  on symmetric, shared-memory multiprocessors
      (SMP)<indexterm><primary>SMP</primary></indexterm>.
  By default GHC runs your program on one processor; if you
     want it to run in parallel you must link your program
      with the <option>-threaded</option> option, and run it with the
      <option>-N</option> RTS option; see <xref linkend="using-smp" />.
      The runtime will
      schedule the running Haskell threads among the available OS
      threads, running as many in parallel as you specified with the
      <option>-N</option> RTS option.</para>
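  <para>For example, assuming a program whose main module lives in
      <filename>Main.hs</filename> (the file name here is only a
      placeholder), it could be compiled with the threaded runtime and
      run on two processors like this:</para>

<screen>
$ ghc -threaded --make Main.hs
$ ./Main +RTS -N2</screen>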

  <para>GHC only supports parallelism on a shared-memory multiprocessor.
    Glasgow Parallel Haskell<indexterm><primary>Glasgow Parallel Haskell</primary></indexterm>
    (GPH) supports running Parallel Haskell
    programs both on clusters of machines and on single multiprocessors.  GPH is
    developed and distributed
    separately from GHC (see <ulink url="http://www.macs.hw.ac.uk/~dsg/gph/">The
      GPH Page</ulink>).  However, the current version of GPH is based on a much older
    version of GHC (4.06).</para>

  </sect2>
  <sect2>
    <title>Annotating pure code for parallelism</title>

  <para>Ordinary single-threaded Haskell programs will not benefit from
    enabling SMP parallelism alone: you must expose parallelism to the
    compiler.

    One way to do so is to fork threads using Concurrent Haskell (<xref
    linkend="concurrent-haskell"/>), but the simplest mechanism for
    extracting parallelism from pure code is to use the
    <literal>par</literal> combinator, which is closely related to (and
    often used with) <literal>seq</literal>.  Both of these are available
    from the <ulink
      url="http://hackage.haskell.org/package/parallel">parallel library</ulink>:</para>

<programlisting>
infixr 0 `par`
infixr 1 `pseq`

par  :: a -&#62; b -&#62; b
pseq :: a -&#62; b -&#62; b</programlisting>

    <para>The expression <literal>(x `par` y)</literal>
      <emphasis>sparks</emphasis> the evaluation of <literal>x</literal>
      (to weak head normal form) and returns <literal>y</literal>.  Sparks are
      queued for execution in FIFO order, but are not executed immediately.  If
      the runtime detects that there is an idle CPU, then it may convert a
      spark into a real thread, and run the new thread on the idle CPU.  In
      this way the available parallelism is spread amongst the real
      CPUs.</para>

    <para>For example, consider the following parallel version of our old
      nemesis, <function>nfib</function>:</para>

<programlisting>
import Control.Parallel

nfib :: Int -&#62; Int
nfib n | n &#60;= 1 = 1
       | otherwise = par n1 (pseq n2 (n1 + n2 + 1))
                     where n1 = nfib (n-1)
                           n2 = nfib (n-2)</programlisting>

    <para>For values of <varname>n</varname> greater than 1, we use
      <function>par</function> to spark a thread to evaluate <literal>nfib (n-1)</literal>,
      and then we use <function>pseq</function> to force the
      parent thread to evaluate <literal>nfib (n-2)</literal> before going on
      to add together these two subexpressions.  In this divide-and-conquer
      approach, we only spark a new thread for one branch of the computation
      (leaving the parent to evaluate the other branch).  Also, we must use
      <function>pseq</function> to ensure that the parent will evaluate
      <varname>n2</varname> <emphasis>before</emphasis> <varname>n1</varname>
      in the expression <literal>(n1 + n2 + 1)</literal>.  It is not sufficient
      to reorder the expression as <literal>(n2 + n1 + 1)</literal>, because
      the compiler may not generate code to evaluate the addends from left to
      right.</para>

    <para>
      Note that we use <literal>pseq</literal> rather
      than <literal>seq</literal>.  The two are almost equivalent, but
      differ in their runtime behaviour in a subtle
      way: <literal>seq</literal> can evaluate its arguments in either
      order, but <literal>pseq</literal> is required to evaluate its
      first argument before its second, which makes it more suitable
      for controlling the evaluation order in conjunction
      with <literal>par</literal>.
    </para>

    <para>When using <literal>par</literal>, the general rule of thumb is that
      the sparked computation should be required at a later time, but not too
      soon.  Also, the sparked computation should not be too small, otherwise
      the cost of forking it in parallel will be too large relative to the
      amount of parallelism gained.  Getting these factors right is tricky in
      practice.</para>

    <para>It is possible to glean a little information about how
      well <literal>par</literal> is working from the runtime
      statistics; see <xref linkend="rts-options-gc" />.</para>

    <para>More sophisticated combinators for expressing parallelism are
      available from the <literal>Control.Parallel.Strategies</literal>
      module in the <ulink
	url="http://hackage.haskell.org/package/parallel">parallel package</ulink>.
      This module builds functionality around <literal>par</literal>,
      expressing more elaborate patterns of parallel computation, such as
      parallel <literal>map</literal>.</para>
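    <para>For instance, a parallel map over a list can be written either
      with the <literal>parList</literal> strategy or with the
      <literal>parMap</literal> combinator.  The brief sketch below
      assumes the strategy names used by recent versions of the parallel
      package:</para>

<programlisting>
import Control.Parallel.Strategies

-- Square every element, evaluating the elements in parallel
-- (each to weak head normal form) with the parList strategy.
parSquares :: [Int] -&#62; [Int]
parSquares xs = map (^ 2) xs `using` parList rseq

-- parMap combines the map and the strategy application in one step.
parSquares' :: [Int] -&#62; [Int]
parSquares' = parMap rseq (^ 2)</programlisting>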
  </sect2>

<sect2 id="dph"><title>Data Parallel Haskell</title>
  <para>GHC includes experimental support for Data Parallel Haskell (DPH). This code
        is highly unstable and is only provided as a technology preview. More
        information can be found on the corresponding <ulink
        url="http://www.haskell.org/haskellwiki/GHC/Data_Parallel_Haskell">DPH
        wiki page</ulink>.</para>
</sect2>

</sect1>

<!-- Emacs stuff:
     ;;; Local Variables: ***
     ;;; sgml-parent-document: ("users_guide.xml" "book" "chapter" "sect1") ***
     ;;; ispell-local-dictionary: "british" ***
     ;;; End: ***
 -->