<!-- $Id$ -->
<HTML>
<HEAD>
   <META HTTP-EQUIV="Content-Type" CONTENT="text/html; charset=iso-8859-1">
   <META NAME="Author" CONTENT="James CE Johnson">
   <TITLE>ACE Tutorial 018</TITLE>
</HEAD>
<BODY TEXT="#000000" BGCOLOR="#FFFFFF" LINK="#000FFF" VLINK="#FF0F0F">

<CENTER><B><FONT SIZE=+2>ACE Tutorial 018</FONT></B></CENTER>

<CENTER><B><FONT SIZE=+2>The FIFO Nature of ACE_Token</FONT></B></CENTER>

<P>
<HR WIDTH="100%">

Welcome to Tutorial 18!
<P>
We've seen various ACE synchronization methods in this and other
tutorial sections.  Something we haven't yet seen is the ACE_Token.
ACE_Token has a really cool property:  it grants the lock to waiting
threads in FIFO order.
<P>
Why is that cool?
<P>
In the other tutorials, you may have found that one thread ends up
with all of the work.  Even though other threads are available, the OS
scheduling and lock management simply let that happen.  With
ACE_Token, the threads are queued up on the token and served in a
traditional first-in-first-out manner.
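<P>
To make this concrete, here is a minimal sketch, not taken from this
tutorial's source, of several threads sharing one ACE_Token.  Because
acquire() queues its callers in arrival order, each release() hands
the token to the next thread in line rather than to whichever thread
the scheduler happens to favor:
<P>
<PRE>
#include "ace/Token.h"
#include "ace/Thread_Manager.h"
#include "ace/Log_Msg.h"

static ACE_Token token;        // one token shared by every thread

static ACE_THR_FUNC_RETURN worker (void *)
{
  for (int i = 0; i &lt; 5; ++i)
    {
      token.acquire ();        // wait our turn in the FIFO queue
      ACE_DEBUG ((LM_DEBUG, "(%t) has the token\n"));
      token.release ();        // pass the token to the next waiter
    }
  return 0;
}

int main (int, char *[])
{
  ACE_Thread_Manager::instance ()->spawn_n (4, worker);
  ACE_Thread_Manager::instance ()->wait ();
  return 0;
}
</PRE>
Run something like this and the "(%t)" thread IDs in the output should
rotate through all four threads instead of repeating one ID over and
over.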
<P>
Why is FIFO important?
<P>
Well, if your app is running in a bunch of threads and each is doing
the same thing on the local host, then FIFO may not be important.
However, take the case where each thread is connected to a remote
system.  Let's say you have a dozen threads in your app and each is
connected to a different remote system.  Each of the threads will be
given a block of data which will be passed to the remote for some
intense calculation.  If you use the FIFO token then you'll spread the
work more-or-less evenly among the remote peers.  If you use the
traditional mutex then one peer may get the lion's share of the work.
<P>
It comes down to a design decision based on the application's needs.
Consider your application, examine its behavior and decide for yourself
whether you want to spread the work evenly or whether it's OK to let some
threads work harder than others.
<P>
Kirthika's abstract:
<UL>
A token is similar to a mutex lock, with the difference being that
the token is granted to the waiting threads in FIFO order.  In the case
of the mutex lock, any thread (depending on the OS) could acquire
the lock when it's released.  The token internally implements a recursive
mutex, i.e. the thread that owns the token can reacquire it without
deadlocking.  The token also keeps two FIFO lists, one for writers and
one for readers, with writer acquires having higher priority than
reader acquires.
<P>
This tutorial throws light on the difference between having a shared
resource governed by a lock and by a token.  Both versions derive from a
Task which simply updates a counter with the number-of-threads value.  A
barrier is used to ensure that all threads get an equal opportunity to
grab the token.  A message containing the thread count moves among the
threads' message queues, where each thread dequeues and reads it.
<P>
From the results we conclude that, when the Token is used, the job to be
completed can be distributed evenly among the available threads.  This
can't be guaranteed when a simple lock is used for synchronization.
</UL>
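<P>
Below is a rough sketch, with made-up names rather than the tutorial's
own, of the kind of comparison the abstract describes:  the same
counting loop run first under an ordinary ACE_Thread_Mutex and then
under an ACE_Token.  The tutorial's real code uses an ACE_Task, a
barrier and a message queue; this fragment only shows how swapping the
lock type changes which thread wins each pass:
<P>
<PRE>
#include "ace/Token.h"
#include "ace/Synch.h"            // ACE_Thread_Mutex, guard macros
#include "ace/Thread_Manager.h"
#include "ace/Log_Msg.h"

static ACE_Thread_Mutex mutex;    // ordinary lock:  no fairness guarantee
static ACE_Token        token;    // FIFO lock:  waiters are queued in order
static int              counter = 0;

// Guarded by the mutex, one thread may keep re-winning the lock and
// end up doing most of the increments.
static ACE_THR_FUNC_RETURN mutex_worker (void *)
{
  for (int i = 0; i &lt; 10; ++i)
    {
      ACE_GUARD_RETURN (ACE_Thread_Mutex, guard, mutex, 0);
      ACE_DEBUG ((LM_DEBUG, "(%t) mutex pass %d\n", ++counter));
    }
  return 0;
}

// Guarded by the token, each release hands the lock to the next
// queued thread, so the passes rotate among the workers.
static ACE_THR_FUNC_RETURN token_worker (void *)
{
  for (int i = 0; i &lt; 10; ++i)
    {
      ACE_GUARD_RETURN (ACE_Token, guard, token, 0);
      ACE_DEBUG ((LM_DEBUG, "(%t) token pass %d\n", ++counter));
    }
  return 0;
}

int main (int, char *[])
{
  ACE_Thread_Manager::instance ()->spawn_n (4, mutex_worker);
  ACE_Thread_Manager::instance ()->wait ();

  counter = 0;
  ACE_Thread_Manager::instance ()->spawn_n (4, token_worker);
  ACE_Thread_Manager::instance ()->wait ();
  return 0;
}
</PRE>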
<P><HR WIDTH="100%">
<CENTER>[<A HREF="../online-tutorials.html">Tutorial Index</A>] [<A HREF="page02.html">Continue This Tutorial</A>]</CENTER>