<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN">
<HTML>
<HEAD>
   <TITLE>Event Service Status</TITLE>
</HEAD>
<BODY TEXT="#000000" BGCOLOR="#FFFFFF">

<H3>TAO's Real-time Event Service</H3>
    Point of contact: <A HREF="mailto:jwillemsen@remedy.nl">Johnny Willemsen</A>

    Documentation for the command line and service configurator
    options used to configure the real-time event service is available <A
    HREF="../ec_options.html">here</A>.


<H3>New in this release</H3>

    <UL>
      <LI><P>The consumer and supplier control strategies are more
        configurable: both the control interval and the timeout can now
        be set (see the configuration sketch after this list).
  </P>
      </LI>
      <LI><P>When a consumer connects, it is now possible to control when
        the EC establishes its connection to that consumer: either
        directly at the first connect or with the first push.
  </P>
      </LI>
      <LI><P>The IIOP Gateway has been expanded with several options to control
        its behaviour.
  </P>
      </LI>
      <LI><P>The implementation of the multicast gateway has been improved.
  </P>
      </LI>
    </UL>
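    <P>As a rough illustration of how these options reach the event
    channel, the default strategy factory can be configured through the
    service configurator.  The snippet below is only a sketch: the option
    names shown are illustrative, and the authoritative list (including
    the exact spelling and units of the new interval, timeout and
    connect-time options) is on the
    <A HREF="../ec_options.html">options page</A>.
    </P>
<PRE>
# svc.conf sketch -- verify option names and units against ec_options.html.
# Use the reactive consumer/supplier control strategies and set the period
# with which they check for dead or unresponsive peers.
static EC_Factory "-ECConsumerControl reactive -ECSupplierControl reactive -ECConsumerControlPeriod 50000 -ECSupplierControlPeriod 50000"
</PRE>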

<H3>Known issues:</H3>

    <DL>
      <DT><I>The new EC does not use the scheduling service</I>
      </DT>
      <DD>
  <P>The new implementation has been designed to simplify its use
    in applications that do not require a scheduling service and
    to minimize the code footprint when the scheduling service is
    only required for dispatching.
  </P>
  <P>To achieve these goals the EC can run without any
    scheduling service, or it can consult the schedule without
    updating the dependencies.
  </P>
  <P>Using strategies and factories it will be possible to
    configure the EC to update the schedule only in the
    configurations that require it.
    Unfortunately these features have not been implemented yet.
  </P>
      </DD>

      <DT><I>Further details:</I></DT>

      <DD><P>Many lower-level issues and tasks can be found on the
    <A HREF="http://bugzilla.dre.vanderbilt.edu/index.cgi">
      DOC Center Bugzilla webpage
    </A>.
  </P>
      </DD>
    </DL>

<H3>Examples</H3>


For general documentation on the Event Service please read <A HREF="http://www.cs.wustl.edu/~schmidt/oopsla.ps.gz">The
Design and Performance of a Real-time CORBA Event Service</A>.
<P>The simplest test for the Event Channel is <TT>Event_Latency</TT>; below
are the basic instructions to run it:
<OL>
<LI>
Compile everything under <TT>$TAO_ROOT/orbsvcs</TT>; this requires
<TT>$TAO_ROOT/tao</TT>
and the IDL compiler in <TT>$TAO_ROOT/TAO_IDL</TT>.</LI>

<LI>
Run the naming service, the scheduling service, the event service and
the test in <TT>$TAO_ROOT/orbsvcs/tests/Event_Latency</TT>.
As in:
<P><TT>$ cd $TAO_ROOT/orbsvcs</TT>
<P><TT>$ cd Naming_Service ; ./tao_cosnaming &amp;</TT>
<P><TT>$ cd Event_Service ; ./tao_rtevent &amp;</TT>
<P><TT>$ cd tests/Event_Latency ; ./Event_Latency -m 20 -j &amp;</TT>
<P>You may want to run each program in a separate window. Try using a fixed
port number for the <TT>Naming Service</TT> so you can use the <TT>NameService</TT>
environment variable.
<P>The script <TT>start_services</TT> in <TT>$TAO_ROOT/orbsvcs/tests</TT>
can help with this.</LI>
<LI>
If you want real-time behavior on Solaris you may need to run these programs
as root; on the other hand, this particular example really has no priority
inversion, since only one thread runs at a time.</LI>
</OL>
Another example is <TT>EC_Multiple</TT>; numerous examples of how to run
this test can be found in the scripts located in <TT>$TAO_ROOT/orbsvcs/tests/EC_Multiple</TT>.
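<P>To give a flavor of what these tests do, the following sketch shows a
minimal push consumer that resolves the event service from the Naming
Service and subscribes to one event type.  It is only a sketch: it assumes
the channel is registered under the default name <TT>EventService</TT>,
omits error handling, and the class name <TT>Echo_Consumer</TT> is made up
for illustration.
<PRE>
// Minimal push consumer sketch (see the caveats above).
#include "orbsvcs/CosNamingC.h"
#include "orbsvcs/RtecEventChannelAdminC.h"
#include "orbsvcs/RtecEventCommS.h"
#include "orbsvcs/Event_Utilities.h"          // ACE_ConsumerQOS_Factory
#include "orbsvcs/Event_Service_Constants.h"  // ACE_ES_EVENT_UNDEFINED
#include "ace/Log_Msg.h"

class Echo_Consumer : public POA_RtecEventComm::PushConsumer
{
public:
  void push (const RtecEventComm::EventSet &amp;events)
  {
    ACE_DEBUG ((LM_DEBUG, "received %d event(s)\n", (int) events.length ()));
  }
  void disconnect_push_consumer (void) {}
};

int main (int argc, char *argv[])
{
  CORBA::ORB_var orb = CORBA::ORB_init (argc, argv);

  CORBA::Object_var obj = orb->resolve_initial_references ("RootPOA");
  PortableServer::POA_var poa = PortableServer::POA::_narrow (obj.in ());
  PortableServer::POAManager_var mgr = poa->the_POAManager ();
  mgr->activate ();

  // Find the event channel published by the Event_Service program.
  obj = orb->resolve_initial_references ("NameService");
  CosNaming::NamingContext_var naming =
    CosNaming::NamingContext::_narrow (obj.in ());
  CosNaming::Name name (1);
  name.length (1);
  name[0].id = CORBA::string_dup ("EventService");
  obj = naming->resolve (name);
  RtecEventChannelAdmin::EventChannel_var ec =
    RtecEventChannelAdmin::EventChannel::_narrow (obj.in ());

  // Subscribe to the first application-defined event type.
  ACE_ConsumerQOS_Factory qos;
  qos.start_disjunction_group ();
  qos.insert_type (ACE_ES_EVENT_UNDEFINED, 0);

  Echo_Consumer servant;
  RtecEventComm::PushConsumer_var consumer = servant._this ();

  RtecEventChannelAdmin::ConsumerAdmin_var admin = ec->for_consumers ();
  RtecEventChannelAdmin::ProxyPushSupplier_var supplier =
    admin->obtain_push_supplier ();
  supplier->connect_push_consumer (consumer.in (), qos.get_ConsumerQOS ());

  orb->run ();
  return 0;
}
</PRE>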

<H3>
Features in previous releases</H3>

    <UL>
      <LI><P>Copy-on-write semantics have been supported for a
    while now.
  </P>
      </LI>
      <LI><P>The event service library has been divided into several
    smaller libraries, so applications only link the required
    components.
    The base code for the Event Service is located in the
    <CODE>TAO_RTEvent</CODE> library.
    <CODE>TAO_RTOLDEvent</CODE> contains the old implementation
    of the real-time Event Service, and <CODE>TAO_RTSchedEvent</CODE>
    contains the components that support scheduling in the
    new Event Service.
    This means that applications using only the
    <CODE>TAO_RTEvent</CODE> library do not need to link the
    scheduling service.
  </P>
      </LI>
      <LI><P>More details can be found in the <CODE>README</CODE> file
    in the <CODE>$TAO_ROOT/orbsvcs/orbsvcs/Event</CODE>
    directory.
  </P>
      </LI>
      <LI><P>Added strategies to remove unresponsive or dead consumers
    and/or suppliers.
  </P>
      </LI>
      <LI><P>Lots of bug fixes since the last time these release notes
    were updated.
  </P>
      </LI>
      <LI><P>In this release the EC supports atomic updates of
    subscriptions and publications.  In previous versions events
    could be lost during an update of the subscription list.
  </P>
      </LI>
      <LI><P>The internal data structures in the event channel have
    been strategized; for example, it is possible to use
    RB-trees instead of ordered lists.  The benefits are small
    at this stage.
  </P>
      </LI>
      <LI><P>New implementation of the serialization protocols. The
    new version is based on "internal iterators" (aka Worker).
    This implementation can support copy-on-read (already
    implemented) and copy-on-write (in progress).
  </P>
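  <P>The idea behind these internal iterators, in generic form (this is a
    sketch of the pattern only, not TAO's actual classes):
  </P>
<PRE>
// Instead of exposing iterators, the collection applies a Worker to
// every element, so the locking (or copy-on-read / copy-on-write)
// policy lives in a single place.  Names are illustrative.
#include "ace/Thread_Mutex.h"
#include "ace/Guard_T.h"
#include &lt;cstddef>

class Proxy;                              // element type, for illustration

class Worker
{
public:
  virtual ~Worker (void) {}
  virtual void work (Proxy *proxy) = 0;   // called once per element
};

class Proxy_Collection
{
public:
  Proxy_Collection (void) : proxies_ (0), size_ (0) {}

  void for_each (Worker &amp;worker)
  {
    // A copy-on-read strategy would copy the collection and release
    // the lock before iterating; copy-on-write lets concurrent
    // connects build a new copy instead of blocking the iteration.
    ACE_GUARD (ACE_Thread_Mutex, guard, this->lock_);
    for (std::size_t i = 0; i != this->size_; ++i)
      worker.work (this->proxies_[i]);
  }

  // ... insert/remove operations elided ...

private:
  ACE_Thread_Mutex lock_;
  Proxy **proxies_;
  std::size_t size_;
};
</PRE>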
      </LI>
      <LI><P>The new EC allows suppliers and consumers to update
    their publications and subscriptions: they simply call
    the corresponding <CODE>connect</CODE> operation again.
    The default EC configuration disallows this, but it is
    easy to change.
  </P>
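  <P>A sketch of what such an update looks like for a consumer, assuming
    <CODE>supplier_proxy</CODE> and <CODE>consumer</CODE> were obtained
    during the original connect and that the factory was configured to
    allow reconnection (see the
    <A HREF="../ec_options.html">options page</A>):
  </P>
<PRE>
// Replace the consumer's subscription list by calling
// connect_push_consumer() again on the same proxy.
ACE_ConsumerQOS_Factory qos;                      // orbsvcs/Event_Utilities.h
qos.start_disjunction_group ();
qos.insert_type (ACE_ES_EVENT_UNDEFINED + 1, 0);  // the new subscription

supplier_proxy->connect_push_consumer (consumer.in (),
                                       qos.get_ConsumerQOS ());
</PRE>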
      </LI>
      <LI><P>The new EC uses an abstract factory to build its
    strategies; this factory can be dynamically loaded using the
    service configurator.
  </P>
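  <P>For reference, a colocated event channel can be created with the
    default factory along these lines (a sketch, with error handling
    omitted):
  </P>
<PRE>
// Create a colocated event channel using the default strategy factory.
// A svc.conf entry (or the -EC* options) can reconfigure the factory.
#include "orbsvcs/Event/EC_Default_Factory.h"
#include "orbsvcs/Event/EC_Event_Channel.h"

int main (int argc, char *argv[])
{
  // Register the default factory with the Service Configurator before
  // the ORB parses svc.conf, so a dynamic factory can still override it.
  TAO_EC_Default_Factory::init_svcs ();

  CORBA::ORB_var orb = CORBA::ORB_init (argc, argv);
  CORBA::Object_var obj = orb->resolve_initial_references ("RootPOA");
  PortableServer::POA_var poa = PortableServer::POA::_narrow (obj.in ());
  PortableServer::POAManager_var mgr = poa->the_POAManager ();
  mgr->activate ();

  // The attributes name the POAs used to activate the supplier and
  // consumer side objects; here one POA plays both roles.
  TAO_EC_Event_Channel_Attributes attributes (poa.in (), poa.in ());
  TAO_EC_Event_Channel ec_impl (attributes);
  ec_impl.activate ();

  RtecEventChannelAdmin::EventChannel_var ec = ec_impl._this ();

  // ... connect suppliers and consumers here ...
  orb->run ();
  return 0;
}
</PRE>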
      </LI>
      <LI><P>The new EC can use trivial filters for both consumers and
    suppliers, resulting in optimal performance for broadcasters.
  </P>
      </LI>
      <LI><P>Most of the locks in the new EC are strategized.
  </P>
      </LI>
      <LI><P>The duration of all locks in the EC can be bounded,
    resulting in very predictable behavior.
  </P>
      </LI>

  <LI><P>Added fragmentation and reassembly support for the multicast
    gateways.</P>
  </LI>

<LI><P>Continued work on the multicast support for the EC: we added a new
server that maps the event types (and supplier ids) into the right mcast
group. Usually this server is collocated with the helper classes that send
the events through multicast, so using a CORBA interface for this mapping
is not expensive; furthermore, it adds the flexibility of using a global
service with complete knowledge of the traffic in the system, which could
try to optimize multicast group usage.
<P>The subscriptions and publications on a particular EC can be remotely
observed by instances of the <TT>RtecChannelAdmin::Observer</TT> class.
Once more, using CORBA for this interface costs us little or nothing because
it is usually used by objects collocated with the EC.
<P><TT>TAO_EC_UDP_Receiver</TT> is a helper class that receives events
from multicast groups and dispatches them as a supplier to some event channel.
This class has to <B>join</B> the right multicast groups; using the <TT>Observer</TT>
described above and the <TT>RtecUDPAdmin</TT> to map the subscriptions
into multicast groups, it can do this dynamically as consumers join or
leave its Event Channel.
<P>When sending events through multicast, all the <TT>TAO_EC_UDP_Sender</TT>
objects can share the same socket.
</P>
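<P>The address server idea can be sketched as follows; the servant class
name and the one-group-per-event-type policy below are made up for
illustration, the interface itself is <TT>RtecUDPAdmin::AddrServer</TT>:
<PRE>
// Map an event header to the multicast group the UDP senders and
// receivers should use.
#include "orbsvcs/RtecUDPAdminS.h"

class Type_Addr_Server : public POA_RtecUDPAdmin::AddrServer
{
public:
  void get_addr (const RtecEventComm::EventHeader &amp;header,
                 RtecUDPAdmin::UDP_Addr_out addr)
  {
    // 224.1.1.x, where x is derived from the event type.
    addr.ipaddr = (224u &lt;&lt; 24) | (1u &lt;&lt; 16) | (1u &lt;&lt; 8)
                  | (header.type &amp; 0xff);
    addr.port = 10001;
  }
};
</PRE>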
</LI>

<LI><P>Added a prototype Consumer and Supplier that can send events through
multicast groups (or regular UDP sockets).
<P>The Event Channel can be configured using a Factory that constructs
the right modules (like changing the dispatching module); in the current
release only the default Factory is implemented.
<P>When several suppliers and consumers are distributed over the network
it can be useful to exploit locality and have a separate Event Channel
in each process (or host). Only when an event is required by some remote
consumer does it need to be sent through the network.
<P>The basic architecture to achieve this is simple: each Event
Channel has a proxy that connects to the EC peers, providing a "merge"
of its (local) consumer subscriptions as its own subscription list.
<P>Locally the proxy connects as a supplier, publishing all the events
it has registered for.
<P>To avoid event looping, the events carry a time-to-live field that is
decremented each time the event goes through a proxy; when the TTL reaches
zero the event is not propagated by the proxy.
<P>In the current release an experimental implementation is provided; it
basically hardcodes all the subscriptions and publications. We are researching
how to automatically build the publication list.
<P>We use the COS Time Service types (not the services) to specify time
for the Event Service and Scheduling Service.
</P>
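<P>A small sketch of how the TTL and the time fields appear to a supplier
(field names as in <TT>RtecEventComm.idl</TT>; <TT>ORBSVCS_Time</TT> is the
helper in <TT>orbsvcs/Time_Utilities.h</TT>):
<PRE>
// Fill an event header: ttl limits how many proxies will forward the
// event; timestamps use TimeBase::TimeT (CosTime types, 100 ns units).
#include "orbsvcs/RtecEventCommC.h"
#include "orbsvcs/Event_Service_Constants.h"   // ACE_ES_EVENT_UNDEFINED
#include "orbsvcs/Time_Utilities.h"            // ORBSVCS_Time
#include "ace/OS_NS_time.h"                    // ACE_OS::gethrtime

RtecEventComm::Event
make_event (RtecEventComm::EventSourceID supplier_id)
{
  RtecEventComm::Event event;
  event.header.type   = ACE_ES_EVENT_UNDEFINED;  // first application type
  event.header.source = supplier_id;
  event.header.ttl    = 1;  // decremented per proxy; dropped when it reaches 0
  ORBSVCS_Time::hrtime_to_TimeT (event.header.creation_time,
                                 ACE_OS::gethrtime ());
  return event;
}
</PRE>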
</LI>

<LI>
<P>The <TT>Gateway</TT> to connect two event channels was moved from a test
to the library. The corresponding test (<TT>EC_Multiple</TT>) has been
expanded and improved.
</P>
</LI>

<LI>
<P>The user can register a set of <TT>EC_Gateways</TT> with the <TT>EventChannel</TT>
implementation; the event channel will automatically update the subscription
list as consumers subscribe to the EC.
</P>
</LI>

<LI>
<P>The code for consumer and supplier disconnection was improved and seems
to work without problems now.
</P>
</LI>

<LI>
<P>The <TT>Event_Service</TT> program creates a collocated <TT>Scheduling
Service</TT>; this works around a problem in the ORB when running on
multiprocessors.
</P>
</LI>

<LI>
<P>Startup and shutdown were revised; the event channel shuts down
cleanly now.
</P>
</LI>

<LI>
<P>Added yet another example
(<TT>$TAO_ROOT/orbsvcs/tests/EC_Throughput</TT>);
this one illustrates how to use the TAO extensions to create octet sequences
based on CDR streams, without incurring extra copies. This is useful
to implement custom marshaling or late demarshaling of the event payload.
Future versions of the test will help measure the EC throughput, hence
the name.</P>
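<P>In rough terms the supplier side of that technique looks like the sketch
below; it assumes the TAO-specific octet sequence
<CODE>replace(length, message_block)</CODE> extension that the test
exercises, so check the <TT>EC_Throughput</TT> sources for the exact usage.
<PRE>
// Marshal the payload into a CDR stream and let the event's octet
// sequence adopt the resulting message block chain, avoiding a copy.
#include "tao/CDR.h"
#include "orbsvcs/RtecEventCommC.h"

void fill_payload (RtecEventComm::Event &amp;event, CORBA::ULong iteration)
{
  TAO_OutputCDR cdr;
  cdr &lt;&lt; iteration;           // marshal whatever the payload contains
  event.data.payload.replace (cdr.total_length (), cdr.begin ());
}
</PRE>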
</LI>
</UL>

</BODY>
</HTML>