<!-- $Id$ -->

<HTML>
  <HEAD>
    <TITLE>Event Service Status</TITLE>
  </HEAD>

  <BODY TEXT="#000000" BGCOLOR="#FFFFFF">
    <H3>Event Service Status</H3>
      Point of contact: <A HREF="mailto:coryan@cs.wustl.edu">Carlos O'Ryan</A>

    <H4>Last Updated: $Date$ </H4>

    <H3>New on this release</H3>

    <UL>
      <LI><P>Continued work on the multicast support for the EC:
	  we added a new server that maps the event types
	  (and supplier ids) to the right multicast group.
	  This server is usually collocated with the helper classes
	  that send the events through multicast,
	  so using a CORBA interface for this mapping is inexpensive.
	  It also adds the flexibility of using a global service
	  with complete knowledge of the traffic in the system,
	  which could try to optimize multicast group usage.
	  A sketch of such a mapping is shown after this list.
	</P>
      </LI>
      <LI><P>The subscriptions and publications on a particular EC can
	  be remotely observed by instances of the
	  <CODE>RtecChannelAdmin::Observer</CODE> class.
	  Once more, using CORBA for this interface costs little or
	  nothing, because the interface is usually used by objects
	  collocated with the EC.
	</P>
      </LI>
      <LI><P><CODE>TAO_EC_UDP_Receiver</CODE> is a helper class that
	  receives events from multicast groups and dispatches them as
	  a supplier to some event channel.
	  This class has to <B>join</B> the right multicast groups;
	  using the <CODE>Observer</CODE> described above and the
	  <CODE>RtecUDPAdmin</CODE> service to map subscriptions into
	  multicast groups, it can do this dynamically,
	  as consumers join or leave its Event Channel.
	</P>
      </LI>
      <LI><P>When sending events through multicast, all the
	  <CODE>TAO_EC_UDP_Sender</CODE> objects can share the same
	  socket.
	</P>
      </LI>
    </UL>
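
    <P>To make the multicast mapping concrete, here is a minimal
      sketch of an event-type to multicast-group lookup.
      This is <B>not</B> TAO code: the names
      (<CODE>McastEntry</CODE>, <CODE>group_for</CODE>) and the static
      table are hypothetical; the real service is reached through the
      <CODE>RtecUDPAdmin</CODE> CORBA interface and may also use the
      supplier id and global traffic information.</P>

    <PRE>
// Hypothetical sketch of mapping an event type to a multicast group.
#include &lt;cstdio&gt;

struct McastEntry {
  long event_type;
  const char *group;          // "address:port" of the multicast group
};

// A fixed table; a real mapper could be driven by the Observer
// updates as consumers join or leave the Event Channel.
static const McastEntry mcast_table[] = {
  { 0x1001, "225.1.2.3:12345" },
  { 0x1002, "225.1.2.4:12345" }
};

// Return the multicast group for an event type, or a default group.
const char *group_for (long event_type) {
  const int n = sizeof (mcast_table) / sizeof (mcast_table[0]);
  for (int i = 0; i != n; ++i)
    if (mcast_table[i].event_type == event_type)
      return mcast_table[i].group;
  return "225.1.1.1:10000";   // default group for unknown types
}

int main () {
  std::printf ("event 0x1001 maps to %s\n", group_for (0x1001));
  return 0;
}
    </PRE>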

    <H3>Known issues:</H3>
    <DL>
      <DT><EM>The schedule cannot be downloaded</EM></DT>
      <DD>
	The Scheduling Service seems to compute proper schedules,
	but it is not possible to download them;
	apparently there is a marshalling problem for sequences of
	complex structures.

	<P>Due to this problem we have been unable to test the
	  run-time scheduler, and it is impossible to complete
	  performance measurements and optimizations:
	  the (global) scheduling service latency and overhead is at
	  least as large as that of the EC itself.</P>
	<P><STRONG>Note:</STRONG> This does not seem to be the case
	  anymore, but the comment will remain here until I can
	  confirm that the problem disappeared.</P>
	</DD>

      <DT><EM>Run-time scheduler requires re-link</EM></DT>
      <DD>
	During a normal execution of the system there is no
	need to use the global Real-time Scheduling Service;
	a faster,
	collocated implementation of the service is available.
	Obviously the scheduling information is precomputed in some
	config run.

	<P>Unfortunately the current scheme requires a relink of all the
	involved applications against the tables generated for the
	run-time scheduling service.</P>

	<P>We should be able to download the schedule to the interested
	parties,
	without the need for a separate link phase.
	This would simplify and speed up the development cycle,
	but requires a (small and fixed) amount of dynamic memory
	allocation.
	It could also be interesting to "save" the schedule computation
	in some persistent form,
	so startup costs are lower too.</P>

	<P>The current design contemplates a config run where a global
	consumer accumulates the QoS requirements of all the objects;
	next an external utility is used to force a computation and
	save of the schedule.
	In future executions
	the global scheduler pre-loads this schedule and
	the clients simply download the precomputed schedule,
	so all scheduling queries go to a local scheduling service,
	without any further contact with the global instance.</P>
      </DD>

      <DT><EM>Users have no control over service
	  collocation</EM></DT>
      <DD>
	<P>The user should have complete control over service collocation,
	  using the ACE Service Configurator;
	  currently the services must be explicitly instantiated by the
	  user.
	</P>
      </DD>

      <DT><EM>The <CODE>TAO_EC_Gateway_IIOP</CODE> objects publish
	  events coming from multiple suppliers</EM></DT>
      <DD><P>These objects receive the events from a "remote" EC and
	  push them into a "local" EC;
	  each one subscribes to the disjunction of the events required
	  by the local consumers, and it uses the same event
	  types/<CODE>supplier_ids</CODE> to
	  connect as a local supplier.
	  This list may potentially include several different
	  subscriptions based on different supplier ids,
	  so the <CODE>Gateway</CODE> may end up with an invalid
	  publication.
	  We need to have a different local supplier for each remote
	  <CODE>supplier_id</CODE>, potentially shared among all the
	  local <CODE>Gateways</CODE>;
	  a sketch of that bookkeeping is shown after this list.
	</P>
      </DD>

      <DT><EM>There is no <CODE>CosEventChannel</CODE>
	  interface</EM></DT>
      <DD><P>This is more of a warning than an issue.
	  TAO's Real-time Event Channel is <B>not</B> an
	  implementation of the CORBAservices Event Channel;
	  it provides a similar set of features,
	  and the interfaces are also similar,
	  but real-time applications require more control over
	  their middleware than what the CORBA Event Channel
	  provides.
	</P>
	<P>
	  It should also be noted that the Event Channel only provides
	  the <B>Push</B> model,
	  since it is more predictable and it can reuse the
	  scheduling algorithms used for normal function calls.
	</P>
	<P>It would be fairly simple to implement a standard CORBA
	  Event Service on top of TAO's Real-time Event Channel,
	  but this is a low priority task,
	  since our sponsors have no need for such a beast.
	</P>
      </DD>

      <DT><EM>Further details:</EM></DT>
      <DD>
	<P>Many lower level issues and tasks can be found in the
	  <A HREF="TODO.html">TODO list</A>.
	</P>
      </DD>

    </DL>
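
    <P>To make the last <CODE>Gateway</CODE> point concrete, the
      following sketch shows the kind of bookkeeping we have in mind:
      one local supplier per remote <CODE>supplier_id</CODE>, created
      on demand and then shared.
      This is a hypothetical illustration, not the
      <CODE>TAO_EC_Gateway_IIOP</CODE> API; <CODE>LocalSupplier</CODE>
      and <CODE>SupplierRegistry</CODE> are made-up names.</P>

    <PRE>
// Hypothetical sketch: one local supplier per remote supplier id,
// created lazily and shared; not the TAO_EC_Gateway_IIOP interface.
#include &lt;cstdio&gt;
#include &lt;map&gt;

// Stand-in for the object that connects to the local EC as a supplier.
struct LocalSupplier {
  long supplier_id;
};

class SupplierRegistry {
public:
  // Return the local supplier for a remote supplier id,
  // creating it the first time that id is seen.
  LocalSupplier *supplier_for (long remote_supplier_id) {
    LocalSupplier &amp;entry = map_[remote_supplier_id];
    entry.supplier_id = remote_supplier_id;
    return &amp;entry;
  }

private:
  std::map&lt;long, LocalSupplier&gt; map_;
};

int main () {
  SupplierRegistry registry;
  // Two events from the same remote supplier share one local supplier.
  std::printf ("same supplier reused: %d\n",
               registry.supplier_for (42) == registry.supplier_for (42));
  return 0;
}
    </PRE>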

    <H3>Examples</H3>

    <P>For general documentation on the Event Service please read
      <A HREF="http://www.cs.wustl.edu/~schmidt/oopsla.ps.gz">
	The Design and Performance of a Real-time CORBA Event
	Service</A>.

    <P>The simplest test for the Event Channel is
      <CODE>Event_Latency</CODE>;
      below are the basic instructions to run it:</P>

    <OL>
      <LI> Compile everything under <CODE>$TAO_ROOT/orbsvcs</CODE>; this
	obviously requires <CODE>$TAO_ROOT/tao</CODE> and
	the IDL compiler in <CODE>$TAO_ROOT/TAO_IDL</CODE>.</LI>

      <LI><P>Run the naming service, the scheduling service, the event service
	and the test in
	<CODE>$TAO_ROOT/orbsvcs/tests/Event_Latency</CODE>;
	remember to give a different port to each one,
	using the <CODE>-ORBport</CODE> option. As in:</P>

	<PRE>
$ cd $TAO_ROOT/orbsvcs
$ cd Naming_Service ; ./Naming_Service -ORBport 10000 &amp;
$ cd ../Event_Service ; ./Event_Service -ORBport 0 &amp;
$ cd ../tests/Event_Latency ; ./Event_Latency -ORBport 0 -m 20 -j &amp;
	</PRE>

	  <P>
	    You may want to run each program in a separate window.
	    Try using a fixed port number for the <CODE>Naming
	      Service</CODE> so you can use the <CODE>NameService</CODE>
	    environment variable.
	  </P>
	  
	  <P>
	    The script <CODE>start_services</CODE>
	    in <CODE>$TAO_ROOT/orbsvcs/tests</CODE> can help with
	    this.
	  </P>
	  
      </LI>

      <LI> If you want real-time behavior on Solaris you may need to run
	these programs as root; on the other hand, this particular
	example really has no priority inversion, since only one
	thread runs at a time.</LI>
    </OL>

    <P>Another example is <CODE>EC_Multiple</CODE>;
      numerous examples of how to run this test can be found in the
      scripts located in
      <CODE>$TAO_ROOT/orbsvcs/tests/EC_Multiple</CODE>.</P>

    <H3>Features in previous releases</H3>

    <UL>
      <LI><P>
	  Added a prototype Consumer and Supplier that can send events
	  through multicast groups (or regular UDP sockets).
	</P>
      </LI>
      <LI><P>The Event Channel can be configured using a Factory that
	  constructs the right modules (for example, changing the
	  dispatching module);
	  in the current release only the default Factory is
	  implemented.
	</P>
      </LI>
      <LI>
	<P>
	When several suppliers and consumers are distributed over the
	network it could be nice to exploit locality and have a
	separate Event Channel on each process (or host).
	Only when an event is required by some remote consumer do we
	need to send it through the network. </P>

	<P>
	The basic architecture to achieve this seems very simple:
	each Event Channel has a proxy that connects to its EC peers,
	providing a "merge" of its (local) consumer subscriptions as
	its own subscription list. </P>

	<P>
	Locally the proxy connects as a supplier,
	publishing all the events it has registered for. </P>

	<P>
	To avoid event looping, the events carry a time-to-live field
	that is decremented each time the event goes through a proxy;
	when the TTL reaches zero the event is not propagated by the
	proxy (see the sketch after this list). </P>

	<P>
	In the current release an experimental implementation is
	provided;
	it basically hardcodes all the subscriptions and publications,
	and we are researching how to automatically build the
	  publication list.</P>
      </LI>

      <LI> <P>
	We use the COS Time Service types (not the services) to
	specify time for the Event Service and Scheduling Service.</P>
      </LI>

      <LI>The <CODE>Gateway</CODE> to connect two event channels was
	moved from a test to the library.
	The corresponding test (<CODE>EC_Multiple</CODE>) has been
	expanded and improved.</LI>

	<LI>
	  The user can register a set of <CODE>EC_Gateways</CODE> with
	  the <CODE>EventChannel</CODE> implementation, the event
	  channel will automatically update the subscription list as
	  consumers subscribe to the EC.
	</LI>
	<LI>
	  The code for consumer and supplier disconnection was
	  improved and seems to work without problems now.
	</LI>
	<LI>
	  The <CODE>Event_Service</CODE> program creates a collocated
	  <CODE>Scheduling Service</CODE>; this works around a problem
	  in the ORB when running on multiprocessors.
	</LI>
	<LI>
	  Startup and shutdown were revised; the event channel
	  shuts down cleanly now.
	</LI>
	<LI>
	  Added yet another example
	  (<CODE>$TAO_ROOT/orbsvcs/tests/EC_Throughput</CODE>); this
	  one illustrates how to use the
	  TAO extensions to create octet sequences based on CDR
	  streams, without incurring extra copies.
	  This is useful to implement custom marshalling or late
	  demarshalling of the event payload.
	  Future versions of the test will help measure the EC
	  throughput, hence the name.
	</LI>
    </UL>
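
    <P>As an illustration of the time-to-live handling described
      above, the sketch below decrements the TTL at each proxy hop and
      stops propagating the event once the TTL reaches zero.
      The <CODE>EventHeader</CODE> and
      <CODE>proxy_should_forward</CODE> names are hypothetical; the
      real gateways work on <CODE>RtecEventComm</CODE> event
      headers.</P>

    <PRE>
// Hypothetical sketch of the TTL check performed by an EC proxy.
#include &lt;cstdio&gt;

struct EventHeader {
  long type;
  long ttl;        // remaining number of proxy hops
};

// Returns true if the event should be forwarded to the peer EC.
bool proxy_should_forward (EventHeader &amp;header) {
  --header.ttl;               // consume one hop at this proxy
  return header.ttl &gt; 0;      // a TTL of zero is not propagated
}

int main () {
  EventHeader event = { 0x1001, 2 };
  std::printf ("first proxy forwards:  %d\n", proxy_should_forward (event));
  std::printf ("second proxy forwards: %d\n", proxy_should_forward (event));
  return 0;
}
    </PRE>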

  </BODY>
</HTML>