<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN">
<HTML>
<HEAD>
   <META HTTP-EQUIV="Content-Type" CONTENT="text/html; charset=iso-8859-1">
   <META NAME="GENERATOR" CONTENT="Mozilla/4.06 [en] (X11; I; SunOS 5.5.1 sun4u) [Netscape]">
   <TITLE>Event Service Status</TITLE>
<!-- $Id$ -->
</HEAD>
<BODY TEXT="#000000" BGCOLOR="#FFFFFF">

<H3>TAO's Real-time Event Service</H3>
Point of contact: <A HREF="mailto:coryan@cs.wustl.edu">Carlos O'Ryan</A>
<H4>
Last Updated: $Date$</H4>

<H3>
New on this release</H3>

<UL>
  <LI><P>Added fragmentation and reassembly support for the multicast
    gateways</P>
  </LI>
</UL>

<H3>
Known issues:</H3>

<DL>
<DT>
<I>The schedule cannot be downloaded</I></DT>

<DD>
The Scheduling Service seems to compute proper schedules, but it is not
possible to download them; apparently there is a marshalling problem
with sequences of complex structures.</DD>

<P>Due to this problem we have been unable to test the run-time
scheduler, and it is impossible to complete performance measurements
and optimizations: the latency and overhead of the (global) scheduling
service are at least as large as those of the EC itself.
<P><B>Note:</B> This does not seem to be the case anymore, but the comment
will remain here until I can confirm that the problem disappeared.
<DT>

<P><I>Run-time scheduler requires re-link</I></DT>

<DD>
During a normal execution of the system there is no need to use a global
Real-time Scheduling Service; a faster, collocated implementation of the
service is available. The scheduling information must, of course, be
precomputed in some config run.</DD>

<P>Unfortunately the current scheme requires a relink of all the involved
applications against the tables generated for the run-time scheduling
service.
<P>We should be able to download the schedule to the interested parties,
without the need for a separate link phase. This would simplify and speed
up the development cycle, but it requires a (small and fixed) amount of
dynamic memory allocation. It could also be interesting to "save" the
schedule computation in some persistent form, so that startup costs are
lower too.
<P>The current design contemplates a config run where a global consumer
accumulates the QoS requirements of all the objects; next, an external
utility is used to force the computation and saving of the schedule. In
future executions the global scheduler pre-loads this schedule, the
clients simply download the precomputed schedule, and all scheduling
queries go to a local scheduling service, without any further contact
with the global instance. The sketch right after this list illustrates
the intended flow.
<DT>
<P><I>Users have no control over service collocation</I></DT>

<DD>
The user should have complete control over service collocation, using the
ACE Service Configurator; currently the services must be explicitly
instantiated by the user.</DD>
<DT>
<P><I>Further details:</I></DT>

<DD>
Many lower level issues and tasks can be found in the <A HREF="TODO.html">TODO
list</A>.</DD>

</DL>
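<P>The following is a minimal sketch of that two-phase scheme. All the
names in it (<TT>Scheduler</TT>, <TT>register_requirements</TT>,
<TT>compute_and_save</TT>, <TT>download</TT>) are hypothetical stand-ins
used for illustration; they are <B>not</B> the real <TT>RtecScheduler</TT>
API.</P>

<PRE>
#include &lt;iostream&gt;
#include &lt;map&gt;
#include &lt;string&gt;

// Hypothetical stand-in for the scheduling service; these names are
// illustrative only, they are *not* the real RtecScheduler API.
struct Schedule
{
  std::map&lt;std::string, int&gt; priorities;
};

class Scheduler
{
public:
  // Config run, phase 1: every object registers its QoS requirements
  // with the global scheduler.
  void register_requirements (const std::string &amp;operation, int period)
  {
    // Placeholder "computation"; a real scheduler would run RMS/MUF.
    this-&gt;schedule_.priorities[operation] = period;
  }

  // Config run, phase 2: an external utility forces the computation
  // and saves the schedule in some persistent form.
  void compute_and_save () { /* write this-&gt;schedule_ to disk */ }

  // Normal runs: each client downloads the precomputed schedule once,
  // and all further scheduling queries are answered locally.
  Schedule download () const { return this-&gt;schedule_; }

private:
  Schedule schedule_;
};

int main ()
{
  Scheduler global;                     // global instance, config run
  global.register_requirements ("supplier_push", 10000);
  global.compute_and_save ();

  Schedule local = global.download ();  // normal run: local copy
  std::cout &lt;&lt; "downloaded " &lt;&lt; local.priorities.size ()
            &lt;&lt; " entries" &lt;&lt; std::endl;
  return 0;
}
</PRE>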

<H3>
Examples</H3>


For general documentation on the Event Service please read <A HREF="http://www.cs.wustl.edu/~schmidt/oopsla.ps.gz">The
Design and Performance of a Real-time CORBA Event Service</A>.
<P>The simplest test for the Event Channel is <TT>Event_Latency</TT>; the
basic instructions to run it are below:
<OL>
<LI>
Compile everything under <TT>$TAO_ROOT/orbsvcs</TT>; this obviously
requires <TT>$TAO_ROOT/tao</TT>
and the IDL compiler in <TT>$TAO_ROOT/TAO_IDL</TT>.</LI>

<LI>
Run the naming service, the scheduling service, the event service, and
the test in <TT>$TAO_ROOT/TAO/orbsvcs/tests/Event_Latency</TT>; remember
to give a different port to each one, using the <TT>-ORBport</TT> option.
As in:
<P><TT>$ cd $TAO_ROOT/orbsvcs</TT>
<P><TT>$ cd Naming_Service ; ./Naming_Service -ORBport 10000 &amp;</TT>
<P><TT>$ cd Event_Service ; ./Event_Service -ORBport 0 &amp;</TT>
<P><TT>$ cd tests/Event_Latency ; ./Event_Latency -ORBport 0 -m 20 -j &amp;</TT>
<P>You may want to run each program in a separate window. Try using a fixed
port number for the <TT>Naming Service</TT> so you can use the <TT>NameService</TT>
environment variable.
<P>The script <TT>start_services</TT> in <TT>$TAO_ROOT/orbsvcs/tests</TT>
can help with this.</LI>
<LI>
If you want real-time behavior on Solaris you may need to run these programs
as root; on the other hand, this particular example really has no priority
inversion, since only one thread runs at a time.</LI>
</OL>
Another example is <TT>EC_Multiple</TT>; numerous examples of how to run
this test can be found in the scripts located in <TT>$TAO_ROOT/orbsvcs/tests/EC_Multiple</TT>.

<H3>
Features in previous releases</H3>

<UL>

<LI><P>Continued work on the multicast support for the EC: we added a new
server that maps the event types (and supplier IDs) into the right mcast
group. Usually this server is collocated with the helper classes that send
the events through multicast, so using a CORBA interface for this mapping
is not expensive; further, it adds the flexibility of using a global
service with complete knowledge of the traffic in the system, which could
try to optimize multicast group usage.
<P>The subscriptions and publications on a particular EC can be remotely
observed by instances of the <TT>RtecChannelAdmin::Observer</TT> class.
Once more, using CORBA for this interface costs us little or nothing,
because it is usually used by objects collocated with the EC.
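<P>As a minimal sketch, an observer servant that just logs every change
could look like the code below. This assumes the <TT>Observer</TT>
interface declares <TT>update_consumer(in ConsumerQOS)</TT> and
<TT>update_supplier(in SupplierQOS)</TT> and lives in the
<TT>RtecEventChannelAdmin</TT> module; check the IDL shipped with your
release for the exact names and signatures.</P>

<PRE>
// Sketch only: verify the skeleton class name and the operation
// signatures against the IDL in your release.
#include "orbsvcs/RtecEventChannelAdminS.h"
#include "ace/Log_Msg.h"

class Logging_Observer : public POA_RtecEventChannelAdmin::Observer
{
public:
  // Called by the EC every time the consumer subscriptions change.
  virtual void
  update_consumer (const RtecEventChannelAdmin::ConsumerQOS &amp;sub)
  {
    ACE_DEBUG ((LM_DEBUG, "consumers: %d dependencies\n",
                sub.dependencies.length ()));
  }

  // Called by the EC every time the supplier publications change.
  virtual void
  update_supplier (const RtecEventChannelAdmin::SupplierQOS &amp;pub)
  {
    ACE_DEBUG ((LM_DEBUG, "suppliers: %d publications\n",
                pub.publications.length ()));
  }
};
</PRE>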
<P><TT>TAO_EC_UDP_Receiver</TT> is a helper class that receives events
from multicast groups and dispatches them as a supplier to some event channel.
This class has to <B>join</B> the right multicast groups; using the
<TT>Observer</TT> described above and the <TT>RtecUDPAdmin</TT> to map
the subscriptions into multicast groups, it can do this dynamically, as
consumers join or leave its Event Channel.
<P>When sending Events through multicast, all the <TT>TAO_EC_UDP_Sender</TT>
objects can share the same socket.
</P>
</LI>

<LI><P>Added a prototype Consumer and Supplier that can send events through
multicast groups (or regular UDP sockets).
<P>The Event Channel can be configured using a Factory that constructs
the right modules (for example, changing the dispatching module); in the
current release only the default Factory is implemented.
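<P>As a rough illustration of the idea (none of the class names below
match the real TAO classes; they are hypothetical), such a factory
isolates the EC from the concrete module types:</P>

<PRE>
// Hypothetical sketch of factory-based configuration: the EC asks an
// abstract factory for each module, so swapping the factory changes
// the EC configuration without touching the EC code itself.
class Dispatching_Module
{
public:
  virtual ~Dispatching_Module () {}
  virtual void dispatch_event () = 0;
};

class Reactive_Dispatching : public Dispatching_Module
{
public:
  virtual void dispatch_event () { /* run in the caller's thread */ }
};

class EC_Module_Factory
{
public:
  virtual ~EC_Module_Factory () {}
  virtual Dispatching_Module *create_dispatching () = 0;
};

// The "default Factory" mentioned above plays a role like this one.
class Default_Factory : public EC_Module_Factory
{
public:
  virtual Dispatching_Module *create_dispatching ()
  {
    return new Reactive_Dispatching;
  }
};
</PRE>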
<P>When several suppliers and consumers are distributed over the network
it could be nice to exploit locality and have a separate Event Channel
on each process (or host). Only when an event is required by some remote
consumer does it need to be sent through the network.
<P>The basic architecture to achieve this seems very simple: each Event
Channel has a proxy that connects to the EC peers, providing a "merge"
of its (local) consumer subscriptions as its own subscription list.
<P>Locally, the proxy connects as a supplier, publishing all the events
it has registered for.
<P>To avoid event looping, the events carry a time-to-live field that is
decremented each time the event goes through a proxy; when the TTL gets
to zero the event is not propagated by the proxy.
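<P>A minimal sketch of that check, using hypothetical event and proxy
types rather than the real <TT>RtecEventComm</TT> structures:</P>

<PRE>
// Hypothetical types: real events are RtecEventComm IDL structures;
// this only illustrates the TTL check in the proxy.
struct Event
{
  int ttl;                      // decremented at every proxy hop
  // payload ...
};

class EC_Proxy
{
public:
  // Forward an event to the peer EC unless its TTL has expired.
  void push (Event e)
  {
    e.ttl--;                    // one more hop consumed
    if (e.ttl &lt;= 0)
      return;                   // drop it: do not let the event loop
    this-&gt;send_to_peer (e);     // otherwise propagate to the peer
  }

private:
  void send_to_peer (const Event &amp;) { /* marshal and send */ }
};
</PRE>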
<P>In the current release an experimental implementation is provided; it
basically hardcodes all the subscriptions and publications. We are
researching how to automatically build the publication list.
<P>We use the COS Time Service types (not the services) to specify time
for the Event Service and Scheduling Service.
</P>
</LI>

<LI>
<P>The <TT>Gateway</TT> to connect two event channels was moved from a test
to the library. The corresponding test (<TT>EC_Multiple</TT>) has been
expanded and improved.
</P>
</LI>

<LI>
<P>The user can register a set of <TT>EC_Gateways</TT> with the <TT>EventChannel</TT>
implementation; the event channel will automatically update the subscription
list as consumers subscribe to the EC.
</P>
</LI>

<LI>
<P>The code for consumer and supplier disconnection was improved and seems
to work without problems now.
</P>
</LI>

<LI>
<P>The <TT>Event_Service</TT> program creates a collocated <TT>Scheduling
Service</TT>; this works around a problem in the ORB when running on
multiprocessors.
</P>
</LI> 

<LI>
<P>Startup and shutdown were revised; the event channel shuts down
cleanly now.
</P>
</LI> 

<LI>
<P>Added yet another example
(<TT>$TAO_ROOT/orbsvcs/tests/EC_Throughput</TT>);
this one illustrates how to use the TAO extensions to create octet sequences
based on CDR streams, without incurring extra copies. This is useful
to implement custom marshalling or late demarshalling of the event payload.
Future versions of the test will help measure the EC throughput, hence
the name.</P>
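<P>A rough sketch of the technique is shown below. The octet-sequence
constructor taking a length and an <TT>ACE_Message_Block</TT> chain is
assumed to be the TAO extension referred to above; the exact headers and
sequence type may differ across releases, so check the test sources.</P>

<PRE>
// Sketch only: marshal a sample into a CDR stream and hand the
// resulting message-block chain to an octet sequence, so the buffer
// is shared instead of copied. The (length, message block)
// constructor is assumed to be the TAO extension; check the
// EC_Throughput sources for the exact sequence type it uses.
#include "tao/CDR.h"
#include "tao/OctetSeqC.h"

CORBA::OctetSeq *
make_payload (CORBA::Double sample)
{
  TAO_OutputCDR cdr;
  cdr &lt;&lt; sample;               // custom marshalling of the payload
  return new CORBA::OctetSeq (cdr.total_length (), cdr.begin ());
}
</PRE>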
</LI>
</UL>

</BODY>
</HTML>