Diffstat (limited to 'TAO/docs/releasenotes/ec.html')
-rw-r--r--  TAO/docs/releasenotes/ec.html | 148
1 file changed, 0 insertions, 148 deletions
diff --git a/TAO/docs/releasenotes/ec.html b/TAO/docs/releasenotes/ec.html
deleted file mode 100644
index ec703e4759c..00000000000
--- a/TAO/docs/releasenotes/ec.html
+++ /dev/null
@@ -1,148 +0,0 @@
-<!-- $Id$ -->
-
-<HTML>
- <HEAD>
- <TITLE>Event Service Status</TITLE>
- </HEAD>
-
- <BODY>
- <H3>Event Service Status</H3>
- Point of contact: <A HREF="mailto:coryan@cs.wustl.edu">Carlos O'Ryan</A>
-
- <H4>Last Updated: $Date$ </H4>
-
-    <H3>New in this release</H3>
-
- <UL>
- <LI>Fixed memory leak in</LI>
- </UL>
-
- <H3>Known issues:</H3>
- <DL>
- <DT><EM>The schedule cannot be downloaded</EM></DT>
- <DD>
-        The Scheduling Service seems to compute proper schedules,
-        but it is not possible to download them;
-        apparently there is a marshalling problem for sequences of
-        complex structures.
-
-        <P>Due to this problem we have been unable to test the
-        run-time scheduler, and it is impossible to complete
-        performance measurements and optimizations:
-        the latency and overhead of the (global) Scheduling Service
-        are at least as large as those of the EC itself.</P>
- </DD>
-
- <DT><EM>Run-time scheduler requires re-link</EM></DT>
- <DD>
-        During a normal execution of the system
-        there is no need to use the global Real-time Scheduling
-        Service;
-        a faster,
-        collocated implementation of the service is available.
-        Naturally, the scheduling information must be precomputed in
-        some configuration run.
-
-        <P>Unfortunately the current scheme requires a re-link of all
-        the applications involved against the tables generated for the
-        run-time scheduling service.</P>
-
-        <P>We should be able to download the schedule to the
-        interested parties,
-        without the need for a separate link phase.
-        This will simplify and speed up the development cycle,
-        but requires a (small and fixed) amount of dynamic memory
-        allocation.
-        It could also be interesting to "save" the schedule
-        computation in some persistent form,
-        so startup costs are lower too.</P>
-
-        <P>The current design contemplates a configuration run where
-        a global consumer accumulates the QoS requirements of all the
-        objects;
-        next, an external utility is used to force a computation and
-        save of the schedule.
-        In future executions
-        the global scheduler pre-loads this schedule,
-        the clients simply download the precomputed schedule,
-        and all scheduling queries go to a local scheduling service,
-        without any further contact with the global instance
-        (see the sketch after this list).</P>
- </DD>
-
-      <DT><EM>Users have no control over service
-          collocation</EM></DT>
-      <DD>
-        The user should have complete control over service
-        collocation, using the ACE Service Configurator;
-        see the configuration sketch after this list.
- </DD>
-
- </DL>
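-
-    <P>As a sketch of the configuration-run flow described above,
-      consider the following C++ fragment.
-      It is only an illustration:
-      <CODE>Schedule_Entry</CODE>, <CODE>save_schedule</CODE>, and
-      <CODE>load_schedule</CODE> are hypothetical names,
-      not part of TAO's <CODE>RtecScheduler</CODE> interfaces.</P>
-
-    <PRE>
-#include &lt;cstdio>
-
-// Hypothetical stand-in for one precomputed dispatch entry.
-struct Schedule_Entry
-{
-  long rt_info;       // handle of the RT_Info this entry describes
-  long priority;      // precomputed OS thread priority
-  long subpriority;   // ordering within the same priority level
-};
-
-// Config run: an external utility forces the global scheduler to
-// compute the schedule, then saves it in some persistent form.
-int save_schedule (const Schedule_Entry *entries, unsigned count,
-                   const char *path)
-{
-  std::FILE *out = std::fopen (path, "wb");
-  if (out == 0)
-    return -1;
-  std::fwrite (entries, sizeof (Schedule_Entry), count, out);
-  return std::fclose (out);
-}
-
-// Normal execution: each client downloads (here: loads) the
-// precomputed schedule once; afterwards all scheduling queries go
-// to a local scheduling service, never to the global instance.
-unsigned load_schedule (Schedule_Entry *entries, unsigned max_count,
-                        const char *path)
-{
-  std::FILE *in = std::fopen (path, "rb");
-  if (in == 0)
-    return 0;
-  unsigned n = std::fread (entries, sizeof (Schedule_Entry), max_count, in);
-  std::fclose (in);
-  return n;
-}
-    </PRE>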
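-
-    <P>Similarly, the control over collocation we have in mind would
-      be expressed through the ACE Service Configurator.
-      In the sketch below the service name <CODE>EC_Service</CODE>,
-      the library <CODE>EC</CODE>, and the factory function
-      <CODE>_make_EC_Service</CODE> are made up for illustration:</P>
-
-    <PRE>
-#include "ace/OS_main.h"
-#include "ace/Service_Config.h"
-
-int ACE_TMAIN (int argc, ACE_TCHAR *argv[])
-{
-  // Process the svc.conf file named on the command line, if any.
-  if (ACE_Service_Config::open (argc, argv) != 0)
-    return 1;
-
-  // A service can also be configured from a directive string;
-  // here we collocate the (imaginary) EC_Service in this process:
-  ACE_Service_Config::process_directive
-    (ACE_TEXT ("dynamic EC_Service Service_Object * EC:_make_EC_Service() \"\""));
-
-  return 0;
-}
-    </PRE>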
-
- <H3>Examples</H3>
-
- <P>For general documentation on the Event Service please read
- <A HREF="http://www.cs.wustl.edu/~schmidt/oopsla.ps.gz">
- The Design and Performance of a Real-time CORBA Event
- Service</A>.
-
-  <P>The simplest test for the Event Channel is
-    <CODE>Event_Latency</CODE>;
-    below are the basic instructions to run it:</P>
-
- <OL>
-      <LI> Compile everything under <CODE>$TAO_ROOT/orbsvcs</CODE>; this
-        requires, obviously, <CODE>$TAO_ROOT/tao</CODE> and
-        the IDL compiler in <CODE>$TAO_ROOT/TAO_IDL</CODE>.</LI>
-
-      <LI> Run the naming service, the scheduling service, the event
-        service, and the test in
-        <CODE>$TAO_ROOT/orbsvcs/tests/Event_Latency</CODE>;
-        remember to give a different port to each one,
-        using the <CODE>-ORBport</CODE> option
-        (see the sketch after this list).</LI>
-
- <LI> If you want real-time behavior on Solaris you may need to run
- these programs as root; on the other hand, this particular
- example really has no priority inversion, since only one
- thread runs at a time.</LI>
- </OL>
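-
-  <P>As a minimal sketch of step 2: TAO's <CODE>-ORB*</CODE> options,
-    including <CODE>-ORBport</CODE>, are consumed by
-    <CODE>CORBA::ORB_init</CODE>.
-    The fragment below assumes a TAO recent enough to use native C++
-    exceptions and elides all the interesting work:</P>
-
-  <PRE>
-#include "tao/corba.h"
-
-int main (int argc, char *argv[])
-{
-  try
-    {
-      // ORB_init() consumes -ORB* arguments such as "-ORBport 10020".
-      CORBA::ORB_var orb = CORBA::ORB_init (argc, argv);
-
-      // ... resolve the Naming Service, connect to the EC, run ...
-
-      orb->destroy ();
-    }
-  catch (const CORBA::Exception &)
-    {
-      return 1;
-    }
-  return 0;
-}
-  </PRE>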
-
-  <P>Another example is <CODE>EC_Multiple</CODE>;
-    please check the README file in
-    <CODE>$TAO_ROOT/orbsvcs/tests/EC_Multiple</CODE> for
-    further details.</P>
-
- <H3>Features in previous releases</H3>
-
-<UL>
- <LI>
-    When several suppliers and consumers are distributed over the
-    network it could be nice to exploit locality and have a
-    separate Event Channel in each process (or host).
-    Only when an event is required by some remote consumer does it
-    need to be sent through the network. <P>
-
-    The basic architecture to achieve this seems very simple:
-    each Event Channel has a proxy that connects to its EC peers,
-    providing a "merge" of its (local) consumer subscriptions as
-    its own subscription list. <P>
-
-    Locally the proxy connects as a supplier,
-    publishing all the events it has registered for. <P>
-
-    To avoid event looping, the events carry a time-to-live field
-    that is decremented each time the event goes through a proxy;
-    when the TTL reaches zero the event is not propagated by the
-    proxy (see the sketch after this list). <P>
-
-    In the current release an experimental implementation is
-    provided;
-    it basically hardcodes all the subscriptions and publications,
-    and we are researching how to automatically build the
-    publication list.<P>
-
-  <LI> <P>
-    We use the COS Time Service types (not the services) to
-    specify time for the Event Service and the Scheduling Service;
-    see the conversion example after this list.<P>
-</UL>
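-
-  <P>The time-to-live check is sketched below;
-    <CODE>Event</CODE> and <CODE>EC_Proxy</CODE> are hypothetical
-    simplifications of the real <CODE>RtecEventComm</CODE> types:</P>
-
-  <PRE>
-// Hypothetical simplification of the proxy's TTL check; real events
-// are RtecEventComm types carrying many more fields.
-struct Event
-{
-  int type;
-  int ttl;   // decremented on every hop through an EC proxy
-};
-
-class EC_Proxy
-{
-public:
-  // Push an event received from a peer EC towards local consumers.
-  void push (Event e)
-  {
-    // Decrement the TTL first; once it reaches zero the event is
-    // dropped, which prevents loops between federated channels.
-    if (--e.ttl &lt;= 0)
-      return;
-    this->deliver_locally (e);
-  }
-
-private:
-  void deliver_locally (const Event &)
-  {
-    // Stand-in for pushing the event into the local Event Channel.
-  }
-};
-  </PRE>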
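-
-  <P>As a reminder of the units involved:
-    <CODE>TimeBase::TimeT</CODE> counts intervals of 100 nanoseconds.
-    The helper below is ours, not TAO's, and the header name may vary
-    between TAO releases:</P>
-
-  <PRE>
-#include "tao/TimeBaseC.h"
-
-// 1 msec == 1,000,000 nsec == 10,000 units of 100 nsec.
-TimeBase::TimeT
-msec_to_TimeT (unsigned long msec)
-{
-  return (TimeBase::TimeT) msec * 10000;
-}
-  </PRE>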
-
- </BODY>
-</HTML>