Diffstat (limited to 'ACE/TAO/docs/rtcorba/architecture.html')
-rw-r--r--  ACE/TAO/docs/rtcorba/architecture.html  117
1 files changed, 117 insertions, 0 deletions
diff --git a/ACE/TAO/docs/rtcorba/architecture.html b/ACE/TAO/docs/rtcorba/architecture.html
new file mode 100644
index 00000000000..b06460a079d
--- /dev/null
+++ b/ACE/TAO/docs/rtcorba/architecture.html
@@ -0,0 +1,117 @@
+<html>
+
+<!-- $Id$ -->
+<head>
+<meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1">
+<title>TAO Real-Time Architecture</title>
+<meta name="GENERATOR" content="Microsoft FrontPage 4.0">
+</head>
+
+<body>
+
+<h3 align="center">TAO Real-Time Architecture&nbsp;</h3>
+<p>This page describes and compares the two main ORB designs we considered for
+supporting Real-Time CORBA 1.0 in TAO.  The first design, codenamed <i>reactor-per-lane</i>
+and shown in Figure 1, was chosen for our initial implementation.  The
+second design, <i>queue-per-lane</i>, may be implemented in the future as
+part of a project evaluating alternative ORB architectures for Real-Time
+CORBA.</p>
+<h3 align="center">Design I: Reactor-per-Lane</h3>
+In this design, each threadpool lane has its own reactor, acceptor, connector
+and connection cache.  Both I/O and application-level processing for a
+request happen in the same threadpool thread: there are no context switches, and
+the ORB does not create any internal threads.  Objects registered with any
+Real-Time POA that does not have the <i>ThreadpoolPolicy</i> are serviced by the <i>default
+threadpool</i>, which is the set of all the application threads that invoked
+<CODE>orb-&gt;run ()</CODE>.
+<p align="center"><img border="0" src="reactor-per-lane.gif" width="407" height="333"></p>
+<p align="center">Figure 1: Reactor-per-lane</p>
+<p>When a Real-Time POA creates an IOR, it includes one or more of its
+threadpool's acceptor endpoints in that IOR according to the following
+rules.  If the POA's priority model is <i>server declared</i>, the ORB uses the
+endpoint from the lane whose priority is equal to the priority of the target
+object.  If the priority model is <i>client propagated</i>, all endpoints from the POA's
+threadpool are included in the IOR.  Finally, if the <i>PriorityBandedConnectionPolicy</i>
+is set, endpoints from the threadpool lanes whose priorities fall into one of the specified
+priority bands are selected.  The endpoints selected according to these
+rules, along with their corresponding priorities, are stored in the IOR using a special
+<i>TaggedComponent</i>, <CODE>TAO_TAG_ENDPOINTS</CODE>.</p>
+<p>During each invocation, to obtain a connection to the server, the client-side ORB
+selects one of these endpoints based on the effective policies.  For example, if the
+object uses the <i>client propagated</i> priority model, the ORB selects the endpoint
+whose priority is equal to the priority of the client thread making the
+invocation.  If an endpoint of the required priority is not available, either on the
+client side during an invocation or on the server side during IOR creation, the system
+has not been configured properly and the ORB throws an exception.
+These rules ensure that when a threadpool with lanes is
+used, client requests are processed by a thread of the desired priority from the very
+beginning, and priority inversions are minimized.</p>
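+<p>The following sketch shows how these pieces fit together in application
+code: a server creates a two-lane threadpool and an RT POA with the
+<i>client propagated</i> priority model, and a client sets its <CODE>RTCurrent</CODE>
+priority before invoking, so the ORB picks the matching lane endpoint.  The
+priority values, pool sizes, and the <CODE>root_poa</CODE>, <CODE>poa_manager</CODE>,
+and <CODE>server</CODE> variables are illustrative assumptions.</p>
+<pre>
+// Server side: a threadpool with two lanes, one per priority level.
+CORBA::Object_var obj = orb-&gt;resolve_initial_references ("RTORB");
+RTCORBA::RTORB_var rt_orb = RTCORBA::RTORB::_narrow (obj.in ());
+
+RTCORBA::ThreadpoolLanes lanes (2);
+lanes.length (2);
+lanes[0].lane_priority = 10; lanes[0].static_threads = 2; lanes[0].dynamic_threads = 0;
+lanes[1].lane_priority = 20; lanes[1].static_threads = 2; lanes[1].dynamic_threads = 0;
+
+RTCORBA::ThreadpoolId pool =
+  rt_orb-&gt;create_threadpool_with_lanes (0,            // default stacksize
+                                        lanes,
+                                        false,        // no thread borrowing
+                                        false, 0, 0); // no request buffering
+
+// CLIENT_PROPAGATED means all lane endpoints go into IORs created
+// by this POA (the second rule above).
+CORBA::PolicyList policies (2);
+policies.length (2);
+policies[0] = rt_orb-&gt;create_priority_model_policy (RTCORBA::CLIENT_PROPAGATED, 10);
+policies[1] = rt_orb-&gt;create_threadpool_policy (pool);
+
+PortableServer::POA_var rt_poa =
+  root_poa-&gt;create_POA ("RT_POA", poa_manager.in (), policies);
+
+// Client side: this invocation is sent to the endpoint whose lane
+// priority matches the RTCurrent priority of the calling thread.
+CORBA::Object_var c = orb-&gt;resolve_initial_references ("RTCurrent");
+RTCORBA::Current_var rt_current = RTCORBA::Current::_narrow (c.in ());
+rt_current-&gt;the_priority (20);
+server-&gt;method ();  // hypothetical target object
+</pre>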
+<p>Some applications need a less rigid, more dynamic environment.  They do
+not know all of their resource requirements in advance, or cannot afford the cost
+of configuring all of the resources ahead of time, and they have a greater tolerance
+for priority inversions.  For such applications, threadpools <i>without</i> lanes are
+the way to go.  In TAO, threadpools without lanes have different semantics than their
+<i>with-lanes</i> counterparts: pools without lanes have a single acceptor endpoint
+used in all IORs, and their threads change priorities on the fly, as necessary, to
+service all types of requests.</p>
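+<p>Such a pool is created with the standard <CODE>create_threadpool</CODE>
+operation instead of <CODE>create_threadpool_with_lanes</CODE>.  A minimal
+sketch, with illustrative values and the same hypothetical <CODE>rt_orb</CODE>
+as above:</p>
+<pre>
+// A threadpool *without* lanes, for the dynamic use case described above.
+RTCORBA::ThreadpoolId pool =
+  rt_orb-&gt;create_threadpool (0,      // default stacksize
+                             4,      // static threads
+                             2,      // dynamic threads
+                             15,     // default priority
+                             false,  // no request buffering
+                             0, 0);
+// A single acceptor endpoint is used in all IORs; pool threads adjust
+// their priorities on the fly to service each request.
+</pre>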
+<h3 align="center">Design II: Queue-per-Lane</h3>
+In this design, each threadpool lane has its own request queue.  There is a separate
+I/O layer, which is shared by all lanes in all threadpools.  The I/O layer has one
+set of resources - reactor, connector, connection cache, and I/O thread(s) - for each priority
+level used in the threadpools.  I/O layer threads perform I/O processing
+and demultiplexing, and threadpool threads are used for application-level request
+processing.
+<p align="center"><img border="0" src="queue-per-lane.gif" width="387" height="384"></p>
+<p align="center">Figure 2: Queue-per-lane</p>
+<p>A global acceptor listens for connections from clients.  Once a
+connection is established, it is moved to the appropriate reactor during the
+first request, when its priority becomes known.  Threads in each lane block on a
+condition variable, waiting for requests to show up in their queue.  I/O
+threads read incoming requests, determine their target POA and its threadpool,
+and deposit the requests into the right queue for processing.</p>
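+<p>Since <i>queue-per-lane</i> is not implemented, the following is only a
+hypothetical sketch of the lane-side dispatching just described.  It uses
+<CODE>ACE_Task</CODE>, whose built-in <CODE>ACE_Message_Queue</CODE> blocks
+readers on a condition variable internally; the <CODE>Lane_Worker</CODE>
+class is illustrative, not a TAO class.</p>
+<pre>
+#include "ace/Task.h"
+#include "ace/Message_Block.h"
+
+class Lane_Worker : public ACE_Task&lt;ACE_SYNCH&gt;  // owns a message queue
+{
+public:
+  virtual int svc (void)
+  {
+    for (;;)
+      {
+        ACE_Message_Block *request = 0;
+        // Blocks until an I/O thread deposits a request in our queue.
+        if (this-&gt;getq (request) == -1)
+          break;
+        this-&gt;process (request);  // application-level processing
+        request-&gt;release ();
+      }
+    return 0;
+  }
+
+private:
+  void process (ACE_Message_Block *request) { /* upcall elided */ }
+};
+
+// An I/O thread, after reading a request and determining its target
+// threadpool lane, deposits it with:  lane-&gt;putq (request);
+</pre>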
+<h3 align="center">Design Comparison</h3>
+<i>Reactor-per-lane</i> advantages:
+<ul>
+ <li><b>Better performance</b><br>
+ Unlike in <i>queue-per-lane</i>, each request is serviced in a single
+ thread, so there are no context switches, and there are opportunities for stack and thread-specific storage (TSS) optimizations.</li>
+ <li><b>No priority inversions during connection establishment</b><br>
+ In <i>reactor-per-lane</i>, the threads accepting connections are the same
+ threads that will service the requests coming in on those connections, <i>i.e.</i>,
+ the priority of the accepting thread is equal to the priority of requests on the
+ connection.  In <i>queue-per-lane</i>,
+ however, because of the global acceptor, there is no differentiation between
+ high-priority and low-priority clients until the first request.</li>
+ <li><b>Control over all threads with the standard threadpool API</b><br>
+ In <i>reactor-per-lane</i>, the ORB does not create any threads of its
+ own, so the application programmer has full control over the number and
+ properties of all the threads via the Real-Time CORBA threadpool
+ APIs.  <i>Queue-per-lane</i>, on the other hand, has I/O layer threads,
+ so either a proprietary API has to be added or the application programmer will
+ not have full control over all the thread resources.</li>
+</ul>
+<p>
+<i>Queue-per-lane</i> advantages:</p>
+<ul>
+ <li><b>Better feature support and adaptability</b><br>
+ <i>Queue-per-lane</i> supports ORB-level request buffering (see the
+ sketch after this list), while <i>reactor-per-lane</i>
+ can only provide buffering in the transport.  With its two-layer
+ structure, <i>queue-per-lane</i> is a more decoupled design than <i>reactor-per-lane</i>,
+ making it easier to add new features or introduce changes.</li>
+ <li><b>Better scalability</b><br>
+ Reactor, connector and connection cache are <i>per-priority</i> resources in
+ <i>queue-per-lane</i>, and <i>per-lane</i> resources in <i>reactor-per-lane</i>.
+ If a server is configured with many threadpools that have similar lane
+ priorities, <i>queue-per-lane</i> might require significantly fewer of these
+ resources.  It also uses fewer acceptors, and its IORs
+ are a bit smaller.</li>
+ <li><b>Easier piece-by-piece integration into the ORB</b><br>
+ Ease of implementation and integration are important practical
+ considerations in any project.  Because of its two-layer structure, <i>queue-per-lane</i>
+ is an easier design to implement, integrate and test piece-by-piece.</li>
+</ul>
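+<p>The ORB-level request buffering mentioned in the first advantage above is
+exposed through the standard threadpool creation operations.  A hypothetical
+sketch, reusing the <CODE>rt_orb</CODE> and <CODE>lanes</CODE> from the
+earlier example (all values illustrative):</p>
+<pre>
+RTCORBA::ThreadpoolId pool =
+  rt_orb-&gt;create_threadpool_with_lanes (0,     // default stacksize
+                                        lanes,
+                                        false, // no thread borrowing
+                                        true,  // allow request buffering
+                                        64,    // max buffered requests
+                                        0);    // max request buffer size
+</pre>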
+<hr>
+<p><i>Last modified: $Date$</i></p>
+</body>
+</html>