<HTML>
<HEAD>
<META NAME="GENERATOR" CONTENT="Adobe PageMill 2.0 Mac">
<TITLE>Configuring TAO's Components</TITLE>
</HEAD>
<!-- $Id$ -->
<BODY text = "#000000"
link="#000fff"
vlink="#ff0f0f"
bgcolor="#ffffff">
<HR><P>
<H3 ALIGN=CENTER>Configuring TAO's Components</H3>
<H3>Overview</H3>
<p>As described in the <a href="Options.html">options</a>
documentation, various components in TAO can be customized by
specifying options for those components. This document illustrates
how to combine these options in order to affect ORB behavior and
performance, particularly its <A
HREF="http://www.cs.wustl.edu/~schmidt/CACM-arch.ps.gz">concurrency
model</A>.</P>
<p>TAO configures itself using the <A
HREF="http://www.cs.wustl.edu/~schmidt/O-Service-Configurator.ps.gz">ACE
Service Configurator</a> framework. Thus, options are specified in
the familiar <code>svc.conf</code> file (if you want to use a
different file name, use the <a
href="Options.html#svcfonf"><code>-ORBsvcconf</code></a> option).</p>
<HR><P>
<H3>Roadmap</H3>
<blockquote>
<P>Details for the following configurations are provided.</P>
<UL>
<li><b><a href="#comp">Configurating key components</a>:</b>
<ul>
<li><a href="#concurrency">Server Concurrency Strategy.</a>
<li><a href="#orb">ORB and other resources.</a>
<li><a href="#poa">POA.</a>
<li><a href="#coltbl">Collocation Table.</a>
<li><a href="#iiopprofile">Forwarding IIOP Profile</a>
</ul>
<li><b><a href="#examples">Configuration examples</a></b>
<ul>
<LI><A HREF="#reactive">Single-threaded, reactive model.</A>
<LI><A HREF="#tpc">Multiple threads, thread-per-connection model.</A>
<LI><A HREF="#multiorb">Multiple threads, ORB-per-Reactor-thread model.</A>
<LI><A HREF="#multiorb-tpc">Multiple threads, ORB-per-thread,
thread-per-connection model.</A>
<li><a href="#tpool">Multiple threads, thread-pool model.</a>
(Not yet implemented.)
<li><a href="#multiorb-tpool">Multiple threads,
ORB-per-thread, thread-pool model.</a> (Not yet implemented.)
        <li>Each configuration is described using the following information:
<table border=2 width="70%" cellspacing="2" cellpadding="0">
<tr align=left>
<th> Typical Use </th>
<td> A brief description of the scenario and its typical use. </td>
</tr>
<tr align=left>
<th>Number of Threads</th>
<td>The number of threads used by ORB-related activities.</td>
</tr>
<tr align=left>
<th>Thread Creator</th>
<td>Identifies the creator of the threads discussed above.</td>
</tr>
<tr align=left>
<th>Resource Location</th>
<td>Where information on various resources is stored.</td>
</tr>
<tr align=left>
<th>Thread task</th>
<td>Describes what task is undertaken for each thread.</td>
</tr>
<tr align=left>
<th>Options</th>
<td>Specifies the options for each service in order to utilize this configuration.</td>
</tr>
</table>
</ul>
<li><b><a href="#homogenous">Configuration for homogenous
systems</a></b>
<UL>
<LI><A HREF="#homogenous_compile">Compile time options</A></LI>
<LI><A HREF="#homogenous_runtime">Runtime time</A></LI>
</UL>
</LI>
</UL>
</blockquote>
<HR><P>
<h3>Configuring key components<a name="comp"></a></h3>
<ul>
<li><b><a name="concurrency">Server concurrency strategy</a></b>
specifies the concurrency strategy an ORB uses. It says nothing
about how many ORBs (or, threads) are there in a process.<p>
<ul>
    <li><code>reactive</code>: The ORB handles requests
        reactively, i.e., the ORB runs in one thread and services
        multiple requests/connections simultaneously using the
        "<code>select</code>" call. You can have multiple ORBs
        accepting requests reactively, each running in a separate
        thread.<p>

    <li><code>thread-per-connection</code>: The ORB handles each new
        connection by spawning a new thread whose job is to
        service the requests arriving on that connection. The new
        thread inherits all properties from the ORB thread that
        spawned it (see below).<p>
<li><code>thread-pool</code> (not yet implemented): ... to be
continued ... <p>
</ul><p>
<li><b><a name="orb">ORB and other resources.</a></b><p>
<ul>
<li><code>global</code>: There's only one ORB process-wide.
<code>ORB_init () </code>must be called only once. Every
thread accesses the same ORB. <p>
    <li><code>tss</code>: When using a <code>tss</code> ORB, the
        programmer is responsible for spawning the ORB threads and
        setting up each ORB by calling <code>ORB_init ()</code> in
        each ORB thread (see the sketch below). Any thread spawned by
        an ORB (e.g., through thread-per-connection) shares the
        resources of the spawning ORB.<p>
</ul><p>
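<p>The following sketch illustrates the <code>tss</code> case: each
ORB thread calls <code>CORBA::ORB_init ()</code> itself, so the
resulting ORB uses thread-specific resources. The thread entry point
and argument handling below are illustrative only, and the exact
<code>ORB_init ()</code> signature (e.g., whether a
<code>CORBA::Environment</code> argument is needed) depends on your
TAO version.</p>
<pre>
// Sketch only: per-thread ORB setup when "-ORBresources tss" is used.
#include "tao/corba.h"

void *orb_thread_entry (void *)
{
  // Each ORB thread initializes its own ORB; with tss resources the
  // reactor, allocators, etc. live in thread-specific storage.
  int argc = 0;
  char **argv = 0;
  CORBA::ORB_var orb = CORBA::ORB_init (argc, argv);

  // Handle requests reactively in this thread.
  orb->run ();
  return 0;
}
</pre>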
<li><b><a name="poa">POA.</a></b><p>
<ul>
    <li><code>global</code>: All ORBs share the same POA. The
        advantage of using a global POA is that once an object is
        registered with the POA under one ORB, it can be externalized
        through the other ORBs.<p>

    <li>per ORB (<code>tss</code>): Each ORB has its own POA, which
        means the programmer should also instantiate the POA for
        each ORB (otherwise a default RootPOA gets created, which
        might not be what you want and is therefore discouraged).<p>
</ul><p>
<li><b><a name="coltbl">Collocation Table:</a></b> <sup>*</sup>Care
must be taken when using CORBA objects to control the ORB
directly. For you are actually executing the collocated object,
not in the object's ORB context, but in the calling ORB's
context.<p>
<ul>
    <li><code>global</code>: The process keeps a global collocation
        table containing tuples of listening endpoints and their
        corresponding RootPOAs. <p>
    <LI>per ORB (<code>tss</code>): At the moment, since TAO only
        supports one listening endpoint per ORB, there is no
        per-ORB collocation table. Collocation is checked by
        comparing an object's IIOP profile against the calling
        ORB's listening endpoint.<p>
</ul><p>
<li><b><a name="iiopprofile">Forwarding IIOP Profile:</a></b>
      When multiple threads use the same <code>CORBA::Object</code> and
      forwarding is in effect, the forwarding
      <code>IIOP_Profile</code> (which is part of the
      <code>IIOP_Object</code>, which in turn is part of the
      <code>CORBA::Object</code>) must be protected against concurrent
      access. Therefore a mutex lock is used by default. This policy
      can be deactivated with the switch
      <code>-ORBiiopprofilelock</code> by specifying
      <code>-ORBiiopprofilelock null</code> (see the example below).
      A motivation for doing so might be performance, e.g., when no
      forwarding is used or when no threads share access to the same
      <code>CORBA::Object</code>. Deactivating the policy makes the
      ORB use a null mutex, which introduces only a very small
      overhead compared to a regular mutex lock.
<p>
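      For example, assuming the option is accepted by the client-side
      strategy factory (the service name below is an assumption; check
      the <a href="Options.html">options</a> documentation for the
      exact name in your release), the lock could be disabled with a
      line such as:
<pre>
# svc.conf (sketch): use a null mutex for the forwarding IIOP profile.
# Only safe when no forwarding is used or when no CORBA::Object is
# shared between threads.  The service name is an assumption.
static Client_Strategy_Factory "-ORBiiopprofilelock null"
</pre>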
</ul>
<HR><P>
<H3>Configuration Example<a name="examples"></a></H3>
<UL>
<LI>Single-threaded, reactive model.<A NAME="reactive"></A>
<p>
<table border=2 width="90%" cellspacing="2" cellpadding="0">
<tr align=left>
<th>Typical Use</th>
<td>
This is the default configuration of TAO, where one thread handles
requests from multiple clients via a single Reactor. It is
appropriate when the requests (1) take a fixed, relatively uniform
amount of time and (2) are largely compute bound.
</td>
</tr>
<tr align=left>
<th>Number of Threads</th>
<td>1</td>
</tr>
<tr align=left>
<th>Thread Creator</th>
<td>OS or whoever creates the main ORB thread in a process.</td>
</tr>
<tr align=left>
<th>Resource Location</th>
<td>Resources are stored process-wide.</td>
</tr>
<tr align=left>
<th>Thread task</th>
<td>The single thread processes all connection requests and CORBA messages.</td>
</tr>
<tr align=left>
<th>Options</th>
<td>
<code>TAO_Resource_Manager</code>: <code>-ORBresources global</code><br>
<code>TAO_Server_Strategy_Factory</code>: <code>-ORBconcurrency reactive</code>
</td>
</tr>
</table>
</p>
<LI>Multiple threads, thread-per-connection model.<A NAME="tpc"></A>
<p>
<table border=2 width="90%" cellspacing="2" cellpadding="0">
<tr align=left>
<th>Typical Use</th>
    <td>This configuration spawns a new thread to serve requests
        from each new connection. This approach works well when
        multiple connections are active simultaneously and the
        requests on each connection may take a fair amount of time to
        execute.</td>
  </tr>
<tr align=left>
<th>Number of Threads</th>
<td>1 plus the number of connections.</td>
</tr>
<tr align=left>
<th>Thread Creator</th>
    <td>The programmer must set up the main thread, which is
        responsible for creating new threads for new connections.</td>
</tr>
<tr align=left>
<th>Resource Location</th>
    <td>Process-wide.</td>
</tr>
<tr align=left>
<th>Thread task</th>
<td>The main thread handles new connections and spawns new
threads for them. Other threads handle requests for
established connections.</td>
</tr>
<tr align=left>
<th>Options</th>
<td>
<code>TAO_Resource_Manager</code>: <code>-ORBresources global</code><br>
<code>TAO_Server_Strategy_Factory</code>: <code>-ORBconcurrency thread-per-connection</code>
</td>
</tr>
</table>
</p>
<LI>Multiple threads, ORB-per-thread model.<A NAME="multiorb"></A>
<p>
<table border=2 width="90%" cellspacing="2" cellpadding="0">
<tr align=left>
<th>Typical Use</th>
    <td>In this configuration there are multiple ORBs per process,
        each running in its own thread. Each thread handles requests
        reactively. It's good for hard real-time applications that require
        different thread priorities for the various ORBs.</td>
</tr>
<tr align=left>
<th>Number of Threads</th>
<td>The number of ORBs.</td>
</tr>
<tr align=left>
<th>Thread Creator</th>
<td>The main process (thread).</td>
</tr>
<tr align=left>
<th>Resource Location</th>
<td>Thread specific.</td>
</tr>
<tr align=left>
<th>Thread task</th>
    <td>Each thread services the requests arriving at its associated ORB.</td>
</tr>
<tr align=left>
<th>Options</th>
<td>
<code>TAO_Resource_Manager</code>: <code>-ORBresources tss</code><br>
<code>TAO_Server_Strategy_Factory</code>: <code>-ORBconcurrency reactive</code>
</td>
</tr>
</table>
</p>
  <LI>Multiple threads, ORB-per-thread, thread-per-connection
      model.<A NAME="multiorb-tpc"></A>
<p>
<table border=2 width="90%" cellspacing="2" cellpadding="0">
<tr align=left>
<th>Typical Use</th>
    <td>This approach provides a range of thread priorities plus connections
        that don't interfere with each other.</td>
</tr>
<tr align=left>
<th>Number of Threads</th>
<td>Number of ORBs plus number of connections.</td>
</tr>
<tr align=left>
<th>Thread Creator</th>
    <td>The main thread creates the threads running the ORBs. These, in
        turn, create the connection-handling threads.</td>
</tr>
<tr align=left>
<th>Resource Location</th>
<td>Thread specific.</td>
</tr>
<tr align=left>
<th>Thread task</th>
    <td>There are ORB threads, which handle connection requests,
        and handler threads, which service requests from
        established connections.</td>
</tr>
<tr align=left>
<th>Options</th>
<td>
<code>TAO_Resource_Manager</code>: <code>-ORBresources tss</code><br>
<code>TAO_Server_Strategy_Factory</code>: <code>-ORBconcurrency thread-per-connection</code>
</td>
</tr>
</table>
</p>
<LI><A NAME="tpool">Multiple threads, thread-pool model.</A>
(Not yet implemented.)
<p>
<table border=2 width="90%" cellspacing="2" cellpadding="0">
<tr align=left>
<th>Typical Use</th>
<td>This model implements a highly optimized thread pool that
minimizes context switching, synchronization, dynamic memory
allocations, and data movement between threads.</td>
</tr>
<tr align=left>
<th>Number of Threads</th>
<td>The number of threads used by ORB-related activities.</td>
</tr>
<tr align=left>
<th>Thread Creator</th>
<td>Identifies the creator of the threads discussed above.</td>
</tr>
<tr align=left>
<th>Resource Location</th>
<td>Where information on various resources is stored.</td>
</tr>
<tr align=left>
<th>Thread task</th>
<td>Describes what task is undertaken for each thread.</td>
</tr>
</table>
</p>
<LI>Multiple threads, ORB-per-thread, thread-pool model.<A
NAME="multiorb-tpool"></A> (Not yet implemented.)
<p>
<table border=2 width="90%" cellspacing="2" cellpadding="0">
<tr align=left>
<th>Typical Use</th>
<td>A brief description of the scenario and its typical use.</td>
</tr>
<tr align=left>
<th>Number of Threads</th>
<td>The number of threads used by ORB-related activities.</td>
</tr>
<tr align=left>
<th>Thread Creator</th>
<td>Identifies the creator of the threads discussed above.</td>
</tr>
<tr align=left>
<th>Resource Location</th>
<td>Where information on various resources is stored.</td>
</tr>
<tr align=left>
<th>Thread task</th>
<td>Describes what task is undertaken for each thread.</td>
</tr>
</table>
</p>
</UL>
<HR><P>
<h3>Configuration for homogeneous systems<a name="homogenous"></a></h3>
<UL>
<LI><P><B>Compile time options<a name="homogenous_compile"></a></B></P>
    <P>Many real-time applications run in homogeneous environments.
      TAO can take advantage of this fact by simplifying the
      server-side demarshaling;
      to enable this feature you have to edit the
      <CODE>$TAO_ROOT/tao/orbconf.h</CODE> file and enable the macro
      <CODE>TAO_DISABLE_SWAP_ON_READ</CODE>.
    </P>
    <P>In such systems it is also common for the server and the
      client to start up and shut down simultaneously;
      in those circumstances there is no need to check the
      timestamps in the POA, and
      another macro (<CODE>POA_NO_TIMESTAMP</CODE>) can be used for
      this purpose.
    </P>
    <P>Users running on embedded systems may also need to modify
      TAO's default options;
      the macros <CODE>TAO_DEFAULT_RESOURCE_FACTORY_ARGS</CODE>,
      <CODE>TAO_DEFAULT_CLIENT_STRATEGY_FACTORY_ARGS</CODE>,
      and <CODE>TAO_DEFAULT_SERVER_STRATEGY_FACTORY_ARGS</CODE>
      can be used for those purposes (a sketch of these
      <CODE>orbconf.h</CODE> edits follows below).
      If footprint size is an issue, users may consider writing
      custom strategy factories that only create the right
      strategies; this eliminates the parsing code for the
      different options.
    </P>
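    <P>A minimal sketch of the corresponding
      <CODE>orbconf.h</CODE> edits is shown below; the exact form of
      each entry should follow the existing definitions in that file.
    </P>
<PRE>
// $TAO_ROOT/tao/orbconf.h (sketch of the relevant edits)

// Skip byte swapping during server-side demarshaling -- only valid
// when all communicating peers share the same byte order.
#define TAO_DISABLE_SWAP_ON_READ

// Skip the POA timestamp check -- only valid when clients and
// servers start up and shut down together.
#define POA_NO_TIMESTAMP

// The TAO_DEFAULT_*_FACTORY_ARGS macros can be redefined here as
// well, following the format of their existing definitions.
</PRE>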
</LI>
<LI><P><B>Runtime options<a name="homogenous_runtime"></a></B></P>
    <P>If the only ORB running is TAO and there is no need to be
      IIOP interoperable, the option <CODE>-ORBiioplite</CODE> can
      be used to reduce the message size and the processing time.
    </P>
    <P>Some embedded systems run without the benefit of a DNS
      server; in that case they can use the
      <CODE>-ORBdotteddecimaladdresses</CODE> option,
      so the ORB will avoid the use of hostnames in the profiles it
      generates and
      clients don't need to do any name resolution.
      The compile-time define
      <CODE>TAO_USES_DOTTED_DECIMAL_ADDRESSES</CODE>
      in <CODE>$TAO_ROOT/tao/orbconf.h</CODE> can be enabled to make
      this the default behavior (see the sketch below).
    </P>
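    <P>A sketch of that edit (whether the macro takes a value should
      follow its existing entry in <CODE>orbconf.h</CODE>):
    </P>
<PRE>
// $TAO_ROOT/tao/orbconf.h (sketch): make dotted-decimal addresses the
// default, so generated profiles carry IP addresses, not hostnames.
#define TAO_USES_DOTTED_DECIMAL_ADDRESSES
</PRE>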
</LI>
</UL>
<HR>
<H3 ALIGN=CENTER>Hints</H3>
<P>
  Choosing the right configuration is hard and,
  of course,
  depends on your application.
  In the following section we attempt to describe some of the
  motivations for TAO's features;
  hopefully this can guide you through the choice of your
  configuration options.
<UL>
<LI>
    <P><B>ORB-per-thread</B>
      The main motivation behind this option
      is to minimize priority inversion:
      since the threads share no ORB resources, no locking is required
      and thus
      priority is preserved in most cases (assuming proper support
      from the OS).
      If you are not too concerned about priority inversion, try to
      use a global ORB;
      using ORB-per-thread has some tradeoffs
      (like calling ORB_init in each thread, more complicated servant
      activation, etc.).
      Some of the problems can be minimized, but they require
      even more careful analysis.
      For example,
      object activation can be simplified by using a global POA;
      the careful reader will wonder how a global POA could be
      useful at all, since it requires locks and thus
      reintroduces priority inversions.
      However, some applications activate all their objects
      beforehand, so locks in the POA are not always needed;
      other applications only activate a few objects after
      startup,
      so they can use a child POA with the right locking policy
      for the dynamic servants and the RootPOA (with no locking)
      for the majority of the servants (see the sketch below).
    </P>
    <P>
      As the reader will note, this is a delicate configuration
      option; the rule of thumb should be <B>not</B> to use
      ORB-per-thread unless it is really required.
    </P>
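    <P>A minimal sketch of the child-POA idea follows. It uses only
      the standard <CODE>create_POA ()</CODE> call and assumes
      <CODE>root_poa</CODE> and <CODE>poa_manager</CODE> have already
      been obtained; the POA name is made up, and any TAO-specific
      policy selecting the child POA's locking strategy would be
      added to the (here empty) policy list.
    </P>
<PRE>
// Sketch: segregate dynamically activated servants into a child POA,
// so the RootPOA (used only for servants activated at startup) does
// not need locking.
CORBA::PolicyList policies;   // a TAO-specific locking policy would go here
PortableServer::POA_var dynamic_poa =
  root_poa->create_POA ("DynamicServants",    // hypothetical name
                        poa_manager.in (),
                        policies);
</PRE>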
</LI>
<LI><B>Collocation tables</B>
    Why would an application want a non-global collocation table?
    If objects are to serve requests only at a well-known priority,
    the application can be configured with the ORB-per-thread
    option, and the object is activated only in the thread (ORB)
    corresponding to the desired priority.
    Using a global table would subvert the priority assignment
    (because calls would run at the priority of the client).
<P></P>
</LI>
<LI><B>Single-threaded vs. Multi-threaded Connection Handlers</B>
The <CODE>Client_Connection_Handler</CODE> is the component in
TAO that writes the requests to the underlying transport
socket;
this is also the component that reads the response back from
the server.
    <P>While waiting for this response, new requests to the local
      ORB can arrive; handling them is the so-called nested upcall
      support.
      TAO supports two mechanisms for handling nested upcalls.
      The default uses the leader-follower model to allow multiple
      threads to wait on a single reactor for several concurrent
      requests;
      sometimes this configuration is overkill, and
      if only one thread uses a reactor at a time a
      lighter-weight implementation can be used.
    </P>
    <P>This configuration is controlled by the
      <CODE>-ORBclientconnectionhandler</CODE> option;
      good opportunities to use this option (see the sketch after the
      list below) are:
    </P>
<UL>
<LI>Single threaded servers</LI>
<LI>Servers running in ORB-per-thread mode</LI>
<LI>Pure clients that will never receive a request</LI>
</UL>
<P></P>
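    <P>A sketch of such a configuration follows; the service name and
      the <CODE>ST</CODE> argument follow later TAO releases and are
      assumptions here, so check the
      <A HREF="Options.html">options</A> documentation for the exact
      spelling your version accepts.
    </P>
<PRE>
# svc.conf (sketch): select the lighter single-threaded wait strategy
# instead of the default leader-follower implementation.
# Service name and argument value are assumptions.
static Client_Strategy_Factory "-ORBclientconnectionhandler ST"
</PRE>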
</LI>
<LI><B>Allocator for input CDR streams</B>
    Normally the application has no access to the input CDR buffer;
    it is only used for the demarshaling of arguments (or results).
    It is almost always better to use the
    "<CODE>-ORBinputcdrallocator tss</CODE>" option, since it
    allocates memory from a thread-specific allocator and does
    not need locks to manage that memory.
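    <P>For example (the service name is taken from the option tables
      above and is an assumption; check the
      <A HREF="Options.html">options</A> documentation for the exact
      name in your release):
    </P>
<PRE>
# svc.conf (sketch): allocate input CDR buffers from a thread-specific
# allocator, avoiding locks on the critical path.
static TAO_Resource_Manager "-ORBinputcdrallocator tss"
</PRE>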
    <P>In some cases the user <I>may</I> gain access to the CDR
      stream buffer:
      TAO makes no copies when demarshaling octet sequences;
      instead, the octet sequence simply points into the CDR buffer.
      Since the octet sequence does not own this buffer, a copy must
      be made if the user wants to keep the buffer after the
      upcall.
    </P>
    <P>The user can, however, increase the reference count on the
      CDR stream buffer, thus extending the lifetime
      of this buffer.
      Still, passing this buffer to another thread and attempting
      to release it in that thread will result in a memory leak
      or corruption.
      Users who want to use this feature of TAO can still do so,
      <B>if</B> they use a global allocator for their input CDR
      stream, but that will introduce extra locking on the
      critical path.
    </P>
</P>
    <P>As the reader can see, this option has limited
      applicability and requires careful consideration of the
      tradeoffs involved.</P>
</LI>
</UL>
<P><HR><P>
Back to the TAO <A HREF="components.html">components documentation</A>.
<!--#include virtual="/~schmidt/cgi-sig.html" -->
</BODY>
</HTML>