author levine <levine@ae88bc3d-4319-0410-8dbf-d08b4c9d3795> 1996-10-21 21:41:34 +0000
committer levine <levine@ae88bc3d-4319-0410-8dbf-d08b4c9d3795> 1996-10-21 21:41:34 +0000
commit a5fdebc5f6375078ec1763850a4ca23ec7fe6458 (patch)
tree bcf0a25c3d45a209a6e3ac37b233a4812f29c732 /FAQ
download ATCD-a5fdebc5f6375078ec1763850a4ca23ec7fe6458.tar.gz
Initial revision
Diffstat (limited to 'FAQ')
-rw-r--r--  FAQ  1877
1 files changed, 1877 insertions, 0 deletions
diff --git a/FAQ b/FAQ
new file mode 100644
index 00000000000..bab85562258
--- /dev/null
+++ b/FAQ
@@ -0,0 +1,1877 @@
+There are many changes and improvements in the new version of ACE.
+The ChangeLog file contains complete details about all of them.
+
+I've tested ACE thoroughly on Solaris 2.3 and 2.4 with the SunC++ 4.x
+compiler and Centerline 2.x. I've also tested it with the SunC++ 3.x
+compiler on the SunOS 4.x platform. However, I've not been able to
+test it on other platforms. If anyone has time to do that and can
+report the results back to me, I'd appreciate it.
+
+Please let me know if you have any questions or comments.
+
+ Doug
+
+----------------------------------------
+
+1. SIGHUP
+
+> 1) Where the heck does the HUP signal get registered for the
+> $WRAPPER_ROOT/tests/Service_Configurator/server stuff? I looked there and
+> in $WRAPPER_ROOT/libsrc/Service_Configurator. No luck. I guess I am
+> just blind from reading.
+
+ Take a look in ./libsrc/Service_Configurator/Service_Config.h.
+The constructor for Service_Config is where it happens:
+
+ Service_Config (int ignore_defaults = 0,
+ size_t size = Service_Config::MAX_SERVICES,
+ int signum = SIGHUP);
+
+----------------------------------------
+2. Multi-threaded Signal_Handler support
+
+> It appears Signal_Handler is
+> not setup for multi-threaded apps. How do you handle signals
+> in different threads? Do I have to put in the hooks in my app or should
+> it go in the Threads arena?
+
+ Ah, good question... My design follows the approach espoused
+by Sun. Basically, they suggest that you implement per-thread signal
+handling atop the basic UNIX signal handlers (or, in the case of
+ACE, the handle_signal() callbacks on Event_Handler subclasses) by
+using the thread id returned by thr_self() to index into a search
+structure containing the handlers. This should be pretty
+straightforward to layer atop the existing ACE Signal_Handler mechanisms.
+However, you might ask yourself whether you really want (1) separate
+signal handler *functionality* in different threads or (2) different
+threads that mask out certain signals. The latter might be easier to
+implement and reason about!
+
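+Here is a minimal sketch of approach (2), i.e., masking signals in the
+worker threads so that only a designated thread fields them. It
+assumes a POSIX threads platform; the worker() entry point is purely
+illustrative and is not part of ACE:
+
+    #include <pthread.h>
+    #include <signal.h>
+
+    void *worker (void *arg)
+    {
+      sigset_t mask;
+      sigemptyset (&mask);
+      sigaddset (&mask, SIGHUP);
+      sigaddset (&mask, SIGINT);
+
+      // Block SIGHUP and SIGINT in this thread; the thread that leaves
+      // them unblocked (or sigwait()s on them) will handle them.
+      pthread_sigmask (SIG_BLOCK, &mask, 0);
+
+      /* ... do the real work ... */
+      return 0;
+    }
+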
+----------------------------------------
+3. Problems compiling ACE with G++
+
+> I substituted -lg++ for -lC in macro_wrappers.GNU and ran make.
+>
+> Most stuff seemed to build. Continually got messages like the following:
+> ld: /usr2/tss/jvm/ACE_wrappers/lib/libASX.a: warning: archive has no table of c
+> ontents; add one using ranlib(1)
+> ld: /usr2/tss/jvm/ACE_wrappers/lib/libThreads.a: warning: archive has no table
+> of contents; add one using ranlib(1)
+> ld: /usr2/tss/jvm/ACE_wrappers/lib/libSPIPE.a: warning: archive has no table of
+> contents; add one using ranlib(1)
+> ld: /usr2/tss/jvm/ACE_wrappers/lib/libASX.a: warning: archive has no table of c
+> ontents; add one using ranlib(1)
+> ld: /usr2/tss/jvm/ACE_wrappers/lib/libThreads.a: warning: archive has no table
+> of contents; add one using ranlib(1)
+> ld: /usr2/tss/jvm/ACE_wrappers/lib/libSPIPE.a: warning: archive has no table of
+> contents; add one using ranlib(1)
+
+> no matter how many times I used ranlib or removed the libraries and re-compiled
+> or whatever. Perhaps these are System V specific and will not work on 4.1.3?
+
+ Yes, that's exactly right. If you look at the files, they all
+contain ifdef's for features that aren't included in the
+./include/makeinclude/wrapper_macros.GNU file. To make this more
+obvious, I've included the following note in the INSTALL file:
+
+ * Sun OS 4.1.x
+
+ Note that on SunOS 4.x you may get warnings from the
+ linker that "archive has no table of contents; add
+ one using ranlib(1)" for certain libraries (e.g.,
+ libASX.a, libThreads.a, and libSPIPE.a). This
+ occurs since SunOS 4.x does not support these features.
+
+> never able to get .so -- assume these are shared libraries that gcc can not
+> deal with.
+
+ Yes, if you use the stock gcc/gas/gnu ld
+compiler/assembler/linker, you won't get shared libraries to work. It
+is possible to hack this by using the "collect" version of g++.
+However, as usual, I strongly advise people to stay away from g++ if
+they want to use shared libraries or templates.
+
+> got some linker errors as follows:
+>
+> g++ -g -DACE_NTRACE -DACE_HAS_MT_SAFE_SOCKETS -DACE_HAS_NO_T_ERRNO -DACE_HAS_
+> OLD_MALLOC -DACE_HAS_POLL -DACE_HAS_SEMUN -DACE_HAS_SETOWN -DACE_HAS_STRBUF_T -
+> DACE_HAS_STREAMS -DACE_HAS_SVR4_DYNAMIC_LINKING -DACE_HAS_TIUSER_H -DACE_HAS_SY
+> S_FILIO_H -DACE_PAGE_SIZE=4096 -DACE_HAS_ALLOCA -DACE_HAS_CPLUSPLUS_HEADERS -DA
+> CE_HAS_SVR4_SIGNAL_T -DACE_HAS_STRERROR -DMALLOC_STATS -I/usr2/tss/jvm/ACE_wrap
+> pers/include -I/usr2/tss/jvm/ACE_wrappers/libsrc/Shared_Malloc -o test_malloc
+> .obj/test_malloc.o -L/usr2/tss/jvm/ACE_wrappers/lib -Bstatic -lSemaphores -lS
+> hared_Malloc -lShared_Memory -lReactor -lThreads -lMem_Map -lLog_Msg -lFIFO -lI
+> PC_SAP -lMisc -lnsl -lg++
+> ld: /usr2/tss/jvm/ACE_wrappers/lib/libThreads.a: warning: archive has no table
+> of contents; add one using ranlib(1)
+> ld: Undefined symbol
+> _free__t6Malloc2Z18Shared_Memory_PoolZ13PROCESS_MUTEXPv
+> _free__t6Malloc2Z17Local_Memory_PoolZ10Null_MutexPv
+> _malloc__t6Malloc2Z18Shared_Memory_PoolZ13PROCESS_MUTEXUl
+> _malloc__t6Malloc2Z17Local_Memory_PoolZ10Null_MutexUl
+> _remove__t6Malloc2Z17Local_Memory_PoolZ10Null_Mutex
+> ___t6Malloc2Z17Local_Memory_PoolZ10Null_Mutex
+> _print_stats__t6Malloc2Z17Local_Memory_PoolZ10Null_Mutex
+> _remove__t6Malloc2Z18Shared_Memory_PoolZ13PROCESS_MUTEX
+> ___t6Malloc2Z18Shared_Memory_PoolZ13PROCESS_MUTEX
+> _print_stats__t6Malloc2Z18Shared_Memory_PoolZ13PROCESS_MUTEX
+> collect2: ld returned 2 exit status
+> gcc: file path prefix `static' never used
+> make[2]: *** [test_malloc] Error 1
+> make[2]: Leaving directory `/usr2/tss/jvm/ACE_wrappers/tests/Shared_Malloc'
+> <======== End all: /usr2/tss/jvm/ACE_wrappers/tests/Shared_Malloc
+
+ That looks like a problem that G++ has with templates. I
+don't know of any reasonable solution to this problem using g++.
+
+> Finally decided there was enough stuff that it looked like I might have some
+> thing so I tried to run some tests and could not find so much as one piece
+> of documentation that might give me some clue about running tests.
+
+You should take a look at the ./tests/Service_Configurator/server/README
+file. That explains how to run the more complicated tests. As for
+the other tests, it is pretty straightforward if you look at the
+./tests/IPC_SAP/SOCK_SAP and ./tests/Reactor/* directory code to
+figure out how to run the tests. I don't have a Q/A department, so
+any documentation has to come from volunteers.
+
+----------------------------------------
+4. Are there any docs or man pages on the Log_Record class?
+
+There is a paper in the C++_wrappers_doc.tar.Z file on ics.uci.edu
+called reactor2.ps that has some examples of using Log_Record. The
+./apps/Logger directories show several examples using Log_Record.
+Finally, the source code for Log_Record is pretty short (though it
+clearly could be commented better ;-)).
+
+----------------------------------------
+5. Signal handling prototypes
+
+> According to the man page on sigaction on our system, that line
+> should look something like the following:
+>
+> sa.sa_handler = SIG_DFL;
+
+ The problem is that most versions of UNIX I've come across
+don't have a correct prototype for this field of struct sigaction.
+That's why I define two variants of signal handler typedefs: one that
+is a typedef of the "correct version" (which I call SignalHandler) and
+one that is a typedef of the "incorrect version" (which I call
+SignalHandlerV). You might check out the sysincludes.h file to see
+how it defines SignalHandlerV and make sure this matches what your
+OS/compiler defines in <sys/signal.h>.
+
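+As a rough illustration (the exact definitions live in sysincludes.h
+and vary by platform, so treat the forms below as assumptions rather
+than the actual ACE code), the two variants look something like this:
+
+    extern "C"
+    {
+      // "Correct" ANSI/POSIX-style prototype.
+      typedef void (*SignalHandler) (int);
+
+      // "Incorrect" legacy prototype found in some <sys/signal.h> headers.
+      typedef void (*SignalHandlerV) (...);
+    }
+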
+----------------------------------------
+6. Omitting shared libraries
+
+> Can anyone tell me a way to turn off the creation of the shared libraries
+> in the ACE build.
+
+You can simply comment out the shared library (VSHLIB and SHLIBA)
+targets in the $WRAPPER_ROOT/ace/Makefile or change the BUILD macro from
+
+BUILD = $(VLIB) $(VSHLIB) $(SHLIBA)
+
+to
+
+BUILD = $(VLIB)
+
+so that only the static library is built.
+
+----------------------------------------
+7. DCE threading and signal handling
+
+>Reading the DCE docs leaves me confused as to how to make everyone
+>work together in a happy harmonious whole. My basic need is to catch
+>asynchronous signals so I can release some global resources before
+>the process exits.
+
+You need to spawn a separate thread to handle signals. As part of
+your init, do this:
+ pthread_create(&tid, thread_attr, signal_catcher, NULL);
+ pthread_detach(&tid);
+
+Where signal_catcher is like this:
+static void *
+signal_catcher(void *arg)
+{
+ static int catch_sigs[] = {
+ SIGHUP, SIGINT, SIGQUIT, SIGTERM, SIGCHLD
+ };
+ sigset_t catch_these;
+ int i;
+ error_status_t st;
+
+ for ( ; ; ) {
+ sigemptyset(&catch_these);
+ for (i = 0; i < sizeof catch_sigs / sizeof catch_sigs[0]; i++)
+ sigaddset(&catch_these, catch_sigs[i]);
+ i = sigwait(&catch_these);
+ /* Note continue below, to re-do the loop. */
+ switch (i) {
+ default:
+ fprintf(stderr, "Caught signal %d. Exiting.\n", i);
+ CLEANUP_AND_EXIT();
+ /* NOTREACHED */
+#if defined(SIGCHLD)
+ case SIGCHLD:
+ srvrexec__reap();
+ continue;
+#endif /* defined(SIGCHLD) */
+ }
+ }
+ return NULL;
+}
+----------------------------------------
+8.
+
+> I have installed ACE2.15.5 on SunOS 4.1.3 with gcc2.6.0. I run the test program
+> ---server_test. The static case is OK, but an error occurs when I comment out the first
+> line and uncomment the second one in the svc.conf file:
+>
+> #static Svc_Manager "-d -p 3912"
+> dynamic Remote_Brdcast Service_Object * .shobj/Handle_Broadcast.so:remote_broad
+> cast "-p 10001"
+>
+> The error goes like this:
+>
+> -----------
+> jupiter[12] %server_test -d
+> starting up daemon server_test
+> opening static service Svc_Manager
+> did static on Svc_Manager, error = 0
+> signal signal 1 occurred
+> beginning reconfiguration at Sat Feb 25 13:40:29 1995
+> Segmentation fault (core dumped)
+> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+My guess is that the code generated by GCC on SunOS 4.x does not
+correctly initialize static variables from shared libraries. The
+SunC++ 4.0.x compiler does this correctly on Solaris 2.x (though I
+believe that on SunOS 4.x it doesn't work without some extra coaxing).
+
+In general, I try to avoid using ACE's explicit dynamic linking
+mechanisms on SunOS 4.x and GCC. You can write plenty of interesting
+and useful code with ACE without using those features. Those tests
+are mostly there to illustrate the "proof of concept."
+----------------------------------------
+9.
+
+> a) I noticed the default constructor for the reactor does an open w/ defaults.
+> Does this mean I need to close it if I wish to re-open it with different
+> size and restart values?
+
+ No. With the latest versions of ACE, you can now just call
+open() with a new size and it will correctly resize the internal
+tables to fit.
+
+> b) What is the usage difference between the normal FD_Set objects
+> (rd/wr/ex_handle_mask_) and the ready FD_Set objects
+> (rd/wr/ex_handle_mask_ready)?
+
+	The normal FD_Sets (now called Handle_Sets in ACE 3.0.5) hold
+the "waitable" descriptors (these are the descriptors given to
+select() or poll()). In contrast, the ready FD_Sets may be set by
+Event_Handler subclasses (by calling the set_ready() API) to indicate
+to the Reactor that they want to be redispatched on the next go-round
+*without* blocking. If you look at the Reactor code, you'll see that
+the wait_for() method checks the ready sets first and doesn't block if
+there are any bits set in those masks. This feature makes it
+possible for Event_Handlers to control subsequent dispatching policies
+of the Reactor.
+
+> c) What does the positive return value do from an event handler callback:
+> -1 detaches the event handler for that mask
+> 0 does nothing - keeps the event handler registered for that mask
+> >0 resets a bit in the current dispatching mask (I think) - does this mean
+> this event will be called again before the current dispatch cycle is done?
+
+Almost... (it's tied in with my description of the ready sets above).
+It means that once the Reactor finishes cycling through the set of
+descriptors it got back from select() or poll(), it will redispatch
+the ready set descriptors before sleeping.
+
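+To make the return-value convention concrete, here is a hedged sketch
+of a callback (the Event_Handler name follows the ACE usage in this
+FAQ, but the exact handle_input() signature here is an assumption):
+
+    #include <stdio.h>
+    #include <unistd.h>
+
+    class My_Handler : public Event_Handler
+    {
+    public:
+      virtual int handle_input (int fd)
+      {
+        char buf[BUFSIZ];
+        ssize_t n = ::read (fd, buf, sizeof buf);
+
+        if (n <= 0)
+          return -1;  // Error or EOF: the Reactor calls handle_close()
+                      // and removes this handler from its tables.
+        else if (n == (ssize_t) sizeof buf)
+          return 1;   // More data may be pending: ask the Reactor to
+                      // redispatch us from the ready set before blocking.
+        else
+          return 0;   // Stay registered and wait for the next event.
+      }
+    };
+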
+> Without direct access to the bit masks in X, I'm not sure I could emulate
+> this activity - what do you think?
+
+I'm not sure. I'm not enough of an X guru. Maybe someone else on the
+list knows the answer to this?
+
+> d) If I let X do the select blocking, will that have any effect on
+> the Reactor performing signal handling?
+
+ Yes, I think that will cause problems since the Reactor relies
+on a "handshake" between its Signal_Handler component and its
+handle_events loop to properly handle signals.
+
+> e) Is the Poll method preferred over Select if it is available - why?
+
+For systems that implement select() in terms of poll() (e.g., Solaris
+2.x), poll() may be somewhat faster. Otherwise, it doesn't really
+matter since (1) they (should) do the same thing and (2) the end user
+shouldn't notice any change in behavior.
+
+----------------------------------------
+10.
+
+> I would very much like to evaluate/use the ACE Toolkit,
+> but am limited as to disk space on our system.
+> What is the total disk space required for a compiled,
+> usable toolkit?
+
+The source code itself is around 2 Meg, uncompressed.
+
+The compiled version of ACE is around 90 Meg when built with the SunC++
+4.x compiler (naturally, this will differ with other compilers).
+However, of this amount, about 40 Meg are for the libraries, and
+another 50 Meg are for the test programs. Naturally, you don't need
+to keep the test programs compiled.
+
+The postscript documentation is around 5 Meg, compressed.
+
+----------------------------------------
+11.
+
+> This is regarding the newer release of ACE and pertaining to the library
+> archive file. My question is, if all the ".o" files are archived into one
+> single "libACE.a", does it increase the size of the executable program?
+
+No. The use of a *.a file allows the linker to extract only those
+*.o files that are actually used by the program.
+
+> If it does, then does a large executable program mean possibility of it being
+> slower?
+
+ No.
+
+----------------------------------------
+12.
+
+> What happens if I have several reactors in a process (e.g. in different
+> threads)?
+>
+> Programmer 1 decides to register at reactor 1 in his thread 1 a signal handler
+> for SIGUSR.
+> Programmer 2 decides to register at reactor 2 in his thread 2 a signal handler
+> for SIGUSR.
+
+ Naturally, the behavior of this all depends on the semantics
+of the threads package... In Solaris 2.x, signal handlers are shared
+by all threads. Moreover, the Reactor uses a static table to hold the
+signal handlers. Thus, only one of the handlers would be registered
+(i.e., whichever one was registered second).
+
+> Programmer 3 designs the process and decides to have thread 1 and thread 2
+> running in the same process and also makes use of a third party software library
+> that internally has also registered a signal handler (not at the reactor) for
+> SIGUSR.
+
+ Now you've got big problems! This is an example of a
+limitation with UNIX signal handlers... In general, it's a bad idea
+to use signal handlers if you can avoid it. This is yet another
+reason why.
+
+> When looking into Ace/ACE_wrappers/tests/Reactor/misc/signal_tester.C you
+> have shown a way to do this by marking the dummy file_descriptor of the
+> Sig_Handler object ready for reading asynchronously. The handle_input()
+> routine of Sig_Handler object will then be dispatched synchronously.
+> But what happens if I have several reactors.
+> The asynchronously dispatched
+> handle_signal() routine does not know via which reactor it has been registered
+> so in which reactor to modify the dummy file_descriptor.
+> Is your suggestion to have just one process global reactor in such a case?
+
+ Yes, precisely. I would *strongly* recommend against using
+several reactors within separate threads within the same process if
+you are going to have them handle signals. Can you use 1
+reactor and/or have one reactor handle signals within a process?
+
+> One thing we want to do is the prioritization of Event_Handlers. I.e., in case
+> of concurrent events the sequence in which the Event_Handler methods will be
+> activated depends on their priority relative to each other.
+> We have two choices:
+> - complete prioritization, which means a high priority Input Event_Handler may
+> be activated prior to a lower prioritized Output Event_Handler (and doing
+> so violating the 'hardcoded rule' that output must be done prior to input).
+> - prioritization only in categories, which means all Output Event_Handlers are
+> ordered by their priority regardless of priorities for the category of Input
+> Event_Handlers. The priority is fixed between the categories, i.e. first
+> output then input then out-of-band.
+>
+> Right now I would think that we have to use the second choice if we want to
+> use the feature of asynchronous output with automatic re-queueing. Am I
+> right?
+
+ Hum, that's an interesting problem. It might be better to
+subclass the Reactor to form a new class called Priority_Reactor.
+This subclass would override the Reactor's dispatch method and
+dispatch the event handlers in "priority" order. I've never done
+that, but I don't think it would be all that difficult.
+
+----------------------------------------
+13.
+
+> Is the non-CORBA version still around? I think I still need it, judging from the
+> following error, or is it something else?
+
+Aha, there are two ways to get around this problem:
+
+1. Set your ORBIX_ROOT environment variable to the location of the
+ Orbix release (e.g., /opt/Orbix). Naturally, this only works
+ if you've got Orbix installed on your machine.
+
+2. If you don't have Orbix, then to get rid of that problem all you
+ need to do is change the symbolic links on the
+
+./include/config.h
+./include/makeinclude/platform_macros.GNU
+
+files to
+
+./include/config-sunos5-sunc++-4.x
+./include/makeinclude/platform_sunos5_sunc++.GNU
+
+rather than the *-orbix* versions, which they point to by default.
+And then recompile ACE.
+----------------------------------------
+14.
+> We are using your ACE software and ran into a problem which may or may not
+> be related to the mutex locks. The question may have more to do with how
+> mutex locks should be used. We had a class which was using your mutex
+> lock wrapper. Each member function of the class acquired the lock before
+> processing and released on exiting the function. Some member functions may
+> call other member functions. The following is an example:
+>
+> class foo {
+>
+> void a()
+> {
+> MT( Mutex_Block<Mutex> m( this->lock_ ));
+>
+> if( cond )
+> b();
+> }
+>
+> void b()
+> {
+> MT( Mutex_Block<Mutex> m( this->lock_ ));
+>
+> if( cond )
+> a();
+> }
+>
+> };
+>
+> Is this valid? My assumption is that the mutex lock is recursive and
+> the same thread can acquire the lock multiple times in different member
+> functions.
+
+ Ah, that's a great question since there are subtle and
+pernicious problems lurking in the approach you are trying above.
+Basically, Solaris mutex locks are *not* recursive (don't ask why...).
+Thus, if you want to design an application like the one above you'll
+need to use one or more of the following patterns:
+
+----------------------------------------
+A. Use recursive mutexes. Although these are not available in
+ Solaris directly they are supported in the later versions
+ of ACE. You might want to take a look at the latest
+ version (./gnu/ACE-3.1.9.tar.Z). It's got lots of new
+ support for threading and synchronization. In that case,
+ you simply do the following:
+
+	class Foo
+	{
+	public:
+	  void a()
+	  {
+	    MT( Guard<Recursive_Lock<Mutex> > m( this->lock_ ));
+	    if( cond )
+	      b ();
+	  }
+
+	  void b()
+	  {
+	    MT( Guard<Recursive_Lock<Mutex> > m( this->lock_ ));
+	    if( cond )
+	      a ();
+	  }
+
+	private:
+	  // Recursive lock shared by all methods of this object.
+	  Recursive_Lock<Mutex> lock_;
+	};
+
+ The advantage with this is that it requires almost no
+ changes to existing code. The disadvantage is that
+ recursive locks are just slightly more expensive.
+
+B. Have two layers of methods (a) which are public and acquire
+ the Mutex and then call down to methods in layer (b), which
+ are private and do all the work. Methods in layer b assume
+ that the locks are held. This avoids the deadlock problem
+ caused by non-recursive mutexes. Here's what this approach
+ looks like (using the more recent ACE class names):
+
+ class Foo
+ {
+ public:
+ void b()
+ {
+ MT( Guard<Mutex> m( this->lock_ ));
+ b_i ();
+ }
+
+ void b_i()
+ {
+ if( cond )
+ a_i();
+ }
+
+ void a_i()
+ {
+ if( cond )
+ b_i();
+ }
+
+ void a()
+ {
+ MT( Guard<Mutex> m( this->lock_ ));
+ a_i ();
+ }
+
+ };
+
+ The advantage here is that inline functions can basically
+ remove all performance overhead. The disadvantage is that
+ you need to maintain two sets of interfaces.
+
+C. Yet another approach is to release locks when calling
+ other methods, like this:
+
+ class Foo
+ {
+ public:
+ void b()
+ {
+ MT( Guard<Mutex> m( this->lock_ ));
+ m.release ();
+ a ();
+ m.acquire ();
+ }
+
+ void a()
+ {
+ MT( Guard<Mutex> m( this->lock_ ));
+ m.release ();
+ b ();
+ m.acquire ();
+ }
+
+ };
+
+ The disadvantage with this, of course, is that you
+ greatly increase your locking overhead. In addition,
+ you need to be very careful about introducing race
+ conditions into the code. The primary reason for
+ using this approach is if you need to call back to
+ code that you don't have any control over (such as
+ OS I/O routines) and you don't want to hold the
+ lock for an indefinite period of time.
+----------------------------------------
+
+ BTW, all three of these patterns are used in the ACE Reactor
+class category. The Reactor has a number of fairly complex
+concurrency control and callback issues it must deal with and I've
+found it useful to use all three of these patterns jointly.
+
+ I'd be interested to hear any comments on these approaches.
+
+ Doug
+----------------------------------------
+15.
+
+> I am working on Solaris 2.3 and trying to understand how to get around
+> the problem of trying to open a Socket connection to a remote host that
+> is "dead". Of course you get a nice long process block if the socket
+> is in Blocking mode (TCP lets you know when you can continue - how polite).
+>
+> So how does a non-blocking connect work with respect to using
+> the Reactor and a SOCK_Stream object to coordinate the opening
+> of the connection? Do I wait on the OUTPUT event for the FD?
+> How do I know if the connect worked or possibly timed-out? Is
+> this a reliable approach (I read somewhere that this will only
+> work if the STREAMS module is at the top of the protocol stack
+> - MAN page I think)?
+
+An example of implementing this is in the Gateway sample application
+in the new ACE. It's also encapsulated in the Connector<> pattern of
+the Connection class category in ./libsrc/Connection. You may want to
+take a look at those two things for concrete usage examples.
+
+However, the basics of getting a non-blocking connect() to work are
+as follows (a sketch appears after these steps):
+- set the socket to non-blocking
+- initiate the connect() request
+- if connect() returns 0, you're connected
+- if connect() returns -1 and errno is EWOULDBLOCK (or EAGAIN, depending
+on your platform), register an event handler for read and write events
+on the socket
+- any other errno value is fatal
+
+When an event is returned:
+- no matter which event you get back (read or write), the event may have
+been generated because of an error. Thus, re-attempt the connect() and
+check whether errno is EISCONN (if it's not, there's a problem!)
+- if errno is EISCONN, the connection is ready to go; otherwise you must
+handle an error condition
+
+If you want to "time out" after a certain period of time, consider
+registering for a timer event with Reactor. If the timer goes off before
+the connection succeeds, close down the appropriate socket.
+
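+Here is the sketch promised above, using plain BSD sockets so it stays
+self-contained (the function names and the EINPROGRESS check are mine,
+not part of ACE; error handling is abbreviated):
+
+    #include <sys/socket.h>
+    #include <netinet/in.h>
+    #include <fcntl.h>
+    #include <errno.h>
+    #include <unistd.h>
+
+    int start_connect (const sockaddr_in &addr)
+    {
+      int fd = ::socket (AF_INET, SOCK_STREAM, 0);
+      if (fd == -1)
+        return -1;
+
+      // Set the socket to non-blocking and initiate the connect().
+      ::fcntl (fd, F_SETFL, ::fcntl (fd, F_GETFL, 0) | O_NONBLOCK);
+
+      if (::connect (fd, (const sockaddr *) &addr, sizeof addr) == 0)
+        return fd;            // Connected immediately.
+
+      if (errno == EINPROGRESS || errno == EWOULDBLOCK || errno == EAGAIN)
+        return fd;            // In progress: register fd for read/write events.
+
+      ::close (fd);           // Any other errno value is fatal.
+      return -1;
+    }
+
+    // Called when the Reactor reports a read or write event on <fd>:
+    int complete_connect (int fd, const sockaddr_in &addr)
+    {
+      // Re-attempt the connect(); EISCONN means the connection completed.
+      if (::connect (fd, (const sockaddr *) &addr, sizeof addr) == 0
+          || errno == EISCONN)
+        return 0;             // Ready to go.
+      return -1;              // The "event" was really an error.
+    }
+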
+> Is using a separate thread to make the connection a better way to avoid
+> the potentially long block in the main thread during the connect call?
+
+You could do that, but it can all be accomplished in a single process using
+the facilities available.
+----------------------------------------
+16.
+
+> I was wondering, does the Reactor class have the ability to prioritize
+> activity on the registered event handlers?
+
+ The default strategy for the Reactor's dispatch routine
+(Reactor::dispatch) does not prioritize dispatching other than to
+dispatch callbacks in ascending order from 0 -> maxhandlep1.
+
+> We have a requirement to be able to process both real-time, as well as stored,
+> telemetry and ERMs concurrently. Real-time needs to be processed at a higher
+> priority than stored data. Our design is based on both real-time and stored
+> data coming into our process via separate sockets.
+
+ I can think of several ways to do this:
+
+ 1. Use dup() or dup2() to organize your sockets such that the
+ higher priority sockets come first in the Handle_Sets that
+ the Reactor uses to dispatch sockets. This is pretty easy
+ if you don't want to muck with the Reactor code at all.
+
+	2. You could subclass the Reactor and override its dispatch()
+	   method so that it dispatches according to some other criteria that
+ you define in order to ensure your prioritization of
+ sockets.
+
+BTW, I'm not sure what you mean by "real-time" but I assume that you
+are aware that there is no true "real-time" scheduling for network I/O
+in Solaris. However, if by "real-time" you mean "higher priority"
+then either of the above strategies should work fine.
+----------------------------------------
+17.
+
+> I compiled the new ACE 3.2.0's apps/Gateway. The compilation went
+> through without any errors. But I could not get it running, neither single
+> threaded nor multi-threaded. The cc_config and rt_config files entries are given
+> below. Also the machine configurations are given below. Does it need some more
+> settings or some patch !!??
+
+ I believe you are seeing the effects of the dreaded Sun MP bug
+with non-blocking connects. The easy work around for now is simply to
+give the "-b" option to the Gateway::init() routine via the svc.conf
+file:
+
+dynamic Gateway Service_Object *.shobj/Gateway.so:_alloc_gatewayd() active
+ "-b -d -c cc_config -f rt_config"
+
+If you check line 137 of the Gateway::parse_args() method you'll see
+what this does.
+----------------------------------------
+18.
+
+How to get ACE to work with GCC C++ templates.
+
+The first and foremost thing to do is to get the latest version of GCC
+(2.7.2) and also get the template repository patches from
+
+ftp://ftp.cygnus.com/pub/g++/gcc-2.7.1-repo.gz
+
+This will get the ball rolling...
+
+Here is some more info on G++ templates courtesy of Mehdi TABATABAI
+<Mehdi.TABATABAI@ed.nce.sita.int>:
+
+Where's the Template?
+=====================
+
+ C++ templates are the first language feature to require more
+intelligence from the environment than one usually finds on a UNIX
+system. Somehow the compiler and linker have to make sure that each
+template instance occurs exactly once in the executable if it is
+needed, and not at all otherwise. There are two basic approaches to
+this problem, which I will refer to as the Borland model and the
+Cfront model.
+
+Borland model
+ Borland C++ solved the template instantiation problem by adding
+ the code equivalent of common blocks to their linker; template
+ instances are emitted in each translation unit that uses them, and
+ they are collapsed together at run time. The advantage of this
+ model is that the linker only has to consider the object files
+ themselves; there is no external complexity to worry about. This
+ disadvantage is that compilation time is increased because the
+    themselves; there is no external complexity to worry about. The
+    disadvantage is that compilation time is increased because the
+ header file, since they must be seen to be compiled.
+
+Cfront model
+ The AT&T C++ translator, Cfront, solved the template instantiation
+ problem by creating the notion of a template repository, an
+ automatically maintained place where template instances are
+ stored. As individual object files are built, notes are placed in
+ the repository to record where templates and potential type
+ arguments were seen so that the subsequent instantiation step
+ knows where to find them. At link time, any needed instances are
+ generated and linked in. The advantages of this model are more
+ optimal compilation speed and the ability to use the system
+ linker; to implement the Borland model a compiler vendor also
+ needs to replace the linker. The disadvantages are vastly
+ increased complexity, and thus potential for error; theoretically,
+ this should be just as transparent, but in practice it has been
+ very difficult to build multiple programs in one directory and one
+ program in multiple directories using Cfront. Code written for
+ this model tends to separate definitions of non-inline member
+ templates into a separate file, which is magically found by the
+ link preprocessor when a template needs to be instantiated.
+
+ Currently, g++ implements neither automatic model. The g++ team
+hopes to have a repository working for 2.7.0. In the meantime, you
+have three options for dealing with template instantiations:
+
+ 1. Do nothing. Pretend g++ does implement automatic instantiation
+ management. Code written for the Borland model will work fine, but
+ each translation unit will contain instances of each of the
+ templates it uses. In a large program, this can lead to an
+ unacceptable amount of code duplication.
+
+ 2. Add `#pragma interface' to all files containing template
+ definitions. For each of these files, add `#pragma implementation
+ "FILENAME"' to the top of some `.C' file which `#include's it.
+ Then compile everything with -fexternal-templates. The templates
+ will then only be expanded in the translation unit which
+ implements them (i.e. has a `#pragma implementation' line for the
+ file where they live); all other files will use external
+ references. If you're lucky, everything should work properly. If
+ you get undefined symbol errors, you need to make sure that each
+ template instance which is used in the program is used in the file
+ which implements that template. If you don't have any use for a
+ particular instance in that file, you can just instantiate it
+ explicitly, using the syntax from the latest C++ working paper:
+
+ template class A<int>;
+ template ostream& operator << (ostream&, const A<int>&);
+
+ This strategy will work with code written for either model. If
+ you are using code written for the Cfront model, the file
+ containing a class template and the file containing its member
+ templates should be implemented in the same translation unit.
+
+ A slight variation on this approach is to use the flag
+ -falt-external-templates instead; this flag causes template
+ instances to be emitted in the translation unit that implements
+ the header where they are first instantiated, rather than the one
+ which implements the file where the templates are defined. This
+ header must be the same in all translation units, or things are
+ likely to break.
+
+ *See Declarations and Definitions in One Header: C++ Interface,
+ for more discussion of these pragmas.
+
+ 3. Explicitly instantiate all the template instances you use, and
+ compile with -fno-implicit-templates. This is probably your best
+ bet; it may require more knowledge of exactly which templates you
+ are using, but it's less mysterious than the previous approach,
+ and it doesn't require any `#pragma's or other g++-specific code.
+ You can scatter the instantiations throughout your program, you
+ can create one big file to do all the instantiations, or you can
+ create tiny files like
+
+ #include "Foo.h"
+ #include "Foo.cc"
+
+ template class Foo<int>;
+
+ for each instance you need, and create a template instantiation
+ library from those. I'm partial to the last, but your mileage may
+ vary. If you are using Cfront-model code, you can probably get
+ away with not using -fno-implicit-templates when compiling files
+ that don't `#include' the member template definitions.
+
+4. Placing a function that looks like this near the top of a .C file
+ that uses any inline template member functions permits proper inlining:
+
+ // #ifdef __GNUG__
+ // This function works around the g++ problem with inline template member
+ // calls not being inlined ONLY in the first block (in a compilation
+ // unit) from which they are called.
+ // This function is inline and is never called, so it does not produce
+ // any executable code. The "if" statements avoid compiler warnings about
+ // unused variables.
+ inline
+ void
+ gcc_inline_template_member_function_instantiator()
+ {
+ if ( (List<FOO> *) 0 );
+ }
+ // #endif // __GNUG__
+
+ other prerequisites:
+ -- All inline template member functions should be defined in
+ the template class header. Otherwise, g++ will not inline
+ nested inline template member function calls.
+ -- Template .h and .C files should NOT include iostream.h
+ (and therefore debugging.h).
+ This is because iostream.h indirectly includes other
+ GNU headers that have unprotected #pragma interface,
+ which is incompatible with -fno-implicit-templates and optimal
+ space savings.
+ -- inline virtual destructors will not be inlined, unless necessary,
+ if you want to save every last byte
+ -- be sure that -Winline is enabled
+
+----------------------------------------
+19.
+
+> 1. When are dynamically loaded objects removed from the Service_Config?
+
+The Service Configurator calls dlclose() when a "remove Service_Name"
+directive is encountered in the svc.conf file (or programmatically
+when the Service_Config::remove() method is invoked). Check out the
+code in ./libsrc/Service_Config/Service_Repository.i and
+./libsrc/Service_Config/Service_Config.i to see exactly what happens.
+
+> 2. In the Service Configurator, when an item is entered in the svc.conf
+> how do you know which items will be invoked as threads and
+> which items are forked. I know that static items are executed
+> internally.
+
+ No! It's totally up to the subclass of Service_Object to
+decide whether threading/forking/single-threading is used. Check out
+the ./apps/Logger/Service_Configurator_Logger for examples of
+single-threaded and multi-threaded configuration.
+----------------------------------------
+20.
+
+> I have been reading the Service Configurator Logger. I was wondering about
+> cleanup of new objects. In the handle_input method for the Acceptor a new
+> svc_handler is allocated for each new input request and deleted in the
+> handle_close. I was wondering how handle close was called when a client who
+> has created a socket terminates the connection (i.e., when is handle_close
+> called).
+
+handle_close() is automatically called by the Reactor when a
+handle_input()/handle_output()/etc. method returns -1. This is the
+"hook" that instructs the Reactor to call handle_**() and then remove
+the Event_Handler object from its internal tables.
+
+----------------------------------------
+21.
+
+> How does the Logger know to remove the client socket and the svc_handler object?
+> Does it receive an exception?
+
+	No. When the client terminates, the underlying TCP/IP
+implementation sends a RESET message to the logger host. This is
+delivered to the logger process as a 0-sized read(). It then knows to
+close down.
+
+> What I am worried about is a leak, whereby a lot of clients connect and
+> disconnect and the server does not clean up correctly, such as after a core dump
+> on the client where it cannot close correctly.
+
+ That's handled by the underlying TCP (assuming it is
+implemented correctly...).
+
+> What I am doing is attempting to convert the logger example into an alarm
+> manager for remote nodes. In this application a node may be powered down
+> thereby terminating a Logger/Alarm server connection abnormally; this could
+> leave the Logger with many dangling sockets and allocated svc_handler objects.
+
+ If the TCP implementation doesn't handle this correctly then
+the standard way of dealing with it is to have an Event_Handler use a
+watchdog timer to periodically "poll" the client to make sure it is
+still connected. BTW, PCs tend to have more problems with this than
+UNIX boxes since when they are turned off the TCP implementation may
+not be able to send a RESET...
+----------------------------------------
+22.
+
+Using templates with Centerline.
+
+Centerline uses ptlink to process the C++ templates. ptlink expects the
+template declarations and definitions (app.h and app.C) to reside in
+the same directory. This works fine for the ACE hierarchy since
+everything is a link to the appropriate src directory (include/*.[hi]
+--> ../src/). When a users of the ACE distribution attempts to include
+the ACE classes in an existing application hierarchy this problem will
+arise if ptlink is used.
+
+The solution is to create a link to the declaration file from the
+definition file directory and use the "-I" option to point to the definition
+directory.
+
+----------------------------------------
+
+23.
+
+> When I try to compile $WRAPPER_ROOT/src/Message_Queue.C on a Solaris
+> 5.3 system using SUNPro CC 4.0, the compiler aborts with a Signal 10
+> (Bus Error). Our copy of CC 4.0 is over a year old and I do not
+> know if any patches or upgrades exist for it. If they do, then we
+> have not applied them to our compiler.
+
+ Several other people have run across this as well. It turns
+out that there is a bug in the Sun 4.0.0 C++ compiler that causes a
+bus error when -g is used. If you compile Message_Queue.C *without*
+-g then it works fine. The later versions of SunC++ don't have this
+bug. I'd recommend that you upgrade as soon as possible.
+
+----------------------------------------
+
+24.
+
+> I have added a dynamic service to the Service Configurator. This new service
+> fails on the load because it uses application libraries that are not shared
+> object libraries (i.e., objects in libApp.a). I am assuming from the error
+> message that the problem is the mismatch of shared and non-shared objects.
+
+ Right, exactly.
+
+> I was wondering if there is an easy way to add static services to the
+> Service Configurator. The example directory listing static service is
+> very tightly coupled with the Service_Config object. Is there another
+> way of adding static services.
+
+ Sure, that's easy. The best way to do this is to use the
+interfaces of the Service_Repository class to configure static
+services into the Service_Config. A good example of how to do this is
+in Service_Config.[Chi]:
+
+int
+Service_Config::load_defaults (void)
+{
+ for (Static_Svc_Descriptor *sl = Service_Config::service_list_; sl->name_ != 0; sl++)
+ {
+ Service_Type *stp = ace_create_service_type (sl->name_, sl->type_,
+ (const void *) (*sl->alloc_)(),
+ sl->flags_);
+ if (stp == 0)
+ continue;
+
+ const Service_Record *sr = new Service_Record (sl->name_, stp, 0, sl->active_);
+
+ if (Service_Config::svc_rep->insert (sr) == -1)
+ return -1;
+ }
+ return 0;
+}
+
+----------------------------------------
+25.
+
+> 8. Do you have examples of the SYNC/ASYNC pattern?
+
+ Yes. Check out the following:
+
+ 1. The latest version of ./apps/Gateway/Gateway has
+ an example of this when you compile with the USE_OUTPUT_MT
+ flag. In this case, the Reactor performs the "Async"
+ processing, which multiplexes all incoming messages from peers
+ arriving on Input_Channels. These messages are then queued
+ up at the appropriate Output_Channels. Each Output_Channel
+ runs in a separate thread, performing the "Sync"
+ processing.
+
+ 2. Also, the latest version of the OOCP-tutorial4.ps.gz
+ file available from wuarchive.wustl.edu in the
+ directory /languages/c++/ACE/ACE-documentation shows
+ an example of using the Half-Sync/Half-Async pattern
+ to build an Image Server. I'm using this as an
+ example in my tutorials these days.
+
+----------------------------------------
+26.
+
+> We had a discussion about something we saw in the new ACE code.
+> I think there was a member function of a class that was doing a
+> "delete this". Is this safe?
+
+In general it is safe as long as (1) the object has been allocated
+dynamically off the heap and (2) you don't try to access the object
+after it has been deleted. You'll note that I tend to use this idiom
+in places where an object is registered with the Reactor, which must
+then ensure the object cleans itself up when handle_close() is
+called. Note that to ensure (1) I try to declare the destructor
+"private" or "protected" so that the object must be allocated off the
+heap (some compilers have a problem with this, so I may not be as
+consistent as I ought to...).
+
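+Here is a hedged sketch of the idiom (the handle_close() signature and
+Reactor_Mask type follow later ACE usage and are assumptions here):
+
+    class Self_Cleaning_Handler : public Event_Handler
+    {
+    public:
+      virtual int handle_close (int, Reactor_Mask)
+      {
+        delete this;   // Safe because (1) instances are always created
+        return 0;      // with new, and (2) no member of the object is
+      }                // touched after this point.
+
+    protected:
+      // A protected destructor keeps callers from creating instances
+      // on the stack or deleting them directly, so objects come off
+      // the heap via new and die only through handle_close().
+      virtual ~Self_Cleaning_Handler (void) {}
+    };
+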
+----------------------------------------
+27.
+
+> 5. What is the correct way for building a modified ACE library?
+> Changing in "libsrc" or in "include" directory?
+> When I make a completely new directory, how can I get the dependencies
+> introduced into my new makefile? Can you give a short hint?
+
+Sure, no problem. For instance, here's what I did tonight when I
+added the new Thread_Specific.[hiC] files to ACE:
+
+ 1. Created three new files Thread_Specific.[hiC] in
+ ./libsrc/Threads.
+
+ 2. cd'd to ../../include/ace and did a
+
+ % ln -s ../../libsrc/Threads/Thread_Specific.[hi] .
+
+ 3. cd'd to ../../src and did a
+
+ % ln -s ../../libsrc/Threads/Thread_Specific.C .
+
+ 4. then I did
+
+ % make depend
+
+ on the ./src directory, which updated the dependencies.
+
+----------------------------------------
+28. The following is from Neil B. Cohen (nbc@metsci.com), who is
+ writing about how to work around problems he's found with HP/UX.
+
+I've been trying to compile the latest beta (3.2.9) on an HP running
+HPUX9.05 for the past week or so. I've had problems with templates up
+and down the line. I finally discovered (after some discussions with
+the HP support people) that they have made numerous changes to their
+C++ compiler recently to fix problems with templates and
+exceptions. If you are trying to compile ACE under HPUX with anything
+less than version 3.70 of the HP compiler, you may have serious
+problems (we were using v3.50 which came with the machine when we
+bought it a few months ago).
+
+Also, unlike earlier ACE versions, I was forced to add the following
+line to the rules.lib.GNU file to "close" the library, i.e., force the
+various template files to be instantiated and linked to the ACE
+library itself. I don't know if this is necessary, or the only way to
+make things work, but it seems to do the job for my system.
+
+in rules.lib.GNU...
+
+$(VLIB): $(VOBJS)
+ - CC -pts -pth -ptb -ptv -I$(WRAPPER_ROOT)/include $(VOBJS)
+ $(AR) $(ARFLAGS) $@ $? ./ptrepository/*.o
+ -$(RANLIB) $@
+ -chmod a+r $@
+
+I added the CC line, and added the "./ptrepository/*.o" to the $(AR)
+cmd. Sun has an -xar option, I believe that does something similar to
+this. Also - note that I'm not sure that the "-ptb" option is
+necessary. I added that before we upgraded the compiler, so it may not
+be needed now...
+
+----------------------------------------
+29.
+
+> I just ran my program with Purify, and it is telling me that there
+> is at least one large (~4k) memory leak in
+> ACE_Thread_Specific<ACE_Log_Msg>. This may or may not be serious,
+> but it is probably worth looking into.
+
+Right, that's ok. This is data that's allocated on a "per-thread"
+basis the first time a thread makes a call using the LM_ERROR or
+LM_DEBUG macros. The data isn't freed-up until the thread exits.
+
+----------------------------------------
+
+30.
+
+> In my trying to use the Reactor pattern for my application I
+> noticed that I had to couple my eventHandler derived objects with a
+> specific IPC_SAP mechanism. To use some of your own examples your
+> Client_Stream object contains a TLI_Stream object to use in data
+> transfer. My application calls for determining the communication
+> mechanism at run time. To do this my eventHandler must be able to
+> create the appropriate IPC_Stream object at run time and use its
+> methods through a super class casting. The problem is that there is no
+> super class with the virtual methods for send, recv, etc. To solve my
+> problem I will create that super class and have the TLI ( as well as
+> other wrapper objects) inherit from it instead of IPC_SAP. My question
+> is I am suspicious of why ACE wasn't designed with that in mind? Is my
+> application that unique ? or is there a better way to do this that I
+> am not aware of ? Your help in this matter will be much appreciated.
+
+ACE was developed using static binding for IPC_SAP in order to
+emphasize speed of execution over dynamic flexibility *in the core
+infrastructure*. To do otherwise would have penalized the performance
+of *all* applications in order to handle the relatively infrequent
+case where you want to be able to swap mechanisms at run-time.
+
+Since it is straightforward to create an abstract class like the one
+you describe above I decided to make this a "layered" service rather
+than use this mechanism in the core of ACE.
+
+BTW, I would not modify TLI_SAP and SOCK_SAP to inherit from a new
+class. Instead, I would use the Bridge and Adapter patterns from the
+"Gang of Four" patterns catalog and do something like this:
+
+----------------------------------------
+// Abstract base class
+class ACE_IPC_Stream
+{
+public:
+ virtual ssize_t recv (void *buf, size_t bytes) = 0;
+ virtual ssize_t send (const void *buf, size_t bytes) = 0;
+ virtual ACE_HANDLE get_handle (void) const = 0;
+ // ...
+};
+----------------------------------------
+
+and then create new classes like
+
+----------------------------------------
+template <class IPC>
+class ACE_IPC_Stream_T : public ACE_IPC_Stream
+{
+public:
+ virtual ssize_t recv (void *buf, size_t bytes)
+ {
+ return this->ipc_.recv (buf, bytes);
+ }
+
+ virtual ssize_t send (const void *buf, size_t bytes)
+ {
+ return this->ipc_.send (buf, bytes);
+ }
+
+  virtual ACE_HANDLE get_handle (void) const
+ {
+ return this->ipc_.get_handle ();
+ }
+ // ...
+
+private:
+ IPC ipc_;
+ // Target of delegation
+ // (e.g., ACE_SOCK_Stream or ACE_TLI_Stream).
+};
+----------------------------------------
+
+Then you could write code that operates on ACE_IPC_Stream *'s to get a
+generic interface, but that reused existing code like SOCK_SAP and
+TLI_SAP, e.g.,
+
+----------------------------------------
+class My_Event_Handler : public ACE_Event_Handler
+{
+public:
+ My_Event_Handler (void) {
+ // Figure out which IPC mechanism to use somehow:
+
+ if (use_tli)
+      this->my_ipc_ = new ACE_IPC_Stream_T<ACE_TLI_Stream>;
+    else if (use_sockets)
+      this->my_ipc_ = new ACE_IPC_Stream_T<ACE_SOCK_Stream>;
+ else
+ ...
+ }
+
+private:
+ ACE_IPC_Stream *my_ipc_;
+};
+----------------------------------------
+
+There are obviously details left out here, but this is the general idea.
+
+----------------------------------------
+31.
+
+> I was trying to view your 'Writing example applications in CORBA' article
+> /tutorial using ghostview but the .ps file seems to be corrupted ( I tried to
+> ftp it more than once). Any help would be much appreciated.
+
+There are two solutions to this problem (which seems to be caused by a
+weird interaction between ghostview and the "psnup" program I use to
+generate the slides 4-up on a page):
+
+ 1. If you want to print them or view them 1-up on a page you
+ can edit the postscript file and remove the first 551
+ lines or so (which are generated by the psnup script).
+ This will cause the document to be printed 1-up rather than
+ 4-up.
+
+ 2. You can try to print the 4-up file on a postscript printer.
+ Believe it or not, this typically works, even though ghostview
+ can't handle it!
+
+----------------------------------------
+32.
+
+> We would like to use the Reactor class as a static member on some of
+> our classes (one per process) so that we can see and use the Reactor
+> within each process on a global level. We are using it to set
+> timers several levels down in our class trees and don't want to pass
+> a pointer to it through all of our constructors. My question is:
+> are there any static initialization dependencies that you know of
+> when using the default "do nothing" constructor of the Reactor that
+> could prevent us from using it as a static member variable? Thanks
+> for any advice on this issue.
+
+The only problems you'll have are the typical ones about "order of
+initialization" of statics in separate files. You'll also have to
+live with the default size of the I/O handler table, which probably
+isn't a problem since the max is something like 1024 or so.
+
+BTW, I solve this problem in ACE via the Service_Config::reactor,
+which is a static *pointer* to a Reactor. If you really wanted to
+make this work nicely, you could use the Singleton pattern from the
+"Gang of Four" patterns catalog. That should solve your problem even
+more elegantly!
+
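+For example, a minimal Singleton accessor in the style of the static
+Service_Config::reactor pointer might look like this (the class name
+Global_Reactor is hypothetical, and the sketch ignores locking, so
+guard instance() with a mutex if several threads may race to call it
+first):
+
+    class Global_Reactor
+    {
+    public:
+      static Reactor *instance (void)
+      {
+        // Lazily created on first use, so the usual static
+        // initialization-order problems don't arise.
+        if (instance_ == 0)
+          instance_ = new Reactor;
+        return instance_;
+      }
+
+    private:
+      static Reactor *instance_;
+    };
+
+    Reactor *Global_Reactor::instance_ = 0;
+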
+----------------------------------------
+33.
+> I just got the ACE-3.3 version and am trying it on the HP-UX.
+> I ran into a small problem while cloning the directories that
+> might be worth fixing.
+>
+> I made a directory called ACE_WRAPPERS/HP-UXA.09.05-g1, cd to it
+> and run "make -f ../Makefile clone". when I look in src, I have:
+> Acceptor.C@ -> ../libsrc/Connection/Acceptor.C
+>
+> However, ../libsrc does not exist. It is not one of the CLONE
+> variables in ACE_WRAPPERS/Makefile. I don't think you'd want to
+> clone libsrc too, since its files don't change.
+
+I think you can solve this problem as follows:
+
+% cd ACE_WRAPPERS
+% setenv WRAPPER_ROOT $cwd
+% cd HP-UXA.09.05-g1
+% make -f ../Makefile clone
+% setenv WRAPPER_ROOT $cwd
+% make
+
+That should build the links correctly since they'll point to the
+absolute, rather than relative, pathnames!
+
+----------------------------------------
+34.
+
+> Our quality personnel have asked me the following questions, for which
+> I think you are the right guy to answer:
+
+> o How long has ACE been used in industrial products?
+
+It was first used at Ericsson starting in the fall of 1992, so that
+makes it about 3 years now.
+
+> o What are reference projects comparable to ours that use ACE?
+
+The ones I have directly worked with include:
+
+Motorola -- satellite communication control
+Kodak Health Imaging Systems -- enterprise medical imaging
+Siemens -- enterprise medical imaging
+Ericsson/GE Mobile Communications -- telecommunication switch management
+Bellcore -- ATM switch signal software
+
+In addition, there are probably about 100 or more other companies that
+have used ACE in commercial products. The current mailing list has
+about 300 people from about 230 different companies and universities.
+If you'd like additional info, please let me know.
+
+> o How many persons have contributed to testing and writing error
+> reports for ACE?
+
+Around 60 or so. All the contributors are listed by name and email
+address at the end of the README file distributed with the ACE release.
+
+> o How many bug fixes have been made since ACE was placed in the public domain?
+
+All information related to bug fixes is available in the ChangeLog
+file distributed with the ACE release (I could count these for you if
+you need that level of detail).
+
+> o How much literature is there on ACE?
+
+All articles published about ACE are referenced in the BIBLIOGRAPHY
+file in the top-level directory of ACE.
+
+----------------------------------------
+
+35.
+
+> We are currently evaluating ACE for use on a new telecom switch.
+> Many of us like ACE but are having trouble convincing some team
+> members that wrappers are better than using the direct Unix
+> system calls.
+
+> I have read your papers that came with ACE, but was wondering if there
+> are other papers that address the benefits (or problems) of wrappers?
+
+This topic has been discussed in other places, most notably the book
+by Erich Gamma, Richard Helm, Ralph Johnson, and John Vlissides
+called "Design Patterns: Elements of Reusable Object-Oriented
+Software" (Addison-Wesley, 1994), where it is described in terms of
+the "Adapter" pattern.
+
+Very briefly, there are several key reasons why you should *not* use
+UNIX system calls directly (regardless of whether you use ACE or not).
+
+1. Portability --
+
+ Unless you plan to develop code on only 1 UNIX platform (and
+ you never plan to upgrade from that platform as it goes
+ through new releases of the OS) you'll run across many, many
+ non-portable features. It's beyond the scope of this
+ FAQ to name them all, but just take a look at ACE sometime
+ and you'll see all the #ifdefs I've had to add to deal with
+ non-compatible OSs and compilers. Most of these are centralized
+	in one place in ACE (in the ace/OS.* files), but it took a lot
+ of work to factor this out. By using wrappers, you can avoid
+ most of this problem in the bulk of your application code
+ and avoid revisiting all of these issues yourself.
+
+ In addition, ACE is now ported to other platforms (e.g.,
+ Windows NT and Windows 95). If you want to write code that
+ is portable across platforms, wrappers are a good way to
+ accomplish this.
+
+2. Ease of programming --
+
+ I'd go as far as to say that anyone who wants to program
+ applications using C-level APIs like sockets or TLI is not
+ serious about developing industrial strength, robust, and easy
+ to maintain software. Sockets and TLI are *incredibly*
+ error-prone and tedious to use, in addition to being
+ non-portable. I've got a paper that discusses this in detail
+ at URL http://www.cs.wustl.edu/~schmidt/COOTS-95.ps.Z
+
+3. Incorporation with higher-level patterns and programming methods --
+
+ Here's where the Adapter pattern stuff really pays
+ off. For example, by making all the UNIX network
+ programming interfaces and synchronization mechanisms
+ have the same API I can write very powerful higher-level
+ patterns (e.g., Connector and Acceptor) that generalize
+ over these mechanisms. For proof of this, take a look
+ at the ./tests/Connection/non_blocking directory
+ in the latest ACE-beta.tar.gz at wuarchive.wustl.edu
+ in the /languages/c++/ACE directory. It implements
+ the same exact program that can be parameterized
+ with sockets, TLI, and STREAM pipes *without*
+ modifying any application source code. It is
+ literally impossible to do this without wrappers.
+
+----------------------------------------
+36.
+
+> How can I use a kind of "Reactor" in such a way that a reading
+> thread can notice the arrival of new data on several shared memory
+> areas ?
+
+Ah, that is a tricky issue! The underlying problem is that UNIX is
+inconsistent with respect to the ability to "wait" on different
+sources of events. In this case, Windows NT is much more consistent
+(but it has its own set of problems...).
+
+> Poll, Select and Reactor (so far I read) assume that file
+> descriptors are present, which is not the case with shared memory.
+
+That's correct (though to be more precise, the Reactor can also deal
+with signals, as I discuss below).
+
+> Is there a common and efficient way to deal with that kind of
+> situation, or do I have to insert extra ipc mechanisms (based on
+> descriptors) ?
+
+There are several solutions:
+
+1. Use the Reactor's signal handling capability (see the
+ ./tests/Reactor/misc/signal_tester.C for an example)
+ and have the process/thread that writes to shared
+ data send a signal to the reader process(es). The
+ disadvantage of this is that your code needs to
+ be signal-safe now...
+
+2. Use a combination of SPIPE_Streams and the Reactor
+ to implement a simple "notification protocol," e.g.,
+ the receiver process has an Event_Handler with a
+ SPIPE_Stream in it that can be notified when the
+ sender process writes data to shared memory.
+ The disadvantage here is that there's an extra
+ trip through the kernel, though the overhead
+ is very small since you only need to send 1 byte.
+
+3. Use threads and either bypass the Reactor altogether
+ or integrate the threads with the Reactor using its
+ Reactor::notify() mechanism (see the
+ ./tests/Reactor/misc/notification.C file for an
+ example of how Reactor::notify() works). The
+ disadvantage of this approach is that it won't
+ work for platforms that lack threads.
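+
+Here's a bare-bones sketch of the second approach, using a plain
+(unnamed) ACE_Pipe instead of an SPIPE_Stream just to keep the sketch
+short -- an unnamed pipe only works between related processes or
+threads, so for unrelated processes use the SPIPE_Stream as described
+above (Shm_Notifier and the 1-byte token are illustrative):
+
+#include "ace/OS.h"
+#include "ace/Reactor.h"
+#include "ace/Pipe.h"
+#include "ace/Event_Handler.h"
+
+// Reader side: the Reactor wakes this handler up whenever the writer
+// sends its 1-byte "shared memory has changed" token.
+class Shm_Notifier : public ACE_Event_Handler
+{
+public:
+  Shm_Notifier (ACE_HANDLE read_end): handle_ (read_end) {}
+
+  virtual ACE_HANDLE get_handle (void) const { return this->handle_; }
+
+  virtual int handle_input (ACE_HANDLE)
+  {
+    char token;
+    ACE_OS::read (this->handle_, &token, sizeof token);
+    // ... now go examine the shared memory region ...
+    return 0;
+  }
+
+private:
+  ACE_HANDLE handle_;
+};
+
+// Reader: register the pipe's read end with the Reactor, e.g.,
+//   ACE_Pipe pipe;  pipe.open ();
+//   Shm_Notifier notifier (pipe.read_handle ());
+//   reactor.register_handler (&notifier, ACE_Event_Handler::READ_MASK);
+//
+// Writer, after updating shared memory:
+//   ACE_OS::write (pipe.write_handle (), "x", 1);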
+
+----------------------------------------
+37. Wrapping communication mechanisms in C++ iostreams
+
+> What do you think about wrapping communication methodologies in C++ streams?
+> What I mean is defining a stream and extractor/inserter functions
+> whose underlying implementation reads/writes comm mechanisms instead of
+> files. I would think this to be a very general interface for all comms
+> implementations. All user code would look the same, but the underlying stream
+> implementations would be different. Whether the stream functionality would
+> be defined by the stream itself (e.g., tcpstream) or with manipulators
+> (e.g., commstream cs; cs << tcp;) is up for grabs in my mind.
+>
+> Anyhow, I was wondering about your input...
+
+That technique has been used for a long time. In fact, there are
+several freely available versions of iostreams that do this and
+RogueWave also sells a new product (Net.h++) that does this. I think
+this approach is fine for simple applications.
+
+However, it doesn't really work well if you need to write
+sophisticated distributed applications that must use features like
+non-blocking I/O, concurrency, or that must be highly robust against
+the types of errors that occur in a distributed system.
+
+For these kinds of systems you either need some type of ORB, or you
+need to write the apps with lower-level C++ wrappers like the ones
+provided by ACE.
+
+----------------------------------------
+
+38. ACE_Message_Block cont() vs. next()
+
+> What is the difference between cont() and next() in an ACE_Message_Block?
+
+Ah, good question. cont() gives you a pointer to the next
+Message_Block in a chain of Message_Block fragments that all belong to
+the same logical message. In contrast, next() (and prev()) return
+pointers to the next (and previous) Message_Block in the doubly linked
+list of Message_Blocks on a Message_Queue.
+
+BTW, this is *exactly* the same structure as in System V Streams...
+
+> Which would I use if I wanted to add a header and a trailer, each stored in
+> ACE_Message_Blocks of their own, to another ACE_Message_Block?
+
+You should use cont() for that. Does that make sense?
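+
+For example, here's a small sketch that chains a header, payload, and
+trailer into one logical message (HDR_SIZE, DATA_SIZE, TRL_SIZE, and
+msg_queue are placeholders for your own sizes and queue):
+
+#include "ace/Message_Block.h"
+
+ACE_Message_Block *header  = new ACE_Message_Block (HDR_SIZE);
+ACE_Message_Block *payload = new ACE_Message_Block (DATA_SIZE);
+ACE_Message_Block *trailer = new ACE_Message_Block (TRL_SIZE);
+
+// cont() links the fragments of one logical message together.
+header->cont (payload);
+payload->cont (trailer);
+
+// Enqueueing <header> hands the whole chain to the Message_Queue;
+// the queue itself maintains the next()/prev() links.
+msg_queue->enqueue_tail (header);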
+----------------------------------------
+
+39. Reading the gzip'd online documentation
+
+> I think that your site is cool, but it's being a terrible tease in
+> that I really want to read the contents, but don't know anything
+> about x-gzip formatting. I'm running Netscape 2.0 under MS Windows
+> NT.
+
+ x-gzip is a hook for the GNU "gzip" program, which should be
+freely available for NT at prep.ai.mit.edu in the /pub/gnu directory.
+Here's what our "Global Mailcap" entry for Netscape looks like (see the
+"Helper Applications" menu under "Preferences"):
+
+----------------------------------------
+# For the format of this file, see
+# ftp://wuarchive/doc/internet-drafts/draft-borenstein-mailcap-00.txt.Z
+
+audio/*; audiotool %s; test=test -n "$DISPLAY" && test -w /dev/audio
+image/*; xv %s; test=test -n "$DISPLAY"
+application/postscript; ghostview %s; test=test -n "$DISPLAY"
+video/mpeg; mpeg_play %s; test=test -n "$DISPLAY"
+video/*; xanim +Ae %s; test=test -n "$DISPLAY"
+application/x-dvi; xdvi %s; test=test -n "$DISPLAY"
+application/x-compress; uncompress %s; test=test -n "$DISPLAY"
+application/x-gzip; gunzip %s; test=test -n "$DISPLAY"
+application/x-zip; unzip %s; test=test -n "$DISPLAY"
+----------------------------------------
+
+BTW, if you can't get gunzip (or uncompress) to work, please ftp to
+wuarchive.wustl.edu and look in the directory
+/languages/c++/ACE/ACE-documentation/. All the papers are there, as
+well.
+
+----------------------------------------
+
+40. Finding the port number picked by ACE_SOCK_Dgram
+
+> What I am doing is
+> 1. Making an ACE_SOCK_Dgram and let it choose the next available port number.
+> 2. Making a message that will be broadcast to X number of servers. This
+> message has a port number which the server will use to send its reply.
+> 3. Broadcast the message to a fixed port number.
+> 4. Wait for replies from the servers.
+>
+>
+> It looks like I need "ACE::bind_port" to return the port number that
+> it picked, and "ACE_SOCK_Dgram::shared_open" will need to store the
+> port number so I could call some function like
+> ACE_SOCK_Dgram::get_port_number, or it would need to return the port
+> number instead of the handle (I could always call
+> ACE_SOCK_Dgram::get_handle if I needed the handle).
+>
+> Is there a way to get the port number that I have missed?
+
+Sure, can't you just do this:
+
+// Defaults to all "zeros", so bind will pick the port.
+ACE_INET_Addr dg_addr;
+
+ACE_SOCK_Dgram dg;
+
+// Open the datagram socket; the OS chooses an ephemeral port.
+dg.open (dg_addr);
+
+// Ask the socket which address/port it was actually bound to.
+dg.get_local_addr (dg_addr);
+
+u_short port = dg_addr.get_port_number ();
+
+----------------------------------------
+
+41. How can you rename a core file?
+
+/* Register a SIGSEGV handler that renames the core file. Note that
+   fork()/waitpid()/rename() aren't strictly async-signal-safe, so
+   treat this as a debugging aid. The registration code below belongs
+   in your startup code: */
+
+#include <signal.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <unistd.h>
+#include <sys/types.h>
+#include <sys/wait.h>
+
+static void Handle_Coredump_Signal (int);
+
+struct sigaction new_disposition, old_disposition;
+
+new_disposition.sa_handler = Handle_Coredump_Signal;
+sigemptyset (&new_disposition.sa_mask);
+sigaddset (&new_disposition.sa_mask, SIGCHLD);
+new_disposition.sa_flags = 0;
+sigaction (SIGSEGV, &new_disposition, &old_disposition);
+
+*****************
+
+static void
+Handle_Coredump_Signal (int /* signum */)
+{
+  int status;
+  pid_t child;
+  char new_core_name[64];
+
+  if ((child = fork ()) == 0)
+    {
+      /* The child dumps core on our behalf... */
+      abort ();
+    }
+  else
+    {
+      /* ...and the parent waits for it, then renames the file. */
+      if (waitpid (child, &status, 0) == -1)
+        exit (-1);
+
+      sprintf (new_core_name, "core_%d", (int) getpid ());
+      rename ("core", new_core_name);
+      exit (0);
+    }
+}
+
+----------------------------------------
+
+42. ACE inlining policies
+
+> I have seen 2 different inlining policies in ACE
+>
+> 1) The .i file is included unconditionally by both the .h and .C file
+> and all functions in the .i file carry the "inline" keyword.
+
+Right. Those are for cases where I *always* want to inline those
+methods. I do this mostly for very short wrapper methods (e.g.,
+read() or write()) that are likely to be on the "fast path" of an
+application.
+
+> 2) The .i file is included by the .h file ONLY if __INLINE__ is defined
+> for the compile. This causes the functions in the .i file to be
+> compiled as inline functions (INLINE translates to inline in this case).
+> If __INLINE__ is not defined, the .i file is only included by the .C
+> file and the functions do NOT carry the "inline" keyword.
+
+I do this for cases where it's really not essential to have those
+methods inline, but some users might want to compile ACE that way if
+they want to eliminate all the wrapper function-call overhead. For
+instance, I'll typically do this when I'm running benchmarks.
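+
+In other words, the second policy looks roughly like this (Foo is a
+placeholder class and the macro spellings are abbreviated -- check the
+actual ACE headers for the exact names):
+
+// Foo.h
+#if defined (__INLINE__)
+#define INLINE inline
+#include "Foo.i"
+#endif /* __INLINE__ */
+
+// Foo.C
+#include "Foo.h"
+#if !defined (__INLINE__)
+#define INLINE
+#include "Foo.i"
+#endif /* ! __INLINE__ */
+
+// Foo.i -- every definition is prefixed with INLINE, which expands
+// to "inline" only when __INLINE__ is enabled for the build.
+INLINE int
+Foo::bar (void)
+{
+  return 42;
+}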
+
+----------------------------------------
+
+43. Integrating ACE and CORBA
+
+> Our goal is to implement a CORBA-II compliant application. I am
+> trying to conceptually visualize the applicability to ACE to this
+> attempt (which we're pretty excited about), and I was hoping you'd
+> offer any opinions / observations that you might have.
+
+We've successfully integrated ACE with several implementations of
+CORBA (in particular Orbix 1.3 and 2.0) and used it in a number of
+commercial applications. In these systems, we use ACE for a number of
+tasks, including the following:
+
+1. Intra-application concurrency control, threading, and
+   synchronization via the ACE_Thread_Manager and Synch* classes
+   (a small sketch follows this list).
+
+2. Dynamic linking of services via the ACE_Service_Config.
+
+3. Integration of event loops via the ACE_Reactor.
+
+4. Management of shared memory via ACE_Malloc.
+
+5. High-performance network I/O via the ACE_SOCK* wrappers.
+
+plus many more.
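+
+For instance, the first item boils down to code along these lines
+(servant_worker is a placeholder for whatever your servants actually
+do):
+
+#include "ace/Thread_Manager.h"
+
+// Placeholder thread entry point.
+static void *
+servant_worker (void *)
+{
+  // ... service requests, guarded by the Synch* classes ...
+  return 0;
+}
+
+int
+main (int, char *[])
+{
+  ACE_Thread_Manager thr_mgr;
+
+  // Spawn a small pool of worker threads and wait for them to exit.
+  thr_mgr.spawn_n (4, servant_worker);
+  thr_mgr.wait ();
+  return 0;
+}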
+
+You can find out more info about the ACE/CORBA integration and the
+performance issues associated with it in the following paper:
+
+http://www.cs.wustl.edu/~schmidt/COOTS-96.ps.gz
+
+----------------------------------------
+
+44. Calling the Reactor's event loop recursively
+
+> Can the Reactor's event loop be called recursively?
+
+This is not advisable. The Reactor's dispatch() method is not
+reentrant (though it is thread-safe) since it maintains state about
+the active descriptors it is iterating over. Therefore, depending on
+the descriptors you're selecting on, you could end up with spurious
+handle_*() callbacks if you make nested calls to the
+Reactor::handle_events() method.
+
+> For example, if I have a program that sets up some event handlers
+> and then calls, in an infinite loop, ACE_Reactor::handle_events().
+> Can one of the event handlers call handle_events() again if it needs
+> to block, while allowing other event handlers a chance to run?
+
+I'm not sure if this is really a good idea, even if the Reactor were
+reentrant. In particular, what good does it do for one Event_Handler
+to "block" by calling handle_events() again? The event the handler is
+waiting for will likely be dispatched by the nested handle_events()
+call! So when you returned back from the nested call to
+handle_events() it will be tricky to know what state you were in and
+how to proceed.
+
+Here's how I design my single-threaded systems that have to deal with
+this (a small code sketch follows the list):
+
+ 1. I use a single event loop based on the Reactor, which acts
+          as a cooperative multi-tasking scheduler/dispatcher.
+
+ 2. I then program all Event_Handler's as non-blocking I/O
+ objects. This is straightforward to do for both input and
+ output using the ACE_Reactor::schedule_wakeup() and
+ ACE_Reactor::cancel_wakeup() methods (available with the
+ latest version of ACE).
+
+ 3. Then, whenever an Event_Handler must block on I/O, it
+ queues up its state on an ACE_Message_Queue, calls
+ ACE_Reactor::schedule_wakeup(), and returns to the
+ main event loop so that other Event_Handlers can be
+ dispatched. When the I/O is ready, the Reactor will
+ call back to the appropriate handle_* method, which
+ can pick up the state it left in the Message_Queue and
+ continue.
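+
+Here's a rough sketch of steps 2 and 3 (My_Handler, peer_send(), and
+the explicit reactor_ pointer are placeholders for your own code):
+
+#include "ace/Reactor.h"
+#include "ace/Synch.h"
+#include "ace/Message_Queue.h"
+#include "ace/Message_Block.h"
+
+class My_Handler : public ACE_Event_Handler
+{
+public:
+  // Called by the application when a non-blocking send would block:
+  // park the data and ask the Reactor for a wakeup when the
+  // descriptor becomes writable.
+  int defer_send (ACE_Message_Block *mb)
+  {
+    this->queue_.enqueue_tail (mb);
+    return this->reactor_->schedule_wakeup
+      (this, ACE_Event_Handler::WRITE_MASK);
+  }
+
+  // Called back by the Reactor when output is possible again.
+  virtual int handle_output (ACE_HANDLE)
+  {
+    while (!this->queue_.is_empty ())
+      {
+        ACE_Message_Block *mb = 0;
+        this->queue_.dequeue_head (mb);
+
+        // If the send would still block, put the fragment back and
+        // return to the event loop until the next wakeup.
+        if (this->peer_send (mb) == -1)
+          {
+            this->queue_.enqueue_head (mb);
+            return 0;
+          }
+        mb->release ();
+      }
+
+    // Everything is flushed, so stop asking for write events.
+    return this->reactor_->cancel_wakeup
+      (this, ACE_Event_Handler::WRITE_MASK);
+  }
+
+private:
+  // Placeholder for your non-blocking write, e.g., a call to
+  // ACE_SOCK_Stream::send().
+  int peer_send (ACE_Message_Block *);
+
+  ACE_Reactor *reactor_;                // set by the application
+  ACE_Message_Queue<ACE_NULL_SYNCH> queue_;
+};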
+
+There are a number of places to find more information on this sort of
+design:
+
+ 1. $WRAPPER_ROOT/apps/Gateway/Gateway/Channel.cpp --
+ This Gateway application example shows the C++ code.
+
+ 2. http://www.cs.wustl.edu/~schmidt/TAPOS-95.ps.gz --
+ This paper describes the underlying patterns.
+
+ 3. http://www.cs.wustl.edu/~schmidt/OONP-tutorial4.ps.gz
+ -- This tutorial explains the source code and
+ the patterns.
+
+BTW, I'll be describing patterns for this type of design challenge in
+my tutorial at USENIX COOTS in June. Please check out
+http://www.cs.wustl.edu/~schmidt/COOTS-96.html for more info.
+
+----------------------------------------
+
+45. Integrating message queues with the Reactor
+
+> In one of my programs, a process needs to receive input from
+> multiple input sources. One of the input sources is a file
+> descriptor while another is a message queue. Is there a clean way to
+> integrate this a message queue source into the Reactor class so that
+> both inputs are handled uniformly?
+
+Do you have multiple threads on your platform? If not, then life will
+be *very* tough and you'll basically have to use multiple processes to
+do what you're trying to do. There is *no* portable way to combine
+System V message queues and file descriptors on UNIX, unfortunately.
+
+If you do have threads, the easiest thing to do is to have a thread
+reading the message queue and redirecting the messages into the
+Reactor via its notify() method.
+
+Please take a look at the program called
+
+examples/Reactor/Misc/notification.cpp
+
+for an example.
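+
+A bare-bones sketch of that reader thread (Msgq_Reader_Args and the
+msgrcv() details are placeholders; by default notify() causes the
+Reactor to call back your handler's handle_exception() method):
+
+#include "ace/Reactor.h"
+#include "ace/Thread_Manager.h"
+
+// Tells the reader thread which Reactor to poke and which
+// Event_Handler should be called back.
+struct Msgq_Reader_Args
+{
+  ACE_Reactor *reactor;
+  ACE_Event_Handler *handler;
+};
+
+static void *
+msgq_reader (void *a)
+{
+  Msgq_Reader_Args *args = (Msgq_Reader_Args *) a;
+
+  for (;;)
+    {
+      // ... blocking msgrcv() on the System V message queue here ...
+
+      // Wake up the Reactor's event loop; it will dispatch
+      // args->handler->handle_exception() in its own thread.
+      args->reactor->notify (args->handler);
+    }
+  return 0;
+}
+
+// Spawned from the main thread via an ACE_Thread_Manager, e.g.,
+//   thr_mgr.spawn (msgq_reader, (void *) &args);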
+
+----------------------------------------
+
+46. Getting the principal host name for an acceptor
+
+> I'm writing a program to find out the address for a socket. The
+> idea is that we open an ACE_Acceptor (and will eventually perform
+> accept() on it.) Before we can do that we need to find out the
+> address of the ACE_Acceptor so that we can publish it (for others to
+> be able to connect to it.) The trouble is that the call
+> ACE_INET_Addr::get_host_name () prints "localhost" as the host name
+> while I would like the principal host name to be printed instead.
+
+All ACE_INET_Addr::get_host_name() is doing is calling
+ACE_OS::gethostbyaddr(), which in turn will call the socket
+gethostbyaddr() function. I suspect that what you should do is
+something like the following:
+
+// Listen at an address/port chosen by the OS.
+ACE_Acceptor listener (ACE_Addr::sap_any);
+
+ACE_INET_Addr addr;
+
+// Ask the acceptor which address it is actually bound to.
+listener.get_local_addr (addr);
+
+char *host = addr.get_host_name ();
+
+// If the resolver only gives back "localhost," fall back on the
+// name of the machine itself.
+if (::strcmp (host, "localhost") == 0)
+{
+  char name[MAXHOSTNAMELEN];
+  ACE_OS::hostname (name, sizeof name);
+  cerr << name << endl;
+}
+else
+  cerr << host << endl;
+
+----------------------------------------
+
+47. Non-blocking socket calls on UNIX and NT
+
+> Could you please point me to stuff dealing with asynchronous cross
+> platform socket calls. I want to use non blocking socket calls on
+> both UNIX and NT.
+
+Sure, no problem. Take a look at the
+
+./examples/Connection/non_blocking/
+
+directory. There are a number of examples there. In addition, there
+are examples of non-blocking connections in
+
+./examples/IPC_SAP/SOCK_SAP/CPP-inclient.cpp
+
+The code that actually enables the non-blocking socket I/O is in
+ace/IPC_SAP.cpp
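+
+For instance, here's a small sketch of putting a connection into
+non-blocking mode (the host and port are made up; enable() is
+inherited from the ACE_IPC_SAP base class mentioned above):
+
+#include <errno.h>
+#include "ace/SOCK_Connector.h"
+#include "ace/SOCK_Stream.h"
+#include "ace/INET_Addr.h"
+#include "ace/Log_Msg.h"
+
+ACE_SOCK_Stream peer;
+ACE_SOCK_Connector connector;
+ACE_INET_Addr addr (7777, "somehost");   // illustrative host/port
+
+// A zero timeout makes the connect() itself non-blocking;
+// -1 with EWOULDBLOCK just means the connection is in progress.
+ACE_Time_Value nonblock (0, 0);
+
+if (connector.connect (peer, addr, &nonblock) == -1
+    && errno != EWOULDBLOCK)
+  ACE_ERROR ((LM_ERROR, "%p\n", "connect"));
+
+// Put the data-transfer socket into non-blocking mode as well.
+peer.enable (ACE_NONBLOCK);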
+
+----------------------------------------
+
+48. Exceptions and the Reactor
+
+> Is ACE exception-safe? If I throw an exception out of event
+> handler, will the Reactor code clean itself?
+
+Yes, that should be ok. In general, the two things to watch out for
+with exceptions are:
+
+    1. Memory leaks -- There shouldn't be any memory leaks internal
+       to the Reactor since it doesn't allocate any memory when
+ dispatching event handlers.
+
+ 2. Locks -- In the MT_SAFE version of ACE, the Reactor acquires
+ an internal lock before dispatching Event_Handler callbacks.
+ However, this lock is controlled by an ACE_Guard, whose
+ destructor will release the lock if exceptions are thrown
+ from an Event_Handler.
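+
+The guard idiom involved looks essentially like this (a simplified
+illustration, not the Reactor's actual source):
+
+#include "ace/Synch.h"
+
+int
+dispatch_one (ACE_Thread_Mutex &lock)
+{
+  // The constructor acquires <lock>...
+  ACE_Guard<ACE_Thread_Mutex> guard (lock);
+
+  // ... dispatch the Event_Handler callback here. If the callback
+  // throws, <guard> goes out of scope during stack unwinding and
+  // its destructor releases <lock>.
+  return 0;
+}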
+
+----------------------------------------
+
+49. Overriding the ACE_MMAP_Memory_Pool key
+
+> I am building a Shared memory manager object using MMAP and MALLOC
+> basically as:
+>
+> typedef ACE_Malloc<ACE_MMAP_Memory_Pool, ACE_Process_Mutex> SHMALLOC;
+>
+> I noticed that the ACE_MMAP_Memory_Pool class provides for the users
+> to specify a Semaphore key. However, once I use it via the
+> ACE_Malloc<..>::ACE_Malloc(const char* poolname) constructor, I lose
+> this option.
+
+Yes, that is correct. That design decision was made to keep a clean
+interface that will work for all the various types of memory pools.
+
+> Is there any recommended way to specialize ACE classes to allow this
+> key to be overridden?
+
+Yes indeed, you just create a new subclass (e.g., class
+My_Memory_Pool) that inherits from ACE_MMAP_Memory_Pool and then you
+pass in the appropriate key to the constructor of ACE_MMAP_Memory_Pool
+in the constructor of My_Memory_Pool. Then you just say:
+
+typedef ACE_Malloc<My_Memory_Pool, ACE_Process_Mutex> SHMALLOC;
+
+Please check out the file:
+
+examples/Shared_Malloc/Malloc.cpp
+
+which illustrates more or less how to do this.
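+
+In outline, the subclass looks like this (check the
+ACE_MMAP_Memory_Pool constructor in ace/Memory_Pool.h for the exact
+arguments your version takes -- the key/options parameter below is
+just a placeholder):
+
+#include "ace/Memory_Pool.h"
+#include "ace/Malloc.h"
+#include "ace/Synch.h"
+
+class My_Memory_Pool : public ACE_MMAP_Memory_Pool
+{
+public:
+  My_Memory_Pool (const char *pool_name)
+    // Forward whatever key/options argument your version of the
+    // ACE_MMAP_Memory_Pool constructor accepts here.
+    : ACE_MMAP_Memory_Pool (pool_name /*, my_key_or_options */)
+  {
+  }
+};
+
+typedef ACE_Malloc<My_Memory_Pool, ACE_Process_Mutex> SHMALLOC;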
+
+----------------------------------------
+
+50. Turning on ACE_TRACE output
+
+> What is the best way to turn on TRACE output in ACE. I commented
+> out the #define ACE_NTRACE 1 in config.h and rebuilt ACE and the
+> examples.
+
+The best way to do this is to say
+
+#define ACE_NTRACE 0
+
+in config.h.
+
+> When I run the CPP-inserver example in examples/IPC_SAP/SOCK_SAP, I
+> get some trace output but not everything I would expect to see.
+
+Can you please let me know what you'd expect to see that you're not
+seeing? Some of the ACE_TRACE macros for the lower-level ACE methods
+are commented out to avoid problems with infinite recursion (i.e.,
+tracing the ACE_Trace calls...). I haven't had a chance to go over
+all of these in depth, but I know that it should be possible to turn
+many of them back on.
+
+> It would be nice to have a runtime option for turning trace on and
+> off.
+
+There already is. In fact, there are two ways to do it.
+If you want to control tracing for the entire process, please check
+out ACE_Trace::start_tracing() and ACE_Trace::stop_tracing().
+
+If you want to control tracing on a per-thread basis please take a
+look at the ACE_Log_Msg class. There are methods called
+stop_tracing() and start_tracing() that do what you want.
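+
+For example (ACE_LOG_MSG is the usual accessor for the calling
+thread's ACE_Log_Msg instance):
+
+#include "ace/Trace.h"
+#include "ace/Log_Msg.h"
+
+// Process-wide: suspend and later resume all ACE_TRACE output.
+ACE_Trace::stop_tracing ();
+// ... untraced section ...
+ACE_Trace::start_tracing ();
+
+// Per-thread: affects only the calling thread.
+ACE_LOG_MSG->stop_tracing ();
+// ...
+ACE_LOG_MSG->start_tracing ();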
+
+----------------------------------------
+
+51. Signal handling with multiple Event_Handlers
+
+> I've been using an acceptor and a connector in one (OS-) process.
+> What happens if a signal is sent to this process? Is the signal
+> processed by every ACE_Event_Handler (or its descendants) that is
+> around? The manual page simply states that handle_signal() is called
+> as soon as a signal is triggered by the OS.
+
+How this signal is handled depends on several factors:
+
+1. Whether you're using ACE_Sig_Handler or ACE_Sig_Handlers to register
+   the signal handlers.
+
+2. If you're using ACE_Sig_Handler, then the ACE_Event_Handler * that
+   you've most recently registered to handle the signal will
+   have its handle_signal() method called back by the Reactor.
+
+3. If you're using ACE_Sig_Handlers, then all of the ACE_Event_Handler *'s
+   that you've registered will be called back.
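+
+For instance, here's a minimal sketch of registering a handler for a
+signal through the Reactor, which uses an ACE_Sig_Handler under the
+hood (the three-argument handle_signal() signature is the one from
+newer releases; older ones may take just the signal number):
+
+#include <signal.h>
+#include "ace/Reactor.h"
+#include "ace/Event_Handler.h"
+#include "ace/Log_Msg.h"
+
+class Shutdown_Handler : public ACE_Event_Handler
+{
+public:
+  // Called back when the registered signal is delivered.
+  virtual int handle_signal (int signum, siginfo_t * = 0, ucontext_t * = 0)
+  {
+    ACE_DEBUG ((LM_DEBUG, "signal %d received\n", signum));
+    return 0;
+  }
+};
+
+// e.g., in main():
+//   Shutdown_Handler handler;
+//   ACE_Reactor reactor;
+//   reactor.register_handler (SIGINT, &handler);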
+
+For examples of how this works, please check out
+
+$WRAPPER_ROOT/examples/Reactor/Misc/test_signals.cpp
+
+This contains a long comment that explains precisely how everything
+works!