author     Johnny Willemsen <jwillemsen@remedy.nl>   2011-10-05 11:00:59 +0000
committer  Johnny Willemsen <jwillemsen@remedy.nl>   2011-10-05 11:00:59 +0000
commit     4503ef5d051d0760575a7f5c2bbdcb9ff41e6ea2 (patch)
tree       b78edd9626a6797c22c3ebe7b7e19958f9fb045f
parent     2c278797b6c280c5f1c4598637c76c9dc7ea91f3 (diff)
download   ATCD-4503ef5d051d0760575a7f5c2bbdcb9ff41e6ea2.tar.gz
Fuzz
-rw-r--r--  TAO/examples/Callback_Quoter/README                     16
-rw-r--r--  TAO/examples/Kokyu_dsrt_schedulers/README               58
-rw-r--r--  TAO/examples/Kokyu_dsrt_schedulers/fp_example/README    18
-rw-r--r--  TAO/examples/Kokyu_dsrt_schedulers/mif_example/README   18
-rw-r--r--  TAO/examples/Quoter/README                              38
-rw-r--r--  TAO/examples/RTScheduling/MIF_Scheduler/README           4
-rw-r--r--  TAO/examples/Simulator/README                           42
-rw-r--r--  TAO/examples/TypeCode_Creation/README                   14
-rw-r--r--  TAO/examples/ior_corbaloc/README                        12
-rw-r--r--  TAO/examples/mfc/README                                 30
10 files changed, 131 insertions, 119 deletions
diff --git a/TAO/examples/Callback_Quoter/README b/TAO/examples/Callback_Quoter/README
index 201c39fb56b..8a7b079afa0 100644
--- a/TAO/examples/Callback_Quoter/README
+++ b/TAO/examples/Callback_Quoter/README
@@ -1,3 +1,5 @@
+$Id$
+
******************************************************************************
CALLBACK QUOTER TEST EXAMPLE -- Kirthika Parameswaran
<kirthika@cs.wustl.edu>
@@ -5,7 +7,7 @@ CALLBACK QUOTER TEST EXAMPLE -- Kirthika Parameswaran
This is an distributed application which highlights the importance
of the callback feature in helping meet the demands of various clients
-without them having to poll continously for input from the server.
+without them having to poll continuously for input from the server.
There are three parts to the Callback Quoter Example.
@@ -13,10 +15,10 @@ There are three parts to the Callback Quoter Example.
2) Notifier
3) Consumer
-
+
In detail:
_________
-
+
1) Supplier
--is the market feed daemon who keeps feeding the current stock
information to the Notifier periodically.
@@ -26,9 +28,9 @@ the Notifier.
2) Notifier
-- On getting information form the supplier, it checks whether there are
-any consumers ineterested in the information and accordingly sends it to
+any consumers interested in the information and accordingly sends it to
them. The consumer object is registered with the notifier and the data
-is pushed to the consumer usoing this refernce.
+is pushed to the consumer using this reference.
3) Consumer
-- He is the stock broker interested in the stock values in the market.
@@ -76,7 +78,7 @@ You can unregister by typing 'u' and quit by typing 'q'.
./supplier -ifilename
The -i option simply tells the daemon where to pick information from.
-TIP:: the contents of the input file per line should be: stockname and its price.
+TIP:: the contents of the input file per line should be: stockname and its price.
Sample: ./example.stocks
The other option includes setting the period for the stock feed.
@@ -111,7 +113,7 @@ You can unregister by typing 'u' and quit by typing 'q'.
./supplier -ifilename -fior_file -s
The -i option simply tells the daemon where to pick information from.
-TIP:: the contents of the input file per line should be: stockname and its price.
+TIP:: the contents of the input file per line should be: stockname and its price.
Sample: ./example.stocks
The other option includes setting the period for the stock feed.
diff --git a/TAO/examples/Kokyu_dsrt_schedulers/README b/TAO/examples/Kokyu_dsrt_schedulers/README
index e2e4c5ee040..dcd17a977ea 100644
--- a/TAO/examples/Kokyu_dsrt_schedulers/README
+++ b/TAO/examples/Kokyu_dsrt_schedulers/README
@@ -1,32 +1,34 @@
+$Id$
+
Design approaches for the Kokyu based DSRT scheduler/dispatcher
--------------------------------------------------------
The DSRT schedulers in this directory use the Kokyu DSRT
dispatching classes present in $ACE_ROOT/Kokyu. These
-act as wrappers/adapters around the Kokyu DSRT dispatcher.
+act as wrappers/adapters around the Kokyu DSRT dispatcher.
The Kokyu DSRT dispatcher is responsible for scheduling
threads which ask the dispatcher to schedule themselves.
Currently there are two implementations for the Kokyu
DSRT dispatcher. One uses a condition-variable based
approach for scheduling threads and the other manipulates
priorities of threads and relies on the OS scheduler for
-dispatching the threads appropriately.
+dispatching the threads appropriately.
CV-based approach
-----------------
In this approach, it is assumed that the threads "yield"
-on a regular basis to the scheduler by calling
+on a regular basis to the scheduler by calling
update_scheduling_segment. Only one thread is running at
-any point in time. All the other threads are blocked
+any point in time. All the other threads are blocked
on a condition variable. When the currently running
-thread yields, it will cause the condition variable
+thread yields, it will cause the condition variable
to be signalled. All the eligible threads are stored
-in a scheduler queue (rbtree), the most eligible thread
+in a scheduler queue (rbtree), the most eligible thread
determined by the scheduling discipline. This approach
-has the drawback that it requires a cooperative
-threading model, where threads yield voluntarily
-on a regular basis. The application threads are
-responsible for doing this voluntary yielding.
+has the drawback that it requires a cooperative
+threading model, where threads yield voluntarily
+on a regular basis. The application threads are
+responsible for doing this voluntary yielding.
OS-based approach
-----------------
@@ -43,29 +45,29 @@ while bumping down the priority of the currently running
thread, if it is not the most eligible. There are four
priority levels required for this mechanism to work,
listed in descending order of priorities. For example,
-a thread running at Active priority will preempt a
-thread running at Inactive priority level.
+a thread running at Active priority will preempt a
+thread running at Inactive priority level.
-1. Executive priority - priority at which the scheduler
+1. Executive priority - priority at which the scheduler
executive thread runs.
-2. Blocked priority - this is the priority to which
+2. Blocked priority - this is the priority to which
threads about to block on remote calls will be bumped
up to.
-3. Active priority - this is the priority to which
+3. Active priority - this is the priority to which
the most eligible thread is set to.
4. Inactive priority - this is the priority to which
all threads except the most eligible thread is set
to.
-As soon as a thread asks to be scheduled, a
+As soon as a thread asks to be scheduled, a
wrapper object is created and inserted into the queue.
This object carries the qos (sched params) associated
with that thread. A condition variable is signalled
-to inform the executive thread that the queue is
-"dirty". The scheduler thread picks up the most
+to inform the executive thread that the queue is
+"dirty". The scheduler thread picks up the most
eligble one and sets its priority to "active" and
-sets the currently running thread priority to
-"inactive".
+sets the currently running thread priority to
+"inactive".
The drawback to this approach is that it relies on
the OS scheduler to dispatch the threads. Also,
@@ -79,10 +81,10 @@ threads and this could cause priority inversions.
How to write a new DSRT scheduler using Kokyu
---------------------------------------------
One can use one of the schedulers as a starting
-point. The variation points are
+point. The variation points are
1. The scheduler parameters that need to be propagated
- along with the service context.
+ along with the service context.
2. The QoS comparison function, that determines which
thread is more eligible.
@@ -90,13 +92,13 @@ To aid (1), we have created a Svc_Ctxt_DSRT_QoS idl
interface (see ./Kokyu_qos.pidl). This interface
currently has the necessary things to be propagated
for FP, MIF and MUF schedulers. This can be altered
-if necessary to accomodate new sched params. The
+if necessary to accomodate new sched params. The
idea here is to let the IDL compiler generate the
marshalling code (including Any operators) so that
-these parameters can be shipped across in the
-service context in an encapsulated CDR.
+these parameters can be shipped across in the
+service context in an encapsulated CDR.
-To create customized QoS comparator functions, we
+To create customized QoS comparator functions, we
used the idea of C++ traits to let the user define
customized comparator functions. For example, the
MIF scheduler uses the following traits class.
@@ -130,12 +132,12 @@ MIF scheduler uses the following traits class.
The idea of traits makes the Kokyu dispatcher more flexible
in terms of creating new schedulers. For example, the
-Kokyu classes do not care about what concrete type
+Kokyu classes do not care about what concrete type
Guid is. It could be an OctetSequence for some applications,
whereas it could be an int for some others. The exact type
is defined by the application (in this case, the MIF scheduler)
using the traits class. In the above traits class the
-Guid's type is defined to be an octet sequence (indirectly).
+Guid's type is defined to be an octet sequence (indirectly).
The Kokyu dispatcher expects the following typedef's to
be present in the traits class:
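
As a rough, self-contained illustration of the traits idea discussed in this
README: the sketch below uses invented names (they are not the typedefs the
Kokyu dispatcher actually expects) to show how a dispatcher templated on a
traits class stays independent of the concrete Guid and QoS types.

    // Generic illustration of a traits class bundling the Guid and QoS types
    // with a comparator; the helper below is templated on the traits, so it
    // never needs to know the concrete types. All names are hypothetical.
    #include <cstddef>
    #include <utility>
    #include <vector>

    struct Example_MIF_Traits
    {
      typedef long Guid_t;   // could just as well be an octet sequence
      typedef long QoS_t;    // here QoS is a bare "importance" value

      struct More_Eligible   // returns true if lhs should run before rhs
      {
        bool operator() (const QoS_t &lhs, const QoS_t &rhs) const
        {
          return lhs > rhs;  // higher importance wins under MIF
        }
      };
    };

    // Picks the most eligible entry from a (non-empty) ready list of
    // (guid, qos) pairs, using only the types and comparator the traits
    // class provides.
    template <class TRAITS>
    typename TRAITS::Guid_t
    most_eligible (const std::vector<std::pair<typename TRAITS::Guid_t,
                                               typename TRAITS::QoS_t> > &ready)
    {
      typename TRAITS::More_Eligible better;
      std::size_t best = 0;
      for (std::size_t i = 1; i < ready.size (); ++i)
        if (better (ready[i].second, ready[best].second))
          best = i;
      return ready[best].first;
    }

Swapping in an octet-sequence Guid_t or a richer QoS_t only means writing a
new traits class; the scheduling code itself is untouched, which is the
flexibility the README describes.
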
diff --git a/TAO/examples/Kokyu_dsrt_schedulers/fp_example/README b/TAO/examples/Kokyu_dsrt_schedulers/fp_example/README
index 951fdfa0e2b..5b2874a0d06 100644
--- a/TAO/examples/Kokyu_dsrt_schedulers/fp_example/README
+++ b/TAO/examples/Kokyu_dsrt_schedulers/fp_example/README
@@ -1,16 +1,18 @@
+$Id$
+
This example illustrates the working of a Kokyu
based DSRT scheduler. There are 2 threads waiting
for requests on the server side. Two threads are
-started on the client side. The main thread as
+started on the client side. The main thread as
well as the worker threads are given the maximum
priority so that their "release" happen immediately.
Each of the worker threads make a remote two-way
-call to the server. The two requests are processed
-in different threads on the server side.
+call to the server. The two requests are processed
+in different threads on the server side.
-On the client side, the first thread is given lesser
-priority than the second thread and the second thread
-is started a little later than the first. It is
+On the client side, the first thread is given lesser
+priority than the second thread and the second thread
+is started a little later than the first. It is
expected that, on the server side, the first request
is processed by one of the two threads. When the
second request comes in, the other thread that processes
@@ -19,11 +21,11 @@ second request carries more importance than the first.
A timeline is generated which shows the sequence of
execution of the two different threads on the server.
-Make sure that you run in privileged mode.
+Make sure that you run in privileged mode.
To run the test using the CV-based approach (see ../README),
-./server -d
+./server -d
./client -d -x
To run the test using the OS-based approach (see ../README),
diff --git a/TAO/examples/Kokyu_dsrt_schedulers/mif_example/README b/TAO/examples/Kokyu_dsrt_schedulers/mif_example/README
index 5499ca9e937..c22e5080f79 100644
--- a/TAO/examples/Kokyu_dsrt_schedulers/mif_example/README
+++ b/TAO/examples/Kokyu_dsrt_schedulers/mif_example/README
@@ -1,16 +1,18 @@
+$Id$
+
This example illustrates the working of a Kokyu
based DSRT scheduler. There are 2 threads waiting
for requests on the server side. Two threads are
-started on the client side. The main thread as
+started on the client side. The main thread as
well as the worker threads are given the maximum
priority so that their "release" happen immediately.
Each of the worker threads make a remote two-way
-call to the server. The two requests are processed
-in different threads on the server side.
+call to the server. The two requests are processed
+in different threads on the server side.
-On the client side, the first thread is given lesser
-importance than the second thread and the second thread
-is started a little later than the first. It is
+On the client side, the first thread is given lesser
+importance than the second thread and the second thread
+is started a little later than the first. It is
expected that, on the server side, the first request
is processed by one of the two threads. When the
second request comes in, the other thread that processes
@@ -19,11 +21,11 @@ second request carries more importance than the first.
A timeline is generated which shows the sequence of
execution of the two different threads on the server.
-Make sure that you run in privileged mode.
+Make sure that you run in privileged mode.
To run the test using the CV-based approach (see ../README),
-./server -d
+./server -d
./client -d -x
To run the test using the OS-based approach (see ../README),
diff --git a/TAO/examples/Quoter/README b/TAO/examples/Quoter/README
index 35bddc465a8..f9ce6d4a829 100644
--- a/TAO/examples/Quoter/README
+++ b/TAO/examples/Quoter/README
@@ -1,4 +1,4 @@
-// $Id$
+$Id$
Here is a Stock Quoter example that features the use of the TAO IDL
compiler, the different types of configuration settings (global vs
@@ -20,25 +20,25 @@ Note: Moving the Quoter object is no longer available!
to use a servant locator on the POA managing the Quoter object.
Context: The Quoter example serves several tests, the first is the test
- of several multithreading policies and the second is showing the
- use of the Life Cycle Service as it is defined in the
+ of several multithreading policies and the second is showing the
+ use of the Life Cycle Service as it is defined in the
CORBA Common Object Services specification.
Life Cycle Service use-case:
-several processes exist: server,
- Factory_Finder,
+several processes exist: server,
+ Factory_Finder,
Generic_Factory,
Life_Cycle_Service
client
-several object exist: Quoter,
- Quoter_Factory,
- Quoter_Factory_Finder,
+several object exist: Quoter,
+ Quoter_Factory,
+ Quoter_Factory_Finder,
Quoter_Generic_Factory,
Quoter_Life_Cycle_Service
-server: The server process contains two kind of objects: Quoter and
+server: The server process contains two kind of objects: Quoter and
Quoter_Factory's. A Quoter is a very simple Object supporting
only one method. The focus is not on a sophisticated object
but on showing how policies work.
@@ -46,26 +46,26 @@ server: The server process contains two kind of objects: Quoter and
Factory_Finder: The COS spec. introduces the concept of a Factory Finder
which is capable to find proper factories. The Naming
- Service is used as lookup-mechanism. A reference to
+ Service is used as lookup-mechanism. A reference to
the Factory_Finder is passed as parameter of any copy
- or move request.
+ or move request.
-Generic_Factory: This process supports the object Quoter_Generic_Factory (QGF).
+Generic_Factory: This process supports the object Quoter_Generic_Factory (QGF).
The QGF supports the GenericFactory interface introduced by
- the COS specification. It forwards create_object requests to
- more concrete factories, e.g. the Quoter_Factory. The
+ the COS specification. It forwards create_object requests to
+ more concrete factories, e.g. the Quoter_Factory. The
concrete factories are found via the Naming Service.
-Life_Cycle_Service: This process is very similar to the Generic_Factory
+Life_Cycle_Service: This process is very similar to the Generic_Factory
proocess. It also supports an Object, which conforms to
the GenericFactory interface. The Quoter_Life_Cycle_Service
- conforms to the idea of a life cycle service as it is
+ conforms to the idea of a life cycle service as it is
introduced by the COS specification. The Quoter_Life_Cycle_Service
is neutral against the Quoter example. It is not dependent
on it. Only interfaces defined by the CosLifeCycle.idl file
- are used. The implemenation uses the COS Trading Service
- manage registered Generic Factories, as the Quoter_Generic_Factory
- for example. A lookup on the Trading Service is performed
+ are used. The implemenation uses the COS Trading Service
+ manage registered Generic Factories, as the Quoter_Generic_Factory
+ for example. A lookup on the Trading Service is performed
when a create_object request is invoked on it.
client: Creates one Quoter through using the Quoter_Factory_Finder. After that
diff --git a/TAO/examples/RTScheduling/MIF_Scheduler/README b/TAO/examples/RTScheduling/MIF_Scheduler/README
index f01d0702f54..8ede5f9dba3 100644
--- a/TAO/examples/RTScheduling/MIF_Scheduler/README
+++ b/TAO/examples/RTScheduling/MIF_Scheduler/README
@@ -1,3 +1,5 @@
+$Id$
+
Most Important First (MIF) Scheduler
====================================
@@ -16,7 +18,7 @@ any given time the scheduler dequeues the DT at the head of the queue,
that corresponds to the most important thread, and signals the thread
to activate it. The service context is used to send the importance and
GUID of the DT across hosts it spans so the DT can be scheduled
-accordingly on the remote host.
+accordingly on the remote host.
In this experiment we show how dynamic scheduling is done using the
Dynamic Scheduling framework with the MIF Scheduler as the pluggable
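
As a small stand-alone sketch of the importance-ordered queue this README
describes (the DT struct and its fields are illustrative only; the real
scheduler keeps full distributable-thread state and receives the
importance/GUID pair through the service context):

    // Stand-alone sketch of a "most important first" ready queue; not the
    // actual MIF_Scheduler code, just the queueing idea it describes.
    #include <iostream>
    #include <queue>
    #include <string>
    #include <vector>

    struct DT
    {
      std::string guid;   // identifies the distributable thread across hosts
      int importance;     // shipped in the service context with the GUID
    };

    struct Less_Important
    {
      bool operator() (const DT &a, const DT &b) const
      {
        return a.importance < b.importance;  // most important stays on top
      }
    };

    int main ()
    {
      std::priority_queue<DT, std::vector<DT>, Less_Important> ready;
      DT a = { "dt-1", 5 };
      DT b = { "dt-2", 9 };
      ready.push (a);
      ready.push (b);
      // The scheduler would now signal ready.top () -- here dt-2 -- to run.
      std::cout << "most important: " << ready.top ().guid << std::endl;
      return 0;
    }
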
diff --git a/TAO/examples/Simulator/README b/TAO/examples/Simulator/README
index 6467abb9637..d463a1b622b 100644
--- a/TAO/examples/Simulator/README
+++ b/TAO/examples/Simulator/README
@@ -3,7 +3,7 @@ $Id$
Documentation for the Simulator/DOVE demo
Purposes: 1) To show how the event service can be used to as a medium to
- transport monitoring events including data.
+ transport monitoring events including data.
2) To show how objects implemented in Java can access/can be accessed
by TAO objects.
3) To show the feasability of the DOVE framework as mentioned in
@@ -19,7 +19,7 @@ Application: Using the Event Service as distribution media, event
consumers. Filtering can be activated.
The mapping to DOVE is the following:
Event Channel - DOVE Agent
- DOVE Browser
+ DOVE Browser
and/or DOVE MIB - Monitor/Event Consumer
Event Supplier - Monitored Application (here
an object supplying recorded scheduling
@@ -40,7 +40,7 @@ Implementation:
They are generated by the monitored application and are
consumed by the DOVE Browser, a JAVA applet or application running
on a different machine and/or location. The collected metrics are
- displayed in the Browser.
+ displayed in the Browser.
When an event arrives from the event supplier, a consumer inspects
the data field in the event. The field is a CORBA::Any, so
@@ -80,7 +80,7 @@ Requirements:
make realclean - updates idl files, does a make clean
make vb - makes the browser, using visibroker for java
make vbjava - remakes only the java files using vbjc
- make jdk - makes the browser, using jdk
+ make jdk - makes the browser, using jdk
(the demo does not currently work with jdk)
Parts of the Demo:
@@ -107,7 +107,7 @@ Files:
NavWeap.idl - IDL definition of the Weapons and Navigation structures
README - this readme file
- Event Supplier:
+ Event Supplier:
(in directory $TAO_ROOT/orbsvcs/tests/Simulator/Event_Supplier/)
DualEC_Sup.cpp - Dual EC Event Supplier
DualEC_Sup.h - Dual EC Event Supplier class definitions
@@ -119,31 +119,31 @@ Files:
(Event_Con.cpp, Event_Con.h - Event Consumer for testing)
svc.conf - helper file
- DOVEBrowswer:
+ DOVEBrowswer:
(in directory $TAO_ROOT/orbsvcs/tests/Simulator/DOVEBrowser/)
AnswerEvent.java - Having my own Events
AnswerListener.java - Listener for these Events
DataHandler.java - Base class for all Data Handlers
- DemoCore.java - Core of the Demo to connect Observables
+ DemoCore.java - Core of the Demo to connect Observables
with Observers
DemoObservable.java - Base class for Observables
- DoubleVisComp.java - Visualization Component
+ DoubleVisComp.java - Visualization Component
(will be a JavaBean) for Doubles
DOVEBrowser.java - Wrapper around DemoCore
DOVEBrowserApplet.java - Applet wrapper around DemoCore
MTQueue.java - synchronized queue for multi-threaded access
- MTDataHandlerAdapter.java - uses the Adapter and Active Object
+ MTDataHandlerAdapter.java - uses the Adapter and Active Object
patterns to provide early demuxing
of ORB upcalls onto multiple
synchronized queues managed by
data handler threads
- NS_Resolve.java - Resolving the inital reference
+ NS_Resolve.java - Resolving the inital reference
to the Naming Service
- NavWeapDataHandler.java - Specialized Data Handler for
+ NavWeapDataHandler.java - Specialized Data Handler for
Navigation and Weapons data
- NavigationVisComp.java - Visualization Component
+ NavigationVisComp.java - Visualization Component
(... JavaBean) for Navigation data
- ObservablesDialog.java - Dialog window for connecting
+ ObservablesDialog.java - Dialog window for connecting
Observables with OBservers
Properties.java - constant definitons
PushConsumer.java - Event Service Push Consumer
@@ -153,15 +153,15 @@ Files:
WeaponsVisComp.java - Visualization Component for Weapons
- DOVE MIB:
+ DOVE MIB:
(in directory $TAO_ROOT/orbsvcs/tests/Simulator/DOVEMIB/)
- DOVEMIB.[cpp,h] - Core routines, connection to the
+ DOVEMIB.[cpp,h] - Core routines, connection to the
Event Channel
Node.[cpp,h] - Nodes used by the AnyAnalyser
- AnyAnalyser.[cpp,h] - Anaylser for CORBA anys, storing the
+ AnyAnalyser.[cpp,h] - Anaylser for CORBA anys, storing the
contained types in persistent storage
NodeVisitor.h - base class definition of a Visitor
- PrintVisistor.[cpp,h] - Able to print a given tree
+ PrintVisistor.[cpp,h] - Able to print a given tree
of type nodes, which is
generated by the Any analyser
@@ -191,11 +191,11 @@ Compiling:
NT:
Open the Event_Sup.dsw workspace found in the
- Event_Supplier directory in MSVC++ 5.0+ and build
+ Event_Supplier directory in MSVC++ 5.0+ and build
the Event_Sup, Logging_Sup, and DualEC_Sup projects.
- From a console window, change to the DOVEBrowser
- directory and run "make vb" to build the browser using
+ From a console window, change to the DOVEBrowser
+ directory and run "make vb" to build the browser using
Visibroker. The first time you do this, you will need to
run "make setup" to copy the correct files into the directory.
You may also want to do a "make realclean" each time you
@@ -232,7 +232,7 @@ Starting:
DOVE Browser:
vbj DOVEBrowser
- (also supported: vbj DOVEBrowser -nameserviceior <IOR>
+ (also supported: vbj DOVEBrowser -nameserviceior <IOR>
vbj DOVEBrowser -nameserviceport <port>
vbj DOVEBrowser -dualECdemo)
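
The consumer-side handling of the event's CORBA::Any data field, mentioned
above, can be sketched with a plain double instead of the Navigation/Weapons
structures from NavWeap.idl; the catch-all tao/corba.h header is assumed
(newer TAO releases may also need the AnyTypeCode library):

    // Minimal sketch of inserting into and extracting from a CORBA::Any,
    // as the event consumer does with the event's data field. The demo
    // itself extracts the IDL-generated structures rather than a double.
    #include "tao/corba.h"
    #include <iostream>

    int main (int argc, char *argv[])
    {
      CORBA::ORB_var orb = CORBA::ORB_init (argc, argv);

      CORBA::Any data;
      data <<= CORBA::Double (98.6);   // supplier side: wrap the payload

      CORBA::Double value = 0.0;
      if (data >>= value)              // consumer side: extraction checks the type
        std::cout << "payload: " << value << std::endl;

      orb->destroy ();
      return 0;
    }
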
diff --git a/TAO/examples/TypeCode_Creation/README b/TAO/examples/TypeCode_Creation/README
index 04cd9437900..ec5cc1926c4 100644
--- a/TAO/examples/TypeCode_Creation/README
+++ b/TAO/examples/TypeCode_Creation/README
@@ -1,20 +1,20 @@
$Id$
-Normally, a typecode is created at compile time by the
-IDL compiler for each declaration in an IDL file, or at
-runtime by the Interface Repository. However, in some
-situations, such as a bridge between ORBs, a typecode may
+Normally, a typecode is created at compile time by the
+IDL compiler for each declaration in an IDL file, or at
+runtime by the Interface Repository. However, in some
+situations, such as a bridge between ORBs, a typecode may
have to be created without any knowledge of the IDL and outside
any Interface Repository. In such cases, the typecode
creation methods of the pseudo-object CORBA::ORB are used.
-This program is a simple example of the use of the
+This program is a simple example of the use of the
CORBA::ORB::create_*_tc methods. It does not require any
queries to the Interface Repository (although for a more
elaborate example, this may be necessary). It does, however,
use IFR data types, so the program must be linked to the
IFR_Client library. For examples of queries on the Interface
-Repository, see the tests in
+Repository, see the tests in
ACE_ROOT/TAO/orbsvcs/tests/InterfaceRepo/IFR_Test and
ACE_ROOT/TAO/orbsvcs/tests/InterfaceRepo/Application_Test.
@@ -22,7 +22,7 @@ These typecode creation methods make use of the functions
of the same name in TypeCodeFactory (proposed, but not yet
officially part of the CORBA spec). The TAO_TypeCodeFactory
library is found in ACE_ROOT/TAO/tao/TypeCodeFactory, and
-is used by the Interface Repository as well as by
+is used by the Interface Repository as well as by
CORBA::ORB::create_*_tc to create typecodes. This library
may be compiled, but, to keep dependencies and footprint
to a minimum, it is not linked and loaded automatically.
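
A minimal sketch of the runtime TypeCode creation described above, using
create_alias_tc with an invented repository id and name; as the README notes,
the TAO_TypeCodeFactory library must be available for the create_*_tc calls
to work, and the exact headers vary by TAO version:

    // Minimal sketch of CORBA::ORB::create_*_tc usage: build a TypeCode for
    //   typedef double Money;
    // at run time, without the IDL compiler or an Interface Repository.
    // Repository id and name are invented for the example.
    #include "tao/corba.h"
    #include <iostream>

    int main (int argc, char *argv[])
    {
      CORBA::ORB_var orb = CORBA::ORB_init (argc, argv);

      CORBA::TypeCode_var tc =
        orb->create_alias_tc ("IDL:Example/Money:1.0",  // repository id
                              "Money",                   // type name
                              CORBA::_tc_double);        // aliased type

      std::cout << "created " << tc->id () << " (" << tc->name () << ")"
                << std::endl;

      orb->destroy ();
      return 0;
    }
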
diff --git a/TAO/examples/ior_corbaloc/README b/TAO/examples/ior_corbaloc/README
index a2214a052ae..2474569b1d8 100644
--- a/TAO/examples/ior_corbaloc/README
+++ b/TAO/examples/ior_corbaloc/README
@@ -1,6 +1,6 @@
-// $Id$
+$Id$
-This is test to exercise the corbaloc: and corbaname: style URL.
+This is test to exercise the corbaloc: and corbaname: style URL.
The simple way to test is to run the run_test.pl.
The corbaloc: and corbaname: URL syntax is documented in Chapter
@@ -23,12 +23,12 @@ To test manually:
3. The client takes in a corbaloc style URL for the CORBA
- Naming Service on the command-line. The client will
+ Naming Service on the command-line. The client will
first try to resolve the Naming Service using this corbaloc URL.
- The client will then try to resolve a corbaloc::Status object
+ The client will then try to resolve a corbaloc::Status object
(see corbaloc.idl) named "STATUS".
The client will then try to invoke the print_status() method
- on this object.
+ on this object.
Run the client as one of these:
@@ -76,6 +76,6 @@ To test manually:
c) ./corbaname_client corbaname:rir:#STATUS
- - Determine where the Naming Service is by using
+ - Determine where the Naming Service is by using
orb->resolve_initial_references("NameService")
- Resolve an object reference of name "STATUS"
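
For reference, the resolution steps the client performs can be sketched
directly against the ORB and Naming Service APIs; the host and port in the
corbaloc URL are placeholders, and the "STATUS" binding follows the README:

    // Minimal sketch of resolving the Naming Service through a corbaloc URL
    // and then looking up the object bound as "STATUS", roughly what the
    // test client does. Host and port are placeholders.
    #include "tao/corba.h"
    #include "orbsvcs/CosNamingC.h"
    #include <iostream>

    int main (int argc, char *argv[])
    {
      CORBA::ORB_var orb = CORBA::ORB_init (argc, argv);

      CORBA::Object_var obj =
        orb->string_to_object ("corbaloc:iiop:localhost:2809/NameService");
      CosNaming::NamingContext_var root =
        CosNaming::NamingContext::_narrow (obj.in ());

      CosNaming::Name name;
      name.length (1);
      name[0].id = CORBA::string_dup ("STATUS");
      name[0].kind = CORBA::string_dup ("");

      CORBA::Object_var status = root->resolve (name);
      CORBA::String_var ior = orb->object_to_string (status.in ());
      std::cout << "STATUS resolved to " << ior.in () << std::endl;

      orb->destroy ();
      return 0;
    }
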
diff --git a/TAO/examples/mfc/README b/TAO/examples/mfc/README
index 73dd767c5d4..a2ce4f60a29 100644
--- a/TAO/examples/mfc/README
+++ b/TAO/examples/mfc/README
@@ -1,3 +1,5 @@
+$Id$
+
This is an short example to show how to integrate TAO and MFC base GUI
applications. The server is an MFC-based GUI application, which
spawns an additional thread to invoke the ORBs event queue. The
@@ -9,13 +11,13 @@ and TAO by adding an additional thread for the ORB:
Step 1: Creating a MFC-Application wizard-based project
Step 2: Set the following project settings
-
+
- C++ Settings / Preprocessor
-
+
ACE_HAS_DLL=1, ACE_HAS_MFC=1
-
+
- Use the MFC-based librarys of ACE & TAO
-
+
e.g. link acemfcd.lib TAOmfcd.lib for the Debug-version!
Step 3: Add a threadfunction for the ORB
@@ -26,15 +28,15 @@ Step 3: Add a threadfunction for the ORB
the necessary stuff to start an ORB!
Step 4: Add the thread invocation in the Application
-
+
- Initialize ACE
-
+
- Spawn the thread for the ORB
-
+
At first you have to initialize ACE by calling
-
+
ACE::init()
-
+
as soon as possible in your application. Good places are in
the constructor or in the InitInstance() memberfunction of the
application-calls. In addition you have to spawn the thread
@@ -42,15 +44,15 @@ Step 4: Add the thread invocation in the Application
(spawn_my_orb_thread);
Step 5: Overwrite the default destructor of the Application-Class
-
+
- Get a reference to the ORB use in the thread
-
+
- Shut down the ORB
-
+
- Wait for the shutdown of the ORB-thread
-
+
- Call ACE::fini() to close the ACE::init()-call
-
+
To shut down the ORB in it's separate thread you need to call
the ORB::shutdown() method of the ORB references in the
thread. To get an reference to this special ORB create an
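
A rough, non-MFC-specific sketch of steps 3 through 5 (the ORB thread
function, ACE initialization, and shutdown): the function name echoes the
README's spawn_my_orb_thread, but everything else, including the use of
ACE_Thread_Manager rather than an MFC thread, is illustrative.

    // Sketch of the ORB thread and its shutdown path; not the mfc example's
    // actual code. An MFC application would call start_orb() from
    // InitInstance() and stop_orb() from the application destructor.
    #include "tao/corba.h"
    #include "ace/ACE.h"
    #include "ace/Thread_Manager.h"

    static CORBA::ORB_var the_orb;  // shared so the GUI thread can shut it down

    static ACE_THR_FUNC_RETURN spawn_my_orb_thread (void *)
    {
      int argc = 1;
      char *args[] = { const_cast<char *> ("mfc_server") };
      char **argv = args;

      the_orb = CORBA::ORB_init (argc, argv);
      the_orb->run ();       // blocks here until shutdown() is called
      the_orb->destroy ();
      return 0;
    }

    void start_orb ()
    {
      ACE::init ();          // step 4: initialize ACE as early as possible
      ACE_Thread_Manager::instance ()->spawn (spawn_my_orb_thread);
    }

    void stop_orb ()
    {
      the_orb->shutdown (1);                     // step 5: stop the ORB thread
      ACE_Thread_Manager::instance ()->wait ();  // wait for it to exit
      ACE::fini ();                              // balance the ACE::init() call
    }
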