To bind a name is to create a name binding in a given context. Querying a value using a name determines the value associated with the name in a given context. Note that a name is always bound relative to a context. Thus, there are no absolute names.
The following are the key classes in the ACE Naming Service:
The Naming_Context class is the main ``entry point'' into the Naming Service. It is used both by client processes and by server processes. It manages access to the appropriate Name/Binding database (that is, the file where name bindings are stored) and it also manages the communication between a client process and the server (using class Name_Proxy, which is a private member of Naming_Context). If a client process runs on the same host as the server, no IPC is necessary because the Naming_Context uses shared memory.
In its handle_input() routine, the Name_Acceptor allocates a new instance of class Name_Handler on the heap and accepts the connection into this Name_Handler.
The class Name_Handler represents the server side of communication between client and server. It interprets incoming requests to the Net_Local namespace and delegates the requests to its own Naming_Context (which is the Net_Local namespace on the current host). For communication it uses the helper classes Name_Request and Name_Reply.
The ACE Naming Service uses the ACE_WString string class since it must handle wide-character strings in order to support internationalization.
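As an illustration of the binding and resolution operations described above, here is a minimal client sketch using ACE_Naming_Context; the char*-based bind()/resolve() convenience overloads and the error-handling style are assumptions about the API, so treat this as a sketch rather than a definitive example.

#include "ace/Naming_Context.h"
#include "ace/Log_Msg.h"

int
main (int, char *[])
{
  ACE_Naming_Context context;

  // Open the naming context in the process-local scope; the bindings
  // live in a database private to this process.
  if (context.open (ACE_Naming_Context::PROC_LOCAL) == -1)
    ACE_ERROR_RETURN ((LM_ERROR, "open failed\n"), 1);

  // Create a name/value binding.
  if (context.bind ("time_server", "tango.cs.wustl.edu") == -1)
    ACE_ERROR_RETURN ((LM_ERROR, "bind failed\n"), 1);

  // Query the value (and optional type) associated with the name.
  char *value = 0;
  char *type = 0;
  if (context.resolve ("time_server", value, type) == -1)
    ACE_ERROR_RETURN ((LM_ERROR, "resolve failed\n"), 1);

  ACE_DEBUG ((LM_DEBUG, "time_server is bound to %s\n", value));

  // The resolved strings are assumed to be heap-allocated copies.
  delete [] value;
  delete [] type;

  return 0;
}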
Configuring a Name_Server server or client requires specifying all or some of the following parameters. These parameters can be passed to main on the command line as follows:
Option | Description | Default value |
-c <naming context> | Naming context to use. Can be "PROC_LOCAL", "NODE_LOCAL", or "NET_LOCAL" | PROC_LOCAL |
-h <hostname> | Specify the server hostname (needed by Name Server clients for the NET_LOCAL naming context) | ACE_DEFAULT_SERVER_HOST |
-p <nameserver port> | Port number where the server process expects requests | ACE_DEFAULT_SERVER_PORT |
-l <namespace dir> | Directory that holds the NameBinding databases | ACE_DEFAULT_NAMESPACE_DIR |
-P <process name> | Name of the client process | argv[0] |
-s <database name> | Name of the database. NameBindings for the appropriate naming context are stored in the file <namespace_dir>/<database name>. | null |
-d <debug> | Turn debugging on/off | 0 (off) |
-T <trace> | Turn tracing on/off | 0 (off) |
-v <verbose> | Turn verbose mode on/off | 0 (off) |
Here is an example svc.conf entry that dynamically loads the Name Server, specifying the port number to listen on, the naming context, the namespace directory, and the database name:
dynamic Naming_Service Service_Object *
../lib/netsvcs:_make_ACE_Name_Acceptor()
"-p 20222 -c NET_LOCAL -l /tmp -s MYDATABASE"
Here is an example svc.conf entry that dynamically loads a Name Server client test, specifying the host name and port number of the Name Server:
dynamic Naming_Service_Client Service_Object *
../lib/netsvcs:_make_Client_Test()
"-h tango.cs.wustl.edu -p 20222"
The following are the key classes in the ACE Time Service:
TS_Server_Handler represents the server side of communication between
clerk and server. It interprets incoming requests for time updates,
gets the system time, creates a reply in response to the request and
then sends the reply to the clerk from which it received the request.
For communication it uses the helper class Time_Request.
In its handle_input() routine, TS_Server_Acceptor allocates a new instance
of class TS_Server_Handler on the heap and accepts the connection into this
TS_Server_Handler.
TS_Clerk_Handler represents the clerk side of communication between
clerk and server. It generates requests for time updates every timeout
period and then sends these requests to all the servers it is
connected to asynchronously. It receives the replies to these requests
from the servers through its handle_input method and then adjusts the
time using the roundtrip estimate. It caches this time, which is
subsequently retrieved by TS_Clerk_Processor.
TS_Clerk_Processor creates a new instance of TS_Clerk_Handler for
every server connection it needs to create. It periodically calls
send_request() of every TS_Clerk_Handler to send a request for time
update to all the servers. In the process, it retrieves the latest
time cached by each TS_Clerk_Handler and then uses it to compute its
notion of the local system time.
Currently, updating the system time involves taking the average of all
the times received from the servers.
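The following simplified sketch illustrates that idea (it is not the actual TS_Clerk code): each server sample is first adjusted by half its measured roundtrip delay, and the adjusted samples are then averaged to form the clerk's notion of the system time. The Time_Sample struct and average_time() helper are purely illustrative.

#include <cstddef>
#include <vector>

// One sample obtained from a time server: the time it reported and the
// measured roundtrip delay of the request (seconds, for simplicity; the
// real service works with ACE_Time_Value).
struct Time_Sample
{
  double server_time;
  double roundtrip;
};

// Illustrative helper: average the samples after compensating each one
// by half its roundtrip delay, approximating the instant the reply
// arrived at the clerk.
double
average_time (const std::vector<Time_Sample> &samples)
{
  if (samples.empty ())
    return 0.0;                       // no servers responded

  double sum = 0.0;
  for (std::size_t i = 0; i < samples.size (); ++i)
    sum += samples[i].server_time + samples[i].roundtrip / 2.0;

  return sum / samples.size ();
}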
Configuring a server requires specifying the port number of the
server. This can be specified as a command line argument as follows:
-p <port number>
A clerk communicates with one or more server processes. To communicate
with a server process, the clerk needs to know the ACE_INET_Addr at which
the server offers its service. The configuration parameters, namely the
server port and server host, are passed as command-line arguments when
starting up the clerk service as follows:
-h <server host1>:<server port1> -h <server host2>:<server port2> ...
Note that multiple servers can be specified in this manner for the
clerk to connect to when it starts up. Each server name and port
number are joined with a ":". In addition, the timeout value can be
specified as a command line argument as follows:
-t timeout
The timeout value specifies the time interval at which the clerk
should query the servers for time updates.
By default, a Clerk performs a non-blocking connect to a server. This can
be overridden with the -b flag, which makes the Clerk perform a blocking
connect.
ACE_Local_Mutex is a more general-purpose synchronization mechanism
than SunOS 5.x mutexes. For example, it implements "recursive
mutex" semantics, where a thread that owns the token can
reacquire it without deadlocking. In addition, threads that
are blocked awaiting the token are serviced in strict FIFO
order as other threads release the token (SunOS 5.x mutexes
don't strictly enforce an acquisition order). Lastly,
ACE_Local_Mutex performs deadlock detection on acquire
calls.
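Here is a minimal sketch of the recursive-acquisition behavior described above; the single-argument ACE_Local_Mutex constructor (the token name) and the default acquire()/release() arguments are assumed, so check them against your version of ace/Local_Tokens.h.

#include "ace/Local_Tokens.h"

void
worker (void)
{
  // The token name identifies the mutex within this process.
  ACE_Local_Mutex mutex ("my_token");

  if (mutex.acquire () == -1)
    return;                // e.g., deadlock detected on acquire

  // Re-acquire the token we already own -- recursive-mutex semantics
  // mean this call does not deadlock.
  mutex.acquire ();

  // ... critical section ...

  // Each acquire must be matched by a release; only after the last
  // release is the token handed to the next waiter in FIFO order.
  mutex.release ();
  mutex.release ();
}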
This is the remote equivalent to ACE_Local_Mutex. The
ACE_Remote_Mutex class offers methods for acquiring, renewing, and
releasing a distributed synchronization mutex. Like ACE_Local_Mutex,
ACE_Remote_Mutex (via its ACE_Remote_Token_Proxy base class) offers
recursive acquisition, FIFO waiter ordering, and deadlock detection. It
depends on the Token Server for its distributed synchronization
semantics.
ACE_Local_RLock implements the reader interface to canonical
readers/writer locks. Multiple readers can hold the lock
simultaneously when no writers have the lock. Alternatively,
when a writer holds the lock, no other participants (readers or
writers) may hold the lock. This class is a more
general-purpose synchronization mechanism than SunOS 5.x
RLocks. For example, it implements "recursive RLock"
semantics, where a thread that owns the token can reacquire it
without deadlocking. In addition, threads that are blocked
awaiting the token are serviced in strict FIFO order as other
threads release the token (SunOS 5.x RLocks don't strictly
enforce an acquisition order).
ACE_Local_WLock implements the writer interface to canonical
readers/writer locks. Multiple readers can hold the lock
simultaneously when no writers have the lock. Alternatively,
when a writer holds the lock, no other participants (readers or
writers) may hold the lock. This class is a more
general-purpose synchronization mechanism than SunOS 5.x WLock.
For example, it implements "recursive WLock" semantics, where a
thread that owns the token can reacquire it without
deadlocking. In addition, threads that are blocked awaiting
the token are serviced in strict FIFO order as other threads
release the token (SunOS 5.x WLocks don't strictly enforce an
acquisition order).
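The sketch below shows how the reader and writer interfaces just described are typically paired on the same token name; it assumes ACE_Local_RLock and ACE_Local_WLock take the token name in their constructors the same way ACE_Local_Mutex does.

#include "ace/Local_Tokens.h"

void
reader (void)
{
  // Readers that share the token name "config" may hold the lock
  // concurrently, as long as no writer holds it.
  ACE_Local_RLock rlock ("config");

  if (rlock.acquire () != -1)
    {
      // ... read the shared state ...
      rlock.release ();
    }
}

void
writer (void)
{
  // A writer on "config" excludes all other readers and writers.
  ACE_Local_WLock wlock ("config");

  if (wlock.acquire () != -1)
    {
      // ... modify the shared state ...
      wlock.release ();
    }
}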
This is the remote equivalent to ACE_Local_RLock. Multiple
readers can hold the lock simultaneously when no writers have
the lock. Alternatively, when a writer holds the lock, no
other participants (readers or writers) may hold the lock.
ACE_Remote_RLock depends on the ACE Token Server for its
distributed synchronization semantics.
This is the remote equivalent to ACE_Local_WLock.
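A hedged sketch of the remote proxies in use follows; the static set_server_address() call that points the proxies at the Token Server, and the constructor argument, are recollections of typical usage rather than verified signatures, so confirm them against ace/Remote_Tokens.h.

#include "ace/Remote_Tokens.h"
#include "ace/INET_Addr.h"

void
remote_client (void)
{
  // Tell the remote token proxies where the Token Server runs
  // (host name and port number are illustrative values).
  ACE_Remote_Mutex::set_server_address (ACE_INET_Addr (10202, "tango.cs.wustl.edu"));

  // The proxy mirrors the ACE_Local_Mutex interface: acquire, renew, release.
  ACE_Remote_Mutex mutex ("global_token");

  if (mutex.acquire () == -1)
    return;

  // ... critical section coordinated across hosts ...

  mutex.release ();
}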
The Token_Acceptor is a Token_Handler factory. It accepts
connections and passes the service responsibilities off to a
new Token_Handler.
The Token_Handler class is the main ``entry point'' of the ACE Token
service. It receives token operation requests from remote clients and turns
them into calls on local tokens (acquire, release, renew, and
remove). In OMG CORBA terminology, it is an ``Object Adapter.'' It also
schedules and handles timeouts that are used to support "timed
waits." Clients use timed waits to bound the amount of time
they block trying to get a token.
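On the client side, such a timed wait is typically expressed with ACE_Synch_Options, as in the sketch below; the three-argument acquire() signature (notify flag, sleep hook, options) is assumed from the Token library's proxy interface and should be verified for your ACE version.

#include "ace/Local_Tokens.h"
#include "ace/Synch_Options.h"
#include "ace/Time_Value.h"

int
timed_acquire (ACE_Local_Mutex &mutex)
{
  // Wait at most 5 seconds for the token; if the timeout expires the
  // acquire call fails instead of blocking indefinitely.
  ACE_Synch_Options options (ACE_Synch_Options::USE_TIMEOUT,
                             ACE_Time_Value (5));

  return mutex.acquire (0,   // no notification of the current holder
                        0,   // no sleep hook
                        options);
}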
The only parameter that the Token Server takes is a listen port
number. You can specify the port number by passing a "-p <port number>" option.
An example svc.conf entry that dynamically loads the Token Server and
specifies the port number it listens on for client connections is shown
under the Token Service overview below.
The following are the key classes in the Server Logging Service:
The Server_Logging_Handler class is a parameterized type that is
responsible for processing logging records sent to the Server from
participating client hosts. When logging records arrive from the
client host associated with a particular Logging Handler object, the
handle_input() method of the Server_Logging_Handler class is called,
which in turn formats and displays the records on one or more output
devices (such as printers, persistent storage, and/or console
devices).
In its handle_input() routine, the class Server_Logging_Acceptor
allocates a new instance of class Server_Logging_Handler on the heap
and accepts the connection into this Server_Logging_Handler.
The only parameter that the Logging Server takes is a listen
port number. You can specify the port number by passing a "-p <port number>" option.
An example svc.conf entry that dynamically loads the Logging Server and
specifies the port number it listens on for client connections is shown
under the Server Logging Service overview below.
The Client_Logging_Handler class is a parameterized type that is
responsible for setting up a named pipe and using it to communicate
with different user processes on the same host. Once logging records
arrive from these processes, the handler reads these records in
priority order, performs network byte-order conversions on the
multi-byte header fields, and then transmits these records to the Server
Logging daemon across the network.
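For context, an application typically reaches the Client Logging daemon by opening its ACE_Log_Msg instance with the LOGGER flag and the shared rendezvous key, as in the sketch below; ACE_DEFAULT_LOGGER_KEY is used here as a plausible key value and may differ from the rendezvous key your configuration uses.

#include "ace/Log_Msg.h"

int
main (int, char *argv[])
{
  // Direct this process's logging output to the local Client Logging
  // daemon via the shared rendezvous point (a named pipe on this host).
  if (ACE_LOG_MSG->open (argv[0],
                         ACE_Log_Msg::LOGGER,
                         ACE_DEFAULT_LOGGER_KEY) == -1)
    return 1;

  // This record is read by the Client Logging daemon, which forwards
  // it to the Server Logging daemon across the network.
  ACE_DEBUG ((LM_DEBUG, "hello from %s\n", argv[0]));

  return 0;
}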
The class Client_Logging_Connector connects to the Server Logging
daemon and then, in its handle_input() routine, allocates a new
instance of the Client_Logging_Handler on the heap.
Configuring a Logging Client requires specifying all or some of the
parameters listed in the Client Logging Service overview below. These
parameters can be passed to main on the command line.
An example svc.conf entry that dynamically loads the Logging Client and
specifies the host name and port number of the Logging Server is also
shown under that overview.
The following describes how to configure the Logging Strategy
Service:
Here are the command line arguments that can be given to the Logging
Strategy Service:
-f <flag1>|<flag2>|<flag3> (etc.)
where a flag can be any of those listed in the Logging Strategy Service
overview below.
Note: If more than one flag is specified, the flags need to be OR'ed
together as the syntax above shows. Make sure there is no space between
the flags and the '|'.
-s <filename>
If the OSTREAM flag is set, this option specifies the filename to which
the output should be directed. Note that if the OSTREAM flag is set and
no filename is specified, the output will be written to
ACE_DEFAULT_LOGFILE.
Here are two example parameter strings for the Logging Strategy Service;
the first sends the output to STDERR, and the second sends it both to
STDERR and to the file mylog (a complete svc.conf entry appears under the
Logging Strategy Service overview below):
"-f STDERR"
"-f STDERR|OSTREAM -s mylog"
Overview of Time Service
The Time Service provides accurate, fault-tolerant clock synchronization
for computers collaborating in local area networks and wide area
networks. Synchronized time services are important in distributed
systems that require multiple hosts to maintain accurate global
time. The architecture of the distributed Time Service is composed of
Time Server, Clerk, and Client components, whose key classes are
described earlier in this document.
The following describes how to configure the Time Service's server and
clerk components:
Note: here are example svc.conf entries for the Time Server and the Time Clerk:
dynamic Time_Service Service_Object *
../lib/netsvcs:_make_ACE_TS_Server_Acceptor()
"-p 20202"
dynamic Time_Server_test Service_Object *
../lib/netsvcs:_make_ACE_TS_Clerk_Connector()
"-h tango:20202 -h lambada:20202 -t 4"
Overview of Token Service
The ACE Token Service provides local and remote mutexes and
readers/writer locks. For information regarding the deadlock
detection algorithm, check out ACE_Token_Manager.h. For information
about an implementation of the Composite Pattern for Tokens, check out
Token_Collection.h. The classes that implement the local and remote
synchronization primitives are described earlier in this document.
The Token Server provides distributed mutex and readers/writer lock
semantics to the ACE Token library. ACE_Remote_Mutex,
ACE_Remote_RLock, and ACE_Remote_WLock are proxies to the Token
Server. The key classes in the ACE Token Server (Token_Acceptor and
Token_Handler) are described earlier in this document.
The following describes how to configure the Token Server:
Note: here is an example svc.conf entry for the Token Server:
dynamic Token_Service Service_Object *
../lib/netsvcs:_make_ACE_Token_Acceptor()
"-p 10202"
Overview of Server Logging Service
The Server Logging Service provides a concurrent, multi-service daemon
that processes logging records received from one or more client hosts
simultaneously. The object-oriented design of the Server Logging
Service is decomposed into several modular components that perform
well-defined tasks.
The following describes how to configure the Logging Server:
Note: here is an example svc.conf entry for the Logging Server:
dynamic Server_Logging_Service Service_Object *
../lib/netsvcs:_make_ACE_Server_Logging_Acceptor()
"-p 10202"
Overview of Client Logging Service
The Client Logging Service multiplexes messages received from
different applications to the Server Logging Daemon running on a
designated host in a network/internetwork. The key classes in the
Client Logging Service (Client_Logging_Handler and
Client_Logging_Connector) are described earlier in this document.
The following describes how to configure the Logging Client:
Note: the following table lists the Logging Client's configuration options, followed by an example svc.conf entry:
Option | Description | Default value |
-h <hostname> | Hostname of the Server Logging Daemon | ACE_DEFAULT_SERVER_HOST |
-p <port number> | Port number of the Server Logging Daemon | ACE_DEFAULT_LOGGING_SERVER_PORT |
-k <rendezvous key> | Rendezvous key used to create the named pipe | ACE_DEFAULT_RENDEZVOUS |
dynamic Client_Logging_Service Service_Object *
../lib/netsvcs:_make_ACE_Client_Logging_Connector()
"-h tango.cs.wustl.edu -p 10202"
Overview of Logging Strategy Service
The Logging Strategy Service can be used to control the output of all the
network services. It can be invoked with certain flags that determine
where the output of all the services should go. The Logging Strategy
Service sets the flags in ACE_Log_Msg, which controls the output produced
through macros such as ACE_DEBUG, ACE_ERROR, and ACE_ERROR_RETURN. If
default behavior is required, the Logging Strategy Service need not be
invoked or it can be invoked with no parameters.
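To make the effect concrete, the sketch below shows the kind of application-level calls whose output the service redirects; the set_flags()/clr_flags() calls at the end illustrate the flag manipulations on the process-wide ACE_Log_Msg singleton that the service performs on the application's behalf.

#include "ace/Log_Msg.h"

void
emit_messages (void)
{
  // These macros all route through the process-wide ACE_Log_Msg
  // singleton, so the flags set by the Logging Strategy Service
  // (STDERR, OSTREAM, LOGGER, ...) determine where the text goes.
  ACE_DEBUG ((LM_DEBUG, "a debug message\n"));
  ACE_ERROR ((LM_ERROR, "an error message\n"));

  // The service effectively performs flag manipulations such as:
  ACE_LOG_MSG->set_flags (ACE_Log_Msg::STDERR);
  ACE_LOG_MSG->clr_flags (ACE_Log_Msg::SILENT);
}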
Flags | Description |
STDERR | Write messages to stderr. |
LOGGER | Write messages to the local client logger daemon. |
OSTREAM | Write messages to the ostream that gets created by specifying a filename (see below). |
VERBOSE | Display messages in a verbose manner. |
SILENT | Do not print messages at all. |
dynamic Logging_Strategy_Service Service_Object *
../lib/netsvcs:_make_ACE_Logging_Strategy()
"-f STDERR"