<!doctype html public "-//w3c//dtd html 4.0 transitional//en">
<!-- $Id$ -->
<html>
<head>
  <meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1">
  <meta name="Author" content="Venkita Subramonian">
  <meta name="GENERATOR" content="Mozilla/4.79 [en] (Windows NT 5.0; U) [Netscape]">
  <title>RTCORBA 1.0 Scheduling Service</title>
</head>
<body>

<center>
<h2>
RTCORBA 1.0 Scheduling Service</h2></center>

<p><br>Matt Murphy &lt;murphym@cs.uri.edu&gt;
<br>University of Rhode Island
<p>This is an implementation of the RTCORBA 1.0 Scheduling Service. Per
section 3 of the RTCORBA 1.0 specification (OMG), the Scheduling Service
comprises two local interfaces, a ClientScheduler and a ServerScheduler.
<br>
<h3>
Build Issues:</h3>
Run tao_idl -I $TAO_ROOT/ RTCosScheduling.pidl.
Then run make -f Makefile.RTCosScheduling from ../
<br>
<h3>
Synopsis:</h3>
The RTCosScheduler allows clients to schedule tasks according to scheduling
information determined a priori. This scheduling information is stored
in a config file so that both the client and the server have access to it.
(If the client and server exist on different nodes, place a copy of
the config file on each node.)
<p>Per the RTCORBA 1.0 spec, clients use a ClientScheduler object and servers
use a ServerScheduler object to schedule activities on the system.
Since each may or may not use its scheduler, there are four possible scenarios
in which the system may run. These are:
<p>1. Client uses ClientScheduler, server uses ServerScheduler. In
this case the system follows the rules set forth in the "Scheduling Service"
section of this document below.
<p>2. Client uses ClientScheduler, server does not use ServerScheduler.
In this case activities are scheduled on the client and, while executing on
the client, run at the mapped real-time priority set forth in the config
file. However, activity on the server does not run at a
real-time priority, which means that the Multiprocessor Priority Ceiling
Protocol (MPCP) does not manage activities on the server. Currently, the
client has no way of knowing that activity on the server did not follow
the MPCP protocol. Future enhancements to the RTCORBA 1.0 Scheduling
Service should notify the client (perhaps through a flag to a client interceptor)
that the server did not use MPCP. Please note that this
scenario is generally not recommended: there is a strong possibility
of priority inversion or unexpected blocking, since
any server activity that uses the ServerScheduler will run
at a higher priority than server activity that does not. In this scenario
the server's priority drops from RTCORBA::maxPriority
to RTCORBA::minPriority and requests execute on a best-effort
basis. Use scenario 1 above instead.
<p>3. Client does not use ClientScheduler, server uses ServerScheduler.
In this case the client does not use the priorities set forth in the config
file. The ServerScheduler, on the other hand, does use MPCP to schedule
execution on the server. It uses the priority sent to the server
by the client, which is the default priority that the
client ran at (since the client priority was not changed
by schedule_activity()). This follows the ServerScheduler scenario
set forth below. It is recommended that you use
scenario 1, above, instead, so that the client sends appropriate priorities
to the server.
<p>4. Client does not use ClientScheduler, server does not use ServerScheduler.
In this case neither the client nor the server takes advantage of
the RTCORBA 1.0 Scheduler.
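<p>In every scenario that uses a scheduler, the CORBA priorities from the
config file are mapped linearly onto native OS priorities. The sketch below
illustrates one plausible linear mapping; the function name and the use of
truncating integer arithmetic are assumptions for illustration only, not
TAO's actual priority mapping implementation.

```cpp
#include <algorithm>

// Hypothetical linear mapping from a CORBA priority (0..32767) onto a
// native OS priority range [native_min, native_max]. Integer truncation
// means that CORBA priorities close in value can collapse onto the same
// native priority when the native range is narrow.
int linear_map(int corba_priority, int native_min, int native_max)
{
    // Clamp to the valid CORBA priority range.
    corba_priority = std::max(0, std::min(corba_priority, 32767));
    return native_min
        + (corba_priority * (native_max - native_min)) / 32767;
}
```

Note how, with a native real-time range of 1 to 99 (common on real-time
Linux systems), CORBA priorities 100, 200, and 300 all truncate to native
priority 1, which is why widely spaced CORBA priorities are recommended.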
<br>
<h3>
Scheduling Service:</h3>
ClientScheduler:
<br>Clients wishing to use the ClientScheduler to schedule activities
must first create a local ClientScheduler object reference. The
ClientScheduler is declared as:
<p>RTCosScheduling_ClientScheduler_i (
<br>    CORBA::ORB_var orb,  /// ORB reference
<br>    char* node,          /// Node the client resides on
<br>    char* file);         /// Config file holding scheduling information
<br>
<p>The ClientScheduler constructor parses the config file and populates
an ACE_Map with the activity/priority associations for the node on which
the client resides. It also constructs a ClientScheduler_Interceptor
that adds to the send_request interceptor a service context containing
the priority the client is running at when the call is made.
<p>Once the ClientScheduler is initialized, calls to its schedule_activity(const
char * activity_name) method match the activity_name parameter to
the CORBA priority value in the ACE_Map. The ClientScheduler linearly maps the CORBA priority
to a local OS priority and sets the local OS priority using RT Current.
If the activity name provided is not valid (i.e., not found in the config
file), an RTCosScheduling::UnknownName exception is thrown.
<p>The ClientScheduler also registers a client-side interceptor with the
ORB. This ClientScheduler_Interceptor finds the CORBA priority
the client is running at when the remote method call is made and adds this
priority to a service context for the ServerScheduler_Interceptor to use.
Initial tests find that this interceptor adds 0.00015 seconds of execution
time on an Intel 3.0 GHz processor.
<br>
<h3>
ServerScheduler:</h3>
Servers that contain local objects that will accept CORBA calls must create
a local ServerScheduler object. The ServerScheduler uses TAO's PortableInterceptors
to intercept incoming client requests and schedule execution on the server.
These interceptors are registered with the ORB_Core as explained in the create_POA
method below.
The ServerScheduler is defined as:
<p>RTCosScheduling_ServerScheduler_i (
<br>    char *node,         /// Node the ServerScheduler resides on
<br>    char *file,         /// Config file holding scheduling information
<br>    char *shared_file,  /// File used for shared memory
<br>    int numthreads);    /// Number of threads to create in the threadpool
<p>During initialization, the ServerScheduler finds the appropriate node
information in the config file and stores the resources (key) on the node and
the appropriate priority ceilings (value) in a map. It also reads
in the base priority for the resource.
<p>The ServerScheduler constructor then registers the PortableInterceptors
necessary to schedule execution on the server. It also sets up the
linear mapping policy and a reference to the RT Current object, both of
which are used for adjusting the server's local OS priority when using
the priority ceiling control protocol.
<p>Once the ServerScheduler object is constructed, users may create an
ORB and establish any non-real-time POA policies they wish to install by
calling the ServerScheduler's create_POA method.
<p>ServerScheduler's create_POA method creates a real-time POA that will
set and enforce all non-real-time policies. This method also sets
the real-time POA to enforce the Server Declared Priority Model policy
and creates a threadpool responsible for executing calls to the server.
The Server Declared Priority Model is used so that the server threads may run
at a high enough priority to intercept requests as soon as they come in.
If Client Propagated Priority Ceilings were used, incoming requests would
not be intercepted until all existing servant execution had completed,
because MPCP elevates the priority of servant execution above
the client priorities.
<p>Recall that the number of threads in the threadpool is supplied to
the ServerScheduler constructor.
The create_POA method is defined as:
<p>virtual ::PortableServer::POA_ptr create_POA (
<br>    PortableServer::POA_ptr parent,               /// Non-RT POA parent
<br>    const char * adapter_name,                    /// Name for the POA
<br>    PortableServer::POAManager_ptr a_POAManager,  /// Manager for the POA
<br>    const CORBA::PolicyList &amp; policies            /// List of non-RT policies
<br>    ACE_ENV_ARG_DECL)
<br>    ACE_THROW_SPEC ((
<br>        CORBA::SystemException
<br>        , PortableServer::POA::AdapterAlreadyExists
<br>        , PortableServer::POA::InvalidPolicy
<br>    ));
<br>
<p>Once an RT POA has been created, schedule_object is called to store CORBA
object references (key) with a name (value) in an ACE_Map. An
RTCosScheduling::UnknownName exception is thrown if the schedule_object
name parameter is not found in the resource map (i.e., it was not in the
config file). The schedule_object method is declared as:
<p>virtual void schedule_object (
<br>    CORBA::Object_ptr obj,  /// A CORBA object reference
<br>    const char * name       /// Name to associate with obj
<br>    ACE_ENV_ARG_DECL)
<br>    ACE_THROW_SPEC ((
<br>        CORBA::SystemException
<br>        , RTCosScheduling::UnknownName
<br>    ));
<br>
<p>Once all objects that will receive client requests have been scheduled
using schedule_object, clients are free to make calls on those objects.
The Scheduling Service interceptors catch these calls and perform the necessary
priority ceiling control measures to ensure that the calls are executed
in the appropriate order. The ServerScheduler_Interceptor receive_request
method intercepts all incoming requests immediately, since it is set to run
at RTCORBA::maxPriority (the highest priority on the server OS).
It then gets the client priority sent in the service context as well as
the resource ceiling for the object and the base priority for the server.
<br>Initial tests indicate that the receive_request interceptor takes around
0.002 seconds to complete on an Intel 3.0 GHz processor.
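<p>Given the client priority, resource ceiling, and base priority just
described, the Multiprocessor Priority Ceiling Protocol arithmetic described
below can be sketched as follows. This is one reading of the priority
arithmetic in these notes; the helper names are hypothetical and are not
part of the RTCosScheduling interface.

```cpp
// One reading of the MPCP arithmetic described in the text: a servant
// executes a request at (base priority + client CORBA priority), while a
// global critical section runs at (base priority + resource ceiling).
// Both helpers are illustrative, not the service's internals.
int mpcp_execution_priority(int base_priority, int client_priority)
{
    return base_priority + client_priority;
}

int mpcp_priority_ceiling(int base_priority, int resource_ceiling)
{
    return base_priority + resource_ceiling;
}
```

With the sample config file shown below (base priority 6000, resource
ceilings of 1000 and 2000), a request from Client1 would execute at 7000 and
a critical section guarded by Server2 would have a ceiling of 8000.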
<p>Given these values, it is able to use the Multiprocessor Priority Ceiling
Protocol to schedule execution on the server to handle the request.
MPCP schedules all global critical sections at a higher priority than tasks
on the local processor by adding the client priority to the base priority
of the servant, then adding the resource ceiling of the resource to the
base priority to find the appropriate priority ceiling. For more information
about MPCP, please refer to the book "Real Time Systems" by Jane Liu (2000).
<p>Please note that the locking mechanisms are stored in shared memory on
the server. This means that the locks cannot be stored in linked
lists and are therefore manipulated using memory offsets. The total
number of locks that may be stored in shared memory is currently set at
1024.
<p>When remote execution is complete, the send_reply interceptor resets
the thread to listen at RTCORBA::maxPriority and removes the task from
the invocation list. Initial tests indicate that the send_reply interceptor takes
0.000075 seconds to complete on an Intel 3.0 GHz processor.
<br>
<h3>
Scheduling Service Config File:</h3>
The Scheduling Service config file holds the information necessary to schedule
the system. Task and resource ceiling information is stored for each
of the nodes as follows:
<p>Node 1 /// The node name is 1
<p>Resources:
<br>BP 6000 /// The base priority for the resource
<br>Server1 1000 /// A list of resources and their priority ceilings
<br>Server2 2000
<br>END /// The end of the resource list
<p>Tasks: /// A list of tasks that will execute on the node
<br>Client1 1000
<br>Client2 3000
<br>Client3 5000
<br>END /// The end of the task list
<p>Please note that these associations are tab delimited. Please
do not include comments in the Scheduling Service config file.
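<p>A standalone parser for this tab-delimited format might look like the
following sketch. The struct and function names are hypothetical; the actual
service reads the file through a private method.

```cpp
#include <istream>
#include <map>
#include <sstream>
#include <string>

// Hypothetical in-memory form of one node's scheduling information.
struct NodeSchedule {
    int base_priority = 0;                 // "BP" entry for the node
    std::map<std::string, int> resources;  // resource -> priority ceiling
    std::map<std::string, int> tasks;      // task/activity -> CORBA priority
};

// Reads the tab-delimited config format shown above, keeping only the
// entries for the requested node.
NodeSchedule parse_node_schedule(std::istream& in, const std::string& node)
{
    NodeSchedule sched;
    std::string line;
    bool in_node = false;
    enum { NONE, RESOURCES, TASKS } section = NONE;

    while (std::getline(in, line)) {
        std::istringstream fields(line);
        std::string key;
        if (!(fields >> key))
            continue;                       // skip blank lines

        if (key == "Node") {                // "Node <name>" opens a node block
            std::string name;
            fields >> name;
            in_node = (name == node);
        } else if (!in_node) {
            continue;                       // ignore entries for other nodes
        } else if (key == "Resources:") {
            section = RESOURCES;
        } else if (key == "Tasks:") {
            section = TASKS;
        } else if (key == "END") {
            section = NONE;                 // close the current list
        } else {
            int priority = 0;
            fields >> priority;
            if (key == "BP")
                sched.base_priority = priority;
            else if (section == RESOURCES)
                sched.resources[key] = priority;
            else if (section == TASKS)
                sched.tasks[key] = priority;
        }
    }
    return sched;
}
```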
The priorities associated
with each task and resource are considered to be CORBA priorities
and will be mapped to local OS-level priorities using the linear mapping
model. Per the OMG RT CORBA spec, CORBA priorities have a valid
range up to 32767, where a larger value indicates a higher priority.
The current
config file assumes that the Multiprocessor Priority Ceiling Protocol
is used.
<br>
<h3>
Known Issues:</h3>
TAO does not currently support request buffering, and there are no immediate
plans to do so. Consequently, the RT CORBA 1.0 Scheduling Service
is limited in that it will only function properly in systems that do not
require request buffering on the servant side.
<p>There is a bug in TAO in which mapped priorities are mapped a second
time when using the Client Propagated Priority Ceiling Protocol. This
in effect lowers the priority that the servant receives. This happens to
every priority, so there should be no net effect on the system.
<p>The config file assumes CORBA priorities in the range of 0 to 32767.
The linear priority mapping manager will map these to valid local OS priorities.
Take care, though, in determining the priority range in the config file,
as low numbers or numbers very close in value may produce priority inversion
and other issues. For example, if the CORBA priorities used for three
tasks are 100, 200, and 300, these will all map to OS priority 1 on some real-time
Linux systems. Please take this into
account when determining the CORBA priority range to use.
<p>The 1.0 Scheduling Service currently works with one ORB and one POA.
If someone tries to install more than one Scheduling Service (client or
server side) on a single POA, it should not add a second interceptor.
Please use a single Scheduling Service per POA. Furthermore, there
is a bug in which, when more than one ORB is created, an invalid policy exception
is thrown during the second call to create_POA.
This bug is actively
being investigated. In the meantime, please use the Scheduling Service
with one ORB.
<br>
<h3>
Future Enhancements:</h3>
ACE_XML
<br>The current RT CORBA 1.0 Scheduling Service uses a private method to
read the config file. This will soon be replaced with a new XML-based
config file that uses ACE_XML for parsing.
<p>Priority Lanes
<br>Although not currently implemented, Priority Lanes and Thread Borrowing
may increase performance, as they would help to prevent lower-priority tasks
from exhausting all threads. This is considered a possible future
enhancement.
<p>Client Interceptor
<br>A client interceptor could send a flag to notify the server interceptor
whether schedule_activity() was used to set the client priority. If schedule_activity()
was not used, then the server should probably not try to schedule server
execution using MPCP. Doing so adds competition with other method calls
from client requests that were scheduled with schedule_activity().
<br>
<h3>
References</h3>
The Object Management Group, Real Time CORBA 1.0 Specification, www.omg.org
<br>Liu, Jane, Real Time Systems, Prentice Hall, 2000
<br>
</body>
</html>