From 831c4d99ee0a47f53224f853894f1dc4671337d6 Mon Sep 17 00:00:00 2001
From: schmidt
- ACE Tutorial 001 The purpose of this tutorial is to show you how to create a very simple
server capable of handling multiple client connections. Unlike a "traditional"
server application, this one handles all requests in one process. Issues
@@ -101,5 +97,6 @@ If all of this is gibberish and makes you think that ACE is way too hard to
learn, don't worry. We'll go into all the details and explain as we go.
I only went into all of this so that it can kick around in the back of your
mind until you need it later.
+
- ACE Tutorial 001 From here, we move on to the main program loop. In a way, we're
starting at the final product when we do this, but it is a very simple
piece of code and a good place to start.
@@ -69,7 +66,7 @@ The READ_MASK is also defined in the ACE_Event_Handler class. It's
used to inform the Reactor that you want to register an event handler
to "read" data from an established connection.
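The READ_MASK registration described above can be sketched without ACE at all. The toy classes below (`Toy_Reactor`, `Event_Handler`, and its `handle_input()` hook) are hypothetical stand-ins for the real ACE API, reduced to the one idea this paragraph makes: a handler registers interest in READ events, and the reactor dispatches to it only when that interest matches.

```cpp
#include <cassert>
#include <map>
#include <utility>

// Hypothetical stand-ins for ACE_Event_Handler / ACE_Reactor -- not the
// real ACE API, just the registration-and-dispatch idea.
struct Event_Handler {
    virtual ~Event_Handler () {}
    // Invoked by the reactor when a registered event fires.
    virtual int handle_input (int handle) = 0;
};

enum Mask { READ_MASK = 1 };

class Toy_Reactor {
    // handle -> (handler, interest mask)
    std::map<int, std::pair<Event_Handler *, int> > handlers_;
public:
    int register_handler (int handle, Event_Handler *h, int mask) {
        handlers_[handle] = std::make_pair (h, mask);
        return 0;
    }
    // Simulate "data arrived on <handle>": dispatch only if the handler
    // registered interest in READ events.
    int dispatch_read (int handle) {
        std::map<int, std::pair<Event_Handler *, int> >::iterator it =
            handlers_.find (handle);
        if (it == handlers_.end () || !(it->second.second & READ_MASK))
            return -1;
        return it->second.first->handle_input (handle);
    }
};
```

A real reactor would block in select()/poll() to discover activity; here `dispatch_read()` stands in for that discovery so the registration contract is visible on its own.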
-
- ACE Tutorial 001 Now we begin to look at the acceptor object.
@@ -189,8 +186,7 @@ protected:
#endif /* _CLIENT_ACCEPTOR_H */
ACE Tutorial 001
- Now we begin to look at the logger
object.
-
The comments really should tell the story. The really
interesting stuff is in handle_input(). Everything
else is just housekeeping.
diff --git a/docs/tutorials/001/page05.html b/docs/tutorials/001/page05.html
index 53bb40efb18..8725e1f410f 100644
--- a/docs/tutorials/001/page05.html
+++ b/docs/tutorials/001/page05.html
@@ -1,21 +1,18 @@
+
ACE Tutorial 001
- This concludes the first tutorial on using ACE. We've learned how to
create a simple server without knowing very much about network programming.
diff --git a/docs/tutorials/002/page03.html b/docs/tutorials/002/page03.html
index f5af30830d0..e2243550505 100644
--- a/docs/tutorials/002/page03.html
+++ b/docs/tutorials/002/page03.html
@@ -21,7 +21,6 @@
Note that here all the Client_Handler objects aren't registered with the
reactor. The Reactor is only used to accept client connections. Once a
-thread has been deicated per connection, the Client Handler object
+thread has been dedicated per connection, the Client Handler object
responsible for that client connection now takes up the job of the
reactor and handles future events.
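The handoff described above can be sketched in plain C++: once a thread is dedicated to a connection, the reactor is out of the picture and the per-connection thread handles all further events itself. `handle_connection` and the integer "connection" ids are hypothetical placeholders, assuming only that each accepted connection gets its own worker thread.

```cpp
#include <cassert>
#include <atomic>
#include <thread>
#include <vector>

// Count how many connections were serviced; atomic because each
// connection runs in its own thread.
std::atomic<int> handled_count (0);

// In a real server this would recv()/process in a loop until the peer
// closes; here it only records that the dedicated thread ran.
void handle_connection (int /* connection_id */) {
    ++handled_count;
}

// Thread-per-connection accept loop: the "reactor" (here, the caller)
// only hands connections off -- all later events belong to the thread.
void accept_loop (const std::vector<int> &incoming) {
    std::vector<std::thread> workers;
    for (int conn : incoming)
        workers.emplace_back (handle_connection, conn);
    for (std::thread &t : workers)
        t.join ();
}
```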
diff --git a/docs/tutorials/006/page03.html b/docs/tutorials/006/page03.html
index cf164c5b4d8..08e5b70c78b 100644
--- a/docs/tutorials/006/page03.html
+++ b/docs/tutorials/006/page03.html
@@ -26,20 +26,15 @@ to the Tutorial 5 version of this file.
For a quick look back:
Now we'll move on and examine the server counter-part of our client.
diff --git a/docs/tutorials/015/page06.html b/docs/tutorials/015/page06.html
index 5fc8f8eca24..82b28a3f572 100644
--- a/docs/tutorials/015/page06.html
+++ b/docs/tutorials/015/page06.html
@@ -22,7 +22,6 @@ only be one Server instance but since you can't provide a TCP/IP port,
that's probably a valid assumption!
+
+A Beginners Guide to Using the ACE Toolkit
+
+A Beginners Guide to Using the ACE Toolkit
+
// $Id$
@@ -176,5 +173,6 @@ Enter an infinite loop to let the reactor handle the events
On the next page, we will take a look at the acceptor and how it responds
to new connection requests.
+
-
+
+A Beginners Guide to Using the ACE Toolkit
-
+
It is important to notice here that we have written very little application-specific
code in developing this object. In fact, if we take out the progress information,
the only app-specific code is when we create the new Logging_Handler
diff --git a/docs/tutorials/001/page04.html b/docs/tutorials/001/page04.html
index 1ed51dbac02..079947a3b3a 100644
--- a/docs/tutorials/001/page04.html
+++ b/docs/tutorials/001/page04.html
@@ -1,26 +1,23 @@
+
-
-
+
+A Beginners Guide to Using the ACE Toolkit
-
+
+
// $Id$
@@ -200,7 +197,9 @@ protected:
#endif /* _CLIENT_HANDLER_H */
-
+
+
+
+A Beginners Guide to Using the ACE Toolkit
-
+
-
// $Id$
#ifndef LOGGING_HANDLER_H
@@ -36,157 +35,157 @@
#include "ace/SOCK_Stream.h"
#include "ace/Reactor.h"
-/*
- Since we used the template to create the acceptor, we don't know if there is a
- way to get to the reactor it uses. We'll take the easy way out and grab the
- global pointer. (There is a way to get back to the acceptor's reactor that
- we'll see later on.)
- */
+/* Since we used the template to create the acceptor, we don't know if
+ there is a way to get to the reactor it uses. We'll take the easy
+ way out and grab the global pointer. (There is a way to get back to
+ the acceptor's reactor that we'll see later on.) */
extern ACE_Reactor * g_reactor;
-/*
- This time we're deriving from ACE_Svc_Handler instead of ACE_Event_Handler.
- The big reason for this is because it already knows how to contain a SOCK_Stream
- and provides all of the method calls needed by the reactor. The second template
- parameter is for some advanced stuff we'll do with later servers. For now, just
- use it as is...
- */
-class Logging_Handler : public ACE_Svc_Handler < ACE_SOCK_STREAM, ACE_NULL_SYNCH >
+/* This time we're deriving from ACE_Svc_Handler instead of
+ ACE_Event_Handler. The big reason for this is because it already
+ knows how to contain a SOCK_Stream and provides all of the method
+ calls needed by the reactor. The second template parameter is for
+ some advanced stuff we'll do with later servers. For now, just use
+ it as is... */
+class Logging_Handler : public ACE_Svc_Handler <ACE_SOCK_STREAM, ACE_NULL_SYNCH>
{
-
public:
- /*
- The Acceptor<> template will open() us when there is a new client connection.
- */
+ /* The Acceptor<> template will open() us when there is a new client
+ connection. */
virtual int open (void *)
{
ACE_INET_Addr addr;
- /*
- Ask the peer() (held in our baseclass) to tell us the address of the cient
- which has connected. There may be valid reasons for this to fail where we
- wouldn't want to drop the connection but I can't think of one.
- */
+ /* Ask the peer() (held in our baseclass) to tell us the address
+ of the client which has connected. There may be valid reasons
+ for this to fail where we wouldn't want to drop the connection
+ but I can't think of one. */
if (this->peer ().get_remote_addr (addr) == -1)
return -1;
- /*
- The Acceptor<> won't register us with it's reactor, so we have to do so
- ourselves. This is where we have to grab that global pointer. Notice
- that we again use the READ_MASK so that handle_input() will be called
- when the client does something.
- */
- if (g_reactor->register_handler (this, ACE_Event_Handler::READ_MASK) == -1)
- ACE_ERROR_RETURN ((LM_ERROR, "(%P|%t) can't register with reactor\n"), -1);
-
- /*
- Here's another new treat. We schedule a timer event. This particular one
- will fire in two seconds and then every three seconds after that. It doesn't
- serve any useful purpose in our application other than to show you how it
- is done.
- */
- else if (g_reactor->schedule_timer (this, 0, ACE_Time_Value (2), ACE_Time_Value (3)) == -1)
- ACE_ERROR_RETURN ((LM_ERROR, "can'(%P|%t) t register with reactor\n"), -1);
-
- ACE_DEBUG ((LM_DEBUG, "(%P|%t) connected with %s\n", addr.get_host_name() ));
+ /* The Acceptor<> won't register us with its reactor, so we have
+ to do so ourselves. This is where we have to grab that global
+ pointer. Notice that we again use the READ_MASK so that
+ handle_input() will be called when the client does something. */
+ if (g_reactor->register_handler (this,
+ ACE_Event_Handler::READ_MASK) == -1)
+ ACE_ERROR_RETURN ((LM_ERROR,
+ "(%P|%t) can't register with reactor\n"),
+ -1);
+
+ /* Here's another new treat. We schedule a timer event. This
+ particular one will fire in two seconds and then every three
+ seconds after that. It doesn't serve any useful purpose in our
+ application other than to show you how it is done. */
+ else if (g_reactor->schedule_timer (this,
+ 0,
+ ACE_Time_Value (2),
+ ACE_Time_Value (3)) == -1)
+ ACE_ERROR_RETURN ((LM_ERROR,
+ "(%P|%t) can't register with reactor\n"),
+ -1);
+
+ ACE_DEBUG ((LM_DEBUG,
+ "(%P|%t) connected with %s\n",
+ addr.get_host_name ()));
return 0;
}
- /*
- This is a matter of style & maybe taste. Instead of putting all of this stuff
- into a destructor, we put it here and request that everyone call destroy()
- instead of 'delete'.
- */
+ /* This is a matter of style & maybe taste. Instead of putting all
+ of this stuff into a destructor, we put it here and request that
+ everyone call destroy() instead of 'delete'. */
virtual void destroy (void)
{
- /*
- Remove ourselves from the reactor
- */
- g_reactor->remove_handler(this,ACE_Event_Handler::READ_MASK|ACE_Event_Handler::DONT_CALL);
+ /* Remove ourselves from the reactor */
+ g_reactor->remove_handler
+ (this,
+ ACE_Event_Handler::READ_MASK | ACE_Event_Handler::DONT_CALL);
- /*
- Cancel that timer we scheduled in open()
- */
+ /* Cancel that timer we scheduled in open() */
g_reactor->cancel_timer (this);
- /*
- Shut down the connection to the client.
- */
+ /* Shut down the connection to the client. */
this->peer ().close ();
- /*
- Free our memory.
- */
+ /* Free our memory. */
delete this;
}
- /*
- If somebody doesn't like us, they will close() us. Actually, if our open() method
- returns -1, the Acceptor<> will invoke close() on us for cleanup.
- */
- virtual int close (u_long _flags = 0)
+ /* If somebody doesn't like us, they will close() us. Actually, if
+ our open() method returns -1, the Acceptor<> will invoke close()
+ on us for cleanup. */
+ virtual int close (u_long flags = 0)
{
- /*
- The ACE_Svc_Handler baseclass requires the _flags parameter. We don't
- use it here though, so we mark it as UNUSED. You can accomplish the
- same thing with a signature like handle_input's below.
- */
- ACE_UNUSED_ARG(_flags);
+ /* The ACE_Svc_Handler baseclass requires the <flags> parameter.
+ We don't use it here though, so we mark it as UNUSED. You can
+ accomplish the same thing with a signature like handle_input's
+ below. */
+ ACE_UNUSED_ARG (flags);
/*
Clean up and go away.
- */
+ */
this->destroy ();
return 0;
}
protected:
- /*
- Respond to input just like Tutorial 1.
- */
+ /* Respond to input just like Tutorial 1. */
virtual int handle_input (ACE_HANDLE)
{
char buf[128];
- memset (buf, 0, sizeof (buf));
-
- switch (this->peer ().recv (buf, sizeof buf))
- {
- case -1:
- ACE_ERROR_RETURN ((LM_ERROR, "(%P|%t) %p bad read\n", "client logger"), -1);
- case 0:
- ACE_ERROR_RETURN ((LM_ERROR, "(%P|%t) closing log daemon (fd = %d)\n", this->get_handle ()), -1);
- default:
- ACE_DEBUG ((LM_DEBUG, "(%P|%t) from client: %s", buf));
- }
+ ACE_OS::memset (buf, 0, sizeof (buf));
+
+ switch (this->peer ().recv (buf,
+ sizeof buf))
+ {
+ case -1:
+ ACE_ERROR_RETURN ((LM_ERROR,
+ "(%P|%t) %p bad read\n",
+ "client logger"),
+ -1);
+ case 0:
+ ACE_ERROR_RETURN ((LM_ERROR,
+ "(%P|%t) closing log daemon (fd = %d)\n",
+ this->get_handle ()),
+ -1);
+ default:
+ ACE_DEBUG ((LM_DEBUG,
+ "(%P|%t) from client: %s",
+ buf));
+ }
return 0;
}
- /*
- When the timer expires, handle_timeout() will be called. The 'arg' is the value passed
- after 'this' in the schedule_timer() call. You can pass in anything there that you can
- cast to a void*.
- */
- virtual int handle_timeout (const ACE_Time_Value & tv, const void *arg)
+ /* When the timer expires, handle_timeout() will be called. The
+ 'arg' is the value passed after 'this' in the schedule_timer()
+ call. You can pass in anything there that you can cast to a
+ void*. */
+ virtual int handle_timeout (const ACE_Time_Value &tv,
+ const void *arg)
{
- ACE_DEBUG ((LM_DEBUG, "(%P|%t) handling timeout from this = %u\n", this));
+ ACE_DEBUG ((LM_DEBUG,
+ "(%P|%t) handling timeout from this = %u\n",
+ this));
return 0;
}
/*
Clean ourselves up when handle_input() (or handle_timer()) returns -1
- */
- virtual int handle_close(ACE_HANDLE, ACE_Reactor_Mask _mask)
+ */
+ virtual int handle_close (ACE_HANDLE,
+ ACE_Reactor_Mask)
{
- this->destroy();
+ this->destroy ();
return 0;
}
};
-#endif // LOGGING_HANDLER_H
+#endif /* LOGGING_HANDLER_H */
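The schedule_timer() call in open() above takes an initial delay (2 seconds) and a repeat interval (3 seconds). To make that semantics concrete, here is a small helper, purely illustrative and not part of ACE, that computes the resulting fire times: the first expiry at the delay, then one expiry per interval after that.

```cpp
#include <cassert>
#include <vector>

// Given an initial delay and a repeat interval (both in seconds),
// return every time at which the timer would fire, up to <horizon>.
// Mirrors the delay/interval pair passed to schedule_timer() above.
std::vector<int> fire_times (int delay, int interval, int horizon) {
    std::vector<int> times;
    for (int t = delay; t <= horizon; t += interval)
        times.push_back (t);
    return times;
}
```

With delay 2 and interval 3, the handler's handle_timeout() would run at t = 2, 5, 8, ... until the timer is cancelled, which is exactly what destroy() does with cancel_timer().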
-
// $Id$
#ifndef CLIENT_ACCEPTOR_H
#define CLIENT_ACCEPTOR_H
-/*
- The ACE_Acceptor<> template lives in the ace/Acceptor.h header file. You'll
- find a very consitent naming convention between the ACE objects and the
- headers where they can be found. In general, the ACE object ACE_Foobar will
- be found in ace/Foobar.h.
- */
+/* The ACE_Acceptor<> template lives in the ace/Acceptor.h header
+ file. You'll find a very consistent naming convention between the
+ ACE objects and the headers where they can be found. In general,
+ the ACE object ACE_Foobar will be found in ace/Foobar.h. */
#include "ace/Acceptor.h"
@@ -49,30 +46,26 @@ with an object type to instantiate when a new connection arrives.
# pragma once
#endif /* ACE_LACKS_PRAGMA_ONCE */
-/*
- Since we want to work with sockets, we'll need a SOCK_Acceptor to allow the
- clients to connect to us.
- */
+/* Since we want to work with sockets, we'll need a SOCK_Acceptor to
+ allow the clients to connect to us. */
#include "ace/SOCK_Acceptor.h"
-/*
- The Client_Handler object we develop will be used to handle clients once
- they're connected. The ACE_Acceptor<> template's first parameter requires
- such an object. In some cases, you can get by with just a forward
- declaration on the class, in others you have to have the whole thing.
- */
+/* The Client_Handler object we develop will be used to handle clients
+ once they're connected. The ACE_Acceptor<> template's first
+ parameter requires such an object. In some cases, you can get by
+ with just a forward declaration on the class, in others you have to
+ have the whole thing. */
#include "client_handler.h"
-/*
- Parameterize the ACE_Acceptor<> such that it will listen for socket
- connection attempts and create Client_Handler objects when they happen. In
- Tutorial 001, we wrote the basic acceptor logic on our own before we
- realized that ACE_Acceptor<> was available. You'll get spoiled using the
- ACE templates because they take away a lot of the tedious details!
- */
-typedef ACE_Acceptor < Client_Handler, ACE_SOCK_ACCEPTOR > Client_Acceptor;
+/* Parameterize the ACE_Acceptor<> such that it will listen for socket
+ connection attempts and create Client_Handler objects when they
+ happen. In Tutorial 001, we wrote the basic acceptor logic on our
+ own before we realized that ACE_Acceptor<> was available. You'll
+ get spoiled using the ACE templates because they take away a lot of
+ the tedious details! */
+typedef ACE_Acceptor <Client_Handler, ACE_SOCK_ACCEPTOR> Client_Acceptor;
-#endif // CLIENT_ACCEPTOR_H
+#endif /* CLIENT_ACCEPTOR_H */
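The ACE_Acceptor<> parameterization above, a template instantiated with the handler type to create per connection, can be sketched with a stripped-down analogue. `Toy_Acceptor` and its integer "handles" are hypothetical scaffolding, not the real ACE template; the point is only that the acceptor owns the create-then-open() step for each arriving connection.

```cpp
#include <cassert>
#include <cstddef>
#include <memory>
#include <vector>

// A toy analogue of ACE_Acceptor<Handler, ...>: templated on the
// handler type it instantiates for each new connection.
template <class Handler>
class Toy_Acceptor {
    std::vector<std::unique_ptr<Handler> > handlers_;
public:
    // Called when a "connection" arrives: create a handler, open() it,
    // and keep it if open() succeeds (ACE likewise closes handlers
    // whose open() fails).
    int handle_accept (int handle) {
        std::unique_ptr<Handler> h (new Handler);
        if (h->open (handle) == -1)
            return -1;
        handlers_.push_back (std::move (h));
        return 0;
    }
    std::size_t active () const { return handlers_.size (); }
};

// Minimal handler the template can instantiate.
struct Toy_Handler {
    int handle = -1;
    int open (int h) { handle = h; return 0; }
};

typedef Toy_Acceptor<Toy_Handler> Toy_Client_Acceptor;
```

The typedef at the end mirrors the Client_Acceptor typedef in the header above: all the application chooses is the handler type.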
-
// $Id$
#ifndef CLIENT_HANDLER_H
#define CLIENT_HANDLER_H
-/*
- Our client handler must exist somewhere in the ACE_Event_Handler object
- hierarchy. This is a requirement of the ACE_Reactor because it maintains
- ACE_Event_Handler pointers for each registered event handler. You could
- derive our Client_Handler directly from ACE_Event_Handler but you still have
- to have an ACE_SOCK_Stream for the actual connection. With a direct
- derivative of ACE_Event_Handler, you'll have to contain and maintain an
- ACE_SOCK_Stream instance yourself. With ACE_Svc_Handler (which is a
- derivative of ACE_Event_Handler) some of those details are handled for you.
-
+/* Our client handler must exist somewhere in the ACE_Event_Handler
+ object hierarchy. This is a requirement of the ACE_Reactor because
+ it maintains ACE_Event_Handler pointers for each registered event
+ handler. You could derive our Client_Handler directly from
+ ACE_Event_Handler but you still have to have an ACE_SOCK_Stream for
+ the actual connection. With a direct derivative of
+ ACE_Event_Handler, you'll have to contain and maintain an
+ ACE_SOCK_Stream instance yourself. With ACE_Svc_Handler (which is
+ a derivative of ACE_Event_Handler) some of those details are
+ handled for you.
*/
#include "ace/Svc_Handler.h"
@@ -54,91 +53,82 @@ the definition where all of the real work of the application takes place.
#include "ace/SOCK_Stream.h"
-/*
- Another feature of ACE_Svc_Handler is it's ability to present the ACE_Task<>
- interface as well. That's what the ACE_NULL_SYNCH parameter below is all
- about. That's beyond our scope here but we'll come back to it in the next
- tutorial when we start looking at concurrency options.
- */
-class Client_Handler : public ACE_Svc_Handler < ACE_SOCK_STREAM, ACE_NULL_SYNCH >
+/* Another feature of ACE_Svc_Handler is its ability to present the
+ ACE_Task<> interface as well. That's what the ACE_NULL_SYNCH
+ parameter below is all about. That's beyond our scope here but
+ we'll come back to it in the next tutorial when we start looking at
+ concurrency options. */
+class Client_Handler : public ACE_Svc_Handler <ACE_SOCK_STREAM, ACE_NULL_SYNCH>
{
public:
-
// Constructor...
Client_Handler (void);
- /*
- The destroy() method is our preferred method of destruction. We could
- have overloaded the delete operator but that is neither easy nor
- intuitive (at least to me). Instead, we provide a new method of
- destruction and we make our destructor protected so that only ourselves,
- our derivatives and our friends can delete us. It's a nice
- compromise.
- */
+ /* The destroy() method is our preferred method of destruction. We
+ could have overloaded the delete operator but that is neither easy
+ nor intuitive (at least to me). Instead, we provide a new method
+ of destruction and we make our destructor protected so that only
+ ourselves, our derivatives and our friends can delete us. It's a
+ nice compromise. */
void destroy (void);
- /*
- Most ACE objects have an open() method. That's how you make them ready
- to do work. ACE_Event_Handler has a virtual open() method which allows us
- to create an override. ACE_Acceptor<> will invoke this method after
- creating a new Client_Handler when a client connects. Notice that the
- parameter to open() is a void*. It just so happens that the pointer
- points to the acceptor which created us. You would like for the parameter
- to be an ACE_Acceptor<>* but since ACE_Event_Handler is generic, that
- would tie it too closely to the ACE_Acceptor<> set of objects. In our
- definition of open() you'll see how we get around that.
- */
- int open (void *_acceptor);
-
- /*
- When there is activity on a registered handler, the handle_input() method
- of the handler will be invoked. If that method returns an error code (eg
- -- -1) then the reactor will invoke handle_close() to allow the object to
- clean itself up. Since an event handler can be registered for more than
- one type of callback, the callback mask is provided to inform
- handle_close() exactly which method failed. That way, you don't have to
- maintain state information between your handle_* method calls. The _handle
- parameter is explained below...
- As a side-effect, the reactor will also invoke remove_handler()
- for the object on the mask that caused the -1 return. This means
- that we don't have to do that ourselves!
- */
- int handle_close (ACE_HANDLE _handle, ACE_Reactor_Mask _mask);
+ /* Most ACE objects have an open() method. That's how you make them
+ ready to do work. ACE_Event_Handler has a virtual open() method
+ which allows us to create an override. ACE_Acceptor<> will invoke
+ this method after creating a new Client_Handler when a client
+ connects. Notice that the parameter to open() is a void*. It just
+ so happens that the pointer points to the acceptor which created
+ us. You would like for the parameter to be an ACE_Acceptor<>* but
+ since ACE_Event_Handler is generic, that would tie it too closely
+ to the ACE_Acceptor<> set of objects. In our definition of open()
+ you'll see how we get around that. */
+ int open (void *acceptor);
+
+ /* When there is activity on a registered handler, the
+ handle_input() method of the handler will be invoked. If that
+ method returns an error code (e.g. -1) then the reactor will
+ invoke handle_close() to allow the object to clean itself
+ up. Since an event handler can be registered for more than one
+ type of callback, the callback mask is provided to inform
+ handle_close() exactly which method failed. That way, you don't
+ have to maintain state information between your handle_* method
+ calls. The <handle> parameter is explained below... As a
+ side-effect, the reactor will also invoke remove_handler() for the
+ object on the mask that caused the -1 return. This means that we
+ don't have to do that ourselves! */
+ int handle_close (ACE_HANDLE handle,
+ ACE_Reactor_Mask mask);
protected:
- /*
- When we register with the reactor, we're going to tell it that we want to
- be notified of READ events. When the reactor sees that there is read
- activity for us, our handle_input() will be invoked. The _handle
- provided is the handle (file descriptor in Unix) of the actual connection
- causing the activity. Since we're derived from ACE_Svc_Handler<> and it
- maintains its own peer (ACE_SOCK_Stream) object, this is redundant for
- us. However, if we had been derived directly from ACE_Event_Handler, we
- may have chosen not to contain the peer. In that case, the _handle
- would be important to us for reading the client's data.
- */
- int handle_input (ACE_HANDLE _handle);
-
- /*
- This has nothing at all to do with ACE. I've added this here as a worker
- function which I will call from handle_input(). That allows me to
- introduce concurrency in later tutorials with no changes to the worker
- function. You can think of process() as application-level code and
- everything else as application-framework code.
- */
- int process (char *_rdbuf, int _rdbuf_len);
-
- /*
- We don't really do anything in our destructor but we've declared it to be
- protected to prevent casual deletion of this object. As I said above, I
- really would prefer that everyone goes through the destroy() method to get
- rid of us.
- */
- ~Client_Handler (void);
+ /* When we register with the reactor, we're going to tell it that we
+ want to be notified of READ events. When the reactor sees that
+ there is read activity for us, our handle_input() will be
+ invoked. The <handle> provided is the handle (file descriptor in
+ Unix) of the actual connection causing the activity. Since we're
+ derived from ACE_Svc_Handler<> and it maintains its own peer
+ (ACE_SOCK_Stream) object, this is redundant for us. However, if
+ we had been derived directly from ACE_Event_Handler, we may have
+ chosen not to contain the peer. In that case, the <handle> would
+ be important to us for reading the client's data. */
+ int handle_input (ACE_HANDLE handle);
+
+ /* This has nothing at all to do with ACE. I've added this here as
+ a worker function which I will call from handle_input(). That
+ allows me to introduce concurrency in later tutorials with no
+ changes to the worker function. You can think of process() as
+ application-level code and everything else as
+ application-framework code. */
+ int process (char *rdbuf, int rdbuf_len);
+
+ /* We don't really do anything in our destructor but we've declared
+ it to be protected to prevent casual deletion of this object. As
+ I said above, I really would prefer that everyone goes through the
+ destroy() method to get rid of us. */
+ ~Client_Handler (void);
};
-#endif // CLIENT_HANDLER_H
+#endif /* CLIENT_HANDLER_H */
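The destroy()-instead-of-delete idiom that this header's comments describe reduces to a protected destructor plus a public destroy() method: `delete p;` from outside the class won't compile, so every caller is funneled through the one place where cleanup happens. The sketch below is a minimal illustration of that compromise (the `live_count` counter is instrumentation for the example only, not part of the idiom).

```cpp
#include <cassert>

// Protected-destructor / public-destroy() idiom, as described in the
// Client_Handler comments above.
class Toy_Svc_Handler {
public:
    static int live_count;  // example-only instrumentation

    Toy_Svc_Handler () { ++live_count; }

    // The sanctioned way to dispose of the object: cleanup such as
    // remove_handler()/cancel_timer()/peer().close() would go here,
    // followed by self-deletion.
    void destroy () {
        delete this;  // legal: member functions can reach ~Toy_Svc_Handler
    }

protected:
    // Protected so only the class, derivatives, and friends can delete;
    // casual `delete p;` at a call site is a compile error.
    ~Toy_Svc_Handler () { --live_count; }
};

int Toy_Svc_Handler::live_count = 0;
```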
-
// $Id$
#ifndef CLIENT_ACCEPTOR_H
#define CLIENT_ACCEPTOR_H
-/*
- The ACE_Acceptor<> template lives in the ace/Acceptor.h header file. You'll
- find a very consistent naming convention between the ACE objects and the
- headers where they can be found. In general, the ACE object ACE_Foobar will
-
-
- be found in ace/Foobar.h.
- */
+/* The ACE_Acceptor<> template lives in the ace/Acceptor.h header
+ file. You'll find a very consistent naming convention between the
+ ACE objects and the headers where they can be found. In general,
+ the ACE object ACE_Foobar will be found in ace/Foobar.h. */
#include "ace/Acceptor.h"
@@ -47,70 +42,64 @@ to the Tutorial 5 version of this file.
# pragma once
#endif /* ACE_LACKS_PRAGMA_ONCE */
-/*
- Since we want to work with sockets, we'll need a SOCK_Acceptor to allow the
- clients to connect to us.
- */
+/* Since we want to work with sockets, we'll need a SOCK_Acceptor to
+ allow the clients to connect to us. */
#include "ace/SOCK_Acceptor.h"
-/*
- The Client_Handler object we develop will be used to handle clients once
- they're connected. The ACE_Acceptor<> template's first parameter requires
- such an object. In some cases, you can get by with just a forward
- declaration on the class, in others you have to have the whole thing.
- */
+/* The Client_Handler object we develop will be used to handle clients
+ once they're connected. The ACE_Acceptor<> template's first
+ parameter requires such an object. In some cases, you can get by
+ with just a forward declaration on the class, in others you have to
+ have the whole thing. */
#include "client_handler.h"
-/*
- Parameterize the ACE_Acceptor<> such that it will listen for socket
- connection attempts and create Client_Handler objects when they happen. In
- Tutorial 001, we wrote the basic acceptor logic on our own before we
- realized that ACE_Acceptor<> was available. You'll get spoiled using the
- ACE templates because they take away a lot of the tedious details!
- */
-typedef ACE_Acceptor < Client_Handler, ACE_SOCK_ACCEPTOR > Client_Acceptor_Base;
-
-/*
- Here, we use the parameterized ACE_Acceptor<> as a baseclass for our customized
- Client_Acceptor object. I've done this so that we can provide it with our choice
- of concurrency strategies when the object is created. Each Client_Handler it
- creates will use this information to determine how to act. If we were going
- to create a system that was always thread-per-connection, we would not have
- bothered to extend Client_Acceptor.
- */
+/* Parameterize the ACE_Acceptor<> such that it will listen for socket
+ connection attempts and create Client_Handler objects when they
+ happen. In Tutorial 001, we wrote the basic acceptor logic on our
+ own before we realized that ACE_Acceptor<> was available. You'll
+ get spoiled using the ACE templates because they take away a lot of
+ the tedious details! */
+typedef ACE_Acceptor <Client_Handler, ACE_SOCK_ACCEPTOR> Client_Acceptor_Base;
+
+/* Here, we use the parameterized ACE_Acceptor<> as a baseclass for
+ our customized Client_Acceptor object. I've done this so that we
+ can provide it with our choice of concurrency strategies when the
+ object is created. Each Client_Handler it creates will use this
+ information to determine how to act. If we were going to create a
+ system that was always thread-per-connection, we would not have
+ bothered to extend Client_Acceptor. */
class Client_Acceptor : public Client_Acceptor_Base
{
public:
- /*
- This is always a good idea. If nothing else, it makes your code more
- orthogonal no matter what baseclasses your objects have.
- */
- typedef Client_Acceptor_Base inherited;
-
- /*
- Construct the object with the concurrency strategy. Since this tutorial
- is focused on thread-per-connection, we make that the default. We could
- have chosen to omitt the default and populate it in main() instead.
- */
- Client_Acceptor( int _thread_per_connection = 1 )
- : thread_per_connection_(_thread_per_connection)
- {
- }
-
- /*
- Return the value of our strategy flag. This is used by the Client_Handler
- to decide how to act. If 'true' then the handler will behave in a
- thread-per-connection manner.
- */
- int thread_per_connection(void)
- { return this->thread_per_connection_; }
+ /*
+ This is always a good idea. If nothing else, it makes your code more
+ orthogonal no matter what baseclasses your objects have.
+ */
+ typedef Client_Acceptor_Base inherited;
+
+ /*
+ Construct the object with the concurrency strategy. Since this tutorial
+ is focused on thread-per-connection, we make that the default. We could
+ have chosen to omit the default and populate it in main() instead.
+ */
+ Client_Acceptor (int thread_per_connection = 1)
+ : thread_per_connection_ (thread_per_connection)
+ {
+ }
+
+ /* Return the value of our strategy flag. This is used by the
+ Client_Handler to decide how to act. If 'true' then the handler
+ will behave in a thread-per-connection manner. */
+ int thread_per_connection (void)
+ {
+ return this->thread_per_connection_;
+ }
protected:
- int thread_per_connection_;
-
+ int thread_per_connection_;
};
-#endif // CLIENT_ACCEPTOR_H
+#endif /* CLIENT_ACCEPTOR_H */
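The extended Client_Acceptor above exists so the concurrency choice can travel from acceptor to handler: the handler's open(void*) gets an opaque pointer to the acceptor that created it and casts back to the concrete type to query thread_per_connection(). A minimal sketch of that round trip, using hypothetical toy classes rather than the real ACE ones:

```cpp
#include <cassert>

// Toy acceptor carrying the concurrency strategy flag, as in the
// Client_Acceptor subclass above.
class Toy_Acceptor {
    int thread_per_connection_;
public:
    Toy_Acceptor (int thread_per_connection = 1)
        : thread_per_connection_ (thread_per_connection)
    {
    }
    int thread_per_connection () const { return thread_per_connection_; }
};

// Toy handler mirroring Client_Handler::open(void*): the framework
// passes the creating acceptor as a void*, so we cast it back to the
// concrete type to read the strategy flag.
class Toy_Handler {
public:
    int open (void *acceptor) {
        Toy_Acceptor *a = static_cast<Toy_Acceptor *> (acceptor);
        threaded_ = a->thread_per_connection ();
        // A real handler would activate() into its own thread here
        // when threaded_ is true, else register with the reactor.
        return 0;
    }
    int threaded_ = 0;
};
```

The void* signature is the price of keeping ACE_Event_Handler generic; the cast is safe only because this handler is created exclusively by this acceptor type.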
diff --git a/docs/tutorials/006/page04.html b/docs/tutorials/006/page04.html
index 130c264cec7..a580702f0cd 100644
--- a/docs/tutorials/006/page04.html
+++ b/docs/tutorials/006/page04.html
@@ -23,21 +23,21 @@ exist.
-
// $Id$
#ifndef CLIENT_HANDLER_H
#define CLIENT_HANDLER_H
-/*
- Our client handler must exist somewhere in the ACE_Event_Handler object
- hierarchy. This is a requirement of the ACE_Reactor because it maintains
- ACE_Event_Handler pointers for each registered event handler. You could
- derive our Client_Handler directly from ACE_Event_Handler but you still have
- to have an ACE_SOCK_Stream for the actually connection. With a direct
- derivative of ACE_Event_Handler, you'll have to contain and maintain an
- ACE_SOCK_Stream instance yourself. With ACE_Svc_Handler (which is a
- derivative of ACE_Event_Handler) some of those details are handled for you.
+/* Our client handler must exist somewhere in the ACE_Event_Handler
+ object hierarchy. This is a requirement of the ACE_Reactor because
+ it maintains ACE_Event_Handler pointers for each registered event
+ handler. You could derive our Client_Handler directly from
+ ACE_Event_Handler but you still have to have an ACE_SOCK_Stream for
+ the actual connection. With a direct derivative of
+ ACE_Event_Handler, you'll have to contain and maintain an
+ ACE_SOCK_Stream instance yourself. With ACE_Svc_Handler (which is
+ a derivative of ACE_Event_Handler) some of those details are
+ handled for you.
*/
@@ -49,110 +49,98 @@ exist.
#include "ace/SOCK_Stream.h"
-/*
- Another feature of ACE_Svc_Handler is it's ability to present the ACE_Task<>
- interface as well. That's what the ACE_NULL_SYNCH parameter below is all
- about. If our Client_Acceptor has chosen thread-per-connection then our
- open() method will activate us into a thread. At that point, our svc()
- method will execute. We still don't take advantage of the things
- ACE_NULL_SYNCH exists for but stick around for Tutorial 7 and pay special
- attention to the Thread_Pool object there for an explanation.
- */
-class Client_Handler : public ACE_Svc_Handler < ACE_SOCK_STREAM, ACE_NULL_SYNCH >
+/* Another feature of ACE_Svc_Handler is its ability to present the
+ ACE_Task<> interface as well. That's what the ACE_NULL_SYNCH
+ parameter below is all about. If our Client_Acceptor has chosen
+ thread-per-connection then our open() method will activate us into
+ a thread. At that point, our svc() method will execute. We still
+ don't take advantage of the things ACE_NULL_SYNCH exists for but
+ stick around for Tutorial 7 and pay special attention to the
+ Thread_Pool object there for an explanation. */
+class Client_Handler : public ACE_Svc_Handler <ACE_SOCK_STREAM, ACE_NULL_SYNCH>
{
public:
- typedef ACE_Svc_Handler < ACE_SOCK_STREAM, ACE_NULL_SYNCH > inherited;
+ typedef ACE_Svc_Handler <ACE_SOCK_STREAM, ACE_NULL_SYNCH> inherited;
// Constructor...
Client_Handler (void);
- /*
- The destroy() method is our preferred method of destruction. We could
- have overloaded the delete operator but that is neither easy nor
- intuitive (at least to me). Instead, we provide a new method of
- destruction and we make our destructor protected so that only ourselves,
- our derivatives and our friends can delete us. It's a nice
- compromise.
- */
+ /* The destroy() method is our preferred method of destruction. We
+ could have overloaded the delete operator but that is neither easy
+ nor intuitive (at least to me). Instead, we provide a new method
+ of destruction and we make our destructor protected so that only
+ ourselves, our derivatives and our friends can delete us. It's a
+ nice compromise. */
void destroy (void);
- /*
- Most ACE objects have an open() method. That's how you make them ready
- to do work. ACE_Event_Handler has a virtual open() method which allows us
- to create this overrride. ACE_Acceptor<> will invoke this method after
- creating a new Client_Handler when a client connects. Notice that the
- parameter to open() is a void*. It just so happens that the pointer
- points to the acceptor which created us. You would like for the parameter
- to be an ACE_Acceptor<>* but since ACE_Event_Handler is generic, that
- would tie it too closely to the ACE_Acceptor<> set of objects. In our
- definition of open() you'll see how we get around that.
- */
- int open (void *_acceptor);
-
- /*
- When an ACE_Task<> object falls out of the svc() method, the framework
- will call the close() method. That's where we want to cleanup ourselves
- if we're running in either thread-per-connection or thread-pool mode.
- */
- int close(u_long flags = 0);
-
- /*
- When there is activity on a registered handler, the handle_input() method
- of the handler will be invoked. If that method returns an error code (eg
- -- -1) then the reactor will invoke handle_close() to allow the object to
- clean itself up. Since an event handler can be registered for more than
- one type of callback, the callback mask is provided to inform
- handle_close() exactly which method failed. That way, you don't have to
- maintain state information between your handle_* method calls. The _handle
- parameter is explained below...
- As a side-effect, the reactor will also invoke remove_handler()
- for the object on the mask that caused the -1 return. This means
- that we don't have to do that ourselves!
- */
- virtual int handle_close (ACE_HANDLE _handle = ACE_INVALID_HANDLE,
- ACE_Reactor_Mask _mask = ACE_Event_Handler::ALL_EVENTS_MASK );
+ /* Most ACE objects have an open() method. That's how you make them
+ ready to do work. ACE_Event_Handler has a virtual open() method
+   which allows us to create this override.  ACE_Acceptor<> will
+ invoke this method after creating a new Client_Handler when a
+ client connects. Notice that the parameter to open() is a void*.
+ It just so happens that the pointer points to the acceptor which
+ created us. You would like for the parameter to be an
+ ACE_Acceptor<>* but since ACE_Event_Handler is generic, that would
+ tie it too closely to the ACE_Acceptor<> set of objects. In our
+ definition of open() you'll see how we get around that. */
+ int open (void *acceptor);
+
+ /* When an ACE_Task<> object falls out of the svc() method, the
+ framework will call the close() method. That's where we want to
+   clean ourselves up if we're running in either thread-per-connection
+ or thread-pool mode. */
+ int close (u_long flags = 0);
+
+ /* When there is activity on a registered handler, the
+ handle_input() method of the handler will be invoked. If that
+   method returns an error code (e.g., -1) then the reactor will
+ invoke handle_close() to allow the object to clean itself
+ up. Since an event handler can be registered for more than one
+ type of callback, the callback mask is provided to inform
+ handle_close() exactly which method failed. That way, you don't
+ have to maintain state information between your handle_* method
+ calls. The <handle> parameter is explained below... As a
+ side-effect, the reactor will also invoke remove_handler() for the
+ object on the mask that caused the -1 return. This means that we
+ don't have to do that ourselves! */
+ virtual int handle_close (ACE_HANDLE handle = ACE_INVALID_HANDLE,
+ ACE_Reactor_Mask mask = ACE_Event_Handler::ALL_EVENTS_MASK );
protected:
- /*
- If the Client_Acceptor which created us has chosen a thread-per-connection
- strategy then our open() method will activate us into a dedicate thread.
- The svc() method will then execute in that thread performing some of the
- functions we used to leave up to the reactor.
- */
- int svc(void);
-
- /*
- When we register with the reactor, we're going to tell it that we want to
- be notified of READ events. When the reactor sees that there is read
- activity for us, our handle_input() will be invoked. The _handleg
- provided is the handle (file descriptor in Unix) of the actual connection
- causing the activity. Since we're derived from ACE_Svc_Handler<> and it
- maintains it's own peer (ACE_SOCK_Stream) object, this is redundant for
- us. However, if we had been derived directly from ACE_Event_Handler, we
- may have chosen not to contain the peer. In that case, the _handleg
- would be important to us for reading the client's data.
- */
- int handle_input (ACE_HANDLE _handle);
-
- /*
- This has nothing at all to do with ACE. I've added this here as a worker
- function which I will call from handle_input(). As promised in Tutorial 5
- I will use this now to make it easier to switch between our two possible
- concurrency strategies.
- */
- int process (char *_rdbuf, int _rdbuf_len);
-
- /*
- We don't really do anything in our destructor but we've declared it to be
- protected to prevent casual deletion of this object. As I said above, I
- really would prefer that everyone goes through the destroy() method to get
- rid of us.
- */
- ~Client_Handler (void);
+ /* If the Client_Acceptor which created us has chosen a
+ thread-per-connection strategy then our open() method will
+   activate us into a dedicated thread.  The svc() method will then
+ execute in that thread performing some of the functions we used to
+ leave up to the reactor. */
+ int svc (void);
+
+ /* When we register with the reactor, we're going to tell it that we
+ want to be notified of READ events. When the reactor sees that
+ there is read activity for us, our handle_input() will be
+   invoked.  The <handle> provided is the handle (file descriptor in
+ Unix) of the actual connection causing the activity. Since we're
+   derived from ACE_Svc_Handler<> and it maintains its own peer
+ (ACE_SOCK_Stream) object, this is redundant for us. However, if
+ we had been derived directly from ACE_Event_Handler, we may have
+ chosen not to contain the peer. In that case, the <handle> would
+ be important to us for reading the client's data. */
+ int handle_input (ACE_HANDLE handle);
+
+ /* This has nothing at all to do with ACE. I've added this here as
+ a worker function which I will call from handle_input(). As
+ promised in Tutorial 5 I will use this now to make it easier to
+ switch between our two possible concurrency strategies. */
+ int process (char *rdbuf, int rdbuf_len);
+
+ /* We don't really do anything in our destructor but we've declared
+ it to be protected to prevent casual deletion of this object. As
+ I said above, I really would prefer that everyone goes through the
+ destroy() method to get rid of us. */
+ ~Client_Handler (void);
};
-#endif // CLIENT_HANDLER_H
+#endif /* CLIENT_HANDLER_H */
diff --git a/docs/tutorials/007/page01.html b/docs/tutorials/007/page01.html
index e2585c7b3de..a1cd7ceac79 100644
--- a/docs/tutorials/007/page01.html
+++ b/docs/tutorials/007/page01.html
@@ -77,5 +77,6 @@ which provides an OO approach to thread-creation and implementation.
ACE_Message_Queue which is discussed in depth in
Tutorial 10. Feel free to read ahead
if you get lost in the message queue stuff.
-
+
+
-
// $Id$
#ifndef CLIENT_ACCEPTOR_H
#define CLIENT_ACCEPTOR_H
-/*
- The ACE_Acceptor<> template lives in the ace/Acceptor.h header file. You'll
- find a very consitent naming convention between the ACE objects and the
- headers where they can be found. In general, the ACE object ACE_Foobar will
- be found in ace/Foobar.h.
- */
+/* The ACE_Acceptor<> template lives in the ace/Acceptor.h header
+   file.  You'll find a very consistent naming convention between the
+ ACE objects and the headers where they can be found. In general,
+ the ACE object ACE_Foobar will be found in ace/Foobar.h. */
#include "ace/Acceptor.h"
@@ -37,128 +34,115 @@
# pragma once
#endif /* ACE_LACKS_PRAGMA_ONCE */
-/*
- Since we want to work with sockets, we'll need a SOCK_Acceptor to allow the
- clients to connect to us.
- */
+/* Since we want to work with sockets, we'll need a SOCK_Acceptor to
+ allow the clients to connect to us. */
#include "ace/SOCK_Acceptor.h"
-/*
- The Client_Handler object we develop will be used to handle clients once
- they're connected. The ACE_Acceptor<> template's first parameter requires
- such an object. In some cases, you can get by with just a forward
- declaration on the class, in others you have to have the whole thing.
- */
+/* The Client_Handler object we develop will be used to handle clients
+ once they're connected. The ACE_Acceptor<> template's first
+ parameter requires such an object. In some cases, you can get by
+ with just a forward declaration on the class, in others you have to
+ have the whole thing. */
#include "client_handler.h"
-/*
- Parameterize the ACE_Acceptor<> such that it will listen for socket
- connection attempts and create Client_Handler objects when they happen. In
- Tutorial 001, we wrote the basic acceptor logic on our own before we
- realized that ACE_Acceptor<> was available. You'll get spoiled using the
- ACE templates because they take away a lot of the tedious details!
- */
-typedef ACE_Acceptor < Client_Handler, ACE_SOCK_ACCEPTOR > Client_Acceptor_Base;
+/* Parameterize the ACE_Acceptor<> such that it will listen for socket
+ connection attempts and create Client_Handler objects when they
+ happen. In Tutorial 001, we wrote the basic acceptor logic on our
+ own before we realized that ACE_Acceptor<> was available. You'll
+ get spoiled using the ACE templates because they take away a lot of
+ the tedious details! */
+typedef ACE_Acceptor <Client_Handler, ACE_SOCK_ACCEPTOR> Client_Acceptor_Base;
#include "thread_pool.h"
-/*
- This time we've added quite a bit more to our acceptor. In addition to
- providing a choice of concurrency strategies, we also maintain a Thread_Pool
- object in case that strategy is chosen. The object still isn't very complex
- but it's come a long way from the simple typedef we had in Tutorial 5.
-
- Why keep the thread pool as a member? If we go back to the inetd concept
- you'll recall that we need several acceptors to make that work. We may have
- a situation in which our different client types requre different resources.
- That is, we may need a large thread pool for some client types and a smaller
- one for others. We could share a pool but then the client types may have
- undesirable impact on one another.
-
- Just in case you do want to share a single thread pool, there is a constructor
- below that will let you do that.
- */
+/* This time we've added quite a bit more to our acceptor. In
+ addition to providing a choice of concurrency strategies, we also
+ maintain a Thread_Pool object in case that strategy is chosen. The
+ object still isn't very complex but it's come a long way from the
+ simple typedef we had in Tutorial 5.
+
+ Why keep the thread pool as a member? If we go back to the inetd
+ concept you'll recall that we need several acceptors to make that
+ work. We may have a situation in which our different client types
+   require different resources.  That is, we may need a large thread
+ pool for some client types and a smaller one for others. We could
+ share a pool but then the client types may have undesirable impact
+ on one another.
+
+ Just in case you do want to share a single thread pool, there is a
+ constructor below that will let you do that. */
class Client_Acceptor : public Client_Acceptor_Base
{
public:
- typedef Client_Acceptor_Base inherited;
-
- /*
- Now that we have more than two strategies, we need more than a boolean
- to tell us what we're using. A set of enums is a good choice because
- it allows us to use named values. Another option would be a set of
- static const integers.
- */
- enum concurrency_t
- {
- single_threaded_,
- thread_per_connection_,
- thread_pool_
- };
-
- /*
- The default constructor allows the programmer to choose the concurrency
- strategy. Since we want to focus on thread-pool, that's what we'll use
- if nothing is specified.
- */
- Client_Acceptor( int _concurrency = thread_pool_ );
-
- /*
- Another option is to construct the object with an existing thread pool.
- The concurrency strategy is pretty obvious at that point.
- */
- Client_Acceptor( Thread_Pool & _thread_pool );
-
- /*
- Our destructor will take care of shutting down the thread-pool
- if applicable.
- */
- ~Client_Acceptor( void );
-
- /*
- Open ourselves and register with the given reactor. The thread pool size
- can be specified here if you want to use that concurrency strategy.
- */
- int open( const ACE_INET_Addr & _addr, ACE_Reactor * _reactor,
- int _pool_size = Thread_Pool::default_pool_size_ );
-
- /*
- Close ourselves and our thread pool if applicable
- */
- int close(void);
-
- /*
- What is our concurrency strategy?
- */
- int concurrency(void)
- { return this->concurrency_; }
-
- /*
- Give back a pointer to our thread pool. Our Client_Handler objects
- will need this so that their handle_input() methods can put themselves
- into the pool. Another alternative would be a globally accessible
- thread pool. ACE_Singleton<> is a way to achieve that.
- */
- Thread_Pool * thread_pool(void)
- { return & this->the_thread_pool_; }
-
- /*
- Since we can be constructed with a Thread_Pool reference, there are times
- when we need to know if the thread pool we're using is ours or if we're
- just borrowing it from somebody else.
- */
- int thread_pool_is_private(void)
- { return &the_thread_pool_ == &private_thread_pool_; }
+ typedef Client_Acceptor_Base inherited;
+
+ /* Now that we have more than two strategies, we need more than a
+ boolean to tell us what we're using. A set of enums is a good
+ choice because it allows us to use named values. Another option
+ would be a set of static const integers. */
+ enum concurrency_t
+ {
+ single_threaded_,
+ thread_per_connection_,
+ thread_pool_
+ };
+
+ /* The default constructor allows the programmer to choose the
+ concurrency strategy. Since we want to focus on thread-pool,
+ that's what we'll use if nothing is specified. */
+ Client_Acceptor (int concurrency = thread_pool_);
+
+ /* Another option is to construct the object with an existing thread
+ pool. The concurrency strategy is pretty obvious at that point. */
+ Client_Acceptor (Thread_Pool &thread_pool);
+
+ /* Our destructor will take care of shutting down the thread-pool if
+ applicable. */
+ ~Client_Acceptor (void);
+
+ /* Open ourselves and register with the given reactor. The thread
+ pool size can be specified here if you want to use that
+ concurrency strategy. */
+ int open (const ACE_INET_Addr &addr,
+ ACE_Reactor *reactor,
+ int pool_size = Thread_Pool::default_pool_size_);
+
+ /* Close ourselves and our thread pool if applicable */
+ int close (void);
+
+ /* What is our concurrency strategy? */
+ int concurrency (void)
+ {
+ return this->concurrency_;
+ }
+
+ /* Give back a pointer to our thread pool. Our Client_Handler
+ objects will need this so that their handle_input() methods can
+ put themselves into the pool. Another alternative would be a
+ globally accessible thread pool. ACE_Singleton<> is a way to
+ achieve that. */
+ Thread_Pool *thread_pool (void)
+ {
+ return &this->the_thread_pool_;
+ }
+
+ /* Since we can be constructed with a Thread_Pool reference, there
+ are times when we need to know if the thread pool we're using is
+ ours or if we're just borrowing it from somebody else. */
+ int thread_pool_is_private (void)
+ {
+ return &the_thread_pool_ == &private_thread_pool_;
+ }
protected:
- int concurrency_;
+ int concurrency_;
- Thread_Pool private_thread_pool_;
+ Thread_Pool private_thread_pool_;
- Thread_Pool & the_thread_pool_;
+ Thread_Pool &the_thread_pool_;
};
-#endif // CLIENT_ACCEPTOR_H
+#endif /* CLIENT_ACCEPTOR_H */
diff --git a/docs/tutorials/007/page05.html b/docs/tutorials/007/page05.html
index f3b27749232..a416cb32505 100644
--- a/docs/tutorials/007/page05.html
+++ b/docs/tutorials/007/page05.html
@@ -19,22 +19,21 @@ is next.
-
// $Id$
#ifndef CLIENT_HANDLER_H
#define CLIENT_HANDLER_H
-/*
- Our client handler must exist somewhere in the ACE_Event_Handler object
- hierarchy. This is a requirement of the ACE_Reactor because it maintains
- ACE_Event_Handler pointers for each registered event handler. You could
- derive our Client_Handler directly from ACE_Event_Handler but you still have
- to have an ACE_SOCK_Stream for the actually connection. With a direct
- derivative of ACE_Event_Handler, you'll have to contain and maintain an
- ACE_SOCK_Stream instance yourself. With ACE_Svc_Handler (which is a
- derivative of ACE_Event_Handler) some of those details are handled for you.
- */
+/* Our client handler must exist somewhere in the ACE_Event_Handler
+ object hierarchy. This is a requirement of the ACE_Reactor because
+ it maintains ACE_Event_Handler pointers for each registered event
+ handler. You could derive our Client_Handler directly from
+ ACE_Event_Handler but you still have to have an ACE_SOCK_Stream for
+   the actual connection.  With a direct derivative of
+ ACE_Event_Handler, you'll have to contain and maintain an
+ ACE_SOCK_Stream instance yourself. With ACE_Svc_Handler (which is
+ a derivative of ACE_Event_Handler) some of those details are
+ handled for you. */
#include "ace/Svc_Handler.h"
@@ -47,150 +46,135 @@ is next.
class Client_Acceptor;
class Thread_Pool;
-/*
- Another feature of ACE_Svc_Handler is it's ability to present the ACE_Task<>
- interface as well. That's what the ACE_NULL_SYNCH parameter below is all
- about. That's beyond our scope here but we'll come back to it in the next
- tutorial when we start looking at concurrency options.
- */
-class Client_Handler : public ACE_Svc_Handler < ACE_SOCK_STREAM, ACE_NULL_SYNCH >
+/* Another feature of ACE_Svc_Handler is its ability to present the
+ ACE_Task<> interface as well. That's what the ACE_NULL_SYNCH
+ parameter below is all about. That's beyond our scope here but
+ we'll come back to it in the next tutorial when we start looking at
+ concurrency options. */
+class Client_Handler : public ACE_Svc_Handler <ACE_SOCK_STREAM, ACE_NULL_SYNCH>
{
public:
- typedef ACE_Svc_Handler < ACE_SOCK_STREAM, ACE_NULL_SYNCH > inherited;
+ typedef ACE_Svc_Handler <ACE_SOCK_STREAM, ACE_NULL_SYNCH> inherited;
// Constructor...
Client_Handler (void);
- /*
- The destroy() method is our preferred method of destruction. We could
- have overloaded the delete operator but that is neither easy nor
- intuitive (at least to me). Instead, we provide a new method of
- destruction and we make our destructor protected so that only ourselves,
- our derivatives and our friends can delete us. It's a nice
- compromise.
- */
+ /* The destroy() method is our preferred method of destruction. We
+ could have overloaded the delete operator but that is neither easy
+ nor intuitive (at least to me). Instead, we provide a new method
+ of destruction and we make our destructor protected so that only
+ ourselves, our derivatives and our friends can delete us. It's a
+ nice compromise. */
void destroy (void);
- /*
- Most ACE objects have an open() method. That's how you make them ready
- to do work. ACE_Event_Handler has a virtual open() method which allows us
- to create this overrride. ACE_Acceptor<> will invoke this method after
- creating a new Client_Handler when a client connects. Notice that the
- parameter to open() is a void*. It just so happens that the pointer
- points to the acceptor which created us. You would like for the parameter
- to be an ACE_Acceptor<>* but since ACE_Event_Handler is generic, that
- would tie it too closely to the ACE_Acceptor<> set of objects. In our
- definition of open() you'll see how we get around that.
- */
- int open (void *_acceptor);
-
- /*
- When an ACE_Task<> object falls out of the svc() method, the framework
- will call the close() method. That's where we want to cleanup ourselves
- if we're running in either thread-per-connection or thread-pool mode.
- */
- int close(u_long flags = 0);
-
- /*
- When there is activity on a registered handler, the handle_input() method
- of the handler will be invoked. If that method returns an error code (eg
- -- -1) then the reactor will invoke handle_close() to allow the object to
- clean itself up. Since an event handler can be registered for more than
- one type of callback, the callback mask is provided to inform
- handle_close() exactly which method failed. That way, you don't have to
- maintain state information between your handle_* method calls. The _handle
- parameter is explained below...
- As a side-effect, the reactor will also invoke remove_handler()
- for the object on the mask that caused the -1 return. This means
- that we don't have to do that ourselves!
- */
- int handle_close (ACE_HANDLE _handle, ACE_Reactor_Mask _mask);
-
- /*
- When we register with the reactor, we're going to tell it that we want to
- be notified of READ events. When the reactor sees that there is read
- activity for us, our handle_input() will be invoked. The _handleg
- provided is the handle (file descriptor in Unix) of the actual connection
- causing the activity. Since we're derived from ACE_Svc_Handler<> and it
- maintains it's own peer (ACE_SOCK_Stream) object, this is redundant for
- us. However, if we had been derived directly from ACE_Event_Handler, we
- may have chosen not to contain the peer. In that case, the _handleg
- would be important to us for reading the client's data.
- */
- int handle_input (ACE_HANDLE _handle);
+ /* Most ACE objects have an open() method. That's how you make them
+ ready to do work. ACE_Event_Handler has a virtual open() method
+   which allows us to create this override.  ACE_Acceptor<> will
+ invoke this method after creating a new Client_Handler when a
+ client connects. Notice that the parameter to open() is a void*.
+ It just so happens that the pointer points to the acceptor which
+ created us. You would like for the parameter to be an
+ ACE_Acceptor<>* but since ACE_Event_Handler is generic, that would
+ tie it too closely to the ACE_Acceptor<> set of objects. In our
+ definition of open() you'll see how we get around that. */
+ int open (void *acceptor);
+
+ /* When an ACE_Task<> object falls out of the svc() method, the
+ framework will call the close() method. That's where we want to
+   clean ourselves up if we're running in either thread-per-connection
+ or thread-pool mode. */
+ int close (u_long flags = 0);
+
+ /* When there is activity on a registered handler, the
+ handle_input() method of the handler will be invoked. If that
+   method returns an error code (e.g., -1) then the reactor will
+ invoke handle_close() to allow the object to clean itself
+ up. Since an event handler can be registered for more than one
+ type of callback, the callback mask is provided to inform
+ handle_close() exactly which method failed. That way, you don't
+ have to maintain state information between your handle_* method
+ calls. The <handle> parameter is explained below... As a
+ side-effect, the reactor will also invoke remove_handler() for the
+ object on the mask that caused the -1 return. This means that we
+ don't have to do that ourselves! */
+ int handle_close (ACE_HANDLE handle,
+ ACE_Reactor_Mask mask);
+
+ /* When we register with the reactor, we're going to tell it that we
+ want to be notified of READ events. When the reactor sees that
+ there is read activity for us, our handle_input() will be
+   invoked.  The <handle> provided is the handle (file descriptor in
+ Unix) of the actual connection causing the activity. Since we're
+   derived from ACE_Svc_Handler<> and it maintains its own peer
+ (ACE_SOCK_Stream) object, this is redundant for us. However, if
+ we had been derived directly from ACE_Event_Handler, we may have
+ chosen not to contain the peer. In that case, the <handle> would
+ be important to us for reading the client's data. */
+ int handle_input (ACE_HANDLE handle);
protected:
- /*
- If the Client_Acceptor which created us has chosen a thread-per-connection
- strategy then our open() method will activate us into a dedicate thread.
- The svc() method will then execute in that thread performing some of the
- functions we used to leave up to the reactor.
- */
- int svc(void);
-
- /*
- This has nothing at all to do with ACE. I've added this here as a worker
- function which I will call from handle_input(). That allows me to
- introduce concurrencly in later tutorials with a no changes to the worker
- function. You can think of process() as application-level code and
- everything elase as application-framework code.
- */
- int process (char *_rdbuf, int _rdbuf_len);
-
- /*
- We don't really do anything in our destructor but we've declared it to be
- protected to prevent casual deletion of this object. As I said above, I
- really would prefer that everyone goes through the destroy() method to get
- rid of us.
- */
- ~Client_Handler (void);
-
- /*
- When we get to the definition of Client_Handler we'll see that there are
- several places where we go back to the Client_Acceptor for information.
- It is generally a good idea to do that through an accesor rather than
- using the member variable directly.
- */
- Client_Acceptor * client_acceptor( void )
- { return this->client_acceptor_; }
-
- /*
- And since you shouldn't access a member variable directly, neither should you
- set (mutate) it. Although it might seem silly to do it this way, you'll thank
- yourself for it later.
- */
- void client_acceptor( Client_Acceptor * _client_acceptor )
- { this->client_acceptor_ = _client_acceptor; }
-
- /*
- The concurrency() accessor tells us the current concurrency strategy. It actually
- queries the Client_Acceptor for it but by having the accessor in place, we could
- change our implementation without affecting everything that needs to know.
- */
- int concurrency(void);
-
- /*
- Likewise for access to the Thread_Pool that we belong to.
- */
- Thread_Pool * thread_pool(void);
-
-
- Client_Acceptor * client_acceptor_;
-
- /*
- For some reason I didn't create accessor/mutator methods for this. So much for
- consistency....
-
- This variable is used to remember the thread in which we were created: the "creator"
- thread in other words. handle_input() needs to know if it is operating in the
- main reactor thread (which is the one that created us) or if it is operating in
- one of the thread pool threads. More on this when we get to handle_input().
- */
- ACE_thread_t creator_;
+ /* If the Client_Acceptor which created us has chosen a
+ thread-per-connection strategy then our open() method will
+   activate us into a dedicated thread.  The svc() method will then
+ execute in that thread performing some of the functions we used to
+ leave up to the reactor. */
+ int svc (void);
+
+ /* This has nothing at all to do with ACE. I've added this here as
+ a worker function which I will call from handle_input(). That
+     allows me to introduce concurrency in later tutorials with no
+ changes to the worker function. You can think of process() as
+     application-level code and everything else as
+ application-framework code. */
+ int process (char *rdbuf, int rdbuf_len);
+
+ /* We don't really do anything in our destructor but we've declared
+ it to be protected to prevent casual deletion of this object. As
+ I said above, I really would prefer that everyone goes through the
+ destroy() method to get rid of us. */
+ ~Client_Handler (void);
+
+ /* When we get to the definition of Client_Handler we'll see that
+ there are several places where we go back to the Client_Acceptor
+ for information. It is generally a good idea to do that through
+     an accessor rather than using the member variable directly. */
+ Client_Acceptor *client_acceptor (void)
+ {
+ return this->client_acceptor_;
+ }
+
+ /* And since you shouldn't access a member variable directly,
+ neither should you set (mutate) it. Although it might seem silly
+ to do it this way, you'll thank yourself for it later. */
+ void client_acceptor (Client_Acceptor *client_acceptor)
+ {
+      this->client_acceptor_ = client_acceptor;
+ }
+
+ /* The concurrency() accessor tells us the current concurrency
+ strategy. It actually queries the Client_Acceptor for it but by
+ having the accessor in place, we could change our implementation
+ without affecting everything that needs to know. */
+ int concurrency (void);
+
+ /* Likewise for access to the Thread_Pool that we belong to. */
+  Thread_Pool *thread_pool (void);
+
+ Client_Acceptor *client_acceptor_;
+
+ /* For some reason I didn't create accessor/mutator methods for
+ this. So much for consistency....
+
+ This variable is used to remember the thread in which we were
+ created: the "creator" thread in other words. handle_input()
+ needs to know if it is operating in the main reactor thread (which
+ is the one that created us) or if it is operating in one of the
+ thread pool threads. More on this when we get to handle_input(). */
+ ACE_thread_t creator_;
};
-#endif // CLIENT_HANDLER_H
+#endif /* CLIENT_HANDLER_H */
diff --git a/docs/tutorials/007/page07.html b/docs/tutorials/007/page07.html
index d200d598791..6da001738a2 100644
--- a/docs/tutorials/007/page07.html
+++ b/docs/tutorials/007/page07.html
@@ -21,108 +21,95 @@ to make so few changes to the rest of the code.
-
// $Id$
#ifndef THREAD_POOL_H
#define THREAD_POOL_H
-/*
- In order to implement a thread pool, we have to have an object that can create
- a thread. The ACE_Task<> is the basis for doing just such a thing.
- */
+/* In order to implement a thread pool, we have to have an object that
+ can create a thread. The ACE_Task<> is the basis for doing just
+ such a thing. */
#include "ace/Task.h"
#if !defined (ACE_LACKS_PRAGMA_ONCE)
# pragma once
#endif /* ACE_LACKS_PRAGMA_ONCE */
-/*
- We need a forward reference for ACE_Event_Handler so that our enqueue() method
- can accept a pointer to one.
- */
+/* We need a forward reference for ACE_Event_Handler so that our
+ enqueue() method can accept a pointer to one. */
class ACE_Event_Handler;
-/*
- Although we modified the rest of our program to make use of the thread pool
- implementation, if you look closely you'll see that the changes were rather
- minor. The "ACE way" is generally to create a helper object that abstracts
- away the details not relevant to your application. That's what I'm trying
- to do here by creating the Thread_Pool object.
- */
+/* Although we modified the rest of our program to make use of the
+ thread pool implementation, if you look closely you'll see that the
+ changes were rather minor. The "ACE way" is generally to create a
+ helper object that abstracts away the details not relevant to your
+ application. That's what I'm trying to do here by creating the
+ Thread_Pool object. */
class Thread_Pool : public ACE_Task<ACE_MT_SYNCH>
{
public:
-
typedef ACE_Task<ACE_MT_SYNCH> inherited;
- /*
- Provide an enumeration for the default pool size. By doing this, other objects
- can use the value when they want a default.
- */
- enum size_t
- {
- default_pool_size_ = 5
- };
-
- // Basic constructor
- Thread_Pool(void);
-
- /*
- Opening the thread pool causes one or more threads to be activated. When activated,
- they all execute the svc() method declared below.
- */
- int open( int _pool_size = default_pool_size_ );
-
- /*
- Some compilers will complain that our open() above attempts to
- override a virtual function in the baseclass. We have no
- intention of overriding that method but in order to keep the
- compiler quiet we have to add this method as a pass-thru to the
- baseclass method.
- */
- virtual int open(void * _void_data)
- { return inherited::open(_void_data); }
-
- /*
- */
- int close( u_long flags = 0 );
-
- /*
- To use the thread pool, you have to put some unit of work into it. Since we're
- dealing with event handlers (or at least their derivatives), I've chosen to provide
- an enqueue() method that takes a pointer to an ACE_Event_Handler. The handler's
- handle_input() method will be called, so your object has to know when it is being
- called by the thread pool.
- */
- int enqueue( ACE_Event_Handler * _handler );
-
- /*
- Another handy ACE template is ACE_Atomic_Op<>. When parameterized, this allows
- is to have a thread-safe counting object. The typical arithmetic operators are
- all internally thread-safe so that you can share it across threads without worrying
- about any contention issues.
- */
- typedef ACE_Atomic_Op<ACE_Mutex,int> counter_t;
+ /* Provide an enumeration for the default pool size. By doing this,
+ other objects can use the value when they want a default. */
+ enum size_t
+ {
+ default_pool_size_ = 5
+ };
+
+ // Basic constructor
+ Thread_Pool (void);
+
+ /* Opening the thread pool causes one or more threads to be
+ activated. When activated, they all execute the svc() method
+ declared below. */
+  int open (int pool_size = default_pool_size_);
+
+ /* Some compilers will complain that our open() above attempts to
+ override a virtual function in the baseclass. We have no
+ intention of overriding that method but in order to keep the
+ compiler quiet we have to add this method as a pass-thru to the
+ baseclass method. */
+ virtual int open (void *void_data)
+ {
+ return inherited::open (void_data);
+ }
+
+  /* Closing the thread pool shuts down its svc() threads and waits
+     for them to exit. */
+ virtual int close (u_long flags = 0);
+
+ /* To use the thread pool, you have to put some unit of work into
+ it. Since we're dealing with event handlers (or at least their
+ derivatives), I've chosen to provide an enqueue() method that
+ takes a pointer to an ACE_Event_Handler. The handler's
+ handle_input() method will be called, so your object has to know
+ when it is being called by the thread pool. */
+ int enqueue (ACE_Event_Handler *handler);
+
+ /* Another handy ACE template is ACE_Atomic_Op<>. When
+     parameterized, this allows us to have a thread-safe counting
+ object. The typical arithmetic operators are all internally
+ thread-safe so that you can share it across threads without
+ worrying about any contention issues. */
+ typedef ACE_Atomic_Op<ACE_Mutex, int> counter_t;
protected:
- /*
- Our svc() method will dequeue the enqueued event handler objects and invoke the
- handle_input() method on each. Since we're likely running in more than one thread,
- idle threads can take work from the queue while other threads are busy executing
- handle_input() on some object.
- */
- int svc(void);
-
- /*
- We use the atomic op to keep a count of the number of threads in which our svc()
- method is running. This is particularly important when we want to close() it down!
- */
- counter_t active_threads_;
+ /* Our svc() method will dequeue the enqueued event handler objects
+ and invoke the handle_input() method on each. Since we're likely
+ running in more than one thread, idle threads can take work from
+ the queue while other threads are busy executing handle_input() on
+ some object. */
+ int svc (void);
+
+ /* We use the atomic op to keep a count of the number of threads in
+ which our svc() method is running. This is particularly important
+ when we want to close() it down! */
+ counter_t active_threads_;
};
-#endif // THREAD_POOL_H
+#endif /* THREAD_POOL_H */
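If the relationship between enqueue() and svc() is still fuzzy, here is a rough
sketch of the same producer/consumer shape in plain C++11 threads. Everything
here (the WorkPool name, std::function work items standing in for
ACE_Event_Handler pointers, a hand-rolled queue standing in for the
ACE_Message_Queue) is my own stand-in, not ACE API:

```cpp
#include <atomic>
#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

// Rough analogy to Thread_Pool: enqueue() adds work, each worker
// thread plays the role of svc(), and an atomic counter tracks how
// many workers are still running (ACE_Atomic_Op's job above).
class WorkPool
{
public:
  // open() spawns the pool threads, like ACE_Task<>::activate().
  void open (int pool_size)
  {
    for (int i = 0; i < pool_size; ++i)
      threads_.emplace_back (&WorkPool::svc, this);
  }

  // enqueue() hands a unit of work to the pool; Thread_Pool enqueues
  // an ACE_Event_Handler* whose handle_input() the workers invoke.
  void enqueue (std::function<void ()> handler)
  {
    {
      std::lock_guard<std::mutex> guard (lock_);
      queue_.push (std::move (handler));
    }
    condition_.notify_one ();
  }

  // close() signals shutdown and waits for every svc() to return.
  // Queued work is drained before the workers exit.
  void close ()
  {
    {
      std::lock_guard<std::mutex> guard (lock_);
      done_ = true;
    }
    condition_.notify_all ();
    for (auto &t : threads_)
      t.join ();
  }

  int active_threads () const { return active_threads_.load (); }

private:
  // svc(): dequeue handlers and run them until told to stop.
  void svc ()
  {
    ++active_threads_;
    for (;;)
      {
        std::unique_lock<std::mutex> guard (lock_);
        condition_.wait (guard,
                         [this] { return done_ || !queue_.empty (); });
        if (queue_.empty ())
          break;                 // done_ was set and no work is left
        std::function<void ()> work = std::move (queue_.front ());
        queue_.pop ();
        guard.unlock ();
        work ();                 // the handle_input() analogue
      }
    --active_threads_;
  }

  std::mutex lock_;
  std::condition_variable condition_;
  std::queue<std::function<void ()>> queue_;
  std::vector<std::thread> threads_;
  std::atomic<int> active_threads_ {0};
  bool done_ = false;
};
```

The real Thread_Pool gets most of this for free: activate() spawns the
threads, the inherited putq()/getq() replace the hand-rolled queue, and
ACE_Atomic_Op plays the role of std::atomic.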
diff --git a/docs/tutorials/014/page02.html b/docs/tutorials/014/page02.html
index 452f18f49df..72259699d29 100644
--- a/docs/tutorials/014/page02.html
+++ b/docs/tutorials/014/page02.html
@@ -19,7 +19,6 @@ You find pretty soon that anytime you work with ACE_Task<> you
-
// $Id$
// Task.h
@@ -27,8 +26,6 @@ You find pretty soon that anytime you work with ACE_Task<> you
// Tutorial regarding a way to use ACE_Stream.
//
// written by bob mcwhirter (bob@netwrench.com)
-//
-//
#ifndef TASK_H
#define TASK_H
@@ -42,51 +39,45 @@ typedef ACE_Task<ACE_MT_SYNCH> Task_Base;
class Task : public Task_Base
{
-
public:
-
typedef Task_Base inherited;
// This is just good form.
- Task(const char *nameOfTask,
- int numberOfThreads);
- // Initialize our Task with a name,
- // and number of threads to spawn.
+ Task (const char *nameOfTask,
+ int numberOfThreads);
+ // Initialize our Task with a name, and number of threads to spawn.
- virtual ~Task(void);
+ virtual ~Task (void);
- virtual int open(void *arg);
- // This is provided to prevent compiler complaints
- // about hidden virtual functions.
+ virtual int open (void *arg);
+ // This is provided to prevent compiler complaints about hidden
+ // virtual functions.
- virtual int close(u_long flags);
+ virtual int close (u_long flags);
// This closes down the Task and all service threads.
- virtual int put(ACE_Message_Block *message,
- ACE_Time_Value *timeout);
- // This is the interface that ACE_Stream uses to
- // communicate with our Task.
+ virtual int put (ACE_Message_Block *message,
+ ACE_Time_Value *timeout);
+ // This is the interface that ACE_Stream uses to communicate with
+ // our Task.
- virtual int svc(void);
- // This is the actual service loop each of the service
- // threads iterates through.
+ virtual int svc (void);
+ // This is the actual service loop each of the service threads
+ // iterates through.
- const char *nameOfTask(void) const;
+ const char *nameOfTask (void) const;
// Returns the name of this Task.
private:
-
int d_numberOfThreads;
char d_nameOfTask[64];
ACE_Barrier d_barrier;
- // Simple Barrier to make sure all of our service
- // threads have entered their loop before accepting
- // any messages.
+ // Simple Barrier to make sure all of our service threads have
+ // entered their loop before accepting any messages.
};
-
-#endif // TASK_H
+#endif /* TASK_H */
-
// $Id$
// EndTask.h
@@ -56,54 +55,64 @@ Read on...
// All this Task does is release the Message_Block
// and return 0. It's a suitable black-hole.
-
class EndTask : public Task
{
-
public:
-
typedef Task inherited;
- EndTask(const char *nameOfTask) :
- inherited(nameOfTask, 0) {
-
- // when we get open()'d, it with 0 threads
- // since there is actually no processing to do.
+ EndTask (const char *nameOfTask): inherited (nameOfTask, 0)
+ {
+    // when we get open()'d, it is with 0 threads since there is
+    // actually no processing to do.
- cerr << __LINE__ << " " << __FILE__ << endl;
- };
+ ACE_DEBUG ((LM_INFO,
+ "(%P|%t) Line: %d, File: %s\n",
+ __LINE__,
+ __FILE__));
+ }
- virtual int open(void *)
+ virtual int open (void *)
{
- cerr << __LINE__ << " " << __FILE__ << endl;
- return 0;
+ ACE_DEBUG ((LM_INFO,
+ "(%P|%t) Line: %d, File: %s\n",
+ __LINE__,
+ __FILE__));
+ return 0;
}
- virtual int open(void)
+ virtual int open (void)
{
- cerr << __LINE__ << " " << __FILE__ << endl;
- return 0;
+ ACE_DEBUG ((LM_INFO,
+ "(%P|%t) Line: %d, File: %s\n",
+ __LINE__,
+ __FILE__));
+ return 0;
}
- virtual ~EndTask(void) {
- };
-
- virtual int put(ACE_Message_Block *message,
- ACE_Time_Value *timeout) {
-
- cerr << __LINE__ << " " << __FILE__ << endl;
- ACE_UNUSED_ARG(timeout);
+ virtual ~EndTask(void)
+ {
+ }
- // we don't have anything to do, so
- // release() the message.
- ACE_DEBUG ((LM_DEBUG, "(%P|%t) %s EndTask::put() -- releasing Message_Block\n", this->nameOfTask()));
- message->release();
+ virtual int put (ACE_Message_Block *message,
+ ACE_Time_Value *timeout)
+ {
+ ACE_DEBUG ((LM_INFO,
+ "(%P|%t) Line: %d, File: %s\n",
+ __LINE__,
+ __FILE__));
+ ACE_UNUSED_ARG (timeout);
+
+ // we don't have anything to do, so release() the message.
+ ACE_DEBUG ((LM_DEBUG,
+ "(%P|%t) %s EndTask::put() -- releasing Message_Block\n",
+ this->nameOfTask ()));
+ message->release ();
return 0;
- };
+ }
};
-#endif // ENDTASK_H
+#endif /* ENDTASK_H */
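EndTask's job, releasing every block that reaches the tail, matters more than
it looks: a message that nobody release()s is a leak. A hedged sketch of the
"black hole at the tail" idea in standard C++ (Message, Stage, and
run_pipeline are invented names; unique_ptr stands in for ACE_Message_Block's
reference counting):

```cpp
#include <functional>
#include <memory>
#include <string>

// Each stage may transform the message and must forward it; the
// terminal stage simply consumes it, as EndTask::put() does with
// ACE_Message_Block::release().
using Message = std::unique_ptr<std::string>;
using Stage = std::function<int (Message)>;

// Build a two-stage pipeline: `work` transforms, `sink` consumes.
inline int run_pipeline (Message message, int &consumed)
{
  Stage sink = [&consumed] (Message m)
  {
    m.reset ();        // EndTask analogue: release() the block
    ++consumed;
    return 0;          // a suitable black hole
  };
  Stage work = [&sink] (Message m)
  {
    *m += " [processed]";              // some real processing
    return sink (std::move (m));       // forward downstream
  };
  return work (std::move (message));
}
```

Ownership moves with the message: once a stage forwards it downstream, that
stage must not touch it again, which is exactly the contract put() imposes.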
-
// $Id$
#ifndef CLIENT_H
@@ -41,58 +40,58 @@ class ACE_Message_Block;
class Client
{
public:
- // Provide the server information when constructing the
- // object. This could (and probably should) be moved to the
- // open() method.
- Client( u_short _port, const char * _server );
+ // Provide the server information when constructing the
+ // object. This could (and probably should) be moved to the
+ // open() method.
+ Client (u_short port,
+ const char *server);
- // Cleanup...
- ~Client(void);
+ // Cleanup...
+ ~Client (void);
- // Open the connection to the server.
- int open(void);
+ // Open the connection to the server.
+ int open (void);
- // Close the connection to the server. Be sure to do this
- // before you let the Client go out of scope.
- int close(void);
+ // Close the connection to the server. Be sure to do this
+ // before you let the Client go out of scope.
+ int close (void);
- // Put a message to the server. The Client assumes ownership
- // of _message at that point and will release() it when done.
- // Do not use _message after passing it to put().
- int put( ACE_Message_Block * _message );
+ // Put a message to the server. The Client assumes ownership of
+ // <message> at that point and will release() it when done. Do not
+ // use <message> after passing it to put().
+ int put (ACE_Message_Block *message);
- // Get a response from the server. The caller becomes the
- // owner of _response after this call and is responsible for
- // invoking release() when done.
- int get( ACE_Message_Block * & _response );
+ // Get a response from the server. The caller becomes the owner of
+ // <response> after this call and is responsible for invoking
+ // release() when done.
+ int get (ACE_Message_Block *&response);
private:
- // Protocol_Stream hides the protocol conformance details from
- // us.
- Protocol_Stream stream_;
+ // Protocol_Stream hides the protocol conformance details from us.
+ Protocol_Stream stream_;
- // We create a connection on the peer_ and then pass ownership
- // of it to the protocol stream.
- ACE_SOCK_Stream peer_;
+ // We create a connection on the peer_ and then pass ownership of it
+ // to the protocol stream.
+ ACE_SOCK_Stream peer_;
- // Endpoing information saved by the constructor for use by open().
- u_short port_;
- const char * server_;
+  // Endpoint information saved by the constructor for use by open().
+ u_short port_;
+ const char *server_;
- // Accessors for the complex member variables.
+ // Accessors for the complex member variables.
- Protocol_Stream & stream(void)
- {
- return this->stream_;
- }
+ Protocol_Stream &stream (void)
+ {
+ return this->stream_;
+ }
- ACE_SOCK_Stream & peer(void)
- {
- return this->peer_;
- }
+ ACE_SOCK_Stream &peer (void)
+ {
+ return this->peer_;
+ }
};
-#endif // CLIENT_H
+#endif /* CLIENT_H */
-
// $Id$
#ifndef SERVER_H
@@ -40,40 +39,39 @@ that's probably a valid assumption!
/* Anytime I have templates I try to remember to create a typedef for
the parameterized object. It makes for much less typing later!
*/
-typedef ACE_Acceptor < Handler, ACE_SOCK_ACCEPTOR > Acceptor;
+typedef ACE_Acceptor <Handler, ACE_SOCK_ACCEPTOR> Acceptor;
class Server
{
public:
- // Our simple constructor takes no parameters. To make the
- // server a bit more useful, you may want to pass in the
- // TCP/IP port to be used by the acceptor.
- Server(void);
- ~Server(void);
+ // Our simple constructor takes no parameters. To make the
+ // server a bit more useful, you may want to pass in the
+ // TCP/IP port to be used by the acceptor.
+ Server (void);
+ ~Server (void);
- // Open the server for business
- int open(void);
+ // Open the server for business
+ int open (void);
- // Close all server instances by setting the finished_ flag.
- // Actually, the way this class is written, you can only have
- // one instance.
- static int close(void);
+ // Close all server instances by setting the finished_ flag.
+ // Actually, the way this class is written, you can only have
+ // one instance.
+ static int close (void);
- // Run the server's main loop. The use of the gloabl
- // ACE_Reactor by this method is what limits us to one Server
- // instance.
- int run(void);
+  // Run the server's main loop.  The use of the global ACE_Reactor by
+ // this method is what limits us to one Server instance.
+ int run (void);
private:
- // This will accept client connection requests and instantiate
- // a Handler object for each new connection.
- Acceptor acceptor_;
+ // This will accept client connection requests and instantiate a
+ // Handler object for each new connection.
+ Acceptor acceptor_;
- // Our shutdown flag
- static sig_atomic_t finished_;
+ // Our shutdown flag
+ static sig_atomic_t finished_;
};
-#endif // SERVER_H
+#endif /* SERVER_H */
-
// $Id$
#ifndef HANDLER_H
@@ -41,54 +40,50 @@ processing. Again, keep it simple and delegate authority.
/* Just your basic event handler. We use ACE_Svc_Handler<> as a
baseclass so that it can maintain the peer() and other details for
us. We're not going to activate() this object, so we can get away
- with the NULL synch choice.
-*/
-class Handler : public ACE_Svc_Handler < ACE_SOCK_STREAM, ACE_NULL_SYNCH >
+ with the NULL synch choice. */
+class Handler : public ACE_Svc_Handler <ACE_SOCK_STREAM, ACE_NULL_SYNCH>
{
public:
+ Handler (void);
+ ~Handler (void);
- Handler(void);
- ~Handler(void);
-
- // Called by the acceptor when we're created in response to a
- // client connection.
- int open (void *);
+ // Called by the acceptor when we're created in response to a client
+ // connection.
+ int open (void *);
- // Called when it's time for us to be deleted. We take care
- // of removing ourselves from the reactor and shutting down
- // the peer() connectin.
- void destroy (void);
+ // Called when it's time for us to be deleted. We take care of
+ // removing ourselves from the reactor and shutting down the peer()
+  // connection.
+ void destroy (void);
- // Called when it's time for us to go away. There are subtle
- // differences between destroy() and close() so don't try to
- // use either for all cases.
- int close (u_long);
+ // Called when it's time for us to go away. There are subtle
+ // differences between destroy() and close() so don't try to use
+ // either for all cases.
+ int close (u_long);
protected:
+ // Respond to peer() activity.
+ int handle_input (ACE_HANDLE);
- // Respond to peer() activity.
- int handle_input (ACE_HANDLE);
-
- // This will be called when handle_input() returns a failure
- // code. That's our signal that it's time to begin the
- // shutdown process.
- int handle_close(ACE_HANDLE, ACE_Reactor_Mask _mask);
-
+ // This will be called when handle_input() returns a failure code.
+ // That's our signal that it's time to begin the shutdown process.
+ int handle_close (ACE_HANDLE,
+ ACE_Reactor_Mask mask);
private:
- // Like the Client, we have to abide by the protocol
- // requirements. We use a local Protocol_Stream object to
- // take care of those details. For us, I/O then just becomes
- // a matter of interacting with the stream.
- Protocol_Stream stream_;
+ // Like the Client, we have to abide by the protocol requirements.
+ // We use a local Protocol_Stream object to take care of those
+ // details. For us, I/O then just becomes a matter of interacting
+ // with the stream.
+ Protocol_Stream stream_;
- Protocol_Stream & stream(void)
- {
- return this->stream_;
- }
+ Protocol_Stream &stream (void)
+ {
+ return this->stream_;
+ }
};
-#endif // HANDLER_H
+#endif /* HANDLER_H */
+
diff --git a/docs/tutorials/015/page10.html b/docs/tutorials/015/page10.html
index 476dfe3c7a0..5dc3711f0ba 100644
--- a/docs/tutorials/015/page10.html
+++ b/docs/tutorials/015/page10.html
@@ -35,7 +35,6 @@ going on here.
-
// $Id$

#ifndef PROTOCOL_STREAM_H
@@ -63,64 +62,65 @@ class Protocol_Task;
class Protocol_Stream
{
public:
-  Protocol_Stream(void);
-  ~Protocol_Stream(void);
-
-  // Provide the stream with an ACE_SOCK_Stream on which it can
-  // communicate. If _reader is non-null, it will be added as
-  // the reader task just below the stream head so that it can
-  // process data read from the peer.
-  int open( ACE_SOCK_Stream & _peer, Protocol_Task * _reader = 0 );
-
-  // Close the stream. All of the tasks & modules will also be closed.
-  int close(void);
-
-  // putting data onto the stream will pass it through all
-  // protocol levels and send it to the peer.
-  int put( ACE_Message_Block * & _message, ACE_Time_Value *
-           _timeout = 0 );
-
-  // get will cause the Recv task (at the tail of the stream) to
-  // read some data from the peer and pass it upstream. The
-  // message block is then taken from the stream reader task's
-  // message queue.
-  int get( ACE_Message_Block * & _response, ACE_Time_Value *
-           _timeout = 0 );
-
-  // Tell the Recv task to read some data and send it upstream.
-  // The data will pass through the protocol tasks and be queued
-  // into the stream head reader task's message queue. If
-  // you've installed a _reader in open() then that task's
-  // recv() method will see the message and may consume it
-  // instead of passing it to the stream head for queueing.
-  int get(void);
-
-  ACE_SOCK_Stream & peer(void)
-  {
-    return this->peer_;
-  }
+  Protocol_Stream (void);
+  ~Protocol_Stream (void);
+
+  // Provide the stream with an ACE_SOCK_Stream on which it can
+  // communicate. If _reader is non-null, it will be added as the
+  // reader task just below the stream head so that it can process
+  // data read from the peer.
+  int open (ACE_SOCK_Stream &peer,
+            Protocol_Task *reader = 0);
+
+  // Close the stream. All of the tasks & modules will also be
+  // closed.
+  int close (void);
+
+  // putting data onto the stream will pass it through all protocol
+  // levels and send it to the peer.
+  int put (ACE_Message_Block *&message,
+           ACE_Time_Value *timeout = 0);
+
+  // get will cause the Recv task (at the tail of the stream) to read
+  // some data from the peer and pass it upstream. The message block
+  // is then taken from the stream reader task's message queue.
+  int get (ACE_Message_Block *&response,
+           ACE_Time_Value *timeout = 0);
+
+  // Tell the Recv task to read some data and send it upstream. The
+  // data will pass through the protocol tasks and be queued into the
+  // stream head reader task's message queue. If you've installed a
+  // _reader in open() then that task's recv() method will see the
+  // message and may consume it instead of passing it to the stream
+  // head for queueing.
+  int get (void);
+
+  ACE_SOCK_Stream &peer (void)
+  {
+    return this->peer_;
+  }

private:
-  // Our peer connection
-  ACE_SOCK_Stream peer_;
+  // Our peer connection
+  ACE_SOCK_Stream peer_;

-  // The stream managing the various protocol tasks
-  Stream stream_;
+  // The stream managing the various protocol tasks
+  Stream stream_;

-  // A task which is capable of receiving data on a socket.
-  // Note that this is only useful by client-side applications.
-  Recv * recv_;
+  // A task which is capable of receiving data on a socket.
+  // Note that this is only useful by client-side applications.
+  Recv *recv_;

-  Stream & stream(void)
-  {
-    return this->stream_;
-  }
+  Stream &stream (void)
+  {
+    return this->stream_;
+  }

-  // Install the protocol tasks into the stream.
-  int open(void);
+  // Install the protocol tasks into the stream.
+  int open (void);
};

-#endif // PROTOCOL_STREAM_H
+#endif /* PROTOCOL_STREAM_H */
-
// $Id$

#include "Protocol_Stream.h"
@@ -46,14 +45,12 @@ typedef ACE_Thru_Task<ACE_MT_SYNCH> Thru_Task;
/* Do-nothing constructor and destructor */
-Protocol_Stream::Protocol_Stream( void )
+Protocol_Stream::Protocol_Stream (void)
{
-  ;
}

-Protocol_Stream::~Protocol_Stream( void )
+Protocol_Stream::~Protocol_Stream (void)
{
-  ;
}

/* Even opening the stream is rather simple. The important thing to
@@ -61,134 +58,151 @@ typedef ACE_Thru_Task<ACE_MT_SYNCH> Thru_Task;
   at the tail (eg -- most downstream) end of things when you're done.
*/
-int Protocol_Stream::open( ACE_SOCK_Stream & _peer, Protocol_Task * _reader )
+int
+Protocol_Stream::open (ACE_SOCK_Stream &peer,
+                       Protocol_Task *reader)
{
-  // Initialize our peer() to read/write the socket we're given
-  peer_.set_handle( _peer.get_handle() );
-
-  // Construct (and remember) the Recv object so that we can
-  // read from the peer().
-  recv_ = new Recv( peer() );
-
-  // Add the transmit and receive tasks to the head of the
-  // stream. As we add more modules these will get pushed
-  // downstream and end up nearest the tail by the time we're
-  // done.
-  if( stream().push( new Module( "Xmit/Recv", new Xmit( peer() ), recv_ ) ) == -1 )
-  {
-    ACE_ERROR_RETURN ((LM_ERROR, "%p\n", "stream().push( xmit/recv )"), -1);
-  }
-
-  // Add any other protocol tasks to the stream. Each one is
-  // added at the head. The net result is that Xmit/Recv are at
-  // the tail.
-  if( this->open() == -1 )
-  {
-    return(-1);
-  }
-
-  // If a reader task was provided then push that in as the
-  // upstream side of the next-to-head module. Any data read
-  // from the peer() will be sent through here last. Server
-  // applications will typically use this task to do the actual
-  // processing of data.
-  // Note the use of Thru_Task. Since a module must always have
-  // a pair of tasks we use this on the writter side as a no-op.
-  if( _reader )
+  // Initialize our peer() to read/write the socket we're given
+  peer_.set_handle (peer.get_handle ());
+
+  // Construct (and remember) the Recv object so that we can read from
+  // the peer().
+  ACE_NEW_RETURN (recv_,
+                  Recv (peer ()),
+                  -1);
+
+  // Add the transmit and receive tasks to the head of the stream. As
+  // we add more modules these will get pushed downstream and end up
+  // nearest the tail by the time we're done.
+  if (stream ().push (new Module ("Xmit/Recv",
+                                  new Xmit (peer ()),
+                                  recv_)) == -1)
+    ACE_ERROR_RETURN ((LM_ERROR,
+                       "%p\n",
+                       "stream().push(xmit/recv)"),
+                      -1);
+
+  // Add any other protocol tasks to the stream. Each one is added at
+  // the head. The net result is that Xmit/Recv are at the tail.
+  if (this->open () == -1)
+    return -1;
+
+  // If a reader task was provided then push that in as the upstream
+  // side of the next-to-head module. Any data read from the peer()
+  // will be sent through here last. Server applications will
+  // typically use this task to do the actual processing of data.
+  // Note the use of Thru_Task. Since a module must always have a
+  // pair of tasks we use this on the writer side as a no-op.
+  if (reader)
    {
-      if( stream().push( new Module( "Reader", new Thru_Task(), _reader ) ) == -1 )
-      {
-        ACE_ERROR_RETURN ((LM_ERROR, "%p\n", "stream().push( reader )"), -1);
-      }
+      if (stream ().push (new Module ("Reader",
+                                      new Thru_Task (),
+                                      reader)) == -1)
+        ACE_ERROR_RETURN ((LM_ERROR,
+                           "%p\n",
+                           "stream().push(reader)"),
+                          -1);
    }

-  return(0);
+  return 0;
}

/* Add the necessary protocol objects to the stream. The way we're
   pushing things on we will encrypt the data before compressing it.
*/
-int Protocol_Stream::open(void)
+int
+Protocol_Stream::open (void)
{
-#if defined(ENABLE_COMPRESSION)
-  if( stream().push( new Module( "compress", new Compressor(), new Compressor() ) ) == -1 )
-  {
-    ACE_ERROR_RETURN ((LM_ERROR, "%p\n", "stream().push( comprssor )"), -1);
-  }
-#endif // ENABLE_COMPRESSION
+#if defined (ENABLE_COMPRESSION)
+  if (stream ().push (new Module ("compress",
+                                  new Compressor (),
+                                  new Compressor ())) == -1)
+    ACE_ERROR_RETURN ((LM_ERROR,
+                       "%p\n",
+                       "stream().push(compressor)"),
+                      -1);
+#endif /* ENABLE_COMPRESSION */

-#if defined(ENABLE_ENCRYPTION)
-  if( stream().push( new Module( "crypt", new Crypt(), new Crypt() ) ) == -1 )
-  {
-    ACE_ERROR_RETURN ((LM_ERROR, "%p\n", "stream().push( crypt )"), -1);
-  }
-#endif // ENABLE_ENCRYPTION
-  return( 0 );
+#if defined (ENABLE_ENCRYPTION)
+  if (stream ().push (new Module ("crypt",
+                                  new Crypt (),
+                                  new Crypt ())) == -1)
+    ACE_ERROR_RETURN ((LM_ERROR,
+                       "%p\n",
+                       "stream().push(crypt)"),
+                      -1);
+#endif /* ENABLE_ENCRYPTION */
+  return 0;
}

// Closing the Protocol_Stream is as simple as closing the ACE_Stream.
-int Protocol_Stream::close(void)
+int
+Protocol_Stream::close (void)
{
-  return stream().close();
+  return stream ().close ();
}

// Simply pass the data directly to the ACE_Stream.
-int Protocol_Stream::put(ACE_Message_Block * & _message, ACE_Time_Value * _timeout )
+int
+Protocol_Stream::put (ACE_Message_Block *&message,
+                      ACE_Time_Value *timeout)
{
-  return stream().put(_message,_timeout);
+  return stream ().put (message,
+                        timeout);
}

/* Tell the Recv module to read some data from the peer and pass it
   upstream. Servers will typically use this method in a
-  handle_input() method to tell the stream to get a client's request.
-*/
-int Protocol_Stream::get(void)
-{
-  // If there is no Recv module, we're in big trouble!
-  if( ! recv_ )
-  {
-    ACE_ERROR_RETURN ((LM_ERROR, "(%P|%t) No Recv object!\n"), -1);
-  }
-
-  // This tells the Recv module to go to it's peer() and read
-  // some data. Once read, that data will be pushed upstream.
-  // If there is a reader object then it will have a chance to
-  // process the data. If not, the received data will be
-  // available in the message queue of the stream head's reader
-  // object (eg -- stream().head()->reader()->msg_queue()) and
-  // can be read with our other get() method below.
-  if( recv_->get() == -1 )
-  {
-    ACE_ERROR_RETURN ((LM_ERROR, "(%P|%t) Cannot queue read request\n"), -1);
-  }
-
-  // For flexibility I've added an error() method to tell us if
-  // something bad has happened to the Recv object.
-  if( recv_->error() )
-  {
-    ACE_ERROR_RETURN ((LM_ERROR, "(%P|%t) Recv object error!\n"), -1);
-  }
+   handle_input() method to tell the stream to get a client's request. */

-  return(0);
+int
+Protocol_Stream::get (void)
+{
+  // If there is no Recv module, we're in big trouble!
+  if (recv_ == 0)
+    ACE_ERROR_RETURN ((LM_ERROR,
+                       "(%P|%t) No Recv object!\n"),
+                      -1);
+
+  // This tells the Recv module to go to its peer() and read some
+  // data. Once read, that data will be pushed upstream. If there is
+  // a reader object then it will have a chance to process the data.
+  // If not, the received data will be available in the message queue
+  // of the stream head's reader object (eg --
+  // stream().head()->reader()->msg_queue()) and can be read with our
+  // other get() method below.
+  if (recv_->get () == -1)
+    ACE_ERROR_RETURN ((LM_ERROR,
+                       "(%P|%t) Cannot queue read request\n"),
+                      -1);
+
+  // For flexibility I've added an error() method to tell us if
+  // something bad has happened to the Recv object.
+  if (recv_->error ())
+    ACE_ERROR_RETURN ((LM_ERROR,
+                       "(%P|%t) Recv object error!\n"),
+                      -1);
+
+  return 0;
}
-*/ -int Protocol_Stream::get(ACE_Message_Block * & _response, ACE_Time_Value * _timeout ) +/* Take a message block off of the stream head reader's message queue. + If the queue is empty, use get() to read from the peer. This is + most often used by client applications. Servers will generaly + insert a reader that will prevent the data from getting all the way + upstream to the head. */ +int +Protocol_Stream::get (ACE_Message_Block *&response, + ACE_Time_Value *timeout ) { - if( stream().head()->reader()->msg_queue()->is_empty() ) - { - if( this->get() == -1 ) - { - ACE_ERROR_RETURN ((LM_ERROR, "(%P|%t) Cannot get data into the stream.\n"), -1); - } - } + if (stream ().head ()->reader ()->msg_queue ()->is_empty () + && this->get () == -1) + ACE_ERROR_RETURN ((LM_ERROR, + "(%P|%t) Cannot get data into the stream.\n"), + -1); - return stream().head()->reader()->getq(_response,_timeout); + return stream ().head ()->reader ()->getq (response, + timeout); }
-
// $Id$

#ifndef PROTOCOL_TASK_H
@@ -37,62 +36,59 @@ concern in this file is to get everything in the correct order!
class Protocol_Task : public ACE_Task<ACE_MT_SYNCH>
{
public:
+  typedef ACE_Task<ACE_MT_SYNCH> inherited;
-  typedef ACE_Task<ACE_MT_SYNCH> inherited;
-
-  // A choice of concurrency strategies is offered by the
-  // constructor. In most cases it makes sense to set this to
-  // zero and let things proceed serially. You might have a
-  // need, however, for some of your tasks to have their own thread.
-  Protocol_Task( int _thr_count );
+  // A choice of concurrency strategies is offered by the constructor.
+  // In most cases it makes sense to set this to zero and let things
+  // proceed serially. You might have a need, however, for some of
+  // your tasks to have their own thread.
+  Protocol_Task (int thr_count);

-  ~Protocol_Task(void);
+  ~Protocol_Task (void);

-  // open() is invoked when the task is inserted into the stream.
-  virtual int open(void *arg);
+  // open() is invoked when the task is inserted into the stream.
+  virtual int open (void *arg);

-  // close() is invoked when the stream is closed (flags will be
-  // set to '1') and when the svc() method exits (flags will be
-  // '0').
-  virtual int close(u_long flags);
+  // close() is invoked when the stream is closed (flags will be set
+  // to '1') and when the svc() method exits (flags will be '0').
+  virtual int close (u_long flags);

-  // As data travels through the stream, the put() method of
-  // each task is invoked to keep the data moving along.
-  virtual int put(ACE_Message_Block *message,
-                  ACE_Time_Value *timeout);
+  // As data travels through the stream, the put() method of each task
+  // is invoked to keep the data moving along.
+  virtual int put (ACE_Message_Block *message,
+                   ACE_Time_Value *timeout);

-  // If you choose to activate the task then this method will be
-  // doing all of the work.
-  virtual int svc(void);
+  // If you choose to activate the task then this method will be doing
+  // all of the work.
+  virtual int svc (void);

protected:
-  // Called by put() or svc() as necessary to process a block of
-  // data.
-  int process(ACE_Message_Block * message, ACE_Time_Value *timeout);
+  // Called by put() or svc() as necessary to process a block of data.
+  int process (ACE_Message_Block *message,
+               ACE_Time_Value *timeout);

-  // Just let us know if we're active or not.
-  int is_active(void)
-  {
-    return this->thr_count() != 0;
-  }
+  // Just let us know if we're active or not.
+  int is_active (void)
+  {
+    return this->thr_count () != 0;
+  }

-  // Tasks on the writter (downstream) side of the stream
-  // are called upon to send() data that will ultimately go to
-  // the peer.
-  virtual int send(ACE_Message_Block *message,
-                   ACE_Time_Value *timeout);
+  // Tasks on the writer (downstream) side of the stream are called
+  // upon to send() data that will ultimately go to the peer.
+  virtual int send (ACE_Message_Block *message,
+                    ACE_Time_Value *timeout);

-  // Tasks on the reader (upstream) side will be receiving data
-  // that came from the peer.
-  virtual int recv(ACE_Message_Block * message,
-                   ACE_Time_Value *timeout);
+  // Tasks on the reader (upstream) side will be receiving data that
+  // came from the peer.
+  virtual int recv (ACE_Message_Block *message,
+                    ACE_Time_Value *timeout);

private:
-  int desired_thr_count_;
+  int desired_thr_count_;
};

-#endif // PROTOCOL_TASK_H
+#endif /* PROTOCOL_TASK_H */
-Note that close() must decide if it's being called when the stream is
-shutdown or when it's svc() method exits. Since we tell the baseclass
-not to use any threads it's a safe bet that flags will always be
-non-zero. Still, it's good practice to plan for the future by
-checking the value.
-
-Note also that when we send the data we prefix it with the data size.
-This let's our sibling Recv ensure that an entire block is received
-together. This can be very important for compression and encryption
-processes which typically work better with blocks of data instead of
-streams of data.
+The only thing you might want to do is combine it with Recv. Why?
+As you'll realize in a page or two, the Xmit and Recv objects must
+interact if you're going to ensure a safe transit. By having a single
+object it's easier to coordinate and maintain the interaction.
-
// $Id$

#ifndef XMIT_H
@@ -46,38 +39,36 @@ class ACE_SOCK_Stream;
class Xmit : public Protocol_Task
{
public:
+  typedef Protocol_Task inherited;
-  typedef Protocol_Task inherited;
-
-  // We must be given a valid peer when constructed. Without that
-  // we don't know who to send data to.
-  Xmit( ACE_SOCK_Stream & _peer );
-
-  ~Xmit(void);
+  // We must be given a valid peer when constructed. Without that we
+  // don't know who to send data to.
+  Xmit (ACE_SOCK_Stream &peer);
+  ~Xmit (void);

-  // As you know, close() will be called in a couple of ways by the
-  // ACE framework. We use that opportunity to terminate the
-  // connection to the peer.
-  int close(u_long flags);
+  // As you know, close() will be called in a couple of ways by the
+  // ACE framework. We use that opportunity to terminate the
+  // connection to the peer.
+  int close (u_long flags);

protected:
-  ACE_SOCK_Stream & peer(void)
-  {
-    return this->peer_;
-  }
+  ACE_SOCK_Stream &peer (void)
+  {
+    return this->peer_;
+  }

-  // Send the data to the peer. By now it will have been
-  // completely protocol-ized by other tasks in the stream.
-  int send(ACE_Message_Block *message,
-           ACE_Time_Value *timeout);
+  // Send the data to the peer. By now it will have been completely
+  // protocol-ized by other tasks in the stream.
+  int send (ACE_Message_Block *message,
+            ACE_Time_Value *timeout);

private:
-  // A representation of the peer we're talking to.
-  ACE_SOCK_Stream & peer_;
+  // A representation of the peer we're talking to.
+  ACE_SOCK_Stream &peer_;
};

-#endif // XMIT_H
+#endif /* XMIT_H */
-An ACE_Stream is designed to handle downstream traffic very -well. You put() data into it and it flows along towards the tail. -However, there doesn't seem to be a way to put data in such that it -will travel upstream. To get around that, I've added a get() method -to Recv that will trigger a read on the socket. Recv will then put -the data to the next upstream module and we're on our way. As noted -earlier, that data will eventually show up either in the reader -(if installed on the stream open()) or the stream head reader task's -message queue. +Note that close() must decide if it's being called when the stream is +shut down or when its svc() method exits. Since we tell the baseclass +not to use any threads it's a safe bet that flags will always be +non-zero. Still, it's good practice to plan for the future by +checking the value. +
+Note also that when we send the data we prefix it with the data size. +This lets our sibling Recv ensure that an entire block is received +together. This can be very important for compression and encryption +processes, which typically work better with blocks of data instead of +streams of data.
diff --git a/docs/tutorials/015/page16.html b/docs/tutorials/015/page16.html index aa57e8fb2da..e372f6d68bf 100644 --- a/docs/tutorials/015/page16.html +++ b/docs/tutorials/015/page16.html @@ -12,13 +12,20 @@
-The Recv implementation is nearly as simple as Xmit. There's -opportunity for error when we get the message size and we have to -manage the lifetime of the tickler but other than that it's pretty -basic stuff. +Recv is the sibling to Xmit. Again, they could be combined into a +single object if you want. ++An ACE_Stream is designed to handle downstream traffic very +well. You put() data into it and it flows along towards the tail. +However, there doesn't seem to be a way to put data in such that it +will travel upstream. To get around that, I've added a get() method +to Recv that will trigger a read on the socket. Recv will then put +the data to the next upstream module and we're on our way. As noted +earlier, that data will eventually show up either in the reader +(if installed on the stream open()) or the stream head reader task's +message queue.
- // $Id$ #ifndef RECV_H @@ -34,52 +41,49 @@ class ACE_SOCK_Stream; class Recv : public Protocol_Task { public: + typedef Protocol_Task inherited; - typedef Protocol_Task inherited; - - // Give it someone to talk to... - Recv( ACE_SOCK_Stream & _peer ); - - ~Recv(void); + // Give it someone to talk to... + Recv (ACE_SOCK_Stream &peer); + ~Recv (void); - // Trigger a read from the socket - int get(void); + // Trigger a read from the socket + int get (void); - // In some cases it might be easier to check the "state" of the - // Recv object than to rely on return codes filtering back to - // you. - int error(void) - { - return this->error_; - } + // In some cases it might be easier to check the "state" of the Recv + // object than to rely on return codes filtering back to you. + int error (void) + { + return this->error_; + } protected: - ACE_SOCK_Stream & peer(void) - { - return this->peer_; - } + ACE_SOCK_Stream &peer (void) + { + return this->peer_; + } - // The baseclass will trigger this when our get() method is - // called. A message block of the appropriate size is created, - // filled and passed up the stream. - int recv(ACE_Message_Block * message, - ACE_Time_Value *timeout = 0); + // The baseclass will trigger this when our get() method is called. + // A message block of the appropriate size is created, filled and + // passed up the stream. + int recv (ACE_Message_Block *message, + ACE_Time_Value *timeout = 0); private: - // Our endpoint - ACE_SOCK_Stream & peer_; + // Our endpoint + ACE_SOCK_Stream &peer_; - // get() uses a bogus message block to cause the baseclass to - // invoke recv(). To avoid memory thrashing, we create that - // bogus message once and reuse it for the life of Recv. - ACE_Message_Block * tickler_; + // get() uses a bogus message block to cause the baseclass to invoke + // recv(). To avoid memory thrashing, we create that bogus message + // once and reuse it for the life of Recv. 
+ ACE_Message_Block *tickler_; - // Our error flag (duh) - int error_; + // Our error flag (duh) + int error_; }; -#endif // RECV_H +#endif /* RECV_H */
[Tutorial Index] [Continue This Tutorial] diff --git a/docs/tutorials/015/page17.html b/docs/tutorials/015/page17.html index 8d3aaf660d0..de868086604 100644 --- a/docs/tutorials/015/page17.html +++ b/docs/tutorials/015/page17.html @@ -12,8 +12,10 @@
-This and the next three pages present the protocol objects that -provide compression and encryption. If you were hoping to +The Recv implementation is nearly as simple as Xmit. There's +opportunity for error when we get the message size and we have to +manage the lifetime of the tickler, but other than that it's pretty +basic stuff.
diff --git a/docs/tutorials/015/page18.html b/docs/tutorials/015/page18.html index ef67934b597..c19a1559750 100644 --- a/docs/tutorials/015/page18.html +++ b/docs/tutorials/015/page18.html @@ -20,7 +20,6 @@ stuff though and if anyone wants to integrate one of them into the tutorial I'll be glad to take it!
- // $Id$ #ifndef COMPRESSOR_H @@ -35,34 +34,34 @@ class Compressor : public Protocol_Task { public: - typedef Protocol_Task inherited; + typedef Protocol_Task inherited; - // I've given you the option of creating this task derivative - // with a number of threads. In retro-spect that really isn't - // a good idea. Most client/server systems rely on requests - // and responses happening in a predicatable order. Introduce - // a thread pool and message queue and that ordering goes - // right out the window. In other words: Don't ever use the - // constructor parameter! - Compressor( int _thr_count = 0 ); + // I've given you the option of creating this task derivative + // with a number of threads. In retrospect that really isn't + // a good idea. Most client/server systems rely on requests + // and responses happening in a predictable order. Introduce + // a thread pool and message queue and that ordering goes + // right out the window. In other words: Don't ever use the + // constructor parameter! + Compressor (int thr_count = 0); - ~Compressor(void); + ~Compressor (void); protected: - // This is called when the compressor is on the downstream - // side. We'll take the message, compress it and move it - // along to the next module. - int send(ACE_Message_Block *message, - ACE_Time_Value *timeout); + // This is called when the compressor is on the downstream side. + // We'll take the message, compress it and move it along to the next + // module. + int send (ACE_Message_Block *message, + ACE_Time_Value *timeout); - // This one is called on the upstream side. No surprise: we - // decompress the data and send it on up the stream. - int recv(ACE_Message_Block *message, - ACE_Time_Value *timeout); + // This one is called on the upstream side. No surprise: we + // decompress the data and send it on up the stream. + int recv (ACE_Message_Block *message, + ACE_Time_Value *timeout); }; -#endif // COMPRESSOR_H +#endif /* COMPRESSOR_H */
[Tutorial Index] [Continue This Tutorial] diff --git a/docs/tutorials/015/page20.html b/docs/tutorials/015/page20.html index 641399cb4e2..6b8917150e7 100644 --- a/docs/tutorials/015/page20.html +++ b/docs/tutorials/015/page20.html @@ -21,7 +21,6 @@ show you the hooks and entry points and let someone else contribute an encryptor.
- // $Id$ #ifndef CRYPT_H @@ -36,26 +35,26 @@ class Crypt : public Protocol_Task { public: - typedef Protocol_Task inherited; + typedef Protocol_Task inherited; - // Again we have the option of multiple threads and again I - // regret tempting folks to use it. - Crypt( int _thr_count = 0 ); + // Again we have the option of multiple threads and again I + // regret tempting folks to use it. + Crypt (int thr_count = 0); - ~Crypt(void); + ~Crypt (void); protected: - // Moving downstream will encrypt the data - int send(ACE_Message_Block *message, - ACE_Time_Value *timeout); + // Moving downstream will encrypt the data + int send (ACE_Message_Block *message, + ACE_Time_Value *timeout); - // And moving upstream will decrypt it. - int recv(ACE_Message_Block *message, - ACE_Time_Value *timeout); + // And moving upstream will decrypt it. + int recv (ACE_Message_Block *message, + ACE_Time_Value *timeout); }; -#endif // CRYPT_H +#endif /* CRYPT_H */
[Tutorial Index] [Continue This Tutorial] diff --git a/docs/tutorials/016/page02.html b/docs/tutorials/016/page02.html index fdcfc4e8c1b..70e58f57e3f 100644 --- a/docs/tutorials/016/page02.html +++ b/docs/tutorials/016/page02.html @@ -56,8 +56,8 @@ To help out with that, I've created the class below to encapsulate the three elements necessary for the condition to work. I've then added methods for manipulation of the condition variable and waiting for the condition to occur. -- +
+// $Id$ #ifndef CONDITION_H @@ -65,157 +65,153 @@ condition to occur. #include "ace/Synch.h" -/** A wrapper for ACE_Condition<>. - When you're using an ACE_Condition<> you have to have three things: - - Some variable that embodies the condition you're looking for - - A mutex to prevent simultaneous access to that variable from different threads - - An ACE_Condition<> that enables blocking on state changes in the variable - The class I create here will contain those three things. For the - actual condition variable I've chosen an integer. You could - easily turn this clas into a template parameterized on the - condition variable's data type if 'int' isn't what you want. - */ +/** A wrapper for ACE_Condition<>. When you're using an + ACE_Condition<> you have to have three things: - Some variable + that embodies the condition you're looking for - A mutex to + prevent simultaneous access to that variable from different + threads - An ACE_Condition<> that enables blocking on state + changes in the variable The class I create here will contain those + three things. For the actual condition variable I've chosen an + integer. You could easily turn this class into a template + parameterized on the condition variable's data type if 'int' isn't + what you want. */ class Condition { public: - // From here on I'll use value_t instead of 'int' to make any - // future upgrades easier. - typedef int value_t; + // From here on I'll use value_t instead of 'int' to make any + // future upgrades easier. + typedef int value_t; - // Initialize the condition variable - Condition(value_t _value = 0); - ~Condition(void); - - /* I've created a number of arithmetic operators on the class - that pass their operation on to the variable. If you turn - this into a template then some of these may not be - appropriate... 
- For the ones that take a parameter, I've stuck with 'int' - instead of 'value_t' to reinforce the fact that you'll need - a close look at these if you choose to change the 'value_t' - typedef. - */ - - // Increment & decrement - Condition & operator++(void); - Condition & operator--(void); - - // Increase & decrease - Condition & operator+=(int _inc); - Condition & operator-=(int _inc); - - // Just to be complete - Condition & operator*=(int _inc); - Condition & operator/=(int _inc); - Condition & operator%=(int _inc); - - // Set/Reset the condition variable's value - Condition & operator=( value_t _value ); - - /* These four operators perform the actual waiting. For - instance: - - operator!=(int _value) - - is implemented as: - - Guard guard(mutex_) - while( value_ != _value ) - condition_.wait(); - - This is the "typical" use for condition mutexes. Each of - the operators below behaves this way for their respective - comparisions. - - To use one of these in code, you would simply do: - - Condition mycondition; - ... - // Wait until the condition variable has the value 42 - mycondition != 42 - ... - */ - - // As long as the condition variable is NOT EQUAL TO _value, we wait - int operator!=( value_t _value ); - // As long as the condition variable is EXACTLY EQUAL TO _value, we wait - int operator==( value_t _value ); - // As long as the condition variable is LESS THAN OR EQUAL TO _value, we wait - int operator<=( value_t _value ); - // As long as the condition variable is GREATER THAN OR EQUAL TO _value, we wait - int operator>=( value_t _value ); - - // Return the value of the condition variable - operator value_t (void); - - /* In addition to the four ways of waiting above, I've also - create a method that will invoke a function object for each - iteration of the while() loop. - Derive yourself an object from Condition::Compare and - overload operator()(value_t) to take advantage of this. 
Have - the function return non-zero when you consider the condition - to be met. - */ - class Compare - { - public: - virtual int operator() ( value_t _value ) = 0; - }; - - /* Wait on the condition until _compare(value) returns - non-zero. This is a little odd since we're not really testing - equality. Just be sure that _compare(value_) will return - non-zero when you consider the condition to be met. - */ - int operator==( Compare & _compare ); + // Initialize the condition variable + Condition (value_t value = 0); + ~Condition (void); + + /* I've created a number of arithmetic operators on the class that + pass their operation on to the variable. If you turn this into a + template then some of these may not be appropriate... For the + ones that take a parameter, I've stuck with 'int' instead of + 'value_t' to reinforce the fact that you'll need a close look at + these if you choose to change the 'value_t' typedef. */ + + // Increment & decrement + Condition &operator++ (void); + Condition &operator-- (void); + + // Increase & decrease + Condition &operator+= (int inc); + Condition &operator-= (int inc); + + // Just to be complete + Condition &operator*= (int inc); + Condition &operator/= (int inc); + Condition &operator%= (int inc); + + // Set/Reset the condition variable's value + Condition &operator= (value_t value); + + /* These four operators perform the actual waiting. For instance: + + operator!=(int _value) + + is implemented as: + + Guard guard(mutex_) + while( value_ != _value ) + condition_.wait(); + + This is the "typical" use for condition mutexes. Each of the + operators below behaves this way for their respective + comparisons. + + To use one of these in code, you would simply do: + + Condition mycondition; + ... + // Wait until the condition variable has the value 42 + mycondition != 42 + ... 
*/ + + // As long as the condition variable is NOT EQUAL TO <value>, we wait + int operator!= (value_t value); + + // As long as the condition variable is EXACTLY EQUAL TO <value>, we + // wait + int operator== (value_t value); + + // As long as the condition variable is LESS THAN OR EQUAL TO + // <value>, we wait + int operator<= (value_t value); + + // As long as the condition variable is GREATER THAN OR EQUAL TO + // <value>, we wait + int operator>= (value_t value); + + // Return the value of the condition variable + operator value_t (void); + + /* In addition to the four ways of waiting above, I've also created a + method that will invoke a function object for each iteration of + the while() loop. Derive yourself an object from + Condition::Compare and overload operator()(value_t) to take + advantage of this. Have the function return non-zero when you + consider the condition to be met. */ + class Compare + { + public: + virtual int operator() (value_t value) = 0; + }; + + /* Wait on the condition until _compare(value) returns non-zero. + This is a little odd since we're not really testing equality. + Just be sure that _compare(value_) will return non-zero when you + consider the condition to be met. */ + int operator== (Compare & compare); private: - // Prevent copy construction and assignment. - Condition( const Condition & _condition ); - Condition & operator= ( const Condition & _condition ); - - /* Typedefs make things easier to change later. - ACE_Condition_Thread_Mutex is used as a shorthand for - ACE_Condition<ACE_Thread_Mutex> and also because it may - provide optimizations we can use. - */ - typedef ACE_Thread_Mutex mutex_t; - typedef ACE_Condition_Thread_Mutex condition_t; - typedef ACE_Guard<mutex_t> guard_t; - - // The mutex that keeps the data save - mutex_t mutex_; - - // The condition mutex that makes waiting on the condition - // easier. - condition_t * condition_; - - // The acutal variable that embodies the condition we're - // waiting for. 
- value_t value_; - - // Accessors for the two mutexes. - mutex_t & mutex(void) - { - return this->mutex_; - } - - condition_t & condition(void) - { - return *(this->condition_); - } - - // This particular accessor will make things much easier if we - // decide that 'int' isn't the correct datatype for value_. - // Note that we keep this private and force clients of the class - // to use the cast operator to get a copy of the value. - value_t & value(void) - { - return this->value_; - } + // Prevent copy construction and assignment. + Condition (const Condition &condition); + Condition &operator= (const Condition &condition); + + /* Typedefs make things easier to change later. + ACE_Condition_Thread_Mutex is used as a shorthand for + ACE_Condition<ACE_Thread_Mutex> and also because it may provide + optimizations we can use. */ + typedef ACE_Thread_Mutex mutex_t; + typedef ACE_Condition_Thread_Mutex condition_t; + typedef ACE_Guard<mutex_t> guard_t; + + // The mutex that keeps the data safe + mutex_t mutex_; + + // The condition mutex that makes waiting on the condition easier. + condition_t *condition_; + + // The actual variable that embodies the condition we're waiting + // for. + value_t value_; + + // Accessors for the two mutexes. + mutex_t &mutex (void) + { + return this->mutex_; + } + + condition_t &condition (void) + { + return *this->condition_; + } + + // This particular accessor will make things much easier if we + // decide that 'int' isn't the correct datatype for value_. Note + // that we keep this private and force clients of the class to use + // the cast operator to get a copy of the value. + value_t &value (void) + { + return this->value_; + } }; -#endif // CONDITION_H +#endif /* CONDITION_H */
[Tutorial Index] [Continue This Tutorial] diff --git a/docs/tutorials/016/page03.html b/docs/tutorials/016/page03.html index 43e65f5001b..4c4077a415a 100644 --- a/docs/tutorials/016/page03.html +++ b/docs/tutorials/016/page03.html @@ -45,7 +45,7 @@ include the mess I've got below! to clients of the class. It also allows us to use a private method for getting a reference to the value when we need to modify it. */ -Condition::operator value_t (void) +Condition::operator Condition::value_t (void) { // Place a guard around the variable so that it won't change as // we're copying it back to the client. diff --git a/docs/tutorials/016/page04.html b/docs/tutorials/016/page04.html index 75c953961cc..b94fe2e48af 100644 --- a/docs/tutorials/016/page04.html +++ b/docs/tutorials/016/page04.html @@ -17,7 +17,8 @@ derivative that will serve as a baseclass for other objects that test specific functions of the Condition class. Notice how easy it is to integrate a Condition into the application without keeping track of three related member variables. -+
+// $Id$ diff --git a/docs/tutorials/016/page05.html b/docs/tutorials/016/page05.html index c8aeb988a5e..baf3eb23a4a 100644 --- a/docs/tutorials/016/page05.html +++ b/docs/tutorials/016/page05.html @@ -26,5 +26,6 @@ create a more useful class for your application.
- // $Id$ #ifndef BARRIER_H @@ -42,49 +41,48 @@ the Barrier object almost as a "synchronization guard". class Barrier { public: - // Basic constructor and destructor. If you only need to - // synch the start of your threads, you can safely delete your - // Barrier object after invoking done(). Of course, you - // should be careful to only delete the object once! - Barrier(void); - ~Barrier(void); + // Basic constructor and destructor. If you only need to synch the + // start of your threads, you can safely delete your Barrier object + // after invoking done(). Of course, you should be careful to only + // delete the object once! + Barrier (void); + ~Barrier (void); - // Set and get the number of threads that the barrier will - // manage. If you add or remove threads to your application - // at run-time you can use the mutator to reflect that - // change. Note, however, that you can only do that from the - // thread which first created the Barrier. (This is a - // limitation of my Barrier object, not the ACE_Barrier.) - // The optional _wait parameter will cause wait() to be - // invoked if there is already a valid threads value. - int threads( u_int _threads, int _wait = 0); - u_int threads(void); + // Set and get the number of threads that the barrier will manage. + // If you add or remove threads to your application at run-time you + // can use the mutator to reflect that change. Note, however, that + // you can only do that from the thread which first created the + // Barrier. (This is a limitation of my Barrier object, not the + // ACE_Barrier.) The optional _wait parameter will cause wait() to + // be invoked if there is already a valid threads value. + int threads (u_int threads, int wait = 0); + u_int threads (void); - // Wait for all threads to reach the point where this is - // invoked. Because of the snappy way in which ACE_Barrier is - // implemented, you can invoke these back-to-back with no ill-effects. 
- int wait(void); + // Wait for all threads to reach the point where this is invoked. + // Because of the snappy way in which ACE_Barrier is implemented, + // you can invoke these back-to-back with no ill-effects. + int wait (void); - // done() will invoke wait(). Before returning though, it - // will delete the barrier_ pointer below to reclaim some memory. - int done(void); + // done() will invoke wait(). Before returning though, it will + // delete the barrier_ pointer below to reclaim some memory. + int done (void); protected: - // The number of threads we're synching - ACE_Atomic_Op<ACE_Mutex,u_int> threads_; + // The number of threads we're synching + ACE_Atomic_Op<ACE_Mutex,u_int> threads_; - // The ACE_Barrier that does all of the work - ACE_Barrier * barrier_; + // The ACE_Barrier that does all of the work + ACE_Barrier *barrier_; - // The thread which created the Barrier in the first place. - // Only this thread can change the threads_ value. - ACE_thread_t owner_; + // The thread which created the Barrier in the first place. Only + // this thread can change the threads_ value. + ACE_thread_t owner_; - // An internal method that constructs the barrier_ as needed. - int make_barrier( int _wait ); + // An internal method that constructs the barrier_ as needed. + int make_barrier (int wait); }; -#endif // BARRIER_H +#endif /* BARRIER_H */
- // $Id$ #ifndef TEST_T_H @@ -55,49 +54,52 @@ yourself a lot of time! templated class. Generally, there is a non-templated class defined also such as foobar.h that would be included instead of foobar_T.h. */ + template <class MUTEX> class Test_T : public ACE_Task<ACE_MT_SYNCH> { public: - // Allow our derivative to name the class so that we can tell - // the user what's going on as we test the lock. - Test_T( const char * _name ); - - // This will run the entire test. open() will be called to - // activate the task's threads. We then add a number of - // messages to the queue for svc() to process. - int run(void); + // Allow our derivative to name the class so that we can tell the + // user what's going on as we test the lock. + Test_T (const char *name); + + // This will run the entire test. open() will be called to activate + // the task's threads. We then add a number of messages to the + // queue for svc() to process. + int run (void); protected: - // Activate a few threads - int open( void * _arg = 0 ); - // Read some things from the message queue and exercise the - // lock. - int svc( void ); - // Send a message block to svc(). If _message is 0 then send - // a shutdown request (e.g., MB_HANGUP) - int send( ACE_Message_Block * _message = 0 ); - - // The object's name. Typically provided by a derivative. - const char * name_; - // We want to barrier the svc() methods to give all of the - // threads a fair chance - ACE_Barrier barrier_; - - // As each thread enters svc() it will increment this. While - // we have a thread id available to us, I wanted a simple - // value to display in debug messages. - ACE_Atomic_Op<ACE_Mutex,int> thread_num_; - - // Set our mutex type based on the template parameter. We - // then build a guard type based on that type. - typedef MUTEX mutex_t; - typedef ACE_Guard<mutex_t> guard_t; - - // Our mutex. We'll use this in svc() to protect imaginary - // shared resources. 
- mutex_t mutex_; + // Activate a few threads + int open (void *arg = 0); + + // Read some things from the message queue and exercise the lock. + int svc (void); + + // Send a message block to svc(). If _message is 0 then send a + // shutdown request (e.g., MB_HANGUP) + int send (ACE_Message_Block * message = 0); + + // The object's name. Typically provided by a derivative. + const char *name_; + + // We want to barrier the svc() methods to give all of the threads a + // fair chance + ACE_Barrier barrier_; + + // As each thread enters svc() it will increment this. While we + // have a thread id available to us, I wanted a simple value to + // display in debug messages. + ACE_Atomic_Op<ACE_Mutex,int> thread_num_; + + // Set our mutex type based on the template parameter. We then + // build a guard type based on that type. + typedef MUTEX mutex_t; + typedef ACE_Guard<mutex_t> guard_t; + + // Our mutex. We'll use this in svc() to protect imaginary shared + // resources. + mutex_t mutex_; }; /* Although different compilers differ in their details, almost all of @@ -115,7 +117,7 @@ protected: #pragma implementation ("Test_T.cpp") #endif /* ACE_TEMPLATES_REQUIRE_PRAGMA */ -#endif // TEST_T_H +#endif /* TEST_T_H */
- // $Id$ /* This is something new... Since we're included by the header, we @@ -48,175 +47,177 @@ resources that the threads might clobber. creation to make the output more readable. */ template <class MUTEX> -Test_T<MUTEX>::Test_T( const char * _name ) - : ACE_Task<ACE_MT_SYNCH>() - ,name_(_name) - ,barrier_(TEST_THREAD_COUNT) +Test_T<MUTEX>::Test_T (const char *name) + : ACE_Task<ACE_MT_SYNCH>(), + name_ (name), + barrier_ (TEST_THREAD_COUNT) { - ACE_DEBUG ((LM_INFO, "(%P|%t|%T)\tTest_T (%s) created\n", _name )); + ACE_DEBUG ((LM_INFO, + "(%P|%t|%T)\tTest_T (%s) created\n", + name)); } /* Activate the threads and create some test data... */ -template <class MUTEX> -int Test_T<MUTEX>::run(void) +template <class MUTEX> int +Test_T<MUTEX>::run (void) { - // Try to activate the set of threads that will test the mutex - if( this->open() == -1 ) - { - return -1; - } + // Try to activate the set of threads that will test the mutex + if (this->open () == -1) + return -1; - // Create a set of messages. I chose twice the thread count - // so that we can see how they get distributed. - for( int i = 0 ; i < TEST_THREAD_COUNT*2 ; ++i ) + // Create a set of messages. I chose twice the thread count so that + // we can see how they get distributed. + for (int i = 0; i < TEST_THREAD_COUNT*2; ++i) { - // A message block big enough for a simple message. - ACE_Message_Block * message = new ACE_Message_Block(64); - - // Put some text into the message block so that we can - // know what's going on when we get to svc() - sprintf(message->wr_ptr(),"Message Number %d",i); - message->wr_ptr( strlen(message->rd_ptr())+1 ); - - // Send the message to the thread pool - if( this->send(message) == -1 ) - { - break; - } + // A message block big enough for a simple message. 
+ ACE_Message_Block *message; + + ACE_NEW_RETURN (message, + ACE_Message_Block (64), + -1); + + // Put some text into the message block so that we can know + // what's going on when we get to svc() + sprintf (message->wr_ptr (), + "Message Number %d", + i); + message->wr_ptr (ACE_OS::strlen (message->rd_ptr ()) + 1); + + // Send the message to the thread pool + if (this->send (message) == -1) + break; } - // Send a hangup to the thread pool so that we can exit. - if( this->send() == -1 ) - { - return -1; - } + // Send a hangup to the thread pool so that we can exit. + if (this->send () == -1) + return -1; - // Wait for all of the threads to exit and then return to the client. - return this->wait(); + // Wait for all of the threads to exit and then return to the client. + return this->wait (); } /* Send a message to the thread pool */ -template <class MUTEX> -int Test_T<MUTEX>::send( ACE_Message_Block * _message ) +template <class MUTEX> int +Test_T<MUTEX>::send (ACE_Message_Block *message) { - // If no message was provided, create a hangup message. - if( ! _message ) + // If no message was provided, create a hangup message. + if (message == 0) + ACE_NEW_RETURN (message, + ACE_Message_Block (0, + ACE_Message_Block::MB_HANGUP), + -1); + + // Use the duplicate() method when sending the message. For this + // simple application, that may be overkill but it's a good habit. + // duplicate() will increment the reference count so that each user + // of the message can release() it when done. The last user to call + // release() will cause the data to be deleted. + if (this->putq (message->duplicate ()) == -1) { - _message = new - ACE_Message_Block(0,ACE_Message_Block::MB_HANGUP); + // Error? release() the message block and return failure. + message->release (); + return -1; } - // Use the duplicate() method when sending the message. For - // this simple application, that may be overkill but it's a - // good habit. 
duplicate() will increment the reference count - so that each user of the message can release() it when - done. The last user to call release() will cause the data - to be deleted. - if( this->putq(_message->duplicate()) == -1 ) - { - // Error? release() the message block and return failure. - _message->release(); - return -1; - } + // release() the data to prevent memory leaks. + message->release(); - // release() the data to prevent memory leaks. - _message->release(); - - return 0; + return 0; } /* A farily typical open(). Just activate the set of threads and return. */ -template <class MUTEX> -int Test_T<MUTEX>::open( void * _arg ) +template <class MUTEX> int +Test_T<MUTEX>::open (void *arg) { - ACE_UNUSED_ARG(_arg); - return this->activate(THR_NEW_LWP, TEST_THREAD_COUNT); + ACE_UNUSED_ARG (arg); + return this->activate (THR_NEW_LWP, + TEST_THREAD_COUNT); } /* svc() is also fairly typical. The new part is the use of the guard to simulate protection of shared resources. */ -template <class MUTEX> -int Test_T<MUTEX>::svc(void) +template <class MUTEX> int +Test_T<MUTEX>::svc (void) { - // Keep a simple thread identifier. We could always use the - // thread id but this is a nice, simple number. - int my_number = ++thread_num_; - - ACE_DEBUG ((LM_INFO, "%d (%P|%t|%T)\tTest_T::svc() Entry\n", - my_number)); - - // Wait for all of threads to get started so that they all - // have a fair shot at the message queue. Comment this out - // and see how the behaviour changes. Does it surprise you? - barrier_.wait(); - - ACE_Message_Block * message; - int mcount = 0; - - // This would usually be an almost-infinite loop. Instead, - // I've governed it so that no single thread can get more than - // "thread count" number of messages. You'll see that with - // ACE_Mutex, this is just about the only way to keep the - // first thread from getting all the action. 
Ths is obviously - // just for sake of the test since you don't want your - // real-world app to exit after a fixed number of messages! - while( mcount < TEST_THREAD_COUNT ) + // Keep a simple thread identifier. We could always use the + // thread id but this is a nice, simple number. + int my_number = ++thread_num_; + + ACE_DEBUG ((LM_INFO, + "%d (%P|%t|%T)\tTest_T::svc() Entry\n", + my_number)); + + // Wait for all of threads to get started so that they all have a + // fair shot at the message queue. Comment this out and see how the + // behaviour changes. Does it surprise you? + barrier_.wait (); + + ACE_Message_Block *message; + int mcount = 0; + + // This would usually be an almost-infinite loop. Instead, I've + // governed it so that no single thread can get more than "thread + // count" number of messages. You'll see that with ACE_Mutex, this + // is just about the only way to keep the first thread from getting + // all the action. Ths is obviously just for sake of the test since + // you don't want your real-world app to exit after a fixed number + // of messages! + while (mcount < TEST_THREAD_COUNT) { - // Get a message. Since the message queue is already - // thread-safe we don't have to guard it. In fact, moving - // the guard up above getq() will decrease your - // parallelization. - if( getq(message) == -1 ) - { - break; - } - - // Now we pretend that there are shared resources required - // to process the data. We grab the mutex through the - // guard and "do work". In a real application, you'll - // want to keep these critical sections as small as - // possible since they will reduce the usefulness of - // multi-threading. - guard_t guard(mutex_); - - // Increase our message count for the debug output and the - // governor. - ++mcount; - - // Check for a hangup request... - // Notice the use of release() again to prevent leaks - if( message->msg_type() == ACE_Message_Block::MB_HANGUP ) + // Get a message. 
Since the message queue is already + // thread-safe we don't have to guard it. In fact, moving the + // guard up above getq() will decrease your parallelization. + if (getq (message) == -1) + break; + + // Now we pretend that there are shared resources required to + // process the data. We grab the mutex through the guard and + // "do work". In a real application, you'll want to keep these + // critical sections as small as possible since they will reduce + // the usefulness of multi-threading. + guard_t guard (mutex_); + + // Increase our message count for the debug output and the + // governor. + ++mcount; + + // Check for a hangup request... Notice the use of release() + // again to prevent leaks + if (message->msg_type () == ACE_Message_Block::MB_HANGUP) { - message->release(); - break; + message->release (); + break; } - // Display the message so that we can see if things are - // working the way we want. - ACE_DEBUG ((LM_INFO, "%d (%P|%t|%T)\tTest_T::svc() received message #%d (%s)\n", - my_number,mcount,message->rd_ptr())); + // Display the message so that we can see if things are working + // the way we want. + ACE_DEBUG ((LM_INFO, + "%d (%P|%t|%T)\tTest_T::svc() received message #%d (%s)\n", + my_number, + mcount, + message->rd_ptr ())); - // Pretend that the work takes some time to complete. - // Remember, we're holding that lock during this time! - ACE_OS::sleep(1); + // Pretend that the work takes some time to complete. Remember, + // we're holding that lock during this time! + ACE_OS::sleep (1); - // No leaks... - message->release(); + // No leaks... + message->release (); } - // Send a hangup to the other threads in the pool. If we don't - // do this then wait() will never exit since all of the other - // threads are still blocked on getq(). - this->send(); + // Send a hangup to the other threads in the pool. If we don't do + // this then wait() will never exit since all of the other threads + // are still blocked on getq(). 
+ this->send (); - return(0); + return 0; }; -#endif // TEST_T_C +#endif /* TEST_T_C */
Token_i.h
- +// $Id$ #ifndef TOKEN_I_H @@ -36,16 +35,13 @@ retyping and certainly much less chance of error! class Token : public Test_T<ACE_Token> { public: - Token(void) - : Test_T<ACE_Token>("Token") - {} + Token (void): Test_T<ACE_Token> ("Token") {} }; -#endif // TOKEN_I_H +#endif /* TOKEN_I_H */-Mutex_i.h
- +// $Id$ #ifndef MUTEX_I_H @@ -59,12 +55,10 @@ public: class Mutex : public Test_T<ACE_Mutex> { public: - Mutex(void) - : Test_T<ACE_Mutex>("Mutex") - {} + Mutex (void) : Test_T<ACE_Mutex> ("Mutex") {} }; -#endif // MUTEX_I_H +#endif /* MUTEX_I_H */
[Tutorial Index] [Continue This Tutorial] diff --git a/docs/tutorials/019/page01.html b/docs/tutorials/019/page01.html index c4ab2a7d300..2d74f78c874 100644 --- a/docs/tutorials/019/page01.html +++ b/docs/tutorials/019/page01.html @@ -34,5 +34,6 @@ myself. This tutorial and the next are very simple-minded and primitive. Anyone who wants to provide more realistic replacements is encouraged to drop me a note - (jcej@lads.com).
+ (jcej@lads.com). +
[Tutorial Index] [Continue This Tutorial] diff --git a/docs/tutorials/019/page02.html b/docs/tutorials/019/page02.html index d6a42dc8aef..722797e6ba4 100644 --- a/docs/tutorials/019/page02.html +++ b/docs/tutorials/019/page02.html @@ -21,7 +21,9 @@ can be created external to your application and can persist beyond its lifetime. In fact, you can use shared memory to create a layer of persistence between application instances (at least, until the machine comes down.) -+
++// $Id$ /* The client and server both need to know the shared memory key and @@ -30,98 +32,123 @@ machine comes down.) */ #include "shmem.h" -int main (int, char *[]) +#if defined (ACE_LACKS_SYSV_SHMEM) +int +main (int, char *[]) { - /* - You can use the ACE_Malloc template to create memory pools - from various shared memory strategies. It's really cool. - We're not going to use it. - - Instead, I want to get to the roots of it all and directly - use ACE_Shared_Memory_SV. Like many ACE objects, this is a - wrapper around OS services. - - With this constructor we create a shared memory area to - use. The ACE_CREATE flag will cause it to be created if it - doesn't already exist. The SHM_KEY value (from shmem.h) - uniquely identifies the segment and allows other apps to - attach to the same segment. Execute 'ipcs -m' before and - after starting this app to see that the segment is created. - (I can't for the life of me correlate the SHM_KEY value back - to the key/id reported by ipcs though.) - */ - ACE_Shared_Memory_SV shm_server (SHM_KEY, SHMSZ, - ACE_Shared_Memory_SV::ACE_CREATE); - - /* - The constructor created the segment for us but we still need - to map the segment into our address space. (Note that you - can pass a value to malloc() but it will be silently - igored.) The void* (cast to char*) that is returned will - point to the beginning of the shared segment. - */ - char *shm = (char *) shm_server.malloc (); - - /* - This second pointer will be used to walk through the block - of memory... - */ - char *s = shm; - - /* - Out of curiosity, I added this output message. The tests - I've done so far show me the same address for client and - server. What does your OS tell you? - */ - ACE_DEBUG ((LM_INFO, "(%P|%t) Shared Memory is at 0x%x\n", - shm )); - - /* - At this point, our application can use the pointer just like - any other given to us by new or malloc. For our purposes, - we'll copy in the alpabet as a null-terminated string. 
- */ - for (char c = 'a'; c <= 'z'; c++) - *s++ = c; - - *s = '\0'; - - /* - Using a simple not-too-busy loop, we'll wait for the client - (or anyone else) to change the first byte in the shared area - to a '*' character. This is where you would rather use - semaphores or some similar "resource light" approach. - */ - while (*shm != '*') - ACE_OS::sleep (1); - - /* - Let's see what the client did to the segment... - */ - for (char *s = shm; *s != '\0'; s++) - { - putchar (*s); - } - putchar ('\n'); - - /* - If you're done with the segment and ready for it to be - removed from the system, use the remove() method. Once the - program exits, do 'ipcs -m' again and you'll see that the - segment is gone. If you just want to terminate your use of - the segment but leave it around for other apps, use the - close() method instead. - - The free() method may be tempting but it doesn't actually do - anything. If your app is *really* done with the shared - memory then use either close() or remove(). - */ - if (shm_server.remove () < 0) - ACE_ERROR ((LM_ERROR, "%p\n", "remove")); - - return 0; + ACE_ERROR_RETURN ((LM_ERROR, + "System V Shared Memory not available on this platform\n"), + 100); +} +#else // ACE_LACKS_SYSV_SHMEM +int +main (int, char *argv[]) +{ + /* + You can use the ACE_Malloc template to create memory pools + from various shared memory strategies. It's really cool. + We're not going to use it. + + Instead, I want to get to the roots of it all and directly + use ACE_Shared_Memory_SV. Like many ACE objects, this is a + wrapper around OS services. + + With this constructor we create a shared memory area to + use. The ACE_CREATE flag will cause it to be created if it + doesn't already exist. The SHM_KEY value (from shmem.h) + uniquely identifies the segment and allows other apps to + attach to the same segment. Execute 'ipcs -m' before and + after starting this app to see that the segment is created. 
+ (I can't for the life of me correlate the SHM_KEY value back + to the key/id reported by ipcs though.) + */ + ACE_Shared_Memory_SV shm_server (SHM_KEY, SHMSZ, + ACE_Shared_Memory_SV::ACE_CREATE); + + /* + The constructor created the segment for us but we still need + to map the segment into our address space. (Note that you + can pass a value to malloc() but it will be silently + ignored.) The void* (cast to char*) that is returned will + point to the beginning of the shared segment. + */ + char *shm = (char *) shm_server.malloc (); + + /* + Since we're asking to create the segment, we will fail if it + already exists. We could fall back and simply attach to it + like the client but I'd rather not assume it was a previous + instance of this app that left the segment around. + */ + if (shm == 0) + ACE_ERROR_RETURN ((LM_ERROR, + "%p\n\t(%P|%t) Cannot create shared memory segment.\n" + "\tUse 'ipcs' to see if it already exists\n", + argv[0]), + 100); + + /* + This second pointer will be used to walk through the block + of memory... + */ + char *s = shm; + + /* + Out of curiosity, I added this output message. The tests + I've done so far show me the same address for client and + server. What does your OS tell you? + */ + ACE_DEBUG ((LM_INFO, + "(%P|%t) Shared Memory is at 0x%x\n", + shm )); + + /* + At this point, our application can use the pointer just like + any other given to us by new or malloc. For our purposes, + we'll copy in the alphabet as a null-terminated string. + */ + for (char c = 'a'; c <= 'z'; c++) + *s++ = c; + + *s = '\0'; + + /* + Using a simple not-too-busy loop, we'll wait for the client + (or anyone else) to change the first byte in the shared area + to a '*' character. This is where you would rather use + semaphores or some similar "resource light" approach. + */ + while (*shm != '*') + ACE_OS::sleep (1); + + /* + Let's see what the client did to the segment... 
+ */ + for (s = shm; *s != '\0'; s++) + putchar (*s); + + putchar ('\n'); + + /* + If you're done with the segment and ready for it to be + removed from the system, use the remove() method. Once the + program exits, do 'ipcs -m' again and you'll see that the + segment is gone. If you just want to terminate your use of + the segment but leave it around for other apps, use the + close() method instead. + + The free() method may be tempting but it doesn't actually do + anything. If your app is *really* done with the shared + memory then use either close() or remove(). + */ + if (shm_server.remove () < 0) + ACE_ERROR ((LM_ERROR, + "%p\n", + "remove")); + return 0; } +#endif /* ACE_LACKS_SYSV_SHMEM */
[Tutorial Index] [Continue This Tutorial] diff --git a/docs/tutorials/019/page03.html b/docs/tutorials/019/page03.html index 76a9c48229e..e5202514c21 100644 --- a/docs/tutorials/019/page03.html +++ b/docs/tutorials/019/page03.html @@ -17,67 +17,80 @@ CREATE flag with no ill effects but note the use of close() instead of remove(). Picking the correct detachment method is rather important!
+// $Id$ // Again, the common stuff #include "shmem.h" -int main (int, char *[]) +#if defined(ACE_LACKS_SYSV_SHMEM) +int +main (int, char *[]) { - /* - Attach ourselves to the shared memory segment. - */ - ACE_Shared_Memory_SV shm_client (SHM_KEY, SHMSZ); + ACE_ERROR_RETURN ((LM_ERROR, + "System V Shared Memory not available on this platform\n"), + 100); +} +#else // ACE_LACKS_SYSV_SHMEM +int +main (int, char *[]) +{ + /* + Attach ourselves to the shared memory segment. + */ + ACE_Shared_Memory_SV shm_client (SHM_KEY, SHMSZ); - /* - Get our reference to the segment... - */ - char *shm = (char *) shm_client.malloc (); + /* + Get our reference to the segment... + */ + char *shm = (char *) shm_client.malloc (); - /* - If the segment identified by SHM_KEY didn't exist then we'll - get back a 0 from malloc(). You should do this check even - if you include the CREATE flag 'cause you never know when it - might fail. - */ - if( ! shm ) - { - ACE_ERROR_RETURN ((LM_ERROR,"(%P|%t) Could not get the segment!\n"),100); - } + /* + If the segment identified by SHM_KEY didn't exist then we'll + get back a 0 from malloc(). You should do this check even + if you include the CREATE flag 'cause you never know when it + might fail. + */ + if (shm == 0) + ACE_ERROR_RETURN ((LM_ERROR, + "(%P|%t) Could not get the segment!\n"), + 100); - /* - Does this match what your server said? - */ - ACE_DEBUG ((LM_INFO, "(%P|%t) Shared Memory is at 0x%x\n", - shm )); + /* + Does this match what your server said? + */ + ACE_DEBUG ((LM_INFO, + "(%P|%t) Shared Memory is at 0x%x\n", + shm )); - /* - Show the shared data to the user and convert it all to - uppper-case along the way. - */ - for (char *s = shm; *s != '\0'; s++) + /* + Show the shared data to the user and convert it all to + upper-case along the way. 
+ */ + for (char *s = shm; *s != '\0'; s++) { - putchar (*s); - *s = toupper(*s); + putchar (*s); + *s = toupper(*s); } - putchar ('\n'); + putchar ('\n'); - /* - Flag the server that we're done. - */ - *shm = '*'; + /* + Flag the server that we're done. + */ + *shm = '*'; - /* - Here, we use close() instead of remove(). Remember, that - will just remove our attachment to the segment. Look - closely at the 'nattch' column of the ipcs output & you'll - see that this decrements it by one. - */ - shm_client.close(); + /* + Here, we use close() instead of remove(). Remember, that + will just remove our attachment to the segment. Look + closely at the 'nattch' column of the ipcs output & you'll + see that this decrements it by one. + */ + shm_client.close(); - return 0; + return 0; } +#endif /* ACE_LACKS_SYSV_SHMEM */
[Tutorial Index] [Continue This Tutorial] diff --git a/docs/tutorials/019/page04.html b/docs/tutorials/019/page04.html index 635659e6204..79f00aefbb7 100644 --- a/docs/tutorials/019/page04.html +++ b/docs/tutorials/019/page04.html @@ -29,86 +29,122 @@ That's not to say you shouldn't try... Just try carefully and test a lot!
server2.cpp
+// $Id$ #include "shmem.h" -int +#if defined (ACE_LACKS_SYSV_SHMEM) +int main (int, char *[]) { - // Be sure the segment is sized to hold our object. - ACE_Shared_Memory_SV shm_server (SHM_KEY, sizeof(SharedData), - ACE_Shared_Memory_SV::ACE_CREATE); - - char *shm = (char *) shm_server.malloc (); - - ACE_DEBUG ((LM_INFO, "(%P|%t) Shared Memory is at 0x%x\n", - shm )); - - /* - Use the placement new syntax to stuff the object into the - correct location. I think they generally reserve this for - the advanced class... - */ - SharedData * sd = new(shm) SharedData; - - // Use the set() method to put some data into the object - sd->set(); - - // Set the 'available' flag to zero so that we can wait on it - sd->available(0); - - /* - Another cheesy busy loop while we wait for the object to - become available. The cool way would be to hide a semaphore - or two behind this method call & eliminate the sleep. - */ - while ( ! sd->available() ) - ACE_OS::sleep (1); - - // Show the user what's in the segment - sd->show(); - - // All done. - if (shm_server.remove () < 0) - ACE_ERROR ((LM_ERROR, "%p\n", "remove")); - - return 0; + ACE_ERROR_RETURN ((LM_ERROR, + "System V Shared Memory not available on this platform\n"), + 100); +} +#else // ACE_LACKS_SYSV_SHMEM +int +main (int, char *argv[]) +{ + // Be sure the segment is sized to hold our object. + ACE_Shared_Memory_SV shm_server (SHM_KEY, + sizeof (SharedData), + ACE_Shared_Memory_SV::ACE_CREATE); + char *shm = (char *) shm_server.malloc (); + + if (shm == 0) + ACE_ERROR_RETURN ((LM_ERROR, + "%p\n\t(%P|%t) Cannot create shared memory segment.\n" + "\tUse 'ipcs' to see if it already exists\n", + argv[0]), + 100); + + ACE_DEBUG ((LM_INFO, + "(%P|%t) Shared Memory is at 0x%x\n", + shm )); + + /* + Use the placement new syntax to stuff the object into the + correct location. I think they generally reserve this for + the advanced class... 
+ */ + SharedData *sd = new (shm) SharedData; + + // Use the set() method to put some data into the object + sd->set (); + + // Set the 'available' flag to zero so that we can wait on it + sd->available (0); + + /* + Another cheesy busy loop while we wait for the object to + become available. The cool way would be to hide a semaphore + or two behind this method call & eliminate the sleep. + */ + while (sd->available () == 0) + ACE_OS::sleep (1); + + // Show the user what's in the segment + sd->show (); + + // All done. + if (shm_server.remove () < 0) + ACE_ERROR ((LM_ERROR, + "%p\n", + "remove")); + return 0; } +#endif /* ACE_LACKS_SYSV_SHMEM */client2.cpp
+// $Id$ #include "shmem.h" -int main (int, char *[]) +#if defined(ACE_LACKS_SYSV_SHMEM) +int +main (int, char *[]) { - ACE_Shared_Memory_SV shm_client (SHM_KEY, sizeof(SharedData)); + ACE_ERROR_RETURN ((LM_ERROR, + "System V Shared Memory not available on this platform\n"), + 100); +} +#else // ACE_LACKS_SYSV_SHMEM +int +main (int, char *[]) +{ + ACE_Shared_Memory_SV shm_client (SHM_KEY, + sizeof (SharedData)); - char *shm = (char *) shm_client.malloc (); - - ACE_DEBUG ((LM_INFO, "(%P|%t) Shared Memory is at 0x%x\n", - shm )); - - /* - More placement new. The constructor parameter prevents - clobbering what the server may have written with it's show() - method. - */ - SharedData * sd = new(shm) SharedData(0); - - // Show it - sd->show(); - // Change it - sd->set(); - // Advertise it - sd->available(1); - - shm_client.close(); + char *shm = (char *) shm_client.malloc (); + + ACE_DEBUG ((LM_INFO, + "(%P|%t) Shared Memory is at 0x%x\n", + shm)); + + /* + More placement new. The constructor parameter prevents + clobbering what the server may have written with its show() + method. + */ + SharedData *sd = new (shm) SharedData (0); + + // Show it + sd->show (); + + // Change it + sd->set (); + + // Advertise it + sd->available (1); + + shm_client.close (); - return 0; + return 0; } +#endif /* ACE_LACKS_SYSV_SHMEM */
[Tutorial Index] [Continue This Tutorial] diff --git a/docs/tutorials/019/page05.html b/docs/tutorials/019/page05.html index b04217a9551..7611341ac98 100644 --- a/docs/tutorials/019/page05.html +++ b/docs/tutorials/019/page05.html @@ -18,6 +18,7 @@
shmem.h
+// $Id$ #ifndef SHMEM_H #define SHMEM_H @@ -39,52 +40,59 @@ class SharedData { public: - // Construct the object and optionally initialize buf_. - SharedData(int _initialize = 1); - - // Put some data into buf_ - void set(void); - // Show the data in buf_ - void show(void); - // What is the value of available_ - int available(void); - // Set the value of available_ - void available(int _available); + // Construct the object and optionally initialize buf_. + SharedData (int initialized = 1); + + // Put some data into buf_ + void set (void); + + // Show the data in buf_ + void show (void); + + // What is the value of available_ + int available (void); + + // Set the value of available_ + void available (int not_in_use); protected: - // Big enough for a simple message - char buf_[128]; - // A cheap mutex - int available_; + // Big enough for a simple message + char buf_[128]; + // A cheap mutex + int available_; }; -#endif // SHMEM_H +#endif /* SHMEM_H */shmem.cpp
+// $Id$ #include "shmem.h" +#if ! defined (ACE_LACKS_SYSV_SHMEM) + /* Set the available_ flag to zero & optionally initialize the buf_ area. */ -SharedData::SharedData(int _initialize) - : available_(0) + +SharedData::SharedData (int initialize) + : available_ (0) { - if( _initialize ) - { - ACE_OS::sprintf(buf_,"UNSET\n"); - } + if (initialize) + ACE_OS::sprintf (buf_, "UNSET\n"); } /* Write the process ID into the buffer. This will prove to us that the data really is shared between the client and server. */ -void SharedData::set(void) +void SharedData::set (void) { - ACE_OS::sprintf(buf_,"My PID is (%d)\n",ACE_OS::getpid()); + ACE_OS::sprintf (buf_, + "My PID is (%d)\n", + ACE_OS::getpid ()); } /* @@ -92,21 +100,24 @@ void SharedData::set(void) */ void SharedData::show(void) { - ACE_DEBUG ((LM_INFO, "(%P|%t) Shared Data text is (%s)\n", - buf_ )); + ACE_DEBUG ((LM_INFO, + "(%P|%t) Shared Data text is (%s)\n", + buf_)); } // Show flag int SharedData::available(void) { - return available_; + return available_; } // Set flag -void SharedData::available(int _available) +void SharedData::available(int a) { - available_ = _available; + available_ = a; } + +#endif /* ACE_LACKS_SYSV_SHMEM */
[Tutorial Index] [Continue This Tutorial] diff --git a/docs/tutorials/019/page06.html b/docs/tutorials/019/page06.html index 65b8c7ac7ca..3951e6cfd14 100644 --- a/docs/tutorials/019/page06.html +++ b/docs/tutorials/019/page06.html @@ -23,6 +23,5 @@
+// $Id$ + #include "mmap.h" int @@ -50,7 +52,7 @@ main (int, char *[]) while (*shm != '*') ACE_OS::sleep (1); - for (char *s = shm; *s != '\0'; s++) + for (s = shm; *s != '\0'; s++) { putchar (*s); } diff --git a/docs/tutorials/020/page03.html b/docs/tutorials/020/page03.html index d7b41c8e51e..9ff47b04670 100644 --- a/docs/tutorials/020/page03.html +++ b/docs/tutorials/020/page03.html @@ -17,6 +17,8 @@ There's no important difference between this and the SV client. Is
+// $Id$ + #include "mmap.h" int main (int, char *[]) @@ -45,7 +47,6 @@ int main (int, char *[]) return 0; } -
[Tutorial Index] [Continue This Tutorial] diff --git a/docs/tutorials/020/page04.html b/docs/tutorials/020/page04.html index 037b820b63e..93155c27c49 100644 --- a/docs/tutorials/020/page04.html +++ b/docs/tutorials/020/page04.html @@ -24,6 +24,8 @@ Imagine if you had an object that contained an image & then you mappedserver2.cpp
+// $Id$ + #include "mmap.h" int @@ -56,6 +58,8 @@ main (int, char *[])client2.cpp
+// $Id$ + #include "mmap.h" int main (int, char *[]) diff --git a/docs/tutorials/020/page05.html b/docs/tutorials/020/page05.html index 04016a34532..0d85cc09afa 100644 --- a/docs/tutorials/020/page05.html +++ b/docs/tutorials/020/page05.html @@ -17,6 +17,7 @@ The mmap.h where we define stuff that needs to be shared between the
mmap.h
+// $Id$ #ifndef MMAP_H #define MMAP_H @@ -41,23 +42,25 @@ The mmap.h where we define stuff that needs to be shared between the class SharedData { public: - SharedData(int _initialize = 1); + SharedData (int initialize = 1); - void set(void); - void show(void); - int available(void); - void available(int _available); + void set (void); + void show (void); + int available (void); + void available (int not_in_use); protected: - char buf_[128]; - int available_; + char buf_[128]; + int available_; }; -#endif // MMAP_H +#endif /* MMAP_H */mmap.cpp
+// $Id$ + #include "mmap.h" SharedData::SharedData(int _initialize) diff --git a/docs/tutorials/021/page02.html b/docs/tutorials/021/page02.html index 0c1a4014b61..2aac4677274 100644 --- a/docs/tutorials/021/page02.html +++ b/docs/tutorials/021/page02.html @@ -19,7 +19,8 @@
+
+// $Id$ @@ -29,126 +30,133 @@ */ #include "mpool.h" -int main (int, char *[]) +#if defined(ACE_LACKS_SYSV_SHMEM) +int +main (int, char *[]) { - /* - Construction of an Allocator will create the memory pool and - provide it with a name. The Constants class is also - declared in mpool.h to keep server and client on the same - page. The name is used to generate a unique semaphore which - prevents simultaneous access to the pools housekeeping - information. (Note that you still have to provide your own - synch mechanisms for the data *you* put in the pool.) - */ - Allocator allocator(Constants::PoolName); - - /* - The Allocator class provides the pool() member so that you - have access to the actual memory pool. A more robust - implementation would behave more as a bridge class but this - is good enough for what we're doing here. - Once you have a reference to the pool, the malloc() method - can be used to get some bytes. If successful, shm will - point to the data. Otherwise, it will be zero. - */ - char *shm = (char *) allocator.pool().malloc (27); - - ACE_ASSERT( shm != 0 ); - - /// FYI - ACE_DEBUG ((LM_INFO, "Shared memory is at 0x%x\n", shm )); - - /* - Something that we can do with a memory pool is map a name to - a region provided by malloc. By doing this, we can - communicate that name to the client as a rendezvous - location. Again, a member of Constants is used to keep the - client and server coordinated. - */ - if( allocator.pool().bind(Constants::RegionName,shm) == -1 ) - { - ACE_ERROR_RETURN ((LM_ERROR, "Cannot bind the name '%s' to the pointer 0x%x\n", - Constants::RegionName,shm), 100 ); - } - - /* - One of the best ways to synch between different processes is - through the use of semaphores. ACE_SV_Semaphore_Complex - hides the gory details and lets us use them rather easily. - - Here, we'll create two semaphores: mutex and synch. mutex - will be used to provide mutually exclusive access to the - shared region for writting/reading. 
synch will be used to - prevent the server from removing the memory pool before the - client is done with it. - - Both semaphores are created in an initially locked state. - */ - - ACE_SV_Semaphore_Complex mutex; - ACE_ASSERT (mutex.open (Constants::SEM_KEY_1, - ACE_SV_Semaphore_Complex::ACE_CREATE, 0) != -1); - - ACE_SV_Semaphore_Complex synch; - ACE_ASSERT (synch.open (Constants::SEM_KEY_2, - ACE_SV_Semaphore_Complex::ACE_CREATE, 0) != -1); - - /* - We know the mutex is locked because we created it that way. - Take a moment to write some data into the shared region. - */ - for (int i = 0; i < Constants::SHMSZ; i++) - { - shm[i] = Constants::SHMDATA[i]; - } - - /* - The client will be blocking on an acquire() of mutex. By - releasing it here, the client can go look at the shared data. - */ - if (mutex.release () == -1) - { - ACE_ERROR ((LM_ERROR, "(%P) %p", "server mutex.release")); - } - /* - Even though we created the synch semaphore in a locked - state, if we attempt to acquire() it, we will block. Our - design requires that the client release() synch when it is - OK for us to remove the shared memory. - */ - else if (synch.acquire () == -1) - { - ACE_ERROR ((LM_ERROR, "(%P) %p", "server synch.acquire")); - } - - /* - This will remove all of the memory pool's resources. In the - case where a memory mapped file is used, the physical file - will also be removed. - */ - if (allocator.pool ().remove () == -1) - { - ACE_ERROR ((LM_ERROR, "(%P) %p\n", "server allocator.remove")); - } - - /* - We now have to cleanup the semaphores we created. Use the - ipcs command to see that they did, indeed, go away after the - server exits. - */ + ACE_ERROR_RETURN ((LM_ERROR, + "System V Semaphores not available on this platform.\n"),100); +} +#else // ACE_LACKS_SYSV_SHMEM +int +main (int, char *[]) +{ + /* + Construction of an Allocator will create the memory pool and + provide it with a name. 
The Constants class is also + declared in mpool.h to keep server and client on the same + page. The name is used to generate a unique semaphore which + prevents simultaneous access to the pool's housekeeping + information. (Note that you still have to provide your own + synch mechanisms for the data *you* put in the pool.) + */ + Allocator allocator (Constants::PoolName); + + /* + The Allocator class provides the pool() member so that you + have access to the actual memory pool. A more robust + implementation would behave more as a bridge class but this + is good enough for what we're doing here. + Once you have a reference to the pool, the malloc() method + can be used to get some bytes. If successful, shm will + point to the data. Otherwise, it will be zero. + */ + char *shm = (char *) allocator.pool ().malloc (27); + + ACE_ASSERT (shm != 0); + + /// FYI + ACE_DEBUG ((LM_INFO, + "Shared memory is at 0x%x\n", + shm)); + + /* + Something that we can do with a memory pool is map a name to + a region provided by malloc. By doing this, we can + communicate that name to the client as a rendezvous + location. Again, a member of Constants is used to keep the + client and server coordinated. + */ + if (allocator.pool ().bind(Constants::RegionName,shm) == -1) + ACE_ERROR_RETURN ((LM_ERROR, + "Cannot bind the name '%s' to the pointer 0x%x\n", + Constants::RegionName, + shm), + 100); + + /* + One of the best ways to synch between different processes is + through the use of semaphores. ACE_SV_Semaphore_Complex + hides the gory details and lets us use them rather easily. + + Here, we'll create two semaphores: mutex and synch. mutex + will be used to provide mutually exclusive access to the + shared region for writing/reading. synch will be used to + prevent the server from removing the memory pool before the + client is done with it. + + Both semaphores are created in an initially locked state. 
+ */ - if (mutex.remove () == -1) - { - ACE_ERROR ((LM_ERROR, "(%P) %p\n", "server mutex.remove")); - } - - if (synch.remove () == -1) - { - ACE_ERROR ((LM_ERROR, "(%P) %p\n", "server synch.remove")); - } + ACE_SV_Semaphore_Complex mutex; + ACE_ASSERT (mutex.open (Constants::SEM_KEY_1, + ACE_SV_Semaphore_Complex::ACE_CREATE, + 0) != -1); + + ACE_SV_Semaphore_Complex synch; + ACE_ASSERT (synch.open (Constants::SEM_KEY_2, + ACE_SV_Semaphore_Complex::ACE_CREATE, + 0) != -1); + + /* + We know the mutex is locked because we created it that way. + Take a moment to write some data into the shared region. + */ + for (int i = 0; i < Constants::SHMSZ; i++) + shm[i] = Constants::SHMDATA[i]; + + /* + The client will be blocking on an acquire() of mutex. By + releasing it here, the client can go look at the shared data. + */ + if (mutex.release () == -1) + ACE_ERROR ((LM_ERROR, + "(%P) %p", + "server mutex.release")); + /* + Even though we created the synch semaphore in a locked + state, if we attempt to acquire() it, we will block. Our + design requires that the client release() synch when it is + OK for us to remove the shared memory. + */ + else if (synch.acquire () == -1) + ACE_ERROR ((LM_ERROR, + "(%P) %p", + "server synch.acquire")); + /* + This will remove all of the memory pool's resources. In the + case where a memory mapped file is used, the physical file + will also be removed. + */ + if (allocator.pool ().remove () == -1) + ACE_ERROR ((LM_ERROR, + "(%P) %p\n", + "server allocator.remove")); + /* + We now have to cleanup the semaphores we created. Use the + ipcs command to see that they did, indeed, go away after the + server exits. 
+ */ - return 0; - + if (mutex.remove () == -1) + ACE_ERROR ((LM_ERROR, + "(%P) %p\n", + "server mutex.remove")); + else if (synch.remove () == -1) + ACE_ERROR ((LM_ERROR, + "(%P) %p\n", + "server synch.remove")); + return 0; } /* @@ -167,6 +175,8 @@ template class ACE_Read_Guard<ACE_SV_Semaphore_Simple>; #pragma instantiate ACE_Write_Guard<ACE_SV_Semaphore_Simple> #pragma instantiate ACE_Read_Guard<ACE_SV_Semaphore_Simple> #endif /* ACE_HAS_EXPLICIT_TEMPLATE_INSTANTIATION */ + +#endif /* ACE_LACKS_SYSV_SHMEM */
[Tutorial Index] [Continue This Tutorial] diff --git a/docs/tutorials/021/page03.html b/docs/tutorials/021/page03.html index 017134a55e5..e2d6e0bf117 100644 --- a/docs/tutorials/021/page03.html +++ b/docs/tutorials/021/page03.html @@ -18,98 +18,110 @@
+
+// $Id$ #include "mpool.h" -int main (int, char *[]) +#if defined(ACE_LACKS_SYSV_SHMEM) +int +main (int, char *[]) { - /* - Use the same pool name used by the server when we create our - Allocator. This assures us that we don't create a whole new - pool. - */ - Allocator allocator(Constants::PoolName); - - /* - You can put anything in the memory pool. Not just the - character array we want. The find() method till, therefore, - return a void* that we will have to cast. - */ - void * region; - - /* - We use find() to locate a named region in the pool. This is - the counterpart to bind() used in the server. - Here, we go try to find the region that the server has created - and filled with data. If there was a problem getting the pool - or finding the region, we'll get back -1 from find(). - */ - if( allocator.pool().find(Constants::RegionName,region) == -1 ) - { - ACE_ERROR_RETURN ((LM_ERROR, "Cannot find the name '%s'\n", - Constants::RegionName), 100 ); - } - - /* - Since find() returns us a void*, we cast it here to the char* - that we want. - */ - char *shm = (char *)region; - - ACE_DEBUG ((LM_INFO, "Shared memory is at 0x%x\n", shm )); - - /* - The same pair of semaphores as used by the server are created - here. We probably don't need the CREATE flag since the server - should have already done that. There may be some very small - windows, however, where the server would have created the - memory pool but not yet gotten to the semaphores. - */ - ACE_SV_Semaphore_Complex mutex; - ACE_ASSERT (mutex.open (Constants::SEM_KEY_1, - ACE_SV_Semaphore_Complex::ACE_CREATE, 0) != -1); - - ACE_SV_Semaphore_Complex synch; - ACE_ASSERT (synch.open (Constants::SEM_KEY_2, - ACE_SV_Semaphore_Complex::ACE_CREATE, 0) != -1); - - /* - It doesn't matter if we created 'mutex' or if the server did. - In either case, it was created in a locked state and we will - block here until somebody unlocks it. In our scenario, that - will have to be the server. 
- */ - if (mutex.acquire () == -1) - { - ACE_ERROR_RETURN ((LM_ERROR, "(%P) client mutex.acquire"), 1); - } - - /* - Now that we know it is safe to access the data, we'll run - through and make sure that it contains what we think the server - supplied. - */ - for (int i = 0; i < Constants::SHMSZ; i++) - { - ACE_ASSERT (Constants::SHMDATA[i] == shm[i]); - } - - /* - Look back at the server. After filling the region, it will - attempt to acquire the lock on 'synch'. It will wait there - until we release() the semaphore. That will allow it to remove - the pool and cleanup. We can simply exit once we perform the - release. (Ok, a free() of the region would probably be polite...) - */ - if (synch.release () == -1) - { - ACE_ERROR_RETURN ((LM_ERROR, "(%P) client synch.release"), 1); - } + ACE_ERROR_RETURN ((LM_ERROR, + "System V Semaphores not available on this platform.\n"),100); +} +#else // ACE_LACKS_SYSV_SHMEM +int +main (int, char *[]) { + /* + Use the same pool name used by the server when we create our + Allocator. This assures us that we don't create a whole new + pool. + */ + Allocator allocator (Constants::PoolName); + + /* + You can put anything in the memory pool. Not just the + character array we want. The find() method will, therefore, + return a void* that we will have to cast. + */ + void *region; + + /* + We use find() to locate a named region in the pool. This is + the counterpart to bind() used in the server. + Here, we try to find the region that the server has created + and filled with data. If there was a problem getting the pool + or finding the region, we'll get back -1 from find(). + */ + if (allocator.pool ().find (Constants::RegionName,region) == -1) + ACE_ERROR_RETURN ((LM_ERROR, + "Cannot find the name '%s'\n", + Constants::RegionName), + 100); + + /* + Since find() returns us a void*, we cast it here to the char* + that we want.
+ */ + char *shm = (char *) region; + + ACE_DEBUG ((LM_INFO, + "Shared memory is at 0x%x\n", + shm)); + + /* + The same pair of semaphores as used by the server are created + here. We probably don't need the CREATE flag since the server + should have already done that. There may be some very small + windows, however, where the server would have created the + memory pool but not yet gotten to the semaphores. + */ + ACE_SV_Semaphore_Complex mutex; + ACE_ASSERT (mutex.open (Constants::SEM_KEY_1, + ACE_SV_Semaphore_Complex::ACE_CREATE, + 0) != -1); + + ACE_SV_Semaphore_Complex synch; + ACE_ASSERT (synch.open (Constants::SEM_KEY_2, + ACE_SV_Semaphore_Complex::ACE_CREATE, + 0) != -1); + + /* + It doesn't matter if we created 'mutex' or if the server did. + In either case, it was created in a locked state and we will + block here until somebody unlocks it. In our scenario, that + will have to be the server. + */ + if (mutex.acquire () == -1) + ACE_ERROR_RETURN ((LM_ERROR, + "(%P) client mutex.acquire"), + 1); + + /* + Now that we know it is safe to access the data, we'll run + through and make sure that it contains what we think the server + supplied. + */ + for (int i = 0; i < Constants::SHMSZ; i++) + ACE_ASSERT (Constants::SHMDATA[i] == shm[i]); + + /* + Look back at the server. After filling the region, it will + attempt to acquire the lock on 'synch'. It will wait there + until we release() the semaphore. That will allow it to remove + the pool and clean up. We can simply exit once we perform the + release. (Ok, a free() of the region would probably be polite...)
+ */ + if (synch.release () == -1) + ACE_ERROR_RETURN ((LM_ERROR, + "(%P) client synch.release"), + 1); - return 0; + return 0; } /* @@ -127,6 +139,8 @@ template class ACE_Read_Guard<ACE_SV_Semaphore_Simple>; #pragma instantiate ACE_Write_Guard<ACE_SV_Semaphore_Simple> #pragma instantiate ACE_Read_Guard<ACE_SV_Semaphore_Simple> #endif /* ACE_HAS_EXPLICIT_TEMPLATE_INSTANTIATION */ + +#endif /* ACE_LACKS_SYSV_SHMEM */
diff --git a/docs/tutorials/021/page04.html b/docs/tutorials/021/page04.html index cc6d3c0fa17..a65916da86d 100644 --- a/docs/tutorials/021/page04.html +++ b/docs/tutorials/021/page04.html @@ -19,7 +19,8 @@ The Allocator class is just a thin wrapper around ACE_Malloc<> that moves some of the details out of the application logic. -+
+// $Id$ @@ -29,32 +30,35 @@ // Everything else we need is in this one header #include "ace/Malloc.h" +#if !defined (ACE_LACKS_SYSV_SHMEM) + /* With this we will abstract away some of the details of the memory pool. Note that we don't treat this as a singleton because an application may need more than one pool. Each would have a different name and be used for different purposes. */ + class Allocator { public: - // The pool name will be used to create a unique semaphore to - // keep this pool separate from others. - Allocator( const char * _name = "MemoryPool" ); - ~Allocator(void); + // The pool name will be used to create a unique semaphore to + // keep this pool separate from others. + Allocator (const char * _name = "MemoryPool"); + ~Allocator (void); - typedef ACE_Malloc<ACE_MMAP_Memory_Pool, ACE_SV_Semaphore_Simple> pool_t; + typedef ACE_Malloc<ACE_MMAP_Memory_Pool, ACE_SV_Semaphore_Simple> pool_t; - // Provide an accessor to the pool. This will also allocate the - // pool when first invoked. - pool_t & pool(void); + // Provide an accessor to the pool. This will also allocate the + // pool when first invoked. + pool_t &pool (void); protected: - // The name we gave to the pool - char * name_; + // The name we gave to the pool + char *name_; - pool_t * pool_; + pool_t *pool_; }; /* @@ -64,25 +68,26 @@ protected: class Constants { public: - // The semaphore keys are needed for the two semaphores that - // synch access to the shared memory area. - static const int SEM_KEY_1; - static const int SEM_KEY_2; - - // How big the pool will be and what we'll put into it. A real - // app wouldn't need SHMDATA of course. - static const int SHMSZ; - static const char * SHMDATA; - - // The name assigned to the memory pool by the server is needed - // by the client. Without it, the pool cannot be found. - // Likewise, the name the server will bind() to the region of the - // pool must be available to the client. 
- static const char * PoolName; - static const char * RegionName; + // The semaphore keys are needed for the two semaphores that + // synch access to the shared memory area. + static const int SEM_KEY_1; + static const int SEM_KEY_2; + + // How big the pool will be and what we'll put into it. A real + // app wouldn't need SHMDATA of course. + static const int SHMSZ; + static const char *SHMDATA; + + // The name assigned to the memory pool by the server is needed + // by the client. Without it, the pool cannot be found. + // Likewise, the name the server will bind() to the region of the + // pool must be available to the client. + static const char *PoolName; + static const char *RegionName; }; -#endif +#endif /* ACE_LACKS_SYSV_SHMEM */ +#endif /* MPOOL_H */
diff --git a/docs/tutorials/021/page05.html b/docs/tutorials/021/page05.html index 6e238fcc85b..c5fbcdde621 100644 --- a/docs/tutorials/021/page05.html +++ b/docs/tutorials/021/page05.html @@ -19,12 +19,15 @@ The Allocator class is just a thin wrapper around ACE_Malloc<> that moves some of the details out of the application logic. -+
+// $Id$ #include "mpool.h" +#if !defined (ACE_LACKS_SYSV_SHMEM) + /* Set the values of all of the constants. This guarantees that client and server don't get confused. @@ -43,29 +46,25 @@ const char * Constants::RegionName = " -Allocator::Allocator( const char * _name ) - : name_(ACE_OS::strdup(_name)), - pool_(0) +Allocator::Allocator (const char *_name) + : name_ (ACE_OS::strdup (_name)), + pool_ (0) { - if( ! name_ ) - { - ACE_ERROR ((LM_ERROR, "(%P) %p", - "Allocator::Allocator cannot strdup pool name" )); - } + if (name_ == 0) + ACE_ERROR ((LM_ERROR, "(%P) %p", + "Allocator::Allocator cannot strdup pool name")); } -Allocator::~Allocator(void) +Allocator::~Allocator (void) { - /* - strdup() uses malloc(), so we must use free() to clean up. - */ - if( name_ ) - { - free(name_); - } - - // delete doesn't really care if you give it a NULL pointer. - delete pool_; + /* + strdup() uses malloc(), so we must use free() to clean up. + */ + if (name_) + ACE_OS::free (name_); + + // delete doesn't really care if you give it a NULL pointer. + delete pool_; } /* @@ -77,15 +76,17 @@ const char * Constants::RegionName = " -Allocator::pool_t & Allocator::pool(void) + +Allocator::pool_t & +Allocator::pool (void) { - if( ! pool_ ) - { - pool_ = new pool_t( name_ ); - } + if (pool_ == 0) + pool_ = new pool_t (name_); - return *pool_; + return *pool_; } + +#endif /* ACE_LACKS_SYSV_SHMEM */
diff --git a/docs/tutorials/Makefile b/docs/tutorials/Makefile index 60f324b146e..6174987d31b 100644 --- a/docs/tutorials/Makefile +++ b/docs/tutorials/Makefile @@ -1,7 +1,7 @@ # $Id$ -all clean realclean : # +all clean realclean UNSHAR SHAR HTML : # for i in * ; do \ [ -f $$i/Makefile ] || continue ; \ ( cd $$i ; $(MAKE) $@ ) ; \