authorMike Bayer <mike_mp@zzzcomputing.com>2011-10-23 21:27:11 -0400
committerMike Bayer <mike_mp@zzzcomputing.com>2011-10-23 21:27:11 -0400
commitcac6e8f7890dcf53c6a243a0768ac749c4688524 (patch)
tree485069758e68d41962e4f20103b1eaefd4c7580c /README.rst
parentf37b497e8d6b78553f243a09c1886cdbdc425129 (diff)
downloaddogpile-core-cac6e8f7890dcf53c6a243a0768ac749c4688524.tar.gz
- Add new "nameregistry" helper. Another fixture derived from Beaker, this allows
  the ad-hoc creation of a new Dogpile lock based on a name, where all other threads
  calling that name at the same time will get the same Dogpile lock. Allows any
  number of logical "dogpile" actions to carry on concurrently without any memory
  taken up outside of those operations.
- To support the use case addressed by nameregistry, added value_and_created_fn
  to dogpile.acquire(). The idea is that value_and_created_fn can return
  (value, createdtime), so that the creation time of the value can come from the
  cache, eliminating the need for the dogpile lock to hang around persistently.
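The per-name sharing described above can be illustrated with a minimal, stripped-down sketch. This is not the actual nameregistry implementation; ``MiniNameRegistry`` and ``PerNameLock`` are hypothetical names used only to show the idea of handing out one shared object per name and letting it be garbage collected when no caller holds it:

```python
import threading
import weakref


class MiniNameRegistry:
    """Minimal sketch of the name-registry idea: return one shared object
    per name, and let it be garbage collected once no caller holds it."""

    def __init__(self, creator):
        self.creator = creator
        self.mutex = threading.Lock()
        # weak values: an entry vanishes once all users release the object
        self.registry = weakref.WeakValueDictionary()

    def get(self, identifier):
        with self.mutex:
            try:
                # another caller already created the object for this name
                return self.registry[identifier]
            except KeyError:
                obj = self.creator(identifier)
                self.registry[identifier] = obj
                return obj


class PerNameLock:
    """Stand-in for a per-name Dogpile lock."""

    def __init__(self, name):
        self.name = name
        self.lock = threading.Lock()


registry = MiniNameRegistry(PerNameLock)

# two concurrent users of the same name get the same object
a = registry.get("some_key")
b = registry.get("some_key")
```

Because the registry only holds weak references, no memory accumulates for names that are no longer in use.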
Diffstat (limited to 'README.rst')
-rw-r--r--README.rst55
1 files changed, 55 insertions, 0 deletions
diff --git a/README.rst b/README.rst
index fc3d0c5..eea2672 100644
--- a/README.rst
+++ b/README.rst
@@ -181,6 +181,61 @@ In particular, Dogpile's system allows us to call the memcached get() function a
once per access, instead of Beaker's system which calls it twice, and doesn't make us call
get() when we just created the value.
+Using Dogpile across lots of keys
+----------------------------------
+
+The above patterns all feature the usage of Dogpile as an object held persistently
+for the lifespan of some value. Two more helpers allow the dogpile to be created
+as needed and then disposed of, while still ensuring that concurrent threads
+coordinate on a single lock. Here's the memcached example again using that technique::
+
+ import pylibmc
+ mc_pool = pylibmc.ThreadMappedPool(pylibmc.Client("localhost"))
+
+ from dogpile import Dogpile, NeedRegenerationException, NameRegistry
+ import pickle
+ import time
+
+    def cache(expiration_time):
+ dogpile_registry = NameRegistry(lambda identifier: Dogpile(expiration_time))
+
+        def get_or_create(key, creation_function):
+
+            def get_value():
+                with mc_pool.reserve() as mc:
+                    value = mc.get(key)
+                    if value is None:
+                        raise NeedRegenerationException()
+                    # deserialize a tuple
+                    # (value, createdtime)
+                    return pickle.loads(value)
+
+            dogpile = dogpile_registry.get(key)
+
+            def gen_cached():
+                value = creation_function()
+ with mc_pool.reserve() as mc:
+ # serialize a tuple
+ # (value, createdtime)
+                    mc.set(key, pickle.dumps((value, time.time())))
+ return value
+
+ with dogpile.acquire(gen_cached, value_and_created_fn=get_value) as value:
+ return value
+
+ return get_or_create
+
+Above, we use a ``NameRegistry`` which will give us a ``Dogpile`` object that's
+unique to a certain name. When all usages of that name are complete, the ``Dogpile``
+object falls out of scope, so the total number of keys used is not a memory issue.
+We then tell Dogpile that we'll supply the "creation time" that we store in our
+cache - we do this via the ``value_and_created_fn`` argument, which assumes we'll
+be storing and loading the value as a tuple of (value, createdtime). The creation time
+should always be calculated via ``time.time()``. The ``acquire()`` function
+returns just the first part of the tuple, the value, and uses the
+createdtime portion to determine if the value is expired.
+
+
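The (value, createdtime) round trip that ``value_and_created_fn`` relies on can be shown in isolation. This is a sketch only, using a plain dict in place of the memcached client; ``store``, ``put_value`` and ``get_value`` here are illustrative names, not part of the dogpile API:

```python
import pickle
import time

# plain dict standing in for the memcached client in the example above
store = {}


def put_value(key, value):
    # serialize the value together with its creation time; this is the
    # tuple shape that the value_and_created_fn callback later returns
    store[key] = pickle.dumps((value, time.time()))


def get_value(key):
    # deserialize back into the (value, createdtime) tuple
    return pickle.loads(store[key])


put_value("greeting", "hello")
value, createdtime = get_value("greeting")
```

Because the creation time travels with the value in the cache, no lock object needs to persist between accesses in order to remember when the value was made.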
Development Status
-------------------