author     Jordan Cook <jordan.cook@pioneer.com>  2022-04-11 20:13:25 -0500
committer  Jordan Cook <jordan.cook@pioneer.com>  2022-04-11 20:20:09 -0500
commit     bacf3aada5f73c289aa37113f2776ac3340995d4 (patch)
tree       b2157a7c69fac1e3074a1ba79c31f9c9fe2a6163
parent     2bd683658e17602a826a26b864a399b68181ee33 (diff)
download   requests-cache-bacf3aada5f73c289aa37113f2776ac3340995d4.tar.gz
Add some more notes about SQLite and Redis backends
-rw-r--r--  docs/user_guide/backends.md        5
-rw-r--r--  requests_cache/backends/redis.py  25
-rw-r--r--  requests_cache/backends/sqlite.py 18
3 files changed, 40 insertions, 8 deletions
diff --git a/docs/user_guide/backends.md b/docs/user_guide/backends.md
index b0461b9..2740af1 100644
--- a/docs/user_guide/backends.md
+++ b/docs/user_guide/backends.md
@@ -13,9 +13,8 @@ The default backend is SQLite, since it's simple to use, requires no extra depen
configuration, and has the best all-around performance for the majority of use cases.
```{note}
-In the rare case that SQLite is not available
-(for example, [on Heroku](https://devcenter.heroku.com/articles/sqlite3)), a non-persistent
-in-memory cache is used by default.
+In environments where SQLite is explicitly disabled, a non-persistent in-memory cache is used by
+default.
```
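As a rough sketch of what that looks like in practice (the cache name `http_cache` is arbitrary, and `backend='memory'` explicitly selects the in-memory backend):

```python
from requests_cache import CachedSession

# Default: the SQLite backend, stored as http_cache.sqlite
session = CachedSession('http_cache')

# Explicitly select the non-persistent in-memory backend
session = CachedSession('http_cache', backend='memory')
```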
## Backend Dependencies
diff --git a/requests_cache/backends/redis.py b/requests_cache/backends/redis.py
index 752ca01..fe05cf4 100644
--- a/requests_cache/backends/redis.py
+++ b/requests_cache/backends/redis.py
@@ -18,16 +18,30 @@ or disabled entirely. See `Redis Persistence <https://redis.io/topics/persistenc
Connection Options
^^^^^^^^^^^^^^^^^^
-The Redis backend accepts any keyword arguments for :py:class:`redis.client.Redis`. These can be passed
-via :py:class:`.CachedSession`:
+The Redis backend accepts any keyword arguments for :py:class:`redis.client.Redis`. These can be
+passed via :py:class:`.RedisCache`:
- >>> session = CachedSession('http_cache', backend='redis', host='192.168.1.63', port=6379)
+ >>> backend = RedisCache(host='192.168.1.63', port=6379)
+ >>> session = CachedSession('http_cache', backend=backend)
-Or via :py:class:`.RedisCache`:
+Or you can pass an existing ``Redis`` object:
- >>> backend = RedisCache(host='192.168.1.63', port=6379)
+ >>> from redis import Redis
+ >>> connection = Redis(host='192.168.1.63', port=6379)
+ >>> backend = RedisCache(connection=connection)
>>> session = CachedSession('http_cache', backend=backend)
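As a further rough sketch, any other ``redis.client.Redis`` option can be forwarded the same way; the ``db`` and ``password`` values below are placeholders:

    >>> from requests_cache import CachedSession, RedisCache
    >>>
    >>> # Extra keyword arguments are forwarded to redis.client.Redis
    >>> backend = RedisCache(host='192.168.1.63', port=6379, db=1, password='<password>')
    >>> session = CachedSession('http_cache', backend=backend)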
+Redislite
+^^^^^^^^^
+If you can't easily set up your own Redis server, another option is
+`redislite <https://github.com/yahoo/redislite>`_. It contains its own lightweight, embedded Redis
+database, and can be used as a drop-in replacement for redis-py. Usage example:
+ >>> from redislite import Redis
+ >>> from requests_cache import CachedSession, RedisCache
+ >>>
+ >>> backend = RedisCache(connection=Redis())
+ >>> session = CachedSession(backend=backend)
+
API Reference
^^^^^^^^^^^^^
.. automodsumm:: requests_cache.backends.redis
@@ -72,6 +86,7 @@ class RedisDict(BaseStorage):
"""
def __init__(self, namespace: str, collection_name: str = None, connection=None, **kwargs):
+
super().__init__(**kwargs)
connection_kwargs = get_valid_kwargs(Redis, kwargs)
self.connection = connection or StrictRedis(**connection_kwargs)
diff --git a/requests_cache/backends/sqlite.py b/requests_cache/backends/sqlite.py
index ce62749..7be83d4 100644
--- a/requests_cache/backends/sqlite.py
+++ b/requests_cache/backends/sqlite.py
@@ -49,6 +49,24 @@ application. It supports unlimited concurrent reads. Writes, however, are queued
so if you need to make large volumes of concurrent requests, you may want to consider a different
backend that's specifically made for that kind of workload, like :py:class:`.RedisCache`.
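As a rough sketch of read-heavy concurrent use with a shared session (the URL list and worker count are illustrative):

    >>> from concurrent.futures import ThreadPoolExecutor
    >>> from requests_cache import CachedSession
    >>>
    >>> # A single session can be shared across threads; concurrent reads are cheap,
    >>> # while writes are serialized by SQLite
    >>> session = CachedSession('http_cache', backend='sqlite')
    >>> urls = ['https://httpbin.org/get'] * 10
    >>> with ThreadPoolExecutor(max_workers=4) as executor:
    ...     results = list(executor.map(session.get, urls))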
+Hosting Services and Filesystem Compatibility
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+There are some caveats to using SQLite with certain hosting services, depending on what kind of
+storage is available:
+
+* NFS:
+  * SQLite may be used on an NFS share, but is usually only safe to use from a single process at a time.
+ See the `SQLite FAQ <https://www.sqlite.org/faq.html#q5>`_ for details.
+ * PythonAnywhere is one example of a host that uses NFS-backed storage. Using SQLite from a
+ multiprocess application will likely result in ``sqlite3.OperationalError: database is locked``.
+* Ephemeral storage:
+ * Heroku `explicitly disables SQLite <https://devcenter.heroku.com/articles/sqlite3>`_ on its dynos.
+ * AWS `EC2 <https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/InstanceStorage.html>`_,
+ `Lambda (depending on configuration) <https://aws.amazon.com/blogs/compute/choosing-between-aws-lambda-data-storage-options-in-web-apps/>`_,
+ and some other AWS services use ephemeral storage that only persists for the lifetime of the
+    instance. This is fine for short-term caching. For longer-term persistence, you can use an
+    `attached EBS volume <https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-attaching-volume.html>`_
+    (see the sketch below).
+
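As a rough sketch of the EBS approach referenced above (the mount point ``/mnt/ebs`` is a placeholder; the SQLite backend uses the cache name as the database path):

    >>> from requests_cache import CachedSession
    >>>
    >>> # Store the cache database on an attached, persistent volume
    >>> session = CachedSession('/mnt/ebs/http_cache', backend='sqlite')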
Connection Options
^^^^^^^^^^^^^^^^^^
The SQLite backend accepts any keyword arguments for :py:func:`sqlite3.connect`. These can be passed