author     Selwin Ong <selwin.ong@gmail.com>  2019-01-23 20:38:30 +0700
committer  Selwin Ong <selwin.ong@gmail.com>  2019-01-23 20:38:30 +0700
commit     4b8e6156448ef2e7fafd4cf22fc869bb66286995 (patch)
tree       6edee682817a756c412caf6a9c9c2b9d42586c68
parent     fc8dd953777aaaa3f3fb84de279e5468bcc33823 (diff)
download   rq-4b8e6156448ef2e7fafd4cf22fc869bb66286995.tar.gz
Changed docs to use Github compatible code block markup
-rw-r--r--  docs/docs/connections.md  25
-rw-r--r--  docs/docs/exceptions.md   16
-rw-r--r--  docs/docs/index.md        36
-rw-r--r--  docs/docs/jobs.md         20
-rw-r--r--  docs/docs/monitoring.md   28
-rw-r--r--  docs/docs/results.md       8
-rw-r--r--  docs/docs/testing.md       8
-rw-r--r--  docs/docs/workers.md      51
8 files changed, 94 insertions, 98 deletions
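The change below is a mechanical substitution across all eight files: every `{% highlight lang %}` / `{% endhighlight %}` pair becomes a GitHub-style fenced code block. A conversion like this could be scripted; here is a minimal sketch (a hypothetical helper, under the assumption the tags always sit on their own lines as in these docs — the actual edit may well have been made by hand or with `sed`):

```python
import re

# Hypothetical helper sketching how the Jekyll-to-GitHub markup conversion
# in this commit could be automated.
FENCE = '`' * 3  # a literal triple backtick, built programmatically

# {% highlight python %} -> ```python ; {% endhighlight %} -> ```
OPEN_TAG = re.compile(r'\{%\s*highlight\s+(\w+)\s*%\}')
CLOSE_TAG = re.compile(r'\{%\s*endhighlight\s*%\}')

def to_fenced(text):
    """Replace Jekyll highlight tag pairs with fenced code blocks."""
    text = OPEN_TAG.sub(lambda m: FENCE + m.group(1), text)
    return CLOSE_TAG.sub(FENCE, text)
```

Run over each `docs/docs/*.md` file, this reproduces the kind of rewrite shown in the hunks below, including keeping the language name from the opening tag as the fence's info string.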
diff --git a/docs/docs/connections.md b/docs/docs/connections.md
index 9ba30c4..0e55147 100644
--- a/docs/docs/connections.md
+++ b/docs/docs/connections.md
@@ -23,20 +23,20 @@ pass in Redis connection references to queues directly.
In development mode, to connect to a default, local Redis server:
-{% highlight python %}
+```python
from rq import use_connection
use_connection()
-{% endhighlight %}
+```
In production, to connect to a specific Redis server:
-{% highlight python %}
+```python
from redis import Redis
from rq import use_connection
redis = Redis('my.host.org', 6789, password='secret')
use_connection(redis)
-{% endhighlight %}
+```
Be aware of the fact that `use_connection` pollutes the global namespace. It
also implies that you can only ever use a single connection.
@@ -59,7 +59,7 @@ Each RQ object instance (queues, workers, jobs) has a `connection` keyword
argument that can be passed to the constructor. Using this, you don't need to
use `use_connection()`. Instead, you can create your queues like this:
-{% highlight python %}
+```python
from rq import Queue
from redis import Redis
@@ -68,7 +68,7 @@ conn2 = Redis('remote.host.org', 9836)
q1 = Queue('foo', connection=conn1)
q2 = Queue('bar', connection=conn2)
-{% endhighlight %}
+```
Every job that is enqueued on a queue will know what connection it belongs to.
The same goes for the workers.
@@ -85,7 +85,7 @@ default connection to be used.
An example will help to understand it:
-{% highlight python %}
+```python
from rq import Queue, Connection
from redis import Redis
@@ -98,7 +98,7 @@ with Connection(Redis('localhost', 6379)):
assert q1.connection != q2.connection
assert q2.connection != q3.connection
assert q1.connection == q3.connection
-{% endhighlight %}
+```
You can think of this as if, within the `Connection` context, every newly
created RQ object instance will have the `connection` argument set implicitly.
@@ -112,7 +112,7 @@ If your code does not allow you to use a `with` statement, for example, if you
want to use this to set up a unit test, you can use the `push_connection()` and
`pop_connection()` methods instead of using the context manager.
-{% highlight python %}
+```python
import unittest
from rq import Queue
from rq import push_connection, pop_connection
@@ -127,8 +127,7 @@ class MyTest(unittest.TestCase):
def test_foo(self):
"""Any queues created here use local Redis."""
q = Queue()
- ...
-{% endhighlight %}
+```
### Sentinel support
@@ -136,10 +135,10 @@ To use redis sentinel, you must specify a dictionary in the configuration file.
Using this setting in conjunction with systemd or Docker containers with the
automatic restart option allows workers and RQ to have a fault-tolerant connection to Redis.
-{% highlight python %}
+```python
SENTINEL: {'INSTANCES':[('remote.host1.org', 26379), ('remote.host2.org', 26379), ('remote.host3.org', 26379)],
'SOCKET_TIMEOUT': None,
'PASSWORD': 'secret',
'DB': 2,
'MASTER_NAME': 'master'}
-{% endhighlight %}
+```
diff --git a/docs/docs/exceptions.md b/docs/docs/exceptions.md
index 2c855d4..913da70 100644
--- a/docs/docs/exceptions.md
+++ b/docs/docs/exceptions.md
@@ -23,29 +23,27 @@ exception occurs.
This is how you register custom exception handler(s) to an RQ worker:
-{% highlight python %}
+```python
from rq.handlers import move_to_failed_queue # RQ's default exception handler
w = Worker([q], exception_handlers=[my_handler, move_to_failed_queue])
-...
-{% endhighlight %}
+```
The handler itself is a function that takes the following parameters: `job`,
`exc_type`, `exc_value` and `traceback`:
-{% highlight python %}
+```python
def my_handler(job, exc_type, exc_value, traceback):
# do custom things here
# for example, write the exception info to a DB
- ...
-{% endhighlight %}
+
+```
You might also see the three exception arguments encoded as:
```python
def my_handler(job, *exc_info):
# do custom things here
- ...
```
## Chaining exception handlers
@@ -63,7 +61,7 @@ as `True` (i.e. continue with the next handler).
To replace the default behaviour (i.e. moving the job to the `failed` queue),
use a custom exception handler that doesn't fall through, for example:
-{% highlight python %}
+```python
def black_hole(job, *exc_info):
return False
-{% endhighlight %}
+```
diff --git a/docs/docs/index.md b/docs/docs/index.md
index b2fd645..28b900a 100644
--- a/docs/docs/index.md
+++ b/docs/docs/index.md
@@ -13,13 +13,13 @@ arguments onto a queue. This is called _enqueueing_.
To put jobs on queues, first declare a function:
-{% highlight python %}
+```python
import requests
def count_words_at_url(url):
resp = requests.get(url)
return len(resp.text.split())
-{% endhighlight %}
+```
Noticed anything? There's nothing special about this function! Any Python
function call can be put on an RQ queue.
@@ -27,7 +27,7 @@ function call can be put on an RQ queue.
To put this potentially expensive word count for a given URL in the background,
simply do this:
-{% highlight python %}
+```python
from rq import Queue
from redis import Redis
from somewhere import count_words_at_url
@@ -43,14 +43,14 @@ print(job.result) # => None
# Now, wait a while, until the worker is finished
time.sleep(2)
print(job.result) # => 889
-{% endhighlight %}
+```
If you want to put the work on a specific queue, simply specify its name:
-{% highlight python %}
+```python
q = Queue('low', connection=redis_conn)
q.enqueue(count_words_at_url, 'http://nvie.com')
-{% endhighlight %}
+```
Notice the `Queue('low')` in the example above? You can use any queue name, so
you can quite flexibly distribute work to your own desire. A common naming
@@ -78,28 +78,28 @@ job function.
In the last case, it may be advantageous to instead use the explicit version of
`.enqueue()`, `.enqueue_call()`:
-{% highlight python %}
+```python
q = Queue('low', connection=redis_conn)
q.enqueue_call(func=count_words_at_url,
args=('http://nvie.com',),
timeout=30)
-{% endhighlight %}
+```
For cases where the web process doesn't have access to the source code running
in the worker (i.e. code base X invokes a delayed function from code base Y),
you can pass the function as a string reference, too.
-{% highlight python %}
+```python
q = Queue('low', connection=redis_conn)
q.enqueue('my_package.my_module.my_func', 3, 4)
-{% endhighlight %}
+```
## Working with Queues
Besides enqueuing jobs, Queues have a few useful methods:
-{% highlight python %}
+```python
from rq import Queue
from redis import Redis
@@ -117,7 +117,7 @@ job = q.fetch_job('my_id') # Returns job having ID "my_id"
# Deleting the queue
q.delete(delete_jobs=True) # Passing in `True` will remove all jobs in the queue
# queue is unusable now unless re-instantiated
-{% endhighlight %}
+```
### On the Design
@@ -151,7 +151,7 @@ of course).
If you're familiar with Celery, you might be used to its `@task` decorator.
Starting from RQ >= 0.3, there exists a similar decorator:
-{% highlight python %}
+```python
from rq.decorators import job
@job('low', connection=my_redis_conn, timeout=5)
@@ -161,7 +161,7 @@ def add(x, y):
job = add.delay(3, 4)
time.sleep(1)
print(job.result)
-{% endhighlight %}
+```
## Bypassing workers
@@ -170,12 +170,12 @@ For testing purposes, you can enqueue jobs without delegating the actual
execution to a worker (available since version 0.3.1). To do this, pass the
`is_async=False` argument into the Queue constructor:
-{% highlight pycon %}
+```python
>>> q = Queue('low', is_async=False, connection=my_redis_conn)
>>> job = q.enqueue(fib, 8)
>>> job.result
21
-{% endhighlight %}
+```
The above code runs without an active worker and executes `fib(8)`
synchronously within the same process. You may know this behaviour from Celery
@@ -188,11 +188,11 @@ a redis instance for storing states related to job execution and completion.
New in RQ 0.4.0 is the ability to chain the execution of multiple jobs.
To execute a job that depends on another job, use the `depends_on` argument:
-{% highlight python %}
+```python
q = Queue('low', connection=my_redis_conn)
report_job = q.enqueue(generate_report)
q.enqueue(send_report, depends_on=report_job)
-{% endhighlight %}
+```
The ability to handle job dependencies allows you to split a big job into
several smaller ones. A job that is dependent on another is enqueued only when
diff --git a/docs/docs/jobs.md b/docs/docs/jobs.md
index 12b8aea..234b4a7 100644
--- a/docs/docs/jobs.md
+++ b/docs/docs/jobs.md
@@ -13,14 +13,14 @@ jobs.
All job information is stored in Redis. You can inspect a job and its attributes
by using `Job.fetch()`.
-{% highlight python %}
+```python
from redis import Redis
from rq.job import Job
connection = Redis()
job = Job.fetch('my_job_id', connection=connection)
print('Status: %s' % job.get_status())
-{% endhighlight %}
+```
Some interesting job attributes include:
* `job.status`
@@ -38,14 +38,14 @@ Some interesting job attributes include:
Since job functions are regular Python functions, you have to ask RQ for the
current job ID, if any. To do this, you can use:
-{% highlight python %}
+```python
from rq import get_current_job
def add(x, y):
job = get_current_job()
print('Current job: %s' % (job.id,))
return x + y
-{% endhighlight %}
+```
## Storing arbitrary data on jobs
@@ -56,7 +56,7 @@ To add/update custom status information on this job, you have access to the
`meta` property, which allows you to store arbitrary pickleable data on the job
itself:
-{% highlight python %}
+```python
import socket
def add(x, y):
@@ -67,7 +67,7 @@ def add(x, y):
# do more work
time.sleep(1)
return x + y
-{% endhighlight %}
+```
## Time to live for job in queue
@@ -77,13 +77,13 @@ _New in version 0.4.7._
A job has two TTLs, one for the job result and one for the job itself. This means that if you have
a job that shouldn't be executed after a certain amount of time, you can define a TTL as such:
-{% highlight python %}
+```python
# When creating the job:
job = Job.create(func=say_hello, ttl=43)
# or when queueing a new job:
job = q.enqueue(count_words_at_url, 'http://nvie.com', ttl=43)
-{% endhighlight %}
+```
## Failed Jobs
@@ -92,7 +92,7 @@ If a job fails and raises an exception, the worker will put the job in a failed
On the Job instance, the `is_failed` property will be true. To fetch all failed jobs, scan
through the `get_failed_queue()` queue.
-{% highlight python %}
+```python
from redis import StrictRedis
from rq import push_connection, get_failed_queue, Queue
from rq.job import Job
@@ -115,4 +115,4 @@ fq.requeue(job.id)
assert fq.count == 0
assert Queue('fake').count == 1
-{% endhighlight %}
+```
diff --git a/docs/docs/monitoring.md b/docs/docs/monitoring.md
index 0f53554..8acf7d0 100644
--- a/docs/docs/monitoring.md
+++ b/docs/docs/monitoring.md
@@ -13,10 +13,10 @@ which looks like this:
To install, just do:
-{% highlight console %}
+```console
$ pip install rq-dashboard
$ rq-dashboard
-{% endhighlight %}
+```
It can also be integrated easily in your Flask app.
@@ -25,7 +25,7 @@ It can also be integrated easily in your Flask app.
To see what queues exist and what workers are active, just type `rq info`:
-{% highlight console %}
+```console
$ rq info
high |██████████████████████████ 20
low |██████████████ 12
@@ -36,14 +36,14 @@ Bricktop.19233 idle: low
Bricktop.19232 idle: high, default, low
Bricktop.18349 idle: default
3 workers, 3 queues
-{% endhighlight %}
+```
## Querying by queue names
You can also query for a subset of queues, if you're looking for specific ones:
-{% highlight console %}
+```console
$ rq info high default
high |██████████████████████████ 20
default |█████████ 8
@@ -52,7 +52,7 @@ default |█████████ 8
Bricktop.19232 idle: high, default
Bricktop.18349 idle: default
2 workers, 2 queues
-{% endhighlight %}
+```
## Organising workers by queue
@@ -60,7 +60,7 @@ Bricktop.18349 idle: default
By default, `rq info` prints the workers that are currently active, and the
queues that they are listening on, like this:
-{% highlight console %}
+```console
$ rq info
...
@@ -68,12 +68,12 @@ Mickey.26421 idle: high, default
Bricktop.25458 busy: high, default, low
Turkish.25812 busy: high, default
3 workers, 3 queues
-{% endhighlight %}
+```
To see the same data, but organised by queue, use the `-R` (or `--by-queue`)
flag:
-{% highlight console %}
+```console
$ rq info -R
...
@@ -82,7 +82,7 @@ low: Bricktop.25458 (busy)
default: Bricktop.25458 (busy), Mickey.26421 (idle), Turkish.25812 (busy)
failed: –
3 workers, 4 queues
-{% endhighlight %}
+```
## Interval polling
@@ -90,16 +90,16 @@ failed: –
By default, `rq info` will print stats and exit.
You can specify a poll interval, by using the `--interval` flag.
-{% highlight console %}
+```console
$ rq info --interval 1
-{% endhighlight %}
+```
`rq info` will now update the screen every second. You may specify a float
value to indicate fractions of seconds. Be aware that low interval values will
increase the load on Redis, of course.
-{% highlight console %}
+```console
$ rq info --interval 0.5
-{% endhighlight %}
+```
[dashboard]: https://github.com/nvie/rq-dashboard
diff --git a/docs/docs/results.md b/docs/docs/results.md
index a641bf8..26087d6 100644
--- a/docs/docs/results.md
+++ b/docs/docs/results.md
@@ -93,15 +93,15 @@ If a job requires more (or less) time to complete, the default timeout period
can be loosened (or tightened), by specifying it as a keyword argument to the
`enqueue()` call, like so:
-{% highlight python %}
+```python
q = Queue()
q.enqueue(mytask, args=(foo,), kwargs={'bar': qux}, timeout=600) # 10 mins
-{% endhighlight %}
+```
You can also change the default timeout for jobs that are enqueued via specific
queue instances at once, which can be useful for patterns like this:
-{% highlight python %}
+```python
# High prio jobs should end in 8 secs, while low prio
# work may take up to 10 mins
high = Queue('high', default_timeout=8) # 8 secs
@@ -109,7 +109,7 @@ low = Queue('low', default_timeout=600) # 10 mins
# Individual jobs can still override these defaults
low.enqueue(really_really_slow, timeout=3600) # 1 hr
-{% endhighlight %}
+```
Individual jobs can still specify an alternative timeout, as workers will
respect these.
diff --git a/docs/docs/testing.md b/docs/docs/testing.md
index bfa4e88..1bc0320 100644
--- a/docs/docs/testing.md
+++ b/docs/docs/testing.md
@@ -9,7 +9,7 @@ You may wish to include your RQ tasks inside unit tests. However many frameworks
Therefore, you must use the `SimpleWorker` class to avoid `fork()`:
-{% highlight python %}
+```python
from redis import Redis
from rq import SimpleWorker, Queue
@@ -18,7 +18,7 @@ queue.enqueue(my_long_running_job)
worker = SimpleWorker([queue], connection=queue.connection)
worker.work(burst=True) # Runs enqueued job
# Check for result...
-{% endhighlight %}
+```
## Running Jobs in unit tests
@@ -31,11 +31,11 @@ Additionally, we can use fakeredis to mock a redis instance, so we don't have to
run a redis server separately. The instance of the fake redis server can
be directly passed as the connection argument to the queue:
-{% highlight python %}
+```python
from fakeredis import FakeStrictRedis
from rq import Queue
queue = Queue(is_async=False, connection=FakeStrictRedis())
job = queue.enqueue(my_long_running_job)
assert job.is_finished
-{% endhighlight %}
+```
diff --git a/docs/docs/workers.md b/docs/docs/workers.md
index 25b047e..64888ff 100644
--- a/docs/docs/workers.md
+++ b/docs/docs/workers.md
@@ -13,14 +13,14 @@ to perform inside web processes.
To start crunching work, simply start a worker from the root of your project
directory:
-{% highlight console %}
+```console
$ rq worker high normal low
*** Listening for work on high, normal, low
Got send_newsletter('me@nvie.com') from default
Job ended normally without result
*** Listening for work on high, normal, low
...
-{% endhighlight %}
+```
Workers will read jobs from the given queues (the order is important) in an
endless loop, waiting for new work to arrive when all jobs are done.
@@ -37,14 +37,14 @@ new work when they run out of work. Workers can also be started in _burst
mode_ to finish all currently available work and quit as soon as all given
queues are emptied.
-{% highlight console %}
+```console
$ rq worker --burst high normal low
*** Listening for work on high, normal, low
Got send_newsletter('me@nvie.com') from default
Job ended normally without result
No more work, burst finished.
Registering death.
-{% endhighlight %}
+```
This can be useful for batch work that needs to be processed periodically, or
just to scale up your workers temporarily during peak periods.
@@ -106,7 +106,7 @@ yourself before starting the work loop.
To do this, provide your own worker script (instead of using `rq worker`).
A simple implementation example:
-{% highlight python %}
+```python
#!/usr/bin/env python
import sys
from rq import Connection, Worker
@@ -121,7 +121,7 @@ with Connection():
w = Worker(qs)
w.work()
-{% endhighlight %}
+```
### Worker names
@@ -139,7 +139,7 @@ starting the worker, using the `--name` option.
`Worker` instances store their runtime information in Redis. Here's how to
retrieve them:
-{% highlight python %}
+```python
from redis import Redis
from rq import Queue, Worker
@@ -150,14 +150,14 @@ workers = Worker.all(connection=redis)
# Returns all workers in this queue (new in version 0.10.0)
queue = Queue('queue_name')
workers = Worker.all(queue=queue)
-{% endhighlight %}
+```
_New in version 0.10.0._
If you only want to know the number of workers for monitoring purposes, using
`Worker.count()` is much more performant.
-{% highlight python %}
+```python
from redis import Redis
from rq import Worker
@@ -169,8 +169,7 @@ workers = Worker.count(connection=redis)
# Count the number of workers for a specific queue
queue = Queue('queue_name', connection=redis)
workers = Worker.count(queue=queue)
-
-{% endhighlight %}
+```
### Worker statistics
@@ -180,14 +179,14 @@ _New in version 0.9.0._
If you want to check the utilization of your queues, `Worker` instances
store a few useful pieces of information:
-{% highlight python %}
+```python
from rq.worker import Worker
worker = Worker.find_by_key('rq:worker:name')
worker.successful_job_count # Number of jobs finished successfully
worker.failed_job_count  # Number of failed jobs processed by this worker
worker.total_working_time  # Amount of time spent executing jobs
-{% endhighlight %}
+```
## Taking down workers
@@ -209,7 +208,7 @@ If you'd like to configure `rq worker` via a configuration file instead of
through command line arguments, you can do this by creating a Python file like
`settings.py`:
-{% highlight python %}
+```python
REDIS_URL = 'redis://localhost:6379/1'
# You can also specify the Redis DB to use
@@ -228,7 +227,7 @@ SENTRY_DSN = 'sync+http://public:secret@example.com/1'
# If you want custom worker name
# NAME = 'worker-1024'
-{% endhighlight %}
+```
The example above shows all the options that are currently supported.
@@ -236,9 +235,9 @@ _Note: The_ `QUEUES` _and_ `REDIS_PASSWORD` _settings are new since 0.3.3._
To specify which module to read settings from, use the `-c` option:
-{% highlight console %}
+```console
$ rq worker -c settings
-{% endhighlight %}
+```
## Custom worker classes
@@ -255,9 +254,9 @@ more common requests so far are:
You can use the `-w` option to specify a different worker class to use:
-{% highlight console %}
+```console
$ rq worker -w 'path.to.GeventWorker'
-{% endhighlight %}
+```
## Custom Job and Queue classes
@@ -267,15 +266,15 @@ _Will be available in next release._
You can tell the worker to use a custom class for jobs and queues using
`--job-class` and/or `--queue-class`.
-{% highlight console %}
+```console
$ rq worker --job-class 'custom.JobClass' --queue-class 'custom.QueueClass'
-{% endhighlight %}
+```
Don't forget to use those same classes when enqueueing the jobs.
For example:
-{% highlight python %}
+```python
from rq import Queue
from rq.job import Job
@@ -287,14 +286,14 @@ class CustomQueue(Queue):
queue = CustomQueue('default', connection=redis_conn)
queue.enqueue(some_func)
-{% endhighlight %}
+```
## Custom DeathPenalty classes
When a Job times out, the worker will try to kill it using the supplied
`death_penalty_class` (default: `UnixSignalDeathPenalty`). This can be overridden
-if you wish to attempt to kill jobs in an application specific or 'cleaner' manner.
+if you wish to attempt to kill jobs in an application specific or 'cleaner' manner.
DeathPenalty classes are constructed with the following arguments:
`BaseDeathPenalty(timeout, JobTimeoutException, job_id=job.id)`
@@ -307,9 +306,9 @@ _New in version 0.5.5._
If you need to handle errors differently for different types of jobs, or simply want to customize
RQ's default error handling behavior, run `rq worker` using the `--exception-handler` option:
-{% highlight console %}
+```console
$ rq worker --exception-handler 'path.to.my.ErrorHandler'
# Multiple exception handlers are also supported
$ rq worker --exception-handler 'path.to.my.ErrorHandler' --exception-handler 'another.ErrorHandler'
-{% endhighlight %}
+```
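After a sweep like the one in this diff, it is worth verifying that no Jekyll highlight tags were missed in any doc file. A minimal sketch of such a check (a hypothetical helper; the `docs` root path is an assumption):

```python
import pathlib
import re

# Matches both {% highlight ... %} and {% endhighlight %}
TAG = re.compile(r'\{%\s*(?:end)?highlight')

def leftover_highlight_tags(root='docs'):
    """Return (path, line_no, line) tuples for Jekyll highlight tags still present."""
    hits = []
    for md in sorted(pathlib.Path(root).rglob('*.md')):
        for lineno, line in enumerate(md.read_text().splitlines(), start=1):
            if TAG.search(line):
                hits.append((str(md), lineno, line.strip()))
    return hits
```

An empty result means every `{% highlight %}` / `{% endhighlight %}` pair was converted to a fenced block; any hit pinpoints the file and line that still needs attention.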