author	Selwin Ong <selwin.ong@gmail.com>	2021-12-07 19:35:56 +0700
committer	GitHub <noreply@github.com>	2021-12-07 19:35:56 +0700
commit	f14dd9e2d72dfd39d832885947fda5abae73cded (patch)
tree	4f7835a3bc74a9f1a7423008a82c61ff69dbabf5
parent	93f34c796f541ea4b1c156426d6524df05753826 (diff)
Replace highlight tag in docs (#1600)
-rw-r--r--	docs/docs/exceptions.md	4
-rw-r--r--	docs/docs/workers.md	4
-rw-r--r--	docs/index.md	24
-rw-r--r--	docs/patterns/django.md	4
-rw-r--r--	docs/patterns/index.md	20
-rw-r--r--	docs/patterns/supervisor.md	8
-rw-r--r--	docs/patterns/systemd.md	4
7 files changed, 34 insertions(+), 34 deletions(-)
diff --git a/docs/docs/exceptions.md b/docs/docs/exceptions.md
index 2510ebf..8da62e9 100644
--- a/docs/docs/exceptions.md
+++ b/docs/docs/exceptions.md
@@ -91,12 +91,12 @@ def my_handler(job, *exc_info):
# do custom things here
```
-{% highlight python %}
+```python
from exception_handlers import foo_handler
w = Worker([q], exception_handlers=[foo_handler],
disable_default_exception_handler=True)
-{% endhighlight %}
+```
## Chaining Exception Handlers
diff --git a/docs/docs/workers.md b/docs/docs/workers.md
index a08f135..ad4ab65 100644
--- a/docs/docs/workers.md
+++ b/docs/docs/workers.md
@@ -138,7 +138,7 @@ Workers are registered to the system under their names, which are generated
randomly during instantiation (see [monitoring][m]). To override this default,
specify the name when starting the worker, or use the `--name` cli option.
-{% highlight python %}
+```python
from redis import Redis
from rq import Queue, Worker
@@ -147,7 +147,7 @@ queue = Queue('queue_name')
# Start a worker with a custom name
worker = Worker([queue], connection=redis, name='foo')
-{% endhighlight %}
+```
[m]: /docs/monitoring/
diff --git a/docs/index.md b/docs/index.md
index b4b7845..4728bcf 100644
--- a/docs/index.md
+++ b/docs/index.md
@@ -15,43 +15,43 @@ First, run a Redis server. You can use an existing one. To put jobs on
queues, you don't have to do anything special, just define your typically
lengthy or blocking function:
-{% highlight python %}
+```python
import requests
def count_words_at_url(url):
resp = requests.get(url)
return len(resp.text.split())
-{% endhighlight %}
+```
Then, create an RQ queue:
-{% highlight python %}
+```python
from redis import Redis
from rq import Queue
q = Queue(connection=Redis())
-{% endhighlight %}
+```
And enqueue the function call:
-{% highlight python %}
+```python
from my_module import count_words_at_url
result = q.enqueue(count_words_at_url, 'http://nvie.com')
-{% endhighlight %}
+```
Scheduling jobs is similarly easy:
-{% highlight python %}
+```python
# Schedule job to run at 9:15, October 8th
job = queue.enqueue_at(datetime(2019, 10, 8, 9, 15), say_hello)
# Schedule job to be run in 10 seconds
job = queue.enqueue_in(timedelta(seconds=10), say_hello)
-{% endhighlight %}
+```
You can also ask RQ to retry failed jobs:
-{% highlight python %}
+```python
from rq import Retry
# Retry up to 3 times, failed job will be requeued immediately
@@ -59,20 +59,20 @@ queue.enqueue(say_hello, retry=Retry(max=3))
# Retry up to 3 times, with configurable intervals between retries
queue.enqueue(say_hello, retry=Retry(max=3, interval=[10, 30, 60]))
-{% endhighlight %}
+```
### The worker
To start executing enqueued function calls in the background, start a worker
from your project's directory:
-{% highlight console %}
+```console
$ rq worker --with-scheduler
*** Listening for work on default
Got count_words_at_url('http://nvie.com') from default
Job result = 818
*** Listening for work on default
-{% endhighlight %}
+```
That's about it.
diff --git a/docs/patterns/django.md b/docs/patterns/django.md
index 4f0fc9b..bb83559 100644
--- a/docs/patterns/django.md
+++ b/docs/patterns/django.md
@@ -18,6 +18,6 @@ environment variable will already do the trick.
If `settings.py` is your Django settings file (as it is by default), use this:
-{% highlight console %}
+```console
$ DJANGO_SETTINGS_MODULE=settings rq worker high default low
-{% endhighlight %}
+```
diff --git a/docs/patterns/index.md b/docs/patterns/index.md
index bf767ee..23f67a4 100644
--- a/docs/patterns/index.md
+++ b/docs/patterns/index.md
@@ -14,7 +14,7 @@ To set up RQ on [Heroku][1], first add it to your
Create a file called `run-worker.py` with the following content (assuming you
are using [Redis To Go][2] with Heroku):
-{% highlight python %}
+```python
import os
import urlparse
from redis import Redis
@@ -35,7 +35,7 @@ if __name__ == '__main__':
with Connection(conn):
worker = Worker(map(Queue, listen))
worker.work()
-{% endhighlight %}
+```
Then, add the command to your `Procfile`:
@@ -43,13 +43,13 @@ Then, add the command to your `Procfile`:
Now, all you have to do is spin up a worker:
-{% highlight console %}
+```console
$ heroku scale worker=1
-{% endhighlight %}
+```
If you are using [Heroku Redis][5], you might need to change the Redis connection as follows:
-{% highlight console %}
+```console
conn = redis.Redis(
host=host,
password=password,
@@ -57,16 +57,16 @@ conn = redis.Redis(
ssl=True,
ssl_cert_reqs=None
)
-{% endhighlight %}
+```
and to use the CLI:
-{% highlight console %}
+```console
rq info --config rq_conf
-{% endhighlight %}{% endhighlight %}
+```
Where the rq_conf.py file looks like:
-{% highlight console %}
+```console
REDIS_HOST = "host"
REDIS_PORT = port
REDIS_PASSWORD = "password"
@@ -74,7 +74,7 @@ REDIS_SSL = True
REDIS_SSL_CA_CERTS = None
REDIS_DB = 0
REDIS_SSL_CERT_REQS = None
-{% endhighlight %}{% endhighlight %}
+```
## Putting RQ under foreman
diff --git a/docs/patterns/supervisor.md b/docs/patterns/supervisor.md
index d14a9e6..fc3e6e2 100644
--- a/docs/patterns/supervisor.md
+++ b/docs/patterns/supervisor.md
@@ -13,7 +13,7 @@ your product.
RQ can be used in combination with supervisor easily. You'd typically want to
use the following supervisor settings:
-{% highlight ini %}
+```
[program:myworker]
; Point the command to the specific rq command you want to run.
; If you use virtualenv, be sure to point it to
@@ -38,14 +38,14 @@ stopsignal=TERM
; These are up to you
autostart=true
autorestart=true
-{% endhighlight %}
+```
### Conda environments
[Conda][2] virtualenvs can be used for RQ jobs which require non-Python
dependencies. You can use a similar approach as with regular virtualenvs.
-{% highlight ini %}
+```
[program:myworker]
; Point the command to the specific rq command you want to run.
; For conda virtual environments, install RQ into your env.
@@ -70,7 +70,7 @@ stopsignal=TERM
; These are up to you
autostart=true
autorestart=true
-{% endhighlight %}
+```
[1]: http://supervisord.org/
[2]: https://conda.io/docs/
diff --git a/docs/patterns/systemd.md b/docs/patterns/systemd.md
index a9a3aec..b313b6d 100644
--- a/docs/patterns/systemd.md
+++ b/docs/patterns/systemd.md
@@ -12,7 +12,7 @@ To run multiple workers under systemd, you'll first need to create a unit file.
We can name this file `rqworker@.service` and put it in the `/etc/systemd/system`
directory (the location may differ between distributions).
-{% highlight ini %}
+```
[Unit]
Description=RQ Worker Number %i
After=network.target
@@ -31,7 +31,7 @@ Restart=always
[Install]
WantedBy=multi-user.target
-{% endhighlight %}
+```
If your unit file is properly installed, you should be able to start workers by
invoking `systemctl start rqworker@1.service`, `systemctl start rqworker@2.service`
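The change itself is mechanical: every Jekyll `{% highlight lang %}` / `{% endhighlight %}` pair in the docs becomes a Markdown fenced code block. A rough sketch of that conversion as a script (hypothetical, not part of this commit — the actual change also dropped some language hints, e.g. `ini` became a plain fence, so it was likely finished by hand):

```python
import re

def convert_highlight_tags(text: str) -> str:
    """Convert Jekyll highlight tags to Markdown fenced code blocks."""
    # {% highlight python %} -> ```python  (keeps the language hint)
    text = re.sub(r"\{%\s*highlight\s+(\w+)\s*%\}", r"```\1", text)
    # {% endhighlight %} -> ```
    text = re.sub(r"\{%\s*endhighlight\s*%\}", "```", text)
    return text
```

Running this over each `.md` file under `docs/` would reproduce most of the diff above; the doubled `{% endhighlight %}{% endhighlight %}` lines in `docs/patterns/index.md` would still need manual cleanup.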