author Alex Grönholm <alex.gronholm@nextday.fi> 2022-07-31 00:49:47 +0300
committer Alex Grönholm <alex.gronholm@nextday.fi> 2022-07-31 00:49:47 +0300
commit 59102a555f4203758ca337d1593847709359295e (patch)
tree 8659d41822a2c23961f1556110794deb5c727214 /docs
parent d33f393557f84b3bfaa0b0c714671da00f75606c (diff)
Updated a number of documentation pages
Diffstat (limited to 'docs')
-rw-r--r--  docs/conf.py            2
-rw-r--r--  docs/contributing.rst  52
-rw-r--r--  docs/extending.rst    174
-rw-r--r--  docs/faq.rst           88
-rw-r--r--  docs/integrations.rst  26
5 files changed, 150 insertions, 192 deletions
diff --git a/docs/conf.py b/docs/conf.py
index e1972b3..561a5b8 100644
--- a/docs/conf.py
+++ b/docs/conf.py
@@ -28,7 +28,7 @@ author = "Alex Grönholm"
# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
# ones.
-extensions = ["sphinx.ext.autodoc", "sphinx.ext.intersphinx"]
+extensions = ["sphinx.ext.autodoc", "sphinx.ext.intersphinx", "sphinx_tabs.tabs"]
# Add any paths that contain templates here, relative to this directory.
templates_path = ["_templates"]
diff --git a/docs/contributing.rst b/docs/contributing.rst
index 3b3b5f2..609e160 100644
--- a/docs/contributing.rst
+++ b/docs/contributing.rst
@@ -3,21 +3,26 @@ Contributing to APScheduler
.. highlight:: bash
-If you wish to contribute a fix or feature to APScheduler, please follow the following guidelines.
+If you wish to contribute a fix or feature to APScheduler, please follow these
+guidelines.
-When you make a pull request against the main APScheduler codebase, Github runs the test suite
-against your modified code. Before making a pull request, you should ensure that the modified code
-passes tests and code quality checks locally.
+When you make a pull request against the main APScheduler codebase, GitHub runs the test
+suite against your modified code. Before making a pull request, you should ensure that
+the modified code passes tests and code quality checks locally.
Running the test suite
----------------------
The test suite has dependencies on several external services, such as database servers.
-To make this easy for the developer, a docker-compose_ configuration is provided.
-You need both Docker_ (or a suitable replacement) and docker-compose_ installed to start them.
-Once you do, you can start the services with this command::
+To make this easy for the developer, a `docker compose`_ configuration is provided.
+To use it, you need Docker_ (or a suitable replacement). On Linux, unless you're using
+Docker Desktop, you may also need to install the compose (v2) plugin (named
+``docker-compose-plugin`` or similar) separately.
- docker-compose up -d
+Once you have the necessary tools installed, you can start the services with this
+command::
+
+    docker compose up -d
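+
+When you're done, you can shut the services down the same way::
+
+    docker compose down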
You can run the test suite two ways: either with tox_, or by running pytest_ directly.
@@ -27,13 +32,14 @@ To run tox_ against all supported (of those present on your system) Python versi
Tox will handle the installation of dependencies in separate virtual environments.
-To pass arguments to the underlying pytest_ command, you can add them after ``--``, like this::
+To pass arguments to the underlying pytest_ command, you can add them after ``--``, like
+this::
tox -- -k somekeyword
-To use pytest directly, you can set up a virtual environment and install the project in development
-mode along with its test dependencies (virtualenv activation demonstrated for Linux and macOS;
-on Windows you need ``venv\Scripts\activate`` instead)::
+To use pytest directly, you can set up a virtual environment and install the project in
+development mode along with its test dependencies (virtualenv activation demonstrated
+for Linux and macOS; on Windows you need ``venv\Scripts\activate`` instead)::
python -m venv venv
source venv/bin/activate
@@ -47,15 +53,16 @@ Building the documentation
--------------------------
To build the documentation, run ``tox -e docs``. This will place the documentation in
-``build/sphinx/html`` where you can open ``index.html`` to view the formatted documentation.
+``build/sphinx/html`` where you can open ``index.html`` to view the formatted
+documentation.
-APScheduler uses ReadTheDocs_ to automatically build the documentation so the above procedure is
-only necessary if you are modifying the documentation and wish to check the results before
-committing.
+APScheduler uses ReadTheDocs_ to automatically build the documentation so the above
+procedure is only necessary if you are modifying the documentation and wish to check the
+results before committing.
-APScheduler uses pre-commit_ to perform several code style/quality checks. It is recommended to
-activate pre-commit_ on your local clone of the repository (using ``pre-commit install``) to ensure
-that your changes will pass the same checks on GitHub.
+APScheduler uses pre-commit_ to perform several code style/quality checks. It is
+recommended to activate pre-commit_ on your local clone of the repository (using
+``pre-commit install``) to ensure that your changes will pass the same checks on GitHub.
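+
+You can also run all the configured checks manually against the entire code base::
+
+    pre-commit run --all-files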
Making a pull request on Github
-------------------------------
@@ -69,7 +76,8 @@ To get your changes merged to the main codebase, you need a Github account.
#. Create a branch for your pull request, like ``git checkout -b myfixname``
#. Make the desired changes to the code base.
#. Commit your changes locally. If your changes close an existing issue, add the text
- ``Fixes #XXX.`` or ``Closes #XXX.`` to the commit message (where XXX is the issue number).
+ ``Fixes #XXX.`` or ``Closes #XXX.`` to the commit message (where XXX is the issue
+ number).
#. Push the changeset(s) to your forked repository (``git push``)
#. Navigate to Pull requests page on the original repository (not your fork) and click
"New pull request"
@@ -79,8 +87,8 @@ To get your changes merged to the main codebase, you need a Github account.
If you have trouble, consult the `pull request making guide`_ on opensource.com.
-.. _Docker: https://docs.docker.com/engine/install/
-.. _docker-compose: https://docs.docker.com/compose/install/#install-compose
+.. _Docker: https://docs.docker.com/desktop/#download-and-install
+.. _docker compose: https://docs.docker.com/compose/
.. _tox: https://tox.readthedocs.io/en/latest/install.html
.. _pre-commit: https://pre-commit.com/#installation
.. _pytest: https://pypi.org/project/pytest/
diff --git a/docs/extending.rst b/docs/extending.rst
index 9efe8b7..adde8eb 100644
--- a/docs/extending.rst
+++ b/docs/extending.rst
@@ -2,141 +2,109 @@
Extending APScheduler
#####################
-This document is meant to explain how to develop your custom triggers, job stores, executors and
-schedulers.
+This document is meant to explain how to develop your custom triggers and data stores.
Custom triggers
---------------
-The built-in triggers cover the needs of the majority of all users.
-However, some users may need specialized scheduling logic. To that end, the trigger system was made
-pluggable.
+.. py:currentmodule:: apscheduler.triggers
-To implement your scheduling logic, subclass :class:`~apscheduler.triggers.base.BaseTrigger`.
-Look at the interface documentation in that class. Then look at the existing trigger
-implementations. That should give you a good idea what is expected of a trigger implementation.
+The built-in triggers cover the needs of the majority of users, particularly when
+combined using :class:`~.combining.AndTrigger` and :class:`~.combining.OrTrigger`.
+However, some users may need specialized scheduling logic. This can be accomplished by
+creating your own custom trigger class.
-To use your trigger, you can use :meth:`~apscheduler.schedulers.base.BaseScheduler.add_job` like
-this::
+To implement your scheduling logic, create a new class that inherits from the
+:class:`~..abc.Trigger` interface class::
- trigger = MyTrigger(arg1='foo')
- scheduler.add_job(target, trigger)
+    from __future__ import annotations
-You can also register it as a plugin so you can use the alternate form of
-``add_job``::
+    from datetime import datetime
+
+    from apscheduler.abc import Trigger
- scheduler.add_job(target, 'my_trigger', arg1='foo')
+    class MyCustomTrigger(Trigger):
+        def next(self) -> datetime | None:
+            ...  # Your custom logic here
-This is done by adding an entry point in your project's :file:`setup.py`::
+        def __getstate__(self):
+            ...  # Return the serializable state here
- ...
- entry_points={
- 'apscheduler.triggers': ['my_trigger = mytoppackage.subpackage:MyTrigger']
- }
+        def __setstate__(self, state):
+            ...  # Restore the state from the return value of __getstate__()
+Requirements and constraints for trigger classes:
-Custom job stores
------------------
+* :meth:`~..abc.Trigger.next` must always either return a timezone-aware
+  :class:`~datetime.datetime` object, or :data:`None` if a new run time cannot be
+  calculated
+* :meth:`~..abc.Trigger.next` must never return the same :class:`~datetime.datetime`
+  twice, and never one that is earlier than the previously returned one
+* :meth:`~..abc.Trigger.__setstate__` must accept the return value of
+  :meth:`~..abc.Trigger.__getstate__` and restore the trigger to a state functionally
+  identical to the original
-If you want to store your jobs in a fancy new NoSQL database, or a totally custom datastore, you
-can implement your own job store by subclassing :class:`~apscheduler.jobstores.base.BaseJobStore`.
+Triggers are stateful objects. The :meth:`~..abc.Trigger.next` method is where you
+determine the next run time based on the current state of the trigger. The trigger's
+internal state needs to be updated before returning to ensure that the trigger won't
+return the same datetime on the next call. The trigger code does **not** need to be
+thread-safe.
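+
+As an illustrative sketch (this class is hypothetical, not part of APScheduler), here is
+a trigger that satisfies the above constraints by firing exactly once, at a fixed delay
+after its creation::
+
+    from __future__ import annotations
+
+    from datetime import datetime, timedelta, timezone
+
+    from apscheduler.abc import Trigger
+
+    class OneShotDelayTrigger(Trigger):
+        """Fire once, ``delay`` seconds after the trigger is created."""
+
+        def __init__(self, delay: float) -> None:
+            # Timezone-aware run time, as the constraints above require
+            self.run_time: datetime | None = datetime.now(timezone.utc) + timedelta(
+                seconds=delay
+            )
+
+        def next(self) -> datetime | None:
+            # Hand out the stored run time once, then return None forever after,
+            # so the same datetime is never returned twice
+            run_time, self.run_time = self.run_time, None
+            return run_time
+
+        def __getstate__(self):
+            return {"run_time": self.run_time}
+
+        def __setstate__(self, state):
+            self.run_time = state["run_time"]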
-A job store typically serializes the :class:`~apscheduler.job.Job` objects given to it, and
-constructs new Job objects from binary data when they are loaded from the backing store. It is
-important that the job store restores the ``_scheduler`` and ``_jobstore_alias`` attribute of any
-Job that it creates. Refer to existing implementations for examples.
-It should be noted that :class:`~apscheduler.jobstores.memory.MemoryJobStore` is special in that it
-does not deserialize the jobs. This comes with its own problems, which it handles in its own way.
-If your job store does serialize jobs, you can of course use a serializer other than pickle.
-You should, however, use the ``__getstate__`` and ``__setstate__`` special methods to respectively
-get and set the Job state. Pickle uses them implicitly.
+Custom data stores
+------------------
-To use your job store, you can add it to the scheduler like this::
+If you want to make use of some external service to store the scheduler data, and it's
+not covered by a built-in data store implementation, you may want to create a custom
+data store class. It should be noted that custom data stores are substantially harder to
+implement than custom triggers.
- jobstore = MyJobStore()
- scheduler.add_jobstore(jobstore, 'mystore')
+Data store classes have the following design requirements:
-You can also register it as a plugin so you can use can use the alternate form of
-``add_jobstore``::
+* Must publish the appropriate events to an event broker
+* Code must be thread safe (synchronous API) or task safe (asynchronous API)
- scheduler.add_jobstore('my_jobstore', 'mystore')
+The data store class needs to inherit from either :class:`~..abc.DataStore` or
+:class:`~..abc.AsyncDataStore`, depending on whether you want to implement the store
+using synchronous or asynchronous APIs:
-This is done by adding an entry point in your project's :file:`setup.py`::
+.. tabs::
- ...
- entry_points={
- 'apscheduler.jobstores': ['my_jobstore = mytoppackage.subpackage:MyJobStore']
- }
+    .. code-tab:: python Synchronous
+
+        from apscheduler.abc import DataStore, EventBroker
-Custom executors
-----------------
+        class MyCustomDataStore(DataStore):
+            def start(self, event_broker: EventBroker) -> None:
+                ...  # Save the event broker in a member attribute and initialize the store
-If you need custom logic for executing your jobs, you can create your own executor classes.
-One scenario for this would be if you want to use distributed computing to run your jobs on other
-nodes.
+            def stop(self, *, force: bool = False) -> None:
+                ...  # Shut down the store
-Start by subclassing :class:`~apscheduler.executors.base.BaseExecutor`.
-The responsibilities of an executor are as follows:
+            # See the interface class for the rest of the abstract methods
-* Performing any initialization when ``start()`` is called
-* Releasing any resources when ``shutdown()`` is called
-* Keeping track of the number of instances of each job running on it, and refusing to run more
- than the maximum
-* Notifying the scheduler of the results of the job
+    .. code-tab:: python Asynchronous
-If your executor needs to serialize the jobs, make sure you either use pickle for it, or invoke the
-``__getstate__`` and ``__setstate__`` special methods to respectively get and set the Job state.
-Pickle uses them implicitly.
+        from apscheduler.abc import AsyncDataStore, AsyncEventBroker
-To use your executor, you can add it to the scheduler like this::
+        class MyCustomDataStore(AsyncDataStore):
+            async def start(self, event_broker: AsyncEventBroker) -> None:
+                ...  # Save the event broker in a member attribute and initialize the store
- executor = MyExecutor()
- scheduler.add_executor(executor, 'myexecutor')
+            async def stop(self, *, force: bool = False) -> None:
+                ...  # Shut down the store
-You can also register it as a plugin so you can use can use the alternate form of
-``add_executor``::
+            # See the interface class for the rest of the abstract methods
- scheduler.add_executor('my_executor', 'myexecutor')
+Handling temporary failures
++++++++++++++++++++++++++++
-This is done by adding an entry point in your project's :file:`setup.py`::
+If you plan to make the data store implementation public, it is strongly recommended
+that you make an effort to ensure that the implementation can tolerate the loss of
+connectivity to the backing store. The Tenacity_ library is used for this purpose by the
+built-in stores to retry operations in case of a disconnection. If you use it to retry
+operations when exceptions are raised, it is important to only do that in cases of
+*temporary* errors, like connectivity loss, and not in cases like authentication
+failure, missing database and so forth. See the built-in data store implementations and
+Tenacity_ documentation for more information on how to pick the exceptions on which to
+retry the operations.
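+
+As a sketch of this approach for the synchronous API (the exception type, retry
+parameters and the method shown are illustrative assumptions, not prescriptions)::
+
+    from tenacity import (
+        retry,
+        retry_if_exception_type,
+        stop_after_delay,
+        wait_exponential,
+    )
+
+    class MyCustomDataStore(DataStore):
+        @retry(
+            retry=retry_if_exception_type(ConnectionError),  # temporary errors only
+            wait=wait_exponential(multiplier=0.5, max=20),  # back off between attempts
+            stop=stop_after_delay(60),  # eventually give up
+        )
+        def get_schedules(self, ids=None):
+            ...  # Talk to the backing store; may raise ConnectionError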
- ...
- entry_points={
- 'apscheduler.executors': ['my_executor = mytoppackage.subpackage:MyExecutor']
- }
-
-
-Custom schedulers
------------------
-
-A typical situation where you would want to make your own scheduler subclass is when you want to
-integrate it with your
-application framework of choice.
-
-Your custom scheduler should always be a subclass of
-:class:`~apscheduler.schedulers.base.BaseScheduler`. But if you're not adapting to a framework that
-relies on callbacks, consider subclassing
-:class:`~apscheduler.schedulers.blocking.BlockingScheduler` instead.
-
-The most typical extension points for scheduler subclasses are:
- * :meth:`~apscheduler.schedulers.base.BaseScheduler.start`
- must be overridden to wake up the scheduler for the first time
- * :meth:`~apscheduler.schedulers.base.BaseScheduler.shutdown`
- must be overridden to release resources allocated during ``start()``
- * :meth:`~apscheduler.schedulers.base.BaseScheduler.wakeup`
- must be overridden to manage the timernotify the scheduler of changes in the job store
- * :meth:`~apscheduler.schedulers.base.BaseScheduler._create_lock`
- override if your framework uses some alternate locking implementation (like gevent)
- * :meth:`~apscheduler.schedulers.base.BaseScheduler._create_default_executor`
- override if you need to use an alternative default executor
-
-.. important:: Remember to call the superclass implementations of overridden methods, even abstract
- ones (unless they're empty).
-
-The most important responsibility of the scheduler subclass is to manage the scheduler's sleeping
-based on the return values of ``_process_jobs()``. This can be done in various ways, including
-setting timeouts in ``wakeup()`` or running a blocking loop in ``start()``. Again, see the existing
-scheduler classes for examples.
+.. _Tenacity: https://pypi.org/project/tenacity/
diff --git a/docs/faq.rst b/docs/faq.rst
index baac05a..9d94209 100644
--- a/docs/faq.rst
+++ b/docs/faq.rst
@@ -7,24 +7,28 @@ Why doesn't the scheduler run my jobs?
This could be caused by a number of things. The two most common issues are:
-#. Running the scheduler inside a uWSGI worker process while threads have not been enabled (see the
- next section for this)
-#. Running a :class:`~apscheduler.schedulers.background.BackgroundScheduler` and then letting the
- execution reach the end of the script
+#. Running the scheduler inside a uWSGI worker process while threads have not been
+ enabled (see the next section for this)
+#. Starting a synchronous scheduler with
+ :meth:`~apscheduler.schedulers.sync.Scheduler.start_in_background` and then letting
+ the execution reach the end of the script
To demonstrate the latter case, a script like this will **not work**::
- from apscheduler.schedulers.background import BackgroundScheduler
+    from apscheduler.schedulers.sync import Scheduler
+    from apscheduler.triggers.cron import CronTrigger
- def myjob():
- print('hello')
- scheduler = BackgroundScheduler()
- scheduler.start()
- scheduler.add_job(myjob, 'cron', hour=0)
+    def mytask():
+        print("hello")
-The script above will **exit** right after calling ``add_job()`` so the scheduler will not have a
-chance to run the scheduled job.
+    scheduler = Scheduler()
+    scheduler.start_in_background()
+    scheduler.add_schedule(mytask, CronTrigger(hour=0))
+
+The script above will **exit** right after calling
+:meth:`~apscheduler.schedulers.sync.Scheduler.add_schedule`, so the scheduler will not
+have a chance to run the scheduled task.
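+
+A minimal sketch of a fix (appended to the script above) is to simply keep the main
+thread alive so the background scheduler gets a chance to do its work::
+
+    import time
+
+    # Block the main thread; the scheduler keeps running in the background
+    while True:
+        time.sleep(1)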
If you're having any other issue, then enabling debug logging as instructed in the
:ref:`troubleshooting` section should shed some light into the problem.
@@ -38,66 +42,42 @@ If you're receiving an error like the following::
of <__main__.xxxxxxx object at xxxxxxxxxxxxx>>) could not be determined. Consider giving a textual reference (module:function
name) instead.
-This means that the function you are attempting to schedule has one of the following problems:
+This means that the function you are attempting to schedule has one of the following
+problems:
* It is a lambda function (e.g. ``lambda x: x + 1``)
* It is a bound method (function tied to a particular instance of some class)
* It is a nested function (function inside another function)
-* You are trying to schedule a function that is not tied to any actual module (such as a function
- defined in the REPL, hence ``__main__`` as the module name)
+* You are trying to schedule a function that is not tied to any actual module (such as a
+ function defined in the REPL, hence ``__main__`` as the module name)
-In these cases, it is impossible for the scheduler to determine a "lookup path" to find that
-specific function instance in situations where, for example, the scheduler process is restarted,
-or a process pool worker is being sent the related job object.
+In these cases, it is impossible for the scheduler to determine a "lookup path" to find
+that specific function instance in situations where, for example, the scheduler process
+is restarted, or a process pool worker is being sent the related job object.
Common workarounds for these problems include:
* Converting a lambda to a regular function
-* Moving a nested function to the module level or to class level as either a class method or a
- static method
-* In case of a bound method, passing the unbound version (``YourClass.method_name``) as the target
- function to ``add_job()`` with the class instance as the first argument (so it gets passed as the
- ``self`` argument)
-
-How can I use APScheduler with uWSGI?
-=====================================
-
-uWSGI employs some tricks which disable the Global Interpreter Lock and with it, the use of threads
-which are vital to the operation of APScheduler. To fix this, you need to re-enable the GIL using
-the ``--enable-threads`` switch. See the `uWSGI documentation <uWSGI-threads>`_ for more details.
-
-Also, assuming that you will run more than one worker process (as you typically would in
-production), you should also read the next section.
-
-.. _uWSGI-threads: https://uwsgi-docs.readthedocs.io/en/latest/WSGIquickstart.html#a-note-on-python-threads
-
-How do I use APScheduler in a web application?
-==============================================
-
-If you're running Django, you may want to check out django_apscheduler_. Note, however, that this
-is a third party library and APScheduler developers are not responsible for it.
-
-Likewise, there is an unofficial extension called Flask-APScheduler_ which may or may not be useful
-when running APScheduler with Flask.
-
-For Pyramid users, the pyramid_scheduler_ library may potentially be helpful.
-
-Other than that, you pretty much run APScheduler normally, usually using
-:class:`~apscheduler.schedulers.background.BackgroundScheduler`. If you're running an asynchronous
-web framework like aiohttp_, you probably want to use a different scheduler in order to take some
-advantage of the asynchronous nature of the framework.
+* Moving a nested function to the module level or to class level as either a class
+ method or a static method
+* In case of a bound method, passing the unbound version (``YourClass.method_name``) as
+  the target function to ``add_job()`` with the class instance as the first argument, so
+  it gets passed as the ``self`` argument (as sketched below)
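+
+A hypothetical sketch of the bound method workaround (the ``Reporter`` class is made up,
+and the exact ``add_job()`` signature may vary between APScheduler versions)::
+
+    class Reporter:
+        def report(self):
+            print("reporting")
+
+    reporter = Reporter()
+    # Pass the unbound method; the instance is delivered as the ``self`` argument
+    scheduler.add_job(Reporter.report, args=[reporter])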
Is there a graphical user interface for APScheduler?
====================================================
-No graphical interface is provided by the library itself. However, there are some third party
-implementations, but APScheduler developers are not responsible for them. Here is a potentially
-incomplete list:
+No graphical interface is provided by the library itself. However, there are some third
+party implementations, for which the APScheduler developers are not responsible. Here is
+a potentially incomplete list:
* django_apscheduler_
* apschedulerweb_
* `Nextdoor scheduler`_
+.. warning:: As of this writing, these third party offerings have not been updated to
+ work with APScheduler 4.
+
.. _django_apscheduler: https://pypi.org/project/django-apscheduler/
.. _Flask-APScheduler: https://pypi.org/project/flask-apscheduler/
.. _pyramid_scheduler: https://github.com/cadithealth/pyramid_scheduler
diff --git a/docs/integrations.rst b/docs/integrations.rst
index fbaacc5..521d0ee 100644
--- a/docs/integrations.rst
+++ b/docs/integrations.rst
@@ -4,9 +4,9 @@ Integrating with application frameworks
WSGI
----
-To integrate APScheduler with web frameworks using WSGI_ (Web Server Gateway Interface), you need
-to use the synchronous scheduler and start it as a side effect of importing the module that
-contains your application instance::
+To integrate APScheduler with web frameworks using WSGI_ (Web Server Gateway Interface),
+you need to use the synchronous scheduler and start it as a side effect of importing the
+module that contains your application instance::
from apscheduler.schedulers.sync import Scheduler
@@ -25,19 +25,21 @@ contains your application instance::
scheduler = Scheduler()
scheduler.start_in_background()
-Assuming you saved this as ``example.py``, you can now start the application with uWSGI_ with:
+Assuming you saved this as ``example.py``, you can now start the application with uWSGI_
+with:
.. code-block:: bash
uwsgi --enable-threads --http :8080 --wsgi-file example.py
-The ``--enable-threads`` option is necessary because uWSGI disables threads by default which then
-prevents the scheduler from working. See the `uWSGI documentation <uWSGI-threads>`_ for more
-details.
+The ``--enable-threads`` (or ``-T``) option is necessary because uWSGI disables threads
+by default, which prevents the scheduler from working. See the
+`uWSGI documentation <uWSGI-threads_>`_ for more details.
.. note::
- The :meth:`.schedulers.sync.Scheduler.start_in_background` method installs an :mod:`atexit`
- hook that shuts down the scheduler gracefully when the worker process exits.
+ The :meth:`.schedulers.sync.Scheduler.start_in_background` method installs an
+ :mod:`atexit` hook that shuts down the scheduler gracefully when the worker process
+ exits.
.. _WSGI: https://wsgi.readthedocs.io/en/latest/what.html
.. _uWSGI: https://www.fullstackpython.com/uwsgi.html
@@ -46,9 +48,9 @@ details.
ASGI
----
-To integrate APScheduler with web frameworks using ASGI_ (Asynchronous Server Gateway Interface),
-you need to use the asynchronous scheduler and tie its lifespan to the lifespan of the application
-by wrapping it in middleware, as follows::
+To integrate APScheduler with web frameworks using ASGI_ (Asynchronous Server Gateway
+Interface), you need to use the asynchronous scheduler and tie its lifespan to the
+lifespan of the application by wrapping it in middleware, as follows::
from apscheduler.schedulers.async_ import AsyncScheduler