:mod:`queue` --- A synchronized queue class
===========================================

.. module:: queue
   :synopsis: A synchronized queue class.

**Source code:** :source:`Lib/queue.py`

--------------

The :mod:`queue` module implements multi-producer, multi-consumer queues.
It is especially useful in threaded programming when information must be
exchanged safely between multiple threads.  The :class:`Queue` class in this
module implements all the required locking semantics.  It depends on the
availability of thread support in Python; see the :mod:`threading`
module.

The module implements three types of queue, which differ only in the order in
which the entries are retrieved.  In a FIFO queue, the first tasks added are
the first retrieved. In a LIFO queue, the most recently added entry is
the first retrieved (operating like a stack).  With a priority queue,
the entries are kept sorted (using the :mod:`heapq` module) and the
lowest valued entry is retrieved first.
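
For example, a minimal single-threaded sketch of the three orderings::

    import queue

    q = queue.Queue()
    lq = queue.LifoQueue()
    pq = queue.PriorityQueue()
    for item in (2, 3, 1):
        q.put(item)
        lq.put(item)
        pq.put(item)

    print([q.get() for _ in range(3)])    # FIFO:     [2, 3, 1]
    print([lq.get() for _ in range(3)])   # LIFO:     [1, 3, 2]
    print([pq.get() for _ in range(3)])   # priority: [1, 2, 3]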

Internally, the module uses locks to temporarily block competing threads;
however, it is not designed to handle reentrancy within a thread.

The :mod:`queue` module defines the following classes and exceptions:

.. class:: Queue(maxsize=0)

   Constructor for a FIFO queue.  *maxsize* is an integer that sets the upper
   bound on the number of items that can be placed in the queue.  Insertion will
   block once this size has been reached, until queue items are consumed.  If
   *maxsize* is less than or equal to zero, the queue size is infinite.

.. class:: LifoQueue(maxsize=0)

   Constructor for a LIFO queue.  *maxsize* is an integer that sets the upper
   bound on the number of items that can be placed in the queue.  Insertion will
   block once this size has been reached, until queue items are consumed.  If
   *maxsize* is less than or equal to zero, the queue size is infinite.


.. class:: PriorityQueue(maxsize=0)

   Constructor for a priority queue.  *maxsize* is an integer that sets the upper
   bound on the number of items that can be placed in the queue.  Insertion will
   block once this size has been reached, until queue items are consumed.  If
   *maxsize* is less than or equal to zero, the queue size is infinite.

   The lowest valued entries are retrieved first (the lowest valued entry is the
   one returned by ``sorted(list(entries))[0]``).  A typical pattern for entries
   is a tuple in the form: ``(priority_number, data)``.
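
   For example, a minimal sketch using such tuples::

      import queue

      pq = queue.PriorityQueue()
      pq.put((2, 'code'))
      pq.put((3, 'sleep'))
      pq.put((1, 'eat'))

      while not pq.empty():
          priority, data = pq.get()
          print(priority, data)   # prints 1 eat, then 2 code, then 3 sleep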


.. exception:: Empty

   Exception raised when non-blocking :meth:`~Queue.get` (or
   :meth:`~Queue.get_nowait`) is called
   on a :class:`Queue` object which is empty.


.. exception:: Full

   Exception raised when non-blocking :meth:`~Queue.put` (or
   :meth:`~Queue.put_nowait`) is called
   on a :class:`Queue` object which is full.
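
For example, both exceptions can be observed with the non-blocking calls on a
bounded queue; a minimal sketch::

    import queue

    q = queue.Queue(maxsize=1)
    q.put_nowait('only item')           # fills the single slot
    try:
        q.put_nowait('one too many')    # no free slot, raises Full
    except queue.Full:
        print('queue is full')

    q.get_nowait()                      # empties the queue
    try:
        q.get_nowait()                  # nothing left, raises Empty
    except queue.Empty:
        print('queue is empty')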


.. _queueobjects:

Queue Objects
-------------

Queue objects (:class:`Queue`, :class:`LifoQueue`, or :class:`PriorityQueue`)
provide the public methods described below.


.. method:: Queue.qsize()

   Return the approximate size of the queue.  Note that ``qsize() > 0`` doesn't
   guarantee that a subsequent ``get()`` will not block, nor will
   ``qsize() < maxsize`` guarantee that ``put()`` will not block.


.. method:: Queue.empty()

   Return ``True`` if the queue is empty, ``False`` otherwise.  If ``empty()``
   returns ``True`` it doesn't guarantee that a subsequent call to ``put()``
   will not block.  Similarly, if ``empty()`` returns ``False`` it doesn't
   guarantee that a subsequent call to ``get()`` will not block.


.. method:: Queue.full()

   Return ``True`` if the queue is full, ``False`` otherwise.  If ``full()``
   returns ``True`` it doesn't guarantee that a subsequent call to ``get()``
   will not block.  Similarly, if ``full()`` returns ``False`` it doesn't
   guarantee that a subsequent call to ``put()`` will not block.
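
Because of these races, code that needs to act only when an item or a free slot
is available commonly attempts the operation and handles the exception rather
than checking first; a minimal sketch::

    import queue

    q = queue.Queue()

    # Racy in multithreaded code: another thread may take the item between
    # the empty() check and the get() call.
    #
    #     if not q.empty():
    #         item = q.get()

    # Preferred: attempt the operation and handle the failure.
    try:
        item = q.get_nowait()
    except queue.Empty:
        item = None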


.. method:: Queue.put(item, block=True, timeout=None)

   Put *item* into the queue.  If the optional argument *block* is true and *timeout* is
   ``None`` (the default), block if necessary until a free slot is available. If
   *timeout* is a positive number, it blocks at most *timeout* seconds and raises
   the :exc:`Full` exception if no free slot was available within that time.
   Otherwise (*block* is false), put an item on the queue if a free slot is
   immediately available, else raise the :exc:`Full` exception (*timeout* is
   ignored in that case).
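
   For example, a minimal sketch of a bounded ``put()`` with a timeout::

      import queue

      q = queue.Queue(maxsize=1)
      q.put('first')                    # fills the only slot
      try:
          q.put('second', timeout=1)    # waits up to one second for a free slot
      except queue.Full:
          print('no free slot became available in time')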


.. method:: Queue.put_nowait(item)

   Equivalent to ``put(item, False)``.


.. method:: Queue.get(block=True, timeout=None)

   Remove and return an item from the queue.  If the optional argument *block* is true and
   *timeout* is ``None`` (the default), block if necessary until an item is available.
   If *timeout* is a positive number, it blocks at most *timeout* seconds and
   raises the :exc:`Empty` exception if no item was available within that time.
   Otherwise (*block* is false), return an item if one is immediately available,
   else raise the :exc:`Empty` exception (*timeout* is ignored in that case).
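
   For example, a minimal sketch of a ``get()`` with a timeout::

      import queue

      q = queue.Queue()
      try:
          item = q.get(timeout=1)       # waits up to one second for an item
      except queue.Empty:
          print('no item became available in time')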


.. method:: Queue.get_nowait()

   Equivalent to ``get(False)``.

Two methods are offered to support tracking whether enqueued tasks have been
fully processed by daemon consumer threads.


.. method:: Queue.task_done()

   Indicate that a formerly enqueued task is complete.  Used by queue consumer
   threads.  For each :meth:`get` used to fetch a task, a subsequent call to
   :meth:`task_done` tells the queue that the processing on the task is complete.

   If a :meth:`join` is currently blocking, it will resume when all items have been
   processed (meaning that a :meth:`task_done` call was received for every item
   that had been :meth:`put` into the queue).

   Raises a :exc:`ValueError` if called more times than there were items placed in
   the queue.


.. method:: Queue.join()

   Blocks until all items in the queue have been gotten and processed.

   The count of unfinished tasks goes up whenever an item is added to the queue.
   The count goes down whenever a consumer thread calls :meth:`task_done` to
   indicate that the item was retrieved and all work on it is complete. When the
   count of unfinished tasks drops to zero, :meth:`join` unblocks.


Example of how to wait for enqueued tasks to be completed::

    import queue
    import threading

    # do_work(), source() and num_worker_threads are application-specific
    # placeholders.
    def worker():
        while True:
            item = q.get()
            if item is None:
                break
            do_work(item)
            q.task_done()

    q = queue.Queue()
    threads = []
    for i in range(num_worker_threads):
        t = threading.Thread(target=worker)
        t.start()
        threads.append(t)

    for item in source():
        q.put(item)

    # block until all tasks are done
    q.join()

    # stop workers
    for i in range(num_worker_threads):
        q.put(None)
    for t in threads:
        t.join()


.. seealso::

   Class :class:`multiprocessing.Queue`
      A queue class for use in a multi-processing (rather than multi-threading)
      context.

   :class:`collections.deque` is an alternative implementation of unbounded
   queues with fast atomic :meth:`~collections.deque.append` and
   :meth:`~collections.deque.popleft` operations that do not require locking.