path: root/Doc/library/tokenize.rst
author    Martin Panter <vadmium+py@gmail.com>  2016-01-16 04:32:52 +0000
committer Martin Panter <vadmium+py@gmail.com>  2016-01-16 04:32:52 +0000
commit    30ba199e174441d8d180804bec7755c03c1cacc9 (patch)
tree      2ee6b39b20a0fb45e7a50adf4c259636fb805974 /Doc/library/tokenize.rst
parent    40e931b2f2c5cf405116403cf22a5f02b282b66d (diff)
download  cpython-30ba199e174441d8d180804bec7755c03c1cacc9.tar.gz
Issue #26127: Fix links in tokenize documentation; patch by Silent Ghost
Diffstat (limited to 'Doc/library/tokenize.rst')
-rw-r--r--  Doc/library/tokenize.rst  14
1 file changed, 7 insertions(+), 7 deletions(-)
diff --git a/Doc/library/tokenize.rst b/Doc/library/tokenize.rst
index c9cb51896e..a5f3be39d7 100644
--- a/Doc/library/tokenize.rst
+++ b/Doc/library/tokenize.rst
@@ -27,7 +27,7 @@ The primary entry point is a :term:`generator`:
.. function:: tokenize(readline)
- The :func:`tokenize` generator requires one argument, *readline*, which
+ The :func:`.tokenize` generator requires one argument, *readline*, which
must be a callable object which provides the same interface as the
:meth:`io.IOBase.readline` method of file objects. Each call to the
function should return one line of input as bytes.
@@ -52,7 +52,7 @@ The primary entry point is a :term:`generator`:
.. versionchanged:: 3.3
Added support for ``exact_type``.
- :func:`tokenize` determines the source encoding of the file by looking for a
+ :func:`.tokenize` determines the source encoding of the file by looking for a
UTF-8 BOM or encoding cookie, according to :pep:`263`.
@@ -74,7 +74,7 @@ All constants from the :mod:`token` module are also exported from
.. data:: ENCODING
Token value that indicates the encoding used to decode the source bytes
- into text. The first token returned by :func:`tokenize` will always be an
+ into text. The first token returned by :func:`.tokenize` will always be an
ENCODING token.
@@ -96,17 +96,17 @@ write back the modified script.
positions) may change.
It returns bytes, encoded using the ENCODING token, which is the first
- token sequence output by :func:`tokenize`.
+ token sequence output by :func:`.tokenize`.
-:func:`tokenize` needs to detect the encoding of source files it tokenizes. The
+:func:`.tokenize` needs to detect the encoding of source files it tokenizes. The
function it uses to do this is available:
.. function:: detect_encoding(readline)
The :func:`detect_encoding` function is used to detect the encoding that
should be used to decode a Python source file. It requires one argument,
- readline, in the same way as the :func:`tokenize` generator.
+ readline, in the same way as the :func:`.tokenize` generator.
It will call readline a maximum of twice, and return the encoding used
(as a string) and a list of any lines (not decoded from bytes) it has read
@@ -120,7 +120,7 @@ function it uses to do this is available:
If no encoding is specified, then the default of ``'utf-8'`` will be
returned.
- Use :func:`open` to open Python source files: it uses
+ Use :func:`.open` to open Python source files: it uses
:func:`detect_encoding` to detect the file encoding.