author    Trent Nelson <trent.nelson@snakebite.org>  2008-03-18 22:41:35 +0000
committer Trent Nelson <trent.nelson@snakebite.org>  2008-03-18 22:41:35 +0000
commit    a7feeecf2c00fabb1ff0af365335166060777f22 (patch)
tree      b6cd6dd512f2ed20bb09a189b3ec8cd3d03b7722 /Tools/i18n
parent    6d5c2ad95168b9c7043ffe9678ffb72811d3dbc4 (diff)
download  cpython-a7feeecf2c00fabb1ff0af365335166060777f22.tar.gz
- Issue #719888: Updated tokenize to use a bytes API. generate_tokens has been
  renamed tokenize and now works with bytes rather than strings. A new
  detect_encoding function has been added for determining source file encoding
  according to PEP-0263. Token sequences returned by tokenize always start with
  an ENCODING token which specifies the encoding used to decode the file. This
  token is used to encode the output of untokenize back to bytes. Credit goes to
  Michael "I'm-going-to-name-my-first-child-unittest" Foord from Resolver
  Systems for this work.
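As a minimal sketch of the bytes API described above (the source snippet and variable names are illustrative, not from the patch): detect_encoding reads the PEP 263 coding declaration from a bytes readline, and tokenize yields an ENCODING token first.

```python
import io
import tokenize

# A small source snippet with a PEP 263 coding declaration, as bytes.
source = b"# -*- coding: utf-8 -*-\nx = 1\n"

# detect_encoding returns (encoding, lines_already_read).
encoding, lines = tokenize.detect_encoding(io.BytesIO(source).readline)
print(encoding)  # utf-8

# The bytes-based tokenize always yields an ENCODING token first,
# recording the encoding used to decode the rest of the file.
tokens = list(tokenize.tokenize(io.BytesIO(source).readline))
print(tokens[0].type == tokenize.ENCODING)  # True
print(tokens[0].string)  # utf-8
```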
Diffstat (limited to 'Tools/i18n')
-rwxr-xr-x  Tools/i18n/pygettext.py  4
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/Tools/i18n/pygettext.py b/Tools/i18n/pygettext.py
index bdf52e0ff8..69a19ef902 100755
--- a/Tools/i18n/pygettext.py
+++ b/Tools/i18n/pygettext.py
@@ -631,7 +631,9 @@ def main():
         try:
             eater.set_filename(filename)
             try:
-                tokenize.tokenize(fp.readline, eater)
+                tokens = tokenize.generate_tokens(fp.readline)
+                for _token in tokens:
+                    eater(*_token)
             except tokenize.TokenError as e:
                 print('%s: %s, line %d, column %d' % (
                     e.args[0], filename, e.args[1][0], e.args[1][1]),