author    Marc-André Lemburg <mal@egenix.com>   2008-08-07 18:54:33 +0000
committer Marc-André Lemburg <mal@egenix.com>   2008-08-07 18:54:33 +0000
commit    9474310515fdfc07d315230fd7039ce912afb002 (patch)
tree      90302a8176bf837998f2454d8ef422ca3de3c7f9 /Parser/tokenizer.c
parent    a71903fb010d90a76a800d9f6f5713f2e2136817 (diff)
Rename PyUnicode_AsString -> _PyUnicode_AsString and
PyUnicode_AsStringAndSize -> _PyUnicode_AsStringAndSize to mark
them for interpreter internal use only.
We'll have to rework these APIs or create new ones for the
purpose of accessing the UTF-8 representation of Unicode objects
for 3.1.
Diffstat (limited to 'Parser/tokenizer.c')
-rw-r--r--   Parser/tokenizer.c   2
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/Parser/tokenizer.c b/Parser/tokenizer.c
index 487405f20e..e2da3e5b6b 100644
--- a/Parser/tokenizer.c
+++ b/Parser/tokenizer.c
@@ -391,7 +391,7 @@ fp_readl(char *s, int size, struct tok_state *tok)
     }
     if (PyUnicode_CheckExact(bufobj)) {
-        buf = PyUnicode_AsStringAndSize(bufobj, &buflen);
+        buf = _PyUnicode_AsStringAndSize(bufobj, &buflen);
         if (buf == NULL) {
             goto error;
         }