-rw-r--r--  ANNOUNCE                    4
-rw-r--r--  CHANGES                    16
-rw-r--r--  README                      7
-rw-r--r--  doc/ply.html               72
-rw-r--r--  ply/lex.py                 55
-rw-r--r--  ply/yacc.py                18
-rw-r--r--  test/lex_ignore.exp         2
-rw-r--r--  test/lex_ignore2.exp        1
-rw-r--r--  test/lex_ignore2.py        29
-rw-r--r--  test/lex_re1.exp            2
-rw-r--r--  test/lex_re2.exp            2
-rw-r--r--  test/lex_re3.exp            2
-rw-r--r--  test/lex_state1.exp         2
-rw-r--r--  test/lex_state2.exp         2
-rw-r--r--  test/lex_state3.exp         2
-rw-r--r--  test/lex_state4.exp         2
-rw-r--r--  test/lex_state5.exp         2
-rw-r--r--  test/lex_state_norule.exp   2
-rw-r--r--  test/yacc_noerror.exp       2
19 files changed, 148 insertions, 76 deletions
diff --git a/ANNOUNCE b/ANNOUNCE
index d8fb81d..f409020 100644
--- a/ANNOUNCE
+++ b/ANNOUNCE
@@ -1,4 +1,4 @@
-February 17, 2007
+February 19, 2007
Announcing : PLY-2.3 (Python Lex-Yacc)
@@ -6,7 +6,7 @@ February 17, 2007
I'm pleased to announce a significant new update to PLY---a 100% Python
implementation of the common parsing tools lex and yacc. PLY-2.3 is
-a minor bug fix release.
+a minor bug fix release, but also features improved performance.
If you are new to PLY, here are a few highlights:
diff --git a/CHANGES b/CHANGES
index 6fb78b8..99b52f6 100644
--- a/CHANGES
+++ b/CHANGES
@@ -1,5 +1,21 @@
Version 2.3
-----------------------------
+02/19/07: beazley
+ Warning messages are now redirected to stderr instead of being printed
+ to standard output.
+
+02/19/07: beazley
+ Added a warning message to lex.py if it detects a literal backslash
+ character inside the t_ignore declaration. This is to help
+ avoid problems that might occur if someone accidentally defines t_ignore
+ as a Python raw string. For example:
+
+ t_ignore = r' \t'
+
+ The idea for this is from an email I received from David Cimimi, who
+ reported bizarre behavior in lexing as a result of defining t_ignore
+ as a raw string by accident.
+
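A minimal runnable sketch of the pitfall described in this entry (the
token set is invented for illustration and is not part of the PLY
distribution; under PLY 2.3, lex.lex() will also emit the new backslash
warning for it):

    import sys
    import ply.lex as lex

    tokens = ('NUMBER',)
    t_NUMBER = r'\d+'

    # Wrong: the raw string holds the characters ' ', '\' and 't',
    # so a real tab is NOT ignored and falls through to t_error().
    t_ignore = r' \t'
    # Right: in a normal string, '\t' is an actual tab character.
    # t_ignore = ' \t'

    def t_error(t):
        print >>sys.stderr, "Illegal character", repr(t.value[0])
        t.lexer.skip(1)

    lexer = lex.lex()
    lexer.input("1\t2")
    while 1:
        tok = lexer.token()
        if not tok:
            break
        print tok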
02/18/07: beazley
Performance improvements. Made some changes to the internal
table organization and LR parser to improve parsing performance.
diff --git a/README b/README
index d5e26a5..6e246c2 100644
--- a/README
+++ b/README
@@ -96,9 +96,10 @@ A simple example is found at the end of this document
Requirements
============
-PLY requires the use of Python 2.0 or greater. It should work on
-just about any platform. PLY has been tested with both CPython and
-Jython. However, it does not work with IronPython.
+PLY requires the use of Python 2.1 or greater. However, you should
+use the latest Python release if possible. It should work on just
+about any platform. PLY has been tested with both CPython and Jython.
+However, it does not seem to work with IronPython.
Resources
=========
diff --git a/doc/ply.html b/doc/ply.html
index ed4d56f..dba0c62 100644
--- a/doc/ply.html
+++ b/doc/ply.html
@@ -2433,8 +2433,27 @@ to discard huge portions of the input text to find a valid restart point.
<H3><a name="ply_nn33"></a>5.9 Line Number and Position Tracking</H3>
+Position tracking is often a tricky problem when writing compilers. By default, PLY tracks the line number and position of
+all tokens. This information is available using the following functions:
-<tt>yacc.py</tt> can automatically track line numbers and positions for all of the grammar symbols and tokens it processes. However, this
+<ul>
+<li><tt>p.lineno(num)</tt>. Return the line number for symbol <em>num</em>
+<li><tt>p.lexpos(num)</tt>. Return the lexing position for symbol <em>num</em>
+</ul>
+
+For example:
+
+<blockquote>
+<pre>
+def p_expression(p):
+ 'expression : expression PLUS expression'
+ line = p.lineno(2) # line number of the PLUS token
+ index = p.lexpos(2) # Position of the PLUS token
+</pre>
+</blockquote>
+
+As an optional feature, <tt>yacc.py</tt> can automatically track line numbers and positions for all of the grammar symbols
+as well. However, this
extra tracking requires extra processing and can significantly slow down parsing. Therefore, it must be enabled by passing the
<tt>tracking=True</tt> option to <tt>yacc.parse()</tt>. For example:
@@ -2444,11 +2463,12 @@ yacc.parse(data,tracking=True)
</pre>
</blockquote>
-Once enabled, line numbers can be retrieved using the following two functions in grammar rules:
+Once enabled, the <tt>lineno()</tt> and <tt>lexpos()</tt> methods work for all grammar symbols. Two
+additional methods are also available:
<ul>
-<li><tt>p.lineno(num)</tt>. Return the starting line number for symbol <em>num</em>
<li><tt>p.linespan(num)</tt>. Return a tuple (startline,endline) with the starting and ending line number for symbol <em>num</em>.
+<li><tt>p.lexspan(num)</tt>. Return a tuple (start,end) with the starting and ending positions for symbol <em>num</em>.
</ul>
For example:
@@ -2462,42 +2482,44 @@ def p_expression(p):
p.lineno(3) # line number of the right expression
...
start,end = p.linespan(3) # Start,end lines of the right expression
+ starti,endi = p.lexspan(3) # Start,end positions of right expression
</pre>
</blockquote>
-Since line numbers are managed internally by the parser, there is usually no need to modify the line
-numbers. However, if you want to save the line numbers in a parse-tree node, you will need to make your own
-private copy.
+Note: The <tt>lexspan()</tt> function only returns the range of values up to the start of the last grammar symbol.
<p>
-To get positional information about where tokens were lexed, the following two functions are used:
+Although it may be convenient for PLY to track position information on
+all grammar symbols, this is often unnecessary. For example, if you
+are merely using line number information in an error message, you can
+often just key off of a specific token in the grammar rule. For
+example:
-<ul>
-<li><tt>p.lexpos(num)</tt>. Return the starting lexing position for symbol <em>num</em>
-<li><tt>p.lexspan(num)</tt>. Return a tuple (start,end) with the starting and ending positions for symbol <em>num</em>.
-</ul>
+<blockquote>
+<pre>
+def p_bad_func(p):
+ 'funccall : fname LPAREN error RPAREN'
+ # Line number reported from LPAREN token
+ print "Bad function call at line", p.lineno(2)
+</pre>
+</blockquote>
-For example:
+<p>
+Similarly, you may get better parsing performance if you only propagate line number
+information where it's needed. For example:
<blockquote>
<pre>
-def p_expression(p):
- 'expression : expression PLUS expression'
- p.lexpos(1) # Lexing position of the left expression
- p.lexpos(2) # Lexing position of the PLUS operator
- p.lexpos(3) # Lexing position of the right expression
- ...
- start,end = p.lexspan(3) # Start,end positions of the right expression
+def p_fname(p):
+ 'fname : ID'
+ p[0] = (p[1],p.lineno(1))
</pre>
</blockquote>
-Note: The <tt>lexspan()</tt> function only returns the range of values up the start of the last grammar symbol.
-
-<p>
-Note: The <tt>lineno()</tt> and <tt>lexpos()</tt> methods can always be called to get positional information
-on raw tokens or terminals. This information is available regardless of whether or not the parser is tracking
-positional information for other grammar symbols.
+Finally, it should be noted that PLY does not store position information after a rule has been
+processed. If it is important for you to retain this information in an abstract syntax tree, you
+must make your own copy.
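For instance, a rule can copy the positions it needs into its own tree
node before the reduction finishes (a sketch; the Node class here is
hypothetical and not part of PLY):

    class Node:
        def __init__(self, type, children, lineno):
            self.type     = type
            self.children = children
            self.lineno   = lineno     # private copy of the position

    def p_expression_plus(p):
        'expression : expression PLUS expression'
        # p.lineno(2) refers to the PLUS token and is only meaningful
        # while this rule is being reduced, so record it now.
        p[0] = Node('binop', [p[1], p[3]], p.lineno(2))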
<H3><a name="ply_nn34"></a>5.10 AST Construction</H3>
diff --git a/ply/lex.py b/ply/lex.py
index bb0a5d6..71a2679 100644
--- a/ply/lex.py
+++ b/ply/lex.py
@@ -377,7 +377,7 @@ def _validate_file(filename):
if not prev:
counthash[name] = linen
else:
- print "%s:%d: Rule %s redefined. Previously defined on line %d" % (filename,linen,name,prev)
+ print >>sys.stderr, "%s:%d: Rule %s redefined. Previously defined on line %d" % (filename,linen,name,prev)
noerror = 0
linen += 1
return noerror
@@ -439,7 +439,6 @@ def _form_master_re(relist,reflags,ldict,toknames):
# callback function to carry out the action
if f.find("ignore_") > 0:
lexindexfunc[i] = (None,None)
- print "IGNORE", f
else:
lexindexfunc[i] = (None, toknames[f])
@@ -551,10 +550,10 @@ def lex(module=None,object=None,debug=0,optimize=0,lextab="lextab",reflags=0,now
if not optimize:
for n in tokens:
if not _is_identifier.match(n):
- print "lex: Bad token name '%s'" % n
+ print >>sys.stderr, "lex: Bad token name '%s'" % n
error = 1
if warn and lexobj.lextokens.has_key(n):
- print "lex: Warning. Token '%s' multiply defined." % n
+ print >>sys.stderr, "lex: Warning. Token '%s' multiply defined." % n
lexobj.lextokens[n] = None
else:
for n in tokens: lexobj.lextokens[n] = None
@@ -565,12 +564,12 @@ def lex(module=None,object=None,debug=0,optimize=0,lextab="lextab",reflags=0,now
try:
for c in literals:
if not (isinstance(c,types.StringType) or isinstance(c,types.UnicodeType)) or len(c) > 1:
- print "lex: Invalid literal %s. Must be a single character" % repr(c)
+ print >>sys.stderr, "lex: Invalid literal %s. Must be a single character" % repr(c)
error = 1
continue
except TypeError:
- print "lex: Invalid literals specification. literals must be a sequence of characters."
+ print >>sys.stderr, "lex: Invalid literals specification. literals must be a sequence of characters."
error = 1
lexobj.lexliterals = literals
@@ -578,25 +577,25 @@ def lex(module=None,object=None,debug=0,optimize=0,lextab="lextab",reflags=0,now
# Build statemap
if states:
if not (isinstance(states,types.TupleType) or isinstance(states,types.ListType)):
- print "lex: states must be defined as a tuple or list."
+ print >>sys.stderr, "lex: states must be defined as a tuple or list."
error = 1
else:
for s in states:
if not isinstance(s,types.TupleType) or len(s) != 2:
- print "lex: invalid state specifier %s. Must be a tuple (statename,'exclusive|inclusive')" % repr(s)
+ print >>sys.stderr, "lex: invalid state specifier %s. Must be a tuple (statename,'exclusive|inclusive')" % repr(s)
error = 1
continue
name, statetype = s
if not isinstance(name,types.StringType):
- print "lex: state name %s must be a string" % repr(name)
+ print >>sys.stderr, "lex: state name %s must be a string" % repr(name)
error = 1
continue
if not (statetype == 'inclusive' or statetype == 'exclusive'):
- print "lex: state type for state %s must be 'inclusive' or 'exclusive'" % name
+ print >>sys.stderr, "lex: state type for state %s must be 'inclusive' or 'exclusive'" % name
error = 1
continue
if stateinfo.has_key(name):
- print "lex: state '%s' already defined." % name
+ print >>sys.stderr, "lex: state '%s' already defined." % name
error = 1
continue
stateinfo[name] = statetype
@@ -630,7 +629,7 @@ def lex(module=None,object=None,debug=0,optimize=0,lextab="lextab",reflags=0,now
elif (isinstance(t, types.StringType) or isinstance(t,types.UnicodeType)):
for s in states: strsym[s].append((f,t))
else:
- print "lex: %s not defined as a function or string" % f
+ print >>sys.stderr, "lex: %s not defined as a function or string" % f
error = 1
# Sort the functions by line number
@@ -663,17 +662,17 @@ def lex(module=None,object=None,debug=0,optimize=0,lextab="lextab",reflags=0,now
else:
reqargs = 1
if nargs > reqargs:
- print "%s:%d: Rule '%s' has too many arguments." % (file,line,f.__name__)
+ print >>sys.stderr, "%s:%d: Rule '%s' has too many arguments." % (file,line,f.__name__)
error = 1
continue
if nargs < reqargs:
- print "%s:%d: Rule '%s' requires an argument." % (file,line,f.__name__)
+ print >>sys.stderr, "%s:%d: Rule '%s' requires an argument." % (file,line,f.__name__)
error = 1
continue
if tokname == 'ignore':
- print "%s:%d: Rule '%s' must be defined as a string." % (file,line,f.__name__)
+ print >>sys.stderr, "%s:%d: Rule '%s' must be defined as a string." % (file,line,f.__name__)
error = 1
continue
@@ -686,13 +685,13 @@ def lex(module=None,object=None,debug=0,optimize=0,lextab="lextab",reflags=0,now
try:
c = re.compile("(?P<%s>%s)" % (f.__name__,f.__doc__), re.VERBOSE | reflags)
if c.match(""):
- print "%s:%d: Regular expression for rule '%s' matches empty string." % (file,line,f.__name__)
+ print >>sys.stderr, "%s:%d: Regular expression for rule '%s' matches empty string." % (file,line,f.__name__)
error = 1
continue
except re.error,e:
- print "%s:%d: Invalid regular expression for rule '%s'. %s" % (file,line,f.__name__,e)
+ print >>sys.stderr, "%s:%d: Invalid regular expression for rule '%s'. %s" % (file,line,f.__name__,e)
if '#' in f.__doc__:
- print "%s:%d. Make sure '#' in rule '%s' is escaped with '\\#'." % (file,line, f.__name__)
+ print >>sys.stderr, "%s:%d. Make sure '#' in rule '%s' is escaped with '\\#'." % (file,line, f.__name__)
error = 1
continue
@@ -704,13 +703,15 @@ def lex(module=None,object=None,debug=0,optimize=0,lextab="lextab",reflags=0,now
regex_list.append("(?P<%s>%s)" % (f.__name__,f.__doc__))
else:
- print "%s:%d: No regular expression defined for rule '%s'" % (file,line,f.__name__)
+ print >>sys.stderr, "%s:%d: No regular expression defined for rule '%s'" % (file,line,f.__name__)
# Now add all of the simple rules
for name,r in strsym[state]:
tokname = toknames[name]
if tokname == 'ignore':
+ if "\\" in r:
+ print >>sys.stderr, "lex: Warning. %s contains a literal backslash '\\'" % name
ignore[state] = r
continue
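The new check is a plain substring test. In isolation (an illustration
only, not code from lex.py):

    r_raw   = r' \t'    # three characters: space, backslash, 't'
    r_plain = ' \t'     # two characters: space, tab
    print "\\" in r_raw      # True  -> the warning above is emitted
    print "\\" in r_plain    # False -> no warning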
@@ -721,19 +722,19 @@ def lex(module=None,object=None,debug=0,optimize=0,lextab="lextab",reflags=0,now
continue
if not lexobj.lextokens.has_key(tokname) and tokname.find("ignore_") < 0:
- print "lex: Rule '%s' defined for an unspecified token %s." % (name,tokname)
+ print >>sys.stderr, "lex: Rule '%s' defined for an unspecified token %s." % (name,tokname)
error = 1
continue
try:
c = re.compile("(?P<%s>%s)" % (name,r),re.VERBOSE | reflags)
if (c.match("")):
- print "lex: Regular expression for rule '%s' matches empty string." % name
+ print >>sys.stderr, "lex: Regular expression for rule '%s' matches empty string." % name
error = 1
continue
except re.error,e:
- print "lex: Invalid regular expression for rule '%s'. %s" % (name,e)
+ print >>sys.stderr, "lex: Invalid regular expression for rule '%s'. %s" % (name,e)
if '#' in r:
- print "lex: Make sure '#' in rule '%s' is escaped with '\\#'." % name
+ print >>sys.stderr, "lex: Make sure '#' in rule '%s' is escaped with '\\#'." % name
error = 1
continue
@@ -743,7 +744,7 @@ def lex(module=None,object=None,debug=0,optimize=0,lextab="lextab",reflags=0,now
regex_list.append("(?P<%s>%s)" % (name,r))
if not regex_list:
- print "lex: No rules defined for state '%s'" % state
+ print >>sys.stderr, "lex: No rules defined for state '%s'" % state
error = 1
regexs[state] = regex_list
@@ -788,15 +789,15 @@ def lex(module=None,object=None,debug=0,optimize=0,lextab="lextab",reflags=0,now
lexobj.lexstateerrorf = errorf
lexobj.lexerrorf = errorf.get("INITIAL",None)
if warn and not lexobj.lexerrorf:
- print "lex: Warning. no t_error rule is defined."
+ print >>sys.stderr, "lex: Warning. no t_error rule is defined."
# Check state information for ignore and error rules
for s,stype in stateinfo.items():
if stype == 'exclusive':
if warn and not errorf.has_key(s):
- print "lex: Warning. no error rule is defined for exclusive state '%s'" % s
+ print >>sys.stderr, "lex: Warning. no error rule is defined for exclusive state '%s'" % s
if warn and not ignore.has_key(s) and lexobj.lexignore:
- print "lex: Warning. no ignore rule is defined for exclusive state '%s'" % s
+ print >>sys.stderr, "lex: Warning. no ignore rule is defined for exclusive state '%s'" % s
elif stype == 'inclusive':
if not errorf.has_key(s):
errorf[s] = errorf.get("INITIAL",None)
diff --git a/ply/yacc.py b/ply/yacc.py
index 395e3c0..d2aab67 100644
--- a/ply/yacc.py
+++ b/ply/yacc.py
@@ -293,7 +293,9 @@ class Parser:
del symstack[-plen:]
del statestack[-plen:]
else:
- sym.lineno = 0
+ if tracking:
+ sym.lineno = lexer.lineno
+ sym.lexpos = lexer.lexpos
targ = [ sym ]
pslice.slice = targ
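The hunk above makes an empty production pick up the lexer's current
position when tracking is enabled, rather than the hard-coded line 0 it
received before. A sketch of how this surfaces in a grammar action
(rule names are invented, and it assumes the parser was started with
yacc.parse(data, tracking=True)):

    def p_empty(p):
        'empty :'
        # With tracking on, this empty reduction now reports the
        # lexer's current line instead of 0.
        print "empty reduced near line", p.lineno(0)

    def p_opt_args(p):
        '''opt_args : args
                    | empty'''
        p[0] = p[1]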
@@ -1936,8 +1938,8 @@ del _lr_goto_items
f.close()
except IOError,e:
- print "Unable to create '%s'" % filename
- print e
+ print >>sys.stderr, "Unable to create '%s'" % filename
+ print >>sys.stderr, e
return
def lr_read_tables(module=tab_module,optimize=0):
@@ -2052,7 +2054,7 @@ def yacc(method=default_lr, debug=yaccdebug, module=None, tabmodule=tab_module,
v1 = [x.split(".") for x in v]
Requires[r] = v1
except StandardError:
- print "Invalid specification for rule '%s' in require. Expected a list of strings" % r
+ print >>sys.stderr, "Invalid specification for rule '%s' in require. Expected a list of strings" % r
# Build the dictionary of terminals. We record a 0 in the
@@ -2060,12 +2062,12 @@ def yacc(method=default_lr, debug=yaccdebug, module=None, tabmodule=tab_module,
# used in the grammar
if 'error' in tokens:
- print "yacc: Illegal token 'error'. Is a reserved word."
+ print >>sys.stderr, "yacc: Illegal token 'error'. Is a reserved word."
raise YaccError,"Illegal token name"
for n in tokens:
if Terminals.has_key(n):
- print "yacc: Warning. Token '%s' multiply defined." % n
+ print >>sys.stderr, "yacc: Warning. Token '%s' multiply defined." % n
Terminals[n] = [ ]
Terminals['error'] = [ ]
@@ -2100,7 +2102,7 @@ def yacc(method=default_lr, debug=yaccdebug, module=None, tabmodule=tab_module,
global Errorfunc
Errorfunc = ef
else:
- print "yacc: Warning. no p_error() function is defined."
+ print >>sys.stderr, "yacc: Warning. no p_error() function is defined."
# Get the list of built-in functions with p_ prefix
symbols = [ldict[f] for f in ldict.keys()
@@ -2172,7 +2174,7 @@ def yacc(method=default_lr, debug=yaccdebug, module=None, tabmodule=tab_module,
f.write(_vf.getvalue())
f.close()
except IOError,e:
- print "yacc: can't create '%s'" % debugfile,e
+ print >>sys.stderr, "yacc: can't create '%s'" % debugfile,e
# Made it here. Create a parser object and set up its internal state.
# Set global parse() method to bound method of parser object.
diff --git a/test/lex_ignore.exp b/test/lex_ignore.exp
index 466ce19..6b6b67c 100644
--- a/test/lex_ignore.exp
+++ b/test/lex_ignore.exp
@@ -2,6 +2,6 @@
Traceback (most recent call last):
File "./lex_ignore.py", line 29, in <module>
lex.lex()
- File "../ply/lex.py", line 758, in lex
+ File "../ply/lex.py", line 759, in lex
raise SyntaxError,"lex: Unable to build lexer."
SyntaxError: lex: Unable to build lexer.
diff --git a/test/lex_ignore2.exp b/test/lex_ignore2.exp
new file mode 100644
index 0000000..0eb6bf2
--- /dev/null
+++ b/test/lex_ignore2.exp
@@ -0,0 +1 @@
+lex: Warning. t_ignore contains a literal backslash '\'
diff --git a/test/lex_ignore2.py b/test/lex_ignore2.py
new file mode 100644
index 0000000..fc95bd1
--- /dev/null
+++ b/test/lex_ignore2.py
@@ -0,0 +1,29 @@
+# lex_ignore2.py
+#
+# ignore declaration as a raw string
+
+import sys
+sys.path.insert(0,"..")
+
+import ply.lex as lex
+
+tokens = [
+ "PLUS",
+ "MINUS",
+ "NUMBER",
+ ]
+
+t_PLUS = r'\+'
+t_MINUS = r'-'
+t_NUMBER = r'\d+'
+
+t_ignore = r' \t'
+
+def t_error(t):
+ pass
+
+lex.lex()
+
+
diff --git a/test/lex_re1.exp b/test/lex_re1.exp
index ef8ba98..4d54f4b 100644
--- a/test/lex_re1.exp
+++ b/test/lex_re1.exp
@@ -2,6 +2,6 @@ lex: Invalid regular expression for rule 't_NUMBER'. unbalanced parenthesis
Traceback (most recent call last):
File "./lex_re1.py", line 25, in <module>
lex.lex()
- File "../ply/lex.py", line 758, in lex
+ File "../ply/lex.py", line 759, in lex
raise SyntaxError,"lex: Unable to build lexer."
SyntaxError: lex: Unable to build lexer.
diff --git a/test/lex_re2.exp b/test/lex_re2.exp
index 2e9cc73..a4e2e89 100644
--- a/test/lex_re2.exp
+++ b/test/lex_re2.exp
@@ -2,6 +2,6 @@ lex: Regular expression for rule 't_PLUS' matches empty string.
Traceback (most recent call last):
File "./lex_re2.py", line 25, in <module>
lex.lex()
- File "../ply/lex.py", line 758, in lex
+ File "../ply/lex.py", line 759, in lex
raise SyntaxError,"lex: Unable to build lexer."
SyntaxError: lex: Unable to build lexer.
diff --git a/test/lex_re3.exp b/test/lex_re3.exp
index 1205372..b9ada21 100644
--- a/test/lex_re3.exp
+++ b/test/lex_re3.exp
@@ -3,6 +3,6 @@ lex: Make sure '#' in rule 't_POUND' is escaped with '\#'.
Traceback (most recent call last):
File "./lex_re3.py", line 27, in <module>
lex.lex()
- File "../ply/lex.py", line 758, in lex
+ File "../ply/lex.py", line 759, in lex
raise SyntaxError,"lex: Unable to build lexer."
SyntaxError: lex: Unable to build lexer.
diff --git a/test/lex_state1.exp b/test/lex_state1.exp
index be59cbf..facad03 100644
--- a/test/lex_state1.exp
+++ b/test/lex_state1.exp
@@ -2,6 +2,6 @@ lex: states must be defined as a tuple or list.
Traceback (most recent call last):
File "./lex_state1.py", line 38, in <module>
lex.lex()
- File "../ply/lex.py", line 758, in lex
+ File "../ply/lex.py", line 759, in lex
raise SyntaxError,"lex: Unable to build lexer."
SyntaxError: lex: Unable to build lexer.
diff --git a/test/lex_state2.exp b/test/lex_state2.exp
index 241779c..8b04251 100644
--- a/test/lex_state2.exp
+++ b/test/lex_state2.exp
@@ -3,6 +3,6 @@ lex: invalid state specifier 'example'. Must be a tuple (statename,'exclusive|in
Traceback (most recent call last):
File "./lex_state2.py", line 38, in <module>
lex.lex()
- File "../ply/lex.py", line 758, in lex
+ File "../ply/lex.py", line 759, in lex
raise SyntaxError,"lex: Unable to build lexer."
SyntaxError: lex: Unable to build lexer.
diff --git a/test/lex_state3.exp b/test/lex_state3.exp
index 9495ad5..53ab57f 100644
--- a/test/lex_state3.exp
+++ b/test/lex_state3.exp
@@ -3,6 +3,6 @@ lex: No rules defined for state 'example'
Traceback (most recent call last):
File "./lex_state3.py", line 40, in <module>
lex.lex()
- File "../ply/lex.py", line 758, in lex
+ File "../ply/lex.py", line 759, in lex
raise SyntaxError,"lex: Unable to build lexer."
SyntaxError: lex: Unable to build lexer.
diff --git a/test/lex_state4.exp b/test/lex_state4.exp
index 9a2deed..412ae8f 100644
--- a/test/lex_state4.exp
+++ b/test/lex_state4.exp
@@ -2,6 +2,6 @@ lex: state type for state comment must be 'inclusive' or 'exclusive'
Traceback (most recent call last):
File "./lex_state4.py", line 39, in <module>
lex.lex()
- File "../ply/lex.py", line 758, in lex
+ File "../ply/lex.py", line 759, in lex
raise SyntaxError,"lex: Unable to build lexer."
SyntaxError: lex: Unable to build lexer.
diff --git a/test/lex_state5.exp b/test/lex_state5.exp
index 229c6c4..8eeae56 100644
--- a/test/lex_state5.exp
+++ b/test/lex_state5.exp
@@ -2,6 +2,6 @@ lex: state 'comment' already defined.
Traceback (most recent call last):
File "./lex_state5.py", line 40, in <module>
lex.lex()
- File "../ply/lex.py", line 758, in lex
+ File "../ply/lex.py", line 759, in lex
raise SyntaxError,"lex: Unable to build lexer."
SyntaxError: lex: Unable to build lexer.
diff --git a/test/lex_state_norule.exp b/test/lex_state_norule.exp
index 8cb1f04..7097d2a 100644
--- a/test/lex_state_norule.exp
+++ b/test/lex_state_norule.exp
@@ -2,6 +2,6 @@ lex: No rules defined for state 'example'
Traceback (most recent call last):
File "./lex_state_norule.py", line 40, in <module>
lex.lex()
- File "../ply/lex.py", line 758, in lex
+ File "../ply/lex.py", line 759, in lex
raise SyntaxError,"lex: Unable to build lexer."
SyntaxError: lex: Unable to build lexer.
diff --git a/test/yacc_noerror.exp b/test/yacc_noerror.exp
index 658f907..3ae7712 100644
--- a/test/yacc_noerror.exp
+++ b/test/yacc_noerror.exp
@@ -1,2 +1,2 @@
-yacc: Generating LALR parsing table...
yacc: Warning. no p_error() function is defined.
+yacc: Generating LALR parsing table...