path: root/pygments/lexers/rdf.py
Commit history (newest first); each entry lists the commit message, author, date, and diffstat (files changed, lines -/+).
* Update copyright year to 2023. — Matthäus G. Chajdas, 2023-03-29 (1 file, -1/+1)
* all: style fixes — Georg Brandl, 2022-10-27 (1 file, -2/+2)
* Happy new year. — Georg Brandl, 2022-01-25 (1 file, -1/+1)
* Run pyupgrade across codebase to modernize syntax and patterns (#1622) — Jon Dufresne, 2021-01-17 (1 file, -1/+0)
  pyupgrade is a tool to automatically upgrade syntax for newer versions of the Python language. The project has been Python 3 only since 35544e2fc6eed0ce4a27ec7285aac71ff0ddc473, allowing for several cleanups:
  - Remove the unnecessary "-*- coding: utf-8 -*-" cookie. Python 3 reads all source files as utf-8 by default.
  - Replace IOError/EnvironmentError with OSError. Python 3 unified these exceptions; the old names are aliases only.
  - Use the shorter Python 3 super() syntax.
  - Remove the "utf8" argument from encode/decode. In Python 3, this value is the default.
  - Remove "r" from open() calls. In Python 3, this value is the default.
  - Remove the u prefix from Unicode strings. In Python 3, all strings are Unicode.
  - Replace io.open() with the builtin open(). In Python 3, these functions are functionally equivalent.
  Co-authored-by: Matthäus G. Chajdas <Anteru@users.noreply.github.com>
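  A minimal before/after sketch of these rewrites (the class, file, and variable names are invented for illustration and are not taken from the commit):

      # The idioms named in the comments are what pyupgrade removes or rewrites.
      class Base:
          def __init__(self):
              # was: 'Matthäus'.encode('utf8').decode('utf8'); utf-8 is the default now
              self.data = 'Matthäus'.encode().decode()

      class Child(Base):
          def __init__(self):
              # was: super(Child, self).__init__()
              super().__init__()

      try:
          # was: open('missing.ttl', 'r') guarded by "except IOError"
          open('missing.ttl')
      except OSError:
          pass

      # was: name = u'Turtle'
      name = 'Turtle'
      print(Child().data, name)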
* Bump copyright year. — Matthäus G. Chajdas, 2021-01-03 (1 file, -1/+1)
* testing turtle prefix names where reference starts with number (#1590) — elf Pavlik, 2020-12-05 (1 file, -11/+51)
  * testing turtle prefix names where reference starts with number
  * remove case insensitive flag from Turtle lexer
  * use same end-of-string regex as in SPARQL and ShExC
  * make example.ttl valid turtle
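  A quick way to exercise the fix (not taken from the commit's test suite; the Turtle snippet is invented) is to lex a document whose local name starts with a digit and check that no Error tokens come out:

      from pygments.lexers.rdf import TurtleLexer
      from pygments.token import Error

      doc = '@prefix ex: <http://example.org/> .\nex:123abc a ex:Thing .\n'
      for tok, value in TurtleLexer().get_tokens(doc):
          # after the fix, ex:123abc should lex cleanly rather than as Error
          assert tok is not Error, (tok, value)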
* Add missing tokens to SPARQL lexer (#1559) — Lucas Werkmeister, 2020-10-02 (1 file, -5/+5)
  @belett noticed that VALUES was missing [1]; I found the other ones by running the following snippet on the SPARQL 1.1 Query Language spec:

      new Set(Array.from(document.querySelectorAll('.grammarTable'))
          .reduce((text, elem) => text + elem.textContent)
          .match(/'[a-z0-9-_ ]*'/ig))

  I don’t know why a few keywords were missing; the docstring linked to the SPARQL 1.0 Query Language spec (also fixed here), but the lexer already contained other tokens which were only added in SPARQL 1.1, such as the aggregate functions (MIN, MAX etc.), which have already been in Pygments since the initial commit of the current history (6ded9db394).
  [1]: https://phabricator.wikimedia.org/T264175
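  A small sanity check (not part of the commit; the query string is invented) that the added keywords are now recognized, with VALUES as the reported example:

      from pygments.lexers.rdf import SparqlLexer
      from pygments.token import Error

      query = 'SELECT ?x WHERE { VALUES ?x { 1 2 3 } }'
      for tok, value in SparqlLexer().get_tokens(query):
          if value == 'VALUES':
              # expected to be lexed as a keyword now rather than an Error
              assert tok is not Error, tok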
* all: remove "u" string prefix (#1536) — Georg Brandl, 2020-09-08 (1 file, -32/+32)
  * all: remove "u" string prefix
  * util: remove unirange
    Since Python 3.3, all builds are wide unicode compatible.
  * unistring: remove support for narrow-unicode builds which stopped being relevant with Python 3.3
* Update copyright year (fixes #1514). — Matthäus G. Chajdas, 2020-08-22 (1 file, -1/+1)
* Fix remaining issues with ShExC parser. — Matthäus G. Chajdas, 2019-07-22 (1 file, -2/+2)
* Fix raw strings for regex — Lucas Werkmeister, 2019-07-21 (1 file, -2/+2)
* Add lexer for ShExC — Lucas Werkmeister, 2019-07-20 (1 file, -1/+147)
  ShExC [1] is one syntax for the ShEx (shape expressions) language [2] to describe the structure of RDF graphs (the other two syntaxes are based on JSON-LD and RDF and don't need special lexers). It is syntactically similar to SPARQL, which is why a lot of the productions of ShExCLexer are copied from SparqlLexer, but at the same time has enough differences that I feel it's better to simply copy the productions rather than trying to share them between the two lexers (compare e.g. PN_LOCAL_ESCAPE_CHARS or IRIREF). The example file purports to be a brief schema for Pygments lexers, which I put together from scratch to avoid licensing issues with existing example schemas; it should not be taken too seriously.
  [1]: https://shex.io/shex-semantics/#shexc
  [2]: https://shexspec.github.io/primer/
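  For reference, a minimal usage sketch for the new lexer (the one-line schema is invented and is not the example file mentioned above):

      from pygments import highlight
      from pygments.formatters import TerminalFormatter
      from pygments.lexers.rdf import ShExCLexer

      schema = 'PREFIX ex: <http://example.org/#>\nex:LexerShape { ex:name xsd:string }\n'
      print(highlight(schema, ShExCLexer(), TerminalFormatter()))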
* Fixup all headers and some more minor problems. (tag: 2.4.2) — Georg Brandl, 2019-05-28 (1 file, -1/+1)
* Merged in kurtmckee/pygments-main/support-tera-term (pull request #749) — Anteru, 2019-04-30 (1 file, -0/+7)
  Support Tera Term macro language
* Support the Tera Term macro language — Kurt McKee, 2018-01-28 (1 file, -0/+7)
  The patch modifies the Turtle parser in rdf.py, which uses the same file extension. A unit test file is included.
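  The extension clash can be seen from Python: with content-based guessing, a .ttl file that starts with @prefix should still resolve to the Turtle lexer (the file name and contents below are invented):

      from pygments.lexers import guess_lexer_for_filename

      turtle_src = '@prefix ex: <http://example.org/> .\nex:a ex:b ex:c .\n'
      lexer = guess_lexer_for_filename('example.ttl', turtle_src)
      print(lexer.name)  # expected: 'Turtle', not the Tera Term macro lexer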
* Fix invalid escapes due to missing raw string prefix. — Georg Brandl, 2018-11-28 (1 file, -4/+4)
* Copyright update. — Georg Brandl, 2017-01-22 (1 file, -1/+1)
* Add support for partials and path segments for Handlebars. — Christian Hammond, 2016-11-04 (1 file, -0/+270)
  This introduces support for some missing features to the Handlebars lexer: partials and path segments. Partials mostly appeared to work before, but the `>` in `{{> ... }}` would appear as a syntax error, as could other components of the partial. This change introduces support for:
  * Standard partials: `{{> partialName}}`
  * Partials with parameters: `{{> partialName varname="value"}}`
  * Dynamic partials: `{{> (partialFunc)}}`
  * Dynamic partials with lookups: `{{> (lookup ../path "partialName")}}`
  * Partial blocks: `{{> @partial-block}}`
  * Inline partials: `{{#*inline}}..{{/inline}}`
  It also introduces support for path segments, which can reference content in the current context or in a parent context. For instance, `this.name`, `this/name`, `./name`, `../name`, etc. These are all now tracked as variables.
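  A quick check (not from the commit; the template string is invented) that a partial call and parent-path references now lex without Error tokens:

      from pygments.lexers.templates import HandlebarsHtmlLexer
      from pygments.token import Error

      tpl = '{{> userCard name="Ada"}} {{../parentName}} {{this.name}}'
      for tok, value in HandlebarsHtmlLexer().get_tokens(tpl):
          # the '>' in a partial used to show up as an Error token before this change
          assert tok is not Error, (tok, value)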