author    gbrandl <devnull@localhost>  2006-10-28 22:09:41 +0200
committer gbrandl <devnull@localhost>  2006-10-28 22:09:41 +0200
commit    7f8c2354a497187f33e7c7fc7e8bcdc5f5a8b7a5 (patch)
tree      aa49a45ff8216f4502eedba057d621aa0e22589b /docs/src
parent    d336a1d4c0375b2b7cf266b6073932e3abb0968d (diff)
download  pygments-7f8c2354a497187f33e7c7fc7e8bcdc5f5a8b7a5.tar.gz
[svn] Some fixes, add docs for new features.
Diffstat (limited to 'docs/src')
 docs/src/api.txt        | 42
 docs/src/quickstart.txt | 53
 docs/src/tokens.txt     |  3
 3 files changed, 90 insertions(+), 8 deletions(-)
diff --git a/docs/src/api.txt b/docs/src/api.txt
index 90317147..1d32c59f 100644
--- a/docs/src/api.txt
+++ b/docs/src/api.txt
@@ -43,6 +43,25 @@ def `get_lexer_for_filename(fn, **options):`
Will raise `ValueError` if no lexer for that filename is found.
+def `get_lexer_for_mimetype(mime, **options):`
+ Return a `Lexer` subclass instance that has `mime` in its `mimetypes`
+ list. The lexer is given the `options` at its instantiation.
+
+ Will raise `ValueError` if no lexer for that mimetype is found.
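+
+ A minimal sketch (the repr output is abbreviated here, as elsewhere in
+ this document):
+
+ .. sourcecode:: pycon
+
+     >>> from pygments.lexers import get_lexer_for_mimetype
+     >>> get_lexer_for_mimetype('text/x-ruby')
+     <pygments.lexers.RubyLexer>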
+
+def `guess_lexer(text, **options):`
+ Return a `Lexer` subclass instance that's guessed from the content of
+ `text`. For that, the `analyse_text()` method of every known lexer
+ class is called with the text as its argument, and the lexer that
+ returned the highest value is instantiated and returned.
+
+ `ValueError` is raised if no lexer thinks it can handle the content.
+
+def `guess_lexer_for_filename(text, filename, **options):`
+ As `guess_lexer()`, but only lexers which have a pattern in `filenames`
+ or `alias_filenames` that matches `filename` are taken into consideration.
+
+ `ValueError` is raised if no lexer thinks it can handle the content.
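+
+ A quick sketch of both guessing functions (which lexer is actually
+ chosen depends on the analysis scores; repr output abbreviated):
+
+ .. sourcecode:: pycon
+
+     >>> from pygments.lexers import guess_lexer, guess_lexer_for_filename
+     >>> guess_lexer('#!/usr/bin/env perl\nprint "Hello";\n')
+     <pygments.lexers.PerlLexer>
+     >>> guess_lexer_for_filename('index.html',
+     ...     '<html><body>hello</body></html>')
+     <pygments.lexers.HtmlLexer>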
Functions from `pygments.formatters`:
@@ -101,6 +120,12 @@ def `get_tokens_unprocessed(self, text):`
This method must be overridden by subclasses.
+def `analyse_text(text):`
+ A static method which is called for lexer guessing. It should analyze
+ the text and return a float in the range ``0.0`` to ``1.0``.
+ If it returns ``0.0``, the lexer will not be selected as the most
+ probable one; if it returns ``1.0``, it will be selected immediately.
+
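+ As an illustration, a hypothetical lexer could implement the method with
+ a shebang heuristic (a sketch only, not an actual Pygments lexer):
+
+ .. sourcecode:: python
+
+     import re
+
+     from pygments.lexer import RegexLexer
+
+     class ExampleLexer(RegexLexer):
+         # ... name, filenames and token definitions omitted ...
+
+         def analyse_text(text):
+             # no `self` -- Pygments treats this as a static method
+             if re.match(r'#!.*\bexample\b', text):
+                 return 1.0  # shebang names our interpreter: certain
+             if '<example>' in text[:100]:
+                 return 0.1  # weak hint only
+             return 0.0      # nothing recognized
+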
For a list of known tokens have a look at the `Tokens`_ page.
The lexer also recognizes the following attributes that are used by the
@@ -114,8 +139,21 @@ builtin lookup mechanism.
the lexer from a list.
`filenames`
- A list of `fnmatch` patterns that can be used to find a lexer for
- a given filename.
+ A list of `fnmatch` patterns that match filenames which contain
+ content for this lexer. The patterns in this list should be unique among
+ all lexers.
+
+`alias_filenames`
+ A list of `fnmatch` patterns that match filenames which may or may not
+ contain content for this lexer. This list is used by the
+ `guess_lexer_for_filename()` function to determine which lexers are
+ included in guessing the correct one. That means that e.g. every lexer
+ for a template language embedded in HTML should include ``*.html`` in
+ this list.
+
+`mimetypes`
+ A list of MIME types for content that can be lexed with this
+ lexer.
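+
+A hypothetical lexer for a template language embedded in HTML could declare
+all three attributes like this (the names and values are illustrative only):
+
+.. sourcecode:: python
+
+ from pygments.lexer import RegexLexer
+
+ class ExampleTemplateLexer(RegexLexer):
+     name = 'Example Template'
+     aliases = ['example-tpl']
+     # unique among all lexers
+     filenames = ['*.extpl']
+     # shared with other HTML lexers; only consulted by
+     # guess_lexer_for_filename()
+     alias_filenames = ['*.html']
+     mimetypes = ['text/x-example-template']
+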
.. _Tokens: tokens.txt
diff --git a/docs/src/quickstart.txt b/docs/src/quickstart.txt
index 5b8cdfaf..0d9a62bc 100644
--- a/docs/src/quickstart.txt
+++ b/docs/src/quickstart.txt
@@ -87,17 +87,58 @@ one of the following methods:
.. sourcecode:: pycon
- >>> from pygments.lexers import get_lexer_by_name, get_lexer_for_filename
+ >>> from pygments.lexers import (get_lexer_by_name,
+ ... get_lexer_for_filename, get_lexer_for_mimetype)
+
>>> get_lexer_by_name('python')
- <pygments.lexers.agile.PythonLexer object at 0xb7bd6d0c>
- >>> get_lexer_for_filename('spam.py')
- <pygments.lexers.agile.PythonLexer object at 0xb7bd6b2c>
+ <pygments.lexers.PythonLexer>
+
+ >>> get_lexer_for_filename('spam.rb')
+ <pygments.lexers.RubyLexer>
+
+ >>> get_lexer_for_mimetype('text/x-perl')
+ <pygments.lexers.PerlLexer>
+
+All these functions accept keyword arguments; they will be passed to the lexer
+as options.
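+
+For instance, the standard ``stripall`` lexer option (strip all leading and
+trailing whitespace from the input) can be passed this way:
+
+.. sourcecode:: pycon
+
+ >>> get_lexer_by_name('python', stripall=True)
+ <pygments.lexers.PythonLexer>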
-The same API is available for formatters: use `get_formatter_by_name` and
-`get_formatter_for_filename` from the `pygments.formatters` module
+A similar API is available for formatters: use `get_formatter_by_name()` and
+`get_formatter_for_filename()` from the `pygments.formatters` module
for this purpose.
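+
+For example (repr output abbreviated as above):
+
+.. sourcecode:: pycon
+
+ >>> from pygments.formatters import (get_formatter_by_name,
+ ... get_formatter_for_filename)
+ >>> get_formatter_by_name('html')
+ <pygments.formatters.HtmlFormatter>
+ >>> get_formatter_for_filename('out.tex')
+ <pygments.formatters.LatexFormatter>
+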
+Guessing lexers
+===============
+
+If you don't know the language of the content, or you want to highlight a
+file whose extension is ambiguous, such as ``.html`` (which could contain
+plain HTML or some template tags), use these functions:
+
+.. sourcecode:: pycon
+
+ >>> from pygments.lexers import guess_lexer, guess_lexer_for_filename
+
+ >>> guess_lexer('#!/usr/bin/python\nprint "Hello World!"')
+ <pygments.lexers.PythonLexer>
+
+ >>> guess_lexer_for_filename('test.py', 'print "Hello World!"')
+ <pygments.lexers.PythonLexer>
+
+`guess_lexer()` passes the given content to each lexer class's
+`analyse_text()` method and returns an instance of the lexer whose method
+reported the highest score.
+
+All lexers have two different filename pattern lists: a primary and a
+secondary one. The `get_lexer_for_filename()` function only uses the
+primary list, whose entries are supposed to be unique among all lexers.
+`guess_lexer_for_filename()`, however, first collects all lexers whose
+primary or secondary filename patterns match the given filename. If only
+one lexer matches, it is returned; otherwise the guessing mechanism of
+`guess_lexer()` is used on the matching lexers.
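+
+For example, an ``.html`` file that contains template tags can be resolved
+to a template lexer rather than plain HTML (a sketch; the result depends on
+the analysis scores):
+
+.. sourcecode:: pycon
+
+ >>> guess_lexer_for_filename('index.html',
+ ... '{% block body %}Hello{% endblock %}')
+ <pygments.lexers.HtmlDjangoLexer>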
+
+As usual, keyword arguments to these functions are given to the created lexer
+as options.
+
+
Command line usage
==================
diff --git a/docs/src/tokens.txt b/docs/src/tokens.txt
index 47d8feea..daaf1eca 100644
--- a/docs/src/tokens.txt
+++ b/docs/src/tokens.txt
@@ -56,6 +56,9 @@ Normally you just create token types using the already defined aliases. For each
of those token aliases, a number of subtypes exists (excluding the special tokens
`Token.Text`, `Token.Error` and `Token.Other`).
+The `is_token_subtype()` function in the `pygments.token` module can be used
+to test if a token type is a subtype of another (e.g. `Name.Tag` is a subtype
+of `Name`).
+
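+For example:
+
+.. sourcecode:: pycon
+
+ >>> from pygments.token import Name, is_token_subtype
+ >>> is_token_subtype(Name.Tag, Name)
+ True
+ >>> is_token_subtype(Name, Name.Tag)
+ False
+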
Keyword Tokens
==============