-rw-r--r--  doc/build/core/pooling.rst         147
-rw-r--r--  doc/build/orm/collections.rst       17
-rw-r--r--  doc/build/orm/inheritance.rst        8
-rw-r--r--  doc/build/outline.txt              169
-rw-r--r--  lib/sqlalchemy/orm/collections.py    2
5 files changed, 122 insertions, 221 deletions
diff --git a/doc/build/core/pooling.rst b/doc/build/core/pooling.rst
index 7af56eab8..edb6a334e 100644
--- a/doc/build/core/pooling.rst
+++ b/doc/build/core/pooling.rst
@@ -5,18 +5,18 @@ Connection Pooling
.. module:: sqlalchemy.pool
-SQLAlchemy ships with a connection pooling framework that integrates
-with the Engine system and can also be used on its own to manage plain
-DB-API connections.
-
-At the base of any database helper library is a system for efficiently
-acquiring connections to the database. Since the establishment of a
-database connection is typically a somewhat expensive operation, an
-application needs a way to get at database connections repeatedly
-without incurring the full overhead each time. Particularly for
+The establishment of a
+database connection is typically a somewhat expensive operation, and
+applications need a way to get at database connections repeatedly
+with minimal overhead. Particularly for
server-side web applications, a connection pool is the standard way to
-maintain a group or "pool" of active database connections which are
-reused from request to request in a single server process.
+maintain a "pool" of active database connections in memory which are
+reused across requests.
+
+SQLAlchemy includes several connection pool implementations
+which integrate with the :class:`.Engine`. They can also be used
+directly for applications that want to add pooling to an otherwise
+plain DBAPI approach.
Connection Pool Configuration
-----------------------------
@@ -36,55 +36,118 @@ directly to :func:`~sqlalchemy.create_engine` as keyword arguments:
pool_size=20, max_overflow=0)
In the case of SQLite, a :class:`SingletonThreadPool` is provided instead,
-to provide compatibility with SQLite's restricted threading model.
+to provide compatibility with SQLite's restricted threading model, as well
+as to provide a reasonable default behavior to SQLite "memory" databases,
+which maintain their entire dataset within the scope of a single connection.
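The point about "memory" databases is easy to demonstrate with the standard-library ``sqlite3`` module alone (no SQLAlchemy involved): each new connection to ``:memory:`` receives its own private, empty database, so a pool that handed out fresh connections would appear to lose all data.

```python
import sqlite3

# Each ":memory:" connection owns a completely separate database.
c1 = sqlite3.connect(":memory:")
c1.execute("CREATE TABLE t (x INTEGER)")
c1.execute("INSERT INTO t VALUES (1)")

c2 = sqlite3.connect(":memory:")
tables = c2.execute(
    "SELECT name FROM sqlite_master WHERE type='table'").fetchall()
# c2 sees no tables at all; the data lives only on c1,
# which is why a single-connection pool is the sane default here.
```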
+
+All SQLAlchemy pool implementations have in common
+that none of them "pre-create" connections - all implementations wait
+until first use before creating a connection. At that point, if
+no additional concurrent checkout requests for more connections
+are made, no additional connections are created. This is why it's perfectly
+fine for :func:`.create_engine` to default to using a :class:`.QueuePool`
+of size five without regard to whether or not the application really needs five connections
+queued up - the pool would only grow to that size if the application
+actually used five connections concurrently, in which case the usage of a
+small pool is an entirely appropriate default behavior.
+
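The lazy-creation behavior described above can be pictured with a short plain-Python sketch. This is a toy illustration only, not SQLAlchemy's implementation, and the names are hypothetical; size-cap enforcement is omitted for brevity.

```python
import threading

class LazyPool:
    """Toy pool: no connection is created until the first checkout,
    and idle connections are reused before new ones are made."""

    def __init__(self, creator, size=5):
        self._creator = creator   # callable returning a new connection
        self._size = size         # nominal size; cap logic omitted here
        self._idle = []
        self.created = 0
        self._lock = threading.Lock()

    def connect(self):
        with self._lock:
            if self._idle:
                return self._idle.pop()   # reuse before creating
            self.created += 1
            return self._creator()

    def checkin(self, conn):
        with self._lock:
            self._idle.append(conn)

pool = LazyPool(object, size=5)   # "size five", nothing created yet
c1 = pool.connect()               # first use: exactly one connection
pool.checkin(c1)
c2 = pool.connect()               # reused; still only one created
```

A pool of "size five" thus costs nothing unless five connections are actually in use at once.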
+Switching Pool Implementations
+------------------------------
+
+The usual way to use a different kind of pool with :func:`.create_engine`
+is to use the ``poolclass`` argument. This argument accepts a class
+imported from the ``sqlalchemy.pool`` module, and handles the details
+of building the pool for you. Common options include specifying
+:class:`.QueuePool` with SQLite::
+
+ from sqlalchemy.pool import QueuePool
+ engine = create_engine('sqlite:///file.db', poolclass=QueuePool)
+
+Disabling pooling using :class:`.NullPool`::
+
+ from sqlalchemy.pool import NullPool
+ engine = create_engine(
+ 'postgresql+psycopg2://scott:tiger@localhost/test',
+ poolclass=NullPool)
+
+Using a Custom Connection Function
+----------------------------------
+
+All :class:`.Pool` classes accept an argument ``creator`` which is
+a callable that creates a new connection. :func:`.create_engine`
+accepts this function to pass onto the pool via an argument of
+the same name::
+ import sqlalchemy.pool as pool
+ import psycopg2
-Custom Pool Construction
-------------------------
+ def getconn():
+ c = psycopg2.connect(user='ed', host='127.0.0.1', dbname='test')
+ # do things with 'c' to set up
+ return c
-:class:`Pool` instances may be created directly for your own use or to
-supply to :func:`sqlalchemy.create_engine` via the ``pool=``
-keyword argument.
+ engine = create_engine('postgresql+psycopg2://', creator=getconn)
-Constructing your own pool requires supplying a callable function the
-Pool can use to create new connections. The function will be called
-with no arguments.
+For most "initialize on connection" routines, it's more convenient
+to use a :class:`.PoolListener`, so that the usual URL argument to
+:func:`.create_engine` is still usable. ``creator`` is there as
+a total last resort for when a DBAPI has some form of ``connect``
+that is not at all supported by SQLAlchemy.
-Through this method, custom connection schemes can be made, such as a
-using connections from another library's pool, or making a new
-connection that automatically executes some initialization commands::
+Constructing a Pool
+------------------------
+
+To use a :class:`.Pool` by itself, the ``creator`` function is
+the only argument that's required and is passed first, followed
+by any additional options::
import sqlalchemy.pool as pool
import psycopg2
def getconn():
c = psycopg2.connect(user='ed', host='127.0.0.1', dbname='test')
- # execute an initialization function on the connection before returning
- c.cursor.execute("setup_encodings()")
return c
- p = pool.QueuePool(getconn, max_overflow=10, pool_size=5)
+ mypool = pool.QueuePool(getconn, max_overflow=10, pool_size=5)
-Or with SingletonThreadPool::
+DBAPI connections can then be procured from the pool using the :meth:`.Pool.connect`
+method. The return value of this method is a DBAPI connection that's contained
+within a transparent proxy::
- import sqlalchemy.pool as pool
- import sqlite
+ # get a connection
+ conn = mypool.connect()
- p = pool.SingletonThreadPool(lambda: sqlite.connect(filename='myfile.db'))
+ # use it
+ cursor = conn.cursor()
+ cursor.execute("select foo")
+The purpose of the transparent proxy is to intercept the ``close()`` call,
+such that instead of the DBAPI connection being closed, it's returned to the
+pool::
-Builtin Pool Implementations
-----------------------------
+ # "close" the connection. Returns
+ # it to the pool.
+ conn.close()
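The interception described here can be sketched in plain Python. The classes below are hypothetical stand-ins (the real proxy lives inside ``sqlalchemy.pool``), shown only to make the ``close()`` behavior concrete.

```python
import sqlite3

class ConnectionProxy:
    """Hypothetical stand-in for the pool's transparent proxy:
    close() checks the DBAPI connection back in instead of closing it."""

    def __init__(self, dbapi_conn, pool):
        self._conn = dbapi_conn
        self._pool = pool

    def __getattr__(self, name):
        # delegate cursor(), commit(), etc. to the real DBAPI connection
        return getattr(self._conn, name)

    def close(self):
        # intercepted: return to the pool rather than really closing
        self._pool.checkin(self._conn)

class TinyPool:
    """Toy pool handing out proxied connections."""

    def __init__(self, creator):
        self._creator = creator
        self._idle = []

    def connect(self):
        raw = self._idle.pop() if self._idle else self._creator()
        return ConnectionProxy(raw, self)

    def checkin(self, raw):
        self._idle.append(raw)

p = TinyPool(lambda: sqlite3.connect(":memory:"))
conn = p.connect()
raw = conn._conn
conn.cursor().execute("select 1")
conn.close()            # proxied: the DBAPI connection survives
again = p.connect()     # and the same connection is handed out again
```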
-.. autoclass:: AssertionPool
- :show-inheritance:
+The proxy also returns its contained DBAPI connection to the pool
+when it is garbage collected, though this is not guaranteed to occur
+immediately in all Python implementations (it is typical with CPython).
- .. automethod:: __init__
+A particular pre-created :class:`.Pool` can be shared with one or more
+engines by passing it to the ``pool`` argument of :func:`.create_engine`::
-.. autoclass:: NullPool
- :show-inheritance:
+ e = create_engine('postgresql://', pool=mypool)
- .. automethod:: __init__
+Pool Event Listeners
+--------------------
+
+Connection pools support an event interface that allows hooks to execute
+upon first connect, upon each new connection, and upon checkout and
+checkin of connections. See :class:`.PoolListener` for details.
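The hook points named above (new connection, checkout, checkin) can be pictured with a small plain-Python sketch. The class names are hypothetical and this is not SQLAlchemy code; the real interface is the :class:`.PoolListener` class mentioned above.

```python
class RecordingListener:
    """Hypothetical listener recording which hooks fire, in order."""

    def __init__(self):
        self.events = []

    def connect(self, dbapi_con):
        self.events.append("connect")    # fires once per new connection

    def checkout(self, dbapi_con):
        self.events.append("checkout")   # connection handed to the app

    def checkin(self, dbapi_con):
        self.events.append("checkin")    # connection returned to the pool

class ListeningPool:
    """Toy pool wiring the hooks in at the points the text describes."""

    def __init__(self, creator, listener):
        self._creator = creator
        self._listener = listener
        self._idle = []

    def connect(self):
        if self._idle:
            conn = self._idle.pop()
        else:
            conn = self._creator()
            self._listener.connect(conn)
        self._listener.checkout(conn)
        return conn

    def checkin(self, conn):
        self._listener.checkin(conn)
        self._idle.append(conn)

listener = RecordingListener()
p = ListeningPool(object, listener)
c = p.connect()     # new connection: "connect" then "checkout"
p.checkin(c)        # "checkin"
p.connect()         # reused connection: "checkout" only
```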
+
+Builtin Pool Implementations
+----------------------------
.. autoclass:: sqlalchemy.pool.Pool
@@ -103,10 +166,14 @@ Builtin Pool Implementations
.. automethod:: __init__
-.. autoclass:: StaticPool
+.. autoclass:: AssertionPool
:show-inheritance:
- .. automethod:: __init__
+.. autoclass:: NullPool
+ :show-inheritance:
+
+.. autoclass:: StaticPool
+ :show-inheritance:
Pooling Plain DB-API Connections
diff --git a/doc/build/orm/collections.rst b/doc/build/orm/collections.rst
index a9a160d9d..73ba1277b 100644
--- a/doc/build/orm/collections.rst
+++ b/doc/build/orm/collections.rst
@@ -51,17 +51,20 @@ applied as well as limits and offsets, either explicitly or via array slices:
posts = jack.posts[5:20]
The dynamic relationship supports limited write operations, via the
-``append()`` and ``remove()`` methods. Since the read side of the dynamic
-relationship always queries the database, changes to the underlying collection
-will not be visible until the data has been flushed:
-
-.. sourcecode:: python+sql
+``append()`` and ``remove()`` methods::
oldpost = jack.posts.filter(Post.headline=='old post').one()
jack.posts.remove(oldpost)
jack.posts.append(Post('new post'))
+Since the read side of the dynamic relationship always queries the
+database, changes to the underlying collection will not be visible
+until the data has been flushed. However, as long as "autoflush" is
+enabled on the :class:`.Session` in use, this will occur
+automatically each time the collection is about to emit a
+query.
+
To place a dynamic relationship on a backref, use ``lazy='dynamic'``:
.. sourcecode:: python+sql
@@ -135,7 +138,7 @@ values accessible through an attribute on the parent instance. By default,
this collection is a ``list``::
mapper(Parent, properties={
- children = relationship(Child)
+ 'children' : relationship(Child)
})
parent = Parent()
@@ -151,7 +154,7 @@ default list, by specifying the ``collection_class`` option on
# use a set
mapper(Parent, properties={
- children = relationship(Child, collection_class=set)
+ 'children' : relationship(Child, collection_class=set)
})
parent = Parent()
diff --git a/doc/build/orm/inheritance.rst b/doc/build/orm/inheritance.rst
index 71b3fb820..65bcd06f9 100644
--- a/doc/build/orm/inheritance.rst
+++ b/doc/build/orm/inheritance.rst
@@ -237,7 +237,7 @@ Using :func:`~sqlalchemy.orm.query.Query.with_polymorphic` with
``with_polymorphic`` setting.
Advanced Control of Which Tables are Queried
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
++++++++++++++++++++++++++++++++++++++++++++++
The :meth:`.Query.with_polymorphic` method and configuration works fine for
simplistic scenarios. However, it currently does not work with any
@@ -249,8 +249,8 @@ use the :class:`.Table` objects directly and construct joins manually. For exam
query the name of employees with particular criterion::
session.query(Employee.name).\
- outerjoin((engineer, engineer.c.employee_id==Employee.c.employee_id)).\
- outerjoin((manager, manager.c.employee_id==Employee.c.employee_id)).\
+ outerjoin((engineer, engineer.c.employee_id==Employee.employee_id)).\
+ outerjoin((manager, manager.c.employee_id==Employee.employee_id)).\
filter(or_(Engineer.engineer_info=='w', Manager.manager_data=='q'))
The base table, in this case the "employees" table, isn't always necessary. A
@@ -265,7 +265,7 @@ what's specified in the :meth:`.Session.query`, :meth:`.Query.filter`, or
session.query(engineer.c.id).filter(engineer.c.engineer_info==manager.c.manager_data)
Creating Joins to Specific Subtypes
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
++++++++++++++++++++++++++++++++++++
The :func:`~sqlalchemy.orm.interfaces.PropComparator.of_type` method is a
helper which allows the construction of joins along
diff --git a/doc/build/outline.txt b/doc/build/outline.txt
deleted file mode 100644
index 408fdeae4..000000000
--- a/doc/build/outline.txt
+++ /dev/null
@@ -1,169 +0,0 @@
-introduction intro.rst
-SQLAlchemy ORM orm.rst
- tutorial tutorial.rst
- mapper configuration mapper_config.rst
- customizing column properties
- subset of table columns
- attr names for mapped columns
- multiple cols for single attribute
- deferred col loading
- ref: deferred
- ref: defer
- ref: undefer
- ref: undefer-group
- sql expressions as mapped attributes
- ref: column_property
- changing attribute behavior
- simple validators
- ref: validates
- using descriptors
- ref: synonym
- ref: hybrid
- custom comparators
- ref: PropComparator (needs the inherits)
- ref: comparable_property
- composite column types
- ref: composite
- Mapper API
- ref: mapper
- ref: Mapper
- relationship configuration relationships.rst
- basic patterns
- adjacency list
- join conditions
- mutually dependent rows
- mutable primary keys update cascades
- loading options (blah blah eager loading is better ad-hoc link)
- ref: relationship()
- inheritance configuration and usage inheritance.rst
- session usage session.rst
- reference
- ref: session
- ref: object_session
- ref: attribute stuff
- query object query.rst
- ref: query
- eagerloading loading.rst
- what kind of loading to use ?
- routing explicit joins
- reference
- ref: joinedload
- ref: subqueryload
- ref: lazyload
- ref: noload
- ref: contains_eager
- other options
- ref:
- constructs
- events events.rst
- collection configuration collections.rst
- dictionaries
- custom collections
- collection decorators
- instr and custom types
- large collection techniques
- ORM Extensions extensions.rst
- association proxy
- declarative
- orderinglist
- horizontal shard
- sqlsoup
- examples examples.rst
- deprecated interfaces deprecated.rst
-
-SQLAlchemy Core core.rst
- Expression Tutorial sqlexpression.rst
- Expression API Reference expression_api.rst
- engines engines.rst
- intro
- db support
- create_engine URLs
- DBAPI arguments
- logging
- api reference
- ref: create_engine
- ref: engine_from_config
- ref: engine
- ref: resultproxy
- ref: URL
-
- connection pools pooling.rst
- events
-
- connections / transactions connections.rst
- more on connections
- using transactions with connection
- understanding autocommit
- connectionless/implicit
- threadlocal
- events
- Connection API ref
- ref: connection
- ref: connectable
- ref: transaction
-
- schema schema.rst
- metadata
- accessing tables and columns
- creating dropping
- binding
- reflecting
- ..
- ..
- ..
- ref: inspector
- specifying schema name
- backend options
- table metadata API
- ref: metadata
- ref: table (put the tometadata section in the method)
- ref: column
- column insert/update defaults
- scalar defaults
- python functions
- context sensitive
- sql expressions
- server side defaults
- triggered
- defining sequences
- column default API
- ref: fetchedvalue
- ref: seq
- etc
- defining constraints and indexes
- defining foreign keys
- creating/dropping via alter
- onupdate ondelete
- Foreign Key API
- unique constraint
- ref: unique
- check constraint
- ref: check
- indexes
- ref: index
- customizing ddl
- control ddl sequences
- custom ddl
- events
- DDL API
- :ddlelement
- ddl
- ddlevents
- createtable
- droptable
- et etc
- datatypes types.rst
- generic
- sql standard
- vendor
- custom
- - changing type compilation
- - wrapping types with typedec
- - user defined types
- base API
- - abstract
- - typeengine
- - mutable
- - etc
- custom compilation compiler.rst
-Dialects
diff --git a/lib/sqlalchemy/orm/collections.py b/lib/sqlalchemy/orm/collections.py
index a9ad34239..884ec1122 100644
--- a/lib/sqlalchemy/orm/collections.py
+++ b/lib/sqlalchemy/orm/collections.py
@@ -189,7 +189,7 @@ class collection(object):
The recipe decorators all require parens, even those that take no
arguments::
- @collection.adds('entity'):
+ @collection.adds('entity')
def insert(self, position, entity): ...
@collection.removes_return()