| Commit message |
|
Found using: https://github.com/intgr/topy
|
it for DELETE would fail to target the correct row for DELETE.
Then, to compound matters, basic "number of rows matched" checks were
not being performed. Both issues are fixed; note however that the
"rows matched" check requires so-called "sane multi-row count"
functionality: the DBAPI's executemany() method must count up the
rows matched by the individual statements, and SQLAlchemy's dialect must
mark this feature as supported. This currently applies only to some
MySQL dialects, psycopg2, and sqlite. fixes #3006
- Enabled "sane multi-row count" checking for the psycopg2 DBAPI, as
this seems to be supported as of psycopg2 2.0.9.
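
As a rough illustration of what the "sane multi-row count" support means
in practice (a sketch only; the engine URL, table and parameter names
are made up), an executemany-style UPDATE can now have its total
"rows matched" checked when the dialect reports
``supports_sane_multi_rowcount``::

    from sqlalchemy import (Column, Integer, MetaData, String, Table,
                            bindparam, create_engine)

    engine = create_engine("postgresql+psycopg2://scott:tiger@localhost/test")
    metadata = MetaData()
    users = Table("users", metadata,
                  Column("id", Integer, primary_key=True),
                  Column("name", String(50)))

    stmt = (users.update().
            where(users.c.id == bindparam("b_id")).
            values(name=bindparam("b_name")))

    with engine.connect() as conn:
        # a list of parameter sets invokes the DBAPI's executemany()
        result = conn.execute(stmt, [{"b_id": 1, "b_name": "ed"},
                                     {"b_id": 2, "b_name": "wendy"}])
        if engine.dialect.supports_sane_multi_rowcount:
            # total rows matched across the individual statements
            assert result.rowcount == 2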
|
1. make sure pool._invalidate() sets the timestamp before
invalidating the target connection; otherwise we can demonstrate how
conn.invalidate() + pool._invalidate() can lead to an extra connection
being made.
2. to help with that, soften up the check in connection.invalidate()
when the connection is already closed; a warning is fine here.
3. add a mutex around connecting in test_max_overflow(), because the way
we're using mock depends on an iterator that needs to be synchronized.
|
implementation allows an event handler to redefine the specific mechanics
by which an arbitrary dialect invokes execute() or executemany() on a
DBAPI cursor. The new events, at this point semi-public and experimental,
are in support of some upcoming transaction-related extensions.
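
A sketch of how such a hook can be used, assuming these are the
dialect-level ``do_execute`` / ``do_executemany`` events (the in-memory
SQLite engine is just for illustration)::

    from sqlalchemy import create_engine, event

    engine = create_engine("sqlite://")

    @event.listens_for(engine, "do_execute")
    def intercept_execute(cursor, statement, parameters, context):
        # take over the cursor.execute() call ourselves; returning True
        # tells the dialect the execution has been handled
        cursor.execute(statement, parameters)
        return True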
|
after one or more :class:`.Connection` objects have been created
(such as by an orm :class:`.Session` or via explicit connect)
and the listener will pick up events from those connections.
Previously, performance concerns pushed the event transfer from
:class:`.Engine` to :class:`.Connection` at init-time only, but
we've inlined a bunch of conditional checks to make this possible
without any additional function calls. fixes #2978
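
A short sketch of the scenario this enables (in-memory SQLite used just
for illustration)::

    from sqlalchemy import create_engine, event, text

    engine = create_engine("sqlite://")
    conn = engine.connect()        # a Connection exists before the listener

    @event.listens_for(engine, "before_cursor_execute")
    def log_sql(connection, cursor, statement, parameters, context, executemany):
        print("SQL:", statement)

    conn.execute(text("select 1"))  # the listener added afterwards still fires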
|
within userland engine.dispose(); since some SQLA tests already failed when the replace
step was removed, due to those connections still being referenced, it's likely this
would create surprises for the many users that use dispose() incorrectly,
and it's not really worth dealing with. This doesn't affect the change
we made for ref: #2985.
|
recycles the connection pool when a "disconnect" condition is detected;
instead of discarding the pool and explicitly closing out connections,
the pool is retained and a "generational" timestamp is updated to
reflect the current time, thereby causing all existing connections
to be recycled when they are next checked out. This greatly simplifies
the recycle process, removes the need for "waking up" connect attempts
waiting on the old pool and eliminates the race condition that many
immediately-discarded "pool" objects could be created during the
recycle operation. fixes #2985
|
emitted for the "_cursor_execute()" method of :class:`.Connection`;
this is the "quick" executor that is used for things like
when a sequence is executed ahead of an INSERT statement, as well as
for dialect startup checks like unicode returns, charset, etc.
the :meth:`.ConnectionEvents.before_cursor_execute` event was already
invoked here. The "executemany" flag is now always set to False
here, as this event always corresponds to a single execution.
Previously the flag could be True if we were acting on behalf of
an executemany INSERT statement.
|
and :func:`.event.listens_for`. This is a convenience feature which
will wrap the given listener such that it is only invoked once.
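
For example, a minimal sketch (the listener body is arbitrary)::

    from sqlalchemy import create_engine, event

    engine = create_engine("sqlite://")

    @event.listens_for(engine, "connect", once=True)
    def on_first_connect(dbapi_connection, connection_record):
        # invoked for the first pooled connection only, then never again
        print("first connection established")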
|
to support dialect-level reflection options for all :class:`.Table`
objects reflected.
- Added a new dialect-level argument ``postgresql_ignore_search_path``;
this argument is accepted by both the :class:`.Table` constructor
as well as by the :meth:`.MetaData.reflect` method. When in use
against Postgresql, a foreign-key referenced table which specifies
a remote schema name will retain that schema name even if the name
is present in the ``search_path``; the default behavior since 0.7.3
has been that schemas present in ``search_path`` would not be copied
to reflected :class:`.ForeignKey` objects. The documentation has been
updated to describe in detail the behavior of the ``pg_get_constraintdef()``
function and how the ``postgresql_ignore_search_path`` feature essentially
determines if we will honor the schema qualification reported by
this function or not. [ticket:2922]
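
A sketch of both forms, assuming an existing Postgresql database with
cross-schema foreign keys (the URL and table name are made up)::

    from sqlalchemy import MetaData, Table, create_engine

    engine = create_engine("postgresql://scott:tiger@localhost/test")
    metadata = MetaData()

    # per-table reflection, passing the dialect-level option directly
    widgets = Table("widgets", metadata, autoload=True, autoload_with=engine,
                    postgresql_ignore_search_path=True)

    # or reflect everything, with **kw passed through to each Table
    metadata.reflect(engine, postgresql_ignore_search_path=True)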
|
would lead to ``TypeError`` when compared to non-tuple types as it attempted
to apply tuple() to the "other" object unconditionally. The
full range of Python comparison operators have now been implemented on
:class:`.RowProxy`, using an approach that guarantees a comparison
system that is equivalent to that of a tuple, and the "other" object
is only coerced if it's an instance of RowProxy. [ticket:2924]
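
Illustration of the intended behavior, as a sketch against an in-memory
SQLite connection::

    from sqlalchemy import create_engine, text

    engine = create_engine("sqlite://")
    with engine.connect() as conn:
        row = conn.execute(text("select 1 as x, 'ed' as y")).first()

        assert row == (1, "ed")    # behaves like the equivalent tuple
        assert row < (2, "aa")     # ordering comparisons work as well
        assert row != object()     # no TypeError against non-tuple types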
|
and modernize the engines chapter a bit
|
MySQL, to correctly render the SET clause among multiple columns
with the same name across tables. This also changes the name used for
the bound parameter in the SET clause to "<tablename>_<colname>" for
the non-primary table only; as this parameter is typically specified
using the :class:`.Column` object directly this should not have an
impact on applications. The fix takes effect for both
:meth:`.Table.update` as well as :meth:`.Query.update` in the ORM.
[ticket:2912]
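
A sketch of the kind of statement affected, using hypothetical ``users``
and ``addresses`` tables which both have a ``name`` column::

    from sqlalchemy import Column, Integer, MetaData, String, Table
    from sqlalchemy.dialects import mysql

    metadata = MetaData()
    users = Table("users", metadata,
                  Column("id", Integer, primary_key=True),
                  Column("name", String(50)))
    addresses = Table("addresses", metadata,
                      Column("id", Integer, primary_key=True),
                      Column("user_id", Integer),
                      Column("name", String(50)))

    stmt = (users.update().
            values({users.c.name: "ed", addresses.c.name: "home"}).
            where(users.c.id == addresses.c.user_id))

    # compiles to a MySQL multiple-table UPDATE; the SET clause now
    # distinguishes the two "name" columns correctly
    print(stmt.compile(dialect=mysql.dialect()))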
|
sent to the primary key constraint; existing tests in the PG dialect
confirm this.
|
reflection now updates the PKC in place.
- support the use case of the empty PrimaryKeyConstraint in order to specify
constraint options; the columns marked as primary_key=True will now be gathered
into the columns collection, rather than being ignored. [ticket:2910]
- add validation such that column specification should only take place
in the PrimaryKeyConstraint directly, or by using primary_key=True flags;
if both are present, they have to match exactly, otherwise the condition is
assumed to be ambiguous, and a warning is emitted; the old behavior of
using the PKC columns only is maintained.
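
A sketch of the empty-:class:`.PrimaryKeyConstraint` use case from the
second bullet above (table and constraint name are made up); the
otherwise-empty constraint supplies options only, while the column
marked ``primary_key=True`` is gathered into it::

    from sqlalchemy import Column, Integer, MetaData, PrimaryKeyConstraint, Table

    metadata = MetaData()
    t = Table(
        "t", metadata,
        Column("id", Integer, primary_key=True),
        PrimaryKeyConstraint(name="pk_t"),   # options only; no columns listed
    )

    # the "id" column is gathered into the constraint rather than ignored
    assert "id" in t.primary_key.columns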
|
- test_autoincrement_col still needs reflection overall
|
- clean up some shenanigans in reflection
|
arguments; [ticket:2866]
- add dialect specific kwarg functionality to ForeignKeyConstraint, ForeignKey
|
double close
- ensure no iterator changed size issues in testing.engines
|
type such as "charset" and "collation". While MySQL wants all character-
based CAST calls to use the CHAR type, we now create a real CHAR
object at CAST time and copy over all the parameters it has, so that
an expression like ``cast(x, mysql.TEXT(charset='utf8'))`` will
render ``CAST(t.col AS CHAR CHARACTER SET utf8)``.
- Added new "unicode returns" detection to the MySQL dialect and
to the default dialect system overall, such that any dialect
can add extra "tests" to the on-first-connect "does this DBAPI
return unicode directly?" detection. In this case, we are
adding a check specifically against the "utf8" encoding with
an explicit "utf8_bin" collation type (after checking that
this collation is available) to test for some buggy unicode
behavior observed with MySQLdb version 1.2.3. While MySQLdb
has resolved this issue as of 1.2.4, the check here should
guard against regressions. The change also allows the "unicode"
checks to log in the engine logs, which was not previously
the case. [ticket:2906]
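
The CAST rendering described in the first bullet can be sketched as
follows (table and column are hypothetical)::

    from sqlalchemy import Column, MetaData, Table, cast
    from sqlalchemy.dialects import mysql

    metadata = MetaData()
    t = Table("t", metadata, Column("col", mysql.TEXT))

    expr = cast(t.c.col, mysql.TEXT(charset="utf8"))
    print(expr.compile(dialect=mysql.dialect()))
    # CAST(t.col AS CHAR CHARACTER SET utf8)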
|
except for the changelog part
|
_reset_agent, so that it's local to the various begin_impl(),
rollback_impl(), etc. this allows setting/resetting of the flag
to be symmetric.
- don't set _reset_agent if it's not None, don't unset it if it isn't
our own transaction.
- make sure we clean it out in close().
- basically, we're dealing here with pools using "threadlocal" that have a
counter, other various mismatches that the tests bring up
- test for recover() now has to invalidate() the previous connection,
because closing it actually rolls it back (e.g. this test was relying
on the broken behavior).
|
:class:`.RootTransaction` or :class:`.TwoPhaseTransaction`
with its immediate :class:`._ConnectionFairy` as a "reset handler"
for the span of that transaction, which takes over the task
of calling commit() or rollback() for the "reset on return" behavior
of :class:`.Pool` if the transaction was not otherwise completed.
This resolves the issue that a picky transaction
like that of MySQL two-phase will be
properly closed out when the connection is closed without an
explicit rollback or commit (e.g. no longer raises "XAER_RMFAIL"
in this case - note this only shows up in logging as the exception
is not propagated within pool reset).
This issue would arise e.g. when using an orm
:class:`.Session` with ``twophase`` set, and then
:meth:`.Session.close` is called without an explicit rollback or
commit. The change also has the effect that you will now see
an explicit "ROLLBACK" in the logs when using a :class:`.Session`
object in non-autocommit mode regardless of how that session was
discarded. Thanks to Jeff Dairiki and Laurence Rowe for isolating
the issue here. [ticket:2907]
|
events including auto-invalidation, which is useful both for tests here as well as
detecting failure conditions within the "reset" or "close" cases.
- rename the arguments for PoolEvents.reset() to dbapi_connection and connection_record
to be consistent with everything else.
- add new documentation sections on invalidation, including auto-invalidation
and the invalidation process within the pool.
- add _ConnectionFairy and _ConnectionRecord to the pool documentation. Establish
docs for common _ConnectionFairy/_ConnectionRecord methods and accessors and
have PoolEvents docs refer to _ConnectionRecord,
since it is passed to all events. Rename a few _ConnectionFairy methods that are actually
private to pool such as _checkout(), _checkin() and _checkout_existing(); there should not
be any external code calling these
|
[ticket:2852]
|
we will be able to parse dialect-specific arguments from string
configuration dictionaries. Dialect classes can now provide their
own list of parameter types and string-conversion routines.
The feature is not yet used by the built-in dialects, however.
[ticket:2875]
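
The consumer of this kind of conversion is configuration-driven engine
creation; a sketch using :func:`.engine_from_config` with made-up values,
where engine-level string values are coerced to their proper types (as
noted above, dialect-specific coercions aren't wired up for the built-in
dialects yet)::

    from sqlalchemy import engine_from_config

    config = {
        "sqlalchemy.url": "sqlite:///example.db",
        "sqlalchemy.echo": "true",        # string coerced to a boolean
        "sqlalchemy.pool_recycle": "50",  # string coerced to an integer
    }
    engine = engine_from_config(config, prefix="sqlalchemy.")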
|
of dbapi.Error (such as ``TypeError``, ``NotImplementedError``, etc.)
will propagate the exception unchanged. Previously,
the error handling specific to the ``connect()`` routine would both
inappropriately run the exception through the dialect's
:meth:`.Dialect.is_disconnect` routine as well as wrap it in
a :class:`sqlalchemy.exc.DBAPIError`. It is now propagated unchanged
in the same way as occurs within the execute process. [ticket:2881]
- add tests for this in test_parseconnect, but also add tests in test_execute
to ensure the execute() behavior as well
|
"@", or "/" must be encoded." - so re-apply encoding to both password
and username, don't encode spaces as plus signs, don't encode any chars
outside of :, @, / on stringification - but we still parse for any
%XX character (is that right?)
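
A sketch of the intended round trip (credentials are made up); a password
containing "@" is given percent-encoded in the URL string, while the
parsed value contains the literal character::

    from sqlalchemy.engine.url import make_url

    u = make_url("postgresql://scott:tiger%40dog@localhost/test")
    assert u.password == "tiger@dog"   # %XX sequences are still parsed
    print(str(u))  # the "@" is re-encoded as %40 on stringification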
|
:func:`.make_url` function **no longer URL encode the password**.
Database passwords that include characters like spaces, plus signs
and anything else should now represent these characters directly,
without any URL escaping. [ticket:2873]
|
when a pre-DBAPI :class:`.StatementError` were raised within
:meth:`.Connection.execute`, causing encoding errors for
non-ASCII statements. The stringification now remains within
Python unicode thus avoiding encoding errors. [ticket:2871]
|
tuple is; this is accomplished via ensuring tuple() conversion on
both sides within the ``__eq__()`` method as well as
the addition of a ``__lt__()`` method. [ticket:2848]
|
https://bitbucket.org/zzzeek/sqlalchemy_informixdb
- remove informix, maxdb, access symbols from tests etc.
|
exposes it
|
with conjunctions, e.g.
``None``, :func:`.expression.null`, :func:`.expression.true`, and
:func:`.expression.false`, including consistency in rendering NULL
in conjunctions, "short-circuiting" of :func:`.and_` and :func:`.or_`
expressions which contain boolean constants, and rendering of
boolean constants and expressions as compared to "1" or "0" for backends
that don't feature ``true``/``false`` constants. [ticket:2804]
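
A sketch of the short-circuiting behavior (the column is arbitrary)::

    from sqlalchemy.sql import and_, column, false, or_, true

    criteria = column("x") == 5

    # the true() constant is short-circuited out of the AND
    print(and_(true(), criteria))
    # the false() constant is short-circuited out of the OR
    print(or_(criteria, false()))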
|
ipv6 addresses, e.g. surrounded by brackets. [ticket:2851]
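
For example (address and credentials are made up)::

    from sqlalchemy.engine.url import make_url

    u = make_url("postgresql://scott:tiger@[2001:db8::1]:5432/test")
    assert u.host == "2001:db8::1"
    assert u.port == 5432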
|
all known cases is provided by :class:`.DefaultDialect`, has been
tightened to expect ``include_columns`` and ``exclude_columns``
arguments without any kw option, reducing ambiguity - previously
``exclude_columns`` was missing. [ticket:2748]
|
Hide password in URL and Engine __repr__
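
Roughly, as a sketch (the URL is made up)::

    from sqlalchemy.engine.url import make_url

    u = make_url("postgresql://scott:tiger@localhost/test")
    print(repr(u))     # postgresql://scott:***@localhost/test
    print(u.password)  # tiger -- still available programmatically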
|
Fixes #2821
|
pattern
|
in-python only cols
|
of the incoming :class:`.Column` would prevent primary key constraints,
indexes, and foreign key constraints from being correctly reflected.
Also in 0.8.3. [ticket:2811]
|