| Commit message | Author | Age | Files | Lines |
|
number of columns we're actually reporting on
- add more tests for negative row index
- changelog/migration
|
via :paramref:`.create_engine.isolation_level` and
:paramref:`.Connection.execution_options.isolation_level`
parameters. fixes #3534
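A minimal sketch of the two parameters (the psycopg2 URL and the specific
isolation level values are only for illustration; any dialect that supports
isolation levels applies)::

    from sqlalchemy import create_engine

    # engine-wide default isolation level, applied to all connections
    engine = create_engine(
        "postgresql+psycopg2://scott:tiger@localhost/test",
        isolation_level="REPEATABLE READ",
    )

    # per-connection override via execution options
    with engine.connect() as conn:
        conn = conn.execution_options(isolation_level="SERIALIZABLE")
        # statements on this connection now run under SERIALIZABLE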
|
handled within visit_select(); this attribute was added in the
1.0 series to accommodate the subquery wrapping behavior of
SQL Server and Oracle while also working with positional
column targeting and no longer relying upon "key fallback"
in order to target columns in such a statement. The IBM DB2
third-party dialect also has this use case, but its implementation
is using regular expressions to rewrite the textual SELECT only
and does not make use of a "wrapped" select at this time.
The logic no longer attempts to reconcile proxy set collections as
this was not deterministic, and instead assumes that the select()
and the wrapper select() match their columns positionally,
at least for the column positions they have in common,
so it is now very simple and safe. fixes #3657.
- as a side effect of #3657 it was also revealed that the
strategy of calling upon a ResultProxy._getter was not
correctly raising NoSuchColumnError when an expected
column was not present, and instead returned None up to
loading.instances(), producing NoneType failures; added
a raiseerr argument to _getter() which is set when we
aren't expecting None, fixes #3658.
|
logging, exception, and ``repr()`` purposes now truncate very large
scalar values within each collection, including an
"N characters truncated"
notation, similar to how the display for large multiple-parameter sets
is itself truncated.
fixes #2837
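For example, with engine logging enabled, a very large bound value now shows
up shortened in the log output (a sketch; the exact truncation notation may
differ)::

    import logging
    from sqlalchemy import create_engine, text

    logging.basicConfig()
    logging.getLogger("sqlalchemy.engine").setLevel(logging.INFO)

    engine = create_engine("sqlite://")
    with engine.connect() as conn:
        conn.execute(text("CREATE TABLE t (data VARCHAR)"))
        # the logged parameter list shows a shortened value plus a
        # "(N characters truncated)" style note instead of the full string
        conn.execute(text("INSERT INTO t (data) VALUES (:data)"), data="x" * 10000)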
|
and this time also fix the cext itself to properly handle int vs. long
on py2k
|
This reverts commit de0d144a395c31eb74084177df95a4858b830f88.
Apparently the test suite is not using the cextensions correctly at the moment.
|
when the construct contains non-standard sql elements such as
returning, array index operations, or dialect-specific or custom
datatypes. a string is now returned in these cases rendering an
approximation of the construct (typically the postgresql-style
version of it) rather than raising an error. fixes #3631
- add within_group to top-level imports
- add eq_ignore_whitespace to sqlalchemy.testing imports
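A quick illustration of the stringification behavior described above (the
table and columns are invented for the example)::

    from sqlalchemy import MetaData, Table, Column, Integer, String

    metadata = MetaData()
    t = Table(
        "t", metadata,
        Column("id", Integer, primary_key=True),
        Column("data", String),
    )

    stmt = t.insert().returning(t.c.id)
    # RETURNING is not part of the generic "string SQL" dialect, but str()
    # now renders a PostgreSQL-style approximation instead of raising
    print(str(stmt))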
|
method, and its interaction with result-row processing, now allows
the columns passed to the method to be positionally matched with the
result columns in the statement, rather than matching on name alone.
One advantage of this is that when linking a textual SQL statement
to an ORM or Core table model, no system of labeling or de-duping of
common column names needs to occur, which also means there's no need
to worry about how label names match up to ORM columns and so forth. In
addition, the :class:`.ResultProxy` has been further enhanced to
map column and string keys to a row with greater precision in some
cases. fixes #3501
- reorganize the initialization of ResultMetaData for readability
and reduced complexity; use the name "cursor_description", define the
task of "merging" cursor_description with compiled column information
as its own function, and also define "name extraction" as a separate task.
- fully change the name we use in the "ambiguous column" error to be the
actual name that was ambiguous, modify the C ext also
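A sketch of the positional form of :meth:`.TextClause.columns` (the table
layouts are invented for the example)::

    from sqlalchemy import MetaData, Table, Column, Integer, String, text

    metadata = MetaData()
    users = Table(
        "users", metadata,
        Column("id", Integer), Column("name", String),
    )
    addresses = Table(
        "addresses", metadata,
        Column("id", Integer), Column("user_id", Integer),
    )

    stmt = text(
        "SELECT users.id, addresses.id, users.name "
        "FROM users JOIN addresses ON users.id = addresses.user_id"
    ).columns(
        users.c.id,       # matched to the first result column positionally
        addresses.c.id,   # duplicate "id" names need no labeling or de-duping
        users.c.name,
    )
    # rows from an execution of stmt can then be targeted by Column objects,
    # e.g. row[users.c.id] vs. row[addresses.c.id]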
|
back by using an attrgetter for the default case
|
have it; keep it simple
|
This supports the use case of an application that uses the same set of
:class:`.Table` objects in many schemas, such as schema-per-user.
A new execution option
:paramref:`.Connection.execution_options.schema_translate_map` is
added. fixes #2685
- latest tox doesn't like the {posargs} in the profile rerunner
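Roughly how the new option is used (the URL and schema names are
placeholders)::

    from sqlalchemy import MetaData, Table, Column, Integer, create_engine

    metadata = MetaData()
    # a schema-less Table; the map's None key supplies the schema per connection
    user = Table("user", metadata, Column("id", Integer, primary_key=True))

    engine = create_engine("postgresql+psycopg2://scott:tiger@localhost/test")
    connection = engine.connect().execution_options(
        schema_translate_map={None: "user_schema_one"}
    )
    # statements against "user" now render against user_schema_one."user"
    result = connection.execute(user.select())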
|
be stated in the query string for a URL. Custom plugins can
be written which will be given the chance up front to alter and/or
consume the engine's URL and keyword arguments, and then at engine
create time will be given the engine itself to allow additional
modifications or event registration. Plugins are written as a
subclass of :class:`.CreateEnginePlugin`; see that class for
details.
fixes #3536
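A minimal plugin sketch (the plugin name, the ``logging_name`` query argument
and the entry-point registration are hypothetical)::

    from sqlalchemy import event
    from sqlalchemy.engine import CreateEnginePlugin

    def on_connect(dbapi_connection, connection_record):
        # example hook the plugin attaches to each engine it is given
        pass

    class MyPlugin(CreateEnginePlugin):
        def __init__(self, url, kwargs):
            # inspect / consume custom query-string arguments from the URL
            self.log_name = url.query.get("logging_name", "default")

        def engine_created(self, engine):
            # receives the fully constructed engine; register events, etc.
            event.listen(engine, "connect", on_connect)

    # the plugin would be registered under an entry-point name and then
    # selected in the URL's query string, e.g.:
    #   create_engine("mysql+pymysql://user:pass@host/db?plugin=myplugin")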
|
no longer called for an UPDATE or DELETE statement emitted via plain
text or via the :func:`.text` construct, affecting those drivers
that erase cursor.rowcount once the cursor is closed such as SQL
Server ODBC and Firebird drivers.
fixes #3622
|
From [PEP 479](https://www.python.org/dev/peps/pep-0479/) the correct way to
terminate a generator is to return (which implicitly raises StopIteration)
rather than raise StopIteration.
Without this change, using SQLAlchemy on Python 3.5 or greater results in
warnings like:
PendingDeprecationWarning: generator '__iter__' raised StopIteration
which this commit removes.
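The pattern in question, sketched generically rather than as the exact
SQLAlchemy code::

    def iterate(cursor):
        """Yield rows until the cursor is exhausted."""
        while True:
            row = cursor.fetchone()
            if row is None:
                # under PEP 479 the generator is ended with ``return``;
                # ``raise StopIteration`` here is deprecated and becomes
                # a RuntimeError in later Python 3.x versions
                return
            yield row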
|
(cherry picked from commit 5db5e18d3babdb3ee857c075c774a585505b78ce)
|
(cherry picked from commit 81eefe038ea44a5314002483dde9cf00580df1bd)
|
(cherry picked from commit ea084bdc656a6a64db1ee582630d415bc8154505)
|
by the ORM :class:`.Query` object (part of the performance
enhancements of :ticket:`3175`) would not raise the "this result
does not return rows" exception in the case where the driver
(typically MySQL) fails to generate cursor.description correctly;
an AttributeError against NoneType would be raised instead.
fixes #3481
|
un-adjusted internal symbol names for "anonymous" labels, which
are the "foo_1" types of labels we see generated for SQL functions
without labels and similar. This was a side effect of the
performance enhancements implemented as part of references #918.
fixes #3483
|
(cherry picked from commit 3ef00e816da042d4932be53b86f76db17c800842)
|
"max_row_buffer" execution option for BufferedRowResultProxy
- also add documentation, changelog and version notes
- rework the max_row_buffer argument to be interpreted from
the execution options upfront when the BufferedRowResultProxy
is first initialized.
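Usage sketch, combined with ``stream_results`` (an existing ``connection``,
a ``process()`` callable and the table name are assumed)::

    from sqlalchemy import text

    result = connection.execution_options(
        stream_results=True,   # use the buffered/streaming result proxy
        max_row_buffer=500,    # cap how many rows are buffered at once
    ).execute(text("SELECT * FROM big_table"))

    for row in result:
        process(row)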
|
it to prevent excess memory usage with yield_per
|
Called after the :meth:`.Engine.dispose` method is called.
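A sketch of listening for the event::

    from sqlalchemy import create_engine, event

    engine = create_engine("sqlite://")

    @event.listens_for(engine, "engine_disposed")
    def receive_engine_disposed(engine):
        # runs after Engine.dispose() has released the connection pool
        print("disposed:", engine)

    engine.dispose()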
|
:func:`.engine_from_config` were not being parsed correctly;
these included ``pool_threadlocal`` and the psycopg2 argument
``use_native_unicode``. fixes #3435
- add legacy_schema_aliasing config parsing for mssql
- move use_native_unicode config arg to the psycopg2 dialect
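For reference, the kind of configuration involved (the URL and values are
placeholders)::

    from sqlalchemy import engine_from_config

    config = {
        "sqlalchemy.url": "postgresql+psycopg2://scott:tiger@localhost/test",
        "sqlalchemy.pool_threadlocal": "true",     # now coerced correctly
        "sqlalchemy.use_native_unicode": "false",  # psycopg2-specific argument
    }
    engine = engine_from_config(config, prefix="sqlalchemy.")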
|
the URL design a little simpler
|
:meth:`.URL.get_dialect` method will continue to return the
ultimate :class:`.Dialect` object when a dialect plugin is used,
without the need for the caller to be aware of the
:meth:`.Dialect.get_dialect_cls` method.
reference #3379
|
pep-249 exception names linked to exception classes of an entirely
different name, preventing SQLAlchemy's own exception wrapping from
wrapping the error appropriately.
The SQLAlchemy dialect in use needs to implement a new
accessor :attr:`.DefaultDialect.dbapi_exception_translation_map`
to support this feature; this is implemented now for the py-postgresql
dialect.
fixes #3421
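Roughly what a dialect provides (the dialect class and the driver exception
names below are hypothetical, not the actual py-postgresql ones)::

    from sqlalchemy import util
    from sqlalchemy.engine.default import DefaultDialect

    class MyDriverDialect(DefaultDialect):
        # map the driver's own exception class names to the pep-249 names
        # SQLAlchemy expects, so errors are wrapped appropriately
        dbapi_exception_translation_map = util.immutabledict({
            "UniqueConstraintViolationError": "IntegrityError",
            "ConnectionFailureError": "OperationalError",
        })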
|
functionality. Added a new "soft invalidate" feature to the
connection pool at the level of the checked out connection wrapper
as well as the :class:`._ConnectionRecord`. This works similarly
to a modern pool invalidation in that connections aren't actively
closed, but are recycled only on next checkout; this is essentially
a per-connection version of that feature. A new event
:class:`.PoolEvents.soft_invalidate` is added to complement it.
fixes #3379
- Added new flag
:attr:`.ExceptionContext.invalidate_pool_on_disconnect`.
Allows an error handler within :meth:`.ConnectionEvents.handle_error`
to maintain a "disconnect" condition, but to handle calling invalidate
on individual connections in a specific manner within the event.
- Added new event :class:`.DialectEvents.do_connect`, which allows
interception / replacement of when the :meth:`.Dialect.connect`
hook is called to create a DBAPI connection. Also added
dialect plugin hooks :meth:`.Dialect.get_dialect_cls` and
:meth:`.Dialect.engine_created` which allow external plugins to
add events to existing dialects using entry points.
fixes #3355
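Two of these hooks sketched together (``looks_like_disconnect()`` and the URL
are placeholders)::

    from sqlalchemy import create_engine, event

    engine = create_engine("postgresql+psycopg2://scott:tiger@localhost/test")

    @event.listens_for(engine, "handle_error")
    def handle_error(context):
        if looks_like_disconnect(context.original_exception):
            context.is_disconnect = True
            # keep the rest of the pool; invalidate individual connections
            # in the handler instead of discarding everything at once
            context.invalidate_pool_on_disconnect = False

    @event.listens_for(engine, "do_connect")
    def do_connect(dialect, conn_rec, cargs, cparams):
        # adjust the arguments passed to the DBAPI connect(); returning a
        # connection here would replace the connect() call entirely
        cparams["connect_timeout"] = 10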
|
fail to store the correct value for MSSQL on an INSERT where the
primary key value was present in the insert params before execution.
fixes #3360
|
That is, after exhausting all rows using the fetch methods, the
DBAPI cursor is released as before and the object may be safely
discarded, but the fetch methods may continue to be called for which
they will return an end-of-result object (None for fetchone, empty list
for fetchmany and fetchall). Only if :meth:`.ResultProxy.close`
is called explicitly will these methods raise the "result is closed"
error.
fixes #3330 fixes #3329
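Behavior sketch::

    from sqlalchemy import create_engine, text

    engine = create_engine("sqlite://")
    with engine.connect() as conn:
        result = conn.execute(text("SELECT 1 AS x"))
        rows = result.fetchall()    # exhausts the result; cursor is released
        print(result.fetchone())    # None - end-of-result, no error raised
        print(result.fetchmany(5))  # []
        result.close()              # explicit close
        # a further result.fetchone() would now raise "result is closed"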
|
persistence.py could theoretically hit the limit of the cache
(100 items by default) and at some points fail to have a key that
we check for, due to the cleanup. This has never been observed
so it's likely that, so far, the total number of INSERT, UPDATE and
DELETE statement structures in real apps has not exceeded 100
on a per-mapper basis; this could happen for apps that run a
very wide variety of modified-attribute combinations through the unit
of work, *and* which have very high concurrency going on.
This change will be a lot more significant when we open up
use of LRUCache + compiled cache with the baked query extension.
|
The "wrapping" employed by the mssql and oracle dialects using the
"iswrapper" argument was not being used intelligently by the compiler,
and the result map was being written incorrectly, using
*more* columns in the result map than were actually returned by
the statement, due to "row number" columns that are inside the
subquery. The compiler now writes out the result map fully on the
"top level" select in all cases, and for the mssql/oracle wrapping
case it extracts out
the "proxied" columns in a second step, which only includes
those columns that are proxied outwards to the top level.
This change might have implications for 3rd party dialects that
might be imitating oracle's approach. They can safely continue
to use the "iswrapper" kw which is now ignored, but they may
need to also add the _select_wraps argument as well.
|
such that they are matched to the received result set positionally,
rather than by name. Originally, this was seen as a way to handle
cases where we had columns returned with difficult-to-predict names,
though in modern use that issue has been overcome by anonymous
labeling. In this version, the approach basically reduces function
call count per-result by a few dozen calls, or more for larger
sets of result columns. The approach still degrades into a modern
version of the old approach if textual elements modify the result
map, or if any discrepancy in size exists between
the compiled set of columns versus what was received, so there's no
issue for partially or fully textual compilation scenarios where these
lists might not line up. fixes #918
- callcounts still need to be adjusted down for this so zoomark
tests won't pass at the moment
|
:meth:`.Connection.invalidate` method, or an invalidation due
to a database disconnect, would fail if the
``isolation_level`` parameter had been used with
:meth:`.Connection.execution_options`; the "finalizer" that resets
the isolation level would be called on the no longer opened connection.
fixes #3302
|
with :meth:`.Connection.execution_options` when a :class:`.Transaction`
is in play; DBAPIs and/or SQLAlchemy dialects such as psycopg2,
MySQLdb may implicitly roll back or commit the transaction, or
not change the setting until the next transaction, so this is never safe.
- Added new parameter :paramref:`.Session.connection.execution_options`
which may be used to set up execution options on a :class:`.Connection`
when it is first checked out, before the transaction has begun.
This is used to set up options such as isolation level on the
connection before the transaction starts.
- added new documentation section
detailing best practices for setting transaction isolation with
sessions.
fixes #3296
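The new hook sketched (assumes an existing ``engine``)::

    from sqlalchemy.orm import Session

    session = Session(bind=engine)

    # set up isolation level on the connection as it is first checked out,
    # before the session's transaction begins
    conn = session.connection(
        execution_options={"isolation_level": "SERIALIZABLE"}
    )
    # ORM work on this session now proceeds under SERIALIZABLE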
|
for a for-loop through an empty tuple. we add one more local flag
to handle the logic without repetition of dialect.do_execute()
calls.
|