Fixed regression where the standalone :func:`_sql.distinct()` construct, when
SELECTed directly, would fail to be locatable in the result set by column
identity, which is how the ORM locates columns. While standalone
:func:`_sql.distinct()` is not oriented towards being SELECTed directly (use
:meth:`_sql.select.distinct` for a regular ``SELECT DISTINCT ...``), it was
previously usable to a limited extent in this way (though it wouldn't work in
subqueries, for example). Column targeting for unary expressions such as
"DISTINCT <col>" has been improved so that this case works again, and an
additional improvement has been made so that usage of this form in a subquery
at least generates valid SQL, which was not the case previously.
The change additionally enhances the ability to target elements in
``row._mapping`` based on SQL expression objects in ORM-enabled
SELECT statements, whether the statement is invoked by
``connection.execute()`` or ``session.execute()``.
Fixes: #6008
Change-Id: I5cfa39435f5418861d70a7db8f52ab4ced6a792e
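As a hedged illustration of the repaired behavior (not part of the commit;
the mapping and in-memory engine here are made up for the example), a
standalone distinct() that is SELECTed directly can again be targeted in the
result row by column identity, e.g.::

    from sqlalchemy import Column, Integer, String, create_engine, distinct, select
    from sqlalchemy.orm import Session, declarative_base

    Base = declarative_base()

    class User(Base):
        __tablename__ = "user_account"
        id = Column(Integer, primary_key=True)
        name = Column(String)

    engine = create_engine("sqlite://")
    Base.metadata.create_all(engine)

    with Session(engine) as session:
        session.add_all([User(name="spongebob"), User(name="sandy")])
        session.commit()

        expr = distinct(User.name)  # standalone distinct(); limited use case
        row = session.execute(select(expr)).first()

        # column identity targeting in row._mapping, which is also how the
        # ORM locates columns internally
        value = row._mapping[expr]

        # the usual form for SELECT DISTINCT remains select(...).distinct()
        session.execute(select(User.name).distinct()).all()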
Added new execution option
:paramref:`_engine.Connection.execution_options.logging_token`. This option
will add an additional per-message token to log messages generated by the
:class:`_engine.Connection` as it executes statements. This token is not
part of the logger name itself (that part can be affected using the
existing :paramref:`_sa.create_engine.logging_name` parameter), and so is
appropriate for ad-hoc connection use without the side effect of creating
many new loggers. The option can be set at the level of
:class:`_engine.Connection` or :class:`_engine.Engine`.
Fixes: #5911
Change-Id: Iec9c39b868b3578fcedc1c094dace5b6f64bacea
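A hedged sketch of how the option might be used (the in-memory URL and token
names are illustrative only)::

    from sqlalchemy import create_engine, text

    engine = create_engine("sqlite://", echo=True)

    # per-connection token; no new logger objects are created
    with engine.connect() as conn:
        conn = conn.execution_options(logging_token="worker-1")
        conn.execute(text("select 1"))

    # the option may also be set engine-wide
    tokenized_engine = engine.execution_options(logging_token="analytics")
    with tokenized_engine.connect() as conn:
        conn.execute(text("select 1"))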
Change-Id: Ic5bb19ca8be3cb47c95a0d3315d84cb484bac47c
Fixes: #5642
Change-Id: I07a77483e6e2ec593d87d3d3467a4339c5f77a26
It's better; the majority of these changes look more readable to me.
Also found some docstrings that had formatting / quoting issues.
Change-Id: I582a45fde3a5648b2f36bab96bad56881321899b
Fixed an issue where, even though the method claims to be matching up
columns positionally, it was failing to do so by looking in "keymap"
based on string name.
Adds a new member, MD_RESULT_MAP_INDEX, to the _keymap records so that
we can efficiently link from the generated keymap back to the
compiled._result_columns structure without any ambiguity.
Fixes: #5559
Change-Id: Ie2fa9165c16625ef860ffac1190e00575e96761f
The issue of Result.fetchXXX() methods returning Row objects unless
filtering is applied will not provide a clear enough API story when type
annotations are applied, so break out scalars/mappings into separate
wrapper objects. This makes some things more intuitive and other things
a little more bumpy; however, the return type story is now clearer.
Fixes: #5503
Change-Id: I629a061823179680dc0723559183859a67ea4db1
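A hedged sketch of the separate wrapper objects, using the 1.4-style Result
API (in-memory SQLite, purely illustrative)::

    from sqlalchemy import create_engine, text

    engine = create_engine("sqlite://")

    with engine.connect() as conn:
        # plain Result: all()/fetchall() return Row objects
        rows = conn.execute(text("select 1 as x, 2 as y")).all()

        # ScalarResult wrapper: first column of each row, no Row objects
        scalars = conn.execute(text("select 1 as x, 2 as y")).scalars().all()

        # MappingResult wrapper: dictionary-like RowMapping objects
        mappings = conn.execute(text("select 1 as x, 2 as y")).mappings().all()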
In order to accommodate relationship loaders with lambda caching, a lot
more is needed. This is a full refactor of the lambda system such that
it now has two levels of caching: the first level caches what can be
known from the __code__ element, and the next level of caching is
against the lambda itself and the contents of __closure__. This allows
the elements inside the lambdas, like columns and entities, to change
and then be part of the cache key. Lazy/selectin loaders' use of baked
queries had to add distinct cache key elements, which was attempted
here, but overall things needed to be more robust than that.
This commit is broken out from the very long and sprawling
commit at Id6b5c03b1ce9ddb7b280f66792212a0ef0a1c541.
Change-Id: I29a513c98917b1d503abfdd61e6b6e8800851aa8
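A hedged sketch of the kind of lambda construct this caching system serves
(the table and values are illustrative): the closed-over "name" value changes
per call while the statement structure is cached from the lambda's
__code__::

    from sqlalchemy import (
        Column, Integer, MetaData, String, Table, create_engine, lambda_stmt, select,
    )

    metadata = MetaData()
    users = Table(
        "users",
        metadata,
        Column("id", Integer, primary_key=True),
        Column("name", String(50)),
    )

    engine = create_engine("sqlite://")
    metadata.create_all(engine)

    def lookup(conn, name):
        # the first level of caching keys off the lambda's __code__; the
        # closed-over "name" is extracted as a bound parameter value
        stmt = lambda_stmt(lambda: select(users).where(users.c.name == name))
        return conn.execute(stmt).all()

    with engine.connect() as conn:
        lookup(conn, "spongebob")
        lookup(conn, "sandy")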
Note the PR has a few remaining doc linking issues
listed in the comment that must be addressed separately.
Signed-off-by: aplatkouski <5857672+aplatkouski@users.noreply.github.com>
Closes: #5371
Pull-request: https://github.com/sqlalchemy/sqlalchemy/pull/5371
Pull-request-sha: 7e7d233cf3a0c66980c27db0fcdb3c7d93bc2510
Change-Id: I9c36e8d8804483950db4b42c38ee456e384c59e3
The psycopg2 dialect now defaults to using the very performant
``execute_values()`` psycopg2 extension for compiled INSERT statements,
and also implements RETURNING support when this extension is used. This
allows INSERT statements that even include an autoincremented SERIAL
or IDENTITY value to run very fast while still being able to return the
newly generated primary key values. The ORM will then integrate this
new feature in a separate change.
Implements RETURNING for INSERT with executemany.
Adds support to return_defaults() mode and inserted_primary_key
to support multiple INSERTed rows, via the returned_defaults_rows
and inserted_primary_key_rows accessors.
Within the default execution context, new cached compiler getters
are used to fetch primary keys from rows.
inserted_primary_key now returns a plain tuple. This is not yet a
row-like object, however that can be added later.
Adds distinct "values_only" and "batch" modes, as "values" has a lot
of benefits but "batch" breaks cursor.rowcount.
psycopg2 minimum version is now 2.7, so we can remove the large
number of checks for very old versions of psycopg2.
Simplify tests to no longer distinguish between native and
non-native JSON.
Fixes: #5401
Change-Id: Ic08fd3423d4c5d16ca50994460c0c234868bd61c
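A hedged sketch of the executemany behavior described above; the connection
URL is a placeholder, "values_only" reflects the new default mode named here,
and return_defaults() is used so the generated keys come back per the
accessors mentioned above::

    from sqlalchemy import Column, Integer, MetaData, String, Table, create_engine

    metadata = MetaData()
    users = Table(
        "users",
        metadata,
        Column("id", Integer, primary_key=True),  # SERIAL / IDENTITY
        Column("name", String(50)),
    )

    engine = create_engine(
        "postgresql+psycopg2://scott:tiger@localhost/test",  # placeholder URL
        executemany_mode="values_only",
    )
    metadata.create_all(engine)

    with engine.begin() as conn:
        # a compiled INSERT with a list of parameter sets is sent through
        # psycopg2 execute_values(); RETURNING brings back generated keys
        result = conn.execute(
            users.insert().return_defaults(),
            [{"name": "a"}, {"name": "b"}, {"name": "c"}],
        )
        print(result.inserted_primary_key_rows)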
A variety of caching issues found by running all tests with statement
caching turned on.

The cache system now has a more conservative approach where any subclass
of a SQL element will by default invalidate the cache key unless it adds
the flag inherit_cache=True at the class level, or if it implements its
own caching.

Add working caching to a few elements that were omitted previously; fix
some caching implementations to suit lesser-used edge cases such as JSON
casts and array slices.

Refine the way BaseCursorResult and CursorMetaData interact with
caching; to suit cases like Alembic modifying table structures, don't
cache the cursor metadata if it was created against a cursor.description
using non-positional matching, e.g. "select *". If a table re-ordered
its columns or added/removed some, that data is now obsolete.
Additionally we have to adapt the cursor metadata _keymap regardless of
whether we just processed cursor.description, because if we ran against
a cached SQLCompiler we won't have the right columns in _keymap.

Other refinements to how and when we do this adaptation, as some weird
cases were exposed in the PostgreSQL dialect: a text() construct that
names just one column that is not actually in the statement. Fixed that
also, as it looks like a cut-and-paste artifact that doesn't actually
affect anything.

Various issues with re-use of compiled result maps and cursor metadata
in conjunction with tables being changed, such as a change in the order
of columns.

Mappers can be cleared but the class remains, meaning a mapper has to
use itself as the cache key, not the class.

Lots of bound parameter / literal issues, due to Alembic creating a
straight subclass of bindparam that renders inline directly. While we
can update Alembic to not do this, we have to assume other people might
be doing this, so bindparam() implements the inherit_cache=True logic
as well, which was a bit involved.

Turn on cache stats in logging.

Includes a fix to subqueryloader which moves all setup to the
create_row_processor() phase and eliminates any storage within the
compiled context. This includes some changes to the
create_row_processor() signature and a revising of the technique used
to determine if the loader can participate in polymorphic queries,
which is also applied to selectin loading.

DML update.values() and ordered_values() now coerce the keys, as we have
tests that pass an arbitrary class here which only includes
__clause_element__(), so the key can't be cached unless it is coerced.
This in turn changed how composite attributes support bulk update, to
use the standard approach of ClauseElement with annotations that are
parsed in the ORM context.

Memory profiling successfully caught that the Session from Query was
getting passed into _statement_20(), so that was a big win for that
test suite.

Apparently Compiler had .execute() and .scalar() methods stuck on it;
these date back to version 0.4 and there was a single test in the
PostgreSQL dialect tests that exercised them for no apparent reason.
Removed these methods as well as the concept of a Compiler holding onto
a "bind".
Fixes: #5386
Change-Id: I990b43aab96b42665af1b2187ad6020bee778784
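A hedged sketch of the inherit_cache flag described above, applied to a
custom SQL element subclass (the class names are made up)::

    from sqlalchemy.sql.expression import ColumnClause

    class MyColumn(ColumnClause):
        """Subclass that adds no new cache-relevant state."""

        # opt in to the parent's cache key generation; without this, the
        # more conservative 1.4 behavior disables caching for the subclass
        inherit_cache = True

    class MyInlineColumn(ColumnClause):
        """Subclass whose rendering varies per instance."""

        # leave caching disabled (or implement a custom cache key) so that
        # a cached SQL string is never reused incorrectly
        inherit_cache = False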
This reorganizes the BulkUD model in sqlalchemy.orm.persistence to be
based on the CompileState concept, and to allow plain update() /
delete() to be passed to session.execute(), where the ORM "synchronize
session" logic will take place.
Also gets synchronize_session='fetch' working with horizontal sharding.
Adds a few more result.scalar_one() types of methods, as scalar_one()
seems to be what is normally desired.
Fixes: #5160
Change-Id: I8001ebdad089da34119eb459709731ba6c0ba975
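A hedged sketch of ORM-enabled update() and delete() passed to
Session.execute() with the "fetch" synchronize strategy (mapping, data and
engine are illustrative)::

    from sqlalchemy import Column, Integer, String, create_engine, delete, update
    from sqlalchemy.orm import Session, declarative_base

    Base = declarative_base()

    class User(Base):
        __tablename__ = "user_account"
        id = Column(Integer, primary_key=True)
        name = Column(String)

    engine = create_engine("sqlite://")
    Base.metadata.create_all(engine)

    with Session(engine) as session:
        session.add_all([User(name="sandy"), User(name="squidward")])
        session.flush()

        # ORM-enabled UPDATE; in-session objects are synchronized via "fetch"
        session.execute(
            update(User).where(User.name == "sandy").values(name="sandy_cheeks"),
            execution_options={"synchronize_session": "fetch"},
        )

        # ORM-enabled DELETE through the same pathway
        session.execute(delete(User).where(User.name == "squidward"))
        session.commit()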
This patch contains a variety of ORM and expression layer tweaks to
support ORM constructs in select() statements, without the 1.3.x
requirement in Query that a full _compile_context() + new select() is
needed in order to get a working statement object.
Includes such tweaks as the ability to implement an aliased class of an
aliased class, as we are looking to fully support ACs against
subqueries, as well as the ability to access anonymously-labeled
ColumnProperty expressions within subqueries by naming the ".key" of
the label after the property key. Some tuning to query.join() as well
as ORMJoin internals to allow things to work more smoothly.
Change-Id: Id810f485c5f7ed971529489b84694e02a3356d6d
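A hedged sketch of the kinds of constructs this enables (the mapping is
illustrative; statement construction only, no execution)::

    from sqlalchemy import Column, Integer, String, select
    from sqlalchemy.orm import aliased, declarative_base

    Base = declarative_base()

    class User(Base):
        __tablename__ = "user_account"
        id = Column(Integer, primary_key=True)
        name = Column(String)

    # 2.0-style select() against the ORM entity; no Query._compile_context()
    # step is required to arrive at a working statement object
    stmt = select(User).where(User.name == "sandy")

    # an aliased class constructed against a subquery
    subq = select(User).where(User.id > 5).subquery()
    user_subq = aliased(User, subq)
    stmt2 = select(user_subq).order_by(user_subq.id)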
This commit removes the "_orm_query" attribute from the compile state
as well as the query context. The attribute created reference cycles
and also added method call overhead. As part of this change, the
interface for ORMExecuteState changes a bit, as does the interface for
the horizontal sharding extension, which now deprecates the
"query_chooser" callable in favor of "execute_chooser", which receives
the contextual object. This will also work more nicely when we
implement the new execution path for bulk updates and deletes.

Pre-merge execution options for statement, connection and arguments all
up front in Connection; that way they can be passed to the
before_execute / after_execute events, and the ExecutionContext doesn't
have to merge a second time. Core execute is pretty close to 1.3 now.

Baked query wasn't using the new one()/first()/one_or_none() methods;
fixed that.

Convert the non-buffered cursor strategy to be a stateless singleton,
and inline all the paths by which the strategy gets chosen. The Oracle
and SQL Server dialects make use of the already-invoked post_exec()
hook to establish the alternate strategies, and this is actually much
nicer than it was before.

Add caching to the mapper instance processor for getters.

Identified a reference cycle per query that was showing up as a lot of
gc cleanup; fixed that.

After all that, performance is not budging much. Even test_baked_query
now runs with significantly fewer function calls than 1.3, yet is still
40% slower. Basically something about the new patterns just makes this
slower, and while I've walked a whole bunch of them back, it hardly
makes a dent. That said, the performance issues are relatively small,
in the 20-40% time increase range, and the new caching feature does
provide for regular ORM and Core queries that are cached, and they are
faster than non-cached.
Change-Id: I7b0b0d8ca550c05f79e82f75cd8eff0bbfade053
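A hedged sketch of intercepting executions through the ORMExecuteState
interface mentioned above (the listener body is illustrative)::

    from sqlalchemy import event
    from sqlalchemy.orm import Session

    @event.listens_for(Session, "do_orm_execute")
    def _log_orm_executions(orm_execute_state):
        # ORMExecuteState exposes the statement and contextual information
        # rather than a legacy Query object
        if orm_execute_state.is_select:
            print("executing:", orm_execute_state.statement)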
This patch replaces the ORM execution flow with a single pathway
through Session.execute() for all queries, including Core and ORM.

Currently included is full support for ORM Query,
Query.from_statement(), select(), as well as the baked query and
horizontal shard systems. Initial changes have also been made to the
dogpile caching example, which like baked query makes use of a new
ORM-specific execution hook that replaces the use of both
QueryEvents.before_compile() as well as Query._execute_and_instances()
as the central ORM interception hooks.

select() and Query() constructs alike can be passed to
Session.execute(), where they will return ORM results in a Result
object. This API is currently used internally by Query. Full support
for Session.execute() -> results to behave in a fully 2.0 fashion will
arrive in later changesets. Bulk update/delete with ORM support will
also be delivered via the update() and delete() constructs, however
these have not yet been adapted to the new system and may follow in a
subsequent update.

Performance is also beginning to lag as of this commit and some
previous ones. It is hoped that a few central functions such as the
coercions functions can be rewritten in C to regain performance.
Additionally, query caching is now available, and some subsequent
patches will attempt to cache more of the per-execution work from the
ORM layer, e.g. column getters and adapters.

This patch also contains the initial "turn on" of the caching system
engine-wide via the query_cache_size parameter to create_engine(),
still defaulting to zero for "no caching". The caching system still
needs adjustments in order to gain adequate performance.
Change-Id: I047a7ebb26aa85dc01f6789fac2bff561dcd555d
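A hedged sketch of the engine-wide switch and the unified execution path
(URL and cache size are placeholders)::

    from sqlalchemy import create_engine, literal, select, text
    from sqlalchemy.orm import Session

    # a nonzero query_cache_size enables the LRU statement cache;
    # zero keeps caching off, per the default described above
    engine = create_engine("sqlite://", query_cache_size=500)

    with Session(engine) as session:
        # Core and ORM statements alike pass through Session.execute()
        session.execute(text("select 1")).all()
        session.execute(select(literal(1))).all()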
Remove a bunch of unnecessary functions for this case.
Add test coverage to ensure the uniqueness logic works.
Change-Id: I2e6232c5667a3277b0ec8d7e47085a267f23d75f
A few small mistakes led to huge callcounts. Additionally, the
warn-on-get behavior, which attempts to warn for deprecated access in
SQLAlchemy 2.0, is very expensive; it's not clear if it's feasible to
have this warning or to somehow alter how it works.
Fixes: #5340
Change-Id: I73bdd2d7b6f1b25cc0222accabd585cf761a5af4
Step one: do away with the __connection attribute and the awkward
AttributeError logic.

Step two: move all management of "connection._transaction" into the
transaction objects themselves, where it's easier to follow.

Build MarkerTransaction, which takes the role of the "do-nothing
block".

The new connection datamodel is: connection._transaction is always a
root, and connection._nested_transaction is always a nested
transaction. Nested transactions still chain to each other, as this is
still sort of necessary, but they consider the root transaction
separately, and the marker transactions not at all.

Introduce a new InvalidRequestError subclass, PendingRollbackError.
Apply it to connection and session for all cases where a transaction
needs to be rolled back before continuing. Within Connection, both
PendingRollbackError as well as ResourceClosedError are now raised
directly without being handled by handle_dbapi_error(); this removes
these two exception cases from the handle_error event handler as well
as from StatementError wrapping, as these two exceptions are not
statement oriented and are instead programmatic issues, namely that
the application is failing to handle database errors properly.

Revise savepoints so that when a release fails, they set themselves as
inactive so that their rollback() method does not throw another
exception.

Give savepoints another go on MySQL; can't get release working, however
basic round trip support is now in place.
Fixes: #5327
Change-Id: Ia3cbbf56d4882fcc7980f90519412f1711fae74d
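A hedged sketch of the new PendingRollbackError in the classic "must roll
back before continuing" scenario (mapping, constraint and engine are
illustrative)::

    from sqlalchemy import Column, Integer, String, create_engine
    from sqlalchemy.exc import IntegrityError, PendingRollbackError
    from sqlalchemy.orm import Session, declarative_base

    Base = declarative_base()

    class User(Base):
        __tablename__ = "user_account"
        id = Column(Integer, primary_key=True)
        name = Column(String, nullable=False)

    engine = create_engine("sqlite://")
    Base.metadata.create_all(engine)

    session = Session(engine)
    session.add(User(id=1, name=None))  # violates NOT NULL
    try:
        session.flush()
    except IntegrityError:
        pass  # the transaction now needs to be rolled back

    try:
        session.add(User(id=2, name="ok"))
        session.flush()  # raises PendingRollbackError until rollback()
    except PendingRollbackError:
        session.rollback()

    session.add(User(id=2, name="ok"))
    session.flush()  # the Session is usable again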
As progress is made on the _future.Result, including breaking it out
such that DBAPI behaviors are local to specific implementations, it
becomes apparent that the Result object is a functional superset of
ResultProxy and that basic operations like fetchone(), fetchall(), and
fetchmany() behave pretty much exactly the same way on the new object.

Reorganize things so that ResultProxy is now referred to as
LegacyCursorResult, which subclasses CursorResult, which represents the
DBAPI-cursor version of Result, making use of a multiple inheritance
pattern so that the functionality of Result is also available in
non-DBAPI contexts, as will be necessary for some ORM patterns.

Additionally, propose the composition system for Result that will form
the basis for ORM-alternative result systems such as horizontal
sharding and dogpile cache. As ORM results will soon be coming directly
from instances of Result, these extensions will instead build their own
ResultFetchStrategies that perform the special steps to create composed
or cached result sets.

Also considering, at the moment, not emitting deprecation warnings for
the fetchXYZ() methods; the immediate issue is that Keystone tests are
calling upon them, but as the implementations here are proving to be
not in any kind of conflict with how Result works, there's not too much
issue leaving them around and deprecating them at some later point.
References: #5087
References: #4395
Fixes: #4959
Change-Id: I8091919d45421e3f53029b8660427f844fee0228