| Commit message | Author | Age | Files | Lines |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Implemented the "cartesian product warning" for UPDATE and DELETE
statements, those which include multiple tables that are not correlated
together in some way.
Fixed issue where :func:`_dml.update` construct that included multiple
tables and no VALUES clause would raise with an internal error. Current
behavior for :class:`_dml.Update` with no values is to generate a SQL
UPDATE statement with an empty "set" clause, so this has been made
consistent for this specific sub-case.
Fixes: #9721
Change-Id: I556639811cc930d2e37532965d2ae751882af921
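As an illustration (a sketch only, not part of this commit; table names are
hypothetical), an UPDATE that refers to a second, uncorrelated table would now
be subject to the warning when executed on an engine with from-linting
enabled::

    from sqlalchemy import column, table, update

    t1 = table("t1", column("id"), column("x"))
    t2 = table("t2", column("id"), column("y"))

    # t2 appears in the WHERE clause but is never correlated to t1, so
    # executing this statement should emit the cartesian product warning
    stmt = update(t1).values(x=5).where(t2.c.y > 10)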
|
|\ |
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
Fixed regression caused by the fix for :ticket:`9618` where floating point
values would lose precision when being inserted in bulk, using either the
psycopg2 or psycopg drivers.
Implemented the :class:`_sqltypes.Double` type for SQL Server, having it
resolve to ``REAL``, or :class:`_mssql.REAL`, at DDL rendering time.
Fixed issue in Oracle dialects where ``Decimal`` returning types such as
:class:`_sqltypes.Numeric` would return floating point values, rather than
``Decimal`` objects, when these columns were used in the
:meth:`_dml.Insert.returning` clause to return INSERTed values.
Fixes: #9701
Change-Id: I8865496a6ccac6d44c19d0ca2a642b63c6172da9
|
|/
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Improved row processing performance for "binary" datatypes by making the
"bytes" handler conditional on a per driver basis. As a result, the
"bytes" result handler has been disabled for nearly all drivers other than
psycopg2, all of which in modern forms support returning Python "bytes"
directly. Pull request courtesy J. Nick Koston.
Fixes: #9680
Closes: #9681
Pull-request: https://github.com/sqlalchemy/sqlalchemy/pull/9681
Pull-request-sha: 4f2fd88bd9af54c54438a3b72a2f30384b0f8898
Change-Id: I394bdcbebaab272e63b13cc02f60813b7aa76839
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Repaired a major shortcoming which was identified in the
:ref:`engine_insertmanyvalues` performance optimization feature first
introduced in the 2.0 series. This was a continuation of the change in
2.0.9 which disabled the SQL Server version of the feature due to a
reliance in the ORM on apparent row ordering that is not guaranteed to take
place. The fix applies new logic to all "insertmanyvalues" operations,
which takes effect when the new parameter
:paramref:`_dml.Insert.returning.sort_by_parameter_order` is used with the
:meth:`_dml.Insert.returning` or :meth:`_dml.UpdateBase.return_defaults`
methods; through a combination of alternate SQL forms, direct
correspondence of client side parameters, and in some cases downgrading to
running row-at-a-time, sorting is applied to each batch of returned rows
using correspondence to primary key or other unique values in each row
which can be correlated to the input data.
Performance impact is expected to be minimal as nearly all common primary
key scenarios are suitable for parameter-ordered batching to be
achieved for all backends other than SQLite, while "row-at-a-time"
mode operates with a bare minimum of Python overhead compared to the very
heavyweight approaches used in the 1.x series. For SQLite, there is no
difference in performance when "row-at-a-time" mode is used.
It's anticipated that with an efficient "row-at-a-time" INSERT with
RETURNING batching capability, the "insertmanyvalues" feature can later
be more easily generalized to third party backends that include RETURNING
support but not necessarily easy ways to guarantee a correspondence
with parameter order.
Fixes: #9618
References: #9603
Change-Id: I1d79353f5f19638f752936ba1c35e4dc235a8b7c
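For illustration (a sketch assuming a ``user_table`` Table and an open
``connection``, both hypothetical), the new parameter is supplied to
:meth:`_dml.Insert.returning`::

    from sqlalchemy import insert

    stmt = insert(user_table).returning(
        user_table.c.id, sort_by_parameter_order=True
    )
    # returned rows correspond to the order of the parameter dictionaries
    result = connection.execute(
        stmt, [{"name": "spongebob"}, {"name": "sandy"}]
    )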
|
|
|
|
|
|
|
| |
Removed versionadded and versionchanged directives for versions prior to 1.2
since they are no longer useful.
Change-Id: I5c53d1188bc5fec3ab4be39ef761650ed8fa6d3e
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
In #9618 we can both look to re-enable insertmanyvalues
for SQL Server, and also likely *disable* its use for the
ORM unit of work specifically, since that's really where the
only problem is, and it will likely be for all dialects,
not just SQL Server. An approach using sentinel columns will
be rolled out for the unit of work use case.
Change-Id: I3358e30839491769db95b4ac042a661271df3929
References: #9618
References: #9603
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
We will keep trying to find workarounds; however, this
patch is the "turn it off" patch.
Due to a critical bug identified in SQL Server, the SQLAlchemy
"insertmanyvalues" feature, which allows fast INSERT of many rows while also
supporting RETURNING, unfortunately needs to be disabled for SQL Server. SQL
Server is apparently unable to guarantee that the order of rows inserted
matches the order in which they are sent back by OUTPUT inserted when
table-valued rows are used with INSERT.
We are trying to see if Microsoft is able to confirm this undocumented
behavior; however, there is no known workaround, other than that it is not
safe to use table-valued expressions with OUTPUT inserted for now.
Fixes: #9603
Change-Id: I4b932fb8774390bbdf4e870a1f6cfe9a78c4b105
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Changed the bulk INSERT strategy used for SQL Server "executemany" with
pyodbc when ``fast_executemany`` is set to ``True`` by using
``fast_executemany`` / ``cursor.executemany()`` for bulk INSERT that does
not include RETURNING, restoring the same behavior as was used in
SQLAlchemy 1.4 when this parameter is set. For INSERT statements that use
RETURNING, the "insertmanyvalues" strategy continues to be used as it is
the only current strategy that supports RETURNING with bulk INSERT.
Previously, SQLAlchemy 2.0 would use "insertmanyvalues" for all INSERT
statements when ``use_insertmanyvalues`` was left at its default setting,
regardless of whether ``fast_executemany`` was set.
New performance details from end users have shown that ``fast_executemany``
is still much faster for very large datasets as it uses ODBC commands that
can receive all rows in a single round trip, allowing for much larger
datasizes than the batches that can be sent by the current
"insertmanyvalues" strategy.
Fixes: #9586
Change-Id: I85955a10ba77c26cdc0c22e362a827d7aaef2852
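A sketch of the configuration involved (connection URL and table are
hypothetical); with this change, bulk INSERTs without RETURNING go back
through ``cursor.executemany()`` when the flag is set::

    from sqlalchemy import create_engine

    engine = create_engine(
        "mssql+pyodbc://scott:tiger@my_dsn",
        fast_executemany=True,
    )

    with engine.begin() as conn:
        # executemany-style INSERT without RETURNING now uses the pyodbc
        # fast_executemany path rather than "insertmanyvalues", as in 1.4
        conn.execute(my_table.insert(), [{"x": 1}, {"x": 2}])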
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
pymssql seems to be maintained again and appears to be working
completely, so let's try re-enabling it.
Fixed issue in the new :class:`.Uuid` datatype which prevented it from
working with the pymssql driver. As pymssql seems to be maintained again,
restored testing support for pymssql.
Tweaked the pymssql dialect to take better advantage of
RETURNING for INSERT statements in order to retrieve last inserted primary
key values, in the same way as occurs for the mssql+pyodbc dialect right
now.
Identified that the ``sqlite`` and ``mssql+pyodbc`` dialects are now
compatible with the SQLAlchemy ORM's "versioned rows" feature, since
SQLAlchemy now computes rowcount for a RETURNING statement in this specific
case by counting the rows returned, rather than relying upon
``cursor.rowcount``. In particular, the ORM versioned rows use case
(documented at :ref:`mapper_version_counter`) should now be fully
supported with the SQL Server pyodbc dialect.
Change-Id: I38a0666587212327aecf8f98e86031ab25d1f14d
References: #5321
Fixes: #9414
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
The newly added comment reflection and rendering capability of the MSSQL
dialect, added in :ticket:`7844`, will now be disabled by default if it
cannot be determined that the backend in use supports comments; an
unsupported backend such as Azure Synapse does not support table and column comments and does
not support the SQL Server routines in use to generate them as well as to
reflect them. A new parameter ``supports_comments`` is added to the dialect
which defaults to ``None``, indicating that comment support should be
auto-detected. When set to ``True`` or ``False``, the comment support is
either enabled or disabled unconditionally.
Fixes: #9142
Change-Id: Ib5cac31806185e7353e15b3d83b580652d304b3b
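For example (a sketch; the connection URL is hypothetical), comment support
can be forced off rather than auto-detected::

    from sqlalchemy import create_engine

    # disable comment DDL/reflection unconditionally, e.g. when targeting
    # Azure Synapse
    engine = create_engine(
        "mssql+pyodbc://scott:tiger@synapse_dsn",
        supports_comments=False,
    )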
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Fixed bug where a schema name given with brackets, but no dots inside the
name, for parameters such as :paramref:`_schema.Table.schema` would not be
interpreted within the context of the SQL Server dialect's documented
behavior of interpreting explicit brackets as token delimiters, first added
in 1.2 for #2626, when referring to the schema name in reflection
operations. The original assumption for #2626's behavior was that the
special interpretation of brackets was only significant if dots were
present; however, in practice, the brackets are not included as part of the
identifier name for all SQL rendering operations, since these are not valid
characters within regular or delimited identifiers. Pull request courtesy
Shan.
Fixes: #9133
Closes: #9134
Pull-request: https://github.com/sqlalchemy/sqlalchemy/pull/9134
Pull-request-sha: 5dac87c82cd3063dd8e50f0075c7c00330be6439
Change-Id: I7a507bc38d75a04ffcb7e920298775baae22c6d1
|
|
|
|
|
|
| |
Follow up of I07b72e6620bb64e329d6b641afa27631e91c4f16
Change-Id: I1f61974bf9cdc3da5317e546d4f9b649c2029e4d
|
|
|
|
|
|
| |
Change {opensql} to {printsql} in prints; add missing markers
Change-Id: I07b72e6620bb64e329d6b641afa27631e91c4f16
|
|
|
|
| |
Change-Id: I625af65b3fb1815b1af17dc2ef47dd697fdc3fb1
|
|\ |
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
Added test support to ensure that all compiler ``visit_xyz()`` methods
across all :class:`.Compiler` implementations in SQLAlchemy accept a
``**kw`` parameter, so that all compilers accept additional keyword
arguments under all circumstances.
Fixes: #8988
Change-Id: I1cefc313e4e64a10ee7dd14400137fbe02ce9523
|
|/
|
|
|
|
|
|
|
|
|
|
|
|
| |
Added a new default value for the :paramref:`.Mapper.eager_defaults`
parameter "auto", which will automatically fetch table default values
during a unit of work flush, if the dialect supports RETURNING for the
INSERT being run, as well as
:ref:`insertmanyvalues <engine_insertmanyvalues>` being available. Eager fetches
for server-side UPDATE defaults, which are very uncommon, continue to only
take place if :paramref:`.Mapper.eager_defaults` is set to ``True``, as
there is no batch-RETURNING form for UPDATE statements.
Fixes: #8889
Change-Id: I84b91092a37c4cd216e060513acde3eb0298abe9
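A sketch of the new setting on a hypothetical mapping (the value shown is now
the default, so stating it explicitly is only for illustration)::

    import datetime

    from sqlalchemy import func
    from sqlalchemy.orm import DeclarativeBase, Mapped, mapped_column


    class Base(DeclarativeBase):
        pass


    class Widget(Base):
        __tablename__ = "widget"

        # "auto" fetches server defaults via RETURNING when the dialect
        # supports it together with insertmanyvalues
        __mapper_args__ = {"eager_defaults": "auto"}

        id: Mapped[int] = mapped_column(primary_key=True)
        created: Mapped[datetime.datetime] = mapped_column(
            server_default=func.now()
        )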
|
|
|
|
|
|
|
|
| |
The command run is "pyupgrade --py37-plus --keep-runtime-typing --keep-percent-format <files...>".
pyupgrade will change assert_ to assertTrue. That was reverted since assertTrue does not
exist in the SQLAlchemy fixtures.
Change-Id: Ie1ed2675c7b11d893d78e028aad0d1576baebb55
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
The RETURNING clause now renders columns using the same label-generation
routine as that of the :class:`.Select` construct, which will include
disambiguating labels, and a SQL function surrounding a named column will
be labeled using the column name itself. This is a more comprehensive
change than a similar one made for the 1.4 series that adjusted the
function label issue only.
Includes 1.4's changelog for the backported version, which also
fixes an Oracle issue independently of the 2.0 series.
Fixes: #8770
Change-Id: I2ab078a214a778ffe1720dbd864ae4c105a0691d
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Added a new :paramref:`.PoolEvents.reset.reset_state` parameter to
the :meth:`.PoolEvents.reset` event, with deprecation logic in place that
will continue to accept event hooks using the previous set of arguments.
This indicates various state information about how the reset is taking
place and is used to allow custom reset schemes to take place with full
context given.
Within this change a fix that's also backported to 1.4 is included which
re-enables the :meth:`.PoolEvents.reset` event to continue to take place
under all circumstances, including when :class:`.Connection` has already
"reset" the connection.
The two changes together allow custom reset schemes to be implemented using
the :meth:`.PoolEvents.reset` event, instead of the
:meth:`.PoolEvents.checkin` event (which continues to function as it always
has).
Change-Id: Ie17c4f55d02beb6f570b9de6b3044baffa7d6df6
Fixes: #8717
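A sketch of a custom reset scheme using the expanded event signature (the
``engine`` and the hook body are hypothetical; the attribute consulted on
``reset_state`` is per the 2.0 pool event documentation)::

    from sqlalchemy import event


    @event.listens_for(engine, "reset")
    def custom_reset(dbapi_connection, connection_record, reset_state):
        # reset_state carries context about how the reset is taking place
        if not reset_state.terminate_only:
            dbapi_connection.rollback()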
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Fixed issue where :meth:`.Inspector.has_table`, when used against a temporary
table with the SQL Server dialect, would fail with an "invalid object name"
error on some Azure variants, due to an unnecessary information schema query
that is not supported on those server versions. Pull request courtesy Mike Barry.
The patch also fills out test support for has_table()
against temp tables and temp views, adding to the has_table() support just
added for views in #8700.
Fixes: #8714
Closes: #8716
Pull-request: https://github.com/sqlalchemy/sqlalchemy/pull/8716
Pull-request-sha: e2ac7a52e2b09a349a703ba1e1a2911f4d3c0912
Change-Id: Ia73e4e9e977a2d6b7e100abd2f81a8c8777dc9bb
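Illustration of the call involved (a sketch; ``engine`` and the temp table
name are hypothetical)::

    from sqlalchemy import inspect

    insp = inspect(engine)

    # checking a temporary table no longer fails on the affected
    # Azure variants
    exists = insp.has_table("#my_session_temp")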
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
The :class:`.Sequence` construct restores itself to the DDL behavior it
had prior to the 1.4 series, where creating a :class:`.Sequence` with
no additional arguments will emit a simple ``CREATE SEQUENCE`` instruction
**without** any additional parameters for "start value". For most backends,
this is how things worked previously in any case; **however**, for
MS SQL Server, the default start value on this database is
``-2**63``; to prevent this generally impractical default
from taking effect on SQL Server, the :paramref:`.Sequence.start` parameter
should be provided. As usage of :class:`.Sequence` is unusual
for SQL Server which for many years has standardized on ``IDENTITY``,
it is hoped that this change has minimal impact.
Fixes: #7211
Change-Id: I1207ea10c8cb1528a1519a0fb3581d9621c27b31
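For SQL Server in particular, the recommended form going forward is to state
the start value explicitly (a sketch; names are hypothetical)::

    from sqlalchemy import Column, Integer, MetaData, Sequence, Table

    metadata = MetaData()

    t = Table(
        "t",
        metadata,
        Column(
            "id",
            Integer,
            # state the start explicitly to avoid SQL Server's -2**63 default
            Sequence("t_id_seq", start=1),
            primary_key=True,
        ),
    )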
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
The feature is enabled for all built-in backends
when RETURNING is used,
except for Oracle, which doesn't need it; on
psycopg2 and mssql+pyodbc it is used for all INSERT statements,
not just those that use RETURNING.
Third party dialects would need to opt in to the new feature
by setting use_insertmanyvalues to True.
Also adds dialect-level guards against using RETURNING
with executemany where we don't have an implementation to
suit it. A single execute with RETURNING still defers to the
server without us checking.
Fixes: #6047
Fixes: #7907
Change-Id: I3936d3c00003f02e322f2e43fb949d0e6e568304
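A sketch of the third-party opt-in described above (the dialect class is
hypothetical; attribute names follow the flags mentioned in this commit)::

    from sqlalchemy.engine.default import DefaultDialect


    class MyThirdPartyDialect(DefaultDialect):
        # opt in to the batched INSERT..RETURNING implementation
        use_insertmanyvalues = True
        insert_returning = True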
|
|
|
|
|
|
|
|
|
|
|
|
| |
Fixed yet another regression in SQL Server isolation level fetch (see
:ticket:`8231`, :ticket:`8475`), this time with "Microsoft Dynamics CRM
Database via Azure Active Directory", which apparently lacks the
``system_views`` view entirely. Error catching has been extended such that under
no circumstances will this method ever fail, provided database connectivity
is present.
Fixes: #8525
Change-Id: I76a429e3329926069a0367d2e77ca1124b9a059d
|
|
|
|
|
|
|
|
|
|
| |
Fixed regression caused by the fix for :ticket:`8231` released in 1.4.40
where the connection would fail if the user does not have permission to query
the dm_exec_sessions or dm_pdw_nodes_exec_sessions system view when trying
to determine the current transaction isolation level.
Fixes: #8475
Change-Id: Ie2bcda92f2ef2d12360ddda47eb6e896313c71f2
|
|
|
|
|
|
|
|
|
|
|
|
| |
Implemented reflection of the "clustered index" flag ``mssql_clustered``
for the SQL Server dialect. Pull request courtesy John Lennox.
Fixes: #8288
Closes: #8289
Pull-request: https://github.com/sqlalchemy/sqlalchemy/pull/8289
Pull-request-sha: 1bb57352e3e31d8fb7de69ab5e60e5464949f640
Change-Id: Ife367066328f9e47ad823e4098647964a18e21e8
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Added support for table and column comments on MSSQL when
creating a table. Added support for reflecting table comments.
Thanks to Daniel Hall for the help in this pull request.
Fixes: #7844
Closes: #8225
Pull-request: https://github.com/sqlalchemy/sqlalchemy/pull/8225
Pull-request-sha: 540f4eb6395f9feed4b4240e3d22f539021948e9
Change-Id: I69f48c6dda4e00ec3d82fdeff13f3df9d735b7b0
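Illustration (a sketch; table and engine names are hypothetical) of the
comments that are now emitted at CREATE TABLE time and reflected back::

    from sqlalchemy import Column, Integer, MetaData, Table

    metadata = MetaData()

    document = Table(
        "document",
        metadata,
        Column("id", Integer, primary_key=True, comment="surrogate key"),
        comment="uploaded documents",
    )
    # metadata.create_all(engine) now also emits the comment DDL on MSSQL,
    # and Inspector.get_table_comment() can reflect the table comment back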
|
|
|
|
|
| |
Change-Id: I446105028539a34da90d6b8ae4812965cc398ee5
(cherry picked from commit c539ee35229b03d61f2a10e9f5ab613201341e19)
|
|\ |
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
Fixed issue in ORM-enabled UPDATE when the statement is created against a
joined-inheritance subclass, updating only local table columns, where the
"fetch" synchronization strategy would not render the correct RETURNING
clause for databases that use RETURNING for fetch synchronization.
Also adjusts the strategy used for RETURNING in UPDATE FROM and
DELETE FROM statements.
Also fixes MariaDB, which does not support RETURNING with
DELETE..USING. This was not caught in tests because the
"fetch" strategy wasn't tested, so also adjust the ORMDMLState
classes to look for "extra froms" first before adding
RETURNING, and add new parameters to interfaces for
"update_returning_multitable" and "delete_returning_multitable".
A new execution option is_delete_using=True, described in the
changelog message, is added to allow the ORM to know up front
if a certain statement should emit a SELECT up front
for the "fetch" strategy.
Fixes: #8344
Change-Id: I3dcdb68e6e97ab0807a573c2fdb3d53c16d063ba
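A sketch of how the new option can be supplied (entity names and ``session``
are hypothetical)::

    from sqlalchemy import delete

    stmt = delete(SomeEntity).where(
        SomeEntity.id == SomeOtherEntity.entity_id
    )

    # tell the ORM up front that this DELETE will have extra FROM entries,
    # so the "fetch" strategy can plan its SELECT accordingly
    session.execute(
        stmt,
        execution_options={
            "synchronize_session": "fetch",
            "is_delete_using": True,
        },
    )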
|
|/
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Fixed issue where the SQL Server dialect's query for the current isolation
level would fail on Azure Synapse Analytics, due to the way in which this
database handles transaction rollbacks after an error has occurred. The
initial query has been modified to no longer rely upon catching an error
when attempting to detect the appropriate system view. Additionally, to
better support this database's very specific "rollback" behavior,
implemented a new parameter ``ignore_no_transaction_on_rollback`` indicating
that a rollback should ignore Azure Synapse error 'No corresponding
transaction found. (111214)', which is raised if no transaction is present
in conflict with the Python DBAPI.
Fixes: #8231
Closes: #8233
Pull-request: https://github.com/sqlalchemy/sqlalchemy/pull/8233
Pull-request-sha: c48bd44a9f53d00e5e94f1b8bf996711b6419562
Change-Id: I6407a03148f45cc9eba8fe1d31d4f59ebf9c7ef7
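A sketch of how the new flag is supplied (the connection URL is
hypothetical)::

    from sqlalchemy import create_engine

    engine = create_engine(
        "mssql+pyodbc://scott:tiger@synapse_dsn",
        ignore_no_transaction_on_rollback=True,
    )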
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Oracle will now use FETCH FIRST N ROWS / OFFSET syntax for limit/offset
support by default for Oracle 12c and above. This syntax was already
available when :meth:`_sql.Select.fetch` was used directly; it's now
implied for :meth:`_sql.Select.limit` and :meth:`_sql.Select.offset` as
well.
I'm currently setting this up so that the new syntax renders
in Oracle using POSTCOMPILE binds. I really have no indication
whether Oracle's SQL optimizer would be better with params
here, so that it can cache the SQL plan, or if it expects
hardcoded numbers for these. Since we had reports that the previous
ROWNUM approach really needed hardcoded ints, let's guess
for now that hardcoded ints would be preferable. It can be turned
off with a single boolean if users report that they'd prefer
real bound values.
Fixes: #8221
Change-Id: I812ec24ffc947199866947b666d6ec6e6a690f22
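Illustration (a sketch; ``some_table`` is hypothetical, and the rendered SQL
is approximate)::

    from sqlalchemy import select

    stmt = select(some_table).order_by(some_table.c.id).limit(10).offset(20)

    # on Oracle 12c and above this now renders along the lines of:
    #   SELECT ... ORDER BY some_table.id
    #   OFFSET 20 ROWS FETCH FIRST 10 ROWS ONLY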
|
|
|
|
|
| |
Change-Id: I06eaede9e021eb0790929168e9bedb0c8b58140a
References: #8252
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Fixed issues that prevented the new usage patterns for using DML with ORM
objects presented at :ref:`orm_dml_returning_objects` from working
correctly with the SQL Server pyodbc dialect.
Here we add a step to look in compile_state._dict_values more thoroughly
for the keys we need in order to determine whether "identity insert" applies, and also
add a new compiler variable dml_compile_state so that we can skip the
ORM's compile_state if present.
Fixes: #8210
Change-Id: Idbd76bb3eb075c647dc6c1cb78f7315c821e15f7
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Rearchitected the schema reflection API to allow some dialects to make use
of high performing batch queries to reflect the schemas of many tables at
once using much fewer queries. The new performance features are targeted
first at the PostgreSQL and Oracle backends, and may be applied to any
dialect that makes use of SELECT queries against system catalog tables to
reflect tables (currently this omits the MySQL and SQLite dialects, which
instead make use of parsing the "CREATE TABLE" statement; however, these
dialects do not have a pre-existing performance issue with reflection. MS
SQL Server is still a TODO).
The new API is backwards compatible with the previous system, and should
require no changes to third party dialects to retain compatibility;
third party dialects can also opt into the new system by implementing
batched queries for schema reflection.
Along with this change is an updated reflection API that is fully
:pep:`484` typed, features many new methods and some changes.
Fixes: #4379
Change-Id: I897ec09843543aa7012bcdce758792ed3d415d08
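A sketch of how the batched queries surface through the inspection API (the
connection URL is hypothetical; the multi-table accessor shown is part of the
updated reflection API described above)::

    from sqlalchemy import create_engine, inspect

    engine = create_engine("postgresql+psycopg2://scott:tiger@localhost/test")
    insp = inspect(engine)

    # reflect column information for many tables in one pass over the
    # system catalogs
    columns_by_table = insp.get_multi_columns(schema="public")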
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
As almost every dialect supports RETURNING now, RETURNING
is also made more of a default assumption.
* the default compiler generates a RETURNING clause now
when specified; CompileError is no longer raised.
* The dialect-level implicit_returning parameter now has
no effect. It's not fully clear if there are real world
cases relying on the dialect-level parameter, so we will see
once 2.0 is released. ORM-level RETURNING can be disabled
at the table level, and perhaps "implicit returning" should
become an ORM-level option at some point, as that's where
it applies.
* Altered ORM update() / delete() to respect table-level
implicit returning for fetch.
* Since MariaDB doesn't support UPDATE returning, "full_returning"
is now split into insert_returning, update_returning, delete_returning.
* Crazy new thing: dialects that have *both* cursor.lastrowid
*and* RETURNING. So now we can pick between them for SQLite
and MariaDB. We are trying to keep it on .lastrowid for
simple inserts with an autoincrement column; this helps with
some edge case test scenarios and I bet .lastrowid is faster
anyway. For any return_defaults() / multiparams etc. we
use RETURNING.
* SQLite decided they don't want to return rows that match in
ON CONFLICT. This is flat out wrong, but for now we need to
work with it.
Fixes: #6195
Fixes: #7011
Closes: #7047
Pull-request: https://github.com/sqlalchemy/sqlalchemy/pull/7047
Pull-request-sha: d25d5ea3abe094f282c53c7dd87f5f53a9e85248
Co-authored-by: Mike Bayer <mike_mp@zzzcomputing.com>
Change-Id: I9908ce0ff7bdc50bd5b27722081767c31c19a950
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Added new backend-agnostic :class:`_types.Uuid` datatype generalized from
the PostgreSQL dialects to now be a core type, as well as migrated
:class:`_types.UUID` from the PostgreSQL dialect. Thanks to Trevor Gross
for the help on this.
Also includes:
* corrects some missing behaviors in the suite literal fixtures
test where row round trips weren't being correctly asserted.
* fixes some of the ISO literal date rendering added in
952383f9ee0 for #5052 to truncate datetime strings for date/time
datatypes in the same way that drivers typically do for bound
parameters; this was not working fully and wasn't caught by the
broken test fixture
Fixes: #7212
Change-Id: I981ac6d34d278c18281c144430a528764c241b04
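Illustration of the generalized type (a sketch; table name is hypothetical)::

    import uuid

    from sqlalchemy import Column, MetaData, Table, Uuid

    metadata = MetaData()

    account = Table(
        "account",
        metadata,
        # backend-agnostic UUID storage with a client-side default
        Column("id", Uuid, primary_key=True, default=uuid.uuid4),
    )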
|
|
|
|
|
|
|
|
| |
Explicitly specify the collation when reflecting table columns using
MSSQL to prevent "collation conflict" errors.
Fixes: #8035
Change-Id: I4239a5ca8b041f56d7b3bba67b3357c176db31ee
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
To simplify pyproject.toml, change the remaining files
that aren't going to be typed on this first pass
(unless of course someone wants to type some of these)
to include # mypy: ignore-errors. For the moment, only a handful
of ORM modules are to have more type checking implemented.
It's important that ignore-errors is used and
not "# type: ignore", as in the latter case, mypy doesn't even
read the existing types in the file, which makes it impossible to
type any files that refer to those modules at all.
To simplify ongoing typing work, use inline mypy config
for remaining files that are "done" for now, indicating the
level of type checking they currently have.
Change-Id: I98669c1a305c2f0adba85d10b5425541f3fe9533
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Improved the construction of SQL binary expressions to allow for very long
expressions against the same associative operator without special steps
needed in order to avoid high memory use and excess recursion depth. A
particular binary operation ``A op B`` can now be joined against another
element ``op C`` and the resulting structure will be "flattened" so that
the representation as well as SQL compilation does not require recursion.
To implement this more cleanly, the biggest change here is that
column-oriented lists of things are broken away from ClauseList
into a new class, ExpressionClauseList, which also forms the basis
of BooleanClauseList. ClauseList is still used for the generic
"comma-separated list" of things such as Tuple and things like
ORDER BY, as well as in some API endpoints.
Also adds __slots__ to the TypeEngine-bound Comparator
classes. Still can't really do __slots__ on ClauseElement.
Fixes: #7744
Change-Id: I81a8ceb6f8f3bb0fe52d58f3cb42e4b6c2bc9018
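A sketch of the kind of pattern this change helps with (illustrative only)::

    from sqlalchemy import column, or_

    c = column("x")

    expr = c == 0
    # repeatedly extending the same OR chain no longer builds a deeply
    # nested structure, so SQL compilation does not require recursion
    for i in range(1, 10000):
        expr = or_(expr, c == i)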
|
|
|
|
|
|
|
|
|
|
|
|
| |
Added modified ISO-8601 rendering (i.e. ISO-8601 with the T converted to a
space) when using ``literal_binds`` with the SQL compilers provided by the
PostgreSQL, MySQL, MariaDB, MSSQL, Oracle dialects. For Oracle, the ISO
format is wrapped inside of an appropriate TO_DATE() function call.
Previously this rendering was not implemented for dialect-specific
compilation.
Fixes: #5052
Change-Id: I7af15a51fedf5c5a8e76e645f7c3be997ece35f0
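For example (a sketch; the exact output string shown is approximate and
dialect-dependent)::

    import datetime

    from sqlalchemy import DateTime, literal
    from sqlalchemy.dialects import postgresql

    expr = literal(datetime.datetime(2021, 1, 1, 12, 30), DateTime)
    print(
        expr.compile(
            dialect=postgresql.dialect(),
            compile_kwargs={"literal_binds": True},
        )
    )
    # expected to render along the lines of '2021-01-01 12:30:00';
    # Oracle wraps the string in an appropriate TO_DATE() call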
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Full "RETURNING" support is implemented for the cx_Oracle dialect, meaning
multiple RETURNING rows are now recived for DML statements that produce
more than one row for RETURNING.
cx_Oracle 7 is now the minimum version for cx_Oracle.
Getting Oracle to do multirow returning took about 5 minutes. however,
getting Oracle's RETURNING system to integrate with ORM-enabled
insert, update, delete, is a big deal because that architecture wasn't
really working very robustly, including some recent changes in 1.4
for FromStatement were done in a hurry, so this patch also cleans up
the FromStatement situation and begins to establish it more concretely
as the base for all ReturnsRows / TextClause ORM scenarios.
Fixes: #6245
Change-Id: I2b4e6007affa51ce311d2d5baa3917f356ab961f
|
|
|
|
|
|
|
| |
Strictly type type_api.py, including TypeDecorator,
NativeForEmulated, etc.
Change-Id: Ib2eba26de0981324a83733954cb7044a29bbd7db
|
|
|
|
|
| |
Fixes: #7812
Change-Id: Ic16eff9a9201d34515cb8eb884270eced4e1196a
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
This method, I would assume, got committed during the
1.4 engine refactor, where we moved from different kinds of
ResultProxy implementations to different strategy
classes instead. These strategies are set up by
dialects by setting "self.cursor_fetch_strategy"
in the execution context. The method here was
likely a previous iteration of that which got merged
but was never used.
Change-Id: Iec292428f41c2c245bf7ae78beaa14786c28846c
|
|
|
|
|
| |
Fixes: #7243
Change-Id: I99880f429dbaac525bdf7d44438aaab6bc8d0ca6
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Start applying foundational annotations to key
elements.
Two main elements addressed here:
1. removal of public_factory() and replacement with
explicit functions. This just works much better with
typing.
2. typing support for column expressions and operators.
The biggest part of this involves stubbing out all the
ColumnOperators methods under ColumnElement in a
TYPE_CHECKING section. Took me a while to see this
method vs. much more complicated things I thought
I needed.
Also for this version, implementing #7519: ColumnElement
types against the Python type and not TypeEngine. It is
hoped this leads to easier transference between ORM/Core
as well as eventual support for result set typing.
Not clear yet how well this approach will work and what
new issues it may introduce.
Given the current approach, we now get full, rich typing for
scenarios like this:
from sqlalchemy import column, Integer, String, Boolean
c1 = column('a', String)
c2 = column('a', Integer)
expr1 = c2.in_([1, 2, 3])
expr2 = c2 / 5
expr3 = -c2
expr4_a = ~(c2 == 5)
expr4_b = ~column('q', Boolean)
expr5 = c1 + 'x'
expr6 = c2 + 10
Fixes: #7519
Fixes: #6810
Change-Id: I078d9f57955549f6f7868314287175f6c61c44cb
|
|
|
|
| |
Change-Id: I49abf2607e0eb0623650efdf0091b1fb3db737ea
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
### Description
Black's `target-version` was still set to `['py27', 'py36']`. Set it to `[py37]` instead.
Also update Black and other pre-commit hooks and re-format with Black.
Closes: #7536
Pull-request: https://github.com/sqlalchemy/sqlalchemy/pull/7536
Pull-request-sha: b3aedf5570d7e0ba6c354e5989835260d0591b08
Change-Id: I8be85636fd2c9449b07a8626050c8bd35bd119d5
|