| Commit message | Author | Age | Files | Lines |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Repaired a major shortcoming which was identified in the
:ref:`engine_insertmanyvalues` performance optimization feature first
introduced in the 2.0 series. This was a continuation of the change in
2.0.9 which disabled the SQL Server version of the feature due to a
reliance in the ORM on apparent row ordering that is not guaranteed to take
place. The fix applies new logic to all "insertmanyvalues" operations,
which takes effect when a new parameter
:paramref:`_dml.Insert.returning.sort_by_parameter_order` on the
:meth:`_dml.Insert.returning` or :meth:`_dml.UpdateBase.return_defaults`
methods, that through a combination of alternate SQL forms, direct
correspondence of client side parameters, and in some cases downgrading to
running row-at-a-time, will apply sorting to each batch of returned rows
using correspondence to primary key or other unique values in each row
which can be correlated to the input data.
Performance impact is expected to be minimal as nearly all common primary
key scenarios are suitable for parameter-ordered batching to be
achieved for all backends other than SQLite, while "row-at-a-time"
mode operates with a bare minimum of Python overhead compared to the very
heavyweight approaches used in the 1.x series. For SQLite, there is no
difference in performance when "row-at-a-time" mode is used.
It's anticipated that with an efficient "row-at-a-time" INSERT with
RETURNING batching capability, the "insertmanyvalues" feature can later be
more easily generalized to third party backends that include RETURNING
support but not necessarily easy ways to guarantee a correspondence
with parameter order.
Fixes: #9618
References: #9603
Change-Id: I1d79353f5f19638f752936ba1c35e4dc235a8b7c
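A minimal usage sketch of the new parameter, assuming a hypothetical ``user_account`` table and connection URL::

    from sqlalchemy import (
        Column,
        Integer,
        MetaData,
        String,
        Table,
        create_engine,
        insert,
    )

    metadata = MetaData()
    user_account = Table(
        "user_account",
        metadata,
        Column("id", Integer, primary_key=True),
        Column("name", String(50)),
    )

    engine = create_engine("postgresql+psycopg2://scott:tiger@localhost/test")

    with engine.begin() as conn:
        metadata.create_all(conn)
        # RETURNING rows come back sorted to correspond to the order of
        # the input parameter dictionaries
        result = conn.execute(
            insert(user_account).returning(
                user_account.c.id, sort_by_parameter_order=True
            ),
            [{"name": "spongebob"}, {"name": "sandy"}, {"name": "patrick"}],
        )
        ids = [row.id for row in result]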
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
pymssql seems to be maintained again and appears to be working
fully, so let's try re-enabling it.
Fixed issue in the new :class:`.Uuid` datatype which prevented it from
working with the pymssql driver. As pymssql seems to be maintained again,
restored testing support for pymssql.
Tweaked the pymssql dialect to take better advantage of
RETURNING for INSERT statements in order to retrieve last inserted primary
key values, in the same way as occurs for the mssql+pyodbc dialect right
now.
Identified that the ``sqlite`` and ``mssql+pyodbc`` dialects are now
compatible with the SQLAlchemy ORM's "versioned rows" feature, since
SQLAlchemy now computes rowcount for a RETURNING statement in this specific
case by counting the rows returned, rather than relying upon
``cursor.rowcount``. In particular, the ORM versioned rows use case
(documented at :ref:`mapper_version_counter`) should now be fully
supported with the SQL Server pyodbc dialect.
Change-Id: I38a0666587212327aecf8f98e86031ab25d1f14d
References: #5321
Fixes: #9414
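A short sketch of the :class:`.Uuid` type with the pymssql driver, assuming a hypothetical table and connection URL::

    import uuid

    from sqlalchemy import Column, MetaData, Table, Uuid, create_engine

    metadata = MetaData()
    items = Table("items", metadata, Column("guid", Uuid(), primary_key=True))

    engine = create_engine("mssql+pymssql://scott:tiger@localhost/test")

    with engine.begin() as conn:
        metadata.create_all(conn)
        conn.execute(items.insert(), [{"guid": uuid.uuid4()}])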
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
This adds the very small plugin flake8-import-single which
will prevent us from having an import with more than one symbol
on a line.
Flake8 by itself prevents this pattern with E401:
import collections, os, sys
However, it does not do anything with this:
from sqlalchemy import Column, text
Both statements have the same issues of generating merge artifacts
as well as presenting a manual decision to be made. While
zimports generally cleans up such imports at the top level, we
don't enforce zimports / pre-commit use.
The plugin finds the same issue for imports that are inside of
test methods. We shouldn't usually have imports in test methods,
so most of them here are moved to the top level.
The version is pinned at 0.1.5; the project seems to have no
activity since 2019; however, there are three 0.1.6dev releases
on PyPI, the last from September 2019, and they seem to be
experiments with packaging. The source for 0.1.5
is extremely simple and only reveals one method to flake8
(the run() method).
Change-Id: Icea894e43bad9c0b5d4feb5f49c6c666d6ea6aa1
|
|
|
|
|
|
|
|
| |
Added public property :attr:`_sql.Table.autoincrement_column` that
returns the column identified as autoincrementing in the table.
Fixes: #9277
Change-Id: If60d6f92e0df94f57d00ff6d89d285c61b02f5a4
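A small sketch of the new property against a hypothetical table::

    from sqlalchemy import Column, Integer, MetaData, String, Table

    metadata = MetaData()
    widgets = Table(
        "widgets",
        metadata,
        Column("id", Integer, primary_key=True),
        Column("name", String(50)),
    )

    # the column SQLAlchemy treats as the autoincrementing integer primary key
    assert widgets.autoincrement_column is widgets.c.id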
|
|
|
|
|
|
|
|
| |
command run is "pyupgrade --py37-plus --keep-runtime-typing --keep-percent-format <files...>"
pyupgrade will change assert_ to assertTrue. That was reverted since assertTrue does not
exist in the SQLAlchemy fixtures
Change-Id: Ie1ed2675c7b11d893d78e028aad0d1576baebb55
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
The :class:`.Sequence` construct restores itself to the DDL behavior it
had prior to the 1.4 series, where creating a :class:`.Sequence` with
no additional arguments will emit a simple ``CREATE SEQUENCE`` instruction
**without** any additional parameters for "start value". For most backends,
this is how things worked previously in any case; **however**, for
MS SQL Server, the default value on this database is
``-2**63``; to prevent this generally impractical default
from taking effect on SQL Server, the :paramref:`.Sequence.start` parameter
should be provided. As usage of :class:`.Sequence` is unusual
for SQL Server which for many years has standardized on ``IDENTITY``,
it is hoped that this change has minimal impact.
Fixes: #7211
Change-Id: I1207ea10c8cb1528a1519a0fb3581d9621c27b31
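A brief sketch comparing the two DDL forms on the SQL Server dialect (the sequence name is hypothetical)::

    from sqlalchemy import Sequence
    from sqlalchemy.dialects import mssql
    from sqlalchemy.schema import CreateSequence

    # with no arguments, a bare CREATE SEQUENCE is emitted,
    # leaving the start value to the server default
    print(CreateSequence(Sequence("order_id_seq")).compile(dialect=mssql.dialect()))

    # passing start=1 renders an explicit START WITH clause
    print(
        CreateSequence(Sequence("order_id_seq", start=1)).compile(
            dialect=mssql.dialect()
        )
    )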
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
As almost every dialect supports RETURNING now, RETURNING
is also made more of a default assumption.
* the default compiler generates a RETURNING clause now
when specified; CompileError is no longer raised.
* The dialect-level implicit_returning parameter now has
no effect. It's not fully clear if there are real world
cases relying on the dialect-level parameter, so we will see
once 2.0 is released. ORM-level RETURNING can be disabled
at the table level, and perhaps "implicit returning" should
become an ORM-level option at some point as that's where
it applies.
* Altered ORM update() / delete() to respect table-level
implicit returning for fetch.
* Since MariaDB doesn't support UPDATE RETURNING, "full_returning"
is now split into insert_returning, update_returning, delete_returning
* New situation: dialects that have *both* cursor.lastrowid
*and* RETURNING, so now we can pick between them for SQLite
and MariaDB. We try to keep it on .lastrowid for
simple inserts with an autoincrement column, which helps with
some edge case test scenarios, and .lastrowid is likely faster
anyway. For any return_defaults() / multiparams cases we
use RETURNING instead.
* SQLite decided they don't want to return rows that match in
ON CONFLICT. This is flat out wrong, but for now we need to
work with it.
Fixes: #6195
Fixes: #7011
Closes: #7047
Pull-request: https://github.com/sqlalchemy/sqlalchemy/pull/7047
Pull-request-sha: d25d5ea3abe094f282c53c7dd87f5f53a9e85248
Co-authored-by: Mike Bayer <mike_mp@zzzcomputing.com>
Change-Id: I9908ce0ff7bdc50bd5b27722081767c31c19a950
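A quick sketch of the per-statement capability flags that replace "full_returning" (shown against SQLite; the values depend on backend and version)::

    from sqlalchemy import create_engine

    engine = create_engine("sqlite://")

    print(engine.dialect.insert_returning)
    print(engine.dialect.update_returning)
    print(engine.dialect.delete_returning)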
|
|
|
|
|
|
|
|
| |
A whole bunch of errors were apparently blocked by 0.0.4
being installed.
Fixes: #8020
Change-Id: I22a0faeaabe03de501897893391946d677c2df7e
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Implement strict typing for schema.py.
This module has lots of public API, lots of old decisions,
and very hard-to-follow construction sequences in many
cases, and is also where we get a lot of new feature requests,
so strict typing should help keep things clean.
Among improvements here, fixed the pool .info getters
and also figured out how to get ColumnCollection and
related to be covariant so that we may set them up
as returning Column or ColumnClause without any conflicts.
DDL was affected, noting that superclasses of DDLElement
(_DDLCompiles, added recently) can now be passed into
"ddl_if" callables; reorganized ddl into ExecutableDDLElement
as a new name for DDLElement and _DDLCompiles renamed to
BaseDDLElement.
setting up strict also located an API use case that
is completely broken, which is connection.execute(some_default)
returns a scalar value. This case has been deprecated
and new paths have been set up so that connection.scalar()
may be used. This likely wasn't possible in previous
versions because scalar() would assume a CursorResult.
The scalar() change also impacts Session as we have explicit
support (since someone had reported it as a regression)
for session.execute(Sequence()) to work. They will get the
same deprecation message (which omits the word "Connection",
just uses ".execute()" and ".scalar()") and they can then
use Session.scalar() as well. Getting this to type
correctly while still supporting ORM use cases required
some refactoring, and I also set up a keyword-only delimiter
for Session.execute() and related, as execution_options /
bind_arguments should always be keyword only; applied these
changes to AsyncSession as well.
Additionally, simplify Table __init__ now that we are Python
3 only; we can have positional plus explicit kwargs finally.
Simplify Column.__init__ as well, again taking advantage
of kw-only arguments.
Fill in most/all __init__ methods in sqltypes.py, as
the constructor for types is most of the API. We should
likely do this for dialect-specific types as well.
Apply _InfoType for all info attributes as should have been
done originally and update descriptor decorators.
Change-Id: I3f9f8ff3f1c8858471ff4545ac83d68c88107527
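A small sketch of the new scalar() path for executing a default object, assuming a hypothetical PostgreSQL URL and an existing sequence::

    from sqlalchemy import Sequence, create_engine

    engine = create_engine("postgresql+psycopg2://scott:tiger@localhost/test")
    seq = Sequence("invoice_number_seq")

    with engine.connect() as conn:
        # connection.execute(seq) returning a scalar is deprecated;
        # connection.scalar() is the supported path
        next_value = conn.scalar(seq)
        print(next_value)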
|
|
|
|
|
|
|
| |
Non-strict checking for mostly internal or semi-internal
code.
Change-Id: Ib91b47f1a8ccc15e666b94bad1ce78c4ab15b0ec
|
|
|
|
|
| |
References: #4600
Change-Id: I2a62ddfe00bc562720f0eae700a497495d7a987a
|
|\ |
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
The :paramref:`_sa.create_engine.implicit_returning` parameter is
deprecated on the :func:`_sa.create_engine` function only; the parameter
remains available on the :class:`_schema.Table` object. This parameter was
originally intended to enable the "implicit returning" feature of
SQLAlchemy when it was first developed and was not enabled by default.
Under modern use, there's no reason this parameter should be disabled, and
it has been observed to cause confusion as it degrades performance and
makes it more difficult for the ORM to retrieve recently inserted server
defaults. The parameter remains available on :class:`_schema.Table` to
specifically suit database-level edge cases which make RETURNING
infeasible, the sole example currently being SQL Server's limitation that
INSERT RETURNING may not be used on a table that has INSERT triggers on it.
Also removed from the Oracle dialect some logic that would upgrade
an Oracle 8/8i server version to use implicit returning if the
parameter were explicitly passed; these versions of Oracle
still support RETURNING so the feature is now enabled for all
Oracle versions.
Fixes: #6962
Change-Id: Ib338e300cd7c8026c3083043f645084a8211aed8
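A sketch of the table-level parameter that remains available for the SQL Server trigger edge case (the table is hypothetical)::

    from sqlalchemy import Column, Integer, MetaData, String, Table

    metadata = MetaData()

    # RETURNING is disabled only for this table, e.g. because it has
    # INSERT triggers on SQL Server
    documents = Table(
        "documents",
        metadata,
        Column("id", Integer, primary_key=True),
        Column("title", String(100)),
        implicit_returning=False,
    )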
|
|/
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Removed here includes:
* convert_unicode parameters
* encoding create_engine() parameter
* description encoding support
* "non-unicode fallback" modes under Python 2
* String symbols regarding Python 2 non-unicode fallbacks
* any concept of DBAPIs that don't accept unicode
statements, unicode bound parameters, or that return bytes
for strings anywhere except an explicit Binary / BLOB
type
* unicode processors in Python / C
Risk factors:
* Whether all DBAPIs do in fact return Unicode objects for
all entries in cursor.description now
* There was logic for mysql-connector trying to determine
description encoding. A quick test shows Unicode coming
back but it's not clear if there are still edge cases where
they return bytes. If so, these are bugs in that driver,
and at most we would only work around it in the mysql-connector
DBAPI itself (but we won't do that either).
* It seems like Oracle 8 was not expecting unicode bound parameters.
I'm assuming this was all Python 2 stuff and does not apply
for modern cx_Oracle under Python 3.
* third party dialects relying upon built in unicode encoding/decoding
but it's hard to imagine any non-SQLAlchemy database driver not
dealing exclusively in Python unicode strings in Python 3
Change-Id: I97d762ef6d4dd836487b714d57d8136d0310f28a
References: #7257
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
The major action here is to lift and move future.Connection
and future.Engine fully into sqlalchemy.engine.base. This
removes lots of engine concepts, including:
* autocommit
* Connection running without a transaction, autobegin
is now present in all cases
* most "autorollback" is obsolete
* Core-level subtransactions (i.e. MarkerTransaction)
* "branched" connections, copies of connections
* execution_options() returns self, not a new connection
* old argument formats, distill_params(), simplifies calling
scheme between engine methods
* before/after_execute() events (oriented towards compiled constructs)
don't emit for exec_driver_sql(). before/after_cursor_execute()
is still included for this
* old helper methods superseded by context managers, connection.transaction(),
engine.transaction() engine.run_callable()
* ancient engine-level reflection methods has_table(), table_names()
* sqlalchemy.testing.engines.proxying_engine
References: #7257
Change-Id: Ib20ed816642d873b84221378a9ec34480e01e82c
|
|
|
|
|
|
|
|
|
|
| |
The :class:`.TypeDecorator` class will now emit a warning when used in SQL
compilation with caching unless the ``.cache_ok`` flag is set to ``True``
or ``False``. A value of ``True`` indicates that all the parameters passed to the
object are safe to be used as a cache key; ``False`` means they are not.
Fixes: #6436
Change-Id: Ib1bb7dc4b124e38521d615c2e2e691e4915594fb
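A minimal sketch of a custom type that opts into caching via ``cache_ok`` (the type itself is hypothetical)::

    import json

    from sqlalchemy import String, TypeDecorator


    class JSONEncodedDict(TypeDecorator):
        """Stores a Python dict as a JSON string."""

        impl = String
        cache_ok = True  # parameters to this type are safe in a cache key

        def process_bind_param(self, value, dialect):
            return json.dumps(value) if value is not None else None

        def process_result_value(self, value, dialect):
            return json.loads(value) if value is not None else None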
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Fixed regression where the introduction of the INSERT syntax "INSERT...
VALUES (DEFAULT)" was not supported on some backends that do however
support "INSERT..DEFAULT VALUES", including SQLite. The two syntaxes are
now each individually supported or non-supported for each dialect, for
example MySQL supports "VALUES (DEFAULT)" but not "DEFAULT VALUES".
Support for Oracle is still not enabled as there are unresolved issues
in using RETURNING at the same time.
Fixes: #6254
Change-Id: I47959bc826e3d9d2396ccfa290eb084841b02e77
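A rough sketch of an INSERT that relies entirely on defaults, where the dialect now chooses between the two syntaxes (the table is hypothetical)::

    from sqlalchemy import Column, Integer, MetaData, String, Table, create_engine

    metadata = MetaData()
    events = Table(
        "events",
        metadata,
        Column("id", Integer, primary_key=True),
        Column("status", String(20), server_default="new"),
    )

    engine = create_engine("sqlite://")
    with engine.begin() as conn:
        metadata.create_all(conn)
        # with no values given, the compiler emits "DEFAULT VALUES" or
        # "VALUES (DEFAULT)" depending on what the backend supports
        conn.execute(events.insert())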
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Fixed ORM unit of work regression where an errant "assert primary_key"
statement interferes with primary key generation sequences that don't
actually consider the columns in the table to use a real primary key
constraint, instead using :paramref:`_orm.mapper.primary_key` to establish
certain columns as "primary".
Also removed an errant "identity" requirement which does not seem to
represent any current backend; it was applied to
test/sql/test_defaults.py->AutoIncrementTest, but these tests work
on all backends.
Fixes: #5867
Change-Id: I4502ca5079d824d7b4d055194947aa1a00effde7
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Continuing towards a SQLAlchemy 1.4.0b2 that internally
does not emit any of its own 2.0 deprecation warnings,
migrate the *args and **kwargs passed to execute() methods,
which now must be a single list or dictionary.
Alembic 1.5 is again waiting on this internal consistency to
be present so that it can pass all tests with no 2.0
deprecation warnings.
Change-Id: If6b792e57c8c5dff205419644ab68e631575a2fa
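A short sketch of the calling convention the internals are migrated to::

    from sqlalchemy import create_engine, text

    engine = create_engine("sqlite://")
    with engine.connect() as conn:
        conn.execute(text("create table t (x integer, y integer)"))

        # parameters as a single dictionary ...
        conn.execute(
            text("insert into t (x, y) values (:x, :y)"), {"x": 1, "y": 2}
        )

        # ... or as a single list of dictionaries for executemany
        conn.execute(
            text("insert into t (x, y) values (:x, :y)"),
            [{"x": 3, "y": 4}, {"x": 5, "y": 6}],
        )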
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
To allow the "connection" pytest fixture and others work
correctly in conjunction with setup/teardown that expects
to be external to the transaction, remove and prevent any usage
of "xdist" style names that are hardcoded by pytest to run
inside of fixtures, even function level ones. Instead use
pytest autouse fixtures to implement our own
r"setup|teardown_test(?:_class)?" methods so that we can ensure
function-scoped fixtures are run within them. A new more
explicit flow is set up within plugin_base and pytestplugin
such that the order of setup/teardown steps, which there are now
many, is fully documented and controllable. New granularity
has been added to the test teardown phase to distinguish
between "end of the test" when lock-holding structures on
connections should be released to allow for table drops,
vs. "end of the test plus its teardown steps" when we can
perform final cleanup on connections and run assertions
that everything is closed out.
From there we can remove most of the defensive "tear down everything"
logic inside of engines which for many years would frequently dispose
of pools over and over again, creating a broken and expensive
connection flow. A quick test shows that running test/sql/ against
a single Postgresql engine with the new approach uses 75% fewer new
connections, creating 42 new connections total, vs. 164 new
connections total with the previous system.
As part of this, the new fixtures metadata/connection/future_connection
have been integrated such that they can be combined together
effectively. The fixture_session(), provide_metadata() fixtures
have been improved, including that fixture_session() now strongly
references sessions which are explicitly torn down before
table drops occur after a test.
Major changes have been made to the
ConnectionKiller such that it now features different "scopes" for
testing engines and will limit its cleanup to those testing
engines corresponding to end of test, end of test class, or
end of test session. The system by which it tracks DBAPI
connections has been reworked, is ultimately somewhat similar to
how it worked before but is organized more clearly along
with the proxy-tracking logic. A "testing_engine" fixture
is also added that works as a pytest fixture rather than a
standalone function. The connection cleanup logic should
now be very robust, as we now can use the same global
connection pools for the whole suite without ever disposing
them, while also running a query for PostgreSQL
locks remaining after every test and assert there are no open
transactions leaking between tests at all. Additional steps
are added that also accommodate asyncio connections not
explicitly closed, as is the case for legacy sync-style
tests as well as the async tests themselves.
As always, hundreds of tests are further refined to use the
new fixtures where problems with loose connections were identified,
largely as a result of the new PostgreSQL assertions;
many more tests have moved from legacy patterns into the newest.
An unfortunate discovery during the creation of this system is that
autouse fixtures (as well as if they are set up by
@pytest.mark.usefixtures) are not usable at our current scale with pytest
4.6.11 running under Python 2. It's unclear if this is due
to the older version of pytest or how it implements itself for
Python 2, as well as if the issue is CPU slowness or just large
memory use, but collecting the full span of tests takes over
a minute for a single process when any autouse fixtures are in
place and on CI the jobs just time out after ten minutes.
So at the moment this patch also reinvents a small version of
"autouse" fixtures when py2k is running, which skips generating
the real fixture and instead uses two global pytest fixtures
(which don't seem to impact performance) to invoke the
"autouse" fixtures ourselves outside of pytest.
This will limit our ability to do more with fixtures
until we can remove py2k support.
py.test is still observed to be much slower in collection in the
4.6.11 version compared to modern 6.2 versions, so add support for new
TOX_POSTGRESQL_PY2K and TOX_MYSQL_PY2K environment variables that
will run the suite for fewer backends under Python 2. For Python 3
pin pytest to modern 6.2 versions where performance for collection
has been improved greatly.
Includes the following improvements:
Fixed bug in asyncio connection pool where ``asyncio.TimeoutError`` would
be raised rather than :class:`.exc.TimeoutError`. Also repaired the
:paramref:`_sa.create_engine.pool_timeout` parameter set to zero when using
the async engine, which previously would ignore the timeout and block
rather than timing out immediately as is the behavior with regular
:class:`.QueuePool`.
For asyncio the connection pool will now also not interact
at all with an asyncio connection whose ConnectionFairy is
being garbage collected; a warning that the connection was
not properly closed is emitted and the connection is discarded.
Within the test suite the ConnectionKiller is now maintaining
strong references to all DBAPI connections and ensuring they
are released when tests end, including those whose ConnectionFairy
proxies are GCed.
Identified cx_Oracle.stmtcachesize as a major factor in Oracle
test scalability issues, this can be reset on a per-test basis
rather than setting it to zero across the board. The addition
of this flag has resolved the long-standing Oracle "two task"
error problem.
For SQL Server, changed the temp table style used by the
"suite" tests to be the double-pound-sign, i.e. global,
variety, which is much easier to test generically. There
are already reflection tests that are more finely tuned
to both styles of temp table within the mssql test
suite. Additionally, added an extra step to the
"dropfirst" mechanism for SQL Server that will remove
all foreign key constraints first as some issues were
observed when using this flag when multiple schemas
had not been torn down.
Identified and fixed two subtle failure modes in the
engine, when commit/rollback fails in a begin()
context manager, the connection is explicitly closed,
and when "initialize()" fails on the first new connection
of a dialect, the transactional state on that connection
is still rolled back.
Fixes: #5826
Fixes: #5827
Change-Id: Ib1d05cb8c7cf84f9a4bfd23df397dc23c9329bfe
|
|
|
|
|
|
|
|
|
|
|
|
| |
Ensure no autocommit warnings occur internally or
within tests.
Also includes fixes for SQL Server full text tests,
which apparently had not been working at all for a long
time, as they used long-removed APIs. CI had not had
fulltext running for some years; it is now installed.
Change-Id: Id806e1856c9da9f0a9eac88cebc7a94ecc95eb96
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
The :class:`_schema.Table` class now raises a deprecation warning
when columns with the same name are defined. To replace a column a new
parameter :paramref:`_schema.Table.append_column.replace_existing` was
added to the :meth:`_schema.Table.append_column` method.
The :meth:`_expression.ColumnCollection.contains_column` method will now
raise an error when called with a string, suggesting that the caller
use ``in`` instead.
Co-authored-by: Federico Caselli <cfederico87@gmail.com>
Change-Id: I1d58c8ebe081079cb669e7ead60886ffc1b1a7f5
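A small sketch of the new ``replace_existing`` parameter against a hypothetical table::

    from sqlalchemy import Column, Integer, MetaData, String, Table

    metadata = MetaData()
    users = Table(
        "users",
        metadata,
        Column("id", Integer, primary_key=True),
        Column("name", String(30)),
    )

    # re-defining "name" now warns unless replace_existing=True is passed
    users.append_column(Column("name", String(60)), replace_existing=True)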
|
|
|
|
|
|
|
| |
It's better; the majority of these changes look more readable to me.
Also found some docstrings that had formatting / quoting issues.
Change-Id: I582a45fde3a5648b2f36bab96bad56881321899b
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
As the test suite has widespread use of many patterns
that are deprecated, enable SQLALCHEMY_WARN_20 globally
for the test suite but then break the warnings filter
out into a whole list of all the individual warnings
we are looking for. This way individual changesets
can target a specific class of warning, as many of these
warnings will individually affect dozens of files
and potentially hundreds of lines of code.
Many warnings are also resolved here as this
patch started out that way. From this point
forward there should be changesets that target a
subset of the warnings at a time.
For expediency, updates some migration 2.0 docs
for ORM as well.
Change-Id: I98b8defdf7c37b818b3824d02f7668e3f5f31c94
|
|\ |
|
| |
| |
| |
| |
| |
| |
| | |
Make optional sequences render as identity in mssql
Remove unused dialect option sequence_default_column_type
Change-Id: I821eeffcb442f8d1b69186a9b798b15c3d8d6ff3
|
|/
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
This change includes mainly that the bracketed use within
select() is moved to positional, and keyword arguments are
removed from calls to the select() function. It does not
yet fully address other issues such as keyword arguments passed
to table.select().
Additionally, allows False / None to both be considered
as "disable" for all of select.correlate(), select.correlate_except(),
query.correlate(), which establishes consistency with
passing of ``False`` for the legacy select(correlate=False)
argument.
Change-Id: Ie6c6e6abfbd3d75d4c8de504c0cf0159e6999108
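A brief sketch of the two calling styles, using a hypothetical table; the 1.4 transitional API accepts both::

    from sqlalchemy import Column, Integer, MetaData, String, Table, select

    metadata = MetaData()
    users = Table(
        "users",
        metadata,
        Column("id", Integer, primary_key=True),
        Column("name", String(30)),
    )

    # legacy "bracketed" 1.x form
    legacy_stmt = select([users.c.id, users.c.name])

    # positional form that the test suite moves to here
    stmt = select(users.c.id, users.c.name).where(users.c.name == "spongebob")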
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Using the approach introduced at
https://gist.github.com/zzzeek/6287e28054d3baddc07fa21a7227904e
We can now create asyncio endpoints that are then handled
in "implicit IO" form within the majority of the Core internals.
Then coroutines are re-exposed at the point at which we call
into asyncpg methods.
Patch includes:
* asyncpg dialect
* asyncio package
* engine, result, ORM session classes
* new test fixtures, tests
* some work with pep-484 and a short plugin for the
pyannotate package, which seems to have so-so results
Change-Id: Idbcc0eff72c4cad572914acdd6f40ddb1aef1a7d
Fixes: #3414
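A minimal sketch of the asyncio-facing API with the asyncpg dialect (the connection URL is hypothetical)::

    import asyncio

    from sqlalchemy import text
    from sqlalchemy.ext.asyncio import create_async_engine


    async def main():
        engine = create_async_engine(
            "postgresql+asyncpg://scott:tiger@localhost/test"
        )
        async with engine.connect() as conn:
            result = await conn.execute(text("select 1"))
            print(result.scalar())
        await engine.dispose()


    asyncio.run(main())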
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
The psycopg2 dialect now defaults to using the very performant
``execute_values()`` psycopg2 extension for compiled INSERT statements,
and also implements RETURNING support when this extension is used. This
allows INSERT statements that even include an autoincremented SERIAL
or IDENTITY value to run very fast while still being able to return the
newly generated primary key values. The ORM will then integrate this
new feature in a separate change.
Implements RETURNING for insert with executemany
Adds support to return_defaults() mode and inserted_primary_key
to support multiple INSERTed rows, via the returned_defaults_rows
and inserted_primary_key_rows accessors.
Within the default execution context, new cached compiler
getters are used to fetch primary keys from rows.
inserted_primary_key now returns a plain tuple. This
is not yet a row-like object, however this can be
added.
Adds distinct "values_only" and "batch" modes, as
"values" has a lot of benefits but "batch" breaks
cursor.rowcount
The psycopg2 minimum version is now 2.7, so we can remove the
large number of checks for very old versions of
psycopg2.
Simplify tests to no longer distinguish between
native and non-native JSON.
Fixes: #5401
Change-Id: Ic08fd3423d4c5d16ca50994460c0c234868bd61c
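A sketch of selecting the executemany mode on the psycopg2 dialect (the URL is hypothetical)::

    from sqlalchemy import create_engine

    # "values_only" uses the execute_values() extension for compiled INSERTs;
    # "batch" uses execute_batch() but breaks cursor.rowcount
    engine = create_engine(
        "postgresql+psycopg2://scott:tiger@localhost/test",
        executemany_mode="values_only",
    )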
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Added support for "CREATE SEQUENCE" and full :class:`.Sequence` support for
Microsoft SQL Server. This removes the deprecated feature of using
:class:`.Sequence` objects to manipulate IDENTITY characteristics which
should now be performed using ``mssql_identity_start`` and
``mssql_identity_increment`` as documented at :ref:`mssql_identity`. The
change includes a new parameter :paramref:`.Sequence.data_type` to
accommodate SQL Server's choice of datatype, which for that backend
includes INTEGER and BIGINT. The default starting value for SQL Server's
version of :class:`.Sequence` has been set at 1; this default is now
emitted within the CREATE SEQUENCE DDL for all backends.
Fixes: #4235
Fixes: #4633
Change-Id: I6aa55c441e8146c2f002e2e201a7f645e667b916
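A construction sketch of the new :paramref:`.Sequence.data_type` parameter with a hypothetical table::

    from sqlalchemy import BigInteger, Column, MetaData, Sequence, Table

    metadata = MetaData()
    invoices = Table(
        "invoices",
        metadata,
        Column(
            "id",
            BigInteger,
            Sequence("invoice_seq", data_type=BigInteger, start=1),
            primary_key=True,
        ),
    )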
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Implemented the SQLAlchemy 2 :func:`.future.create_engine` function which
is used for forwards compatibility with SQLAlchemy 2. This engine
features always-transactional behavior with autobegin.
Allow execution options per statement execution. This includes
that the before_execute() and after_execute() events now accept
an additional dictionary with these options, empty if not
passed; a legacy event decorator is added for backwards compatibility
which now also emits a deprecation warning.
Add some basic tests for execution, transactions, and
the new result object. Build out on a new testing fixture
that swaps in the future engine completely to start with.
Change-Id: I70e7338bb3f0ce22d2f702537d94bb249bd9fb0a
Fixes: #4644
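A small sketch of the future engine's autobegin / explicit commit flow::

    from sqlalchemy import text
    from sqlalchemy.future import create_engine

    engine = create_engine("sqlite://")

    with engine.connect() as conn:
        conn.execute(text("create table t (x integer)"))
        conn.execute(text("insert into t (x) values (1)"))
        conn.commit()  # always-transactional; no autocommit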
|
|
|
|
|
|
|
|
|
|
| |
Towards the goal of reducing verbosity and repetition
in test fixtures, as well as the move to
connection-only execution, change the insert_data()
classmethod to accept a connection and adjust all
fixtures to use it.
Change-Id: I3bf534acca0d5f4cda1d4da8ae91f1155b829b09
|
|
|
|
|
|
|
|
|
|
|
| |
In I27cac9bd265c86ff2a3381ff9f844f60ef991cfc we modernized
the default tests and converted the "a in b" CTE tests to combinations,
however apparently the existing tests were not testing all
combinations and had repeats instead. The combinations
decorator has made this much easier to spot, so use
the correct combinations that were originally intended.
Change-Id: Icd904887bff00c31525497d0b1508fabaf052dc9
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Use modern execution patterns; the goal is for these same tests
to work with the future engine.
Break sequence tests out into a test_sequences suite.
Sequence tests that are testing implicit execution patterns
at least move into their own suite, which will go into test_deprecations
eventually.
Change-Id: I27cac9bd265c86ff2a3381ff9f844f60ef991cfc
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Execution of a literal SQL string is deprecated in
:meth:`.Connection.execute`, and a warning is raised when used, stating
that it will be coerced to :func:`.text` in a future release.
To execute a raw SQL string, the new connection method
:meth:`.Connection.exec_driver_sql` was added, which retains the previous
behavior, passing the string to the DBAPI driver unchanged.
Usage of scalar or tuple positional parameters in :meth:`.Connection.execute`
is also deprecated.
Fixes: #4848
Fixes: #5178
Change-Id: I2830181054327996d594f7f0d59c157d477c3aa9
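A short sketch contrasting the deprecated raw-string path with the two supported alternatives::

    from sqlalchemy import create_engine, text

    engine = create_engine("sqlite://")
    with engine.connect() as conn:
        # plain strings should now be wrapped in text() ...
        conn.execute(text("create table t (x integer)"))

        # ... or sent unchanged to the DBAPI with the new method,
        # using the driver's own paramstyle (qmark for sqlite3)
        conn.exec_driver_sql("insert into t (x) values (?)", (5,))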
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Targeting select / insert / update / delete, the goal
is to minimize overhead of construction and generative methods
so that only the raw arguments passed are handled. An interim
stage that converts the raw state into more compiler-ready state
is added, which is analogous to the ORM QueryContext which will
also be rolled in to be a similar concept, as is currently
being prototyped in I19e05b3424b07114cce6c439b05198ac47f7ac10.
The ORM update/delete BulkUD concept is also going to be rolled
onto this idea. So while the compiler-ready state object,
here called DMLState, looks a little thin, it's the
base of a bigger pattern that will allow for ORM functionality
to embed itself directly into the compiler, execution
context, and result set objects.
This change targets the DML objects, primarily focused on the
values() method which is the most complex process. The
work done by values() is minimized as much as possible
while still being able to create a cache key. Additional
computation is then offloaded to a new object ValuesState
that is handled by the compiler.
Architecturally, a big change here is that insert.values()
and update.values() will generate BindParameter objects for
the values now, which are then carefully received by crud.py
so that they generate the expected names. This is so that
the values() portion of these constructs is cacheable.
For the "multi-values" version of Insert, this is all skipped,
and the plan right now is that a multi-values insert is
not worth caching (this can always be revisited).
Using the
coercions system in values() also gets us nicer validation
for free; we can remove the NotAClauseElement thing from
schema, and we also now require that scalar_subquery() be called
for an insert/update that uses a SELECT as a column value;
a 1.x deprecation path is added.
The traversal system is then applied to the DML objects
including tests so that they have traversal, cloning, and
cache key support. cloning is not a use case for DML however
having it present allows better validation of the structure
within the tests.
Special per-dialect DML is explicitly not cacheable at the moment,
more as a proof of concept that third party DML constructs can
exist as gracefully not-cacheable rather than producing an
incomplete cache key.
A few selected performance improvements have been added as well,
simplifying the immutabledict.union() method and adding
a new SQLCompiler function that can generate delimiter-separated
clauses like WHERE and ORDER BY without having to build
a ClauseList object at all. The use of ClauseList will
be removed from Select in an upcoming commit. Overall,
ClauseList is unnecessary for internal use and only adds
overhead to statement construction and will likely be removed
as much as possible except for explicit use of conjunctions like
and_() and or_().
Change-Id: I408e0b8be91fddd77cf279da97f55020871f75a9
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
In 9fca5d827d we attempted to deprecate the "inline=True" flag
and add a generative inline() method; however, we failed to include
any tests, and the method was implemented incorrectly such that
it would get overwritten by the boolean flag immediately.
Rename the internal "inline" flag to "_inline" and add test
support both for the method as well as deprecated support
for the flag, including a fixture addition to assert the expected
value of the flag as it generally does not affect the
actual compiled SQL string.
Change-Id: I0450049f17f1f0d91e22d27f1a973a2b6c0e59f7
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
This builds on cc718cccc0bf8a01abdf4068c7ea4f3 which moved
RowProxy to Row, allowing Row to be more like a named tuple.
- KeyedTuple in ORM is replaced with Row
- ResultSetMetaData broken out into "simple" and "cursor" versions
for ORM and Core, as well as LegacyCursor version.
- Row now has _mapping attribute that supplies full mapping behavior.
Row and SimpleRow both have named tuple behavior otherwise.
LegacyRow has some mapping features on the tuple which emit
deprecation warnings (e.g. keys(), values(), etc). The biggest
change for mapping->tuple is the behavior of __contains__, which
moves from testing "key in row" to "value in row".
- ResultProxy breaks into ResultProxy and FutureResult (interim),
the latter has the newer APIs. Made available to dialects
using execution options.
- internal reflection methods and most tests move off of implicit
Row mapping behavior and move to row._mapping, result.mappings()
method using future result
- a new strategy system for cursor handling replaces the various
subclasses of RowProxy
- some execution context adjustments. We will leave EC in but
refine things like get_result_proxy() and out parameter handling.
Dialects for 1.4 will need to adjust from get_result_proxy()
to get_result_cursor_strategy(), if they are using this method
- out parameter handling now accommodated by get_out_parameter_values()
EC method. Oracle changes for this. external dialect for
DB2 for example will also need to adjust for this.
- deprecate case_insensitive flag for engine / result, this
feature is not used
mapping-methods on Row are deprecated, and replaced with
Row._mapping.<meth>, including:
row.keys() -> use row._mapping.keys()
row.items() -> use row._mapping.items()
row.values() -> use row._mapping.values()
key in row -> use key in row._mapping
int in row -> use int < len(row)
Fixes: #4710
Fixes: #4878
Change-Id: Ieb9085e9bcff564359095b754da9ae0af55679f0
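A small sketch of the named-tuple vs. mapping access paths on the new Row::

    from sqlalchemy import create_engine, text

    engine = create_engine("sqlite://")
    with engine.connect() as conn:
        row = conn.execute(text("select 1 as x, 2 as y")).first()

        # named-tuple style access on the Row itself
        print(row.x, row[1])

        # dictionary-style access moves to the ._mapping accessor
        print(row._mapping["x"], list(row._mapping.keys()))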
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
The :meth:`.Connection.connect` method is deprecated as is the concept of
"connection branching", which copies a :class:`.Connection` into a new one
that has a no-op ".close()" method. This pattern is oriented around the
"connectionless execution" concept which is also being removed in 2.0.
As part of this change we begin to move the internals away from
"connectionless execution" overall. Remove the "connectionless
execution" concept from the reflection internals and replace with
explicit patterns at the Inspector level.
Fixes: #5131
Change-Id: Id23d28a9889212ac5ae7329b85136157815d3e6f
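A brief sketch of the explicit Inspector pattern that replaces connectionless reflection::

    from sqlalchemy import create_engine, inspect, text

    engine = create_engine("sqlite://")
    with engine.begin() as conn:
        conn.execute(text("create table t (x integer)"))

    insp = inspect(engine)
    print(insp.get_table_names())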
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Added "from linting" as a built-in feature to the SQL compiler. This
allows the compiler to maintain a graph of all the FROM clauses in a
particular SELECT statement, linked by criteria in either the WHERE
or in JOIN clauses that link these FROM clauses together. If any two
FROM clauses have no path between them, a warning is emitted that the
query may be producing a cartesian product. As the Core expression
language as well as the ORM are built on an "implicit FROMs" model where
a particular FROM clause is automatically added if any part of the query
refers to it, it is easy for this to happen inadvertently and it is
hoped that the new feature helps with this issue.
The original recipe is from:
https://github.com/sqlalchemy/sqlalchemy/wiki/FromLinter
The linter is now enabled for all tests in the test suite as well.
This has necessitated that a lot of the queries be adjusted to
not include cartesian products. Part of the rationale for the
linter not being enabled at statement compilation time alone was to reduce
the need to adjust the many test case statements throughout
the test suite that are not real-world statements.
This gerrit is adapted from Ib5946e57c9dba6da428c4d1dee6760b3e978dda0.
Fixes: #4737
Change-Id: Ic91fd9774379f895d021c3ad564db6062299211c
Closes: #4830
Pull-request: https://github.com/sqlalchemy/sqlalchemy/pull/4830
Pull-request-sha: f8a21aa6262d1bcc9ff0d11a2616e41fba97a47a
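A rough sketch of a statement that trips the linter, written with the 1.4-style select() and hypothetical tables::

    from sqlalchemy import Column, Integer, MetaData, Table, create_engine, select

    metadata = MetaData()
    a = Table("a", metadata, Column("id", Integer, primary_key=True))
    b = Table("b", metadata, Column("id", Integer, primary_key=True))

    engine = create_engine("sqlite://")
    with engine.begin() as conn:
        metadata.create_all(conn)

        # "a" and "b" have no joining criteria; executing this emits a
        # cartesian product warning from the linter
        conn.execute(select(a.c.id, b.c.id)).fetchall()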
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Added support for use of the :class:`.Sequence` construct with MariaDB 10.3
and greater, as this is now supported by this database. The construct
integrates with the :class:`.Table` object in the same way that it does for
other databases like PostgreSQL and Oracle; if it is present on the integer
primary key "autoincrement" column, it is used to generate defaults. For
backwards compatibility, to support a :class:`.Table` that has a
:class:`.Sequence` on it to support sequence only databases like Oracle,
while still not having the sequence fire off for MariaDB, the optional=True
flag should be set, which indicates the sequence should only be used to
generate the primary key if the target database offers no other option.
Fixes: #4976
Closes: #4996
Pull-request: https://github.com/sqlalchemy/sqlalchemy/pull/4996
Pull-request-sha: cb2e1426ea0b6bc6c93dbe8f033a11df9d8c4915
Change-Id: I507bc405eee6cae2c5991345d0eac53a37fe7512
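A construction sketch of the optional :class:`.Sequence` described above, with a hypothetical table::

    from sqlalchemy import Column, Integer, MetaData, Sequence, Table

    metadata = MetaData()

    # optional=True: the sequence fires only on backends with no other way
    # to generate the integer primary key (e.g. Oracle); MariaDB 10.3+
    # continues to use the column's native autoincrement
    accounts = Table(
        "accounts",
        metadata,
        Column(
            "id",
            Integer,
            Sequence("account_id_seq", optional=True),
            primary_key=True,
        ),
    )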
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
A major refactoring of all the functions that handle detection of
Core argument types as well as perform coercions, into a new class hierarchy
based on "roles", each of which identifies a syntactical location within a
SQL statement. In contrast to the ClauseElement hierarchy that identifies
"what" each object is syntactically, the SQLRole hierarchy identifies
"where does it go" for each object syntactically. From this we define
a consistent type checking and coercion system that establishes well
defined behaviors.
This is a breakout of the patch that is reorganizing select()
constructs to no longer be in the FromClause hierarchy.
Also includes a rename of as_scalar() into scalar_subquery(); deprecates
automatic coercion to scalar_subquery().
Partially-fixes: #4617
Change-Id: I26f1e78898693c6b99ef7ea2f4e7dfd0e8e1a1bd
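A small sketch of the renamed method, written with the later positional select() style and hypothetical tables::

    from sqlalchemy import Column, Integer, MetaData, String, Table, func, select

    metadata = MetaData()
    users = Table(
        "users",
        metadata,
        Column("id", Integer, primary_key=True),
        Column("name", String(30)),
    )
    addresses = Table(
        "addresses",
        metadata,
        Column("id", Integer, primary_key=True),
        Column("user_id", Integer),
    )

    # scalar_subquery() replaces as_scalar(); automatic coercion of a
    # select() used as a column expression is deprecated
    count_sq = (
        select(func.count(addresses.c.id))
        .where(addresses.c.user_id == users.c.id)
        .scalar_subquery()
    )
    stmt = select(users.c.name, count_sq)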
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Fully removed the behavior of strings passed directly as components of a
:func:`.select` or :class:`.Query` object being coerced to :func:`.text`
constructs automatically; the warning that has been emitted is now an
ArgumentError or in the case of order_by() / group_by() a CompileError.
This has emitted a warning since version 1.0 however its presence continues
to create concerns for the potential of mis-use of this behavior.
Note that public CVEs have been posted for order_by() / group_by() which
are resolved by this commit: CVE-2019-7164 CVE-2019-7548
Added "SQL phrase validation" to key DDL phrases that are accepted as plain
strings, including :paramref:`.ForeignKeyConstraint.on_delete`,
:paramref:`.ForeignKeyConstraint.on_update`,
:paramref:`.ExcludeConstraint.using`,
:paramref:`.ForeignKeyConstraint.initially`, for areas where a series of SQL
keywords only are expected. Any non-space characters that suggest the phrase
would need to be quoted will raise a :class:`.CompileError`. This change
is related to the series of changes committed as part of :ticket:`4481`.
Fixed issue where using an uppercase name for an index type (e.g. GIST,
BTREE, etc. ) or an EXCLUDE constraint would treat it as an identifier to
be quoted, rather than rendering it as is. The new behavior converts these
types to lowercase and ensures they contain only valid SQL characters.
Quoting is applied to :class:`.Function` names, those which are usually but
not necessarily generated from the :attr:`.sql.func` construct, at compile
time if they contain illegal characters, such as spaces or punctuation. The
names are as before treated as case insensitive however, meaning if the
names contain uppercase or mixed case characters, that alone does not
trigger quoting. The case insensitivity is currently maintained for
backwards compatibility.
Fixes: #4481
Fixes: #4473
Fixes: #4467
Change-Id: Ib22a27d62930e24702e2f0f7c74a0473385a08eb
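A brief sketch of the explicit text() form now required for textual ORDER BY fragments, written with the later select() style and a hypothetical table::

    from sqlalchemy import Column, Integer, MetaData, String, Table, select, text

    metadata = MetaData()
    users = Table(
        "users",
        metadata,
        Column("id", Integer, primary_key=True),
        Column("name", String(30)),
    )

    # a plain string such as "name desc" now raises rather than being
    # coerced; the textual fragment must be stated explicitly
    stmt = select(users).order_by(text("name desc"))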
|
|
|
|
|
|
|
|
|
|
|
|
| |
This affects mostly docstrings, except in orm/events.py::dispose_collection()
where one parameter gets renamed: given that the method is
empty, it seemed reasonable to me to fix that too.
Closes: #4440
Pull-request: https://github.com/sqlalchemy/sqlalchemy/pull/4440
Pull-request-sha: 779ed75acb6142e1f1daac467b5b14134529bb4b
Change-Id: Ic0553fe97853054b09c2453af76d96363de6eb0e
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
A large change throughout the library has ensured that all objects, parameters,
and behaviors which have been noted as deprecated or legacy now emit
``DeprecationWarning`` warnings when invoked. As the Python 3 interpreter now
defaults to displaying deprecation warnings, as well as that modern test suites
based on tools like tox and pytest tend to display deprecation warnings,
this change should make it easier to note what API features are obsolete.
See the notes added to the changelog and migration notes for further
details.
Fixes: #4393
Change-Id: If0ea11a1fc24f9a8029352eeadfc49a7a54c0a1b
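A tiny sketch of opting in to these warnings as hard errors, e.g. in a test run::

    import warnings

    from sqlalchemy import exc

    # turn SQLAlchemy deprecation warnings into exceptions
    warnings.simplefilter("error", exc.SADeprecationWarning)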
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Applied on top of a pure run of black -l 79 in
I7eda77fed3d8e73df84b3651fd6cfcfe858d4dc9, this set of changes
resolves all remaining flake8 conditions for those codes
we have enabled in setup.cfg.
Included are resolutions for all remaining flake8 issues
including shadowed builtins, long lines, import order, unused
imports, duplicate imports, and docstring issues.
Change-Id: I4f72d3ba1380dd601610ff80b8fb06a2aff8b0fe
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
This is a straight reformat run using black as is, with no edits
applied at all.
The black run will format code consistently, however in
some cases that are prevalent in SQLAlchemy code it produces
too-long lines. The too-long lines will be resolved in the
following commit that will resolve all remaining flake8 issues
including shadowed builtins, long lines, import order, unused
imports, duplicate imports, and docstring issues.
Change-Id: I7eda77fed3d8e73df84b3651fd6cfcfe858d4dc9
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
This test was using sysdate() and current_timestamp() together
in conjunction with a truncation to DAY; however, for four hours
on Saturday night (see commit time :) ) these two values will
differ if one side is EDT and the other is UTC.
tox does not transmit environment variables, including TZ, by
default, so even if the server is set up for EDT, running tox
will not set TZ, and at least the Oracle client seems to use this
value, producing UTC for session time while the database on CI
was configured for EDT, producing EDT for sysdate.
Change-Id: I56602d2402a475a0c4fdf61c1c5fc2618c82f915
|
|
|
|
|
|
|
|
|
|
|
|
| |
Fixed bug where a :class:`.Sequence` would be dropped explicitly before any
:class:`.Table` that refers to it, which breaks in the case when the
sequence is also involved in a server-side default for that table, when
using :meth:`.MetaData.drop_all`. The step which processes sequences
to be dropped via non server-side column default functions is now invoked
after the table itself is dropped.
Change-Id: I185f2cc76d2011ad4dd3ba9bde5d8aef0ec335ae
Fixes: #4300
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Fixed bug in MySQLdb dialect and variants such as PyMySQL where an
additional "unicode returns" check upon connection makes explicit use of
the "utf8" character set, which in MySQL 8.0 emits a warning that utf8mb4
should be used. This is now replaced with a utf8mb4 equivalent.
Documentation is also updated for the MySQL dialect to specify utf8mb4 in
all examples. Additional changes have been made to the test suite to use
utf8mb3 charsets and databases (there seem to be collation issues in some
edge cases with utf8mb4), and to support configuration default changes made
in MySQL 8.0 such as explicit_defaults_for_timestamp as well as new errors
raised for invalid MyISAM indexes.
Change-Id: Ib596ea7de4f69f976872a33bffa4c902d17dea25
Fixes: #4283
Fixes: #4192
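A small sketch of specifying the charset in the connection URL (the URL is hypothetical)::

    from sqlalchemy import create_engine

    # charset=utf8mb4 avoids the MySQL 8.0 warning about the legacy
    # "utf8" (utf8mb3) alias
    engine = create_engine(
        "mysql+pymysql://scott:tiger@localhost/test?charset=utf8mb4"
    )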
|