Commit message log
Drops support for cx_Oracle prior to version 5.x, reworks
numeric and binary support.
Fixes: #4064
Change-Id: Ib9ae9aba430c15cd2a6eeb4e5e3fd8e97b5fe480
Added a new class of "rowcount support" for dialects that is specific to
when "RETURNING", which on SQL Server looks like "OUTPUT inserted", is in
use, as the PyODBC backend isn't able to give us rowcount on an UPDATE or
DELETE statement when OUTPUT is in effect. This primarily affects the ORM
when a flush is updating a row that contains server-calculated values,
raising an error if the backend does not return the expected row count.
The PyODBC dialect now states that it supports rowcount except when OUTPUT
inserted is present, which the ORM takes into account during a flush when
deciding whether to look for a row count.
ORM tests are implicit in existing tests run against PyODBC
Fixes: #4062
Change-Id: Iff17cbe4c7a5742971ed85a4d58660c18cc569c2
Added a new kind of :func:`.bindparam` called "expanding". This is
for use in ``IN`` expressions where the list of elements is rendered
into individual bound parameters at statement execution time, rather
than at statement compilation time. This allows both a single bound
parameter name to be linked to an IN expression of multiple elements,
as well as allows query caching to be used with IN expressions. The
new feature allows the related features of "select in" loading and
"polymorphic in" loading to make use of the baked query extension
to reduce call overhead. This feature should be considered to be
**experimental** for 1.2.
Fixes: #3953
Change-Id: Ie708414a3ab9c0af29998a2c7f239ff7633b1f6e
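
For illustration, a minimal sketch of the new parameter (the table and the
executing connection here are hypothetical):

from sqlalchemy import bindparam, column, select, table

users = table("users", column("id"), column("name"))

# "expanding" defers rendering of the IN elements to execution time, so a
# single cached statement can serve element lists of any length.
stmt = select([users]).where(users.c.id.in_(bindparam("ids", expanding=True)))

# each execution supplies its own list for the single named parameter:
# conn.execute(stmt, ids=[1, 2, 3])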
Changed the mechanics of :class:`.ResultProxy` to unconditionally
delay the "autoclose" step until the :class:`.Connection` is done
with the object; in the case where Postgresql ON CONFLICT with
RETURNING returns no rows, autoclose was occurring in this previously
non-existent use case, causing the usual autocommit behavior that
occurs unconditionally upon INSERT/UPDATE/DELETE to fail.
Change-Id: I235a25daf4381b31f523331f810ea04450349722
Fixes: #3955
(cherry picked from commit 8ee363e4917b0dcd64a83b6d26e465c9e61e0ea5)
(cherry picked from commit f52fb5282a046d26b6ee2778e03b995eb117c2ee)
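
A hedged sketch of the kind of statement that hits this path (the table and
connection are hypothetical):

from sqlalchemy.dialects.postgresql import insert

# when the conflict clause suppresses the INSERT, RETURNING yields zero rows;
# autoclose is now deferred so the usual autocommit still takes place
stmt = (
    insert(users_table)
    .values(id=1, name="existing")
    .on_conflict_do_nothing(index_elements=["id"])
    .returning(users_table.c.id)
)
# result = conn.execute(stmt)   # may return no rows without error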
Fixed bug where a SQL-oriented Python-side column default could fail to
be executed properly upon INSERT in the "pre-execute" codepath, if the
SQL itself were an untyped expression, such as plain text. The
"pre-execute" codepath is fairly uncommon, however it can apply to
non-integer primary key columns with SQL defaults when RETURNING is not used.
Tests exist here to ensure typing is applied to
a typed expression for default, but in the case of
an untyped SQL value, we know the type from the column,
so apply this.
Change-Id: I5d8b391611c137b9f700115a50a2bf5b30abfe94
Fixes: #3923
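
A hedged illustration of the kind of default involved (table, names and SQL
are illustrative); the column's type is now applied to the otherwise untyped
text:

from sqlalchemy import Column, DateTime, MetaData, String, Table, text

metadata = MetaData()

# a non-integer primary key with an untyped SQL text default; without
# RETURNING this takes the "pre-execute" codepath described above
documents = Table(
    "documents", metadata,
    Column("code", String(32), primary_key=True,
           default=text("lower(hex(random()))")),
    Column("created", DateTime, default=text("CURRENT_TIMESTAMP")),
)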
Added native "pessimistic disconnection" handling to the :class:`.Pool`
object. The new parameter :paramref:`.Pool.pre_ping`, available from
the engine as :paramref:`.create_engine.pool_pre_ping`, applies an
efficient form of the "pre-ping" recipe featured in the pooling
documentation, which upon each connection check out, emits a simple
statement, typically "SELECT 1", to test the connection for liveness.
If the existing connection is no longer able to respond to commands,
the connection is transparently recycled, and all other connections
made prior to the current timestamp are invalidated.
Change-Id: I89700d0075e60abd2250e54b9bd14daf03c71c00
Fixes: #3919
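
A minimal usage sketch (the URL is a placeholder):

from sqlalchemy import create_engine

# each checkout emits a lightweight "SELECT 1"-style ping; connections that
# fail the ping are recycled transparently rather than raising mid-request
engine = create_engine(
    "postgresql://scott:tiger@localhost/test",
    pool_pre_ping=True,
)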
After bumping the minimum supported Python version to 2.7 (1da9d3752160430c91534a8868ceb8c5ad1451d4), we can use newer syntax.
Change-Id: Ib064c75a00562e641d132f9c57e5e69744200e05
Pull-request: https://github.com/zzzeek/sqlalchemy/pull/347
Added support for SQL comments on :class:`.Table` and :class:`.Column`
objects, via the new :paramref:`.Table.comment` and
:paramref:`.Column.comment` arguments. The comments are included
as part of DDL on table creation, either inline or via an appropriate
ALTER statement, and are also reflected back within table reflection,
as well as via the :class:`.Inspector`. Supported backends currently
include MySQL, Postgresql, and Oracle.
Co-authored-by: Mike Bayer <mike_mp@zzzcomputing.com>
Fixes: #1546
Change-Id: Ib90683850805a2b4ee198e420dc294f32f15d35d
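
A brief sketch of the new arguments (the table, columns and engine are
illustrative):

from sqlalchemy import Column, Integer, MetaData, String, Table

metadata = MetaData()

accounts = Table(
    "accounts", metadata,
    Column("id", Integer, primary_key=True, comment="surrogate key"),
    Column("plan", String(50), comment="billing plan name"),
    comment="customer accounts and billing state",
)
# metadata.create_all(engine) emits the comments as part of CREATE TABLE /
# ALTER statements; Inspector.get_table_comment("accounts") reflects the
# table comment back on supported backends.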
Fixed bug where in the unusual case of passing a
:class:`.Compiled` object directly to :meth:`.Connection.execute`,
the dialect with which the :class:`.Compiled` object was generated
was not consulted for the paramstyle of the string statement, instead
assuming it would match the dialect-level paramstyle, causing
mismatches to occur.
Change-Id: I114e4db2183fbb75bb7c0b0641f5a161855696ee
Fixes: #3938
The longstanding behavior of the :meth:`.Operators.in_` and
:meth:`.Operators.notin_` operators emitting a warning when
the right-hand condition is an empty sequence has been revised;
a new flag :paramref:`.create_engine.empty_in_strategy` allows an
empty "IN" expression to generate a simple boolean expression, or
to invoke the previous behavior of dis-equating the expression to
itself, with or without a warning. The default behavior is now
to emit the simple boolean expression, allowing an empty IN to
be evaluated without any performance penalty.
Change-Id: I65cc37f2d7cf65a59bf217136c42fee446929352
Fixes: #3907
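
A hedged sketch of the new flag (URLs are placeholders); "static" is the new
default, while "dynamic" / "dynamic_warn" restore the previous expression
without or with the warning:

from sqlalchemy import create_engine

# default: column.in_([]) renders a simple "false"-style boolean expression
engine = create_engine("postgresql://scott:tiger@localhost/test")

# opt back into the previous behavior of comparing the expression to itself,
# including the warning
legacy = create_engine(
    "postgresql://scott:tiger@localhost/test",
    empty_in_strategy="dynamic_warn",
)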
Change-Id: I4e8c2aa8fe817bb2af8707410fa0201f938781de
The "extend_existing" option of :class:`.Table` reflection would
cause indexes and constraints to be doubled up when the parameter
was used with :meth:`.MetaData.reflect` (as the automap extension does)
due to tables being reflected both within the foreign key path as well
as directly. A new de-duplicating set is passed through within the
:meth:`.MetaData.reflect` sequence to prevent double reflection in this
way.
Change-Id: Ibf6650c1e76a44ccbe15765fd79df2fa53d6bac7
Fixes: #3861
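
A sketch of the reflection pattern involved (the engine is assumed to exist):

from sqlalchemy import MetaData

metadata = MetaData()

# tables are reflected both along the foreign key path and directly; the new
# de-duplicating set keeps indexes and constraints from being attached twice
metadata.reflect(bind=engine, extend_existing=True)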
This allows buffering of the results on the client side to be skipped;
e.g. the following snippet:

import random
import string

import sqlalchemy as sa

# assumes a MySQL database reachable via PyMySQL; the URL is a placeholder
eng = sa.create_engine("mysql+pymysql://scott:tiger@localhost/test")

table = sa.Table(
    'testtbl', sa.MetaData(),
    sa.Column('id', sa.Integer, primary_key=True),
    sa.Column('a', sa.Integer),
    sa.Column('b', sa.String(512))
)
table.create(eng, checkfirst=True)

with eng.connect() as conn:
    result = conn.execute(table.select().limit(1)).fetchone()
    if result is None:
        for _ in range(1000):
            conn.execute(
                table.insert(),
                [{'a': random.randint(1, 100000),
                  'b': ''.join(random.choice(string.ascii_letters)
                               for _ in range(100))}
                 for _ in range(1000)]
            )

with eng.connect() as conn:
    for row in conn.execution_options(stream_results=True).execute(
            table.select()):
        pass

now uses ~23 MB of memory instead of ~327 MB on CPython 3.5.2 and
PyMySQL 0.7.9.
The psycopg2 implementation and execution options (stream_results,
server_side_cursors) are reused.
Change-Id: I4dc23ce3094f027bdff51b896b050361991c62e2
The compiler can now set up execution options, and will additionally
propagate autocommit from embedded CTEs.
Change-Id: I19db7b8fe4d84549ea95342e8d2040189fed1bbe
Fixes: #3805
An adjustment to ON CONFLICT such that the "inserted_primary_key"
logic is able to accommodate the case where there's no INSERT or
UPDATE and there's no net change. The value comes out as None
in this case, rather than failing on an exception.
Change-Id: I0794e95c3ca262cb1ab2387167d96b8984225fce
Fixes: #3813
Tests illustrate that exceptions like GreenletExit and
even KeyboardInterrupt can corrupt the state of a DBAPI
connection like that of pymysql and mysqlclient. Intercept
BaseException errors within the handle_error scheme and
invalidate just the connection alone in this case, but not
the whole pool.
The change is backwards-incompatible with a program that
currently intercepts ctrl-C within a database transaction
and wants to continue working on that transaction. Ensure
the event hook can be used to reverse this behavior.
Change-Id: Ifaa013c13826d123eef34e32b7e79fff74f1b21b
Fixes: #3803
Raise a more descriptive exception / message when ClauseElement
or non-SQLAlchemy objects that are not "executable" are erroneously
passed to ``.execute()``; a new exception ObjectNotExecutableError
is raised consistently in all cases.
Change-Id: I2dd393121e2c7e5b6b9e40286a2f25670876e8e4
Fixes: #3786
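
A hedged sketch of the failure mode (the connection and table are
hypothetical):

from sqlalchemy.exc import ObjectNotExecutableError

try:
    # a Table is a schema construct, not an executable statement
    conn.execute(users_table)
except ObjectNotExecutableError as err:
    print(err)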
Fixed bug in the new CTE feature for update/insert/delete expressed
as a CTE inside of an enclosing statement (typically SELECT), whereby
insert and update default values were not invoked for the embedded
statement.
This is accomplished by consulting prefetch
for all statements. The collection is also broken into
separate insert/update collections so that we don't need to
consult toplevel self.isinsert to determine if the prefetch
is for an insert or an update. What we don't yet test for
are CTE combinations that have both insert/update in one
statement, though these should now work in theory provided
the underlying database supports such a statement.
Change-Id: I3b6a860e22c86743c91c56a7ec751ff706f66f64
Fixes: #3745
when the construct contains non-standard sql elements such as
returning, array index operations, or dialect-specific or custom
datatypes. a string is now returned in these cases rendering an
approximation of the construct (typically the postgresql-style
version of it) rather than raising an error. fixes #3631
- add within_group to top-level imports
- add eq_ignore_whitespace to sqlalchemy.testing imports
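
For example, stringifying a construct that uses RETURNING now yields a
best-effort rendering rather than raising (the table is illustrative):

from sqlalchemy import Column, Integer, MetaData, String, Table

metadata = MetaData()
t = Table("t", metadata,
          Column("id", Integer, primary_key=True),
          Column("x", String(50)))

# renders an approximation (postgresql-style RETURNING clause) even though
# no dialect has been specified
print(t.insert().returning(t.c.id))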
method, and its interaction with result-row processing, now allows
the columns passed to the method to be positionally matched with the
result columns in the statement, rather than matching on name alone.
The advantage to this includes that when linking a textual SQL statement
to an ORM or Core table model, no system of labeling or de-duping of
common column names needs to occur, which also means there's no need
to worry about how label names match to ORM columns and so forth. In
addition, the :class:`.ResultProxy` has been further enhanced to
map column and string keys to a row with greater precision in some
cases. fixes #3501
- reorganize the initialization of ResultMetaData for readability
and complexity; use the name "cursor_description", define the
task of "merging" cursor_description with compiled column information
as its own function, and also define "name extraction" as a separate task.
- fully change the name we use in the "ambiguous column" error to be the
actual name that was ambiguous, modify the C ext also
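
A hedged sketch of the positional form (users/addresses are assumed to be
existing Table objects, conn an open connection):

from sqlalchemy import text

stmt = text(
    "SELECT users.id, addresses.id, users.name "
    "FROM users JOIN addresses ON users.id = addresses.user_id"
).columns(
    users.c.id,        # matched positionally, so the duplicated "id"
    addresses.c.id,    # names need no labeling or de-duplication
    users.c.name,
)
# rows from conn.execute(stmt) can then be keyed by users.c.id, etc.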
back by using an attrgetter for the default case
This supports the use case of an application that uses the same set of
:class:`.Table` objects in many schemas, such as schema-per-user.
A new execution option
:paramref:`.Connection.execution_options.schema_translate_map` is
added. fixes #2685
- latest tox doesn't like the {posargs} in the profile rerunner
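
A minimal sketch (engine and table are assumed; the schema names are
placeholders):

# Table objects defined with schema="per_user" render against a concrete
# schema chosen at execution time, with no re-definition of the metadata
connection = engine.connect().execution_options(
    schema_translate_map={"per_user": "user_account_one"}
)
# connection.execute(user_table.select()) now renders the table as
# "user_account_one.<tablename>" wherever schema "per_user" appears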
no longer called for an UPDATE or DELETE statement emitted via plain
text or via the :func:`.text` construct, affecting those drivers
that erase cursor.rowcount once the cursor is closed such as SQL
Server ODBC and Firebird drivers.
fixes #3622
:func:`.engine_from_config` were not being parsed correctly;
these included ``pool_threadlocal`` and the psycopg2 argument
``use_native_unicode``. fixes #3435
- add legacy_schema_aliasing config parsing for mssql
- move use_native_unicode config arg to the psycopg2 dialect
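
A hedged sketch of the string configuration involved (values are
placeholders):

from sqlalchemy import engine_from_config

config = {
    "sqlalchemy.url": "postgresql+psycopg2://scott:tiger@localhost/test",
    # previously these string values were not coerced to their proper types
    "sqlalchemy.pool_threadlocal": "true",
    "sqlalchemy.use_native_unicode": "false",
}
engine = engine_from_config(config, prefix="sqlalchemy.")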
pep-249 exception names linked to exception classes of an entirely
different name, preventing SQLAlchemy's own exception wrapping from
wrapping the error appropriately.
The SQLAlchemy dialect in use needs to implement a new
accessor :attr:`.DefaultDialect.dbapi_exception_translation_map`
to support this feature; this is implemented now for the py-postgresql
dialect.
fixes #3421
fail to store the correct value for MSSQL on an INSERT where the
primary key value was present in the insert params before execution.
fixes #3360
That is, after exhausting all rows using the fetch methods, the
DBAPI cursor is released as before and the object may be safely
discarded, but the fetch methods may continue to be called for which
they will return an end-of-result object (None for fetchone, empty list
for fetchmany and fetchall). Only if :meth:`.ResultProxy.close`
is called explicitly will these methods raise the "result is closed"
error.
fixes #3330 fixes #3329
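
A hedged illustration of the revised behavior (connection and table are
assumed):

result = conn.execute(table.select())
rows = result.fetchall()          # exhausts the result; cursor is released

# further fetches now return end-of-result values instead of raising
assert result.fetchone() is None
assert result.fetchmany(10) == []
assert result.fetchall() == []

result.close()                    # only after an explicit close() do the
                                  # fetch methods raise "result is closed"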
The "wrapping" employed by the mssql and oracle dialects using the
"iswrapper" argument was not being used intelligently by the compiler,
and the result map was being written incorrectly, using
*more* columns in the result map than were actually returned by
the statement, due to "row number" columns that are inside the
subquery. The compiler now writes out the result map fully on the
"top level" select in all cases, and for the mssql/oracle wrapping case
extracts out
the "proxied" columns in a second step, which only includes
those columns that are proxied outwards to the top level.
This change might have implications for 3rd party dialects that
might be imitating oracle's approach. They can safely continue
to use the "iswrapper" kw which is now ignored, but they may
need to also add the _select_wraps argument as well.
such that they are matched to the received result set positionally,
rather than by name. Originally, this was seen as a way to handle
cases where we had columns returned with difficult-to-predict names,
though in modern use that issue has been overcome by anonymous
labeling. In this version, the approach basically reduces function
call count per-result by a few dozen calls, or more for larger
sets of result columns. The approach still degrades into a modern
version of the old approach if textual elements modify the result
map, or if any discrepancy in size exists between
the compiled set of columns versus what was received, so there's no
issue for partially or fully textual compilation scenarios where these
lists might not line up. fixes #918
- callcounts still need to be adjusted down for this so zoomark
tests won't pass at the moment
with :meth:`.Connection.execution_options` when a :class:`.Transaction`
is in play; DBAPIs and/or SQLAlchemy dialects such as psycopg2,
MySQLdb may implicitly roll back or commit the transaction, or
not change the setting until the next transaction, so this is never safe.
- Added new parameter :paramref:`.Session.connection.execution_options`
which may be used to set up execution options on a :class:`.Connection`
when it is first checked out, before the transaction has begun.
This is used to set up options such as isolation level on the
connection before the transaction starts.
- added new documentation section
detailing best practices for setting transaction isolation with
sessions.
fixes #3296
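
A brief sketch of the new parameter (the Session is assumed to exist):

# set isolation level on the connection before the transaction begins,
# rather than flipping it mid-transaction on an existing Connection
session.connection(
    execution_options={"isolation_level": "SERIALIZABLE"}
)
# ... ORM work proceeds within a SERIALIZABLE transaction ...
session.commit()   # the option typically reverts when the connection
                   # is returned to the pool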
logic to some degree in DefaultExecutionContext. In particular
we are removing post_insert() which doesn't appear to be used
based on a survey of prominent third party dialects. Callcounts
aren't added to existing execute profiling tests, and insert performance
might be a little better.
- simplify the execution_options join in DEC. Callcounts don't
appear affected.
repaired to work more usefully with tables that have Python-
side default values and/or functions, as well as server-side
defaults. The feature will now work with a dialect that uses
"positional" parameters; a Python callable will also be
invoked individually for each row just as is the case with an
"executemany" style invocation; a server- side default column
will no longer implicitly receive the value explicitly
specified for the first row, instead refusing to invoke
without an explicit value. fixes #3288
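
A hedged sketch of the multiple-VALUES form referred to here (the table and
its defaults are illustrative):

import datetime

from sqlalchemy import Column, DateTime, Integer, MetaData, String, Table

metadata = MetaData()
notes = Table(
    "notes", metadata,
    Column("id", Integer, primary_key=True),
    Column("body", String(200)),
    # the Python-side callable is now invoked once per VALUES entry
    Column("created", DateTime, default=datetime.datetime.utcnow),
)

stmt = notes.insert().values([
    {"body": "first"},
    {"body": "second"},
])
# conn.execute(stmt) renders a single INSERT with one VALUES tuple per entry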
to get all flake8 passing
it for DELETE would fail to target the correct row for DELETE.
Then to compound matters, basic "number of rows matched" checks were
not being performed. Both issues are fixed, however note that the
"rows matched" check requires so-called "sane multi-row count"
functionality; the DBAPI's executemany() method must count up the
rows matched by individual statements and SQLAlchemy's dialect must
mark this feature as supported, currently applies to some mysql dialects,
psycopg2, sqlite only. fixes #3006
- Enabled "sane multi-row count" checking for the psycopg2 DBAPI, as
this seems to be supported as of psycopg2 2.0.9.
MySQL, to correctly render the SET clause among multiple columns
with the same name across tables. This also changes the name used for
the bound parameter in the SET clause to "<tablename>_<colname>" for
the non-primary table only; as this parameter is typically specified
using the :class:`.Column` object directly this should not have an
impact on applications. The fix takes effect for both
:meth:`.Table.update` as well as :meth:`.Query.update` in the ORM.
[ticket:2912]
arguments; [ticket:2866]
- add dialect specific kwarg functionality to ForeignKeyConstraint, ForeignKey
double close
- ensure no iterator changed size issues in testing.engines
type such as "charset" and "collation". While MySQL wants all character-
based CAST calls to use the CHAR type, we now create a real CHAR
object at CAST time and copy over all the parameters it has, so that
an expression like ``cast(x, mysql.TEXT(charset='utf8'))`` will
render ``CAST(t.col AS CHAR CHARACTER SET utf8)``.
- Added new "unicode returns" detection to the MySQL dialect and
to the default dialect system overall, such that any dialect
can add extra "tests" to the on-first-connect "does this DBAPI
return unicode directly?" detection. In this case, we are
adding a check specifically against the "utf8" encoding with
an explicit "utf8_bin" collation type (after checking that
this collation is available) to test for some buggy unicode
behavior observed with MySQLdb version 1.2.3. While MySQLdb
has resolved this issue as of 1.2.4, the check here should
guard against regressions. The change also allows the "unicode"
checks to log in the engine logs, which was not previously
the case. [ticket:2906]
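
A hedged sketch of the CAST rendering described above (the column is
illustrative):

from sqlalchemy import cast, column
from sqlalchemy.dialects import mysql

expr = cast(column("col"), mysql.TEXT(charset="utf8"))

# renders: CAST(col AS CHAR CHARACTER SET utf8)
print(expr.compile(dialect=mysql.dialect()))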
except for the changelog part
we will be able to parse dialect-specific arguments from string
configuration dictionaries. Dialect classes can now provide their
own list of parameter types and string-conversion routines.
The feature is not yet used by the built-in dialects, however.
[ticket:2875]