Re-implement the C version of immutabledict / processors / resultproxy / utils with Cython.
Performance is in general on par with or better than the C version.
Added a collection module that has Cython versions of OrderedSet and IdentitySet.
Added a new test/perf file to compare the implementations.
Run ``python test/perf/compiled_extensions.py all`` to execute the comparison test.
See results here: https://docs.google.com/document/d/1nOcDGojHRtXEkuy4vNXcW_XOJd9gqKhSeALGG3kYr6A/edit?usp=sharing
Fixes: #7256
Change-Id: I2930ef1894b5048210384728118e586e813f6a76
Signed-off-by: Federico Caselli <cfederico87@gmail.com>
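For illustration, the compiled and pure-Python versions share the same
public import points; a minimal usage sketch (importing from
``sqlalchemy.util``)::

    from sqlalchemy.util import IdentitySet, OrderedSet, immutabledict

    d = immutabledict({"a": 1})   # dict that blocks mutation after creation
    s = OrderedSet([3, 1, 2])     # set that preserves insertion order
    ids = IdentitySet([d])        # membership determined by id(), not ==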
This is so that dialect methods that are called within init
can assume the same argument structure as when they are called
in other places; we can nail down the type of object as well.
This change seems to mostly impact the isolation level routines
in the dialects, as these are called during initialize()
as well as on established connections. These methods can now
assume a non-proxied DBAPI connection object in all cases,
as it is commonly required that attributes like ``.autocommit``
be set on the object, which doesn't work well in a proxied
situation.
Other changes:
* adds an interface for the "connectionfairy" concept
called PoolProxiedConnection.
* Removes ``Connectable`` superclass of Connection.
``Connectable`` was originally meant to provide for the
"method which accepts connection or engine" theme. As this
pattern is greatly reduced in 2.0 and Engine no longer extends
from it, the ``Connectable`` superclass doesn't serve any real
purpose.
Leading from that, to set this in place I also applied pep-484
annotations to the Dialect base, and then, in the interests of seeing
some of the typing information show up in my IDE, did a little bit for
Engine, Connection and others. I hope that it's feasible that we can
add annotations to specific classes and attributes ahead of when we
actually try to mass-populate the whole library. This was
the original spirit of pep-484: that we can apply annotations
gradually. I do of course want to try to do a mass-populate,
although I think even in that case we will end up doing a lot
of manual work anyway (in particular for the changes here, which
are distinct from what the stubs have).
Fixes: #7122
Change-Id: I5dd7fbff8a7ae520a81c165091af12a6a68826db
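As a conceptual sketch of the pattern this enables (hypothetical dialect
subclass, not the actual implementation)::

    from sqlalchemy.dialects.postgresql.base import PGDialect

    class MyPGDialect(PGDialect):
        def set_isolation_level(self, dbapi_connection, level):
            # dbapi_connection is now always the raw DBAPI connection,
            # never a pool proxy, so driver-level attributes such as
            # .autocommit can be set directly
            if level == "AUTOCOMMIT":
                dbapi_connection.autocommit = True
            else:
                dbapi_connection.autocommit = False
                super().set_isolation_level(dbapi_connection, level)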
Both sync and async versions are supported.
Fixes: #6842
Change-Id: I57751c5028acebfc6f9c43572562405453a2f2a4
Add a new system so that PostgreSQL and other dialects have a
reliable way to add casts to bound parameters in SQL statements,
replacing previous use of setinputsizes() for PG dialects.
Rationale:
1. psycopg3 will be using the same SQLAlchemy-side "setinputsizes"
as asyncpg, so we will be seeing a lot more of this
2. The full rendering that SQLAlchemy's compilation performs appears
in the engine log as well as in error messages. Without this,
we introduce three levels of SQL rendering: the compiler, the
hidden "setinputsizes" in SQLAlchemy, and then whatever the DBAPI
driver does. With this new approach, users reporting bugs etc.
will no longer be confused by two separate layers of "hidden
rendering"; SQLAlchemy's rendering is again fully transparent
3. Calling upon a setinputsizes() method for every statement execution
is expensive; this way, the work is done behind the caching layer
4. For "fast insertmany()", I also want there to be a fast approach
towards setinputsizes. As it was, we were going to take
a SQL INSERT with thousands of bound parameter placeholders and
run a whole second pass on it to apply typecasts. This way,
we will at least be able to build the SQL string once without a huge
second pass over the whole string
5. psycopg2 can use this same system for its ARRAY casts
6. The general need for PostgreSQL to have lots of type casts
is now mostly in the base PostgreSQL dialect and works independently
of a DBAPI being present. Dependence on DBAPI symbols that aren't
complete / consistent / hashable is removed
I was originally going to try to build this into bind_expression(),
but it turned out this worked poorly with custom bind_expression()
as well as with empty sets. The current implementation also doesn't
need to run a second expression pass over the POSTCOMPILE sections,
which came out better than I originally thought it would.
Change-Id: I363e6d593d059add7bcc6d1f6c3f91dd2e683c0c
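For illustration, a minimal sketch of the observable effect; the exact
rendering shown is indicative only::

    from sqlalchemy import Integer, column
    from sqlalchemy.dialects.postgresql import asyncpg

    expr = column("data", Integer) == 5

    # the cast is now part of the compiled string itself, rather than
    # being applied through a hidden setinputsizes() step
    print(expr.compile(dialect=asyncpg.dialect()))
    # e.g.:  data = $1::INTEGER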
Change-Id: I8172fdcc3103ff92aa049827728484c8779af6b7
Generalized the :paramref:`_sa.create_engine.isolation_level` parameter to
the base dialect so that it is no longer dependent on individual dialects
to be present. This parameter sets up the "isolation level" setting to
occur for all new database connections as soon as they are created by the
connection pool, where the value then stays set without being reset on
every checkin.
The :paramref:`_sa.create_engine.isolation_level` parameter is essentially
equivalent in functionality to using the
:paramref:`_engine.Engine.execution_options.isolation_level` parameter via
:meth:`_engine.Engine.execution_options` for an engine-wide setting. The
difference is that the former setting assigns the isolation level just
once when a connection is created, while the latter sets and resets the
given level on each connection checkout.
Fixes: #6342
Change-Id: Id81d6b1c1a94371d901ada728a610696e09e9741
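A minimal sketch of the two approaches (URL is hypothetical)::

    from sqlalchemy import create_engine

    # set once on each new pooled connection, never reset
    eng = create_engine(
        "postgresql://scott:tiger@localhost/test",
        isolation_level="REPEATABLE READ",
    )

    # by contrast: set on checkout and reset on checkin, engine-wide
    eng = create_engine("postgresql://scott:tiger@localhost/test")
    eng = eng.execution_options(isolation_level="REPEATABLE READ")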
Removals here include:
* convert_unicode parameters
* encoding create_engine() parameter
* description encoding support
* "non-unicode fallback" modes under Python 2
* String symbols regarding Python 2 non-unicode fallbacks
* any concept of DBAPIs that don't accept unicode
statements, unicode bound parameters, or that return bytes
for strings anywhere except an explicit Binary / BLOB
type
* unicode processors in Python / C
Risk factors:
* Whether all DBAPIs do in fact return Unicode objects for
all entries in cursor.description now
* There was logic for mysql-connector trying to determine
description encoding. A quick test shows Unicode coming
back but it's not clear if there are still edge cases where
they return bytes. If so, these are bugs in that driver,
and at most we would only work around it in the mysql-connector
DBAPI itself (but we won't do that either).
* It seems like Oracle 8 was not expecting unicode bound parameters.
I'm assuming this was all Python 2 stuff and does not apply
for modern cx_Oracle under Python 3.
* third party dialects relying upon built-in unicode encoding/decoding;
but it's hard to imagine any non-SQLAlchemy database driver not
dealing exclusively in Python unicode strings in Python 3
Change-Id: I97d762ef6d4dd836487b714d57d8136d0310f28a
References: #7257
Added overridable methods ``PGDialect_asyncpg.setup_asyncpg_json_codec``
and ``PGDialect_asyncpg.setup_asyncpg_jsonb_codec``, which handle the
required task of registering JSON/JSONB codecs for these datatypes when
using asyncpg. The change is that these are broken out as individual,
overridable methods to support third party dialects that need to alter or
disable how these particular codecs are set up.
Fixes: #7284
Change-Id: I3eac258fea61f3975bd03c428747f788813ce45e
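For third party dialects, a minimal sketch of disabling one of the
codecs (hypothetical subclass; method signature assumed from the current
codebase)::

    from sqlalchemy.dialects.postgresql.asyncpg import PGDialect_asyncpg

    class MyThirdPartyDialect(PGDialect_asyncpg):
        async def setup_asyncpg_json_codec(self, conn):
            # skip JSON codec registration for a backend that does not
            # support the JSON type
            pass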
This reverts commit 96c294da8a50d692b3f0b8e508dbbca5d9c22f1b.
I have another approach that is more obvious, easier to override explicitly, and easier to test.
Change-Id: I11a3be7700dbc6f25d436e450b6fb8e8f6c4fd16
Change-Id: I7352bed0115b8fcdb4708e012d83e81d1ae494ed
Fixes: #7284
Modify the on_connect() method of PGDialect_asyncpg to
degrade gracefully for unsupported types instead of throwing a
ValueError. This is useful for third-party dialects that derive
from PGDialect_asyncpg but whose databases do not support
all types (e.g., CockroachDB supports JSONB but not JSON).
Change-Id: Ibb7cc8c3de632d27b9716a93d83956a590b2a2b0
Fixes: #7283
Change-Id: I5402a72617b7f9bc366d64bc5ce8669374839984
Fixed issue where "expanding IN" would fail to function correctly with
datatypes that use the :meth:`_types.TypeEngine.bind_expression` method,
where the method would need to be applied to each element of the
IN expression rather than the overall IN expression itself.
Fixed issue where IN expressions against a series of array elements, as can
be done with PostgreSQL, would fail to function correctly due to multiple
issues within the "expanding IN" feature of SQLAlchemy Core that was
standardized in version 1.4. The psycopg2 dialect now makes use of the
:meth:`_types.TypeEngine.bind_expression` method with :class:`_types.ARRAY`
to portably apply the correct casts to elements. The asyncpg dialect was
not affected by this issue as it applies bind-level casts at the driver
level rather than at the compiler level.
As part of this commit, the "bind translate" feature has been
simplified and now also applies to the names in the POSTCOMPILE tag,
to accommodate brackets.
Fixes: #7177
Change-Id: I08c703adb0a9bd6f5aeee5de3ff6f03cccdccdc5
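A minimal sketch of the kind of expression repaired here (table and
values are hypothetical)::

    from sqlalchemy import Integer, column, select, table
    from sqlalchemy.dialects.postgresql import ARRAY

    t = table("t", column("data", ARRAY(Integer)))

    # "expanding IN" against a series of array elements; each element of
    # the IN list now receives the type's bind_expression() / casts
    stmt = select(t).where(t.c.data.in_([[1, 2], [3, 4]]))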
Improve the interface used by adapted drivers, like the asyncio ones,
to access the actual connection object returned by the driver.
The :class:`_engine._ConnectionRecord` and
:class:`_engine._ConnectionFairy` now have two new attributes:
* ``dbapi_connection`` always represents a DBAPI compatible
object. For pep-249 drivers, this is the DBAPI connection as it always
has been, previously accessed under the ``.connection`` attribute.
For asyncio drivers that SQLAlchemy adapts into a pep-249 interface,
the returned object will normally be a SQLAlchemy adaptation object
called :class:`_engine.AdaptedConnection`.
* ``driver_connection`` always represents the actual connection object
maintained by the third party pep-249 DBAPI or async driver in use.
For standard pep-249 DBAPIs, this will always be the same object
as that of the ``dbapi_connection``. For an asyncio driver, it will be
the underlying asyncio-only connection object.
The ``.connection`` attribute remains available and is now a legacy alias
of ``.dbapi_connection``.
Fixes: #6832
Change-Id: Ib72f97deefca96dce4e61e7c38ba430068d6a82e
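A minimal sketch of the distinction (engine setup is hypothetical)::

    from sqlalchemy import create_engine

    engine = create_engine("postgresql+psycopg2://scott:tiger@localhost/test")
    fairy = engine.raw_connection()   # the pool's proxy object

    # always a pep-249 compatible object; for an asyncio driver this
    # would be SQLAlchemy's AdaptedConnection wrapper
    pep249_conn = fairy.dbapi_connection

    # the driver's own connection; for psycopg2 the same object as
    # above, for asyncpg the raw asyncpg connection
    driver_conn = fairy.driver_connection

    # legacy alias for .dbapi_connection
    legacy = fairy.connection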
Also replace http://pypi.python.org/pypi with https://pypi.org/project
Change-Id: I84b5005c39969a82140706472989f2a30b0c7685
Added accessors ``.sqlstate`` and synonym ``.pgcode`` to the ``.orig``
attribute of the SQLAlchemy exception class raised by the asyncpg DBAPI
adapter, that is, the intermediary exception object that wraps the
exception raised by the asyncpg library itself, but sits below the level
of the SQLAlchemy dialect.
Fixes: #6199
Change-Id: Ie0f1ffaaff47c7a50dd1fbccdbe588cdc5322b70
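A minimal sketch of accessing the new accessors (statement and
connection are hypothetical)::

    from sqlalchemy.exc import DBAPIError

    try:
        conn.execute(stmt)
    except DBAPIError as err:
        # err.orig is the adapter-level exception wrapping the one
        # raised by asyncpg itself
        print(err.orig.sqlstate)   # e.g. "23505"
        print(err.orig.pgcode)     # synonym for .sqlstate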
Added a new flag to the :class:`_engine.Dialect` class called
:attr:`_engine.Dialect.supports_statement_cache`. This flag now needs to be present
directly on a dialect class in order for SQLAlchemy's
:ref:`query cache <sql_caching>` to take effect for that dialect. The
rationale is based on discovered issues such as :ticket:`6173` revealing
that dialects which hardcode literal values from the compiled statement,
often the numerical parameters used for LIMIT / OFFSET, will not be
compatible with caching until these dialects are revised to use the
parameters present in the statement only. For third party dialects where
this flag is not applied, the SQL logging will show the message "dialect
does not support caching", indicating that dialect maintainers should
apply this flag once they have verified that no per-statement literal
values are rendered within the compilation phase.
Fixes: #6184
Change-Id: I6fd5b5d94200458d4cb0e14f2f556dbc25e27e22
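For third party dialects, a minimal sketch of opting in (hypothetical
subclass)::

    from sqlalchemy.dialects.postgresql.psycopg2 import PGDialect_psycopg2

    class MyDialect(PGDialect_psycopg2):
        # must be present directly on the dialect class itself, once
        # it's verified that no per-statement literal values are
        # rendered during compilation
        supports_statement_cache = True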
Added an ``asyncio.Lock()`` within SQLAlchemy's emulated DBAPI cursor,
local to the connection, for the asyncpg dialect, so that concurrent
executions on the connection during the window between the calls to
``prepare()`` and ``fetch()`` can no longer cause interface error
exceptions, and race conditions when starting a new transaction are
prevented as well. Other PostgreSQL DBAPIs are threadsafe at the
connection level, so this intends to provide a similar behavior, outside
the realm of server side cursors.
Apply the same idea to the aiomysql dialect which also would
otherwise be subject to corruption if the connection were used
concurrently.
While this is an issue which can also occur with the threaded
connection libraries, we anticipate asyncio users are more likely
to attempt using the same connection in multiple awaitables
at a time, even though this won't achieve concurrency for that
use case, as the asyncio programming style very much encourages
this pattern. As the failure modes are also more complicated under
asyncio, we'd rather this not be reported.
Fixes: #5967
Change-Id: I3670ba0c8f0b593c587c5aa7c6c61f9e8c5eb93a
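A conceptual sketch of the guard (names are illustrative, not the exact
implementation)::

    import asyncio

    class AdaptedAsyncConnection:
        def __init__(self, connection):
            self._connection = connection
            # one lock per connection, shared by all of its cursors
            self._execute_mutex = asyncio.Lock()

        async def _execute(self, operation):
            # prepare() and fetch() must not interleave with another
            # awaitable using the same connection
            async with self._execute_mutex:
                prepared = await self._connection.prepare(operation)
                return await prepared.fetch()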
Continued with the improvement made as part of :ticket:`5653` to further
support bound parameter names, including those generated against column
names, for names that include colons, parentheses, and question marks, as
well as improved test support, so that bound parameter names, even when
auto-derived from column names, have no problem including parentheses
in psycopg2's "pyformat" style.
As part of this change, the format used by the asyncpg DBAPI adapter
(which is local to SQLAlchemy's asyncpg dialect) has been changed from
the "qmark" paramstyle to "format", as there is a standard and internally
supported SQL string escaping style for names that use percent signs with
the "format" style (i.e. doubling the percent signs), as opposed to names
that use question marks with the "qmark" style (where an escaping system
is not defined by pep-249 or Python).
Fixes: #5941
Change-Id: Id86f5af81903d7063a8e3505e60df56490f85358
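A minimal illustration of the escaping difference (cursor and table are
hypothetical)::

    # "format" paramstyle: the driver %-interpolates parameters, so a
    # literal percent sign can be escaped by doubling it
    cursor.execute(
        "SELECT id FROM t WHERE name LIKE '%%widget%%' AND qty = %s",
        (5,),
    )
    # "qmark" style has no equivalent convention: pep-249 defines no
    # escape for a literal question mark in the SQL string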
Fixed bug in the asyncpg dialect such that a failure during a "commit",
or less likely a "rollback", now cancels the entire transaction; it's no
longer possible to emit a rollback afterwards. Previously the connection
would continue to await a rollback that could not succeed, as asyncpg
would reject it.
Fixes: #5824
Change-Id: I5a4916740c269b410f4d1a78ed25191de344b9d0
To allow the "connection" pytest fixture and others to work
correctly in conjunction with setup/teardown that expects
to be external to the transaction, remove and prevent any usage
of "xdist" style names that are hardcoded by pytest to run
inside of fixtures, even function level ones. Instead use
pytest autouse fixtures to implement our own
r"setup|teardown_test(?:_class)?" methods so that we can ensure
function-scoped fixtures are run within them. A new more
explicit flow is set up within plugin_base and pytestplugin
such that the order of setup/teardown steps, which there are now
many, is fully documented and controllable. New granularity
has been added to the test teardown phase to distinguish
between "end of the test" when lock-holding structures on
connections should be released to allow for table drops,
vs. "end of the test plus its teardown steps" when we can
perform final cleanup on connections and run assertions
that everything is closed out.
From there we can remove most of the defensive "tear down everything"
logic inside of engines which for many years would frequently dispose
of pools over and over again, creating for a broken and expensive
connection flow. A quick test shows that running test/sql/ against
a single Postgresql engine with the new approach uses 75% fewer new
connections, creating 42 new connections total, vs. 164 new
connections total with the previous system.
As part of this, the new fixtures metadata/connection/future_connection
have been integrated such that they can be combined together
effectively. The fixture_session(), provide_metadata() fixtures
have been improved, including that fixture_session() now strongly
references sessions which are explicitly torn down before
table drops occur after a test.
Major changes have been made to the
ConnectionKiller such that it now features different "scopes" for
testing engines and will limit its cleanup to those testing
engines corresponding to end of test, end of test class, or
end of test session. The system by which it tracks DBAPI
connections has been reworked; it is ultimately somewhat similar to
how it worked before but is organized more clearly along
with the proxy-tracking logic. A "testing_engine" fixture
is also added that works as a pytest fixture rather than a
standalone function. The connection cleanup logic should
now be very robust, as we now can use the same global
connection pools for the whole suite without ever disposing
them, while also running a query for PostgreSQL
locks remaining after every test and assert there are no open
transactions leaking between tests at all. Additional steps
are added that also accommodate for asyncio connections not
explicitly closed, as is the case for legacy sync-style
tests as well as the async tests themselves.
As always, hundreds of tests are further refined to use the
new fixtures where problems with loose connections were identified,
largely as a result of the new PostgreSQL assertions,
many more tests have moved from legacy patterns into the newest.
An unfortunate discovery during the creation of this system is that
autouse fixtures (as well as if they are set up by
@pytest.mark.usefixtures) are not usable at our current scale with pytest
4.6.11 running under Python 2. It's unclear if this is due
to the older version of pytest or how it implements itself for
Python 2, as well as if the issue is CPU slowness or just large
memory use, but collecting the full span of tests takes over
a minute for a single process when any autouse fixtures are in
place, and on CI the jobs just time out after ten minutes.
So at the moment this patch also reinvents a small version of
"autouse" fixtures when py2k is running, which skips generating
the real fixture and instead uses two global pytest fixtures
(which don't seem to impact performance) to invoke the
"autouse" fixtures ourselves outside of pytest.
This will limit our ability to do more with fixtures
until we can remove py2k support.
py.test is still observed to be much slower in collection in the
4.6.11 version compared to modern 6.2 versions, so add support for new
TOX_POSTGRESQL_PY2K and TOX_MYSQL_PY2K environment variables that
will run the suite for fewer backends under Python 2. For Python 3
pin pytest to modern 6.2 versions where performance for collection
has been improved greatly.
Includes the following improvements:
Fixed bug in asyncio connection pool where ``asyncio.TimeoutError`` would
be raised rather than :class:`.exc.TimeoutError`. Also repaired the
:paramref:`_sa.create_engine.pool_timeout` parameter set to zero when using
the async engine, which previously would ignore the timeout and block
rather than timing out immediately as is the behavior with regular
:class:`.QueuePool`.
For asyncio the connection pool will now also not interact
at all with an asyncio connection whose ConnectionFairy is
being garbage collected; a warning that the connection was
not properly closed is emitted and the connection is discarded.
Within the test suite the ConnectionKiller is now maintaining
strong references to all DBAPI connections and ensuring they
are released when tests end, including those whose ConnectionFairy
proxies are GCed.
Identified cx_Oracle.stmtcachesize as a major factor in Oracle
test scalability issues; this can be reset on a per-test basis
rather than setting it to zero across the board. The addition
of this flag has resolved the long-standing Oracle "two task"
error problem.
For SQL Server, changed the temp table style used by the
"suite" tests to be the double-pound-sign, i.e. global,
variety, which is much easier to test generically. There
are already reflection tests that are more finely tuned
to both styles of temp table within the mssql test
suite. Additionally, added an extra step to the
"dropfirst" mechanism for SQL Server that will remove
all foreign key constraints first as some issues were
observed when using this flag when multiple schemas
had not been torn down.
Identified and fixed two subtle failure modes in the
engine, when commit/rollback fails in a begin()
context manager, the connection is explicitly closed,
and when "initialize()" fails on the first new connection
of a dialect, the transactional state on that connection
is still rolled back.
Fixes: #5826
Fixes: #5827
Change-Id: Ib1d05cb8c7cf84f9a4bfd23df397dc23c9329bfe
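A conceptual sketch of the autouse-fixture approach described above
(details are illustrative, not the actual plugin code)::

    import pytest

    @pytest.fixture(autouse=True)
    def _setup_teardown_test(request):
        # run our own setup_test() / teardown_test() so that
        # function-scoped fixtures are guaranteed to run within them
        instance = request.instance
        if instance is not None and hasattr(instance, "setup_test"):
            instance.setup_test()
        yield
        if instance is not None and hasattr(instance, "teardown_test"):
            instance.teardown_test()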
Change-Id: Ic5bb19ca8be3cb47c95a0d3315d84cb484bac47c
Enhanced the performance of the asyncpg dialect by caching the asyncpg
PreparedStatement objects on a per-connection basis. For a test case that
makes use of the same statement on a set of pooled connections this appears
to grant a 10-20% speed improvement. The cache size is adjustable and may
also be disabled.
Unfortunately the caching gets more complicated when there are
schema changes present. An invalidation scheme is now also added
to accommodate prepared statements as well as asyncpg cached OIDs.
However, the resulting exceptions cannot be prevented if DDL has changed
database structures that were cached for a particular asyncpg
connection. Logic is added to clear the caches when these errors occur.
Change-Id: Icf02aa4871eb192f245690f28be4e9f9c35656c6
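A conceptual sketch of the per-connection cache (illustrative only; the
real implementation uses a size-bounded LRU plus invalidation hooks)::

    class AdaptedAsyncpgConnection:
        def __init__(self, connection, cache_size=100):
            self._connection = connection
            self._cache_size = cache_size
            self._prepared_cache = {}

        async def _prepare(self, operation):
            try:
                return self._prepared_cache[operation]
            except KeyError:
                pass
            if len(self._prepared_cache) >= self._cache_size:
                # crude eviction standing in for a real LRU policy
                self._prepared_cache.clear()
            prepared = await self._connection.prepare(operation)
            self._prepared_cache[operation] = prepared
            return prepared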
Change-Id: I4940d184a4dc790782fcddfb9873af3cca844398
The test suite failed to detect that we started accessing
a non-existent slot "_isolation_setting", added by me in
9b779611f9, because the test suite makes use of the
AsyncAdaptFallback_asyncpg_connection subclass, which didn't
declare __slots__ of its own; its instances therefore had a
__dict__ that masked the missing slot.
Fixes: #5739
Change-Id: Ibbbedc2ee0f1d1c9d91ba7898d755812deccb380
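A minimal illustration of the general pitfall (class names are
hypothetical)::

    class Base:
        __slots__ = ("a",)

    class Sub(Base):
        # no __slots__ here, so instances of Sub get a __dict__ again
        pass

    Sub().missing = 1    # silently allowed via the instance __dict__
    Base().missing = 1   # AttributeError, as the slot doesn't exist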
Fixed regression where a connection pool event specified with a keyword,
most notably ``insert=True``, would be lost when the event was set up.
This would prevent startup events that need to fire before dialect-level
events from working correctly.
The internal mechanics of the engine connection routine has been altered
such that it's now guaranteed that a user-defined event handler for the
:meth:`_pool.PoolEvents.connect` handler, when established using
``insert=True``, will allow an event handler to run that is definitely
invoked **before** any dialect-specific initialization starts up, most
notably when it does things like detect default schema name.
Previously, this would occur in most cases but not unconditionally.
A new example is added to the schema documentation illustrating how to
establish the "default schema name" within an on-connect event
(upcoming as part of I882edd5bbe06ee5b4d0a9c148854a57b2bcd4741)
Additional changes to support setting the default schema name:
The Oracle dialect now uses
``select sys_context( 'userenv', 'current_schema' ) from dual`` to get
the default schema name, rather than ``SELECT USER FROM DUAL``, to
accommodate for changes to the session-local schema name under Oracle.
Added a read/write ``.autocommit`` attribute to the DBAPI-adaptation layer
for the asyncpg dialect. This is so that when working with DBAPI-specific
schemes that need to use "autocommit" directly with the DBAPI connection,
the same ``.autocommit`` attribute which works with both psycopg2 as well
as pg8000 is available.
Fixes: #5716
Fixes: #5708
Change-Id: I7dce56b4345ffc720e25e2aaccb7e42bb29e5671
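A minimal sketch of such an event handler (URL and schema name are
hypothetical)::

    from sqlalchemy import create_engine, event

    engine = create_engine("postgresql://scott:tiger@localhost/test")

    @event.listens_for(engine, "connect", insert=True)
    def _set_search_path(dbapi_connection, connection_record):
        # guaranteed to run before dialect-level initialization,
        # e.g. before the default schema name is detected
        cursor = dbapi_connection.cursor()
        cursor.execute("SET SESSION search_path='my_schema'")
        cursor.close()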
### Description
It's a typo fix.
### Checklist
This pull request is:
- [x] A documentation / typographical error fix
  - Good to go, no issue or tests are needed
Closes: #5689
Pull-request: https://github.com/sqlalchemy/sqlalchemy/pull/5689
Pull-request-sha: 3823b2552da2a7b7a345979ad6283d848c0ec7a5
Change-Id: I170e7bea60182ebec8867499b2ea171d813fc49a
Reworked the "setinputsizes()" set of dialect hooks to be correctly
extensible for any arbitrary DBAPI, by allowing dialects to define
individual hooks that may invoke cursor.setinputsizes() in the
appropriate style for that DBAPI. In particular this is intended to
support pyodbc's style of usage, which is fundamentally different from
that of cx_Oracle. Added support for pyodbc.
Fixes: #5649
Change-Id: I9f1794f8368bf3663a286932cfe3992dae244a10
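A conceptual sketch of a dialect-level hook in the pyodbc style (hook
and flag names are assumptions drawn from the surrounding codebase;
illustrative only)::

    from sqlalchemy.engine.default import DefaultDialect

    class MyPyODBCStyleDialect(DefaultDialect):
        use_setinputsizes = True

        def do_set_input_sizes(self, cursor, list_of_tuples, context):
            # pyodbc expects a single positional list describing all
            # parameters, unlike cx_Oracle's per-name keyword style
            cursor.setinputsizes(
                [
                    (dbtype, None, None)
                    for key, dbtype, sqltype in list_of_tuples
                ]
            )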
Add support for ``FETCH {FIRST | NEXT} [ count ] {ROW | ROWS}
{ONLY | WITH TIES}`` in the select for the supported backends,
currently PostgreSQL, Oracle and MSSQL.
Fixes: #5576
Change-Id: Ibb5871a457c0555f82b37e354e7787d15575f1f7
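A minimal sketch of the corresponding Core API (table is hypothetical)::

    from sqlalchemy import column, select, table

    t = table("t", column("x"))
    stmt = select(t).order_by(t.c.x).fetch(5, with_ties=True)
    # renders, e.g.:  SELECT t.x FROM t ORDER BY t.x
    #                 FETCH FIRST 5 ROWS WITH TIES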
It's better; the majority of these changes look more readable to me.
Also found some docstrings that had formatting / quoting issues.
Change-Id: I582a45fde3a5648b2f36bab96bad56881321899b
Set default type codecs for the ``json`` and ``jsonb`` types when using
the asyncpg driver. By default asyncpg will not decode them, returning
strings instead.
Fixes: #5584
Change-Id: I41348eff8096ccf87b952d7e797c0694c6c4b5c4
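A conceptual sketch of what gets registered on connect, using asyncpg's
documented codec API::

    import json

    async def _setup_json_codecs(asyncpg_connection):
        # register encoders/decoders so json/jsonb values come back as
        # Python objects rather than strings
        for typename in ("json", "jsonb"):
            await asyncpg_connection.set_type_codec(
                typename,
                encoder=json.dumps,
                decoder=json.loads,
                schema="pg_catalog",
            )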
This codec was used to ensure the "pg_attribute.generated"
column comes back as a string and not bytes, matching how
other PG drivers treat this datatype. However, this breaks
asyncpg's internal implementation of set_type_codec going forward,
and the "char" datatype is actually bytes in any case.
So at the moment it appears psycopg2/pg8000 are broken in their
mis-treatment of the datatype, and asyncpg is broken in that it was
allowing us to change a codec that it appears to rely upon internally.
Fixes: #5586
Change-Id: I937eba315904721aa4e2726b95432910a8affe5f
The server_side_cursors engine-wide feature relies upon
regexp parsing of statements as well as general guessing as
to when the feature should be used. This is not within the
2.0 way of doing things and should be removed.
Additionally, mariadbconnector defaults to unbuffered cursors;
add new cursor hooks so that mariadbconnector can specify
buffered or unbuffered cursors without too much difficulty.
This will also correctly default mariadbconnector to buffered
cursors, which should repair the segfaults we've been getting.
Try to restore the assert_raises that was removed in
5b6dfc0c38bf1f01da4b8 to see if mariadbconnector segfaults
are resolved.
Change-Id: I77f1c972c742e40694972f578140bb0cac8c39eb
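Per-statement unbuffered cursors remain available through the explicit
execution option; a minimal sketch (engine and statement are
hypothetical)::

    with engine.connect() as conn:
        result = conn.execution_options(stream_results=True).execute(stmt)
        for row in result:
            ...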
Added support for PostgreSQL "readonly" and "deferrable" flags for all of
psycopg2, asyncpg and pg8000 dialects. This takes advantage of a newly
generalized version of the "isolation level" API to support other kinds of
session attributes set via execution options that are reliably reset
when connections are returned to the connection pool.
Fixes: #5549
Change-Id: I0ad6d7a095e49d331618274c40ce75c76afdc7dd
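A minimal sketch of the new options (URL is hypothetical)::

    from sqlalchemy import create_engine, text

    engine = create_engine("postgresql+psycopg2://scott:tiger@localhost/test")

    with engine.connect() as conn:
        conn = conn.execution_options(
            postgresql_readonly=True, postgresql_deferrable=True
        )
        conn.execute(text("select 1"))
    # both attributes are reset when the connection returns to the pool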
Using the approach introduced at
https://gist.github.com/zzzeek/6287e28054d3baddc07fa21a7227904e,
we can now create asyncio endpoints that are then handled
in "implicit IO" form within the majority of the Core internals.
Coroutines are then re-exposed at the point at which we call
into asyncpg methods.
Patch includes:
* asyncpg dialect
* asyncio package
* engine, result, ORM session classes
* new test fixtures, tests
* some work with pep-484 and a short plugin for the
pyannotate package, which seems to have so-so results
Change-Id: Idbcc0eff72c4cad572914acdd6f40ddb1aef1a7d
Fixes: #3414
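A minimal sketch of the resulting user-facing API (URL is
hypothetical)::

    import asyncio

    from sqlalchemy import text
    from sqlalchemy.ext.asyncio import create_async_engine

    async def main():
        engine = create_async_engine(
            "postgresql+asyncpg://scott:tiger@localhost/test"
        )
        async with engine.connect() as conn:
            result = await conn.execute(text("select 1"))
            print(result.scalar())
        await engine.dispose()

    asyncio.run(main())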