| Commit message | Author | Age | Files | Lines |
|
|
|
|
|
|
|
|
|
|
|
| |
Removed the previously deprecated ``case_sensitive`` parameter from
:func:`_sa.create_engine`, which would impact only the lookup of string
column names in Core-only result set rows; it had no effect on the behavior
of the ORM. The effective behavior of what ``case_sensitive`` refers
to remains at its default value of ``True``, meaning that string names
looked up in ``row._mapping`` will match case-sensitively, just like any
other Python mapping.
Change-Id: I0dc4be3fac37d30202b1603db26fa10a110b618d
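A minimal sketch of the case-sensitive lookup behavior described above; the
``engine`` and ``text()`` names are assumed to be in scope and the column
label is illustrative only::

    with engine.connect() as conn:
        row = conn.execute(text("SELECT 1 AS SomeCol")).first()

        row._mapping["SomeCol"]  # matches
        row._mapping["somecol"]  # raises KeyError, like any other Python mapping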
|
|
|
|
| |
Change-Id: Iafb50de7e28626d9cee755db9c05ac7189b4d963
|
|
|
|
|
|
|
|
|
| |
Ensure that any documentation of this parameter makes clear that it
is not used under Python 3, and is not used very much under
Python 2 either.
Fixes: #7050
Change-Id: Iaf619f1ee164fc58afe710d11627ed6368d74343
|
|
|
|
|
|
| |
Also replace http://pypi.python.org/pypi with https://pypi.org/project
Change-Id: I84b5005c39969a82140706472989f2a30b0c7685
|
|
|
|
|
|
|
|
|
|
|
|
| |
The fix for pysqlcipher released in version 1.4.3 :ticket:`5848` was
unfortunately non-working, in that the new ``on_connect_url`` hook was
erroneously not receiving a ``URL`` object under normal usage of
:func:`_sa.create_engine` and instead received a string that was unhandled;
the test suite failed to fully set up the actual conditions under which
this hook is called. This has been fixed.
Fixes: #6586
Change-Id: I3bf738daec35877a10fdad740f08dca9e7420829
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Fixed critical regression caused by the change in :ticket:`5497` where the
connection pool "init" phase no longer occurred within mutexed isolation,
allowing other threads to proceed with the dialect uninitialized, which
could then impact the compilation of SQL statements.
This issue is essentially the same regression which was fixed many years
ago in :ticket:`2964` in dd32540dabbee0678530fb1b0868d1eb41572dca,
which was missed this time as the test suite for
that issue only tested the pool in isolation, and assumed the
"first_connect" event would be used by the Engine. However
:ticket:`5497` stopped using "first_connect" and no test detected
the lack of mutexing; that has been resolved here through
the addition of more tests.
This fix also identifies what is probably a bug in earlier versions
of SQLAlchemy where the "first_connect" handler would be cancelled
if the initializer failed; this is evidenced by
test_explode_in_initializer which was doing a reconnect due to
c.rollback() yet wasn't hanging. We now solve this issue by
preventing the manufactured Connection from ever reconnecting
inside the first_connect handler.
Also remove the "_sqla_unwrap" test attribute; this is almost
never used anymore; however, we can use a more targeted
wrapper supplied by the testing.engines.proxying_engine
function.
See if we can also open up Oracle for "ad hoc engines" tests
now that we have better connection management logic.
Fixes: #6337
Change-Id: I4a3476625c4606f1a304dbc940d500325e8adc1a
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
The ``pysqlcipher`` dialect now imports the ``sqlcipher3`` module
for Python 3 by default. Regressions that prevented the connection
routine from working have been repaired.
To better support the post-connection steps of the pysqlcipher
dialect, a new hook Dialect.on_connect_url() is added, which
supersedes Dialect.on_connect() and is passed the URL object.
The dialect now pulls the passphrase and other cipher args
from the URL directly without including them in the
"connect" args. This will allow any user-defined extensibility
to connecting to work as it would for other dialects.
The commit also builds upon the extended routines in
sqlite/provisioning.py to better support running tests against
multiple simultaneous SQLite database files. Additionally enables the
backend for test_sqlite, which was previously skipping everything
for aiosqlite as well; fortunately everything there is passing.
Fixes: #5848
Change-Id: I43f53ebc62298a84a4abe149e1eb699a027b7915
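For illustration only, a hedged sketch of what a dialect-level implementation
of the new hook might look like; the subclass and pragma handling here are
hypothetical, loosely modeled on what pysqlcipher needs to do::

    from sqlalchemy.dialects.sqlite.pysqlite import SQLiteDialect_pysqlite

    class MyCipherDialect(SQLiteDialect_pysqlite):
        def on_connect_url(self, url):
            # the URL object is passed directly; cipher arguments can be
            # pulled from url.password / url.query rather than connect args
            passphrase = url.password or ""

            def connect(dbapi_connection):
                cursor = dbapi_connection.cursor()
                cursor.execute('pragma key="%s"' % passphrase)
                cursor.close()

            return connect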
|
|
|
|
|
| |
Fixes: #6075
Change-Id: Ia3f6109e3a038ddcf513d3e887b4cad0f776f0a6
|
|
|
|
|
| |
Fixes: #5715
Change-Id: I2ac16541d34f49b25070e00c43596bcd71aff72d
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Added new execution option
:paramref:`_engine.Connection.execution_options.logging_token`. This option
will add an additional per-message token to log messages generated by the
:class:`_engine.Connection` as it executes statements. This token is not
part of the logger name itself (that part can be affected using the
existing :paramref:`_sa.create_engine.logging_name` parameter), so is
appropriate for ad-hoc connection use without the side effect of creating
many new loggers. The option can be set at the level of
:class:`_engine.Connection` or :class:`_engine.Engine`.
Fixes: #5911
Change-Id: Iec9c39b868b3578fcedc1c094dace5b6f64bacea
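A short usage sketch; the database URL and token value are placeholders::

    from sqlalchemy import create_engine, text

    engine = create_engine(
        "postgresql+psycopg2://scott:tiger@localhost/test", echo=True
    )

    with engine.connect().execution_options(
        logging_token="checkout-worker"
    ) as conn:
        conn.execute(text("select 1"))
        # log lines now include the token, roughly:
        # INFO sqlalchemy.engine.Engine [checkout-worker] select 1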
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
To allow the "connection" pytest fixture and others work
correctly in conjunction with setup/teardown that expects
to be external to the transaction, remove and prevent any usage
of "xdist" style names that are hardcoded by pytest to run
inside of fixtures, even function level ones. Instead use
pytest autouse fixtures to implement our own
r"setup|teardown_test(?:_class)?" methods so that we can ensure
function-scoped fixtures are run within them. A new more
explicit flow is set up within plugin_base and pytestplugin
such that the order of setup/teardown steps, which there are now
many, is fully documented and controllable. New granularity
has been added to the test teardown phase to distinguish
between "end of the test" when lock-holding structures on
connections should be released to allow for table drops,
vs. "end of the test plus its teardown steps" when we can
perform final cleanup on connections and run assertions
that everything is closed out.
From there we can remove most of the defensive "tear down everything"
logic inside of engines which for many years would frequently dispose
of pools over and over again, creating for a broken and expensive
connection flow. A quick test shows that running test/sql/ against
a single Postgresql engine with the new approach uses 75% fewer new
connections, creating 42 new connections total, vs. 164 new
connections total with the previous system.
As part of this, the new fixtures metadata/connection/future_connection
have been integrated such that they can be combined together
effectively. The fixture_session(), provide_metadata() fixtures
have been improved, including that fixture_session() now strongly
references sessions which are explicitly torn down before
table drops occur after a test.
Major changes have been made to the
ConnectionKiller such that it now features different "scopes" for
testing engines and will limit its cleanup to those testing
engines corresponding to end of test, end of test class, or
end of test session. The system by which it tracks DBAPI
connections has been reworked, is ultimately somewhat similar to
how it worked before but is organized more clearly along
with the proxy-tracking logic. A "testing_engine" fixture
is also added that works as a pytest fixture rather than a
standalone function. The connection cleanup logic should
now be very robust, as we now can use the same global
connection pools for the whole suite without ever disposing
them, while also running a query for PostgreSQL
locks remaining after every test and assert there are no open
transactions leaking between tests at all. Additional steps
are added that also accommodate for asyncio connections not
explicitly closed, as is the case for legacy sync-style
tests as well as the async tests themselves.
As always, hundreds of tests are further refined to use the
new fixtures where problems with loose connections were identified,
largely as a result of the new PostgreSQL assertions,
many more tests have moved from legacy patterns into the newest.
An unfortunate discovery during the creation of this system is that
autouse fixtures (as well as if they are set up by
@pytest.mark.usefixtures) are not usable at our current scale with pytest
4.6.11 running under Python 2. It's unclear if this is due
to the older version of pytest or how it implements itself for
Python 2, as well as if the issue is CPU slowness or just large
memory use, but collecting the full span of tests takes over
a minute for a single process when any autouse fixtures are in
place and on CI the jobs just time out after ten minutes.
So at the moment this patch also reinvents a small version of
"autouse" fixtures when py2k is running, which skips generating
the real fixture and instead uses two global pytest fixtures
(which don't seem to impact performance) to invoke the
"autouse" fixtures ourselves outside of pytest.
This will limit our ability to do more with fixtures
until we can remove py2k support.
py.test is still observed to be much slower in collection in the
4.6.11 version compared to modern 6.2 versions, so add support for new
TOX_POSTGRESQL_PY2K and TOX_MYSQL_PY2K environment variables that
will run the suite for fewer backends under Python 2. For Python 3
pin pytest to modern 6.2 versions where performance for collection
has been improved greatly.
Includes the following improvements:
Fixed bug in asyncio connection pool where ``asyncio.TimeoutError`` would
be raised rather than :class:`.exc.TimeoutError`. Also repaired the
:paramref:`_sa.create_engine.pool_timeout` parameter set to zero when using
the async engine, which previously would ignore the timeout and block
rather than timing out immediately as is the behavior with regular
:class:`.QueuePool`.
For asyncio the connection pool will now also not interact
at all with an asyncio connection whose ConnectionFairy is
being garbage collected; a warning that the connection was
not properly closed is emitted and the connection is discarded.
Within the test suite the ConnectionKiller is now maintaining
strong references to all DBAPI connections and ensuring they
are released when tests end, including those whose ConnectionFairy
proxies are GCed.
Identified cx_Oracle.stmtcachesize as a major factor in Oracle
test scalability issues, this can be reset on a per-test basis
rather than setting it to zero across the board. The addition
of this flag has resolved the long-standing Oracle "two task"
error problem.
For SQL Server, changed the temp table style used by the
"suite" tests to be the double-pound-sign, i.e. global,
variety, which is much easier to test generically. There
are already reflection tests that are more finely tuned
to both styles of temp table within the mssql test
suite. Additionally, added an extra step to the
"dropfirst" mechanism for SQL Server that will remove
all foreign key constraints first as some issues were
observed when using this flag when multiple schemas
had not been torn down.
Identified and fixed two subtle failure modes in the
engine, when commit/rollback fails in a begin()
context manager, the connection is explicitly closed,
and when "initialize()" fails on the first new connection
of a dialect, the transactional state on that connection
is still rolled back.
Fixes: #5826
Fixes: #5827
Change-Id: Ib1d05cb8c7cf84f9a4bfd23df397dc23c9329bfe
|
|
|
|
| |
Change-Id: Ic5bb19ca8be3cb47c95a0d3315d84cb484bac47c
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Fixes: #5719
### Description
Make it explicit in the documentation and in the default value for the 'timeout'
parameter that `timeout` can be a float. Because Python timing is not
very accurate, warn about the precision.
Closes: #5710
Pull-request: https://github.com/sqlalchemy/sqlalchemy/pull/5710
Pull-request-sha: 5f4eef8b4aba756d32e14ea41f71ef2919c26b84
Change-Id: I462524b1624ca5cc76d083a1d58e5dc89501c1a9
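For example, a float value is accepted directly (the URL is a placeholder)::

    from sqlalchemy import create_engine

    engine = create_engine(
        "postgresql+psycopg2://scott:tiger@localhost/test",
        pool_size=5,
        pool_timeout=2.5,  # seconds; sub-second precision is approximate
    )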
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Fixed regression where a connection pool event specified with a keyword,
most notably ``insert=True``, would be lost when the event was set up.
This would prevent startup events that need to fire before dialect-level
events from working correctly.
The internal mechanics of the engine connection routine has been altered
such that it's now guaranteed that a user-defined event handler for the
:meth:`_pool.PoolEvents.connect` handler, when established using
``insert=True``, will allow an event handler to run that is definitely
invoked **before** any dialect-specific initialization starts up, most
notably when it does things like detect default schema name.
Previously, this would occur in most cases but not unconditionally.
A new example is added to the schema documentation illustrating how to
establish the "default schema name" within an on-connect event
(upcoming as part of I882edd5bbe06ee5b4d0a9c148854a57b2bcd4741)
Additional changes to support setting default schema name:
The Oracle dialect now uses
``select sys_context( 'userenv', 'current_schema' ) from dual`` to get
the default schema name, rather than ``SELECT USER FROM DUAL``, to
accommodate for changes to the session-local schema name under Oracle.
Added a read/write ``.autocommit`` attribute to the DBAPI-adaptation layer
for the asyncpg dialect. This so that when working with DBAPI-specific
schemes that need to use "autocommit" directly with the DBAPI connection,
the same ``.autocommit`` attribute which works with both psycopg2 as well
as pg8000 is available.
Fixes: #5716
Fixes: #5708
Change-Id: I7dce56b4345ffc720e25e2aaccb7e42bb29e5671
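A hedged sketch of such a handler, loosely adapted from the documented
"set search path on connect" recipe; the schema name and URL are placeholders::

    from sqlalchemy import create_engine, event

    engine = create_engine("postgresql+psycopg2://scott:tiger@localhost/test")

    @event.listens_for(engine, "connect", insert=True)
    def set_search_path(dbapi_connection, connection_record):
        # with insert=True this runs before dialect-level initialization,
        # including default schema name detection
        existing_autocommit = dbapi_connection.autocommit
        dbapi_connection.autocommit = True
        cursor = dbapi_connection.cursor()
        cursor.execute("SET SESSION search_path='my_schema'")
        cursor.close()
        dbapi_connection.autocommit = existing_autocommit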
|
|
|
|
|
|
|
|
|
| |
Added the "future" keyword to the list of words that are known by the
:func:`_sa.engine_from_config` function, so that the values "true" and
"false" may be configured as "boolean" values when using a key such
as ``sqlalchemy.future = true`` or ``sqlalchemy.future = false``.
Change-Id: Ib4bba748497cc68e4c913dde54c23a4bb08b4deb
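A small example of such a configuration; the values are illustrative::

    from sqlalchemy import engine_from_config

    config = {
        "sqlalchemy.url": "postgresql+psycopg2://scott:tiger@localhost/test",
        "sqlalchemy.echo": "true",
        "sqlalchemy.future": "true",  # now coerced to a boolean
    }

    engine = engine_from_config(config, prefix="sqlalchemy.")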
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
* Fix subclass traversals to not run classes multiple times
* switch compiler visitor to use an attrgetter, to avoid
an eval() at startup time
* don't pre-generate traversal functions, there's lots of these
which are expensive to generate at once and most applications
won't use them all; have it generate them on first use instead
* Some ideas about removing asyncio imports; they don't seem to
be too significant. Apply some more simplicity to the overall
"greenlet fallback" situation
Fixes: #5681
Change-Id: Ib564ddaddb374787ce3e11ff48026e99ed570933
|
|
|
|
|
|
|
| |
The text here was a little confusing and didn't refer to major
configuration elements such as hide_parameters.
Change-Id: I4e2179e5a64c326d30b65a8871b924725c41b453
|
|
|
|
|
| |
Fixes: #5563
Change-Id: I29204fdf679d750c66ed17daf70bc8d7cb1b7f65
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
It's not really correct that URL is mutable and doesn't do
any argument checking; propose replacing it with an immutable
named tuple with rich copy-and-mutate methods.
At the moment this makes a hard change to the CreateEnginePlugin
docs that previously recommended url.query.pop(). I can't find
any plugins on github other than my own that are using this
feature, so see if we can just make a hard change on this one.
Fixes: #5526
Change-Id: I28a0a471d80792fa8c28f4fa573d6352966a4a79
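A sketch of the copy-and-mutate style this implies; names and values are
placeholders::

    from sqlalchemy.engine import URL

    url = URL.create(
        "postgresql+psycopg2",
        username="scott",
        password="tiger",
        host="localhost",
        database="test",
    )

    # URL is immutable; these methods return new URL objects
    other = url.set(database="other_db")
    with_ssl = url.update_query_dict({"sslmode": "require"})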
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Using the approach introduced at
https://gist.github.com/zzzeek/6287e28054d3baddc07fa21a7227904e
We can now create asyncio endpoints that are then handled
in "implicit IO" form within the majority of the Core internals.
Then coroutines are re-exposed at the point at which we call
into asyncpg methods.
Patch includes:
* asyncpg dialect
* asyncio package
* engine, result, ORM session classes
* new test fixtures, tests
* some work with pep-484 and a short plugin for the
pyannotate package, which seems to have so-so results
Change-Id: Idbcc0eff72c4cad572914acdd6f40ddb1aef1a7d
Fixes: #3414
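A minimal end-to-end sketch of the asyncio-facing API; the URL is a placeholder::

    import asyncio

    from sqlalchemy import text
    from sqlalchemy.ext.asyncio import create_async_engine

    async def main():
        engine = create_async_engine(
            "postgresql+asyncpg://scott:tiger@localhost/test"
        )
        async with engine.connect() as conn:
            result = await conn.execute(text("select 1"))
            print(result.scalar())
        await engine.dispose()

    asyncio.run(main())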
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Adjusted the dialect initialization process such that the
:meth:`_engine.Dialect.on_connect` is not called a second time on the first
connection. The hook is called first, then the
:meth:`_engine.Dialect.initialize` is called if that connection is the
first for that dialect, then no more events are called. This eliminates
the two calls to the "on_connect" function which can produce very difficult
debugging situations.
Fixes: #5497
Change-Id: Icefc2e884e30ee7b4ac84b99dc54bf992a6085e3
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
in order to accommodate relationship loaders
with lambda caching, a lot more is needed. This is
a full refactor of the lambda system such that it
now has two levels of caching; the first level caches what
can be known from the __code__ element, then the next level
of caching is against the lambda itself and the contents
of __closure__. This allows for the elements inside
the lambdas, like columns and entities, to change and
then be part of the cache key. Lazy/selectinloads' use of
baked queries had to add distinct cache key elements,
which was attempted here but overall things needed to be
more robust than that.
This commit is broken out from the very long and sprawling
commit at Id6b5c03b1ce9ddb7b280f66792212a0ef0a1c541 .
Change-Id: I29a513c98917b1d503abfdd61e6b6e8800851aa8
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Several weeks of using the future_select() construct
has led to the proposal there be just one select() construct
again which features the new join() method, and otherwise accepts
both the 1.x and 2.x argument styles. This would make
migration simpler and reduce confusion.
However, confusion may be increased by the fact that select().join()
is different. Current thinking is we may be better off
with a few hard behavioral changes to old and relatively unknown APIs
rather than trying to play both sides within two extremely similar
but subtly different APIs. At the moment, the .join() thing seems
to be the only behavioral change that occurs without the user
taking any explicit steps. Session.execute() will still
behave the old way as we are adding a future flag.
This change also adds the "future" flag to Session() and
session.execute(), so that interpretation of the incoming statement,
as well as that the new style result is returned, does not
occur for existing applications unless they add the use
of this flag.
The change in general is moving the "removed in 2.0" system
further along where we want the test suite to fully pass
even if the SQLALCHEMY_WARN_20 flag is set.
Get many tests to pass when SQLALCHEMY_WARN_20 is set; this
should be ongoing after this patch merges.
Improve the RemovedIn20 warning; these are all deprecated
"since" 1.4, so ensure that's what the messages read.
Make sure the information link is on all warnings.
Add deprecation warnings for parameters present and
add warnings to all FromClause.select() types of methods.
Fixes: #5379
Fixes: #5284
Change-Id: I765a0b912b3dcd0e995426427d8bb7997cbffd51
References: #5159
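A brief sketch of what the unified construct plus the future flag look like in
use; ``User`` and ``Address`` are assumed mapped classes and ``engine`` is
assumed to exist::

    from sqlalchemy import select
    from sqlalchemy.orm import Session

    # 2.0-style select() using the new join() method
    stmt = select(User).join(Address, User.id == Address.user_id)

    # the new interpretation / result behavior is opt-in via the flag
    with Session(engine, future=True) as session:
        users = session.execute(stmt).scalars().all()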
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
The coercions system allows us to add in lambdas as arguments
to Core and ORM elements without changing them at all. By allowing
the lambda to produce a deterministic cache key where we can also
cheat and yank out literal parameters means we can move towards
having 90% of "baked" functionality in a clearer way right in
Core / ORM.
As a second step, we can have whole statements inside the lambda,
and can then add generation with __add__(), so then we have
100% of "baked" functionality with full support of ad-hoc
literal values.
Adds some more short_selects tests for the moment for comparison.
Other tweaks inside cache key generation as we're trying to
approach a certain level of performance such that we can
remove the use of "baked" from the loader strategies.
As we have not yet closed #4639, however the caching feature
has been fully integrated as of
b0cfa7379cf8513a821a3dbe3028c4965d9f85bd, we will also
add complete caching documentation here and close that issue
as well.
Closes: #4639
Fixes: #5380
Change-Id: If91f61527236fd4d7ae3cad1f24c38be921c90ba
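As a rough illustration of the lambda-oriented API this enables; ``users`` is
an assumed Table::

    from sqlalchemy import lambda_stmt, select

    def lookup(connection, id_):
        stmt = lambda_stmt(lambda: select(users))
        stmt += lambda s: s.where(users.c.id == id_)
        # the cache key is derived from the lambdas; id_ becomes a bound parameter
        return connection.execute(stmt).first()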
|
|
|
|
|
|
|
|
|
|
|
|
| |
Note the PR has a few remaining doc linking issues
listed in the comment that must be addressed separately.
Signed-off-by: aplatkouski <5857672+aplatkouski@users.noreply.github.com>
Closes: #5371
Pull-request: https://github.com/sqlalchemy/sqlalchemy/pull/5371
Pull-request-sha: 7e7d233cf3a0c66980c27db0fcdb3c7d93bc2510
Change-Id: I9c36e8d8804483950db4b42c38ee456e384c59e3
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
A variety of caching issues found by running
all tests with statement caching turned on.
The cache system now has a more conservative approach where
any subclass of a SQL element will by default invalidate
the cache key unless it adds the flag inherit_cache=True
at the class level, or if it implements its own caching.
Add working caching to a few elements that were
omitted previously; fix some caching implementations
to suit lesser used edge cases such as json casts
and array slices.
Refine the way BaseCursorResult and CursorMetaData
interact with caching; to suit cases like Alembic
modifying table structures, don't cache the
cursor metadata if it were created against a
cursor.description using non-positional matching,
e.g. "select *"; if a table re-ordered its columns
or added/removed columns, that data is now obsolete.
Additionally we have to adapt the cursor metadata
_keymap regardless of if we just processed
cursor.description, because if we ran against
a cached SQLCompiler we won't have the right
columns in _keymap.
Other refinements to how and when we do this
adaptation, as some weird cases
were exposed in the PostgreSQL dialect, e.g.
a text() construct that names just one column that
is not actually in the statement. Fixed that
also as it looks like a cut-and-paste artifact
that doesn't actually affect anything.
Various issues with re-use of compiled result maps
and cursor metadata in conjunction with tables being
changed, such as change in order of columns.
Mappers can be cleared but the class remains, meaning
a mapper has to use itself as the cache key, not the class.
Lots of bound parameter / literal issues, due to Alembic
creating a straight subclass of bindparam that renders
inline directly. While we can update Alembic to not
do this, we have to assume other people might be doing
this, so bindparam() implements the inherit_cache=True
logic as well, which was a bit involved.
Turn on cache stats in logging.
Includes a fix to subqueryloader which moves all setup to
the create_row_processor() phase and eliminates any storage
within the compiled context. This includes some changes
to create_row_processor() signature and a revising of the
technique used to determine if the loader can participate
in polymorphic queries, which is also applied to
selectinloading.
DML update.values() and ordered_values() now coerce the
keys as we have tests that pass an arbitrary class here
which only includes __clause_element__(), so the
key can't be cached unless it is coerced. this in turn
changed how composite attributes support bulk update
to use the standard approach of ClauseElement with
annotations that are parsed in the ORM context.
memory profiling successfully caught that the Session
from Query was getting passed into _statement_20()
so that was a big win for that test suite.
Apparently Compiler had .execute() and .scalar() methods
stuck on it, these date back to version 0.4 and there
was a single test in the PostgreSQL dialect tests
that exercised it for no apparent reason. Removed
these methods as well as the concept of a Compiler
holding onto a "bind".
Fixes: #5386
Change-Id: I990b43aab96b42665af1b2187ad6020bee778784
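A hedged sketch of the opt-in flag on a custom construct; the class itself is
hypothetical::

    from sqlalchemy.sql.expression import ColumnClause

    class MySpecialColumn(ColumnClause):
        # reuse ColumnClause's cache key generation; without this flag the
        # subclass would invalidate the cache key for statements that use it
        inherit_cache = True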
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
This patch replaces the ORM execution flow with a
single pathway through Session.execute() for all queries,
including Core and ORM.
Currently included is full support for ORM Query,
Query.from_statement(), select(), as well as the
baked query and horizontal shard systems. Initial
changes have also been made to the dogpile caching
example, which like baked query makes use of a
new ORM-specific execution hook that replaces the
use of both QueryEvents.before_compile() as well
as Query._execute_and_instances() as the central
ORM interception hooks.
select() and Query() constructs alike can be passed to
Session.execute() where they will return ORM
results in a Results object. This API is currently
used internally by Query. Full support for
Session.execute()->results to behave in a fully
2.0 fashion will be in later changesets.
bulk update/delete with ORM support will also
be delivered via the update() and delete()
constructs, however these have not yet been adapted
to the new system and may follow in a subsequent
update.
Performance is also beginning to lag as of this
commit and some previous ones. It is hoped that
a few central functions such as the coercions
functions can be rewritten in C to re-gain
performance. Additionally, query caching
is now available and some subsequent patches
will attempt to cache more of the per-execution
work from the ORM layer, e.g. column getters
and adapters.
This patch also contains initial "turn on" of the
caching system enginewide via the query_cache_size
parameter to create_engine(). Still defaulting at
zero for "no caching". The caching system still
needs adjustments in order to gain adequate performance.
Change-Id: I047a7ebb26aa85dc01f6789fac2bff561dcd555d
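In user-facing terms, this is the pattern being enabled; ``User`` is an assumed
mapped class and ``session`` an existing Session::

    from sqlalchemy import select

    stmt = select(User).where(User.name == "sandy")

    result = session.execute(stmt)
    users = result.scalars().all()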
|
|
|
|
|
|
|
|
| |
two questions today involving creator / do_connect,
do_connect is not well known enough, ensure docs are present
and prominent.
Change-Id: I85d518b9fc7b9b069a18010969abefa360134fe9
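For reference, the kind of hook being documented; the password lookup is a
hypothetical placeholder::

    from sqlalchemy import create_engine, event

    engine = create_engine("postgresql+psycopg2://scott@localhost/test")

    @event.listens_for(engine, "do_connect")
    def provide_credentials(dialect, conn_rec, cargs, cparams):
        # modify connect arguments in place; returning None lets the
        # dialect perform the connection itself
        cparams["password"] = fetch_password_from_vault()  # hypothetical helper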
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
The create_engine()->encoding parameter is mostly irrelevant
under Python 3. Make it clear this parameter is not generally
useful anymore and refer readers to the dialect documentation.
Update cx_Oracle documentation to feature many examples of
the encoding / nencoding parameters, remove extra detail that
is not generally useful and reorganize information into
more specific sections, de-emphasizing legacy / Python 2
specific sections.
Change-Id: I42dafb85de0a585c9e05e1e3d787d8d6bfa632c0
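One of the kinds of examples being featured, sketched with placeholder
connection details::

    from sqlalchemy import create_engine

    engine = create_engine(
        "oracle+cx_oracle://scott:tiger@orclpdb1"
        "?encoding=UTF-8&nencoding=UTF-8"
    )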
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Implemented the SQLAlchemy 2 :func:`.future.create_engine` function which
is used for forwards compatibility with SQLAlchemy 2. This engine
features always-transactional behavior with autobegin.
Allow execution options per statement execution. This includes
that the before_execute() and after_execute() events now accept
an additional dictionary with these options, empty if not
passed; a legacy event decorator is added for backwards compatibility
which now also emits a deprecation warning.
Add some basic tests for execution, transactions, and
the new result object. Build out on a new testing fixture
that swaps in the future engine completely to start with.
Change-Id: I70e7338bb3f0ce22d2f702537d94bb249bd9fb0a
Fixes: #4644
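A small usage sketch of the forward-compatibility engine; the in-memory SQLite
URL is a placeholder::

    from sqlalchemy import text
    from sqlalchemy.future import create_engine

    engine = create_engine("sqlite://")

    with engine.connect() as conn:
        conn.execute(text("create table t (x integer)"))
        conn.commit()  # the transaction is autobegun; commit it explicitly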
|
|
|
|
|
|
|
| |
includes more replacements for create_engine(), Connection,
disambiguation of Result from future/baked
Change-Id: Icb60a79ee7a6c45ea9056c211ffd1be110da3b5e
|
|
|
|
|
|
|
|
| |
Replaces a wide array of Sphinx-relative doc references
with an abbreviated absolute form now supported by
zzzeeksphinx.
Change-Id: I94bffcc3f37885ffdde6238767224296339698a2
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
zzzeeksphinx 1.1.2 in git can now convert short
prefix names in a configured lookup to fully qualified module
names, so that
we can have succinct and portable pyrefs
that still resolve absolutely.
It also includes a formatter that will format all pyrefs
in a fully consistent way regardless of the package path,
by unconditionally removing all package tokens but always
leaving class names in place including for methods, which
means we no longer have to deal with tildes in pyrefs.
The most immediate goal of the absolute prefixes is
that we have lots of
"ambiguous" names that appear in multiple places, like select(),
ARRAY, ENUM etc. With the incoming future packages there
is going to be lots of name overlap so it is necessary
that all names eventually use absolute package paths
when Sphinx receives them.
In multiple stages, pyrefs will be converted using the
zzzeeksphinx tools/fix_xrefs.py tool so that doclinks can
be made absolute using symbolic prefixes.
For this review, the actual search and replace of symbols
is not performed, instead some general cleanup to prepare
the docs as well as a lookup file used by the tool
to do the conversion. this relatively small patch will
be backported
with appropriate changes to 1.3, 1.2, 1.1 and the tool
can then be run on each branch individually. We are shooting
for almost no warnings at all for master (still a handful
I can't figure out, which don't seem to have any impact),
very few for 1.3,
and for 1.2 / 1.1 we hope for a significant reduction
in warnings.
Overall for all versions pyrefs should
always point to the correct target, if they are in fact
hyperlinked. it's better for a ref to go nowhere and
be plain text than go to the wrong thing. Right now,
hundreds of API links are pointing to the wrong thing
as they are ambiguous names such as refresh(), insert(),
update(), select(), join(), JSON etc. and Sphinx sends these all
to essentially random destinations among as many as five
or six possible choices per symbol. A shorthand system
that allows us to use absolute refs without having
to type out a full blown absolute module is the only
way this is going to work, and we should ultimately
seek to abandon any use of prefix dot for lookups. Everything
should be on an underscore token so at the very least the module
spaces can be reorganized without having to search and replace
the entire documentation every time.
Change-Id: I484a7329034af275fcdb322b62b6255dfeea9151
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
This builds on cc718cccc0bf8a01abdf4068c7ea4f3 which moved
RowProxy to Row, allowing Row to be more like a named tuple.
- KeyedTuple in ORM is replaced with Row
- ResultSetMetaData broken out into "simple" and "cursor" versions
for ORM and Core, as well as LegacyCursor version.
- Row now has _mapping attribute that supplies full mapping behavior.
Row and SimpleRow both have named tuple behavior otherwise.
LegacyRow has some mapping features on the tuple which emit
deprecation warnings (e.g. keys(), values(), etc). the biggest
change for mapping->tuple is the behavior of __contains__ which
moves from testing of "key in row" to "value in row".
- ResultProxy breaks into ResultProxy and FutureResult (interim),
the latter has the newer APIs. Made available to dialects
using execution options.
- internal reflection methods and most tests move off of implicit
Row mapping behavior and move to row._mapping, result.mappings()
method using future result
- a new strategy system for cursor handling replaces the various
subclasses of RowProxy
- some execution context adjustments. We will leave EC in but
refined things like get_result_proxy() and out parameter handling.
Dialects for 1.4 will need to adjust from get_result_proxy()
to get_result_cursor_strategy(), if they are using this method
- out parameter handling now accommodated by get_out_parameter_values()
EC method. Oracle changes for this. external dialect for
DB2 for example will also need to adjust for this.
- deprecate case_insensitive flag for engine / result, this
feature is not used
mapping-methods on Row are deprecated, and replaced with
Row._mapping.<meth>, including:
row.keys() -> use row._mapping.keys()
row.items() -> use row._mapping.items()
row.values() -> use row._mapping.values()
key in row -> use key in row._mapping
int in row -> use int < len(row)
Fixes: #4710
Fixes: #4878
Change-Id: Ieb9085e9bcff564359095b754da9ae0af55679f0
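A compact sketch of the tuple-versus-mapping split described above; ``engine``
is assumed to exist::

    from sqlalchemy import text

    with engine.connect() as conn:
        row = conn.execute(text("select 1 as x, 2 as y")).first()

        row[0], row.x          # named-tuple style access
        row._mapping["x"]      # mapping style access
        "x" in row._mapping    # membership by key here, by value on the tuple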
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Added "from linting" as a built-in feature to the SQL compiler. This
allows the compiler to maintain graph of all the FROM clauses in a
particular SELECT statement, linked by criteria in either the WHERE
or in JOIN clauses that link these FROM clauses together. If any two
FROM clauses have no path between them, a warning is emitted that the
query may be producing a cartesian product. As the Core expression
language as well as the ORM are built on an "implicit FROMs" model where
a particular FROM clause is automatically added if any part of the query
refers to it, it is easy for this to happen inadvertently and it is
hoped that the new feature helps with this issue.
The original recipe is from:
https://github.com/sqlalchemy/sqlalchemy/wiki/FromLinter
The linter is now enabled for all tests in the test suite as well.
This has necessitated that a lot of the queries be adjusted to
not include cartesian products. Part of the rationale for the
linter to not be enabled for statement compilation only was to reduce
the need for adjustment for the many test case statements throughout
the test suite that are not real-world statements.
This gerrit is adapted from Ib5946e57c9dba6da428c4d1dee6760b3e978dda0.
Fixes: #4737
Change-Id: Ic91fd9774379f895d021c3ad564db6062299211c
Closes: #4830
Pull-request: https://github.com/sqlalchemy/sqlalchemy/pull/4830
Pull-request-sha: f8a21aa6262d1bcc9ff0d11a2616e41fba97a47a
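A minimal illustration of a statement the linter would flag; table and column
names are arbitrary::

    from sqlalchemy import Column, Integer, MetaData, Table, select

    m = MetaData()
    t1 = Table("t1", m, Column("x", Integer))
    t2 = Table("t2", m, Column("y", Integer))

    # no criteria links t1 and t2; executing this against an engine is
    # expected to emit a "cartesian product" SAWarning
    stmt = select([t1.c.x, t2.c.y])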
|
|
|
|
| |
Change-Id: I08440dc25e40ea1ccea1778f6ee9e28a00808235
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
The "expanding IN" feature, which generates IN expressions at query
execution time which are based on the particular parameters associated with
the statement execution, is now used for all IN expressions made against
lists of literal values. This allows IN expressions to be fully cacheable
independently of the list of values being passed, and also includes support
for empty lists. For any scenario where the IN expression contains
non-literal SQL expressions, the old behavior of pre-rendering for each
position in the IN is maintained. The change also completes support for
expanding IN with tuples, where previously type-specific bind processors
weren't taking effect.
As part of this change, a more explicit separation between
"literal execute" and "post compile" bound parameters is being made;
as the "ansi bind rules" feature is rendering bound parameters
inline, as we now support "postcompile" generically, these should
be used here, however we have to render literal values at
execution time even for "expanding" parameters. new test fixtures
etc. are added to assert everything goes to the right place.
Fixes: #4645
Change-Id: Iaa2b7bfbfaaf5b80799ee17c9b8507293cba6ed1
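For orientation, the kind of expression now handled via expanding parameters;
``t`` is an assumed Table::

    from sqlalchemy import select

    stmt = select([t]).where(t.c.x.in_([1, 2, 3]))

    # the compiled form renders a single expanding placeholder, roughly
    #   WHERE t.x IN ([POSTCOMPILE_x_1])
    # and the list, including an empty list, is expanded at execution time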
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
### Description
Remove a redundant assignment in the engine creation file.
Closes: #4944
Pull-request: https://github.com/sqlalchemy/sqlalchemy/pull/4944
Pull-request-sha: 4a0e6206f0ae5ff5350ecaa6f998bc3dc0c26cdd
Change-Id: Ie491b071e3392334947d3b8ba84c7323c1b15b6e
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Added new :func:`.create_engine` parameter
:paramref:`.create_engine.max_identifier_length`. This overrides the
dialect-coded "max identifier length" in order to accommodate for databases
that have recently changed this length and the SQLAlchemy dialect has
not yet been adjusted to detect for that version. This parameter interacts
with the existing :paramref:`.create_engine.label_length` parameter in that
it establishes the maximum (and default) value for anonymously generated
labels.
The Oracle dialect now emits a warning if Oracle version 12.2 or greater is
used, and the :paramref:`.create_engine.max_identifier_length` parameter is
not set. The version in this specific case defaults to that of the
"compatibility" version set in the Oracle server configuration, not the
actual server version. In version 1.4, the default max_identifier_length
for 12.2 or greater will move to 128 characters. In order to maintain
forwards compatibility, applications should set
:paramref:`.create_engine.max_identifier_length` to 30 in order to maintain
the same length behavior, or to 128 in order to test the upcoming behavior.
This length determines among other things how generated constraint names
are truncated for statements like ``CREATE CONSTRAINT`` and ``DROP
CONSTRAINT``, which means the new length may produce a name mismatch
against a name that was generated with the old length, impacting database
migrations.
Fixes: #4857
Change-Id: Ib62efb00c6180c375869029b57353d90385d7950
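For example, with placeholder connection details::

    from sqlalchemy import create_engine

    engine = create_engine(
        "oracle+cx_oracle://scott:tiger@orclpdb1",
        max_identifier_length=128,
    )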
|
|
|
|
|
|
|
|
|
| |
Added new parameter :paramref:`.create_engine.hide_parameters` which when
set to True will cause SQL parameters to no longer be logged, nor rendered
in the string representation of a :class:`.StatementError` object.
Fixes: #4815
Change-Id: Ib87f868b6936cf6b42b192644e9d732ec24266c2
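Usage is a single flag; the URL is a placeholder::

    from sqlalchemy import create_engine

    engine = create_engine(
        "postgresql+psycopg2://scott:tiger@localhost/test",
        hide_parameters=True,
    )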
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Fixed an issue whereby if the dialect "initialize" process which occurs on
first connect would encounter an unexpected exception, the initialize
process would fail to complete and then no longer attempt on subsequent
connection attempts, leaving the dialect in an un-initialized, or partially
initialized state, within the scope of parameters that need to be
established based on inspection of a live connection. The "invoke once"
logic in the event system has been reworked to accommodate for this
occurrence using new, private API features that establish an "exec once"
hook that will continue to allow the initializer to fire off on subsequent
connections, until it completes without raising an exception. This does not
impact the behavior of the existing ``once=True`` flag within the event
system.
Fixes: #4807
Change-Id: Iec32999b61b6af4b38b6719e0c2651454619078c
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
The dialects that support json are supposed to take arguments
``json_serializer`` and ``json_deserializer`` at the create_engine() level,
however the SQLite dialect calls them ``_json_serializer`` and
``_json_deserializer``. The names have been corrected, the old names are
accepted with a change warning, and these parameters are now documented as
:paramref:`.create_engine.json_serializer` and
:paramref:`.create_engine.json_deserializer`.
Fixes: #4798
Change-Id: I1dbfe439b421fe9bb7ff3594ef455af8156f8851
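A short sketch using the corrected parameter names::

    import json

    from sqlalchemy import create_engine

    engine = create_engine(
        "sqlite://",
        json_serializer=json.dumps,
        json_deserializer=json.loads,
    )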
|
|
The "threadlocal" execution strategy, deprecated in 1.3, has been
removed for 1.4, as well as the concept of "engine strategies" and the
``Engine.contextual_connect`` method. The "strategy='mock'" keyword
argument is still accepted for now with a deprecation warning; use
:func:`.create_mock_engine` instead for this use case.
Fixes: #4632
Change-Id: I8a351f9fa1f7dfa2a56eec1cd2d1a4b9d65765a2
(cherry picked from commit b368c49b44c5716d93c7428ab22b6761c6ca7cf5)
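A brief sketch of the replacement for the "mock" strategy, loosely adapted from
the documented DDL-dumping pattern; ``metadata`` is assumed to exist::

    from sqlalchemy import create_mock_engine

    def dump(sql, *multiparams, **params):
        print(sql.compile(dialect=engine.dialect))

    engine = create_mock_engine("postgresql+psycopg2://", dump)
    metadata.create_all(engine, checkfirst=False)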
|