path: root/lib/sqlalchemy/engine/cursor.py
Commit message (Author, Age, Files, Lines -/+)
* Performance improvement in Row (Federico Caselli, 2023-04-26, 1 file, -18/+26)

    Various performance improvements to Row instantiation:

    - avoid passing processors if they are all None
    - improve processor logic in cython
    - improve tuplegetter by using slices when contiguous indexes are used

    Some timings follow. In particular [base_]row_new_proc, which tests using
    processors, has a 25% improvement over the previous cython version.
    [b]row_new_proc_none, which tests a list of processors that are all None,
    shows a 50% improvement in cython when the all-None list is passed; with
    this patch that case would usually be disabled by passing None instead,
    so the effective gain is actually 90%, since the [base_]row_new case runs
    instead. Tuplegetter is a bit faster for single-item gets and when
    getting sequential indexes (like indexes 1, 2, 3, 4) at the cost of a bit
    longer creation time in python; cython is mostly the same.

    Current times        | python      | cython      | cy / py     |
    base_row_new         | 0.639817400 | 0.118265500 | 0.184842582 |
    row_new              | 0.680355100 | 0.129714600 | 0.190657202 |
    base_row_new_proc    | 3.076538900 | 1.488428600 | 0.483799701 |
    row_new_proc         | 3.119700100 | 1.532197500 | 0.491136151 |
    brow_new_proc_none   | 1.917702300 | 0.475511500 | 0.247958977 |
    row_new_proc_none    | 1.956253300 | 0.497803100 | 0.254467609 |
    tuplegetter_one      | 0.152512600 | 0.148523900 | 0.973846751 |
    tuplegetter_many     | 0.184394100 | 0.184511500 | 1.000636680 |
    tuplegetter_seq      | 0.154832800 | 0.156270100 | 1.009282917 |
    tuplegetter_new_one  | 0.523730000 | 0.343402200 | 0.655685563 |
    tuplegetter_new_many | 0.738924400 | 0.420961400 | 0.569694816 |
    tuplegetter_new_seq  | 1.062036900 | 0.495462000 | 0.466520514 |

    Parent commit times  | python      | cython      | cy / py     |
    base_row_new         | 0.643890800 | 0.113548300 | 0.176347138 |
    row_new              | 0.674885900 | 0.124391800 | 0.184315304 |
    base_row_new_proc    | 3.072020400 | 2.017367000 | 0.656690626 |
    row_new_proc         | 3.109943400 | 2.048359400 | 0.658648450 |
    brow_new_proc_none   | 1.967133700 | 1.006326000 | 0.511569702 |
    row_new_proc_none    | 1.960814900 | 1.025217800 | 0.522852922 |
    tuplegetter_one      | 0.197359900 | 0.205999000 | 1.043773330 |
    tuplegetter_many     | 0.196575900 | 0.194888500 | 0.991416038 |
    tuplegetter_seq      | 0.192723900 | 0.205635000 | 1.066992729 |
    tuplegetter_new_one  | 0.534644500 | 0.414311700 | 0.774929322 |
    tuplegetter_new_many | 0.479376500 | 0.417448100 | 0.870814694 |
    tuplegetter_new_seq  | 0.481580200 | 0.412697900 | 0.856966088 |

    Change-Id: I2ca1f49dca2beff625c283f1363c29c8ccc0c3f7
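    A minimal sketch of the tuplegetter slicing idea (a simplified
    illustration only, not SQLAlchemy's actual implementation): when the
    requested positions are contiguous, a single slice replaces an
    ``operator.itemgetter`` over many indexes::

        from operator import itemgetter

        def tuplegetter(*indexes):
            """Return a callable extracting the given positions from a row tuple."""
            if len(indexes) > 1 and all(
                b - a == 1 for a, b in zip(indexes, indexes[1:])
            ):
                # contiguous indexes: one slice instead of repeated item lookups
                start, stop = indexes[0], indexes[-1] + 1
                return lambda row: row[start:stop]
            return itemgetter(*indexes)

        row = ("a", "b", "c", "d", "e")
        assert tuplegetter(1, 2, 3)(row) == ("b", "c", "d")
        assert tuplegetter(0, 4)(row) == ("a", "e")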
* Prebuild the row string to position lookup for Rows (J. Nick Koston, 2023-04-26, 1 file, -46/+40)

    Improved :class:`_engine.Row` implementation to optimize ``__getattr__``
    performance.

    The serialization of a :class:`_engine.Row` to pickle has changed with
    this change. Pickle saved by older SQLAlchemy versions can still be
    loaded, but new pickle saved by this version cannot be loaded by older
    ones.

    Fixes: #9678
    Closes: #9668
    Pull-request: https://github.com/sqlalchemy/sqlalchemy/pull/9668
    Pull-request-sha: 86b8ccd1959dbd91b1208f7a648a91f217e1f866
    Change-Id: Ia85c26a59e1a57ba2bf0d65578c6168f82a559f2
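    A minimal sketch of the prebuilt lookup idea (a simplified illustration,
    not SQLAlchemy's actual implementation): the name-to-position dictionary
    is built once per result metadata and shared by all rows, so
    ``__getattr__`` becomes a single dict lookup plus an index::

        class Row:
            __slots__ = ("_data", "_key_to_index")

            def __init__(self, data, key_to_index):
                self._data = data
                self._key_to_index = key_to_index  # prebuilt, shared lookup

            def __getattr__(self, name):
                try:
                    return self._data[self._key_to_index[name]]
                except KeyError:
                    raise AttributeError(name) from None

        keys = ("id", "name")
        lookup = {key: index for index, key in enumerate(keys)}  # built once
        row = Row((1, "spongebob"), lookup)
        assert row.name == "spongebob"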
* add deterministic imv returning ordering using sentinel columns (Mike Bayer, 2023-04-21, 1 file, -3/+9)

    Repaired a major shortcoming which was identified in the
    :ref:`engine_insertmanyvalues` performance optimization feature first
    introduced in the 2.0 series. This was a continuation of the change in
    2.0.9 which disabled the SQL Server version of the feature due to a
    reliance in the ORM on apparent row ordering that is not guaranteed to
    take place.

    The fix applies new logic to all "insertmanyvalues" operations, which
    takes effect when the new parameter
    :paramref:`_dml.Insert.returning.sort_by_parameter_order` is set on the
    :meth:`_dml.Insert.returning` or :meth:`_dml.UpdateBase.return_defaults`
    methods; through a combination of alternate SQL forms, direct
    correspondence of client side parameters, and in some cases downgrading
    to running row-at-a-time, sorting will be applied to each batch of
    returned rows using correspondence to primary key or other unique values
    in each row which can be correlated to the input data.

    Performance impact is expected to be minimal as nearly all common primary
    key scenarios are suitable for parameter-ordered batching to be achieved
    for all backends other than SQLite, while "row-at-a-time" mode operates
    with a bare minimum of Python overhead compared to the very heavyweight
    approaches used in the 1.x series. For SQLite, there is no difference in
    performance when "row-at-a-time" mode is used.

    It's anticipated that with an efficient "row-at-a-time" INSERT with
    RETURNING batching capability, the "insertmanyvalues" feature can later
    be more easily generalized to third party backends that include RETURNING
    support but not necessarily easy ways to guarantee a correspondence with
    parameter order.

    Fixes: #9618
    References: #9603
    Change-Id: I1d79353f5f19638f752936ba1c35e4dc235a8b7c
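    A usage sketch of the new parameter (table and column names are
    illustrative; shown against an in-memory SQLite database)::

        from sqlalchemy import Column, Integer, MetaData, String, Table
        from sqlalchemy import create_engine, insert

        metadata = MetaData()
        user = Table(
            "user_account", metadata,
            Column("id", Integer, primary_key=True),
            Column("name", String(30)),
        )
        engine = create_engine("sqlite://")
        metadata.create_all(engine)

        with engine.begin() as conn:
            stmt = insert(user).returning(
                user.c.id, sort_by_parameter_order=True
            )
            result = conn.execute(stmt, [{"name": "spongebob"}, {"name": "sandy"}])
            # returned rows correspond to the order of the parameter dictionaries
            print(result.all())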
* Remove old versionadded and versionchanged (Federico Caselli, 2023-04-12, 1 file, -4/+0)

    Removed versionadded and versionchanged directives for versions prior to
    1.2, since they are no longer useful.

    Change-Id: I5c53d1188bc5fec3ab4be39ef761650ed8fa6d3e
* Fixed bug where :meth:`_engine.Row`s could not be unpickled by other processes (Federico Caselli, 2023-03-04, 1 file, -6/+17)

    Fixes: #9423
    Change-Id: Ie496e31158caff5f72e0a9069dddd55f3116e0b8
* Remove `typing.Self` workaround (Yurii Karabas, 2023-02-08, 1 file, -4/+2)

    Remove ``typing.Self`` workaround, now using :pep:`673` for most methods
    that return ``Self``. Pull request courtesy Yurii Karabas.

    Fixes: #9254
    Closes: #9255
    Pull-request: https://github.com/sqlalchemy/sqlalchemy/pull/9255
    Pull-request-sha: 2947df8ada79f5c3afe9c838e65993302199c2f7
    Change-Id: Ic32015ad52e95a61f3913d43ea436aa9402804df
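    An illustrative (non-SQLAlchemy) example of the :pep:`673` pattern the
    commit refers to; ``typing.Self`` requires Python 3.11, with
    ``typing_extensions.Self`` available for earlier versions::

        from typing import Self

        class Query:
            def limit(self, n: int) -> Self:  # previously a TypeVar-based workaround
                self._limit = n
                return self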
* happy new year 2023 (Mike Bayer, 2023-01-03, 1 file, -1/+1)

    Change-Id: I625af65b3fb1815b1af17dc2ef47dd697fdc3fb1
* Try running pyupgrade on the code (Federico Caselli, 2022-11-16, 1 file, -12/+4)

    Command run is "pyupgrade --py37-plus --keep-runtime-typing --keep-percent-format <files...>"

    pyupgrade will change assert_ to assertTrue. That was reverted since
    assertTrue does not exist in sqlalchemy fixtures.

    Change-Id: Ie1ed2675c7b11d893d78e028aad0d1576baebb55
* New ORM Query Guide featuring DML support (Mike Bayer, 2022-09-25, 1 file, -1/+1)

    reviewers: these docs publish periodically at:

    https://docs.sqlalchemy.org/en/gerrit/4042/orm/queryguide/index.html

    See the "last generated" timestamp near the bottom of the page to ensure
    the latest version is up.

    Change includes some other adjustments:

    * small typing fixes for end-user benefit
    * removal of a bunch of old examples for patterns that nobody uses or
      aren't really what we promote now
    * modernization of some examples, including inheritance

    Change-Id: I9929daab7797be9515f71c888b28af1209e789ff
* ORM bulk insert via execute (Mike Bayer, 2022-09-24, 1 file, -54/+336)

    * ORM Insert now includes "bulk" mode that will run essentially the same
      process as session.bulk_insert_mappings; interprets the given list of
      values as ORM attributes for key names

    * ORM UPDATE has a similar feature, without RETURNING support, for
      session.bulk_update_mappings

    * Added support for upserts to do RETURNING ORM objects as well

    * ORM UPDATE/DELETE with a list of parameters + WHERE criteria is not
      implemented; use connection

    * ORM UPDATE/DELETE defaults to "auto" synchronize_session; use fetch if
      RETURNING is present, evaluate if not, as "fetch" is much more
      efficient (no expired object SELECT problem) and less error prone if
      RETURNING is available.  UPDATE: however this is inefficient!  please
      continue to use evaluate for simple cases; auto can move to fetch if
      criteria are not evaluable

    * "Evaluate" criteria will now not preemptively unexpire and SELECT
      attributes that were individually expired. Instead, if evaluation of
      the criteria indicates that the necessary attrs were expired, we expire
      the object completely (delete) or expire the SET attrs unconditionally
      (update). This keeps the object in the same unloaded state where it
      will refresh those attrs on the next pass, for this generally unusual
      case. (originally #5664)

    * Core change!  update/delete rowcount comes from len(rows) if RETURNING
      was used. SQLite at least otherwise did not support this. adjusted
      test_rowcount accordingly

    * ORM DELETE with a list of parameters at all is also not implemented, as
      this would imply "bulk", and there is no bulk_delete_mappings (could
      be, but we don't have that)

    * ORM insert().values() with single or multi-values translates key names
      based on ORM attribute names

    * ORM returning() implemented for insert, update, delete; explicit
      returning clauses now interpret rows in an ORM context, with support
      for qualifying loader options as well

    * session.bulk_insert_mappings() assigns polymorphic identity if not set.

    * explicit RETURNING + synchronize_session='fetch' is now supported with
      UPDATE and DELETE.

    * expanded return_defaults() to work with DELETE also.

    * added support for composite attributes to be present in the
      dictionaries used by bulk_insert_mappings and bulk_update_mappings,
      which is also the new ORM bulk insert/update feature, that will expand
      the composite values into their individual mapped attributes the way
      they'd be on a mapped instance.

    * bulk UPDATE supports "synchronize_session=evaluate", which is the
      default. this does not apply to session.bulk_update_mappings, just the
      new version

    * both bulk UPDATE and bulk INSERT, the latter with or without RETURNING,
      support *heterogeneous* parameter sets.
      session.bulk_insert/update_mappings did this, so this feature is
      maintained. now cursor result can be both horizontally and vertically
      spliced :)

    This is now a long story with a lot of options, which in itself is a
    problem to be able to document all of this in some way that makes sense.
    raising exceptions for use cases we haven't supported is pretty important
    here too; the tradition of letting unsupported things just not work is
    likely not a good idea at this point, though there are still many cases
    that aren't easily avoidable

    Fixes: #8360
    Fixes: #7864
    Fixes: #7865
    Change-Id: Idf28379f8705e403a3c6a937f6a798a042ef2540
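    A usage sketch of the ORM "bulk" INSERT mode described above (model and
    attribute names are illustrative; uses an in-memory SQLite database)::

        from sqlalchemy import String, create_engine, insert
        from sqlalchemy.orm import DeclarativeBase, Mapped, Session, mapped_column

        class Base(DeclarativeBase):
            pass

        class User(Base):
            __tablename__ = "user_account"
            id: Mapped[int] = mapped_column(primary_key=True)
            name: Mapped[str] = mapped_column(String(30))

        engine = create_engine("sqlite://")
        Base.metadata.create_all(engine)

        with Session(engine) as session:
            # the list of dictionaries is interpreted as ORM attribute names,
            # analogous to session.bulk_insert_mappings()
            session.execute(
                insert(User),
                [{"name": "spongebob"}, {"name": "sandy"}, {"name": "patrick"}],
            )
            session.commit()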
* break out text() from TextualSelect for col matching (Mike Bayer, 2022-09-19, 1 file, -3/+13)

    Fixed issue where mixing "*" with additional explicitly-named column
    expressions within the columns clause of a :func:`_sql.select` construct
    would cause result-column targeting to sometimes consider the label name
    or other non-repeated names to be an ambiguous target.

    Fixes: #8536
    Change-Id: I3c845eaf571033e54c9208762344f67f4351ac3a
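    The kind of statement the fix is about can be illustrated as follows
    (table and column names are illustrative)::

        from sqlalchemy import column, select, table, text

        t = table("t", column("x"), column("y"))
        # "*" mixed with an explicitly named column expression; targeting
        # t.c.x in the result could previously be treated as ambiguous
        stmt = select(text("*"), t.c.x).select_from(t)
        print(stmt)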
* Update to flake8 5. (Federico Caselli, 2022-07-31, 1 file, -2/+4)

    Change-Id: I5a241a70efba68bcea9819ddce6aebc25703e68d
* repair yield_per for non-SS dialects and add new options (Mike Bayer, 2022-07-01, 1 file, -1/+0)

    Implemented new :paramref:`_engine.Connection.execution_options.yield_per`
    execution option for :class:`_engine.Connection` in Core, to mirror that
    of the same :ref:`yield_per <orm_queryguide_yield_per>` option available
    in the ORM. The option sets the
    :paramref:`_engine.Connection.execution_options.stream_results` option at
    the same time as invoking :meth:`_engine.Result.yield_per`, to provide the
    most common streaming result configuration which also mirrors that of the
    ORM use case in its usage pattern.

    Fixed bug in :class:`_engine.Result` where a buffered result strategy
    would not be used if the dialect in use did not support an explicit
    "server side cursor" setting, when using
    :paramref:`_engine.Connection.execution_options.stream_results`. This is
    in error as DBAPIs such as that of SQLite and Oracle already use a
    non-buffered result fetching scheme, which still benefits from usage of
    partial result fetching. The "buffered" strategy is now used in all cases
    where :paramref:`_engine.Connection.execution_options.stream_results` is
    set.

    Added :meth:`.FilterResult.yield_per` so that result implementations such
    as :class:`.MappingResult`, :class:`.ScalarResult` and
    :class:`.AsyncResult` have access to this method.

    Fixes: #8199
    Change-Id: I6dde3cbe483a1bf81e945561b60f4b7d1c434750
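    A usage sketch of the Connection-level ``yield_per`` option (table name
    is illustrative; uses an in-memory SQLite database)::

        from sqlalchemy import Column, Integer, MetaData, Table, create_engine, select

        metadata = MetaData()
        nums = Table("nums", metadata, Column("n", Integer, primary_key=True))

        engine = create_engine("sqlite://")
        metadata.create_all(engine)
        with engine.begin() as conn:
            conn.execute(nums.insert(), [{"n": i} for i in range(1, 1001)])

        with engine.connect() as conn:
            # yield_per enables stream_results and Result.yield_per() together
            result = conn.execution_options(yield_per=100).execute(select(nums))
            for partition in result.partitions():
                print(len(partition))  # batches of up to 100 rows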
* Merge "migrate labels to new tutorial" into main (mike bayer, 2022-06-07, 1 file, -1/+1)

  * migrate labels to new tutorial (Mike Bayer, 2022-06-07, 1 file, -1/+1)

      other org changes and some sections from old tutorial ported to new
      tutorial.

      Change-Id: Ic0fba60ec82fff481890887beef9ed0fa271875a
* Generalize RETURNING and support for MariaDB / SQLite (Daniel Black, 2022-06-02, 1 file, -1/+1)

    As almost every dialect supports RETURNING now, RETURNING is also made
    more of a default assumption.

    * the default compiler generates a RETURNING clause now when specified;
      CompileError is no longer raised.

    * The dialect-level implicit_returning parameter now has no effect. It's
      not fully clear if there are real world cases relying on the
      dialect-level parameter, so we will see once 2.0 is released. ORM-level
      RETURNING can be disabled at the table level, and perhaps "implicit
      returning" should become an ORM-level option at some point as that's
      where it applies.

    * Altered ORM update() / delete() to respect table-level implicit
      returning for fetch.

    * Since MariaDB doesn't support UPDATE returning, "full_returning" is now
      split into insert_returning, update_returning, delete_returning

    * Crazy new thing: dialects that have *both* cursor.lastrowid *and*
      returning. so now we can pick between them for SQLite and mariadb. so,
      we are trying to keep it on .lastrowid for simple inserts with an
      autoincrement column; this helps with some edge case test scenarios and
      i bet .lastrowid is faster anyway. for any return_defaults() /
      multiparams etc. we use returning

    * SQLite decided they don't want to return rows that match in ON
      CONFLICT. this is flat out wrong, but for now we need to work with it.

    Fixes: #6195
    Fixes: #7011
    Closes: #7047
    Pull-request: https://github.com/sqlalchemy/sqlalchemy/pull/7047
    Pull-request-sha: d25d5ea3abe094f282c53c7dd87f5f53a9e85248
    Co-authored-by: Mike Bayer <mike_mp@zzzcomputing.com>
    Change-Id: I9908ce0ff7bdc50bd5b27722081767c31c19a950
* fix most sphinx warnings (1.4) (Mike Bayer, 2022-05-16, 1 file, -1/+1)

    still can't figure out the warnings with some of the older changelog
    files. this cherry-picks the sphinx fixes from 1.4 and additionally fixes
    a small number of new issues in the 2.0 docs.

    However, 2.0 has many more errors to fix, primarily because the removal
    of the legacy tutorials left behind a lot of labels that need to be
    re-linked to the new tutorial.

    Fixes: #7946
    Change-Id: Id657ab23008eed0b133fed65b2f9ea75a626215c
    (cherry picked from commit 9b55a423459236ca8a2ced713c9e93999dd18922)
* revenge of pep 484 (Mike Bayer, 2022-05-15, 1 file, -20/+96)

    trying to get remaining must-haves for ORM

    Change-Id: I66a3ecbbb8e5ba37c818c8a92737b576ecf012f7
* inline mypy config; files ignoring type errors for the moment (Mike Bayer, 2022-04-28, 1 file, -0/+1)

    to simplify pyproject.toml, change the remaining files that aren't going
    to be typed on this first pass (unless of course someone wants to type
    some of these) to include # mypy: ignore-errors. for the moment, only a
    handful of ORM modules are to have more type checking implemented.

    It's important that ignore-errors is used and not "# type: ignore", as in
    the latter case, mypy doesn't even read the existing types in the file,
    which makes it impossible to type any files that refer to those modules
    at all.

    to simplify ongoing typing work, use inline mypy config for remaining
    files that are "done" for now, indicating the level of type checking they
    currently have.

    Change-Id: I98669c1a305c2f0adba85d10b5425541f3fe9533
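    An illustrative module header of the kind described above; mypy treats
    the comment as per-file configuration while still reading the module's
    existing annotations (unlike a blanket "# type: ignore")::

        # mypy: ignore-errors

        from typing import Any

        def legacy_helper(value: Any) -> Any:
            # type errors in this module are suppressed, but its annotations
            # remain visible to modules that import it
            return value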
* pep484 ORM / SQL result support (Mike Bayer, 2022-04-27, 1 file, -6/+18)

    after some experimentation it seems mypy is more amenable to the generic
    types being fully integrated rather than having separate spin-off types,
    so key structures like Result, Row, Select become generic. For DML
    Insert, Update, Delete, these are spun into type-specific subclasses
    ReturningInsert, ReturningUpdate, ReturningDelete, which is fine since
    the "row-ness" of these constructs doesn't happen until returning() is
    called in any case.

    a Tuple based model is then integrated so that these objects can carry
    along information about their return types. Overloads at the .execute()
    level carry through the Tuple from the invoked object to the result.

    To suit the issue of AliasedClass generating attributes that are dynamic,
    experimented with a custom subclass AsAliased, but then just settled on
    having aliased() lie to the type checker and return `Type[_O]`,
    essentially. will need some type-related accessors for
    with_polymorphic() also.

    Additionally, identified an issue in Update when used "mysql style"
    against a join(): it basically doesn't work if asked to UPDATE two tables
    on the same column name. added an error message for the specific
    condition where it happens, with a very non-specific error message that
    we hit a thing we can't do right now; suggest multi-table update as a
    possible cause.

    Change-Id: I5eff7eefe1d6166ee74160b2785c5e6a81fa8b95
* pep-484: ORM public API, constructors (Mike Bayer, 2022-04-20, 1 file, -1/+0)

    for the moment, abandoning using @overload with relationship() and
    mapped_column(). The overloads are very difficult to get working at all,
    and the overloads that were there all wouldn't pass on mypy. various
    techniques of getting them to "work", meaning having the right hand side
    dictate what's legal on the left, have mixed success and won't give
    consistent results; additionally, it's legal to have Optional /
    non-optional independent of nullable in any case for columns.
    relationship cases are less ambiguous but mypy was not going along with
    things.

    we have a comprehensive system of allowing left side annotations to drive
    the right side, in the absence of explicit settings on the right. so
    type-centric SQLAlchemy will be left-side driven just like dataclasses,
    and the various flags and switches on the right side will just not be
    needed very much.

    in other matters, one surprise: forgot to remove string support from
    orm.join(A, B, "somename") or do deprecations for it in 1.4. This is a
    really not-directly-used structure barely mentioned in the docs for many
    years; the example shows a relationship being used, not a string, so we
    will just change it to raise the usual error here.

    Change-Id: Iefbbb8d34548b538023890ab8b7c9a5d9496ec6e
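    An illustrative sketch of "left side annotations drive the right side"
    under the 2.0-style declarative API (model and attribute names are
    illustrative)::

        from typing import Optional
        from sqlalchemy.orm import DeclarativeBase, Mapped, mapped_column

        class Base(DeclarativeBase):
            pass

        class User(Base):
            __tablename__ = "user_account"
            id: Mapped[int] = mapped_column(primary_key=True)
            # type and nullability inferred from the annotation; no flags
            # needed on the right side
            nickname: Mapped[Optional[str]] = mapped_column()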
* update flake8 noqa skips with proper syntax (Federico Caselli, 2022-04-11, 1 file, -1/+1)

    Change-Id: I42ed77f559e3ee5b8c600d98457ee37803ef0ea6
* pep 484 for types (Mike Bayer, 2022-03-19, 1 file, -2/+6)

    strict types type_api.py, including TypeDecorator, NativeForEmulated,
    etc.

    Change-Id: Ib2eba26de0981324a83733954cb7044a29bbd7db
* pep-484: sqlalchemy.sql pass one (Mike Bayer, 2022-03-13, 1 file, -3/+5)

    sqlalchemy.sql will require many passes to get all modules even gradually
    typed. Will have to pick and choose what modules can be strictly typed
    vs. which can be gradual.

    in this patch, emphasis is on visitors.py, cache_key.py, annotations.py
    for strict typing; compiler.py is on gradual typing but has much more
    structure, in particular where it connects with the outside world. The
    work within compiler.py also reached back out to engine/cursor.py and
    default.py quite a bit.

    References: #6810
    Change-Id: I6e8a29f6013fd216e43d45091bc193f8be0368fd
* additional mypy strictness (Mike Bayer, 2022-03-12, 1 file, -84/+52)

    enable type checking within untyped defs. This allowed some more
    internals to be fixed up with assertions etc.

    some internals that were unnecessary or not even used at all were
    removed. BaseCursorResult was no longer necessary since we only have one
    kind of CursorResult now. The different ResultProxy subclasses that had
    alternate "strategies" don't appear to be used at all even in 1.4.x, as
    there's no code that accesses the _cursor_strategy_cls attribute, which
    is also removed. As these were mostly private constructs that weren't
    even functioning correctly in any case, it's fine to remove these over
    the 2.0 boundary.

    Change-Id: Ifd536987d104b1cd8b546cefdbd5c1e5d1801082
* pep-484 for engine (Mike Bayer, 2022-03-01, 1 file, -30/+77)

    All modules in sqlalchemy.engine are strictly typed with the exception of
    cursor, default, and reflection. cursor and default pass with non-strict
    typing; reflection is waiting on the multi-reflection refactor.

    Behavioral changes:

    * create_connect_args() methods return a tuple of list, dict, rather than
      a list of list, dict

    * removed allow_chars parameter from pyodbc connector
      ._get_server_version_info() method

    * the parameter list passed to do_executemany is now a list in all cases.
      previously, this was being run through dialect.execute_sequence_format,
      which defaults to tuple and was only intended for individual tuple
      params.

    * broke up dialect.dbapi into dialect.import_dbapi class method and
      dialect.dbapi module object. added a deprecation path for legacy
      dialects. it's not really feasible to type a single attr as a
      classmethod vs. module type. The "type_compiler" attribute also has
      this problem with greater ability to work around; left that one for
      now.

    * lots of constants changing to be Enum, so that we can type them. for
      fixed tuple-position constants in cursor.py / compiler.py (which are
      used to avoid the speed overhead of namedtuple), using Literal[value]
      which seems to work well

    * some tightening up in Row regarding __getitem__, which we can do since
      we are on full 2.0 style result use

    * altered the set_connection_execution_options and
      set_engine_execution_options event flows so that the dictionary of
      options may be mutated within the event hook, where it will then take
      effect as the actual options used. Previously, changing the dict would
      be silently ignored, which seems counter-intuitive and not very useful.

    * A lot of DefaultDialect/DefaultExecutionContext methods and attributes,
      including underscored ones, move to interfaces. This is not fully ideal
      as it means the Dialect/ExecutionContext interfaces aren't publicly
      subclassable directly, but their current purpose is more of
      documentation for dialect authors, who should (and certainly do) still
      subclass the DefaultXYZ versions in all cases

    Overall, Result was the most extremely difficult class hierarchy to type
    here, as this hierarchy passes through largely amorphous "row" datatypes
    throughout, which can in fact be all kinds of different things, like raw
    DBAPI rows, or Row objects, or "scalar"/Any; but at the same time these
    types have meaning, so I tried still maintaining some level of semantic
    markings for these. it highlights how complex Result is now, as it's
    trying to be extremely efficient and inlined while also being very
    open-ended and extensible.

    Change-Id: I98b75c0c09eab5355fc7a33ba41dd9874274f12a
* pep-484 for sqlalchemy.event; use future annotations (Mike Bayer, 2022-02-15, 1 file, -0/+2)

    __future__.annotations mode allows us to use non-string annotations for
    argument and return types in most cases, but more importantly it removes
    a large amount of runtime overhead that would be spent in evaluating the
    annotations.

    Change-Id: I2f5b6126fe0019713fc50001be3627b664019ede
    References: #6810
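    An illustrative (non-SQLAlchemy) example of the ``__future__.annotations``
    mode referred to above: annotations are not evaluated at runtime, so
    forward references work without quoting and carry no evaluation cost::

        from __future__ import annotations

        class Node:
            def add_child(self, child: Node) -> Node:  # forward reference, unquoted
                ...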
* ensure exception raised for all stream w/ sync result (Mike Bayer, 2022-02-04, 1 file, -0/+1)

    Fixed issue where the :meth:`_asyncio.AsyncSession.execute` method failed
    to raise an informative exception if the ``stream_results`` execution
    option were used, which is incompatible with a sync-style
    :class:`_result.Result` object. An exception is now raised in this
    scenario in the same way one is already raised when using
    ``stream_results`` in conjunction with the
    :meth:`_asyncio.AsyncConnection.execute` method.

    Additionally, for improved stability with state-sensitive dialects such
    as asyncmy, the cursor is now closed when this error condition is raised;
    previously with the asyncmy dialect, the connection would go into an
    invalid state with unconsumed server side results remaining.

    Fixes: #7667
    Change-Id: I6eb7affe08584889b57423a90258295f8b7085dc
* Merge "Fix various source comment/doc typos" into main (mike bayer, 2022-01-07, 1 file, -1/+1)

  * Fix various source comment/doc typos (luz paz, 2021-12-29, 1 file, -1/+1)

      ### Description

      Found via `codespell -q 3 -L ba,crate,datas,froms,gord,hist,inh,nd,selectin,strat,ue`

      Also added codespell to the pep8 tox env

      ### Checklist

      This pull request is:

      - [x] A documentation / typographical error fix
        - Good to go, no issue or tests are needed

      Closes: #7338
      Pull-request: https://github.com/sqlalchemy/sqlalchemy/pull/7338
      Pull-request-sha: 0deac2219396bc0eba7da53eb3a80932edbf2dd7
      Change-Id: Icd61db31c8dc655d4a39d8a304194804d08555fe
* happy new year 2022 (Mike Bayer, 2022-01-06, 1 file, -1/+1)

    Change-Id: I49abf2607e0eb0623650efdf0091b1fb3db737ea
* Properly type _generative, decorator, public_factory (Federico Caselli, 2021-12-30, 1 file, -0/+1)

    Good news is that pylance likes it and copies over the signature and
    everything. Bad news is that mypy does not support this yet:
    https://github.com/python/mypy/issues/8645

    Other minor bad news is that non_generative is not typed. I've tried
    using a protocol like the one in the comment, but the signature is not
    ported over by pylance, so it's probably best to just live without it to
    have the correct signature.

    notes from mike: these three decorators are at the core of getting the
    library to be typed. more good news is that pylance will do all the
    things we like re: public_factory, see
    https://github.com/microsoft/pyright/issues/2758#issuecomment-1002788656 .

    For @_generative, we will likely move to using pep 673 once mypy supports
    it, which may be soon. but overall, having the explicit "return self" in
    the methods, while a little inconvenient, makes the typing more
    straightforward and locally present in the files rather than being
    decided at a distance. having "return self" present, or not, both have
    problems, so maybe we will be able to change it again if things change as
    far as decorator support. As it is, I feel like we are barely squeaking
    by with our decorators; the typing is already pretty out there.

    Change-Id: Ic77e13fc861def76a5925331df85c0aa48d77807
    References: #6810
* Merge "Replace raise_ with raise from" into main (Federico Caselli, 2021-12-27, 1 file, -25/+15)

  * Replace raise_ with raise from (Federico Caselli, 2021-12-27, 1 file, -25/+15)

      Change-Id: I7aaeb5bc130271624335b79cf586581d6c6c34c7
      References: #4600
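    An illustrative (non-SQLAlchemy) example of the replacement: the
    ``raise_`` compatibility helper gives way to the native syntax::

        data = {}
        try:
            value = data["missing"]
        except KeyError as err:
            # exception chaining with native "raise ... from ..."
            raise LookupError("no such configuration key") from err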
* cursor tweaks (Mike Bayer, 2021-12-27, 1 file, -81/+98)

    tighten up creation of dictionaries and conditional logic within the
    creation of CursorResultMetaData objects

    Change-Id: I5538ecc343ab0cabcf58d7c92ae0a552d5ac1d8a
* provide connectionfairy on initialize (Mike Bayer, 2021-11-29, 1 file, -8/+0)

    This is so that dialect methods that are called within init can assume
    the same argument structure as when they are called in other places; we
    can nail down the type of object as well.

    This change seems to mostly impact the isolation level routines in the
    dialects, as these are called during initialize() as well as on
    established connections. these methods can now assume a non-proxied DBAPI
    connection object in all cases, as it is commonly required that
    attributes like ".autocommit" are set on the object, which don't work
    well in a proxied situation.

    Other changes:

    * adds an interface for the "connectionfairy" concept called
      PoolProxiedConnection.

    * Removes ``Connectable`` superclass of Connection. ``Connectable`` was
      originally meant to provide for the "method which accepts connection or
      engine" theme. As this pattern is greatly reduced in 2.0 and Engine no
      longer extends from it, the ``Connectable`` superclass doesn't serve
      any real purpose.

    Leading from that, to set this in, I also applied pep 484 annotations to
    the Dialect base, and then in the interests of seeing some of the typing
    information show up in my IDE did a little bit for Engine, Connection and
    others. I hope that it's feasible that we can add annotations to specific
    classes and attributes ahead of when we actually try to mass-populate the
    whole library. This was the original spirit of pep-484: that we can apply
    annotations gradually. I do of course want to try to do a mass-populate,
    although i think even in that case we will end up doing a lot of manual
    work anyway (in particular for the changes here which are distinct from
    what the stubs have).

    Fixes: #7122
    Change-Id: I5dd7fbff8a7ae520a81c165091af12a6a68826db
* Added support for ``psycopg`` dialect. (Federico Caselli, 2021-11-26, 1 file, -0/+1)

    Both sync and async versions are supported.

    Fixes: #6842
    Change-Id: I57751c5028acebfc6f9c43572562405453a2f2a4
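    A connection-URL sketch for the new dialect (credentials are
    illustrative; assumes the ``psycopg`` package is installed)::

        from sqlalchemy import create_engine
        from sqlalchemy.ext.asyncio import create_async_engine

        # the same "postgresql+psycopg" URL serves sync and async engines
        sync_engine = create_engine("postgresql+psycopg://scott:tiger@localhost/test")
        async_engine = create_async_engine("postgresql+psycopg://scott:tiger@localhost/test")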
* Clean up most py3k compat (Federico Caselli, 2021-11-24, 1 file, -1/+1)

    Change-Id: I8172fdcc3103ff92aa049827728484c8779af6b7
* Remove object in class definition (Federico Caselli, 2021-11-22, 1 file, -2/+2)

    References: #4600
    Change-Id: I2a62ddfe00bc562720f0eae700a497495d7a987a
* ensure soft_close occurs for fetchmany with server side cursor (Mike Bayer, 2021-11-02, 1 file, -2/+8)

    Fixed regression where the :meth:`_engine.CursorResult.fetchmany` method
    would fail to autoclose a server-side cursor (i.e. when
    ``stream_results`` or ``yield_per`` is in use, either Core or ORM
    oriented results) when the results were fully exhausted.

    All :class:`_result.Result` objects will now consistently raise
    :class:`_exc.ResourceClosedError` if they are used after a hard close,
    which includes the "hard close" that occurs after calling "single row or
    value" methods like :meth:`_result.Result.first` and
    :meth:`_result.Result.scalar`. This was already the behavior of the most
    common class of result objects returned for Core statement executions,
    i.e. those based on :class:`_engine.CursorResult`, so this behavior is
    not new. However, the change has been extended to properly accommodate
    the ORM "filtering" result objects returned when using 2.0 style ORM
    queries, which would previously behave in "soft closed" style of
    returning empty results, or wouldn't actually "soft close" at all and
    would continue yielding from the underlying cursor.

    As part of this change, also added :meth:`_result.Result.close` to the
    base :class:`_result.Result` class and implemented it for the filtered
    result implementations that are used by the ORM, so that it is possible
    to call the :meth:`_engine.CursorResult.close` method on the underlying
    :class:`_engine.CursorResult` when the ``yield_per`` execution option is
    in use, to close a server side cursor before remaining ORM results have
    been fetched. This was again already available for Core result sets but
    the change makes it available for 2.0 style ORM results as well.

    Fixes: #7274
    Change-Id: Id4acdfedbcab891582a7f8edd2e2e7d20d868e53
* remove case_sensitive create_engine parameter (Mike Bayer, 2021-11-01, 1 file, -35/+2)

    Removed the previously deprecated ``case_sensitive`` parameter from
    :func:`_sa.create_engine`, which would impact only the lookup of string
    column names in Core-only result set rows; it had no effect on the
    behavior of the ORM. The effective behavior of what ``case_sensitive``
    refers towards remains at its default value of ``True``, meaning that
    string names looked up in ``row._mapping`` will match case-sensitively,
    just like any other Python mapping.

    Change-Id: I0dc4be3fac37d30202b1603db26fa10a110b618d
* 2.0 removals: LegacyRow, connectionless execution, close_with_result (Mike Bayer, 2021-10-31, 1 file, -203/+40)

    in order to remove LegacyRow / LegacyResult, we have to also lose
    close_with_result, which connectionless execution relies upon.

    also includes a new profiles.txt file that's all against py310, as that's
    what CI is on now. some result counts changed by one function call, which
    was enough to fail the low-count result tests.

    Replaces Connectable as the common interface between Connection and
    Engine with EngineEventsTarget. Engine is no longer Connectable.
    Connection and MockConnection still are.

    References: #7257
    Change-Id: Iad5eba0313836d347e65490349a22b061356896a
* labeling refactor (Mike Bayer, 2021-07-12, 1 file, -3/+9)

    To service #6718 and #6710, the system by which columns are given labels
    in a SELECT statement, as well as the system that gives them keys in a .c
    or .selected_columns collection, have been refactored to provide a single
    source of truth for both, in contrast to the previous approach that
    included similar logic repeated in slightly different ways.

    Main ideas:

    1. ColumnElement attributes ._label, ._anon_label, ._key_label are
       renamed to include the letters "tq", meaning "table-qualified" - these
       labels are only used when rendering a SELECT that has
       LABEL_STYLE_TABLENAME_PLUS_COL for its label style; as this label
       style is primarily legacy, the "tq" names should be isolated so that
       in a 2.0 style application these aren't being used at all (a short
       illustration follows this list).

    2. The means by which the "labels" and "proxy keys" for the elements of a
       SELECT are determined has been centralized to a single source of
       truth; previously, the three of _generate_columns_plus_names,
       _generate_fromclause_column_proxies, and _column_naming_convention all
       had duplicated rules between them, and there were also a little bit of
       labeling rules in compiler._label_select_column; by this we mean that
       the various "anon_label" / "anon_key" methods on ColumnElement were
       called by all four of these methods, where there were many cases where
       it was necessary that one method comes up with the same answer as
       another of the methods. This has all been centralized into
       _generate_columns_plus_names for all the names except the "proxy key",
       which is generated by _column_naming_convention.

    3. compiler._label_select_column has been rewritten to make neither
       naming decisions nor any "proxy key" decisions, only whether to label
       or not to label; the _generate_columns_plus_names method gives it the
       information, where the proxy keys come from _column_naming_convention;
       previously, these proxy keys were matched based on restatement of
       similar (but not really the same) logic in two places. The heuristics
       of "whether to label or not to label" are also reorganized to be much
       easier to read and understand.

    4. a new method compiler._label_returning_column is added for dialects to
       use in their "generate returning columns" methods. A github search
       reveals a small number of third party dialects also doing this using
       the prior _label_select_column method, so we try to make sure
       _label_select_column continues to work the exact same way for that
       specific use case; for the "SELECT" use case it now needs

    5. After some attempts to do it different ways, for the case where
       _proxy_key is giving us some kind of anon label, we are hard changing
       it to "_no_label" right now, as there's not currently a way to fully
       match anonymized labels from stmt.c or stmt.selected_columns to what
       will be in the result map. The idea of "_no_label" is to encourage the
       user to use label('name') for columns they want to be able to target
       by string name that don't have a natural name.

    Change-Id: I7a92a66f3a7e459ccf32587ac0a3c306650daf11
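    A short illustration of the legacy label style referred to in point 1
    (table and column names are illustrative)::

        from sqlalchemy import LABEL_STYLE_TABLENAME_PLUS_COL, column, select, table

        t = table("user_account", column("id"), column("name"))
        stmt = select(t).set_label_style(LABEL_STYLE_TABLENAME_PLUS_COL)
        # renders "user_account.id AS user_account_id, ..." -- the
        # table-qualified ("tq") labels only apply under this style
        print(stmt)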
* Replace all http:// links to https:// (Federico Caselli, 2021-07-04, 1 file, -1/+1)

    Also replace http://pypi.python.org/pypi with https://pypi.org/project

    Change-Id: I84b5005c39969a82140706472989f2a30b0c7685
* Export deferred columns but not col props; fix CTE labeling (Mike Bayer, 2021-06-22, 1 file, -3/+1)

    Refined the behavior of ORM subquery rendering with regards to deferred
    columns and column properties to be more compatible with that of 1.3
    while also providing for 1.4's newer features.

    As a subquery in 1.4 does not make use of loader options, including
    :func:`_orm.deferred`, a subquery that is against an ORM entity with
    deferred attributes will now render those deferred attributes that refer
    directly to mapped table columns, as these are needed in the outer SELECT
    if that outer SELECT makes use of these columns; however a deferred
    attribute that refers to a composed SQL expression as we normally do with
    :func:`_orm.column_property` will not be part of the subquery, as these
    can be selected explicitly if needed in the subquery. If the entity is
    being SELECTed from this subquery, the column expression can still render
    on "the outside" in terms of the derived subquery columns. This produces
    essentially the same behavior as when working with 1.3. However in this
    case the fix has to also make sure that the ``.selected_columns``
    collection of an ORM-enabled :func:`_sql.select` also follows these
    rules, which in particular allows recursive CTEs to render correctly in
    this scenario; these were previously failing to render correctly due to
    this issue.

    As part of this change the _exported_columns_iterator() method has been
    removed and logic simplified to use ._all_selected_columns from any
    SelectBase object where _exported_columns_iterator() was used before.

    Additionally sets up UpdateBase to include ReturnsRows in its hierarchy;
    the literal point of ReturnsRows was to be a common base for UpdateBase
    and SelectBase, so it was kind of weird it wasn't there.

    Fixes: #6661

    Fixed issue in CTE constructs mostly relevant to ORM use cases where a
    recursive CTE against "anonymous" labels such as those seen in ORM
    ``column_property()`` mappings would render in the ``WITH RECURSIVE
    xyz(...)`` section as their raw internal label and not a cleanly
    anonymized name.

    Fixes: #6663
    Change-Id: I26219d4d8e6c0915b641426e9885540f74fae4d2
* Establish deprecation path for CursorResult.keys() (Mike Bayer, 2021-05-04, 1 file, -1/+16)

    Established a deprecation path for calling upon the
    :meth:`_cursor.CursorResult.keys` method for a statement that returns no
    rows, to provide support for legacy patterns used by the "records"
    package as well as any other non-migrated applications. Previously, this
    would raise :class:`.ResourceClosedException` unconditionally in the same
    way as it does when attempting to fetch rows. While this is the correct
    behavior going forward, the :class:`_cursor.LegacyCursorResult` object
    will now in this case return an empty list for ``.keys()`` as it did in
    1.3, while also emitting a 2.0 deprecation warning. The
    :class:`_cursor.CursorResult`, used when using a 2.0-style "future"
    engine, will continue to raise as it does now.

    Fixes: #6427
    Change-Id: I4148f28c88039e4141deeab28b1a5994e6d6e098
* Return Row for CursorResult.inserted_primary_key (Mike Bayer, 2021-04-11, 1 file, -14/+20)

    The tuple returned by :attr:`.CursorResult.inserted_primary_key` is now a
    :class:`_result.Row` object with a named tuple interface on top of the
    existing tuple interface.

    Fixes: #3314
    Change-Id: I85677ef60d8329648f368bf497f634758f4e087b
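    A usage sketch of the named-tuple interface described above (table name
    is illustrative; uses an in-memory SQLite database)::

        from sqlalchemy import Column, Integer, MetaData, String, Table, create_engine

        metadata = MetaData()
        user = Table(
            "user_account", metadata,
            Column("id", Integer, primary_key=True),
            Column("name", String(30)),
        )
        engine = create_engine("sqlite://")
        metadata.create_all(engine)

        with engine.begin() as conn:
            result = conn.execute(user.insert().values(name="spongebob"))
            pk = result.inserted_primary_key
            print(pk[0], pk.id)  # tuple-style and named-tuple-style access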
* Document inserted_primary_key_rows (Mike Bayer, 2021-04-04, 1 file, -8/+39)

    This accessor was misleading in that it indicated a general capability to
    return inserted primary key values for multiple rows at once. Clarify
    that this is not currently the case, as the feature is only supported by
    the psycopg2 dialect at the moment.

    Fixes: #6194
    Change-Id: I2a9cf5f47082d948d52208d2a3bad2d7ab38710e
* document rowcount for ORM update/delete (Mike Bayer, 2021-03-31, 1 file, -1/+6)

    Change-Id: I16f50cb50fc3cccc1bd7cae3a64a085b1ea68612
* Repair exception handling in CursorResult (Federico Caselli, 2021-03-30, 1 file, -2/+2)

    Fixes: #6138
    Change-Id: I794a3da688fd8577fb06770ef02bf827a5c55397