path: root/test/dialect/oracle/test_reflection.py
Commit message | Author | Age | Files | Lines
* Improve oracle index reflection | Federico Caselli | 2023-04-28 | 1 | -20/+92

Added reflection support in the Oracle dialect for expression-based indexes and for the ordering direction of index expressions.

Fixes: #9597
Change-Id: I40e163496789774e9930f46823d2208c35eab6f8
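As a rough illustration of what this enables, a minimal sketch; the connection URL and table name are hypothetical, and the exact reflected dictionary keys are an assumption based on the 2.0 reflection API:

```python
from sqlalchemy import create_engine, inspect

# hypothetical connection URL and table name, for illustration only
engine = create_engine("oracle+cx_oracle://scott:tiger@localhost/?service_name=XEPDB1")

insp = inspect(engine)
for idx in insp.get_indexes("my_table"):
    # with this change, function-based index expressions and their ASC/DESC
    # ordering are expected to appear in the reflected entry (the dictionary
    # keys used here are an assumption, not confirmed by this commit message)
    print(idx["name"], idx.get("expressions"), idx.get("column_sorting"))
```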
* denormalize "public" schema to "PUBLIC" | Mike Bayer | 2023-03-09 | 1 | -0/+117

Fixed reflection bug where Oracle "name normalize" would not work correctly for reflection of symbols that are in the "PUBLIC" schema, such as synonyms, meaning the PUBLIC name could not be indicated as lower case on the Python side for the :paramref:`_schema.Table.schema` argument. Using uppercase "PUBLIC" would work, but would then lead to awkward SQL queries including a quoted ``"PUBLIC"`` name as well as indexing the table under uppercase "PUBLIC", which was inconsistent.

Fixes: #9459
Change-Id: I989bd1e794a5b5ac9aae4f4a8702f14c56cd74c2
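A minimal sketch of the use case this fixes; the synonym name and connection URL are made up, and this assumes a synonym living in Oracle's PUBLIC schema:

```python
from sqlalchemy import MetaData, Table, create_engine

engine = create_engine("oracle+cx_oracle://scott:tiger@localhost/?service_name=XEPDB1")

syn = Table(
    "my_synonym",                 # hypothetical synonym in the PUBLIC schema
    MetaData(),
    schema="public",              # lower case now denormalizes to Oracle's "PUBLIC"
    autoload_with=engine,
    oracle_resolve_synonyms=True,
)
```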
* reflect Oracle ROWID | Mike Bayer | 2023-01-24 | 1 | -0/+13

Added :class:`_oracle.ROWID` to reflected types as this type may be used in a "CREATE TABLE" statement.

Fixes: #5047
Change-Id: I818dcf68ed81419d0fd5df5e2d51d6fa0f1be7fc
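A small sketch of what this allows, assuming a pre-existing table with a ROWID column (table name and URL are illustrative only):

```python
from sqlalchemy import MetaData, Table, create_engine
from sqlalchemy.dialects.oracle import ROWID

engine = create_engine("oracle+cx_oracle://scott:tiger@localhost/?service_name=XEPDB1")

# assumes a table created elsewhere, e.g.: CREATE TABLE t_rowid (id NUMBER, rid ROWID)
reflected = Table("t_rowid", MetaData(), autoload_with=engine)

# the ROWID column now comes back with the Oracle ROWID type
assert isinstance(reflected.c.rid.type, ROWID)
```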
* warn and skip for FKs that refer to invisible cols for Oracle | Mike Bayer | 2023-01-09 | 1 | -9/+161

Supported use case for foreign key constraints where the local column is marked as "invisible". The errors normally generated when a :class:`.ForeignKeyConstraint` is created that check for the target column are disabled when reflecting, and the constraint is skipped with a warning, in the same way which already occurs for an :class:`.Index` with a similar issue.

Tests are added for indexes, unique constraints, and primary key constraints, which were already working; indexes and unique constraints warn, while primary keys don't, presumably because we never see those PK columns in the first place.

Constraints now raise an informative ConstraintColumnNotFoundError in the general case of strings in the "pending colargs" collection not being resolvable.

Fixes: #9059
Change-Id: I400cf0bff6abba0e0c75f38b07617be1a8ec3453
* sort materialized view names in test | Mike Bayer | 2022-12-06 | 1 | -1/+1

Saw this fail on CI with views in reverse order.

Change-Id: I953f379f36924673a49b94cf5e7f54d071044113
* Try running pyupgrade on the code | Federico Caselli | 2022-11-16 | 1 | -11/+6

Command run is "pyupgrade --py37-plus --keep-runtime-typing --keep-percent-format <files...>". pyupgrade will change assert_ to assertTrue; that was reverted since assertTrue does not exist in the sqlalchemy fixtures.

Change-Id: Ie1ed2675c7b11d893d78e028aad0d1576baebb55
* Ensure type lengths are int in oracle | Federico Caselli | 2022-06-26 | 1 | -12/+36

Repair a change introduced by the multi reflection work that caused the character length of varchar-like types, and the precision of numeric-like types, to be set as float. This will fix the test errors in alembic that are currently broken, as shown in I9ad803df1d3ccf2a5111266b781061936717b8c8.

Change-Id: Idd5975efaeadfe6327a1cd3b6667d82e836a2cb1
* rearchitect reflection for batched performance | Federico Caselli | 2022-06-18 | 1 | -2/+552

Rearchitected the schema reflection API to allow some dialects to make use of high-performing batch queries to reflect the schemas of many tables at once, using far fewer queries. The new performance features are targeted first at the PostgreSQL and Oracle backends, and may be applied to any dialect that makes use of SELECT queries against system catalog tables to reflect tables (currently this omits the MySQL and SQLite dialects, which instead make use of parsing the "CREATE TABLE" statement; however these dialects do not have a pre-existing performance issue with reflection. MS SQL Server is still a TODO).

The new API is backwards compatible with the previous system and should require no changes to third party dialects to retain compatibility; third party dialects can also opt into the new system by implementing batched queries for schema reflection.

Along with this change is an updated reflection API that is fully :pep:`484` typed, features many new methods and some changes.

Fixes: #4379
Change-Id: I897ec09843543aa7012bcdce758792ed3d415d08
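A minimal sketch of how the batched API can be used from the Inspector, assuming the "multi" accessors of the 2.0 Inspector (the URL is hypothetical):

```python
from sqlalchemy import create_engine, inspect

engine = create_engine("oracle+cx_oracle://scott:tiger@localhost/?service_name=XEPDB1")
insp = inspect(engine)

# the batched accessors reflect many tables using only a handful of queries;
# results are keyed by (schema, table_name) tuples
columns_by_table = insp.get_multi_columns(schema=None)
for (schema, table_name), cols in columns_by_table.items():
    print(schema, table_name, [col["name"] for col in cols])
```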
* update flake8 noqa skips with proper syntax | Federico Caselli | 2022-04-11 | 1 | -1/+1

Change-Id: I42ed77f559e3ee5b8c600d98457ee37803ef0ea6
* bump black to 22.3.0 | Mike Bayer | 2022-03-28 | 1 | -1/+1

Both black and click were released in the past few hours, and black 21.5b1 seems to suddenly be failing on a missing symbol from click; just update to the latest.

Change-Id: Idf76732479a264f7f2245699a6bdaff018e3a123
* Implement generic Double and related fixed types | zeeeeeb | 2022-02-25 | 1 | -9/+20

Added :class:`.Double`, :class:`.DOUBLE`, :class:`.DOUBLE_PRECISION` datatypes to the base ``sqlalchemy.`` module namespace, for explicit use of double/double precision as well as generic "double" datatypes. Use :class:`.Double` for generic support that will resolve to DOUBLE/DOUBLE PRECISION/FLOAT as needed for different backends.

Implemented DDL and reflection support for ``FLOAT`` datatypes which include an explicit "binary_precision" value. Using the Oracle-specific :class:`_oracle.FLOAT` datatype, the new parameter :paramref:`_oracle.FLOAT.binary_precision` may be specified, which will render Oracle's precision for floating point types directly. This value is interpreted during reflection. Upon reflecting back a ``FLOAT`` datatype, the datatype returned is one of :class:`_types.DOUBLE_PRECISION` for a ``FLOAT`` with a precision of 126 (this is also Oracle's default precision for ``FLOAT``), :class:`_types.REAL` for a precision of 63, and :class:`_oracle.FLOAT` for a custom precision, as per Oracle documentation.

As part of this change, the generic :paramref:`_sqltypes.Float.precision` value is explicitly rejected when generating DDL for Oracle, as this precision cannot be accurately converted to "binary precision"; instead, an error message encourages the use of :meth:`_sqltypes.TypeEngine.with_variant` so that Oracle's specific form of precision may be chosen exactly. This is a backwards-incompatible change in behavior, as the previous "precision" value was silently ignored for Oracle.

Fixes: #5465
Closes: #7674
Pull-request: https://github.com/sqlalchemy/sqlalchemy/pull/7674
Pull-request-sha: 5c68419e5aee2e27bf21a8ac9eb5950d196c77e5
Change-Id: I831f4af3ee3b23fde02e8f6393c83e23dd7cd34d
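A small sketch of the resulting usage pattern; the table and column names are made up, and the specific binary precision shown is just an example value:

```python
from sqlalchemy import Column, Double, Float, MetaData, Table
from sqlalchemy.dialects import oracle

metadata = MetaData()

measurements = Table(
    "measurements",
    metadata,
    # generic Double resolves to DOUBLE / DOUBLE PRECISION / FLOAT per backend
    Column("reading", Double()),
    # a generic Float precision cannot be converted to Oracle binary precision,
    # so spell out the Oracle form explicitly with with_variant()
    Column(
        "approx",
        Float(53).with_variant(oracle.FLOAT(binary_precision=16), "oracle"),
    ),
)
```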
* don't use exception catches for warnings; modernize xdist detection | Mike Bayer | 2022-01-22 | 1 | -3/+3

Improvements to the test suite's integration with pytest such that the "warnings" plugin, if manually enabled, will not interfere with the test suite, such that third parties can enable the warnings plugin or make use of the ``-W`` parameter and SQLAlchemy's test suite will continue to pass.

Additionally, modernized the detection of the "pytest-xdist" plugin so that plugins can be globally disabled using PYTEST_DISABLE_PLUGIN_AUTOLOAD=1 without breaking the test suite if xdist were still installed.

Warning filters that promote deprecation warnings to errors are now localized to SQLAlchemy-specific warnings, or within SQLAlchemy-specific sources for general Python deprecation warnings, so that non-SQLAlchemy deprecation warnings emitted from pytest plugins should also not impact the test suite.

Fixes: #7599
Change-Id: Ibcf09af25228d39ee5a943fda82d8a9302433726
* Fix various source comment/doc typos | luz paz | 2021-12-29 | 1 | -1/+1

### Description
Found via `codespell -q 3 -L ba,crate,datas,froms,gord,hist,inh,nd,selectin,strat,ue`
Also added codespell to the pep8 tox env

### Checklist
This pull request is:
- [x] A documentation / typographical error fix - Good to go, no issue or tests are needed

Closes: #7338
Pull-request: https://github.com/sqlalchemy/sqlalchemy/pull/7338
Pull-request-sha: 0deac2219396bc0eba7da53eb3a80932edbf2dd7
Change-Id: Icd61db31c8dc655d4a39d8a304194804d08555fe
* Update black, flake8 and zimports | Federico Caselli | 2021-05-12 | 1 | -1/+1

Change-Id: I488c9557eda390e4a88319affd4c8813ee274f80
* update execute() arg formats in modules and tests | Mike Bayer | 2021-01-15 | 1 | -2/+4

Continuing with producing a SQLAlchemy 1.4.0b2 that internally does not emit any of its own 2.0 deprecation warnings, migrate the *args and **kwargs passed to execute() methods, which now must be a single list or dictionary. Alembic 1.5 is again waiting on this internal consistency to be present so that it can pass all tests with no 2.0 deprecation warnings.

Change-Id: If6b792e57c8c5dff205419644ab68e631575a2fa
* reinvent xdist hooks in terms of pytest fixtures | Mike Bayer | 2021-01-13 | 1 | -8/+8

To allow the "connection" pytest fixture and others to work correctly in conjunction with setup/teardown that expects to be external to the transaction, remove and prevent any usage of "xdist" style names that are hardcoded by pytest to run inside of fixtures, even function level ones. Instead use pytest autouse fixtures to implement our own r"setup|teardown_test(?:_class)?" methods so that we can ensure function-scoped fixtures are run within them. A new, more explicit flow is set up within plugin_base and pytestplugin such that the order of setup/teardown steps, of which there are now many, is fully documented and controllable.

New granularity has been added to the test teardown phase to distinguish between "end of the test", when lock-holding structures on connections should be released to allow for table drops, vs. "end of the test plus its teardown steps", when we can perform final cleanup on connections and run assertions that everything is closed out. From there we can remove most of the defensive "tear down everything" logic inside of engines which for many years would frequently dispose of pools over and over again, creating a broken and expensive connection flow. A quick test shows that running test/sql/ against a single PostgreSQL engine with the new approach uses 75% fewer new connections, creating 42 new connections total, vs. 164 new connections total with the previous system.

As part of this, the new fixtures metadata/connection/future_connection have been integrated such that they can be combined together effectively. The fixture_session() and provide_metadata() fixtures have been improved, including that fixture_session() now strongly references sessions which are explicitly torn down before table drops occur after a test.

Major changes have been made to the ConnectionKiller such that it now features different "scopes" for testing engines and will limit its cleanup to those testing engines corresponding to end of test, end of test class, or end of test session. The system by which it tracks DBAPI connections has been reworked; it is ultimately somewhat similar to how it worked before but is organized more clearly along with the proxy-tracking logic. A "testing_engine" fixture is also added that works as a pytest fixture rather than a standalone function.

The connection cleanup logic should now be very robust, as we now can use the same global connection pools for the whole suite without ever disposing them, while also running a query for PostgreSQL locks remaining after every test and asserting there are no open transactions leaking between tests at all. Additional steps are added that also accommodate for asyncio connections not explicitly closed, as is the case for legacy sync-style tests as well as the async tests themselves. As always, hundreds of tests are further refined to use the new fixtures where problems with loose connections were identified, largely as a result of the new PostgreSQL assertions; many more tests have moved from legacy patterns into the newest.

An unfortunate discovery during the creation of this system is that autouse fixtures (as well as those set up by @pytest.mark.usefixtures) are not usable at our current scale with pytest 4.6.11 running under Python 2. It's unclear if this is due to the older version of pytest or how it implements itself for Python 2, as well as if the issue is CPU slowness or just large memory use, but collecting the full span of tests takes over a minute for a single process when any autouse fixtures are in place, and on CI the jobs just time out after ten minutes. So at the moment this patch also reinvents a small version of "autouse" fixtures when py2k is running, which skips generating the real fixture and instead uses two global pytest fixtures (which don't seem to impact performance) to invoke the "autouse" fixtures ourselves outside of pytest. This will limit our ability to do more with fixtures until we can remove py2k support.

py.test is still observed to be much slower in collection in the 4.6.11 version compared to modern 6.2 versions, so add support for new TOX_POSTGRESQL_PY2K and TOX_MYSQL_PY2K environment variables that will run the suite for fewer backends under Python 2. For Python 3, pin pytest to modern 6.2 versions where performance for collection has been improved greatly.

Includes the following improvements:

Fixed bug in asyncio connection pool where ``asyncio.TimeoutError`` would be raised rather than :class:`.exc.TimeoutError`. Also repaired the :paramref:`_sa.create_engine.pool_timeout` parameter set to zero when using the async engine, which previously would ignore the timeout and block rather than timing out immediately as is the behavior with regular :class:`.QueuePool`.

For asyncio, the connection pool will now also not interact at all with an asyncio connection whose ConnectionFairy is being garbage collected; a warning that the connection was not properly closed is emitted and the connection is discarded. Within the test suite the ConnectionKiller is now maintaining strong references to all DBAPI connections and ensuring they are released when tests end, including those whose ConnectionFairy proxies are GCed.

Identified cx_Oracle.stmtcachesize as a major factor in Oracle test scalability issues; this can be reset on a per-test basis rather than setting it to zero across the board. The addition of this flag has resolved the long-standing Oracle "two task" error problem.

For SQL Server, changed the temp table style used by the "suite" tests to be the double-pound-sign, i.e. global, variety, which is much easier to test generically. There are already reflection tests that are more finely tuned to both styles of temp table within the mssql test suite. Additionally, added an extra step to the "dropfirst" mechanism for SQL Server that will remove all foreign key constraints first, as some issues were observed when using this flag when multiple schemas had not been torn down.

Identified and fixed two subtle failure modes in the engine: when commit/rollback fails in a begin() context manager, the connection is explicitly closed, and when "initialize()" fails on the first new connection of a dialect, the transactional state on that connection is still rolled back.

Fixes: #5826
Fixes: #5827
Change-Id: Ib1d05cb8c7cf84f9a4bfd23df397dc23c9329bfe
* remove metadata.bind use from test suite | Mike Bayer | 2021-01-03 | 1 | -75/+66

Importantly, this means we can remove bound metadata from the fixtures that are used by Alembic's test suite. Hopefully this is the last one that has to happen to allow Alembic to be fully 1.4/2.0. Start moving from @testing.provide_metadata to a pytest metadata fixture. This does not seem to have any negative effects even though TablesTest uses a "self.metadata" attribute.

Change-Id: Iae6ab95938a7e92b6d42086aec534af27b5577d3
* correct for "autocommit" deprecation warning | Mike Bayer | 2020-12-11 | 1 | -115/+106

Ensure no autocommit warnings occur internally or within tests. Also includes fixes for SQL Server full text tests which apparently have not been working at all for a long time, as they used long-removed APIs. CI has not had fulltext running for some years and is now installed.

Change-Id: Id806e1856c9da9f0a9eac88cebc7a94ecc95eb96
* Convert to autoload_with internally | Mike Bayer | 2020-11-07 | 1 | -45/+55

Fixed bug where the now-deprecated ``autoload`` parameter was being called internally within the reflection routines when a related table was reflected.

Fixes: #5684
Change-Id: I6ab439a2f49ff1ae2d3c7a15b531cbafbc3cf594
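For context, a minimal sketch of the non-deprecated reflection spelling that the internals now use as well (table name and URL are illustrative only):

```python
from sqlalchemy import MetaData, Table, create_engine

engine = create_engine("oracle+cx_oracle://scott:tiger@localhost/?service_name=XEPDB1")

# 1.4 style: pass the engine or connection explicitly instead of the
# deprecated autoload=True / bound-metadata pattern
users = Table("users", MetaData(), autoload_with=engine)
```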
* Add reflection for Identity columns | Federico Caselli | 2020-09-28 | 1 | -1/+40

Added support for reflecting "identity" columns, which are now returned as part of the structure returned by :meth:`_reflection.Inspector.get_columns`. When reflecting full :class:`_schema.Table` objects, identity columns will be represented using the :class:`_schema.Identity` construct.

Fixed compilation error on oracle for sequence and identity column ``nominvalue`` and ``nomaxvalue`` options that require no space in them. Improved test compatibility with oracle 18.

As part of the support for reflecting :class:`_schema.Identity` objects, the method :meth:`_reflection.Inspector.get_columns` no longer returns ``mssql_identity_start`` and ``mssql_identity_increment`` as part of the ``dialect_options``. Use the information in the ``identity`` key instead.

The mssql dialect will assume that at least MSSQL 2005 is used. There is no hard exception raised if a previous version is detected, but operations may fail for older versions.

Fixes: #5527
Fixes: #5324
Change-Id: If039fe637c46b424499e6bac54a2cbc0dc54cb57
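A short sketch of reading the new ``identity`` key from column reflection; the table name and URL are hypothetical:

```python
from sqlalchemy import create_engine, inspect

engine = create_engine("oracle+cx_oracle://scott:tiger@localhost/?service_name=XEPDB1")

for col in inspect(engine).get_columns("some_identity_table"):
    # columns defined as GENERATED ... AS IDENTITY now carry an "identity"
    # entry describing start, increment and related options
    if col.get("identity"):
        print(col["name"], col["identity"])
```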
* upgrade to black 20.8b1 | Mike Bayer | 2020-09-28 | 1 | -1/+6

It's better; the majority of these changes look more readable to me. Also found some docstrings that had formatting / quoting issues.

Change-Id: I582a45fde3a5648b2f36bab96bad56881321899b
* Emit v2.0 deprecation warning for "implicit autocommit" | Gord Thompson | 2020-08-28 | 1 | -24/+35

"Implicit autocommit", which is the COMMIT that occurs when a DML or DDL statement is emitted on a connection, is deprecated and won't be part of SQLAlchemy 2.0. A 2.0-style warning is emitted when autocommit takes effect, so that the calling code may be adjusted to use an explicit transaction.

As part of this change, DDL methods such as :meth:`_schema.MetaData.create_all`, when used against a :class:`_engine.Engine` or :class:`_engine.Connection`, will run the operation in a BEGIN block if one is not started already.

The MySQL and MariaDB dialects now query from the information_schema.tables system view in order to determine if a particular table exists or not. Previously, the "DESCRIBE" command was used, with an exception catch to detect a non-existent table, which would have the undesirable effect of emitting a ROLLBACK on the connection. There appeared to be legacy encoding issues which prevented the use of "SHOW TABLES" for this, but as MySQL support is now at 5.0.2 or above due to :ticket:`4189`, the information_schema tables are now available in all cases.

Fixes: #4846
Change-Id: I733a7e0e17477a63607fb9931c87c393bbd7ac57
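A minimal sketch of the explicit-transaction pattern the warning points users toward; the statement and URL are placeholders:

```python
from sqlalchemy import create_engine, text

engine = create_engine("oracle+cx_oracle://scott:tiger@localhost/?service_name=XEPDB1")

# explicit transaction: the COMMIT comes from the begin() block,
# not from the DML statement itself (no implicit autocommit)
with engine.begin() as conn:
    conn.execute(text("UPDATE accounts SET balance = balance + 1"))
```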
* Add future=True to create_engine/Session; unify select() | Mike Bayer | 2020-07-08 | 1 | -7/+7

Several weeks of using the future_select() construct has led to the proposal that there be just one select() construct again which features the new join() method, and otherwise accepts both the 1.x and 2.x argument styles. This would make migration simpler and reduce confusion. However, confusion may be increased by the fact that select().join() is different. Current thinking is we may be better off with a few hard behavioral changes to old and relatively unknown APIs rather than trying to play both sides within two extremely similar but subtly different APIs. At the moment, the .join() thing seems to be the only behavioral change that occurs without the user taking any explicit steps. Session.execute() will still behave the old way, as we are adding a future flag.

This change also adds the "future" flag to Session() and session.execute(), so that interpretation of the incoming statement, as well as the new-style result being returned, does not occur for existing applications unless they add the use of this flag.

The change in general is moving the "removed in 2.0" system further along, where we want the test suite to fully pass even if the SQLALCHEMY_WARN_20 flag is set. Get many tests to pass when SQLALCHEMY_WARN_20 is set; this should be ongoing after this patch merges.

Improve the RemovedIn20 warning; these are all deprecated "since" 1.4, so ensure that's what the messages read. Make sure the information link is on all warnings. Add deprecation warnings for parameters present and add warnings to all FromClause.select() types of methods.

Fixes: #5379
Fixes: #5284
Change-Id: I765a0b912b3dcd0e995426427d8bb7997cbffd51
References: #5159
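A small sketch of the opt-in "future" engine together with the unified select(); the table definition and URL are made up for illustration:

```python
from sqlalchemy import Column, Integer, MetaData, String, Table, create_engine, select

engine = create_engine(
    "oracle+cx_oracle://scott:tiger@localhost/?service_name=XEPDB1", future=True
)
metadata = MetaData()
users = Table(
    "users",
    metadata,
    Column("id", Integer, primary_key=True),
    Column("name", String(50)),
)

with engine.connect() as conn:
    # the unified select() accepts 2.0-style positional arguments as well
    # as the older 1.x column-list style
    rows = conn.execute(select(users).where(users.c.id == 1)).all()
```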
* Use index name to determine if an index is for the PK | Mike Bayer | 2020-06-25 | 1 | -0/+79

Fixed bug in Oracle dialect where indexes that contain the full set of primary key columns would be mistaken as the primary key index itself, which is omitted, even if there were multiples. The check has been refined to compare the name of the primary key constraint against the index name itself, rather than trying to guess based on the columns present in the index.

Fixes: #5421
Change-Id: I47c2ccdd0b13977cfd9ef249d4de06371c4fb241
* Deprecate plain string in execute and introduce `exec_driver_sql` | Federico Caselli | 2020-03-21 | 1 | -31/+43

Execution of a literal SQL string is deprecated in :meth:`.Connection.execute`, and a warning is raised when used, stating that it will be coerced to :func:`.text` in a future release. To execute a raw SQL string, the new connection method :meth:`.Connection.exec_driver_sql` was added, which will retain the previous behavior, passing the string to the DBAPI driver unchanged. Usage of scalar or tuple positional parameters in :meth:`.Connection.execute` is also deprecated.

Fixes: #4848
Fixes: #5178
Change-Id: I2830181054327996d594f7f0d59c157d477c3aa9
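A short sketch contrasting the two spellings; the query and URL are placeholders, and the ":name" parameter style shown assumes the cx_Oracle driver:

```python
from sqlalchemy import create_engine, text

engine = create_engine("oracle+cx_oracle://scott:tiger@localhost/?service_name=XEPDB1")

with engine.connect() as conn:
    # preferred: wrap literal SQL in text()
    conn.execute(text("SELECT * FROM users WHERE id = :id"), {"id": 1})

    # or hand the string straight to the DBAPI driver, unaltered
    conn.exec_driver_sql("SELECT * FROM users WHERE id = :id", {"id": 1})
```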
* Include schema in all_tab_comments query | Mike Bayer | 2020-03-12 | 1 | -0/+26

Fixed regression / incorrect fix caused by fix for :ticket:`5146` where the Oracle dialect reads from the "all_tab_comments" view to get table comments but fails to accommodate for the current owner of the table being requested, causing it to read the wrong comment if multiple tables of the same name exist in multiple schemas.

Fixes: #5146
Change-Id: Id79fbaa81b0e36cd4af60c48e4ab35c593ace057
* Reflect comments from any table accessible by the current user | Gord Thompson | 2020-03-10 | 1 | -0/+5

Fixed a reflection bug where table comments could only be retrieved for tables actually owned by the user but not for tables visible to the user but owned by someone else. Pull request courtesy Dave Hirschfeld.

Fixes: #5146
Closes: #5147
Pull-request: https://github.com/sqlalchemy/sqlalchemy/pull/5147
Pull-request-sha: 0651e3bed05923765203b37986a2506dac3e634e
Change-Id: If970fda10d6adf04d926d38df1a567df1de9f7b9
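A minimal sketch of the comment-reflection call this fixes; the table, schema and URL names are hypothetical:

```python
from sqlalchemy import create_engine, inspect

engine = create_engine("oracle+cx_oracle://scott:tiger@localhost/?service_name=XEPDB1")

# works for tables merely visible to, not necessarily owned by, the current user
comment = inspect(engine).get_table_comment("orders", schema="other_owner")
print(comment["text"])
```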
* Post black reformatting | Mike Bayer | 2019-01-06 | 1 | -52/+25

Applied on top of a pure run of black -l 79 in I7eda77fed3d8e73df84b3651fd6cfcfe858d4dc9, this set of changes resolves all remaining flake8 conditions for those codes we have enabled in setup.cfg. Included are resolutions for all remaining flake8 issues including shadowed builtins, long lines, import order, unused imports, duplicate imports, and docstring issues.

Change-Id: I4f72d3ba1380dd601610ff80b8fb06a2aff8b0fe
* Run black -l 79 against all source files | Mike Bayer | 2019-01-06 | 1 | -220/+333

This is a straight reformat run using black as is, with no edits applied at all. The black run will format code consistently, however in some cases that are prevalent in SQLAlchemy code it produces too-long lines. The too-long lines will be resolved in the following commit that will resolve all remaining flake8 issues including shadowed builtins, long lines, import order, unused imports, duplicate imports, and docstring issues.

Change-Id: I7eda77fed3d8e73df84b3651fd6cfcfe858d4dc9
* Turn oracle BINARY_DOUBLE, BINARY_FLOAT, DOUBLE_PRECISION into floats | Mike Bayer | 2018-05-25 | 1 | -1/+18

The Oracle BINARY_FLOAT and BINARY_DOUBLE datatypes now participate within cx_Oracle.setinputsizes(), passing along NATIVE_FLOAT, so as to support the NaN value. Additionally, :class:`.oracle.BINARY_FLOAT`, :class:`.oracle.BINARY_DOUBLE` and :class:`.oracle.DOUBLE_PRECISION` now subclass :class:`.Float`, since these are floating point datatypes, not decimal. These datatypes were already defaulting the :paramref:`.Float.asdecimal` flag to False in line with what :class:`.Float` already does.

Added reflection capabilities for the :class:`.oracle.BINARY_FLOAT`, :class:`.oracle.BINARY_DOUBLE` datatypes.

Change-Id: Id99b912e83052654a17d07dc92b4dcb958cb7600
Fixes: #4264
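A small sketch of the reflection behavior described above, assuming a pre-existing table with BINARY_FLOAT / BINARY_DOUBLE columns (table name and URL are illustrative only):

```python
from sqlalchemy import Float, MetaData, Table, create_engine

engine = create_engine("oracle+cx_oracle://scott:tiger@localhost/?service_name=XEPDB1")

# assumes a table created elsewhere, e.g.:
#   CREATE TABLE t_float (bf BINARY_FLOAT, bd BINARY_DOUBLE)
t = Table("t_float", MetaData(), autoload_with=engine)
for col in t.columns:
    # BINARY_FLOAT / BINARY_DOUBLE now reflect as Float subclasses,
    # returning Python floats (asdecimal defaults to False)
    assert isinstance(col.type, Float)
```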
* Reflect Oracle NUMBER(NULL, 0) as INTEGER | Kent Bower | 2018-04-12 | 1 | -2/+46

The Oracle NUMBER datatype is reflected as INTEGER if the precision is NULL and the scale is zero, as this is how INTEGER values come back when reflected from Oracle's tables. Pull request courtesy Kent Bower.

Change-Id: I4627febd46cab7085299c0a5700ee0f0bdca513c
Pull-request: https://github.com/zzzeek/sqlalchemy/pull/437
* Refactor for cx_Oracle version 6 (oracle_numeric) | Mike Bayer | 2017-09-11 | 1 | -0/+536

Drops support for cx_Oracle prior to version 5.x, reworks numeric and binary support.

Fixes: #4064
Change-Id: Ib9ae9aba430c15cd2a6eeb4e5e3fd8e97b5fe480