on :class:`.Insert`. This helps to fix a bug where an
INSERT...FROM SELECT construct would inadvertently be compiled
as "implicit returning" on supporting backends, which would
cause breakage in the case of an INSERT that inserts zero rows
(as implicit returning expects a row), as well as arbitrary
return data in the case of an INSERT that inserts multiple
rows (e.g. only the first row of many).
A similar change is also applied to an INSERT...VALUES
with multiple parameter sets; implicit RETURNING will no longer emit
for this statement either. As both of these constructs deal
with variable numbers of rows, the
:attr:`.ResultProxy.inserted_primary_key` accessor does not
apply. Previously, there was a documentation note that one
may prefer ``inline=True`` with INSERT...FROM SELECT as some databases
don't support RETURNING and therefore can't do "implicit" returning,
but there's no reason an INSERT...FROM SELECT needs implicit returning
in any case. Regular explicit :meth:`.Insert.returning` should
be used to return variable numbers of result rows if inserted
data is needed.
fixes #3169
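
A minimal sketch of the recommended pattern, using hypothetical ``user``
and ``user_archive`` tables; when rows are needed back from an
INSERT...FROM SELECT, state :meth:`.Insert.returning` explicitly::

    from sqlalchemy import MetaData, Table, Column, Integer, String, select

    metadata = MetaData()
    user = Table("user", metadata,
                 Column("id", Integer, primary_key=True),
                 Column("name", String(50)))
    user_archive = Table("user_archive", metadata,
                         Column("id", Integer, primary_key=True),
                         Column("name", String(50)))

    # implicit RETURNING is no longer generated for INSERT...FROM SELECT;
    # an explicit returning() delivers the variable number of rows inserted
    stmt = user_archive.insert().from_select(
        ["id", "name"],
        select([user.c.id, user.c.name]).where(user.c.name.like("%a%"))
    ).returning(user_archive.c.id)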
_collect_update_commands and _collect_delete_commands
debug logging message would not emit if the logging were set up using
``logging.setLevel()``, rather than using the ``echo_pool`` flag.
Tests to assert this logging have been added. This is a
regression that was introduced in 0.9.0.
fixes #3168
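
A minimal sketch of the logging setup this fix targets, using a
hypothetical SQLite engine; only the standard-library logger is
configured and ``echo_pool`` is not used::

    import logging

    from sqlalchemy import create_engine

    logging.basicConfig()
    logging.getLogger("sqlalchemy.pool").setLevel(logging.DEBUG)

    engine = create_engine("sqlite://")

    # pool debug messages such as connection checkout/checkin should
    # now emit for this engine as well
    with engine.connect() as conn:
        pass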
setting up given values vs. defaults; again aiming to make this
of more general use
narrow down the argument lists and generator items for each function
to just what that function needs. This will help make them
more multipurpose for bulk operations
we only call upon the history API fully for primary key columns.
We also now skip the whole step of looking at PK columns and using
any history at all if no net changes are detected on the object.
``@validates`` would have events triggered within the flush process,
when those columns were the targets of a "fetch and populate"
operation, such as an autoincremented primary key, a Python side
default, or a server-side default "eagerly" fetched via RETURNING.
fixes #3167
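
A hedged illustration with a hypothetical ``Widget`` mapping; the
validator now fires only for user-assigned values, not when the
autoincremented primary key is fetched and populated during flush::

    from sqlalchemy import Column, Integer, String, create_engine
    from sqlalchemy.ext.declarative import declarative_base
    from sqlalchemy.orm import Session, validates

    Base = declarative_base()

    class Widget(Base):
        __tablename__ = "widget"

        id = Column(Integer, primary_key=True)
        name = Column(String(50))

        @validates("id", "name")
        def _validate(self, key, value):
            # called for user-assigned values only; not for the
            # "fetch and populate" of the autoincremented id at flush time
            print("validate", key, value)
            return value

    engine = create_engine("sqlite://")
    Base.metadata.create_all(engine)

    session = Session(engine)
    session.add(Widget(name="w1"))
    session.flush()  # no validation event for the fetched ``id``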
they can be used under xdist
- add pg8000 version detection for the "sane multi rowcount" feature
As of pg8000 1.9.14, sane_multi_rowcount is supported, so this commit
updates the dialect accordingly.
- guard against some potential pytest snarkiness
now returns lists for ``items()`` and ``values()`` in Py3K.
Early porting to Py3K here had these returning iterators, when
they technically should be "iterable views"; for now, lists are OK.
view. So copy collections.OrderedDict and use MutableMapping to set up
keys, items, values on our own OrderedDict.
Conflicts:
lib/sqlalchemy/engine/base.py
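
A rough sketch of the ``MutableMapping`` technique described above
(illustrative only, not the actual implementation); once the abstract
methods are supplied, the ABC fills in ``keys()``, ``items()`` and
``values()``::

    try:
        from collections.abc import MutableMapping  # Python 3.3+
    except ImportError:
        from collections import MutableMapping  # Python 2

    class MiniOrderedDict(MutableMapping):
        """Order-preserving mapping; MutableMapping supplies keys(),
        items(), values(), get(), pop() and friends."""

        def __init__(self):
            self._keys = []
            self._data = {}

        def __setitem__(self, key, value):
            if key not in self._data:
                self._keys.append(key)
            self._data[key] = value

        def __getitem__(self, key):
            return self._data[key]

        def __delitem__(self, key):
            del self._data[key]
            self._keys.remove(key)

        def __iter__(self):
            return iter(self._keys)

        def __len__(self):
            return len(self._keys)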
if we already have a table; this prevents reentrant calls, and
we aren't supporting columns etc. being moved around between different parents
are a little more crazy under xdist mode
| | |
into more performant executemany() call, similarly to how INSERT
statements can be batched; this will be invoked within flush
to the degree that subsequent UPDATE statements for the
same mapping and table involve the identical columns within the
VALUES clause, as well as that no VALUES-level SQL expressions
are embedded.
- some other inlinings within persistence.py
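
A hedged example of a flush that can now be batched, using a
hypothetical ``Account`` mapping; every dirty object changes the same
column with a plain value::

    from sqlalchemy import Column, Integer, String, create_engine
    from sqlalchemy.ext.declarative import declarative_base
    from sqlalchemy.orm import Session

    Base = declarative_base()

    class Account(Base):
        __tablename__ = "account"
        id = Column(Integer, primary_key=True)
        status = Column(String(20))

    engine = create_engine("sqlite://")
    Base.metadata.create_all(engine)

    session = Session(engine)
    session.add_all([Account(status="new") for _ in range(50)])
    session.flush()

    # identical columns, no SQL expressions -> one UPDATE statement,
    # many parameter sets
    for account in session.query(Account):
        account.status = "active"
    session.flush()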
for an INSERT or UPDATE are now sorted when they contribute towards
the "compiled cache" cache key. These keys were previously not
deterministically ordered, meaning the same statement could be
cached multiple times on equivalent keys, costing both in terms of
memory as well as performance.
fixes #3165
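
Illustrative plain-Python sketch (not the internal cache code): a key
built from parameter names in arrival order can produce several distinct
keys for the same statement, while sorting the names first is
deterministic::

    params_a = {"name": "n1", "status": "x"}
    params_b = {"status": "y", "name": "n2"}

    unsorted_keys = {tuple(p) for p in (params_a, params_b)}
    sorted_keys = {tuple(sorted(p)) for p in (params_a, params_b)}

    print(unsorted_keys)  # can contain two distinct keys for one statement
    print(sorted_keys)    # always the single key ('name', 'status')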
is being run itself, either from inside the listener or from a
concurrent thread, now raises a RuntimeError, as the collection used is
now an instance of ``collections.deque()`` and does not support changes
while being iterated. Previously, a plain Python list was used, where
removal from inside the event itself would produce silent failures.
fixes #3163
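
A hedged sketch of the pattern this guards against; removing the
listener from within its own invocation now raises ``RuntimeError``
instead of failing silently::

    from sqlalchemy import create_engine, event

    engine = create_engine("sqlite://")

    def on_connect(dbapi_connection, connection_record):
        # modifying the listener collection while it is being iterated
        # is no longer silently ignored
        event.remove(engine, "connect", on_connect)

    event.listen(engine, "connect", on_connect)
    engine.connect()  # raises RuntimeError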
:class:`.SynonymProperty` and :class:`.ComparableProperty`.
- The ``info`` parameter has been added as a constructor argument
to all schema constructs including :class:`.MetaData`,
:class:`.Index`, :class:`.ForeignKey`, :class:`.ForeignKeyConstraint`,
:class:`.UniqueConstraint`, :class:`.PrimaryKeyConstraint`,
:class:`.CheckConstraint`.
fixes #2963
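
A brief hedged example of the broadened ``info`` parameter; the
dictionary contents shown are hypothetical::

    from sqlalchemy import (Column, Index, Integer, MetaData, String,
                            Table, UniqueConstraint)

    metadata = MetaData(info={"tenant": "reporting"})

    account = Table(
        "account", metadata,
        Column("id", Integer, primary_key=True),
        Column("email", String(100)),
        UniqueConstraint("email", info={"reviewed": True}),
    )

    Index("ix_account_email", account.c.email, info={"owner": "dba"})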
:class:`.InspectionAttr`, where in addition to being available
on all :class:`.MapperProperty` objects, it is also now available
on hybrid properties and association proxies, when accessed via
:attr:`.Mapper.all_orm_descriptors`.
fixes #2971
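
A hedged sketch, assuming (per #2971) the attribute in question is the
``.info`` dictionary; the ``Interval`` mapping is hypothetical::

    from sqlalchemy import Column, Integer, inspect
    from sqlalchemy.ext.declarative import declarative_base
    from sqlalchemy.ext.hybrid import hybrid_property

    Base = declarative_base()

    class Interval(Base):
        __tablename__ = "interval"
        id = Column(Integer, primary_key=True)
        start = Column(Integer)
        end = Column(Integer)

        @hybrid_property
        def length(self):
            return self.end - self.start

    # the hybrid is present in all_orm_descriptors and exposes .info
    length_attr = inspect(Interval).all_orm_descriptors["length"]
    length_attr.info["unit"] = "units"
    print(length_attr.info)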
Conflicts:
doc/build/changelog/changelog_10.rst
- return a list of dicts like other methods do
- don't combine 'schema' with 'name', leave them separate
- support '*' argument so that we can retrieve cross-schema
if needed
- remove "conn" argument
- use bound parameters for 'schema' in SQL
- order by schema, name, label
- adapt _load_enums changes to column reflection
- changelog
- module docs for get_enums()
- add drop of enums to --dropfirst
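
A hedged usage sketch of ``get_enums()`` as reworked above; the
connection URL is hypothetical::

    from sqlalchemy import create_engine, inspect

    engine = create_engine("postgresql://scott:tiger@localhost/test")
    inspector = inspect(engine)

    # returns a list of dicts; schema="*" retrieves enums across all schemas
    for enum in inspector.get_enums(schema="*"):
        print(enum["schema"], enum["name"], enum["labels"])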
Provide the opportunity to retrieve the list of enums via the
inspector's public interface.
information regarding #3027.
otherwise render a SQL NULL column value, rather than a JSON-encoded
``'null'``. To support this case, changes are as follows:
* The value :func:`.null` can now be specified, which will always
  result in a NULL value being rendered in the statement.
* A new parameter :paramref:`.JSON.none_as_null` is added, which
  when True indicates that the Python ``None`` value should be
  persisted as SQL NULL, rather than JSON-encoded ``'null'``.
Retrieval of NULL as None is also repaired for DBAPIs other than
psycopg2, namely pg8000.
fixes #3159
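
A hedged sketch of both options described above; the ``data_table``
table is hypothetical::

    from sqlalchemy import Column, Integer, MetaData, Table, null
    from sqlalchemy.dialects.postgresql import JSON

    metadata = MetaData()

    # none_as_null=True: Python None persists as SQL NULL rather than
    # the JSON-encoded 'null'
    data_table = Table(
        "data_table", metadata,
        Column("id", Integer, primary_key=True),
        Column("data", JSON(none_as_null=True)),
    )

    # null() always renders SQL NULL, regardless of none_as_null
    stmt = data_table.insert().values(data=null())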
ref #3155