author    brln <matt@cranklogic.com>  2016-08-02 18:37:35 -0400
committer Mike Bayer <mike_mp@zzzcomputing.com>  2016-08-02 18:45:59 -0400
commit    ce1492ef3aae692a3dc10fff400e178e7b2edff8 (patch)
tree      1a6bfe9bf4f93d51aa506a62711031d3fb04fe18
parent    2dfa954e1f6af20f3104ef05ba126b37f8f4e5c5 (diff)
download  sqlalchemy-ce1492ef3aae692a3dc10fff400e178e7b2edff8.tar.gz
Warn that bulk save groups inserts/updates by type
Users who pass many different object types to bulk_save_objects may be
surprised that the INSERT/UPDATE batches must necessarily be broken up
by type. Add this to the list of caveats.

Co-authored-by: Mike Bayer
Change-Id: I8390c1c971ced50c41268b479a9dcd09c695b135
Pull-request: https://github.com/zzzeek/sqlalchemy/pull/294
-rw-r--r--  doc/build/orm/persistence_techniques.rst  8
1 file changed, 8 insertions, 0 deletions
diff --git a/doc/build/orm/persistence_techniques.rst b/doc/build/orm/persistence_techniques.rst
index a30d486b5..06b8faff7 100644
--- a/doc/build/orm/persistence_techniques.rst
+++ b/doc/build/orm/persistence_techniques.rst
@@ -307,6 +307,14 @@ to this approach is strictly one of reduced Python overhead:
objects and assigning state to them, which normally is also subject to
expensive tracking of history on a per-attribute basis.
+* The set of objects passed to all bulk methods is processed
+  in the order it is received. In the case of
+  :meth:`.Session.bulk_save_objects`, when objects of different types are passed,
+  the INSERT and UPDATE statements are necessarily broken up into per-type
+  groups. In order to reduce the number of batch INSERT or UPDATE statements
+  passed to the DBAPI, ensure that the incoming list of objects
+  is grouped by type.
+
* The process of fetching primary keys after an INSERT also is disabled by
default. When performed correctly, INSERT statements can now more readily
be batched by the unit of work process into ``executemany()`` blocks, which
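
The grouping caveat added above can be sketched in plain Python. This is a
hypothetical illustration, not part of the patch: `User` and `Address` stand in
for arbitrary mapped classes, and `ordered_by_type` is an assumed helper a
caller might apply before handing the list to `Session.bulk_save_objects`.
Since a new batch must start each time the object type changes, sorting the
list so each type forms one contiguous run minimizes the number of statements
sent to the DBAPI.

```python
from itertools import groupby


class User:  # stand-in for a hypothetical mapped class
    pass


class Address:  # stand-in for another mapped class
    pass


def ordered_by_type(objects):
    """Sort a mixed list of objects so each type forms one contiguous run.

    bulk_save_objects must emit a separate INSERT/UPDATE batch each time
    the object type changes, so presenting the list grouped by type
    keeps the batch count at one per type.
    """
    return sorted(objects, key=lambda obj: type(obj).__name__)


def run_count(objects):
    """Count the contiguous per-type runs, i.e. the number of batches."""
    return len(list(groupby(objects, key=type)))


mixed = [User(), Address(), User(), Address(), User()]
grouped = ordered_by_type(mixed)

# The alternating list would produce five batches; the grouped list
# produces two, one per type.
print(run_count(mixed), run_count(grouped))
```

A caller would then pass the grouped list straight to the bulk method,
e.g. `session.bulk_save_objects(ordered_by_type(mixed))`.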