Auto Generating Migrations
===========================

Alembic can view the status of the database and compare against the table metadata
in the application, generating the "obvious" migrations based on a comparison.  This
is achieved using the ``--autogenerate`` option to the ``alembic revision`` command,
which places so-called *candidate* migrations into our new migrations file.  We
review and modify these by hand as needed, then proceed normally.

To use autogenerate, we first need to modify our ``env.py`` so that it gets access
to a table metadata object that contains the target.  Suppose our application
has a :ref:`declarative base <sqla:declarative_toplevel>`
in ``myapp.mymodel``.  This base contains a :class:`~sqlalchemy.schema.MetaData` object which
contains :class:`~sqlalchemy.schema.Table` objects defining our database.  We make sure this
is loaded in ``env.py`` and then passed to :meth:`.EnvironmentContext.configure` via the
``target_metadata`` argument.   The ``env.py`` sample script used in the
generic template already has a
variable declaration near the top for our convenience, where we replace ``None``
with our :class:`~sqlalchemy.schema.MetaData`.  Starting with::

    # add your model's MetaData object here
    # for 'autogenerate' support
    # from myapp import mymodel
    # target_metadata = mymodel.Base.metadata
    target_metadata = None

we change to::

    from myapp.mymodel import Base
    target_metadata = Base.metadata

.. note::

  The above example refers to the **generic alembic env.py template**, e.g.
  the one created by default when calling upon ``alembic init``, and not
  the special-use templates such as ``multidb``.   Please consult the source
  code and comments within the ``env.py`` script directly for specific
  guidance on where and how the autogenerate metadata is established.

If we look later in the script, down in ``run_migrations_online()``,
we can see the directive passed to :meth:`.EnvironmentContext.configure`::

    def run_migrations_online():
        engine = engine_from_config(
                    config.get_section(config.config_ini_section), prefix='sqlalchemy.')

        with engine.connect() as connection:
            context.configure(
                        connection=connection,
                        target_metadata=target_metadata
                        )

            with context.begin_transaction():
                context.run_migrations()

We can then use the ``alembic revision`` command in conjunction with the
``--autogenerate`` option.  Suppose
our :class:`~sqlalchemy.schema.MetaData` contained a definition for the ``account`` table,
and the database did not.  We'd get output like::

    $ alembic revision --autogenerate -m "Added account table"
    INFO [alembic.context] Detected added table 'account'
    Generating /path/to/foo/alembic/versions/27c6a30d7c24.py...done

We can then view our file ``27c6a30d7c24.py`` and see that a rudimentary migration
is already present::

    """empty message

    Revision ID: 27c6a30d7c24
    Revises: None
    Create Date: 2011-11-08 11:40:27.089406

    """

    # revision identifiers, used by Alembic.
    revision = '27c6a30d7c24'
    down_revision = None

    from alembic import op
    import sqlalchemy as sa

    def upgrade():
        ### commands auto generated by Alembic - please adjust! ###
        op.create_table(
        'account',
        sa.Column('id', sa.Integer()),
        sa.Column('name', sa.String(length=50), nullable=False),
        sa.Column('description', sa.VARCHAR(200)),
        sa.Column('last_transaction_date', sa.DateTime()),
        sa.PrimaryKeyConstraint('id')
        )
        ### end Alembic commands ###

    def downgrade():
        ### commands auto generated by Alembic - please adjust! ###
        op.drop_table("account")
        ### end Alembic commands ###

The migration hasn't actually run yet, of course.  We do that via the usual ``upgrade``
command.   We should also go into our migration file and alter it as needed, including
adjustments to the directives as well as the addition of other directives on which these
may depend - specifically data changes in between creates/alters/drops.
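
For example, a hand-edited version of the migration might interleave a data
change with the schema directives.  The following is a sketch only; the
``account`` table layout follows the example above, but the seed row is
hypothetical::

    from alembic import op
    import sqlalchemy as sa

    def upgrade():
        # op.create_table() returns the Table object, which can be
        # used for subsequent data operations
        account = op.create_table(
            'account',
            sa.Column('id', sa.Integer(), primary_key=True),
            sa.Column('name', sa.String(length=50), nullable=False),
        )
        # hand-added data change in between the schema directives
        op.bulk_insert(
            account,
            [{'id': 1, 'name': 'default_account'}],
        )

    def downgrade():
        op.drop_table('account')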

.. _autogenerate_detects:

What does Autogenerate Detect (and what does it *not* detect?)
--------------------------------------------------------------

The vast majority of user issues with Alembic center on the topic of what
kinds of changes autogenerate can and cannot detect reliably, as well as
how it renders Python code for what it does detect.     It is critical to
note that **autogenerate is not intended to be perfect**.   It is *always*
necessary to manually review and correct the **candidate migrations**
that autogenerate produces.   The feature is getting more and more
comprehensive and error-free as releases continue, but one should take
note of the current limitations.

Autogenerate **will detect**:

* Table additions, removals.
* Column additions, removals.
* Change of nullable status on columns.
* Basic changes in indexes and explicitly-named unique constraints

.. versionadded:: 0.6.1 Support for autogenerate of indexes and unique constraints.

* Basic changes in foreign key constraints

.. versionadded:: 0.7.1 Support for autogenerate of foreign key constraints.

Autogenerate can **optionally detect**:

* Change of column type.  This will occur if you set
  the :paramref:`.EnvironmentContext.configure.compare_type` parameter to
  ``True``.   The default implementation will reliably detect major changes,
  such as between :class:`.Numeric` and :class:`.String`, as well as
  accommodate for the types generated by SQLAlchemy's "generic" types such as
  :class:`.Boolean`.   Arguments that are shared between both types, such as
  length and precision values, will also be compared.   If either the metadata
  type or database type has **additional** arguments beyond those of the other
  type, these are **not** compared; for example, if one numeric type featured a
  "scale" and the other did not, this would be seen as the backing database
  not supporting the value, or reporting a default that the metadata did not
  specify.

  The type comparison logic is fully extensible as well; see
  :ref:`compare_types` for details.

  .. versionchanged:: 1.4 type comparison code has been reworked such that
     column types are compared based on their rendered DDL, which should allow
     the functionality enabled by
     :paramref:`.EnvironmentContext.configure.compare_type`
     to be much more accurate, correctly accounting for the behavior of
     SQLAlchemy "generic" types as well as major arguments specified within
     types.

* Change of server default.  This will occur if you set
  the :paramref:`.EnvironmentContext.configure.compare_server_default`
  parameter to ``True``, or to a custom callable function.
  This feature works well for simple cases but cannot always produce
  accurate results.  The Postgresql backend will actually invoke
  the "detected" and "metadata" values against the database to
  determine equivalence.  The feature is off by default so that
  it can be tested on the target schema first.  Like type comparison,
  it can also be customized by passing a callable; see the
  function's documentation for details.
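
As a sketch of such a callable: the signature below follows the documented
hook, while the quote-stripping normalization is purely an illustrative
assumption, not Alembic's built-in behavior::

    def my_compare_server_default(context, inspected_column, metadata_column,
                                  inspected_default, metadata_default,
                                  rendered_metadata_default):
        # returning None falls back to the default comparison
        if inspected_default is None or rendered_metadata_default is None:
            return None
        # returning True means "a difference was detected"; strip
        # quoting differences before comparing the two defaults
        return (inspected_default.strip("'") !=
                rendered_metadata_default.strip("'"))

This callable would then be passed as
``compare_server_default=my_compare_server_default`` to
:meth:`.EnvironmentContext.configure`.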

Autogenerate **can not detect**:

* Changes of table name.   These will come out as an add/drop of two different
  tables, and should be hand-edited into a name change instead.
* Changes of column name.  Like table name changes, these are detected as
  a column add/drop pair, which is not at all the same as a name change.
* Anonymously named constraints.  Give your constraints a name,
  e.g. ``UniqueConstraint('col1', 'col2', name="my_name")``.  See the section
  :doc:`naming` for background on how to configure automatic naming schemes
  for constraints.
* Special SQLAlchemy types such as :class:`~sqlalchemy.types.Enum` when generated
  on a backend which doesn't support ENUM directly - this is because the
  representation of such a type
  in the non-supporting database, i.e. a CHAR + CHECK constraint, could be
  any kind of CHAR + CHECK.  For SQLAlchemy to determine that this is actually
  an ENUM would only be a guess, something that's generally a bad idea.
  To implement your own "guessing" function here, use the
  :meth:`sqlalchemy.events.DDLEvents.column_reflect` event
  to detect when a CHAR (or whatever the target type is) is reflected,
  and change it to an ENUM (or whatever type is desired) if it is known that
  that's the intent of the type.  The
  :meth:`sqlalchemy.events.DDLEvents.after_parent_attach` event
  can be used within the autogenerate process to intercept and un-attach
  unwanted CHECK constraints.
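
When a rename has occurred, the candidate add/drop pair can be hand-edited
into a rename using ``op.alter_column()``; a sketch, using hypothetical table
and column names::

    from alembic import op

    def upgrade():
        # replace the generated add_column/drop_column pair with a rename
        op.alter_column('account', 'name', new_column_name='full_name')

    def downgrade():
        op.alter_column('account', 'full_name', new_column_name='name')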

Autogenerate can't currently, but **will eventually detect**:

* Some free-standing constraint additions and removals may not be supported,
  including PRIMARY KEY, EXCLUDE, CHECK; these are not necessarily implemented
  within the autogenerate detection system and also may not be supported by
  the underlying SQLAlchemy dialect.
* Sequence additions, removals - not yet implemented.

Autogenerating Multiple MetaData collections
--------------------------------------------

The ``target_metadata`` collection may also be defined as a sequence
if an application has multiple :class:`~sqlalchemy.schema.MetaData`
collections involved::

    from myapp.mymodel1 import Model1Base
    from myapp.mymodel2 import Model2Base
    target_metadata = [Model1Base.metadata, Model2Base.metadata]

The sequence of :class:`~sqlalchemy.schema.MetaData` collections will be
consulted in order during the autogenerate process.  Note that each
:class:`~sqlalchemy.schema.MetaData` must contain **unique** table keys
(e.g. the "key" is the combination of the table's name and schema);
if two :class:`~sqlalchemy.schema.MetaData` objects contain a table
with the same schema/name combination, an error is raised.
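
As an illustrative sketch (not an Alembic API), the uniqueness requirement can
be checked up front by scanning the table keys of each collection::

    import sqlalchemy as sa

    def assert_unique_table_keys(metadatas):
        """Raise if a table key (schema + name) appears in more than one
        MetaData collection."""
        seen = set()
        for md in metadatas:
            for key in md.tables:
                if key in seen:
                    raise ValueError("duplicate table key: %r" % key)
                seen.add(key)

    m1, m2 = sa.MetaData(), sa.MetaData()
    sa.Table("account", m1, sa.Column("id", sa.Integer, primary_key=True))
    sa.Table("order", m2, sa.Column("id", sa.Integer, primary_key=True))
    assert_unique_table_keys([m1, m2])  # distinct keys; no error raised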

.. versionchanged:: 0.9.0 the
  :paramref:`.EnvironmentContext.configure.target_metadata`
  parameter may now be passed a sequence of
  :class:`~sqlalchemy.schema.MetaData` objects to support
  autogeneration of multiple :class:`~sqlalchemy.schema.MetaData`
  collections.

Comparing and Rendering Types
------------------------------

Autogenerate's behavior of comparing and rendering Python-based type objects
in migration scripts presents a challenge, in that a very wide variety of
types may need to be rendered in scripts, including those that are part of
SQLAlchemy as well as user-defined types.   A few options
are given to help out with this task.

.. _autogen_module_prefix:

Controlling the Module Prefix
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

When types are rendered, they are generated with a **module prefix**, so
that they are available based on a relatively small number of imports.
The rules for determining the prefix are based on the kind of datatype as well
as configurational settings.   For example, when Alembic renders SQLAlchemy
types, it will by default prefix the type name with the prefix ``sa.``::

    Column("my_column", sa.Integer())

The use of the ``sa.`` prefix is controllable by altering the value
of :paramref:`.EnvironmentContext.configure.sqlalchemy_module_prefix`::

    def run_migrations_online():
        # ...

        context.configure(
                    connection=connection,
                    target_metadata=target_metadata,
                    sqlalchemy_module_prefix="sqla.",
                    # ...
                    )

        # ...

In either case, the ``sa.`` prefix, or whatever prefix is desired, should
also be included in the imports section of ``script.py.mako``; that file
also defaults to ``import sqlalchemy as sa``.


For user-defined types, that is, any custom type that
is not within the ``sqlalchemy.`` module namespace, by default Alembic will
use the **value of __module__ for the custom type**::

    Column("my_column", myapp.models.utils.types.MyCustomType())

The imports for the above type again must be made present within the migration,
either manually, or by adding it to ``script.py.mako``.

.. versionchanged:: 0.7.0
   The default module prefix rendering for a user-defined type now makes use
   of the type's ``__module__`` attribute to retrieve the prefix, rather than
   using the value of
   :paramref:`~.EnvironmentContext.configure.sqlalchemy_module_prefix`.


The above custom type has a long and cumbersome name based on the use
of ``__module__`` directly, which also implies that lots of imports would
be needed in order to accommodate lots of types.  For this reason, it is
recommended that user-defined types used in migration scripts be made
available from a single module.  Suppose we call it ``myapp.migration_types``::

    # myapp/migration_types.py

    from myapp.models.utils.types import MyCustomType

We can first add an import for ``migration_types`` to our ``script.py.mako``::

    from alembic import op
    import sqlalchemy as sa
    import myapp.migration_types
    ${imports if imports else ""}

We then override Alembic's use of ``__module__`` by providing a fixed
prefix, using the :paramref:`.EnvironmentContext.configure.user_module_prefix`
option::

    def run_migrations_online():
        # ...

        context.configure(
                    connection=connection,
                    target_metadata=target_metadata,
                    user_module_prefix="myapp.migration_types.",
                    # ...
                    )

        # ...

Above, we now would get a migration like::

  Column("my_column", myapp.migration_types.MyCustomType())

Now, when we inevitably refactor our application to move ``MyCustomType``
somewhere else, we need only modify the ``myapp.migration_types`` module,
instead of searching and replacing all instances within our migration scripts.

.. versionadded:: 0.6.3 Added :paramref:`.EnvironmentContext.configure.user_module_prefix`.

.. _autogen_render_types:

Affecting the Rendering of Types Themselves
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

The methodology Alembic uses to generate SQLAlchemy and user-defined type constructs
as Python code is plain old ``__repr__()``.   SQLAlchemy's built-in types
for the most part have a ``__repr__()`` that faithfully renders a
Python-compatible constructor call, but there are some exceptions, particularly
in those cases when a constructor accepts arguments that aren't compatible
with ``__repr__()``, such as a pickling function.
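
As an illustration, here is a minimal custom type sketch whose ``__repr__()``
renders a faithful constructor call; the type and its ``precision`` argument
are hypothetical::

    import sqlalchemy as sa

    class MyCustomType(sa.types.TypeDecorator):
        """A custom type whose __repr__ reproduces its constructor call."""

        impl = sa.String
        cache_ok = True

        def __init__(self, precision=2):
            self.precision = precision
            super().__init__()

        def __repr__(self):
            # render a Python-evaluable constructor call, suitable for
            # inclusion in a migration script
            return "MyCustomType(precision=%r)" % self.precision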

When building a custom type that will be rendered into a migration script,
it is often necessary to explicitly give the type a ``__repr__()`` that will
faithfully reproduce the constructor for that type.   This, in combination
with :paramref:`.EnvironmentContext.configure.user_module_prefix`, is usually
enough.  However, if additional behaviors are needed, a more comprehensive
hook is the :paramref:`.EnvironmentContext.configure.render_item` option.
This hook allows one to provide a callable function within ``env.py`` that will fully take
over how a type is rendered, including its module prefix::

    def render_item(type_, obj, autogen_context):
        """Apply custom rendering for selected items."""

        if type_ == 'type' and isinstance(obj, MySpecialType):
            return "mypackage.%r" % obj

        # default rendering for other objects
        return False

    def run_migrations_online():
        # ...

        context.configure(
                    connection=connection,
                    target_metadata=target_metadata,
                    render_item=render_item,
                    # ...
                    )

        # ...

In the above example, we'd ensure our ``MySpecialType`` includes an appropriate
``__repr__()`` method, which is invoked via the ``%r`` format specifier.

The callable we use for :paramref:`.EnvironmentContext.configure.render_item`
can also add imports to our migration script.  The :class:`.AutogenContext` passed in
contains a data member called :attr:`.AutogenContext.imports`, which is a Python
``set()`` to which we can add new imports.  For example, if ``MySpecialType``
were in a module called ``mymodel.types``, we can add the import for it
as we encounter the type::

    def render_item(type_, obj, autogen_context):
        """Apply custom rendering for selected items."""

        if type_ == 'type' and isinstance(obj, MySpecialType):
            # add import for this type
            autogen_context.imports.add("from mymodel import types")
            return "types.%r" % obj

        # default rendering for other objects
        return False

.. versionchanged:: 0.8 The ``autogen_context`` data member passed to
   the ``render_item`` callable is now an instance of :class:`.AutogenContext`.

.. versionchanged:: 0.8.3 The "imports" data member of the autogen context
   is restored to the new :class:`.AutogenContext` object as
   :attr:`.AutogenContext.imports`.

The finished migration script will include our imports where the
``${imports}`` expression is used, producing output such as::

  from alembic import op
  import sqlalchemy as sa
  from mymodel import types

  def upgrade():
      op.add_column('sometable', Column('mycolumn', types.MySpecialType()))


.. _compare_types:

Comparing Types
^^^^^^^^^^^^^^^^

The default type comparison logic will work for SQLAlchemy built-in types as
well as basic user-defined types.   This logic is only enabled if the
:paramref:`.EnvironmentContext.configure.compare_type` parameter
is set to ``True``::

    context.configure(
        # ...
        compare_type = True
    )

.. note::

   The default type comparison logic (which is end-user extensible) currently
   (as of Alembic version 1.4.0) works by comparing the generated SQL for a
   column. It does this in two steps:

   * First, it compares the outer type of each column such as ``VARCHAR``
     or ``TEXT``. Dialect implementations can have synonyms that are considered
     equivalent; this is because some databases support types by converting them
     to another type. For example, NUMERIC and DECIMAL are considered equivalent
     on all backends, while on the Oracle backend the additional synonyms
     BIGINT, INTEGER, NUMBER, SMALLINT are added to this list of equivalents.

   * Next, the arguments within the type, such as the lengths of
     strings, precision values for numerics, the elements inside of an
     enumeration are compared. If BOTH columns have arguments AND they are
     different, a change will be detected. If one column is just set to the
     default and the other has arguments, Alembic will pass on attempting to
     compare these. The rationale is that it is difficult to detect what a
     database backend sets as a default value without generating false
     positives.

.. versionchanged:: 1.4.0 Added the text and keyword comparison for column types.

Alternatively, the :paramref:`.EnvironmentContext.configure.compare_type`
parameter accepts a callable function which may be used to implement custom type
comparison logic, for cases such as where special user defined types
are being used::

    def my_compare_type(context, inspected_column,
                metadata_column, inspected_type, metadata_type):
        # return False if the metadata_type is the same as the inspected_type
        # or None to allow the default implementation to compare these
        # types. a return value of True means the two types do not
        # match and should result in a type change operation.
        return None

    context.configure(
        # ...
        compare_type = my_compare_type
    )

Above, ``inspected_column`` is a :class:`sqlalchemy.schema.Column` as
returned by
:meth:`sqlalchemy.engine.reflection.Inspector.reflect_table`, whereas
``metadata_column`` is a :class:`sqlalchemy.schema.Column` from the
local model environment.  A return value of ``None`` indicates that default
type comparison should proceed.

Additionally, custom types that are part of imported or third-party
packages which have special behaviors such as per-dialect behavior
should implement a method called ``compare_against_backend()``
on their SQLAlchemy type.   If this method is present, it will be called,
and may return ``True`` or ``False`` to specify whether the types compare as
equivalent or not; if it returns ``None``, default type comparison logic
will proceed::

    class MySpecialType(TypeDecorator):

        # ...

        def compare_against_backend(self, dialect, conn_type):
            # return True if this type is the same as the given database type,
            # or None to allow the default implementation to compare these
            # types. a return value of False means the given type does not
            # match this type.

            if dialect.name == 'postgresql':
                return isinstance(conn_type, postgresql.UUID)
            else:
                return isinstance(conn_type, String)

.. warning::

    The boolean return values for the above
    ``compare_against_backend`` method, which is part of SQLAlchemy and not
    Alembic, are **the opposite** of those of the
    :paramref:`.EnvironmentContext.configure.compare_type` callable, returning
    ``True`` for types that are the same vs. ``False`` for types that are
    different.  The :paramref:`.EnvironmentContext.configure.compare_type`
    callable on the other hand should return ``True`` for types that are
    **different**.

The order of precedence regarding the
:paramref:`.EnvironmentContext.configure.compare_type` callable vs. the
type itself implementing ``compare_against_backend`` is that the
:paramref:`.EnvironmentContext.configure.compare_type` callable is favored
first; if it returns ``None``, then the ``compare_against_backend`` method
will be used, if present on the metadata type.  If that returns ``None``,
then a basic check for type equivalence is run.

.. versionadded:: 0.7.6 - added support for the ``compare_against_backend()``
   method.

.. versionadded:: 1.4.0 - added column keyword comparisons and the
   ``type_synonyms`` property.


.. _post_write_hooks:

Applying Post Processing and Python Code Formatters to Generated Revisions
---------------------------------------------------------------------------

Revision scripts generated by the ``alembic revision`` command can optionally
be piped through a series of post-production functions which may analyze or
rewrite Python source code generated by Alembic, within the scope of running
the ``revision`` command.   The primary intended use of this feature is to run
code-formatting tools such as `Black <https://black.readthedocs.io/>`_ or
`autopep8 <https://pypi.org/project/autopep8/>`_, as well as custom-written
formatting and linter functions, on revision files as Alembic generates them.
Any number of hooks can be configured and they will be run in series, given the
path to the newly generated file as well as configuration options.

The post write hooks, when configured, run against generated revision files
regardless of whether or not the autogenerate feature was used.

.. versionadded:: 1.2

.. note::

    Alembic's post write system is partially inspired by the `pre-commit
    <https://pre-commit.com/>`_ tool, which configures git hooks that reformat
    source files as they are committed to a git repository.  Pre-commit can
    serve this role for Alembic revision files as well, applying code
    formatters to them as they are committed.  Alembic's post write hooks are
    useful only in that they can format the files immediately upon generation,
    rather than at commit time, and also can be useful for projects that prefer
    not to use pre-commit.


Basic Formatter Configuration
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

The ``alembic.ini`` samples now include commented-out configuration
illustrating how to configure code-formatting tools to run against the newly
generated file path.    Example::

  [post_write_hooks]

  # format using "black"
  hooks=black

  black.type=console_scripts
  black.entrypoint=black
  black.options=-l 79

Above, we configure a single post write hook that we call ``"black"``. Note
that this name is arbitrary.  We then define the configuration for the
``"black"`` post write hook, which includes:

* ``type`` - this is the type of hook we are running.   Alembic includes
  a hook runner called ``"console_scripts"``, which is specifically a
  Python function that uses ``subprocess.run()`` to invoke a separate
  Python script against the revision file.    For a custom-written hook
  function, this configuration variable would refer to the name under
  which the custom hook was registered; see the next section for an example.

* ``entrypoint`` - this part of the configuration is specific to the
  ``"console_scripts"`` hook runner.  This is the name of the `setuptools entrypoint <https://setuptools.readthedocs.io/en/latest/pkg_resources.html#entry-points>`_
  that is used to define the console script.    Within the scope of standard
  Python console scripts, this name will match the name of the shell command
  that is usually run for the code formatting tool, in this case ``black``.

* ``options`` - this is also specific to the ``"console_scripts"`` hook runner.
  This is a line of command-line options that will be passed to the
  code formatting tool.  In this case, we want to run the command
  as ``black -l 79 /path/to/revision.py``.   The path of the revision file
  is sent as a single positional argument to the script after the options.

  .. note:: Make sure options for the script are provided such that it will
     rewrite the input file **in place**.  For example, when running
     ``autopep8``, the ``--in-place`` option should be provided::

        [post_write_hooks]
        hooks=autopep8
        autopep8.type=console_scripts
        autopep8.entrypoint=autopep8
        autopep8.options=--in-place


When running ``alembic revision -m "rev1"``, we will now see the ``black``
tool's output as well::

  $ alembic revision -m "rev1"
    Generating /path/to/project/versions/481b13bc369a_rev1.py ... done
    Running post write hook "black" ...
  reformatted /path/to/project/versions/481b13bc369a_rev1.py
  All done! ✨ 🍰 ✨
  1 file reformatted.
    done

Hooks may also be specified as a list of names, which correspond to hook
runners that will run sequentially.  As an example, we can also run the
`zimports <https://pypi.org/project/zimports/>`_ import rewriting tool (written
by Alembic's author) subsequent to running the ``black`` tool, using a
configuration as follows::

  [post_write_hooks]

  # format using "black", then "zimports"
  hooks=black, zimports

  black.type=console_scripts
  black.entrypoint=black
  black.options=-l 79

  zimports.type=console_scripts
  zimports.entrypoint=zimports
  zimports.options=--style google

When using the above configuration, a newly generated revision file will
be processed first by the "black" tool, then by the "zimports" tool.

.. _post_write_hooks_custom:

Writing Custom Hooks as Python Functions
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

The previous section illustrated how to run command-line code formatters,
through the use of a post write hook provided by Alembic known as
``console_scripts``.  This hook is in fact a Python function that is registered
under that name using a registration function that may be used to register
other types of hooks as well.

To illustrate, we will use the example of a short Python function that wants
to rewrite the generated code to use tabs instead of four spaces.   For simplicity,
we will illustrate how this function can be present directly in the ``env.py``
file.   The function is declared and registered using the :func:`.write_hooks.register`
decorator::

    from alembic.script import write_hooks
    import re

    @write_hooks.register("spaces_to_tabs")
    def convert_spaces_to_tabs(filename, options):
        lines = []
        with open(filename) as file_:
            for line in file_:
                lines.append(
                    re.sub(
                        r"^(    )+",
                        lambda m: "\t" * (len(m.group(1)) // 4),
                        line
                    )
                )
        with open(filename, "w") as to_write:
            to_write.write("".join(lines))

Our new ``"spaces_to_tabs"`` hook can be configured in alembic.ini as follows::

  [alembic]

  # ...

  # ensure the revision command loads env.py
  revision_environment = true

  [post_write_hooks]

  hooks=spaces_to_tabs

  spaces_to_tabs.type=spaces_to_tabs


When ``alembic revision`` is run, the ``env.py`` file will be loaded in all
cases; the custom "spaces_to_tabs" function will be registered and will then
be run against the newly generated file path::

  $ alembic revision -m "rev1"
    Generating /path/to/project/versions/481b13bc369a_rev1.py ... done
    Running post write hook "spaces_to_tabs" ...
    done