The oslo_db.sqlalchemy.test_migrations Module

class oslo_db.sqlalchemy.test_migrations.ModelsMigrationsSync

Bases: object

A helper class for comparison of DB migration scripts and models.

It’s intended to be inherited by test cases in target projects. They have to provide implementations for methods used internally in the test (as we have no way to implement them here).

test_models_sync() will run the migration scripts for the engine provided and then compare the given metadata to the schema reflected from the database. The difference between the models and the migration scripts will be printed and the test will fail if the difference is not empty. The printed difference is a list of actions that would have to be performed to make the current database schema state (i.e. the result of the migration scripts) consistent with the model definitions. It is left up to developers to analyze the output and decide whether the model definitions or the migration scripts should be modified to make them consistent.

Output:

[(
    'add_table',
    description of the table from models
),
(
    'remove_table',
    description of the table from database
),
(
    'add_column',
    schema,
    table name,
    column description from models
),
(
    'remove_column',
    schema,
    table name,
    column description from database
),
(
    'add_index',
    description of the index from models
),
(
    'remove_index',
    description of the index from database
),
(
    'add_constraint',
    description of constraint from models
),
(
    'remove_constraint',
    description of constraint from database
),
(
    'modify_nullable',
    schema,
    table name,
    column name,
    {
        'existing_type': type of the column from database,
        'existing_server_default': default value from database
    },
    nullable from database,
    nullable from models
),
(
    'modify_type',
    schema,
    table name,
    column name,
    {
        'existing_nullable': database nullable,
        'existing_server_default': default value from database
    },
    database column type,
    type of the column from models
),
(
    'modify_default',
    schema,
    table name,
    column name,
    {
        'existing_nullable': database nullable,
        'existing_type': type of the column from database
    },
    connection column default value,
    default from models
)]

The include_object() method can be overridden to exclude some tables from the comparison (e.g. the migrate_repo tables), as shown in the sketch below.
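
For illustration only, a minimal sketch of a project-side test case, assuming an alembic-based migration tree. The myproject package, its alembic.ini path and models.BASE are hypothetical placeholders; real projects typically run this against an opportunistic MySQL/PostgreSQL fixture rather than a throwaway SQLite file.

import os
import tempfile

import sqlalchemy as sa
from alembic import command as alembic_command
from alembic import config as alembic_config
from oslo_db.sqlalchemy import test_migrations
from oslotest import base as test_base

from myproject.db import models  # hypothetical models module


class TestModelsSync(test_migrations.ModelsMigrationsSync,
                     test_base.BaseTestCase):

    def setUp(self):
        super().setUp()
        # Real projects usually run against an opportunistic
        # MySQL/PostgreSQL database; a throwaway SQLite file keeps this
        # sketch self-contained.
        fd, self.db_path = tempfile.mkstemp(suffix='.sqlite')
        os.close(fd)
        self.addCleanup(os.unlink, self.db_path)
        self.engine = sa.create_engine('sqlite:///' + self.db_path)

    def db_sync(self, engine):
        # Run every migration script against the test database.
        cfg = alembic_config.Config('myproject/db/alembic.ini')  # hypothetical path
        cfg.set_main_option('sqlalchemy.url', str(engine.url))
        alembic_command.upgrade(cfg, 'head')

    def get_engine(self):
        return self.engine

    def get_metadata(self):
        # Metadata attached to the declarative base of the models.
        return models.BASE.metadata

    def include_object(self, object_, name, type_, reflected, compare_to):
        # Skip bookkeeping tables such as the legacy migrate version table.
        if type_ == 'table' and name == 'migrate_version':
            return False
        return super().include_object(
            object_, name, type_, reflected, compare_to)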

FKInfo

alias of fk_info

check_foreign_keys(metadata, bind)

Compare foreign keys between model and db table.

Returns:a list that contains information about:
  • whether a new key should be added or an existing one removed,
  • the name of that key,
  • the source table,
  • the referred table,
  • the constrained columns,
  • the referred columns

Output:

[('drop_key',
  'testtbl_fk_check_fkey',
  'testtbl',
  fk_info(constrained_columns=(u'fk_check',),
          referred_table=u'table',
          referred_columns=(u'fk_check',)))]

DEPRECATED: this function is deprecated and will be removed from oslo.db in a few releases. Alembic autogenerate.compare_metadata() now includes foreign key comparison directly.

compare_server_default(ctxt, ins_col, meta_col, insp_def, meta_def, rendered_meta_def)

Compare default values between model and db table.

Return True if the defaults are different, False if not, or None to allow the default implementation to compare these defaults.

Parameters:
  • ctxt – alembic MigrationContext instance
  • ins_col – reflected column
  • meta_col – column from model
  • insp_def – reflected column default value
  • meta_def – column default value from model
  • rendered_meta_def – rendered column default value (from model)
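
For illustration, a possible override of this hook (not part of oslo.db): it treats the quoted "0"/"1" strings that MySQL reflects for Boolean server defaults as matching rather than as a difference. The abstract methods are assumed to be implemented as in the earlier sketch.

import sqlalchemy as sa

from oslo_db.sqlalchemy import test_migrations


class BooleanDefaultAwareSync(test_migrations.ModelsMigrationsSync):
    # Only the hook is shown; db_sync(), get_engine() and get_metadata()
    # would be implemented as in the earlier sketch.

    def compare_server_default(self, ctxt, ins_col, meta_col,
                               insp_def, meta_def, rendered_meta_def):
        # MySQL reflects Boolean server defaults as quoted "0"/"1"
        # strings, which never textually match the rendered model
        # default; treat any reflected 0/1 default on a Boolean column
        # as matching instead of reporting a spurious difference.
        if isinstance(meta_col.type, sa.Boolean) and insp_def is not None:
            return insp_def.strip("'") not in ('0', '1')
        # Returning None defers to alembic's built-in comparison.
        return None
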
compare_type(ctxt, insp_col, meta_col, insp_type, meta_type)

Return True if types are different, False if not.

Return None to allow the default implementation to compare these types.

Parameters:
  • ctxt – alembic MigrationContext instance
  • insp_col – reflected column
  • meta_col – column from model
  • insp_type – reflected column type
  • meta_type – column type from model
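
Similarly, a sketch of a compare_type() override that treats a model-level Boolean as equivalent to the TINYINT(1) MySQL reflects it as; whether this is needed depends on the alembic version, and the abstract methods are again assumed to be implemented as in the earlier sketch.

import sqlalchemy as sa
from sqlalchemy.dialects import mysql

from oslo_db.sqlalchemy import test_migrations


class TypeAwareSync(test_migrations.ModelsMigrationsSync):
    # Only the hook is shown; the abstract methods would be implemented
    # as in the earlier sketch.

    def compare_type(self, ctxt, insp_col, meta_col, insp_type, meta_type):
        # MySQL has no native BOOLEAN type and reflects it as TINYINT(1);
        # do not report that as a type difference.
        if (isinstance(meta_type, sa.Boolean)
                and isinstance(insp_type, mysql.TINYINT)):
            return False
        # Returning None defers to alembic's built-in comparison.
        return None
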
db_sync(engine)

Run migration scripts with the given engine instance.

This method must be implemented in subclasses and must run the migration scripts against the database the given engine is connected to.

filter_metadata_diff(diff)

Filter changes before assert in test_models_sync().

Allow subclasses to whitelist/blacklist changes. By default, no filtering is performed and the changes are returned as is (see the sketch below).

Parameters:diff – a list of differences (see compare_metadata() docs for details on format)
Returns:a list of differences
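
As a sketch of what such filtering might look like (the class and its policy are illustrative, not part of oslo.db), the following drops index differences so that index drift does not fail the test while everything else is kept:

from oslo_db.sqlalchemy import test_migrations


class IndexTolerantSync(test_migrations.ModelsMigrationsSync):
    # Only the hook is shown; the abstract methods would be implemented
    # as in the earlier sketch.

    def filter_metadata_diff(self, diff):
        # Top-level entries are either ('add_index', Index) style tuples
        # or lists of 'modify_*' tuples; the latter fail the string
        # comparison below and are therefore kept untouched.
        return [entry for entry in diff
                if entry[0] not in ('add_index', 'remove_index')]
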
get_engine()

Return the engine instance to be used when running tests.

This method must be implemented in subclasses and return an engine instance to be used when running tests.

get_metadata()

Return the metadata instance to be used for schema comparison.

This method must be implemented in subclasses and return the metadata instance attached to the BASE model.

include_object(object_, name, type_, reflected, compare_to)

Return True for objects that should be compared.

Parameters:
  • object_ – a SchemaItem object such as a Table or Column object
  • name – the name of the object
  • type_ – a string describing the type of object (e.g. “table”)
  • reflected – True if the given object was produced based on table reflection, False if it’s from a local MetaData object
  • compare_to – the object being compared against, if available, else None
test_models_sync()

Run the migration scripts via db_sync() and compare the resulting database schema with the metadata returned by get_metadata(), as described in the class description above.

class oslo_db.sqlalchemy.test_migrations.WalkVersionsMixin

Bases: object

Test mixin to check the upgrade and downgrade ability of migrations.

This mixin is only suitable for testing sqlalchemy-migrate migration scripts. It is an abstract mixin: the INIT_VERSION, REPOSITORY and migration_api attributes must be implemented in subclasses.

Auxiliary Methods:

The migrate_up and migrate_down instance methods of the class can be used together with auxiliary methods named _pre_upgrade_<revision_id>, _check_<revision_id> and _post_downgrade_<revision_id>. These methods are intended to verify that the applied changes handle data correctly and should be implemented for every revision you want to check with data (see the sketch after the execution-order summary below). Implementation recommendations for _pre_upgrade_<revision_id>, _check_<revision_id> and _post_downgrade_<revision_id>:

  • _pre_upgrade_<revision_id>: provide data appropriate for the
    next revision. The id of the revision that is about to be applied should be used.
  • _check_<revision_id>: perform insert, select and delete operations
    against the newly applied changes. The data provided by _pre_upgrade_<revision_id> will be used.
  • _post_downgrade_<revision_id>: check for the absence (inability to use) of the changes introduced by the reverted revision.

Execution order of auxiliary methods when revision is upgrading:

_pre_upgrade_### => upgrade => _check_###

Execution order of auxiliary methods when revision is downgrading:

downgrade => _post_downgrade_###
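
An illustrative sketch of these auxiliary methods for a hypothetical sqlalchemy-migrate revision 043 that adds a widgets table with a name column; the revision id, the table and the TestWalkMigrations base class are placeholders, not part of oslo.db.

import sqlalchemy as sa


class TestWalkMigrations(MyWalkVersionsTestCase):  # hypothetical base class

    def _pre_upgrade_043(self, engine):
        # Seed data needed to verify the upcoming revision; the return
        # value is handed to _check_043() as ``data``.
        return {'name': 'sample-widget'}

    def _check_043(self, engine, data):
        # Exercise the newly added table with the seeded data.
        widgets = sa.Table('widgets', sa.MetaData(), autoload_with=engine)
        with engine.begin() as conn:
            conn.execute(widgets.insert().values(name=data['name']))
            names = [row.name for row in conn.execute(widgets.select())]
        self.assertIn(data['name'], names)

    def _post_downgrade_043(self, engine):
        # After the downgrade the table must be gone again.
        self.assertNotIn('widgets', sa.inspect(engine).get_table_names())
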
INIT_VERSION

Initial version of a migration repository.

Can be different from 0 if migrations were squashed.

Return type:int
REPOSITORY

Allows basic manipulation of the migration repository.

Returns:migrate.versioning.repository.Repository subclass.
migrate_down(version, with_data=False)

Migrate down to a previous version of the db.

Parameters:
  • version (str) – id of the revision to downgrade to.
  • with_data (bool) – Whether to verify the absence of changes from the migration(s) being downgraded, see auxiliary-dynamic-methods.
migrate_engine

Provides engine instance.

Should be the same instance as used when migrations are applied. In most cases, the engine attribute provided by the test class in a setUp method will work.

Example of implementation:

    def migrate_engine(self):
        return self.engine

Returns:sqlalchemy engine instance
migrate_up(version, with_data=False)

Migrate up to a new version of the db.

Parameters:
  • version (str) – id of the revision to upgrade to.
  • with_data (bool) – Whether to verify the applied changes with data, see auxiliary-dynamic-methods.
migration_api

Provides an API for upgrading, downgrading and version manipulation.

Returns:migrate.api or overloaded analog.
walk_versions(snake_walk=False, downgrade=True)

Check if migration upgrades and downgrades successfully.

Determine the latest version script from the repo, then upgrade from 1 through to the latest, with no data in the databases. This just checks that the schema itself upgrades successfully.

walk_versions calls migrate_up and migrate_down with the with_data argument to check the changes with data, but these methods can also be called outside of walk_versions without any extra checks.

Parameters:
  • snake_walk (bool) –

    enables checking that each individual migration can be upgraded/downgraded by itself.

    If we have ordered migrations 123abc, 456def, 789ghi and we run upgrading with the snake_walk argument set to True, the migrations will be applied in the following order:

    `123abc => 456def => 123abc =>
     456def => 789ghi => 456def => 789ghi`
    
  • downgrade (bool) – Check downgrade behavior if True.
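
Finally, a minimal sketch of wiring up the mixin, assuming a legacy sqlalchemy-migrate repository at the hypothetical path myproject/db/migrate_repo and the plain migrate.versioning API; real projects usually run this against opportunistic MySQL/PostgreSQL databases rather than in-memory SQLite.

import os

import sqlalchemy as sa
from migrate.versioning import api as versioning_api
from migrate.versioning import repository
from oslotest import base as test_base

from oslo_db.sqlalchemy import test_migrations


class TestWalkMigrations(test_migrations.WalkVersionsMixin,
                         test_base.BaseTestCase):

    INIT_VERSION = 0

    @property
    def REPOSITORY(self):
        # Hypothetical path to the sqlalchemy-migrate repository.
        return repository.Repository(
            os.path.abspath('myproject/db/migrate_repo'))

    @property
    def migration_api(self):
        # Anything exposing upgrade()/downgrade()/db_version()/
        # version_control() with compatible signatures will do.
        return versioning_api

    @property
    def migrate_engine(self):
        return self.engine

    def setUp(self):
        super().setUp()
        self.engine = sa.create_engine('sqlite://')

    def test_walk_versions(self):
        # Upgrade (and downgrade) through every revision, exercising
        # each one individually because snake_walk is True.
        self.walk_versions(snake_walk=True, downgrade=True)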