The cinder.volume.manager Module

The volume manager handles creating, attaching, and detaching persistent storage volumes.

Persistent storage volumes keep their state independent of instances. You can attach a volume to an instance, terminate the instance, spawn a new instance (even one created from a different image), and re-attach the volume with the same data intact.

Related Flags

volume_manager: The module name of a class derived from manager.Manager (default: cinder.volume.manager.Manager).
volume_driver: Used by Manager. Defaults to cinder.volume.drivers.lvm.LVMVolumeDriver.
volume_group: Name of the group that will contain exported volumes (default: cinder-volumes).
num_shell_tries: Number of times to attempt to run commands (default: 3).
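
These flags are ordinary oslo.config options. As a rough illustration of how options of this shape are declared and read (the definitions below are a sketch written for this page, not copied from cinder's source), consider:

from oslo_config import cfg

# Illustrative re-declaration of the related flags; the defaults mirror the
# documentation above, but the real definitions live in cinder itself.
volume_opts = [
    cfg.StrOpt('volume_driver',
               default='cinder.volume.drivers.lvm.LVMVolumeDriver',
               help='Driver used by the volume manager.'),
    cfg.StrOpt('volume_group',
               default='cinder-volumes',
               help='Name of the group that will contain exported volumes.'),
    cfg.IntOpt('num_shell_tries',
               default=3,
               help='Number of times to attempt to run commands.'),
]

CONF = cfg.CONF
CONF.register_opts(volume_opts)   # values can then be overridden in cinder.conf
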
class VolumeManager(volume_driver=None, service_name=None, *args, **kwargs)

Bases: cinder.manager.CleanableManager, cinder.manager.SchedulerDependentManager

Manages attachable block storage devices.

RPC_API_VERSION = '3.10'
accept_transfer(context, volume_id, new_user, new_project)
attach_volume(context, volume_id, instance_uuid, host_name, mountpoint, mode, volume=None)

Updates db to show volume is attached.
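
For example, a caller holding a security context might record an attachment like this (the context, volume, and instance objects are assumed to exist; host_name is left None because the attachment targets an instance rather than a bare host):

# Illustrative values only; ctx, volume and instance are assumed to exist.
volume_manager.attach_volume(
    ctx, volume.id, instance.uuid,
    host_name=None,          # attaching to an instance rather than a bare host
    mountpoint='/dev/vdb',
    mode='rw',               # 'ro' would request a read-only attachment (assumed)
)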

attachment_delete(context, attachment_id, vref)

Delete/Detach the specified attachment.

Notifies the backend device that we’re detaching the specified attachment instance.

Parameters:
  • vref – Volume object associated with the attachment
  • attachment_id – ID of the attachment record to remove

Note: if the attachment reference is None, all existing attachments for the specified volume object are removed.

attachment_update(context, vref, connector, attachment_id)

Update/Finalize an attachment.

This call updates a valid attachment record to associate it with a volume and provides the caller with the proper connection info. Note that this call requires an attachment_ref. It is expected that, prior to this call, the volume and an attachment UUID have been reserved.

Parameters:
  • vref – Volume object to create the attachment for
  • connector – Connector object to use for attachment creation
  • attachment_id – ID of the attachment record to update
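
A minimal sketch of finalizing a previously reserved attachment (the manager instance, context, volume object, and reserved attachment are assumed to come from the surrounding service code; only the method signature is taken from this page):

# The connector format matches the one documented under initialize_connection.
connector = {
    'ip': '192.0.2.10',
    'initiator': 'iqn.1993-08.org.debian:01:abc123',   # None if no iSCSI support
}
connection_info = volume_manager.attachment_update(
    ctx, vref, connector, attachment.id)
# connection_info is what the caller then uses to connect the host.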

copy_volume_to_image(context, volume_id, image_meta)

Uploads the specified volume to Glance.

image_meta is a dictionary containing the following keys: ‘id’, ‘container_format’, ‘disk_format’
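
For instance, a caller might build the dictionary like this (the container and disk formats shown are common Glance values chosen for illustration, not requirements of this method):

image_meta = {
    'id': image_id,                 # UUID of the pre-created Glance image
    'container_format': 'bare',
    'disk_format': 'raw',
}
volume_manager.copy_volume_to_image(ctx, volume_id, image_meta)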

create_cgsnapshot(context, cgsnapshot)

Creates the cgsnapshot.

create_consistencygroup(context, group)

Creates the consistency group.

create_consistencygroup_from_src(context, group, cgsnapshot=None, source_cg=None)

Creates the consistency group from source.

The source can be a CG snapshot or a source CG.

create_group(context, group)

Creates the group.

create_group_from_src(context, group, group_snapshot=None, source_group=None)

Creates the group from source.

The source can be a group snapshot or a source group.
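
In other words, the caller supplies one of the two sources (an assumption based on the documented signature; the objects below are placeholders):

# Seed the new group from a group snapshot ...
volume_manager.create_group_from_src(ctx, group, group_snapshot=snap)
# ... or clone it from an existing source group.
volume_manager.create_group_from_src(ctx, group, source_group=src_group)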

create_group_snapshot(context, group_snapshot)

Creates the group_snapshot.

create_snapshot(context, snapshot)

Creates and exports the snapshot.

create_volume(context, volume, request_spec=None, filter_properties=None, allow_reschedule=True)

Creates the volume.

delete_cgsnapshot(context, cgsnapshot)

Deletes cgsnapshot.

delete_consistencygroup(context, group)

Deletes consistency group and the volumes in the group.

delete_group(context, group)

Deletes group and the volumes in the group.

delete_group_snapshot(context, group_snapshot)

Deletes group_snapshot.

delete_snapshot(context, snapshot, unmanage_only=False)

Deletes and unexports snapshot.

delete_volume(context, volume, unmanage_only=False, cascade=False)

Deletes and unexports volume.

  1. Delete a volume (normal case): delete the volume and update quotas.
  2. Delete a migration volume: when the volume being deleted is part of a migration, quota updates are skipped, but the volume still needs its database updates.
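
A short sketch of how the optional flags change the behaviour (the semantics noted in the comments are assumptions about the flags, not taken verbatim from this page):

# Normal delete: remove the volume and release its quota.
volume_manager.delete_volume(ctx, volume)

# Cascade delete: also remove the volume's dependent snapshots (assumed).
volume_manager.delete_volume(ctx, volume, cascade=True)

# Unmanage only: stop tracking the volume in Cinder without touching the
# data on the backend (assumed).
volume_manager.delete_volume(ctx, volume, unmanage_only=True)
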
detach_volume(context, volume_id, attachment_id=None, volume=None)

Updates db to show volume is detached.

extend_volume(context, volume, new_size, reservations)
failover(context, secondary_backend_id=None)

Failover a backend to a secondary replication target.

Instructs a replication-capable/configured backend to fail over to one of its secondary replication targets. host=None is an acceptable input, and leaves it to the driver to fail over to the only configured target, or to choose a target on its own. All of the host's volumes will be passed on to the driver in order for it to determine the replicated volumes on the host, if needed.

Parameters:
  • context – security context
  • secondary_backend_id – Specifies backend_id to fail over to
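
For example, an administrator-triggered failover might look like the following (the backend ID is a placeholder; passing None leaves the choice to the driver, as described above):

# Fail over to a named secondary target ...
volume_manager.failover(ctx, secondary_backend_id='secondary-site')
# ... or let the driver pick/use its only configured target.
volume_manager.failover(ctx, secondary_backend_id=None)
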
failover_completed(context, updates)

Finalize failover of this backend.

When a service is clustered and replicated, the failover has two stages: one that fails over the volumes and another that finalizes the failover of the services themselves.

This method takes care of the last part and is called from the service doing the failover of the volumes after it has finished processing them.

failover_host(context, secondary_backend_id=None)

Failover a backend to a secondary replication target.

Instructs a replication-capable/configured backend to fail over to one of its secondary replication targets. host=None is an acceptable input, and leaves it to the driver to fail over to the only configured target, or to choose a target on its own. All of the host's volumes will be passed on to the driver in order for it to determine the replicated volumes on the host, if needed.

Parameters:
  • context – security context
  • secondary_backend_id – Specifies backend_id to fail over to
finish_failover(context, service, updates)

Completion of the failover locally or via RPC.

freeze_host(context)

Freeze management plane on this backend.

Basically puts the control/management plane into a read-only state. We should handle this in the scheduler; however, this is provided to let the driver know in case it needs or wants to do something specific on the backend.

Parameters: context – security context
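
A minimal sketch of the intended pairing with thaw_host during backend maintenance (the surrounding workflow is assumed):

# Put the backend's control/management plane into read-only mode ...
volume_manager.freeze_host(ctx)
# ... perform maintenance on the backend ...
# ... then return it to normal operation.
volume_manager.thaw_host(ctx)
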
get_backup_device(ctxt, backup, want_objects=False)
get_capabilities(context, discover)

Get capabilities of backend storage.

get_manageable_snapshots(ctxt, marker, limit, offset, sort_keys, sort_dirs, want_objects=False)
get_manageable_volumes(ctxt, marker, limit, offset, sort_keys, sort_dirs, want_objects=False)
init_host(added_to_cluster=None, **kwargs)

Perform any required initialization.

init_host_with_rpc()
initialize_connection(context, volume, connector)

Prepare volume for connection from host represented by connector.

This method calls the driver's initialize_connection and returns the result to the caller. The connector parameter is a dictionary with information about the host that will connect to the volume, in the following format:

{
    'ip': ip,
    'initiator': initiator,
}

ip: the IP address of the connecting machine

initiator: the iSCSI initiator name of the connecting machine. This can be None if the connecting machine does not support iSCSI connections.

The driver is responsible for doing any necessary security setup and returning a connection_info dictionary in the following format:

{
    'driver_volume_type': driver_volume_type,
    'data': data,
}
driver_volume_type: a string identifying the type of volume. This can be used by the calling code to determine the strategy for connecting to the volume. This could be 'iscsi', 'rbd', 'sheepdog', etc.

data: the data that the calling code will use to connect to the volume. Keep in mind that this will be serialized to JSON in various places, so it should not contain any non-JSON data types.
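
Putting it together, a call might look like this (the connector values and the shape of the returned data are illustrative only):

connector = {
    'ip': '192.0.2.5',
    'initiator': 'iqn.1993-08.org.debian:01:deadbeef',
}
conn_info = volume_manager.initialize_connection(ctx, volume, connector)
# conn_info resembles (contents depend on the driver):
# {
#     'driver_volume_type': 'iscsi',
#     'data': {...},   # JSON-serializable connection details
# }
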
is_working()

Return whether the Manager is ready to accept requests.

This informs the Service class that, in case of a volume driver initialization failure, the manager is actually down and not ready to accept any requests.
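
A sketch of a service-side readiness check built on this method (the surrounding service logic is assumed):

# Refuse work while the volume driver has not initialized successfully.
if not volume_manager.is_working():
    raise RuntimeError('volume manager is down; not accepting requests')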

manage_existing(ctxt, volume, ref=None)
manage_existing_snapshot(ctxt, snapshot, ref=None)
migrate_volume(ctxt, volume, host, force_host_copy=False, new_type_id=None)

Migrate the volume to the specified host (called on source host).

migrate_volume_completion(ctxt, volume, new_volume, error=False)
publish_service_capabilities(context)

Collect driver status and then publish.

remove_export(context, volume_id)

Removes an export for a volume.

retype(context, volume, new_type_id, host, migration_policy='never', reservations=None, old_reservations=None)
secure_file_operations_enabled(ctxt, volume)
target = <Target version=3.10>
terminate_connection(context, volume_id, connector, force=False)

Clean up the connection from the host represented by connector.

The format of connector is the same as for initialize_connection.

thaw_host(context)

Unfreeze the management plane on this backend.

Basically puts the control/management plane back into a normal state. We should handle this in the scheduler; however, this is provided to let the driver know in case it needs or wants to do something specific on the backend.

Parameters: context – security context
update_consistencygroup(context, group, add_volumes=None, remove_volumes=None)

Updates consistency group.

Update consistency group by adding volumes to the group, or removing volumes from the group.

update_group(context, group, add_volumes=None, remove_volumes=None)

Updates group.

Update group by adding volumes to the group, or removing volumes from the group.

update_migrated_volume(ctxt, volume, new_volume, volume_status)

Finalize migration process on backend device.