Restart an OpenStack service

Troubleshooting an OpenStack service usually requires restarting it. To restart an OpenStack service, complete the steps described in the following table on all controller nodes unless indicated otherwise.

Caution

Before restarting a service on the next controller node, verify that the service is up and running on the node where you have just restarted it by running service <SERVICE_NAME> status.
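
For example, after restarting the Glance API service you can confirm that it is running with:

    # service glance-api status

The exact output depends on the init system in use, but it should report the service as running before you move on to the next controller node.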

Note

Because a resource restart requires a considerable amount of time, some commands listed in the table below do not produce immediate output.

Service name Restart procedure
Ceilometer
  1. Log in to a controller node CLI.

  2. Restart the Ceilometer services:

    # service ceilometer-agent-central restart
    # service ceilometer-api restart
    # service ceilometer-agent-notification restart
    # service ceilometer-collector restart
    
  3. Verify the status of the Ceilometer services. See Verify an OpenStack service status.

  4. Repeat steps 1-3 on all controller nodes.
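
As a quick check on each node, you can also query the status of the restarted Ceilometer services directly, for example:

    # service ceilometer-agent-central status
    # service ceilometer-api status
    # service ceilometer-agent-notification status
    # service ceilometer-collector status

Each command should report the corresponding service as running.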

Cinder
  1. Log in to a controller node CLI.

  2. Restart the Cinder services:

    # service cinder-api restart
    # service cinder-scheduler restart
    
  3. Verify the status of the Cinder services. See Verify an OpenStack service status.

  4. Repeat steps 1-3 on all controller nodes.

  5. On every node with Cinder role, run:

    # service cinder-volume restart
    # service cinder-backup restart
    
  6. Verify the status of the cinder-volume and cinder-backup services.
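
For example, on a node with the Cinder role you can verify both services with:

    # service cinder-volume status
    # service cinder-backup status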

Corosync/Pacemaker
  1. Log in to a controller node CLI.

  2. Restart the Corosync and Pacemaker services:

    # service corosync restart
    # service pacemaker restart
    
  3. Verify the status of the Corosync and Pacemaker services. See Verify an OpenStack service status.

  4. Repeat steps 1-3 on all controller nodes.
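
After the restart, you can also get an overall view of the cluster. For example, assuming the pcs tool is available (it is used later in this document for MySQL and RabbitMQ):

    # pcs status

All controller nodes should be reported as Online and the cluster resources should be in the Started status.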

Glance
  1. Log in to a controller node CLI.

  2. Restart the Glance services:

    # service glance-api restart
    # service glance-registry restart
    
  3. Verify the status of the Glance services. See Verify an OpenStack service status.

  4. Repeat steps 1-3 on all controller nodes.
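
As an additional functional check, assuming the admin credentials are sourced in the shell, you can list images through the Glance API:

    # glance image-list

The command should return the image table without errors.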

Horizon

Since the Horizon service is available through the Apache server, you should restart the Apache service on all controller nodes:

  1. Log in to a controller node CLI.

  2. Restart the Apache server:

    # service apache2 restart
    
  3. Verify that the Apache service is running after the restart:

    # service apache2 status
    
  4. Verify that the Apache ports are open and listening:

    # netstat -nltp | egrep apache2
    
  5. Repeat steps 1-4 on all controller nodes.
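
Optionally, you can also verify that the dashboard responds over HTTP. The exact URL depends on your deployment; assuming Horizon is served under /horizon on the local node, a check might look like:

    # curl -I http://localhost/horizon

A 200 or a redirect response code indicates that the dashboard is being served.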

Ironic
  1. Log in to a controller node CLI.

  2. Restart the Ironic services:

    # service ironic-api restart
    # service ironic-conductor restart
    
  3. Verify the status of the Ironic services. See Verify an OpenStack service status.

  4. Repeat steps 1-3 on all controller nodes.

  5. On any controller node, run the following command for the nova-compute service configured to work with Ironic:

    # crm resource restart p_nova_compute_ironic
    
  6. Verify the status of the p_nova_compute_ironic service.
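
For example, you can check the Pacemaker status of this resource with:

    # pcs status | grep nova_compute_ironic

The p_nova_compute_ironic resource should be back in the Started status.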

Keystone

Since the Keystone service is available through the Apache server, complete the following steps on all controller nodes:

  1. Log in to a controller node CLI.

  2. Restart the Apache server:

    # service apache2 restart
    
  3. Verify that the Apache service is running after the restart:

    # service apache2 status
    
  4. Verify that the Apache ports are open and listening:

    # netstat -nltp | egrep apache2
    
  5. Repeat steps 1-4 on all controller nodes.
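
You can additionally confirm that the Keystone API responds. Assuming the default public API port 5000 is used in your deployment:

    # curl -i http://localhost:5000/v3

The response should be an HTTP 200 with the Identity API version information in the body.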

MySQL
  1. Log in to any controller node CLI.

  2. Run the following command:

    # pcs status | grep -A1 mysql
    

    In the output, the resource clone_p_mysqld should be in the Started status.

  3. Disable the clone_p_mysqld resource:

    # pcs resource disable clone_p_mysqld
    
  4. Verify that the resource clone_p_mysqld is in the Stopped status:

    # pcs status | grep -A2 mysql
    

    It may take some time for this resource to be stopped on all controller nodes.

  5. Enable the clone_p_mysqld resource:

    # pcs resource enable clone_p_mysqld
    
  6. Verify that the resource clone_p_mysqld is in the Started status again on all controller nodes:

    # pcs status | grep -A2 mysql
    

Warning

Use the pcs commands instead of crm to restart this service. The pcs tool correctly stops the service according to the quorum policy, which prevents MySQL failures.
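
In addition to the Pacemaker status, you can check the database cluster itself. Assuming a Galera-based MySQL cluster and that the mysql client on the node can authenticate (for example, through a credentials file), run:

    # mysql -e "SHOW STATUS LIKE 'wsrep_cluster_size';"

The reported cluster size should match the number of controller nodes once the resource is started everywhere.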

Neutron

Use the following restart steps for the Neutron DHCP agent as an example for all Neutron agents.

  1. Log in to any controller node CLI.

  2. Verify the DHCP agent status:

    # pcs resource show | grep -A1 neutron-dhcp-agent
    

    The output should contain the list of all controllers in the Started status.

  3. Stop the DHCP agent:

    # pcs resource disable clone_neutron-dhcp-agent
    
  4. Verify the Corosync status of the DHCP agent:

    # pcs resource show | grep -A1 neutron-dhcp-agent
    

    The output should contain the list of all controllers in the Stopped status.

  5. Verify the neutron-dhcp-agent status on the OpenStack side:

    # neutron agent-list
    

    The output table should contain the DHCP agents for every controller node with xxx in the alive column, which indicates that the agents are stopped.

  6. Start the DHCP agent on every controller node:

    # pcs resource enable clone_neutron-dhcp-agent
    
  7. Verify the DHCP agent status:

    # pcs resource show | grep -A1 neutron-dhcp-agent
    

    The output should contain the list of all controllers in the Started status.

  8. Verify the neutron-dhcp-agent status on the OpenStack side:

    # neutron agent-list
    

    The output table should contain the DHCP agents for every controller node with :-) in the alive column and True in the admin_state_up column.
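
The same pattern applies to the other Neutron agents that are managed by Pacemaker. For example, assuming the L3 agent resource follows the same naming convention, the restart would be:

    # pcs resource disable clone_neutron-l3-agent
    # pcs resource enable clone_neutron-l3-agent

Verify the agent status with pcs resource show and neutron agent-list after each step, as described above.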

Nova
  1. Log in to a controller node CLI.

  2. Restart the Nova services:

    # service nova-api restart
    # service nova-cert restart
    # service nova-compute restart
    # service nova-conductor restart
    # service nova-consoleauth restart
    # service nova-novncproxy restart
    # service nova-scheduler restart
    # service nova-spicehtml5proxy restart
    # service nova-xvpvncproxy restart
    
  3. Verify the status of the Nova services. See Verify an OpenStack service status.

  4. Repeat steps 1-3 on all controller nodes.

  5. On every compute node, run:

    # service nova-compute restart
    
  6. Verify the status of the nova-compute service.
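
As an additional check, you can verify that the Nova services are registered and reported as up on the OpenStack side. Assuming the admin credentials are sourced in the shell:

    # nova service-list

All services should be reported with the state up and the status enabled.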

RabbitMQ
  1. Log in to any controller node CLI.

  2. Disable the RabbitMQ service:

    # pcs resource disable master_p_rabbitmq-server
    
  3. Verify whether the service is stopped:

    # pcs status | grep -A2 rabbitmq
    
  4. Enable the service:

    # pcs resource enable master_p_rabbitmq-server
    

    During the startup process, the output of the pcs status command can show all existing RabbitMQ services in the Slaves mode.

  5. Verify the service status:

    # rabbitmqctl cluster_status
    

    In the output, the running_nodes field should contain all controllers’ host names in the rabbit@<HOSTNAME> format. The partitions field should be empty.
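
You can also confirm from the Pacemaker side that one of the controllers has been promoted, for example:

    # pcs status | grep -A3 rabbitmq

The output should show one controller under Masters and the remaining controllers under Slaves.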

Swift
  1. Log in to a controller node CLI.

  2. Restart the Swift services:

    # service swift-account-auditor restart
    # service swift-account restart
    # service swift-account-reaper restart
    # service swift-account-replicator restart
    # service swift-container-auditor restart
    # service swift-container restart
    # service swift-container-reconciler restart
    # service swift-container-replicator restart
    # service swift-container-sync restart
    # service swift-container-updater restart
    # service swift-object-auditor restart
    # service swift-object restart
    # service swift-object-reconstructor restart
    # service swift-object-replicator restart
    # service swift-object-updater restart
    # service swift-proxy restart
    
  3. Verify the status of the Swift services. See Verify an OpenStack service status.

  4. Repeat steps 1-3 on all controller nodes.
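
As a quick check on each node, you can query the status of the restarted Swift services, for example:

    # service swift-proxy status
    # service swift-object status

Additionally, assuming the admin credentials are sourced, you can confirm that the object store responds:

    # swift stat

The command should return the account, container, and object statistics without errors.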
