The Telemetry service uses an agent-based architecture. Several modules combine their responsibilities to collect, normalize, and redirect data for use cases such as metering, monitoring, and alerting.
The Telemetry service is built from the following agents:
Note
The ceilometer-polling service provides polling support on any namespace, but many distributions continue to provide namespace-scoped agents: ceilometer-agent-central, ceilometer-agent-compute, and ceilometer-agent-ipmi.
Except for the ceilometer-polling agents polling the compute or ipmi namespaces, all the other services are placed on one or more controller nodes.
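A minimal sketch of what these namespaces mean in practice, assuming the polling agent discovers its pollsters through setuptools entry points (the group names below are assumptions and may differ between releases):

    # Hedged sketch: list the pollster plugins registered for each polling
    # namespace on the local machine.  The entry-point group names are
    # assumptions derived from the compute, central, and ipmi namespaces.
    from stevedore import extension

    ASSUMED_NAMESPACES = (
        "ceilometer.poll.compute",
        "ceilometer.poll.central",
        "ceilometer.poll.ipmi",
    )

    for namespace in ASSUMED_NAMESPACES:
        # ExtensionManager enumerates every entry point in the given group;
        # without Ceilometer installed the result is simply an empty list.
        mgr = extension.ExtensionManager(namespace=namespace,
                                         invoke_on_load=False)
        print(namespace, sorted(mgr.names()))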
The Telemetry architecture depends on the AMQP service both for consuming notifications coming from OpenStack services and for internal communication.
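As a rough sketch of that dependency, the following listener consumes notifications from the message bus with oslo.messaging, the library the Telemetry agents are built on; the transport URL, topic, and endpoint are illustrative assumptions rather than Ceilometer's actual configuration:

    # Hedged sketch: consume OpenStack notifications over AMQP with
    # oslo.messaging.  The transport URL and topic are placeholders, not
    # Ceilometer's real settings.
    import oslo_messaging
    from oslo_config import cfg


    class SampleEndpoint(object):
        """Receives notifications; a real agent would turn them into samples."""

        def info(self, ctxt, publisher_id, event_type, payload, metadata):
            print(event_type, publisher_id)


    transport = oslo_messaging.get_notification_transport(
        cfg.CONF, url="rabbit://guest:guest@localhost:5672/")  # assumed URL
    targets = [oslo_messaging.Target(topic="notifications")]   # common default topic
    listener = oslo_messaging.get_notification_listener(
        transport, targets, [SampleEndpoint()], executor="threading")

    listener.start()
    listener.wait()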
The other key external component of Telemetry is the database, where events, samples, alarm definitions, and alarms are stored. Each of the data models has its own storage service, and each supports various back ends.
The list of supported base back ends for measurements:
The list of supported base back ends for alarms:
The list of supported base back ends for events:
The Telemetry service collects information about the virtual machines, which requires close connection to the hypervisor that runs on the compute hosts.
The following is a list of supported hypervisors.
Note
For details about hypervisor support in libvirt, please see the Libvirt API support matrix.
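A minimal sketch of that hypervisor access, assuming a local QEMU/KVM host reachable through the libvirt Python bindings (this is not the compute agent's actual code):

    # Hedged sketch: read basic instance statistics directly from libvirt,
    # the kind of raw data a compute polling agent turns into samples.
    import libvirt

    # A read-only connection is sufficient for gathering statistics.
    conn = libvirt.openReadOnly("qemu:///system")  # assumed local hypervisor URI

    for dom in conn.listAllDomains():
        # info() returns (state, maxMem KiB, memory KiB, nrVirtCpu, cpuTime ns).
        state, max_mem, mem, vcpus, cpu_time = dom.info()
        print(dom.name(), vcpus, mem, cpu_time)

    conn.close()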
Telemetry is able to retrieve information from external networking services: