Compute service (nova) overview
Use OpenStack Compute to host and manage cloud computing systems. OpenStack Compute is a major part of an Infrastructure-as-a-Service (IaaS) system. The main modules are implemented in Python.
OpenStack Compute interacts with OpenStack Identity for authentication, with OpenStack Placement for resource inventory tracking and selection, with the OpenStack Image service for disk and server images, and with the OpenStack Dashboard for the user and administrative interface. Image access is limited per project and per user; quotas are limited per project (for example, the number of instances). OpenStack Compute can scale horizontally on standard hardware and downloads images to launch instances.
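For example, you can inspect a project's current Compute quotas from the CLI (a minimal illustration; the project name demo is an assumption, and admin credentials must already be loaded):
openstack quota show demo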
OpenStack Compute consists of the following areas and their components:
1. nova-api service
Accepts and responds to end-user Compute API calls. The service supports the OpenStack Compute API. It enforces some policies and initiates most orchestration activities, such as running an instance.
2. nova-api-metadata service
Accepts metadata requests from instances. For more information, see the metadata service documentation.
3. nova-compute service
A worker daemon that creates and terminates virtual machine instances through hypervisor APIs. For example:
- libvirt for KVM or QEMU
- VMwareAPI for VMware
Processing is fairly complex. Basically, the daemon accepts actions from the queue and performs a series of system commands, such as launching a KVM instance and updating its state in the database.
4. nova-scheduler service
Takes a virtual machine instance request from the queue and determines on which compute server host it runs.
5. nova-conductor module
Mediates interactions between the nova-compute service and the database. It eliminates direct access by the nova-compute service to the cloud database. The nova-conductor module scales horizontally. However, do not deploy it on nodes where the nova-compute service runs. For more information, see the conductor section in the configuration options.
6. nova-novncproxy daemon
Provides a proxy for accessing running instances through a VNC connection. Supports browser-based novnc clients.
7. nova-spicehtml5proxy daemon
Provides a proxy for accessing running instances through a SPICE connection. Supports browser-based HTML5 clients.
8. Message queue
A central hub for passing messages between daemons. It is usually implemented with RabbitMQ, but other options are available; a quick way to inspect its queues is shown after this list.
9. SQL database
Stores most build-time and run-time state for the cloud infrastructure, including:
- Available instance types
- Instances in use
- Available networks
- Projects
In theory, OpenStack Compute can support any database that SQLAlchemy supports. Common databases are SQLite3 for test and development work, and MySQL, MariaDB, and PostgreSQL.
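To see the message queue at work, you can list the queues that the Nova daemons create on the controller node (a quick sketch assuming the default RabbitMQ deployment; queue names vary by release):
rabbitmqctl list_queues name messages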
Installing and configuring the controller node
Prerequisites
Before you install and configure the Compute service, you must create databases, service credentials, and API endpoints.
To create the databases, complete the following steps:
Connect to the database server as root using the database access client:
mysql -u root -p
Create the nova_api, nova, and nova_cell0 databases:
CREATE DATABASE nova_api;
CREATE DATABASE nova;
CREATE DATABASE nova_cell0;
Grant proper access to the databases (replace NOVA_DBPASS with a suitable password):
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' \
  IDENTIFIED BY 'NOVA_DBPASS';
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' \
  IDENTIFIED BY 'NOVA_DBPASS';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' \
  IDENTIFIED BY 'NOVA_DBPASS';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' \
  IDENTIFIED BY 'NOVA_DBPASS';
GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' \
  IDENTIFIED BY 'NOVA_DBPASS';
GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' \
  IDENTIFIED BY 'NOVA_DBPASS';
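Before exiting, you can optionally confirm that the grants took effect (the output format varies by MariaDB/MySQL version):
SHOW GRANTS FOR 'nova'@'localhost';
SHOW GRANTS FOR 'nova'@'%';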
Exit the database access client.
Obtain administrator credentials to gain access to admin-only CLI commands:
. admin-openrc
Create the Compute service credentials:
Create the nova user:
openstack user create --domain default --password-prompt nova
Add the admin role to the nova user:
openstack role add --project service --user nova admin
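You can optionally confirm the role assignment (assuming the service project already exists from the Identity service setup):
openstack role assignment list --user nova --project service --names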
Create the nova service entity:
openstack service create --name nova \
  --description "OpenStack Compute" compute
Create the Compute API service endpoints:
openstack endpoint create --region RegionOne \
  compute public http://controller:8774/v2.1
openstack endpoint create --region RegionOne \
  compute internal http://controller:8774/v2.1
openstack endpoint create --region RegionOne \
  compute admin http://controller:8774/v2.1
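You can optionally verify the three endpoints you just created:
openstack endpoint list --service compute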
Install the Placement service and configure its user and endpoints. For more information, see the Placement service installation guide.
Installing and configuring components
Install the packages:
yum install openstack-nova-api openstack-nova-conductor \
  openstack-nova-novncproxy openstack-nova-scheduler
Edit the /etc/nova/nova.conf file and complete the following actions:
In the [DEFAULT] section, enable only the compute and metadata APIs:
[DEFAULT]
# ...
enabled_apis = osapi_compute,metadata
In the [api_database] and [database] sections, configure database access (replace NOVA_DBPASS with the password you chose for the Compute databases):
[api_database]
# ...
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova_api

[database]
# ...
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova
In the [DEFAULT] section, configure RabbitMQ message queue access (replace RABBIT_PASS with the password you chose for the openstack account in RabbitMQ):
[DEFAULT]
# ...
transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
In the [api] and [keystone_authtoken] sections, configure Identity service access (replace NOVA_PASS with the password you chose for the nova user in the Identity service):
Comment out or delete any other options in the [keystone_authtoken] section.
[api]
# ...
auth_strategy = keystone

[keystone_authtoken]
# ...
www_authenticate_uri = http://controller:5000/
auth_url = http://controller:5000/
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = nova
password = NOVA_PASS
In the [DEFAULT] section, configure the my_ip option to use the management interface IP address of the controller node:
[DEFAULT]
# ...
my_ip = 10.0.0.11
Configure the [neutron] section of /etc/nova/nova.conf. For more details, see the Networking service installation guide.
In the [vnc] section, configure the VNC proxy to use the management interface IP address of the controller node:
[vnc]
enabled = true
# ...
server_listen = $my_ip
server_proxyclient_address = $my_ip
In the [glance] section, configure the location of the Image service API:
[glance]
# ...
api_servers = http://controller:9292
In the [oslo_concurrency] section, configure the lock path:
[oslo_concurrency]
# ...
lock_path = /var/lib/nova/tmp
In the [placement] section, configure access to the Placement service (replace PLACEMENT_PASS with the password you chose for the placement service user created when installing Placement, and comment out or delete any other options in the [placement] section):
[placement]
# ...
region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:5000/v3
username = placement
password = PLACEMENT_PASS
Populate the nova-api database:
su -s /bin/sh -c "nova-manage api_db sync" nova
Register the cell0 database:
su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova
Create the cell1 cell:
su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova
Populate the nova database:
su -s /bin/sh -c "nova-manage db sync" nova
Verify that cell0 and cell1 are registered correctly:
su -s /bin/sh -c "nova-manage cell_v2 list_cells" nova +-------+--------------------------------------+----------------------------------------------------+--------------------------------------------------------------+----------+ | Name | UUID | Transport URL | Database Connection | Disabled | +-------+--------------------------------------+----------------------------------------------------+--------------------------------------------------------------+----------+ | cell0 | 00000000-0000-0000-0000-000000000000 | none:/ | mysql+pymysql://nova:****@controller/nova_cell0?charset=utf8 | False | | cell1 | f690f4fd-2bc5-4f15-8145-db561a7b9d3d | rabbit://openstack:****@controller:5672/nova_cell1 | mysql+pymysql://nova:****@controller/nova_cell1?charset=utf8 | False | +-------+--------------------------------------+----------------------------------------------------+--------------------------------------------------------------+----------+
Complete installation
Start the Compute services and configure them to start when the system boots:
systemctl enable \
  openstack-nova-api.service \
  openstack-nova-scheduler.service \
  openstack-nova-conductor.service \
  openstack-nova-novncproxy.service
systemctl start \
  openstack-nova-api.service \
  openstack-nova-scheduler.service \
  openstack-nova-conductor.service \
  openstack-nova-novncproxy.service
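You can then confirm that all four services are active:
systemctl is-active openstack-nova-api.service openstack-nova-scheduler.service \
  openstack-nova-conductor.service openstack-nova-novncproxy.service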
Installing and configuring compute nodes
This section describes how to install and configure the Compute service on a compute node. The service supports several hypervisors to deploy instances or virtual machines (VMs). For simplicity, this configuration uses the Quick EMUlator (QEMU) hypervisor with the kernel-based VM (KVM) extension on compute nodes that support hardware acceleration for virtual machines. On legacy hardware, this configuration uses the generic QEMU hypervisor. You can follow these instructions with minor modifications to horizontally scale your environment with additional compute nodes.
This section assumes that you are configuring the first compute node step by step, as described in this guide. If you are configuring additional compute nodes, prepare them in a similar manner to the first compute node in the example architecture section. Each additional compute node requires a unique IP address.
Installing and configuring components
The default configuration files vary from release to release. You may need to add these sections and options rather than modifying existing sections and options. In addition, the ellipsis (# ...) in the configuration snippets indicates potential default configuration options that you should retain.
Install the package:
yum install openstack-nova-compute
Edit the /etc/nova/nova.conf file and complete the following actions:
In the [DEFAULT] section, enable only the compute and metadata APIs:
[DEFAULT]
# ...
enabled_apis = osapi_compute,metadata
In the [DEFAULT] section, configure RabbitMQ message queue access (replace RABBIT_PASS with the password you chose for the openstack account in RabbitMQ):
[DEFAULT]
# ...
transport_url = rabbit://openstack:RABBIT_PASS@controller
In the [api] and [keystone_authtoken] sections, configure Identity service access (replace NOVA_PASS with the password you chose for the nova user in the Identity service):
[api]
# ...
auth_strategy = keystone

[keystone_authtoken]
# ...
www_authenticate_uri = http://controller:5000/
auth_url = http://controller:5000/
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = nova
password = NOVA_PASS
In the [DEFAULT] section, configure the my_ip option (replace MANAGEMENT_INTERFACE_IP_ADDRESS with the IP address of the management network interface on your compute node, typically 10.0.0.31 for the first node in the example architecture):
[DEFAULT]
# ...
my_ip = MANAGEMENT_INTERFACE_IP_ADDRESS
Configure the [neutron] section of /etc/nova/nova.conf. For more details, see the Networking service installation guide.
In the [vnc] section, enable and configure remote console access:
[vnc]
# ...
enabled = true
server_listen = 0.0.0.0
server_proxyclient_address = $my_ip
novncproxy_base_url = http://controller:6080/vnc_auto.html
The server component listens on all IP addresses, while the proxy component listens only on the management interface IP address of the compute node. The base URL indicates the location where a web browser can access remote consoles of instances on this compute node.
If the web browser used to access the remote consoles runs on a host that cannot resolve the controller hostname, you must replace controller with the management interface IP address of the controller node.
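Alternatively, instead of changing the URL, you can make the hostname resolvable on the browser host itself (a sketch assuming the controller's management IP is 10.0.0.11, as elsewhere in this guide):
# run on the browser host, not the compute node
echo "10.0.0.11 controller" >> /etc/hosts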
In the [glance] section, configure the location of the Image service API:
[glance]
# ...
api_servers = http://controller:9292
In the [oslo_concurrency] section, configure the lock path:
[oslo_concurrency]
# ...
lock_path = /var/lib/nova/tmp
In the [placement] section, configure the Placement API (replace PLACEMENT_PASS with the password you chose for the placement user in the Identity service, and comment out any other options in the [placement] section):
[placement]
# ...
region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:5000/v3
username = placement
password = PLACEMENT_PASS
Complete installation
Determine whether your compute node supports hardware acceleration for virtual machines:
egrep -c '(vmx|svm)' /proc/cpuinfo
If this command returns a value of one or greater, your compute node supports hardware acceleration, which typically requires no additional configuration.
If this command returns a value of zero, your compute node does not support hardware acceleration, and you must configure libvirt to use QEMU instead of KVM.
To do so, edit the [libvirt] section in the /etc/nova/nova.conf file as follows:
[libvirt]
# ...
virt_type = qemu
Start the Compute service, including its dependencies, and configure them to start automatically when the system boots:
systemctl enable libvirtd.service openstack-nova-compute.service
systemctl start libvirtd.service openstack-nova-compute.service
If the nova-compute service fails to start, check /var/log/nova/nova-compute.log. The error message AMQP server on controller:5672 is unreachable likely indicates that the firewall on the controller node is blocking access to port 5672. Configure the firewall to open port 5672 on the controller node and restart the nova-compute service on the compute node.
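For example, with firewalld (the default firewall on CentOS/RHEL, assuming that is what blocks the port) you could open it like this:
# on the controller node
firewall-cmd --permanent --add-port=5672/tcp
firewall-cmd --reload
# then on the compute node
systemctl restart openstack-nova-compute.service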
Add the compute node to the cell database by running the following commands on the controller node.
Obtain administrator credentials to enable admin-only CLI commands, then confirm that the compute host exists in the database:
. admin-openrc
openstack compute service list --service nova-compute
Discover compute hosts:
su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova
When you add new compute nodes, you must run nova-manage cell_v2 discover_hosts on the controller node to register them. Alternatively, you can set an appropriate discovery interval in /etc/nova/nova.conf:
[scheduler]
discover_hosts_in_cells_interval = 300
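After setting this interval, restart the scheduler so the new option takes effect (assuming the service name used earlier in this guide):
systemctl restart openstack-nova-scheduler.service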