OpenStack control node installation and configuration process III

1. Environment configuration

Hosts configuration
   Modify the /etc/hosts file and add entries for wtcontroller, wtcompute1 and wtcompute2:

127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
172.16.10.100 wtcontroller
172.16.10.101 wtcompute1
172.16.10.102 wtcompute2

Modify local hostname

echo "wtcontroller"> /etc/hostname

1.1 Configure the yum repository source

   This example uses the NetEase (163) yum repository:

CentOS7-Base-163.repo

   copy the above file to the /etc/yum.repos.d directory
   back up the existing CentOS-Base.repo file in that directory
Rename CentOS7-Base-163.repo to CentOS-Base.repo (a command sketch of these steps follows)
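
A minimal command sketch of these three steps (assuming CentOS7-Base-163.repo was downloaded to the current working directory):

cp CentOS7-Base-163.repo /etc/yum.repos.d/
cp /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.bak    #Back up the original repo file
mv /etc/yum.repos.d/CentOS7-Base-163.repo /etc/yum.repos.d/CentOS-Base.repo   #Rename the 163 repo file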
Then execute the following commands:

yum clean all         #Clear cache
yum makecache       #Generate cache
yum list #Show all installed and installable packages

                   

systemctl stop initial-setup-text    #Stop the text-mode initial setup (first-boot) service

1.2 Disable the firewall

systemctl stop firewalld.service
systemctl disable firewalld.service
systemctl status firewalld.service

1.3 Disable the SELinux security service

setenforce 0
getenforce
sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/sysconfig/selinux
grep SELINUX=disabled /etc/sysconfig/selinux

1.4 Install the chrony NTP time synchronization service

yum install chrony -y
vim /etc/chrony.conf
#Refer to your network environment and make sure the following lines are enabled:
server 0.centos.pool.ntp.org iburst
server 1.centos.pool.ntp.org iburst
server 2.centos.pool.ntp.org iburst
server 3.centos.pool.ntp.org iburst
#Also enable the following line so that nodes in the 172.16.10.0/24 segment can synchronize time from this control node:
allow 172.16.10.0/24

   Restart the service and enable it at boot:

systemctl restart chronyd.service
systemctl status chronyd.service
systemctl enable chronyd.service
systemctl list-unit-files |grep chronyd.service

   Set the time zone and check the synchronization sources:

timedatectl set-timezone Asia/Shanghai
chronyc sources

1.5 Install the OpenStack release repository and refresh yum

yum install centos-release-openstack-rocky -y
yum clean all
yum makecache

1.6 Install the client software

yum install python-openstackclient openstack-selinux -y

2. Installation process

2.1 Install the database

yum install mariadb mariadb-server python2-PyMySQL -y

   Create and edit the configuration file:

vi /etc/my.cnf.d/openstack.cnf
//Content:
[mysqld]
bind-address = 172.16.10.100
default-storage-engine = innodb
innodb_file_per_table = on
max_connections = 4096
collation-server = utf8_general_ci
character-set-server = utf8

bind-address: the address the database listens on (the controller's management IP)
default-storage-engine: use InnoDB as the default storage engine
innodb_file_per_table: give each table its own tablespace and index file instead of one shared tablespace; per-table files speed up index lookups and are much easier to repair or optimize, whereas a damaged shared tablespace (as a large zabbix database would show) is difficult to repair or optimize

   Enable the database service at boot and start it:

systemctl enable mariadb.service
systemctl start mariadb.service
systemctl list-unit-files |grep mariadb.service
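
Optionally, confirm that the options in openstack.cnf took effect (a quick check, not part of the original steps; before mysql_secure_installation the local root login needs no password):

mysql -e "SHOW VARIABLES LIKE 'innodb_file_per_table'; SHOW VARIABLES LIKE 'max_connections';"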

   Run the initial security setup for the database service (root / wtoe@123456):

mysql_secure_installation

The setting process is as follows:

Enter current password for root (enter for none): #First run: press Enter directly
OK, successfully used password, moving on...
Setting the root password ensures that nobody can log into the MySQL root user without the proper authorisation.

Set root password? [Y/n] #Set the root password: enter y and press Enter, or just press Enter
New password: #Enter the root password, here wtoe@123456
Re-enter new password: #Enter the password again
Password updated successfully! Reloading privilege tables... Success!

Remove anonymous users? [Y/n] #Remove anonymous users? Recommended for production; press Enter
Success!

Disallow root login remotely? [Y/n] #Disallow remote root login? Choose y or n as needed; disabling is recommended
Success!

Remove test database and access to it? [Y/n] #Remove the test database? Press Enter
- Dropping test database... Success!
- Removing privileges on test database... Success!

Reload privilege tables now? [Y/n] #Reload the privilege tables; press Enter
Success! Cleaning up...
All done! If you've completed all of the above steps, your MySQL installation should now be secure. Thanks for using MySQL!
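
To confirm that the new root password works (a quick optional check):

mysql -uroot -pwtoe@123456 -e "SELECT VERSION();"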

2.2 Install RabbitMQ

yum install rabbitmq-server -y

Enable the service at boot and start it:
systemctl enable rabbitmq-server.service
systemctl start rabbitmq-server.service

Add an openstack user and grant it permissions on RabbitMQ:
rabbitmqctl add_user openstack wtoe@123456
rabbitmqctl set_permissions openstack ".*" ".*" ".*"
rabbitmqctl set_permissions -p "/" openstack ".*" ".*" ".*"
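
To verify that the user and its permissions were created (an optional check):

rabbitmqctl list_users                   #Should list openstack alongside guest
rabbitmqctl list_permissions -p /        #Should show ".*" ".*" ".*" for openstack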

Enable the web management plugin:
rabbitmq-plugins list
rabbitmq-plugins enable rabbitmq_management
systemctl restart rabbitmq-server.service
rabbitmq-plugins list
 Visit: http://192.168.1.241:15672
 #The default username and password are both guest
 #Confirm through the web page that the openstack user has been added

2.3 Install etcd (distributed key-value store used for service discovery)

   Install the service:

yum install etcd -y

   Edit the configuration file:

vi /etc/etcd/etcd.conf

Modify as follows:

#Note: use IP addresses here; they cannot be replaced with the hostname controller, which may not resolve
#[Member]
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="http://192.168.3.241:2380"
ETCD_LISTEN_CLIENT_URLS="http://192.168.3.241:2379"
ETCD_NAME="controller"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.3.241:2380"
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.3.241:2379"
ETCD_INITIAL_CLUSTER="controller=http://192.168.3.241:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster-01"
ETCD_INITIAL_CLUSTER_STATE="new"

Enable the service at boot and start it:

systemctl enable etcd
systemctl start etcd
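
A simple way to confirm etcd is answering on the client URL configured above (an optional check):

curl http://192.168.3.241:2379/version    #Should return the etcd server and cluster versions
systemctl status etcd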

2.4 Install the keystone identity service

Database configuration
Log in to the database:

mysql -u root -p 
//First, grant the root user privileges on all databases (including remote access)
GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' IDENTIFIED BY 'wtoe@123456';
#Create database
CREATE DATABASE keystone;
#Add user configuration permission
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'wtoe@123456';
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'wtoe@123456';
flush privileges;
show databases;
select user,host from mysql.user;
exit

Install keystone related software package on the control node

yum install openstack-keystone httpd mod_wsgi -y
yum install openstack-keystone python-keystoneclient openstack-utils -y

   Quickly modify the keystone configuration (not the official method; requires the openstack-utils package):

openstack-config --set /etc/keystone/keystone.conf database connection mysql+pymysql://keystone:wtoe@123456@wtcontroller/keystone

openstack-config --set /etc/keystone/keystone.conf token provider fernet

   view the active configuration

egrep -v "^#|^$" /etc/keystone/keystone.conf

Configuration information shall be as follows:

[DEFAULT]
[application_credential]
[assignment]
[auth]
[cache]
[catalog]
[cors]
[credential]
[database]
connection = mysql+pymysql://keystone:wtoe@123456@wtcontroller/keystone
[domain_config]
[endpoint_filter]
[endpoint_policy]
[eventlet_server]
[federation]
[fernet_tokens]
[healthcheck]
[identity]
[identity_mapping]
[ldap]
[matchmaker_redis]
[memcache]
[oauth1]
[oslo_messaging_amqp]
[oslo_messaging_kafka]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[oslo_messaging_zmq]
[oslo_middleware]
[oslo_policy]
[policy]
[profiler]
[resource]
[revoke]
[role]
[saml]
[security_compliance]
[shadow_users]
[signing]
[token]
provider = fernet
[tokenless_auth]
[trust]
[unified_limit]
[wsgi]

Synchronize (initialize) the keystone database (this creates 44 tables)

su -s /bin/sh -c "keystone-manage db_sync" keystone

Note: if python reports an error during database synchronization, you may need to do the following:
   install pip and update the python requests library:

yum install python-pip
sudo pip uninstall urllib3
sudo pip uninstall chardet
sudo pip install requests

   view created tables

mysql -h192.168.3.241 -ukeystone -pwtoe@123456 -e "use keystone;show tables;"

Initialize the Fernet key and credential key repositories

keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
keystone-manage credential_setup --keystone-user keystone --keystone-group keystone
//Configure and start Apache (httpd)
//Modify the httpd main configuration file
vim /etc/httpd/conf/httpd.conf +95
#Modify as follows
ServerName wtcontroller
#Check
cat /etc/httpd/conf/httpd.conf |grep ServerName
#Configure the virtual host
ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/

Enable the Apache service at boot and start it

systemctl enable httpd.service
systemctl start httpd.service
systemctl list-unit-files |grep httpd.service #View service settings

Check Apache service status

netstat -anptl|grep httpd

#If httpd does not work, disable selinux or install openstack-selinux (yum install openstack-selinux)

Create the keystone admin account, initialize the service entity and API endpoints

#Create the keystone service entity and identity service API endpoints; the three endpoint types are public, internal and admin.

keystone-manage bootstrap --bootstrap-password wtoe@123456 \
  --bootstrap-admin-url http://wtcontroller:5000/v3/ \
  --bootstrap-internal-url http://wtcontroller:5000/v3/ \
  --bootstrap-public-url http://wtcontroller:5000/v3/ \
  --bootstrap-region-id RegionOne

Configure system environment variables for admin

export OS_USERNAME=admin
export OS_PASSWORD=wtoe@123456
export OS_PROJECT_NAME=admin
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_DOMAIN_NAME=Default
export OS_AUTH_URL=http://wtcontroller:5000/v3
export OS_IDENTITY_API_VERSION=3
#View configuration
env |grep OS_

Create general keystone instances
#The following command creates a domain named example

openstack domain create --description "An Example Domain" example

#Create a project named service for keystone system environment to provide services
#For general (non administrative) tasks, an unprivileged user is required
#The following command creates a project named service in the project table

openstack project create --domain default --description "Service Project" service

#Create myproject project and corresponding users and roles
#As a general user (non administrator) project, provide services for general users
#The following command creates a project named myproject in the project table

openstack project create --domain default --description "Demo Project" myproject

#Create a myuser user in the default domain
#Use the --password option to set the password directly in plain text, or the --password-prompt option to enter it interactively
#The following command adds the myuser user to the local user table

openstack user create --domain default  --password-prompt myuser
#Password wtoe@123456
##openstack user create --domain default --password=wtoe@123456 myuser

#Create myrole role in role table
openstack role create myrole
#Grant the myrole role to the myuser user on the myproject project
openstack role add --project myproject --user myuser myrole

Verify that keystone was installed successfully
Unset the temporary environment variables
#Disable the temporary token mechanism, then request a token to verify that the keystone configuration is correct
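
Following the standard install guide, the two variables set earlier are unset so that the client must authenticate against keystone (a sketch; the variable names match the admin exports above):

unset OS_AUTH_URL OS_PASSWORD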

Request an authenticated token as an administrator user
#Test whether the admin account can be used for login authentication and request authentication token

openstack --os-auth-url http://wtcontroller:5000/v3 \
  --os-project-domain-name Default --os-user-domain-name Default \
  --os-project-name admin --os-username admin token issue

Obtain an authentication token as an ordinary user
#The following command uses the myuser password and API port 5000, and allows only general (non-administrative) access to the Identity service API.
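
For example, following the same pattern as the admin request above (a sketch, using the myproject and myuser names created earlier; you will be prompted for the myuser password):

openstack --os-auth-url http://wtcontroller:5000/v3 \
  --os-project-domain-name Default --os-user-domain-name Default \
  --os-project-name myproject --os-username myuser token issue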

Create OpenStack client environment script

Admin environment variables: create the file with vi admin-openrc

The contents are as follows:

export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=wtoe@123456
export OS_AUTH_URL=http://wtcontroller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2

#Demo user environment variables: create the file with vi myuser-openrc
//The contents are as follows:
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=myproject
export OS_USERNAME=myuser
export OS_PASSWORD=wtoe@123456
export OS_AUTH_URL=http://wtcontroller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2

#Verification
source admin-openrc
openstack token issue

source myuser-openrc
openstack token issue

2.5 Install the glance image service

Create database

mysql -uroot -pwtoe@123456
CREATE DATABASE glance;
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY 'wtoe@123456';
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY 'wtoe@123456';
flush privileges;
exit

Register glance on keystone
Create glance user on keystone
#The following command creates a glance user in the local user table

openstack user create --domain default --password=wtoe@123456 glance
openstack user list

Grant the glance user the admin role on the service project

openstack role add --project service --user glance admin

The following command adds a glance entry to the service table

openstack service create --name glance --description "OpenStack Image" image
openstack service list

Create API endpoint of image service

openstack endpoint create --region RegionOne image public http://wtcontroller:9292
openstack endpoint create --region RegionOne image internal http://wtcontroller:9292
openstack endpoint create --region RegionOne image admin http://wtcontroller:9292

Install glance software

yum install openstack-glance python-glance python-glanceclient -y

Modify glance related configuration
Execute the following command to quickly configure glance-api.conf

openstack-config --set  /etc/glance/glance-api.conf database connection  mysql+pymysql://glance:wtoe@123456@wtcontroller/glance
openstack-config --set  /etc/glance/glance-api.conf keystone_authtoken www_authenticate_uri http://wtcontroller:5000
openstack-config --set  /etc/glance/glance-api.conf keystone_authtoken auth_url http://wtcontroller:5000
openstack-config --set  /etc/glance/glance-api.conf keystone_authtoken memcached_servers  wtcontroller:11211
openstack-config --set  /etc/glance/glance-api.conf keystone_authtoken auth_type password
openstack-config --set  /etc/glance/glance-api.conf keystone_authtoken project_domain_name Default
openstack-config --set  /etc/glance/glance-api.conf keystone_authtoken user_domain_name Default
openstack-config --set  /etc/glance/glance-api.conf keystone_authtoken project_name service 
openstack-config --set  /etc/glance/glance-api.conf keystone_authtoken username glance
openstack-config --set  /etc/glance/glance-api.conf keystone_authtoken password wtoe@123456
openstack-config --set  /etc/glance/glance-api.conf paste_deploy flavor keystone
openstack-config --set  /etc/glance/glance-api.conf glance_store stores  file,http
openstack-config --set  /etc/glance/glance-api.conf glance_store default_store file
openstack-config --set  /etc/glance/glance-api.conf glance_store filesystem_store_datadir /var/lib/glance/images/

//Execute the following command to quickly configure glance-registry.conf
openstack-config --set  /etc/glance/glance-registry.conf database connection mysql+pymysql://glance:wtoe@123456@wtcontroller/glance
openstack-config --set  /etc/glance/glance-registry.conf keystone_authtoken www_authenticate_uri http://wtcontroller:5000
openstack-config --set  /etc/glance/glance-registry.conf keystone_authtoken auth_url http://wtcontroller:5000
openstack-config --set  /etc/glance/glance-registry.conf keystone_authtoken memcached_servers wtcontroller:11211
openstack-config --set  /etc/glance/glance-registry.conf keystone_authtoken auth_type password
openstack-config --set  /etc/glance/glance-registry.conf keystone_authtoken project_domain_name Default
openstack-config --set  /etc/glance/glance-registry.conf keystone_authtoken user_domain_name Default
openstack-config --set  /etc/glance/glance-registry.conf keystone_authtoken project_name service
openstack-config --set  /etc/glance/glance-registry.conf keystone_authtoken username glance
openstack-config --set  /etc/glance/glance-registry.conf keystone_authtoken password wtoe@123456
openstack-config --set  /etc/glance/glance-registry.conf paste_deploy flavor keystone

Synchronize glance database
#Generated related tables (15 tables)

su -s /bin/sh -c "glance-manage db_sync" glance

#Ensure that all required tables have been created, otherwise later steps may fail

mysql -h172.16.10.100 -uglance -pwtoe@123456 -e "use glance;show tables;"

Start the glance image service and enable it at boot

systemctl start openstack-glance-api.service openstack-glance-registry.service
systemctl status openstack-glance-api.service openstack-glance-registry.service
systemctl enable openstack-glance-api.service openstack-glance-registry.service
systemctl list-unit-files |grep openstack-glance*

Check that glance is installed correctly

Download a test image
#Images can be downloaded manually from http://download.cirros-cloud.net/
cd /home
wget http://download.cirros-cloud.net/0.3.5/cirros-d190515-x86_64-disk.img

Load the admin credentials

. admin-openrc 

Upload image to glance

openstack image create "cirros" --file cirros-d190515-x86_64-disk.img --disk-format qcow2 --container-format bare --public
openstack image create "CentOS7" --file CentOS-7-x86_64-GenericCloud-1907.qcow2 --disk-format qcow2 --container-format bare --public

Check whether the image is uploaded successfully
openstack image list

2.6 Install the nova compute service

Create database

mysql -uroot -pwtoe@123456
CREATE DATABASE nova_api;
CREATE DATABASE nova;
CREATE DATABASE nova_cell0;
CREATE DATABASE placement;
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' IDENTIFIED BY 'wtoe@123456';
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY 'wtoe@123456';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'wtoe@123456';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'wtoe@123456';
GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' IDENTIFIED BY 'wtoe@123456';
GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' IDENTIFIED BY 'wtoe@123456';
GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'localhost' IDENTIFIED BY 'wtoe@123456';
GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%' IDENTIFIED BY 'wtoe@123456';
flush privileges;
show databases;
select user,host from mysql.user;
Exit

//Register the nova service on keystone
#Create the service credentials
//Create the nova user on keystone
. admin-openrc
openstack user create --domain default --password=wtoe@123456 nova

//Grant the nova user the admin role on the service project
openstack role add --project service --user nova admin

//Create entity of nova computing service
openstack service create --name nova --description "OpenStack Compute" compute

Create an API endpoint for a computing service

openstack endpoint create --region RegionOne compute public http://wtcontroller:8774/v2.1
openstack endpoint create --region RegionOne compute internal http://wtcontroller:8774/v2.1
openstack endpoint create --region RegionOne compute admin http://wtcontroller:8774/v2.1
openstack endpoint list

This version of nova adds the placement project
#Similarly, create and register service credentials for this project

openstack user create --domain default --password=wtoe@123456 placement
openstack role add --project service --user placement admin
openstack service create --name placement --description "Placement API" placement

#Create the endpoint (API port) of the placement project

openstack endpoint create --region RegionOne placement public http://wtcontroller:8778
openstack endpoint create --region RegionOne placement internal http://wtcontroller:8778
openstack endpoint create --region RegionOne placement admin http://wtcontroller:8778
openstack endpoint list

Keystone registration for nova and placement is now complete.

Install nova related services at the control node
Install nova related packages

yum install openstack-nova-api openstack-nova-conductor \
  openstack-nova-console openstack-nova-novncproxy \
  openstack-nova-scheduler openstack-nova-placement-api -y

Quick modification of nova configuration

openstack-config --set  /etc/nova/nova.conf DEFAULT enabled_apis  osapi_compute,metadata
openstack-config --set  /etc/nova/nova.conf DEFAULT my_ip 172.16.10.100
openstack-config --set  /etc/nova/nova.conf DEFAULT use_neutron  true 
openstack-config --set  /etc/nova/nova.conf DEFAULT firewall_driver  nova.virt.firewall.NoopFirewallDriver
openstack-config --set  /etc/nova/nova.conf DEFAULT transport_url  rabbit://openstack:wtoe@123456@wtcontroller
openstack-config --set  /etc/nova/nova.conf api_database connection  mysql+pymysql://nova:wtoe@123456@wtcontroller/nova_api
openstack-config --set  /etc/nova/nova.conf database connection  mysql+pymysql://nova:wtoe@123456@wtcontroller/nova
openstack-config --set  /etc/nova/nova.conf placement_database connection  mysql+pymysql://placement:wtoe@123456@wtcontroller/placement
openstack-config --set  /etc/nova/nova.conf api auth_strategy  keystone 
openstack-config --set  /etc/nova/nova.conf keystone_authtoken auth_url  http://wtcontroller:5000/v3
openstack-config --set  /etc/nova/nova.conf keystone_authtoken memcached_servers  wtcontroller:11211
openstack-config --set  /etc/nova/nova.conf keystone_authtoken auth_type  password
openstack-config --set  /etc/nova/nova.conf keystone_authtoken project_domain_name  default 
openstack-config --set  /etc/nova/nova.conf keystone_authtoken user_domain_name  default
openstack-config --set  /etc/nova/nova.conf keystone_authtoken project_name  service 
openstack-config --set  /etc/nova/nova.conf keystone_authtoken username  nova 
openstack-config --set  /etc/nova/nova.conf keystone_authtoken password  wtoe@123456
openstack-config --set  /etc/nova/nova.conf vnc enabled true
openstack-config --set  /etc/nova/nova.conf vnc server_listen '$my_ip'
openstack-config --set  /etc/nova/nova.conf vnc server_proxyclient_address '$my_ip'
openstack-config --set  /etc/nova/nova.conf glance api_servers  http://wtcontroller:9292
openstack-config --set  /etc/nova/nova.conf oslo_concurrency lock_path  /var/lib/nova/tmp 
openstack-config --set  /etc/nova/nova.conf placement region_name RegionOne
openstack-config --set  /etc/nova/nova.conf placement project_domain_name Default
openstack-config --set  /etc/nova/nova.conf placement project_name service
openstack-config --set  /etc/nova/nova.conf placement auth_type password
openstack-config --set  /etc/nova/nova.conf placement user_domain_name Default
openstack-config --set  /etc/nova/nova.conf placement auth_url http://wtcontroller:5000/v3
openstack-config --set  /etc/nova/nova.conf placement username placement
openstack-config --set  /etc/nova/nova.conf placement password wtoe@123456
openstack-config --set  /etc/nova/nova.conf scheduler discover_hosts_in_cells_interval 300

#discover_hosts_in_cells_interval controls how often (in seconds) the scheduler checks for newly added compute hosts; with 300, freshly installed compute nodes are added to the cluster automatically every 5 minutes

#View configuration

egrep -v "^#|^$" /etc/nova/nova.conf

Configure hardware acceleration for virtual machines

#First determine whether your compute node supports hardware acceleration of virtual machines.

egrep -c '(vmx|svm)' /proc/cpuinfo

#If 0 is returned, the compute node does not support hardware acceleration; configure libvirt to manage virtual machines with QEMU instead, using the following command:

openstack-config --set  /etc/nova/nova.conf libvirt virt_type  qemu
egrep -v "^#|^$" /etc/nova/nova.conf|grep 'virt_type'

#If a non-zero value is returned, the compute node supports hardware acceleration and needs no extra configuration; use KVM with the following command:

openstack-config --set  /etc/nova/nova.conf libvirt virt_type  kvm

If instance creation still fails even though the compute node supports hardware acceleration, confirm that virtualization is enabled in the BIOS:

dmesg | grep kvm
#If the output contains something like [3.692481] kvm: disabled by bios,
#enable the virtualization option in the BIOS

Start the nova-related services and enable them at boot
#Two services need to be started

systemctl start libvirtd.service openstack-nova-compute.service 
systemctl status libvirtd.service openstack-nova-compute.service
systemctl enable libvirtd.service openstack-nova-compute.service
systemctl list-unit-files |grep libvirtd.service
systemctl list-unit-files |grep openstack-nova-compute.service

Add compute nodes to cell database
#The following commands operate on the control node:

. admin-openrc 

#Check and confirm that there are new computing nodes in the database

openstack compute service list --service nova-compute

#If it is not listed, add the new compute node to the openstack cluster manually:

su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova

If, because of host performance, the number of database connections keeps exceeding the number of worker threads the service starts (the default depends on the number of CPU cores), set the number of worker threads manually (here 4):

openstack-config --set  /etc/nova/nova.conf scheduler workers  4

#Periodic task for automatic registration of newly created compute nodes (already added to the configuration file above):

[scheduler]
discover_hosts_in_cells_interval = 300

Verify that the nova services on the control node are working
Load the administrator environment variable script

. admin-openrc 

List the installed nova service components
#Verify that each process was successfully registered and started

openstack compute service list

List API endpoints in the authentication service to verify their connectivity

openstack catalog list

List the existing images in the image service to check its connectivity

openstack image list

Check the status of nova components
#Check whether the placement API and cell service are working properly

nova-status upgrade check

#At this point, the nova computing node is installed and added to the openstack cluster

2.7 Install the neutron network service

   create a neutron database and grant appropriate access

mysql -u root -p
CREATE DATABASE neutron;
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'wtoe@123456';
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'wtoe@123456';
Exit

Register neutron on keystone: create the neutron user

openstack user create --domain default --password=wtoe@123456 neutron
openstack user list

Add neutron to the service project and grant the admin role
#The following command has no output

openstack role add --project service --user neutron admin
//Create a neutron service entity
openstack service create --name neutron --description "OpenStack Networking" network
openstack service list

Create an API endpoint for a neutron network service

openstack endpoint create --region RegionOne network public http://wtcontroller:9696
openstack endpoint create --region RegionOne network internal http://wtcontroller:9696
openstack endpoint create --region RegionOne network admin http://wtcontroller:9696
openstack endpoint list

Installing the neutron network component in the control node
Installing the neutron package

yum install openstack-neutron openstack-neutron-ml2 openstack-neutron-linuxbridge ebtables -y

Quick configuration / etc/neutron/neutron.conf

openstack-config --set  /etc/neutron/neutron.conf database connection  mysql+pymysql://neutron:wtoe@123456@wtcontroller/neutron 
openstack-config --set  /etc/neutron/neutron.conf DEFAULT core_plugin  ml2  
openstack-config --set  /etc/neutron/neutron.conf DEFAULT service_plugins router
openstack-config --set  /etc/neutron/neutron.conf DEFAULT transport_url rabbit://openstack:wtoe@123456@wtcontroller
openstack-config --set  /etc/neutron/neutron.conf DEFAULT auth_strategy  keystone  
openstack-config --set  /etc/neutron/neutron.conf keystone_authtoken www_authenticate_uri  http://wtcontroller:5000
openstack-config --set  /etc/neutron/neutron.conf keystone_authtoken auth_url  http://wtcontroller:5000
openstack-config --set  /etc/neutron/neutron.conf keystone_authtoken memcached_servers  wtcontroller:11211
openstack-config --set  /etc/neutron/neutron.conf keystone_authtoken auth_type  password  
openstack-config --set  /etc/neutron/neutron.conf keystone_authtoken project_domain_name default  
openstack-config --set  /etc/neutron/neutron.conf keystone_authtoken user_domain_name  default  
openstack-config --set  /etc/neutron/neutron.conf keystone_authtoken project_name  service  
openstack-config --set  /etc/neutron/neutron.conf keystone_authtoken username  neutron  
openstack-config --set  /etc/neutron/neutron.conf keystone_authtoken password  wtoe@123456  
openstack-config --set  /etc/neutron/neutron.conf DEFAULT notify_nova_on_port_status_changes  True  
openstack-config --set  /etc/neutron/neutron.conf DEFAULT notify_nova_on_port_data_changes  True  
openstack-config --set  /etc/neutron/neutron.conf nova auth_url  http://wtcontroller:5000
openstack-config --set  /etc/neutron/neutron.conf nova auth_type  password 
openstack-config --set  /etc/neutron/neutron.conf nova project_domain_name  default  
openstack-config --set  /etc/neutron/neutron.conf nova user_domain_name  default  
openstack-config --set  /etc/neutron/neutron.conf nova region_name  RegionOne  
openstack-config --set  /etc/neutron/neutron.conf nova project_name  service  
openstack-config --set  /etc/neutron/neutron.conf nova username  nova  
openstack-config --set  /etc/neutron/neutron.conf nova password  wtoe@123456  
openstack-config --set  /etc/neutron/neutron.conf oslo_concurrency lock_path  /var/lib/neutron/tmp

#Check the active configuration

egrep -v "^#|^$" /etc/neutron/neutron.conf

//Quickly configure /etc/neutron/plugins/ml2/ml2_conf.ini
openstack-config --set  /etc/neutron/plugins/ml2/ml2_conf.ini ml2 type_drivers  local,flat,vlan,vxlan,gre
openstack-config --set  /etc/neutron/plugins/ml2/ml2_conf.ini ml2 tenant_network_types vxlan
openstack-config --set  /etc/neutron/plugins/ml2/ml2_conf.ini ml2 mechanism_drivers  openvswitch,l2population
openstack-config --set  /etc/neutron/plugins/ml2/ml2_conf.ini ml2 extension_drivers  port_security
openstack-config --set  /etc/neutron/plugins/ml2/ml2_conf.ini securitygroup enable_ipset  True

#Check the active configuration

egrep -v "^#|^$" /etc/neutron/plugins/ml2/ml2_conf.ini

//Quickly configure /etc/neutron/plugins/ml2/openvswitch_agent.ini
openstack-config --set  /etc/neutron/plugins/ml2/openvswitch_agent.ini agent tunnel_types  vxlan
openstack-config --set  /etc/neutron/plugins/ml2/openvswitch_agent.ini agent l2_population  True
openstack-config --set  /etc/neutron/plugins/ml2/openvswitch_agent.ini agent prevent_arp_spoofing  True
openstack-config --set  /etc/neutron/plugins/ml2/openvswitch_agent.ini ovs local_ip  172.16.20.80
openstack-config --set  /etc/neutron/plugins/ml2/openvswitch_agent.ini ovs tunnel_bridge  br-tun
egrep -v "^#|^$" /etc/neutron/plugins/ml2/openvswitch_agent.ini

Quickly configure /etc/neutron/dhcp_agent.ini

openstack-config --set   /etc/neutron/dhcp_agent.ini DEFAULT  interface_driver  neutron.agent.linux.interface.OVSInterfaceDriver
openstack-config --set   /etc/neutron/dhcp_agent.ini DEFAULT  dhcp_driver  neutron.agent.linux.dhcp.Dnsmasq
openstack-config --set   /etc/neutron/dhcp_agent.ini DEFAULT  enable_isolated_metadata  True 
openstack-config --set   /etc/neutron/dhcp_agent.ini DEFAULT  dnsmasq_config_file /etc/neutron/dnsmasq-neutron.conf

View active configurations

egrep -v '(^$|^#)' /etc/neutron/dhcp_agent.ini
//Quickly configure /etc/neutron/metadata_agent.ini
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT nova_metadata_host wtcontroller
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT metadata_proxy_shared_secret wtoe@123456
//View active configurations
egrep -v '(^$|^#)' /etc/neutron/metadata_agent.ini

Quickly configure /etc/nova/nova.conf so that the compute service uses the neutron network service

openstack-config --set  /etc/nova/nova.conf  neutron url http://wtcontroller:9696
openstack-config --set  /etc/nova/nova.conf  neutron auth_url http://wtcontroller:5000
openstack-config --set  /etc/nova/nova.conf  neutron auth_type password
openstack-config --set  /etc/nova/nova.conf  neutron project_domain_name default
openstack-config --set  /etc/nova/nova.conf  neutron user_domain_name default
openstack-config --set  /etc/nova/nova.conf  neutron region_name RegionOne
openstack-config --set  /etc/nova/nova.conf  neutron project_name service
openstack-config --set  /etc/nova/nova.conf  neutron username neutron
openstack-config --set  /etc/nova/nova.conf  neutron password wtoe@123456
openstack-config --set  /etc/nova/nova.conf  neutron service_metadata_proxy true
openstack-config --set  /etc/nova/nova.conf  neutron metadata_proxy_shared_secret wtoe@123456

View active configurations

egrep -v '(^$|^#)' /etc/nova/nova.conf

Create a symbolic link to the network plugin configuration

ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini

Synchronize database

su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
  --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
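
As with keystone and glance, the created tables can be checked (an optional verification, using the neutron credentials created above):

mysql -h172.16.10.100 -uneutron -pwtoe@123456 -e "use neutron;show tables;"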

When synchronizing the database, if the number of database connections exceeds the maximum number of connections (check the server status), log in to the database and raise the limit:

  > show variables like 'max_connections';
  > set global max_connections = 1000;    (set the maximum number of connections to 1000, then run the first query again to confirm it took effect)

In addition, if because of host performance the number of connections keeps exceeding the number of worker threads the network service starts (the default depends on the number of CPU cores), set the number of worker threads manually:

openstack-config --set  /etc/neutron/neutron.conf DEFAULT api_workers  4
Restart the nova-api service:
systemctl restart openstack-nova-api.service
//Start the neutron services and enable them at boot
systemctl start neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service
systemctl status neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service
systemctl enable neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service
systemctl list-unit-files |grep neutron* |grep enabled
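
Once the services are up, the agents can be verified (an optional check from the standard guide):

. admin-openrc
openstack network agent list    #The metadata, DHCP and L2 agents should be listed with State UP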

2.8 Install the horizon dashboard service

Install dashboard package

yum install openstack-dashboard -y

Modify the configuration file /etc/openstack-dashboard/local_settings
#Check and confirm the following configuration

vim /etc/openstack-dashboard/local_settings

ALLOWED_HOSTS = ['*', ]
SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
OPENSTACK_API_VERSIONS = {
    "identity": 3,
    "image": 2,
    "volume": 2,
}
OPENSTACK_HOST = "wtcontroller"
OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "default"
CACHES = {
    'default': {
         'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
         'LOCATION': 'wtcontroller:11211',
    }
}
OPENSTACK_NEUTRON_NETWORK = {
    'enable_router': False,
    'enable_quotas': False,
    'enable_distributed_router': False,
    'enable_ha_router': False,
    'enable_fip_topology_check': False,
    'enable_lb': False,
    'enable_firewall': False,
    'enable_***': False,
}
TIME_ZONE = "Asia/Shanghai"

Modify /etc/httpd/conf.d/openstack-dashboard.conf
#Add the following
vim /etc/httpd/conf.d/openstack-dashboard.conf

WSGIApplicationGroup %{GLOBAL}

Restart web server and session storage service

systemctl restart httpd.service memcached.service
systemctl status httpd.service memcached.service

Check that the dashboard is available
#Open the following address in a browser:

http://wtcontroller:80/dashboard
User 1: admin / wtoe@123456
User 2: myuser / wtoe@123456
