24.1 Introduction to Automation Operation and Maintenance
Understanding automation operation and maintenance:
—— Traditional operation and maintenance is inefficient; most of the work is done manually.
—— Traditional operation and maintenance work is cumbersome and error-prone.
—— Traditional operations staff repeat the same tasks every day.
—— Traditional operation and maintenance lacks standardized processes.
—— Traditional operation and maintenance scripts proliferate and are hard to manage.
Automated operation and maintenance aims to solve all of the above problems.
Common automated operation and maintenance tools:
Puppet (www.puppetlabs.com) is developed in Ruby, uses a C/S architecture, supports multiple platforms, and can manage configuration files, users, cron tasks, software packages, system services, etc. It is divided into a Community Edition (free) and an Enterprise Edition (paid); the Enterprise Edition supports graphical configuration.
Saltstack (official website https://saltstack.com, documentation docs.saltstack.com) is developed in Python, uses a C/S architecture, and supports multiple platforms. It is lighter than Puppet, very fast at remote command execution, easier to configure and use than Puppet, and can achieve almost all of Puppet's functionality.
Ansible (www.ansible.com) is an even more concise automated operation and maintenance tool. It does not require installing an agent on the client and is developed in Python. It can perform batch configuration of operating systems, batch program deployment, and batch command execution.
24.2 saltstack Installation
Saltstack introduction: https://docs.saltstack.com/en/latest/topics/index.html
—— You can use salt-ssh for remote execution, similar to Ansible.
—— It also supports the C/S (master/minion) mode, which is the mode we will describe. We need to prepare two machines.
—— 192.168.194.130 is the server (master); 192.168.194.132 is the client (minion).
1. Set the hostnames (arslinux-01, arslinux-02) and the hosts file
[root@arslinux-01 ~]# vim /etc/hosts
192.168.194.130 arslinux-01
192.168.194.132 arslinux-02
2. Configure the saltstack yum repository on both machines
[root@arslinux-01 ~]# yum install -y https://repo.saltstack.com/yum/redhat/salt-repo-latest-2.el7.noarch.rpm
[root@arslinux-02 ~]# yum install -y https://repo.saltstack.com/yum/redhat/salt-repo-latest-2.el7.noarch.rpm
3. Install salt-master and salt-minion on 130, and salt-minion on 132
[root@arslinux-01 ~]# yum install -y salt-master salt-minion
[root@arslinux-02 ~]# yum install -y salt-minion
Whichever machine is to be the control center gets salt-master installed; all other machines only need salt-minion.
24.3 Start saltstack service
1. Edit the configuration file on 130
[root@arslinux-01 ~]# vim /etc/salt/minion
master: arslinux-01
(The space after the colon cannot be omitted, otherwise parsing will fail.)
2. Start the salt-master and salt-minion services
[root@arslinux-01 ~]# systemctl start salt-master
[root@arslinux-01 ~]# systemctl start salt-minion
[root@arslinux-01 ~]# ps aux|grep salt
root 44172 0.3 1.3 389376 40932 ? Ss 22:23 0:03 /usr/bin/python /usr/bin/salt-master
root 44181 0.0 0.6 306024 20072 ? S 22:23 0:00 /usr/bin/python /usr/bin/salt-master
root 44188 0.0 1.1 469972 34380 ? Sl 22:23 0:00 /usr/bin/python /usr/bin/salt-master
root 44192 0.0 1.1 388464 34144 ? S 22:23 0:00 /usr/bin/python /usr/bin/salt-master
root 44193 0.7 1.9 417660 60528 ? S 22:23 0:08 /usr/bin/python /usr/bin/salt-master
root 44194 0.0 1.1 389120 34820 ? S 22:23 0:00 /usr/bin/python /usr/bin/salt-master
root 44195 0.0 1.1 765976 35248 ? Sl 22:23 0:00 /usr/bin/python /usr/bin/salt-master
root 44203 0.3 1.5 487824 49356 ? Sl 22:23 0:04 /usr/bin/python /usr/bin/salt-master
root 44204 0.3 1.5 487804 49320 ? Sl 22:23 0:04 /usr/bin/python /usr/bin/salt-master
root 44205 0.3 1.5 487796 49184 ? Sl 22:23 0:04 /usr/bin/python /usr/bin/salt-master
root 44207 0.3 1.5 487808 49192 ? Sl 22:23 0:04 /usr/bin/python /usr/bin/salt-master
root 44208 0.3 1.5 487792 49316 ? Sl 22:23 0:04 /usr/bin/python /usr/bin/salt-master
root 44210 0.2 1.1 463108 35224 ? Sl 22:23 0:02 /usr/bin/python /usr/bin/salt-master
root 47603 14.0 0.7 314132 21716 ? Ss 22:43 0:00 /usr/bin/python /usr/bin/salt-minion
root 47606 56.0 1.3 567764 42856 ? Sl 22:43 0:01 /usr/bin/python /usr/bin/salt-minion
root 47614 0.3 0.6 403864 20176 ? S 22:43 0:00 /usr/bin/python /usr/bin/salt-minion
root 47685 0.0 0.0 112724 988 pts/0 R+ 22:43 0:00 grep --color=auto salt
3. Edit the configuration file on 132 and start the service
[root@arslinux-02 ~]# vim /etc/salt/minion
master: arslinux-01
[root@arslinux-02 ~]# systemctl start salt-minion
4. Check the salt-minion process on 132
[root@arslinux-02 ~]# ps aux|grep salt
root 14221 33.0 2.1 314028 21740 ? Ss 22:43 0:00 /usr/bin/python /usr/bin/salt-minion
root 14224 55.5 3.9 466532 39152 ? Sl 22:43 0:01 /usr/bin/python /usr/bin/salt-minion
root 14232 0.0 2.0 403760 20180 ? S 22:43 0:00 /usr/bin/python /usr/bin/salt-minion
root 14294 0.0 0.0 112724 988 pts/1 R+ 22:43 0:00 grep --color=auto salt
The server listens on ports 4505 and 4506: 4505 is the message-publishing port, and 4506 is the port used for communication with clients.
Clients do not need to listen on any port.
Error:
If salt-minion starts but no process appears, troubleshoot the error with the following method:
[root@arslinux-01 ~]# /usr/bin/salt-minion start
/usr/lib/python2.7/site-packages/salt/scripts.py:198: DeprecationWarning: Python 2.7 will reach the end of its life on January 1st, 2020. Please upgrade your Python as Python 2.7 won't be maintained after that date. Salt will drop support for Python 2.7 in the Sodium release or later.
[ERROR ] Error parsing configuration file: /etc/salt/minion - conf should be a document, not <type 'unicode'>.
[ERROR ] Error parsing configuration file: /etc/salt/minion - conf should be a document, not <type 'unicode'>.
In the end it turned out that the space in "master: arslinux-01" in /etc/salt/minion cannot be omitted.
24.4 saltstack Configuration Authentication
Understanding saltstack authentication:
—— Communication between the master and minions needs a secure channel, and transmission must be encrypted, so authentication must be configured; encryption and decryption are achieved through key pairs.
—— On first startup, a minion generates minion.pem (private key) and minion.pub (public key) under /etc/salt/pki/minion/ and sends the public key to the master.
—— The master also generates a key pair under /etc/salt/pki/master on first startup. When the master receives a minion's public key, it accepts it with the salt-key tool; once accepted, the public key is stored in the /etc/salt/pki/master/minions/ directory. The minion in turn receives the master's public key, stores it in the /etc/salt/pki/minion directory, and names it minion_master.pub.
The above process is implemented with the salt-key tool.
[root@arslinux-01 ~]# salt-key -a arslinux-02
The following keys are going to be accepted:
Unaccepted Keys:
arslinux-02
Proceed? [n/Y] y
Key for minion arslinux-02 accepted.
[root@arslinux-01 ~]# salt-key
Accepted Keys:
arslinux-02
Denied Keys:
Unaccepted Keys:
arslinux-01
Rejected Keys:
[root@arslinux-01 ~]# ls /etc/salt/pki/master/minions/
arslinux-02
[root@arslinux-01 ~]# cat /etc/salt/pki/master/minions/arslinux-02
-----BEGIN PUBLIC KEY-----
MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA33bNZQ/cEK8v20hVFbb6
WGMROxv9kGImHyn6OYNfJHFFpiJblgZheeqct0nrUW4TugLv7LI7a3+DXs2JkzqH
Sh5Q06W1nj4Q0Qv9uGJqf75ZjCvapuCGRR8e79ETbXmhmAwXMmewK8UiWCRFe2/g
nc/w/2rwk6QIpUsNYLCwPF0FLrdJJJDEcWp93UW0SZXHllkqubsBdHdqo8SZVK0H
30n2e3dzwwbVqgIV3AE9kp8qevuwq5sJ1XJLV0BcLroTfft4BODttS4AcaVyWmKK
qNlal3oYYpjXRnJIcZzp5e5srQRjUzFzDKJfS1o6iFf76BuBRnp+eiIx37K05w3d
SQIDAQAB
-----END PUBLIC KEY-----
[root@arslinux-01 ~]#
salt-key command usage:
-a <host>  accept the key of the specified host
-A  accept all hosts
-r <host>  reject the key of the specified host
-R  reject all hosts
-d <host>  delete the key of the specified host
-D  delete all host keys
-y  skip the interactive prompt, equivalent to answering y directly
Actual operation:
[root@arslinux-01 ~]# salt-key -A
The following keys are going to be accepted:
Unaccepted Keys:
arslinux-01
Proceed? [n/Y] y
Key for minion arslinux-01 accepted.
[root@arslinux-01 ~]# !ls
ls /etc/salt/pki/master/minions/
arslinux-01 arslinux-02
[root@arslinux-01 ~]# salt-key -D
The following keys are going to be deleted:
Accepted Keys:
arslinux-01
arslinux-02
Proceed? [N/y] y
Key for minion arslinux-01 deleted.
Key for minion arslinux-02 deleted.
[root@arslinux-01 ~]# ls /etc/salt/pki/master/minions/
[root@arslinux-01 ~]#
—— After deletion a key cannot simply be re-added; you need to restart the minion so that the master can recognize it again.
[root@arslinux-01 ~]# salt-key -A -y
The key glob '*' does not match any unaccepted keys.
[root@arslinux-01 ~]# systemctl restart salt-minion
[root@arslinux-02 ~]# systemctl restart salt-minion
[root@arslinux-01 ~]# salt-key
Accepted Keys:
Denied Keys:
Unaccepted Keys:
arslinux-01
arslinux-02
Rejected Keys:
[root@arslinux-01 ~]# salt-key -A -y
The following keys are going to be accepted:
Unaccepted Keys:
arslinux-01
arslinux-02
Key for minion arslinux-01 accepted.
Key for minion arslinux-02 accepted.
—— Only keys under Unaccepted Keys can be operated on by salt-key -r or salt-key -R.
[root@arslinux-01 ~]# salt-key -r arslinux-02
The key glob 'arslinux-02' does not match any unaccepted keys.
[root@arslinux-01 ~]# systemctl restart salt-minion
[root@arslinux-02 ~]# systemctl restart salt-minion
[root@arslinux-01 ~]# salt-key
Accepted Keys:
Denied Keys:
Unaccepted Keys:
arslinux-01
arslinux-02
Rejected Keys:
[root@arslinux-01 ~]# salt-key -r arslinux-02
The following keys are going to be rejected:
Unaccepted Keys:
arslinux-02
Proceed? [n/Y] y
Key for minion arslinux-02 rejected.
24.5 saltstack Remote Command Execution
[root@arslinux-01 ~]# salt-key
Accepted Keys:
arslinux-01
arslinux-02
Denied Keys:
Unaccepted Keys:
Rejected Keys:
salt '*' test.ping tests whether the target machines are alive.
[root@arslinux-01 ~]# salt '*' test.ping
arslinux-02:
    True
arslinux-01:
    True
[root@arslinux-01 ~]# salt 'arslinux-02' test.ping
arslinux-02:
    True
Here * denotes all accepted minions; you can also specify a single one. test.ping tests whether the other machine is alive.
salt '*' cmd.run "command" executes the command on all accepted minions.
[root@arslinux-01 ~]# salt '*' cmd.run 'ip addr'
arslinux-02:
    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
        link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
        inet 127.0.0.1/8 scope host lo
           valid_lft forever preferred_lft forever
        inet6 ::1/128 scope host
           valid_lft forever preferred_lft forever
    2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
        link/ether 00:0c:29:14:4f:d9 brd ff:ff:ff:ff:ff:ff
        inet 192.168.194.132/24 brd 192.168.194.255 scope global noprefixroute ens33
           valid_lft forever preferred_lft forever
        inet6 fe80::4c99:ed43:5757:e772/64 scope link noprefixroute
           valid_lft forever preferred_lft forever
arslinux-01:
    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
        link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
        inet 127.0.0.1/8 scope host lo
           valid_lft forever preferred_lft forever
        inet6 ::1/128 scope host
           valid_lft forever preferred_lft forever
    2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
        link/ether 00:0c:29:24:ea:f2 brd ff:ff:ff:ff:ff:ff
        inet 192.168.194.130/24 brd 192.168.194.255 scope global noprefixroute ens33
           valid_lft forever preferred_lft forever
        inet 192.168.194.150/24 brd 192.168.194.255 scope global secondary noprefixroute ens33:0
           valid_lft forever preferred_lft forever
        inet6 fe80::c905:5e78:b916:41da/64 scope link noprefixroute
           valid_lft forever preferred_lft forever
    3: ens37: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
        link/ether 00:0c:29:24:ea:fc brd ff:ff:ff:ff:ff:ff
        inet 192.168.100.1/24 brd 192.168.100.255 scope global noprefixroute ens37
           valid_lft forever preferred_lft forever
        inet6 fe80::f41:9da7:d8e3:10ba/64 scope link noprefixroute
           valid_lft forever preferred_lft forever
[root@arslinux-01 ~]# salt 'arslinux-02' cmd.run 'tail -1 /etc/passwd'
arslinux-02:
git:x:1001:1001::/home/git:/usr/bin/git-shell
Note: The * here must be an authenticated client on the master, which can be found through salt-key, usually the id value we have set.
Targeting supports wildcards, lists, and regular expressions. For example, with two clients aming-01 and aming-02, you can write: salt 'aming-*', salt 'aming-0[12]', salt -L 'aming-01,aming-02', salt -E 'aming-(01|02)'. A list (multiple machines separated by commas) requires the -L option, and a regular expression requires the -E option. Targeting also supports grains with the -G option and pillar with the -I option, as described below.
[root@arslinux-01 ~]# salt 'arslinux-*' cmd.run 'hostname'
arslinux-01:
    arslinux-01
arslinux-02:
    arslinux-02
[root@arslinux-01 ~]# salt 'arslinux-0[12]' cmd.run 'hostname'
arslinux-02:
    arslinux-02
arslinux-01:
    arslinux-01
[root@arslinux-01 ~]# salt -L 'arslinux-01,arslinux-02' cmd.run 'hostname'
arslinux-02:
    arslinux-02
arslinux-01:
    arslinux-01
[root@arslinux-01 ~]# salt -E 'arslinux-[0-9]+' cmd.run 'hostname'
arslinux-02:
    arslinux-02
arslinux-01:
    arslinux-01
[root@arslinux-01 ~]# salt -E 'arslinux-(01|02)' cmd.run 'hostname'
arslinux-02:
    arslinux-02
arslinux-01:
    arslinux-01
24.6 grains
grains is information collected when a minion starts, such as the operating system type, NIC IP, kernel version, CPU architecture, etc.
salt 'hostname' grains.ls lists the names of all grains items.
[root@arslinux-01 ~]# salt 'arslinux-01' grains.ls
arslinux-01:
    - SSDs
    - biosreleasedate
    - biosversion
    - cpu_flags
    - cpu_model
    - cpuarch
    - disks
    - dns
    - domain
    - fqdn
    - fqdn_ip4
    - fqdn_ip6
    - fqdns
    - gid
    - gpus
    - groupname
    - host
    - hwaddr_interfaces
    - id
    - init
    - ip4_gw
    - ip4_interfaces
    - ip6_gw
    - ip6_interfaces
    - ip_gw
    - ip_interfaces
    - ipv4
    - ipv6
    - kernel
    - kernelrelease
    - kernelversion
    - locale_info
    - localhost
    - lsb_distrib_codename
    - lsb_distrib_id
    - machine_id
    - manufacturer
    - master
    - mdadm
    - mem_total
    - nodename
    - num_cpus
    - num_gpus
    - os
    - os_family
    - osarch
    - oscodename
    - osfinger
    - osfullname
    - osmajorrelease
    - osrelease
    - osrelease_info
    - path
    - pid
    - productname
    - ps
    - pythonexecutable
    - pythonpath
    - pythonversion
    - saltpath
    - saltversion
    - saltversioninfo
    - selinux
    - serialnumber
    - server_id
    - shell
    - swap_total
    - systemd
    - uid
    - username
    - uuid
    - virtual
    - zfs_feature_flags
    - zfs_support
    - zmqversion
salt 'arslinux-01' grains.items lists all grains together with their values.
[root@arslinux-01 ~]# salt 'arslinux-01' grains.items
arslinux-01:
    ----------
    SSDs:
    biosreleasedate:
        07/02/2015
    biosversion:
        6.00
    cpu_flags:
        - fpu
        - vme
        - de
        - pse
        - tsc
        - msr
        - pae
        - mce
        - cx8
        - apic
        - sep
        - mtrr
        - pge
        - mca
        - cmov
        - pat
        - pse36
        - clflush
        - dts
        - mmx
        - fxsr
        - sse
        - sse2
        - ss
        - syscall
        - nx
        - pdpe1gb
        - rdtscp
        - lm
        - constant_tsc
        - arch_perfmon
        - pebs
        - bts
        - nopl
        - xtopology
        - tsc_reliable
        - nonstop_tsc
        - aperfmperf
        - eagerfpu
        - pni
        - pclmulqdq
        - ssse3
        - fma
        - cx16
        - pcid
        - sse4_1
        - sse4_2
        - x2apic
        - movbe
        - popcnt
        - tsc_deadline_timer
        - aes
        - xsave
        - avx
        - f16c
        - rdrand
        - hypervisor
        - lahf_lm
        - abm
        - 3dnowprefetch
        - epb
        - fsgsbase
        - tsc_adjust
        - bmi1
        - avx2
        - smep
        - bmi2
        - invpcid
        - rdseed
        - adx
        - smap
        - xsaveopt
        - dtherm
        - arat
        - pln
        - pts
        - hwp
        - hwp_notify
        - hwp_act_window
        - hwp_epp
    cpu_model:
        Intel(R) Core(TM) i5-6200U CPU @ 2.30GHz
    cpuarch:
        x86_64
    disks:
        - sda
        - sdb
        - sr0
        - dm-0
    dns:
        ----------
        domain:
        ip4_nameservers:
            - 119.29.29.29
        ip6_nameservers:
        nameservers:
            - 119.29.29.29
        options:
        search:
        sortlist:
    domain:
    fqdn:
        arslinux-01
    fqdn_ip4:
        - 192.168.194.130
    fqdn_ip6:
        - fe80::c905:5e78:b916:41da
        - fe80::f41:9da7:d8e3:10ba
    fqdns:
    gid:
        0
    gpus:
        |_
          ----------
          model:
              SVGA II Adapter
          vendor:
              vmware
    groupname:
        root
    host:
        arslinux-01
    hwaddr_interfaces:
        ----------
        ens33:
            00:0c:29:24:ea:f2
        ens37:
            00:0c:29:24:ea:fc
        lo:
            00:00:00:00:00:00
    id:
        arslinux-01
    init:
        systemd
    ip4_gw:
        192.168.194.2
    ip4_interfaces:
        ----------
        ens33:
            - 192.168.194.130
            - 192.168.194.150
        ens37:
            - 192.168.100.1
        lo:
            - 127.0.0.1
    ip6_gw:
        False
    ip6_interfaces:
        ----------
        ens33:
            - fe80::c905:5e78:b916:41da
            - 192.168.194.150
        ens37:
            - fe80::f41:9da7:d8e3:10ba
        lo:
            - ::1
    ip_gw:
        True
    ip_interfaces:
        ----------
        ens33:
            - 192.168.194.130
            - fe80::c905:5e78:b916:41da
            - 192.168.194.150
        ens37:
            - 192.168.100.1
            - fe80::f41:9da7:d8e3:10ba
        lo:
            - 127.0.0.1
            - ::1
    ipv4:
        - 127.0.0.1
        - 192.168.100.1
        - 192.168.194.130
        - 192.168.194.150
    ipv6:
        - ::1
        - fe80::f41:9da7:d8e3:10ba
        - fe80::c905:5e78:b916:41da
    kernel:
        Linux
    kernelrelease:
        3.10.0-957.el7.x86_64
    kernelversion:
        #1 SMP Thu Nov 8 23:39:32 UTC 2018
    locale_info:
        ----------
        defaultencoding:
            UTF-8
        defaultlanguage:
            zh_CN
        detectedencoding:
            UTF-8
    localhost:
        arslinux-01
    lsb_distrib_codename:
        CentOS Linux 7 (Core)
    lsb_distrib_id:
        CentOS Linux
    machine_id:
        0b3b2aee4c754c669d6ca09336428b22
    manufacturer:
        VMware, Inc.
    master:
        arslinux-01
    mdadm:
    mem_total:
        2827
    nodename:
        arslinux-01
    num_cpus:
        1
    num_gpus:
        1
    os:
        CentOS
    os_family:
        RedHat
    osarch:
        x86_64
    oscodename:
        CentOS Linux 7 (Core)
    osfinger:
        CentOS Linux-7
    osfullname:
        CentOS Linux
    osmajorrelease:
        7
    osrelease:
        7.6.1810
    osrelease_info:
        - 7
        - 6
        - 1810
    path:
        /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin
    pid:
        4817
    productname:
        VMware Virtual Platform
    ps:
        ps -efHww
    pythonexecutable:
        /usr/bin/python
    pythonpath:
        - /usr/bin
        - /usr/lib64/python27.zip
        - /usr/lib64/python2.7
        - /usr/lib64/python2.7/plat-linux2
        - /usr/lib64/python2.7/lib-tk
        - /usr/lib64/python2.7/lib-old
        - /usr/lib64/python2.7/lib-dynload
        - /usr/lib64/python2.7/site-packages
        - /usr/lib/python2.7/site-packages
    pythonversion:
        - 2
        - 7
        - 5
        - final
        - 0
    saltpath:
        /usr/lib/python2.7/site-packages/salt
    saltversion:
        2019.2.0
    saltversioninfo:
        - 2019
        - 2
        - 0
        - 0
    selinux:
        ----------
        enabled:
            False
        enforced:
            Disabled
    serialnumber:
        VMware-56 4d 2d 5f 36 b3 f6 de-b7 99 1d 0c 81 24 ea f2
    server_id:
        858362777
    shell:
        /bin/sh
    swap_total:
        1952
    systemd:
        ----------
        features:
            +PAM +AUDIT +SELINUX +IMA -APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 -SECCOMP +BLKID +ELFUTILS +KMOD +IDN
        version:
            219
    uid:
        0
    username:
        root
    uuid:
        5f2d4d56-b336-def6-b799-1d0c8124eaf2
    virtual:
        VMware
    zfs_feature_flags:
        False
    zfs_support:
        False
    zmqversion:
        4.1.4
—— grains information is not dynamic; it does not change in real time. It is collected when the minion starts.
—— We can do configuration management based on the information collected by grains.
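For example, grains can be referenced inside state files through Jinja to branch per platform. A minimal sketch, assuming a hypothetical apache.sls (the package names are the usual distro defaults, used here for illustration):

```yaml
# /srv/salt/apache.sls — hypothetical example: choose the package by grains
apache-pkg:
  pkg.installed:
    {% if grains['os_family'] == 'RedHat' %}
    - name: httpd
    {% elif grains['os_family'] == 'Debian' %}
    - name: apache2
    {% endif %}
```

The Jinja is rendered on the minion side before the YAML is parsed, so each minion installs the package that matches its own os_family grain.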
Customizing grains information
1. Add two lines to /etc/salt/grains on the minion side and restart salt-minion
[root@arslinux-02 ~]# vim /etc/salt/grains
env: test
role: nginx
[root@arslinux-02 ~]# systemctl restart salt-minion
2. Getting grains on master
[root@arslinux-01 ~]# salt '*' grains.item role env
arslinux-01:
    ----------
    env:
    role:
arslinux-02:
    ----------
    env:
        test
    role:
        nginx
—— Commands can be targeted using grains attributes:
salt -G key:value <module.function> performs the operation on matching minions.
[root@arslinux-01 ~]# salt -G role:nginx cmd.run 'hostname'
arslinux-02:
    arslinux-02
[root@arslinux-01 ~]# salt -G role:nginx cmd.run 'ifconfig'
arslinux-02:
    ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
            inet 192.168.194.132 netmask 255.255.255.0 broadcast 192.168.194.255
            inet6 fe80::4c99:ed43:5757:e772 prefixlen 64 scopeid 0x20<link>
            ether 00:0c:29:14:4f:d9 txqueuelen 1000 (Ethernet)
            RX packets 7957 bytes 1228538 (1.1 MiB)
            RX errors 0 dropped 0 overruns 0 frame 0
            TX packets 7860 bytes 1432289 (1.3 MiB)
            TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
    lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
            inet 127.0.0.1 netmask 255.0.0.0
            inet6 ::1 prefixlen 128 scopeid 0x10<host>
            loop txqueuelen 1000 (Local Loopback)
            RX packets 1019 bytes 89448 (87.3 KiB)
            RX errors 0 dropped 0 overruns 0 frame 0
            TX packets 1019 bytes 89448 (87.3 KiB)
            TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
[root@arslinux-01 ~]# salt -G role:nginx test.ping
arslinux-02:
    True
You can set custom grains for a class or group of machines, and then operate on those machines remotely by matching the grains.
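Besides ad-hoc -G targeting, the top file can also match on grains, so a whole group of machines picks up the same states. A sketch assuming the custom role grain defined above and a hypothetical nginx.sls:

```yaml
# /srv/salt/top.sls — hypothetical: apply the nginx state to all minions with role: nginx
base:
  'role:nginx':
    - match: grain
    - nginx
```

With this top file, every minion carrying role: nginx in its grains receives the nginx state on the next state.highstate, with no need to list hostnames.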
24.7 pillar
pillar, unlike grains, is defined on the master; it is information assigned to minions. For example, sensitive data (such as passwords) can be stored in pillar, variables can be defined, and so on.
Configure custom pillar
1. In the configuration file /etc/salt/master, find pillar_roots: and the two lines below it, uncomment them, and restart salt-master
[root@arslinux-01 ~]# vim /etc/salt/master
pillar_roots:
  base:
    - /srv/pillar
[root@arslinux-01 ~]# systemctl restart salt-master
Note the spacing in the configuration: there are two spaces before base and four spaces before "- /srv/pillar", and they cannot be omitted.
2. Create /srv/pillar and create test.sls in the directory with the content conf: /etc/123.conf. You can also create a test2.sls.
[root@arslinux-01 ~]# mkdir /srv/pillar
[root@arslinux-01 ~]# vi /srv/pillar/test.sls
conf: /etc/123.conf
[root@arslinux-01 ~]# vi /srv/pillar/test2.sls
dir: /data/123
[root@arslinux-01 ~]# vi /srv/pillar/top.sls
base:
  'arslinux-02':
    - test
    - test2
Multiple entries can be defined according to need.
3. After changing pillar configuration files, refresh the pillar to pick up the new state without restarting salt-master
[root@arslinux-01 ~]# salt '*' saltutil.refresh_pillar
arslinux-01:
    True
arslinux-02:
    True
4. Verify the state
[root@arslinux-01 ~]# salt '*' pillar.item conf
arslinux-01:
    ----------
    conf:
arslinux-02:
    ----------
    conf:
        /etc/123.conf
[root@arslinux-01 ~]# salt '*' pillar.item conf dir
arslinux-01:
    ----------
    conf:
    dir:
arslinux-02:
    ----------
    conf:
        /etc/123.conf
    dir:
        /data/123
—— Of course, the parameters of different machines can also be written in the same top.sls, for example:
base:
  'arslinux-02':
    - test
  'arslinux-01':
    - test2
[root@arslinux-01 ~]# salt '*' saltutil.refresh_pillar
arslinux-02:
    True
arslinux-01:
    True
[root@arslinux-01 ~]# salt '*' pillar.item conf dir
arslinux-01:
    ----------
    conf:
    dir:
        /data/123
arslinux-02:
    ----------
    conf:
        /etc/123.conf
    dir:
You can see the difference between the results of previous operations.
5. pillar can also be used as a match object for salt
salt -I 'key:value' test.ping
[root@arslinux-01 ~]# salt -I 'conf:/etc/123.conf' cmd.run 'w'
arslinux-02:
     23:21:44 up 1:16, 1 user, load average: 0.00, 0.01, 0.05
    USER TTY FROM LOGIN@ IDLE JCPU PCPU WHAT
    root pts/0 192.168.194.1 22:06 24.00s 0.17s 0.17s -bash
[root@arslinux-01 ~]# salt -I 'conf:/etc/123.conf' test.ping
arslinux-02:
    True
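Pillar values are typically consumed inside state files through Jinja, which is what makes them useful as per-minion variables. A minimal sketch reusing the conf pillar defined above (the state ID and contents are illustrative):

```yaml
# /srv/salt/conf_file.sls — hypothetical: manage a file at the path stored in pillar
conf_file:
  file.managed:
    - name: {{ pillar['conf'] }}
    - contents: "managed by salt"
    - user: root
    - mode: 644
```

Each minion renders {{ pillar['conf'] }} with its own pillar data, so the same state can create different files on different machines.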
24.8 Installing and Configuring httpd
1. Find file_roots in the master configuration file and enable its file storage directory
[root@arslinux-01 ~]# vim /etc/salt/master
file_roots:
  base:
    - /srv/salt/
2. Create the /srv/salt/ directory and enter it
[root@arslinux-01 ~]# mkdir /srv/salt/
[root@arslinux-01 ~]# cd !$
cd /srv/salt/
3. Create top.sls and restart salt-master
[root@arslinux-01 salt]# vim top.sls
base:
  '*':
    - httpd
[root@arslinux-01 salt]# systemctl restart salt-master
This means that the httpd module is executed on all clients
4. Create httpd.sls
[root@arslinux-01 salt]# vim httpd.sls
httpd-service:
  pkg.installed:
    - names:
      - httpd
      - httpd-devel
  service.running:
    - name: httpd
    - enable: True
Note: httpd-service is the ID, which is user-defined. pkg.installed is the package installation function; below it are the names of the packages to install. service.running is likewise a function, which ensures the specified service is running, and enable: True means start on boot.
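The two functions above run independently; in practice they are often chained so that the service only starts after the package is installed, and restarts when its configuration changes. A sketch using require and watch (the httpd-conf state and the salt://httpd/httpd.conf source are illustrative additions, not part of the example above):

```yaml
# hypothetical extension of httpd.sls with dependencies
httpd-conf:
  file.managed:
    - name: /etc/httpd/conf/httpd.conf
    - source: salt://httpd/httpd.conf

httpd-service:
  pkg.installed:
    - name: httpd
  service.running:
    - name: httpd
    - enable: True
    - require:
      - pkg: httpd-service    # do not start until the package is installed
    - watch:
      - file: httpd-conf      # restart httpd whenever the config file changes
```

require enforces ordering, while watch additionally triggers a service restart when the watched state reports changes.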
5. Executing Installation Commands
[root@arslinux-01 salt]# salt 'arslinux-01' state.highstate
When the command is executed, salt goes to /srv/salt/ to find top.sls, and then executes according to the modules referenced there.
The whole installation proceeds silently, without interaction.
Before performing the operation, remember to stop any service occupying port 80, otherwise an error will be reported and httpd will not start.
24.9 Configuration Management: Files
1. Create test.sls on master
[root@arslinux-01 salt]# vim test.sls
file_test:
  file.managed:
    - name: /tmp/arslinux
    - source: salt://test/123/ppp.txt
    - user: root
    - group: root
    - mode: 600
Note: file_test in the first line is a self-defined name; it names this configuration section and can be referenced from other sections. The file.managed module takes the parameters below it: name is the path and name of the file to create on the minion side, and source specifies where the file is copied from; salt://test/123/ppp.txt here corresponds to /srv/salt/test/123/ppp.txt.
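file.managed can also render the source file as a template before copying it, which combines neatly with grains and pillar. A sketch, assuming a hypothetical state that reuses the same source as a Jinja template:

```yaml
# hypothetical variant of test.sls: render the source as Jinja before copying
file_test_tmpl:
  file.managed:
    - name: /tmp/arslinux
    - source: salt://test/123/ppp.txt
    - template: jinja
    - user: root
    - group: root
    - mode: 600
```

With template: jinja, any {{ grains['host'] }} or {{ pillar['conf'] }} expressions inside ppp.txt would be substituted per minion as the file is delivered.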
2. Create ppp.txt file
[root@arslinux-01 salt]# mkdir test
[root@arslinux-01 salt]# mkdir test/123/
[root@arslinux-01 salt]# cp /etc/inittab test/123/ppp.txt
3. Change top.sls
[root@arslinux-01 salt]# vim top.sls
base:
  '*':
    - test
4. Execution of operations
[root@arslinux-01 salt]# salt 'arslinux-02' state.highstate
arslinux-02:
----------
          ID: file_test
    Function: file.managed
        Name: /tmp/arslinux
      Result: True
     Comment: File /tmp/arslinux updated
     Started: 22:43:37.846500
    Duration: 167.482 ms
     Changes:
              ----------
              diff:
                  New file

Summary for arslinux-02
------------
Succeeded: 1 (changed=1)
Failed:    0
------------
Total states run:     1
Total run time: 167.482 ms
5. On the minion side, check whether the file was created successfully
[root@arslinux-02 ~]# ll /tmp/arslinux
-rw------- 1 root root 511 Aug 3 22:43 /tmp/arslinux
24.10 Configuration Management: Directories
1. Create testdir.sls on the master
[root@arslinux-01 salt]# vim testdir.sls
file_dir:
  file.recurse:
    - name: /tmp/testdir
    - source: salt://test/123
    - user: root
    - file_mode: 640
    - dir_mode: 750
    - mkdir: True
    - clean: True
Note: with clean: True, files or directories deleted on the source side are also deleted on the target (minion) side; without it they are not deleted. The other parameters are similar to those of file.managed above.
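file.recurse also accepts an exclusion pattern, which is useful when clean: True would otherwise remove files you want to keep out of the sync entirely. A sketch (the E@\.bak$ pattern is an illustrative assumption):

```yaml
# hypothetical variant of testdir.sls: skip .bak files during recursion
file_dir:
  file.recurse:
    - name: /tmp/testdir
    - source: salt://test/123
    - mkdir: True
    - clean: True
    - exclude_pat: E@\.bak$
```

The E@ prefix marks the pattern as a regular expression; excluded paths are neither copied nor removed by clean.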
2. top.sls can be extended by appending to it directly
[root@arslinux-01 salt]# echo '    - testdir' >> top.sls
[root@arslinux-01 salt]# cat top.sls
base:
  '*':
    - test
    - testdir
3. Execution of operations
[root@arslinux-01 salt]# salt 'arslinux-02' state.highstate
arslinux-02:
----------
          ID: file_test
    Function: file.managed
        Name: /tmp/arslinux
      Result: True
     Comment: File /tmp/arslinux is in the correct state
     Started: 23:00:27.660586
    Duration: 95.354 ms
     Changes:
----------
          ID: file_dir
    Function: file.recurse
        Name: /tmp/testdir
      Result: True
     Comment: Recursively updated /tmp/testdir
     Started: 23:00:27.756271
    Duration: 325.589 ms
     Changes:
              ----------
              /tmp/testdir/ppp.txt:
                  ----------
                  diff:
                      New file
                  mode:
                      0640

Summary for arslinux-02
------------
Succeeded: 2 (changed=1)
Failed:    0
------------
Total states run:     2
Total run time: 420.943 ms
4. Check on the minion side whether creation succeeded and the permissions are correct
[root@arslinux-02 ~]# ll /tmp/testdir/
total 4
-rw-r----- 1 root root 511 Aug 3 23:00 ppp.txt
[root@arslinux-02 ~]# ll -d /tmp/testdir/
drwxr-x--- 2 root root 21 Aug 3 23:00 /tmp/testdir/
5. If test/123/ is deleted and state.highstate is executed again, an error is reported because the source directory no longer exists
[root@arslinux-01 salt]# cd test/
[root@arslinux-01 test]# mkdir abc
[root@arslinux-01 test]# touch 123.txt
[root@arslinux-01 test]# rm -rf 123
[root@arslinux-01 test]# ls
123.txt abc
[root@arslinux-01 test]# salt 'arslinux-02' state.highstate
arslinux-02:
----------
          ID: file_test
    Function: file.managed
        Name: /tmp/arslinux
      Result: False
     Comment: Source file salt://test/123/ppp.txt not found in saltenv 'base'
     Started: 23:08:19.655224
    Duration: 140.84 ms
     Changes:
----------
          ID: file_dir
    Function: file.recurse
        Name: /tmp/testdir
      Result: False
     Comment: Recurse failed: none of the specified sources were found
     Started: 23:08:19.796420
    Duration: 32.291 ms
     Changes:

Summary for arslinux-02
------------
Succeeded: 0
Failed:    2
------------
Total states run:     2
Total run time: 173.131 ms
Because test/123/ was deleted, operations based on that directory fail.
6. Solve the problem by removing test from top.sls so it is no longer referenced
[root@arslinux-01 salt]# vim top.sls
base:
  '*':
    - testdir
7. Recreate /srv/salt/test/123/
[root@arslinux-01 salt]# mkdir test/123/
[root@arslinux-01 salt]# mv test/abc test/123.txt test/123/
8. Execute again
[root@arslinux-01 salt]# salt 'arslinux-02' state.highstate
arslinux-02:
----------
          ID: file_dir
    Function: file.recurse
        Name: /tmp/testdir
      Result: True
     Comment: Recursively updated /tmp/testdir
     Started: 23:16:26.961983
    Duration: 420.045 ms
     Changes:
              ----------
              /tmp/testdir/123.txt:
                  ----------
                  diff:
                      New file
                  mode:
                      0640
              removed:
                  - /tmp/testdir/ppp.txt

Summary for arslinux-02
------------
Succeeded: 1 (changed=1)
Failed:    0
------------
Total states run:     1
Total run time: 420.045 ms
9. The abc directory is not synchronized to the minion side because abc is empty; empty directories are not synchronized.
[root@arslinux-02 ~]# ll /tmp/testdir/
total 0
-rw-r----- 1 root root 0 Aug 3 23:16 123.txt
24.11 Configuration Management: Remote Commands
1. Editing top.sls
[root@arslinux-01 salt]# vim top.sls
base:
  '*':
    - shell_test
2. Create shell_test.sls
[root@arslinux-01 salt]# vim shell_test.sls
hell_test:
  cmd.script:
    - source: salt://test/1.sh
    - user: root
3. Create script 1.sh
[root@arslinux-01 salt]# vim test/1.sh
#!/bin/bash
touch /tmp/111.txt
if [ ! -d /tmp/1233 ]
then
    mkdir /tmp/1233
fi
4. Execution of operations
[root@arslinux-01 salt]# salt 'arslinux-02' state.highstate
arslinux-02:
----------
          ID: hell_test
    Function: cmd.script
      Result: True
     Comment: Command 'hell_test' run
     Started: 16:54:25.741342
    Duration: 168.634 ms
     Changes:
              ----------
              pid:
                  4413
              retcode:
                  0
              stderr:
              stdout:

Summary for arslinux-02
------------
Succeeded: 1 (changed=1)
Failed:    0
------------
Total states run:     1
Total run time: 168.634 ms
5. Check on the minion side
[root@arslinux-02 ~]# ll /tmp/
total 4
-rw-r--r-- 1 root root 0 Aug 4 16:54 111.txt
drwxr-xr-x 2 root root 6 Aug 4 16:54 1233
-rw------- 1 root root 511 Aug 3 22:43 arslinux
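Note that cmd.script reruns the script on every highstate. When a plain shell command should only run if needed, the cmd.run state supports unless/onlyif guards to make it idempotent. A sketch (the state ID and paths are illustrative):

```yaml
# hypothetical: run mkdir only when the directory is missing
make_dir:
  cmd.run:
    - name: mkdir -p /tmp/1233
    - unless: test -d /tmp/1233
```

If the unless command exits 0 (the directory already exists), the state is skipped and reports no changes.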
24.12 Configuration Management: Cron Jobs
1. Editing top.sls
[root@arslinux-01 salt]# vim top.sls
base:
  '*':
    - cron_test
2. Create cron_test.sls
[root@arslinux-01 salt]# vim cron_test.sls
cron_test:
  cron.present:
    - name: /bin/touch /tmp/12121212.txt
    - user: root
    - minute: '20'
    - hour: 17
    - daymonth: '*'
    - month: '*'
    - dayweek: '*'
Note: * must be enclosed in single quotes. We could also manage cron with the file.managed module, since the system's cron exists in the form of configuration files.
—— To delete the cron, you need to replace the function with:
cron.absent:
  - name: /bin/touch /tmp/111.txt
cron.present and cron.absent cannot coexist; to delete a cron, the earlier present section has to be removed.
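As noted above, the system cron is just a file, so the same job could instead be shipped with file.managed into /etc/cron.d. A sketch (the file name and state ID are illustrative; /etc/cron.d entries require the extra user field):

```yaml
# hypothetical alternative to cron.present: drop a file under /etc/cron.d
cron_file:
  file.managed:
    - name: /etc/cron.d/touch_test
    - user: root
    - group: root
    - mode: 644
    - contents: |
        20 17 * * * root /bin/touch /tmp/12121212.txt
```

Deleting the job then means removing the file (e.g. with file.absent), which avoids the present/absent coexistence issue entirely.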
3. Execution of operations
[root@arslinux-01 salt]# salt 'arslinux-02' state.highstate
arslinux-02:
----------
          ID: cron_test
    Function: cron.present
        Name: /bin/touch /tmp/12121212.txt
      Result: True
     Comment: Cron /bin/touch /tmp/12121212.txt added to root's crontab
     Started: 17:16:36.800747
    Duration: 543.17 ms
     Changes:
              ----------
              root:
                  /bin/touch /tmp/12121212.txt

Summary for arslinux-02
------------
Succeeded: 1 (changed=1)
Failed:    0
------------
Total states run:     1
Total run time: 543.170 ms
4. Check on the minion side
[root@arslinux-02 ~]# date
Sun Aug 4 17:18:11 CST 2019
[root@arslinux-02 ~]# ll /tmp/
total 4
-rw-r--r-- 1 root root 0 Aug 4 16:54 111.txt
drwxr-xr-x 2 root root 6 Aug 4 16:54 1233
-rw------- 1 root root 511 Aug 3 22:43 arslinux
[root@arslinux-02 ~]# crontab -l
# Lines below here are managed by Salt, do not edit
# SALT_CRON_IDENTIFIER:/bin/touch /tmp/12121212.txt
20 17 * * * /bin/touch /tmp/12121212.txt
5. Check the minion side again after 17:20
[root@arslinux-02 ~]# ll /tmp/
total 4
-rw-r--r-- 1 root root   0 Aug  4 16:54 111.txt
-rw-r--r-- 1 root root   0 Aug  4 17:20 12121212.txt
drwxr-xr-x 2 root root   6 Aug  4 16:54 1233
-rw------- 1 root root 511 Aug  3 22:43 arslinux
The scheduled task has run successfully.
6. After adding a cron, do not modify the crontab directly on the minion side; otherwise the master will add the entry again the next time salt is executed
[root@arslinux-02 ~]# crontab -e
crontab: installing new crontab
[root@arslinux-02 ~]# crontab -l
# SALT_CRON_IDENTIFIER:/bin/touch /tmp/12121212.txt
20 17 * * * /bin/touch /tmp/12121212.txt
[root@arslinux-01 salt]# salt 'arslinux-02' state.highstate
arslinux-02:
----------
          ID: cron_test
    Function: cron.present
        Name: /bin/touch /tmp/12121212.txt
      Result: True
     Comment: Cron /bin/touch /tmp/12121212.txt added to root's crontab
     Started: 17:29:33.617502
    Duration: 491.19 ms
     Changes:
              ----------
              root:
                  /bin/touch /tmp/12121212.txt

Summary for arslinux-02
------------
Succeeded: 1 (changed=1)
Failed:    0
------------
Total states run:     1
Total run time: 491.190 ms
[root@arslinux-02 ~]# crontab -l
# SALT_CRON_IDENTIFIER:/bin/touch /tmp/12121212.txt
20 17 * * * /bin/touch /tmp/12121212.txt
# Lines below here are managed by Salt, do not edit
# SALT_CRON_IDENTIFIER:/bin/touch /tmp/12121212.txt
20 17 * * * /bin/touch /tmp/12121212.txt
—— Note the comment "# Lines below here are managed by Salt, do not edit".
The lines below it must not be changed by hand; otherwise Salt can no longer delete or modify the cron.
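This is why hand-editing is a problem: given the duplicated crontab from the transcript above, even a simple scan (plain Python, illustration only) finds the same job listed twice.

```python
# Scan the duplicated crontab from the transcript above for repeated jobs.
crontab_text = """\
# SALT_CRON_IDENTIFIER:/bin/touch /tmp/12121212.txt
20 17 * * * /bin/touch /tmp/12121212.txt
# Lines below here are managed by Salt, do not edit
# SALT_CRON_IDENTIFIER:/bin/touch /tmp/12121212.txt
20 17 * * * /bin/touch /tmp/12121212.txt
"""
# Keep only actual job lines (skip comments and blanks), then find repeats.
jobs = [l for l in crontab_text.splitlines() if l and not l.startswith("#")]
duplicates = sorted({j for j in jobs if jobs.count(j) > 1})
print(duplicates)  # → ['20 17 * * * /bin/touch /tmp/12121212.txt']
```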
7. First restore the minion's crontab to the correct state
[root@arslinux-02 ~]# crontab -e
crontab: installing new crontab
[root@arslinux-02 ~]# crontab -l
# Lines below here are managed by Salt, do not edit
# SALT_CRON_IDENTIFIER:/bin/touch /tmp/12121212.txt
20 17 * * * /bin/touch /tmp/12121212.txt
8. On the master side, delete the crontab entry using the cron.absent state
[root@arslinux-01 salt]# vim cron_test.sls
cron_test:
  cron.absent:
    - name: /bin/touch /tmp/12121212.txt
[root@arslinux-01 salt]# salt 'arslinux-02' state.highstate
arslinux-02:
----------
          ID: cron_test
    Function: cron.absent
        Name: /bin/touch /tmp/12121212.txt
      Result: True
     Comment: Cron /bin/touch /tmp/12121212.txt removed from root's crontab
     Started: 17:34:42.720616
    Duration: 437.822 ms
     Changes:
              ----------
              root:
                  /bin/touch /tmp/12121212.txt

Summary for arslinux-02
------------
Succeeded: 1 (changed=1)
Failed:    0
------------
Total states run:     1
Total run time: 437.822 ms
[root@arslinux-02 ~]# crontab -l
# Lines below here are managed by Salt, do not edit
24.13 Other Commands
—— Copy a file from the master to the client (cp.get_file)
—— Copy a directory from the master to the client (cp.get_dir)
[root@arslinux-01 salt]# cp /etc/passwd test/1.txt
[root@arslinux-01 salt]# salt '*' cp.get_file salt://test/1.txt /tmp/1234567.txt
arslinux-02:
    /tmp/1234567.txt
arslinux-01:
    /tmp/1234567.txt
[root@arslinux-01 salt]# salt '*' cp.get_dir salt://test/123/ /tmp/
arslinux-01:
    - /tmp//123/123.txt
    - /tmp//123/abc
arslinux-02:
    - /tmp//123/123.txt
    - /tmp//123/abc
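Note the destination layout in the output above: the source directory itself ('123') is recreated under the destination, so files land in /tmp/123/... rather than directly in /tmp/. A local sketch of that behavior (plain Python with temp directories standing in for the master's file tree and the minion's /tmp):

```python
# Local sketch of cp.get_dir's destination layout. Temp dirs stand in for
# the master's salt://test tree and the minion's /tmp.
import os
import shutil
import tempfile

src_root = tempfile.mkdtemp()   # stands in for salt://test on the master
dst = tempfile.mkdtemp()        # stands in for /tmp on the minion
os.makedirs(os.path.join(src_root, "123"))
open(os.path.join(src_root, "123", "123.txt"), "w").close()

# The directory '123' is recreated under the destination, keeping its name.
shutil.copytree(os.path.join(src_root, "123"), os.path.join(dst, "123"))
print(sorted(os.listdir(os.path.join(dst, "123"))))  # → ['123.txt']
```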
—— salt-run manage.up: list all minions that are currently alive
—— salt '*' cmd.script salt://test/1.sh: execute, from the command line, a shell script stored on the master
[root@arslinux-01 salt]# salt-run manage.up
- arslinux-01
- arslinux-02
[root@arslinux-01 salt]# salt '*' cmd.script salt://test/1.sh
arslinux-01:
    ----------
    pid:
        21621
    retcode:
        0
    stderr:
    stdout:
arslinux-02:
    ----------
    pid:
        7289
    retcode:
        0
    stderr:
    stdout:
24.14 salt-ssh
salt-ssh does not need to authenticate (accept keys from) the client, nor does the client need salt-minion installed; it works similarly to pssh/expect.
1. Installing salt-ssh
[root@arslinux-01 ~]# yum install -y https://repo.saltstack.com/yum/redhat/salt-repo-latest-2.el7.noarch.rpm
[root@arslinux-01 ~]# yum install -y salt-ssh
2. Editing configuration file roster
[root@arslinux-01 ~]# vim /etc/salt/roster
arslinux-01:
  host: 192.168.194.130
  user: root
  passwd: xxxxxxx
arslinux-02:
  host: 192.168.194.132
  user: root
  passwd: xxxxxxx
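The roster supports more options than host/user/passwd. A hypothetical entry (arslinux-03 and all its values are assumptions, not part of this setup) using key-based login might look like:

```yaml
# Hypothetical roster entry: 'port', 'priv', and 'sudo' are optional keys
# described in the salt-ssh roster documentation.
arslinux-03:
  host: 192.168.194.133
  user: root
  port: 22
  priv: /root/.ssh/id_rsa
  sudo: True
```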
3. Testing for login
[root@arslinux-01 ~]# salt-ssh --key-deploy '*' -r 'w'
[ERROR ] Failed collecting tops for Python binary python3.
arslinux-02:
    ----------
    retcode:
        0
    stderr:
    stdout:
        root@192.168.194.132's password:
        19:25:46 up 2:42, 1 user, load average: 0.00, 0.06, 0.09
        USER  TTY   FROM           LOGIN@  IDLE   JCPU   PCPU  WHAT
        root  pts/0 192.168.194.1  16:44   1:50m  0.09s  0.09s -bash
arslinux-01:
    ----------
    retcode:
        0
    stderr:
    stdout:
        root@192.168.194.130's password:
        19:25:46 up 2:42, 1 user, load average: 0.45, 0.22, 0.17
        USER  TTY   FROM           LOGIN@  IDLE   JCPU   PCPU  WHAT
        root  pts/0 192.168.194.1  16:44   10.00s 9.07s  0.04s /usr/bin/python /usr/bin/salt-ssh --key-deploy * -r w
[root@arslinux-01 ~]# date
Sun Aug  4 19:27:10 CST 2019
[root@arslinux-01 ~]# ll /root/.ssh/authorized_keys
-rw-r--r--. 1 root root 1191 Aug  4 19:25 /root/.ssh/authorized_keys
[root@arslinux-02 ~]# ll /root/.ssh/authorized_keys
-rw-r--r--. 1 root root 1199 Aug  4 19:25 /root/.ssh/authorized_keys
The public key has been pushed to both machines.
4. Delete the passwd lines from the roster and execute again; login now works without a password.
[root@arslinux-01 ~]# salt-ssh --key-deploy '*' -r 'w'
arslinux-02:
    ----------
    retcode:
        0
    stderr:
    stdout:
        19:30:23 up 2:47, 1 user, load average: 0.00, 0.03, 0.06
        USER  TTY   FROM           LOGIN@  IDLE   JCPU   PCPU  WHAT
        root  pts/0 192.168.194.1  16:44   1:27   0.10s  0.10s -bash
arslinux-01:
    ----------
    retcode:
        0
    stderr:
    stdout:
        19:30:23 up 2:47, 1 user, load average: 0.25, 0.18, 0.16
        USER  TTY   FROM           LOGIN@  IDLE   JCPU   PCPU  WHAT
        root  pts/0 192.168.194.1  16:44   7.00s  1.49s  0.02s /usr/bin/python /usr/bin/salt-ssh --key-deploy * -r w