SaltStack data system
1. SaltStack data systems
SaltStack has two major data systems:
- Grains
- Pillar
2. SaltStack component: Grains
Grains is one of the most important SaltStack components: it stores static information collected when the minion starts, and it is used constantly during configuration and deployment. Put simply, Grains records the common attributes of each minion, such as CPU, memory, disk and network information. All grains of a minion can be viewed with grains.items.
Functions of Grains:
- Collect asset information
Grains application scenarios:
- Information query
- Target matching at the command line
- Target matching in top file
- Target matching in template
For target matching in templates, see: https://docs.saltstack.com/en/latest/topics/pillar/
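As a quick illustration of grain-based branching in a template, here is a minimal sketch (the state IDs and package names are illustrative, not taken from this article's environment):

```
{# Hypothetical SLS/template snippet: choose the package name from a grain #}
{% if grains['os_family'] == 'RedHat' %}
apache-pkg:
  pkg.installed:
    - name: httpd
{% elif grains['os_family'] == 'Debian' %}
apache-pkg:
  pkg.installed:
    - name: apache2
{% endif %}
```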
Information query examples:
List the keys and values of all grains:
[root@master ~]# salt 'minion' grains.items minion: ---------- biosreleasedate: 07/22/2020 //bios time biosversion: 6.00 //Version of bios cpu_flags: //cpu related properties - fpu - vme - de - pse - tsc - msr - pae - mce - cx8 - apic - sep - mtrr - pge - mca - cmov - pat - pse36 - clflush - mmx - fxsr - sse - sse2 - ht - syscall - nx - mmxext - fxsr_opt - pdpe1gb - rdtscp - lm - constant_tsc - rep_good - nopl - tsc_reliable - nonstop_tsc - cpuid - extd_apicid - pni - pclmulqdq - ssse3 - fma - cx16 - sse4_1 - sse4_2 - x2apic - movbe - popcnt - aes - xsave - avx - f16c - rdrand - hypervisor - lahf_lm - cmp_legacy - extapic - cr8_legacy - abm - sse4a - misalignsse - 3dnowprefetch - osvw - topoext - ssbd - ibpb - vmmcall - fsgsbase - bmi1 - avx2 - smep - bmi2 - rdseed - adx - smap - clflushopt - clwb - sha_ni - xsaveopt - xsavec - xgetbv1 - xsaves - clzero - wbnoinvd - arat - umip - rdpid - overflow_recov - succor cpu_model: //Specific model of cpu AMD Ryzen 9 4900HS with Radeon Graphics cpuarch: //cpu architecture x86_64 cwd: / disks: - sr0 - sda dns: ---------- domain: ip4_nameservers: - 192.168.100.2 ip6_nameservers: nameservers: - 192.168.100.2 options: search: - localdomain sortlist: domain: efi: False efi-secure-boot: False fqdn: node1 fqdn_ip4: - 211.137.54.230 fqdn_ip6: - fe80::fdbf:aff7:d636:5069 fqdns: - node1 gid: 0 gpus: |_ ---------- model: SVGA II Adapter vendor: vmware groupname: root host: node1 //host name hwaddr_interfaces: ---------- ens33: 00:0c:29:70:da:64 lo: 00:00:00:00:00:00 id: //ID of minion minion init: systemd ip4_gw: 192.168.100.2 ip4_interfaces: ---------- ens33: - 192.168.100.149 lo: - 127.0.0.1 ip6_gw: False ip6_interfaces: ---------- ens33: - fe80::fdbf:aff7:d636:5069 lo: - ::1 ip_gw: True ip_interfaces: ---------- ens33: - 192.168.100.149 - fe80::fdbf:aff7:d636:5069 lo: - 127.0.0.1 - ::1 ipv4: - 127.0.0.1 - 192.168.100.149 ipv6: - ::1 - fe80::fdbf:aff7:d636:5069 kernel: Linux kernelparams: |_ - BOOT_IMAGE - (hd0,msdos1)/vmlinuz-4.18.0-257.el8.x86_64 |_ - root - /dev/mapper/cs-root |_ - ro - None |_ - crashkernel - auto |_ - resume - /dev/mapper/cs-swap |_ - rd.lvm.lv - cs/root |_ - rd.lvm.lv - cs/swap |_ - rhgb - None |_ - quiet - None kernelrelease: 4.18.0-257.el8.x86_64 kernelversion: #1 SMP Thu Dec 3 22:16:23 UTC 2020 locale_info: ---------- defaultencoding: UTF-8 defaultlanguage: zh_CN detectedencoding: UTF-8 timezone: EDT localhost: node1 lsb_distrib_codename: CentOS Stream 8 lsb_distrib_id: CentOS Stream lsb_distrib_release: 8 lvm: ---------- cs: - home - root - swap machine_id: f6b97a1f9cd64c5f912e90e86fa9a73a manufacturer: VMware, Inc. 
master: 192.168.100.148 mdadm: mem_total: 1789 nodename: node1 num_cpus: 4 num_gpus: 1 os: CentOS Stream os_family: RedHat osarch: x86_64 oscodename: CentOS Stream 8 osfinger: CentOS Stream-8 osfullname: CentOS Stream osmajorrelease: 8 osrelease: 8 osrelease_info: - 8 path: /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin pid: 1401 productname: VMware Virtual Platform ps: ps -efHww pythonexecutable: /usr/bin/python3.6 pythonpath: - /usr/bin - /usr/lib64/python36.zip - /usr/lib64/python3.6 - /usr/lib64/python3.6/lib-dynload - /usr/lib64/python3.6/site-packages - /usr/lib/python3.6/site-packages pythonversion: - 3 - 6 - 8 - final - 0 saltpath: /usr/lib/python3.6/site-packages/salt saltversion: 3004 saltversioninfo: - 3004 selinux: ---------- enabled: True enforced: Enforcing serialnumber: VMware-56 4d b1 cc 9f 07 68 74-f7 3b b2 6c 9b 70 da 64 server_id: 279719642 shell: /bin/sh ssds: swap_total: 2079 systemd: ---------- features: +PAM +AUDIT +SELINUX +IMA -APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD +IDN2 -IDN +PCRE2 default-hierarchy=legacy version: 239 systempath: - /usr/local/sbin - /usr/local/bin - /usr/sbin - /usr/bin transactional: False uid: 0 username: root uuid: ccb14d56-079f-7468-f73b-b26c9b70da64 virtual: VMware zfs_feature_flags: False zfs_support: False zmqversion: 4.3.4 [root@master ~]#
List only the keys of all grains:
[root@master ~]# salt 'minion' grains.ls minion: - biosreleasedate - biosversion - cpu_flags - cpu_model - cpuarch - cwd - disks - dns - domain - efi - efi-secure-boot - fqdn - fqdn_ip4 - fqdn_ip6 - fqdns - gid - gpus - groupname - host - hwaddr_interfaces - id - init - ip4_gw - ip4_interfaces - ip6_gw - ip6_interfaces - ip_gw - ip_interfaces - ipv4 - ipv6 - kernel - kernelparams - kernelrelease - kernelversion - locale_info - localhost - lsb_distrib_codename - lsb_distrib_id - lsb_distrib_release - lvm - machine_id - manufacturer - master - mdadm - mem_total - nodename - num_cpus - num_gpus - os - os_family - osarch - oscodename - osfinger - osfullname - osmajorrelease - osrelease - osrelease_info - path - pid - productname - ps - pythonexecutable - pythonpath - pythonversion - saltpath - saltversion - saltversioninfo - selinux - serialnumber - server_id - shell - ssds - swap_total - systemd - systempath - transactional - uid - username - uuid - virtual - zfs_feature_flags - zfs_support - zmqversion [root@master ~]#
Query the value of a single key, such as the IP address:
```
[root@master ~]# salt 'minion' grains.get ip4_interfaces
minion:
    ----------
    ens33:
        - 192.168.100.149
    lo:
        - 127.0.0.1
[root@master ~]#
[root@master ~]# salt 'minion' grains.get fqdn_ip4
minion:
    - 211.137.54.230
[root@master ~]# salt 'minion' grains.get ip4_interfaces:ens33
minion:
    - 192.168.100.149
[root@master ~]#
```
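Besides grains.get, several keys can be queried in a single call with grains.item (command sketch only; the output depends on the minion):

```
[root@master ~]# salt 'minion' grains.item os osrelease num_cpus
```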
Target matching examples:
Match minions using Grains:
```
[root@master ~]# salt -G 'os:CentOS Stream' cmd.run 'uptime'
master:
     06:27:31 up 20 min,  1 user,  load average: 0.00, 0.01, 0.05
minion:
     06:27:31 up 20 min,  2 users,  load average: 0.01, 0.03, 0.03
[root@master ~]#
```
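Grain matching can also be combined with other matchers through the compound matcher -C. The following command sketch (targets taken from this environment) runs only on CentOS Stream hosts whose minion ID starts with "minion":

```
[root@master ~]# salt -C 'G@os:CentOS Stream and minion*' cmd.run 'uptime'
```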
Use Grains in the top file:
```
[root@master base]# vim top.sls
[root@master base]# cat top.sls
base:
  'os:CentOS Stream':
    - match: grain
    - web.apache.apache
[root@master base]#
[root@master base]# salt '*' state.highstate
minion:
----------
          ID: apache-install
    Function: pkg.installed
        Name: httpd
      Result: True
     Comment: All specified packages are already installed
     Started: 06:32:33.061753
    Duration: 471.185 ms
     Changes:
----------
          ID: apache-service
    Function: service.running
        Name: httpd
      Result: True
     Comment: The service httpd is already running
     Started: 06:32:33.535444
    Duration: 47.035 ms
     Changes:

Summary for minion
------------
Succeeded: 2
Failed:    0
------------
Total states run:     2
Total run time: 518.220 ms
master:
    Minion did not return. [No response]
    The minions may not have all finished running and any remaining minions will return upon completion. To look up the return data for this job later, run the following command:

    salt-run jobs.lookup_jid 20211102103231049142
ERROR: Minions returned with non-zero exit code
[root@master base]#
```
There are two ways to customize Grains:
- In the minion configuration file: search for grains in /etc/salt/minion (see the sketch after this list)
- Create a grains file under /etc/salt on the minion and define them there (recommended method)
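The first method looks roughly like this: a minimal sketch assuming a stock /etc/salt/minion, with illustrative grain names and values:

```
# /etc/salt/minion (on the minion) - hypothetical sketch
grains:
  roles:
    - webserver
  env: test
```

After editing the minion configuration file, restart the salt-minion service so the new grains are loaded.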
Before demonstrating the second (recommended) method, first remove the grain-based match from the top file used earlier:
```
[root@master base]# cat top.sls
base:
  'os:CentOS Stream':        // delete this line
    - match: grain
    - web.apache.apache
```
```
[root@master ~]# cd /etc/salt/
[root@master salt]# ls
cloud           cloud.maps.d       master    minion.d   proxy
cloud.conf.d    cloud.profiles.d   master.d  minion_id  proxy.d
cloud.deploy.d  cloud.providers.d  minion    pki        roster
[root@master salt]# touch grains
[root@master salt]# vim grains
[root@master salt]# cat grains
test-grains: linux-node1
[root@master salt]#
[root@master ~]# salt '*' grains.get test-grains
minion:
master:
    linux-minion
[root@master ~]#
```
Refresh custom Grains without restarting the minion:
```
[root@master ~]# vim /etc/salt/grains
[root@master ~]# cat /etc/salt/grains
test-grains: linux-minion
shenlongfei: shuaige
[root@master ~]#
[root@master ~]# salt '*' saltutil.sync_grains
master:
minion:
[root@master ~]#
[root@master ~]# salt '*' grains.get shenlongfei
master:
    shuaige
minion:
[root@master ~]#
```
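Once synced, a custom grain can be used for targeting just like a built-in grain (the grain name and value below are taken from the transcript above):

```
[root@master ~]# salt -G 'shenlongfei:shuaige' cmd.run 'uptime'
```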
3. SaltStack component: Pillar
Pillar is also one of the most important SaltStack components. It is a data management center, often used together with states in large-scale configuration management. Pillar's main role in SaltStack is to store and define the data needed during configuration management, such as software version numbers, user names, passwords and other information. Like Grains, it is defined and stored in YAML format.
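For example, a Pillar SLS file typically holds exactly this kind of data (a hypothetical sketch; the file name and values are illustrative only):

```
# /srv/pillar/base/mysql.sls (hypothetical example)
mysql:
  version: 8.0.26
  admin_user: dbadmin
  admin_password: ChangeMe123
```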
The master configuration file contains a Pillar section that defines the Pillar-related parameters:
```
[root@master ~]# vim /etc/salt/master
#pillar_roots:        // uncomment
#  base:
#    - /srv/pillar
```
In the default base environment, Pillar's working directory is /srv/pillar. If you want to define multiple Pillar working directories for different environments, you only need to modify this part of the configuration file, for example as sketched below.
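A sketch of such a multi-environment setup, assuming the /srv/pillar/base directory used later in this article plus a hypothetical prod environment:

```
# /etc/salt/master (hypothetical sketch)
pillar_roots:
  base:
    - /srv/pillar/base
  prod:
    - /srv/pillar/prod
```

Restart salt-master after changing pillar_roots.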
Pillar features:
- Data can be defined for specific minions
- Only the targeted minion can see the data defined for it (see the sketch after this list)
- Configured in the master configuration file
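A sketch of what per-minion visibility looks like in a Pillar top file (the SLS names web_secrets and common are hypothetical):

```
# /srv/pillar/base/top.sls (hypothetical sketch)
base:
  'minion':                  # only the minion with ID 'minion' sees this data
    - web_secrets
  'os:CentOS Stream':        # grain-based targeting works here as well
    - match: grain
    - common
```

Out of the box, however, no Pillar data is defined at all: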
```
[root@master srv]# salt '*' pillar.items
master:
    ----------
minion:
    ----------
[root@master srv]#
```
By default, Pillar contains no information. If you want to see the master's configuration data in Pillar, uncomment pillar_opts in the master configuration file and set it to True.
```
[root@master ~]# vim /etc/salt/master
pillar_opts: True        // uncomment and change the value to True
```
Restart the master and view the Pillar information:
[root@master salt]# systemctl restart salt-master [root@master pillar]# salt 'minion' pillar.items minion: ---------- master: ---------- __cli: salt-master __role: master allow_minion_key_revoke: True archive_jobs: False auth_events: True auth_mode: 1 auto_accept: False azurefs_update_interval: 60 cache: localfs cache_sreqs: True cachedir: /var/cache/salt/master clean_dynamic_modules: True cli_summary: False client_acl_verify: True cluster_mode: False con_cache: False conf_file: /etc/salt/master config_dir: /etc/salt cython_enable: False daemon: False decrypt_pillar: decrypt_pillar_default: gpg decrypt_pillar_delimiter: : decrypt_pillar_renderers: - gpg default_include: master.d/*.conf default_top: base detect_remote_minions: False discovery: False django_auth_path: django_auth_settings: drop_messages_signature_fail: False dummy_pub: False eauth_acl_module: eauth_tokens: localfs enable_gpu_grains: False enable_ssh_minions: False enforce_mine_cache: False engines: env_order: event_match_type: startswith event_publisher_niceness: None event_return: event_return_blacklist: event_return_niceness: None event_return_queue: 0 event_return_whitelist: ext_job_cache: ext_pillar: extension_modules: /var/cache/salt/master/extmods external_auth: ---------- extmod_blacklist: ---------- extmod_whitelist: ---------- failhard: False file_buffer_size: 1048576 file_client: local file_ignore_glob: file_ignore_regex: file_recv: False file_recv_max_size: 100 file_roots: ---------- base: - /srv/salt - /srv/spm/salt fileserver_backend: - roots fileserver_followsymlinks: True fileserver_ignoresymlinks: False fileserver_limit_traversal: False fileserver_update_niceness: None fileserver_verify_config: True fips_mode: False gather_job_timeout: 10 git_pillar_base: master git_pillar_branch: master git_pillar_env: git_pillar_fallback: git_pillar_global_lock: True git_pillar_includes: True git_pillar_insecure_auth: False git_pillar_passphrase: git_pillar_password: git_pillar_privkey: git_pillar_pubkey: git_pillar_refspecs: - +refs/heads/*:refs/remotes/origin/* - +refs/tags/*:refs/tags/* git_pillar_root: git_pillar_ssl_verify: True git_pillar_update_interval: 60 git_pillar_user: git_pillar_verify_config: True gitfs_base: master gitfs_disable_saltenv_mapping: False gitfs_fallback: gitfs_global_lock: True gitfs_insecure_auth: False gitfs_mountpoint: gitfs_passphrase: gitfs_password: gitfs_privkey: gitfs_pubkey: gitfs_ref_types: - branch - tag - sha gitfs_refspecs: - +refs/heads/*:refs/remotes/origin/* - +refs/tags/*:refs/tags/* gitfs_remotes: gitfs_root: gitfs_saltenv: gitfs_saltenv_blacklist: gitfs_saltenv_whitelist: gitfs_ssl_verify: True gitfs_update_interval: 60 gitfs_user: gpg_cache: False gpg_cache_backend: disk gpg_cache_ttl: 86400 hash_type: sha256 hgfs_base: default hgfs_branch_method: branches hgfs_mountpoint: hgfs_remotes: hgfs_root: hgfs_saltenv_blacklist: hgfs_saltenv_whitelist: hgfs_update_interval: 60 http_connect_timeout: 20.0 http_max_body: 107374182400 http_request_timeout: 3600.0 id: minion interface: 0.0.0.0 ipc_mode: ipc ipc_write_buffer: 0 ipv6: None jinja_env: ---------- jinja_lstrip_blocks: False jinja_sls_env: ---------- jinja_trim_blocks: False job_cache: True job_cache_store_endtime: False keep_acl_in_token: False keep_jobs: 24 key_cache: key_logfile: /var/log/salt/key key_pass: None keysize: 2048 local: True lock_saltenv: False log_datefmt: %H:%M:%S log_datefmt_console: %H:%M:%S log_datefmt_logfile: %Y-%m-%d %H:%M:%S log_file: /var/log/salt/master log_fmt_console: [%(levelname)-8s] %(message)s 
log_fmt_jid: [JID: %(jid)s] log_fmt_logfile: %(asctime)s,%(msecs)03d [%(name)-17s:%(lineno)-4d][%(levelname)-8s][%(process)d] %(message)s log_granular_levels: ---------- log_level: warning log_level_logfile: warning log_rotate_backup_count: 0 log_rotate_max_bytes: 0 loop_interval: 60 maintenance_niceness: None master_job_cache: local_cache master_pubkey_signature: master_pubkey_signature master_roots: ---------- base: - /srv/salt-master master_sign_key_name: master_sign master_sign_pubkey: False master_stats: False master_stats_event_iter: 60 master_tops: ---------- master_tops_first: False master_use_pubkey_signature: False max_event_size: 1048576 max_minions: 0 max_open_files: 100000 memcache_debug: False memcache_expire_seconds: 0 memcache_full_cleanup: False memcache_max_items: 1024 min_extra_mods: minion_data_cache: True minion_data_cache_events: True minion_id: minion minionfs_blacklist: minionfs_env: base minionfs_mountpoint: minionfs_update_interval: 60 minionfs_whitelist: module_dirs: mworker_niceness: None mworker_queue_niceness: None netapi_allow_raw_shell: False nodegroups: ---------- on_demand_ext_pillar: - libvirt - virtkey open_mode: False optimization_order: - 0 - 1 - 2 order_masters: False outputter_dirs: peer: ---------- permissive_acl: False permissive_pki_access: False pidfile: /var/run/salt-master.pid pillar_cache: False pillar_cache_backend: disk pillar_cache_ttl: 3600 pillar_includes_override_sls: False pillar_merge_lists: False pillar_opts: True pillar_roots: ---------- base: - /srv/pillar/base pillar_safe_render_error: True pillar_source_merging_strategy: smart pillar_version: 2 pillarenv: None ping_on_rotate: False pki_dir: /etc/salt/pki/master preserve_minion_cache: False pub_hwm: 1000 pub_server_niceness: None publish_port: 4505 publish_session: 86400 publisher_acl: ---------- publisher_acl_blacklist: ---------- queue_dirs: range_server: range:80 reactor: reactor_niceness: None reactor_refresh_interval: 60 reactor_worker_hwm: 10000 reactor_worker_threads: 10 regen_thin: False remote_minions_port: 22 renderer: jinja|yaml renderer_blacklist: renderer_whitelist: req_server_niceness: None require_minion_sign_messages: False ret_port: 4506 root_dir: / roots_update_interval: 60 rotate_aes_key: True runner_dirs: runner_returns: True s3fs_update_interval: 60 salt_cp_chunk_size: 98304 saltenv: None saltversion: 3004 schedule: ---------- search: serial: msgpack show_jid: False show_timeout: True sign_pub_messages: True signing_key_pass: None sock_dir: /var/run/salt/master sock_pool_size: 1 sqlite_queue_dir: /var/cache/salt/master/queues ssh_config_file: /root/.ssh/config ssh_identities_only: False ssh_list_nodegroups: ---------- ssh_log_file: /var/log/salt/ssh ssh_passwd: ssh_port: 22 ssh_priv_passwd: ssh_scan_ports: 22 ssh_scan_timeout: 0.01 ssh_sudo: False ssh_sudo_user: ssh_timeout: 60 ssh_use_home_key: False ssh_user: root ssl: None state_aggregate: False state_auto_order: True state_events: False state_output: full state_output_diff: False state_output_profile: True state_top: salt://top.sls state_top_saltenv: None state_verbose: True sudo_acl: False svnfs_branches: branches svnfs_mountpoint: svnfs_remotes: svnfs_root: svnfs_saltenv_blacklist: svnfs_saltenv_whitelist: svnfs_tags: tags svnfs_trunk: trunk svnfs_update_interval: 60 syndic_dir: /var/cache/salt/master/syndics syndic_event_forward_timeout: 0.5 syndic_failover: random syndic_forward_all_events: False syndic_jid_forward_cache_hwm: 100 syndic_log_file: /var/log/salt/syndic syndic_master: masterofmasters 
syndic_pidfile: /var/run/salt-syndic.pid syndic_wait: 5 tcp_keepalive: True tcp_keepalive_cnt: -1 tcp_keepalive_idle: 300 tcp_keepalive_intvl: -1 tcp_master_pub_port: 4512 tcp_master_publish_pull: 4514 tcp_master_pull_port: 4513 tcp_master_workers: 4515 test: False thin_extra_mods: thorium_interval: 0.5 thorium_roots: ---------- base: - /srv/thorium thorium_top: top.sls thoriumenv: None timeout: 5 token_dir: /var/cache/salt/master/tokens token_expire: 43200 token_expire_user_override: False top_file_merging_strategy: merge transport: zeromq unique_jid: False user: root utils_dirs: - /var/cache/salt/master/extmods/utils verify_env: True winrepo_branch: master winrepo_cachefile: winrepo.p winrepo_dir: /srv/salt/win/repo winrepo_dir_ng: /srv/salt/win/repo-ng winrepo_fallback: winrepo_insecure_auth: False winrepo_passphrase: winrepo_password: winrepo_privkey: winrepo_pubkey: winrepo_refspecs: - +refs/heads/*:refs/remotes/origin/* - +refs/tags/*:refs/tags/* winrepo_remotes: - https://github.com/saltstack/salt-winrepo.git winrepo_remotes_ng: - https://github.com/saltstack/salt-winrepo-ng.git winrepo_ssl_verify: True winrepo_user: worker_threads: 5 zmq_backlog: 1000 zmq_filtering: False zmq_monitor: False [root@master pillar]#
Pillar custom data:
Find pillar_roots in the master configuration file to see where Pillar data is stored:
```
[root@master salt]# vim /etc/salt/master
# highstate format, and is generally just key/value pairs.
pillar_roots:
  base:
    - /srv/pillar/base
#
[root@master ~]# cd /srv/
[root@master srv]# tree pillar/
pillar/
└── base

1 directory, 0 files
[root@master srv]#
[root@master pillar]# vim /srv/pillar/base/apache.sls
[root@master pillar]# cat /srv/pillar/base/apache.sls
{% if grains['os'] == 'CentOS Stream' %}
apache: httpd
{% elif grains['os'] == 'Debian' %}
apache: apache2
{% endif %}
[root@master pillar]#
```
Define the top file (entry file):
```
[root@master pillar]# cat /srv/pillar/base/top.sls
base:                        // specify the environment
  'minion':                  // specify the target
    - web.apache.apache      // reference apache.sls
[root@master pillar]#
[root@master pillar]# pwd
/srv/pillar
[root@master pillar]#
[root@master base]# salt '*' pillar.items
master:
    ----------
minion:
    ----------
    apache:
        httpd
[root@master base]#
```
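A single Pillar key can also be read directly with pillar.get (command sketch only; a default value can be supplied as a second argument in case the key is missing):

```
[root@master base]# salt 'minion' pillar.get apache
```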
Modify the apache state file under the salt directory to reference the Pillar data:
```
[root@master base]# vim /srv/salt/base/web/apache/apache.sls
[root@master base]# cat /srv/salt/base/web/apache/apache.sls
apache-install:
  pkg.installed:
    - name: {{ pillar['apache'] }}

apache-service:
  service.running:
    - name: {{ pillar['apache'] }}
    - enable: true
[root@master base]#
```
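A slightly more defensive variant (a sketch, not the file used above) reads the value through salt['pillar.get'] with a fallback, so the state still renders if the Pillar key is absent:

```
apache-install:
  pkg.installed:
    - name: {{ salt['pillar.get']('apache', 'httpd') }}

apache-service:
  service.running:
    - name: {{ salt['pillar.get']('apache', 'httpd') }}
    - enable: true
```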
Run the highstate:
```
[root@master base]# salt '*' state.highstate
minion:
----------
          ID: states
    Function: no.None
      Result: False
     Comment: No Top file or master_tops data matches found. Please see master log for details.
     Changes:

Summary for minion
------------
Succeeded: 0
Failed:    1
------------
Total states run:     1
Total run time:   0.000 ms
master:
----------
          ID: states
    Function: no.None
      Result: False
     Comment: No Top file or master_tops data matches found. Please see master log for details.
     Changes:

Summary for master
------------
Succeeded: 0
Failed:    1
------------
Total states run:     1
Total run time:   0.000 ms
ERROR: Minions returned with non-zero exit code
[root@master base]#
```
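The failure above comes from the state top file: the grain-based target line was deleted earlier, so nothing in /srv/salt/base/top.sls matches any minion. A minimal sketch of a working top file, assuming the state tree from the previous sections:

```
# /srv/salt/base/top.sls (minimal sketch)
base:
  'minion':
    - web.apache.apache
```

Since the Pillar top file above only targets 'minion', it is also simplest to run the highstate against that minion only, e.g. salt 'minion' state.highstate.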
Differences between Grains and Pillar
| | Storage location | Type | Acquisition method | Application scenarios |
|---|---|---|---|---|
| Grains | minion | static | Collected when the minion starts; can be refreshed without restarting the minion service | 1. Information query 2. Target matching on the command line 3. Target matching in the top file 4. Target matching in templates |
| Pillar | master | dynamic | Specified on the master and takes effect in real time | 1. Target matching 2. Sensitive data configuration |