Linux Command Line and Shell Scripting Bible, 3rd Edition, Richard Blum

1. First look at the Linux shell

  • Linux kernel
    • System memory management
    • Software program management
    • Hardware device management: character, block, and network device files
    • File system management
  • GNU tools: coreutils package
  • Graphical desktop environment: X Window, KDE, GNOME, Unity
  • Application software

2. Enter the shell

  • Set terminal color
    • setterm -foreground blue
    • setterm -background yellow
    • Clear the settings and clear the screen: setterm -reset
  • Console terminal: exit the graphical desktop mode and enter the text mode
  • Graphical Terminal: Terminal in graphical interface

3. Basic bash shell commands

  • man
  • Link:
    • Symbolic link: ln -s source_file link_file (the link file is a new file with its own inode)
    • Hard link: ln source_file link_file (the link file and source file share the same inode; see the example below)
  • mv moves or renames a file; the file's inode remains unchanged (check with ls -i)
  • tree
  • file: determine a file's type
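A quick sketch comparing the two link types (data_file and the link names are made up); ls -li shows the inode numbers:
touch data_file                              # hypothetical file name
ln -s data_file sl_data_file                 # symbolic link: separate file with its own inode
ln data_file hl_data_file                    # hard link: shares the inode with data_file
ls -li data_file sl_data_file hl_data_file   # compare the inode numbers in the first column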

4. More bash shell commands

  • ps
  • top
  • kill killall
  • mount umount
  • df
  • du
  • sort -t ':' -k 3 -n /etc/passwd
  • grep egrep fgrep
  • Compressed data: gzip gzcat gunzip
  • Archive data: tar
    • Create archive file: tar -cvf test.tar test1/ test2/
    • View the archive file: tar -tf test.tar
    • Extract archive file: tar -xvf test.tar

5. Understand shell

  • Process list: wrapping a command list in parentheses turns it into a process list, which spawns a subshell to execute the commands
# Command list
[root@localhost ~]# pwd;ls ;cd /etc;pwd;cd;pwd;ls;ps --forest;echo $BASH_SUBSHELL
/root
anaconda-ks.cfg
/etc
/root
anaconda-ks.cfg
    PID TTY          TIME CMD
   1833 pts/0    00:00:00 bash
   1990 pts/0    00:00:00  \_ ps
0

# Process list
[root@localhost ~]# (pwd;ls ;cd /etc;pwd;cd;pwd;ls;ps --forest;echo $BASH_SUBSHELL)
/root
anaconda-ks.cfg
/etc
/root
anaconda-ks.cfg
    PID TTY          TIME CMD
   1833 pts/0    00:00:00 bash
   1991 pts/0    00:00:00  \_ bash
   1994 pts/0    00:00:00      \_ ps
1

# BASH_SUBSHELL shows whether a subshell was created. Nesting parentheses creates a subshell of a subshell
[root@localhost ~]# (pwd;echo $BASH_SUBSHELL)
/root
1
[root@localhost ~]# (pwd;(echo $BASH_SUBSHELL))
/root
2
  • Put process in background
    • '&'
    • View background process: jobs -l
    • Coprocess:
      • coproc sleep 10
      • coproc My_Job { sleep 10; } (the spaces and trailing semicolon are required)
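A brief usage sketch (My_Job is an arbitrary name); jobs confirms the coprocess is running in the background:
coproc sleep 10
jobs -l                        # the unnamed coprocess shows up as a background job called COPROC
coproc My_Job { sleep 10; }    # named coprocess; the spaces and trailing ; are required
echo ${My_Job[@]}              # the array holds the coprocess's output and input file descriptors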
  • The difference between built-in commands and external commands is that built-ins do not need a child process to execute; they are compiled into the shell and exist as part of it.
[root@localhost ~]# type ps
ps is hashed (/usr/bin/ps)
[root@localhost ~]# type cd
cd is a shell builtin
[root@localhost ~]# type -a pwd
pwd is a shell builtin
pwd is /usr/bin/pwd
  • history
    • -a: append the current session's history to the history file (.bash_history) immediately
    • -n: read history entries written to the history file by other sessions (useful when multiple terminals are open)
    • -c: clear the in-memory history
  • !! executes the previous command; !20 executes command number 20 from the history
  • alias
    • -p: view the currently available aliases
    • Create an alias: alias lxq='ls -la'
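A short usage sketch of the history and alias commands described above (lxq is the example alias from these notes):
history              # list the command history
history -a           # append this session's history to .bash_history
alias -p             # list the currently defined aliases
alias lxq='ls -la'   # define the alias for the current session only
lxq                  # runs ls -la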

6. What are environment variables

  • Global environment variables are visible to the shell session and to any subshells it spawns. Local variables are visible only to the shell that created them.
  • printenv and env display global variables, and set displays global variables, local variables and user-defined variables
  • Set user-defined variables. There can be no spaces between the variable name, the equals sign, and the value. my_variable="Hello World"
  • Set global variable: first create a local environment variable, and then export it to the global environment. export my_variable
    • Modifying the global variable in the child shell does not affect the value of the variable in the parent shell.
  • Delete environment variable: unset my_variable
    • Deleting global variables in the child shell does not affect the parent shell global variables
  • Common environment variables
    • UID,EUID
    • PPID: parent process ID
    • RANDOM: RANDOM number from 0 to 32767
    • BASH_ARGC,BASH_ARGV
    • FUNCNAME: the name of the currently executed function
    • HOSTNAME
  • The PATH environment variable defines the directories searched for commands and programs. PATH=$PATH:/home/work/bin
  • Three ways to start bash shell
    • As the default login shell at login time: commands are read from five different startup files
      • /etc/profile, which also executes the scripts in the /etc/profile.d directory
      • For user specific startup files, the shell will look for the following three files in the following order, find the first, and the rest will be ignored
        • $HOME/.bash_profile
        • $HOME/.bash_login
        • $HOME/.profile
      • $HOME/.bashrc
    • As an interactive non-login shell (for example, typing bash at the command line): it does not read /etc/profile, it only checks the .bashrc file in the user's HOME directory
    • As a non-interactive shell for running scripts: when the shell starts non-interactively it checks the BASH_ENV environment variable for a startup file to execute. If BASH_ENV is not set and the script runs in a child shell, the child shell inherits the parent shell's exported (global) variables; if no child shell is started (the script runs in the current shell), its local and global variables are already available
  • Persistence of environment variables:
    • For global variables, you can add them to /etc/profile, but that file may be replaced when the distribution is upgraded; it is better to create a file ending in .sh in the /etc/profile.d directory and set all new or modified global variables there
    • The place to store permanent bash shell variables for an individual user is the $HOME/.bashrc file
  • Array variable: to give an environment variable multiple values, put the values in parentheses separated by spaces. Array variables are not very portable and may cause problems in other shells, so use them with care
mytest=(one  two  three four five)
echo $mytest # one
echo ${mytest[1]} # two
echo ${mytest[*]} # one two three four five
mytest[2]=lxq
unset mytest[2] # Remove the element at index 2; mytest[2] is now null

7. Security of Linux

  • useradd work [-G groupname]
  • userdel -r work # -r deletes work's HOME directory and mail spool
  • usermod: modify user account
    • -L locks the account so it cannot log in
    • -U unlocks the account
    • -G sets the user's supplementary groups: usermod -G groupname username
  • passwd changes a password; chpasswd changes passwords in batch (it reads username:password pairs from STDIN)
  • chsh: change a user's default login shell, e.g. chsh -s /bin/csh test
  • chfn: modify the account's comment field; grep work /etc/passwd shows the result
  • chage is used to help manage the validity of user accounts
  • groupadd [groupname]
  • groupmod -n working work # modify group name
  • umask: default file permissions. The full permission value is 666 for files and 777 for directories
touch test.txt
mkdir test.d
drwxr-xr-x. 2 root root        6 Jan 23 19:29 test.d
-rw-r--r--. 1 root root        0 Jan 23 19:29 test.txt
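A hedged sketch of how umask subtracts from the full permission values (026 is an arbitrary mask chosen for illustration):
umask            # show the current mask, typically 0022
umask 026        # files: 666 - 026 = 640 (rw-r-----); directories: 777 - 026 = 751 (rwxr-x--x)
touch newfile
ls -l newfile    # -rw-r----- expected with umask 026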
  • chmod: change permissions; -R applies recursively
  • chown: change the owner; -R applies recursively
    • Change the user: chown work test.txt
    • Change the user and group: chown work.work test.txt
    • Change the user and the user's default group: chown work. test.txt
    • Change only the group: chown .work test.txt
  • chgrp: change a file's group
  • Sharing files: the SGID bit is important for file sharing. With the SGID bit enabled on a shared directory, new files created there are forced to belong to the directory's group, which becomes the group of files created by every user rather than each user's default group.
    • SUID (4): when the file is executed, the program runs with the permissions of the file's owner. chmod u+s file
    • SGID (2): for a file, the program runs with the permissions of the file's group; for a directory, new files created in it take the directory's group as their default group. chmod g+s file
    • Sticky bit (1): historically, the file stays resident (sticks) in memory after the process ends. chmod o+t file
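A minimal sketch of the shared-directory setup described above (the group name shared_grp and the /sales directory are hypothetical):
groupadd shared_grp               # hypothetical shared group
mkdir /sales                      # hypothetical shared directory
chgrp shared_grp /sales
chmod g+s /sales                  # set SGID: new files inherit the directory's group
touch /sales/newfile
ls -l /sales/newfile              # group is shared_grp rather than the creator's default group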

8. Managing file systems

  • Journaling file systems: ext4, XFS
  • Copy-on-write file systems: ZFS, Btrfs. When data is modified, a clone or writable snapshot is used
  • Create partition fdisk
    • fdisk -l lists partition information
    • fdisk /dev/sdb operation partition
  • mount -t ext4 /dev/sdb1 /mnt/my_partition
  • File system check: fsck

Logical volume management

  • Physical volume (PV): each physical volume is mapped to a specific physical partition on the hard disk
  • Volume group (VG): multiple physical volumes can be grouped together to form a volume group. The logical volume management system regards the volume group as a physical hard disk, but in fact, the volume group may consist of multiple physical partitions partitioned on multiple physical hard disks. Volume groups provide a platform for creating logical partitions that contain file systems.
  • logical volume (LV): provides Linux with a partition environment for creating file systems
  • snapshot:
  • Mirror: a complete copy of a logical volume that is updated in real time. When a mirror logical volume is created, LVM synchronizes the original logical volume to the mirror copy
  • Striping: create logical volumes across multiple hard disks, and distribute file data blocks to multiple hard disks to increase performance
  • Use LVM
# Attach two 8 GB hard disks, /dev/sdc and /dev/sdb. Use the following command to check that they are visible
fdisk -l

# Partition /dev/sdc and /dev/sdb; repeat the steps below to create three partitions on each
fdisk /dev/sdc
Command (m for help): n
Partition type
   p   primary (0 primary, 0 extended, 4 free)
   e   extended (container for logical partitions)
Select (default p):

Using default response p.
Partition number (1-4, default 1):
First sector (2048-16777215, default 2048):
Last sector, +sectors or +size{K,M,G,T,P} (2048-16777215, default 16777215): +2G

Created a new partition 1 of type 'Linux' and of size 2 GiB.

Command (m for help): t
Selected partition 1
Hex code (type L to list all codes): 8e
Changed type of partition 'Linux' to 'Linux LVM'.

Command (m for help): p
Disk /dev/sdc: 8 GiB, 8589934592 bytes, 16777216 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0xe54e9c4c

Device     Boot Start     End Sectors Size Id Type
/dev/sdc1        2048 4196351 4194304   2G 8e Linux LVM

Command (m for help): w
The partition table has been altered.
Calling ioctl() to re-read partition table.
Syncing disks.

# Create physical volumes with partitions
pvcreate /dev/sdc1 /dev/sdc2 /dev/sdc3 /dev/sdb1 /dev/sdb2 /dev/sdb3
# Viewing physical volumes
pvdisplay
# Create a volume group and select four physical volumes
vgcreate vg_lxq /dev/sdc1  /dev/sdc3 /dev/sdb1 /dev/sdb3
# Viewing volume groups
vgdisplay vg_lxq
# Create a logical volume: -L specifies the size, -n specifies the logical volume name
lvcreate -L 3G -n lv_sda1 vg_lxq
# View logical volume details
lvdisplay vg_lxq
# Create file system and mount
mkfs.ext4 /dev/vg_lxq/lv_sda1
mount  /dev/vg_lxq/lv_sda1 /mnt/my_partition
mount # /dev/mapper/vg_lxq-lv_sda1 on /mnt/my_partition type ext4 (rw,relatime,seclabel)
# ext4 expansion
lvresize -L 4G /dev/vg_lxq/lv_sda1
e2fsck -f /dev/vg_lxq/lv_sda1
resize2fs /dev/vg_lxq/lv_sda1 # ext4 information will be updated
# ext4 reduction
umount /dev/vg_lxq/lv_sda1
e2fsck -f /dev/vg_lxq/lv_sda1
resize2fs /dev/vg_lxq/lv_sda1 3G
lvresize -L 3G /dev/vg_lxq/lv_sda1 

# xfs expansion
lvcreate -L 1G -n lv_sda2 vg_lxq
mkfs.xfs /dev/vg_lxq/lv_sda2
mkdir /mnt/my_xfs
mount /dev/vg_lxq/lv_sda2 /mnt/my_xfs
lvresize -L 2G /dev/vg_lxq/lv_sda2
xfs_growfs /dev/vg_lxq/lv_sda2
umount /dev/vg_lxq/lv_sda2
dd if=/dev/zero bs=1M count=1000 of=/mnt/my_xfs/test1000M.zero

# Remove a physical volume from the volume group
vgreduce vg_lxq /dev/sdc1
# Extend the volume group with another physical volume
vgextend vg_lxq /dev/sdc2

# Create a snapshot: -s create snapshot, -L snapshot size, -n snapshot name
lvcreate -s -L 500M -n lvsda1snap2 /dev/vg_lxq/lv_sda1
# View snapshot
lvdisplay /dev/vg_lxq/lvsda1snap2

# pvs, vgs, lvs: short summaries of physical volumes, volume groups and logical volumes

9. Install software program

  • CentOS: yum
# List packages
yum list
yum list nginx
yum list installed 
yum list installed nginx

# Find which package /etc/yum.conf belongs to
yum provides /etc/yum.conf

yum install nginx
# Install rpm installation files locally using yum or up2date
yum localinstall package_name.rpm

# List available updates for installed packages
yum list updates

yum update package_name
# Apply all available updates
yum update
# Delete only the software package and retain the configuration file and data file
yum remove package_name
# Delete the software and all its files
yum erase package_name

yum clean all

# Show package dependencies
yum deplist package_name

11. Build basic script

  • echo -n: suppress the trailing newline
  • Command substitution
    • `date` (backticks)
    • $(date)
  • Pipeline (|): do not assume that the two commands connected by a pipe run one after the other. The Linux system actually runs them at the same time and connects them internally; as soon as the first command produces output, it is sent to the second command. No intermediate files or buffers are used to transfer the data.
  • Perform mathematical operations. bash shell only supports integer operations, and zsh and bc calculators can be considered for floating-point operations
var1=$(expr 2 \* 5)
var2=$[2*5]
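The $(( )) arithmetic expansion also works in bash and is generally preferred over the older $[ ] form; a small sketch (var3 and var4 are new example names):
var3=$(( 2 * 5 ))       # arithmetic expansion, still integer-only
var4=$(( $var3 ** 2 ))  # exponentiation is supported
echo $var4              # 100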
  • Calculating floating point numbers using bc
var1=$(echo "3+4"|bc)
# EOF inline input redirection (assumes var1..var4 were set earlier)
var5=$(bc << EOF
scale=4
a1 = ($var1 * $var2)
a2 = ($var3 * $var4)
a1+a2
EOF
)
  • Exit status: $? holds the exit code of the last command, and exit can specify a script's own exit code (sketch below)
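A short sketch of checking and setting exit codes inside a script (asdfg is a deliberately non-existent command):
date
echo $?        # 0 if date succeeded
asdfg          # non-existent command
echo $?        # 127 (command not found)
exit 5         # end the script with exit code 5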

12. Use structured commands

  • If then: if the exit status code of the following command is 0 (the command is executed successfully), the command in the then part will be executed.
if pwd
then
  echo "It worked"
else
  echo "It don't worked"
fi

if pwd;then
  xxx
elif ls -d /home/work ; then
  xxx
else 
  xxx
fi 
  • test: provides a way to evaluate different conditions in an if-then statement. If the condition listed in the test command is true, test exits with status code 0; otherwise it returns a non-zero exit status. test can handle three classes of conditions
    • Numerical comparison, - eq -ge -gt -le -lt -ne
    • String comparison: =, !=, <, >
      • < and > must be escaped when used with test / [ ]
      • -n str1 check whether the length of str1 is non-zero
      • -z str1 checks whether the length of str1 is 0
    • File comparison
      • -d file check whether the file exists and is a directory
      • -f file check whether the file exists and is a file
      • -e file check whether the file exists
      • -[r|w|x] file check whether the file is readable, writable and executable
      • -s file check whether the file exists and is not empty
      • -O file check whether the file exists and belongs to the current user
      • -G file check whether the file exists and the default group is the same as the current user
      • file1 -nt file2 check whether file1 is newer than file2
      • file1 -ot file2 check if file1 is older than file2
  • The test command can be replaced by single square brackets. A space is required after the opening bracket and before the closing bracket, otherwise an error is reported. if [ condition ]; then ... fi
  • Compound condition tests: [ cond1 ] && [ cond2 ]; [ cond1 ] || [ cond2 ] (sketch below)
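A brief sketch combining the comparisons above (the value and the paths are arbitrary):
value=10
if [ $value -gt 5 ] && [ -d $HOME ]
then
  echo "$value is greater than 5 and $HOME is a directory"
fi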
  • Using the double bracket command allows advanced mathematical expressions to be used during the comparison
if (( 3 ** 4 == 81 ));then
        echo "3**4==81"
fi
  • Use double brackets for string pattern matching
if [[ $USER == r* ]];then
        echo "user r*"
fi
  • case statement: | can match multiple values, ;; ends each branch, and *) catches any value that does not match a known pattern.
case $USER in
rich | junmo)
  echo "xxx"
  echo "hello";;
testing)
  echo "testing";;
*)
  echo "default";;
esac

13. More structured commands

  • Internal field separator IFS: by default the bash shell treats the space, tab, and newline characters as field separators.
IFS=:
IFS=$'\n':;"

IFS_OLD=$IFS    # save the old IFS value
IFS=$'\n'
#Use the new IFS value in your code
IFS=$IFS_OLD    # restore the original value
  • for
# Read from list
for test in I don\'t know if "this'll" work
do
  echo $test
done

# Read from variable
list="AB C D E"
list=$list" F"
for state in $list
do
  echo $state
done
  
# Read from command
for f in $(cat $file)
do
  echo $f
done

# Read directory, you can read multiple directories
for file in /etc/* /home/work
do
  if [ -d "$file" ]
  then
    echo "$file is a directory"
  elif [ -f "$file" ]
  then
    echo "$file is a file"
  fi
done
  • C language style for loop
for (( i = 0;i<10;i++))
do
  echo "index $i"
done
  • while: when using multiple test commands, each test command is a line, and only the exit status code of the last test command will be used to determine when to end the loop
var1=10
while echo $var1
      [ $var1 -ge 0 ]
do 
  echo "This is inside the loop"
  var1=$[ $var1 - 1 ]
done
  • until command: the opposite of while. The loop body keeps running while the test command fails, and the loop ends when the test command returns exit status zero (sketch below)
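A short until sketch (values are arbitrary); the loop stops once the test returns exit status zero:
var1=100
until [ $var1 -eq 0 ]
do
  echo $var1
  var1=$[ $var1 - 25 ]
done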
  • nested loop
  • Circular data processing
    • Using nested loops
    • Using IFS environment variables
IFS_OLD=$IFS
IFS=$'\n'
for entry in $(cat /etc/passwd)
do
  echo "Value in $entry -"
  IFS=:
  for value in $entry
  do
    echo " $value"
  done
done
IFS=$IFS_OLD
  • Control statement break continue, where break n is to break the external loop of the nth layer
  • Loop output: the output of the whole loop can be redirected to a file after done, or piped to another command, e.g. done > test123.txt; done | sort (sketches below)
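Two small sketches of redirecting and piping a loop's output (states.txt is an arbitrary file name):
for state in Alabama Alaska Arizona
do
  echo "$state"
done > states.txt      # the whole loop's output is redirected to the file

for state in Alabama Alaska Arizona
do
  echo "$state"
done | sort            # or piped to another command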

14. Process user input

  • Command line parameters: $1, $2, ...; from the tenth parameter on, braces are required: ${10}
  • Read the script name: $0; $(basename $0) strips the leading path
  • [ -n "$1" ] tests whether a parameter was supplied
  • Number of parameters: $#; last parameter: ${!#}
  • Grab all the data: the $* variable treats all the parameters as a single word, while the $@ variable treats each parameter separately (see the sketch below)
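A sketch of the difference between quoted $* and $@ (test_params.sh is a hypothetical script, called with three parameters):
# test_params.sh 1 2 3
echo "Number of parameters: $#"
echo "Last parameter: ${!#}"
for param in "$*"; do echo "\$* -> $param"; done   # one iteration: "1 2 3"
for param in "$@"; do echo "\$@ -> $param"; done   # three iterations, one per parameter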
  • shift moves variables, which can be used to skip unnecessary variables
# test.sh 1 2 3 4 5
echo "first parm: $1" # 1
shift 2
echo "first parm: $1" # 2
  • Processing options with getopt: reorganizes the command line options and parameters into a single standardized output. Run from the command line:
$ getopt ab:cd -a -btest1 -cd test1 test2
 -a -b test1 -c -d -- test1 test2
  • Processing options with getopts: a shell built-in command. It processes one option per call; $OPTARG holds the argument value of the option currently being parsed, and OPTIND holds the index of the next parameter to be processed
while getopts :ab:c opt
do
  case "$opt" in
    a) echo "Found the -a option" ;;
    b) echo "Found the -b option, with value $OPTARG" ;;
    c) echo "Found the -c option" ;;
    *) echo "Unknown option: $opt"
  esac 
done

# Processing parameters
shift $[ $OPTIND -1 ]
count=1
for param in "$@"
do
  echo "Parameter $count: $param"
  count=$[ $count + 1 ]
done

$ bash a.sh -a -btest1 -c --  testaaa sjsjjs
Found the -a option
Found the -b option, with value test1 
Found the -c option
Parameter 1: testaaa
Parameter 2: sjsjjs
  • Get user input with read. If no variable is specified, the input is placed in the environment variable REPLY
read -p "Enter your name: " name
read name
# -t 5: time out after 5 seconds
read -t 5 name
# -n: exit automatically after the specified number of characters has been read
read -n 1 name
# Read in hidden mode
read -s pass
# Read from file, one line at a time
count=1
cat test.txt | while read line
do
  echo "Line $count: $line"
  count=$[ $count+1 ]
done

15. Presenting data

  • STDIN 0; STDOUT 1; STDERR 2
  • Redirect errors: 2>
  • Redirect errors and data: &>
  • Temporary redirection: when redirecting to a file descriptor, the file descriptor number must be preceded by &
echo "this is an error" >&2
  • Permanent redirection: use exec in the script to tell the script to redirect a specific file descriptor during execution
exec 1> testout
exec 2> testerror
  • redirect input
#!/bin/bash
exec 0< a.sh
count=1
while read line
do
        echo "Line #$count: $line"
        count=$[ $count+1 ]
done
  • There can be up to 9 file descriptors in a shell script
# Create output file descriptor
exec 3> test13out

echo "test13out hhh" >&3
  • Redirect file descriptor, restore
# Redirect 3 to STDOUT
exec 3>&1
# Redirect STDOUT to file
exec 1> testout
# fd 3 still points at the original STDOUT, so redirecting 1 back to 3 restores the original output
exec 1>&3
  • Create a read/write file descriptor: exec 3<> testfile
  • Close a file descriptor: exec 3>&-
  • List open file descriptors: lsof
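A hedged one-liner for inspecting the current shell's standard file descriptors with lsof ($$ expands to the shell's PID):
lsof -a -p $$ -d 0,1,2    # -p: this shell's PID, -d: only descriptors 0, 1 and 2, -a: AND the filters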
  • Suppress command output: 2> /dev/null
  • Create temporary file mktemp
# The Xs in the template are replaced with random characters; without options the file is created in the current directory
mktemp testing.XXXXX
# -t: create the temporary file in the system temporary directory
mktemp -t testing.XXXXX
# -d: create a temporary directory
mktemp -d dir.XXXXX
  • tee file: sends STDIN data to both STDOUT and the file; use the -a option to append to the file instead of overwriting it
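A quick tee sketch (testfile is an arbitrary file name):
date | tee testfile      # output goes to STDOUT and to testfile (overwrites)
who | tee -a testfile    # -a appends instead of overwriting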

16. Control script

  • SIGINT: CTRL+C will generate this signal
  • SIGTSTP: Ctrl+Z
  • trap commands signals
    • Capture a signal: trap "echo ' Sorry... Ctrl+C is trapped.'" SIGINT
    • Capture script EXIT: trap "echo Goodbye..." EXIT
    • trap "echo 'I modified the captured.'" SIGINT
    • Remove a capture: trap -- SIGINT
#!/bin/bash
trap "echo Goodbye..." EXIT
trap "echo ' Sorry... Ctrl+C is trapped.'" SIGINT
count=1
while [ $count -le 5 ]; do
  echo "Loop1: #$count"
  sleep 1
  count=$[ $count+1 ]
done
trap "echo ' I modified the trapped.'" SIGINT
count=1
while [ $count -le 5 ]; do
  echo "Loop2: #$count"
  sleep 1
  count=$[ $count+1 ]
done
trap -- SIGINT
count=1
while [ $count -le 5 ]; do
  echo "Loop2: #$count"
  sleep 1
  count=$[ $count+1 ]
done
  • Run a script in the background with &, but note that when the terminal session exits, the background process also exits, because the terminal sends a SIGHUP signal to its child processes on exit
  • The nohup command disassociates the terminal from the process
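A minimal nohup sketch (test.sh is the example script name used elsewhere in these notes):
nohup ./test.sh &    # the script keeps running after the terminal session ends
cat nohup.out        # STDOUT and STDERR are appended to nohup.out by default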
  • Job control jobs
    • -l: list process PIDs and job numbers
    • -n: list only jobs whose status has changed since the last notification from the shell
    • -p: list PIDs only
    • -r: list only running jobs
    • -s: list only stopped jobs
  • Restart a stopped job
    • bg [n]: restart in the background
    • fg [n]: restart in the foreground
  • Adjust niceness (scheduling priority)
    • nice -n 10 ./test.sh &
    • renice -n 15 -p 22233
  • Scheduled tasks (sketches below)
    • One-time execution: at
    • Periodic execution: cron
    • anacron: handles script tasks missed because the machine was powered off
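Small hedged sketches of at and cron (times, job numbers, and the script path are arbitrary):
# at: run a script once at a given time (-f reads the commands from a file)
at -f test.sh 20:30
atq                  # list pending at jobs
atrm 3               # remove at job number 3 (number is hypothetical)

# cron: entry format is  min hour day-of-month month day-of-week command
crontab -e           # edit the current user's cron table
# 15 10 * * * /home/work/test.sh    -> run every day at 10:15
crontab -l           # list the current cron table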
