TiUP is the cluster operation and maintenance tool introduced in TiDB 4.0. TiUP cluster is a cluster management component, written in Golang, that is provided by TiUP. With the TiUP cluster component you can carry out day-to-day operation and maintenance work, including deploying, starting, stopping, and destroying a TiDB cluster, scaling it out and in, upgrading it, and managing its parameters.
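For orientation, the day-to-day operations mentioned above map to tiup cluster subcommands roughly as follows (a sketch; run tiup cluster --help on your version for the authoritative list):

```
tiup cluster deploy <cluster-name> <version> <topology.yaml>   # deploy a new cluster
tiup cluster start <cluster-name>                              # start the cluster
tiup cluster stop <cluster-name>                               # stop the cluster
tiup cluster scale-out <cluster-name> <scale-out.yaml>         # scale out (add nodes)
tiup cluster scale-in <cluster-name> --node <host:port>        # scale in (remove a node)
tiup cluster upgrade <cluster-name> <version>                  # upgrade the cluster
tiup cluster edit-config <cluster-name>                        # manage cluster parameters
tiup cluster reload <cluster-name>                             # apply configuration changes
tiup cluster destroy <cluster-name>                            # destroy the cluster
```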
Smallest TiDB cluster topology:
Instance | Count | IP | Configuration |
---|---|---|---|
TiKV | 3 | 10.186.65.41 | Avoid port and directory conflicts |
TiDB | 1 | 10.186.65.41 | Default port, global directory configuration |
PD | 1 | 10.186.65.41 | Default port, global directory configuration |
TiFlash | 1 | 10.186.65.41 | Default port, global directory configuration |
Monitor | 1 | 10.186.65.41 | Default port, global directory configuration |
1. Add a data disk with an ext4 file system
For production deployments, it is recommended to store TiKV data files on NVMe SSDs formatted with the ext4 file system. This configuration is the best practice, and its reliability, security, and stability have been verified in a large number of production scenarios.
Log in to the target machine as the root user, format the data disk of the deployment target machine as an ext4 file system, and add the nodelalloc and noatime mount options when mounting it. nodelalloc is required; without it the environment check fails during TiUP deployment. noatime is optional but recommended.
Note:
If your data disk is already formatted as ext4 and mounted, unmount it first with the umount /dev/vdb command, then continue from the step that edits the /etc/fstab file, add the mount options, and mount the disk again.
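In that case, the remount flow looks roughly like this (a sketch, assuming the /dev/vdb data disk used in this example):

```
umount /dev/vdb     # unmount the already-mounted data disk
vi /etc/fstab       # add the nodelalloc,noatime mount options (see step 1.5 below)
mount -a            # remount everything listed in /etc/fstab with the new options
```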
1.1 View the data disk
```
fdisk -l

Disk /dev/vdb: 107.4 GB, 107374182400 bytes, 209715200 sectors
```
1.2 Create the partition
parted -s -a optimal /dev/vdb mklabel gpt -- mkpart primary ext4 1 -1
1.3 Format the file system
mkfs.ext4 /dev/vdb
1.4 Use the lsblk command to view the device number and UUID of the partition:
```
[root@tidb01 ~]# lsblk -f
NAME   FSTYPE  LABEL   UUID                                 MOUNTPOINT
sr0    iso9660 CONTEXT 2021-04-30-09-59-46-00
vda
└─vda1 xfs             de86ba8a-914b-4104-9fd8-f9de800452ea /
vdb    ext4            957bb4c8-68f7-40df-ab37-1de7a4b5ee5e
```
1.5 Edit the /etc/fstab file and add the nodelalloc mount option
```
vi /etc/fstab

UUID=957bb4c8-68f7-40df-ab37-1de7a4b5ee5e /data ext4 defaults,nodelalloc,noatime 0 2
```
1.6 Create the data directory and mount the disk
mkdir -p /data && mount -a
1.7 Check whether the file system is mounted successfully
Execute the following command. If the output shows that the file system is ext4 and the mount options include nodelalloc, the configuration has taken effect.
```
[root@tidb01 ~]# mount -t ext4
/dev/vdb on /data type ext4 (rw,noatime,nodelalloc,data=ordered)
```
2. Installation steps
2.1 Download and install TiUP:
curl --proto '=https' --tlsv1.2 -sSf https://tiup-mirrors.pingcap.com/install.sh | sh
2.2 Install the TiUP cluster component:
tiup cluster
You need to open a new terminal or reload the profile with source /root/.bash_profile before you can run the tiup command.
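For example, to pick up the updated PATH in the current shell and verify the installation:

```
source /root/.bash_profile   # reload the profile written by the TiUP installer
tiup --version               # confirm the tiup binary is now available
```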
2.3 If the TiUP cluster component is already installed on the machine, update it to the latest version:
tiup update --self && tiup update cluster
2.4 Increase the connection limit of the sshd service
Because this deployment simulates multiple machines on a single host, the connection limit of the sshd service must be increased as the root user:
- Modify /etc/ssh/sshd_config and set MaxSessions to 20 (see the sketch after this list).
- Restart the sshd service:
systemctl restart sshd.service
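The two steps above can be scripted roughly as follows (a sketch assuming a stock sshd_config in which MaxSessions is present but commented out):

```
# Set MaxSessions to 20, replacing an existing or commented-out MaxSessions line.
sed -i 's/^#\?MaxSessions.*/MaxSessions 20/' /etc/ssh/sshd_config
grep MaxSessions /etc/ssh/sshd_config   # expect: MaxSessions 20
systemctl restart sshd.service
```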
2.5 Create and start a cluster
Edit a configuration file based on the following template and name it topo.yaml, where:
- user: "tidb": the cluster is managed internally through the tidb system user (created automatically during deployment). By default, port 22 is used to log in to the target machine over SSH.
- deploy_dir / data_dir: the installation directory and data directory of the cluster components, respectively.
- replication.enable-placement-rules: this PD parameter must be set to ensure that TiFlash runs normally.
- host: set to the IP address of the deployment host.
```
[root@tidb01 .tiup]# pwd
/root/.tiup
[root@tidb01 .tiup]# cat topo.yaml
# # Global variables are applied to all deployments and used as the default value of
# # the deployments if a specific deployment value is missing.
global:
  user: "tidb"
  ssh_port: 22
  deploy_dir: "/data/tidb-deploy"
  data_dir: "/data/tidb-data"

# # Monitored variables are applied to all the machines.
monitored:
  node_exporter_port: 9100
  blackbox_exporter_port: 9115

server_configs:
  tidb:
    log.slow-threshold: 300
  tikv:
    readpool.storage.use-unified-pool: false
    readpool.coprocessor.use-unified-pool: true
  pd:
    replication.enable-placement-rules: true
    replication.location-labels: ["host"]
  tiflash:
    logger.level: "info"

pd_servers:
  - host: 10.186.65.41

tidb_servers:
  - host: 10.186.65.41

tikv_servers:
  - host: 10.186.65.41
    port: 20160
    status_port: 20180
    config:
      server.labels: { host: "logic-host-1" }
  - host: 10.186.65.41
    port: 20161
    status_port: 20181
    config:
      server.labels: { host: "logic-host-2" }
  - host: 10.186.65.41
    port: 20162
    status_port: 20182
    config:
      server.labels: { host: "logic-host-3" }

tiflash_servers:
  - host: 10.186.65.41

monitoring_servers:
  - host: 10.186.65.41

grafana_servers:
  - host: 10.186.65.41
```
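Optionally, you can run TiUP's environment checker against this topology file before deploying (a sketch; the check subcommand is available in recent tiup cluster versions and prompts for the root password just like the deploy step):

```
tiup cluster check ./topo.yaml --user root -p
```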
2.6 Execute the cluster deployment command:
tiup cluster deploy <cluster-name> <tidb-version> ./topo.yaml --user root -p
- The <cluster-name> parameter sets the cluster name.
- The <tidb-version> parameter sets the cluster version. You can view the TiDB versions available for deployment with the tiup list tidb command.
Example:
tiup cluster deploy barlow 4.0.12 ./topo.yaml --user root -p
Follow the prompts, entering "y" and the root password to complete the deployment:
```
Do you want to continue? [y/N]: y
Input SSH password:
```
A successful deployment ends with the following message:
Cluster `barlow` deployed successfully, you can start it with command: `tiup cluster start barlow`
2.7 Start the cluster:
tiup cluster start <cluster-name>
For example:
tiup cluster start barlow
2.8 Access the cluster:
- Install the MySQL client. If it is already installed, you can skip this step:
yum install -y mysql
- Connect to the TiDB database as root; the password is empty (a quick verification sketch follows at the end of this section):
mysql -uroot -p -h10.186.65.41 -P4000
- Access the Grafana monitoring of TiDB:
Access the cluster's Grafana monitoring page at http://{grafana-ip}:3000. The default user name and password are both admin.
http://10.186.65.41:3000/
- Access the TiDB Dashboard:
Access the cluster's TiDB Dashboard page at http://{pd-ip}:2379/dashboard. The default user name is root and the password is empty.
http://10.186.65.41:2379/dashboard
- Execute the following command to confirm the list of currently deployed clusters:
tiup cluster list
```
[root@tidb01 .tiup]# tiup cluster list
Starting component `cluster`: /root/.tiup/components/cluster/v1.4.2/tiup-cluster list
Name    User  Version  Path                                         PrivateKey
----    ----  -------  ----                                         ----------
barlow  tidb  v4.0.12  /root/.tiup/storage/cluster/clusters/barlow  /root/.tiup/storage/cluster/clusters/barlow/ssh/id_rsa
```
- Execute the following command to view the topology and status of the cluster:
tiup cluster display <cluster-name>
```
[root@tidb01 .tiup]# tiup cluster display barlow
Starting component `cluster`: /root/.tiup/components/cluster/v1.4.2/tiup-cluster display barlow
Cluster type:       tidb
Cluster name:       barlow
Cluster version:    v4.0.12
SSH type:           builtin
Dashboard URL:      http://10.186.65.41:2379/dashboard
ID                  Role        Host          Ports                            OS/Arch       Status   Data Dir                         Deploy Dir
--                  ----        ----          -----                            -------       ------   --------                         ----------
10.186.65.41:3000   grafana     10.186.65.41  3000                             linux/x86_64  Up       -                                /data/tidb-deploy/grafana-3000
10.186.65.41:2379   pd          10.186.65.41  2379/2380                        linux/x86_64  Up|L|UI  /data/tidb-data/pd-2379          /data/tidb-deploy/pd-2379
10.186.65.41:9090   prometheus  10.186.65.41  9090                             linux/x86_64  Up       /data/tidb-data/prometheus-9090  /data/tidb-deploy/prometheus-9090
10.186.65.41:4000   tidb        10.186.65.41  4000/10080                       linux/x86_64  Up       -                                /data/tidb-deploy/tidb-4000
10.186.65.41:9000   tiflash     10.186.65.41  9000/8123/3930/20170/20292/8234  linux/x86_64  Up       /data/tidb-data/tiflash-9000     /data/tidb-deploy/tiflash-9000
10.186.65.41:20160  tikv        10.186.65.41  20160/20180                      linux/x86_64  Up       /data/tidb-data/tikv-20160       /data/tidb-deploy/tikv-20160
10.186.65.41:20161  tikv        10.186.65.41  20161/20181                      linux/x86_64  Up       /data/tidb-data/tikv-20161       /data/tidb-deploy/tikv-20161
10.186.65.41:20162  tikv        10.186.65.41  20162/20182                      linux/x86_64  Up       /data/tidb-data/tikv-20162       /data/tidb-deploy/tikv-20162
Total nodes: 8
```
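To wrap up, a few quick checks confirm that the cluster is actually serving traffic (a sketch, assuming the mysql client and curl are installed and the 10.186.65.41 host used throughout this example):

```
# Confirm SQL access and the running TiDB version (root password is empty).
mysql -uroot -h10.186.65.41 -P4000 -e "SELECT tidb_version();"

# Confirm that Grafana and the TiDB Dashboard endpoints respond over HTTP.
curl -sI http://10.186.65.41:3000 | head -n 1
curl -sI http://10.186.65.41:2379/dashboard | head -n 1
```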