Dameng DMDSC cluster deployment and construction

Environment preparation

(1) Two virtual machines are required, each preferably with at least 2 GB of memory. Install DM8 on each virtual machine; do not initialize a database after installation.
(2) For convenience, the first machine is called node 1 and the second node 2. Since this is for learning, the cluster is built with VMware.
(3) To mount a disk for node 1, shut the machine down, then edit the virtual machine settings in VMware and add a hard disk (in production you can ask the operations team for help; it must be a raw device). In production it is recommended to mount two or more disks and store data and logs separately.
Here it is recommended to store the virtual disk as a single file; for "virtual device node", select 1:0.
(4) On the second node, shut down, add a hard disk, choose "use an existing hard disk", browse to the disk file (.vmdk) of node 1, and select 0:1 for "virtual device node".
Then enter the directory of each virtual machine, open the "xxxx.vmx" file in Notepad, and append at the end:

disk.locking="FALSE"
scsi0:1.SharedBus="Virtual"
scsi1:1.SharedBus="Virtual"


Note: the file must end with a newline.
(5) Additional hard disks are mounted in basically the same way. The number on the right side of "virtual device node" can be incremented, or the default can be kept.

(6) Configuration file directory: because the shared disk of DMDSC stores the actual data and related contents, the many .ini configuration files must live on the local disk of each virtual machine, not on the shared disk, so plan a place for them. This article uses /dm8/data/ as the configuration file directory.

(7) The "bin directory" mentioned in this article is the bin directory under the installation directory of dm8. For example, the installation directory is / dm8, so the bin directory is / dm8/bin

Note: the DMDSC cluster will be initialized later with the instance information configured, and the instances will be pulled up automatically, so you only need to decide the database name, ports, and similar details in advance. The databases of the two nodes will then be built on the shared disk (that is, the VMware disk file), while the configuration files of the library (including those of the other software) stay on the local machine (that is, the virtual machine you operate), which is why (6) exists.
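For example, the configuration directory can be prepared on both nodes up front (a minimal sketch, assuming /dm8 already exists and is writable):

mkdir -p /dm8/data    # local configuration file directory, one per node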

DM8 trial download address: https://eco.dameng.com/download/?_blank

1. Partition the shared disk (raw device)

(1) Power on both virtual machines and, as the root user, turn off the firewall (a sketch follows).
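A minimal sketch, assuming a systemd-based distribution such as CentOS 7 (use your distribution's equivalent otherwise):

systemctl stop firewalld       # stop the firewall immediately
systemctl disable firewalld    # keep it disabled across reboots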
(2) On node 1, execute fdisk -l; the shared hard disk mounted above should be visible (with a single disk the default name is sdb; with two, sdb and sdc, and so on).
(3) On node 2, also execute fdisk -l to confirm that the shared disk is visible.

The shared disk needs to be divided into four partitions to store the DCR information, vote information, redo logs, and data; they are bound as raw1 through raw4 respectively.

(4) On node 1, execute fdisk /dev/sdb to partition the disk.
Enter in sequence:
n → p → 1 → Enter → +100M → Enter
n → p → 2 → Enter → +100M → Enter
n → p → 3 → Enter → +2048M → Enter
n → p → Enter → Enter (i.e., by default all remaining space is allocated to partition 4)
w → Enter
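An equivalent non-interactive sketch using parted instead of the fdisk keystrokes above (an assumption that parted is installed; mklabel rewrites the partition table, so run this only on the empty shared disk):

# sizes mirror the fdisk steps: 100M, 100M, 2048M, and the rest
parted -s /dev/sdb mklabel msdos \
  mkpart primary 1MiB 101MiB \
  mkpart primary 101MiB 201MiB \
  mkpart primary 201MiB 2249MiB \
  mkpart primary 2249MiB 100%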
(5) Execute fdisk -l again to check that the partition information now appears under the shared disk.
(6) Then go to node 2 and execute fdisk -l to check whether the shared disk shows the same partition information. If not, the shared disk was not mounted correctly; go back to the environment preparation and mount it again.

If you have attached four or more disks and each one is large enough for the contents above, this partitioning step can be skipped.

On both nodes, execute:

vi /etc/udev/rules.d/60-raw.rules

Enter the following:

ACTION=="add", KERNEL=="sdb1", RUN+="/bin/raw /dev/raw/raw1 %N"
ACTION=="add", KERNEL=="sdb2", RUN+="/bin/raw /dev/raw/raw2 %N"
ACTION=="add", KERNEL=="sdb3", RUN+="/bin/raw /dev/raw/raw3 %N"
ACTION=="add", KERNEL=="sdb4", RUN+="/bin/raw /dev/raw/raw4 %N"
ACTION=="add", KERNEL=="raw[1-4]", OWNER="dmdba", GROUP="dinstall", MODE="660"

If you mount multiple disks, remember to adjust the disk names in KERNEL accordingly, for example as shown below.
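A sketch of the extra rules for a hypothetical second shared disk sdc (the raw numbering simply continues, and the KERNEL pattern of the permissions rule is widened to match):

ACTION=="add", KERNEL=="sdc1", RUN+="/bin/raw /dev/raw/raw5 %N"
ACTION=="add", KERNEL=="raw[1-5]", OWNER="dmdba", GROUP="dinstall", MODE="660"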


After completion, execute the following command on both nodes to make the raw device binding take effect:

udevadm trigger --type=devices --action=change

View the raw devices:

ll /dev/raw/*


Output similar to the following is returned:

crw-rw----. 1 dmdba dinstall 162, 1 Dec 24 16:49 /dev/raw/raw1
crw-rw----. 1 dmdba dinstall 162, 2 Dec 28 21:25 /dev/raw/raw2
crw-rw----. 1 dmdba dinstall 162, 3 Dec 28 21:22 /dev/raw/raw3
crw-rw----. 1 dmdba dinstall 162, 4 Dec 28 19:25 /dev/raw/raw4
crw-rw----. 1 root  disk     162, 0 Dec 24 16:49 /dev/raw/rawctl


If these files are not visible or their permissions look wrong, it does not matter; ignore it for the time being.
(7) Restart both machines.
(8) After restarting, check again: ll /dev/raw/*
If there is still no change, there is a problem with the disk and it has to be mounted again.
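As a quick check, the binding can also be done by hand (a sketch; this binds sdb1 to raw1 only until the next reboot, bypassing udev):

/bin/raw /dev/raw/raw1 /dev/sdb1
chown dmdba:dinstall /dev/raw/raw1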

2. Create the configuration files (switch back to the dmdba user)

su - dmdba
Go to the configuration file directory, /dm8/data, and create the configuration files there. For example:
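A minimal sketch (creating the directory first in case it does not exist yet; assumes /dm8 is owned by dmdba):

mkdir -p /dm8/data
cd /dm8/data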

[dmdcr_cfg.ini] configure this on both nodes

vi dmdcr_cfg.ini


Write the following contents (apart from the IPs, which must be changed, the rest can stay as-is):

DCR_N_GRP = 3
DCR_VTD_PATH = /dev/raw/raw2
DCR_OGUID = 63635
[GRP]
DCR_GRP_TYPE = CSS
DCR_GRP_NAME = GRP_CSS
DCR_GRP_N_EP = 2
DCR_GRP_DSKCHK_CNT = 60
[GRP_CSS]
DCR_EP_NAME = CSS0
DCR_EP_HOST = <node 1 IP>
DCR_EP_PORT = 9341
[GRP_CSS]
DCR_EP_NAME = CSS1
DCR_EP_HOST = <node 2 IP>
DCR_EP_PORT = 9343
[GRP]
DCR_GRP_TYPE = ASM
DCR_GRP_NAME = GRP_ASM
DCR_GRP_N_EP = 2
DCR_GRP_DSKCHK_CNT = 60
[GRP_ASM]
DCR_EP_NAME = ASM0
DCR_EP_SHM_KEY = 93360
DCR_EP_SHM_SIZE = 10
DCR_EP_HOST = <node 1 IP>
DCR_EP_PORT = 9349
DCR_EP_ASM_LOAD_PATH = /dev/raw
[GRP_ASM]
DCR_EP_NAME = ASM1
DCR_EP_SHM_KEY = 93361
DCR_EP_SHM_SIZE = 10
DCR_EP_HOST = <node 2 IP>
DCR_EP_PORT = 9351
DCR_EP_ASM_LOAD_PATH = /dev/raw
[GRP]
DCR_GRP_TYPE = DB
DCR_GRP_NAME = GRP_DSC
DCR_GRP_N_EP = 2
DCR_GRP_DSKCHK_CNT = 60
[GRP_DSC]
DCR_EP_NAME = DSC0
DCR_EP_SEQNO = 0
DCR_CHECK_PORT = 9741
DCR_EP_PORT = 5238
[GRP_DSC]
DCR_EP_NAME = DSC1
DCR_EP_SEQNO = 1
DCR_CHECK_PORT = 9742
DCR_EP_PORT = 5238

[Initialize the DCR] this only needs to be done on one node

There are two ways to run the initialization with dmasmcmd:
Method 1: write a script file, pasting and saving the following contents (note the configuration file paths in the last two lines):

vi asmcmd.txt
#asm script file
create dcrdisk '/dev/raw/raw1' 'dcr'
create votedisk '/dev/raw/raw2' 'vote'
create asmdisk '/dev/raw/raw3' 'LOG0'
create asmdisk '/dev/raw/raw4' 'DATA0'
init dcrdisk '/dev/raw/raw1' from '/dm8/data/dmdcr_cfg.ini' identified by 'abcd'
init votedisk '/dev/raw/raw2' from '/dm8/data/dmdcr_cfg.ini'


Then go to the bin directory of dm8 and execute:

./dmasmcmd script_file=/dm8/data/asmcmd.txt

Method 2: go directly to the bin directory and execute:

./dmasmcmd

Then enter each of the commands above in turn, pressing Enter after each (the comment line at the top does not need to be executed, and no trailing semicolons are required).

[dmasvrmal.ini] configure this on both nodes, with identical content

vi dmasvrmal.ini


Write the following (note the IPs):

[MAL_INST1]
MAL_INST_NAME = ASM0
MAL_HOST = <node 1 IP>
MAL_PORT = 7236
[MAL_INST2]
MAL_INST_NAME = ASM1
MAL_HOST = <node 2 IP>
MAL_PORT = 7237

[dmdcr.ini] configure this on both nodes

Node 1:

DMDCR_PATH = /dev/raw/raw1
DMDCR_MAL_PATH = /dm8/data/dmasvrmal.ini #path of the MAL configuration file used by dmasmsvr
DMDCR_SEQNO = 0
#ASM restart parameter, command line startup
DMDCR_ASM_RESTART_INTERVAL = 0
DMDCR_ASM_STARTUP_CMD = /dm8/bin/dmasmsvr dcr_ini=/dm8/data/dmdcr.ini
#DB restart parameter, command line startup
DMDCR_DB_RESTART_INTERVAL = 0
DMDCR_DB_STARTUP_CMD = /dm8/bin/dmserver path=/dm8/data/dsc0_config/dm.ini dcr_ini=/dm8/data/dmdcr.ini

Node 2:

DMDCR_PATH = /dev/raw/raw1
DMDCR_MAL_PATH = /dm8/data/dmasvrmal.ini #path of the MAL configuration file used by dmasmsvr
DMDCR_SEQNO = 1
#ASM restart parameter, command line startup
DMDCR_ASM_RESTART_INTERVAL = 0
DMDCR_ASM_STARTUP_CMD = /dm8/bin/dmasmsvr dcr_ini=/dm8/data/dmdcr.ini
#DB restart parameter, command line startup
DMDCR_DB_RESTART_INTERVAL = 0
DMDCR_DB_STARTUP_CMD = /dm8/bin/dmserver path=/dm8/data/dsc1_config/dm.ini dcr_ini=/dm8/data/dmdcr.ini

DMDCR_SEQNO is 0 and 1, corresponding to node 1 and node 2. The path at the end of DMDCR_DB_STARTUP_CMD contains the instance name and configuration directory; note that it differs between the two nodes (dsc0_config vs dsc1_config), so modify it according to your actual situation.

The DMDCR_ASM_RESTART_INTERVAL and DMDCR_DB_RESTART_INTERVAL parameters above control the automatic restart of dmasmsvr and dmserver respectively. Configure them as 0 first, i.e. manual startup, and change them to automatic after the subsequent configuration steps.

3. Start the dmcss and dmasmsvr services on both nodes (they start in the foreground, so use a new session for each)

On both nodes, go to the bin directory and execute:

./dmcss dcr_ini=/dm8/data/dmdcr.ini


Then, on both nodes, open a new session and execute:

./dmasmsvr dcr_ini=/dm8/data/dmdcr.ini

Return to the session started by dmcss; when output for node 1 and node 2 appears, with plenty of content and no errors, the startup succeeded.

4. Create the disk groups using dmasmtool

On either node, open a new session, go to the bin directory, and start the dmasmtool tool:

./dmasmtool dcr_ini=/dm8/data/dmdcr.ini

Then execute the following in sequence (again, no semicolons):

create diskgroup 'DMLOG' asmdisk '/dev/raw/raw3'
create diskgroup 'DMDATA' asmdisk '/dev/raw/raw4'
exit

[Configure dminit.ini] one node is enough

Go to the configuration file directory, /dm8/data, and create the configuration file:

vi dminit.ini

Write the following (note the IPs):

db_name = dsc
system_path = +DMDATA/data
system = +DMDATA/data/dsc/system.dbf
system_size = 128
roll = +DMDATA/data/dsc/roll.dbf
roll_size = 128
main = +DMDATA/data/dsc/main.dbf
main_size = 128
ctl_path = +DMDATA/data/dsc/dm.ctl
ctl_size = 8
log_size = 256
dcr_path = /dev/raw/raw1
dcr_seqno = 0
auto_overwrite = 1
page_size = 16
[DSC0]
config_path = /dm8/data/dsc0_config
port_num = 5241
mal_host = <node 1 IP>
mal_port = 9340
log_path = +DMLOG/log/dsc0_log01.log
log_path = +DMLOG/log/dsc0_log02.log
[DSC1]
config_path = /dm8/data/dsc1_config
port_num = 5241
mal_host = <node 2 IP>
mal_port = 9341
log_path = +DMLOG/log/dsc1_log01.log
log_path = +DMLOG/log/dsc1_log02.log

5. Initialize the database (one node)

(1) Go to the bin directory and start dminit

./dminit control=/dm8/data/dminit.ini

A success message and the elapsed time are printed at the end, indicating that initialization succeeded.


(2) After initialization succeeds, two directories appear under /dm8/data: dsc0_config and dsc1_config.
(3) Each directory contains dm.ini and dmmal.ini.
(4) Copy the node 2 directory to the node 2 machine, into the same location:

scp -r dsc1_config/ dmdba@<node 2 IP>:/dm8/data/


(5) Then delete the dsc1_config directory on node 1, for example:
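A one-line sketch (run as dmdba on node 1, after confirming the copy reached node 2):

rm -rf /dm8/data/dsc1_config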

6. Start the database service (in the bin directory)

Node 1:

./dmserver /dm8/data/dsc0_config/dm.ini

Node 2:

./dmserver /dm8/data/dsc1_config/dm.ini

These also start in the foreground, so remember to open new sessions for them.

7. Configure remote archiving

First, in the dm.ini file of both nodes, set ARCH_INI to 1 (shown below), then create dmarch.ini in the same directory as dm.ini:
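The line to change in each dm.ini (a sketch; /dm8/data/dsc0_config/dm.ini on node 1 and /dm8/data/dsc1_config/dm.ini on node 2):

ARCH_INI = 1    #read the archive configuration from dmarch.ini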

vi dmarch.ini

Node 1:

[ARCHIVE_LOCAL1]
ARCH_TYPE = LOCAL
ARCH_DEST = /dmarch/dsc0/arch
ARCH_FILE_SIZE = 128
ARCH_SPACE_LIMIT = 0
[ARCHIVE_REMOTE]
ARCH_TYPE = REMOTE
ARCH_DEST = DSC1
ARCH_FILE_SIZE = 128
ARCH_SPACE_LIMIT = 0
ARCH_INCOMING_PATH = /dmarch/dsc0/arch_remote

Node 2:

[ARCHIVE_LOCAL1]
ARCH_TYPE = LOCAL
ARCH_DEST = /dmarch/dsc1/arch
ARCH_FILE_SIZE = 128
ARCH_SPACE_LIMIT = 0
[ARCHIVE_REMOTE]
ARCH_TYPE = REMOTE
ARCH_DEST = DSC0
ARCH_FILE_SIZE = 128
ARCH_SPACE_LIMIT = 0
ARCH_INCOMING_PATH = /dmarch/dsc1/arch_remote

Note that the archive directories must be created in advance, for example as shown below.
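A sketch of creating them with the paths from the dmarch.ini files above (run as root, then hand ownership to dmdba):

# node 1
mkdir -p /dmarch/dsc0/arch /dmarch/dsc0/arch_remote
# node 2
mkdir -p /dmarch/dsc1/arch /dmarch/dsc1/arch_remote
# both nodes
chown -R dmdba:dinstall /dmarch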

8. Configure background startup

First stop the database service on both nodes.
Then stop the dmcss sessions: on both nodes, type exit and press Enter.
Then stop dmasmsvr the same way: on both nodes, type exit and press Enter.
Go to the service_template directory under the bin directory.

Node 1:

cp DmCSSService ../DmCSSService_dsc0
vi DmCSSService_dsc0

Modify DCR_INI_PATH="/dm8/data/dmdcr.ini" 

Node 2:

cp DmCSSService ../DmCSSService_dsc1
vi DmCSSService_dsc1

Modify DCR_INI_PATH="/dm8/data/dmdcr.ini" 

Do the same for the remaining DmASMSvrService and DmService templates, for example as shown below.
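A sketch of the corresponding node 1 commands (node 2 is analogous with the dsc1 names; the exact variable names inside the templates can vary between DM8 versions, so verify them in each file):

cp DmASMSvrService ../DmASMSvrService_dsc0
cp DmService ../DmService_dsc0
# in both copies, set DCR_INI_PATH="/dm8/data/dmdcr.ini"
# in DmService_dsc0, also point INI_PATH at /dm8/data/dsc0_config/dm.ini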

On both nodes, return to the configuration file directory /dm8/data and modify dmdcr.ini:
DMDCR_ASM_RESTART_INTERVAL = 10
DMDCR_DB_RESTART_INTERVAL = 30

The numbers are in seconds, i.e. the service is pulled up after 10 (or 30) seconds. A value of 0 means it is never pulled up automatically and must be started manually.

Then return to the bin directory and execute:

Node 1:

./DmCSSService_dsc0 start

Node 2:

./DmCSSService_dsc1 start

After about a minute, run ps -ef | grep dm8 to check whether dmcss has pulled up dmasmsvr and dmserver normally. The delay depends on how long the instances take to start, so wait patiently.

After this setup, executing the DmCSSService script alone is enough; dmcss will automatically pull up dmasmsvr and dmserver.

9. Configure the DMCSSM monitor (on any node)

Go to the configuration file directory, /dm8/data

[dmcssm.ini]

vi dmcssm.ini


Write the following (note the IPs):

#consistent with DCR_OGUID in dmdcr_cfg.ini
CSSM_OGUID = 63635
#connection information of all CSS instances;
#must match DCR_EP_HOST and DCR_EP_PORT of the CSS entries in dmdcr_cfg.ini
CSSM_CSS_IP = <node 1 IP>:9341
CSSM_CSS_IP = <node 2 IP>:9343
CSSM_LOG_PATH = /dm8/log #monitor log file storage path
CSSM_LOG_FILE_SIZE = 32 #maximum 32 MB per log file
CSSM_LOG_SPACE_LIMIT = 0 #no limit on total log file space

Start the monitor to view the instances:

./dmcssm ini_path=/dm8/data/dmcssm.ini

Enter show.
Finally, taking the DB group as an example: when the INST_STATUS field is OPEN, the VTD_STATUS field is WORKING, and the ACTIVE field is TRUE, the database service is running normally.

Community address: https://eco.dameng.com
