Kafka 2.5.0 cluster installation (single-machine pseudo-cluster)
Install a single-node ZooKeeper 3.6.3
Download and unzip Zookeeper
-
Download address
https://zookeeper.apache.org/releases.html
Download the binary package apache-zookeeper-3.6.3-bin.tar.gz here; there is no need to compile from source.
-
Create the folder zookeeper under the root path, enter it, and extract the archive there
[root@localhost zookeeper]# tar -zxvf apache-zookeeper-3.6.3-bin.tar.gz
-
Rename the extracted directory to zookeeper-3.6.3
[root@localhost zookeeper]# mv apache-zookeeper-3.6.3-bin zookeeper-3.6.3
Create the data storage directory
/zookeeper/zookeeper-3.6.3/tmp/data
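The tmp/data directory is not part of the extracted archive, so create it first, for example:
[root@localhost zookeeper]# mkdir -p /zookeeper/zookeeper-3.6.3/tmp/data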
Modify the configuration file
-
Enter the conf directory, then copy the sample configuration file and rename it
[root@localhost conf]# cp zoo_sample.cfg zoo.cfg
-
Edit zoo.cfg: set the data storage directory and change the client port from 2181 to 2188 (changing the port is optional)
dataDir=/zookeeper/zookeeper-3.6.3/tmp/data
clientPort=2188
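For reference, the resulting zoo.cfg then looks roughly like this (the remaining values are the zoo_sample.cfg defaults):
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/zookeeper/zookeeper-3.6.3/tmp/data
clientPort=2188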
Start Zookeeper
-
Enter the bin directory and execute the startup command
[root@localhost zookeeper-3.6.3]# cd bin/
[root@localhost bin]# ./zkServer.sh start
ZooKeeper JMX enabled by default
Using config: /zookeeper/zookeeper-3.6.3/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
Check whether the startup is successful
[root@localhost bin]# ps -ef | grep zookeeper
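If the JDK is on the PATH, jps is another quick check; the ZooKeeper server shows up as QuorumPeerMain in its output:
[root@localhost bin]# jps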
View zk status
[root@localhost bin]# ./zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /zookeeper/zookeeper-3.6.3/bin/../conf/zoo.cfg
Client port found: 2188. Client address: localhost. Client SSL: false.
Mode: standalone
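As an extra sanity check, the bundled client can connect on the non-default port:
[root@localhost bin]# ./zkCli.sh -server localhost:2188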
Install a Kafka single-machine multi-node pseudo-cluster
Environment dependencies
-
CentOS 7, JDK 8, kafka_2.13-2.5.0.tgz; three Kafka nodes will run on a single machine
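Before starting, it is worth confirming that JDK 8 is installed and on the PATH, for example:
[root@localhost ~]# java -version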
-
Turn off the firewall
# shutdown commands
service firewalld stop
chkconfig firewalld off
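On CentOS 7 the equivalent systemd commands can also be used:
systemctl stop firewalld
systemctl disable firewalld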
Download and unzip Kafka
-
Get the download address (click the specific version):
http://kafka.apache.org/downloads
Download the Binary version, not the source code.
-
Create the folder /kafka/cluster under the root path and enter it
[root@localhost /]# mkdir kafka
[root@localhost /]# cd kafka
[root@localhost kafka]# mkdir cluster
[root@localhost kafka]# cd cluster
-
Unzip the archive located in the parent kafka directory into the current directory
[root@localhost cluster]# tar -zxvf /kafka/kafka_2.13-2.5.0.tgz
-
Rename the extracted directory to kafka2.5
[root@localhost cluster]# mv kafka_2.13-2.5.0 kafka2.5
Create log directories for the three nodes
-
Create three log file directories kafka-logs9093, kafka-logs9094 and kafka-logs9095 under the cluster folder
[root@localhost cluster]# mkdir kafka-logs9093
[root@localhost cluster]# mkdir kafka-logs9094
[root@localhost cluster]# mkdir kafka-logs9095
[root@localhost cluster]# ls
kafka2.5  kafka-logs9093  kafka-logs9094  kafka-logs9095
Modify the configuration files
-
In the config directory, make three copies of server.properties: server9093.properties, server9094.properties, and server9095.properties
[root@localhost config]# cp server.properties server9093.properties
[root@localhost config]# cp server.properties server9094.properties
[root@localhost config]# cp server.properties server9095.properties
-
Settings modified in server9093.properties:
broker.id=3
listeners=PLAINTEXT://192.168.237.128:9093
log.dirs=/kafka/cluster/kafka-logs9093
zookeeper.connect=localhost:2188
-
Settings modified in server9094.properties:
broker.id=4
listeners=PLAINTEXT://192.168.237.128:9094
log.dirs=/kafka/cluster/kafka-logs9094
zookeeper.connect=localhost:2188
-
Settings modified in server9095.properties:
broker.id=5
listeners=PLAINTEXT://192.168.237.128:9095
log.dirs=/kafka/cluster/kafka-logs9095
zookeeper.connect=localhost:2188
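A quick way to compare the key settings across the three files is a simple grep in the config directory; only broker.id, the listener port, and log.dirs should differ, while zookeeper.connect is identical:
[root@localhost config]# grep -E '^(broker.id|listeners|log.dirs|zookeeper.connect)=' server909*.properties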
Start the three Kafka services
-
Make sure ZooKeeper is started before Kafka.
-
Enter the bin directory and start the three Kafka services; the remaining two brokers are started the same way, as shown below
[root@localhost bin]# ./kafka-server-start.sh -daemon ../config/server9093.properties
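The other two brokers use the same command with their own property files; afterwards ps -ef | grep kafka should show three processes:
[root@localhost bin]# ./kafka-server-start.sh -daemon ../config/server9094.properties
[root@localhost bin]# ./kafka-server-start.sh -daemon ../config/server9095.properties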
Create a Topic in the Cluster
-
In the bin directory, create a topic named clustertest with one replica and one partition:
[root@localhost bin]# sh kafka-topics.sh --create --zookeeper localhost:2188 --replication-factor 1 --partitions 1 --topic clustertest
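The new topic can be inspected right away with --describe, which prints the partition count, leader, replicas, and ISR:
[root@localhost bin]# sh kafka-topics.sh --describe --zookeeper localhost:2188 --topic clustertest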
-
List the topics that have been created:
[root@localhost bin]# sh kafka-topics.sh --list --zookeeper localhost:2188
clustertest
Start a Consumer in the Cluster
-
In the bin directory
[root@localhost bin]# sh kafka-console-consumer.sh --bootstrap-server 192.168.237.128:9093,192.168.237.128:9094,192.168.237.128:9095 --topic clustertest --from-beginning
Start a Producer in the Cluster
-
In the bin directory
[root@localhost bin]# sh kafka-console-producer.sh --broker-list 192.168.237.128:9093,192.168.237.128:9094,192.168.237.128:9095 --topic clustertest
In the cluster, send messages from the Producer window and verify that the Consumer windows receive them
Examples
Create a Topic with 3 replicas
[root@localhost bin]# sh kafka-topics.sh --create --zookeeper localhost:2188 --replication-factor 3 --partitions 1 --topic test-alix
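Describing this topic should show a single partition whose replicas live on all three brokers (ids 3, 4, and 5):
[root@localhost bin]# sh kafka-topics.sh --describe --zookeeper localhost:2188 --topic test-alix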
Send messages from the cluster Producer window
[root@localhost bin]# sh kafka-console-producer.sh --broker-list 192.168.237.128:9093,192.168.237.128:9094,192.168.237.128:9095 --topic test-alix
>test partitions
>test replication
Verify message reception in Consumer windows connected to each of the three brokers
[root@localhost bin]# sh kafka-console-consumer.sh --bootstrap-server 192.168.237.128:9093 --topic test-alix --from-beginning
test partitions
test replication
[root@localhost bin]# sh kafka-console-consumer.sh --bootstrap-server 192.168.237.128:9094 --topic test-alix --from-beginning
test partitions
test replication
[root@localhost bin]# sh kafka-console-consumer.sh --bootstrap-server 192.168.237.128:9095 --topic test-alix --from-beginning
test partitions
test replication