Flink CDC and Kafka: a scheme for multi-source merging and downstream synchronization
1. Foreword: This article addresses the problem that, with Flink CDC, Flink SQL cannot merge multiple databases and multiple tables from multiple sources, and how to synchronize the merged result to downstream Kafka, because at present Flink SQL can only run single-table Flink CDC jobs, which ...
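As one possible illustration of a DataStream-API approach to the problem described above (host names, database, table and topic names below are placeholders, not taken from the article), a single Flink CDC MySqlSource can cover several databases and tables and forward the merged change stream to Kafka. A minimal sketch, assuming the flink-cdc-connectors and Flink Kafka connector dependencies:

import com.ververica.cdc.connectors.mysql.source.MySqlSource;
import com.ververica.cdc.debezium.JsonDebeziumDeserializationSchema;
import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer;
import java.util.Properties;

public class CdcMergeToKafka {
    public static void main(String[] args) throws Exception {
        // One CDC source spanning several databases/tables, emitted as Debezium-style JSON.
        MySqlSource<String> source = MySqlSource.<String>builder()
                .hostname("mysql-host")
                .port(3306)
                .databaseList("db1", "db2")             // merge multiple databases
                .tableList("db1.orders", "db2.orders")  // merge multiple tables
                .username("flink")
                .password("flink-pw")
                .deserializer(new JsonDebeziumDeserializationSchema())
                .build();

        Properties kafkaProps = new Properties();
        kafkaProps.setProperty("bootstrap.servers", "kafka-host:9092");

        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.enableCheckpointing(60_000); // CDC sources rely on checkpointing for exactly-once
        env.fromSource(source, WatermarkStrategy.noWatermarks(), "mysql-cdc-merged")
           .addSink(new FlinkKafkaProducer<>("merged_changes", new SimpleStringSchema(), kafkaProps));
        env.execute("cdc-multi-source-to-kafka");
    }
}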
Added by jefkin on Tue, 01 Feb 2022 23:22:40 +0200
Source code analysis of Kafka's offset management module, OffsetManager
Inside Kafka there is a topic named __consumer_offsets. This topic mainly stores, for each consumer group, the offset it has reached in each partition of the topics it consumes, that is, up to which offset it has consumed. To implement offset management for consumer groups, when a consumer fetches a message, it is necessary for t ...
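As a small, hedged illustration of where those stored offsets come from (broker address, topic and group names are invented for the example), a consumer with auto-commit disabled records its progress per partition and persists it with commitSync, which writes into the __consumer_offsets topic:

import java.time.Duration;
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;

public class OffsetCommitSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "demo-group");
        props.put("enable.auto.commit", "false"); // we store the group's progress ourselves
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("demo-topic"));
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
            Map<TopicPartition, OffsetAndMetadata> toCommit = new HashMap<>();
            for (ConsumerRecord<String, String> record : records) {
                // The committed offset means "next record to read", hence offset + 1.
                toCommit.put(new TopicPartition(record.topic(), record.partition()),
                             new OffsetAndMetadata(record.offset() + 1));
            }
            consumer.commitSync(toCommit); // persisted into __consumer_offsets
        }
    }
}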
Added by Braveheart on Sun, 30 Jan 2022 20:57:22 +0200
Kafka security authentication with SASL/PLAINTEXT: username and password authentication
Environment: CentOS 7. By default, neither the Kafka cluster nor the ZooKeeper cluster has a username or password.
1. Configure SASL for the ZooKeeper cluster
All ZooKeeper nodes are peers, although the roles of individual nodes may differ. In the following steps, the configuration is the same on every node.
1. Add SASL support for ZooKeeper in the configuration file ...
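For context, once the brokers require SASL/PLAIN, Java clients must also present the account and password. A minimal sketch of the client-side properties (user name, password and broker address are placeholders, not from the article):

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class SaslPlainProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker1:9092");
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());
        // SASL_PLAINTEXT with the PLAIN mechanism: account + password, no TLS.
        props.put("security.protocol", "SASL_PLAINTEXT");
        props.put("sasl.mechanism", "PLAIN");
        props.put("sasl.jaas.config",
                "org.apache.kafka.common.security.plain.PlainLoginModule required "
                + "username=\"alice\" password=\"alice-secret\";");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("demo-topic", "hello with SASL"));
        }
    }
}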
Added by saami123 on Sun, 30 Jan 2022 11:34:58 +0200
Kafka for the Internet of Things: theory, practice, ZooKeeper, deep dives, and a collection of errors
Internet of Things Kafka configuration; Kafka theory, ZooKeeper deep dive, and error collection
Kafka is used in the company's Internet of Things project as message middleware, and it is a new technology for the company. Currently it runs as a single node, without a cluster.
Kafka configuration for the Internet of Things
ZooKeeper: used to register an ...
Added by nblackwood on Sat, 22 Jan 2022 21:25:10 +0200
Microservice test development: how to test RocketMQ when it is used in a project
1. What is MQ
MQ is a kind of message middleware. The simplest way to understand it is that service A sends a message to service B, and service B performs a series of actions after receiving the message.
What problems does this solve?
Peak shaving: if services call each other through an interface, whether HTTP or RPC, the call is a synchronous action, which means tha ...
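To make the "service A sends, service B reacts later" idea concrete, here is a minimal, hedged RocketMQ producer sketch (name server address, topic and producer group are invented for the example; assumes the rocketmq-client dependency):

import java.nio.charset.StandardCharsets;
import org.apache.rocketmq.client.producer.DefaultMQProducer;
import org.apache.rocketmq.client.producer.SendResult;
import org.apache.rocketmq.common.message.Message;

public class ServiceAProducer {
    public static void main(String[] args) throws Exception {
        DefaultMQProducer producer = new DefaultMQProducer("service-a-producer-group");
        producer.setNamesrvAddr("127.0.0.1:9876");
        producer.start();
        try {
            // Service A fires the message and returns immediately; service B consumes it later.
            Message msg = new Message("ORDER_CREATED", "tagA",
                    "order-id-123".getBytes(StandardCharsets.UTF_8));
            SendResult result = producer.send(msg);
            System.out.println("sent: " + result.getMsgId());
        } finally {
            producer.shutdown();
        }
    }
}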
Added by LOUDMOUTH on Sat, 22 Jan 2022 19:26:56 +0200
Blog recommendation | Apache Pulsar achieves fast log retrieval based on Log4j2 + Kafka + ELK
This article is reproduced from the official account StreamCloudNative; the author, Xue Song, works at New World Software as a senior software engineer. Editor: chicken chop, StreamNative. About Apache Pulsar: Apache Pulsar is a top-level project of the Apache Software Foundation. It is a cloud-native distributed messaging and streaming platform for the n ...
Added by waq2062 on Thu, 20 Jan 2022 12:45:34 +0200
Kafka's timing wheel (TimingWheel) algorithm
Kafka has many delayed operations, such as delayed produce, delayed fetch, and delayed delete. Kafka does not use the Timer or DelayQueue provided by the JDK to implement this delay functionality; instead, it implements its own timer (SystemTimer) based on the timing wheel algorithm.
The average time complexity of inserting and deleting ...
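A very rough sketch of the idea (this is not Kafka's SystemTimer; it is a single-level wheel that ignores delays longer than one rotation, whereas Kafka adds overflow wheels): a fixed array of buckets, a pointer that advances one tick at a time, and O(1) insertion by mapping the expiration time to a bucket.

import java.util.ArrayList;
import java.util.List;

// Simplified single-level timing wheel: tickMs * wheelSize covers one rotation.
public class SimpleTimingWheel {
    private final long tickMs;
    private final List<List<Runnable>> buckets;
    private long currentTick = 0;

    public SimpleTimingWheel(long tickMs, int wheelSize) {
        this.tickMs = tickMs;
        this.buckets = new ArrayList<>(wheelSize);
        for (int i = 0; i < wheelSize; i++) buckets.add(new ArrayList<>());
    }

    // O(1): drop the task into the bucket in which it will expire.
    public void add(long delayMs, Runnable task) {
        long ticks = Math.max(1, delayMs / tickMs);
        int bucket = (int) ((currentTick + ticks) % buckets.size());
        buckets.get(bucket).add(task);
    }

    // Called once per tick by a driving timer: run and clear the bucket the pointer reaches.
    public void advance() {
        currentTick++;
        int bucket = (int) (currentTick % buckets.size());
        List<Runnable> expired = buckets.get(bucket);
        expired.forEach(Runnable::run);
        expired.clear();
    }
}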
Added by fasmy98 on Tue, 18 Jan 2022 06:54:36 +0200
Kafka source code analysis: consuming messages
Analysis of the Kafka message consumption source code
Example code for consuming messages
package com.example.demo.kafka;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer; ...
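The class body that typically follows imports like these is a plain subscribe-and-poll loop. A hedged sketch with the imports restated so it compiles on its own (broker, group and topic names are assumptions, not taken from the article):

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class ConsumerDemo {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "demo-group");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("demo-topic"));
            while (true) {
                // poll() fetches a batch of records; with default settings offsets are auto-committed.
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(1000));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("offset=%d key=%s value=%s%n",
                            record.offset(), record.key(), record.value());
                }
            }
        }
    }
}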
Added by gauravupadhyaya on Mon, 17 Jan 2022 20:19:48 +0200
Does Flink Checkpoint support maintaining Kafka consumption state?
Author: Wen Naisong. When using Flink to consume Kafka data in real time, offset state has to be maintained. To ensure that a Flink job restart, or an operator-level failure retry while the job is running, can "resume from the breakpoint", you need the support of Flink Checkpoint. The question is, if you sim ...
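A hedged sketch of what enabling that support usually looks like with the legacy FlinkKafkaConsumer connector (broker address, topic and group id are placeholders; the exact connector classes depend on the Flink version):

import java.util.Properties;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer;

public class CheckpointedKafkaJob {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        // With checkpointing on, the consumed offsets become part of the job's state,
        // which is what allows "resume from the breakpoint" after a restart or retry.
        env.enableCheckpointing(60_000);

        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "localhost:9092");
        props.setProperty("group.id", "flink-demo-group");

        FlinkKafkaConsumer<String> source =
                new FlinkKafkaConsumer<>("demo-topic", new SimpleStringSchema(), props);
        source.setCommitOffsetsOnCheckpoints(true); // also mirror offsets back to Kafka

        env.addSource(source).print();
        env.execute("checkpointed-kafka-consumer");
    }
}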
Added by halcyonalt on Sat, 15 Jan 2022 09:53:19 +0200
13. Kafka consumer findCoordinator
Here, let's sort out the relevant knowledge about consumer groups (consumeGroup):
1. First, we set a groupId for each consumer. Consumers with the same groupId that subscribe to the same topic form a consumer group, as shown in Figure 1.
2. A topic on the server side is divided into partitions, as shown in Figure 2.
3. Now we have multiple con ...
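A small aside that may help when reading findCoordinator: the broker chooses the group coordinator by hashing the groupId onto a partition of __consumer_offsets, roughly as in the sketch below (50 is only the default value of offsets.topic.num.partitions, and the real code uses Kafka's own abs utility rather than Math.abs):

public class CoordinatorPartition {
    public static void main(String[] args) {
        String groupId = "demo-group";
        int offsetsTopicPartitions = 50; // default offsets.topic.num.partitions
        // The leader of this __consumer_offsets partition acts as the group's coordinator.
        int partition = Math.abs(groupId.hashCode()) % offsetsTopicPartitions;
        System.out.println("coordinator partition for " + groupId + " = " + partition);
    }
}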
Added by mitch.craig on Mon, 10 Jan 2022 02:36:52 +0200