Building an epidemic data visualization dashboard with Pyecharts
Inspiration for this article: https://blog.csdn.net/qq_43613793/article/details/104268536 Thanks to the blogger for providing this learning article!
Brief introduction
Echarts is an open-source data visualization library from Baidu. With its good interactivity and polished chart design, Echarts has been recognized by many developers. Python is an expressive lang ...
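For context, here is a minimal pyecharts sketch of the kind of chart such a dashboard is assembled from (assuming pyecharts 1.x; the region names and case counts below are made-up placeholders, not real epidemic data):

# Minimal pyecharts sketch: render one bar chart of an epidemic dashboard to HTML.
# Assumes pyecharts >= 1.x; the data below is illustrative only.
from pyecharts.charts import Bar
from pyecharts import options as opts

bar = (
    Bar()
    .add_xaxis(["Region A", "Region B", "Region C"])      # placeholder regions
    .add_yaxis("Confirmed cases", [1200, 340, 95])        # placeholder values
    .set_global_opts(title_opts=opts.TitleOpts(title="Epidemic data overview"))
)
bar.render("epidemic_bar.html")  # writes a standalone interactive HTML page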
Added by ottoman_ on Tue, 12 Oct 2021 05:29:18 +0300
Research on quickly inserting large amounts of data into a PostgreSQL database with INSERT
Background
In some application scenarios you need to load a large amount of data into a PostgreSQL database quickly, for example during database migration or SQL log analysis. How many ways are there to insert data quickly on PG? How efficient is each approach? And how can you tune for faster data loading?
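As a rough illustration of two common client-side approaches, here is a hedged Python sketch using psycopg2 (the DSN, table names, and column layout are hypothetical): a single multi-row INSERT amortizes per-statement overhead, while COPY FROM STDIN uses PostgreSQL's bulk-load path and is usually faster still.

# Sketch of two bulk-load approaches with psycopg2.
# Assumes demo_table and demo_table2 already exist with columns (id int, name text).
import io
import psycopg2

conn = psycopg2.connect("dbname=test user=postgres")   # hypothetical DSN
cur = conn.cursor()
rows = [(i, f"name_{i}") for i in range(10000)]

# Approach 1: one multi-row INSERT with many VALUES tuples
values = ",".join(cur.mogrify("(%s,%s)", r).decode() for r in rows)
cur.execute("INSERT INTO demo_table (id, name) VALUES " + values)

# Approach 2: COPY FROM STDIN, typically the fastest way to load from a client
buf = io.StringIO("".join(f"{i}\tname_{i}\n" for i in range(10000)))
cur.copy_from(buf, "demo_table2", columns=("id", "name"))

conn.commit()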
Scenario setup
SQL log analysis is a tool for coll ...
Added by alin19 on Tue, 12 Oct 2021 03:29:32 +0300
Detailed use of Flink
1. Installation and deployment
Install
Step 1: upload flink-1.10.1-bin-scala_2.12.tgz to the server and decompress it.
Step 2: modify the conf/flink-conf.yaml file
# set the jobmanager.rpc.address parameter to the JobManager host
jobmanager.rpc.address: hadoop151
Step 3: modify the conf/slaves file
# slave ma ...
Added by godwisam on Tue, 12 Oct 2021 02:05:15 +0300
Enterprise Architecture Case for Flume Learning
Advances in Flume learning
Flume Transactions
The primary purpose is to guarantee data consistency: a transaction either succeeds as a whole or fails as a whole.
Transaction schematics
Flume Agent Internal Principles
To summarize: an event collected by the Source is not written directly to the channel. Instead, it is handed to a ChannelProcessor, and this processor then sends the event on ...
Added by Jax2 on Mon, 11 Oct 2021 19:39:19 +0300
Zeppelin combines Flink to query hudi data
About Zeppelin
Zeppelin is a Web-based notebook that supports data-driven, interactive data analysis and collaboration using SQL, Scala, Python, R, and so on.
Zeppelin supports multiple language backends, and the Apache Zeppelin interpreter concept allows any language or data-processing backend to be plugged into Zeppelin. Currently, Apache Zeppelin s ...
Added by thefamouseric on Sat, 09 Oct 2021 19:06:18 +0300
Graduation project - weather data analysis
1 Preface
Hi everyone, this is senior student Dan Cheng. Today I'd like to introduce a project to you:
Sentiment analysis of film reviews based on GRU
You can use it for your graduation project
Graduation project help, topic-selection (proposal) guidance, technical solutions
🇶746876041
2 Project introduction
This example will analyze and visualize the meteorol ...
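As a rough sketch of what such an analysis can look like (the file weather.csv and its columns date, high_temp, and low_temp are hypothetical placeholders, not taken from the original project):

# Hedged sketch: load hypothetical daily weather data and plot monthly temperature trends.
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("weather.csv", parse_dates=["date"])   # hypothetical columns: date, high_temp, low_temp
monthly = df.set_index("date").resample("M")[["high_temp", "low_temp"]].mean()

monthly.plot(title="Monthly average high/low temperature")
plt.ylabel("Temperature (°C)")
plt.tight_layout()
plt.savefig("monthly_temperature.png")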
Added by BK87 on Fri, 08 Oct 2021 13:08:16 +0300
Json writes dynamic columns to the database
[Question]
Recently, a website of mine needs to obtain JSON data from another website's API and store it in its own database, but I know nothing about working with JSON, so I'm asking the experts here for advice. Without further ado, the code is as follows. JSON file content (I have trimmed away most of the data, which all shares the same structure; mainly each imei's Service): ...
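The original code and JSON are not reproduced here, but as a generic, hedged illustration of the idea (fetch JSON from an API and add a table column on the fly for any key that does not exist yet), here is a Python/SQLite sketch; the endpoint, database, and table are hypothetical, and the asker's actual stack may be different:

# Hedged sketch: pull JSON from a (hypothetical) API and write it into a table,
# creating a column dynamically for any key the table does not have yet.
import json
import sqlite3
import urllib.request

data = json.load(urllib.request.urlopen("https://example.com/api/devices"))  # hypothetical endpoint

conn = sqlite3.connect("devices.db")
cur = conn.cursor()
cur.execute("CREATE TABLE IF NOT EXISTS device_service (imei TEXT)")

for record in data:                                      # assume a list of flat JSON objects
    existing = {row[1] for row in cur.execute("PRAGMA table_info(device_service)")}
    for key in record:
        if key not in existing:
            cur.execute(f'ALTER TABLE device_service ADD COLUMN "{key}" TEXT')
    cols = ", ".join(f'"{k}"' for k in record)
    marks = ", ".join("?" for _ in record)
    cur.execute(f"INSERT INTO device_service ({cols}) VALUES ({marks})",
                [str(v) for v in record.values()])

conn.commit()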
Added by chaking on Fri, 08 Oct 2021 07:37:39 +0300
Multi table associated query of database
Table association concept: a table represents an entity in real life, such as the department table dept and the employee table emp. A table association represents a relationship between tables, such as department and employee, commodity and commodity category, teacher and student, or classroom and student.
At the same time, we should also know that table ...
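To make the dept/emp example above concrete, here is a minimal sketch of a multi-table (join) query; SQLite and the sample rows are illustrative assumptions, with column names following the familiar dept/emp teaching schema:

# Hedged sketch: the classic dept/emp association expressed as an inner join.
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
    CREATE TABLE dept (deptno INTEGER PRIMARY KEY, dname TEXT);
    CREATE TABLE emp  (empno  INTEGER PRIMARY KEY, ename TEXT,
                       deptno INTEGER REFERENCES dept(deptno));
    INSERT INTO dept VALUES (10, 'ACCOUNTING'), (20, 'RESEARCH');
    INSERT INTO emp  VALUES (7369, 'SMITH', 20), (7782, 'CLARK', 10);
""")

# Multi-table (associated) query: list each employee together with the department name.
for ename, dname in cur.execute(
        "SELECT e.ename, d.dname FROM emp e JOIN dept d ON e.deptno = d.deptno"):
    print(ename, dname)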
Added by ToddAtWSU on Tue, 05 Oct 2021 23:33:21 +0300
Learning to use Hadoop
1. The role of Hadoop
What is Hadoop?
Hadoop is an open-source framework for writing and running distributed applications that process large-scale data. It is designed for offline and larg ...
Added by wittanthony on Tue, 05 Oct 2021 00:56:46 +0300
Introduction and usage of Apache Doris dynamic partition
1. Introduction
In some usage scenarios, the user partitions the table by day and runs routine jobs every day. The user then has to manage partitions manually; otherwise a data import may fail because the required partition has not been created, which adds maintenance cost for the user.
Through the ...
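As a hedged sketch of what enabling dynamic partitioning looks like (the host, credentials, database, and table are hypothetical; the dynamic_partition.* properties follow the Doris documentation, and the DDL is sent over Doris's MySQL protocol with pymysql):

# Hedged sketch: create a Doris table that manages its daily partitions automatically.
import pymysql

ddl = """
CREATE TABLE IF NOT EXISTS demo.log_table (
    dt  DATE,
    id  INT,
    msg VARCHAR(100)
)
DUPLICATE KEY(dt, id)
PARTITION BY RANGE(dt) ()
DISTRIBUTED BY HASH(id) BUCKETS 8
PROPERTIES (
    "dynamic_partition.enable"    = "true",
    "dynamic_partition.time_unit" = "DAY",
    "dynamic_partition.start"     = "-7",
    "dynamic_partition.end"       = "3",
    "dynamic_partition.prefix"    = "p",
    "dynamic_partition.buckets"   = "8"
)
"""

conn = pymysql.connect(host="doris-fe-host", port=9030, user="root", password="")
with conn.cursor() as cur:
    cur.execute(ddl)   # Doris then creates and retires daily partitions (p20211012, ...) on its own
conn.close()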
Added by lost305 on Tue, 28 Sep 2021 08:33:08 +0300