Hive DML data operation (data import and export)

1, Data import

1. Load data into the table (load)

1.1 Syntax

hive> load data [local] inpath '/opt/module/datas/student.txt' [overwrite] into table student [partition (partcol1=val1,...)];

(1) load data: loads data into the table
(2) local: load data from the local file system into the Hive table; without it, data is loaded from HDFS
(3) inpath: the path of the data to load
(4) overwrite: overwrite the existing data in the table; without it, the data is appended
(5) into table: the table to load into
(6) student: the specific table name
(7) partition: load into the specified partition
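The load behavior described above can be sketched in Python. This is a simplified model for illustration only, not Hive's implementation: the table is modeled as a directory of files, `overwrite` clears that directory first, a local load copies the source file, and a non-local (HDFS) load moves it into the warehouse.

```python
import os
import shutil
import tempfile

def load_data(src, table_dir, local=True, overwrite=False):
    """Toy model of LOAD DATA (not Hive's implementation):
    'overwrite' clears the table directory first; a local load copies
    the file, while a non-local (HDFS) load moves it."""
    if overwrite:
        for name in os.listdir(table_dir):
            os.remove(os.path.join(table_dir, name))
    dst = os.path.join(table_dir, os.path.basename(src))
    if local:
        shutil.copy(src, dst)   # source file stays in place
    else:
        shutil.move(src, dst)   # source file is moved away

# demo with temporary paths (hypothetical data)
tmp = tempfile.mkdtemp()
table_dir = os.path.join(tmp, "student")
os.mkdir(table_dir)
src = os.path.join(tmp, "student.txt")
with open(src, "w") as f:
    f.write("1001\tzhangsan\n")

load_data(src, table_dir, local=True)              # append; src still exists
print(os.path.exists(src), os.listdir(table_dir))  # True ['student.txt']
```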

1.2 Practical cases

(0) Create a table

hive (default)> create table student(id string, name string) 
row format delimited fields terminated by '\t';

(1) Load a local file into Hive

hive (default)> load data local inpath '/opt/module/datas/student.txt' into table default.student;

(2) Load an HDFS file into Hive
First upload the file to HDFS:

hive (default)> dfs -put /opt/module/datas/student.txt /user/atguigu/hive;

Then load the data from HDFS:

hive (default)> load data inpath '/user/atguigu/hive/student.txt' into table default.student;

(3) Load data and overwrite the existing data in the table
First upload the file to HDFS:

hive (default)> dfs -put /opt/module/datas/student.txt /user/atguigu/hive;

Then load the data, overwriting the existing data in the table:

hive (default)> load data inpath '/user/atguigu/hive/student.txt' overwrite into table default.student;

2. Insert data into the table through query statements (insert)

1) Create a partition table

create table student_par(id int, name string) 
partitioned by (month string) 
row format delimited fields terminated by '\t';

2) Basic insert data

hive (default)> insert into table student_par partition(month='201709') values(1,'wangwu'),(2,'zhaoliu');

3) Basic insert mode (based on the query result of a single table)

hive (default)> insert overwrite table student partition(month='201708')
             select id, name from student where month='201709';

insert into: appends data to a table or partition; the original data is kept
insert overwrite: overwrites the existing data in the table or partition

Note: insert does not support inserting only some of the columns
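The difference between the two modes can be modeled with a dict of partitions. This is a hypothetical sketch of the semantics, not Hive internals:

```python
# Toy model: a partitioned table as {partition_value: [rows]}
student = {"201709": [(1, "wangwu"), (2, "zhaoliu")]}

def insert_into(table, part, rows):
    # appends; existing rows in the partition are kept
    table.setdefault(part, []).extend(rows)

def insert_overwrite(table, part, rows):
    # replaces only the target partition; other partitions are untouched
    table[part] = list(rows)

insert_into(student, "201709", [(3, "zhangsan")])
print(len(student["201709"]))   # 3  (rows were appended)

insert_overwrite(student, "201709", [(9, "lisi")])
print(student["201709"])        # [(9, 'lisi')]  (old rows replaced)
```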

4) Multi-insert mode (inserting into multiple partitions from one source table)

hive (default)> from student
              insert overwrite table student partition(month='201707')
              select id, name where month='201709'
              insert overwrite table student partition(month='201706')
              select id, name where month='201709';
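The multi-insert form scans the source table once and feeds every insert clause from that single pass. A rough Python analogue of the routing, for illustration only (not Hive's execution model; the data is made up):

```python
# Source rows as (id, name, month) tuples -- hypothetical sample data
student = [(1, "a", "201709"), (2, "b", "201708"), (3, "c", "201709")]

part_201707, part_201706 = [], []

# One pass over the source feeds both target partitions,
# mirroring: FROM student INSERT ... INSERT ...
for sid, name, month in student:
    if month == "201709":
        part_201707.append((sid, name))
    if month == "201709":
        part_201706.append((sid, name))

print(part_201707)  # [(1, 'a'), (3, 'c')]
print(part_201706)  # [(1, 'a'), (3, 'c')]
```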

3. Create a table and load data from a query statement (as select)

Create a table from a query result (the query result is stored in the newly created table):

create table if not exists student3
as select id, name from student;

4. Specify the data path with location when creating a table

1) Upload data to hdfs

hive (default)> dfs -mkdir /student;
hive (default)> dfs -put /opt/module/datas/student.txt /student;

2) Create a table and specify the location on hdfs

hive (default)> create external table if not exists student5(
              id int, name string
              )
              row format delimited fields terminated by '\t'
              location '/student';

3) Query data

hive (default)> select * from student5;

5. Import data into the specified Hive table

Note: the data must first be exported with export before it can be imported.

hive (default)> import table student2 partition(month='201709') from
 '/user/hive/warehouse/export/student';

2, Data export

1. Insert export

1) Export query results to local

hive (default)> insert overwrite local directory '/opt/module/datas/export/student'
            select * from student;

2) Export formatted query results to local

hive (default)> insert overwrite local directory '/opt/module/datas/export/student1'
             ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t'
             select * from student;

3) Export the query results to HDFS (without the local keyword)

hive (default)> insert overwrite directory '/user/atguigu/student2'
             ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t' 
             select * from student;
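The exported files are plain tab-delimited text, so any downstream tool can consume them directly. A minimal sketch of parsing such a file in Python (the file contents here are made-up sample data):

```python
import csv
import io

# Sample contents of an exported file (e.g. a part file under the
# export directory) -- hypothetical data for illustration
exported = "1001\tzhangsan\n1002\tlisi\n"

# The field terminator declared in the export ('\t') is the delimiter
rows = list(csv.reader(io.StringIO(exported), delimiter="\t"))
print(rows)  # [['1001', 'zhangsan'], ['1002', 'lisi']]
```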

2. Export to local with Hadoop commands

hive (default)> dfs -get /user/hive/warehouse/student/month=201709/000000_0
/opt/module/datas/export/student3.txt;

3. Export with Hive shell commands

Basic syntax: hive -f/-e <statement or script> > <file>

bin/hive -e 'select * from default.student;' > /opt/module/datas/export/student4.txt;

4. Export to HDFS

hive (default)> export table default.student to
 '/user/hive/warehouse/export/student';

export and import are mainly used for migrating Hive tables between two Hadoop clusters.

5. Clear the data in the table (Truncate)

Note: truncate can only delete data in managed (internal) tables, not in external tables

hive (default)> truncate table student;
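The managed-vs-external restriction can be sketched as follows. This is a hypothetical model of the behavior, not Hive code:

```python
class Table:
    """Toy model: an external table's data lives outside Hive's control."""
    def __init__(self, name, external=False):
        self.name = name
        self.external = external
        self.rows = [(1, "zhangsan")]  # made-up sample data

def truncate(table):
    # Hive refuses to truncate external tables; only managed
    # (internal) tables lose their data
    if table.external:
        raise ValueError(f"Cannot truncate non-managed table {table.name}")
    table.rows = []

managed = Table("student")
truncate(managed)
print(managed.rows)  # []

external = Table("student5", external=True)
try:
    truncate(external)
except ValueError as e:
    print("refused:", e)  # the external table's data is untouched
```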


Added by infyportalgroup on Sat, 18 Sep 2021 06:08:19 +0300