The bytea binary type in the openGauss/MogDB database is constrained by the segment size compilation parameter and cannot exceed 1GB by default. If the data to be stored in a field exceeds 1GB, the lo (Large Object) extension type can be used instead.
01
The lo type requires the lo extension to be created first.
$ gsql -p5432 -Uomm postgres -r
gsql ((MogDB 2.0.1 build f892ccb7) compiled at 2021-07-09 16:15:21 commit 0 last mr  )
Non-SSL connection (SSL connection is recommended when requiring high-security)
Type "help" for help.

postgres=# create extension lo;
CREATE EXTENSION
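Optionally, you can verify that the extension is registered. A minimal check sketch, assuming openGauss/MogDB exposes the PostgreSQL-style pg_extension catalog:

-- hedged check: confirm the lo extension is registered
postgres=# select extname from pg_extension where extname = 'lo';
 extname
---------
 lo
(1 row)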
After creating the lo extension, we create a test_lo table whose info field uses the lo type.
postgres=# create table test_lo(id int, info lo);
CREATE TABLE
Create a management trigger on the test_lo table that calls the lo_manage function on UPDATE and DELETE operations; otherwise orphaned large objects will be left behind.
postgres=# create trigger test_lo before UPDATE OR DELETE ON test_lo FOR EACH ROW EXECUTE procedure lo_manage(info);
WARNING: Trigger function with non-plpgsql type is not recommended.
DETAIL: Non-plpgsql trigger function are not shippable by default.
HINT: Unshippable trigger may lead to bad performance.
CREATE TRIGGER
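To illustrate why the trigger matters, here is a hedged sketch (not a step to run in this walkthrough; the oid 16392 is the example value that appears later in this article, and lo_unlink is assumed to be available as in PostgreSQL). Without lo_manage, deleting or updating a row removes only the oid reference, and the underlying object would have to be unlinked manually:

-- without the trigger, this delete would leave the 2GB object
-- orphaned in pg_largeobject
postgres=# delete from test_lo where id = 1;
-- manual cleanup of an orphaned large object by its oid
postgres=# select lo_unlink(16392);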
Generate a 2GB file using dd:
postgres=# \! dd if=/dev/zero of=test_lo bs=1M count=2048 && sync
2048+0 records in
2048+0 records out
2147483648 bytes (2.1 GB, 2.0 GiB) copied, 0.805435 s, 2.7 GB/s
02
Test the lo_import function to import data into the table
postgres=# insert into test_lo values(1, lo_import('/home/omm/test_lo'));
INSERT 0 1
You can see that the data imports normally. If the bytea type were used instead of lo, the following error would be raised:
ERROR: requested length too large
03
Test the lo_export function to export table data to a file
postgres=# select lo_export(test_lo.info, '/home/omm/test_ext_lo') from test_lo where id=1;
 lo_export
-----------
         1
(1 row)
You can see that the data is exported normally.
List the imported and exported data files; you can also compare them with the diff command (see the sketch after the listing).
postgres=# \! ls -lh test_*
-rw-r--r-- 1 omm dbgrp 2.0G Dec 17 13:00 test_ext_lo
-rw------- 1 omm dbgrp 2.0G Dec 17 12:58 test_lo
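A minimal sketch of the diff comparison mentioned above (run from the same working directory as the listing; no diff output plus a zero exit status means the files are byte-identical):

postgres=# \! diff test_lo test_ext_lo && echo "files are identical"
files are identical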
04
View the size of a large object field in a table
This is done in two steps. First, look up the oid of the large object field (a lo column stores only an oid reference in the user table; it does not store the actual data):
postgres=# select * from test_lo;
 id | info
----+-------
  1 | 16392
(1 row)
The actual data is stored as multiple bytea records in the pg_largeobject table, so you can aggregate the field size by oid:
postgres=# select loid, pg_size_pretty(sum(octet_length(data))) from pg_largeobject where loid = 16392 group by loid;
 loid  | pg_size_pretty
-------+----------------
 16392 | 2048 MB
(1 row)
You can also use the following function to query it:
create or replace function get_lo_size(oid)
returns bigint
volatile strict
as $function$
declare
    fd integer;
    sz bigint;
begin
    -- open the large object read-only (x'40000' = INV_READ)
    fd := lo_open($1, x'40000'::int);
    -- seek to the end of the object (whence 2 = SEEK_END)
    perform lo_lseek64(fd, 0, 2);
    -- the current position is the object's size in bytes
    sz := lo_tell64(fd);
    perform lo_close(fd);
    return sz;
end;
$function$ language plpgsql;
The query results are as follows:
postgres=# select pg_size_pretty(get_lo_size(16392));
 pg_size_pretty
----------------
 2048 MB
(1 row)
Next, test usage from the JDBC application layer:
05
JDBC: loading a file into the database from Java
import java.io.File;
import java.io.FileInputStream;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import org.postgresql.largeobject.LargeObject;
import org.postgresql.largeobject.LargeObjectManager;

public class LoImportExample {
    public static void main(String[] args) throws Exception {
        Class.forName("org.postgresql.Driver");
        Connection conn = DriverManager.getConnection("jdbc:postgresql://ip:port/dbname", "username", "password");
        // large object APIs must run inside a transaction
        conn.setAutoCommit(false);
        LargeObjectManager lobj = conn.unwrap(org.postgresql.PGConnection.class).getLargeObjectAPI();
        // create a new large object and open it for writing
        long oid = lobj.createLO(LargeObjectManager.READ | LargeObjectManager.WRITE);
        LargeObject obj = lobj.open(oid, LargeObjectManager.WRITE);
        File file = new File("c:/work/test_lo");
        FileInputStream fis = new FileInputStream(file);
        byte buf[] = new byte[10 * 1024 * 1024];
        int s, tl = 0;
        // stream the file into the large object in buffer-sized chunks
        while ((s = fis.read(buf, 0, buf.length)) > 0) {
            obj.write(buf, 0, s);
            tl += s;
        }
        obj.close();
        // store the oid reference in the user table
        PreparedStatement ps = conn.prepareStatement("INSERT INTO test_lo VALUES (?, ?)");
        ps.setInt(1, 100);
        ps.setLong(2, oid);
        ps.executeUpdate();
        ps.close();
        fis.close();
        conn.commit();
        conn.close();
    }
}
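One design note: conn.setAutoCommit(false) is not optional here. The PostgreSQL JDBC large object API must be used inside a transaction, for both the write path above and the read path below; outside a transaction the driver will fail when the large object is accessed.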
06
JDBC: reading data from Java and writing it to a file
import java.io.File;
import java.io.FileOutputStream;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import org.postgresql.largeobject.LargeObject;
import org.postgresql.largeobject.LargeObjectManager;

public class LoExportExample {
    public static void main(String[] args) throws Exception {
        Class.forName("org.postgresql.Driver");
        Connection conn = DriverManager.getConnection("jdbc:postgresql://ip:port/dbname", "username", "password");
        // large object APIs must run inside a transaction
        conn.setAutoCommit(false);
        LargeObjectManager lobj = conn.unwrap(org.postgresql.PGConnection.class).getLargeObjectAPI();
        // look up the oid reference stored in the user table
        PreparedStatement ps = conn.prepareStatement("SELECT info FROM test_lo WHERE id = ?");
        ps.setInt(1, 100);
        ResultSet rs = ps.executeQuery();
        File file = new File("c:/work/test_out_lo");
        FileOutputStream fos = new FileOutputStream(file);
        while (rs.next()) {
            long oid = rs.getLong(1);
            // open the large object read-only and stream it to the file in chunks
            LargeObject obj = lobj.open(oid, LargeObjectManager.READ);
            byte buf[] = new byte[10 * 1024 * 1024];
            int s, tl = 0;
            while ((s = obj.read(buf, 0, buf.length)) > 0) {
                fos.write(buf, 0, s);
                tl += s;
            }
            obj.close();
        }
        rs.close();
        ps.close();
        fos.close();
        conn.commit();
        conn.close();
    }
}
JDBC large object example reference:
https://jdbc.postgresql.org/d...
Motianlun (墨天轮) original link: https://www.modb.pro/db/21475...
About the author
Peng Chong, PG technical consultant at Yunhe Enmo, known online as "Duomi Papi", is a PG community certified expert and China's first PostgreSQL ACE Partner. He has worked for many years on software development based on the PostgreSQL database and specializes in PL/pgSQL business migration and optimization, Oracle-to-PostgreSQL migration and upgrades, and heterogeneous database integration. As a columnist for Motianlun's PostgreSQL practice series, he is keen on sharing practical PostgreSQL technology, actively promotes PostgreSQL in his own work, and is committed to contributing more to the PG community.