This article covers MySQL, focusing on the streaming query and cursor query methods. It is intended as a practical reference and I hope it is helpful to everyone.
Recommended learning: mysql video tutorial
1. Business scenario
The business system needs to read 5 million rows from a MySQL database for processing, for example to:
- Migrate data
- Export data
- Batch process data
2. Three processing methods
- General query: read all 5 million rows into JVM memory at once, or read them page by page
- Streaming query: read one row at a time into JVM memory and process it there
- Cursor query: like streaming, but the number of rows read each time is controlled by the fetchSize parameter
2.1 General query
By default, the complete result set is retrieved and stored in memory. In most cases this is the most efficient way to operate, and it is also the easiest to implement.
Assuming a single table holds 5 million rows, nobody would load it all into memory at once; paging is normally used instead (a paging sketch follows the demo below).
The test demo here only needs to monitor the JVM, so it skips paging and loads the data into memory in one go.
@Test
public void generalQuery() throws Exception {
    // 1 core / 2 GB: 100 rows: 47 ms
    // 1 core / 2 GB: 1,000 rows: 2050 ms
    // 1 core / 2 GB: 10,000 rows: 26589 ms
    // 1 core / 2 GB: 50,000 rows: 135966 ms
    String sql = "select * from wh_b_inventory limit 10000";
    ps = conn.prepareStatement(sql);
    ResultSet rs = ps.executeQuery();
    int count = 0;
    while (rs.next()) {
        count++;
    }
    System.out.println(count);
}
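For reference, here is a minimal sketch of the paging approach mentioned above; it is not from the original demo. It assumes `conn` is an open java.sql.Connection as in the other demos, that the table has an auto-increment id column to page on, and that java.sql.PreparedStatement and ResultSet are imported.

// Minimal keyset-paging sketch: read the table in pages of pageSize rows.
// Assumes an auto-increment `id` column; the original article does not show a paging query.
int pageSize = 1000;
long lastId = 0;
boolean more = true;
while (more) {
    more = false;
    try (PreparedStatement page = conn.prepareStatement(
            "select * from wh_b_inventory where id > ? order by id limit ?")) {
        page.setLong(1, lastId);
        page.setInt(2, pageSize);
        try (ResultSet rs = page.executeQuery()) {
            while (rs.next()) {
                lastId = rs.getLong("id");
                more = true;
                // business processing for the current row goes here
            }
        }
    }
}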
JVM monitoring
We reduce the heap size to -Xms70m -Xmx70m.
Throughout the query, heap memory usage climbs steadily and eventually ends in OOM:
java.lang.OutOfMemoryError: GC overhead limit exceeded
1. GC is triggered frequently
2. There is a latent risk of OOM
2.2 Streaming query
One thing to note about streaming queries: all rows in the result set must be read (or the result set closed) before any other query can be issued on the same connection, otherwise an exception is thrown; in other words, the streaming query has exclusive use of the connection (a closing-safe sketch follows the demo below).
Judging from the test results, streaming query does not improve the query speed:
@Test
public void streamQuery() throws Exception {
    // 1 core / 2 GB: 100 rows: 138 ms
    // 1 core / 2 GB: 1,000 rows: 2304 ms
    // 1 core / 2 GB: 10,000 rows: 26536 ms
    // 1 core / 2 GB: 50,000 rows: 135931 ms
    String sql = "select * from wh_b_inventory limit 50000";
    statement = conn.createStatement(ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_READ_ONLY);
    statement.setFetchSize(Integer.MIN_VALUE);
    ResultSet rs = statement.executeQuery(sql);
    int count = 0;
    while (rs.next()) {
        count++;
    }
    System.out.println(count);
}
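Because a streaming result set ties up the connection until it is fully read or closed, a common pattern is to wrap it in try-with-resources. Here is a minimal sketch, assuming `conn` is an open Connection as in the demo above:

// Streaming read with guaranteed cleanup: the connection cannot be reused
// until the streaming ResultSet has been fully consumed or closed.
try (Statement st = conn.createStatement(ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_READ_ONLY)) {
    st.setFetchSize(Integer.MIN_VALUE); // streaming mode in MySQL Connector/J
    try (ResultSet rs = st.executeQuery("select * from wh_b_inventory")) {
        while (rs.next()) {
            // process one row at a time; only the current row is held in JVM memory
        }
    }
}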
JVM monitoring
Again we reduce the heap to -Xms70m -Xmx70m.
We found that even with only 70 MB of heap, no OOM occurred.
2.3 Cursor query
Note:
1. The parameter useCursorFetch=true must be appended to the database connection URL
2. Set the fetch size on the Statement, for example reading 1000 rows at a time (a batch-processing sketch follows the demo below)
Judging from the test results, cursor query does shorten the query time to some extent.

@Test
public void cursorQuery() throws Exception {
    Class.forName("com.mysql.jdbc.Driver");
    // Note: the useCursorFetch parameter must be appended here, otherwise this is just a regular query
    conn = DriverManager.getConnection("jdbc:mysql://101.34.50.82:3306/mysql-demo?useCursorFetch=true", "root", "123456");
    start = System.currentTimeMillis();
    // 1 core / 2 GB: 100 rows: 52 ms
    // 1 core / 2 GB: 1,000 rows: 1095 ms
    // 1 core / 2 GB: 10,000 rows: 17432 ms
    // 1 core / 2 GB: 50,000 rows: 90244 ms
    String sql = "select * from wh_b_inventory limit 50000";
    ((JDBC4Connection) conn).setUseCursorFetch(true);
    statement = conn.createStatement(ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_READ_ONLY);
    statement.setFetchSize(1000);
    ResultSet rs = statement.executeQuery(sql);
    int count = 0;
    while (rs.next()) {
        count++;
    }
    System.out.println(count);
}
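Building on the cursor demo, here is a hedged sketch of batching the business processing (for example during a migration or export). `processBatch` and the column name `id` are placeholders, not part of the original code, and the java.util collection classes are assumed to be imported.

// Consume the cursor-based ResultSet in chunks of 1000 rows, handing each
// chunk to business code before reading on. processBatch is a placeholder.
List<Map<String, Object>> batch = new ArrayList<>(1000);
while (rs.next()) {
    Map<String, Object> row = new HashMap<>();
    row.put("id", rs.getLong("id")); // copy whichever columns are actually needed
    batch.add(row);
    if (batch.size() == 1000) {
        processBatch(batch); // e.g. insert into the target table or write to a file
        batch.clear();
    }
}
if (!batch.isEmpty()) {
    processBatch(batch); // flush the final partial chunk
}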
Again we reduce the heap to -Xms70m -Xmx70m.
We found that, in a single-threaded scenario, both cursor query and streaming query avoid OOM well, and cursor query also improves the query speed.
3. RowData
The logic behind ResultSet.next() lives in the implementation class ResultSetImpl, which fetches the data for the next row from a RowData object on every call. RowData is an interface; its implementations include RowDataStatic, RowDataDynamic and RowDataCursor.
3.1 RowDataStatic
When the next row is requested, it first checks whether its internal buffer still holds rows that have not yet been returned; if so, the next row is returned.
- If the buffer has been exhausted, a new request is sent to the MySQL server to read another fetchSize rows,
- and the returned rows are placed in the internal buffer, after which the first of them is returned (a sketch of this flow follows the list).
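A rough sketch of this fetch-on-demand flow, purely illustrative and not the actual Connector/J source; `buffer`, `fetchSize` and `fetchRowsFromServer` are made-up names:

// Illustrative buffered-fetch flow: return buffered rows first, and only
// go back to the server when the buffer has been drained.
Row getNextRow() throws SQLException {
    if (buffer.isEmpty()) {
        // buffer drained: ask the MySQL server for the next fetchSize rows
        buffer.addAll(fetchRowsFromServer(fetchSize));
        if (buffer.isEmpty()) {
            return null; // no more rows in the result set
        }
    }
    return buffer.removeFirst(); // next unread row from the buffer
}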
In summary:
- The default RowDataStatic reads the complete result set into client memory, which is our JVM;
- RowDataDynamic reads one row per IO call;
- RowDataCursor reads fetchSize rows at a time and issues a new request to the server once they have been consumed.

4. JDBC communication principle
The interaction between JDBC and the MySQL server is done over a socket. In network-programming terms, MySQL can be regarded as a SocketServer, so a complete request chain is:
JDBC Client -> Client Socket -> MySQL -> retrieve data and return -> MySQL kernel socket buffer -> network -> client socket buffer -> JDBC Client
4.1 generalQuery General query
A general query loads all of the queried data into the JVM before processing it.
If the amount of data is too large, GC runs continuously and eventually memory overflows.
4.2 streamQuery streaming query
As soon as the server has the first rows ready, it starts loading data into its buffer and pushing it over the TCP connection into the kernel buffer of the client machine; JDBC's inputStream.read() method is then woken up to read the data. The only difference is that, with streaming enabled, each read takes only one packet's worth of data from the kernel and returns only one row; if one packet is not enough to assemble a complete row, another packet is read.
4.3 cursorQuery cursor query
With the cursor enabled, the server returns data fetchSize rows at a time, and each time the client has consumed a batch it requests the next one, until all data has been read. With 100 million rows and a fetchSize of 1000, this means 100,000 request/response round trips;
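A quick check of that round-trip count, using the figures assumed in the sentence above:

// 100,000,000 rows fetched 1,000 at a time => 100,000 request/response round trips
long rows = 100_000_000L;
int fetchSize = 1_000;
long roundTrips = (rows + fetchSize - 1) / fetchSize; // = 100,000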
Because MySQL does not know when the client will finish consuming the data, and the underlying table may meanwhile receive DML writes, MySQL has to create a temporary space to hold the data that is to be handed out.
So when you enable useCursorFetch to read a large table, you will see several phenomena on the MySQL side:
- 1. IOPS soars
- 2. Disk space soars
- 3. After the JDBC client issues the SQL, it waits a long time for response data; during this time the server is preparing the data
- 4. Once data preparation is complete, the transmission stage begins, network traffic surges, and IOPS shifts from "read and write" to "read only"
- IOPS (Input/Output operations Per Second): the number of disk reads and writes per second
- 5. CPU and memory usage rise by a certain percentage
5. Concurrency scenarios
Concurrent calls: JMeter, 10 threads issuing calls concurrently within 1 second.
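The JMeter test plan itself is not shown in the article; as a rough stand-in, here is a hedged sketch of firing 10 concurrent streaming queries from plain Java. The connection URL, credentials and table reuse the demo values above purely for illustration, java.sql and java.util.concurrent imports are assumed, and the snippet is assumed to run inside a method that declares throws Exception, like the demos above.

// Rough stand-in for the JMeter test: 10 threads, each running one streaming query.
ExecutorService pool = Executors.newFixedThreadPool(10);
for (int i = 0; i < 10; i++) {
    pool.execute(() -> {
        try (Connection c = DriverManager.getConnection(
                     "jdbc:mysql://101.34.50.82:3306/mysql-demo", "root", "123456");
             Statement st = c.createStatement(ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_READ_ONLY)) {
            st.setFetchSize(Integer.MIN_VALUE); // streaming mode
            try (ResultSet rs = st.executeQuery("select * from wh_b_inventory limit 50000")) {
                int count = 0;
                while (rs.next()) {
                    count++;
                }
                System.out.println(Thread.currentThread().getName() + ": " + count);
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    });
}
pool.shutdown();
pool.awaitTermination(10, TimeUnit.MINUTES);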
The streaming query memory performance report is as follows:
Memory usage under concurrent calls is also fine, with no cumulative build-up.
The cursor query memory performance report is as follows:
6. Summary
1. In a single thread, both cursor query and streaming query avoid OOM;
2. Cursor query is faster than streaming query; compared with an ordinary query, streaming query does not shorten the query time;
3. In concurrent scenarios, streaming query's heap memory trend is more stable, with no cumulative increase.
Recommended learning: mysql video tutorial
The above is the detailed content of Streaming query and cursor query methods in MySQL (summary sharing). For more information, please follow other related articles on the PHP Chinese website!
