This article brings you relevant knowledge about MySQL. It mainly introduces streaming queries and cursor queries in MySQL and how they compare with ordinary queries. It has good reference value, and I hope it will be helpful to everyone.
Suppose a business system needs to read 5 million rows of data from a MySQL database for processing.
By default, the complete result set is retrieved and held in memory. In most cases this is the most efficient way to operate, and it is also the easiest to implement.
Of course, with 5 million rows in a single table, nobody would load everything into memory at once; paging is normally used. Here, the test demo is only meant to monitor the JVM, so paging is skipped and the data is loaded into memory in one go.
```java
@Test
public void generalQuery() throws Exception {
    // 1 core / 2 GB: query 100 records:    47 ms
    // 1 core / 2 GB: query 1,000 records:  2050 ms
    // 1 core / 2 GB: query 10,000 records: 26589 ms
    // 1 core / 2 GB: query 50,000 records: 135966 ms
    String sql = "select * from wh_b_inventory limit 10000";
    ps = conn.prepareStatement(sql);
    ResultSet rs = ps.executeQuery();
    int count = 0;
    while (rs.next()) {
        count++;
    }
    System.out.println(count);
}
```
JVM monitoring
We will reduce the heap size to -Xms70m -Xmx70m.
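As a quick sanity check that these flags are actually in effect, one can print the JVM's maximum heap inside the test (a small addition for illustration, not part of the original demo):

```java
// Prints the maximum heap the JVM will use; with -Xmx70m this should be roughly 70 MB.
long maxHeapMb = Runtime.getRuntime().maxMemory() / (1024 * 1024);
System.out.println("Max heap: " + maxHeapMb + " MB");
```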
During the query, heap memory usage climbs steadily and eventually leads to an OOM:

java.lang.OutOfMemoryError: GC overhead limit exceeded

So with an ordinary query:
1. GC is triggered frequently;
2. There is a hidden risk of OOM.
One thing to note about streaming queries: all rows in the result set must be read (or the result set closed) before any other query can be issued on the same connection, otherwise an exception is thrown; the streaming query occupies the connection exclusively (see the small sketch after the test code below).
From the test results, a streaming query does not improve query speed:
```java
@Test
public void streamQuery() throws Exception {
    // 1 core / 2 GB: query 100 records:    138 ms
    // 1 core / 2 GB: query 1,000 records:  2304 ms
    // 1 core / 2 GB: query 10,000 records: 26536 ms
    // 1 core / 2 GB: query 50,000 records: 135931 ms
    String sql = "select * from wh_b_inventory limit 50000";
    statement = conn.createStatement(ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_READ_ONLY);
    // Integer.MIN_VALUE switches the MySQL driver to streaming: rows are read one at a time
    statement.setFetchSize(Integer.MIN_VALUE);
    ResultSet rs = statement.executeQuery(sql);
    int count = 0;
    while (rs.next()) {
        count++;
    }
    System.out.println(count);
}
```
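To illustrate the caveat above, here is a minimal sketch (not from the original test code, and reusing the same conn as the tests above) of draining or closing the streaming ResultSet before reusing the connection:

```java
Statement st = conn.createStatement(ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_READ_ONLY);
st.setFetchSize(Integer.MIN_VALUE); // streaming mode
ResultSet rs = st.executeQuery("select * from wh_b_inventory limit 50000");
try {
    while (rs.next()) {
        // process the current row...
    }
} finally {
    // Either read the result set to the end or close it here;
    // until then, no other statement can be executed on this connection.
    rs.close();
    st.close();
}
// Only now is it safe to issue another query on the same connection.
```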
JVM monitoring
We will reduce the heap memory to -Xms70m -Xmx70m
We found that even with only 70 MB of heap, no OOM occurred.
To use a cursor query, note:
1. The parameter useCursorFetch=true must be appended to the database connection URL;
2. The number of rows the Statement reads each time must be set, e.g. setFetchSize(1000) to read 1,000 rows per fetch.
Judging from the test results, a cursor query does shorten the query time to some extent.

```java
@Test
public void cursorQuery() throws Exception {
    Class.forName("com.mysql.jdbc.Driver");
    // The useCursorFetch=true parameter must be appended here, otherwise this is an ordinary query
    conn = DriverManager.getConnection("jdbc:mysql://101.34.50.82:3306/mysql-demo?useCursorFetch=true", "root", "123456");
    start = System.currentTimeMillis();
    // 1 core / 2 GB: query 100 records:    52 ms
    // 1 core / 2 GB: query 1,000 records:  1095 ms
    // 1 core / 2 GB: query 10,000 records: 17432 ms
    // 1 core / 2 GB: query 50,000 records: 90244 ms
    String sql = "select * from wh_b_inventory limit 50000";
    ((JDBC4Connection) conn).setUseCursorFetch(true);
    statement = conn.createStatement(ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_READ_ONLY);
    statement.setFetchSize(1000);
    ResultSet rs = statement.executeQuery(sql);
    int count = 0;
    while (rs.next()) {
        count++;
    }
    System.out.println(count);
}
```
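As a side note, here is a more idiomatic sketch of the same cursor query using try-with-resources, so the connection, statement and result set are always closed (same URL and table as above; this is an illustration, not the original test code):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class CursorQueryDemo {
    public static void main(String[] args) throws Exception {
        // useCursorFetch=true in the URL switches the driver to cursor-based fetching
        String url = "jdbc:mysql://101.34.50.82:3306/mysql-demo?useCursorFetch=true";
        try (Connection conn = DriverManager.getConnection(url, "root", "123456");
             Statement st = conn.createStatement(ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_READ_ONLY)) {
            st.setFetchSize(1000); // ask the server for 1,000 rows per round trip
            try (ResultSet rs = st.executeQuery("select * from wh_b_inventory limit 50000")) {
                int count = 0;
                while (rs.next()) {
                    count++;
                }
                System.out.println(count);
            }
        }
    }
}
```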
We will reduce the heap memory to -Xms70m -Xmx70m.
We found that in a single-threaded scenario, both cursor queries and streaming queries avoid OOM very well, and cursor queries additionally improve query speed.
3. RowData
3.1 RowDataStatic
By default, a ResultSet uses a RowDataStatic instance: when the RowDataStatic object is created, all records in the result set are read into memory, and next() then reads them one by one from memory.

3.2 RowDataDynamic

When streaming is used, the ResultSet uses a RowDataDynamic object; each call to next() issues an IO read that fetches a single row of data.

3.3 RowDataCursor

RowDataCursor works in batches and caches the rows internally. The process is as follows: it first checks whether its internal buffer still holds rows that have not been returned; if so, it returns the next row; if not, it requests the next fetchSize rows from the server and buffers them.
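As a rough summary of the three strategies, the sketch below maps the statement configuration to the row-data behaviour described above. It is illustrative only and is not the actual Connector/J source; the class, enum, and method names are made up for this example:

```java
public class RowDataStrategySketch {

    enum RowDataKind { STATIC, DYNAMIC, CURSOR }

    // Illustrative mapping from statement configuration to row-data behaviour.
    static RowDataKind rowDataFor(boolean useCursorFetch, int fetchSize) {
        if (useCursorFetch && fetchSize > 0) {
            return RowDataKind.CURSOR;   // RowDataCursor: batches of fetchSize rows, cached internally
        }
        if (fetchSize == Integer.MIN_VALUE) {
            return RowDataKind.DYNAMIC;  // RowDataDynamic: one IO read per row on next()
        }
        return RowDataKind.STATIC;       // RowDataStatic: the whole result set is read into client memory
    }
}
```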
The default RowDataStatic reads all of the data into client memory, i.e. our JVM;
RowDataDynamic reads one row of data on each IO call;
RowDataCursor reads fetchSize rows at a time and only issues the next request after those rows have been consumed.

4. JDBC Communication Principle

JDBC interacts with the MySQL server through a Socket. In network-programming terms, MySQL can be regarded as a SocketServer, so a complete request path is:

JDBC Client -> Client Socket -> MySQL -> retrieve data and return -> MySQL Kernel Socket Buffer -> Network -> Client Socket Buffer -> JDBC Client
An ordinary query loads all of the queried data into the JVM and then processes it. If the amount of data is too large, it causes continuous GC and eventually a memory overflow.
With a streaming query, the server starts returning data as soon as the first rows are ready in its buffer; the data is pushed over the TCP link into the kernel buffer of the client machine, and JDBC's inputStream.read() method is woken up to read it. The only difference with streaming enabled is that the client reads just one packet of data from the kernel at a time and returns just one row at a time; if one packet is not enough to assemble a full row, another packet is read.
With a cursor query, the server returns data in batches of fetchSize rows; the client buffers each batch and reads through it before requesting the next one. If the table has 100 million rows and fetchSize is set to 1000, this means 100,000 round-trip communications.
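As a quick check of that figure (a trivial calculation added here for illustration):

```java
long totalRows = 100_000_000L;                             // 100 million rows
int fetchSize = 1000;
long roundTrips = (totalRows + fetchSize - 1) / fetchSize; // ceiling division
System.out.println(roundTrips);                            // 100000 round trips
```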
Because MySQL does not know when the client will finish consuming the data, and the corresponding table may meanwhile receive DML write operations, MySQL needs to create a temporary space to store the data that is about to be fetched.
So when useCursorFetch is enabled to read a large table, this temporary storage produces several observable effects on the MySQL server.
Concurrent calls: JMeter, 10 threads issuing concurrent calls within 1 second.
The streaming query memory performance report is as follows:

Under concurrent calls, memory usage is also fine and does not keep stacking up.

The cursor query memory performance report is as follows:
1. In a single thread, both cursor queries and streaming queries avoid OOM;
2. In terms of query speed, cursor queries are faster than streaming queries; compared with an ordinary query, a streaming query does not shorten the query time;
3. In concurrent scenarios, the heap-memory trend of streaming queries is more stable and shows no cumulative growth.