
Optimization of high-concurrency, low-cardinality, multi-field arbitrary combination queries

WBOY (Original) · 2016-07-12 08:57:34


1. The problem

First, let me explain what the "low-cardinality multi-field arbitrary combination query" in the title refers to. It means queries that meet the following conditions:
1. The search conditions combine conditions on multiple fields
2. The combination of fields involved is not fixed in advance
3. The selectivity of each individual field is poor

This type of query has many usage scenarios, for example the product listing page of an e-commerce site: users enter all kinds of combinations of query conditions such as category, supplier, brand, promotion and price, and in the end the results usually also have to be sorted and paginated.

The troublesome aspects of this type of problem are:
1. There are a large number of records, so a full table scan performs poorly and cannot meet the requirements of high-concurrency access.
2. The selectivity of any single field in the query conditions is very low, so a single-field index cannot solve the query efficiency problem.
3. If you build ordinary multi-column B-tree indexes, the number of possible condition combinations means you might need hundreds or thousands of indexes (with n filterable fields there are 2^n - 1 non-empty combinations), which is unrealistic and hard to maintain.

2. Solutions

I can think of two solutions to this kind of problem.

2.1 Bitmap index

The defining characteristic of a bitmap index is that, for each key, it stores a bitmap of the set of rows whose value equals that key. For a combined query on multiple keys, you only need to AND/OR the bitmaps corresponding to those keys. Since the bitmaps are very small and bitwise AND/OR operations are very fast, bitmap indexes are a good fit for this type of query.
Bitmap indexes also have drawbacks. Updating a record can lock a large range of rows covered by the same bitmap segment, so they are not suitable for scenarios with many concurrent writes. Another problem is that, among the common relational databases, only Oracle seems to support bitmap indexes, and much of the time we want to use an open-source database.
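For reference, creating bitmap indexes in Oracle looks roughly like this (a minimal, hypothetical sketch; the table and column names are made up for illustration):

-- one bitmap index per low-cardinality column; at query time Oracle can
-- combine them with BITMAP AND / BITMAP OR operations
CREATE BITMAP INDEX idx_goods_brand  ON goods (brand_id);
CREATE BITMAP INDEX idx_goods_status ON goods (status);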

2.2 Inverted index

An inverted index is similar to a bitmap index: it stores, for each key, the set of rows whose value equals that key. The row set may be stored as a list, a tree, or some other structure. For a combined query on multiple keys, you simply perform set operations on the row sets of those keys.
Inverted indexes are most often used for full-text retrieval, but many systems also use them to search structured data, Elasticsearch for example. Elasticsearch supports fast search over JSON documents, compound queries, sorting, aggregation, distributed deployment and many other nice features. Considering the following factors, however, we preferred to find a solution inside a relational database:
- We do not need the advanced fuzzy-matching features provided by search engines; what we actually need is exact matching or simple fuzzy matching.
- The data volume is not large enough to require a distributed search cluster.
- The original data is already in a relational database, and we do not want to deal with keeping two systems in sync.
- The application has already been developed against the relational database's interface, and we do not want to start over.
- We have already mastered the operation and maintenance of relational databases; who knows how many pitfalls a brand-new system would bring.
- Considering the performance difference between Java and C, a solution built into the relational database may well be no worse than a dedicated search engine.

3. PostgreSQL solution

If we restrict the scope to open-source relational databases, there may be only one answer: PostgreSQL's GIN index.
PostgreSQL's GIN index is an inverted index. It is used not only for full-text search but also for regular data types such as int and varchar (with the btree_gin extension).
For multi-dimensional queries, we can build indexes like this (a minimal sketch follows the list):
1. Create one multi-column GIN index covering all the low-cardinality fields that appear in equality conditions
2. For fields involved in high-selectivity equality conditions or in range conditions, create separate B-tree indexes
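As a minimal sketch, assuming a hypothetical product table (the table and column names below are made up for illustration and are not from the original):

-- GIN over ordinary scalar types (int, varchar, ...) requires the btree_gin extension
CREATE EXTENSION IF NOT EXISTS btree_gin;

-- a single multi-column GIN index covering all low-cardinality equality fields
CREATE INDEX idx_goods_gin ON goods USING gin (category_id, supplier_id, brand_id, promotion_id);

-- a separate B-tree index for a selective equality field or a range-query field
CREATE INDEX idx_goods_price ON goods (price);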

Some readers may ask: both are multi-column indexes, so why does only one GIN multi-column index need to be built, while B-tree multi-column indexes must be built for all sorts of query combinations? The reason is that in a GIN multi-column index every field has equal status and there is no leading column, so a single GIN multi-column index can cover all query combinations. A B-tree multi-column index is different: if the query conditions do not include the leading column(s), the index cannot be used.

Internally, every key stored in a multi-column GIN index has the form (column number, key datum), so keys from different columns are kept apart and cannot be confused. The value stored for each key is the set of ctids of all records matching that key; when the set is large it is stored as a B-tree (a posting tree) and it is compressed, so the storage space occupied by the GIN index is very small, only about one twentieth of an equivalent B-tree index, which is another source of its performance advantage.
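If you want a concrete feel for the size difference, PostgreSQL's built-in size functions can be used to compare the two kinds of index. A hedged sketch (the index names are placeholders; the roughly 1/20 ratio above is the author's observation, not something this snippet guarantees):

-- assumes a multi-column GIN index and an equivalent multi-column B-tree index already exist
SELECT pg_size_pretty(pg_relation_size('idx_gin_multi'))   AS gin_index_size,
       pg_size_pretty(pg_relation_size('idx_btree_multi')) AS btree_index_size;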

For the fields of a multi-dimensional query that are covered by the multi-column GIN index, the ctid sets are merged (union or intersection) inside the GIN index itself; the resulting ctid set is then combined with the ctid sets obtained from other indexes via BitmapAnd or BitmapOr. Merging ctid sets inside the GIN index is much more efficient than merging them across indexes, and the GIN index also handles low-cardinality fields better, so making full use of the GIN index is far more efficient than building a separate B-tree index for each field and merging the result sets with BitmapAnd or BitmapOr.
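As an illustration of how this plays out in the planner (a hypothetical example: the equality columns are assumed to be covered by the multi-column GIN index from section 4, and gb_begindate is assumed to have its own B-tree index):

EXPLAIN
SELECT *
FROM t_mps_info
WHERE attract_tp = 0
  AND field2 = 1
  AND status = 1
  AND gb_begindate >= '2016-01-01';
-- the ctid sets for the three equality keys are merged inside a single
-- Bitmap Index Scan on the GIN index; the planner can then BitmapAnd that
-- result with a Bitmap Index Scan on the B-tree index over gb_begindate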


4. A real case

4.1 Original query

The following SQL is a simplified version of a real query from a production system.

SELECT CASE WHEN gpppur.GB_BEGINDATE <= '2016-02-29 14:36:00' AND gpppur.GB_ENDDATE > '2016-02-29 14:36:00' THEN 1
            WHEN gpppur.PREVIEW_BEGINDT <= '2016-02-29 14:36:00' AND gpppur.PREVIEW_ENDDT > '2016-02-29 14:36:00' THEN 2
            ELSE 3 END AS flag,
       gpppur.*
FROM T_MPS_INFO gpppur
WHERE gpppur.ATTRACT_TP = 0
  AND gpppur.COLUMN_ID = 1
  AND gpppur.FIELD2 = 1
  AND gpppur.STATUS = 1
ORDER BY flag ASC, gpppur.PC_SORT_NUM ASC, gpppur.GB_BEGINDATE DESC
LIMIT 0,45
The database is MySQL, the table contains about 600,000 rows, and a multi-column index on (FIELD2, STATUS) has already been built.
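The exact DDL for that index is not shown in the original; a hypothetical equivalent would be:

-- hypothetical reconstruction of the existing MySQL index mentioned above
ALTER TABLE T_MPS_INFO ADD INDEX idx_field2_status (FIELD2, STATUS);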
The value distribution of the four fields involved in the query conditions is as follows:

postgres=# select ATTRACT_TP,count(*) from T_MPS_INFO group by ATTRACT_TP;
 attract_tp | count
------------+--------
            |  16196
          6 |    251
          2 |     50
          1 |   3692
          3 |    143
         10 |    314
          4 |    214
          5 | 194333
          9 | 326485
          7 |   1029
          0 |   6458
(11 rows)

postgres=# select COLUMN_ID,count(*) from T_MPS_INFO group by COLUMN_ID;
 column_id  | count
------------+--------
            |   2557
        285 |     20
        120 |    194
        351 |      2
        337 |     79
        227 |     26
        311 |      9
        347 |      2
        228 |     21
        318 |      1
        314 |      9
         54 |     10
        133 |     27
 2147483647 |      1
        336 |   1056
        364 |      1
        131 |     10
        243 |      5
        115 |    393
         61 |     73
        226 |     40
        196 |     16
        350 |      5
        373 |     72
        377 |      2
        260 |      4
        184 |    181
        363 |      1
        341 |    392
         64 |      1
        344 | 199271
        235 |     17
        294 |    755
        352 |      3
        368 |      1
        225 |      1
        199 |      8
        374 |      2
        248 |      8
         84 |      1
        362 |      1
        361 | 331979
        319 |      7
        244 |     65
        125 |      2
        130 |      1
        272 |     65
         66 |      2
        240 |      2
        775 |      1
        253 |     49
         60 |     45
        121 |      5
        257 |      3
        365 |      1
          0 |      1
        217 |      5
        270 |      1
        122 |     39
         56 |     49
        355 |      5
        161 |      1
        329 |      1
        222 |      9
        261 |    275
          2 |   3816
         57 |     19
        307 |      4
        310 |      8
         97 |     37
        202 |     20
        203 |      3
         85 |      1
        375 |    641
         58 |     98
          1 |   6479
         59 |    114
        185 |      7
        338 |     10
        379 |     17
(80 rows)

postgres=# select FIELD2,count(*) from T_MPS_INFO group by FIELD2;
 field2 | count
--------+--------
        |   2297
      6 |    469
      2 |    320
      1 |  11452
      3 |    286
     10 |    394
      4 |    291
      5 | 200497
      9 | 331979
      0 |      2
      7 |   1178
(11 rows)

postgres=# select STATUS,count(*) from T_MPS_INFO group by STATUS;
 status | count
--------+--------
        |   2297
      0 |  15002
      3 |      5
      4 |      1
      1 | 531829
      2 |     31
(6 rows)

Since the value distribution of these fields is extremely uneven, we constructed the following Lua script to generate SELECT statements with varying conditions and simulate the load.
qx.lua:

pathtest = string.match(test, "(.*/)") or ""

dofile(pathtest .. "common.lua")

function thread_init(thread_id)
   set_vars()
end

function event(thread_id)
   local ATTRACT_TP, COLUMN_ID, FIELD2, STATUS
   ATTRACT_TP = sb_rand_uniform(0, 10)
   COLUMN_ID = sb_rand_uniform(1, 100)
   FIELD2 = sb_rand_uniform(0, 10)
   STATUS = sb_rand_uniform(0, 4)

   rs = db_query("SELECT CASE WHEN gpppur.GB_BEGINDATE <= '2016-02-29 14:36:00' AND gpppur.GB_ENDDATE > '2016-02-29 14:36:00' THEN 1 " ..
                 "WHEN gpppur.PREVIEW_BEGINDT <= '2016-02-29 14:36:00' AND gpppur.PREVIEW_ENDDT > '2016-02-29 14:36:00' THEN 2 " ..
                 "ELSE 3 END AS flag, gpppur.* " ..
                 "FROM T_MPS_INFO gpppur " ..
                 "WHERE gpppur.ATTRACT_TP = " .. ATTRACT_TP ..
                 " AND gpppur.COLUMN_ID = " .. COLUMN_ID ..
                 " AND gpppur.FIELD2 = " .. FIELD2 ..
                 " AND gpppur.STATUS = " .. STATUS ..
                 " ORDER BY flag ASC, gpppur.PC_SORT_NUM ASC, gpppur.GB_BEGINDATE DESC " ..
                 "LIMIT 45")
end

We then ran a stress test with sysbench; at 32 concurrent threads the measured QPS was 64.

[root@rh6375Gt20150507 ~]# sysbench --db-driver=mysql --test=/opt/sysbench-0.5/sysbench/tests/db/qx.lua --mysql-db=test --mysql-user=mysql --mysql-password=mysql --mysql-host=srdsdevapp69 --num-threads=32 --max-time=5 run
sysbench 0.5:  multi-threaded system evaluation benchmark

Running the test with following options:
Number of threads: 32
Random number generator seed is 0 and will be ignored

Threads started!

OLTP test statistics:
    queries performed:
        read:    825
        write:   0
        other:   0
        total:   825
    transactions:        0   (0.00 per sec.)
    read/write requests: 825 (64.20 per sec.)
    other operations:    0   (0.00 per sec.)
    ignored errors:      0   (0.00 per sec.)
    reconnects:          0   (0.00 per sec.)

General statistics:
    total time:                          12.8496s
    total number of events:              825
    total time taken by event execution: 399.6003s
    response time:
        min:                       1.01ms
        avg:                     484.36ms
        max:                   12602.74ms
        approx. 95 percentile:   222.79ms

Threads fairness:
    events (avg/stddev):         25.7812/24.12
    execution time (avg/stddev): 12.4875/0.23

4.2 The optimized query

For the specific SQL above we could, of course, optimize by building a composite index covering the 4 fields used in the equality conditions (ATTRACT_TP, COLUMN_ID, FIELD2, STATUS). But it must be pointed out that this SQL is only one of more than 1000 different statements generated by the various query combinations; the set of fields involved differs from one statement to the next, and it is impossible to build a separate multi-column index for every combination.
So we turned to PostgreSQL's GIN index. To use it, we first migrated the MySQL table definition, indexes and data to PostgreSQL as-is.
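The migration steps are not spelled out in the original; one rough, hedged way to do it (paths and details are illustrative only) is to dump the table to a delimited file from MySQL and load it with psql's \copy after recreating an equivalent table definition:

-- in MySQL: export the table to a tab-delimited file (path is illustrative)
SELECT * FROM T_MPS_INFO INTO OUTFILE '/tmp/t_mps_info.tsv' FIELDS TERMINATED BY '\t';
-- in psql, after creating an equivalent t_mps_info table:
--   \copy t_mps_info FROM '/tmp/t_mps_info.tsv'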
Before adding the GIN index we ran a baseline test. Surprisingly, even before any optimization, PostgreSQL already delivered about 5 times the performance of MySQL (335/64 ≈ 5).

[root@rh6375Gt20150507 ~]# sysbench --db-driver=pgsql --test=/opt/sysbench-0.5/sysbench/tests/db/qx.lua --pgsql-db=postgres --pgsql-user=postgres --pgsql-password=postgres --pgsql-host=srdsdevapp69 --num-threads=32 --max-time=5 run
sysbench 0.5:  multi-threaded system evaluation benchmark

Running the test with following options:
Number of threads: 32
Random number generator seed is 0 and will be ignored

Threads started!

OLTP test statistics:
    queries performed:
        read:    1948
        write:   0
        other:   0
        total:   1948
    transactions:        0    (0.00 per sec.)
    read/write requests: 1948 (335.52 per sec.)
    other operations:    0    (0.00 per sec.)
    ignored errors:      0    (0.00 per sec.)
    reconnects:          0    (0.00 per sec.)

General statistics:
    total time:                          5.8059s
    total number of events:              1948
    total time taken by event execution: 172.0538s
    response time:
        min:                       0.90ms
        avg:                      88.32ms
        max:                    2885.69ms
        approx. 95 percentile:    80.01ms

Threads fairness:
    events (avg/stddev):         60.8750/27.85
    execution time (avg/stddev): 5.3767/0.29

The next step is to add the GIN index.

postgres=# create extension btree_gin;
CREATE EXTENSION
postgres=# create index idx3 on t_mps_info using gin(attract_tp, column_id, field2, status);
CREATE INDEX
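To sanity-check that the planner actually picks up the new index, one can run EXPLAIN on one of the generated queries; a hedged sketch (the actual plan depends on the parameter values and table statistics):

EXPLAIN (ANALYZE, BUFFERS)
SELECT gpppur.*
FROM t_mps_info gpppur
WHERE gpppur.attract_tp = 0
  AND gpppur.column_id = 1
  AND gpppur.field2 = 1
  AND gpppur.status = 1;
-- the expected shape is a Bitmap Heap Scan driven by a Bitmap Index Scan on idx3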

Running the stress test again, the measured QPS was 5412, about 85 times that of MySQL (5412/64 ≈ 85).

[root@rh6375Gt20150507 ~]# sysbench --db-driver=pgsql --test=/opt/sysbench-0.5/sysbench/tests/db/qx.lua --pgsql-db=postgres --pgsql-user=postgres --pgsql-password=postgres --pgsql-host=srdsdevapp69 --num-threads=32 --max-time=5 run
sysbench 0.5:  multi-threaded system evaluation benchmark

Running the test with following options:
Number of threads: 32
Random number generator seed is 0 and will be ignored

Threads started!

OLTP test statistics:
    queries performed:
        read:    10000
        write:   0
        other:   0
        total:   10000
    transactions:        0     (0.00 per sec.)
    read/write requests: 10000 (5412.80 per sec.)
    other operations:    0     (0.00 per sec.)
    ignored errors:      0     (0.00 per sec.)
    reconnects:          0     (0.00 per sec.)

General statistics:
    total time:                          1.8475s
    total number of events:              10000
    total time taken by event execution: 58.2706s
    response time:
        min:                       0.95ms
        avg:                       5.83ms
        max:                      68.36ms
        approx. 95 percentile:     9.42ms

Threads fairness:
    events (avg/stddev):         312.5000/47.80
    execution time (avg/stddev): 1.8210/0.02

4.3 Addendum

For comparison, we also added a multi-column index on MySQL covering the 4 fields attract_tp, column_id, field2 and status; the measured QPS was a little over 4000, still lower than PostgreSQL. So the widely circulated claim that MySQL beats PostgreSQL on simple queries should not be taken on faith! (That PostgreSQL greatly outperforms MySQL on complex queries is, I believe, the general consensus, and the SQL in this example can hardly be called a complex query.)
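The DDL used for this MySQL comparison index is not shown in the original; a hypothetical equivalent would be:

-- hypothetical reconstruction of the 4-column comparison index on MySQL
ALTER TABLE T_MPS_INFO
  ADD INDEX idx_attract_column_field2_status (attract_tp, column_id, field2, status);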


5. Summary

GIN indexes (along with the similar GiST and SP-GiST indexes) are a distinctive feature of PostgreSQL, and many interesting uses can be built on top of them. For the scenario described in this article, interested readers may want to compare this approach with Oracle's bitmap indexes and with search-engine-based solutions (Elasticsearch, Solr, etc.). My own knowledge is limited; if there is a better solution, I would be glad to hear about it.


