A few years ago, I was tasked with solving a performance problem in a critical system at the company I worked for. It was a tough challenge, with sleepless nights and more hair loss, and the backend used PostgreSQL. After plenty of effort and digging, the solution turned out to be as simple as one line:
ALTER USER foo SET work_mem='32MB';
Now, to be honest, this may or may not solve your performance problem right away; it depends heavily on your query patterns and your system's workload. But if you are a backend developer, I hope this post adds another tool to your problem-solving toolbox, especially for taming PostgreSQL.
In this post, we will build a scenario that simulates performance degradation and explore some tools to investigate the problem, such as EXPLAIN, k6 for load testing, and even a dive into PostgreSQL's source code. I will also share some articles to help you solve related problems.
Let's build a simple system to analyze football players' performance. For now, the only business rule is to answer this question: who are the players that contribute the most to goals?
The following SQL creates our data model and populates it:
CREATE TABLE players (
    player_id SERIAL PRIMARY KEY,
    nationality TEXT,
    age INT,
    position TEXT
);

CREATE TABLE matches (
    match_id SERIAL PRIMARY KEY,
    match_date DATE,
    home_team TEXT,
    away_team TEXT
);

CREATE TABLE player_stats (
    player_stat_id SERIAL PRIMARY KEY,
    player_id INT REFERENCES players(player_id),
    match_id INT REFERENCES matches(match_id),
    goals INT,
    assists INT,
    minutes_played INT
);

-- Populate players with a range of nationalities, ages, and positions
INSERT INTO players (nationality, age, position)
SELECT
    ('Country' || (1 + random()*100)::int),  -- 100 different nationalities
    (18 + random()*20)::int,                 -- Ages between 18 and 38
    (ARRAY['Forward', 'Midfielder', 'Defender', 'Goalkeeper'])[ceil(random()*4)::int]
FROM generate_series(1, 10000);
The scripts used to initialize and populate the database are available in the GitHub repository.
Yes, we could design the database to make the system perform better, but the main goal here is to explore an unoptimized scenario. Trust me, you will probably run into systems like this, where either poor initial design choices or unexpected growth demand a huge effort to improve performance.
To simulate a problem related to the work_mem configuration, let's create a query that answers this question: who are the top 2000 players that contribute the most to goals?
SELECT p.player_id, SUM(ps.goals + ps.assists) AS total_score
FROM player_stats ps
JOIN players p ON ps.player_id = p.player_id
GROUP BY p.player_id
ORDER BY total_score DESC
LIMIT 2000;
Alright, but how do we identify the bottlenecks in this query? Like other DBMSs, PostgreSQL supports the EXPLAIN command, which helps us understand each step the query planner executes, optimized or not.
From its output we can analyze details such as the chosen scan and join strategies, the estimated costs, and, with the ANALYZE option, the actual execution time of each node.
You can learn more about the PostgreSQL planner/optimizer here:
Talk is cheap, so let's dive into a practical example. First, we will reduce work_mem to its minimum possible value, 64kB, as defined in the source code:
/*
 * workMem is forced to be at least 64KB, the current minimum valid value
 * for the work_mem GUC.  This is a defense against parallel sort callers
 * that divide out memory among many workers in a way that leaves each
 * with very little memory.
 */
state->allowedMem = Max(workMem, 64) * (int64) 1024;
Next, let's analyze the output of the EXPLAIN command:
BEGIN; -- 1. Start a transaction.

SET LOCAL work_mem = '64kB'; -- 2. Change work_mem at the transaction level; other transactions in the same session keep the default value (4MB).

SHOW work_mem; -- 3. Check the modified work_mem value.

EXPLAIN (ANALYZE, COSTS, VERBOSE, BUFFERS) -- 4. Run EXPLAIN with options that help us analyze and identify bottlenecks.
SELECT p.player_id, SUM(ps.goals + ps.assists) AS total_score
FROM player_stats ps
INNER JOIN players p ON p.player_id = ps.player_id
GROUP BY p.player_id
ORDER BY total_score DESC
LIMIT 2000;

-- QUERY PLAN
Limit  (cost=18978.96..18983.96 rows=2000 width=12) (actual time=81.589..81.840 rows=2000 loops=1)
  Output: p.player_id, (sum((ps.goals + ps.assists)))
  Buffers: shared hit=667, temp read=860 written=977
  ->  Sort  (cost=18978.96..19003.96 rows=10000 width=12) (actual time=81.587..81.724 rows=2000 loops=1)
        Output: p.player_id, (sum((ps.goals + ps.assists)))
        Sort Key: (sum((ps.goals + ps.assists))) DESC
        Sort Method: external merge  Disk: 280kB
        Buffers: shared hit=667, temp read=860 written=977
        ->  GroupAggregate  (cost=15076.66..17971.58 rows=10000 width=12) (actual time=40.293..79.264 rows=9998 loops=1)
              Output: p.player_id, sum((ps.goals + ps.assists))
              Group Key: p.player_id
              Buffers: shared hit=667, temp read=816 written=900
              ->  Merge Join  (cost=15076.66..17121.58 rows=100000 width=12) (actual time=40.281..71.313 rows=100000 loops=1)
                    Output: p.player_id, ps.goals, ps.assists
                    Merge Cond: (p.player_id = ps.player_id)
                    Buffers: shared hit=667, temp read=816 written=900
                    ->  Index Only Scan using players_pkey on public.players p  (cost=0.29..270.29 rows=10000 width=4) (actual time=0.025..1.014 rows=10000 loops=1)
                          Output: p.player_id
                          Heap Fetches: 0
                          Buffers: shared hit=30
                    ->  Materialize  (cost=15076.32..15576.32 rows=100000 width=12) (actual time=40.250..57.942 rows=100000 loops=1)
                          Output: ps.goals, ps.assists, ps.player_id
                          Buffers: shared hit=637, temp read=816 written=900
                          ->  Sort  (cost=15076.32..15326.32 rows=100000 width=12) (actual time=40.247..49.339 rows=100000 loops=1)
                                Output: ps.goals, ps.assists, ps.player_id
                                Sort Key: ps.player_id
                                Sort Method: external merge  Disk: 2208kB
                                Buffers: shared hit=637, temp read=816 written=900
                                ->  Seq Scan on public.player_stats ps  (cost=0.00..1637.00 rows=100000 width=12) (actual time=0.011..8.378 rows=100000 loops=1)
                                      Output: ps.goals, ps.assists, ps.player_id
                                      Buffers: shared hit=637
Planning:
  Buffers: shared hit=6
Planning Time: 0.309 ms
Execution Time: 82.718 ms

COMMIT; -- 5. You can also execute a ROLLBACK, in case you want to analyze queries like INSERT, UPDATE, and DELETE.
We can see that the execution time was 82.718ms and that the sort algorithm used was external merge. This algorithm relies on disk instead of memory because the data exceeds the 64kB work_mem limit.
For your information, the tuplesort.c module marks when the sort algorithm will use disk by setting the state to SORTEDONTAPE at this line. The disk interactions are handled by the logtape.c module.
If you are a visual person (like me), there are tools that help you make sense of EXPLAIN output, such as https://explain.dalibo.com/. Below is an example showing a node with a Sort step, including details such as Sort Method: external merge and Sort Space Used: 2.2MB:
The "Stats" section is especially useful for analyzing more complex queries, since it breaks down the execution time per query node. In our example, it highlights a suspiciously high execution time (close to 42 ms) in one of the Sort nodes:
As the EXPLAIN output shows, one of the main causes of the performance problem is the Sort node that uses disk. A side effect of this issue is a spike in write I/O metrics, especially on systems with heavier workloads (I hope you are monitoring these metrics; if not, good luck when you need them!). Yes, even read-only queries can cause write spikes, because the sort algorithm writes data to temporary files.
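The original post doesn't cover how to watch for this, but as a quick sketch, PostgreSQL's cumulative statistics make the spill easy to confirm: temp_files and temp_bytes in pg_stat_database grow as queries write temporary files, and log_temp_files can log each spill (the 1MB threshold below is just an example).

-- Temp files and bytes written per database since the last stats reset.
SELECT datname, temp_files, temp_bytes
FROM pg_stat_database
WHERE datname = current_database();

-- Optionally, log every temporary file larger than 1MB (requires superuser).
ALTER SYSTEM SET log_temp_files = '1MB';
SELECT pg_reload_conf();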
When we run the same query with work_mem=4MB (the default in PostgreSQL), the execution time drops by more than 50%:
EXPLAIN (ANALYZE, COSTS, VERBOSE, BUFFERS)
SELECT p.player_id, SUM(ps.goals + ps.assists) AS total_score
FROM player_stats ps
INNER JOIN players p ON p.player_id = ps.player_id
GROUP BY p.player_id
ORDER BY total_score DESC
LIMIT 2000;

-- QUERY PLAN
Limit  (cost=3646.90..3651.90 rows=2000 width=12) (actual time=41.672..41.871 rows=2000 loops=1)
  Output: p.player_id, (sum((ps.goals + ps.assists)))
  Buffers: shared hit=711
  ->  Sort  (cost=3646.90..3671.90 rows=10000 width=12) (actual time=41.670..41.758 rows=2000 loops=1)
        Output: p.player_id, (sum((ps.goals + ps.assists)))
        Sort Key: (sum((ps.goals + ps.assists))) DESC
        Sort Method: top-N heapsort  Memory: 227kB
        Buffers: shared hit=711
        ->  HashAggregate  (cost=2948.61..3048.61 rows=10000 width=12) (actual time=38.760..40.073 rows=9998 loops=1)
              Output: p.player_id, sum((ps.goals + ps.assists))
              Group Key: p.player_id
              Batches: 1  Memory Usage: 1169kB
              Buffers: shared hit=711
              ->  Hash Join  (cost=299.00..2198.61 rows=100000 width=12) (actual time=2.322..24.273 rows=100000 loops=1)
                    Output: p.player_id, ps.goals, ps.assists
                    Inner Unique: true
                    Hash Cond: (ps.player_id = p.player_id)
                    Buffers: shared hit=711
                    ->  Seq Scan on public.player_stats ps  (cost=0.00..1637.00 rows=100000 width=12) (actual time=0.008..4.831 rows=100000 loops=1)
                          Output: ps.player_stat_id, ps.player_id, ps.match_id, ps.goals, ps.assists, ps.minutes_played
                          Buffers: shared hit=637
                    ->  Hash  (cost=174.00..174.00 rows=10000 width=4) (actual time=2.298..2.299 rows=10000 loops=1)
                          Output: p.player_id
                          Buckets: 16384  Batches: 1  Memory Usage: 480kB
                          Buffers: shared hit=74
                          ->  Seq Scan on public.players p  (cost=0.00..174.00 rows=10000 width=4) (actual time=0.004..0.944 rows=10000 loops=1)
                                Output: p.player_id
                                Buffers: shared hit=74
Planning:
  Buffers: shared hit=6
Planning Time: 0.236 ms
Execution Time: 41.998 ms
In this EXPLAIN output, one of the Sort nodes now uses an in-memory algorithm, heapsort. For context, the planner only opts for heapsort when it is cheaper to execute than quicksort. You can dig deeper into the decision-making process in the PostgreSQL source code.
In addition, the second Sort node, which previously accounted for nearly 40 ms of execution time, disappears from the execution plan entirely. This happens because the planner now chooses a HashJoin instead of a MergeJoin, since the hash operation fits in memory, consuming about 480kB.
For more details about join algorithms, check out these articles:
The default work_mem is not always enough for your system's workload. You can adjust this value at the user level with:
ALTER USER foo SET work_mem='32MB';
Note: if you are using a connection pool or a connection pooler, it is important to recycle old sessions so the new configuration takes effect.
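As a quick sanity check (not from the original post), you can open a new session and confirm the value that is actually in effect and where it comes from:

-- In a fresh session for the user, check the effective value...
SHOW work_mem;

-- ...and where it comes from (default, user, database, session, ...).
SELECT name, setting, unit, source
FROM pg_settings
WHERE name = 'work_mem';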
You can also control this configuration at the transaction level, as we did earlier with SET LOCAL. To understand and measure the impact of the work_mem change, let's run a simple API and load-test it with k6:
k6-test.js
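The k6 script itself isn't reproduced here, so below is a minimal sketch of what such a test could look like. The target host/port, virtual users, and duration are assumptions; only the ENDPOINT variable and the two endpoint paths come from the rest of the post.

import http from 'k6/http';
import { check } from 'k6';

// Assumed load profile; tune VUs and duration to your own environment.
export const options = {
  vus: 10,
  duration: '30s',
};

// ENDPOINT is /low-work-mem or /optimized-work-mem (see below).
const endpoint = __ENV.ENDPOINT || '/low-work-mem';

export default function () {
  // "api:8080" is an assumed service name/port from the docker-compose setup.
  const res = http.get(`http://api:8080${endpoint}`);
  check(res, { 'status is 200': (r) => r.status === 200 });
}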
The API is implemented in Go and exposes two endpoints that run the query with different work_mem configurations:
main.go
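The original Go source isn't included here; the sketch below only illustrates the idea. The database/sql + lib/pq stack, the DATABASE_URL environment variable, and port 8080 are assumptions, while the two endpoint paths and the per-transaction work_mem values (64kB and 4MB) come from the post.

package main

import (
	"database/sql"
	"fmt"
	"net/http"
	"os"

	_ "github.com/lib/pq" // assumed driver; the real project may use pgx instead
)

const rankingQuery = `
SELECT p.player_id, SUM(ps.goals + ps.assists) AS total_score
FROM player_stats ps
JOIN players p ON ps.player_id = p.player_id
GROUP BY p.player_id
ORDER BY total_score DESC
LIMIT 2000`

// runQuery executes the ranking query in a transaction with the given work_mem,
// so the setting only affects this transaction (SET LOCAL).
func runQuery(db *sql.DB, workMem string) error {
	tx, err := db.Begin()
	if err != nil {
		return err
	}
	defer tx.Rollback()

	// workMem is a hard-coded constant, so interpolation is safe here;
	// SET does not accept bind parameters.
	if _, err := tx.Exec(fmt.Sprintf("SET LOCAL work_mem = '%s'", workMem)); err != nil {
		return err
	}

	rows, err := tx.Query(rankingQuery)
	if err != nil {
		return err
	}
	for rows.Next() {
		// Results are discarded; the load test only measures latency and throughput.
	}
	if err := rows.Err(); err != nil {
		rows.Close()
		return err
	}
	rows.Close()
	return tx.Commit()
}

// handler builds an http.HandlerFunc bound to a specific work_mem value.
func handler(db *sql.DB, workMem string) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		if err := runQuery(db, workMem); err != nil {
			http.Error(w, err.Error(), http.StatusInternalServerError)
			return
		}
		w.WriteHeader(http.StatusOK)
	}
}

func main() {
	db, err := sql.Open("postgres", os.Getenv("DATABASE_URL"))
	if err != nil {
		panic(err)
	}
	http.HandleFunc("/low-work-mem", handler(db, "64kB"))
	http.HandleFunc("/optimized-work-mem", handler(db, "4MB"))
	http.ListenAndServe(":8080", nil)
}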
Below is the docker-compose file with all the dependencies needed to run the load test:
docker-compose.yaml
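The compose file isn't reproduced here either, so this sketch only shows the shape of the setup. Image tags, service names, credentials, and the database name are assumptions; the ENDPOINT variable and the PostgreSQL/API/k6 trio come from the post.

services:
  postgres:
    image: postgres:16              # assumed version
    environment:
      POSTGRES_USER: foo
      POSTGRES_PASSWORD: foo
      POSTGRES_DB: football         # assumed database name

  api:
    build: .
    environment:
      DATABASE_URL: postgres://foo:foo@postgres:5432/football?sslmode=disable
    depends_on:
      - postgres

  k6:
    image: grafana/k6
    environment:
      ENDPOINT: ${ENDPOINT:-/low-work-mem}   # selects the scenario to test
    volumes:
      - ./k6-test.js:/scripts/k6-test.js
    command: run /scripts/k6-test.js
    depends_on:
      - api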
We can set the ENDPOINT environment variable to define which scenario to test: /low-work-mem or /optimized-work-mem. Run the test with: docker compose up --abort-on-container-exit. For this example I used Docker version 20.10.22.
Testing endpoint: /low-work-mem - work_mem=64kB
Testing endpoint: /optimized-work-mem - work_mem=4MB
The results show that the endpoint with the higher work_mem outperformed the one with the lower setting. Under the test load, p90 latency dropped by more than 43ms and throughput improved significantly.
If percentile metrics are new to you, I recommend learning and understanding them; they are very helpful for guiding performance analysis. Here are some resources to get you started:
After dreaming about the problem, waking up repeatedly to try new solutions, and finally discovering that work_mem could help, the next challenge was figuring out the right value for this setting.
The default work_mem value of 4MB, like many other PostgreSQL settings, is conservative. This allows PostgreSQL to run on small machines with limited computing power. However, we have to be careful not to crash the PostgreSQL instance with out-of-memory errors: a single query, if complex enough, can consume several multiples of the memory specified by work_mem, depending on the number of operations like sorts, merge joins, and hash joins (influenced by hash_mem_multiplier), and so on. As the official documentation notes:
It is necessary to keep this fact in mind when choosing the value. Sort operations are used for ORDER BY, DISTINCT, and merge joins. Hash tables are used in hash joins, hash-based aggregation, memoize nodes and hash-based processing of IN subqueries.
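As a small, hedged illustration of how these knobs interact: hash-based nodes may each use up to work_mem * hash_mem_multiplier, and you can inspect both limits in pg_settings:

-- Current limits that bound per-operation memory.
SELECT name, setting, unit
FROM pg_settings
WHERE name IN ('work_mem', 'hash_mem_multiplier');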
Unfortunately, there is no magic formula for setting work_mem. It depends on your system's available memory, workload, and query patterns. The TimescaleDB team has an auto-tuning tool, and the topic has been widely discussed. Here are some excellent resources to guide you:
But at the end of the day, IMHO, the answer is: test. Test today. Test tomorrow. Test forever. Keep testing until you find an acceptable value for your use case, one that improves query performance without breaking your database.
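One hedged way to run such a test is to try a candidate value for the current session only and compare the resulting plan with the baseline; the 16MB below is just an arbitrary candidate:

-- Try a candidate value for this session only and re-check the plan.
SET work_mem = '16MB';

EXPLAIN (ANALYZE, BUFFERS)
SELECT p.player_id, SUM(ps.goals + ps.assists) AS total_score
FROM player_stats ps
JOIN players p ON ps.player_id = p.player_id
GROUP BY p.player_id
ORDER BY total_score DESC
LIMIT 2000;

RESET work_mem;  -- go back to the user/database/default setting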