A few years ago, I was tasked with solving a performance problem in a critical system at the company I worked for. It was a tough challenge: sleepless nights and even more hair loss. The backend used PostgreSQL, and after plenty of effort and digging, the solution turned out to be as simple as one line:
ALTER USER foo SET work_mem='32MB';
Now, to be honest, this may or may not solve your performance problem right away. It depends heavily on your query patterns and your system's workload. But if you're a backend developer, I hope this post adds another tool to your problem-solving arsenal, especially for PostgreSQL.
In this post, we'll build a scenario that simulates performance degradation and explore some tools to investigate the problem, such as EXPLAIN, k6 for load testing, and even a dive into PostgreSQL's source code. I'll also share some articles to help you solve related problems.
Let's build a simple system to analyze soccer players' performance. For now, the only business rule is to answer this question: who are the top N players that contribute the most to goals?

The following SQL creates our data model and populates it:
CREATE TABLE players (
    player_id SERIAL PRIMARY KEY,
    nationality TEXT,
    age INT,
    position TEXT
);

CREATE TABLE matches (
    match_id SERIAL PRIMARY KEY,
    match_date DATE,
    home_team TEXT,
    away_team TEXT
);

CREATE TABLE player_stats (
    player_stat_id SERIAL PRIMARY KEY,
    player_id INT REFERENCES players(player_id),
    match_id INT REFERENCES matches(match_id),
    goals INT,
    assists INT,
    minutes_played INT
);

-- Populate players with a range of nationalities, ages, and positions
INSERT INTO players (nationality, age, position)
SELECT
    ('Country' || (1 + random()*100)::int), -- 100 different nationalities
    (18 + random()*20)::int,                -- Ages between 18 and 38
    (ARRAY['Forward', 'Midfielder', 'Defender', 'Goalkeeper'])[ceil(random()*4)::int]
FROM generate_series(1, 10000);
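The snippet above only fills the players table. If you want to reproduce the plans shown later without cloning the repository, a sketch like the following also populates matches and player_stats (the row counts match the plans below, but the exact value distributions are assumptions):

-- Populate 1000 matches with random dates and teams
INSERT INTO matches (match_date, home_team, away_team)
SELECT
    CURRENT_DATE - (random()*365)::int,
    'Team' || (1 + random()*50)::int,
    'Team' || (1 + random()*50)::int
FROM generate_series(1, 1000);

-- Populate 100000 per-match stat rows referencing existing players and matches
INSERT INTO player_stats (player_id, match_id, goals, assists, minutes_played)
SELECT
    (1 + random()*9999)::int,
    (1 + random()*999)::int,
    (random()*3)::int,
    (random()*2)::int,
    (random()*90)::int
FROM generate_series(1, 100000);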
The scripts that initialize and populate the database are available in the GitHub repository.
Yes, we could design the database for better performance from the start, but the main goal here is to explore an unoptimized scenario. Trust me, you'll probably run into systems like this, where either poor initial design choices or unexpected growth demand significant effort before performance improves.
To simulate a problem related to the work_mem configuration, let's write a query to answer this question: who are the top 2000 players that contribute the most to goals?
SELECT p.player_id, SUM(ps.goals + ps.assists) AS total_score
FROM player_stats ps
JOIN players p ON ps.player_id = p.player_id
GROUP BY p.player_id
ORDER BY total_score DESC
LIMIT 2000;
OK, but how do we identify bottlenecks in this query? Like other DBMSs, PostgreSQL supports the EXPLAIN command, which helps us understand each step the query planner executes (optimized or not).
We can analyze details such as:

- The node type used at each step, e.g. sequential scan, index scan, sort, or hash join
- The planner's estimated cost versus the actual execution time of each node
- The estimated versus actual number of rows
- The sort or join method chosen, and how much memory or disk it consumed
- Buffer usage, including shared buffer hits and temporary files, when the BUFFERS option is enabled
You can learn more about the PostgreSQL planner/optimizer in the official documentation.
Talk is cheap, so let's dig into a practical example. First, we'll lower work_mem to its smallest possible value, 64kB, as defined in the source code:
/*
 * workMem is forced to be at least 64KB, the current minimum valid value
 * for the work_mem GUC.  This is a defense against parallel sort callers
 * that divide out memory among many workers in a way that leaves each
 * with very little memory.
 */
state->allowedMem = Max(workMem, 64) * (int64) 1024;
Next, let's analyze the output of the EXPLAIN command:
BEGIN; -- 1. Start a transaction.

SET LOCAL work_mem = '64kB'; -- 2. Change work_mem at the transaction level; other transactions in the same session keep the default value (4MB).

SHOW work_mem; -- 3. Check the modified work_mem value.

EXPLAIN (ANALYZE, COSTS, VERBOSE, BUFFERS) -- 4. Run EXPLAIN with options that help us analyze and identify bottlenecks.
SELECT p.player_id, SUM(ps.goals + ps.assists) AS total_score
FROM player_stats ps
INNER JOIN players p ON p.player_id = ps.player_id
GROUP BY p.player_id
ORDER BY total_score DESC
LIMIT 2000;

-- QUERY PLAN
-- Limit  (cost=18978.96..18983.96 rows=2000 width=12) (actual time=81.589..81.840 rows=2000 loops=1)
--   Output: p.player_id, (sum((ps.goals + ps.assists)))
--   Buffers: shared hit=667, temp read=860 written=977
--   ->  Sort  (cost=18978.96..19003.96 rows=10000 width=12) (actual time=81.587..81.724 rows=2000 loops=1)
--         Output: p.player_id, (sum((ps.goals + ps.assists)))
--         Sort Key: (sum((ps.goals + ps.assists))) DESC
--         Sort Method: external merge  Disk: 280kB
--         Buffers: shared hit=667, temp read=860 written=977
--         ->  GroupAggregate  (cost=15076.66..17971.58 rows=10000 width=12) (actual time=40.293..79.264 rows=9998 loops=1)
--               Output: p.player_id, sum((ps.goals + ps.assists))
--               Group Key: p.player_id
--               Buffers: shared hit=667, temp read=816 written=900
--               ->  Merge Join  (cost=15076.66..17121.58 rows=100000 width=12) (actual time=40.281..71.313 rows=100000 loops=1)
--                     Output: p.player_id, ps.goals, ps.assists
--                     Merge Cond: (p.player_id = ps.player_id)
--                     Buffers: shared hit=667, temp read=816 written=900
--                     ->  Index Only Scan using players_pkey on public.players p  (cost=0.29..270.29 rows=10000 width=4) (actual time=0.025..1.014 rows=10000 loops=1)
--                           Output: p.player_id
--                           Heap Fetches: 0
--                           Buffers: shared hit=30
--                     ->  Materialize  (cost=15076.32..15576.32 rows=100000 width=12) (actual time=40.250..57.942 rows=100000 loops=1)
--                           Output: ps.goals, ps.assists, ps.player_id
--                           Buffers: shared hit=637, temp read=816 written=900
--                           ->  Sort  (cost=15076.32..15326.32 rows=100000 width=12) (actual time=40.247..49.339 rows=100000 loops=1)
--                                 Output: ps.goals, ps.assists, ps.player_id
--                                 Sort Key: ps.player_id
--                                 Sort Method: external merge  Disk: 2208kB
--                                 Buffers: shared hit=637, temp read=816 written=900
--                                 ->  Seq Scan on public.player_stats ps  (cost=0.00..1637.00 rows=100000 width=12) (actual time=0.011..8.378 rows=100000 loops=1)
--                                       Output: ps.goals, ps.assists, ps.player_id
--                                       Buffers: shared hit=637
-- Planning:
--   Buffers: shared hit=6
-- Planning Time: 0.309 ms
-- Execution Time: 82.718 ms

COMMIT; -- 5. You can also execute a ROLLBACK, in case you want to analyze queries like INSERT, UPDATE, and DELETE.
We can see that the execution time was 82.718ms and that the sort method used was external merge. This algorithm resorts to disk instead of memory because the data exceeds the 64kB work_mem limit.
For reference, the tuplesort.c module flags when a sort will use disk by setting its state to SORTEDONTAPE; the disk interaction itself is handled by the logtape.c module.
If you're a visual person (like me), there are tools that help you make sense of EXPLAIN output, such as https://explain.dalibo.com/. For our query, it shows the node with the sort step, including details such as Sort Method: external merge and Sort Space Used: 2.2MB.
The "Stats" section is especially useful for analyzing more complex queries, as it provides execution-time details for every query node. In our example, it flags a suspiciously high execution time, nearly 42ms, in one of the Sort nodes.
As the EXPLAIN output shows, one of the main causes of the performance problem is the Sort node that spills to disk. A side effect of this issue, especially on systems with heavy workloads, is a spike in write I/O metrics (I hope you're monitoring those; if not, good luck when you need them!). And yes, even read-only queries can cause write spikes, because the sort algorithm writes data to temporary files.
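If you want to confirm this behavior on your own instance, PostgreSQL's built-in statistics can help. Here's a small sketch using the standard monitoring views (the 64MB threshold is just an example value):

-- Cumulative temp-file activity for the current database:
SELECT temp_files, temp_bytes
FROM pg_stat_database
WHERE datname = current_database();

-- Log every temporary file larger than 64MB, to catch queries spilling to disk:
ALTER SYSTEM SET log_temp_files = '64MB';
SELECT pg_reload_conf();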
When we execute the same query with work_mem=4MB (PostgreSQL's default), the execution time drops by nearly 50%.
EXPLAIN (ANALYZE, COSTS, VERBOSE, BUFFERS)
SELECT p.player_id, SUM(ps.goals + ps.assists) AS total_score
FROM player_stats ps
INNER JOIN players p ON p.player_id = ps.player_id
GROUP BY p.player_id
ORDER BY total_score DESC
LIMIT 2000;

-- QUERY PLAN
-- Limit  (cost=3646.90..3651.90 rows=2000 width=12) (actual time=41.672..41.871 rows=2000 loops=1)
--   Output: p.player_id, (sum((ps.goals + ps.assists)))
--   Buffers: shared hit=711
--   ->  Sort  (cost=3646.90..3671.90 rows=10000 width=12) (actual time=41.670..41.758 rows=2000 loops=1)
--         Output: p.player_id, (sum((ps.goals + ps.assists)))
--         Sort Key: (sum((ps.goals + ps.assists))) DESC
--         Sort Method: top-N heapsort  Memory: 227kB
--         Buffers: shared hit=711
--         ->  HashAggregate  (cost=2948.61..3048.61 rows=10000 width=12) (actual time=38.760..40.073 rows=9998 loops=1)
--               Output: p.player_id, sum((ps.goals + ps.assists))
--               Group Key: p.player_id
--               Batches: 1  Memory Usage: 1169kB
--               Buffers: shared hit=711
--               ->  Hash Join  (cost=299.00..2198.61 rows=100000 width=12) (actual time=2.322..24.273 rows=100000 loops=1)
--                     Output: p.player_id, ps.goals, ps.assists
--                     Inner Unique: true
--                     Hash Cond: (ps.player_id = p.player_id)
--                     Buffers: shared hit=711
--                     ->  Seq Scan on public.player_stats ps  (cost=0.00..1637.00 rows=100000 width=12) (actual time=0.008..4.831 rows=100000 loops=1)
--                           Output: ps.player_stat_id, ps.player_id, ps.match_id, ps.goals, ps.assists, ps.minutes_played
--                           Buffers: shared hit=637
--                     ->  Hash  (cost=174.00..174.00 rows=10000 width=4) (actual time=2.298..2.299 rows=10000 loops=1)
--                           Output: p.player_id
--                           Buckets: 16384  Batches: 1  Memory Usage: 480kB
--                           Buffers: shared hit=74
--                           ->  Seq Scan on public.players p  (cost=0.00..174.00 rows=10000 width=4) (actual time=0.004..0.944 rows=10000 loops=1)
--                                 Output: p.player_id
--                                 Buffers: shared hit=74
-- Planning:
--   Buffers: shared hit=6
-- Planning Time: 0.236 ms
-- Execution Time: 41.998 ms
In this EXPLAIN output, one of the Sort nodes now uses an in-memory algorithm, top-N heapsort. For context, the planner only opts for a heapsort when it is cheaper to execute than a quicksort. You can dig deeper into this decision process in the PostgreSQL source code.
On top of that, the second Sort node, which previously accounted for nearly 40ms of the execution time, has vanished from the execution plan entirely. This happens because the planner now picks a HashJoin instead of a MergeJoin, as the hash operation fits in memory, consuming about 480kB.
For more details on how the join algorithms work, it's worth reading dedicated material on hash joins and merge joins.
The default work_mem is not always enough for your system's workload. You can adjust it at the user level with:
ALTER USER foo SET work_mem='32MB';
Note: if you use a connection pool or a connection pooler, it's important to recycle old sessions so the new configuration takes effect.
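A quick way to sanity-check that fresh sessions actually pick up the new value (a small sketch, run as the foo user from the example above):

-- In a new session opened as user foo:
SHOW work_mem; -- should now report 32MB

-- Or inspect the setting and where it comes from:
SELECT name, setting, unit, source
FROM pg_settings
WHERE name = 'work_mem';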
You can also control this configuration at the transaction level. Let's run a simple API to understand and measure the impact of changing work_mem, using load tests with k6:
k6-test.js
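The full script lives in the repository; below is a minimal sketch of it. The stages, host, and port are assumptions rather than the original values, but the ENDPOINT variable matches the scenarios described later:

import http from 'k6/http';
import { check } from 'k6';

export const options = {
    // Ramp up, hold, then ramp down; tune these stages to your environment.
    stages: [
        { duration: '15s', target: 10 },
        { duration: '30s', target: 10 },
        { duration: '15s', target: 0 },
    ],
};

export default function () {
    // ENDPOINT selects the scenario: /low-work-mem or /optimized-work-mem.
    const res = http.get(`http://api:8080${__ENV.ENDPOINT}`);
    check(res, { 'status is 200': (r) => r.status === 200 });
}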
The API is implemented in Go and exposes two endpoints that execute the query under different work_mem configurations:
main.go
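The repository holds the actual implementation; the sketch below shows the idea, using database/sql with the lib/pq driver (the connection string and port are assumptions). The key detail is SET LOCAL, which scopes the work_mem change to a single transaction:

package main

import (
	"database/sql"
	"fmt"
	"log"
	"net/http"

	_ "github.com/lib/pq"
)

const query = `
SELECT p.player_id, SUM(ps.goals + ps.assists) AS total_score
FROM player_stats ps
INNER JOIN players p ON p.player_id = ps.player_id
GROUP BY p.player_id
ORDER BY total_score DESC
LIMIT 2000`

// queryHandler runs the ranking query inside a transaction with the given
// work_mem; SET LOCAL keeps the setting scoped to this transaction only.
func queryHandler(db *sql.DB, workMem string) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		tx, err := db.Begin()
		if err != nil {
			http.Error(w, err.Error(), http.StatusInternalServerError)
			return
		}
		// Rolling back a read-only transaction is harmless.
		defer tx.Rollback()

		if _, err := tx.Exec(fmt.Sprintf("SET LOCAL work_mem = '%s'", workMem)); err != nil {
			http.Error(w, err.Error(), http.StatusInternalServerError)
			return
		}

		rows, err := tx.Query(query)
		if err != nil {
			http.Error(w, err.Error(), http.StatusInternalServerError)
			return
		}
		defer rows.Close()

		var count int
		for rows.Next() {
			var playerID, totalScore int
			if err := rows.Scan(&playerID, &totalScore); err != nil {
				http.Error(w, err.Error(), http.StatusInternalServerError)
				return
			}
			count++
		}
		fmt.Fprintf(w, "returned %d players\n", count)
	}
}

func main() {
	// Connection string is an assumption; adjust it to your compose setup.
	db, err := sql.Open("postgres", "postgres://postgres:postgres@postgres:5432/postgres?sslmode=disable")
	if err != nil {
		log.Fatal(err)
	}

	http.HandleFunc("/low-work-mem", queryHandler(db, "64kB"))
	http.HandleFunc("/optimized-work-mem", queryHandler(db, "4MB"))

	log.Fatal(http.ListenAndServe(":8080", nil))
}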
Below is the docker-compose file with all the dependencies needed to run the load tests:
docker-compose.yaml
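The actual file is in the repository; here is a minimal sketch of what it needs to contain (image versions, credentials, and ports are assumptions):

# Sketch of the compose setup: database, API, and the k6 runner.
services:
  postgres:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: postgres
    ports:
      - "5432:5432"

  api:
    build: .
    depends_on:
      - postgres
    ports:
      - "8080:8080"

  k6:
    image: grafana/k6
    depends_on:
      - api
    environment:
      ENDPOINT: ${ENDPOINT}
    volumes:
      - ./k6-test.js:/k6-test.js
    command: run /k6-test.js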
We can set the ENDPOINT environment variable to choose the scenario to test: /low-work-mem or /optimized-work-mem. Run the tests with: docker compose up --abort-on-container-exit. For this example, I used Docker version 20.10.22.
I ran the load test against both endpoints: /low-work-mem (work_mem=64kB) and /optimized-work-mem (work_mem=4MB).
The results show that the endpoint with the higher work_mem outperformed the one with the lower setting: under the test load, p90 latency dropped by more than 43ms and throughput improved significantly.
If percentile metrics are new to you, I recommend studying them until they click; they are extremely helpful for guiding performance analysis.
After dreaming about the problem, waking up repeatedly to try new solutions, and finally discovering that work_mem could help, the next challenge was figuring out the right value for this configuration.
The default work_mem value of 4MB, like many other PostgreSQL settings, is conservative. It allows PostgreSQL to run on small machines with limited computing power. Even so, we must be careful not to crash the PostgreSQL instance with out-of-memory errors: a single query, if complex enough, can consume several multiples of the memory specified by work_mem, depending on how many operations such as sorts, merge joins, and hash joins (influenced by hash_mem_multiplier) it performs. As the official documentation notes:
This fact must be kept in mind when choosing the value. Sort operations are used for ORDER BY, DISTINCT, and merge joins. Hash tables are used in hash joins, hash-based aggregation, memoize nodes and hash-based processing of IN subqueries.
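To make that concrete, here's a rough worst-case estimate (the numbers are purely illustrative, not a sizing recommendation): with 100 concurrent queries, each performing 4 sort or hash operations under work_mem = 32MB, memory demand can approach 100 × 4 × 32MB = 12.8GB, and each hash operation may use even more, up to hash_mem_multiplier × work_mem.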
Unfortunately, there is no magic formula for setting work_mem. It depends on your system's available memory, workload, and query patterns. The TimescaleDB team maintains an auto-tuning tool, and the topic is widely discussed in the community.
But at the end of the day, IMHO, the answer is: test. Test today. Test tomorrow. Test forever. Keep testing until you find an acceptable value for your use case, one that speeds up queries without blowing up your database.