
PostgreSQL Performance Tuning: The Power of work_mem

Mary-Kate Olsen · 2024-12-05

Briefing

Years ago, I was tasked with solving a performance issue in a critical system for the company I worked at. It was a tough challenge: sleepless nights and even more hair loss. The backend used PostgreSQL, and after a lot of effort and digging, the solution turned out to be as simple as one line:

ALTER USER foo SET work_mem='32MB';

Now, to be honest, this might or might not solve your performance issue right away. It depends heavily on your query patterns and the workload of your system. However, if you're a backend developer, I hope this post adds another tool to your arsenal for tackling problems, especially with PostgreSQL.

In this post, we’ll create a scenario to simulate performance degradation and explore some tools to investigate the problem, like EXPLAIN, k6 for load testing, and even a dive into PostgreSQL’s source code. I’ll also share some articles to help you solve related issues.

  • ➡️ github repository with the complete implementation

Case Study

Let’s create a simple system to analyze soccer player performance. For now, the only business rule is to answer this question:

  • Who are the top N players involved in scoring the most?

The following SQL creates our data model and populates it:

CREATE TABLE players (
    player_id SERIAL PRIMARY KEY,
    nationality TEXT,
    age INT,
    position TEXT
);

CREATE TABLE matches (
    match_id SERIAL PRIMARY KEY,
    match_date DATE,
    home_team TEXT,
    away_team TEXT
);

CREATE TABLE player_stats (
    player_stat_id SERIAL PRIMARY KEY,
    player_id INT REFERENCES players(player_id),
    match_id INT REFERENCES matches(match_id),
    goals INT,
    assists INT,
    minutes_played INT
);

-- Populate players with a range of nationalities, ages, and positions
INSERT INTO players (nationality, age, position)
SELECT
    ('Country' || (1 + random()*100)::int),  -- 100 different nationalities
    (18 + random()*20)::int,                 -- Ages between 18 and 38
    (ARRAY['Forward', 'Midfielder', 'Defender', 'Goalkeeper'])[ceil(random()*4)::int]
FROM generate_series(1, 10000);

The script to initialize and populate the database is available in the github repository.
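
For context, here is a sketch of how matches and player_stats could be populated; the exact script is in the repository, and the row counts below are assumptions (the query plans later in the post show roughly 100,000 player_stats rows):

INSERT INTO matches (match_date, home_team, away_team)
SELECT
    CURRENT_DATE - (random()*365)::int,            -- matches over the past year
    ('Team' || (1 + random()*50)::int),
    ('Team' || (1 + random()*50)::int)
FROM generate_series(1, 1000);

INSERT INTO player_stats (player_id, match_id, goals, assists, minutes_played)
SELECT
    (1 + random()*9999)::int,   -- references one of the 10,000 players
    (1 + random()*999)::int,    -- references one of the matches above
    (random()*3)::int,
    (random()*2)::int,
    (45 + random()*45)::int
FROM generate_series(1, 100000);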

Yes, we could design our database differently to improve system performance, but the main goal here is to explore unoptimized scenarios. Trust me, you'll likely encounter systems like this, where either poor initial design choices or unexpected growth require significant effort to improve performance.

Debugging the problem

To simulate the issue related to the work_mem configuration, let’s create a query to answer this question: Who are the top 2000 players who contribute the most to scoring (goals plus assists)?

SELECT p.player_id, SUM(ps.goals + ps.assists) AS total_score
FROM player_stats ps
JOIN players p ON ps.player_id = p.player_id
GROUP BY p.player_id
ORDER BY total_score DESC
LIMIT 2000;

Alright, but how can we identify bottlenecks in this query? Like other DBMSs, PostgreSQL supports the EXPLAIN command, which shows the execution plan the query planner chose and, with the ANALYZE option, how each step actually performed.

We can analyze details such as:

  • What kind of scan was used? Index scan, Index Only scan, Seq Scan, etc.
  • Which index was used, and under what conditions?
  • If sorting is involved, what type of algorithm was used? Does it rely entirely on memory, or does it require disk usage?
  • The usage of shared buffers.
  • Estimated costs and, with the ANALYZE option, actual execution times.
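
To make the difference concrete: plain EXPLAIN only shows estimates, while the ANALYZE option actually executes the statement and reports real timings and buffer usage. A minimal, generic example (not our case-study query yet):

-- Estimates only: shows the chosen plan without running the query
EXPLAIN
SELECT * FROM players WHERE age > 30;

-- Actually executes the query and reports real timings and buffer usage
EXPLAIN (ANALYZE, BUFFERS, VERBOSE)
SELECT * FROM players WHERE age > 30;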

You can learn more about the PostgreSQL planner/optimizer here:

  • official documentation
  • pganalyze - basics of postgres query planning
  • cybertec - how to interpret postgresql explain

Talk is cheap

Talk is cheap, so let’s dive into a practical example. First, we’ll reduce work_mem to its smallest possible value, which is 64kB, as defined in the source code:

/*
 * workMem is forced to be at least 64KB, the current minimum valid value
 * for the work_mem GUC.  This is a defense against parallel sort callers
 * that divide out memory among many workers in a way that leaves each
 * with very little memory.
 */
state->allowedMem = Max(workMem, 64) * (int64) 1024;

Next, let’s analyze the output of the EXPLAIN command:

BEGIN;

SET LOCAL work_mem = '64kB';

EXPLAIN (ANALYZE, COSTS, VERBOSE, BUFFERS)
SELECT p.player_id, SUM(ps.goals + ps.assists) AS total_score
FROM player_stats ps
JOIN players p ON ps.player_id = p.player_id
GROUP BY p.player_id
ORDER BY total_score DESC
LIMIT 2000;

COMMIT;

-- Key lines from the resulting plan (the full output is reproduced in the
-- "Impact on the API" section below):
--   Sort Method: external merge  Disk: 280kB
--   Sort Method: external merge  Disk: 2208kB
--   Execution Time: 82.718 ms

We can see that the execution time was 82.718ms, and the Sort Algorithm used was external merge. This algorithm relies on disk instead of memory because the data exceeded the 64kB work_mem limit.

For your information, the tuplesort.c module flags when the Sort algorithm will use disk by setting the state to SORTEDONTAPE at this line. Disk interaction is handled by the logtape.c module.

If you're a visual person (like me), there are tools that can help you understand the EXPLAIN output, such as https://explain.dalibo.com/. Below is an example showing a node with the Sort step, including details like Sort Method: external merge and Sort Space Used: 2.2MB:

[Image: explain.dalibo.com visualization of the plan, showing a Sort node with "Sort Method: external merge" and "Sort Space Used: 2.2MB"]

The "Stats" section is especially useful for analyzing more complex queries, as it provides execution time details for each query node. In our example, it highlights a suspiciously high execution time—nearly 42ms—in one of the Sort nodes:

[Image: the "Stats" view on explain.dalibo.com, highlighting a Sort node that accounts for nearly 42ms of the execution time]

  • You can visualize and analyze this query plan here: https://explain.dalibo.com/plan/2gd0a8c8fab6a532#stats

As the EXPLAIN output shows, one of the main reasons for the performance problem is the Sort node using disk. A side effect of this issue, especially in systems with high workloads, is spikes in Write I/O metrics (I hope you’re monitoring these; if not, good luck when you need them!). And yes, even read-only queries can cause write spikes, as the Sort algorithm writes data to temporary files.
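
If you want to confirm that temporary files are behind those write spikes, PostgreSQL keeps counters for them. A quick way to check (changing log_temp_files requires superuser privileges, and 0 means "log every temporary file"):

-- Cumulative temporary-file usage per database
SELECT datname, temp_files, temp_bytes
FROM pg_stat_database;

-- Log the creation of every temporary file, so spikes can be correlated with queries;
-- apply with SELECT pg_reload_conf();
ALTER SYSTEM SET log_temp_files = 0;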

Solution

When we execute the same query with work_mem=4MB (the default in PostgreSQL), the execution time drops by roughly half.

EXPLAIN (ANALYZE, COSTS, VERBOSE, BUFFERS)
SELECT p.player_id, SUM(ps.goals + ps.assists) AS total_score
FROM player_stats ps
JOIN players p ON ps.player_id = p.player_id
GROUP BY p.player_id
ORDER BY total_score DESC
LIMIT 2000;

-- Key lines from the resulting plan (the full output is reproduced in the
-- "Impact on the API" section below):
--   Sort Method: top-N heapsort  Memory: 227kB
--   Buckets: 16384  Batches: 1  Memory Usage: 480kB
--   Execution Time: 41.998 ms

  • For a visual analysis, check this link: https://explain.dalibo.com/plan/b094ec2f1cfg44f6#

In this EXPLAIN output, one of the Sort nodes now uses an in-memory algorithm, heapsort. For context, the planner opts for heapsort only when it’s cheaper to execute than quicksort. You can dive deeper into the decision-making process in the PostgreSQL source code.

Additionally, the second Sort node, which previously accounted for almost 40ms of execution time, disappears entirely from the execution plan. This change occurs because the planner now selects a HashJoin instead of a MergeJoin, as the hash operation fits in memory, consuming approximately 480kB.

For more details about join algorithms, check out these articles:

  • HashJoin Algorithm
  • MergeJoin Algorithm

Impact on the API

The default work_mem isn’t always sufficient to handle your system’s workload. You can adjust this value at the user level using:

ALTER USER foo SET work_mem='32MB';

Note: If you’re using a connection pool or a connection pooler, it’s important to recycle old sessions for the new configuration to take effect.
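
For reference, work_mem can also be set at other scopes besides the user level; a quick summary (the database and role names below are placeholders):

ALTER SYSTEM SET work_mem = '32MB';         -- instance-wide default; apply with SELECT pg_reload_conf();
ALTER DATABASE mydb SET work_mem = '32MB';  -- default for new connections to this database
ALTER USER foo SET work_mem = '32MB';       -- default for new sessions of this role (the approach above)
SET work_mem = '32MB';                      -- current session only
SET LOCAL work_mem = '32MB';                -- current transaction only (what the API below does)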

You can also control this configuration at the database transaction level. Let’s run a simple API to understand and measure the impact of work_mem changes using load testing with k6:

  • k6-test.js
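
The script itself lives in the repository; below is only a rough sketch of what such a k6 test could look like. The ENDPOINT variable comes from the post, while the virtual-user count, duration, and BASE_URL are assumptions of this sketch:

    // Sketch of a k6 load test; the real k6-test.js is in the repository.
    import http from 'k6/http';
    import { check } from 'k6';

    export const options = {
      vus: 10,          // assumed number of virtual users
      duration: '60s',  // assumed test duration
    };

    const BASE_URL = __ENV.BASE_URL || 'http://api:8080';
    const ENDPOINT = __ENV.ENDPOINT || '/low-work-mem';

    export default function () {
      const res = http.get(`${BASE_URL}${ENDPOINT}`);
      check(res, {
        'status is 200': (r) => r.status === 200,
      });
    }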


The API was implemented in Go and exposes two endpoints that execute the query with different work_mem configurations:

  • main.go
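
The file isn’t reproduced here, so the following is only a minimal sketch of how the two endpoints might set work_mem per transaction with SET LOCAL. The endpoint paths come from the post; the driver (lib/pq), port, and DATABASE_URL environment variable are assumptions:

    // Sketch of main.go; the real implementation lives in the repository.
    package main

    import (
        "database/sql"
        "encoding/json"
        "log"
        "net/http"
        "os"

        _ "github.com/lib/pq" // assumption: lib/pq driver; the repository may use pgx instead
    )

    const topPlayersQuery = `
    SELECT p.player_id, SUM(ps.goals + ps.assists) AS total_score
    FROM player_stats ps
    JOIN players p ON ps.player_id = p.player_id
    GROUP BY p.player_id
    ORDER BY total_score DESC
    LIMIT 2000`

    type playerScore struct {
        PlayerID   int `json:"player_id"`
        TotalScore int `json:"total_score"`
    }

    // topPlayersHandler runs the ranking query inside a transaction that sets work_mem
    // with SET LOCAL, so the change never leaks to other transactions on the same session.
    func topPlayersHandler(db *sql.DB, workMem string) http.HandlerFunc {
        return func(w http.ResponseWriter, r *http.Request) {
            tx, err := db.BeginTx(r.Context(), nil)
            if err != nil {
                http.Error(w, err.Error(), http.StatusInternalServerError)
                return
            }
            defer tx.Rollback() // no-op if the transaction was already committed

            if _, err := tx.Exec("SET LOCAL work_mem = '" + workMem + "'"); err != nil {
                http.Error(w, err.Error(), http.StatusInternalServerError)
                return
            }

            rows, err := tx.Query(topPlayersQuery)
            if err != nil {
                http.Error(w, err.Error(), http.StatusInternalServerError)
                return
            }
            defer rows.Close()

            var result []playerScore
            for rows.Next() {
                var s playerScore
                if err := rows.Scan(&s.PlayerID, &s.TotalScore); err != nil {
                    http.Error(w, err.Error(), http.StatusInternalServerError)
                    return
                }
                result = append(result, s)
            }
            if err := rows.Err(); err != nil {
                http.Error(w, err.Error(), http.StatusInternalServerError)
                return
            }
            if err := tx.Commit(); err != nil {
                http.Error(w, err.Error(), http.StatusInternalServerError)
                return
            }

            w.Header().Set("Content-Type", "application/json")
            json.NewEncoder(w).Encode(result)
        }
    }

    func main() {
        // DATABASE_URL is an assumption, e.g. postgres://foo:foo@localhost:5432/football?sslmode=disable
        db, err := sql.Open("postgres", os.Getenv("DATABASE_URL"))
        if err != nil {
            log.Fatal(err)
        }

        http.HandleFunc("/low-work-mem", topPlayersHandler(db, "64kB"))
        http.HandleFunc("/optimized-work-mem", topPlayersHandler(db, "4MB"))
        log.Fatal(http.ListenAndServe(":8080", nil))
    }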


Below is the docker-compose file containing all the dependencies needed to run the load test:

  • docker-compose.yaml
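
The real file is in the repository; this is just a sketch of how the three services (database, API, k6) might be wired together, with image tags, credentials, ports, and file names as assumptions:

    # Sketch of the compose setup; the real docker-compose.yaml is in the repository.
    services:
      postgres:
        image: postgres:16
        environment:
          POSTGRES_USER: foo
          POSTGRES_PASSWORD: foo
          POSTGRES_DB: football
        volumes:
          - ./init.sql:/docker-entrypoint-initdb.d/init.sql

      api:
        build: .
        environment:
          DATABASE_URL: postgres://foo:foo@postgres:5432/football?sslmode=disable
        ports:
          - "8080:8080"
        depends_on:
          - postgres

      k6:
        image: grafana/k6
        environment:
          ENDPOINT: ${ENDPOINT:-/low-work-mem}
          BASE_URL: http://api:8080
        volumes:
          - ./k6-test.js:/scripts/k6-test.js
        command: run /scripts/k6-test.js
        depends_on:
          - api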


We can set the ENDPOINT environment variable to define the scenario to test: /low-work-mem or /optimized-work-mem. Run the test using: docker compose up --abort-on-container-exit. For this example, I used Docker version 20.10.22.
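
For example (assuming the compose file reads ENDPOINT from the environment, as in the sketch above):

    ENDPOINT=/optimized-work-mem docker compose up --abort-on-container-exit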

  • Test ENDPOINT: /low-work-mem - work_mem=64kB

    BEGIN; -- 1. Initialize a transaction.
    
    SET LOCAL work_mem = '64kB'; -- 2. Set work_mem at the transaction level; other transactions in the same session keep the default value (4MB).
    
    SHOW work_mem; -- 3. Check the modified work_mem value.
    
    EXPLAIN (ANALYZE, COSTS, VERBOSE, BUFFERS) -- 4. Run EXPLAIN with options that help us analyze and identify bottlenecks.
    SELECT 
        p.player_id, 
        SUM(ps.goals + ps.assists) AS total_score 
    FROM 
        player_stats ps
    INNER JOIN 
        players p ON p.player_id = ps.player_id
    GROUP BY 
        p.player_id
    ORDER BY 
        total_score DESC
    LIMIT 2000;
    --
    
    QUERY PLAN                                                                                                                                                          |
    --------------------------------------------------------------------------------------------------------------------------------------------------------------------+
    Limit  (cost=18978.96..18983.96 rows=2000 width=12) (actual time=81.589..81.840 rows=2000 loops=1)                                                                  |
      Output: p.player_id, (sum((ps.goals + ps.assists)))                                                                                                               |
      Buffers: shared hit=667, temp read=860 written=977                                                                                                                |
      ->  Sort  (cost=18978.96..19003.96 rows=10000 width=12) (actual time=81.587..81.724 rows=2000 loops=1)                                                            |
            Output: p.player_id, (sum((ps.goals + ps.assists)))                                                                                                         |
            Sort Key: (sum((ps.goals + ps.assists))) DESC                                                                                                               |
            Sort Method: external merge  Disk: 280kB                                                                                                                    |
            Buffers: shared hit=667, temp read=860 written=977                                                                                                          |
            ->  GroupAggregate  (cost=15076.66..17971.58 rows=10000 width=12) (actual time=40.293..79.264 rows=9998 loops=1)                                            |
                  Output: p.player_id, sum((ps.goals + ps.assists))                                                                                                     |
                  Group Key: p.player_id                                                                                                                                |
                  Buffers: shared hit=667, temp read=816 written=900                                                                                                    |
                  ->  Merge Join  (cost=15076.66..17121.58 rows=100000 width=12) (actual time=40.281..71.313 rows=100000 loops=1)                                       |
                        Output: p.player_id, ps.goals, ps.assists                                                                                                       |
                        Merge Cond: (p.player_id = ps.player_id)                                                                                                        |
                        Buffers: shared hit=667, temp read=816 written=900                                                                                              |
                        ->  Index Only Scan using players_pkey on public.players p  (cost=0.29..270.29 rows=10000 width=4) (actual time=0.025..1.014 rows=10000 loops=1)|
                              Output: p.player_id                                                                                                                       |
                              Heap Fetches: 0                                                                                                                           |
                              Buffers: shared hit=30                                                                                                                    |
                        ->  Materialize  (cost=15076.32..15576.32 rows=100000 width=12) (actual time=40.250..57.942 rows=100000 loops=1)                                |
                              Output: ps.goals, ps.assists, ps.player_id                                                                                                |
                              Buffers: shared hit=637, temp read=816 written=900                                                                                        |
                              ->  Sort  (cost=15076.32..15326.32 rows=100000 width=12) (actual time=40.247..49.339 rows=100000 loops=1)                                 |
                                    Output: ps.goals, ps.assists, ps.player_id                                                                                          |
                                    Sort Key: ps.player_id                                                                                                              |
                                    Sort Method: external merge  Disk: 2208kB                                                                                           |
                                    Buffers: shared hit=637, temp read=816 written=900                                                                                  |
                                    ->  Seq Scan on public.player_stats ps  (cost=0.00..1637.00 rows=100000 width=12) (actual time=0.011..8.378 rows=100000 loops=1)    |
                                          Output: ps.goals, ps.assists, ps.player_id                                                                                    |
                                          Buffers: shared hit=637                                                                                                       |
    Planning:                                                                                                                                                           |
      Buffers: shared hit=6                                                                                                                                             |
    Planning Time: 0.309 ms                                                                                                                                             |
    Execution Time: 82.718 ms                                                                                                                                    |
    
    COMMIT; -- 5. You could also ROLLBACK here, which is handy when you want to EXPLAIN ANALYZE data-modifying queries (INSERT, UPDATE, DELETE) without persisting their effects.
    
  • Test ENDPOINT: /optimized-work-mem - work_mem=4MB

    EXPLAIN (ANALYZE, COSTS, VERBOSE, BUFFERS) 
    SELECT 
        p.player_id, 
        SUM(ps.goals + ps.assists) AS total_score 
    FROM 
        player_stats ps
    INNER JOIN 
        players p ON p.player_id = ps.player_id
    GROUP BY 
        p.player_id
    ORDER BY 
        total_score DESC
    LIMIT 2000;
    --
    QUERY PLAN                                                                                                                                          |
    ----------------------------------------------------------------------------------------------------------------------------------------------------+
    Limit  (cost=3646.90..3651.90 rows=2000 width=12) (actual time=41.672..41.871 rows=2000 loops=1)                                                    |
      Output: p.player_id, (sum((ps.goals + ps.assists)))                                                                                               |
      Buffers: shared hit=711                                                                                                                           |
      ->  Sort  (cost=3646.90..3671.90 rows=10000 width=12) (actual time=41.670..41.758 rows=2000 loops=1)                                              |
            Output: p.player_id, (sum((ps.goals + ps.assists)))                                                                                         |
            Sort Key: (sum((ps.goals + ps.assists))) DESC                                                                                               |
            Sort Method: top-N heapsort  Memory: 227kB                                                                                                  |
            Buffers: shared hit=711                                                                                                                     |
            ->  HashAggregate  (cost=2948.61..3048.61 rows=10000 width=12) (actual time=38.760..40.073 rows=9998 loops=1)                               |
                  Output: p.player_id, sum((ps.goals + ps.assists))                                                                                     |
                  Group Key: p.player_id                                                                                                                |
                  Batches: 1  Memory Usage: 1169kB                                                                                                      |
                  Buffers: shared hit=711                                                                                                               |
                  ->  Hash Join  (cost=299.00..2198.61 rows=100000 width=12) (actual time=2.322..24.273 rows=100000 loops=1)                            |
                        Output: p.player_id, ps.goals, ps.assists                                                                                       |
                        Inner Unique: true                                                                                                              |
                        Hash Cond: (ps.player_id = p.player_id)                                                                                         |
                        Buffers: shared hit=711                                                                                                         |
                        ->  Seq Scan on public.player_stats ps  (cost=0.00..1637.00 rows=100000 width=12) (actual time=0.008..4.831 rows=100000 loops=1)|
                              Output: ps.player_stat_id, ps.player_id, ps.match_id, ps.goals, ps.assists, ps.minutes_played                             |
                              Buffers: shared hit=637                                                                                                   |
                        ->  Hash  (cost=174.00..174.00 rows=10000 width=4) (actual time=2.298..2.299 rows=10000 loops=1)                                |
                              Output: p.player_id                                                                                                       |
                              Buckets: 16384  Batches: 1  Memory Usage: 480kB                                                                           |
                              Buffers: shared hit=74                                                                                                    |
                              ->  Seq Scan on public.players p  (cost=0.00..174.00 rows=10000 width=4) (actual time=0.004..0.944 rows=10000 loops=1)    |
                                    Output: p.player_id                                                                                                 |
                                    Buffers: shared hit=74                                                                                              |
    Planning:                                                                                                                                           |
      Buffers: shared hit=6                                                                                                                             |
    Planning Time: 0.236 ms                                                                                                                             |
    Execution Time: 41.998 ms                                                                                                                           |
    

The results demonstrate that the endpoint with a higher work_mem outperformed the one with a lower configuration. The p90 latency dropped by over 43ms, and throughput improved significantly under the test workload.

If percentile metrics are new to you, I recommend studying and understanding them. These metrics are incredibly helpful for guiding performance analyses. Here are some resources to get you started:

  • k6 response time
  • p90 vs p99

Conclusion

After dreaming about the problem, waking up multiple times to try new solutions, and finally discovering that work_mem could help, the next challenge is figuring out the right value for this configuration.

The default value of 4MB for work_mem, like many other PostgreSQL settings, is conservative. This allows PostgreSQL to run on smaller machines with limited computational power. However, we must be cautious not to crash the PostgreSQL instance with out-of-memory errors. A single query, if complex enough, can consume multiple times the memory specified by work_mem, depending on the number of operations like Sorts, Merge Joins, Hash Joins (influenced by hash_mem_multiplier), and more. As noted in the official documentation:

“… it is necessary to keep this fact in mind when choosing the value. Sort operations are used for ORDER BY, DISTINCT, and merge joins. Hash tables are used in hash joins, hash-based aggregation, memoize nodes and hash-based processing of IN subqueries.”
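
As a rough, back-of-envelope sanity check (this is my own heuristic, not a formula from the documentation), you can compare a worst-case estimate against the memory left over after shared_buffers:

SHOW max_connections;      -- e.g. 100
SHOW work_mem;             -- e.g. 32MB
SHOW hash_mem_multiplier;  -- hash-based nodes may use up to work_mem * this value

-- Worst case is roughly:
--   max_connections * (sort/hash nodes per query) * work_mem
-- e.g. 100 connections * 2 nodes * 32MB ≈ 6.4GB, on top of shared_buffers and the OS cache.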

Unfortunately, there’s no magic formula for setting work_mem. It depends on your system’s available memory, workload, and query patterns. The TimescaleDB team provides a tool for autotuning, and the topic is widely discussed in the community. Here are some excellent resources to guide you:

  • Everything you know about work_mem is wrong
  • How should I tune work_mem for a given system

But at the end of the day, IMHO, the answer is: TEST. TEST TODAY. TEST TOMORROW. TEST FOREVER. Keep testing until you find an acceptable value for your use case that enhances query performance without blowing up your database.
