Let’s Make Jest Run Much Faster

Patricia Arquette
2024-10-31

But first, we need to understand why Jest is so slow.

Practical Example

Consider a simple React component.

import React from "react";
import { deepClone } from "./utils";

export function App() {
  const obj = { foo: 'bar' };

  return (
    <div>
      <p>Object looks like this: {JSON.stringify(deepClone(obj))}</p>
    </div>
  );
}

The App component depends on only one utility function, deepClone. The utils file looks like this:

import _ from 'lodash';
import moment from 'moment';
import * as mui from '@mui/material';

export const deepClone = (obj) => _.cloneDeep(obj);
export const getFormattedDate = (date) => moment(date).format('YYYY-MM-DD');

export const isButton = (instance) => instance === mui.Button;

It exports three one-line helper functions. That's it.
Now, here's a big question: How long do you think it will take to execute this test?

import React from "react";
import { render, screen } from "@testing-library/react";
import { App } from "./app";
import "@testing-library/jest-dom";

test("renders the app", () => {
  render(<App />);
});

The answer? An eternity!

 PASS  src/tests/react-app/react-app.test.js
  √ renders the app (25 ms)

Test Suites: 1 passed, 1 total
Tests:       1 passed, 1 total
Snapshots:   0 total
Time:        5.045 s

It took 5 seconds on my machine to execute a one-line test case for a one-line React component.

Analyzing Performance

To analyze what is happening behind the scenes, we can use Chrome's profiler - I recommend watching an insightful video by Kent C. Dodds on the topic.
Alternatively, you can use the jest-neat-runner library, which simplifies the profiling process. Set the NEAT_REPORT_MODULE_LOAD_ABOVE_MS option to 150 and enable NEAT_REPORT_TRANSFORM. This configuration prints out the modules that take more than 150 ms to load and reports how long it took to process (open and transpile) the files.
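For orientation, this is roughly how such a configuration might look. It is only a sketch: it assumes the NEAT_* options are passed through Jest's globals and that jest-neat-runner is registered as the runner/runtime the way its README describes, so double-check the exact setup there.

// jest.config.js - assumed setup for the profiling options described above
module.exports = {
  globals: {
    // print every module that takes longer than 150 ms to load
    NEAT_REPORT_MODULE_LOAD_ABOVE_MS: 150,
    // report how long it takes to open and transpile files
    NEAT_REPORT_TRANSFORM: true,
  },
};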

Let's use the latter. This is the output.

> jest src/tests/react-app/

From src\tests\react-app\utils.js -> @mui/material in 1759ms
From node_modules\@mui\material\node\styles\adaptV4Theme.js -> @mui/system in 509ms
From src\tests\react-app\react-app.test.js -> @testing-library/react in 317ms
From node_modules\@testing-library\react\dist\pure.js -> @testing-library/dom in 266ms
From node_modules\@mui\system\ThemeProvider\ThemeProvider.js -> @mui/private-theming in 166ms
From node_modules\@testing-library\dom\dist\role-helpers.js -> aria-query in 161ms

We're loading the @mui/material library for almost 2 seconds without even using it!

Root Cause In Many Projects?

Messy Dependencies

In my experience, performance problems with Jest mainly stem from a large number of transitive dependencies that aren't even used at runtime. As the example above shows, if you don't pay enough attention to which files you import into your application, you might end up in the same situation as I did.

In my case, the App component only depends on the deepClone utility function. However, since deepClone is exported from the utils file, all the dependencies within the utils file were also loaded along with it.

Files that contain a lot of loosely related functions and heavy dependencies might significantly slow down your application and tests.

Barrel Files

Jest does not play well with ESM modules, which leads it to fall back to CommonJS. Consequently, tree-shaking doesn't work. This is particularly problematic when relying on modules imported from barrel files (index files).
For instance, when you import a small component or function from a barrel file, Jest loads everything else the barrel re-exports as well - which causes unnecessary overhead.
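To make this concrete, here is a hypothetical barrel file (the module names are made up purely for illustration):

// components/index.js - a hypothetical barrel file
export { Button } from './Button';
export { DataGrid } from './DataGrid';     // heavy component
export { DatePicker } from './DatePicker'; // pulls in a date library

// elsewhere in the app (or in a test), importing only the Button
// still forces Jest, running in CommonJS mode, to load DataGrid,
// DatePicker, and everything they depend on:
import { Button } from './components';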

What Now?

Adjusting the Import Strategy Manually

Aside from removing barrel files and refactoring the codebase by breaking up files with numerous dependencies into smaller, more focused modules, we can identify the modules that take a long time to load and either look for smaller alternatives or check whether the imported library exposes its individual parts through separate entry points instead of only through the barrel file.
Meaning, instead of

import _ from 'lodash';
import * as mui from '@mui/material';

// pulls in all of lodash and all of @mui/material
export const deepClone = (obj) => _.cloneDeep(obj);
export const isButton = (instance) => instance === mui.Button;

do

import cloneDeep from 'lodash/cloneDeep';
import Button from '@mui/material/Button';

// loads only the parts we actually use
export const deepClone = (obj) => cloneDeep(obj);
export const isButton = (instance) => instance === Button;

If we're not using the module at all, we can mock it via jest.mock to avoid loading it completely, as shown in the sketch below. However, these adjustments can be quite time-consuming.
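A minimal sketch for the example above - since this test never exercises isButton, we can stub out @mui/material entirely so Jest never loads the real package:

import React from "react";
import { render } from "@testing-library/react";
import { App } from "./app";

// Replace @mui/material with an empty stub; the factory runs instead of loading
// the real package. This is only safe because nothing in this test uses mui.Button.
jest.mock("@mui/material", () => ({}));

test("renders the app", () => {
  render(<App />);
});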

Runtime Cache Approach

A more effective method involves using the jest-neat-runner library with the NEAT_RUNTIME_CACHE option. When this option is on, the library tracks the real runtime usage of all modules (per test file) and stores the dependencies that turn out to be unnecessary in a cache, so subsequent test runs skip loading them. Let's apply it to the test from the example above.
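Enabling it amounts to one more flag. As before, this is a sketch that assumes the option is passed via Jest's globals, so verify the details against the library's README.

// jest.config.js - assumed setup
module.exports = {
  globals: {
    // track real runtime usage and skip modules that were never needed
    NEAT_RUNTIME_CACHE: true,
  },
};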

We reduced the execution time from five seconds to two by skipping the loading of 26 unnecessary libraries, including the MUI library.
Be cautious - there are several caveats when using NEAT_RUNTIME_CACHE, so make sure to read the README before using it.

Other Optimization Techniques

Transpilation Optimizations: Examine how many files need to be transpiled and use the most effective transpiler available (such as SWC or esbuild). If you want to save time, the NEAT_REPORT_TRANSFORM option in jest-neat-runner provides detailed information on how much time transpilation takes and how many modules it involves.
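As an illustration, a common way to switch Jest's transform to SWC is via the @swc/jest package; this is a generic sketch rather than the article's actual configuration:

// jest.config.js - sketch: transpile with SWC instead of babel-jest
// (requires the @swc/core and @swc/jest dev dependencies)
module.exports = {
  transform: {
    "^.+\\.[jt]sx?$": "@swc/jest",
  },
};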

Caching Modules in Memory: By default, Jest does not cache modules in memory, meaning every test run must open, parse, and load each module again. If you have a vast suite of tests and enough memory, consider using the NEAT_TRANSFORM_CACHE option to speed things up.

What About the CI Pipeline?

Parallel Runs: CircleCI and GitHub Actions support parallel runs. This means you can spin up more machines and divide the load between them using Jest's shard parameter.
Storing the Jest and Neat Cache: This is crucial for taking advantage of Jest and jest-neat-runner in CI. Be sure to set the cacheDirectory option in Jest, store that directory after the test run, and restore it before running the tests. Caveat: if you're using parallelism, make sure you store a unique cache for each node. For instance, CircleCI exposes the CIRCLE_NODE_INDEX environment variable, which you can include in the cache key. A sketch of how this can look in CircleCI follows below.
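The following CircleCI configuration is a hypothetical sketch, not the article's actual pipeline: the Docker image, job name, parallelism level, cache key prefix, and cache path are all assumptions, and the cache path must match the cacheDirectory you set in the Jest config.

# .circleci/config.yml - hypothetical sketch; adapt names, keys, and paths to your project
version: 2.1
jobs:
  test:
    docker:
      - image: cimg/node:20.11   # assumed Node image
    parallelism: 4
    steps:
      - checkout
      - restore_cache:
          keys:
            # per-node cache so parallel runners do not clobber each other
            - jest-cache-v1-node{{ .Environment.CIRCLE_NODE_INDEX }}
      - run:
          name: Run the Jest shard for this node
          command: |
            # CIRCLE_NODE_INDEX is 0-based, Jest shards are 1-based
            npx jest --shard=$((CIRCLE_NODE_INDEX + 1))/$CIRCLE_NODE_COUNT
      - save_cache:
          key: jest-cache-v1-node{{ .Environment.CIRCLE_NODE_INDEX }}
          paths:
            - .jest-cache   # must match Jest's cacheDirectory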

By following these guidelines, you can significantly enhance Jest's performance in your projects.
