Evan You responds: Is Vite really 10 times slower than Turbopack?
Original: https://github.com/yyx990803/vite-vs-next-turbo-hmr/discussions/8
Author: Evan You (yyx990803)
A week ago, Vercel announced Turbopack, a Rust-based successor to webpack.
In the announcement, Turbopack claims to be "10x faster than Vite." This phrase is repeated across Vercel's marketing materials, including tweets, blog posts, and marketing emails sent to Vercel users. Turbopack's documentation also includes benchmark graphs, which initially showed Next.js 13 with Turbopack performing a React HMR update in 0.01s, versus 0.09s for Vite. There are also benchmarks for cold-start performance, but since no cold-start comparison showing a 10x advantage over Vite was found, we can only assume that "10x faster" is based on HMR performance.
Neither the marketing materials nor the documentation link to the benchmarks used to produce these numbers. So I got curious and decided to test the claim myself using the freshly released Next 13 and Vite 3.2. The code and methodology are open-sourced here.
The gist of my approach is to compare HMR performance by measuring the delta between the following two timestamps:
The time the source file was modified, observed by a separate Node.js process watching for file changes;
The time the updated React component re-renders, recorded by calling Date.now() directly in the component's render function. Note that this call happens during the component's virtual DOM render phase, so it is not affected by React reconciliation or actual DOM updates.
The benchmark also measured numbers for two different cases:
The "root" case, where the component imports 1,000 different child components and renders them together.
The "leaf" case, where the component is imported by the root but has no child components of its own.
Before getting into the numbers, there are a few additional differences worth mentioning:
Whether Next uses React Server Components (RSC).
Whether Vite uses SWC instead of Babel for React transforms.
Next 13 introduced a major architectural shift: components now default to server components unless the user explicitly opts into client mode with the "use client" directive. Not only is this the default, the Next documentation also recommends that users keep server component mode where possible, to improve end-user performance.
My initial benchmark tested the HMR performance of Next 13's root and leaf components in server mode. The results show that Next 13 is actually slower in both cases, and the difference in leaf components is significant.
Round 1 snapshot (Next w/ RSC, Vite w/ Babel)
When I posted these numbers on Twitter, it was quickly pointed out that I should benchmark the Next components without RSC to make the comparison apples-to-apples. So I added the "use client" directive to the Next root component to opt into client mode. Indeed, in client mode, Next's HMR improves significantly, 2x faster than Vite:
Round 2 snapshot (Next w/o RSC, Vite w/ Babel)
Our goal is for the benchmark to focus solely on HMR performance differences. To make sure we're actually comparing the same thing, we should eliminate one more variable: Vite's default React preset uses Babel to transform React HMR and JSX.
React HMR and JSX transforms are not features coupled to the build tool. They can be done via Babel (JS-based) or SWC (Rust-based). esbuild can also transform JSX but lacks support for HMR. SWC is significantly faster than Babel (20x single-threaded, 70x on multiple cores). The reason Vite currently defaults to Babel is a trade-off between installation size and practicality: SWC's installation size is quite large (58MB in node_modules, whereas Vite itself is only 19MB), and many users still rely on Babel for other transforms, so a Babel pass is unavoidable for them. Of course, this may change in the future.
Vite core does not depend on Babel; all it takes is replacing the default React plugin with vite-plugin-swc-react-refresh. After the switch, we see a significant improvement over Next in the root case:
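For reference, the swap is a small config change. Here is a sketch of what the Vite config might look like; the exact export name of the plugin and the jsxInject setting are assumptions based on the plugin's README at the time, so check the plugin's documentation:

```javascript
// vite.config.js — replace the default Babel-based React plugin with
// the SWC-based refresh plugin. The export name `swcReactRefresh` is
// an assumption; verify it against the plugin's README.
import { defineConfig } from 'vite';
import { swcReactRefresh } from 'vite-plugin-swc-react-refresh';

export default defineConfig({
  plugins: [swcReactRefresh()],
  // SWC handles React refresh + JSX; esbuild still needs the classic
  // JSX runtime injected unless configured otherwise.
  esbuild: { jsxInject: `import React from "react"` },
});
```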
Interestingly, the growth curve here shows that Next/Turbopack is 4x slower in the root case than in the leaf case, while Vite is only 2.4x slower. In other words, Vite's HMR holds up better as the component gets larger.
Additionally, switching to SWC should also improve Vite’s cold start metrics in the Vercel benchmark.
Because this is a composite test involving both Node.js and native Rust parts, results will vary considerably across hardware. The results I posted were collected on my M1 MacBook Pro. Other users have run the same benchmark on different hardware and reported different results:
In some cases, Vite was faster in the root case.
In other cases, Vite was significantly faster in both cases.
After I published my benchmark, Vercel published a blog post clarifying their benchmark methodology and making their benchmarks available for public verification. This is arguably a day-one thing, but it is definitely a step in the right direction.
After reading the post and benchmark code, here are a few key takeaways:
The Vite implementation still uses the default Babel-based React plugin.
In the 1,000-component case, there is a rounding problem with the numbers: Turbopack's 15ms is rounded to 0.01s, and Vite's 87ms is rounded to 0.09s. This widens a gap that was originally close to 6x into 10x.
Vercel's benchmark uses the "browser eval time" of the updated module as the end timestamp, rather than the React component re-render time.
The post includes a chart showing that Turbopack can be 10x faster than Vite when the total number of modules exceeds 30k.
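The rounding arithmetic in the second point above is easy to verify (a quick sanity check, not taken from either post):

```javascript
// Raw measurements as reported: Turbopack 15ms, Vite 87ms.
const turbopackMs = 15;
const viteMs = 87;

// The true ratio is under 6x.
const rawRatio = viteMs / turbopackMs; // 5.8

// Displayed as seconds with two decimal places (0.01s vs 0.09s),
// the apparent ratio becomes 9x, which marketing then calls "10x".
const displayedRatio = 0.09 / 0.01; // ≈ 9
```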
To sum up, "10x faster than Vite" can only hold under all of the following conditions:
Vite is not using the same SWC transform.
The application contains over 30k modules.
The benchmark only measures when hot-updated modules are evaluated, not when the changes are actually applied.
Since Vercel's benchmark measures "module evaluation time" precisely to exclude differences caused by React's HMR runtime, we can assume the benchmark's goal is a fair comparison of the HMR mechanisms inherent to Vite and Turbopack.
Unfortunately, given that premise, Vite still using Babel in the benchmark is unfair and invalidates the 10x claim. Until the numbers are corrected using SWC-transformed Vite, this should be considered an inaccurate test.
Also, I believe most people will agree:
For the vast majority of users, a 30k-module count is a highly unlikely scenario. With Vite using SWC, the number of modules required to reach the 10x claim may become even more impractical. While theoretically possible, it would be disingenuous to use this to justify Vercel's continued marketing claims.
Users care more about end-to-end HMR performance, i.e. the time from saving a file to seeing the change reflected, than about the theoretical "module evaluation" time. When seeing "updates 10x faster," the average user will think of the former, not the latter. Vercel conveniently omits this caveat in its marketing. In fact, the end-to-end HMR of server components (the default) in Next is actually slower than in Vite.
As a Vite author, I'm glad to see a well-funded company like Vercel investing heavily in improving front-end tools. We could even take advantage of Turbopack in Vite in the future if applicable. I believe healthy competition in the OSS space will ultimately benefit all developers.
However, I also believe that competition in open source software should be based on open communication, fair comparison and mutual respect. It is disappointing and concerning to see aggressive marketing using cherry-picked, non-peer-reviewed, borderline misleading numbers that are typically only seen in commercial competitions. As a company built on the success of OSS, I believe Vercel can do better.