Original text: https://github.com/yyx990803/vite-vs-next-turbo-hmr/discussions/8
Author: Evan You (Yuxi You)
A week ago, Vercel announced Turbopack, Webpack’s Rust-based successor.
In the announcement, Turbopack claims to be "10x faster than Vite." This phrase is repeated across Vercel's marketing materials, including tweets, blog posts, and marketing emails sent to Vercel users. Turbopack's documentation also includes benchmark graphs, which initially showed Next.js 13 with Turbopack performing a React HMR hot update in 0.01s, versus 0.09s for Vite. There are also cold-start benchmarks, but since none of them show a cold start 10x faster than Vite, we can only assume the "10x faster" claim is based on HMR performance.
Vercel's marketing materials and documentation do not link to the benchmarks behind these numbers. So I got curious and decided to test the claim myself using the just-released Next 13 and Vite 3.2. The code and methodology are open-sourced here.
The gist of my approach is to compare HMR performance by measuring the delta between the following two timestamps:
1. The time the source file is modified, observed by a separate Node.js process watching for file changes;
2. The time the updated React component re-renders, recorded by calling Date.now() directly inside the component's render function. Note that this call happens during the component's virtual DOM render phase, so it is not affected by React reconciliation or actual DOM updates.
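The two timestamps above can be sketched as follows. This is a heavily simplified stand-in (no real React or file watcher involved; in the actual benchmark the watcher is a separate Node.js process and the render instrumentation lives in a real React component):

```javascript
// Minimal sketch of the measurement. Two timestamps are compared:
//   modifiedAt: when the watcher process observes the source file change
//   renderedAt: Date.now() called directly inside the component's render
//               function, i.e. during the virtual DOM render phase, before
//               reconciliation or actual DOM updates
const timestamps = {};

// Stand-in for the file watcher: record the moment a change is seen.
function onFileChanged() {
  timestamps.modifiedAt = Date.now();
}

// Stand-in for the React component: record Date.now() in render.
function Leaf() {
  timestamps.renderedAt = Date.now();
  return { type: 'div', children: 'leaf' }; // plain vnode-like object
}

// The HMR delta is simply the difference between the two timestamps.
function hmrDelta() {
  return timestamps.renderedAt - timestamps.modifiedAt;
}

onFileChanged();
Leaf();
console.log('HMR delta (ms):', hmrDelta());
```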
The benchmark also measured the numbers for two different cases:
The "root" case, where the component imports 1,000 different child components and renders them together.
The "leaf" case, where the component is imported by the root but has no child components of its own.
Differences
Before getting into the numbers, there are a few additional differences worth mentioning:
Whether Next uses React Server Components (RSC).
Whether Vite uses SWC instead of Babel for React transforms.
React Server Components
Next 13 introduced a major architectural shift: components now default to server components unless the user explicitly opts into client mode with the "use client" directive. Not only is this the default, the Next documentation also recommends keeping server component mode where possible to improve end-user performance.
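Opting in looks like this in a Next 13 app. The component and file name here are hypothetical; the essential part is the directive at the top of the file, which is required for client-side features such as state hooks:

```jsx
// counter.jsx (hypothetical) — without the directive below, Next 13
// treats this as a server component by default.
'use client';

import { useState } from 'react';

export default function Counter() {
  const [count, setCount] = useState(0);
  return <button onClick={() => setCount(count + 1)}>Count: {count}</button>;
}
```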
My initial benchmark tested the HMR performance of Next 13's root and leaf components in server mode. The results show that Next 13 is actually slower in both cases, and the difference in leaf components is significant.
Round 1 snapshot (Next w/ RSC, Vite w/ Babel)
When I posted these numbers on Twitter, it was quickly pointed out that I should benchmark the Next components without RSC to make the comparison equal. So I added the "use client" directive in the Next root component to opt into client mode. Indeed, in client mode Next HMR improved significantly, to 2x faster than Vite:
Round 2 snapshot (Next w/o RSC, Vite w/ Babel)
SWC vs. Babel Transforms
Our goal is to make the benchmark only focus on HMR performance differences. To make sure we're actually comparing the same thing, we should also eliminate another variable: Vite's default React preset uses Babel to transform React HMR and JSX.
React HMR and JSX transforms are not features coupled to the build tool. They can be done via Babel (JS-based) or SWC (Rust-based). esbuild can also transform JSX but lacks support for HMR. SWC is significantly faster than Babel (20x single-threaded, 70x on multiple cores). The reason Vite currently defaults to Babel is a trade-off between installation size and practicality: SWC's installation size is quite large (58MB in node_modules, while Vite itself is only 19MB), and many users still rely on Babel for other transforms, so a Babel pass is inevitable for them. Of course, this may change in the future.
Vite core does not depend on Babel. All it takes is replacing the default React plugin with vite-plugin-swc-react-refresh. After the switch, we see significant improvement in Vite's root case over Next:
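The switch is a one-line change in the Vite config. A minimal sketch, assuming the plugin's published export name (check its README for the exact API):

```javascript
// vite.config.js — replace the default Babel-based React plugin with the
// SWC-based one; the named export swcReactRefresh is an assumption based
// on the plugin's README at the time.
import { defineConfig } from 'vite';
import { swcReactRefresh } from 'vite-plugin-swc-react-refresh';

export default defineConfig({
  plugins: [swcReactRefresh()],
});
```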
Interestingly, the growth curves here show that Next/Turbopack is 4x slower in the root case than in the leaf case, while Vite is only 2.4x slower. This means Vite HMR scales better with larger components.
Additionally, switching to SWC should also improve Vite’s cold start metrics in the Vercel benchmark.
Performance on different hardware
Because this is a composite test involving both Node.js and native Rust parts, results will vary considerably across different hardware. The results I posted were collected on my M1 MacBook Pro. Other users have run the same benchmark on different hardware and reported different results.
In some cases, Vite is faster in the root case.
In other cases, Vite is significantly faster in both cases.
Vercel’s clarification
After I published my benchmark, Vercel published a blog post clarifying their benchmark methodology and making their benchmark available for public verification. While this arguably should have been done on day one, it is definitely a step in the right direction.
After reading the post and benchmark code, here are a few key takeaways:
The Vite implementation still uses the default Babel-based React plugin.
In the 1k-component case, there is a rounding problem with the numbers: Turbopack's 15ms is rounded down to 0.01s, while Vite's 87ms is rounded up to 0.09s. This widened a gap that was originally close to 6x into 10x.
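The rounding effect can be reproduced numerically with the figures from Vercel's docs (15ms for Turbopack, 87ms for Vite in the 1k-component case):

```javascript
// Ratio before rounding vs. the ratio implied by the rounded figures.
const turbopackMs = 15;
const viteMs = 87;

const actual = viteMs / turbopackMs; // 5.8x

// Displayed as 0.01s vs 0.09s after rounding to two decimal places:
const displayed = 0.09 / 0.01; // ~9x, marketed as "10x"

console.log(`actual: ${actual.toFixed(1)}x, displayed: ~${Math.round(displayed)}x`);
```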
Vercel's benchmark uses the "browser eval time" of the update module as the end timestamp, not the React component re-render time.
The post includes a chart showing that Turbopack can be 10x faster than Vite when the total number of modules exceeds 30k.
To sum up, "10x faster than Vite" only holds under all of the following conditions:
Vite is not using the same SWC transform.
The application contains more than 30k modules.
The benchmark only measures the time until hot-updated modules are evaluated, not when the changes are actually applied.
What is a “fair” comparison?
Since Vercel's benchmark measures the "module evaluation time" of updates in order to exclude differences caused by React's HMR runtime, we can assume the benchmark's goal is a fair comparison of the HMR mechanisms inherent to Vite and Turbopack.
Unfortunately, given that premise, the fact that Vite still uses Babel in the benchmark is unfair and invalidates the 10x speed claim. The test should be considered inaccurate until the numbers are corrected using SWC-transformed Vite.
Also, I believe most people will agree:
For the vast majority of users, a 30k module count is a highly unlikely scenario. With Vite using SWC, the number of modules required to reach the 10x claim may become even more impractical. While theoretically possible, it would be disingenuous to use it to justify Vercel's overall marketing message.
Users care more about end-to-end HMR performance, i.e. the time from saving a file to seeing the change reflected, than about the theoretical "module evaluation" time. When seeing "updates 10x faster," the average user thinks of the former, not the latter, and Vercel conveniently omits this caveat in its marketing. In fact, the end-to-end HMR of server components (the default) in Next is slower than in Vite.
As a Vite author, I'm glad to see a well-funded company like Vercel investing heavily in improving front-end tools. We could even take advantage of Turbopack in Vite in the future if applicable. I believe healthy competition in the OSS space will ultimately benefit all developers.
However, I also believe that competition in open source software should be based on open communication, fair comparison and mutual respect. It is disappointing and concerning to see aggressive marketing using cherry-picked, non-peer-reviewed, borderline misleading numbers that are typically only seen in commercial competitions. As a company built on the success of OSS, I believe Vercel can do better.
(Original title: Evan You responds: Is Vite really 10 times slower than Turbopack?)
