How fast can a browser be?
React.js is famous for efficient UI rendering, and one important reason is that it maintains a virtual DOM. Users operate on the virtual DOM, and React.js uses a diff algorithm to work out the minimal set of operations it needs to perform on the browser DOM, avoiding the performance cost of large numbers of manual DOM modifications. But wait: a layer has clearly been added in the middle, so why does the result get faster? The core idea of React.js is that DOM operations are slow, so minimizing them is worth the overhead in exchange for an overall performance gain. That DOM operations are slow is obvious to everyone, but does the rest of the JavaScript necessarily run fast?
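The "minimal set of operations" idea can be sketched in a few lines. This is a toy diff over flat property maps, purely illustrative, and not React's actual reconciliation algorithm; all names here are made up for the example:

```javascript
// A toy diff between two "virtual DOM" property maps: it emits only the
// minimal list of patches instead of rewriting every attribute.
function diffProps(oldProps, newProps) {
  const patches = [];
  for (const key of Object.keys(newProps)) {
    if (oldProps[key] !== newProps[key]) {
      patches.push({ op: "set", key, value: newProps[key] });
    }
  }
  for (const key of Object.keys(oldProps)) {
    if (!(key in newProps)) {
      patches.push({ op: "remove", key });
    }
  }
  return patches;
}

// Only the changed attribute produces a patch; unchanged ones are skipped.
const patches = diffProps(
  { id: "app", class: "light" },
  { id: "app", class: "dark" }
);
// patches → [{ op: "set", key: "class", value: "dark" }]
```

Applying only the patches, rather than re-setting every attribute, is what spares the slow DOM the redundant work.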
Before V8 came out, the answer was no. Google's business model was built on the Web, and when it wrote an extremely complex web app like Gmail to run in the browser, it could not help but notice how unbearable browser performance was, largely because JavaScript executed too slowly. In September 2008, Google set out to change this by building its own JavaScript engine: V8. When the Chrome browser, powered by V8, hit the market, its speed left every other browser of the time far behind. This unprecedented leap in browser performance made complex web apps practical.
In the seven years since, browser performance has continued to rise along with CPU performance, but it has never again seen the breakthrough growth of 2008. What techniques does V8 use to improve JavaScript performance so dramatically?
V8's optimizations
To explain how JavaScript was made faster, we should first explain why it was slow. As is well known, JavaScript was developed by Brendan Eich in about ten days. Compared with Swift, the result of four years of work by a team at Apple, you should not have had high expectations for it in the first place. In fact, Brendan Eich did not realize he was designing a language that would grow to this scale. To give programmers flexibility when writing code, he made JavaScript a weakly typed language whose object properties can be added and deleted at runtime. The C++ concepts that stump so many people, such as inheritance, polymorphism, templates, virtual functions, and dynamic binding, simply do not exist in JavaScript.

So who does that work instead? Only the JavaScript engine can. Since it does not know variable types in advance, it performs a great deal of type inference at runtime. After the parser finishes its work and builds an abstract syntax tree (AST), the engine translates the AST into bytecode and hands it to a bytecode interpreter for execution. The interpreter stage is one of the biggest drags on performance. Looking back, didn't everyone know that interpreters were slow? They did, but the design reflected the prevailing belief of the time: JavaScript, a language created for designers (front-end engineers may feel a chill here), did not need high performance. An interpreter was cheap to build and met the demand of the day.
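The dynamism described above is easy to see directly. This snippet only illustrates the language behavior the engine must cope with, nothing engine-internal:

```javascript
// A property can change type, be added, and be removed at runtime,
// so the engine cannot know an object's layout ahead of time.
const point = { x: 7 };
point.x = "Hello";           // the same property now holds a string
point.y = 8;                 // properties can be added at any time
delete point.x;              // ...and removed again
console.log(typeof point.x); // "undefined"
```

Every one of these operations is legal, and each forces the engine to re-answer the question "what does this object look like in memory?" at runtime.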
The main work V8 does is to remove this slow stage: it compiles the AST directly into machine code the CPU can execute. This just-in-time compilation technique is called JIT (just-in-time). If you are curious enough, the natural question is: how exactly is this done?
Let’s give an example:
function Foo(x, y) {
  this.x = x;
  this.y = y;
}
var foo = new Foo(7, 8);
var bar = new Foo(8, 7);
foo.z = 9;
Property access
First, the data structure: how would you index an object's properties? We are all familiar with the key: value structure of JSON, but can it be indexed by key in memory? Can the location of the value in memory be determined? Of course it can, as long as you maintain a table for each object that records where in memory the value for each key lives.

The trap here is that you must maintain such a table for every single object. Why is that a problem? Let's look at how it is done in C.
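That per-object table can be sketched like this. The class and method names are invented for the illustration; this is a model of dictionary-style property storage, not V8's actual implementation:

```javascript
// A sketch of dictionary-mode property storage: every object carries its
// own key → slot table, and every read pays for a hash lookup.
class DictObject {
  constructor() {
    this.slots = [];        // where the values actually live
    this.table = new Map(); // per-object map from key to slot index
  }
  set(key, value) {
    if (!this.table.has(key)) this.table.set(key, this.slots.length);
    this.slots[this.table.get(key)] = value;
  }
  get(key) {
    return this.slots[this.table.get(key)]; // hash lookup on every access
  }
}

const obj = new DictObject();
obj.set("x", 7);
obj.set("y", 8);
// obj.get("y") → 8, but only after a Map lookup
```

The cost is twofold: memory for a table per object, and a hash lookup on every property read.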
struct Foo { int x, y; };
struct Foo foo, bar;
foo.x = 7; foo.y = 8;
bar.x = 8; bar.y = 7;
// Can't set foo.z
Think back to your college textbooks: the addresses of foo.x and foo.y can be computed directly, because the types of members x and y are fixed. In JavaScript, foo.x = "Hello" is perfectly legal; in C it is impossible.
V8 does not want to maintain such a table for every object. It also wants JavaScript to have the C/C++ property of reading a field directly via an offset. So its solution is to make dynamic types static. V8 implements a feature called hidden classes: it assigns each object a hidden class. For the foo object, it generates a class that looks something like this:
class Foo { int x, y; }
When a new bar object is created whose x and y properties happen to also be ints, it shares this hidden class with foo. Once the types are pinned down, reading a property is just a matter of adding an offset in memory. When foo later gains a z property, V8 notices that the old class no longer fits and creates a new hidden class for foo. Changing a property's type is handled similarly.
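The sharing-and-transition behavior can be modeled in a few lines. All names here are illustrative, and the "shape" objects only simulate the idea of hidden classes; V8's real implementation differs:

```javascript
// A toy model of hidden classes: objects with the same property layout
// share one "shape", and adding a property transitions to a new shape.
const rootShape = { offsets: {}, transitions: {} };

function addProperty(shape, key) {
  // Reuse an existing transition so identical layouts share a shape.
  if (!shape.transitions[key]) {
    const offsets = { ...shape.offsets, [key]: Object.keys(shape.offsets).length };
    shape.transitions[key] = { offsets, transitions: {} };
  }
  return shape.transitions[key];
}

function makeFoo(x, y) {
  const shape = addProperty(addProperty(rootShape, "x"), "y");
  return { shape, storage: [x, y] };
}

const foo = makeFoo(7, 8);
const bar = makeFoo(8, 7);
// foo and bar were built along the same transition chain...
console.log(foo.shape === bar.shape); // true
// ...until foo grows a z property and transitions to a new shape.
foo.shape = addProperty(foo.shape, "z");
foo.storage[foo.shape.offsets.z] = 9;
console.log(foo.shape === bar.shape); // false
```

Once two objects share a shape, reading `x` is just `storage[offsets.x]`, an indexed load rather than a dictionary lookup.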
Inline caching
From the above, whenever V8 accesses a property of an object, the first thing it must do is determine the object's current hidden class. Doing that determination from scratch every time is also expensive, so the obvious remedy, a solution computers reach for constantly, is caching. On the first access to a given object's property, V8 assumes that all other objects passing through the same piece of code also use this object's hidden class, and records that class's information at the call site. When another object comes through, if the check succeeds, a single instruction is enough to fetch the property; if it fails, V8 automatically backs out of the optimization. Expressed in code, the paragraph above becomes:
foo.x
# ebx = the foo object
cmp [ebx, <hidden class offset>], <cached hidden class>
jne <inline cache miss>
mov eax, [ebx, <cached x offset>]
This dramatically speeds up the V8 engine.
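The same check-then-fast-load logic as the assembly above can be simulated in JavaScript. This is a toy inline cache built on made-up `shape`/`storage` objects, not V8 internals:

```javascript
// A toy inline cache: remember the last-seen shape and its offset.
// On a hit, read straight from storage; on a miss, do the full lookup
// and refill the cache.
function makeCachedGetter(key) {
  let cachedShape = null;
  let cachedOffset = -1;
  return function get(obj) {
    if (obj.shape === cachedShape) {
      return obj.storage[cachedOffset]; // fast path: one indexed load
    }
    cachedShape = obj.shape;            // slow path: look up, then cache
    cachedOffset = obj.shape.offsets[key];
    return obj.storage[cachedOffset];
  };
}

// Two objects sharing one shape hit the cache after the first lookup.
const shape = { offsets: { x: 0, y: 1 } };
const a = { shape, storage: [7, 8] };
const b = { shape, storage: [8, 7] };
const getX = makeCachedGetter("x");
getX(a); // first call misses and fills the cache
// getX(b) now takes the fast path → 8
```

The win comes from the common case: in real code, the objects flowing through one call site almost always share a hidden class, so the cheap shape comparison nearly always succeeds.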
With Intel announcing the slowdown of its Tick-Tock model, CPU speeds can no longer grow as steadily as they used to. Can browsers keep getting faster? Are V8's optimizations the end of the road for browser performance?
The problem with JavaScript is its mistaken assumption that front-end engineers are unskilled programmers (if they were, you would probably not have read this far): it tries to make programmers comfortable at the cost of making computers suffer. Now that modern browser engines have been optimized to this degree, we cannot help but ask: why must it be JavaScript? Could front-end engineers give a little ground, take on slightly more work themselves, and let the engine optimize performance more effectively? JavaScript became the de facto browser scripting standard for historical reasons, but that cannot be an excuse to stop making progress.
When WebAssembly was officially announced, I knew for certain that it was not just an obscure programmer like me who had this idea; the best minds in the world had already started acting on it. Driven by enormous demand, browsers are moving toward ever higher performance. How fast can a browser really be? 2015 may be another turning point on that road.