
Interpreting Vitalik's New Article: Why Are Rollups Struggling to Develop While Blob Space Goes Underused?

WBOY
2024-04-01 20:16


How should we understand @VitalikButerin's new article on Ethereum scaling? Some even say it is outrageous that Vitalik would bring up Blob inscriptions.

So how do Blob data packets actually work? Why has blob space been underused since the Cancun upgrade? And how does DAS (data availability sampling) prepare the ground for sharding?

In my opinion, Vitalik is worried about Rollup development precisely because the capacity unlocked by the Cancun upgrade is sitting idle. Why? Let me share my understanding:

1) As explained many times before, a Blob is a temporary data package that is decoupled from EVM calldata and is accessed directly at the consensus layer. The direct benefit is that the EVM does not need to read Blob data when executing transactions, which lowers execution-layer computing costs.
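To make that decoupling concrete, here is a minimal Python sketch of the two halves of a "type 3" blob-carrying transaction, with field names taken from the EIP-4844 spec. The point is that the execution layer only ever sees `blob_versioned_hashes`, while the 128 KB blobs and their KZG commitments travel in a sidecar on the consensus/networking side:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class BlobTransaction:
    """EIP-4844 'type 3' transaction (execution-layer view).

    The EVM only ever sees the versioned hashes below; the blobs
    themselves are gossiped and stored on the consensus-layer side.
    """
    chain_id: int
    nonce: int
    max_priority_fee_per_gas: int
    max_fee_per_gas: int
    gas_limit: int
    to: bytes                           # blob txs must have a 'to' address
    value: int
    data: bytes                         # ordinary calldata, still allowed
    max_fee_per_blob_gas: int           # blobs have their own fee market
    blob_versioned_hashes: List[bytes]  # 32-byte reference per blob

@dataclass
class BlobSidecar:
    """What actually carries the data alongside the tx on the network."""
    blobs: List[bytes]            # each exactly 131072 bytes (128 KB)
    kzg_commitments: List[bytes]  # 48-byte KZG commitment per blob
    kzg_proofs: List[bytes]       # proof that each commitment matches its blob
```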

Balancing a series of factors, the size of one Blob is currently 128 KB. Since Cancun, a mainnet block targets 3 blobs and can carry at most 6. In the long run (full Danksharding), the goal is for a block to carry about 128 blob packets, roughly 16 MB.
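A quick back-of-the-envelope check of those numbers (a sketch: the 12-second slot time, the 128 KB blob size, and the Cancun-era target of 3 / cap of 6 blobs per block are protocol parameters; the rest is arithmetic):

```python
BLOB_SIZE = 128 * 1024          # 131072 bytes per blob
TARGET_BLOBS_PER_BLOCK = 3      # Cancun (EIP-4844) target
MAX_BLOBS_PER_BLOCK = 6         # Cancun hard cap
FULL_DANKSHARDING_BLOBS = 128   # long-term goal
SECONDS_PER_BLOCK = 12

print(BLOB_SIZE * FULL_DANKSHARDING_BLOBS / 2**20)  # 16.0 MB per block

blocks_per_day = 24 * 3600 // SECONDS_PER_BLOCK     # 7200 blocks
daily_target = BLOB_SIZE * TARGET_BLOBS_PER_BLOCK * blocks_per_day
print(daily_target / 2**30)                         # ~2.6 GB/day at target
```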

Therefore, each Rollup team must weigh factors such as the number of blobs per batch, its TPS capacity, and mainnet nodes' blob storage costs, aiming to use blob space at the best possible cost-performance ratio.

Take "Optimism" as an example. Currently, there are about 500,000 transactions a day. On average, a transaction is batched to the main network every 2 minutes, carrying 1 Blob data packet at a time. Why carry one? Because there are only so many TPSs that cannot be used up. Of course, you can also carry two. Then the capacity of each blob will not be full, but it will increase the storage cost, which is unnecessary.

What should Rollups do when off-chain transaction volume surges, say to 50 million transactions a day? 1. Compress each batch harder, fitting as many transactions as possible into the blob space; 2. Increase the number of blobs per batch; 3. Batch to mainnet more frequently. The sketch below shows how these levers interact.
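Plugging the 50-million-a-day scenario into the same toy model (again assuming a hypothetical ~150 bytes per compressed transaction):

```python
TX_PER_DAY = 50_000_000
AVG_COMPRESSED_TX = 150        # bytes, assumed; lever 1 shrinks this
BLOB_SIZE = 128 * 1024

daily_bytes = TX_PER_DAY * AVG_COMPRESSED_TX
blobs_needed = daily_bytes / BLOB_SIZE               # ~57,220 blobs/day
print(blobs_needed)

# Levers 2 and 3: spread those blobs over more, larger batches.
batches_per_day = 24 * 3600 // 120                   # 720 (every 2 min)
print(blobs_needed / batches_per_day)                # ~79 blobs/batch: impossible
print(blobs_needed / 7200)                           # ~8 per block if batching
# every 12s -- still above Cancun's 6-blob cap, so compression must improve too.
```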

2) The amount of data a mainnet block can carry is constrained by the gas limit and storage costs. 128 blobs per block is the ideal end state, and we are nowhere near it today: Optimism uses only one blob every 2 minutes. That leaves layer2 projects plenty of room to raise TPS, grow their user bases, and build out their ecosystems.

Therefore, for quite a while after the Cancun upgrade, Rollups will not need to compete fiercely over the number and frequency of blobs used, or over bidding for blob space.

The reason Vitalik mentioned Blobscriptions is that this type of inscription can temporarily inflate transaction volume, driving up demand for blob usage. Inscriptions simply make a vivid example for explaining how blobs work; what Vitalik really wants to convey has nothing to do with inscriptions themselves.

In theory, if some layer2 project were willing to bear the high cost of forging transaction batches, it could batch to mainnet at high frequency and volume, filling the blob space every time and crowding out other layer2s' normal blob usage. But under current conditions this is like buying hash power to mount a 51% attack on BTC: theoretically feasible, practically lacking any profit motive.
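One concrete reason the spam attack lacks a profit motive is EIP-4844's self-adjusting blob fee market: while blocks stay above the 3-blob target, the blob base fee compounds by up to roughly 12.5% per block. The sketch below adapts the fee-update pseudocode from the EIP-4844 spec:

```python
MIN_BLOB_BASE_FEE = 1                   # wei
BLOB_BASE_FEE_UPDATE_FRACTION = 3338477
TARGET_BLOB_GAS_PER_BLOCK = 3 * 131072  # 3 blobs
MAX_BLOB_GAS_PER_BLOCK = 6 * 131072     # 6 blobs

def fake_exponential(factor: int, numerator: int, denominator: int) -> int:
    """Integer approximation of factor * e^(numerator/denominator), per the EIP."""
    i, output, accum = 1, 0, factor * denominator
    while accum > 0:
        output += accum
        accum = accum * numerator // (denominator * i)
        i += 1
    return output // denominator

# An attacker stuffs 6 blobs into every block: excess blob gas accumulates
# and the base fee compounds by ~12.5% per block.
excess = 0
for _ in range(100):
    excess = max(0, excess + MAX_BLOB_GAS_PER_BLOCK - TARGET_BLOB_GAS_PER_BLOCK)
fee = fake_exponential(MIN_BLOB_BASE_FEE, excess, BLOB_BASE_FEE_UPDATE_FRACTION)
print(fee)  # after 100 full blocks, ~e^11.8, i.e. ~130,000x the minimum fee
```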

Therefore, layer2 gas costs will stay in a "low" range for a long time, giving the layer2 market a golden window to "build up troops and provisions."

3) So what happens if one day the layer2 market prospers to the point that daily batched transactions reach an enormous volume, and the current blob capacity is no longer enough? Ethereum has already prepared a solution: data availability sampling (DAS):

Put simply, data that originally had to be stored by every single node can instead be split across multiple nodes: for example, each node stores 1/8 of all blob data, and 8 nodes together form a group that provides full DA capability, which effectively multiplies current blob storage capacity by 8. This is essentially what sharding (Danksharding) will do in the future.
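A toy sketch of the sampling intuition. Real DAS extends the data with Reed-Solomon erasure coding and proves chunks against KZG commitments so that missing pieces can be reconstructed and verified; the toy below only shows the "split across 8 nodes, then spot-check" idea:

```python
import random
from typing import List, Optional

NUM_NODES = 8

def distribute(blob: bytes, num_nodes: int = NUM_NODES) -> List[Optional[bytes]]:
    """Split a blob into equal chunks, one per node (no erasure coding here)."""
    chunk = len(blob) // num_nodes
    return [blob[i * chunk:(i + 1) * chunk] for i in range(num_nodes)]

def sample_availability(shares: List[Optional[bytes]], samples: int = 4) -> bool:
    """Spot-check random chunks; a withheld chunk is modeled as None.

    With erasure coding on top, every successful random sample raises
    confidence exponentially that the whole blob can be reconstructed.
    """
    for _ in range(samples):
        if shares[random.randrange(len(shares))] is None:
            return False
    return True

blob = bytes(128 * 1024)        # one 128 KB blob
shares = distribute(blob)       # each node stores 1/8 = 16 KB
print(len(shares[0]), sample_availability(shares))  # 16384 True
```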

But Vitalik has now repeated this point many times, and rather pointedly, as if warning layer2 project teams: stop complaining that Ethereum's DA capacity is expensive; at your current TPS you have not pushed blob capacity anywhere near its limit. Hurry up and build out your ecosystems, grow your users and transaction volume, and stop thinking about ditching Ethereum DA for one-click chain launches.

Vitalik later added that among today's core rollups only Arbitrum has reached Stage 1, while @DeGateDex, Fuel, and a few others have reached Stage 2 but are not yet widely known. Stage 2 is the end goal for rollup security, yet very few rollups have even reached Stage 1 and most sit at Stage 0. You can see why the state of the rollup industry genuinely worries Vitalik.

4) In fact, on the scaling-bottleneck front, rollup layer2 solutions still have plenty of room to improve performance:

1. Use blob space more efficiently through data compression. OP-Rollups currently run a dedicated compressor component for this; ZK-Rollups compress off-chain by construction, since what they submit to mainnet is a SNARK/STARK proof rather than raw transaction data;
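A minimal illustration of the compression lever using Python's standard-library zlib (the OP Stack batcher has used zlib-style compression for its channel data; the toy payload below is deliberately repetitive, so treat the ratio as an upper bound, not a real-world number):

```python
import zlib

# Fake a batch: 700 look-alike transfer txs. Real batches compress well
# because sender/recipient addresses, gas fields, etc. repeat heavily.
txs = [
    bytes.fromhex("02f87083aa36a7") + i.to_bytes(2, "big") * 40
    for i in range(700)
]
raw = b"".join(txs)

compressed = zlib.compress(raw, 9)
print(len(raw), len(compressed), f"{len(compressed) / len(raw):.1%}")
# Repetitive payloads like this shrink dramatically; real-world batches
# see smaller but still significant gains.
```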

2. Minimize layer2's dependence on the mainnet, relying on optimistic-proof techniques for L2 security only in special circumstances. For example, Plasma keeps most of its data off-chain, but deposits and withdrawals go through the mainnet, so the mainnet can still guarantee their security.

This means layer2 only needs to anchor critical operations such as deposits and withdrawals tightly to the mainnet. That both reduces the burden on the mainnet and improves L2's own performance. The parallel Sequencer processing mentioned earlier, off-chain verification, sorting and pre-processing of large transaction volumes, and the hybrid rollup approach promoted by @MetisL2 (routine transactions via OP-Rollup, special withdrawal requests via a ZK route) all follow similar considerations.

That's all.

It must be said that Vitalik's article reflecting on Ethereum's future scaling roadmap is very instructive. In particular, he is dissatisfied with the current state of layer2 development, optimistic about Blob performance, and looking forward to future sharding technology; he even pointed out several directions worth optimizing for layer2.

In fact, the only remaining uncertainty now lies with layer2 itself: how can it speed up its own development?

