By Lars Hofhansl

Modern CPU cores can execute hundreds of instructions in the time it takes to reload the L1 cache. "RAM is the new disk" as a coworker at Salesforce likes to say. The L1-cache is the new RAM I might add.

As we add more and more CPU cores, we can easily become memory-IO bound unless we are careful.

Many common problems I have seen over the years were related to:
  1. concurrency problems
    Aside from safety and liveness considerations, a typical problem is too much synchronization limiting potential parallel execution.
  2. unneeded or unintended memory barriers
    Memory barriers are required in Java by the following language constructs (a small illustration follows after this list):
    • synchronized - sets read and write barriers as needed (details depend on JVM, version, and settings)
    • volatile - sets a read barrier before a read of a volatile, and a write barrier after a write
    • final - sets a write barrier after the assignment
    • AtomicInteger, AtomicLong, etc. - use volatiles and hardware CAS instructions
  3. unnecessary, unintended, or repeated memory copies or accesses
    Memory copying is common in Java, for example because of the lack of in-array pointers, or simply out of general unawareness and the expectation that the "garbage collector will clean up the mess." Well, it does, but not without a price.
(Entire collections of books are dedicated to each of these topics, so I won't embarrass myself by going into more detail.)
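
Here is a minimal Java sketch of the constructs in point 2 - illustrative code only, not anything from HBase, with made-up class and field names - contrasting a plain field with the barrier-inducing alternatives:

    import java.util.concurrent.atomic.AtomicLong;

    public class BarrierExamples {
        private long plainCount;                        // no barriers; not safe across threads
        private volatile long volatileCount;            // barriers on every read and write
        private final AtomicLong atomicCount = new AtomicLong(); // volatile semantics plus CAS

        void onEvent() {
            plainCount++;                  // cheapest, but only valid single-threaded
            volatileCount++;               // pays the barriers, and is still not atomic
            atomicCount.incrementAndGet(); // correct under concurrency, but a CAS in a hot path
        }
    }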

Like any software project of reasonable size, HBase has problems of all the above categories.

Profiling in Java has become extremely convenient. Just start jVisualVM, which ships with the Sun/Oracle JDK, pick the process to profile (in my case a local HBase regionserver), and start profiling.

Over the past few weeks I did some on-and-off profiling in HBase, which led to the following issues:

HBASE-6603 - RegionMetricsStorage.incrNumericMetric is called too often

Ironically, here it was the collection of a performance metric that caused a measurable slowdown of up to 15% (!) for very wide rows (> 10k columns).
The metric was maintained as an AtomicLong, which introduced a memory barrier in one of the hottest code paths in HBase.
The good folks at Facebook found the same issue at roughly the same time. (It turns out that they were also... uhm... the folks who introduced the problem.)
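
The general shape of the fix is easy to sketch - this is an illustration of the pattern, not the actual HBASE-6603 patch, and all names are hypothetical: accumulate in a local variable inside the hot loop and publish to the shared AtomicLong once per row, so the barrier and CAS are paid once rather than once per column.

    import java.util.List;
    import java.util.concurrent.atomic.AtomicLong;

    public class MetricSketch {
        private final AtomicLong bytesScanned = new AtomicLong();

        // Before: one CAS and memory barrier per column of a very wide row.
        void scanRowNaive(List<byte[]> columns) {
            for (byte[] col : columns) {
                bytesScanned.addAndGet(col.length);
            }
        }

        // After: accumulate locally, publish once per row.
        void scanRowBatched(List<byte[]> columns) {
            long local = 0;
            for (byte[] col : columns) {
                local += col.length;
            }
            bytesScanned.addAndGet(local);
        }
    }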

HBASE-6621 - Reduce calls to Bytes.toInt

A KeyValue (the data structure that represents "columns" in HBase) is currently backed by a single byte[]. The sizes of the various parts are encoded in this byte[] and have to be read and decoded, each time costing an extra memory access. In many cases that can be avoided, leading to a slight performance improvement.
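
As an illustration of the idea only (not the actual KeyValue layout or the patch), the pattern is to decode a length field once and reuse the decoded value, instead of re-reading and re-decoding the same bytes on every access; the helper below mirrors what a Bytes.toInt-style call does, and the offsets are made up:

    public class DecodeSketch {
        // Mirrors what a Bytes.toInt-style helper does: four memory accesses plus shifting.
        static int toInt(byte[] b, int off) {
            return ((b[off] & 0xff) << 24) | ((b[off + 1] & 0xff) << 16)
                 | ((b[off + 2] & 0xff) << 8) | (b[off + 3] & 0xff);
        }

        // Before: each accessor re-reads and re-decodes the same bytes, so code that
        // calls getKeyLength() several times pays for several decodes.
        static int getKeyLength(byte[] buf, int offset)   { return toInt(buf, offset); }
        static int getValueLength(byte[] buf, int offset) { return toInt(buf, offset + 4); }

        // After: decode once into locals and reuse the decoded values.
        static void processCell(byte[] buf, int offset) {
            int keyLen = getKeyLength(buf, offset);
            int valLen = getValueLength(buf, offset);
            // ... use keyLen and valLen repeatedly without touching buf again ...
        }
    }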

HBASE-6711 - Avoid local results copy in StoreScanner

All references pertaining to a single row (i.e. KeyValues with the same row key) were copied at the StoreScanner layer. Removing this copy led to another slight performance increase with wide rows.
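
The general shape of that change, sketched with made-up names rather than the real StoreScanner code: let the lower layer fill the caller's result list directly instead of building a temporary list and copying it over.

    import java.util.ArrayList;
    import java.util.List;

    class ResultsSketch<CELL> {
        // Before: accumulate into a temporary list, then copy into the caller's list.
        void nextWithCopy(List<CELL> outResults) {
            List<CELL> local = new ArrayList<>();
            fillCurrentRow(local);
            outResults.addAll(local); // an extra pass over every cell of the row
        }

        // After: hand the caller's list straight to the lower layer.
        void nextDirect(List<CELL> outResults) {
            fillCurrentRow(outResults);
        }

        void fillCurrentRow(List<CELL> results) {
            // ... append the KeyValues of the current row ...
        }
    }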

HBASE-7180 - RegionScannerImpl.next() is inefficient

This introduces a mechanism for coprocessors to access RegionScanners at a lower level, allowing them to skip a lot of unnecessary setup for each next() call. In tight loops a coprocessor can make use of this new API to save another 10-15%.
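
A rough illustration of how a coprocessor might drive such a lower-level scanner in a tight loop; the interface and method names below are hypothetical, not the actual 0.94 API:

    import java.io.IOException;
    import java.util.ArrayList;
    import java.util.List;

    // Hypothetical lower-level scanner: per-call setup (locks, fresh contexts) is
    // done once outside the loop rather than on every next() call.
    interface LowLevelScanner<CELL> {
        /** Fills 'results' with the next row's cells; returns false when exhausted. */
        boolean nextRow(List<CELL> results) throws IOException;
    }

    class RowCountSketch {
        static <CELL> long countRows(LowLevelScanner<CELL> scanner) throws IOException {
            List<CELL> row = new ArrayList<>();
            long rows = 0;
            boolean more;
            do {
                row.clear();
                more = scanner.nextRow(row);
                if (!row.isEmpty()) {
                    rows++;
                }
            } while (more);
            return rows;
        }
    }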

HBASE-7279 - Avoid copying the rowkey in RegionScanner, StoreScanner, and ScanQueryMatcher

The row key of a KeyValue was copied in the various scan related classes. To reduce that effect the row key was previously cached in the KeyValue class - leading to extra memory required for each KeyValue.
This change avoids all copying and hence also obviates the need for caching the row key.
A KeyValue is now hardly more than an array pointer (a byte[], an offset, and a length), and no data is copied any longer all the way from the block loaded from disk or cache to the RPC layer (unless the KeyValues are optionally encoded on disk, in which case they still need to be decoded in memory - we're working on improving that too).
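
A minimal sketch of such an "array pointer" cell, based only on the description above (this is not the real KeyValue class): a reference into the shared block buffer plus an offset and a length, with copying only when a caller explicitly asks for it.

    import java.util.Arrays;

    public final class CellView {
        private final byte[] buf;  // shared block buffer (loaded from disk or block cache)
        private final int offset;  // start of this cell within buf
        private final int length;  // total length of this cell

        public CellView(byte[] buf, int offset, int length) {
            this.buf = buf;
            this.offset = offset;
            this.length = length;
        }

        // Accessors expose positions into the shared buffer instead of copies.
        public byte[] buffer()  { return buf; }
        public int getOffset()  { return offset; }
        public int getLength()  { return length; }

        // Copying is explicit and happens only when really needed.
        public byte[] copy() {
            return Arrays.copyOfRange(buf, offset, offset + length);
        }
    }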

Previously the size of a KeyValue on the scan path was at least 116 bytes + the length of the row key (which can be arbitrarily long). Now it is a flat ~60 bytes, including its own reference.
(Remember that during the course of a large scan we might create millions or even billions of KeyValue objects.)

This is a nice improvement, both in terms of scan performance (15-20% for small row keys of a few bytes, much more for large ones) and in terms of produced garbage.
Since all copying is avoided, scanning now scales almost linearly with the number of cores.

HBASE-6852 - SchemaMetrics.updateOnCacheHit costs too much while full scanning a table with all of its fields

Other folks have been busy too. Here Cheng Hao found another problem with a scan related metric that caused a noticeable slowdown (even though I did not believe him at first).
This removed another set of unnecessary memory barriers.

HBASE-7336 - HFileBlock.readAtOffset does not work well with multiple threads

This is a slightly different issue, caused by bad synchronization of the FSReader associated with a Storefile. There is only a single reader per storefile. So if the file's blocks are not cached - possibly because the scan indicated that it wants no caching, because it expects to touch too many blocks - the scanner threads end up competing for read access to the store file. That led to outright terrible performance, such as scanners timing out even with just two scanners accessing the same file in a tight loop.
This patch is a stopgap measure: attempt to acquire the lock on the reader; if that fails, switch to HDFS positional reads, which can read at an offset without affecting the state of the stream and hence require no locking.
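
A sketch of that stopgap pattern, assuming a Hadoop FSDataInputStream as the shared reader; the class and field names are illustrative, not the actual HFileBlock code:

    import java.io.IOException;
    import java.util.concurrent.locks.ReentrantLock;
    import org.apache.hadoop.fs.FSDataInputStream;

    public class BlockReadSketch {
        private final ReentrantLock streamLock = new ReentrantLock();
        private final FSDataInputStream in;

        public BlockReadSketch(FSDataInputStream in) {
            this.in = in;
        }

        public int readBlock(long fileOffset, byte[] dest) throws IOException {
            if (streamLock.tryLock()) {
                try {
                    // Got exclusive access: seek-and-read on the shared stream.
                    in.seek(fileOffset);
                    return in.read(dest, 0, dest.length);
                } finally {
                    streamLock.unlock();
                }
            }
            // Lock is contended: a positional read leaves the stream state untouched,
            // so concurrent scanner threads do not block each other.
            return in.read(fileOffset, dest, 0, dest.length);
        }
    }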

Summary

Together these various changes can lead to a ~40-50% scan performance improvement when using a single core, and even more when using multiple cores on the same machine (as is the case with HBase).

An entirely unscientific benchmark

20m rows, with two column families of just a few dozen bytes each.

I performed two tests:

1. A scan that returns rows to the client

2. A scan that touches all rows via a filter but does not return anything to the client.

(This is useful to gauge the actual server-side performance.)


Further, I tested with (1) no caching, all reads from disk; (2) all data in the OS cache; and (3) all data in HBase's block cache.


I compared 0.94.0 against the current 0.94 branch (what I will soon release as 0.94.4).


Results:

  • Scanning with scanner caching set to 10000:
    • 0.94.0
      no data in cache: 54s
      data in OS cache: 51s
      data in block cache: 35s
    • 0.94.4-snapshot
      no data in cache: 50s (IO bound between disk and network)
      data in OS cache: 43s
      data in block cache: 32s
      (limiting factor was shipping the results to the client)
  • All data filtered at the server (with a SingleColumnValueFilter that does not match anything, so each row is still scanned):
    • 0.94.0
      no data in cache: 31s
      data in OS cache: 25s
      data in block cache: 11s
    • 0.94.4-snapshot
      no data in cache: 22s
      data in OS cache: 17s
      data in block cache: 6.3s

I have not quantified the same with multiple concurrent scanners yet.

So as you can see, scan performance has improved significantly since 0.94.0.

Salesforce just hired some performance engineers from a well-known chip manufacturer, and I plan to get some of their time to analyze HBase in even more detail, to track down memory stalls, etc.
