
Hadoop + Terracotta BigMemory: Run, Elephant, Run!


Hadoop + Terracotta BigMemory: Run, Elephant, Run!:

While Hadoop is great for batch processing and storage of very large data sets, it can take hours to produce results. […] To address this challenge, Terracotta recently announced the BigMemory-Hadoop Connector, a game-changing solution that lets Hadoop jobs write data directly into BigMemory, Terracotta’s in-memory data management platform. This enables downstream applications to get instant access to Hadoop results by reading from BigMemory. Hadoop jobs also execute faster, as they can now write to memory instead of disk (HDFS). The result can be a significant boost in competitive advantage and enterprise profitability.
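The announcement contains no code, but the idea is easy to sketch. Below is a minimal, hypothetical example of a Hadoop reducer that writes its aggregates into an Ehcache cache (the API BigMemory builds on) in addition to HDFS. The cache name "hadoopResults" and the word-count-style aggregation are assumptions for illustration; the actual BigMemory-Hadoop Connector wires this up through its own output format rather than hand-written cache calls.

```java
import java.io.IOException;

import net.sf.ehcache.Cache;
import net.sf.ehcache.CacheManager;
import net.sf.ehcache.Element;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

// Sketch of a reducer that pushes results into an in-memory cache so
// downstream applications can read them as soon as they are produced.
public class CacheWritingReducer extends Reducer<Text, IntWritable, Text, IntWritable> {

    private Cache resultsCache;

    @Override
    protected void setup(Context context) {
        // CacheManager.create() loads ehcache.xml from the classpath; we assume it
        // defines a cache named "hadoopResults" (a BigMemory-backed cache would add
        // off-heap settings such as maxBytesLocalOffHeap there).
        resultsCache = CacheManager.create().getCache("hadoopResults");
    }

    @Override
    protected void reduce(Text key, Iterable<IntWritable> values, Context context)
            throws IOException, InterruptedException {
        int sum = 0;
        for (IntWritable value : values) {
            sum += value.get();
        }
        // Write the aggregate straight to memory rather than waiting on HDFS output.
        resultsCache.put(new Element(key.toString(), sum));
        // Optionally still emit to HDFS for durability.
        context.write(key, new IntWritable(sum));
    }
}
```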

Think about online applications: when the database gets slow, you add a caching layer. A similar direction looks very tempting for most in-memory data-grid solutions.
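On the reading side, a downstream application would get at the results with a plain cache lookup. A hypothetical sketch, again using the Ehcache API and the assumed "hadoopResults" cache name:

```java
import net.sf.ehcache.Cache;
import net.sf.ehcache.CacheManager;
import net.sf.ehcache.Element;

// A downstream application reads a job result with a cache lookup
// instead of scanning HDFS output files.
public class ResultsReader {
    public static void main(String[] args) {
        Cache cache = CacheManager.create().getCache("hadoopResults");
        Element hit = cache.get("some-key"); // hypothetical key written by the job
        if (hit != null) {
            System.out.println("count = " + hit.getObjectValue());
        }
    }
}
```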

The top speed of an African bush elephant is 24.9 mph (40 km/h), according to this.

Original title and link: Hadoop + Terracotta BigMemory: Run, Elephant, Run! (NoSQL database❖myNoSQL)
