
Hadoop is used for distributed computing: what is it?

(*-*)浩 (Original)
2019-11-18 14:01:23


What is Hadoop?

(1) Hadoop is an open source framework for writing and running distributed applications that process large-scale data. It is designed for offline analysis of large datasets, not for online transaction processing workloads that randomly read and write a few records at a time.

Hadoop = HDFS (a distributed file system for data storage) + MapReduce (a framework for data processing). Hadoop can take data in any form as its source, and it handles semi-structured and unstructured data with better performance and more flexibility than relational databases. Whatever form the data arrives in, it is ultimately converted into key/value pairs, the basic data unit in MapReduce.
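As a minimal sketch of that key/value model, here is a word-count mapper written against the Hadoop MapReduce Java API (the class name and whitespace tokenization are illustrative choices, not part of the original article). Each line of raw input reaches the mapper as a (byte offset, line) pair and is re-emitted as (word, 1) pairs:

```java
// A minimal word-count mapper: each line of raw text arrives as a
// (byte offset, line) key/value pair and is re-emitted as (word, 1) pairs.
import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class WordCountMapper
        extends Mapper<LongWritable, Text, Text, IntWritable> {

    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    @Override
    protected void map(LongWritable offset, Text line, Context context)
            throws IOException, InterruptedException {
        // Whatever form the input takes, it reaches the mapper as key/value
        // pairs; here the key is a byte offset and the value is one text line.
        for (String token : line.toString().split("\\s+")) {
            if (!token.isEmpty()) {
                word.set(token);
                context.write(word, ONE);  // emit (word, 1)
            }
        }
    }
}
```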

MapReduce expresses computation with functional-style code and scripts rather than SQL query statements. For users of relational databases who are accustomed to SQL, the open source tool Hive provides a SQL-like interface on top of Hadoop.
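To make the contrast concrete: a HiveQL aggregation such as `SELECT word, COUNT(*) FROM words GROUP BY word` corresponds to hand-written map and reduce functions. Below is a sketch of the reduce side, meant to pair with the mapper above (the class name is illustrative):

```java
// The reduce side of the word count: Hadoop groups the mapper's output
// by key, so each call receives one word plus all of its counts -- the
// hand-written equivalent of SQL's GROUP BY word / COUNT(*).
import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class WordCountReducer
        extends Reducer<Text, IntWritable, Text, IntWritable> {

    @Override
    protected void reduce(Text word, Iterable<IntWritable> counts,
                          Context context)
            throws IOException, InterruptedException {
        int sum = 0;
        for (IntWritable count : counts) {
            sum += count.get();
        }
        context.write(word, new IntWritable(sum));  // emit (word, total)
    }
}
```

Hive spares its users from writing classes like this: classic Hive compiles HiveQL statements into equivalent MapReduce jobs behind the scenes.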

(2) Hadoop is a distributed computing solution.

What can Hadoop do?

Hadoop excels at log analysis. Facebook uses Hive for log analysis; in 2009, 30% of non-programmers at Facebook used HiveQL for data analysis.

Taobao also uses Hive for custom filtering in search. Pig can be used for more advanced data processing, such as the "people you may know" features at Twitter and LinkedIn, achieving a recommendation effect similar to Amazon.com's collaborative filtering.

Taobao's product recommendations work the same way. At Yahoo!, 40% of Hadoop jobs are run with Pig, including spam identification and filtering as well as user feature modeling.

Hadoop is made up of many elements.

At the bottom is the Hadoop Distributed File System (HDFS), which stores files across all storage nodes in the Hadoop cluster.
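As an illustration, here is a small sketch of reading a file through the HDFS Java client API; the NameNode address and file path are placeholders, not values from the article. The client works with one logical file while HDFS fetches the underlying blocks from whichever storage nodes hold them:

```java
// Reading a file from HDFS with the FileSystem API. The client sees one
// logical file; HDFS transparently serves its blocks from the storage
// nodes that hold them. The URI and path below are placeholders.
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsReadExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // hdfs://namenode:9000 stands in for a real NameNode address.
        FileSystem fs = FileSystem.get(URI.create("hdfs://namenode:9000"), conf);
        try (FSDataInputStream in = fs.open(new Path("/logs/access.log"));
             BufferedReader reader =
                     new BufferedReader(new InputStreamReader(in))) {
            String line;
            while ((line = reader.readLine()) != null) {
                System.out.println(line);  // print each line of the HDFS file
            }
        }
    }
}
```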

Above HDFS sits the MapReduce engine, which consists of JobTrackers and TaskTrackers. Together, the core distributed file system HDFS, the MapReduce processing engine, the data warehouse tool Hive, and the distributed database HBase make up the technical core of the Hadoop distributed computing platform.
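A driver class ties the pieces together. The sketch below (job name and paths are illustrative) wires the mapper and reducer from the earlier sketches into one job and submits it to the cluster, where, in classic Hadoop 1.x, the JobTracker schedules the resulting map and reduce tasks onto TaskTrackers:

```java
// A driver that configures a word-count job from the mapper and reducer
// sketched earlier and submits it to the cluster. In classic Hadoop 1.x
// the JobTracker schedules its tasks onto TaskTrackers.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCountDriver {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "word count");
        job.setJarByClass(WordCountDriver.class);
        job.setMapperClass(WordCountMapper.class);
        job.setReducerClass(WordCountReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        // /input and /output are placeholder HDFS paths.
        FileInputFormat.addInputPath(job, new Path("/input"));
        FileOutputFormat.setOutputPath(job, new Path("/output"));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```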

