A First Look at Sqoop: A Tool for Exchanging Data Between HDFS and Relational Databases
Sqoop is a tool for transferring data between Hadoop and an RDBMS.
Configuration is fairly simple.
Download the latest Sqoop package from the Apache site.
Download address: http://www.apache.org/dist/sqoop/1.99.1/
Unpack it on the server. The server itself must already have a JDK, Hadoop, and Hive installed.
Configuration: conf/sqoop-env.sh

    #Set path to where bin/hadoop is available
    export HADOOP_HOME=/home/hadoop/hadoop-0.20.205.0
    #Set the path to where bin/hive is available
    export HIVE_HOME=/home/hadoop/hive-0.8.1
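With HADOOP_HOME and HIVE_HOME set, a quick sanity check is worth running before any real import. A minimal sketch, assuming the commands are run from the unpacked Sqoop directory; the host, port, and user below are placeholders for your own environment:

    # Confirm the installation works at all
    bin/sqoop version

    # List the databases visible over JDBC; -P prompts for the password
    bin/sqoop list-databases \
      --connect jdbc:mysql://localhost:3306/ \
      --username dbuser -P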
At this point we can try it out. We mainly use Sqoop to interact with Hive: in practice, data from a relational database is imported into Hive and stored in HDFS, where it is available for large-scale computation.
Sqoop mainly provides the following commands, or rather features:

    codegen            Generate code to interact with database records
    create-hive-table  Import a table definition into Hive
    eval               Evaluate a SQL statement and display the results
    export             Export an HDFS directory to a database table
    help               List available commands
    import             Import a table from a database to HDFS
    import-all-tables  Import tables from a database to HDFS
    job                Work with saved jobs
    list-databases     List available databases on a server
    list-tables        List available tables in a database
    merge              Merge results of incremental imports
    metastore          Run a standalone Sqoop metastore
    version            Display version information

Here we mainly use the import feature; the command syntax for export is similar.
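Before kicking off a long import, the eval command above is a convenient way to verify that the JDBC connection and credentials work. A minimal sketch, with placeholder host, database, and table names:

    # Run an ad-hoc query against the source database and print the result
    sqoop eval \
      --connect jdbc:mysql://localhost:3306/dbname \
      --username dbuser -P \
      --query "SELECT COUNT(*) FROM tablename"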
Example
    ./sqoop import --connect jdbc:mysql://localhost:3306/dbname --username dbuser --password dbpassword --table tablename --hive-import --hive-table hivedb.hivetable --hive-drop-import-delims --hive-overwrite --num-mappers 6
The command above imports the data of table tablename from the local database dbname into the hivetable table of the hivedb database in Hive.
Some of the more common parameters need no explanation here.
--hive-import marks Hive as the destination of this import.
--hive-table identifies the target table in Hive.
--hive-drop-import-delims is important: when data is imported from the database into HDFS, any special characters it contains cause problems for MapReduce parsing. For example, a text-type column in the database may contain characters such as \t or \n; with this parameter added, those special characters are handled automatically.
--hive-overwrite overwrites the Hive table if it already exists.
--num-mappers specifies the number of mapper tasks used for this import.
Another fairly important parameter is --direct. It imports data through the database's dump facility, which performs better than the example above, but it cannot be used together with --hive-drop-import-delims. So decide which form of the command to use based on your own database's data.
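Once an import like the example above finishes, the result can be spot-checked from the Hive side. A minimal sketch, reusing the table names from the example:

    # Count the rows that landed in the target Hive table
    hive -e "SELECT COUNT(*) FROM hivedb.hivetable;"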
The following are the common arguments of sqoop's import command:
Argument | Description |
---|---|
`--connect <jdbc-uri>` | Specify JDBC connect string |
`--connection-manager <class-name>` | Specify connection manager class to use |
`--driver <class-name>` | Manually specify JDBC driver class to use |
`--hadoop-home <dir>` | Override $HADOOP_HOME |
`--help` | Print usage instructions |
`-P` | Read password from console |
`--password <password>` | Set authentication password |
`--username <username>` | Set authentication username |
`--verbose` | Print more information while working |
`--connection-param-file <filename>` | Optional properties file that provides connection parameters |
Hive arguments:

Argument | Description |
---|---|
`--hive-home <dir>` | Override $HIVE_HOME |
`--hive-import` | Import tables into Hive (uses Hive's default delimiters if none are set.) |
`--hive-overwrite` | Overwrite existing data in the Hive table. |
`--create-hive-table` | If set, then the job will fail if the target Hive table exists. By default this property is false. |
`--hive-table <table-name>` | Sets the table name to use when importing to Hive. |
`--hive-drop-import-delims` | Drops \n, \r, and \01 from string fields when importing to Hive. |
`--hive-delims-replacement` | Replaces \n, \r, and \01 in string fields with a user-defined string when importing to Hive. |
`--hive-partition-key` | Name of the Hive field that the partitions are sharded on. |
`--hive-partition-value <v>` | String value that serves as the partition key for data imported into Hive in this job. |
`--map-column-hive <map>` | Override default mapping from SQL type to Hive type for configured columns. |
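As an illustration of the partition and type-mapping arguments at the end of the table, here is a sketch; the database, table, column, and partition names are made up for the example:

    # Import one day of data into a partitioned Hive table and force the
    # id column to the STRING type on the Hive side
    sqoop import \
      --connect jdbc:mysql://localhost:3306/dbname \
      --username dbuser -P \
      --table orders \
      --hive-import \
      --hive-table hivedb.orders \
      --hive-partition-key dt \
      --hive-partition-value "2012-06-01" \
      --map-column-hive id=STRING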
Some reference examples follow.
Importing with a WHERE condition

    sqoop import --table test --columns "id,name" --where "id>400"
Using the dump facility

    sqoop import --connect jdbc:mysql://server.foo.com/db --table bar --direct -- --default-character-set=latin1
Redefining column types

    sqoop import ... --map-column-java id=String,value=Integer
Defining delimiters

    sqoop import --connect jdbc:mysql://db.foo.com/corp --table EMPLOYEES --fields-terminated-by '\t' --lines-terminated-by '\n' --optionally-enclosed-by '\"'
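Going the other direction, export takes the same connection arguments. A minimal sketch, assuming tab-delimited data already sits in the HDFS directory and a matching MySQL table exists; the paths and names are placeholders:

    # Push tab-delimited HDFS data back into a MySQL table
    sqoop export \
      --connect jdbc:mysql://db.foo.com/corp \
      --username dbuser -P \
      --table EMPLOYEES \
      --export-dir /user/hive/warehouse/employees \
      --input-fields-terminated-by '\t'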