Note: everything below applies to both the 2.x and 1.x release lines; it has been tested on 2.4.1 and 1.2.0.
I. Preliminaries
1. Set up a pseudo-distributed Hadoop environment. See the official documentation, or http://blog.csdn.net/jediael_lu/article/details/38637277
2. Prepare the data file sample.txt as follows (its layout is unpacked after the listing):
123456798676231190101234567986762311901012345679867623119010123456798676231190101234561+00121534567890356
123456798676231190101234567986762311901012345679867623119010123456798676231190101234562+01122934567890456
123456798676231190201234567986762311901012345679867623119010123456798676231190101234562+02120234567893456
123456798676231190401234567986762311901012345679867623119010123456798676231190101234561+00321234567803456
123456798676231190101234567986762311902012345679867623119010123456798676231190101234561+00429234567903456
123456798676231190501234567986762311902012345679867623119010123456798676231190101234561+01021134568903456
123456798676231190201234567986762311902012345679867623119010123456798676231190101234561+01124234578903456
123456798676231190301234567986762311905012345679867623119010123456798676231190101234561+04121234678903456
123456798676231190301234567986762311905012345679867623119010123456798676231190101234561+00821235678903456
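Each line mimics a fixed-width NCDC weather record, and the character offsets hard-coded in the mapper below assume exactly this layout: the year sits at columns 15-19, the temperature sign at column 87, the air temperature (in tenths of a degree) at columns 88-92, and the quality flag at column 92. Unpacking the first record by hand:

line.substring(15, 19) -> "1901" (year)
line.charAt(87)        -> '+'    (sign of the temperature)
line.substring(88, 92) -> "0012" -> 12 (air temperature)
line.substring(92, 93) -> "1"    (quality flag; only values matching [01459] are kept)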
II. Writing the code
1. Create the Mapper
package org.jediael.hadoopDemo.maxtemperature;

import java.io.IOException;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class MaxTemperatureMapper extends
        Mapper<LongWritable, Text, Text, IntWritable> {

    private static final int MISSING = 9999;

    @Override
    public void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        String line = value.toString();
        String year = line.substring(15, 19);
        int airTemperature;
        if (line.charAt(87) == '+') { // parseInt doesn't like leading plus signs
            airTemperature = Integer.parseInt(line.substring(88, 92));
        } else {
            airTemperature = Integer.parseInt(line.substring(87, 92));
        }
        String quality = line.substring(92, 93);
        if (airTemperature != MISSING && quality.matches("[01459]")) {
            context.write(new Text(year), new IntWritable(airTemperature));
        }
    }
}
2. Create the Reducer
package org.jediael.hadoopDemo.maxtemperature;

import java.io.IOException;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class MaxTemperatureReducer extends
        Reducer<Text, IntWritable, Text, IntWritable> {

    @Override
    public void reduce(Text key, Iterable<IntWritable> values, Context context)
            throws IOException, InterruptedException {
        int maxValue = Integer.MIN_VALUE;
        for (IntWritable value : values) {
            maxValue = Math.max(maxValue, value.get());
        }
        context.write(key, new IntWritable(maxValue));
    }
}
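Since taking a maximum is associative and commutative, this same class could also be registered as a combiner to shrink the map output before the shuffle. The run below does not do so (note "Combine input records=0" in the counters later); enabling it would take one extra line in the driver:

job.setCombinerClass(MaxTemperatureReducer.class);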
3. Create the driver (main method)
package org.jediael.hadoopDemo.maxtemperature;

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class MaxTemperature {

    public static void main(String[] args) throws Exception {
        if (args.length != 2) {
            System.err.println("Usage: MaxTemperature <input path> <output path>");
            System.exit(-1);
        }

        Job job = new Job();
        job.setJarByClass(MaxTemperature.class);
        job.setJobName("Max temperature");

        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));

        job.setMapperClass(MaxTemperatureMapper.class);
        job.setReducerClass(MaxTemperatureReducer.class);

        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);

        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
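A side note on the driver: the no-argument Job constructor is deprecated on the 2.x line. It still works, as the run below shows, but the newer factory-method idiom would be, as a sketch:

Configuration conf = new Configuration(); // org.apache.hadoop.conf.Configuration
Job job = Job.getInstance(conf, "Max temperature");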
4. Export the classes as MaxTemp.jar and upload it to the server that will run the job.
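If you prefer building from the command line instead of exporting from an IDE, something along these lines works (a sketch; it assumes the three source files live under src/ and that the hadoop command is on the PATH; hadoop classpath prints the jars needed to compile against):

mkdir -p classes
javac -classpath "$(hadoop classpath)" -d classes src/org/jediael/hadoopDemo/maxtemperature/*.java
jar cf MaxTemp.jar -C classes .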
III. Running the job
1. Upload sample.txt to HDFS (here it is put at the root of the file system, which is the input path the job is given below):
hadoop fs -put sample.txt /
2. Run the job
export HADOOP_CLASSPATH=MaxTemp.jar
hadoop org.jediael.hadoopDemo.maxtemperature.MaxTemperature /sample.txt output10
Note that the output directory must not already exist; if it does, job submission fails.
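So before re-running with the same output path, remove the old directory first:

hadoop fs -rm -r output10

(On the 1.x line the equivalent is hadoop fs -rmr output10.) As an aside, the HADOOP_CLASSPATH invocation above is interchangeable with the more common form:

hadoop jar MaxTemp.jar org.jediael.hadoopDemo.maxtemperature.MaxTemperature /sample.txt output10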
3. Check the results
(1) The job output
[jediael@jediael44 code]$ hadoop fs -cat output10/*
14/07/09 14:51:35 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
1901 42
1902 212
1903 412
1904 32
1905 102
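These maxima can be verified by hand against sample.txt (the temperature is the four digits after the sign, and records are kept only if the quality flag matches [01459]):

1901: records 1 and 5 -> max(12, 42)   = 42
1902: records 3 and 7 -> max(212, 112) = 212
1903: records 8 and 9 -> max(412, 82)  = 412
1904: record 4        -> 32
1905: record 6        -> 102

Record 2 (year 1901, temperature 112) is dropped because its quality flag is 2, which is why the counters below show 9 map input records but only 8 map output records.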
(2) The runtime log
[jediael@jediael44 code]$ hadoop org.jediael.hadoopDemo.maxtemperature.MaxTemperature /sample.txt output10
14/07/09 14:50:40 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
14/07/09 14:50:41 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
14/07/09 14:50:42 WARN mapreduce.JobSubmitter: Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this.
14/07/09 14:50:43 INFO input.FileInputFormat: Total input paths to process : 1
14/07/09 14:50:43 INFO mapreduce.JobSubmitter: number of splits:1
14/07/09 14:50:44 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1404888618764_0001
14/07/09 14:50:44 INFO impl.YarnClientImpl: Submitted application application_1404888618764_0001
14/07/09 14:50:44 INFO mapreduce.Job: The url to track the job: http://jediael44:8088/proxy/application_1404888618764_0001/
14/07/09 14:50:44 INFO mapreduce.Job: Running job: job_1404888618764_0001
14/07/09 14:50:57 INFO mapreduce.Job: Job job_1404888618764_0001 running in uber mode : false
14/07/09 14:50:57 INFO mapreduce.Job: map 0% reduce 0%
14/07/09 14:51:05 INFO mapreduce.Job: map 100% reduce 0%
14/07/09 14:51:15 INFO mapreduce.Job: map 100% reduce 100%
14/07/09 14:51:15 INFO mapreduce.Job: Job job_1404888618764_0001 completed successfully
14/07/09 14:51:16 INFO mapreduce.Job: Counters: 49
	File System Counters
		FILE: Number of bytes read=94
		FILE: Number of bytes written=185387
		FILE: Number of read operations=0
		FILE: Number of large read operations=0
		FILE: Number of write operations=0
		HDFS: Number of bytes read=1051
		HDFS: Number of bytes written=43
		HDFS: Number of read operations=6
		HDFS: Number of large read operations=0
		HDFS: Number of write operations=2
	Job Counters
		Launched map tasks=1
		Launched reduce tasks=1
		Data-local map tasks=1
		Total time spent by all maps in occupied slots (ms)=5812
		Total time spent by all reduces in occupied slots (ms)=7023
		Total time spent by all map tasks (ms)=5812
		Total time spent by all reduce tasks (ms)=7023
		Total vcore-seconds taken by all map tasks=5812
		Total vcore-seconds taken by all reduce tasks=7023
		Total megabyte-seconds taken by all map tasks=5951488
		Total megabyte-seconds taken by all reduce tasks=7191552
	Map-Reduce Framework
		Map input records=9
		Map output records=8
		Map output bytes=72
		Map output materialized bytes=94
		Input split bytes=97
		Combine input records=0
		Combine output records=0
		Reduce input groups=5
		Reduce shuffle bytes=94
		Reduce input records=8
		Reduce output records=5
		Spilled Records=16
		Shuffled Maps =1
		Failed Shuffles=0
		Merged Map outputs=1
		GC time elapsed (ms)=154
		CPU time spent (ms)=1450
		Physical memory (bytes) snapshot=303112192
		Virtual memory (bytes) snapshot=1685733376
		Total committed heap usage (bytes)=136515584
	Shuffle Errors
		BAD_ID=0
		CONNECTION=0
		IO_ERROR=0
		WRONG_LENGTH=0
		WRONG_MAP=0
		WRONG_REDUCE=0
	File Input Format Counters
		Bytes Read=954
	File Output Format Counters
		Bytes Written=43
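One last remark on the log: the JobSubmitter warning at 14:50:42 about command-line option parsing goes away if the driver implements Tool and is launched through ToolRunner, which also provides the standard -D/-files/-libjars options for free. A minimal sketch of the reworked driver for 2.x (same job wiring as above, just hoisted into run()):

package org.jediael.hadoopDemo.maxtemperature;

import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

public class MaxTemperature extends Configured implements Tool {

    @Override
    public int run(String[] args) throws Exception {
        if (args.length != 2) {
            System.err.println("Usage: MaxTemperature <input path> <output path>");
            return -1;
        }
        // getConf() already carries any -D options parsed by ToolRunner
        Job job = Job.getInstance(getConf(), "Max temperature");
        job.setJarByClass(MaxTemperature.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        job.setMapperClass(MaxTemperatureMapper.class);
        job.setReducerClass(MaxTemperatureReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        return job.waitForCompletion(true) ? 0 : 1;
    }

    public static void main(String[] args) throws Exception {
        System.exit(ToolRunner.run(new MaxTemperature(), args));
    }
}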
