Error running the packaged traffic-statistics JAR on YARN
Source: 5-8 Submitting the traffic-statistics example to run on YARN

2020-07-04
hadoop jar HadoopTrain-1.0-SNAPSHOT.jar mr.access.AccessYarnApp /access/input/access.log /access/output2/
20/07/03 18:44:04 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
20/07/03 18:44:04 WARN mapreduce.JobResourceUploader: Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this.
20/07/03 18:44:04 INFO input.FileInputFormat: Total input paths to process : 1
20/07/03 18:44:04 INFO mapreduce.JobSubmitter: number of splits:1
20/07/03 18:44:05 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1593826559825_0003
20/07/03 18:44:05 INFO impl.YarnClientImpl: Submitted application application_1593826559825_0003
20/07/03 18:44:05 INFO mapreduce.Job: The url to track the job: http://hadoop000:8088/proxy/application_1593826559825_0003/
20/07/03 18:44:05 INFO mapreduce.Job: Running job: job_1593826559825_0003
20/07/03 18:44:12 INFO mapreduce.Job: Job job_1593826559825_0003 running in uber mode : false
20/07/03 18:44:12 INFO mapreduce.Job: map 0% reduce 0%
20/07/03 18:44:19 INFO mapreduce.Job: map 100% reduce 0%
20/07/03 18:44:23 INFO mapreduce.Job: Task Id : attempt_1593826559825_0003_r_000000_0, Status : FAILED
Error: org.apache.hadoop.mapreduce.task.reduce.Shuffle$ShuffleError: error in shuffle in fetcher#5
        at org.apache.hadoop.mapreduce.task.reduce.Shuffle.run(Shuffle.java:134)
        at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:376)
        at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1924)
        at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
Caused by: java.io.IOException: Exceeded MAX_FAILED_UNIQUE_FETCHES; bailing-out.
        at org.apache.hadoop.mapreduce.task.reduce.ShuffleSchedulerImpl.checkReducerHealth(ShuffleSchedulerImpl.java:392)
        at org.apache.hadoop.mapreduce.task.reduce.ShuffleSchedulerImpl.copyFailed(ShuffleSchedulerImpl.java:307)
        at org.apache.hadoop.mapreduce.task.reduce.Fetcher.copyFromHost(Fetcher.java:366)
        at org.apache.hadoop.mapreduce.task.reduce.Fetcher.run(Fetcher.java:198)
20/07/03 18:44:28 INFO mapreduce.Job: Task Id : attempt_1593826559825_0003_r_000000_1, Status : FAILED
Error: org.apache.hadoop.mapreduce.task.reduce.Shuffle$ShuffleError: error in shuffle in fetcher#5
        at org.apache.hadoop.mapreduce.task.reduce.Shuffle.run(Shuffle.java:134)
        at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:376)
        at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1924)
        at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
Caused by: java.io.IOException: Exceeded MAX_FAILED_UNIQUE_FETCHES; bailing-out.
        at org.apache.hadoop.mapreduce.task.reduce.ShuffleSchedulerImpl.checkReducerHealth(ShuffleSchedulerImpl.java:392)
        at org.apache.hadoop.mapreduce.task.reduce.ShuffleSchedulerImpl.copyFailed(ShuffleSchedulerImpl.java:307)
        at org.apache.hadoop.mapreduce.task.reduce.Fetcher.copyFromHost(Fetcher.java:366)
        at org.apache.hadoop.mapreduce.task.reduce.Fetcher.run(Fetcher.java:198)
20/07/03 18:44:33 INFO mapreduce.Job: Task Id : attempt_1593826559825_0003_m_000000_0, Status : FAILED
Too many fetch failures. Failing the attempt. Last failure reported by attempt_1593826559825_0003_r_000000_2 from host hadoop000
20/07/03 18:44:33 INFO mapreduce.Job: Task Id : attempt_1593826559825_0003_r_000000_2, Status : FAILED
Error: org.apache.hadoop.mapreduce.task.reduce.Shuffle$ShuffleError: error in shuffle in fetcher#4
        at org.apache.hadoop.mapreduce.task.reduce.Shuffle.run(Shuffle.java:134)
        at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:376)
        at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1924)
        at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
Caused by: java.io.IOException: Exceeded MAX_FAILED_UNIQUE_FETCHES; bailing-out.
        at org.apache.hadoop.mapreduce.task.reduce.ShuffleSchedulerImpl.checkReducerHealth(ShuffleSchedulerImpl.java:392)
        at org.apache.hadoop.mapreduce.task.reduce.ShuffleSchedulerImpl.copyFailed(ShuffleSchedulerImpl.java:307)
        at org.apache.hadoop.mapreduce.task.reduce.Fetcher.copyFromHost(Fetcher.java:366)
        at org.apache.hadoop.mapreduce.task.reduce.Fetcher.run(Fetcher.java:198)
20/07/03 18:44:34 INFO mapreduce.Job: map 0% reduce 0%
20/07/03 18:44:41 INFO mapreduce.Job: map 100% reduce 0%
20/07/03 18:44:43 INFO mapreduce.Job: map 100% reduce 100%
20/07/03 18:44:43 INFO mapreduce.Job: Job job_1593826559825_0003 failed with state FAILED due to: Task failed task_1593826559825_0003_r_000000
Job failed as tasks failed. failedMaps:0 failedReduces:1
20/07/03 18:44:43 INFO mapreduce.Job: Counters: 39
        File System Counters
                FILE: Number of bytes read=0
                FILE: Number of bytes written=144126
                FILE: Number of read operations=0
                FILE: Number of large read operations=0
                FILE: Number of write operations=0
                HDFS: Number of bytes read=2420
                HDFS: Number of bytes written=0
                HDFS: Number of read operations=3
                HDFS: Number of large read operations=0
                HDFS: Number of write operations=0
        Job Counters
                Failed map tasks=1
                Failed reduce tasks=4
                Launched map tasks=2
                Launched reduce tasks=4
                Other local map tasks=1
                Data-local map tasks=1
                Total time spent by all maps in occupied slots (ms)=7395
                Total time spent by all reduces in occupied slots (ms)=12683
                Total time spent by all map tasks (ms)=7395
                Total time spent by all reduce tasks (ms)=12683
                Total vcore-milliseconds taken by all map tasks=7395
                Total vcore-milliseconds taken by all reduce tasks=12683
                Total megabyte-milliseconds taken by all map tasks=7572480
                Total megabyte-milliseconds taken by all reduce tasks=12987392
        Map-Reduce Framework
                Map input records=23
                Map output records=23
                Map output bytes=1121
                Map output materialized bytes=1173
                Input split bytes=110
                Combine input records=0
                Spilled Records=23
                Failed Shuffles=0
                Merged Map outputs=0
                GC time elapsed (ms)=91
                CPU time spent (ms)=430
                Physical memory (bytes) snapshot=217415680
                Virtual memory (bytes) snapshot=2729959424
                Total committed heap usage (bytes)=169807872
        File Input Format Counters
                Bytes Read=2310
1 Answer
Michael_PK
2020-07-04
This question already has an answer in the course's Q&A section; please refer to it there.
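
For reference, the WARN near the top of the log ("Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner...") is separate from the shuffle failure itself, but it goes away once the driver implements Tool and is launched through ToolRunner. Below is a minimal sketch of such a driver under that assumption; only the class and package names are taken from the submit command above, and the mapper/reducer wiring is left as placeholders because the course project's actual classes are not shown in this thread.

package mr.access;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

// Sketch only: not the course's actual code, just the Tool/ToolRunner pattern
// that the WARN message in the log asks for.
public class AccessYarnApp extends Configured implements Tool {

    @Override
    public int run(String[] args) throws Exception {
        // getConf() returns the Configuration that ToolRunner has already populated
        // with any generic options (-D, -files, ...) parsed from the command line.
        Job job = Job.getInstance(getConf(), "AccessYarnApp");
        job.setJarByClass(AccessYarnApp.class);

        // Placeholder wiring: set the real mapper/reducer and output key/value classes
        // from the course project here, e.g. job.setMapperClass(...), job.setReducerClass(...).

        FileInputFormat.setInputPaths(job, new Path(args[0]));   // e.g. /access/input/access.log
        FileOutputFormat.setOutputPath(job, new Path(args[1]));  // e.g. /access/output2/
        return job.waitForCompletion(true) ? 0 : 1;
    }

    public static void main(String[] args) throws Exception {
        // ToolRunner parses the generic Hadoop options before calling run(), which is
        // exactly what the "Hadoop command-line option parsing not performed" WARN wants.
        System.exit(ToolRunner.run(new Configuration(), new AccessYarnApp(), args));
    }
}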