Data-cleaning job fails when run on YARN

Source: 9-25 Running the data-cleaning job on YARN

慕粉2110073833

2017-11-12

Hi teacher, could you help me work out where this error is coming from? The run log is below.

SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/tmp/hadoop-hadoop/nm-local-dir/usercache/hadoop/filecache/13/__spark_libs__3661190248881254433.zip/slf4j-log4j12-1.7.16.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/hadoop/app/hadoop-2.6.0-cdh5.7.0/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
17/11/12 15:08:33 INFO util.SignalUtils: Registered signal handler for TERM
17/11/12 15:08:33 INFO util.SignalUtils: Registered signal handler for HUP
17/11/12 15:08:33 INFO util.SignalUtils: Registered signal handler for INT
17/11/12 15:08:34 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
17/11/12 15:08:35 INFO yarn.ApplicationMaster: Preparing Local resources
17/11/12 15:08:36 INFO yarn.ApplicationMaster: ApplicationAttemptId: appattempt_1510455447591_0003_000001
17/11/12 15:08:36 INFO spark.SecurityManager: Changing view acls to: hadoop
17/11/12 15:08:36 INFO spark.SecurityManager: Changing modify acls to: hadoop
17/11/12 15:08:36 INFO spark.SecurityManager: Changing view acls groups to:
17/11/12 15:08:36 INFO spark.SecurityManager: Changing modify acls groups to:
17/11/12 15:08:36 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users  with view permissions: Set(hadoop); groups with view permissions: Set(); users  with modify permissions: Set(hadoop); groups with modify permissions: Set()
17/11/12 15:08:36 INFO yarn.ApplicationMaster: Waiting for Spark driver to be reachable.
17/11/12 15:08:36 INFO yarn.ApplicationMaster: Driver now available: 10.80.80.143:33185
17/11/12 15:08:37 INFO client.TransportClientFactory: Successfully created connection to /10.80.80.143:33185 after 66 ms (0 ms spent in bootstraps)
17/11/12 15:08:37 INFO yarn.ApplicationMaster$AMEndpoint: Add WebUI Filter. AddWebUIFilter(org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter,Map(PROXY_HOSTS -> 39.107.32.210, PROXY_URI_BASES -> http://39.107.32.210:8088/proxy/application_1510455447591_0003),/proxy/application_1510455447591_0003)
17/11/12 15:08:37 INFO yarn.ApplicationMaster:
===============================================================================
YARN executor launch context:
 env:
   CLASSPATH -> {{PWD}}<CPS>{{PWD}}/__spark_conf__<CPS>{{PWD}}/__spark_libs__/*<CPS>$HADOOP_CONF_DIR<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/*<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/lib/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/lib/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*
   SPARK_YARN_STAGING_DIR -> hdfs://hadoop001:8020/user/hadoop/.sparkStaging/application_1510455447591_0003
   SPARK_USER -> hadoop
   SPARK_YARN_MODE -> true

 command:
   {{JAVA_HOME}}/bin/java \
     -server \
     -Xmx1024m \
     -Djava.io.tmpdir={{PWD}}/tmp \
     '-Dspark.driver.port=33185' \
     -Dspark.yarn.app.container.log.dir=<LOG_DIR> \
     -XX:MaxPermSize=256m \
     -XX:OnOutOfMemoryError='kill %p' \
     org.apache.spark.executor.CoarseGrainedExecutorBackend \
     --driver-url \
     spark://CoarseGrainedScheduler@10.80.80.143:33185 \
     --executor-id \
     <executorId> \
     --hostname \
     <hostname> \
     --cores \
     1 \
     --app-id \
     application_1510455447591_0003 \
     --user-class-path \
     file:$PWD/__app__.jar \
     1><LOG_DIR>/stdout \
     2><LOG_DIR>/stderr

 resources:
   __spark_libs__ -> resource { scheme: "hdfs" host: "hadoop001" port: 8020 file: "/user/hadoop/.sparkStaging/application_1510455447591_0003/__spark_libs__3661190248881254433.zip" } size: 202733760 timestamp: 1510470497281 type: ARCHIVE visibility: PRIVATE
   ipDatabase.csv -> resource { scheme: "hdfs" host: "hadoop001" port: 8020 file: "/user/hadoop/.sparkStaging/application_1510455447591_0003/ipDatabase.csv" } size: 4977417 timestamp: 1510470498289 type: FILE visibility: PRIVATE
   __spark_conf__ -> resource { scheme: "hdfs" host: "hadoop001" port: 8020 file: "/user/hadoop/.sparkStaging/application_1510455447591_0003/__spark_conf__.zip" } size: 83859 timestamp: 1510470498461 type: ARCHIVE visibility: PRIVATE
   ipRegion.xlsx -> resource { scheme: "hdfs" host: "hadoop001" port: 8020 file: "/user/hadoop/.sparkStaging/application_1510455447591_0003/ipRegion.xlsx" } size: 25433 timestamp: 1510470498401 type: FILE visibility: PRIVATE

===============================================================================
17/11/12 15:08:37 INFO client.RMProxy: Connecting to ResourceManager at hadoop001/10.80.80.143:8030
17/11/12 15:08:37 INFO yarn.YarnRMClient: Registering the ApplicationMaster
17/11/12 15:08:37 INFO yarn.YarnAllocator: Will request 1 executor container(s), each with 1 core(s) and 1408 MB memory (including 384 MB of overhead)
17/11/12 15:08:37 INFO yarn.YarnAllocator: Submitted 1 unlocalized container requests.
17/11/12 15:08:37 INFO yarn.ApplicationMaster: Started progress reporter thread with (heartbeat : 3000, initial allocation : 200) intervals
17/11/12 15:08:37 INFO impl.AMRMClientImpl: Received new token for : hadoop001:35850
17/11/12 15:08:37 INFO yarn.YarnAllocator: Launching container container_1510455447591_0003_01_000002 on host hadoop001
17/11/12 15:08:37 INFO yarn.YarnAllocator: Received 1 containers from YARN, launching executors on 1 of them.
17/11/12 15:08:37 INFO impl.ContainerManagementProtocolProxy: yarn.client.max-cached-nodemanagers-proxies : 0
17/11/12 15:08:38 INFO impl.ContainerManagementProtocolProxy: Opening proxy : hadoop001:35850
17/11/12 15:08:59 INFO yarn.YarnAllocator: Completed container container_1510455447591_0003_01_000002 on host: hadoop001 (state: COMPLETE, exit status: 1)
17/11/12 15:08:59 WARN yarn.YarnAllocator: Container marked as failed: container_1510455447591_0003_01_000002 on host: hadoop001. Exit status: 1. Diagnostics: Exception from container-launch.
Container id: container_1510455447591_0003_01_000002
Exit code: 1
Stack trace: ExitCodeException exitCode=1:
at org.apache.hadoop.util.Shell.runCommand(Shell.java:561)
at org.apache.hadoop.util.Shell.run(Shell.java:478)
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:738)
at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:211)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:744)


Container exited with a non-zero exit code 1

17/11/12 15:09:02 INFO yarn.YarnAllocator: Will request 1 executor container(s), each with 1 core(s) and 1408 MB memory (including 384 MB of overhead)
17/11/12 15:09:02 INFO yarn.YarnAllocator: Submitted 1 unlocalized container requests.
17/11/12 15:09:02 INFO impl.AMRMClientImpl: Received new token for : hadoop003:32914
17/11/12 15:09:02 INFO yarn.YarnAllocator: Launching container container_1510455447591_0003_01_000003 on host hadoop003
17/11/12 15:09:02 INFO yarn.YarnAllocator: Received 1 containers from YARN, launching executors on 1 of them.
17/11/12 15:09:02 INFO impl.ContainerManagementProtocolProxy: yarn.client.max-cached-nodemanagers-proxies : 0
17/11/12 15:09:02 INFO impl.ContainerManagementProtocolProxy: Opening proxy : hadoop003:32914
17/11/12 15:09:32 INFO yarn.YarnAllocator: Completed container container_1510455447591_0003_01_000003 on host: hadoop003 (state: COMPLETE, exit status: 137)
17/11/12 15:09:32 WARN yarn.YarnAllocator: Container marked as failed: container_1510455447591_0003_01_000003 on host: hadoop003. Exit status: 137. Diagnostics: Container killed on request. Exit code is 137
Container exited with a non-zero exit code 137
Killed by external signal

17/11/12 15:09:35 INFO yarn.YarnAllocator: Will request 1 executor container(s), each with 1 core(s) and 1408 MB memory (including 384 MB of overhead)
17/11/12 15:09:35 INFO yarn.YarnAllocator: Submitted 1 unlocalized container requests.
17/11/12 15:09:35 INFO impl.AMRMClientImpl: Received new token for : hadoop002:43415
17/11/12 15:09:35 INFO yarn.YarnAllocator: Launching container container_1510455447591_0003_01_000004 on host hadoop002
17/11/12 15:09:35 INFO yarn.YarnAllocator: Received 1 containers from YARN, launching executors on 1 of them.
17/11/12 15:09:35 INFO impl.ContainerManagementProtocolProxy: yarn.client.max-cached-nodemanagers-proxies : 0
17/11/12 15:09:35 INFO impl.ContainerManagementProtocolProxy: Opening proxy : hadoop002:43415
17/11/12 15:10:09 INFO yarn.YarnAllocator: Completed container container_1510455447591_0003_01_000004 on host: hadoop002 (state: COMPLETE, exit status: 1)
17/11/12 15:10:09 WARN yarn.YarnAllocator: Container marked as failed: container_1510455447591_0003_01_000004 on host: hadoop002. Exit status: 1. Diagnostics: Exception from container-launch.
Container id: container_1510455447591_0003_01_000004
Exit code: 1
Stack trace: ExitCodeException exitCode=1:
at org.apache.hadoop.util.Shell.runCommand(Shell.java:561)
at org.apache.hadoop.util.Shell.run(Shell.java:478)
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:738)
at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:211)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:744)


Container exited with a non-zero exit code 1

17/11/12 15:10:12 INFO yarn.ApplicationMaster: Final app status: FAILED, exitCode: 11, (reason: Max number of executor failures (3) reached)
17/11/12 15:10:15 INFO util.ShutdownHookManager: Shutdown hook called

1 Answer

Michael_PK

2017-11-12

First check whether other Spark jobs run fine on this YARN cluster.
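For example, one quick sanity check is to submit the stock SparkPi example to the same cluster; if that also fails, the problem is with the cluster rather than with this particular job. The examples jar path and version below are assumptions, so substitute whatever ships with your Spark build:

# Submit the bundled SparkPi example to YARN as a cluster health check.
# The jar name/version is an assumption; use the one under your $SPARK_HOME/examples/jars.
spark-submit \
  --class org.apache.spark.examples.SparkPi \
  --master yarn \
  --deploy-mode cluster \
  --num-executors 1 \
  --executor-cores 1 \
  --executor-memory 1g \
  $SPARK_HOME/examples/jars/spark-examples_2.11-2.1.0.jar 100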

Michael_PK replied to 慕粉2110073833:
It's a resource problem; the server is most likely too underpowered.
2017-11-14
(8 replies in total)
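A couple of pointers for reading the log above: exit status 1 means the executor JVM died right after launch, and its real error message is in that container's own stderr, not in this ApplicationMaster log; exit status 137 is 128 + SIGKILL, i.e. the container was killed from outside (the log itself says "Killed by external signal"), which on a small machine usually points to memory pressure, consistent with the "resource problem" diagnosis. One way to dig further, using the application id copied from the log (log aggregation has to be enabled, otherwise look under the NodeManager's local userlogs directory):

# Fetch the aggregated container logs, including each executor container's stdout/stderr.
yarn logs -applicationId application_1510455447591_0003

# If memory really is the bottleneck, shrinking the per-container footprint is a cheap first
# experiment. The memory numbers are illustrative, and the main class and jar name are
# placeholders for the actual data-cleaning job.
spark-submit \
  --master yarn \
  --deploy-mode client \
  --driver-memory 512m \
  --executor-memory 512m \
  --executor-cores 1 \
  --num-executors 1 \
  --class com.example.LogCleanJob \
  log-clean-job.jar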
