Packaged and ran without errors, but the output directory contains only _SUCCESS

Source: 2-16 Submitting a Spark application with spark-submit in local mode

慕少7351152

2022-06-12

Hi teacher, when I run the program in IDEA the output is correct. The packaged jar is 17 KB, and running it on the VM also completes without errors, but the word-count result file is simply never produced.

pom.xml:

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <groupId>com.imooc.bigdata</groupId>
    <artifactId>sparksql-train</artifactId>
    <version>1.0</version>
    <name>${project.artifactId}</name>

    <properties>
        <maven.compiler.source>1.8</maven.compiler.source>
        <maven.compiler.target>1.8</maven.compiler.target>
        <encoding>UTF-8</encoding>
        <scala.tools.version>2.11</scala.tools.version>
        <scala.version>2.11.8</scala.version>
        <spark.version>2.4.3</spark.version>
        <hadoop.version>2.6.0-cdh5.15.1</hadoop.version>
    </properties>

    <!-- CDH repository -->
    <repositories>
        <repository>
            <id>cloudera</id>
            <url>https://repository.cloudera.com/artifactory/cloudera-repos</url>
        </repository>
    </repositories>

    <dependencies>
        <dependency>
            <groupId>org.scala-lang</groupId>
            <artifactId>scala-library</artifactId>
            <version>${scala.version}</version>
        </dependency>

        <!-- Spark SQL dependency -->
        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-sql_2.11</artifactId>
            <version>${spark.version}</version>
        </dependency>


        <!-- Hadoop dependencies -->
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-client</artifactId>
            <version>${hadoop.version}</version>
        </dependency>

        <!-- Test -->
        <dependency>
            <groupId>junit</groupId>
            <artifactId>junit</artifactId>
            <version>4.11</version>
            <scope>test</scope>
        </dependency>
    </dependencies>

    <build>
        <sourceDirectory>src/main/scala</sourceDirectory>
        <testSourceDirectory>src/test/scala</testSourceDirectory>
        <plugins>
            <plugin>
                <!-- see http://davidb.github.com/scala-maven-plugin -->
                <groupId>net.alchim31.maven</groupId>
                <artifactId>scala-maven-plugin</artifactId>
                <version>3.1.3</version>
                <executions>
                    <execution>
                        <goals>
                            <goal>compile</goal>
                            <goal>testCompile</goal>
                        </goals>
                        <configuration>
                            <args>
                                <arg>-dependencyfile</arg>
                                <arg>${project.build.directory}/.scala_dependencies</arg>
                            </args>
                        </configuration>
                    </execution>
                </executions>
            </plugin>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-surefire-plugin</artifactId>
                <version>2.13</version>
                <configuration>
                    <useFile>false</useFile>
                    <disableXmlReport>true</disableXmlReport>
                    <!-- If you have classpath issue like NoDefClassError,... -->
                    <!-- useManifestOnlyJar>false</useManifestOnlyJar -->
                    <includes>
                        <include>**/*Test.*</include>
                        <include>**/*Suite.*</include>
                    </includes>
                </configuration>
            </plugin>
        </plugins>
    </build>


</project>

jps output on the VM:
8304 SecondaryNameNode
8582 ResourceManager
7976 NameNode
8697 NodeManager
20233 Jps
8123 DataNode
14251 SparkSubmit

spark-submit command on the VM:

./bin/spark-submit \
  --class com.imooc.bigdata.spark.wordcount.WordCountApp_SparkSubmit \
  --master local \
  /learn_spark/lib_new/sparksql-train-1.0.jar \
  file:///learn_spark/data_new/wc.txt file:///learn_spark/data_new/out
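
Both paths use the file:// scheme, so in local mode the job reads from and writes to the VM's local filesystem. After a successful saveAsTextFile, the output directory should contain a _SUCCESS marker plus at least one part-00000 file. A minimal sanity check from spark-shell (the path is taken from the command above):

import java.io.File

// List whatever actually landed in the local output directory.
val out = new File("/learn_spark/data_new/out")
out.listFiles() match {
  case null  => println(s"$out does not exist or is not a directory")
  case files => files.foreach(f => println(f"${f.getName}%-20s ${f.length()} bytes"))
}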

spark-submit console log on the VM:

22/06/11 23:18:05 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
22/06/11 23:18:06 INFO SparkContext: Running Spark version 2.4.3
22/06/11 23:18:06 INFO SparkContext: Submitted application: com.imooc.bigdata.spark.wordcount.WordCountApp_SparkSubmit
22/06/11 23:18:06 INFO SecurityManager: Changing view acls to: hadoop
22/06/11 23:18:06 INFO SecurityManager: Changing modify acls to: hadoop
22/06/11 23:18:06 INFO SecurityManager: Changing view acls groups to: 
22/06/11 23:18:06 INFO SecurityManager: Changing modify acls groups to: 
22/06/11 23:18:06 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users  with view permissions: Set(hadoop); groups with view permissions: Set(); users  with modify permissions: Set(hadoop); groups with modify permissions: Set()
22/06/11 23:18:06 INFO Utils: Successfully started service 'sparkDriver' on port 42047.
22/06/11 23:18:06 INFO SparkEnv: Registering MapOutputTracker
22/06/11 23:18:06 INFO SparkEnv: Registering BlockManagerMaster
22/06/11 23:18:06 INFO BlockManagerMasterEndpoint: Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information
22/06/11 23:18:06 INFO BlockManagerMasterEndpoint: BlockManagerMasterEndpoint up
22/06/11 23:18:06 INFO DiskBlockManager: Created local directory at /tmp/blockmgr-bb0c82d6-aeff-426a-8d4d-f7409262b0eb
22/06/11 23:18:06 INFO MemoryStore: MemoryStore started with capacity 366.3 MB
22/06/11 23:18:06 INFO SparkEnv: Registering OutputCommitCoordinator
22/06/11 23:18:06 INFO Utils: Successfully started service 'SparkUI' on port 4040.
22/06/11 23:18:06 INFO SparkUI: Bound SparkUI to 0.0.0.0, and started at http://spark010:4040
22/06/11 23:18:06 INFO SparkContext: Added JAR file:/learn_spark/lib_new/sparksql-train-1.0.jar at spark://spark010:42047/jars/sparksql-train-1.0.jar with timestamp 1655003886748
22/06/11 23:18:06 INFO Executor: Starting executor ID driver on host localhost
22/06/11 23:18:06 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 39921.
22/06/11 23:18:06 INFO NettyBlockTransferService: Server created on spark010:39921
22/06/11 23:18:06 INFO BlockManager: Using org.apache.spark.storage.RandomBlockReplicationPolicy for block replication policy
22/06/11 23:18:06 INFO BlockManagerMaster: Registering BlockManager BlockManagerId(driver, spark010, 39921, None)
22/06/11 23:18:06 INFO BlockManagerMasterEndpoint: Registering block manager spark010:39921 with 366.3 MB RAM, BlockManagerId(driver, spark010, 39921, None)
22/06/11 23:18:06 INFO BlockManagerMaster: Registered BlockManager BlockManagerId(driver, spark010, 39921, None)
22/06/11 23:18:06 INFO BlockManager: Initialized BlockManager: BlockManagerId(driver, spark010, 39921, None)
22/06/11 23:18:07 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 295.7 KB, free 366.0 MB)
22/06/11 23:18:07 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 24.9 KB, free 366.0 MB)
22/06/11 23:18:07 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on spark010:39921 (size: 24.9 KB, free: 366.3 MB)
22/06/11 23:18:07 INFO SparkContext: Created broadcast 0 from textFile at WordCountApp_SparkSubmit.scala:11
22/06/11 23:18:07 INFO FileInputFormat: Total input paths to process : 1
22/06/11 23:18:08 INFO deprecation: mapred.output.dir is deprecated. Instead, use mapreduce.output.fileoutputformat.outputdir
22/06/11 23:18:08 INFO HadoopMapRedCommitProtocol: Using output committer class org.apache.hadoop.mapred.FileOutputCommitter
22/06/11 23:18:08 INFO FileOutputCommitter: File Output Committer Algorithm version is 1
22/06/11 23:18:08 INFO FileOutputCommitter: FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false
22/06/11 23:18:08 INFO SparkContext: Starting job: runJob at SparkHadoopWriter.scala:78
22/06/11 23:18:08 INFO DAGScheduler: Registering RDD 3 (map at WordCountApp_SparkSubmit.scala:12)
22/06/11 23:18:08 INFO DAGScheduler: Registering RDD 5 (map at WordCountApp_SparkSubmit.scala:14)
22/06/11 23:18:08 INFO DAGScheduler: Got job 0 (runJob at SparkHadoopWriter.scala:78) with 1 output partitions
22/06/11 23:18:08 INFO DAGScheduler: Final stage: ResultStage 2 (runJob at SparkHadoopWriter.scala:78)
22/06/11 23:18:08 INFO DAGScheduler: Parents of final stage: List(ShuffleMapStage 1)
22/06/11 23:18:08 INFO DAGScheduler: Missing parents: List(ShuffleMapStage 1)
22/06/11 23:18:08 INFO DAGScheduler: Submitting ShuffleMapStage 0 (MapPartitionsRDD[3] at map at WordCountApp_SparkSubmit.scala:12), which has no missing parents
22/06/11 23:18:08 INFO MemoryStore: Block broadcast_1 stored as values in memory (estimated size 5.0 KB, free 366.0 MB)
22/06/11 23:18:08 INFO MemoryStore: Block broadcast_1_piece0 stored as bytes in memory (estimated size 2.9 KB, free 366.0 MB)
22/06/11 23:18:08 INFO BlockManagerInfo: Added broadcast_1_piece0 in memory on spark010:39921 (size: 2.9 KB, free: 366.3 MB)
22/06/11 23:18:08 INFO SparkContext: Created broadcast 1 from broadcast at DAGScheduler.scala:1161
22/06/11 23:18:08 INFO DAGScheduler: Submitting 1 missing tasks from ShuffleMapStage 0 (MapPartitionsRDD[3] at map at WordCountApp_SparkSubmit.scala:12) (first 15 tasks are for partitions Vector(0))
22/06/11 23:18:08 INFO TaskSchedulerImpl: Adding task set 0.0 with 1 tasks
22/06/11 23:18:08 INFO TaskSetManager: Starting task 0.0 in stage 0.0 (TID 0, localhost, executor driver, partition 0, PROCESS_LOCAL, 7882 bytes)
22/06/11 23:18:08 INFO Executor: Running task 0.0 in stage 0.0 (TID 0)
22/06/11 23:18:08 INFO Executor: Fetching spark://spark010:42047/jars/sparksql-train-1.0.jar with timestamp 1655003886748
22/06/11 23:18:08 INFO TransportClientFactory: Successfully created connection to spark010/192.168.1.80:42047 after 40 ms (0 ms spent in bootstraps)
22/06/11 23:18:08 INFO Utils: Fetching spark://spark010:42047/jars/sparksql-train-1.0.jar to /tmp/spark-ffb7f548-c158-492f-8c74-4147f92fe6a2/userFiles-a2b998be-1dd1-4581-9fc0-e3996d7f01b5/fetchFileTemp7012179740940078152.tmp
22/06/11 23:18:08 INFO Executor: Adding file:/tmp/spark-ffb7f548-c158-492f-8c74-4147f92fe6a2/userFiles-a2b998be-1dd1-4581-9fc0-e3996d7f01b5/sparksql-train-1.0.jar to class loader
22/06/11 23:18:08 INFO HadoopRDD: Input split: file:/learn_spark/data_new/wc.txt:0+29
22/06/11 23:18:08 INFO Executor: Finished task 0.0 in stage 0.0 (TID 0). 1149 bytes result sent to driver
22/06/11 23:18:08 INFO TaskSetManager: Finished task 0.0 in stage 0.0 (TID 0) in 373 ms on localhost (executor driver) (1/1)
22/06/11 23:18:08 INFO TaskSchedulerImpl: Removed TaskSet 0.0, whose tasks have all completed, from pool 
22/06/11 23:18:08 INFO DAGScheduler: ShuffleMapStage 0 (map at WordCountApp_SparkSubmit.scala:12) finished in 0.484 s
22/06/11 23:18:08 INFO DAGScheduler: looking for newly runnable stages
22/06/11 23:18:08 INFO DAGScheduler: running: Set()
22/06/11 23:18:08 INFO DAGScheduler: waiting: Set(ShuffleMapStage 1, ResultStage 2)
22/06/11 23:18:08 INFO DAGScheduler: failed: Set()
22/06/11 23:18:08 INFO DAGScheduler: Submitting ShuffleMapStage 1 (MapPartitionsRDD[5] at map at WordCountApp_SparkSubmit.scala:14), which has no missing parents
22/06/11 23:18:08 INFO MemoryStore: Block broadcast_2 stored as values in memory (estimated size 4.2 KB, free 366.0 MB)
22/06/11 23:18:08 INFO MemoryStore: Block broadcast_2_piece0 stored as bytes in memory (estimated size 2.5 KB, free 366.0 MB)
22/06/11 23:18:08 INFO BlockManagerInfo: Added broadcast_2_piece0 in memory on spark010:39921 (size: 2.5 KB, free: 366.3 MB)
22/06/11 23:18:08 INFO SparkContext: Created broadcast 2 from broadcast at DAGScheduler.scala:1161
22/06/11 23:18:08 INFO DAGScheduler: Submitting 1 missing tasks from ShuffleMapStage 1 (MapPartitionsRDD[5] at map at WordCountApp_SparkSubmit.scala:14) (first 15 tasks are for partitions Vector(0))
22/06/11 23:18:08 INFO TaskSchedulerImpl: Adding task set 1.0 with 1 tasks
22/06/11 23:18:08 INFO TaskSetManager: Starting task 0.0 in stage 1.0 (TID 1, localhost, executor driver, partition 0, ANY, 7651 bytes)
22/06/11 23:18:08 INFO Executor: Running task 0.0 in stage 1.0 (TID 1)
22/06/11 23:18:08 INFO ShuffleBlockFetcherIterator: Getting 1 non-empty blocks including 1 local blocks and 0 remote blocks
22/06/11 23:18:08 INFO ShuffleBlockFetcherIterator: Started 0 remote fetches in 9 ms
22/06/11 23:18:08 INFO Executor: Finished task 0.0 in stage 1.0 (TID 1). 1321 bytes result sent to driver
22/06/11 23:18:08 INFO TaskSetManager: Finished task 0.0 in stage 1.0 (TID 1) in 81 ms on localhost (executor driver) (1/1)
22/06/11 23:18:08 INFO TaskSchedulerImpl: Removed TaskSet 1.0, whose tasks have all completed, from pool 
22/06/11 23:18:08 INFO DAGScheduler: ShuffleMapStage 1 (map at WordCountApp_SparkSubmit.scala:14) finished in 0.104 s
22/06/11 23:18:08 INFO DAGScheduler: looking for newly runnable stages
22/06/11 23:18:08 INFO DAGScheduler: running: Set()
22/06/11 23:18:08 INFO DAGScheduler: waiting: Set(ResultStage 2)
22/06/11 23:18:08 INFO DAGScheduler: failed: Set()
22/06/11 23:18:08 INFO DAGScheduler: Submitting ResultStage 2 (MapPartitionsRDD[8] at saveAsTextFile at WordCountApp_SparkSubmit.scala:15), which has no missing parents
22/06/11 23:18:08 INFO MemoryStore: Block broadcast_3 stored as values in memory (estimated size 77.5 KB, free 365.9 MB)
22/06/11 23:18:08 INFO MemoryStore: Block broadcast_3_piece0 stored as bytes in memory (estimated size 28.4 KB, free 365.9 MB)
22/06/11 23:18:08 INFO BlockManagerInfo: Added broadcast_3_piece0 in memory on spark010:39921 (size: 28.4 KB, free: 366.2 MB)
22/06/11 23:18:08 INFO SparkContext: Created broadcast 3 from broadcast at DAGScheduler.scala:1161
22/06/11 23:18:08 INFO DAGScheduler: Submitting 1 missing tasks from ResultStage 2 (MapPartitionsRDD[8] at saveAsTextFile at WordCountApp_SparkSubmit.scala:15) (first 15 tasks are for partitions Vector(0))
22/06/11 23:18:08 INFO TaskSchedulerImpl: Adding task set 2.0 with 1 tasks
22/06/11 23:18:08 INFO TaskSetManager: Starting task 0.0 in stage 2.0 (TID 2, localhost, executor driver, partition 0, ANY, 7662 bytes)
22/06/11 23:18:08 INFO Executor: Running task 0.0 in stage 2.0 (TID 2)
22/06/11 23:18:09 INFO ShuffleBlockFetcherIterator: Getting 1 non-empty blocks including 1 local blocks and 0 remote blocks
22/06/11 23:18:09 INFO ShuffleBlockFetcherIterator: Started 0 remote fetches in 0 ms
22/06/11 23:18:09 INFO HadoopMapRedCommitProtocol: Using output committer class org.apache.hadoop.mapred.FileOutputCommitter
22/06/11 23:18:09 INFO FileOutputCommitter: File Output Committer Algorithm version is 1
22/06/11 23:18:09 INFO FileOutputCommitter: FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false
22/06/11 23:18:09 INFO FileOutputCommitter: Saved output of task 'attempt_20220611231808_0008_m_000000_0' to file:/learn_spark/data_new/out/_temporary/0/task_20220611231808_0008_m_000000
22/06/11 23:18:09 INFO SparkHadoopMapRedUtil: attempt_20220611231808_0008_m_000000_0: Committed
22/06/11 23:18:09 INFO Executor: Finished task 0.0 in stage 2.0 (TID 2). 1502 bytes result sent to driver
22/06/11 23:18:09 INFO TaskSetManager: Finished task 0.0 in stage 2.0 (TID 2) in 103 ms on localhost (executor driver) (1/1)
22/06/11 23:18:09 INFO TaskSchedulerImpl: Removed TaskSet 2.0, whose tasks have all completed, from pool 
22/06/11 23:18:09 INFO DAGScheduler: ResultStage 2 (runJob at SparkHadoopWriter.scala:78) finished in 0.130 s
22/06/11 23:18:09 INFO DAGScheduler: Job 0 finished: runJob at SparkHadoopWriter.scala:78, took 1.059982 s
22/06/11 23:18:09 INFO ContextCleaner: Cleaned accumulator 63
22/06/11 23:18:09 INFO ContextCleaner: Cleaned accumulator 69
22/06/11 23:18:09 INFO ContextCleaner: Cleaned accumulator 53
22/06/11 23:18:09 INFO ContextCleaner: Cleaned accumulator 41
22/06/11 23:18:09 INFO ContextCleaner: Cleaned accumulator 21
22/06/11 23:18:09 INFO ContextCleaner: Cleaned accumulator 46
22/06/11 23:18:09 INFO ContextCleaner: Cleaned accumulator 60
22/06/11 23:18:09 INFO ContextCleaner: Cleaned accumulator 20
22/06/11 23:18:09 INFO ContextCleaner: Cleaned accumulator 56
22/06/11 23:18:09 INFO ContextCleaner: Cleaned accumulator 47
22/06/11 23:18:09 INFO ContextCleaner: Cleaned accumulator 73
22/06/11 23:18:09 INFO ContextCleaner: Cleaned accumulator 24
22/06/11 23:18:09 INFO ContextCleaner: Cleaned accumulator 61
22/06/11 23:18:09 INFO ContextCleaner: Cleaned accumulator 19
22/06/11 23:18:09 INFO ContextCleaner: Cleaned accumulator 8
22/06/11 23:18:09 INFO SparkHadoopWriter: Job job_20220611231808_0008 committed.
22/06/11 23:18:09 INFO SparkUI: Stopped Spark web UI at http://spark010:4040
22/06/11 23:18:09 INFO BlockManagerInfo: Removed broadcast_3_piece0 on spark010:39921 in memory (size: 28.4 KB, free: 366.3 MB)
22/06/11 23:18:09 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
22/06/11 23:18:09 INFO MemoryStore: MemoryStore cleared
22/06/11 23:18:09 INFO BlockManager: BlockManager stopped
22/06/11 23:18:09 INFO BlockManagerMaster: BlockManagerMaster stopped
22/06/11 23:18:09 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
22/06/11 23:18:09 INFO SparkContext: Successfully stopped SparkContext
22/06/11 23:18:09 INFO ShutdownHookManager: Shutdown hook called
22/06/11 23:18:09 INFO ShutdownHookManager: Deleting directory /tmp/spark-ffb7f548-c158-492f-8c74-4147f92fe6a2
22/06/11 23:18:09 INFO ShutdownHookManager: Deleting directory /tmp/spark-c4f0ef53-896a-43b6-8a0b-fe5d0eb2d993
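
For reference, the stage boundaries in the log (textFile at WordCountApp_SparkSubmit.scala:11, maps at :12 and :14 each feeding a shuffle, saveAsTextFile at :15) suggest the app looks roughly like the sketch below. This is a hypothetical reconstruction, not the actual source, which was not posted:

import org.apache.spark.{SparkConf, SparkContext}

// Hypothetical sketch reconstructed from the stage boundaries in the log.
object WordCountApp_SparkSubmit {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf())
    sc.textFile(args(0))                       // :11  input path = args(0)
      .flatMap(_.split(" ")).map((_, 1))       // :12  map feeding the first shuffle
      .reduceByKey(_ + _)                      //      ShuffleMapStage 0
      .map(x => (x._2, x._1)).sortByKey(false) // :14  map feeding the second shuffle
      .map(x => (x._2, x._1))
      .saveAsTextFile(args(1))                 // :15  writes part files + _SUCCESS
    sc.stop()
  }
}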

Could you please take a look?
(One more note: in yarn mode it works fine and the result files are produced.)
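
As an aside on the local-vs-yarn difference: which filesystem saveAsTextFile targets is decided by the output path's scheme. A minimal sketch (assuming the cluster's fs.defaultFS points at HDFS, as the NameNode/DataNode processes in the jps output suggest):

import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.Path

// An explicit file:// scheme always targets the local filesystem of the
// machine running the task; a bare path resolves against fs.defaultFS.
val conf = new Configuration()
println(new Path("file:///learn_spark/data_new/out").getFileSystem(conf).getUri)
println(new Path("/learn_spark/data_new/out").getFileSystem(conf).getUri)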


1 Answer

Michael_PK

2022-06-13

Post your code so I can take a look.

Michael_PK replied to 慕少7351152:
Send me your jar, your code, and your data by email and I'll take a look. My QQ is in the course group.
2022-06-13
(7 replies in total)
