Error message

Source: 9-10 - Saving cleaned data to the target location

慕运维7479159

2017-12-22

17/12/22 15:54:41 WARN FileUtil: Failed to delete file or dir [E:\Software\data\clean\_temporary\0\_temporary\attempt_20171222154708_0000_m_000000_0\day=20170511\.part-00000-2ce28e10-e1ca-418a-a7e5-674710e7f94c.gz.parquet.crc]: it still exists.
17/12/22 15:54:41 WARN FileUtil: Failed to delete file or dir [E:\Software\data\clean\_temporary\0\_temporary\attempt_20171222154708_0000_m_000000_0\day=20170511\part-00000-2ce28e10-e1ca-418a-a7e5-674710e7f94c.gz.parquet]: it still exists.
Exception in thread "main" java.io.IOException: Unable to clear output directory file:/E:/Software/data/clean prior to writing to it
    at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand.deleteMatchingPartitions(InsertIntoHadoopFsRelationCommand.scala:140)
    at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand.run(InsertIntoHadoopFsRelationCommand.scala:82)
    at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:58)
    at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:56)
    at org.apache.spark.sql.execution.command.ExecutedCommandExec.doExecute(commands.scala:74)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:114)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:114)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:135)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
    at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:132)
    at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:113)
    at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:87)
    at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:87)
    at org.apache.spark.sql.execution.datasources.DataSource.write(DataSource.scala:492)
    at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:215)
    at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:198)
    at Test.SparkStatCleanJob$.main(SparkStatCleanJob.scala:62)
    at Test.SparkStatCleanJob.main(SparkStatCleanJob.scala)
17/12/22 15:54:41 INFO SparkContext: Invoking stop() from shutdown hook
17/12/22 15:54:41 INFO SparkUI: Stopped Spark web UI at http://10.0.30.26:4041
17/12/22 15:54:41 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
17/12/22 15:54:41 INFO MemoryStore: MemoryStore cleared
17/12/22 15:54:41 INFO BlockManager: BlockManager stopped
17/12/22 15:54:41 INFO BlockManagerMaster: BlockManagerMaster stopped
17/12/22 15:54:41 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
17/12/22 15:54:41 INFO SparkContext: Successfully stopped SparkContext
17/12/22 15:54:41 INFO ShutdownHookManager: Shutdown hook called
17/12/22 15:54:41 INFO ShutdownHookManager: Deleting directory C:\Users\5838\AppData\Local\Temp\spark-50bb2fef-adf1-4acf-bd09-f9c0db5496bb



2 answers

Michael_PK

2017-12-22

The key line is this one: java.io.IOException: Unable to clear output directory file:/E:/Software/data/clean prior to writing to it. Spark could not delete the existing output directory before writing the new results to it.

慕运维7479159
Thank you very much!
2018-06-21

火力全开dung

2018-06-20

Hi, may I ask how you solved it? Thanks!

