Got an error in the traffic statistics job, code attached, please advise

Source: 6-18 Upgrading the province pageview statistics feature

慕虎5193221

2020-02-21

Caused by: java.lang.NullPointerException
at org.apache.hadoop.io.Text.encode(Text.java:450)
at org.apache.hadoop.io.Text.set(Text.java:198)
at org.apache.hadoop.io.Text.&lt;init&gt;(Text.java:88)
at com.imooc.bigdata.project.mrv2.ProvinceStatV2App$MyMapper.map(ProvinceStatV2App.java:84)
at com.imooc.bigdata.project.mrv2.ProvinceStatV2App$MyMapper.map(ProvinceStatV2App.java:68)
at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:146)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:787)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
at org.apache.hadoop.mapred.LocalJobRunner$Job$MapTaskRunnable.run(LocalJobRunner.java:270)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
[INFO ] method:org.apache.hadoop.mapreduce.Job.monitorAndPrintJob(Job.java:1431)
Job job_local909125896_0001 failed with state FAILED due to: NA
[INFO ] method:org.apache.hadoop.mapreduce.Job.monitorAndPrintJob(Job.java:1436)
Counters: 0

Code attached:
public class ProvinceStatV2App {

public static void main(String[] args) throws Exception {

    Configuration configuration = new Configuration();
    FileSystem fileSystem = FileSystem.get(configuration);
    Path outputPath = new Path("output/v2/provincestat");
    if(fileSystem.exists(outputPath)){
        fileSystem.delete(outputPath,true);
    }

    Job job = Job.getInstance(configuration);
    job.setJarByClass(ProvinceStatV2App.class);

    // set the Mapper class
    job.setMapperClass(MyMapper.class);
    // set the Reducer class
    job.setReducerClass(MyReducer.class);

    // map output key type
    job.setMapOutputKeyClass(Text.class);

    // map output value type
    job.setMapOutputValueClass(LongWritable.class);

    // reduce output key type
    job.setOutputKeyClass(Text.class);
    // reduce output value type
    job.setOutputValueClass(LongWritable.class);

    // set the input path
    FileInputFormat.setInputPaths(job, new Path("input/etl"));
    // set the output path (reuse the path deleted above)
    FileOutputFormat.setOutputPath(job, outputPath);

    // submit the job and wait for it to finish
    job.waitForCompletion(true);

}

static class MyMapper extends Mapper<LongWritable, Text, Text, LongWritable> {
    private LogParser logParser;
    private LongWritable ONE = new LongWritable(1);

    @Override
    protected void setup(Context context) throws IOException, InterruptedException {
        logParser = new LogParser();
    }

    @Override
    protected void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {
        String log = value.toString();

        Map<String, String> info = logParser.parsev2(log);

        context.write(new Text(info.get("province")), ONE);
    }

}
static class MyReducer extends Reducer<Text, LongWritable, Text, LongWritable> {
    @Override
    protected void reduce(Text key, Iterable<LongWritable> values, Context context) throws IOException, InterruptedException {
        long count = 0;
        for (LongWritable value : values) {
            count++;
        }
        context.write(key, new LongWritable(count));
    }
}

}
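The MyMapper/MyReducer pair above is the classic word-count pattern: each mapper emits a (province, 1) pair per log line, and the reducer sums the ones per key. The same aggregation can be sketched in plain Java for intuition, with a HashMap standing in for Hadoop's shuffle (the sample input is made up):

```java
import java.util.HashMap;
import java.util.Map;

public class ProvinceCountSketch {

    // Plain-Java equivalent of MyMapper emitting (province, 1) and
    // MyReducer doing count++ per key; no Hadoop involved.
    static Map<String, Long> countByProvince(String[] provinces) {
        Map<String, Long> counts = new HashMap<>();
        for (String p : provinces) {
            counts.merge(p, 1L, Long::sum); // the reducer's count++ for this key
        }
        return counts;
    }

    public static void main(String[] args) {
        String[] sample = {"Zhejiang", "Beijing", "Zhejiang"};
        // Zhejiang -> 2, Beijing -> 1
        System.out.println(countByProvince(sample));
    }
}
```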

1 Answer

Michael_PK

2020-02-21

The error is a NullPointerException. Use the line number in the stack trace to locate the code, set a breakpoint there, and check which object is null.
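Judging from the trace (the NPE is raised inside Text.encode, called from new Text(...) at line 84), a likely trigger is that parsev2 returns null, or a map with no "province" entry, for a line it cannot parse, so the mapper ends up calling new Text(null). A minimal plain-Java sketch of a defensive guard, with a hypothetical stand-in for parsev2 and an assumed "-" fallback value (neither is the course's actual code):

```java
import java.util.HashMap;
import java.util.Map;

public class ProvinceGuardSketch {

    // Hypothetical stand-in for logParser.parsev2: a malformed line
    // yields a map with no "province" entry (assumed behavior).
    static Map<String, String> parseV2(String log) {
        Map<String, String> info = new HashMap<>();
        int idx = log.indexOf("province=");
        if (idx >= 0) {
            info.put("province", log.substring(idx + "province=".length()));
        }
        return info;
    }

    // Guard to apply in map() before constructing the output key:
    // fall back to "-" so a bad line can never produce new Text(null).
    static String provinceOrDefault(Map<String, String> info) {
        String p = (info == null) ? null : info.get("province");
        return (p == null || p.isEmpty()) ? "-" : p;
    }

    public static void main(String[] args) {
        System.out.println(provinceOrDefault(parseV2("province=Zhejiang"))); // Zhejiang
        System.out.println(provinceOrDefault(parseV2("garbled line")));      // -
    }
}
```

With a guard like this the job counts unparseable lines under "-" instead of failing; whether to skip such lines entirely is a design choice.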
