
MapReduce code does not run on the Hadoop cluster

I have set up a hadoop-1.2.1 multi-node cluster. I have spent a lot of time on this, but I cannot figure out why my MapReduce code will not run. In my .bashrc file I export JAVA_HOME, HADOOP_HOME, PATH, and HADOOP_CLASSPATH. I have tried both WordCount and ProcessUnits and run into the same problem, so at this point I do not think there is anything wrong with the code itself.

[email protected]:~/hadoop$ hadoop com.sun.tools.javac.Main hadoop/WordCount.java 
Warning: $HADOOP_HOME is deprecated. 

[email protected]:~/hadoop$ jar -cf units.jar hadoop/WordCount*.class 
[email protected]:~/hadoop$ hadoop jar units.jar hadoop/WordCount input_dir output_dir 
Warning: $HADOOP_HOME is deprecated. 

17/03/06 02:41:48 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same. 
17/03/06 02:41:48 WARN mapred.JobClient: No job jar file set. User classes may not be found. See JobConf(Class) or JobConf#setJar(String). 
17/03/06 02:41:49 INFO input.FileInputFormat: Total input paths to process : 1 
17/03/06 02:41:49 INFO util.NativeCodeLoader: Loaded the native-hadoop library 
17/03/06 02:41:49 WARN snappy.LoadSnappy: Snappy native library not loaded 
17/03/06 02:41:52 INFO mapred.JobClient: Running job: job_201703052335_0001 
17/03/06 02:41:53 INFO mapred.JobClient: map 0% reduce 0% 
17/03/06 02:42:02 INFO mapred.JobClient: Task Id : attempt_201703052335_0001_m_000000_0, Status : FAILED 
java.lang.RuntimeException: java.lang.ClassNotFoundException: hadoop.WordCount$Map 
    at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:857) 
    at org.apache.hadoop.mapreduce.JobContext.getMapperClass(JobContext.java:199) 
    at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:718) 
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:364) 
    at org.apache.hadoop.mapred.Child$4.run(Child.java:255) 
    at java.security.AccessController.doPrivileged(Native Method) 
    at javax.security.auth.Subject.doAs(Subject.java:415) 
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190) 
    at org.apache.hadoop.mapred.Child.main(Child.java:249) 
Caused by: java.lang.ClassNotFoundException: hadoop.WordCount$Map 
    at java.net.URLClassLoader$1.run(URLClassLoader.java:359) 
    at java.net.URLClassLoader$1.run(URLClassLoader.java:348) 
    at java.security.AccessController.doPrivileged(Native Method) 
    at java.net.URLClassLoader.findClass(URLClassLoader.java:347) 
    at java.lang.ClassLoader.loadClass(ClassLoader.java:425) 
    at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308) 
    at java.lang.ClassLoader.loadClass(ClassLoader.java:358) 
    at java.lang.Class.forName0(Native Method) 
    at java.lang.Class.forName(Class.java:278) 
    at org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:810) 
    at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:855) 
    ... 8 more 

17/03/06 02:42:02 WARN mapred.JobClient: Error reading task outputConnection refused (Connection refused) 
17/03/06 02:42:02 WARN mapred.JobClient: Error reading task outputConnection refused (Connection refused) 
17/03/06 02:42:07 INFO mapred.JobClient: Task Id : attempt_201703052335_0001_m_000000_1, Status : FAILED 
java.lang.RuntimeException: java.lang.ClassNotFoundException: hadoop.WordCount$Map 
    at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:857) 
    at org.apache.hadoop.mapreduce.JobContext.getMapperClass(JobContext.java:199) 
    at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:718) 
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:364) 
    at org.apache.hadoop.mapred.Child$4.run(Child.java:255) 
    at java.security.AccessController.doPrivileged(Native Method) 
    at javax.security.auth.Subject.doAs(Subject.java:415) 
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190) 
    at org.apache.hadoop.mapred.Child.main(Child.java:249) 
Caused by: java.lang.ClassNotFoundException: hadoop.WordCount$Map 
    at java.net.URLClassLoader$1.run(URLClassLoader.java:359) 
    at java.net.URLClassLoader$1.run(URLClassLoader.java:348) 
    at java.security.AccessController.doPrivileged(Native Method) 
    at java.net.URLClassLoader.findClass(URLClassLoader.java:347) 
    at java.lang.ClassLoader.loadClass(ClassLoader.java:425) 
    at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308) 
    at java.lang.ClassLoader.loadClass(ClassLoader.java:358) 
    at java.lang.Class.forName0(Native Method) 
    at java.lang.Class.forName(Class.java:278) 
    at org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:810) 
    at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:855) 
    ... 8 more 

17/03/06 02:42:07 WARN mapred.JobClient: Error reading task outputConnection refused (Connection refused) 
17/03/06 02:42:07 WARN mapred.JobClient: Error reading task outputConnection refused (Connection refused) 
17/03/06 02:42:12 INFO mapred.JobClient: Task Id : attempt_201703052335_0001_m_000000_2, Status : FAILED 
java.lang.RuntimeException: java.lang.ClassNotFoundException: hadoop.WordCount$Map 
    at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:857) 
    at org.apache.hadoop.mapreduce.JobContext.getMapperClass(JobContext.java:199) 
    at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:718) 
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:364) 
    at org.apache.hadoop.mapred.Child$4.run(Child.java:255) 
    at java.security.AccessController.doPrivileged(Native Method) 
    at javax.security.auth.Subject.doAs(Subject.java:415) 
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190) 
    at org.apache.hadoop.mapred.Child.main(Child.java:249) 
Caused by: java.lang.ClassNotFoundException: hadoop.WordCount$Map 
    at java.net.URLClassLoader$1.run(URLClassLoader.java:359) 
    at java.net.URLClassLoader$1.run(URLClassLoader.java:348) 
    at java.security.AccessController.doPrivileged(Native Method) 
    at java.net.URLClassLoader.findClass(URLClassLoader.java:347) 
    at java.lang.ClassLoader.loadClass(ClassLoader.java:425) 
    at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308) 
    at java.lang.ClassLoader.loadClass(ClassLoader.java:358) 
    at java.lang.Class.forName0(Native Method) 
    at java.lang.Class.forName(Class.java:278) 
    at org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:810) 
    at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:855) 
    ... 8 more 

17/03/06 02:42:12 WARN mapred.JobClient: Error reading task outputConnection refused (Connection refused) 
17/03/06 02:42:12 WARN mapred.JobClient: Error reading task outputConnection refused (Connection refused) 
17/03/06 02:42:19 INFO mapred.JobClient: Job complete: job_201703052335_0001 
17/03/06 02:42:19 INFO mapred.JobClient: Counters: 7 
17/03/06 02:42:19 INFO mapred.JobClient: Job Counters 
17/03/06 02:42:19 INFO mapred.JobClient:  SLOTS_MILLIS_MAPS=24389 
17/03/06 02:42:19 INFO mapred.JobClient:  Total time spent by all reduces waiting after reserving slots (ms)=0 
17/03/06 02:42:19 INFO mapred.JobClient:  Total time spent by all maps waiting after reserving slots (ms)=0 
17/03/06 02:42:19 INFO mapred.JobClient:  Launched map tasks=4 
17/03/06 02:42:20 INFO mapred.JobClient:  Data-local map tasks=4 
17/03/06 02:42:20 INFO mapred.JobClient:  SLOTS_MILLIS_REDUCES=0 
17/03/06 02:42:20 INFO mapred.JobClient:  Failed map tasks=1 

WordCount.java

package org.myorg; 

import java.io.IOException; 
import java.util.*; 

import org.apache.hadoop.fs.Path; 
import org.apache.hadoop.conf.*; 
import org.apache.hadoop.io.*; 
import org.apache.hadoop.mapred.*; 
import org.apache.hadoop.util.*; 

public class WordCount {

    public static class Map extends MapReduceBase implements Mapper<LongWritable, Text, Text, IntWritable> {
        private final static IntWritable one = new IntWritable(1);
        private Text word = new Text();

        public void map(LongWritable key, Text value, OutputCollector<Text, IntWritable> output, Reporter reporter) throws IOException {
            String line = value.toString();
            StringTokenizer tokenizer = new StringTokenizer(line);
            while (tokenizer.hasMoreTokens()) {
                word.set(tokenizer.nextToken());
                output.collect(word, one);
            }
        }
    }

    public static class Reduce extends MapReduceBase implements Reducer<Text, IntWritable, Text, IntWritable> {
        public void reduce(Text key, Iterator<IntWritable> values, OutputCollector<Text, IntWritable> output, Reporter reporter) throws IOException {
            int sum = 0;
            while (values.hasNext()) {
                sum += values.next().get();
            }
            output.collect(key, new IntWritable(sum));
        }
    }

    public static void main(String[] args) throws Exception {
        JobConf conf = new JobConf(WordCount.class);
        conf.setJobName("wordcount");

        conf.setOutputKeyClass(Text.class);
        conf.setOutputValueClass(IntWritable.class);

        conf.setMapperClass(Map.class);
        conf.setCombinerClass(Reduce.class);
        conf.setReducerClass(Reduce.class);

        conf.setInputFormat(TextInputFormat.class);
        conf.setOutputFormat(TextOutputFormat.class);

        FileInputFormat.setInputPaths(conf, new Path(args[0]));
        FileOutputFormat.setOutputPath(conf, new Path(args[1]));

        JobClient.runJob(conf);
    }
}

Hadoop 1.2.1? Why? You should be using Hadoop 2.x to run up-to-date code. Apart from that, 'hadoop/WordCount' is not a correct class name... see the official Hadoop MapReduce tutorial –


Possible duplicate of [Class not found exception in Mapreduce wordcount job](http://stackoverflow.com/questions/21373550/class-not-found-exception-in-mapreduce-wordcount-job) – vefthym

Answer


Try adding conf.setJarByClass(WordCount.class); it should get your jar file added to the distributed cache.
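
For illustration, a minimal sketch of what the main method would look like with that call added (this assumes the class stays in package org.myorg exactly as posted; only the driver changes):

    public static void main(String[] args) throws Exception {
        JobConf conf = new JobConf(WordCount.class);
        conf.setJobName("wordcount");

        // Point the job at the jar containing WordCount (and its inner Map/Reduce classes),
        // so the task JVMs on the worker nodes can resolve org.myorg.WordCount$Map at runtime.
        conf.setJarByClass(WordCount.class);

        conf.setOutputKeyClass(Text.class);
        conf.setOutputValueClass(IntWritable.class);

        conf.setMapperClass(Map.class);
        conf.setCombinerClass(Reduce.class);
        conf.setReducerClass(Reduce.class);

        conf.setInputFormat(TextInputFormat.class);
        conf.setOutputFormat(TextOutputFormat.class);

        FileInputFormat.setInputPaths(conf, new Path(args[0]));
        FileOutputFormat.setOutputPath(conf, new Path(args[1]));

        JobClient.runJob(conf);
    }

Note also, as pointed out in the comments, that the class name passed to hadoop jar has to match the package declared in the source (org.myorg in the code as posted), and the compiled classes have to sit under the matching directory inside the jar, so the job would be launched with the fully qualified name (for example hadoop jar units.jar org.myorg.WordCount input_dir output_dir). The "No job jar file set" warning in the log is a hint that no job jar was ever attached, which is exactly what setJarByClass is meant to fix.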
