hive hadoop: error when selecting data from a table

After creating an external table in Hive, I wanted to count the number of tweets, so I ran the query below, but it failed with the error shown. This is my mapred-site.xml configuration:

<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>localhost:8021</value>
  </property>
</configuration>
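
For reference, the DDL of the external table is not shown in the question. Below is a minimal sketch of what it might look like, assuming the tweets are stored as delimited text in HDFS; only the table name tweet comes from the query in the log, while the columns, delimiter, and LOCATION are placeholders.

-- Hedged sketch only: the columns, delimiter and HDFS path are assumptions,
-- not taken from the question.
CREATE EXTERNAL TABLE IF NOT EXISTS tweet (
  id         BIGINT,
  user_name  STRING,
  text       STRING
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t'
STORED AS TEXTFILE
LOCATION '/user/hive/tweets';   -- hypothetical path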

hive> select count(*) from tweet;   
Total MapReduce jobs = 1 
Launching Job 1 out of 1 
Number of reduce tasks determined at compile time: 1 
In order to change the average load for a reducer (in bytes): 
    set hive.exec.reducers.bytes.per.reducer=<number> 
In order to limit the maximum number of reducers: 
    set hive.exec.reducers.max=<number> 
In order to set a constant number of reducers: 
    set mapred.reduce.tasks=<number> 
Starting Job = job_1464556774961_0005, Tracking URL = http://ubuntu:8088/proxy/application_1464556774961_0005/ 
Kill Command = /usr/local/hadoop/bin/hadoop job -Dmapred.job.tracker=localhost:8021 -kill job_1464556774961_0005 
Hadoop job information for Stage-1: number of mappers: 2; number of reducers: 1 
2016-05-29 15:14:24,207 Stage-1 map = 0%, reduce = 0% 
2016-05-29 15:14:30,496 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 1.91 sec 
2016-05-29 15:14:31,532 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 1.91 sec 
2016-05-29 15:14:32,558 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 1.91 sec 
2016-05-29 15:14:33,592 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 1.91 sec 
2016-05-29 15:14:34,625 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 1.91 sec 
2016-05-29 15:14:35,649 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 1.91 sec 
2016-05-29 15:14:36,676 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 1.91 sec 
2016-05-29 15:14:37,697 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 1.91 sec 
2016-05-29 15:14:38,720 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 1.91 sec 
2016-05-29 15:14:39,745 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 1.91 sec 
2016-05-29 15:14:40,776 Stage-1 map = 50%, reduce = 17%, Cumulative CPU 2.14 sec 
2016-05-29 15:14:41,804 Stage-1 map = 50%, reduce = 17%, Cumulative CPU 2.14 sec 
2016-05-29 15:14:42,823 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 1.91 sec 
2016-05-29 15:14:43,847 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 1.91 sec 
MapReduce Total cumulative CPU time: 1 seconds 910 msec 
Ended Job = job_1464556774961_0005 with errors 
Error during job, obtaining debugging information... 
Examining task ID: task_1464556774961_0005_m_000000 (and more) from job job_1464556774961_0005 
Exception in thread "Thread-128" java.lang.RuntimeException: Error while reading from task log url 
    at org.apache.hadoop.hive.ql.exec.errors.TaskLogProcessor.getStackTraces(TaskLogProcessor.java:240) 
    at org.apache.hadoop.hive.ql.exec.JobDebugger.showJobFailDebugInfo(JobDebugger.java:227) 
    at org.apache.hadoop.hive.ql.exec.JobDebugger.run(JobDebugger.java:92) 
    at java.lang.Thread.run(Thread.java:745) 
Caused by: java.io.IOException: Server returned HTTP response code: 400 for URL: http://ubuntu:13562/tasklog?taskid=attempt_1464556774961_0005_m_000000_3&start=-8193 
    at sun.net.www.protocol.http.HttpURLConnection.getInputStream0(HttpURLConnection.java:1840) 
    at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1441) 
    at java.net.URL.openStream(URL.java:1045) 
    at org.apache.hadoop.hive.ql.exec.errors.TaskLogProcessor.getStackTraces(TaskLogProcessor.java:192) 
    ... 3 more 
FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.MapRedTask 
MapReduce Jobs Launched: 
Job 0: Map: 2 Reduce: 1 Cumulative CPU: 1.91 sec HDFS Read: 277 HDFS Write: 0 FAIL 
Total MapReduce CPU Time Spent: 1 seconds 910 msec 
hive> 

"HDFS Write: 0 ... FAIL" means either that you don't have the right permissions on HDFS or that there is a space problem.... –
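
Both possibilities can be checked from inside the Hive CLI with dfs commands. This is only a sketch; /user/hive/warehouse is the default warehouse location, and the table's actual directory may differ on your setup.

-- Check ownership and permissions of the warehouse directory
-- (default location; adjust the path if your table lives elsewhere):
dfs -ls /user/hive/warehouse;

-- Check how much HDFS space is left:
dfs -df -h /;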


Can you please help me with what I should do now to solve this permission issue (hadoop fs -chmod 777 on which directory/file?) and which space do you mean? – javac

Answers


This is caused by the hive-storage-api module missing from the hive-exec jar. You should update Hive to the latest version to pick up the most recent fixes.

A temporary fix is to add the storage-api jar explicitly:

add jar ./dist/hive/lib/hive-storage-api-2.0.0-SNAPSHOT.jar; 
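
If that makes the query work, keep in mind that add jar only lasts for the current CLI session. One way to make it persistent, which is my suggestion rather than part of the original answer, is to put the same command in ~/.hiverc, which the Hive CLI executes at startup:

-- ~/.hiverc: run automatically when the Hive CLI starts, so the
-- storage-api jar is registered in every session.
-- The path below is a placeholder; point it at wherever the jar actually
-- lives, and prefer an absolute path over the relative one used above.
add jar /usr/local/hive/lib/hive-storage-api-2.0.0-SNAPSHOT.jar;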

@javac: If this helps solve the problem, please mark this answer as the accepted one. – Akarsh
