2016-10-10

I am hitting the Java Spark RDD exception described at http://markmail.org/download.xqy?id=zua6upabiylzeetp&number=2: `token("*") > ? AND token("*") <= ? ALLOW FILTERING: line 1:8 no viable alternative at input 'FROM' (SELECT [FROM]...)`.

I took a very simple example copied from the Java + Spark + Cassandra Java examples, so here is my application code:

package com.chatSparkConnactionTest; 

import static com.datastax.spark.connector.japi.CassandraJavaUtil.javaFunctions; 
import static com.datastax.spark.connector.japi.CassandraJavaUtil.mapRowTo; 
import java.io.Serializable; 
import org.apache.spark.SparkConf; 
import org.apache.spark.api.java.JavaRDD; 
import org.apache.spark.api.java.JavaSparkContext; 
import org.apache.spark.api.java.function.Function; 


public class JavaDemoRDDBean implements Serializable { 
    private static final long serialVersionUID = 1L; 
    public static void main(String[] args) { 

     SparkConf conf = new SparkConf(). 
      setAppName("chat"). 
      setMaster("local"). 
      set("spark.cassandra.connection.host", "127.0.0.1"); 
     JavaSparkContext sc = new JavaSparkContext(conf); 

     JavaRDD<String> rddBeans = javaFunctions(sc) 
      .cassandraTable("chat", "dictionary", mapRowTo(Dictionary.class)) 
      .map(
      new Function<Dictionary, String>() { 
        public String call(Dictionary dictionary) throws Exception { 
        String row = dictionary.toString(); 
        System.out.println("Row :" + row); 
        return row; 
       } 
     }); 
     System.out.println(rddBeans.collect().get(0)); 
     System.out.println(rddBeans.collect().size()); 
    } 
} 

Here is the code of the Dictionary class:

package com.chatSparkConnactionTest; 

import org.spark_project.guava.base.Objects; 

public class Dictionary { 
    private String value_id; 
    private String d_name; 
    private String d_value; 

    public static Dictionary newInstance(String value_id, String d_name, String d_value) { 
     Dictionary dictionary = new Dictionary(); 
     dictionary.setId(value_id); 
     dictionary.setName(d_name); 
     dictionary.setValue(d_value); 
     return dictionary; 
    } 

    public String getId() { 
     return value_id; 
    } 

    public void setId(String value_id) { 
     this.value_id = value_id; 
    } 

    public String getName() { 
     return d_name; 
    } 

    public void setName(String d_name) { 
     this.d_name = d_name; 
    } 

    public String getValue() { 
     return d_value; 
    } 

    public void setValue(String d_value) { 
     this.d_value = d_value; 
    } 

    @Override 
    public String toString() { 
     return Objects.toStringHelper(this) 
       .add("value_id", value_id) 
       .add("d_name", d_name) 
       .add("d_value", d_value) 
       .toString(); 
    } 
} 

And this is the code used to create the Cassandra DB table:

CREATE TABLE dictionary (
    value_id text, 
    d_value  text, 
    d_name text, 
    PRIMARY KEY (value_id, d_name) 
) WITH comment = 'dictionary values' 
AND CLUSTERING ORDER BY (d_name ASC); 
INSERT INTO chat.dictionary (d_name,d_value,value_id) VALUES ('Friendship Status','Requested','1'); 
INSERT INTO chat.dictionary (d_name,d_value,value_id) VALUES ('Friendship Status','Friends','2'); 

I get this error on every attempt to run my application. I found two links on this problem, but I still can't figure out how to make my application work: InvalidRequestException(why:empid cannot be restricted by more than one relation if it includes an Equal) and Spark Datastax Java API Select statements.

Here is my error:

java.io.IOException: Exception during preparation of SELECT FROM "chat"."dictionary" WHERE token("value_id") > ? AND token("value_id") <= ? ALLOW FILTERING: line 1:8 no viable alternative at input 'FROM' (SELECT [FROM]...) 
    at com.datastax.spark.connector.rdd.CassandraTableScanRDD.createStatement(CassandraTableScanRDD.scala:293) 
    at com.datastax.spark.connector.rdd.CassandraTableScanRDD.com$datastax$spark$connector$rdd$CassandraTableScanRDD$$fetchTokenRange(CassandraTableScanRDD.scala:307) 
    at com.datastax.spark.connector.rdd.CassandraTableScanRDD$$anonfun$19.apply(CassandraTableScanRDD.scala:335) 
    at com.datastax.spark.connector.rdd.CassandraTableScanRDD$$anonfun$19.apply(CassandraTableScanRDD.scala:335) 
    at scala.collection.Iterator$$anon$12.nextCur(Iterator.scala:434) 
    at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:440) 
    at com.datastax.spark.connector.util.CountingIterator.hasNext(CountingIterator.scala:12) 
    at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408) 
    at scala.collection.Iterator$class.foreach(Iterator.scala:893) 
    at scala.collection.AbstractIterator.foreach(Iterator.scala:1336) 
    at scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:59) 
    at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:104) 
    at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:48) 
    at scala.collection.TraversableOnce$class.to(TraversableOnce.scala:310) 
    at scala.collection.AbstractIterator.to(Iterator.scala:1336) 
    at scala.collection.TraversableOnce$class.toBuffer(TraversableOnce.scala:302) 
    at scala.collection.AbstractIterator.toBuffer(Iterator.scala:1336) 
    at scala.collection.TraversableOnce$class.toArray(TraversableOnce.scala:289) 
    at scala.collection.AbstractIterator.toArray(Iterator.scala:1336) 
    at org.apache.spark.rdd.RDD$$anonfun$collect$1$$anonfun$13.apply(RDD.scala:893) 
    at org.apache.spark.rdd.RDD$$anonfun$collect$1$$anonfun$13.apply(RDD.scala:893) 
    at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1897) 
    at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1897) 
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:70) 
    at org.apache.spark.scheduler.Task.run(Task.scala:85) 
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:274) 
    at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source) 
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source) 
    at java.lang.Thread.run(Unknown Source) 
Caused by: com.datastax.driver.core.exceptions.SyntaxError: line 1:8 no viable alternative at input 'FROM' (SELECT [FROM]...) 
    at com.datastax.driver.core.exceptions.SyntaxError.copy(SyntaxError.java:58) 
    at com.datastax.driver.core.exceptions.SyntaxError.copy(SyntaxError.java:24) 
    at com.datastax.driver.core.DriverThrowables.propagateCause(DriverThrowables.java:37) 
    at com.datastax.driver.core.AbstractSession.prepare(AbstractSession.java:113) 
    at com.datastax.spark.connector.cql.PreparedStatementCache$.prepareStatement(PreparedStatementCache.scala:45) 
    at com.datastax.spark.connector.cql.SessionProxy.invoke(SessionProxy.scala:28) 
    at com.sun.proxy.$Proxy16.prepare(Unknown Source) 
    at com.datastax.spark.connector.rdd.CassandraTableScanRDD.createStatement(CassandraTableScanRDD.scala:279) 
    ... 28 more 
Caused by: com.datastax.driver.core.exceptions.SyntaxError: line 1:8 no viable alternative at input 'FROM' (SELECT [FROM]...) 
    at com.datastax.driver.core.Responses$Error.asException(Responses.java:132) 
    at com.datastax.driver.core.SessionManager$4.apply(SessionManager.java:224) 
    at com.datastax.driver.core.SessionManager$4.apply(SessionManager.java:200) 
    at com.google.common.util.concurrent.Futures$ChainingListenableFuture.run(Futures.java:861) 
    ... 3 more 
16/10/09 21:36:06 INFO TaskSetManager: Starting task 1.0 in stage 0.0 (TID 1, localhost, partition 1, ANY, 19449 bytes) 
16/10/09 21:36:06 WARN TaskSetManager: Lost task 0.0 in stage 0.0 (TID 0, localhost): java.io.IOException: Exception during preparation of SELECT FROM "chat"."dictionary" WHERE token("value_id") > ? AND token("value_id") <= ? ALLOW FILTERING: line 1:8 no viable alternative at input 'FROM' (SELECT [FROM]...) 
    ... (stack trace identical to the one above)

16/10/09 21:36:06 INFO Executor: Running task 1.0 in stage 0.0 (TID 1) 
16/10/09 21:36:06 ERROR TaskSetManager: Task 0 in stage 0.0 failed 1 times; aborting job 
16/10/09 21:36:06 INFO TaskSchedulerImpl: Cancelling stage 0 
16/10/09 21:36:06 INFO TaskSchedulerImpl: Stage 0 was cancelled 
16/10/09 21:36:06 INFO Executor: Executor is trying to kill task 1.0 in stage 0.0 (TID 1) 
16/10/09 21:36:06 INFO DAGScheduler: ResultStage 0 (collect at JavaDemoRDDBean.java:32) failed in 0.163 s 
16/10/09 21:36:06 INFO DAGScheduler: Job 0 failed: collect at JavaDemoRDDBean.java:32, took 0.349047 s 
Exception in thread "main" org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 0.0 failed 1 times, most recent failure: Lost task 0.0 in stage 0.0 (TID 0, localhost): java.io.IOException: Exception during preparation of SELECT FROM "chat"."dictionary" WHERE token("value_id") > ? AND token("value_id") <= ? ALLOW FILTERING: line 1:8 no viable alternative at input 'FROM' (SELECT [FROM]...) 
    ... (stack trace identical to the one above)

Driver stacktrace: 
    at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1450) 
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1438) 
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1437) 
    at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59) 
    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48) 
    at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1437) 
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:811) 
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:811) 
    at scala.Option.foreach(Option.scala:257) 
    at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:811) 
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1659) 
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1618) 
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1607) 
    at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48) 
    at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:632) 
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1871) 
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1884) 
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1897) 
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1911) 
    at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:893) 
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151) 
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112) 
    at org.apache.spark.rdd.RDD.withScope(RDD.scala:358) 
    at org.apache.spark.rdd.RDD.collect(RDD.scala:892) 
    at org.apache.spark.api.java.JavaRDDLike$class.collect(JavaRDDLike.scala:360) 
    at org.apache.spark.api.java.AbstractJavaRDDLike.collect(JavaRDDLike.scala:45) 
    at com.chatSparkConnactionTest.JavaDemoRDDBean.main(JavaDemoRDDBean.java:32) 
Caused by: java.io.IOException: Exception during preparation of SELECT FROM "chat"."dictionary" WHERE token("value_id") > ? AND token("value_id") <= ? ALLOW FILTERING: line 1:8 no viable alternative at input 'FROM' (SELECT [FROM]...) 
    ... (stack trace identical to the one above)
16/10/09 21:36:06 ERROR Executor: Exception in task 1.0 in stage 0.0 (TID 1) 
java.io.IOException: Exception during preparation of SELECT FROM "chat"."dictionary" WHERE token("value_id") > ? AND token("value_id") <= ? ALLOW FILTERING: line 1:8 no viable alternative at input 'FROM' (SELECT [FROM]...) 
    ... (stack trace identical to the one above)

Can anyone help me make my simple example work? Thank you!

Answer

There is a problem with your Dictionary JavaBean class. mapRowTo maps table columns to bean properties by name, so the getter/setter names must match the Cassandra column names (value_id, d_name, d_value). With getId/getName/getValue no column can be matched, the connector selects zero columns, and the generated query becomes `SELECT FROM "chat"."dictionary" ...` with an empty column list, which Cassandra rejects with exactly the syntax error you see. Use the class below:

public class Dictionary { 
    private String value_id; 
    private String d_name; 
    private String d_value; 

    public String getValue_id() { 
        return value_id; 
    } 

    public void setValue_id(String value_id) { 
        this.value_id = value_id; 
    } 

    public String getD_name() { 
        return d_name; 
    } 

    public void setD_name(String d_name) { 
        this.d_name = d_name; 
    } 

    public String getD_value() { 
        return d_value; 
    } 

    public void setD_value(String d_value) { 
        this.d_value = d_value; 
    } 
} 
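The renaming works because the connector discovers bean properties through standard JavaBeans introspection. The sketch below needs only the plain JDK (no Spark or Cassandra); the helper and the two nested bean classes are illustrative stand-ins, not part of the connector. It shows which property names each style of getter actually exposes:

```java
import java.beans.BeanInfo;
import java.beans.IntrospectionException;
import java.beans.Introspector;
import java.beans.PropertyDescriptor;
import java.util.ArrayList;
import java.util.List;

public class BeanProperties {

    // Lists the JavaBean property names that introspection finds on a class,
    // ignoring the ones inherited from Object (e.g. "class").
    public static List<String> propertyNames(Class<?> beanClass) {
        try {
            List<String> names = new ArrayList<>();
            BeanInfo info = Introspector.getBeanInfo(beanClass, Object.class);
            for (PropertyDescriptor pd : info.getPropertyDescriptors()) {
                names.add(pd.getName());
            }
            return names;
        } catch (IntrospectionException e) {
            throw new RuntimeException(e);
        }
    }

    // Mirrors the broken bean: field is value_id, but the accessors are getId/setId,
    // so introspection reports a property named "id".
    public static class BrokenDictionary {
        private String value_id;
        public String getId() { return value_id; }
        public void setId(String v) { value_id = v; }
    }

    // Mirrors the fixed bean: accessor names follow the column name,
    // so introspection reports a property named "value_id".
    public static class FixedDictionary {
        private String value_id;
        public String getValue_id() { return value_id; }
        public void setValue_id(String v) { value_id = v; }
    }

    public static void main(String[] args) {
        System.out.println(propertyNames(BrokenDictionary.class)); // [id]
        System.out.println(propertyNames(FixedDictionary.class));  // [value_id]
    }
}
```

With the broken bean the only discovered property is `id`, which matches no column of the `dictionary` table, so the connector has nothing to put in the SELECT column list.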

Thank you, my problem is solved :) –
