2017-04-23

When I try to run sqoop import-all-tables on the Cloudera QuickStart VM, I get the following error:

sqoop import-all-tables -m 12 --connect "jdbc:mysql://quickstart.cloudera:3306/retail_db" --username=retail_dba --password=cloudera --warehouse-dir=/r/cloudera/sqoop_import

   Please set $ACCUMULO_HOME to the root of your Accumulo installation. 
      17/04/23 15:29:27 INFO sqoop.Sqoop: Running Sqoop version: 1.4.6-cdh5.8.0 
      17/04/23 15:29:27 WARN tool.BaseSqoopTool: Setting your password on the command-line is insecure. Consider using -P instead. 
      17/04/23 15:29:27 INFO manager.MySQLManager: Preparing to use a MySQL streaming resultset. 
      17/04/23 15:29:27 INFO tool.CodeGenTool: Beginning code generation 
      17/04/23 15:29:27 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `categories` AS t LIMIT 1 
      17/04/23 15:29:27 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `categories` AS t LIMIT 1 
      17/04/23 15:29:27 INFO orm.CompilationManager: HADOOP_MAPRED_HOME is /usr/lib/hadoop-mapreduce 
      Note: /tmp/sqoop-cloudera/compile/e8e72a2e112fced2b0f3251b5666473d/categories.java uses or overrides a deprecated API. 
      Note: Recompile with -Xlint:deprecation for details. 
      17/04/23 15:29:30 INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-cloudera/compile/e8e72a2e112fced2b0f3251b5666473d/categories.jar 
      17/04/23 15:29:30 WARN manager.MySQLManager: It looks like you are importing from mysql. 
      17/04/23 15:29:30 WARN manager.MySQLManager: This transfer can be faster! Use the --direct 
      17/04/23 15:29:30 WARN manager.MySQLManager: option to exercise a MySQL-specific fast path. 
      17/04/23 15:29:30 INFO manager.MySQLManager: Setting zero DATETIME behavior to convertToNull (mysql) 
      17/04/23 15:29:30 INFO mapreduce.ImportJobBase: Beginning import of categories 
      17/04/23 15:29:31 INFO Configuration.deprecation: mapred.jar is deprecated. Instead, use mapreduce.job.jar 
      17/04/23 15:29:32 INFO Configuration.deprecation: mapred.map.tasks is deprecated. Instead, use mapreduce.job.maps 
      17/04/23 15:29:32 INFO client.RMProxy: Connecting to ResourceManager at quickstart.cloudera/192.168.40.134:8032 
      17/04/23 15:29:37 INFO db.DBInputFormat: Using read commited transaction isolation 
      17/04/23 15:29:37 INFO db.DataDrivenDBInputFormat: BoundingValsQuery: SELECT MIN(`category_id`), MAX(`category_id`) FROM `categories` 
      17/04/23 15:29:37 INFO db.IntegerSplitter: Split size: 4; Num splits: 12 from: 1 to: 58 
      17/04/23 15:29:38 INFO mapreduce.JobSubmitter: number of splits:12 
      17/04/23 15:29:38 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1492945339848_0010 
      17/04/23 15:29:39 INFO impl.YarnClientImpl: Submitted application application_1492945339848_0010 
      17/04/23 15:29:39 INFO mapreduce.Job: The url to track the job: http://quickstart.cloudera:8088/proxy/application_1492945339848_0010/ 
      17/04/23 15:29:39 INFO mapreduce.Job: Running job: job_1492945339848_0010 
      17/04/23 15:29:52 INFO mapreduce.Job: Job job_1492945339848_0010 running in uber mode : false 
      17/04/23 15:29:52 INFO mapreduce.Job: map 0% reduce 0% 
      17/04/23 15:29:52 INFO mapreduce.Job: Job job_1492945339848_0010 failed with state FAILED due to: Application application_1492945339848_0010 failed 2 times due to AM Container for appattempt_1492945339848_0010_000002 exited with exitCode: 1 
      For more detailed output, check application tracking page:http://quickstart.cloudera:8088/proxy/application_1492945339848_0010/Then, click on links to logs of each attempt. 
      Diagnostics: Exception from container-launch. 
      Container id: container_1492945339848_0010_02_000001 
      Exit code: 1 
      Stack trace: ExitCodeException exitCode=1: 
       at org.apache.hadoop.util.Shell.runCommand(Shell.java:578) 
       at org.apache.hadoop.util.Shell.run(Shell.java:481) 
       at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:763) 
       at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:213) 
       at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302) 
       at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82) 
       at java.util.concurrent.FutureTask.run(FutureTask.java:262) 
       at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) 
       at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) 
       at java.lang.Thread.run(Thread.java:745) 


      Container exited with a non-zero exit code 1 
      Failing this attempt. Failing the application. 
      17/04/23 15:29:52 INFO mapreduce.Job: Counters: 0 
      17/04/23 15:29:52 WARN mapreduce.Counters: Group FileSystemCounters is deprecated. Use org.apache.hadoop.mapreduce.FileSystemCounter instead 
      17/04/23 15:29:52 INFO mapreduce.ImportJobBase: Transferred 0 bytes in 19.6175 seconds (0 bytes/sec) 
      17/04/23 15:29:52 WARN mapreduce.Counters: Group org.apache.hadoop.mapred.Task$Counter is deprecated. Use org.apache.hadoop.mapreduce.TaskCounter instead 
      17/04/23 15:29:52 INFO mapreduce.ImportJobBase: Retrieved 0 records. 
      17/04/23 15:29:52 ERROR tool.ImportAllTablesTool: Error during import: Import job failed! 
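For context, the "Split size: 4; Num splits: 12 from: 1 to: 58" line in the log comes from Sqoop dividing the primary-key range MIN(category_id)=1 .. MAX(category_id)=58 across the 12 requested mappers. A simplified sketch of that arithmetic (not Sqoop's actual IntegerSplitter code, which distributes the remainder differently):

```python
def split_ranges(lo, hi, n):
    """Divide the integer key range [lo, hi] into n contiguous sub-ranges,
    one per mapper. Approximates Sqoop's IntegerSplitter."""
    size = max(1, (hi - lo) // n)  # 57 // 12 = 4, matching "Split size: 4" in the log
    points = [lo + i * size for i in range(n)] + [hi + 1]
    return [(points[i], points[i + 1] - 1) for i in range(n)]

# 12 ranges over category_id 1..58; each mapper runs one range as a separate task
print(split_ranges(1, 58, 12))
```

So the job itself planned fine; the failure happens later, when YARN tries to run those tasks and the application master container exits with code 1.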

Answers


It looks like your application master is being killed repeatedly, which means the containers cannot get as much memory as they need. If you are trying Sqoop on the Cloudera QuickStart VM with --connect "jdbc:mysql://quickstart.cloudera:3306/retail_db", do not use -m 12: that tries to launch 12 parallel map tasks, which a single-machine VM cannot handle. Leave the mapper count at its default, or try --direct instead. Also, what is going on with --warehouse-dir=/r/cloudera/sqoop_import? Is /r/ a typo? It should probably be /user/.

Try this instead:

sqoop import-all-tables \
--connect "jdbc:mysql://quickstart.cloudera:3306/retail_db" \
--warehouse-dir=/user/cloudera/sqoop_import \
--username=retail_dba \
--direct \
--password=cloudera

Try limiting your mappers when using import-all-tables: 12 mappers are straining the memory on the VM. Also, try importing a single table first instead of all tables at once.

sqoop import-all-tables \
--connect "jdbc:mysql://quickstart.cloudera:3306/retail_db" \
--warehouse-dir=/user/cloudera/sqoop_import \
--username=retail_dba \
--password=cloudera \
-m 2