
Cloudera Manager YARN and Spark UI not working

I have installed CDH 5.5.2 and everything looks fine from Cloudera Manager, until I click the Spark HistoryServer UI link or the YARN History Server UI link: they do not work. By "not working" I mean they cannot be reached from the browser at all.

I am not able to start the service with the command

sudo service spark-history-server start 
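To see why the init script refuses to start the role, the status output and the role log are usually the first place to look. A rough check (the log directory below is the usual CDH package default and is an assumption; it may differ on this install):

# Check whether the init script thinks the role is running.
sudo service spark-history-server status
# Assumption: package-based CDH installs keep History Server logs under /var/log/spark.
sudo ls -lt /var/log/spark/
sudo tail -n 50 /var/log/spark/*.out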

I have added the following lines to the file spark-defaults.conf:

spark.eventLog.dir=hdfs://name-node-1:8020/user/spark/applicationHistory 
spark.eventLog.enabled=true 
spark.yarn.historyServer.address=http://name-node-1:18088 

In Cloudera Manager -> Spark -> History Server, the role shows as running on name-node-1 and can be started from Cloudera Manager.
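Since the role shows as running, a simple way to separate a service problem from a network problem is to request the UI locally on name-node-1 and then from the machine running the browser. A rough sketch using the ports from the configuration above (19888 for the YARN JobHistory UI is the usual default and an assumption here):

# Run on name-node-1 itself, then again from the machine running the browser.
curl -sI http://name-node-1:18088
# Assumption: YARN JobHistory UI on its default port 19888.
curl -sI http://name-node-1:19888
# If these answer locally but time out from the browser machine, the problem is
# the network path (e.g. a firewall), not the History Server itself.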

Here is the output from the Spark, YARN, HDFS, SCM and Cloudera Manager logs:

name-node-1 INFO May 24, 2016 10:29 PM JobHistory 
Starting scan to move intermediate done files 
name-node-1 INFO May 24, 2016 10:29 PM StateChange 
BLOCK* allocateBlock: /tmp/.cloudera_health_monitoring_canary_files/.canary_file_2016_05_24-22_29_59. BP-1451272641-10.128.0.2-1459245660194 blk_1073747330_6799{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[[DISK]DS-50e1e66a-ef5c-469e-ba0e-df1c259cbbae:NORMAL:10.128.0.3:50010|RBW], ReplicaUnderConstruction[[DISK]DS-4cacdc34-99a8-4d21-8744-40b5f5bd9919:NORMAL:10.128.0.4:50010|RBW], ReplicaUnderConstruction[[DISK]DS-09b4e549-2fcd-4ee4-8ccd-e5c15bdb3d7d:NORMAL:10.128.0.5:50010|RBW]]} 
data-node-1 INFO May 24, 2016 10:29 PM DataNode  
Receiving BP-1451272641-10.128.0.2-1459245660194:blk_1073747330_6799 src: /10.128.0.2:38325 dest: /10.128.0.3:50010 
data-node-2 INFO May 24, 2016 10:29 PM DataNode  
Receiving BP-1451272641-10.128.0.2-1459245660194:blk_1073747330_6799 src: /10.128.0.3:49410 dest: /10.128.0.4:50010 
data-node-3 INFO May 24, 2016 10:29 PM DataNode  
Receiving BP-1451272641-10.128.0.2-1459245660194:blk_1073747330_6799 src: /10.128.0.4:53572 dest: /10.128.0.5:50010 
data-node-3 INFO May 24, 2016 10:29 PM DataNode  
PacketResponder: BP-1451272641-10.128.0.2-1459245660194:blk_1073747330_6799, type=LAST_IN_PIPELINE, downstreams=0:[] terminating 
data-node-3 INFO May 24, 2016 10:29 PM clienttrace 
src: /10.128.0.4:53572, dest: /10.128.0.5:50010, bytes: 56, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_375545611_68, offset: 0, srvID: 2690c629-9322-4b95-b70e-20270682fe5e, blockid: BP-1451272641-10.128.0.2-1459245660194:blk_1073747330_6799, duration: 8712883 
data-node-2 INFO May 24, 2016 10:29 PM clienttrace 
src: /10.128.0.3:49410, dest: /10.128.0.4:50010, bytes: 56, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_375545611_68, offset: 0, srvID: 9a9d8417-9b4e-482b-80c8-133eeb679c68, blockid: BP-1451272641-10.128.0.2-1459245660194:blk_1073747330_6799, duration: 9771398 
name-node-1 INFO May 24, 2016 10:29 PM BlockStateChange  
BLOCK* addStoredBlock: blockMap updated: 10.128.0.5:50010 is added to blk_1073747330_6799{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[[DISK]DS-50e1e66a-ef5c-469e-ba0e-df1c259cbbae:NORMAL:10.128.0.3:50010|RBW], ReplicaUnderConstruction[[DISK]DS-4cacdc34-99a8-4d21-8744-40b5f5bd9919:NORMAL:10.128.0.4:50010|RBW], ReplicaUnderConstruction[[DISK]DS-09b4e549-2fcd-4ee4-8ccd-e5c15bdb3d7d:NORMAL:10.128.0.5:50010|RBW]]} size 0 
data-node-2 INFO May 24, 2016 10:29 PM DataNode  
PacketResponder: BP-1451272641-10.128.0.2-1459245660194:blk_1073747330_6799, type=HAS_DOWNSTREAM_IN_PIPELINE terminating 
data-node-1 INFO May 24, 2016 10:29 PM DataNode  
PacketResponder: BP-1451272641-10.128.0.2-1459245660194:blk_1073747330_6799, type=HAS_DOWNSTREAM_IN_PIPELINE terminating 
data-node-1 INFO May 24, 2016 10:29 PM clienttrace 
src: /10.128.0.2:38325, dest: /10.128.0.3:50010, bytes: 56, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_375545611_68, offset: 0, srvID: a5a064ce-0710-462a-b8b2-489493fd7d8f, blockid: BP-1451272641-10.128.0.2-1459245660194:blk_1073747330_6799, duration: 10857807 
name-node-1 INFO May 24, 2016 10:29 PM BlockStateChange  
BLOCK* addStoredBlock: blockMap updated: 10.128.0.4:50010 is added to blk_1073747330_6799{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[[DISK]DS-50e1e66a-ef5c-469e-ba0e-df1c259cbbae:NORMAL:10.128.0.3:50010|RBW], ReplicaUnderConstruction[[DISK]DS-4cacdc34-99a8-4d21-8744-40b5f5bd9919:NORMAL:10.128.0.4:50010|RBW], ReplicaUnderConstruction[[DISK]DS-09b4e549-2fcd-4ee4-8ccd-e5c15bdb3d7d:NORMAL:10.128.0.5:50010|RBW]]} size 0 
name-node-1 INFO May 24, 2016 10:29 PM BlockStateChange  
BLOCK* addStoredBlock: blockMap updated: 10.128.0.3:50010 is added to blk_1073747330_6799{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[[DISK]DS-50e1e66a-ef5c-469e-ba0e-df1c259cbbae:NORMAL:10.128.0.3:50010|RBW], ReplicaUnderConstruction[[DISK]DS-4cacdc34-99a8-4d21-8744-40b5f5bd9919:NORMAL:10.128.0.4:50010|RBW], ReplicaUnderConstruction[[DISK]DS-09b4e549-2fcd-4ee4-8ccd-e5c15bdb3d7d:NORMAL:10.128.0.5:50010|RBW]]} size 0 
name-node-1 INFO May 24, 2016 10:29 PM StateChange 
DIR* completeFile: /tmp/.cloudera_health_monitoring_canary_files/.canary_file_2016_05_24-22_29_59 is closed by DFSClient_NONMAPREDUCE_375545611_68 
name-node-1 INFO May 24, 2016 10:29 PM BlockStateChange  
BLOCK* addToInvalidates: blk_1073747330_6799 10.128.0.3:50010 10.128.0.4:50010 10.128.0.5:50010 
name-node-1 INFO May 24, 2016 10:30 PM BlockStateChange  
BLOCK* BlockManager: ask 10.128.0.5:50010 to delete [blk_1073747330_6799] 
data-node-3 INFO May 24, 2016 10:30 PM FsDatasetAsyncDiskService 
Deleted BP-1451272641-10.128.0.2-1459245660194 blk_1073747330_6799 file /data/data01/dfs/dn/current/BP-1451272641-10.128.0.2-1459245660194/current/finalized/subdir0/subdir21/blk_1073747330 
data-node-3 INFO May 24, 2016 10:30 PM FsDatasetAsyncDiskService 
Scheduling blk_1073747330_6799 file /data/data01/dfs/dn/current/BP-1451272641-10.128.0.2-1459245660194/current/finalized/subdir0/subdir21/blk_1073747330 for deletion 
name-node-1 INFO May 24, 2016 10:30 PM BlockStateChange  
BLOCK* BlockManager: ask 10.128.0.4:50010 to delete [blk_1073747330_6799] 
data-node-2 INFO May 24, 2016 10:30 PM FsDatasetAsyncDiskService 
Scheduling blk_1073747330_6799 file /data/data01/dfs/dn/current/BP-1451272641-10.128.0.2-1459245660194/current/finalized/subdir0/subdir21/blk_1073747330 for deletion 
data-node-2 INFO May 24, 2016 10:30 PM FsDatasetAsyncDiskService 
Deleted BP-1451272641-10.128.0.2-1459245660194 blk_1073747330_6799 file /data/data01/dfs/dn/current/BP-1451272641-10.128.0.2-1459245660194/current/finalized/subdir0/subdir21/blk_1073747330 
name-node-1 INFO May 24, 2016 10:30 PM BlockStateChange  
BLOCK* BlockManager: ask 10.128.0.3:50010 to delete [blk_1073747330_6799] 
data-node-1 INFO May 24, 2016 10:30 PM FsDatasetAsyncDiskService 
Deleted BP-1451272641-10.128.0.2-1459245660194 blk_1073747330_6799 file /data/data01/dfs/dn/current/BP-1451272641-10.128.0.2-1459245660194/current/finalized/subdir0/subdir21/blk_1073747330 
data-node-1 INFO May 24, 2016 10:30 PM FsDatasetAsyncDiskService 
Scheduling blk_1073747330_6799 file /data/data01/dfs/dn/current/BP-1451272641-10.128.0.2-1459245660194/current/finalized/subdir0/subdir21/blk_1073747330 for deletion 
name-node-1 INFO May 24, 2016 10:30 PM FsHistoryProvider 
Replaying log path: hdfs://name-node-1:8020/user/spark/applicationHistory/application_1464057137814_0006 
name-node-1 INFO May 24, 2016 10:30 PM LDBTimeSeriesRollupManager 
Running the LDBTimeSeriesRollupManager at 2016-05-24T22:30:15.155Z, forMigratedData=false 
name-node-1 INFO May 24, 2016 10:30 PM LDBTimeSeriesRollupManager 
Starting rollup from raw to rollup=TEN_MINUTELY for rollupTimestamp=2016-05-24T22:30:00.000Z 
name-node-1 INFO May 24, 2016 10:30 PM LDBTimeSeriesRollupManager 
Finished rollup: duration=PT0.729S, numStreamsChecked=38563, numStreamsRolledUp=1295 
name-node-1 INFO May 24, 2016 10:30 PM FsHistoryProvider 
Replaying log path: hdfs://name-node-1:8020/user/spark/applicationHistory/application_1464057137814_0006 
name-node-1 INFO May 24, 2016 10:30 PM LDBTimeSeriesRollupManager 
Running the LDBTimeSeriesRollupManager at 2016-05-24T22:30:19.235Z, forMigratedData=false 
name-node-1 INFO May 24, 2016 10:30 PM LDBTimeSeriesRollupManager 
Starting rollup from raw to rollup=TEN_MINUTELY for rollupTimestamp=2016-05-24T22:30:00.000Z 
name-node-1 INFO May 24, 2016 10:30 PM CacheReplicationMonitor 
Rescanning after 30000 milliseconds 
name-node-1 INFO May 24, 2016 10:30 PM CacheReplicationMonitor 
Scanned 0 directive(s) and 0 block(s) in 2 millisecond(s). 
name-node-1 INFO May 24, 2016 10:30 PM LDBTimeSeriesRollupManager 
Finished rollup: duration=PT5.328S, numStreamsChecked=63547, numStreamsRolledUp=23639 
name-node-1 INFO May 24, 2016 10:30 PM metastore 
Opened a connection to metastore, current connections: 1 
name-node-1 INFO May 24, 2016 10:30 PM metastore 
Trying to connect to metastore with URI thrift://name-node-1:9083 
name-node-1 INFO May 24, 2016 10:30 PM metastore 
Connected to metastore. 
name-node-1 INFO May 24, 2016 10:30 PM metastore 
Closed a connection to metastore, current connections: 0 
name-node-1 INFO May 24, 2016 10:30 PM SearcherManager 
Warming up the FieldCache 
name-node-1 INFO May 24, 2016 10:30 PM SearcherManager 
FieldCache built for 192 docs using 0.00 MB of space. 
name-node-1 INFO May 24, 2016 10:30 PM FsHistoryProvider 
Replaying log path: hdfs://name-node-1:8020/user/spark/applicationHistory/application_1464057137814_0006 
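The FsHistoryProvider entries above suggest the History Server process is up and replaying event logs, so it is also worth confirming on name-node-1 that the web UI ports are actually bound and listening. A minimal sketch (ports taken from the configuration above plus the usual YARN defaults):

# Run on name-node-1: list listening TCP sockets for the Spark/YARN web UI ports.
sudo netstat -tlnp | grep -E ':(18088|19888|8088)'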

Please update the post with the logs – BruceWayne


Hi, thanks. Which logs do I need to get, and do I need to create firewall rules for that? The Spark one, the SCM one, or maybe YARN? –


Sorry, the Spark logs – BruceWayne

Answer


This was an issue with my cluster being in the cloud. I had to open every single port:

gcloud compute firewall-rules create allow-http --description "Incoming http allowed." --allow tcp:80 --format json 
gcloud compute firewall-rules create allow-spark-ui --description "allow-spark-ui." --allow tcp:18088 --format json 
gcloud compute firewall-rules create allow-hue --description "allow-hue." --allow tcp:8888 --format json 
gcloud compute firewall-rules create allow-spark-rpc-master --description "allow-spark-rpc-master." --allow tcp:7077 --format json 
gcloud compute firewall-rules create allow-spark-rpc-worker --description "allow-spark-rpc-worker." --allow tcp:7078 --format json 
gcloud compute firewall-rules create allow-spark-webui-master --description "allow-spark-webui-master." --allow tcp:18080 --format json 
gcloud compute firewall-rules create allow-spark-webui-worker --description "allow-spark-webui-worker." --allow tcp:18081 --format json 
gcloud compute firewall-rules create allow-yarn-resourcemanager-address --description "yarn-resourcemanager-address." --allow tcp:8032 --format json 
gcloud compute firewall-rules create allow-yarn-resourcemanager-scheduler-address --description "yarn-resourcemanager-scheduler-address." --allow tcp:8030 --format json 
gcloud compute firewall-rules create allow-yarn-resourcemanager-resource-tracker-address --description "allow-yarn-resourcemanager-resource-tracker-address." --allow tcp:8031 --format json 
gcloud compute firewall-rules create allow-yarn-resourcemanager-admin-address --description "allow-yarn-resourcemanager-admin-address." --allow tcp:8033 --format json 
gcloud compute firewall-rules create allow-yarn-resourcemanager-webapp-address --description "allow-yarn-resourcemanager-webapp-address." --allow tcp:8088 --format json 
gcloud compute firewall-rules create allow-yarn-resourcemanager-webapp-https-address --description "allow-yarn-resourcemanager-webapp-https-address." --allow tcp:8090 --format json 
gcloud compute firewall-rules create allow-yarn-historyserver --description "allow-yarn-historyserver." --allow tcp:19888 --format json 
gcloud compute firewall-rules create allow-oozie-webui --description "Allow Oozie Web UI." --allow tcp:11000 --format json 
gcloud compute firewall-rules create zeppelin-webui --description "Zeppelin UI." --allow tcp:8080 --format json 
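For reference, the same result can be had with a single consolidated rule instead of one rule per port; the rule name and the combined port list below are illustrative assumptions, not part of the original setup:

# One firewall rule covering the web UI and RPC ports opened above.
gcloud compute firewall-rules create allow-cdh-web-uis --description "CDH, Spark and YARN web UIs and RPC ports." --allow tcp:80,tcp:7077,tcp:7078,tcp:8030,tcp:8031,tcp:8032,tcp:8033,tcp:8080,tcp:8088,tcp:8090,tcp:8888,tcp:11000,tcp:18080,tcp:18081,tcp:18088,tcp:19888 --format json

Restricting such a rule with --source-ranges to the addresses you actually browse from would be tighter than exposing these ports broadly.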