While following the setup steps for Apache Spark data analysis on IBM Bluemix, I run into a `set_hadoop_config` error.
Running

    credentials['name'] = 'keystone'
    set_hadoop_config(credentials)

with the function defined as
    def set_hadoop_config(credentials):
        prefix = "fs.swift.service." + credentials['name']
        hconf = sc._jsc.hadoopConfiguration()
        hconf.set(prefix + ".auth.url", credentials['auth_url'] + '/v3/auth/tokens')
        hconf.set(prefix + ".auth.endpoint.prefix", "endpoints")
        hconf.set(prefix + ".tenant", credentials['project_id'])
        hconf.set(prefix + ".username", credentials['user_id'])
        hconf.set(prefix + ".password", credentials['password'])
        hconf.setInt(prefix + ".http.port", 8080)
        hconf.set(prefix + ".region", credentials['region'])
        hconf.setBoolean(prefix + ".public", True)
raises the following error:
    ---------------------------------------------------------------------------
    NameError                                 Traceback (most recent call last)
    <ipython-input-6-976c35e1d85e> in <module>()
    ----> 1 credentials['name'] = 'keystone'
          2 set_hadoop_config(credentials)

    NameError: name 'credentials' is not defined
Does anyone know how to solve this problem? I'm stuck.
Thank you! You helped me solve this problem. –
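For the record, the `NameError` means the `credentials` dict was never created in this notebook session before the assignment. In Bluemix notebooks it is normally generated by the data source's "Insert to code" helper; a minimal sketch of defining it by hand first (all values below are hypothetical placeholders, not real service credentials) looks like this:

```python
# Define the credentials dict BEFORE referencing it.
# These values are placeholders; in practice they come from the
# Object Storage service credentials on Bluemix.
credentials = {
    'auth_url': 'https://identity.open.softlayer.com',  # placeholder endpoint
    'project_id': 'PROJECT_ID',   # placeholder
    'user_id': 'USER_ID',         # placeholder
    'password': 'PASSWORD',       # placeholder
    'region': 'dallas',           # placeholder
}

# Only once the dict exists do the original cells run without a NameError:
credentials['name'] = 'keystone'
# set_hadoop_config(credentials)  # requires a live SparkContext (sc)
```

The key point is simply that the cell producing `credentials` must be executed (or re-executed after a kernel restart) before the cell that assigns `credentials['name']`.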