Newbie asking again: HDFS won't format (tutorial setup)

HDFS formatting completed, but I can't access the filesystem. Please advise.
13/06/05 23:15:18 INFO common.Storage: Image file of size 96 saved in 0 seconds.
13/06/05 23:15:18 INFO common.Storage: Storage directory /home/hadoop/name/name1 has been successfully formatted.
The configuration is as follows:
[hadoop@localhost conf]$ more core-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>http://localhost:9000</value>
  </property>
</configuration>
[hadoop@localhost conf]$ cat hdfs-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
  <property>
    <name>dfs.data.dir</name>
    <value>/home/hadoop/date/date1</value>
  </property>
  <property>
    <name>dfs.name.dir</name>
    <value>/home/hadoop/name/name1</value>
  </property>
</configuration>
[hadoop@localhost conf]$ cat mapred-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>localhost:9001</value>
  </property>
  <property>
    <name>mapred.tasktracker.map.tasks.maximum</name>
    <value>8</value>
  </property>
  <property>
    <name>mapred.tasktracker.reduce.tasks.maximum</name>
    <value>6</value>
  </property>
</configuration>
Running hadoop fs -ls / gives the following:
[hadoop@localhost date]$ hadoop fs -ls /
Bad connection to FS. command aborted.
Last edited by xiaotianle2 at 15:11.
All the processes have started, so why can I still not access it?
hadoop fs version
[root@localhost software]# $JAVA_HOME/bin/jps
9862 NameNode
10126 JobTracker
10216 TaskTracker
9964 DataNode
fs.default.name must use hdfs://, not http:

<property>
    <name>fs.default.name</name>
    <value>http://localhost:9000</value>
</property>
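For reference, a corrected core-site.xml property (same host and port as the original post) would read:

```xml
<property>
    <name>fs.default.name</name>
    <!-- scheme must be hdfs://, matching the NameNode RPC address -->
    <value>hdfs://localhost:9000</value>
</property>
```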
Quoting wxxiaoyang:
fs.default.name must use hdfs://, not http:

Many thanks! That fixed it.
Why a Hadoop format fails: "Format aborted in /data0/hadoop-name"
[user6@das0 hadoop-0.20.203.0]$ bin/hadoop namenode
12/02/20 14:05:17 INFO namenode.NameNode: STARTUP_MSG:
Re-format filesystem in /data0/hadoop-name ? (Y or N) y
Format aborted in /data0/hadoop-name
12/02/20 14:05:20 INFO namenode.NameNode: SHUTDOWN_MSG:
I then started Hadoop and found that http://das0:5007 would not load.
So I deleted the entire /data0/hadoop-name directory and reformatted. Success!
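Note that the prompt above was answered with a lowercase y; in Hadoop releases of this era the re-format prompt only accepts an uppercase Y, so a lowercase answer aborts the format. Deleting the directory sidesteps the prompt entirely, which is why the approach above works. A sketch of the delete-and-reformat step, on a scratch path standing in for /data0/hadoop-name:

```shell
# the stale metadata directory is what triggers the re-format prompt;
# removing it lets the format run without prompting at all
NAME_DIR=/tmp/hadoop-name-demo        # stand-in for /data0/hadoop-name
mkdir -p "$NAME_DIR/current"          # pretend a previous format left metadata behind
rm -rf "$NAME_DIR"                    # delete the whole directory, as the post does
[ ! -d "$NAME_DIR" ] && echo "directory gone; bin/hadoop namenode -format can now proceed"
```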
[zhangpeng6@das0 hadoop-0.20.203.0]$ bin/hadoop namenode
12/02/20 14:09:57 INFO namenode.NameNode: STARTUP_MSG:
12/02/20 14:09:57 INFO util.GSet: VM type       = 64-bit
12/02/20 14:09:57 INFO util.GSet: 2% max memory = 177.77875 MB
12/02/20 14:09:57 INFO util.GSet: capacity      = 2^24 = 16777216 entries
12/02/20 14:09:57 INFO util.GSet: recommended=16777216, actual=16777216
12/02/20 14:09:57 INFO namenode.FSNamesystem: fsOwner=zhangpeng6
12/02/20 14:09:57 INFO namenode.FSNamesystem: supergroup=supergroup
12/02/20 14:09:57 INFO namenode.FSNamesystem: isPermissionEnabled=true
12/02/20 14:09:57 INFO namenode.FSNamesystem: dfs.block.invalidate.limit=100
12/02/20 14:09:57 INFO namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
12/02/20 14:09:57 INFO namenode.NameNode: Caching file names occuring more than 10 times
12/02/20 14:09:57 INFO common.Storage: Image file of size 116 saved in 0 seconds.
12/02/20 14:09:57 INFO common.Storage: Storage directory /data0/hadoop-name/namenode has been successfully formatted.
12/02/20 14:09:57 INFO namenode.NameNode: SHUTDOWN_MSG:
Summary:
Before formatting the namenode, make sure the directory specified by dfs.name.dir does not already exist.
Hadoop does this deliberately, to prevent an existing cluster from being formatted by mistake.
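The rule above can be sketched as a pre-format guard (a sketch only; `check_name_dir` is a hypothetical helper, and the path stands in for the post's /data0/hadoop-name):

```shell
# refuse to format when the target directory already holds NameNode metadata;
# a formatted dfs.name.dir always contains a 'current' subdirectory
check_name_dir() {
    if [ -d "$1/current" ]; then
        echo "already formatted: $1"
        return 1
    fi
    echo "ok to format: $1"
}

# demo on a scratch path
mkdir -p /tmp/name-check-demo/current
check_name_dir /tmp/name-check-demo || true   # reports "already formatted"
rm -rf /tmp/name-check-demo
check_name_dir /tmp/name-check-demo           # reports "ok to format"
```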
Hadoop reports the following problem when formatting the namenode (from an oschina.net thread):
Formatting still produces the output below:
hadoop@ubuntu:~$ hadoop namenode -format
12/10/24 16:51:41 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = ubuntu/127.0.1.1
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 1.1.0
STARTUP_MSG:   build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.1 -r 1394289; compiled by 'hortonfo' on Thu Oct  4 22:06:49 UTC 2012
************************************************************/
12/10/24 16:51:41 INFO util.GSet: VM type       = 32-bit
12/10/24 16:51:41 INFO util.GSet: 2% max memory = 19.33375 MB
12/10/24 16:51:41 INFO util.GSet: capacity      = 2^22 = 4194304 entries
12/10/24 16:51:41 INFO util.GSet: recommended=4194304, actual=4194304
12/10/24 16:51:41 INFO namenode.FSNamesystem: fsOwner=hadoop
12/10/24 16:51:41 INFO namenode.FSNamesystem: supergroup=supergroup
12/10/24 16:51:41 INFO namenode.FSNamesystem: isPermissionEnabled=true
12/10/24 16:51:41 INFO namenode.FSNamesystem: dfs.block.invalidate.limit=100
12/10/24 16:51:41 INFO namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
12/10/24 16:51:41 INFO namenode.NameNode: Caching file names occuring more than 10 times
12/10/24 16:51:41 ERROR namenode.NameNode: java.io.IOException: Cannot create directory /export/home/dfs/name/current
        at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.clearDirectory(Storage.java:294)
        at org.apache.hadoop.hdfs.server.namenode.FSImage.format(FSImage.java:1324)
        at org.apache.hadoop.hdfs.server.namenode.FSImage.format(FSImage.java:1343)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:1200)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1391)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1412)
12/10/24 16:51:41 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at ubuntu/127.0.1.1
************************************************************/
The fixes I found on Baidu are below, but they did not solve it:
Fix 1: give other users write permission on /home/hadoop:
    sudo chmod -R a+w /home/hadoop/tmp

Fix 2: point hadoop.tmp.dir at a path the user can write to, by editing core-site.xml:
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/tmp/hadoop-${user.name}</value>
  </property>
11 answers in total
It's probably a permissions problem. Try mkdir /export/home/dfs/name/current and see whether you can create the directory.
--- 4 comments ---
: The namenode wasn't initialized. After a restart and reformat, it still isn't initialized. (4 years ago)
: mkdir -p (4 years ago)
: mkdir: cannot create directory `/export/home/dfs/name/current': No such file or directory (4 years ago)
: OK, I'm off today; I'll try it at work tomorrow. Thanks! (4 years ago)
Quoting 矜兰's answer: It's probably a permissions problem. Try mkdir /export/home/dfs/name/current and see whether you can create the directory.

hadoop:/usr/local/hadoop/bin$ hadoop namenode -format
12/10/26 11:19:54 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = ubuntu/127.0.1.1
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 1.1.0
STARTUP_MSG:   build =
-r 1394289; compiled by 'hortonfo' on Thu Oct  4 22:06:49 UTC 2012
************************************************************/
Re-format filesystem in /export/home/dfs/name ? (Y or N) y
Format aborted in /export/home/dfs/name
12/10/26 11:19:56 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at ubuntu/127.0.1.1
************************************************************/
hadoop :/usr/local/hadoop/bin$ start-all.sh
starting namenode, logging to /usr/local/hadoop/libexec/../logs/hadoop-hadoop-namenode-ubuntu.out
localhost: starting datanode, logging to /usr/local/hadoop/libexec/../logs/hadoop-hadoop-datanode-ubuntu.out
localhost: starting secondarynamenode, logging to /usr/local/hadoop/libexec/../logs/hadoop-hadoop-secondarynamenode-ubuntu.out
starting jobtracker, logging to /usr/local/hadoop/libexec/../logs/hadoop-hadoop-jobtracker-ubuntu.out
localhost: starting tasktracker, logging to /usr/local/hadoop/libexec/../logs/hadoop-hadoop-tasktracker-ubuntu.out
hadoop :/usr/local/hadoop/bin$ jps
4343 JobTracker
4259 SecondaryNameNode
4561 TaskTracker
hadoop :/usr/local/hadoop/bin$
Try deleting dfs/name/current under /export/home.
Quoting 矜兰's answer: Try deleting dfs/name/current under /export/home.

After a restart and reformat:
hadoop:/usr/local/hadoop/bin$ stop-all.sh
stopping jobtracker
localhost: stopping tasktracker
no namenode to stop
localhost: no datanode to stop
localhost: stopping secondarynamenode
hadoop:/usr/local/hadoop/bin$ start-all.sh
starting namenode, logging to /usr/local/hadoop/libexec/../logs/hadoop-hadoop-namenode-ubuntu.out
localhost: starting datanode, logging to /usr/local/hadoop/libexec/../logs/hadoop-hadoop-datanode-ubuntu.out
localhost: starting secondarynamenode, logging to /usr/local/hadoop/libexec/../logs/hadoop-hadoop-secondarynamenode-ubuntu.out
starting jobtracker, logging to /usr/local/hadoop/libexec/../logs/hadoop-hadoop-jobtracker-ubuntu.out
localhost: starting tasktracker, logging to /usr/local/hadoop/libexec/../logs/hadoop-hadoop-tasktracker-ubuntu.out
hadoop:/usr/local/hadoop/bin$ jps
6740 JobTracker
6998 Jps
6656 SecondaryNameNode
6958 TaskTracker
hadoop:/usr/local/hadoop/bin$ hadoop namenode -format
12/10/26 11:46:26 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = ubuntu/127.0.1.1
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 1.1.0
STARTUP_MSG:   build =
-r 1394289; compiled by 'hortonfo' on Thu Oct  4 22:06:49 UTC 2012
************************************************************/
Re-format filesystem in /export/home/dfs/name ? (Y or N) y
Format aborted in /export/home/dfs/name
12/10/26 11:46:28 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at ubuntu/127.0.1.1
************************************************************/

The namenode still doesn't start...
I ran into the same problem as you before.
sudo chmod -R 775 your_datanode_path
sudo chmod -R 755 your_namenode_path
Did you delete only current? You need to delete the whole dfs directory.
Quoting 震秦's answer: I ran into the same problem as you before.
sudo chmod -R 775 your_datanode_path
sudo chmod -R 755 your_namenode_path

Where do the datanode and namenode directories live?
core-site.xml:
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/data/hadoop/datastore</value>
  </property>

hdfs-site.xml:
  <property>
    <name>dfs.data.dir</name>
    <value>/data/hadoop/datanode</value>
  </property>

The datanode directory needs mode 775.
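Putting the two answers together, the permission fix would look something like the following (the modes come from the answers above; the scratch paths stand in for /data/hadoop/datanode and /data/hadoop/datastore, so no sudo is needed for the demo):

```shell
# apply the suggested modes: 775 on the datanode dir, 755 elsewhere
BASE=/tmp/hadoop-perms-demo
mkdir -p "$BASE/datanode" "$BASE/datastore"
chmod -R 775 "$BASE/datanode"     # datanode dir: 775, per the answer
chmod -R 755 "$BASE/datastore"
stat -c '%a %n' "$BASE/datanode" "$BASE/datastore"
```

On the real paths you would additionally need sudo and a chown to the hadoop user, since /data is normally root-owned.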
--- 1 comment ---
: It turned out my config file was wrong; another user fixed it for me remotely, and honestly I didn't quite follow what he did... (4 years ago)
It's probably a permissions problem.
HDFS format on Hadoop 2.5.2 fails with "Cannot remove current directory" — what's going on?
When execution reaches bin/hdfs namenode -format, it fails as follows:
15/05/10 01:23:52 FATAL namenode.NameNode: Exception in namenode join
java.io.IOException: Cannot remove current directory: /home/niewj/hadoop2.5.2/dfs/name/current
        at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.clearDirectory(Storage.java:332)
        at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:546)
        at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:567)
        at org.apache.hadoop.hdfs.server.namenode.FSImage.format(FSImage.java:148)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:926)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1354)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1473)
15/05/10 01:23:52 INFO util.ExitUtil: Exiting with status 1

I searched and found this described as a permissions problem. I first changed ownership with chown -R niewj:niewj hadoop2.5.2/dfs/, which did not help, and then chown -R niewj:niewj /export/hadoop2.5.2/tmp (my tmp directory), which also did not help. Why?
It is now confirmed to be a permissions problem: with sudo ./hdfs namenode -format the format completes normally. What I want to know is which directory's permissions I should change so that the format also works without sudo.
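"Cannot remove current directory" means the user running the format cannot delete name/current; a common cause is that a previous sudo run left that tree owned by root, which then blocks every non-sudo attempt. A diagnostic sketch (the scratch path stands in for /home/niewj/hadoop2.5.2/dfs/name from the post):

```shell
# the format step must be able to delete and recreate name/current,
# so the current user needs to own the whole tree
NAME_DIR=/tmp/dfs-name-demo
mkdir -p "$NAME_DIR/current"
owner=$(stat -c '%U' "$NAME_DIR/current")
[ "$owner" = "$(id -un)" ] && echo "ownership ok: format should work without sudo"
# if the owner were root, the fix would be:
#   sudo chown -R niewj:niewj /home/niewj/hadoop2.5.2/dfs
```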
My user name and group are both niewj.