
Service status always shows "failed" on nodes installed with ceph-deploy

lz19851224 posted on 2016-7-20 15:48:52
systemctl restart  ceph-osd@osd-node3
[root@osd-node3 ~]# systemctl status -l ceph-osd@osd-node3
ceph-osd@osd-node3.service - Ceph object storage daemon
   Loaded: loaded (/usr/lib/systemd/system/ceph-osd@.service; enabled; vendor preset: disabled)
   Active: failed (Result: start-limit) since Wed 2016-07-20 15:23:56 CST; 708ms ago
  Process: 11140 ExecStart=/usr/bin/ceph-osd -f --cluster ${CLUSTER} --id %i --setuser ceph --setgroup ceph (code=exited, status=1/FAILURE)
  Process: 11099 ExecStartPre=/usr/lib/ceph/ceph-osd-prestart.sh --cluster ${CLUSTER} --id %i (code=exited, status=0/SUCCESS)
 Main PID: 11140 (code=exited, status=1/FAILURE)
Jul 20 15:23:56 osd-node3 systemd[1]: ceph-osd@osd-node3.service: main process exited, code=exited, status=1/FAILURE
Jul 20 15:23:56 osd-node3 systemd[1]: Unit ceph-osd@osd-node3.service entered failed state.
Jul 20 15:23:56 osd-node3 systemd[1]: ceph-osd@osd-node3.service failed.
Jul 20 15:23:56 osd-node3 systemd[1]: ceph-osd@osd-node3.service holdoff time over, scheduling restart.
Jul 20 15:23:56 osd-node3 systemd[1]: start request repeated too quickly for ceph-osd@osd-node3.service
Jul 20 15:23:56 osd-node3 systemd[1]: Failed to start Ceph object storage daemon.
Jul 20 15:23:56 osd-node3 systemd[1]: Unit ceph-osd@osd-node3.service entered failed state.
Jul 20 15:23:56 osd-node3 systemd[1]: ceph-osd@osd-node3.service failed.

        ceph osd tree
ID WEIGHT  TYPE NAME          UP/DOWN REWEIGHT PRIMARY-AFFINITY
-1 0.13889 root default                                         
-2 0.04630     host osd-node3                                   
0 0.04630         osd.0           up  1.00000          1.00000
-3 0.04630     host osd-node1                                   
1 0.04630         osd.1           up  1.00000          1.00000
-4 0.04630     host osd-node2                                   
 2 0.04630         osd.2           up  1.00000          1.00000
[root@osd-node3 ~]#

ceph osd stat
     osdmap e15: 3 osds: 3 up, 3 in
            flags sortbitwise
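One reading of the mismatch above (an assumption on my part, not confirmed in the thread): on systemd-based Jewel installs, the `ceph-osd@` template unit is instantiated with the numeric OSD id (e.g. `ceph-osd@0`), not the hostname, so `systemctl status ceph-osd@osd-node3` queries a unit that was never meant to exist — it passes `--id osd-node3` to the daemon, which exits with status 1, while the real `osd.0` keeps running and shows `up` in the tree. A minimal sketch deriving the correct unit names from the `ceph osd tree` rows above (sample rows inlined; on a live node you would pipe the command itself):

```shell
# Sample rows from `ceph osd tree` above: id, weight, name, state
osd_tree='0 0.04630 osd.0 up
1 0.04630 osd.1 up
2 0.04630 osd.2 up'

# Unit instance = the numeric id after "osd." (assumed Jewel systemd layout)
units=$(printf '%s\n' "$osd_tree" | awk '{sub(/^osd\./, "", $3); print "ceph-osd@" $3}')
printf '%s\n' "$units"
```

With that, `systemctl status ceph-osd@0` (and `journalctl -u ceph-osd@0`) on osd-node3 would be the units worth checking, rather than `ceph-osd@osd-node3`.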

[root@mon-node2 ceph]# systemctl status -l ceph-mon@mon-node2
ceph-mon@mon-node2.service - Ceph cluster monitor daemon
   Loaded: loaded (/usr/lib/systemd/system/ceph-mon@.service; enabled; vendor preset: disabled)
   Active: failed (Result: start-limit) since Wed 2016-07-20 15:14:09 CST; 3min 54s ago
  Process: 8399 ExecStart=/usr/bin/ceph-mon -f --cluster ${CLUSTER} --id %i --setuser ceph --setgroup ceph (code=exited, status=1/FAILURE)
 Main PID: 8399 (code=exited, status=1/FAILURE)
Jul 20 15:14:09 mon-node2 systemd[1]: ceph-mon@mon-node2.service: main process exited, code=exited, status=1/FAILURE
Jul 20 15:14:09 mon-node2 systemd[1]: Unit ceph-mon@mon-node2.service entered failed state.
Jul 20 15:14:09 mon-node2 systemd[1]: ceph-mon@mon-node2.service failed.
Jul 20 15:14:09 mon-node2 systemd[1]: ceph-mon@mon-node2.service holdoff time over, scheduling restart.
Jul 20 15:14:09 mon-node2 systemd[1]: start request repeated too quickly for ceph-mon@mon-node2.service
Jul 20 15:14:09 mon-node2 systemd[1]: Failed to start Ceph cluster monitor daemon.
Jul 20 15:14:09 mon-node2 systemd[1]: Unit ceph-mon@mon-node2.service entered failed state.
Jul 20 15:14:09 mon-node2 systemd[1]: ceph-mon@mon-node2.service failed.


[root@mon-node2 ceph]# ceph -s
    cluster fafcdcaa-48ff-460e-bd41-36bd013b6529
     health HEALTH_OK
     monmap e7: 3 mons at {mon-node1=172.16.1.172:6789/0,mon-node2=172.16.1.173:6789/0,mon-node3=172.16.1.174:6789/0}
            election epoch 26, quorum 0,1,2 mon-node1,mon-node2,mon-node3
     osdmap e15: 3 osds: 3 up, 3 in
            flags sortbitwise
      pgmap v114: 64 pgs, 1 pools, 0 bytes data, 0 objects
            18875 MB used, 123 GB / 142 GB avail
                  64 active+clean
Could someone take a look at what is going on here? Also, when the monitor nodes' clocks are out of sync and I edit the config file and try to restart the service, it will not start. The version I installed:

[root@mon-node2 ceph]# ceph --version
ceph version 10.2.2 (45107e21c568dd033c2f0a3107dec8f0b0e58374)

Replies (2)

nextuser posted on 2016-7-20 15:59:19
First of all, the cluster's clocks must be synchronized — that is mandatory for any cluster; otherwise the cluster will misbehave. Beyond that, all I can see here is that it failed; nothing else stands out from what you posted, so please provide more logs.
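As a rough sketch of that clock check (the timestamps below are hypothetical samples; Ceph's monitors warn once drift exceeds `mon_clock_drift_allowed`, 0.05 s by default):

```shell
# Hypothetical millisecond timestamps collected from two mon nodes,
# e.g. via: ssh mon-nodeN 'date +%s%3N'  (GNU date)
t1=1468998236000
t2=1468998236120

skew=$((t2 - t1))   # difference between the two clocks, in ms
limit=50            # mon_clock_drift_allowed default (0.05 s), in ms
if [ "$skew" -gt "$limit" ]; then
    echo "clock skew ${skew}ms exceeds ${limit}ms"
fi
```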

lz19851224 posted on 2016-7-21 14:53:57
The clocks are synchronized. What happened is that right after running `ceph-deploy mon add mon-node2 mon-node3`, I checked the status and it showed failed — yet when I then ran `ceph -s`, the cluster already showed three monitor nodes.
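Since `ceph -s` already lists all three monitors in quorum, a more direct health check than `systemctl status` right after `ceph-deploy mon add` is to inspect the quorum itself — the real command would be `ceph quorum_status --format json`. A sketch against a sample of that JSON, matching the monmap shown earlier in the thread (crude grep-based parsing, for illustration only):

```shell
# Sample quorum_status fragment matching the monmap from `ceph -s` above
quorum='{"quorum_names":["mon-node1","mon-node2","mon-node3"]}'

# Count the monitors reported in quorum
in_quorum=$(printf '%s' "$quorum" | grep -o 'mon-node[0-9]*' | wc -l | tr -d ' ')
echo "monitors in quorum: $in_quorum"
```

If that count matches the number of deployed monitors, the cluster side is fine and the `failed` unit is likely a naming or duplicate-start issue rather than a broken monitor.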
