This post was last edited by nettman on 2014-7-18 21:52.
Questions addressed:
1. How do you set up a local package mirror for an RDO install of OpenStack Icehouse?
2. How do you run the RDO install?
3. How do you check OpenStack's running status?
OpenStack releases a new version every six months; Icehouse is the latest, offering more features and driver support than Havana. This article documents a deployment using Red Hat's RDO scripts.
RDO deployment is quick and convenient, but the yum repositories it uses are hosted overseas, so a direct install frequently fails on rpm downloads. It is therefore advisable to mirror the repositories locally before deploying and point DNS at the mirror. (Note: you cannot simply edit the repo files in place, because during a multi-node deployment RDO automatically installs the epel, foreman, and other repo files itself, leaving no chance to edit them by hand.)
This article deploys two nodes: node01 acts as the control node (identity, networking, compute scheduling, Cinder, image service, etc.) plus a compute node; node02 is a pure additional compute node. See the OpenStack official site for the detailed concepts; they are not repeated here.
1. System environment
Two nodes:
node01.linuxfly.org
eth0: 192.168.48.213
eth1: 10.0.48.213
node02.linuxfly.org
eth0: 192.168.48.214
eth1: 10.0.48.214
eth0 serves as the management NIC and the external-network NIC; eth1 carries the GRE tunnel and is the data path between the two nodes.
2. Configure a local package mirror
Use the local yum mirror on 192.168.86.37, adjusting it to match what the new script actually fetches.
Before installing, make sure rdo.fedorapeople.org resolves to 192.168.86.37:
[root@gd2-cloud-037 ~]# vi /var/named/fedorapeople.org.master.zone
$TTL 1D
@ IN SOA root.repos.fedorapeople.org. repos.fedorapeople.org. (
0 ; serial
1D ; refresh
1H ; retry
1W ; expire
3H ) ; minimum
NS repos.fedorapeople.org.
repos IN A 192.168.86.37
rdo IN A 192.168.86.37
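If running a dedicated DNS zone is not an option, the same redirect can be achieved with /etc/hosts entries on each node (the same trick this article later uses for yum.theforeman.org). A minimal sketch, using this setup's mirror IP 192.168.86.37; a temp file stands in for /etc/hosts so the sketch is safe to run anywhere:

```shell
# On a real node this would append to /etc/hosts; a temp file is used here for illustration.
HOSTS=$(mktemp)
cat >> "$HOSTS" <<'EOF'
192.168.86.37 rdo.fedorapeople.org
192.168.86.37 repos.fedorapeople.org
EOF
cat "$HOSTS"
```

This bypasses DNS entirely, at the cost of having to maintain the entry on every node.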
Test:
[root@node01 ~]# ping -c2 rdo.fedorapeople.org
PING rdo.fedorapeople.org (192.168.86.37) 56(84) bytes of data.
64 bytes from gd2-cloud-037.vclound.com (192.168.86.37): icmp_seq=1 ttl=61 time=0.331 ms
64 bytes from gd2-cloud-037.vclound.com (192.168.86.37): icmp_seq=2 ttl=61 time=0.354 ms
--- rdo.fedorapeople.org ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 0.331/0.342/0.354/0.021 ms
Also update the HTTP virtual-host configuration:
[root@gd2-cloud-037 ~]# vi /etc/httpd/conf.d/yum_vhost.conf
<VirtualHost *:80>
    ServerAdmin webmaster@vclound.com
    DocumentRoot /var/www/html/root/repos.fedorapeople.org/repos
    ServerName rdo.fedorapeople.org
    ErrorLog logs/rdo.fedorapeople.org-error_log
    CustomLog logs/rdo.fedorapeople.org-access_log common
</VirtualHost>
Otherwise the RDO install may fail with an error like:
2014-05-23 18:18:07::INFO::shell::78::root:: [192.168.48.214] Executing script:
(rpm -q 'rdo-release-icehouse' || yum install -y --nogpg http://rdo.fedorapeople.org/open ... ehouse-3.noarch.rpm) || true
2014-05-23 18:18:19::INFO::shell::78::root:: [192.168.48.214] Executing script:
yum-config-manager --enable openstack-icehouse
2014-05-23 18:18:19::ERROR::run_setup::892::root:: Traceback (most recent call last):
File "/usr/lib/python2.6/site-packages/packstack/installer/run_setup.py", line 887, in main
_main(confFile)
File "/usr/lib/python2.6/site-packages/packstack/installer/run_setup.py", line 574, in _main
runSequences()
File "/usr/lib/python2.6/site-packages/packstack/installer/run_setup.py", line 553, in runSequences
controller.runAllSequences()
File "/usr/lib/python2.6/site-packages/packstack/installer/setup_controller.py", line 84, in runAllSequences
sequence.run(self.CONF)
File "/usr/lib/python2.6/site-packages/packstack/installer/core/sequences.py", line 96, in run
step.run(config=config)
File "/usr/lib/python2.6/site-packages/packstack/installer/core/sequences.py", line 43, in run
raise SequenceError(str(ex))
SequenceError: Failed to set RDO repo on host 192.168.48.214:
RPM file seems to be installed, but appropriate repo file is probably missing in /etc/yum.repos.d/
2014-05-23 18:18:19::INFO::shell::78::root:: [192.168.48.213] Executing script:
rm -rf /var/tmp/packstack/0c97dceac80e41b081bc8316ae439d88
2014-05-23 18:18:19::INFO::shell::78::root:: [192.168.48.214] Executing script:
rm -rf /var/tmp/packstack/20a0e6a1d0ba4ab0a0c4490ba3dc9fce
Save the (currently empty) firewall rules so that the iptables service can start:
[root@node01 ~]# iptables -L -n
Chain INPUT (policy ACCEPT)
target prot opt source destination
Chain FORWARD (policy ACCEPT)
target prot opt source destination
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
[root@node01 ~]# service iptables save
iptables: Saving firewall rules to /etc/sysconfig/iptables: [ OK ]
[root@node02 ~]# iptables -L -n
Chain INPUT (policy ACCEPT)
target prot opt source destination
Chain FORWARD (policy ACCEPT)
target prot opt source destination
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
[root@node02 ~]# service iptables save
iptables: Saving firewall rules to /etc/sysconfig/iptables: [ OK ]
If this step is skipped, the RDO run may fail with:
ERROR : Error appeared during Puppet run: 192.168.48.214_prescript.pp
Error: Could not start Service[iptables]: Execution of '/sbin/service iptables start' returned 6:
3. Install the software
[root@node01 ~]# wget http://rdo.fedorapeople.org/open ... ehouse-3.noarch.rpm
[root@node01 ~]# rpm -ivh rdo-release-icehouse-3.noarch.rpm
[root@node01 ~]# cat /etc/yum.repos.d/rdo-release.repo
[openstack-icehouse]
name=OpenStack Icehouse Repository
baseurl=http://repos.fedorapeople.org/repos/openstack/openstack-icehouse/epel-6/
enabled=1
skip_if_unavailable=0
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-RDO-Icehouse
priority=98
Because the foreman repository has not been mirrored locally yet, add its hostname to /etc/hosts on both nodes:
[root@node01 ~]# echo '208.74.145.172 yum.theforeman.org' >> /etc/hosts
Otherwise you will get:
http://yum.theforeman.org/releas ... epodata/repomd.xml: [Errno 14] PYCURL ERROR 22 - "The requested URL returned error: 404 Not Found"
Install the RDO (packstack) script:
[root@node01 ~]# yum install -y openstack-packstack
Installed:
openstack-packstack.noarch 0:2014.1.1-0.12.dev1068.el6
Dependency Installed:
openstack-packstack-puppet.noarch 0:2014.1.1-0.12.dev1068.el6 openstack-puppet-modules.noarch 0:2014.1-11.1.el6 ruby.x86_64 0:1.8.7.352-13.el6 ruby-irb.x86_64 0:1.8.7.352-13.el6 ruby-libs.x86_64 0:1.8.7.352-13.el6
ruby-rdoc.x86_64 0:1.8.7.352-13.el6 rubygem-json.x86_64 0:1.5.5-1.el6 rubygems.noarch 0:1.3.7-5.el6
Complete!
Use /dev/sdb as the LVM backing store for Cinder:
[root@node01 ~]# pvcreate /dev/sdb
Physical volume "/dev/sdb" successfully created
[root@node01 ~]# vgcreate cinder-volumes /dev/sdb
Volume group "cinder-volumes" successfully created
[root@node01 ~]# vgdisplay
--- Volume group ---
VG Name cinder-volumes
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 1
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 0
Open LV 0
Max PV 0
Cur PV 1
Act PV 1
VG Size 100.00 GiB
PE Size 4.00 MiB
Total PE 25599
Alloc PE / Size 0 / 0
Free PE / Size 25599 / 100.00 GiB
VG UUID YkbC1M-UJuf-WXKS-se8W-yoZx-Y8JU-cvj9fN
Generate the answer file:
[root@node01 ~]# packstack --gen-answer-file=openstack-icehouse-test-20140523.txt
Edit the answer file:
Confirm which services to install, along with each service's database password, login password, and other settings.
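Edits like these can also be scripted with sed rather than made by hand. A sketch against a stand-in file, assuming your generated answer file contains these two standard packstack keys (the real file in this deployment is openstack-icehouse-test-20140523.txt):

```shell
# Illustrative only: operate on a stand-in copy, not the real answer file.
ANS=$(mktemp)
cat > "$ANS" <<'EOF'
CONFIG_KEYSTONE_ADMIN_PW=dummy
CONFIG_CINDER_VOLUMES_CREATE=y
EOF
# Set the admin password used throughout this deployment
sed -i 's/^CONFIG_KEYSTONE_ADMIN_PW=.*/CONFIG_KEYSTONE_ADMIN_PW=linuxfly/' "$ANS"
# The cinder-volumes VG was created manually above, so tell packstack not to create it
sed -i 's/^CONFIG_CINDER_VOLUMES_CREATE=.*/CONFIG_CINDER_VOLUMES_CREATE=n/' "$ANS"
cat "$ANS"
```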
The admin user's password is set via CONFIG_KEYSTONE_ADMIN_PW=linuxfly. After the install finishes, a credentials file is left in /root; source it to use the command-line clients (see the example at the end):
[root@node01 ~]# cat keystonerc_admin
export OS_USERNAME=admin
export OS_TENANT_NAME=admin
export OS_PASSWORD=linuxfly
export OS_AUTH_URL=http://192.168.48.213:5000/v2.0/
export PS1='[\u@\h \W(keystone_admin)]\$ '
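Sourcing this file simply exports the variables into the current shell, where the OpenStack command-line clients read them. A minimal sketch using a stand-in copy of the file above:

```shell
# Stand-in copy of keystonerc_admin, for illustration
RC=$(mktemp)
cat > "$RC" <<'EOF'
export OS_USERNAME=admin
export OS_TENANT_NAME=admin
export OS_PASSWORD=linuxfly
export OS_AUTH_URL=http://192.168.48.213:5000/v2.0/
EOF
source "$RC"
echo "$OS_USERNAME @ $OS_AUTH_URL"
```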
Run the RDO install:
4. Check OpenStack's running status
[root@node01 ~]# ovs-vsctl show
c4b035f0-98e4-4868-9e14-094ca5a952f4
Bridge br-tun
Port br-tun
Interface br-tun
type: internal
Port patch-int
Interface patch-int
type: patch
options: {peer=patch-tun}
Port "gre-0a0030d6"
Interface "gre-0a0030d6"
type: gre
options: {in_key=flow, local_ip="10.0.48.213", out_key=flow, remote_ip="10.0.48.214"}
Bridge br-int
Port br-int
Interface br-int
type: internal
Port patch-tun
Interface patch-tun
type: patch
options: {peer=patch-int}
ovs_version: "1.11.0"
[root@node02 ~]# ovs-vsctl show
6d99ed4b-2030-4e6f-bee0-9b782977b76e
Bridge br-tun
Port br-tun
Interface br-tun
type: internal
Port patch-int
Interface patch-int
type: patch
options: {peer=patch-tun}
Port "gre-0a0030d5"
Interface "gre-0a0030d5"
type: gre
options: {in_key=flow, local_ip="10.0.48.214", out_key=flow, remote_ip="10.0.48.213"}
Bridge br-int
Port patch-tun
Interface patch-tun
type: patch
options: {peer=patch-int}
Port br-int
Interface br-int
type: internal
ovs_version: "1.11.0"
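The key thing to verify in the output above is that each node's GRE port points at the other node's eth1 address. A quick filter, run here against a saved snippet of the node01 output for illustration (on a live node you would pipe `ovs-vsctl show` directly into the same grep):

```shell
# Saved snippet of `ovs-vsctl show` output from node01, for illustration
OUT=$(mktemp)
cat > "$OUT" <<'EOF'
Port "gre-0a0030d6"
    Interface "gre-0a0030d6"
        type: gre
        options: {in_key=flow, local_ip="10.0.48.213", out_key=flow, remote_ip="10.0.48.214"}
EOF
# Extract the tunnel endpoint; expect the peer node's eth1 address
grep -o 'remote_ip="[^"]*"' "$OUT"
```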
5. Adjust related parameters
1) Create the bridge port used by the external network
Without this bridge port, instances cannot reach the router gateway, the external network, or the metadata service, and l3-agent.log reports:
2014-05-24 01:43:42.096 3283 ERROR neutron.agent.l3_agent [req-5cbe551e-28e5-40d8-b3df-8def91cb5f81 None] The external network bridge 'br-ex' does not exist
Steps:
[root@node01 ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
BOOTPROTO=none
IPV6INIT=no
MTU=1500
ONBOOT=yes
HWADDR=00:50:56:81:9a:e1
USERCTL=no
[root@node01 ~]# cat /etc/sysconfig/network-scripts/ifcfg-br-ex
DEVICE=br-ex
BOOTPROTO=none
IPV6INIT=no
MTU=1500
NM_CONTROLLED=no
ONBOOT=yes
IPADDR=192.168.48.213
NETMASK=255.255.255.0
GATEWAY=192.168.48.1
DNS1=192.168.86.37
USERCTL=no
[root@node01 ~]# ovs-vsctl add-br br-ex; ovs-vsctl add-port br-ex eth0; service network restart
Restart the neutron-l3-agent service:
[root@node01 ~]# /etc/init.d/neutron-l3-agent restart
Stopping neutron-l3-agent: [ OK ]
Starting neutron-l3-agent: [ OK ]
2) Use Memcache as the token backend
[root@node01 ~]# vi /etc/keystone/keystone.conf
# Controls the token construction, validation, and revocation
# operations. Core providers are
# "keystone.token.providers.[pki|uuid].Provider". (string
# value)
#provider=
#provider=keystone.token.providers.pki.Provider
provider=keystone.token.providers.uuid.Provider
# Keystone Token persistence backend driver. (string value)
#driver=keystone.token.backends.sql.Token
driver=keystone.token.backends.memcache.Token
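The two changes above can also be applied non-interactively with sed. A sketch against a stand-in copy of the [token] section (on a real node the file is /etc/keystone/keystone.conf; back it up first):

```shell
# Stand-in [token] section; do not run this blindly against the real file.
CONF=$(mktemp)
cat > "$CONF" <<'EOF'
[token]
provider=keystone.token.providers.pki.Provider
driver=keystone.token.backends.sql.Token
EOF
# Switch to UUID tokens and the memcache persistence backend
sed -i 's|^provider=.*|provider=keystone.token.providers.uuid.Provider|' "$CONF"
sed -i 's|^driver=.*|driver=keystone.token.backends.memcache.Token|' "$CONF"
cat "$CONF"
```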
Restart the memcached service:
[root@node01 ~]# /etc/init.d/memcached restart
Stopping memcached: [ OK ]
Starting memcached: [ OK ]
Restart the related services (the grep pattern '3:启用' matches chkconfig output on a zh_CN locale; on an English locale use '3:on'):
[root@node01 ~]# for i in `chkconfig --list|grep '3:启用'|egrep 'neutron|openstack'|grep -v neutron-ovs-cleanup|awk '{print $1}'`; do service $i restart; done
Stopping neutron-dhcp-agent: [ OK ]
Starting neutron-dhcp-agent: [ OK ]
Stopping neutron-l3-agent: [ OK ]
Starting neutron-l3-agent: [ OK ]
Stopping neutron-metadata-agent: [ OK ]
Starting neutron-metadata-agent: [ OK ]
Stopping neutron-openvswitch-agent: [ OK ]
Starting neutron-openvswitch-agent: [ OK ]
Stopping neutron: [ OK ]
Starting neutron: [ OK ]
Stopping openstack-ceilometer-alarm-evaluator: [ OK ]
Starting openstack-ceilometer-alarm-evaluator: [ OK ]
Stopping openstack-ceilometer-alarm-notifier: [ OK ]
Starting openstack-ceilometer-alarm-notifier: [ OK ]
Stopping openstack-ceilometer-api: [ OK ]
Starting openstack-ceilometer-api: [ OK ]
Stopping openstack-ceilometer-central: [ OK ]
Starting openstack-ceilometer-central: [ OK ]
Stopping openstack-ceilometer-collector: [ OK ]
Starting openstack-ceilometer-collector: [ OK ]
Stopping openstack-ceilometer-compute: [ OK ]
Starting openstack-ceilometer-compute: [ OK ]
Stopping openstack-cinder-api: [ OK ]
Starting openstack-cinder-api: [ OK ]
Stopping openstack-cinder-backup: [ OK ]
Starting openstack-cinder-backup: [ OK ]
Stopping openstack-cinder-scheduler: [ OK ]
Starting openstack-cinder-scheduler: [ OK ]
Stopping openstack-cinder-volume: [ OK ]
Starting openstack-cinder-volume: [ OK ]
Stopping openstack-glance-api: [ OK ]
Starting openstack-glance-api: [ OK ]
Stopping openstack-glance-registry: [ OK ]
Starting openstack-glance-registry: [ OK ]
Stopping openstack-heat-api: [ OK ]
Starting openstack-heat-api: [ OK ]
Stopping openstack-heat-api-cfn: [ OK ]
Starting openstack-heat-api-cfn: [ OK ]
Stopping openstack-heat-api-cloudwatch: [ OK ]
Starting openstack-heat-api-cloudwatch: [ OK ]
Stopping openstack-heat-engine: [ OK ]
Starting openstack-heat-engine: [ OK ]
Stopping keystone: [ OK ]
Starting keystone: [ OK ]
Stopping openstack-nova-api: [ OK ]
Starting openstack-nova-api: [ OK ]
Stopping openstack-nova-cert: [ OK ]
Starting openstack-nova-cert: [ OK ]
Stopping openstack-nova-compute: [ OK ]
Starting openstack-nova-compute: [ OK ]
Stopping openstack-nova-conductor: [ OK ]
Starting openstack-nova-conductor: [ OK ]
Stopping openstack-nova-consoleauth: [ OK ]
Starting openstack-nova-consoleauth: [ OK ]
Stopping openstack-nova-novncproxy: [ OK ]
Starting openstack-nova-novncproxy: [ OK ]
Stopping openstack-nova-scheduler: [ OK ]
Starting openstack-nova-scheduler: [ OK ]
Stopping openstack-swift-account: [ OK ]
Starting openstack-swift-account: [ OK ]
Stopping openstack-swift-account-auditor: [ OK ]
Starting openstack-swift-account-auditor: [ OK ]
Stopping openstack-swift-account-reaper: [ OK ]
Starting openstack-swift-account-reaper: [ OK ]
Stopping openstack-swift-account-replicator: [ OK ]
Starting openstack-swift-account-replicator: [ OK ]
Stopping openstack-swift-container: [ OK ]
Starting openstack-swift-container: [ OK ]
Stopping openstack-swift-container-auditor: [ OK ]
Starting openstack-swift-container-auditor: [ OK ]
Stopping openstack-swift-container-replicator: [ OK ]
Starting openstack-swift-container-replicator: [ OK ]
Stopping openstack-swift-container-updater: [ OK ]
Starting openstack-swift-container-updater: [ OK ]
Stopping openstack-swift-object: [ OK ]
Starting openstack-swift-object: [ OK ]
Stopping openstack-swift-object-auditor: [ OK ]
Starting openstack-swift-object-auditor: [ OK ]
Stopping openstack-swift-object-replicator: [ OK ]
Starting openstack-swift-object-replicator: [ OK ]
Stopping openstack-swift-object-updater: [ OK ]
Starting openstack-swift-object-updater: [ OK ]
Stopping openstack-swift-proxy: [ OK ]
Starting openstack-swift-proxy: [ OK ]
Disable the crontab job:
[root@node01 ~]# crontab -u keystone -e
# HEADER: This file was autogenerated at Sat May 24 00:28:13 +0800 2014 by puppet.
# HEADER: While it can still be managed manually, it is definitely not recommended.
# HEADER: Note particularly that the comments starting with 'Puppet Name' should
# HEADER: not be deleted, as doing so could cause duplicate cron jobs.
# Puppet Name: token-flush
#*/1 * * * * /usr/bin/keystone-manage token_flush >/dev/null 2>&1
keystone-manage token_flush is implemented only for the SQL token backend, so if the job is left enabled it will fail with:
2014-05-26 11:07:01.847 7258 CRITICAL keystone [-] NotImplemented: The action you have requested has not been implemented.
2014-05-26 11:07:01.847 7258 TRACE keystone Traceback (most recent call last):
2014-05-26 11:07:01.847 7258 TRACE keystone File "/usr/bin/keystone-manage", line 51, in
2014-05-26 11:07:01.847 7258 TRACE keystone cli.main(argv=sys.argv, config_files=config_files)
2014-05-26 11:07:01.847 7258 TRACE keystone File "/usr/lib/python2.6/site-packages/keystone/cli.py", line 190, in main
2014-05-26 11:07:01.847 7258 TRACE keystone CONF.command.cmd_class.main()
2014-05-26 11:07:01.847 7258 TRACE keystone File "/usr/lib/python2.6/site-packages/keystone/cli.py", line 154, in main
2014-05-26 11:07:01.847 7258 TRACE keystone token_manager.driver.flush_expired_tokens()
2014-05-26 11:07:01.847 7258 TRACE keystone File "/usr/lib/python2.6/site-packages/keystone/token/backends/kvs.py", line 355, in flush_expired_tokens
2014-05-26 11:07:01.847 7258 TRACE keystone raise exception.NotImplemented()
2014-05-26 11:07:01.847 7258 TRACE keystone NotImplemented: The action you have requested has not been implemented.
2014-05-26 11:07:01.847 7258 TRACE keystone
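Commenting the entry out can also be scripted instead of using `crontab -e`. A sketch on a stand-in copy (on a real node you would pipe `crontab -u keystone -l` through the same sed and load it back with `crontab -u keystone -`):

```shell
# Stand-in copy of the keystone crontab, for illustration
CRON=$(mktemp)
cat > "$CRON" <<'EOF'
# Puppet Name: token-flush
*/1 * * * * /usr/bin/keystone-manage token_flush >/dev/null 2>&1
EOF
# Prefix the token-flush entry with '#' to disable it
sed -i 's|^\*/1 \* \* \* \* /usr/bin/keystone-manage token_flush|#&|' "$CRON"
cat "$CRON"
```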
Quick verification:
[root@node01 ~]# source keystonerc_admin
[root@node01 ~(keystone_admin)]# nova service-list
+------------------+---------------------+----------+---------+-------+----------------------------+-----------------+
| Binary | Host | Zone | Status | State | Updated_at | Disabled Reason |
+------------------+---------------------+----------+---------+-------+----------------------------+-----------------+
| nova-consoleauth | node01.linuxfly.org | internal | enabled | up | 2014-05-26T03:08:08.000000 | - |
| nova-conductor | node01.linuxfly.org | internal | enabled | up | 2014-05-26T03:08:06.000000 | - |
| nova-scheduler | node01.linuxfly.org | internal | enabled | up | 2014-05-26T03:08:08.000000 | - |
| nova-compute | node01.linuxfly.org | nova | enabled | up | 2014-05-26T03:08:09.000000 | - |
| nova-compute | node02.linuxfly.org | nova | enabled | up | 2014-05-26T03:08:08.000000 | - |
| nova-cert | node01.linuxfly.org | internal | enabled | up | 2014-05-26T03:08:06.000000 | - |
+------------------+---------------------+----------+---------+-------+----------------------------+-----------------+
[root@node01 ~(keystone_admin)]# neutron agent-list
+--------------------------------------+--------------------+---------------------+-------+----------------+
| id | agent_type | host | alive | admin_state_up |
+--------------------------------------+--------------------+---------------------+-------+----------------+
| 04990a48-4b83-4999-ae29-1fbc33e9de3a | Metadata agent | node01.linuxfly.org | :-) | True |
| 1d5758ca-73fa-4729-bd95-6a4bf8066c5f | L3 agent | node01.linuxfly.org | :-) | True |
| 3babec70-d4bd-4135-8bd6-097ab7e22a54 | Open vSwitch agent | node02.linuxfly.org | :-) | True |
| 431d2a91-2bc8-4f95-b574-9f6dc94cb49d | DHCP agent | node01.linuxfly.org | :-) | True |
| c706de15-50fb-495b-868b-bb7a228d64d1 | Open vSwitch agent | node01.linuxfly.org | :-) | True |
+--------------------------------------+--------------------+---------------------+-------+----------------+
Installation complete.
Answer file: openstack-icehouse-test-20140523.tgz
Download:
Link: http://pan.baidu.com/s/1o69I6c6  Password: kk32