
Some thoughts on the Havana OpenStack VMware Plugin

Posted by pig2 on 2014-1-16 01:17 (last edited by pig2 on 2014-1-16 01:19) · 5704 views

I have never worked with VMware and have not read the Nova VMware driver code, but today a colleague asked me whether the Nova VMware driver can use Neutron to create VLAN networks. After taking in their feedback and doing some theoretical analysis, my understanding is as follows.

First, Dan (a former Neutron PTL) said the following on this page (https://communities.vmware.com/message/2315081):
The only Neutron plugin that is validated to work with vSphere is the Nicira NVP (now renamed to VMware NSX) plugin.  With other plugins like the OVS plugin, the Quantum calls will succeed and an IP will be allocated, but the underlying "plumbing" won't work correctly, which is why DHCP traffic is not getting through.

If you wanted to use the OVS plugin in VLAN mode and only create a single network, it is technically possible to make DHCP work, as you could map the br-int port group to the same VLAN that the OVS plugin is putting the VM + DHCP server traffic on to, but this is really only a hack for very limited scenarios.

The Nicira NVP / VMware NSX plugin is available only via VMware and requires direct engagement with the VMware networking team (i.e., no generally available trial download).  You can send me a private message via this communities page if you have a near term production deployment opportunity and would like to be put in touch with the VMware networking team.

In the future, we are considering a basic Neutron plugin that uses vsphere port-groups, though the value of such a model is somewhat limited, as it won't support many key features, including security groups.  

You also need to understand the three VLAN tagging modes in VMware: VGT (Virtual Guest Tagging), where the tag is applied inside the VM; VST (Virtual Switch Tagging), where the tag is applied on the VMware virtual switch; and EST (External Switch Tagging), where the tag is applied on the external physical switch. See my blog: http://blog.csdn.net/quqi99/article/details/8727130 . A quick look at the nova VMware driver code shows that the tag is clearly applied on the virtual switch (in VMware terms, via a port group), i.e. VST mode.

The concrete conclusions:
1. nova-network can call the vCenter API to apply the tag (i.e. create the port group).
2. Neutron NVP can also call the vCenter API to apply the tag, but it requires the VMware NSX SDN controller, which is commercial software; we obviously prefer to use OVS.
3. With the Neutron ml2 OVS plugin, the OVS agent obviously cannot call the vCenter API to apply the tag automatically (i.e. create the port group), so a separate VMware agent would have to be written to do that.


Implementing this VMware agent is quite simple; the concrete steps follow.
Step 1: be very clear about what the agent actually does. To explain:

1. Creating a network or subnet only records it in the DB; nothing else happens.

2. For internal and external networks, the physical tap and bridge are created by l3-agent: its internal_network_added and external_gateway_added methods call plug in interface.py.

3. For a VM's tap and bridge, nova-compute calls plug in vif.py during spawn.

4. After steps 2 and 3 have created the actual taps and bridges, the Neutron agent (running on the l3-agent and nova-compute nodes) handles the data-plane abstraction for each tap: it detects that a tap has been added to br-int, fetches the port info associated with that tap from the DB, and then sets up everything related to the port, such as the VLAN flow rules and the security-group iptables rules.
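Point 4's detect-then-configure loop can be sketched in a few lines of Python (a minimal illustration, not real Neutron code — PORT_DB stands in for the Neutron database, and apply_port_settings stands in for the real flow/iptables programming):

```python
# Hypothetical sketch of an L2 agent's polling loop: diff the taps seen
# on br-int against the previous scan, then look up each new tap's port
# record and apply its VLAN / security-group settings.

PORT_DB = {
    "tapA": {"network_type": "vlan", "segmentation_id": 122},
    "tapB": {"network_type": "vlan", "segmentation_id": 123},
}

def scan_new_taps(current_taps, known_taps):
    """Return the taps that appeared on br-int since the last scan."""
    return sorted(set(current_taps) - set(known_taps))

def process_new_taps(current_taps, known_taps, apply_port_settings):
    """Configure every newly detected tap; return the updated known set."""
    for tap in scan_new_taps(current_taps, known_taps):
        port = PORT_DB.get(tap)
        if port:  # program vlan flows, security-group rules, etc.
            apply_port_settings(tap, port)
    return set(current_taps)

applied = []
record = lambda tap, port: applied.append((tap, port["segmentation_id"]))
known = process_new_taps(["tapA"], [], record)            # first scan
known = process_new_taps(["tapA", "tapB"], known, record) # second scan
```

Each scan only touches taps it has not seen before, which is why the agent can run as a simple periodic loop.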


Step 2: much of the code already exists.
First, understand that l3-agent provides the gateway, so VMware VMs can reach the outside through the gateway port on Open vSwitch. But when we use the OVS agent, it does not call the vCenter API to set the VLAN (i.e. create the port group), so we would need to write such a VMware agent. We could of course take the OVS agent code and, at the point where it sets the VLAN, change it to call the vCenter API that creates the port group. But because the OVS agent code is rather complex, it is better to use the Hyper-V agent as the skeleton and move the necessary functions over from the OVS agent. As for how to call the vCenter SOAP API (wsdl_location=http://127.0.0.1:8080/vmware/SDK/wsdl/vim25/vimService.wsdl) to set the port group, nova-network already contains that code ($nova/virt/vmwareapi/vim.py): we just need to move the ensure_vlan_bridge method from $nova/virt/vmwareapi/vif.py into the port_bound method of the Neutron agent, under the if network_type == p_const.TYPE_VLAN: branch. But there is a problem here:

With OVS, the nova-compute driver normally creates the physical tap and bridge and plugs the tap into the bridge; the Neutron OVS agent merely notices the newly added tap, fetches the associated port info from the DB, and finishes the VLAN and security-group setup.

For ESXi port groups, however, there is no such three-step dance of creating a tap, creating a bridge, and plugging the tap into the bridge. So we do not need to write a VMware agent at all: we keep using the OVS agent to handle the gateway ports created by l3, and all we need to write is a VMware mechanism driver.

A prototype follows (untested); the complete, untested draft patch is also in attachment 2 (all code was lifted from nova-network in ten-odd minutes, untested, and only illustrates the principle).

from oslo.config import cfg

from neutron.common import constants
from neutron.extensions import portbindings
from neutron.plugins.ml2.drivers import mech_agent


class VMwareVCMechanismDriver(mech_agent.AgentMechanismDriverBase):
    """Create a vCenter port group for each ml2 VLAN network.

    Port binding is still done by the openvswitch L2 agent; this
    driver only mirrors each Neutron network into vCenter.
    """

    def __init__(self):
        super(VMwareVCMechanismDriver, self).__init__(
            constants.AGENT_TYPE_OVS,
            portbindings.VIF_TYPE_OVS,
            True)
        # VMwareAPISession and vif are the pieces moved out of
        # $nova/virt/vmwareapi/ (vim.py / vif.py).
        self._session = VMwareAPISession(scheme='https')
        self._cluster = cfg.CONF.VMWARE.cluster_name

    def check_segment_for_agent(self, segment, agent):
        # Never bind through this driver; binding is left to the OVS agent.
        return False

    def create_network_postcommit(self, context):
        """Provision the network on the VMware vCenter as a port group."""
        network = context.current
        vif.ensure_vlan_bridge(self._session, network,
                               cluster=self._cluster, create_vlan=True)

Complete operation process:

Three related components:
- vCenter driver (nova-compute): run the vCenter driver "nova.virt.vmwareapi.driver.VMwareVCDriver" on any host to control vCenter; it uses the ESXi vSwitch to provide the L2 function.
- VMware mech driver: creates the port group, so that the OVS bridge on the l3-agent node and the VMware vSwitch on ESXi can interconnect via the standard 802.1Q VLAN protocol.
- l3-agent with OVS agent: provides the L3 function for VMware VMs.

Precondition:
1. Manually create an ESXi vSwitch backed by the physical NIC vmnic0 (the commands below recreate vSwitch0; br-int is the integration port-group name), with the related configuration as below:
   integration_bridge=br-int    # default port group name
   vlan_interface=vmnic0        # physical nic name

   esxcfg-vswitch -d vSwitch0
   esxcfg-vswitch -a vSwitch0
   esxcfg-vswitch --link=vmnic0 vSwitch0

Process:
1. Create the network and subnet; neutron-api records the following info into the DB.
   neutron net-create net_vlan --tenant_id=$TENant_ID  --provider:network_type vlan --provider:physical_network physnet1 --provider:segmentation_id 122
   neutron subnet-create --tenant_id $TENANT_ID --ip_version 4 --gateway 10.0.1.1 net_vlan 10.0.1.0/24

   and the related configuration is as below:
     tenant_network_type=vlan
     network_vlan_ranges = physnet1:1:4094
     bridge_mappings = physnet1:br-phy
2. After the network and subnet are created, the VMware mech driver also creates the port group (network) for VMware.
3. After the network and subnet are created, l3-agent creates an OVS port as the gateway for the subnet and plugs it into the OVS bridge br-phy when it syncs the routers.
4. VMwareVCDriver retrieves the port group (network) info for us via the get_network_ref method in $nova/virt/vmwareapi/vif.py when a VM is created with the command "nova boot --flavor 1 --image <image-uuid> --nic net-id=<$NETWORK_ID> vm1".
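The lookup in step 4 boils down to matching the Neutron network's name against the port groups present in the vCenter inventory. A hypothetical sketch (PORT_GROUPS stands in for what the vSphere API would return; this is not nova's actual get_network_ref code):

```python
# Stand-in for the port-group inventory a vSphere API query would return:
# port-group name -> its vSwitch and VLAN id.
PORT_GROUPS = {"net_vlan": {"vswitch": "vSwitch0", "vlan": 122}}

def get_network_ref(network_name, port_groups):
    """Return the port-group record whose name matches the network name."""
    ref = port_groups.get(network_name)
    if ref is None:
        raise LookupError("port group %s not found" % network_name)
    return ref

ref = get_network_ref("net_vlan", PORT_GROUPS)
```

Because the mech driver named the port group after the Neutron network, this name-based lookup is all the vCenter driver needs to attach the VM's vNIC to the right VLAN.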


Appendix 1: managing port groups with ESXi commands

1. List the vSwitches
esxcfg-vswitch -l
2. Delete and re-add a vSwitch, then attach the physical NICs vmnic3 and vmnic4 to it
esxcfg-vswitch -d MainGuestVirtualSwitch
esxcfg-vswitch -a MainGuestVirtualSwitch
esxcfg-vswitch --link=vmnic3 MainGuestVirtualSwitch
esxcfg-vswitch --link=vmnic4 MainGuestVirtualSwitch
3. Add two port groups
esxcfg-vswitch --add-pg=PrivateNetwork MainGuestVirtualSwitch
esxcfg-vswitch --add-pg=ShopFloor MainGuestVirtualSwitch
4. Associate each port group with a VLAN id; the port group then acts as a friendly network name.
esxcfg-vswitch --vlan=334 --pg=PrivateNetwork MainGuestVirtualSwitch
esxcfg-vswitch --vlan=332 --pg=ShopFloor MainGuestVirtualSwitch
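Steps 3 and 4 above are easy to script. A small helper (hypothetical, just string assembly — it builds the command lines rather than running them) that emits the add-port-group and set-vlan commands for a batch of port groups:

```python
def portgroup_commands(vswitch, groups):
    """Build the esxcfg-vswitch commands that create each port group
    on the given vSwitch and bind it to its VLAN id."""
    cmds = []
    for name, vlan in groups:
        cmds.append("esxcfg-vswitch --add-pg=%s %s" % (name, vswitch))
        cmds.append("esxcfg-vswitch --vlan=%d --pg=%s %s"
                    % (vlan, name, vswitch))
    return cmds

cmds = portgroup_commands("MainGuestVirtualSwitch",
                          [("PrivateNetwork", 334), ("ShopFloor", 332)])
```

Feeding the result to a remote shell on the ESXi host reproduces the appendix's steps 3 and 4 exactly.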

Attachment: 代码下载.zip (21.18 KB)
