Building a Highly Available OpenStack (Queens) Cluster, Part 6: Deploying the Neutron Controller/Network Node Cluster




I. OpenStack Neutron Overview

1. Overview

OpenStack Networking (Neutron) lets you create networks and attach interface devices, managed by other OpenStack services, to those networks.

OpenStack Networking interacts primarily with OpenStack Compute to provide network connectivity for its instances.

2. Neutron components

(1) neutron-server: accepts API requests and routes them to the appropriate OpenStack Networking plug-in for action.
(2) OpenStack Networking plug-ins and agents: plug and unplug ports, create networks and subnets, and provide IP addressing. The plug-ins and agents differ by vendor and technology, for example Linux Bridge and Open vSwitch.
(3) Message queue: used by most OpenStack Networking installations to route information between neutron-server and the various agents; it also acts as a database to store networking state for particular plug-ins.

3. Network modes and concepts (network virtualization)

[ KVM ] Four simple network models

[ KVM network virtualization ] Open vSwitch

II. Deploying the Neutron Controller/Network Node Cluster

Adjust the network interface names and addresses to match your own environment.

1. Create the neutron database

Create the database on any controller node; the data is synchronized to the other nodes automatically.

mysql -u root -p

CREATE DATABASE neutron;
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY '123456';
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY '123456';
flush privileges;
exit;

2. Create the neutron API service and endpoints

Run on any controller node.

Calling the Neutron service requires authentication; load the admin credentials from the environment script:

. admin-openrc

1. Create the neutron user

The service project was already created in the Glance chapter; the neutron user is created in the "default" domain.

[root@controller01 ~]# openstack user create --domain default --password=neutron_pass neutron
+---------------------+----------------------------------+
| Field               | Value                            |
+---------------------+----------------------------------+
| domain_id           | default                          |
| enabled             | True                             |
| id

2. Grant the neutron user admin rights

Add the admin role to the neutron user (this command produces no output):

openstack role add --project service --user neutron admin

3. Create the neutron service entity

The neutron service entity is of type "network".

[root@controller01 ~]# openstack service create --name neutron --description "OpenStack Networking" network
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | OpenStack Networking             |
| enabled     | True                             |
| id

4. Create the neutron API endpoints

Note:

The region must match the region created when the admin user was initialized; the API addresses all use the VIP, and if public/internal/admin use different VIPs, keep them distinct; the neutron-api service type is network.

# the neutron-api service type is network
# public api
[root@controller01 ~]# openstack endpoint create --region RegionTest network public http://controller:9696
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 87bb1951240b4cce8b56406642a0d169 |
| interface    | public                           |
| region       | RegionTest                       |
| region_id    | RegionTest                       |
| service_id   | db519ab1d6654bf8af0cccabddf5a0cc |
| service_name | neutron                          |
| service_type | network                          |
| url          | http://controller:9696           |
+--------------+----------------------------------+
# internal api
[root@controller01 ~]# openstack endpoint create --region RegionTest network internal http://controller:9696
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | ab8bfd0e17b945e7bd54c44514965d9f |
| interface    | internal                         |
| region       | RegionTest                       |
| region_id    | RegionTest                       |
| service_id   | db519ab1d6654bf8af0cccabddf5a0cc |
| service_name | neutron                          |
| service_type | network                          |
| url          | http://controller:9696           |
+--------------+----------------------------------+
# admin api
[root@controller01 ~]# openstack endpoint create --region RegionTest network admin http://controller:9696
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | adbcdd77fd9347ed95023a93b62edcff |
| interface    | admin                            |
| region       | RegionTest                       |
| region_id    | RegionTest                       |
| service_id   | db519ab1d6654bf8af0cccabddf5a0cc |
| service_name | neutron                          |
| service_type | network                          |
| url          | http://controller:9696           |
+--------------+----------------------------------+

3. Install Neutron

Install the Neutron services on all controller nodes (the package list below is the standard Queens set for a Linux bridge deployment):

yum install openstack-neutron openstack-neutron-ml2 openstack-neutron-linuxbridge ebtables

4. Configure neutron.conf

Run on all controller nodes.

Note:

Adjust the "bind_host" parameter per node; the neutron.conf file ownership must be root:neutron.

cp -rp /etc/neutron/neutron.conf{,.bak}
egrep -v "^$|^#" /etc/neutron/neutron.conf

[DEFAULT]
bind_host = 10.20.9.189
auth_strategy = keystone
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = true
notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true
# L3 high availability can use either VRRP mode or DVR mode.
# In VRRP mode, the network nodes (here co-located with the controller nodes) run active/standby
# virtual routers via VRRP; when the master fails, the virtual router is not migrated, instead the
# VIP that the router uses to serve traffic floats over to a standby router.
# In DVR mode, L3 forwarding and NAT are distributed onto the compute nodes, so compute nodes also
# take on network-node functions; however, DVR still cannot eliminate the centralized virtual router,
# and to save public IPv4 addresses SNAT is still provided on the network nodes.
# VRRP mode and DVR mode cannot be used at the same time.
# "l3_ha = true" enables the L3 HA feature
l3_ha = true
# create an HA router on at most this many L3 agents
max_l3_agents_per_router = 3
# minimum number of healthy L3 agents required to create an HA router
min_l3_agents_per_router = 2
# VRRP broadcast network
l3_ha_net_cidr = 169.254.192.0/18
# "router_distributed" controls whether routers created by ordinary users default to DVR;
# it defaults to "false". VRRP mode is used here, so the parameter can stay commented out.
# Although from Mitaka onwards it can be enabled together with l3_ha, DVR mode also requires
# matching settings in l3_agent.ini and ml2_conf.ini on the network and compute nodes.
# router_distributed = true
# DHCP high availability: run one DHCP server per network on each of the 3 network nodes
dhcp_agents_per_network = 3
# When fronted by haproxy, services connecting to rabbitmq may hit connection timeouts and reconnects;
# check the service and rabbitmq logs to confirm.
# transport_url = rabbit://openstack:openstack@controller:5673
# rabbitmq has its own clustering mechanism and the official docs recommend connecting to the rabbitmq
# cluster directly; that approach occasionally reports errors at service start for unknown reasons.
# If you do not see this, it is strongly recommended to connect to the rabbitmq cluster directly
# rather than through the haproxy front end.
transport_url = rabbit://openstack:openstack@controller01:5672,openstack:openstack@controller02:5672,openstack:openstack@controller03:5672
[agent]
[cors]
[database]
connection = mysql+pymysql://neutron:123456@controller/neutron
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller01:11211,controller02:11211,controller03:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = neutron_pass
[matchmaker_redis]
[nova]
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionTest
project_name = service
username = nova
password = nova_pass
[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
[oslo_messaging_amqp]
[oslo_messaging_kafka]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[oslo_messaging_zmq]
[oslo_middleware]
[oslo_policy]
[quotas]
[ssl]
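If the file is rewritten as root, the ownership requirement noted above (root:neutron) can get lost; a minimal sketch of restoring it (the mode shown is the usual packaged default, stated here as an assumption):

chown root:neutron /etc/neutron/neutron.conf
chmod 640 /etc/neutron/neutron.conf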

5. Configure ml2_conf.ini

Run on all controller nodes.

Note: the ml2_conf.ini file ownership must be root:neutron.

With a single NIC, set: flat_networks = provider

cp -rp /etc/neutron/plugins/ml2/ml2_conf.ini{,.bak}

cat> /etc/neutron/plugins/ml2/ml2_conf.ini<
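The body of the heredoc above was lost; the following is a minimal sketch of a typical Queens ML2 configuration for a Linux bridge / VXLAN self-service setup (the vni_ranges value and the l2population mechanism driver are assumptions, not taken from the original):

[ml2]
type_drivers = flat,vlan,vxlan
tenant_network_types = vxlan
mechanism_drivers = linuxbridge,l2population
extension_drivers = port_security
[ml2_type_flat]
# single-NIC deployments map the provider flat network here (see the note above)
flat_networks = provider
[ml2_type_vxlan]
vni_ranges = 1:1000
[securitygroup]
enable_ipset = true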

Service initialization reads the ML2 configuration from ml2_conf.ini, but it expects to find it at /etc/neutron/plugin.ini, so create a symlink:

ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini

6. Configure linuxbridge_agent.ini

Run on all controller nodes.

1. Configure linuxbridge_agent.ini

Note: the linuxbridge_agent.ini file ownership must be root:neutron.

With a single NIC, set: physical_interface_mappings = provider:ens192

cp -rp /etc/neutron/plugins/ml2/linuxbridge_agent.ini{,.bak}
cat>/etc/neutron/plugins/ml2/linuxbridge_agent.ini<

sed -i 's/10.0.0.31/10.20.9.189/g' /etc/neutron/plugins/ml2/linuxbridge_agent.ini
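The heredoc content above was also truncated; the following is a sketch of a typical Queens linuxbridge_agent.ini consistent with the notes and the sed command (the template uses 10.0.0.31 as local_ip, which the sed then replaces with the node's own address; the VXLAN, l2_population, and firewall settings are assumptions):

[linux_bridge]
# single-NIC mapping, per the note above
physical_interface_mappings = provider:ens192
[vxlan]
enable_vxlan = true
# overlay endpoint address; replaced per node by the sed command above
local_ip = 10.0.0.31
l2_population = true
[securitygroup]
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver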

2. Configure kernel parameters

bridge: whether bridged traffic is passed through netfilter. If "sysctl -p" fails with a "No such file or directory" error, the kernel module "br_netfilter" needs to be loaded; use "modinfo br_netfilter" to inspect the module and "modprobe br_netfilter" to load it.

echo "# bridge" >> /etc/sysctl.confecho "net.bridge.bridge-nf-call-iptables = 1" >> /etc/sysctl.confecho "net.bridge.bridge-nf-call-ip6tables = 1" >> /etc/sysctl.confsysctl -p

The error looks like this:

# sysctl -p
net.ipv4.ip_nonlocal_bind = 1
net.ipv4.ip_forward = 1
sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-ip6tables: No such file or directory

Solution:

[root@controller01 ml2]# modprobe br_netfilter
[root@controller01 ml2]# ls /proc/sys/net/bridge
bridge-nf-call-arptables   bridge-nf-call-iptables        bridge-nf-filter-vlan-tagged
bridge-nf-call-ip6tables   bridge-nf-filter-pppoe-tagged  bridge-nf-pass-vlan-input-dev
[root@controller01 ml2]# sysctl -p
net.ipv4.ip_nonlocal_bind = 1
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
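Note that modprobe only loads the module for the current boot; to make it persistent across reboots on CentOS 7 you can register it with systemd-modules-load (a suggestion, not part of the original steps):

echo br_netfilter > /etc/modules-load.d/br_netfilter.conf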

7. Configure l3_agent.ini (self-service networking)

Run on all controller nodes.

Note: the l3_agent.ini file ownership must be root:neutron.

cp -rp /etc/neutron/l3_agent.ini{,.bak}
# egrep -v "^$|^#" /etc/neutron/l3_agent.ini
cat>/etc/neutron/l3_agent.ini<
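The heredoc body was cut off; in the Queens install guide the l3_agent.ini for a Linux bridge deployment only sets the interface driver, shown here as a hedged sketch:

[DEFAULT]
# use the Linux bridge interface driver
interface_driver = linuxbridge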

8. Configure dhcp_agent.ini

Run on all controller nodes.

dnsmasq provides the DHCP service.

The dhcp_agent.ini file ownership must be root:neutron.

cp -rp /etc/neutron/dhcp_agent.ini{,.bak}
# egrep -v "^$|^#" /etc/neutron/dhcp_agent.ini
cat>/etc/neutron/dhcp_agent.ini<
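The heredoc body was cut off here as well; a minimal sketch of a typical Queens dhcp_agent.ini matching the dnsmasq note above (enable_isolated_metadata is an assumption):

[DEFAULT]
interface_driver = linuxbridge
# dnsmasq provides DHCP, as noted above
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true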

9. Configure metadata_agent.ini

Run on all controller nodes.

Note:

metadata_proxy_shared_secret must match the value in /etc/nova/nova.conf; the metadata_agent.ini file ownership must be root:neutron.

cp -rp /etc/neutron/metadata_agent.ini{,.bak}
# egrep -v "^$|^#" /etc/neutron/metadata_agent.ini
cat>/etc/neutron/metadata_agent.ini<
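The heredoc body was lost; a minimal sketch of a typical Queens metadata_agent.ini, assuming the nova metadata API is reached through the "controller" VIP hostname used elsewhere in this article, and using the shared secret referenced in the nova.conf step below:

[DEFAULT]
# nova metadata API host (assumption: the controller VIP)
nova_metadata_host = controller
# must match metadata_proxy_shared_secret in /etc/nova/nova.conf
metadata_proxy_shared_secret = neutron_metadata_secret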

10. Configure nova.conf

Run on all controller nodes.

Note:

Only the "[neutron]" section of nova.conf is touched; metadata_proxy_shared_secret must match the value in /etc/neutron/metadata_agent.ini. Add the following to /etc/nova/nova.conf:

[neutron]
url = http://controller:9696
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionTest
project_name = service
username = neutron
password = neutron_pass
service_metadata_proxy = true
metadata_proxy_shared_secret = neutron_metadata_secret

11. Populate the neutron database

Run on any controller node.

(This takes a while; an "OK" at the end means it succeeded.)

su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head"

Verify:

mysql -h controller01 -u neutron -p123456 -e "use neutron;show tables;"

12. Start the services

Run on all controller nodes.

1. Because the nova configuration file was changed, restart the nova service first

systemctl restart openstack-nova-api.service
systemctl status openstack-nova-api.service

2. Start the services and enable them at boot

systemctl enable neutron-server.service \
  neutron-linuxbridge-agent.service \
  neutron-l3-agent.service \
  neutron-dhcp-agent.service \
  neutron-metadata-agent.service
systemctl restart neutron-server.service
systemctl restart neutron-linuxbridge-agent.service
systemctl restart neutron-l3-agent.service
systemctl restart neutron-dhcp-agent.service
systemctl restart neutron-metadata-agent.service

3. Check the service status

systemctl status neutron-server.service \
  neutron-linuxbridge-agent.service \
  neutron-l3-agent.service \
  neutron-dhcp-agent.service \
  neutron-metadata-agent.service

13. Verify

. admin-openrc

List the loaded extensions (the output is long, so it is not shown here):

openstack extension list --network

List the network agents:

[root@controller01 neutron]# openstack network agent list

+--------------------------------------+--------------------+--------------+-------------------+-------+-------+---------------------------+
| ID                                   | Agent Type         | Host         | Availability Zone | Alive | State | Binary                    |
+--------------------------------------+--------------------+--------------+-------------------+-------+-------+---------------------------+
| 1f010128-4955-4859-8cc5-7c065fdaa810 | Metadata agent     | controller02 | None              | :-)   | UP    | neutron-metadata-agent    |
| 1f513f25-3679-43b5-8211-0935829f1022 | Linux bridge agent | controller01 | None              | :-)   | UP    | neutron-linuxbridge-agent |
| 543aea85-d06f-448a-85fa-ff704dcc164b | Linux bridge agent | controller03 | None              | :-)   | UP    | neutron-linuxbridge-agent |
| 62152d6e-d159-4960-b79d-de5b78e395b7 | DHCP agent         | controller01 | nova              | :-)   | UP    | neutron-dhcp-agent        |
| 7950dcad-c195-4980-9c90-038be596a88c | L3 agent           | controller02 | nova              | :-)   | UP    | neutron-l3-agent          |
| 9eb8f181-c422-4e48-9ee7-fa21df2abb9b | Linux bridge agent | controller02 | None              | :-)   | UP    | neutron-linuxbridge-agent |
| a4305c36-0f77-441b-ab8e-ab3e21ea4ffb | DHCP agent         | controller03 | nova              | :-)   | UP    | neutron-dhcp-agent        |
| b60b303c-b3f8-4b6d-9f9b-efe019b7d2b5 | Metadata agent     | controller03 | None              | :-)   | UP    | neutron-metadata-agent    |
| e2748453-bde1-493a-85b9-e5aec12c87f5 | L3 agent           | controller03 | nova              | :-)   | UP    | neutron-l3-agent          |
| ee3a0871-5451-4373-83ce-aefaefdbe6d3 | DHCP agent         | controller02 | nova              | :-)   | UP    | neutron-dhcp-agent        |
| f4c26d8d-7e94-4d0b-a85b-373d55ca3402 | L3 agent           | controller01 | nova              | :-)   | UP    | neutron-l3-agent          |
| fd448e65-cb6f-43c3-a98a-adba06f73176 | Metadata agent     | controller01 | None              | :-)   | UP    | neutron-metadata-agent    |
+--------------------------------------+--------------------+--------------+-------------------+-------+-------+---------------------------+

14. Configure pcs resources

1. Add the neutron-server, neutron-linuxbridge-agent, neutron-l3-agent, neutron-dhcp-agent, and neutron-metadata-agent resources

pcs resource create neutron-server systemd:neutron-server --clone interleave=true
pcs resource create neutron-linuxbridge-agent systemd:neutron-linuxbridge-agent --clone interleave=true
pcs resource create neutron-l3-agent systemd:neutron-l3-agent --clone interleave=true
pcs resource create neutron-dhcp-agent systemd:neutron-dhcp-agent --clone interleave=true
pcs resource create neutron-metadata-agent systemd:neutron-metadata-agent --clone interleave=true

2. Check the pcs resources

[root@controller01 neutron]# pcs resource
 vip    (ocf::heartbeat:IPaddr2):       Started controller01
 Clone Set: lb-haproxy-clone [lb-haproxy]
     Started: [ controller01 ]
     Stopped: [ controller02 controller03 ]
 Clone Set: openstack-keystone-clone [openstack-keystone]
     Started: [ controller01 controller02 controller03 ]
 Clone Set: openstack-glance-api-clone [openstack-glance-api]
     Started: [ controller01 controller02 controller03 ]
 Clone Set: openstack-glance-registry-clone [openstack-glance-registry]
     Started: [ controller01 controller02 controller03 ]
 Clone Set: openstack-nova-api-clone [openstack-nova-api]
     Started: [ controller01 controller02 controller03 ]
 Clone Set: openstack-nova-consoleauth-clone [openstack-nova-consoleauth]
     Started: [ controller01 controller02 controller03 ]
 Clone Set: openstack-nova-scheduler-clone [openstack-nova-scheduler]
     Started: [ controller01 controller02 controller03 ]
 Clone Set: openstack-nova-conductor-clone [openstack-nova-conductor]
     Started: [ controller01 controller02 controller03 ]
 Clone Set: openstack-nova-novncproxy-clone [openstack-nova-novncproxy]
     Started: [ controller01 controller02 controller03 ]
 Clone Set: neutron-server-clone [neutron-server]
     Started: [ controller01 controller02 controller03 ]
 Clone Set: neutron-linuxbridge-agent-clone [neutron-linuxbridge-agent]
     Started: [ controller01 controller02 controller03 ]
 Clone Set: neutron-l3-agent-clone [neutron-l3-agent]
     Started: [ controller01 controller02 controller03 ]
 Clone Set: neutron-dhcp-agent-clone [neutron-dhcp-agent]
     Started: [ controller01 controller02 controller03 ]
 Clone Set: neutron-metadata-agent-clone [neutron-metadata-agent]
     Started: [ controller01 controller02 controller03 ]

