Building a Highly Available OpenStack (Queens) Cluster (Part 10): Deploying Distributed Storage with Ceph



I. Ceph fundamentals

II. Deploying distributed storage with Ceph

1) Configure the yum repositories

Configure the epel and ceph yum repositories on all controller and compute nodes.

1. epel repo: fetch the repo file into /etc/yum.repos.d/epel-7.repo (e.g. with wget -O; the mirror URL was stripped from the original post).

2. ceph repo: point yum at a ceph Luminous repository as well (the original repo URL was also stripped; a sketch follows below), then rebuild the cache:

yum clean all
yum makecache
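The ceph repo file itself did not survive extraction. A minimal sketch of what it can look like, assuming the upstream download.ceph.com Luminous repository is used in place of the original (stripped) mirror URL — e.g. /etc/yum.repos.d/ceph.repo:

[ceph]
name=Ceph packages for $basearch
baseurl=https://download.ceph.com/rpm-luminous/el7/$basearch
enabled=1
gpgcheck=1
gpgkey=https://download.ceph.com/keys/release.asc

[ceph-noarch]
name=Ceph noarch packages
baseurl=https://download.ceph.com/rpm-luminous/el7/noarch
enabled=1
gpgcheck=1
gpgkey=https://download.ceph.com/keys/release.asc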

3. Check the yum repositories

yum repolist

2) Basic environment: hosts entries, NTP time synchronization, opening iptables ports, and related setup (a sketch follows below)
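A minimal sketch of these prerequisites, run as root on every controller and compute node. The controller IP-to-hostname mapping follows mon_host/mon_initial_members later in this post; mapping 10.20.9.46 and 10.20.9.198 to compute01 and compute02 is an assumption, as is the use of chrony for time synchronization:

cat >> /etc/hosts <<'EOF'
10.20.9.189 controller01
10.20.9.190 controller02
10.20.9.45  controller03
10.20.9.46  compute01
10.20.9.198 compute02
EOF

# time synchronization (chrony assumed)
systemctl enable chronyd && systemctl restart chronyd

# open the ceph ports: 6789/tcp (mon), 6800-7300/tcp (osd/mgr), 7000/tcp (mgr dashboard)
iptables -A INPUT -p tcp --dport 6789 -j ACCEPT
iptables -A INPUT -p tcp --dport 6800:7300 -j ACCEPT
iptables -A INPUT -p tcp --dport 7000 -j ACCEPT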

3) Create the deployment user

Perform on all controller and compute nodes.

1. Create the user

useradd -d /home/ceph -m cephde
echo cephde | passwd --stdin cephde
echo 'cephde ALL=(ALL) NOPASSWD: ALL' >> /etc/sudoers

2. Grant sudo privileges

su - cephde
$ echo "cephde ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/cephde
[sudo] password for cephde: cephde
$ sudo chmod 0440 /etc/sudoers.d/cephde
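On some default CentOS installs sudo is also configured with requiretty, which breaks ceph-deploy's remote sudo calls. Relaxing it is an extra step not shown in the original, only needed if a "sorry, you must have a tty to run sudo" error appears later:

# comment out the requiretty line in /etc/sudoers if it is present
sudo sed -i 's/^Defaults.*requiretty/#&/' /etc/sudoers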

4) Configure passwordless SSH login

ceph-deploy does not support interactive password entry, so SSH keys must be generated on all controller nodes and the public key distributed to every ceph node (controller and storage nodes). Generate the key as the cephde user, not with sudo or root; the key pair is created under ~/.ssh in the user's home directory by default. Press Enter at the "Enter passphrase" prompt to leave the passphrase empty. Since all 3 controller nodes act as ceph admin nodes, each of them should be able to SSH into every other controller and storage node without a password.

1. Generate the key pair

# su cephde
$ ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/home/ceph/.ssh/id_rsa): 
Created directory '/home/ceph/.ssh'.
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /home/ceph/.ssh/id_rsa.
Your public key has been saved in /home/ceph/.ssh/id_rsa.pub.
The key fingerprint is:
1e:a8:cd:c7:45:a9:e0:ef:01:6a:d2:0e:46:a5:8c:d5 cephde@controller03
The key's randomart image is:
+--[ RSA 2048]----+
|                 |
|  . .            |
|   . E . o       |
|    + o . o o    |
|   . + + S .     |
|    . . = = o    |
|       + = o *   |
|        . = o .  |
|           . .   |
+-----------------+

2. Distribute the public key

This assumes the cephde user already exists on every controller and storage node. After distribution succeeds, a known_hosts file recording the login information is created under ~/.ssh/. Since all 3 controller nodes are ceph admin nodes, each of them should be able to SSH into every other controller and storage node without a password.

(Note: the sshpass package must be installed.)

#!/bin/bash
IP="10.20.9.189
10.20.9.190
10.20.9.45
10.20.9.46
10.20.9.198"
for node in ${IP};do
    sshpass -p cephde ssh-copy-id -p 22 cephde@$node -o StrictHostKeyChecking=no
done

Batch-distribute the public key without interaction

(Note: run the script as the cephde user.)
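Before moving on, it is worth confirming that passwordless login actually works from the cephde account to every node (a quick check that is not in the original post; hostnames rely on the /etc/hosts entries from the basic-environment section):

su - cephde
for h in controller01 controller02 controller03 compute01 compute02; do
    ssh -o BatchMode=yes $h hostname    # BatchMode fails instead of prompting for a password
done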

3. Configure ~/.ssh/config (optional, on the three controller nodes)

Create a ~/.ssh/config file in root's home directory so that ceph-deploy can be run on the admin nodes without switching users or passing "--username {username}".

On each node, remove its own entry.

# ceph-deploy
Host controller02
   Hostname controller02
   User cephde
Host controller03
   Hostname controller03
   User cephde
Host compute01
   Hostname compute01
   User cephde
Host compute02
   Hostname compute02
   User cephde

/root/.ssh/config

5) Install ceph-deploy

Install the ceph-deploy tool on all planned controller/admin nodes.

yum install -y ceph-deploy

6) Create the ceph cluster

Run on any controller node.

1. Create the cluster

Work as the cephde user and never use sudo here. Create a directory on the admin node to hold the cluster configuration files.

su - cephde
mkdir cephcluster

All subsequent ceph-deploy operations are executed inside this directory.

Add the planned MON (monitor) nodes to the cluster, i.e. create the cluster:

cd ~/cephcluster/
ceph-deploy new controller01 controller02 controller03

2. Modify the cluster configuration file

After the cluster is created, 3 files appear in the cluster directory; ceph.conf is the configuration file. It can be left at its defaults, but to make the services start as planned a few options are added — in the modified file below these are the lines following the auth settings (they were highlighted in red in the original post).

The ceph.conf generated by the command above:

[global]
fsid = 74082074-0322-460a-b962-436fe36f8e7b
mon_initial_members = controller01, controller02, controller03
mon_host = 10.20.9.189,10.20.9.190,10.20.9.45
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx


The modified ceph.conf:

[global]
fsid = 74082074-0322-460a-b962-436fe36f8e7b
mon_initial_members = controller01, controller02, controller03
mon_host = 10.20.9.189,10.20.9.190,10.20.9.45
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx

# public network: must be in the same subnet as mon_host, otherwise initialization may fail
# cluster network: back-end network for osd heartbeats and data/replication/recovery traffic
public_network = 10.20.9.0/24
cluster_network = 10.0.0.0/24
osd_pool_default_size = 2
mon_allow_pool_delete = true

7) Install ceph

Install ceph on all controller/admin and storage nodes.

In theory, ceph can be installed on all nodes at once with ceph-deploy from the cluster directory on a controller node: ceph-deploy install controller01 controller02 controller03 compute01 compute02 compute03. In practice this very often fails because of slow downloads, so ceph and ceph-radosgw can instead be installed separately on each node:

yum install -y ceph ceph-radosgw

Check the version:

[cephde@controller01 cephcluster]$ ceph -v
ceph version 12.2.8 (ae699615bac534ea496ee965ac6192cb7e0e07c0) luminous (stable)

8) Initialize ceph_mon

1. Initialize the monitors

Run on any controller/admin node (if /etc/ceph/ceph.conf already exists, add the --overwrite-conf option).

ceph-deploy mon create-initial

1. Normal output

When it finishes, the output ends with content like the following (which means the configuration is fine):

............................................................................
[controller01][INFO  ] Running command: sudo /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-controller01/keyring auth get-or-create client.bootstrap-rgw mon allow profile bootstrap-rgw
[ceph_deploy.gatherkeys][INFO  ] Storing ceph.client.admin.keyring
[ceph_deploy.gatherkeys][INFO  ] Storing ceph.bootstrap-mds.keyring
[ceph_deploy.gatherkeys][INFO  ] Storing ceph.bootstrap-mgr.keyring
[ceph_deploy.gatherkeys][INFO  ] keyring 'ceph.mon.keyring' already exists
[ceph_deploy.gatherkeys][INFO  ] Storing ceph.bootstrap-osd.keyring
[ceph_deploy.gatherkeys][INFO  ] Storing ceph.bootstrap-rgw.keyring
[ceph_deploy.gatherkeys][INFO  ] Destroy temp directory /tmp/tmpbCEBnS

After it completes, five new keyring files appear in the current directory: ceph.client.admin.keyring, ceph.bootstrap-mds.keyring, ceph.bootstrap-mgr.keyring, ceph.bootstrap-osd.keyring and ceph.bootstrap-rgw.keyring.

2. Error output

The error:

[controller02][WARNING] The service command supports only basic LSB actions (start, stop, restart, try-restart, reload, force-reload, status). For other actions, please try to use systemctl.
[controller02][ERROR ] RuntimeError: command returned non-zero exit status: 2
[ceph_deploy.mon][ERROR ] Failed to execute command: /usr/sbin/service ceph -c /etc/ceph/ceph.conf start mon.controller02

Cause of the error: the ceph-deploy version shipped in epel is too old; installing the latest version from the ceph upstream repository fixes it.
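A hedged sketch of the fix, assuming the [ceph-noarch] repository from the yum-source section is configured; the ceph-deploy log later in this post shows version 2.0.1 in use:

sudo yum remove -y ceph-deploy
sudo yum install -y ceph-deploy      # now pulled from the ceph-noarch repo rather than epel
ceph-deploy --version                # expect a 2.x version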

Check the monitor status (e.g. with sudo ceph -s), or per daemon:

systemctl status ceph-mon@controller01

9) Distribute ceph.conf and the keyrings

Distribute the ceph configuration file and keyrings to the other controller/admin and storage nodes. Note that the distributing node itself must also be included, since it has no keyring files by default. If the target nodes already have a configuration file (e.g. when pushing a configuration change to all nodes), use:

ceph-deploy --overwrite-conf admin xxx

The distributed configuration file and keyrings land in /etc/ceph/ on each node (if a node already has a configuration file there, the --overwrite-conf option is required).

ceph-deploy --overwrite-conf admin controller01 controller02 controller03 compute01 compute02
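Optionally (not part of the original walkthrough), the admin keyring can be made readable on a node so that the ceph CLI works there without sudo; note that this makes the keyring world-readable on that node:

sudo chmod +r /etc/ceph/ceph.client.admin.keyring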

10) Install ceph_mgr

1. Install mgr

Run on any controller node.

The luminous release requires the mgr (dashboard) component to be installed.

ceph-deploy mgr create controller01:controller01_mgr controller02:controller02_mgr controller03:controller03_mgr

Check the status:

systemctl status ceph-mgr@controller01_mgr
sudo netstat -tunlp | grep ceph-mgr

2. Start mgr (enable the dashboard)

Run on any controller node.

The modules mgr has enabled can be listed with: (sudo) ceph mgr module ls. By default the dashboard module appears in the list of available modules but is not enabled, so it has to be enabled manually:

sudo ceph mgr module enable dashboard

The dashboard service is now enabled and by default listens on TCP port 7000 on all addresses.

To change the listen address and port, do the following.

Set the listen address: (sudo) ceph config-key put mgr/dashboard/server_addr x.x.x.x
Set the listen port: (sudo) ceph config-key put mgr/dashboard/server_port x
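For example (illustrative values only — binding to controller01's address and keeping port 7000 is an assumption, not a recommendation from the original), followed by re-enabling the module so the new keys take effect:

sudo ceph config-key put mgr/dashboard/server_addr 10.20.9.189
sudo ceph config-key put mgr/dashboard/server_port 7000
# reload the dashboard module so it picks up the new config-keys
sudo ceph mgr module disable dashboard
sudo ceph mgr module enable dashboard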

Verify the service:

sudo netstat -tunlp | grep 7000

3. Web login

There is no password by default.

Web login: open the dashboard address in a browser (the URL was stripped from the original post; by default the dashboard listens on the active mgr node, TCP port 7000).

11) Check the ceph cluster status

1. Check the monitor status

ceph mon stat

2. Check the ceph status

ceph health (detail), ceph -s, ceph -w, etc.

The status shows mgr running in active-standby mode.

[cephde@controller01 cephcluster]$ sudo ceph -s
  cluster:
    id:     74082074-0322-460a-b962-436fe36f8e7b
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum controller03,controller01,controller02
    mgr: controller01_mgr(active), standbys: controller03_mgr, controller02_mgr
    osd: 0 osds: 0 up, 0 in

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0B
    usage:   0B used, 0B / 0B avail
    pgs:

3. Authentication information can be viewed on any node

[cephde@controller01 cephcluster]$ sudo ceph auth list
installed auth entries:

client.admin
        key: AQBebJdb89NfFhAA2D9dFESIX2GhrT/O6AmXqA==
        caps: [mds] allow *
        caps: [mgr] allow *
        caps: [mon] allow *
        caps: [osd] allow *
client.bootstrap-mds
        key: AQBfbJdbcfQPDhAA1xdq6WhsiyyG79M6hgEqPQ==
        caps: [mon] allow profile bootstrap-mds
client.bootstrap-mgr
        key: AQBgbJdbTyX/GRAA7RYzmYL7Xx3NnUFg6s9JcQ==
        caps: [mon] allow profile bootstrap-mgr
client.bootstrap-osd
        key: AQBhbJdbKAfFGRAACyKbfDP1V0Ub92Pw4aU8qQ==
        caps: [mon] allow profile bootstrap-osd
client.bootstrap-rgw
        key: AQBibJdbfcXxExAAU3Ujlajuu8Pj2vT+f9rAoQ==
        caps: [mon] allow profile bootstrap-rgw
mgr.controller01_mgr
        key: AQApcJdbjzjkFBAAOz8BodoKJzI1iMeKKwksfQ==
        caps: [mds] allow *
        caps: [mon] allow profile mgr
        caps: [osd] allow *
mgr.controller02_mgr
        key: AQAqcJdbaIG5MxAA5+CM7MfiMC/dlkE6NqIdkw==
        caps: [mds] allow *
        caps: [mon] allow profile mgr
        caps: [osd] allow *
mgr.controller03_mgr
        key: AQAscJdbYtGtHhAAhmcV1PgTHQPffEnWV2Umfg==
        caps: [mds] allow *
        caps: [mon] allow profile mgr
        caps: [osd] allow *

12) Create the OSDs (storage)

1. Create the OSDs

1. After adding the disks, check them on the storage node

On the storage node, check the disk layout (the disks only need to be attached; no partitioning or formatting is required). Taking compute01 as an example:

$ lsblk
NAME            MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
fd0               2:0    1    4K  0 disk 
sda               8:0    0   80G  0 disk 
├─sda1            8:1    0    1G  0 part /boot
├─sda2            8:2    0   19G  0 part 
│ ├─cl-root     253:0    0   77G  0 lvm  /
│ └─cl-swap     253:1    0    2G  0 lvm  [SWAP]
└─sda3            8:3    0   60G  0 part 
  └─cl-root     253:0    0   77G  0 lvm  /
sdb               8:16   0   50G  0 disk 
└─sdb1            8:17   0   50G  0 part /opt
sdc               8:32   0   10G  0 disk 
sdd               8:48   0   16G  0 disk 
sde               8:64   0   16G  0 disk 
sr0              11:0    1 1024M  0 rom

2. Create the OSDs

Create them from the admin node with ceph-deploy. In this example there are two OSD nodes (compute01 and compute02), each running 3 osd daemons (each daemon listens on one local port in the 6800-7300 range).

ceph-deploy osd create compute01 --data /dev/sdc
ceph-deploy osd create compute01 --data /dev/sdd
ceph-deploy osd create compute01 --data /dev/sde
ceph-deploy osd create compute02 --data /dev/sdc
ceph-deploy osd create compute02 --data /dev/sdd
ceph-deploy osd create compute02 --data /dev/sde

If the commands succeed, the OSDs are created (the screenshot from the original post is omitted here).
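If creation fails because a disk still carries old partitions or LVM metadata, the device can be wiped first with ceph-deploy's disk zap subcommand — an aside not in the original walkthrough; it destroys all data on the device:

ceph-deploy disk zap compute01 /dev/sdc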

2. Check the OSD status

1. On the admin node

List the OSDs:

$ ceph-deploy osd list compute01

[ceph_deploy.conf][DEBUG ] found configuration file at: /home/ceph/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /bin/ceph-deploy osd list compute01
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username          : None
[ceph_deploy.cli][INFO  ]  verbose           : False
[ceph_deploy.cli][INFO  ]  debug             : False
[ceph_deploy.cli][INFO  ]  overwrite_conf    : False
[ceph_deploy.cli][INFO  ]  subcommand        : list
[ceph_deploy.cli][INFO  ]  quiet             : False
[ceph_deploy.cli][INFO  ]  cd_conf           : 
[ceph_deploy.cli][INFO  ]  cluster           : ceph
[ceph_deploy.cli][INFO  ]  host              : ['compute01']
[ceph_deploy.cli][INFO  ]  func              : 
[ceph_deploy.cli][INFO  ]  ceph_conf         : None
[ceph_deploy.cli][INFO  ]  default_release   : False
[compute01][DEBUG ] connection detected need for sudo
[compute01][DEBUG ] connected to host: compute01 
[compute01][DEBUG ] detect platform information from remote host
[compute01][DEBUG ] detect machine type
[compute01][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO  ] Distro info: CentOS Linux 7.5.1804 Core
[ceph_deploy.osd][DEBUG ] Listing disks on compute01...
[compute01][DEBUG ] find the location of an executable
[compute01][INFO  ] Running command: sudo /usr/sbin/ceph-volume lvm list
[compute01][DEBUG ] 
[compute01][DEBUG ] ====== osd.1 =======
[compute01][DEBUG ] 
[compute01][DEBUG ]   [block]       /dev/ceph-9a7db77c-f52a-4403-a1a4-2287cf024cee/osd-block-83583e83-a824-41c5-9d54-6529b0dca943
[compute01][DEBUG ] 
[compute01][DEBUG ]       type                      block
[compute01][DEBUG ]       osd id                    1
[compute01][DEBUG ]       cluster fsid              74082074-0322-460a-b962-436fe36f8e7b
[compute01][DEBUG ]       cluster name              ceph
[compute01][DEBUG ]       osd fsid                  83583e83-a824-41c5-9d54-6529b0dca943
[compute01][DEBUG ]       encrypted                 0
[compute01][DEBUG ]       cephx lockbox secret      
[compute01][DEBUG ]       block uuid                9jfm16-CoLD-pTMt-ioQB-qeK1-waCL-P8YKaz
[compute01][DEBUG ]       block device              /dev/ceph-9a7db77c-f52a-4403-a1a4-2287cf024cee/osd-block-83583e83-a824-41c5-9d54-6529b0dca943
[compute01][DEBUG ]       vdo                       0
[compute01][DEBUG ]       crush device class        None
[compute01][DEBUG ]       devices                   /dev/sdd
[compute01][DEBUG ] 
[compute01][DEBUG ] ====== osd.0 =======
[compute01][DEBUG ] 
[compute01][DEBUG ]   [block]       /dev/ceph-693dac4c-5d8c-4c94-aa6e-8e7360eb3dcc/osd-block-cba9c3bc-f75b-4bc7-93e4-5e262dd891f4
[compute01][DEBUG ] 
[compute01][DEBUG ]       type                      block
[compute01][DEBUG ]       osd id                    0
[compute01][DEBUG ]       cluster fsid              74082074-0322-460a-b962-436fe36f8e7b
[compute01][DEBUG ]       cluster name              ceph
[compute01][DEBUG ]       osd fsid                  cba9c3bc-f75b-4bc7-93e4-5e262dd891f4
[compute01][DEBUG ]       encrypted                 0
[compute01][DEBUG ]       cephx lockbox secret      
[compute01][DEBUG ]       block uuid                U1Nz2E-xF5a-wrEl-g9uI-PmBX-lbTh-2okDe9
[compute01][DEBUG ]       block device              /dev/ceph-693dac4c-5d8c-4c94-aa6e-8e7360eb3dcc/osd-block-cba9c3bc-f75b-4bc7-93e4-5e262dd891f4
[compute01][DEBUG ]       vdo                       0
[compute01][DEBUG ]       crush device class        None
[compute01][DEBUG ]       devices                   /dev/sdc
[compute01][DEBUG ] 
[compute01][DEBUG ] ====== osd.2 =======
[compute01][DEBUG ] 
[compute01][DEBUG ]   [block]       /dev/ceph-bdd65160-4a3a-45ca-a416-edc4151717ab/osd-block-c1f6b583-61d4-4659-8d06-bb9d929e82cb
[compute01][DEBUG ] 
[compute01][DEBUG ]       type                      block
[compute01][DEBUG ]       osd id                    2
[compute01][DEBUG ]       cluster fsid              74082074-0322-460a-b962-436fe36f8e7b
[compute01][DEBUG ]       cluster name              ceph
[compute01][DEBUG ]       osd fsid                  c1f6b583-61d4-4659-8d06-bb9d929e82cb
[compute01][DEBUG ]       encrypted                 0
[compute01][DEBUG ]       cephx lockbox secret      
[compute01][DEBUG ]       block uuid                DNze4H-iIcr-rB3F-gykt-oh1c-mCcS-H8ffIT
[compute01][DEBUG ]       block device              /dev/ceph-bdd65160-4a3a-45ca-a416-edc4151717ab/osd-block-c1f6b583-61d4-4659-8d06-bb9d929e82cb
[compute01][DEBUG ]       vdo                       0
[compute01][DEBUG ]       crush device class        None
[compute01][DEBUG ]       devices                   /dev/sde
Unhandled exception in

Output of ceph-deploy osd list compute01

Check the OSD status on the admin node:

[cephde@controller01 cephcluster]$ sudo ceph osd stat
6 osds: 6 up, 6 in
[cephde@controller01 cephcluster]$ sudo ceph osd tree
ID CLASS WEIGHT  TYPE NAME           STATUS REWEIGHT PRI-AFF 
-1       0.08197 root default                                
-3       0.04099     host compute01                          
 0   hdd 0.00980         osd.0           up  1.00000 1.00000 
 1   hdd 0.01559         osd.1           up  1.00000 1.00000 
 2   hdd 0.01559         osd.2           up  1.00000 1.00000 
-5       0.04099     host compute02                          
 3   hdd 0.00980         osd.3           up  1.00000 1.00000 
 4   hdd 0.01559         osd.4           up  1.00000 1.00000 
 5   hdd 0.01559         osd.5           up  1.00000 1.00000 

Check capacity and usage on the admin node:

$ sudo ceph df
GLOBAL:
    SIZE       AVAIL      RAW USED     %RAW USED 
    84.0GiB    78.0GiB    6.02GiB      7.17 
POOLS:
    NAME     ID     USED     %USED     MAX AVAIL     OBJECTS 
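As a quick end-to-end check (an optional extra, not in the original post), a test pool can be created, written to and removed — pool removal works here because mon_allow_pool_delete = true was set in ceph.conf:

sudo ceph osd pool create test-pool 64          # 64 placement groups
echo hello > /tmp/test-obj
sudo rados -p test-pool put test-obj /tmp/test-obj
sudo rados -p test-pool ls
sudo ceph osd pool rm test-pool test-pool --yes-i-really-really-mean-it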

2. On the storage node

Check on the OSD (storage) node:

$ lsblk
NAME                                                                                                  MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
fd0                                                                                                     2:0    1    4K  0 disk 
sda                                                                                                     8:0    0   80G  0 disk 
├─sda1                                                                                                  8:1    0    1G  0 part /boot
├─sda2                                                                                                  8:2    0   19G  0 part 
│ ├─cl-root                                                                                           253:0    0   77G  0 lvm  /
│ └─cl-swap                                                                                           253:1    0    2G  0 lvm  [SWAP]
└─sda3                                                                                                  8:3    0   60G  0 part 
  └─cl-root                                                                                           253:0    0   77G  0 lvm  /
sdb                                                                                                     8:16   0   50G  0 disk 
└─sdb1                                                                                                  8:17   0   50G  0 part /opt
sdc                                                                                                     8:32   0   10G  0 disk 
└─ceph--693dac4c--5d8c--4c94--aa6e--8e7360eb3dcc-osd--block--cba9c3bc--f75b--4bc7--93e4--5e262dd891f4 253:2    0   10G  0 lvm  
sdd                                                                                                     8:48   0   16G  0 disk 
└─ceph--9a7db77c--f52a--4403--a1a4--2287cf024cee-osd--block--83583e83--a824--41c5--9d54--6529b0dca943 253:3    0   16G  0 lvm  
sde                                                                                                     8:64   0   16G  0 disk 
└─ceph--bdd65160--4a3a--45ca--a416--edc4151717ab-osd--block--c1f6b583--61d4--4659--8d06--bb9d929e82cb 253:4    0   16G  0 lvm  
sr0                                                                                                    11:0    1 1024M  0 
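The mapping from OSD IDs to the underlying devices can also be listed locally on the storage node; this is the same command the ceph-deploy log above runs remotely:

sudo ceph-volume lvm list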

ceph-osd processes: each osd process gets a fixed ID according to the order in which it was started.

systemctl status ceph-osd@0

OSD process port numbers:

ps aux | grep osd
netstat -tunlp | grep osd

13) Log in to the mgr_dashboard

Enter the dashboard address in a browser (the URL was stripped from the original post; by default it is the active mgr node on TCP port 7000) and log in — no password is required by default.

[Dashboard screenshots from the original post, including the Options and Block pages, are not reproduced here.]

