Zhejiang Mobile Communications Co., Ltd., Unified Information Portal Project
IBM / Zhejiang Mobile: Confidential

Ceph Distributed Storage Platform Deployment Manual

Table of Contents

1. Ceph architecture overview
2. Overall planning of Ceph cloud storage in OpenStack
3. Installing the Ceph cluster on Ubuntu 12.04
3.1. Configure the Ceph package source
3.2. Edit the ceph.conf configuration file as required
3.3. Set up the hosts file
3.4. Set up passwordless access between cluster nodes
3.5. Create the directories
3.6. Create and mount the partitions
3.7. Run the initialization
3.8. Start Ceph
3.9. Ceph health checks
4. Installing the Ceph cluster on CentOS 6.4
4.1. Install the package sources
4.2. Install Ceph 0.67.4 with rpm
4.3. Edit the ceph.conf configuration file as required
4.4. Set up the hosts file
4.5. Set up passwordless access between cluster nodes
4.6. Create the directories
4.7. Run the initialization
4.8. Start Ceph
4.9. Ceph health checks
5. Configuring OpenStack Glance to use the Ceph cluster
5.1. Create the volume and image pools
5.2. Increase the replication level of the two pools
5.3. Create Ceph clients and keyrings for the pools
5.4. Deploy the keyrings on the compute nodes
5.4.1. Create the libvirt secret
5.4.2. Install Ceph on the compute nodes
5.5. Update your glance-api configuration file
6. Configuring OpenStack volumes to use the Ceph cluster
6.1. Install Ceph on the compute nodes
6.2. Create a temporary secret.xml file
6.3. Configure libvirt to use the key
6.4. Update the cinder configuration
6.4.1. Changes to cinder.conf
6.4.2. Changes to the cinder startup script configuration file
6.4.3. Changes to /etc/nova/nova.conf
6.4.4. Restart the cinder services
6.5. Verify cinder-volume
6.6. Verify volume creation over RBD
7. Mounting CephFS
7.1. Configure /etc/fstab
7.2. Mount the VM instance directory
8. FAQ

1. Ceph architecture overview

The main Ceph components are the client, ceph client (the data consumer); the metadata server, mds (caches and synchronizes the distributed metadata); an object storage cluster, osd (stores data and metadata as objects and performs other key duties); and the cluster monitors, mon (perform the monitoring functions).

2. Overall planning of Ceph cloud storage in OpenStack

3. Installing the Ceph cluster on Ubuntu 12.04

3.1. Configure the Ceph package source

echo deb http://ceph.com/debian-bobtail/ $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph.list
apt-get update && apt-get install ceph    # install ceph
ceph -v                                   # check the installed ceph version

3.2. Edit the ceph.conf configuration file as required

The contents of /etc/ceph/ceph.conf are as follows:

[global]
auth supported = cephx
keyring = /etc/ceph/keyring.admin

[mds]
keyring = /etc/ceph/keyring.$name

[mds.0]
host = network-n01

[osd]
osd data = /srv/ceph/osd$id
osd journal = /srv/ceph/osd$id/journal
osd journal size = 512
osd class dir = /usr/lib/rados-classes
keyring = /etc/ceph/keyring.$name
; working with ext4
filestore xattr use omap = true
osd mkfs type = xfs
osd mkfs options xfs = -f
osd mount options xfs = rw,noatime
; solve rbd data corruption
filestore fiemap = true

; substitute each node's real cluster and public addresses below
[osd.0]
host = network-n01
cluster addr = <cluster ip of network-n01>
public addr = <public ip of network-n01>
devs = /dev/sdb1

[osd.1]
host = network-n01
cluster addr = <cluster ip of network-n01>
public addr = <public ip of network-n01>
devs = /dev/sdc1

[osd.2]
host = control-n01
cluster addr = <cluster ip of control-n01>
public addr = <public ip of control-n01>
devs = /dev/cinder-volumes/vol002

[mon]
mon data = /srv/ceph/mon$id

[mon.0]
host = network-n01
mon addr = <ip of network-n01>:6789

[mon.1]
host = control-n01
mon addr = <ip of control-n01>:6789

[client.volumes]
keyring = /etc/ceph/ceph.client.volumes.keyring

#[client.images]
# keyring = /etc/ceph/client.images.keyring
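In this file $name and $id expand per daemon, so for osd.0 the osd data path resolves to /srv/ceph/osd0. As a quick sanity check, here is a minimal sketch, assuming the ceph-conf helper shipped with this release is on the PATH, that prints the resolved values:

ceph-conf -c /etc/ceph/ceph.conf -n osd.0 --lookup "osd data"       # expect /srv/ceph/osd0
ceph-conf -c /etc/ceph/ceph.conf -n osd.0 --lookup "osd journal"    # expect /srv/ceph/osd0/journal
ceph-conf -c /etc/ceph/ceph.conf -n mon.0 --lookup "mon data"       # expect /srv/ceph/mon0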

3.3. Set up the hosts file

Host names are the basis of Ceph communication, so the hosts file must be kept in sync on every node of the cluster. /etc/hosts is as follows (substitute each node's real IP address):

<ip> control-n01
<ip> network-n01
<ip> compute-n01
<ip> compute-n02

3.4. Set up passwordless access between cluster nodes

The ssh configuration is the same on the metadata, monitor, and storage nodes.

ssh-keygen                                          # press Enter at every prompt to generate the key pair
ssh-copy-id -i /root/.ssh/id_rsa.pub <target-host>  # copy the public key to every host that must be reachable
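Because every node must be able to reach every other node, it is convenient to distribute the key in a loop. A minimal sketch, assuming the four host names from the /etc/hosts file above and that root's password is typed once per node:

for host in control-n01 network-n01 compute-n01 compute-n02; do
  ssh-copy-id -i /root/.ssh/id_rsa.pub root@$host   # prompts for root's password on each host
done
ssh root@network-n01 hostname                       # verify that login now needs no password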

3.5. Create the directories

mkdir -p /srv/ceph/osd0
mkdir -p /srv/ceph/osd1
mkdir -p /srv/ceph/osd2
mkdir -p /srv/ceph/mon0
mkdir -p /srv/ceph/mon1
mkdir -p /srv/ceph/mds

3.6. Create and mount the partitions

parted /dev/sdb           # partition the disks with parted
parted /dev/sdc
mkfs.xfs -f /dev/sdb1
mkfs.xfs -f /dev/sdc1
vim /etc/fstab            # add the mounts to fstab:
/dev/sdb1 /srv/ceph/osd0 xfs rw,noexec,nodev,noatime,nodiratime,barrier=0 0 0
/dev/sdc1 /srv/ceph/osd1 xfs rw,noexec,nodev,noatime,nodiratime,barrier=0 0 0
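parted as invoked above drops into an interactive prompt; the same layout can be scripted instead. A minimal sketch, assuming each data disk gets a single partition spanning the whole device:

for dev in /dev/sdb /dev/sdc; do
  parted -s $dev mklabel gpt                  # write a fresh GPT label
  parted -s $dev mkpart primary xfs 0% 100%   # one partition covering the whole disk
done
mkfs.xfs -f /dev/sdb1
mkfs.xfs -f /dev/sdc1
mount -a                                      # pick up the new /etc/fstab entries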

3.7. Run the initialization

mkcephfs -a -c /etc/ceph/ceph.conf -k /etc/ceph/keyring.admin
chmod +r /etc/ceph/keyring.admin

3.8. Start Ceph

service ceph -a start

3.9. Ceph health checks

ceph -s
ceph osd tree
ceph health
ceph df
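Immediately after startup the cluster usually reports degraded or peering placement groups for a while. A minimal sketch of a wait loop, assuming ceph health prints HEALTH_OK once the cluster has settled:

until ceph health | grep -q HEALTH_OK; do
  ceph health    # show the current state while waiting
  sleep 5
done
echo "cluster is healthy"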

4. Installing the Ceph cluster on CentOS 6.4

4.1. Install the package sources

rpm --import 'https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc'
rpm -Uvh http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm

4.2. Install Ceph 0.67.4 with rpm

Install Ceph on the system from http://ceph.com/rpm-dumpling/rhel6/x86_64/. Install the dependencies first, then the Ceph packages in dependency order:

yum install -y leveldb snappy xfsprogs boost gdisk python-lockfile gperftools-libs python-flask python-requests python-argparse
rpm -i http://ceph.com/rpm-dumpling/rhel6/x86_64/libcephfs1-0.67.4-0.el6.x86_64.rpm
rpm -i http://ceph.com/rpm-dumpling/rhel6/x86_64/librados2-0.67.4-0.el6.x86_64.rpm
rpm -i http://ceph.com/rpm-dumpling/rhel6/x86_64/librbd1-0.67.4-0.el6.x86_64.rpm
rpm -i http://ceph.com/rpm-dumpling/rhel6/x86_64/python-ceph-0.67.4-0.el6.x86_64.rpm
rpm -i http://ceph.com/rpm-dumpling/rhel6/x86_64/ceph-0.67.4-0.el6.x86_64.rpm
rpm -i http://ceph.com/rpm-dumpling/rhel6/x86_64/ceph-fuse-0.67.4-0.el6.x86_64.rpm

4.3. Edit the ceph.conf configuration file as required

[global]
auth supported = cephx
max open files = 131072
log file = /var/log/ceph/$name.log
pid file = /var/run/ceph/$name.pid
keyring = /etc/ceph/keyring.admin

[mon]
mon data = /srv/ceph/$name
;osd pool default size = 3
;osd pool default crush rule = 0
;mon clock drift allowed = 1
;mon clock drift warn backoff = 30
;debug ms = 1
;debug mon = 20
;debug paxos = 20
;debug auth = 20

[mon.0]
host = DG-store001
mon addr = <ip of DG-store001>:6789

[mon.1]
host = DG-store002
mon addr = <ip of DG-store002>:6789

[mon.2]
host = DG-store003
mon addr = <ip of DG-store003>:6789

[mds]
keyring = /etc/ceph/keyring.$name
; mds logging to debug issues.
;debug ms = 1
;debug mds = 20

[mds.0]
host = DG-store001

[mds.1]
host = DG-store002

[osd]
osd data = /srv/ceph/$name
osd journal = /srv/ssd/$name/journal
osd journal size = 5120    ; journal size, in megabytes
keyring = /etc/ceph/keyring.$name
;journal dio = false
;osd recovery max active = 3
; osd logging to debug osd issues, in order of likelihood of being helpful
;debug ms = 1
;debug osd = 20
;debug filestore = 20
;debug journal = 20
osd mkfs type = xfs
osd mount options xfs = rw,noatime
osd mkfs options xfs = -f

; One [osd.N] section per OSD, 24 in total, eight per host, one disk each
; (substitute each host's real public and cluster addresses):
;   osd.0  to osd.7   on DG-store001, devs /dev/sdb through /dev/sdi
;   osd.8  to osd.15  on DG-store002, devs /dev/sdb through /dev/sdi
;   osd.16 to osd.23  on DG-store003, devs /dev/sdb through /dev/sdi

[osd.0]
host = DG-store001
public address = <public ip of DG-store001>
cluster address = <cluster ip of DG-store001>
devs = /dev/sdb

; [osd.1] through [osd.23] repeat the same four settings following the layout above

[client.volumes]
keyring = /etc/ceph/keyring.client.volumes

[client.images]
keyring = /etc/ceph/keyring.client.images
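Writing the remaining 23 nearly identical [osd.N] sections by hand is error-prone, so they can be generated instead. A minimal sketch, assuming the layout described in the comment above; the address placeholders still have to be replaced with each host's real IPs:

# Append all 24 [osd.N] sections to ceph.conf
id=0
for host in DG-store001 DG-store002 DG-store003; do
  for dev in sdb sdc sdd sde sdf sdg sdh sdi; do
    cat >> /etc/ceph/ceph.conf <<EOF

[osd.$id]
host = $host
public address = <public ip of $host>
cluster address = <cluster ip of $host>
devs = /dev/$dev
EOF
    id=$((id + 1))
  done
done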

4.4. Set up the hosts file

Host names are the basis of Ceph communication, so the hosts file must be kept in sync on every node of the cluster. /etc/hosts is as follows (substitute each node's real IP address):

<ip> DG-store001
<ip> DG-store002
<ip> DG-store003

4.5. Set up passwordless access between cluster nodes

The ssh configuration is the same on the metadata, monitor, and storage nodes.

ssh-keygen                                 # press Enter at every prompt to generate the key pair
rsync -r .ssh/ root@<target-host>:.ssh/    # sync the keys to every host that must be reachable

4.6. Create the directories

The following script removes any leftover state and recreates the directory tree (shown here with the OSD numbers used on DG-store003, osd.16 through osd.23; adjust the numbers for each node):

#!/bin/bash
/bin/rm -rf /etc/ceph/keyring.*
/bin/rm -rf /srv/ceph
/bin/rm -rf /srv/ssd
/bin/rm -rf /var/log/ceph/*.log
/bin/rm -rf /var/log/ceph/*.gz
/bin/umount /srv/ceph/osd.{16,17,18,19,20,21,22,23}
/bin/mkdir -p /srv/ceph/osd.{16,17,18,19,20,21,22,23}
/bin/mkdir -p /srv/ceph
/bin/mkdir -p /srv/ceph/mds.{0,1}
/bin/mkdir -p /srv/ceph/mon.{0,1,2}
/bin/mkdir -p /srv/ssd

4.7. Run the initialization

mkcephfs -a -c /etc/ceph/ceph.conf -k /etc/ceph/keyring.admin --mkfs
chmod +r /etc/ceph/keyring.admin

4.8. Start Ceph

service ceph -a start

4.9. Ceph health checks

ceph -s
ceph osd tree
ceph health
ceph df
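Each component type from section 1 can also be checked individually once the cluster is up. A minimal sketch, assuming the admin keyring is readable by the calling user:

ceph mon stat    # monitor quorum; all three mons should be listed
ceph mds stat    # metadata servers; expect one active and one standby here
ceph osd stat    # OSD map; all 24 OSDs should be up and in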

5. Configuring OpenStack Glance to use the Ceph cluster

Image: VM images are managed with OpenStack Glance.
Volume: OpenStack boots VMs from volumes and can also attach additional volumes to running VMs.

5.1. Create the volume and image pools

rados mkpool volumes
rados mkpool images
rados lspools

5.2. Increase the replication level of the two pools

ceph osd pool set volumes size 3
ceph osd pool set images size 3

5.3. Create Ceph clients and keyrings for the pools

ceph auth get-or-create client.volumes mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rx pool=images' -o /etc/ceph/client.volumes.keyring
ceph auth get-or-create client.images mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=images' -o /etc/ceph/client.images.keyring
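Before wiring these into OpenStack it is worth confirming that the pools and credentials look right. A minimal sketch, run with the admin keyring:

ceph osd pool get volumes size    # should report size: 3
ceph osd pool get images size     # should report size: 3
ceph auth list                    # client.volumes and client.images should appear with the caps above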

5.4. Deploy the keyrings on the compute nodes

Copy the keyrings above to the glance-api and cinder-volume nodes, and copy the cluster's ceph.conf file into the /etc/ceph/ directory on the compute nodes. Hosts running nova-compute do not need the keyring itself; the key must be stored in libvirt instead.

5.4.1. Create the libvirt secret

To create the libvirt secret you need the key from client.volumes.keyring:

ceph-authtool -p -n client.volumes /etc/ceph/client.volumes.keyring > /etc/ceph/client.volumes

Copy this key file to the nova-compute nodes when needed; once setup is complete you can delete it.

5.4.2. Install Ceph on the compute nodes

The Ceph components must be installed as well:

wget -q -O - 'https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc' | apt-key add -
echo deb http://ceph.com/debian-cuttlefish/ precise main > /etc/apt/sources.list.d/ceph.list

On the glance-api host you need the Python bindings for librbd:

apt-get install -y python-ceph

After you have copied the Ceph cluster's ceph.conf to the glance-api and cinder-volume nodes, update the keyring paths in it:

[client.volumes]
keyring = /etc/ceph/client.volumes.keyring

[client.images]
keyring = /etc/ceph/client.images.keyring

5.5. Update your glance-api configuration file

Update /etc/glance/glance-api.conf as follows:

default_store = rbd
rbd_store_user = glance
rbd_store_pool = images
show_image_direct_url = True

Then restart the glance-api service:

service glance-api restart
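To confirm that Glance now stores images in RBD, upload a small test image and list the pool. A minimal sketch, assuming a local cirros.img file and the Grizzly-era glance client; the image name test-rbd is arbitrary:

glance image-create --name test-rbd --disk-format qcow2 --container-format bare --file cirros.img
rbd -p images --id images ls    # the new image's ID should appear in the pool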

6. Configuring OpenStack volumes to use the Ceph cluster

6.1. Install Ceph on the compute nodes

wget -q -O - 'https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc' | apt-key add -
echo deb http://ceph.com/debian-cuttlefish/ precise main > /etc/apt/sources.list.d/ceph.list

On the glance-api host you need the Python bindings for librbd:

apt-get install -y python-ceph

After you have copied the Ceph cluster's ceph.conf to the glance-api and cinder-volume nodes, update the keyring paths in it:

[client.volumes]
keyring = /etc/ceph/client.volumes.keyring

[client.images]
keyring = /etc/ceph/client.images.keyring

6.2. Create a temporary secret.xml file

cat > secret.xml <<EOF
<secret ephemeral='no' private='no'>
  <usage type='ceph'>
    <name>client.volumes secret</name>
  </usage>
</secret>
EOF

Create the secret from the secret.xml file just created:

virsh secret-define --file secret.xml

6.3. Configure libvirt to use the key

virsh secret-set-value --secret <your_secret_uuid> --base64 $(cat /path/to/client.volumes)

Applied, this looks like:

virsh secret-set-value --secret f4b2d686-b945-3409-606c-41facea82289 --base64 AQANtMJR6PvVIhAA7JF4NMoNsplPW39Xmj+4LA=
rm client.volumes secret.xml

Record the secret UUID so the command above can be repeated when adding cinder nodes; this value is needed to configure cinder. Restart nova-compute:

service nova-compute restart
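The secret can be verified from virsh before cinder is touched. A minimal sketch:

virsh secret-list                                             # the UUID defined above should be listed
virsh secret-get-value f4b2d686-b945-3409-606c-41facea82289   # should print the base64 key back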

On the cinder-volume host, install the client command-line tools:

apt-get install -y ceph-common

6.4. Update the cinder configuration

6.4.1. Changes to cinder.conf

Change the relevant parts of /etc/cinder/cinder.conf as follows:

#iscsi_helper = tgtadm
#volumes_dir = /var/lib/cinder/volumes
# Folsom
#volume_driver=cinder.volume.driver.RBDDriver
# Grizzly
volume_driver=cinder.volume.drivers.rbd.RBDDriver
rbd_pool=volumes
rbd_user=volumes
rbd_secret_uuid=f4b2d686-b945-3409-606c-41facea82289

6.4.2. Changes to the cinder startup script configuration file
