Ceph Technical Overview: Principles, Installation, Maintenance, Best Practices

Table of Contents
I. Ceph's place in the storage stack
II. How Ceph works
   1. Logical structure of the Ceph storage system
   2. Logical structure of the RADOS system
   3. The Ceph addressing flow
   4. Ceph deployment network topology
III. Ceph installation (ceph-deploy)
   1. Environment preparation and initialization of each Ceph node
   2. Deploying the Ceph cluster
IV. Ceph installation (kolla)
   1. Environment preparation and initialization of each Ceph node
   2. Deploying the Ceph cluster
V. Ceph maintenance
   1. Whole-cluster maintenance
   2. OSDs: view | add | remove
   3. Monitors: add | remove | view
   4. MDS maintenance (not covered for now)
   5. Pools & PG maintenance
   6. Object storage
   7. Block storage
   8. File storage (not covered for now)
   9. Pushing configuration files & restarting services
   10. Resetting the environment to a fresh Ceph install
   11. Disk operations
   12. Running management commands from any node
VI. Ceph caveats / troubleshooting
   1. Disable write caching on raw drives
   2. Choose the file system carefully
   3. The cephx authentication system
   4. Disable authentication before upgrading
   5. "Fetching Ceph keyrings" errors
   6. Monitor clock problems

I. Ceph's place in the storage stack

Layer 1: physical storage media.
- LUN: a virtual disk produced by hardware is usually called a LUN, e.g. a virtual disk created by a RAID controller.
- Volume: a virtual disk produced by a software layer is usually called a volume, e.g. an LVM logical volume.
- Disk: the physical disk itself.
Layer 2: kernel-level file systems, which map files onto the underlying disks (users normally need not care about this layer).
Layer 3: application-level file systems (the user installs the application and starts its processes by hand).
Layer 4: network file access systems such as NFS and CIFS (the server side runs the server component, clients install the client component and mount directories for remote access).
II. How Ceph works

Reference: /yunjisuan/2014041107.asp

1. Logical structure of the Ceph storage system

2. Logical structure of the RADOS system

3. The Ceph addressing flow

4. Ceph deployment network topology

Note: the cluster network is optional, but strongly recommended. It carries the back-end traffic when OSDs are expanded and data has to be moved. Practical experience bears this out: with only a public network, an OSD expansion took as long as 5 hours because Ceph had to "re-shuffle" data across it; with a dedicated cluster network (10 GbE switch plus fiber), the same expansion finished in a few minutes.
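The addressing flow maps each object to a placement group (PG) by hashing the object name modulo the pool's pg_num, then maps the PG onto a set of OSDs with CRUSH. The first hop can be sketched in shell. This sketch uses the POSIX cksum CRC purely for illustration; Ceph itself uses the rjenkins hash and a stable-mod variant, so the resulting PG ids will differ from a real cluster's.

```shell
#!/bin/sh
# Illustrative object -> PG mapping: hash the object name, then take it
# modulo pg_num. Real Ceph hashes with rjenkins, not CRC; this only
# demonstrates the shape of the mapping, not Ceph's actual placement.
obj_to_pg() {
    obj=$1
    pg_num=$2
    hash=$(printf '%s' "$obj" | cksum | cut -d' ' -f1)
    echo $(( hash % pg_num ))
}

obj_to_pg test-object-1 64   # deterministic: the same name always lands in the same PG
```

The same determinism is what lets every client compute an object's location independently, with no central lookup table.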
III. Ceph installation (ceph-deploy)

References: /ceph/ , /mitaka/config-reference/block-storage.html

1. Environment preparation and initialization of each Ceph node

Deployment logical architecture:

Node                     | Components installed  | Notes
hostname: ceph1, ip: 19  | ceph-deploy, mon, osd | OS: CentOS 7.2; the admin node shares a host with ceph1
hostname: ceph2, ip: 24  | mon, osd              |
hostname: ceph3, ip: 12  | mon, osd              |

The steps in this section are all run as root, on every Ceph node.

Set /etc/hostname:
#vi /etc/hostname    #use the corresponding name on each node
ceph1    #e.g. ceph1; ceph2/ceph3 on the other nodes
#hostname -F /etc/hostname    #take effect immediately; reconnect the shell

Create the install user irteam, exempt from tty requirements:
#useradd -d /home/irteam -k /etc/skel -m irteam
#sudo passwd irteam
#echo "irteam ALL=(root) NOPASSWD: ALL" | sudo tee /etc/sudoers.d/irteam
#chmod 0440 /etc/sudoers.d/irteam

Edit /etc/sudoers so that irteam does not need a tty:
#chmod 755 /etc/sudoers
#vi /etc/sudoers    #add the line below; do not just comment out the existing "Defaults requiretty"
Defaults:irteam !requiretty
#chmod 440 /etc/sudoers

Set up the yum and Ceph repositories:
#yum clean all
#rm -rf /etc/yum.repos.d/*.repo
#wget -O /etc/yum.repos.d/CentOS-Base.repo /repo/Centos-7.repo
#wget -O /etc/yum.repos.d/epel.repo /repo/epel-7.repo
#sed -i '/aliyuncs/d' /etc/yum.repos.d/CentOS-Base.repo
#sed -i 's/$releasever/7.2.1511/g' /etc/yum.repos.d/CentOS-Base.repo
#vi /etc/yum.repos.d/ceph.repo    #add the Ceph repository
[ceph]
name=ceph
baseurl=/ceph/rpm-jewel/el7/x86_64/
gpgcheck=0
[ceph-noarch]
name=cephnoarch
baseurl=/ceph/rpm-jewel/el7/noarch/
gpgcheck=0

Install Ceph:
#yum makecache
#yum install -y ceph
#ceph --version    #check the version
ceph version 10.2.2 (45107e21c568dd033c2f0a3107dec8f0b0e58374)

Disable SELinux & firewalld:
#sed -i 's/SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
#setenforce 0
#systemctl stop firewalld
#systemctl disable firewalld

Synchronize the clocks on all nodes (rdate and ntp both work). Reference: /html/201407/26/46537.html
#timedatectl set-timezone Asia/Shanghai    #set the time zone
#yum install -y rdate
#rdate -s <time-server>    #pick a reachable, authoritative server
#echo "00 0 1 * * root rdate -s <time-server>" >> /etc/crontab    #add a scheduled job

2. Deploying the Ceph cluster

Note: the following steps are all executed on the admin node.
Since the admin node shares a host with ceph1 in this document, they can simply be run on ceph1, uniformly as user irteam.

Edit /etc/hosts:
#sudo vi /etc/hosts
19 ceph1
24 ceph2
12 ceph3

Generate a key pair & copy the public key to every node (so deployment never prompts for a password):
#sudo su - irteam
#ssh-keygen
Generating public/private key pair.
Enter file in which to save the key (/home/irteam/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/irteam/.ssh/id_rsa.
Your public key has been saved in /home/irteam/.ssh/id_rsa.pub.
#ssh-copy-id irteam@ceph1
#ssh-copy-id irteam@ceph2
#ssh-copy-id irteam@ceph3

Configure SSH so deployment never prompts for a user name:
#sudo su - irteam    #skip if you are already logged in as irteam
#vi ~/.ssh/config
StrictHostKeyChecking no
Host ceph1
  Hostname ceph1
  User irteam
Host ceph2
  Hostname ceph2
  User irteam
Host ceph3
  Hostname ceph3
  User irteam
#chmod 600 ~/.ssh/config

Install the deployment tool:
#sudo yum -y install ceph-deploy
#ceph-deploy --version
1.5.34

Create the cluster:
#sudo su - irteam    #skip if you are already logged in as irteam
#mkdir ~/my-cluster && cd ~/my-cluster
#creating the cluster generates ceph.conf and ceph.mon.keyring in the current directory
#ceph-deploy new ceph1 ceph2 ceph3
#ls ~/my-cluster    #inspect the generated files
ceph.conf ceph-deploy-ceph.log ceph.mon.keyring

Edit the cluster's ceph.conf: add public_network, raise the permitted monitor clock drift (default 0.05 s, here 2 s), and set the default replica count to 2:
#vi ceph.conf
[global]
fsid = 7cec0691-c713-46d0-bce8-5cb1d57f051f
mon_initial_members = ceph1, ceph2, ceph3    #IPs also work, but hostnames are best
mon_host = 19,24,12
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
public_network = /24
mon_clock_drift_allowed = 2
osd_pool_default_size = 2

Deploy the monitors:
#ceph-deploy mon create-initial
#ll ~/my-cluster
ceph.bootstrap-mds.keyring ceph.bootstrap-rgw.keyring ceph.conf ceph.mon.keyring
ceph.bootstrap-osd.keyring ceph.client.admin.keyring ceph-deploy-ceph.log
#sudo ceph -s    #check the cluster
    cluster 7cec0691-c713-46d0-bce8-5cb1d57f051f
     health HEALTH_ERR
            no osds
     monmap e1: 3 mons at {ceph1=19:6789/0,ceph2=24:6789/0,ceph3=12:6789/0}
            election epoch 4, quorum 0,1,2 ceph3,ceph1,ceph2
     osdmap e1: 0 osds: 0 up, 0 in
            flags sortbitwise
      pgmap v2: 64 pgs, 1 pools, 0 bytes data, 0 objects
            0 kB used, 0 kB / 0 kB avail
                  64 creating

Deploy the OSDs. Since there are not enough spare disks here (for real disks see the maintenance chapter), directories are used instead:
#create the directories and open up permissions; run on ceph1, ceph2 and ceph3
#sudo mkdir /var/local/osd1 && sudo chmod 777 -R /var/local/osd1

Prepare and activate the OSDs:
#ceph-deploy osd prepare ceph1:/var/local/osd1 ceph2:/var/local/osd1 ceph3:/var/local/osd1
#ceph-deploy osd activate ceph1:/var/local/osd1 ceph2:/var/local/osd1 ceph3:/var/local/osd1
Notes:
a. If you have enough disks, you can also operate on the disks directly:
#ceph-deploy osd prepare ceph1:sdb
#ceph-deploy osd activate ceph1:sdb
b. The osd prepare & osd activate pair can also be done in one step:
#ceph-deploy osd create ceph1:sdb

Check the cluster:
#sudo ceph -s
    cluster 7cec0691-c713-46d0-bce8-5cb1d57f051f
     health HEALTH_OK
     monmap e1: 3 mons at {ceph1=19:6789/0,ceph2=24:6789/0,ceph3=12:6789/0}
            election epoch 4, quorum 0,1,2 ceph3,ceph1,ceph2
     osdmap e15: 3 osds: 3 up, 3 in
            flags sortbitwise
      pgmap v26: 64 pgs, 1 pools, 0 bytes data, 0 objects
            29590 MB used, 113 GB / 142 GB avail
                  64 active+clean
IV. Ceph installation (kolla)

Besides the officially recommended ceph-deploy method, you can also install like this: use ansible to log in to each node remotely, and run mon, osd and rgw inside Docker containers.

Also note: we use Docker here because we intend to deploy OpenStack; the OpenStack-specific parts are out of scope. If you are not deploying OpenStack, set every OpenStack component to "no" and enable only the Ceph parts, e.g.:
vi /git/kolla/etc/kolla/globals.yml
enable_keystone: "no"
enable_horizon: "no"
enable_$compName: "no"    #likewise for every other OpenStack component
enable_ceph: "yes"

1. Environment preparation and initialization of each Ceph node

Deployment logical architecture:

Node                     | Components installed | Notes
kolla node, ip: 44       | kolla                | OS: CentOS 7.2
hostname: ceph1, ip: 33  | mon, osd             |
hostname: ceph2, ip: 17  | mon, osd             |
hostname: ceph3, ip: 48  | mon, osd             |

The steps in this section can be run as root and must be performed on every Ceph node.

Set /etc/hostname:
#vi /etc/hostname    #use the corresponding name on each node
ceph1    #e.g. ceph1; ceph2/ceph3 on the other nodes
#hostname -F /etc/hostname    #take effect immediately; reconnect the shell

Create the install user irteam, exempt from tty requirements:
#useradd -d /home/irteam -k /etc/skel -m irteam
#sudo passwd irteam
#echo "irteam ALL=(root) NOPASSWD: ALL" | sudo tee /etc/sudoers.d/irteam
#chmod 0440 /etc/sudoers.d/irteam

Disable SELinux & firewalld:
#sed -i 's/SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
#setenforce 0
#systemctl stop firewalld
#systemctl disable firewalld

Synchronize the clocks (rdate and ntp both work). Reference: /html/201407/26/46537.html
#timedatectl set-timezone Asia/Shanghai    #set the time zone
#yum install -y rdate
#rdate -s <time-server>    #pick a reachable, authoritative server
#echo "00 0 1 * * root rdate -s <time-server>" >> /etc/crontab    #add a scheduled job

Install docker and docker-py:
#curl -sSL https://get.docker.io | bash
#docker --version
Docker version $version, build 20f81dd
#vi /usr/lib/systemd/system/docker.service    #add the line below
MountFlags=shared
#systemctl daemon-reload
#systemctl restart docker    #restart docker
#yum install -y python-pip
#pip install -U docker-py

2. Deploying the Ceph cluster

The following steps are executed on the kolla node, uniformly as user irteam.

Edit /etc/hosts:
#sudo vi /etc/hosts
33 ceph1
17 ceph2
48 ceph3

Generate a key pair & copy the public key to every node (ansible connects to the hosts via public key):
#sudo su - irteam
#ssh-keygen
Generating public/private key pair.
Enter file in which to save the key (/home/irteam/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/irteam/.ssh/id_rsa.
Your public key has been saved in /home/irteam/.ssh/id_rsa.pub.
#ssh-copy-id irteam@ceph1
#ssh-copy-id irteam@ceph2
#ssh-copy-id irteam@ceph3

Download kolla & install ansible & write the ansible inventory file:
#sudo mkdir -p /git/ && cd /git/    #adjust the directory to your setup
#git clone /paas/kolla.git    #can also be cloned from GitHub
#pip install -U ansible==1.9.4    #if kolla comes from GitHub, match the kolla and ansible versions
#sudo vi multinode-inventory
...
[ceph-mon]
ceph1
[ceph-osd]
ceph1 ansible_sudo=True
ceph2 ansible_sudo=True
ceph3 ansible_sudo=True
[ceph-rgw]
...
Note: irteam must be able to sudo to root.

Label the disks on the ceph-osd nodes so kolla can recognize them:
#log in to each ceph node, decide which disks will serve as OSD disks, label them, then return to the kolla node
#sudo parted /dev/xvdb -s -- mklabel gpt mkpart KOLLA_CEPH_OSD_BOOTSTRAP 1 -1
#sudo parted /dev/xvdb print    #verify
Model: Xen Virtual Block Device (xvd)
Disk /dev/xvdb: 53.7GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:
Number  Start   End     Size    File system  Name                      Flags
 1      1049kB  53.7GB  53.7GB               KOLLA_CEPH_OSD_BOOTSTRAP

Optional: set up an external journal drive. Skipping this is fine; a 5 GB journal partition is carved out automatically.

Enable Ceph & deploy:
#vi /git/kolla/etc/kolla/globals.yml
...
enable_ceph: yes
enable_ceph_rgw: no
...
#final kolla deploy
#/git/kolla/tools/kolla-ansible deploy --configdir /git/openstack-deploy/config-test -i /git/openstack-deploy/config-test/multinode-inventory

V. Ceph maintenance

1. Whole-cluster maintenance

Check Ceph status:
#sudo ceph -s    #current status
#sudo ceph -w    #watch status in real time
     health HEALTH_OK
     monmap e3: 1 mons at {ceph3=12:6789/0}
            election epoch 7, quorum 0 ceph3
     osdmap e67: 4 osds: 4 up, 4 in
            flags sortbitwise
      pgmap v60965: 64 pgs, 1 pools, 0 bytes data, 0 objects
            57416 MB used, 133 GB / 189 GB avail
                  64 active+clean
2016-08-19 01:16:01.623581 mon.0 [INF] pgmap v60965: 64 pgs: 64 active+clean; 0 bytes data, 57416 MB used, 133 GB / 189 GB avail
2016-08-19 01:16:05.582505 mon.0 [INF] pgmap v60966: 64 pgs: 64 active+clean; 0 bytes data, 57416 MB used, 133 GB / 189 GB avail
...
#sudo ceph health detail    #detailed health report
HEALTH_OK

Check Ceph storage space:
#sudo ceph df
GLOBAL:
    SIZE  AVAIL  RAW USED  %RAW USED
    189G  133G   57411M    29.54
POOLS:
    NAME  ID  USED  %USED  MAX AVAIL  OBJECTS
    rbd   0   0     0      64203M     0
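The HEALTH_OK / HEALTH_WARN / HEALTH_ERR word at the front of the health output is easy to consume from a cron job or a monitoring agent. A minimal sketch: the parsing is a function over plain text, so it can be exercised without a live cluster; feeding it real output (e.g. `check_health "$(sudo ceph health)"`) is an assumption about your environment.

```shell
#!/bin/sh
# Classify `ceph health` output by its leading status word. The output
# text is passed in as an argument, so the logic can be tested on
# sample strings without a running cluster.
check_health() {
    status=$(printf '%s\n' "$1" | awk '{print $1; exit}')
    case "$status" in
        HEALTH_OK)   echo "ok" ;;
        HEALTH_WARN) echo "warn" ;;
        *)           echo "error" ;;
    esac
}

check_health "HEALTH_OK"             # -> ok
check_health "HEALTH_ERR no osds"    # -> error
# live use (assumes the ceph CLI and an admin keyring):
#   check_health "$(sudo ceph health)"
```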
2. OSDs: view | add | remove

Note: there must be at least 2 OSDs, located on different physical nodes.

View OSD status:
#sudo ceph osd tree
ID WEIGHT  TYPE NAME      UP/DOWN REWEIGHT PRIMARY-AFFINITY
-1 0.13889 root default
-2 0.04630     host ceph1
 0 0.04630         osd.0       up  1.00000          1.00000
-3 0.04630     host ceph2
 1 0.04630         osd.1       up  1.00000          1.00000
-4 0.04630     host ceph3
 2 0.04630         osd.2       up  1.00000          1.00000
#ceph osd dump
epoch 22
fsid ee45dfa5-234d-48f3-a8a5-32e9ca781f47
created 2016-09-21 14:05:24.512685
modified 2016-09-22 15:14:54.317395
flags
pool 0 'rbd' replicated size 3 min_size 2 crush_ruleset 0 object_hash rjenkins pg_num 64 pgp_num 64 last_change 1 flags hashpspool stripe_width 0
max_osd 5
osd.0 up in weight 1 up_from 11 up_thru 21 down_at 0 last_clean_interval [0,0) 12:6800/5903 12:6801/5903 12:6802/5903 12:6803/5903 exists,up 418cbbe0-ea7e-42d8-b43d-e48dd7e53a00
osd.1 up in weight 1 up_from 10 up_thru 21 down_at 0 last_clean_interval [0,0) 34:6800/5639 34:6801/5639 34:6802/5639 34:6803/5639 exists,up f53ec139-9816-46a8-b7d5-41cb5dd57a0f
osd.2 up in weight 1 up_from 11 up_thru 21 down_at 0 last_clean_interval [0,0) 35:6800/5659 35:6801/5659 35:6802/5659 35:6803/5659 exists,up 67ca0418-a495-45a7-914b-197dff732220
osd.3 down out weight 0 up_from 0 up_thru 0 down_at 0 last_clean_interval [0,0) :/0 :/0 :/0 :/0 exists,new
osd.4 down out weight 0 up_from 0 up_thru 0 down_at 0 last_clean_interval [0,0) :/0 :/0 :/0 :/0 exists,new

List disks:
#ceph-deploy disk list ceph1

Zap a disk (deletes its partitions):
#ceph-deploy disk zap ceph1:sdb
Note: this wipes the partition table and with it all data on the disk; be very careful.

Create an OSD:
#ceph-deploy osd prepare ceph1:sdb     #prepare the OSD
#ceph-deploy osd activate ceph1:sdb    #activate the OSD
or
#ceph-deploy osd create ceph1:sdb      #prepare + activate = create, in one step

Remove an OSD. References:
/docs/hammer/rados/operations/add-or-rm-osds/#removing-osds-manual
https://wiki.deimos.fr/Ceph_:_performance,_reliability_and_scalability_storage_solution#Add_an_OSD
#sudo ceph osd out osd.<number>
#sudo ceph osd down osd.<number>    #stops the daemon; you can also log in to the node and stop it there
#sudo ceph osd crush remove osd.<number>
#sudo ceph auth del osd.<number>
#sudo ceph osd rm <number>
#sudo rm -rf /var/lib/ceph/osd/ceph-<number>    #run on the node hosting the OSD

3. Monitors: add | remove | view

Note: the monitor count must be 2n+1 (n >= 0, integer); use at least 3 in production.

Remove a monitor:
#remove the ceph1 monitor from ~/my-cluster/ceph.conf
#cd ~/my-cluster && vi ceph.conf
...
mon_initial_members = ceph2, ceph3
mon_host = 24,12
...
#push ~/my-cluster/ceph.conf to every node
#ceph-deploy --overwrite-conf config push ceph1 ceph2 ceph3
#destroying the monitor may need to be run twice; verify afterwards with sudo ceph -s
#ceph-deploy mon destroy ceph1

Add a monitor:
#add the ceph1 monitor back to ~/my-cluster/ceph.conf
#vi ~/my-cluster/ceph.conf
...
mon_initial_members = ceph1, ceph2, ceph3
mon_host = 19,24,12
...
#push ~/my-cluster/ceph.conf to every node
#ceph-deploy --overwrite-conf config push ceph1 ceph2 ceph3
#add the monitor
#ceph-deploy --overwrite-conf mon create ceph1

View monitors:
#sudo ceph -s
    cluster 773b310b-5faf-4d98-8761-651ba7daecfb
     health HEALTH_OK
     monmap e8: 2 mons at {ceph2=15:6789/0,ceph3=26:6789/0}
            election epoch 42, quorum 0,1 ceph2,ceph3
     osdmap e76: 3 osds: 3 up, 3 in
            flags sortbitwise
      pgmap v30914: 136 pgs, 10 pools, 38194 kB data, 195 objects
            21925 MB used, 120 GB / 142 GB avail
                 136 active+clean
#after adding or removing a monitor, check the election state
#ceph quorum_status --format json-pretty

4. MDS maintenance: not covered for now.

5. Pools & PG maintenance

References:
/docs/hammer/rados/operations/pools/
/content-8246357.html (PG state reference)

View pools:
#sudo ceph osd lspools    #or: sudo rados lspools, or: sudo ceph osd pool ls
rbd
#rados df
pool name    KB  objects  clones  degraded  unfound  rd  rd KB  wr  wr KB
rbd          0   0        0       0         0        0   0      0   0
total used           58986376  0
total avail         139998408
total space         198984784

Create a pool & view a pool's replica count & view pool details:
#sudo ceph osd pool create images 100    #100 is the pg number
pool 'images' created
Default pg number settings: /docs/hammer/rados/configuration/pool-pg-config-ref/
#sudo ceph osd pool set images size 3    #set the replica count of pool images to 3
set pool 1 size to 3
#sudo ceph osd dump | grep 'replicated size'    #replica counts of all pools
#sudo ceph osd dump | grep <poolName>    #details of one pool
#sudo ceph osd pool set-quota images max_objects 10000    #set a quota: maximum number of objects
set-quota max_objects = 10000 for pool images

Delete a pool:
#sudo ceph osd pool delete images images --yes-i-really-really-mean-it
pool 'images' removed
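The pg number passed to `ceph osd pool create` is commonly sized by the rule of thumb (OSD count x 100) / replica count, rounded up to the next power of two; the pool-pg-config reference linked above describes the defaults. A small helper sketching that arithmetic (the rule of thumb itself, not a value mandated by this document):

```shell
#!/bin/sh
# Rule-of-thumb pg_num for a new pool: (OSD count * 100) / replicas,
# rounded up to the next power of two.
pg_count() {
    osds=$1
    size=$2
    target=$(( (osds * 100) / size ))
    pg=1
    while [ "$pg" -lt "$target" ]; do
        pg=$(( pg * 2 ))
    done
    echo "$pg"
}

pg_count 3 2    # 3 OSDs, 2 replicas -> 256
```

For the 3-OSD, size-2 cluster built earlier this suggests 256, noticeably more than the default 64 PGs seen in the status output above.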
6. Object storage

Create an object:
#rados put test-object-1 a.txt --pool=data

List the objects in a pool:
#rados -p data ls
test-object-1
#ceph osd map data test-object-1    #locate the object
osdmap e75 pool 'data' (2) object 'test-object-1' -> pg 2.74dc35e2 (2.62) -> up ([4,5], p4) acting ([4,5], p4)

Delete an object:
#rados rm test-object-1 --pool=data

Object gateway (access over the web). References:
/documentation/ses-1/book_storage_admin/data/sect1_14_chapter_book_storage_admin.html
/docs/hammer/install/install-ceph-gateway/
/docs/hammer/radosgw/config/

Install the object gateway on any ceph node:
#sudo yum install -y ceph-radosgw radosgw-agent

Basic configuration. Create the keyring:
#sudo ceph-authtool --create-keyring /etc/ceph/ceph.client.radosgw.keyring
#sudo chmod +r /etc/ceph/ceph.client.radosgw.keyring

Create the key:
#sudo ceph-authtool /etc/ceph/ceph.client.radosgw.keyring -n client.radosgw.gateway --gen-key

Add capabilities to the key:
#sudo ceph-authtool -n client.radosgw.gateway --cap osd 'allow rwx' --cap mon 'allow rwx' /etc/ceph/ceph.client.radosgw.keyring

Add the key to the Ceph storage cluster:
#sudo ceph -k /etc/ceph/ceph.client.admin.keyring auth add client.radosgw.gateway -i /etc/ceph/ceph.client.radosgw.keyring

If other nodes need ceph.client.radosgw.keyring, push it to their /etc/ceph directory.

Add the gateway configuration to /etc/ceph/ceph.conf:
[client.radosgw.gateway]
host = ceph1
keyring = /etc/ceph/ceph.client.radosgw.keyring
rgw socket path = /var/run/ceph/ceph.radosgw.gateway.fastcgi.sock
log file = /var/log/radosgw/client.radosgw.gateway.log

Start the radosgw service (listens on port 7480 by default):
#/usr/bin/radosgw -c /etc/ceph/ceph.conf -n client.radosgw.gateway

Create a user:
#sudo radosgw-admin user create --uid=ningyougang --display-name=ningyougang --email=ningyougang

S3 client test: download an S3 client from /free-amazon-s3-explorer-cloudfront-IAM.aspx and connect.

S3 operations with radosgw-admin. For the commands below see:
/docs/master/radosgw/
/docs/master/man/8/radosgw-admin/

Create a user:
#sudo radosgw-admin user create --uid=newtouchstep --display-name=newtouchstep --email=jingyi.liu

Modify a user:
#sudo radosgw-admin user modify --uid=newtouchstep --display-name=newtouchstep --email=jingyi.liu

Show user info:
#sudo radosgw-admin user info --uid=newtouchone

Delete a user:
#sudo radosgw-admin user rm --uid=newtouchone    #only succeeds once the user has no data
#sudo radosgw-admin user rm --uid=newtouchone --purge-data    #delete the user and its data

Suspend a user:
#sudo radosgw-admin user suspend --uid=newtouchone

Re-enable a user:
#sudo radosgw-admin user enable --uid=newtouchone

Check a user:
#sudo radosgw-admin user check --uid=newtouchone

Query buckets:
#sudo radosgw-admin b…
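A put/get round trip is a quick sanity check of the object store above. The sketch below assumes a reachable cluster with a pool named data (as in the rados examples); the checksum comparison itself is plain shell, so it is the only part that can be exercised without a cluster.

```shell
#!/bin/sh
# Compare two files by POSIX cksum (CRC plus byte count); used to
# confirm that an object fetched back from RADOS matches what was
# stored.
roundtrip_ok() {
    src=$1
    fetched=$2
    [ "$(cksum < "$src")" = "$(cksum < "$fetched")" ]
}

# Live use against the cluster (assumes pool "data" and the rados CLI):
#   rados put test-object-1 a.txt --pool=data
#   rados get test-object-1 /tmp/a.txt.out --pool=data
#   roundtrip_ok a.txt /tmp/a.txt.out && echo "round trip OK"
```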