




Installing Ceph on CentOS 6.5

1. Introduction

2. Test environment

Node   IP   Hostname     OS version
MON    80   app180-180   CentOS release 6.5 (Final) x86_64
MDS    80   app180-180   CentOS release 6.5 (Final) x86_64
OSD0   83   app180-183   CentOS release 6.5 (Final) x86_64
OSD1   83   app180-183   CentOS release 6.5 (Final) x86_64
C                        CentOS release 6.3 (Final) x86_64

3. Installation steps

Install the build dependencies:

yum install make automake autoconf boost-devel fuse-devel gcc-c++ libtool libuuid-devel libblkid-devel keyutils-libs-devel cryptopp-devel fcgi-devel libcurl-devel expat-devel gperftools-devel libedit-devel libatomic_ops-devel snappy-devel leveldb-devel libaio-devel xfsprogs-devel git libudev-devel btrfs-progs

Unpack, build and install Ceph from source:

tar xf ceph-0.80.1.tar.gz
cd ceph-0.80.1
./autogen.sh
./configure --prefix=/usr/local/ceph
make && make install

Running /usr/local/bin/ceph osd create then fails:

Traceback (most recent call last):
  File "/usr/local/bin/ceph", line 53, in <module>
    import argparse
ImportError: No module named argparse

Install the missing argparse module:

yum install python-pip
pip install argparse

Running it again fails with:

Traceback (most recent call last):
  File "/usr/local/bin/ceph", line 56, in <module>
    import rados
ImportError: No module named rados

Copy the Python bindings into the system site-packages directory and register the library path:

cp -vf /usr/local/ceph/lib/python2.6/site-packages/* /usr/lib64/python2.6
echo /usr/local/ceph/lib > /etc/ld.so.conf.d/ceph.conf
ldconfig

4. Ceph configuration

Ceph uses the short host name shown by hostname -s, so /etc/hosts must be updated for app180-180 through app180-184. All nodes except the client need mutual SSH key authentication. Create the configuration directory on every node:

mkdir /etc/ceph

Add the Ceph binaries to the PATH:

if ! grep ceph /etc/profile; then echo 'export PATH=$PATH:/usr/local/ceph/bin:/usr/local/ceph/sbin' >> /etc/profile; export PATH=$PATH:/usr/local/ceph/bin:/usr/local/ceph/sbin; fi

4.1 Setting up the OSD nodes

Create new partitions, format them as XFS and mount them:

mount /dev/sdb1 /mnt/osd1
mount /dev/sdb2 /mnt/osd2

If a partition is formatted as ext3 or ext4, it must be mounted with -o user_xattr. Create the osd data and osd journal directories according to what ceph.conf specifies; with the configuration file used in this document no extra directories are needed.

4.2 Setting up the MON and MDS nodes

Create the monitor directory referenced by ceph.conf (mon.0):

mkdir -p /var/lib/ceph/mon/mon.0

The Ceph source tree ships three very important files: sample.ceph.conf, a sample configuration file; sample.fetch_config, a script that synchronizes the configuration file between nodes; and init-ceph, the init script that starts the services on each node. Copy them to their respective locations:

cd /root/ceph-0.80.1/src
cp sample.ceph.conf /etc/ceph/ceph.conf
cp sample.fetch_config /etc/ceph/fetch_config
cp init-ceph /etc/init.d/ceph

4.3 Editing ceph.conf

[global]
        public network = /24
        pid file = /var/run/ceph/$name.pid
        auth cluster required = cephx
        auth service required = cephx
        auth client required = cephx   ; used when clients authenticate at mount time; none disables authentication
        keyring = /etc/ceph/keyring.$name
        osd pool default size = 1      ; these two settings affect the health status
        osd pool default min size = 1  ; these two settings affect the health status
        osd pool default crush rule = 0
        osd crush chooseleaf type = 1

[mon]
        mon data = /var/lib/ceph/mon/$name
        mon clock drift allowed = .15
        keyring = /etc/ceph/keyring.$name

[mon.0]
        host = app180-180
        mon addr = 80:6789

[mds]
        keyring = /etc/ceph/keyring.$name

[mds.0]
        host = app180-180

[osd]
        osd data = /mnt/osd$id
        osd recovery max active = 5
        osd mkfs type = xfs
        osd journal = /mnt/osd$id/journal
        osd journal size = 1000        ; journal size, in megabytes
        keyring = /etc/ceph/keyring.$name

[osd.0]
        host = app180-183
        devs = /dev/sdb1

[osd.1]
        host = app180-183
        devs = /dev/sdb2

4.4 Starting Ceph

Initialize the cluster and start the services:

mkcephfs -a -c /etc/ceph/ceph.conf
/etc/init.d/ceph -a start

If start reports "Error ENOENT: osd.0 does not exist. create it before updating the crush map", create the OSD:

/usr/local/bin/ceph osd create

At present each OSD seems to need as many runs as its index plus one: osd.0 takes one run, osd.1 two runs, and so on, which is quite unreasonable. The first run prints 0, the second prints 1, and so forth. Once all OSDs are created, start them:

/etc/init.d/ceph -a start osd

4.5 Health check

ceph health    # ceph -s also shows the status

If it returns HEALTH_OK, the setup succeeded. If you see:

HEALTH_WARN 576 pgs stuck inactive; 576 pgs stuck unclean; no osds

then no OSD was detected. If you see:

HEALTH_WARN 178 pgs peering; 178 pgs stuck inactive; 429 pgs stuck unclean; recovery 2/24 objects degraded (8.333%)

run the following commands to resolve it:

# ceph pg dump_stuck stale && ceph pg dump_stuck inactive && ceph pg dump_stuck unclean

If you see:

HEALTH_WARN 384 pgs degraded; 384 pgs stuck unclean; recovery 21/42 objects degraded (50.000%)

there are not enough OSDs; by default Ceph requires at least two. If you see:

health HEALTH_WARN clock skew detected on mon.0

then ceph -w will show something like:

2014-06-12 12:35:59.525504 mon.0 [WRN] mon.1 83:6789/0 clock skew 0.207341s > max 0.05s

meaning the clocks of the two monitors differ by more than the allowed value and must be synchronized. Alternatively, raise the limit in the [mon] section:

mon clock drift allowed = .15    ; allowed clock drift between monitors, default 0.050 seconds

5. Mounting Ceph

Method 1: upgrade the kernel to 3.10 or later, then mount:

mount -t ceph 80:/ /mnt

If authentication is enabled, the mount needs credentials:

mount -t ceph 80:/ /mnt -o name=admin,secret=AQCGdJ5TYLNrCBAAkoMJgdYHW66ITpnWyItccw=

or take the secret straight from the keyring; it is stored in keyring.client.admin:

mount -t ceph 80:/ /mnt -o name=admin,secret=$(ceph-authtool /etc/ceph/keyring.client.admin -p)

Method 2: without upgrading the kernel, copy the configuration and keyring to the client:

scp ceph.conf keyring.client.admin 80:/etc/ceph/

then run on the client:

ceph-fuse /mnt

This method is not recommended: letting the client obtain ceph.conf is insecure.

If mounting fails with "mount error 5 = Input/output error", check with ceph -s. Two of the possible causes:

health HEALTH_WARN 398 pgs stale; 1/3 in osds are down    (an OSD is down)
health HEALTH_WARN mds 0 is laggy                         (the MDS is down)

The cluster must report HEALTH_OK before the mount can succeed.

6. Advanced topics

6.1 Adding an OSD

Here a new disk /dev/sdb1 is added on app180-180 and mounted with mount /dev/sdb1 /mnt/osd2. Add a new section to the configuration file:

[osd.2]
        host = app180-180
        devs = /dev/sdb1

Copy the configuration to the other machines by hand so that all nodes stay in sync:

scp keyring.client.admin ceph.conf 80:/etc/ceph/

All of the following steps run on the new OSD node. Initialize the new OSD:

ceph-osd -i 2 --mkfs --mkkey
2014-06-16 14:06:11.439967 7f513c04d7a0 -1 journal FileJournal::_open: disabling aio for non-block journal. Use journal_force_aio to force use of aio anyway
2014-06-16 14:06:11.470440 7f513c04d7a0 -1 journal FileJournal::_open: disabling aio for non-block journal. Use journal_force_aio to force use of aio anyway
2014-06-16 14:06:11.471249 7f513c04d7a0 -1 filestore(/mnt/osd2) could not find 23c2fcde/osd_superblock/0/-1 in index: (2) No such file or directory
2014-06-16 14:06:11.507303 7f513c04d7a0 -1 created object store /mnt/osd2 journal /mnt/osd2/journal for osd.2 fsid 8f8bad40-42e8-45a1-8ed5-ead5b617a4a5
2014-06-16 14:06:11.507409 7f513c04d7a0 -1 auth: error reading file: /etc/ceph/keyring.osd.2: can't open /etc/ceph/keyring.osd.2: (2) No such file or directory
2014-06-16 14:06:11.507650 7f513c04d7a0 -1 created new key in keyring /etc/ceph/keyring.osd.2

Register the new OSD and start it:

ceph auth add osd.2 osd 'allow *' mon 'allow rwx' -i /etc/ceph/keyring.osd.2
added key for osd.2
ceph osd create
/etc/init.d/ceph -a start osd.2
=== osd.2 ===
Mounting xfs on app180-180:/mnt/osd2
create-or-move updating item name 'osd.2' weight 0.01 at location {host=app180-180-for-yum,root=default} to crush map
Starting Ceph osd.2 on app180-180.
starting osd.2 at :/0 osd_data /mnt/osd2 /mnt/osd2/journal

Check the state with ceph -s; ceph auth list shows all authenticated entities.

Adding the new OSD to the CRUSH map:

ceph osd getcrushmap -o crushmap.bin
got crush map from osdmap epoch 12
crushtool -d crushmap.bin -o crushmap.txt

Edit crushmap.txt for the addition: add the device entry and put osd.2 under its host; pay attention to the containment relationships in the buckets section, everything from the leaves up is expressed through containment. Then recompile and load the map:

crushtool -c crushmap.txt -o crushmap.bin
ceph osd setcrushmap -i crushmap.bin
ceph osd crush tunables optimal

6.2 Adding an MDS

Add an MDS on app180-183: add it to the configuration file and sync the file to all nodes:

[mds.1]
        host = app180-183

All of the following steps run on the new MDS node. Generate a key:

ceph-authtool --create-keyring --gen-key -n mds.1
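Section 4 requires every node to resolve the short names reported by hostname -s. A minimal /etc/hosts sketch; the IP addresses below are hypothetical placeholders, not taken from this document:

```
# /etc/hosts on every node (addresses are illustrative only)
192.168.0.180   app180-180
192.168.0.183   app180-183
192.168.0.184   app180-184
```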
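The OSD mounts from section 4.1 can be made persistent in /etc/fstab. A sketch; note the user_xattr option that the text says is mandatory for ext3/ext4 (the ext4 line is only an illustration, this document itself uses XFS):

```
/dev/sdb1   /mnt/osd1   xfs    defaults,noatime        0 0
# ext3/ext4 partitions must add user_xattr:
# /dev/sdb2 /mnt/osd2   ext4   defaults,user_xattr     0 0
```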
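Section 6.1 says to edit the decompiled crushmap.txt by adding the new device and placing osd.2 under its host bucket. A sketch of the relevant fragments; the bucket id and weight are illustrative, and the real file produced by crushtool -d on your cluster will differ:

```
# devices
device 0 osd.0
device 1 osd.1
device 2 osd.2                    # new device entry

# buckets
host app180-180 {
        id -2                     # illustrative bucket id
        alg straw
        hash 0                    # rjenkins1
        item osd.2 weight 0.010   # new OSD placed under its host
}
```

The containment chain (osd inside host, host inside root) must stay intact, which is what the original text means by "everything is expressed through containment".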
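Section 4 also asks for mutual SSH key authentication between all nodes except the client. One possible sketch, assuming root logins and the hostnames used elsewhere in this document; the key path is arbitrary for the example:

```shell
# Generate a passwordless key pair once per node (example path, normally ~/.ssh/id_rsa).
key=/tmp/ceph_demo_key
[ -f "$key" ] || ssh-keygen -t rsa -N "" -f "$key" -q

# Distribute the public key to every other node (prompts for a password once each).
# Commented out here because the hosts are only reachable inside the lab:
# for h in app180-180 app180-183 app180-184; do
#     ssh-copy-id -i "$key.pub" root@"$h"
# done
echo "public key: $key.pub"
```

Repeating this on each non-client node gives the mutual trust the install scripts (mkcephfs -a, /etc/init.d/ceph -a) rely on.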
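The health checks in section 4.5 lend themselves to scripting, since the first word of the ceph health output is always the overall status. A small sketch, run here against a canned sample line instead of a live cluster so the parsing itself can be verified:

```shell
# On a live cluster this would be: status=$(ceph health | awk '{print $1}')
sample="HEALTH_WARN 576 pgs stuck inactive; 576 pgs stuck unclean; no osds"
status=$(echo "$sample" | awk '{print $1}')

if [ "$status" = "HEALTH_OK" ]; then
    echo "cluster healthy"
else
    echo "cluster not healthy: $status"
fi
```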
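Section 5 extracts the mount secret from keyring.client.admin with ceph-authtool -p. The keyring is a plain INI-style text file, so the same value can also be pulled out with awk. The sketch below fabricates a sample keyring (reusing the example key string from section 5, which is not a real credential) so the extraction can be demonstrated without a cluster:

```shell
# Write a sample keyring in the format Ceph uses (values are examples).
cat > /tmp/keyring.client.admin <<'EOF'
[client.admin]
        key = AQCGdJ5TYLNrCBAAkoMJgdYHW66ITpnWyItccw=
EOF

# Roughly equivalent to: ceph-authtool /tmp/keyring.client.admin -p
secret=$(awk '/^[[:space:]]*key = / {print $3}' /tmp/keyring.client.admin)
echo "secret: $secret"

# The extracted value would then be passed to mount:
# mount -t ceph 80:/ /mnt -o name=admin,secret="$secret"
```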