Setting Up Ceph Block Storage [Not on a MON Node]
Block storage client setup [I'll use the name ceph-client for this new server]:
Step 1: On the admin node [113], create the Ceph block client user and authentication key
[ceph-admin@ceph113 my-cluster]$ sudo ceph auth get-or-create client.rbd mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=rbd' | tee ./ceph.client.rbd.keyring
[client.rbd]
key = AQChG2Vcu552KRAAMf4/SdfSVa4sFDZPfsY8bg==
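The `tee` above writes an INI-style keyring file. If you only need the key string itself (for example, to paste into the client's keyring file later), it can be extracted with `awk`. A minimal sketch; the `/tmp` path and its contents here are stand-ins for the real keyring file on your admin node:

```shell
# Sketch: pull just the key field out of a keyring file shaped like the one
# created above. The sample file below mimics what "ceph auth get-or-create"
# writes; point awk at your real keyring instead.
printf '[client.rbd]\n\tkey = AQChG2Vcu552KRAAMf4/SdfSVa4sFDZPfsY8bg==\n' > /tmp/ceph.client.rbd.keyring
awk -F' = ' '/key/ { print $2 }' /tmp/ceph.client.rbd.keyring
```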
Step 2: On the new server [ceph-client], change the hostname and the hosts file [mine is a VM on the same internal network]
[root@localhost ~]# hostname
localhost.localdomain
[root@localhost ~]# hostnamectl set-hostname ceph-client
[root@localhost ~]# hostname
ceph-client
[root@localhost ~]# vi /etc/hosts
//Add:
172.16.1.13 ceph113
172.16.1.14 ceph114
172.16.1.15 ceph115
172.16.3.16 ceph-client
//After saving, update the hosts files on the other three Ceph nodes to match (the earlier configuration was missing the last entry [172.16.3.16 ceph-client])
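After editing /etc/hosts, it is worth confirming that each name actually resolves before moving on. A minimal sketch (assumes `getent` is available, as on any glibc system):

```shell
# Sketch: confirm each /etc/hosts entry added above resolves locally.
# Hostnames are the ones from the hosts file in step 2.
for h in ceph113 ceph114 ceph115 ceph-client; do
  getent hosts "$h" >/dev/null && echo "$h resolves" || echo "$h missing"
done
```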
Step 3: Copy the generated keyring file to [ceph-client] [run on the ceph-client server]
[root@localhost ~]# mkdir /etc/ceph -p //create the directory
[root@localhost ~]# vi /etc/ceph/ceph.client.rbd.keyring
//Here, paste in what step 1 generated:
[client.rbd]
key = AQChG2Vcu552KRAAMf4/SdfSVa4sFDZPfsY8bg==
//(paste it in; use your own key, not this one)
Step 4: Check that the client meets the block-device kernel requirements [run on the ceph-client server]
[root@localhost ~]# uname -r
3.10.0-862.el7.x86_64
[root@localhost ~]# modprobe rbd
[root@localhost ~]# echo $?
0
Step 5: Install the Ceph client [run on the ceph-client server]
[root@localhost ~]# yum -y install wget
[root@localhost ~]# wget -O /etc/yum.repos.d/ceph.repo https://raw.githubusercontent.com/aishangwei/ceph-demo/master/ceph-deploy/ceph.repo
[root@localhost ~]# yum install -y ceph
Step 6: Test connecting to the cluster with the key [run on the ceph-client server]
[root@localhost ~]# ceph -s --name client.rbd
cluster:
id: cde2c9f7-009e-4bb4-a206-95afa4c43495
health: HEALTH_OK
services:
mon: 3 daemons, quorum ceph-node1,ceph-node2,ceph-node3
mgr: ceph-node1(active), standbys: ceph-node2, ceph-node3
osd: 9 osds: 9 up, 9 in
data:
pools: 0 pools, 0 pgs
objects: 0 objects, 0B
usage: 9.06GiB used, 171GiB / 180GiB avail
pgs:
Step 7: Create the rbd pool [mind the username and working directory] [run on 113]
[ceph-admin@ceph-node1 my-cluster]$ ceph osd lspools
[ceph-admin@ceph-node1 my-cluster]$ ceph osd pool create rbd 128
pool 'rbd' created
Note: [
Choosing a pg_num value is mandatory, because it cannot be calculated automatically. Some commonly used values:
• fewer than 5 OSDs: set pg_num to 128
• 5 to 10 OSDs: set pg_num to 512
• 10 to 50 OSDs: set pg_num to 4096
• more than 50 OSDs: you need to understand the trade-offs and calculate pg_num yourself
Note that you do not have to follow these rules exactly. For example, I have 36 OSDs but set pg_num to 1024 rather than 4096, to keep the average PG count per OSD around 100-150 [pg_num/osd].
]
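The per-OSD constraint in the note can be sanity-checked with shell arithmetic. A sketch using the numbers from the note (36 OSDs, pg_num 1024); the replica count of 3 is an assumption, check your pool's actual value with `ceph osd pool get <pool> size`:

```shell
# Sketch: estimate PGs per OSD for a proposed pg_num.
# pg_num and osds come from the note above; replicas=3 is an assumed pool size.
pg_num=1024
replicas=3
osds=36
echo "approx $(( pg_num * replicas / osds )) PGs per OSD"
```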
[ceph-admin@ceph-node1 my-cluster]$ ceph osd lspools
1 rbd,
Step 8: Create a block device from the client [run on the ceph-client server]
[root@localhost ~]# rbd create rbd1 --size 10240 --name client.rbd
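Note that `--size` is given in MiB, so 10240 corresponds to the 10 GiB that `rbd info` reports in the next step. A quick arithmetic check:

```shell
# Sketch: rbd's --size is in MiB; verify that 10240 MiB is the intended 10 GiB.
size_mib=10240
echo "$(( size_mib / 1024 )) GiB"
```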
Step 9: Inspect it [run on the ceph-client server]
[root@localhost ~]# rbd ls -p rbd --name client.rbd
rbd1
[root@localhost ~]# rbd list --name client.rbd
rbd1
[root@localhost ~]# rbd --image rbd1 info --name client.rbd
rbd image 'rbd1':
size 10GiB in 2560 objects
order 22 (4MiB objects)
block_name_prefix: rbd_data.faa76b8b4567
format: 2
features: layering, exclusive-lock, object-map, fast-diff, deep-flatten
flags:
create_timestamp: Thu Feb 14 17:53:54 2019
Step 10: Map the device on the client [run on the ceph-client server]
[root@localhost ~]# rbd map --image rbd1 --name client.rbd
rbd: sysfs write failed
RBD image feature set mismatch. You can disable features unsupported by the kernel with "rbd feature disable rbd1 object-map fast-diff deep-flatten".
In some cases useful info is found in syslog - try "dmesg | tail".
rbd: map failed: (6) No such device or address
The map fails. The image features involved:
layering: layering support
exclusive-lock: exclusive locking support
object-map: object map support (requires exclusive-lock)
deep-flatten: snapshot flatten support
fast-diff: fast diff calculation (requires object-map). With the krbd (kernel RBD) client we cannot map the block device image on the CentOS 3.10 kernel, because that kernel does not support object-map, deep-flatten, or fast-diff (support was introduced in kernel 4.9). To work around this, we disable the unsupported features.
Disable them dynamically:
[root@localhost ~]# rbd feature disable rbd1 exclusive-lock object-map fast-diff deep-flatten --name client.rbd
Map it to the client again:
[root@localhost ~]# rbd map --image rbd1 --name client.rbd
/dev/rbd0
[root@localhost ~]# rbd showmapped --name client.rbd
id pool image snap device
0 rbd rbd1 - /dev/rbd0
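The kernel-version check behind this workaround can be scripted so the feature-disable step only runs when needed. A minimal sketch, assuming plain POSIX shell and the 4.9 cutoff mentioned above; `needs_feature_disable` is a hypothetical helper name:

```shell
# Sketch: decide from the kernel version whether object-map/fast-diff/
# deep-flatten must be disabled before mapping (krbd support landed in 4.9).
needs_feature_disable() {
  major=${1%%.*}        # e.g. "3" from "3.10.0-862.el7.x86_64"
  rest=${1#*.}
  minor=${rest%%.*}     # e.g. "10"
  [ "$major" -lt 4 ] || { [ "$major" -eq 4 ] && [ "$minor" -lt 9 ]; }
}

if needs_feature_disable "$(uname -r)"; then
  echo "old kernel: disable object-map fast-diff deep-flatten before rbd map"
fi
```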
Step 11: Create a filesystem and mount it [run on the ceph-client server]
[root@localhost ~]# fdisk -l /dev/rbd0
Disk /dev/rbd0: 10.7 GB, 10737418240 bytes, 20971520 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 4194304 bytes / 4194304 bytes
[root@localhost ~]# mkfs.xfs /dev/rbd0
meta-data=/dev/rbd0 isize=512 agcount=16, agsize=163840 blks
= sectsz=512 attr=2, projid32bit=1
= crc=1 finobt=0, sparse=0
data = bsize=4096 blocks=2621440, imaxpct=25
= sunit=1024 swidth=1024 blks
naming =version 2 bsize=4096 ascii-ci=0 ftype=1
log =internal log bsize=4096 blocks=2560, version=2
= sectsz=512 sunit=8 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
[root@localhost ~]# mkdir /mnt/ceph-disk1
[root@localhost ~]# mount /dev/rbd0 /mnt/ceph-disk1
[root@localhost ~]# df -h /mnt/ceph-disk1
Filesystem Size Used Avail Use% Mounted on
/dev/rbd0 10G 33M 10G 1% /mnt/ceph-disk1
Write-data test:
[root@localhost ~]# ll /mnt/ceph-disk1/
total 0
[root@localhost ~]# dd if=/dev/zero of=/mnt/ceph-disk1/file1 count=100 bs=1M
100+0 records in
100+0 records out
104857600 bytes (105 MB) copied, 0.127818 s, 820 MB/s
[root@localhost ~]# ll /mnt/ceph-disk1/
total 102400
-rw-r--r-- 1 root root 104857600 Feb 15 10:47 file1
[root@localhost ~]# df -h /mnt/ceph-disk1/
Filesystem Size Used Avail Use% Mounted on
/dev/rbd0 10G 133M 9.9G 2% /mnt/ceph-disk1
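The byte count dd reports can be cross-checked: count × bs should equal the reported total. A quick sketch with the numbers used above:

```shell
# Sketch: dd wrote count=100 blocks of bs=1M; 100 * 1 MiB = 104857600 bytes,
# which matches the "104857600 bytes (105 MB) copied" line above.
count=100
bs=$(( 1024 * 1024 ))
echo $(( count * bs ))
```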
Mount automatically at boot [run on the ceph-client server]
Download the script
[root@localhost ~]# wget -O /usr/local/bin/rbd-mount https://raw.githubusercontent.com/aishangwei/ceph-demo/master/client/rbd-mount
[root@localhost ~]# chmod +x /usr/local/bin/rbd-mount
[root@localhost ~]# vi /usr/local/bin/rbd-mount
#!/bin/bash
# Pool name where block device image is stored
export poolname=rbd
# Disk image name
export rbdimage=rbd1
# Mounted Directory
export mountpoint=/mnt/ceph-disk1
# Image mount/unmount and pool are passed from the systemd service as arguments
# Are we are mounting or unmounting
if [ "$1" == "m" ]; then
modprobe rbd
rbd feature disable $rbdimage object-map fast-diff deep-flatten
rbd map $rbdimage --id rbd --keyring /etc/ceph/ceph.client.rbd.keyring
mkdir -p $mountpoint
mount /dev/rbd/$poolname/$rbdimage $mountpoint
fi
if [ "$1" == "u" ]; then
umount $mountpoint
rbd unmap /dev/rbd/$poolname/$rbdimage
fi
Note: in the script above, `rbd` is my edit (it was highlighted in red on the original page); the downloaded original uses ceph.client.test.keyring. Change it to match the keyring name you created.
Set it up as a systemd service
[root@localhost ~]# wget -O /etc/systemd/system/rbd-mount.service https://raw.githubusercontent.com/aishangwei/ceph-demo/master/client/rbd-mount.service
[root@localhost ~]# systemctl daemon-reload
[root@localhost ~]# systemctl enable rbd-mount.service
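For reference, a unit of this kind typically looks like the following. This is a sketch of what such a service might contain, not the exact file downloaded above; the `m`/`u` arguments match the mount script's interface:

```ini
# Sketch of a typical rbd-mount systemd unit (not the exact downloaded file).
[Unit]
Description=Map RBD image and mount it
After=network-online.target

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/local/bin/rbd-mount m
ExecStop=/usr/local/bin/rbd-mount u

[Install]
WantedBy=multi-user.target
```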
Reboot and verify the automatic mount
[root@localhost ~]# reboot -f
[root@localhost ~]# df -h /mnt/ceph-disk1/
Filesystem Size Used Avail Use% Mounted on
/dev/rbd1 10G 133M 9.9G 2% /mnt/ceph-disk1
[root@localhost ~]# ll -h /mnt/ceph-disk1/
total 100M
-rw-r--r-- 1 root root 100M Feb 15 10:47 file1