1. Environment
Versions:
CentOS 7.4
Ceph 10.2.9
Oracle 11.2.0.4
Machines:
Ceph cluster:
ceph-mon1 admin and monitor node 192.168.0.170
ceph-osd1 storage node 192.168.0.171
ceph-osd2 storage node 192.168.0.172
ceph-osd3 storage node 192.168.0.173
Planned storage contributed by each machine:
ceph-osd1 20 GB
ceph-osd2 30 GB
ceph-osd3 40 GB
This is only a test setup; in a real deployment the OSDs should be similar in size, because the default pool replica count is 3.
Database environment:
ceph-rac1 database node 192.168.0.175
ceph-rac2 database node 192.168.0.176
RAC IP addresses:
192.168.0.175 ceph-rac1
192.168.0.176 ceph-rac2
192.168.0.177 ceph-rac1-vip
192.168.0.178 ceph-rac2-vip
192.168.0.179 ceph-scanip
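These names must resolve on both RAC nodes; a minimal sketch of the corresponding /etc/hosts entries, assuming no DNS (Oracle normally recommends resolving the SCAN through DNS with three addresses, but a single address is fine for a test):
# cat >> /etc/hosts <<EOF
192.168.0.175 ceph-rac1
192.168.0.176 ceph-rac2
192.168.0.177 ceph-rac1-vip
192.168.0.178 ceph-rac2-vip
192.168.0.179 ceph-scanip
EOF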
2. Install the Ceph cluster
Install Ceph on ceph-mon1, ceph-osd1, ceph-osd2, and ceph-osd3.
The detailed installation is covered in a separate post.
3. Plan the disks
Planned storage contributed by each machine:
ceph-osd1 20 GB device /dev/sdb
ceph-osd2 30 GB device /dev/sdb
ceph-osd3 40 GB device /dev/sdb
3.1 List the disks on each node
Check the disks on the Ceph storage nodes:
# ceph-deploy disk list ceph-osd1
# ceph-deploy disk list ceph-osd2
# ceph-deploy disk list ceph-osd3
3.2 Wipe all data from the disks
# ceph-deploy disk zap ceph-osd1:/dev/sdb ceph-osd2:/dev/sdb ceph-osd3:/dev/sdb
3.3 Prepare the disks
# ceph-deploy osd prepare ceph-osd1:/dev/sdb ceph-osd2:/dev/sdb ceph-osd3:/dev/sdb
3.4 Activate the disks
# ceph-deploy osd activate ceph-osd1:/dev/sdb1 ceph-osd2:/dev/sdb1 ceph-osd3:/dev/sdb1
[root@ceph-mon1 ceph-cluster]# ceph df
GLOBAL:
SIZE AVAIL RAW USED %RAW USED
76756M 71317M 5439M 7.09
POOLS:
NAME ID USED %USED MAX AVAIL OBJECTS
rbd 0
There is one default pool, rbd.
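Before going further it is worth confirming that all three OSDs are up and in; a quick sanity check (output omitted):
# ceph osd tree
# ceph -s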
3.5 Create new pools
3.5.1 Delete the original pool
rados rmpool rbd rbd --yes-i-really-really-mean-it
3.5.2 Create the new pools
A pool here is somewhat like a volume group in LVM.
Since we are installing RAC, we keep the voting/OCR disks separate from the data disks, so we create two pools.
ceph osd pool create votpool 128
ceph osd pool create asmpool 120
Note: the commands above are run on ceph-mon1.
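The trailing number in each create command is the pool's pg_num (placement group count); powers of two such as 128 are usually recommended. To verify the pools and their replica count:
# ceph osd lspools
# ceph osd pool get votpool size
# ceph osd pool get asmpool size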
4. Install the Ceph software on the RAC nodes
Install the Ceph software on ceph-rac1 and ceph-rac2 so that they can use Ceph storage.
The installation, briefly:
4.1 Install the packages
yum install ceph ceph-radosgw rdate -y
Note: run this on both ceph-rac1 and ceph-rac2.
4.2 Copy the configuration file and keyring to the client nodes ceph-rac1 and ceph-rac2
[root@ceph-mon1 ~]# cd /root/ceph-cluster/
[root@ceph-mon1 ceph-cluster]# ll
total 220
-rw------- 1 root root 113 Oct 26 04:21 ceph.bootstrap-mds.keyring
-rw------- 1 root root 113 Oct 26 04:21 ceph.bootstrap-osd.keyring
-rw------- 1 root root 113 Oct 26 04:21 ceph.bootstrap-rgw.keyring
-rw------- 1 root root 129 Oct 26 04:21 ceph.client.admin.keyring
-rw-r--r-- 1 root root 258 Oct 26 04:20 ceph.conf
-rw-r--r-- 1 root root 200577 Oct 27 07:21 ceph-deploy-ceph.log
-rw------- 1 root root 73 Oct 26 04:16 ceph.mon.keyring
There are several keyrings here (mds, osd, rgw, client); the files that need to be copied are ceph.client.admin.keyring and ceph.conf.
They can be copied with the ceph-deploy tool:
[root@ceph-mon1 ceph-cluster]# ceph-deploy admin ceph-rac1 ceph-rac2
ceph-deploy copies the keyring into the /etc/ceph directory; make sure the keyring file is readable
(e.g. chmod +r /etc/ceph/ceph.client.admin.keyring).
Note: the commands above are run on ceph-mon1.
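To confirm that the client nodes can now talk to the cluster, a quick check on ceph-rac1 and ceph-rac2 (output omitted):
# ceph -s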
5. Create the images
Create the block device images on the ceph-rac1 node.
-- Create three 1 GB voting disk images
[root@ceph-oracle ceph]#
rbd create votpool/img_vot1 --size 1G --image-format 2 --image-feature layering
rbd create votpool/img_vot2 --size 1G --image-format 2 --image-feature layering
rbd create votpool/img_vot3 --size 1G --image-format 2 --image-feature layering
-- Create five 2 GB data disk images
rbd create asmpool/img_asm1 --size 2G --image-format 2 --image-feature layering
rbd create asmpool/img_asm2 --size 2G --image-format 2 --image-feature layering
rbd create asmpool/img_asm3 --size 2G --image-format 2 --image-feature layering
rbd create asmpool/img_asm4 --size 2G --image-format 2 --image-feature layering
rbd create asmpool/img_asm5 --size 2G --image-format 2 --image-feature layering
-- List the block devices in each pool
[root@ceph-mon1 ceph-cluster]# rbd ls --pool votpool
img_vot1
img_vot2
img_vot3
[root@ceph-mon1 ceph-cluster]# rbd ls --pool asmpool
img_asm1
img_asm2
img_asm3
img_asm4
img_asm5
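To inspect an individual image (size, format, enabled features), for example:
# rbd info votpool/img_vot1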
6. Map the block devices
This must be done on the client nodes, i.e. run it on both ceph-rac1 and ceph-rac2.
[root@ceph-rac1 ceph]#
rbd map votpool/img_vot1
rbd map votpool/img_vot2
rbd map votpool/img_vot3
rbd map asmpool/img_asm1
rbd map asmpool/img_asm2
rbd map asmpool/img_asm3
rbd map asmpool/img_asm4
rbd map asmpool/img_asm5
Now lsblk shows the new block devices:
[root@ceph-rac1 ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 100G 0 disk
├─sda1 8:1 0 1G 0 part /boot
└─sda2 8:2 0 99G 0 part
├─centos-root 253:0 0 50G 0 lvm /
├─centos-swap 253:1 0 2G 0 lvm [SWAP]
└─centos-home 253:2 0 47G 0 lvm /home
rbd0 252:0 0 1G 0 disk
rbd1 252:16 0 1G 0 disk
rbd2 252:32 0 1G 0 disk
rbd3 252:48 0 2G 0 disk
rbd4 252:64 0 2G 0 disk
rbd5 252:80 0 2G 0 disk
rbd6 252:96 0 2G 0 disk
rbd7 252:112 0 2G 0 disk
-- To see the image-to-device mapping:
[root@ceph-oracle ~]# rbd showmapped
id pool image snap device
0 votpool img_vot1 - /dev/rbd0
1 votpool img_vot2 - /dev/rbd1
2 votpool img_vot3 - /dev/rbd2
3 asmpool img_asm1 - /dev/rbd3
4 asmpool img_asm2 - /dev/rbd4
5 asmpool img_asm3 - /dev/rbd5
6 asmpool img_asm4 - /dev/rbd6
7 asmpool img_asm5 - /dev/rbd7
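If a mapping needs to be undone (for example to redo it), the device can be unmapped again, e.g.:
# rbd unmap /dev/rbd0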
7. Fix permissions
After every reboot the ownership of these block devices reverts to root:root,
and the mappings themselves also have to be re-created by hand.
So we put both the map commands and the permission changes into /etc/rc.local:
[root@ceph-rac1 ~]# cat /etc/rc.local
#!/bin/bash
# THIS FILE IS ADDED FOR COMPATIBILITY PURPOSES
#
# It is highly advisable to create own systemd services or udev rules
# to run scripts during boot instead of using this file.
#
# In contrast to previous versions due to parallel execution during boot
# this script will NOT be run after all other services.
#
# Please note that you must run 'chmod +x /etc/rc.d/rc.local' to ensure
# that this script will be executed during boot.
touch /var/lock/subsys/local
rbd map votpool/img_vot1
rbd map votpool/img_vot2
rbd map votpool/img_vot3
rbd map asmpool/img_asm1
rbd map asmpool/img_asm2
rbd map asmpool/img_asm3
rbd map asmpool/img_asm4
rbd map asmpool/img_asm5
chown oracle:dba /dev/rbd*
chmod 660 /dev/rbd*
After editing, run the following command, otherwise /etc/rc.local will not be executed at boot:
chmod +x /etc/rc.d/rc.local
After that, the mappings and the permissions are restored automatically on every boot.
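As the comments inside /etc/rc.local suggest, a udev rule plus Ceph's rbdmap service is the more robust alternative; a minimal sketch, assuming the admin keyring copied in step 4.2 (the rule file name 99-oracle-rbd.rules is arbitrary):
# cat /etc/udev/rules.d/99-oracle-rbd.rules
KERNEL=="rbd[0-9]*", OWNER="oracle", GROUP="dba", MODE="0660"

# /etc/ceph/rbdmap -- one line per image to map at boot, for example:
votpool/img_vot1 id=admin,keyring=/etc/ceph/ceph.client.admin.keyring
asmpool/img_asm1 id=admin,keyring=/etc/ceph/ceph.client.admin.keyring

# systemctl enable rbdmap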
8. Install RAC
The detailed installation steps are omitted.
The process is no different from an ordinary RAC installation.
Disk-to-diskgroup mapping:
/dev/rbd0 dgocr
/dev/rbd1 dgocr
/dev/rbd2 dgocr
/dev/rbd3 dgdata
/dev/rbd4 dgdata
/dev/rbd5 dgdata
/dev/rbd6 dgdata
/dev/rbd7 dgdata
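The one thing to watch during the Grid Infrastructure installation is the ASM disk discovery path: assuming the default discovery string does not pick up /dev/rbd*, set it to /dev/rbd* in the installer's disk discovery dialog, or afterwards on the ASM instance, for example:
SQL> alter system set asm_diskstring='/dev/rbd*';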
9. After the installation:
1* select name,path,os_mb from v$asm_disk
NAME PATH OS_MB
------------------------------ ------------------------------ ----------
DGDATA_0004 /dev/rbd7 2048
DGDATA_0003 /dev/rbd6 2048
DGDATA_0002 /dev/rbd5 2048
DGDATA_0001 /dev/rbd4 2048
DGDATA_0000 /dev/rbd3 2048
DGOCR_0002 /dev/rbd2 1024
DGOCR_0001 /dev/rbd1 1024
DGOCR_0000 /dev/rbd0 1024
Resource status:
[oracle@ceph-rac1 ~]$ crsctl stat res -t
--------------------------------------------------------------------------------
NAME TARGET STATE SERVER STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DGDATA.dg
ONLINE ONLINE ceph-rac1
ONLINE ONLINE ceph-rac2
ora.DGOCR.dg
ONLINE ONLINE ceph-rac1
ONLINE ONLINE ceph-rac2
ora.LISTENER.lsnr
ONLINE ONLINE ceph-rac1
ONLINE ONLINE ceph-rac2
ora.asm
ONLINE ONLINE ceph-rac1 Started
ONLINE ONLINE ceph-rac2 Started
ora.gsd
OFFLINE OFFLINE ceph-rac1
OFFLINE OFFLINE ceph-rac2
ora.net1.network
ONLINE ONLINE ceph-rac1
ONLINE ONLINE ceph-rac2
ora.ons
ONLINE ONLINE ceph-rac1
ONLINE ONLINE ceph-rac2
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
1 ONLINE ONLINE ceph-rac1
ora.ceph-rac1.vip
1 ONLINE ONLINE ceph-rac1
ora.ceph-rac2.vip
1 ONLINE ONLINE ceph-rac2
ora.cvu
1 ONLINE ONLINE ceph-rac1
ora.oc4j
1 ONLINE ONLINE ceph-rac1
ora.scan1.vip
1 ONLINE ONLINE ceph-rac1
ora.sztech.db
1 ONLINE ONLINE ceph-rac1 Open
2 ONLINE ONLINE ceph-rac2 Open