Before reading this page, please check these posts:
Try to install Ceph in CentOS 7 referencing "STORAGE CLUSTER QUICK START" - AKAI TSUKI
Install Ceph in CentOS 7. - AKAI TSUKI
I'd like to use CephFS.
http://docs.ceph.com/docs/master/cephfs/createfs/
I create the pools. For choosing pg_num, I referred to the URL below.
http://docs.ceph.com/docs/master/rados/operations/placement-groups/
[cuser@ceph01 ~]$ sudo ceph osd pool create cephfs_data 128
pool 'cephfs_data' created
[cuser@ceph01 ~]$
[cuser@ceph01 ~]$ sudo ceph osd pool create cephfs_metadata 128
Error ERANGE: pg_num 128 size 3 would mean 768 total pgs, which exceeds max 600 (mon_max_pg_per_osd 200 * num_in_osds 3)
[cuser@ceph01 ~]$
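The numbers in this error add up as follows: cephfs_data was created with 128 PGs, and at replica size 3 that is already 128 * 3 = 384 PG replicas. A second pool with pg_num 128 would add another 384, for 768 in total, which exceeds the cap of 600 (mon_max_pg_per_osd 200 * 3 OSDs). A metadata pool with pg_num 64 adds only 192, for 576 in total, which fits. To confirm the configured cap, the monitor's admin socket can report it; a quick sketch (mon.ceph01 is my assumption based on this cluster's host names, output omitted):

[cuser@ceph01 ~]$ sudo ceph daemon mon.ceph01 config get mon_max_pg_per_osd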
Given this limit, I lower the pg_num value for cephfs_metadata to 64.
[cuser@ceph01 ~]$ sudo ceph osd pool create cephfs_metadata 64
pool 'cephfs_metadata' created
[cuser@ceph01 ~]$
[cuser@ceph01 ~]$ sudo ceph fs new testfs cephfs_metadata cephfs_data
new fs with metadata pool 2 and data pool 1
[cuser@ceph01 ~]$
[cuser@ceph01 ~]$ sudo ceph fs ls
name: testfs, metadata pool: cephfs_metadata, data pools: [cephfs_data ]
[cuser@ceph01 ~]$
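To double-check which pools back the new filesystem, ceph fs get prints the full filesystem map; a quick sketch (output omitted):

[cuser@ceph01 ~]$ sudo ceph fs get testfs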
I check the cluster status.
[cuser@ceph01 ~]$ sudo ceph osd stat
3 osds: 3 up, 3 in; epoch: e22
[cuser@ceph01 ~]$ sudo ceph osd status
+----+--------+-------+-------+--------+---------+--------+---------+-----------+
| id |  host  |  used | avail | wr ops | wr data | rd ops | rd data |   state   |
+----+--------+-------+-------+--------+---------+--------+---------+-----------+
| 0  | ceph01 | 1027M | 14.9G |    0   |    0    |    0   |    0    | exists,up |
| 1  | ceph02 | 1027M | 14.9G |    0   |    0    |    0   |    0    | exists,up |
| 2  | ceph03 | 1027M | 14.9G |    0   |    0    |    0   |    0    | exists,up |
+----+--------+-------+-------+--------+---------+--------+---------+-----------+
[cuser@ceph01 ~]$
[cuser@ceph01 ~]$ sudo ceph osd versions
{
    "ceph version 13.2.0 (79a10589f1f80dfe21e8f9794365ed98143071c4) mimic (stable)": 3
}
[cuser@ceph01 ~]$
[cuser@ceph01 ~]$ sudo ceph osd pool ls detail
pool 1 'cephfs_data' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 128 pgp_num 128 last_change 22 flags hashpspool stripe_width 0 application cephfs
pool 2 'cephfs_metadata' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 64 pgp_num 64 last_change 22 flags hashpspool stripe_width 0 application cephfs
[cuser@ceph01 ~]$
[cuser@ceph01 ~]$ sudo ceph mds stat
testfs-1/1/1 up {0=ceph01=up:active}
[cuser@ceph01 ~]$
[cuser@ceph01 ~]$ sudo cat /etc/ceph/ceph.client.admin.keyring
[client.admin]
        key = *snip*
        caps mds = "allow *"
        caps mgr = "allow *"
        caps mon = "allow *"
        caps osd = "allow *"
[cuser@ceph01 ~]$
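Note that I mount with client.admin below, which has full caps on the whole cluster. For anything beyond a test, a client restricted to this filesystem would be safer. A minimal sketch, assuming a hypothetical client name client.fsuser:

[cuser@ceph01 ~]$ sudo ceph fs authorize testfs client.fsuser / rw

This prints a keyring for client.fsuser with read/write access limited to testfs, which you would then use on the client instead of the admin key.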
Client Side
I set up the client (ceph05) to use CephFS.
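The mount below reads the admin key from a plain secret file. One way to create it is ceph auth print-key, which prints only the key for the given client; a sketch run on ceph01:

[cuser@ceph01 ~]$ sudo ceph auth print-key client.admin > admin.secret

Then copy admin.secret to /root/ on ceph05.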
[root@ceph05 ~]# chmod 600 admin.secret
[root@ceph05 ~]# ls -l
total 8
-rw------- 1 root root   41 Jul 15 20:10 admin.secret
-rw-------. 1 root root 1329 Jul 14 21:50 anaconda-ks.cfg
[root@ceph05 ~]# mkdir /mnt/mycephfs
[root@ceph05 ~]# ls -l /mnt/mycephfs/
total 0
[root@ceph05 ~]#
[root@ceph05 ~]# sudo mount -t ceph ceph01:/ /mnt/mycephfs -o name=admin,secretfile=/root/admin.secret
[root@ceph05 ~]#
[root@ceph05 ~]# df -hT
Filesystem              Type      Size  Used Avail Use% Mounted on
/dev/mapper/centos-root xfs        28G  1.5G   27G   6% /
devtmpfs                devtmpfs  1.9G     0  1.9G   0% /dev
tmpfs                   tmpfs     1.9G     0  1.9G   0% /dev/shm
tmpfs                   tmpfs     1.9G  8.5M  1.9G   1% /run
tmpfs                   tmpfs     1.9G     0  1.9G   0% /sys/fs/cgroup
/dev/sda1               xfs      1014M  171M  844M  17% /boot
tmpfs                   tmpfs     380M     0  380M   0% /run/user/0
172.16.10.111:/         ceph       15G     0   15G   0% /mnt/mycephfs
[root@ceph05 ~]#
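This mount names only ceph01. The kernel client also accepts a comma-separated list of monitor hosts, so the mount can still be established if one monitor is down; a sketch, assuming ceph01 through ceph03 all run monitors:

[root@ceph05 ~]# mount -t ceph ceph01,ceph02,ceph03:/ /mnt/mycephfs -o name=admin,secretfile=/root/admin.secret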
I create a file on the shared CephFS storage.
[root@ceph05 ~]# ls -l /mnt/mycephfs/
total 0
[root@ceph05 ~]# vi /mnt/mycephfs/test.txt
[root@ceph05 ~]# cat /mnt/mycephfs/test.txt
message
[root@ceph05 ~]#
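The file contents are not stored on ceph05 itself; they are written as RADOS objects into the cephfs_data pool. You can list them from the cluster side; a sketch (output omitted):

[cuser@ceph01 ~]$ sudo rados -p cephfs_data ls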
After unmounting, I can no longer see the created file locally.
[root@ceph05 ~]# umount /mnt/mycephfs
[root@ceph05 ~]# df -hT
Filesystem              Type      Size  Used Avail Use% Mounted on
/dev/mapper/centos-root xfs        28G  1.5G   27G   6% /
devtmpfs                devtmpfs  1.9G     0  1.9G   0% /dev
tmpfs                   tmpfs     1.9G     0  1.9G   0% /dev/shm
tmpfs                   tmpfs     1.9G  8.5M  1.9G   1% /run
tmpfs                   tmpfs     1.9G     0  1.9G   0% /sys/fs/cgroup
/dev/sda1               xfs      1014M  171M  844M  17% /boot
tmpfs                   tmpfs     380M     0  380M   0% /run/user/0
[root@ceph05 ~]#
[root@ceph05 ~]# ls -l /mnt/mycephfs/
total 0
[root@ceph05 ~]#
I mount it again.
[root@ceph05 ~]# sudo mount -t ceph ceph01:/ /mnt/mycephfs -o name=admin,secretfile=/root/admin.secret
[root@ceph05 ~]#
[root@ceph05 ~]# df -hT
Filesystem              Type      Size  Used Avail Use% Mounted on
/dev/mapper/centos-root xfs        28G  1.5G   27G   6% /
devtmpfs                devtmpfs  1.9G     0  1.9G   0% /dev
tmpfs                   tmpfs     1.9G     0  1.9G   0% /dev/shm
tmpfs                   tmpfs     1.9G  8.5M  1.9G   1% /run
tmpfs                   tmpfs     1.9G     0  1.9G   0% /sys/fs/cgroup
/dev/sda1               xfs      1014M  171M  844M  17% /boot
tmpfs                   tmpfs     380M     0  380M   0% /run/user/0
172.16.10.111:/         ceph       15G     0   15G   0% /mnt/mycephfs
[root@ceph05 ~]#
[root@ceph05 ~]# ls -l /mnt/mycephfs/
total 1
-rw-r--r-- 1 root root 8 Jul 15 20:14 test.txt
[root@ceph05 ~]# cat /mnt/mycephfs/test.txt
message
[root@ceph05 ~]#
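To make this mount persist across reboots, an /etc/fstab entry on ceph05 works; a minimal sketch with the same options plus _netdev so mounting waits for the network:

ceph01:/  /mnt/mycephfs  ceph  name=admin,secretfile=/root/admin.secret,_netdev  0 0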
Then I confirm the status from the cluster side.
[cuser@ceph01 ~]$ sudo ceph fs status
testfs - 1 clients
======
+------+--------+--------+---------------+-------+-------+
| Rank | State  |  MDS   |    Activity   |  dns  |  inos |
+------+--------+--------+---------------+-------+-------+
|  0   | active | ceph01 | Reqs:    0 /s |   13  |   14  |
+------+--------+--------+---------------+-------+-------+
+-----------------+----------+-------+-------+
|       Pool      |   type   |  used | avail |
+-----------------+----------+-------+-------+
| cephfs_metadata | metadata | 26.2k | 14.1G |
|   cephfs_data   |   data   |   8   | 14.1G |
+-----------------+----------+-------+-------+
+-------------+
| Standby MDS |
+-------------+
+-------------+
MDS version: ceph version 13.2.0 (79a10589f1f80dfe21e8f9794365ed98143071c4) mimic (stable)
[cuser@ceph01 ~]$
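For a broader view, ceph -s shows overall cluster health and ceph df shows usage per pool (output omitted):

[cuser@ceph01 ~]$ sudo ceph -s
[cuser@ceph01 ~]$ sudo ceph df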