Getting Started with GlusterFS

1. Installing GlusterFS

Configure the yum repository

  [root@docker ~]# cat /etc/yum.repos.d/gluster-epel.repo
  [glusterfs-epel]
  name=GlusterFS is a clustered file-system capable of scaling to several petabytes.
  baseurl=http://download.gluster.org/pub/gluster/glusterfs/3.7/LATEST/EPEL.repo/epel-$releasever/$basearch/
  enabled=1
  skip_if_unavailable=1
  gpgcheck=1
  gpgkey=http://download.gluster.org/pub/gluster/glusterfs/3.7/LATEST/rsa.pub

  [glusterfs-noarch-epel]
  name=GlusterFS is a clustered file-system capable of scaling to several petabytes.
  baseurl=http://download.gluster.org/pub/gluster/glusterfs/3.7/LATEST/EPEL.repo/epel-$releasever/noarch
  enabled=1
  skip_if_unavailable=1
  gpgcheck=1
  gpgkey=http://download.gluster.org/pub/gluster/glusterfs/3.7/LATEST/rsa.pub

  [glusterfs-source-epel]
  name=GlusterFS is a clustered file-system capable of scaling to several petabytes. - Source
  baseurl=http://download.gluster.org/pub/gluster/glusterfs/3.7/LATEST/EPEL.repo/epel-$releasever/SRPMS
  enabled=0
  skip_if_unavailable=1
  gpgcheck=1
  gpgkey=http://download.gluster.org/pub/gluster/glusterfs/3.7/LATEST/rsa.pub
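Before installing, you can optionally verify that the new repos are visible to yum:

  yum clean all
  yum repolist | grep -i gluster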

Install the Gluster server package

  yum install glusterfs-server -y

Start the service on each node and check its status

  [root@docker yum.repos.d]# systemctl start glusterd.service
  [root@docker yum.repos.d]# systemctl status glusterd.service
  glusterd.service - GlusterFS, a clustered file-system server
  Loaded: loaded (/usr/lib/systemd/system/glusterd.service; disabled; vendor preset: disabled)
  Active: active (running) since Wed 2016-06-29 09:37:55 CST; 4s ago
  Process: 5570 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS)
  Main PID: 5571 (glusterd)
  CGroup: /system.slice/glusterd.service
  └─5571 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO
  Jun 29 09:37:55 docker-18120 systemd[1]: Starting GlusterFS, a clustered file-system server...
  Jun 29 09:37:55 docker-18120 systemd[1]: Started GlusterFS, a clustered file-system server.

  [root@docker-18121 yum.repos.d]# systemctl start glusterd.service
  [root@docker-18121 yum.repos.d]# systemctl status glusterd.service
  glusterd.service - GlusterFS, a clustered file-system server
  Loaded: loaded (/usr/lib/systemd/system/glusterd.service; disabled; vendor preset: disabled)
  Active: active (running) since Wed 2016-06-29 09:37:55 CST; 4s ago
  Process: 5570 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS)
  Main PID: 5571 (glusterd)
  CGroup: /system.slice/glusterd.service
  └─5571 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO
  Jun 29 09:37:55 docker-18121 systemd[1]: Starting GlusterFS, a clustered file-system server...
  Jun 29 09:37:55 docker-18121 systemd[1]: Started GlusterFS, a clustered file-system server.

  [root@docker-18122 yum.repos.d]# systemctl start glusterd.service
  [root@docker-18122 yum.repos.d]# systemctl status glusterd.service
  glusterd.service - GlusterFS, a clustered file-system server
  Loaded: loaded (/usr/lib/systemd/system/glusterd.service; disabled; vendor preset: disabled)
  Active: active (running) since Wed 2016-06-29 09:37:55 CST; 4s ago
  Process: 5570 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS)
  Main PID: 5571 (glusterd)
  CGroup: /system.slice/glusterd.service
  └─5571 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO
  Jun 29 09:37:55 docker-18122 systemd[1]: Starting GlusterFS, a clustered file-system server...
  Jun 29 09:37:55 docker-18122 systemd[1]: Started GlusterFS, a clustered file-system server.

  [root@dockereg-18123 yum.repos.d]# systemctl start glusterd.service
  [root@dockereg-18123 yum.repos.d]# systemctl status glusterd.service
  glusterd.service - GlusterFS, a clustered file-system server
  Loaded: loaded (/usr/lib/systemd/system/glusterd.service; disabled; vendor preset: disabled)
  Active: active (running) since Wed 2016-06-29 18:16:25 CST; 4s ago
  Process: 3572 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS)
  Main PID: 3573 (glusterd)
  CGroup: /system.slice/glusterd.service
  └─3573 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO
  Jun 29 18:16:25 dockereg-18123 systemd[1]: Starting GlusterFS, a clustered file-system server...
  Jun 29 18:16:25 dockereg-18123 systemd[1]: Started GlusterFS, a clustered file-system server.
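Note the "disabled; vendor preset: disabled" in the status output: the unit will not come back after a reboot. To start glusterd automatically at boot, enable it on every node:

  systemctl enable glusterd.service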

Build the trusted storage pool; probing the other nodes from any single node is enough

  [root@docker yum.repos.d]# systemctl stop glusterfsd.service
  [root@docker yum.repos.d]# gluster peer probe 10.10.18.121
  peer probe: success.
  [root@docker yum.repos.d]# gluster peer probe 10.10.18.122
  peer probe: success.
  [root@docker yum.repos.d]# gluster peer probe 10.10.18.123
  peer probe: success.
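A quick way to double-check pool membership (including the local node, which peer status omits) is:

  [root@docker yum.repos.d]# gluster pool list

This prints the UUID, hostname, and connection state of every member of the trusted pool.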

Check the trusted pool status

  [root@docker-18120 ~]# gluster peer status
  Number of Peers: 3

  Hostname: 10.10.18.121
  Uuid: c60707ee-f94e-4b1c-a9ac-bf3bb501771a
  State: Peer in Cluster (Connected)

  Hostname: 10.10.18.122
  Uuid: 1a65fe10-75a6-416b-9193-59982db70d28
  State: Peer in Cluster (Connected)

  Hostname: 10.10.18.123
  Uuid: 150b1e9f-9d94-44f9-a5a8-277ae8d834ae
  State: Peer in Cluster (Connected)

Create a brick directory on each of the four nodes (see the production note after the commands)

  [root@dockereg-18120 ~]# mkdir -p /data/docker-registry-18-120
  [root@dockereg-18121 ~]# mkdir -p /data/docker-registry-18-121
  [root@dockereg-18122 ~]# mkdir -p /data/docker-registry-18-122
  [root@dockereg-18123 ~]# mkdir -p /data/docker-registry-18-123
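These bricks sit on the root partition, which is fine for a test bed. In production each brick should live on its own filesystem, typically XFS. A minimal sketch, assuming a spare disk /dev/sdb (a hypothetical device name):

  mkfs.xfs -i size=512 /dev/sdb    # 512-byte inodes, as the GlusterFS docs recommend for bricks
  mkdir -p /data/brick1
  echo '/dev/sdb /data/brick1 xfs defaults 0 0' >> /etc/fstab
  mount -a
  mkdir -p /data/brick1/docker-registry    # this subdirectory is what gets handed to gluster as the brick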

Create a distributed-replicated volume. With replica 2, bricks are grouped into replica pairs in the order given (here 18.120/18.121 and 18.122/18.123). Because these bricks are on the root partition, the first attempt is rejected; appending force overrides the check.

  [root@docker ~]# gluster volume create docker-registry-volume replica 2 transport tcp 10.10.18.120:/data/docker-registry-18-120 10.10.18.121:/data/docker-registry-18-121 10.10.18.122:/data/docker-registry-18-122 10.10.18.123:/data/docker-registry-18-123
  volume create: docker-registry-volume: failed: The brick 10.10.18.120:/data/docker-registry-18-120 is being created in the root partition. It is recommended that you don't use the system's root partition for storage backend. Or use 'force' at the end of the command if you want to override this behavior.
  [root@docker ~]# gluster volume create docker-registry-volume replica 2 transport tcp 10.10.18.120:/data/docker-registry-18-120 10.10.18.121:/data/docker-registry-18-121 10.10.18.122:/data/docker-registry-18-122 10.10.18.123:/data/docker-registry-18-123 force
  volume create: docker-registry-volume: success: please start the volume to access data

Check the volume info

  [root@docker ~]# gluster volume info

  Volume Name: docker-registry-volume
  Type: Distributed-Replicate
  Volume ID: 5c673ad6-06b8-4ef1-add1-9b41505eff10
  Status: Created
  Number of Bricks: 2 x 2 = 4
  Transport-type: tcp
  Bricks:
  Brick1: 10.10.18.120:/data/docker-registry-18-120
  Brick2: 10.10.18.121:/data/docker-registry-18-121
  Brick3: 10.10.18.122:/data/docker-registry-18-122
  Brick4: 10.10.18.123:/data/docker-registry-18-123
  Options Reconfigured:
  performance.readdir-ahead: on

Start the volume

  [root@docker ~]# gluster volume start docker-registry-volume
  volume start: docker-registry-volume: success
  [root@docker ~]# gluster volume info

The status has changed from Created to Started

  Volume Name: docker-registry-volume
  Type: Distributed-Replicate
  Volume ID: 5c673ad6-06b8-4ef1-add1-9b41505eff10
  Status: Started
  Number of Bricks: 2 x 2 = 4
  Transport-type: tcp
  Bricks:
  Brick1: 10.10.18.120:/data/docker-registry-18-120
  Brick2: 10.10.18.121:/data/docker-registry-18-121
  Brick3: 10.10.18.122:/data/docker-registry-18-122
  Brick4: 10.10.18.123:/data/docker-registry-18-123
  Options Reconfigured:
  performance.readdir-ahead: on
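Besides volume info, gluster volume status reports the runtime state of each brick process, including its TCP port and PID, which is handy when a brick fails to come online:

  [root@docker ~]# gluster volume status docker-registry-volume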

Mount the distributed-replicated volume (create the mount point first with mkdir /gluster-test if it does not exist)

  [root@docker-18120 gluster-test]# mount -t glusterfs 10.10.18.120:/docker-registry-volume /gluster-test
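To make the mount survive reboots, an /etc/fstab entry along these lines works (a sketch; backup-volfile-servers lets the client fetch the volume layout from another node if 10.10.18.120 happens to be down at mount time):

  10.10.18.120:/docker-registry-volume /gluster-test glusterfs defaults,_netdev,backup-volfile-servers=10.10.18.121:10.10.18.122 0 0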

Verify the mount

  [root@docker gluster-test]# df -h
  Filesystem                           Size  Used Avail Use% Mounted on
  /dev/sda3                            484G  1.7G  482G   1% /
  devtmpfs                             3.9G     0  3.9G   0% /dev
  tmpfs                                3.9G     0  3.9G   0% /dev/shm
  tmpfs                                3.9G   33M  3.8G   1% /run
  tmpfs                                3.9G     0  3.9G   0% /sys/fs/cgroup
  /dev/sda1                            509M  144M  366M  29% /boot
  tmpfs                                783M     0  783M   0% /run/user/1000
  10.10.18.120:/docker-registry-volume 717G  2.9G  714G   1% /gluster-test
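Because this is a replica-2 volume, a file written through the mount is stored on both bricks of whichever replica pair it hashes to. A quick illustration (testfile is a hypothetical name):

  [root@docker gluster-test]# echo hello > /gluster-test/testfile
  [root@docker gluster-test]# ls /data/docker-registry-18-120/
  # if testfile hashed to the first pair it appears here and on 18.121; otherwise on the 18.122/18.123 pair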

Enable disk quota on the volume, then check the volume info again

  [root@docker gluster-test]# gluster volume quota docker-registry-volume enable
  volume quota : success
  [root@docker gluster-test]# gluster volume info

  Volume Name: docker-registry-volume
  Type: Distributed-Replicate
  Volume ID: 5c673ad6-06b8-4ef1-add1-9b41505eff10
  Status: Started
  Number of Bricks: 2 x 2 = 4
  Transport-type: tcp
  Bricks:
  Brick1: 10.10.18.120:/data/docker-registry-18-120
  Brick2: 10.10.18.121:/data/docker-registry-18-121
  Brick3: 10.10.18.122:/data/docker-registry-18-122
  Brick4: 10.10.18.123:/data/docker-registry-18-123
  Options Reconfigured:
  features.quota-deem-statfs: on
  features.inode-quota: on
  features.quota: on
  performance.readdir-ahead: on

Limit disk usage with the command below. The limit is applied to the volume root (/); because features.quota-deem-statfs is on, df now reports the 700 GB quota as the volume size. The localhost:docker-registry-volume entry is the auxiliary mount that quota maintains under /run/gluster.

  [root@docker gluster-test]# gluster volume quota docker-registry-volume limit-usage / 700GB
  volume quota : success
  [root@docker gluster-test]# df -h
  Filesystem                       Size  Used Avail Use% Mounted on
  /dev/sda3                        484G  1.7G  482G   1% /
  devtmpfs                         3.9G     0  3.9G   0% /dev
  tmpfs                            3.9G     0  3.9G   0% /dev/shm
  tmpfs                            3.9G   33M  3.8G   1% /run
  tmpfs                            3.9G     0  3.9G   0% /sys/fs/cgroup
  /dev/sda1                        509M  144M  366M  29% /boot
  tmpfs                            783M     0  783M   0% /run/user/1000
  localhost:docker-registry-volume 700G     0  700G   0% /run/gluster/docker-registry-volume
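Configured limits and current usage per directory can be reviewed at any time with the list subcommand:

  [root@docker gluster-test]# gluster volume quota docker-registry-volume list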

Add two new bricks to the distributed-replicated volume. On a replica-2 volume, bricks must be added in multiples of two; the new pair becomes a third distribute subvolume, so 2 x 2 = 4 grows to 3 x 2 = 6.

  [root@docker ~]# mkdir /add1
  [root@docker ~]# mkdir /add2
  [root@docker ~]# gluster volume add-brick docker-registry-volume 10.10.18.120:/add1 10.10.18.121:/add2 force
  volume add-brick: success
  [root@docker ~]# gluster volume info docker-registry-volume

  Volume Name: docker-registry-volume
  Type: Distributed-Replicate
  Volume ID: 5c673ad6-06b8-4ef1-add1-9b41505eff10
  Status: Started
  Number of Bricks: 3 x 2 = 6
  Transport-type: tcp
  Bricks:
  Brick1: 10.10.18.120:/data/docker-registry-18-120
  Brick2: 10.10.18.121:/data/docker-registry-18-121
  Brick3: 10.10.18.122:/data/docker-registry-18-122
  Brick4: 10.10.18.123:/data/docker-registry-18-123
  Brick5: 10.10.18.120:/add1
  Brick6: 10.10.18.121:/add2
  Options Reconfigured:
  features.quota-deem-statfs: on
  features.inode-quota: on
  features.quota: on
  performance.readdir-ahead: on

After adding bricks, rebalance the volume so existing data spreads onto the new pair

  [root@docker ~]# gluster volume rebalance docker-registry-volume start
  volume rebalance: docker-registry-volume: success: Rebalance on docker-registry-volume has been started successfully. Use rebalance status command to check status of the rebalance process.
  ID: f17c36d1-64e1-4a0f-b28f-b1d1fe8ba766
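As the message suggests, the migration runs in the background; its progress, including per-node file counts, can be watched with:

  [root@docker ~]# gluster volume rebalance docker-registry-volume status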

The new bricks now hold files

  [root@docker ~]# tree /add1/
  /add1/
  ├── 16
  ├── 18
  ├── 19
  ├── 22
  ├── 23
  ├── 27
  ├── 28
  ├── 32
  ├── 34
  ├── 37
  ├── 38
  ├── 41
  ├── 43
  ├── 44
  ├── 56
  ├── 57
  ├── 60
  ├── 63
  ├── 69
  ├── 71
  ├── 78
  ├── 80
  ├── 82
  ├── 83
  ├── 86
  ├── 89
  ├── 92
  ├── 99
  └── b

  0 directories, 29 files

Shrink the volume by removing the two new bricks

  [root@docker-18120 ~]# gluster volume remove-brick docker-registry-volume 10.10.18.120:/add1 10.10.18.121:/add2 force
  Removing brick(s) can result in data loss. Do you want to Continue? (y/n) y
  volume remove-brick commit force: success
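Note that remove-brick ... force drops the bricks immediately without migrating their data off first, hence the data-loss warning. The safer sequence in production is start / status / commit, which migrates files away before the bricks leave the volume. A sketch:

  gluster volume remove-brick docker-registry-volume 10.10.18.120:/add1 10.10.18.121:/add2 start
  gluster volume remove-brick docker-registry-volume 10.10.18.120:/add1 10.10.18.121:/add2 status
  gluster volume remove-brick docker-registry-volume 10.10.18.120:/add1 10.10.18.121:/add2 commit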

Check the volume info

  [root@docker ~]# gluster volume info

  Volume Name: docker-registry-volume
  Type: Distributed-Replicate
  Volume ID: 5c673ad6-06b8-4ef1-add1-9b41505eff10
  Status: Started
  Number of Bricks: 2 x 2 = 4
  Transport-type: tcp
  Bricks:
  Brick1: 10.10.18.120:/data/docker-registry-18-120
  Brick2: 10.10.18.121:/data/docker-registry-18-121
  Brick3: 10.10.18.122:/data/docker-registry-18-122
  Brick4: 10.10.18.123:/data/docker-registry-18-123
  Options Reconfigured:
  features.quota-deem-statfs: on
  features.inode-quota: on
  features.quota: on
  performance.readdir-ahead: on

Inspect the removed bricks: the files are not lost. Since a force removal does not migrate data, everything that lived on /add1 is still in that directory on the local filesystem; it is simply no longer part of the volume.

  [root@docker ~]# tree /add1/
  /add1/
  ├── 16
  ├── 18
  ├── 19
  ├── 22
  ├── 23
  ├── 27
  ├── 28
  ├── 32
  ├── 34
  ├── 37
  ├── 38
  ├── 41
  ├── 43
  ├── 44
  ├── 56
  ├── 57
  ├── 60
  ├── 63
  ├── 69
  ├── 71
  ├── 78
  ├── 80
  ├── 82
  ├── 83
  ├── 86
  ├── 89
  ├── 92
  ├── 99
  └── b

  0 directories, 29 files
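One caveat if you ever want to hand /add1 back to gluster: bricks are tagged with extended attributes, so a later add-brick on the same path fails with "already part of a volume" unless they are cleared first. A sketch (destructive; only for a brick that has really been removed from the volume):

  setfattr -x trusted.glusterfs.volume-id /add1
  setfattr -x trusted.gfid /add1
  rm -rf /add1/.glusterfs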

Rebalance once more after removing the bricks

  [root@docker ~]# gluster volume rebalance docker-registry-volume start
  volume rebalance: docker-registry-volume: success: Rebalance on docker-registry-volume has been started successfully. Use rebalance status command to check status of the rebalance process.
  ID: b02a1012-c41c-48a6-ade9-df9aa74180cf

Note: this article uses Gluster's distributed-replicated mode, which is also the best practice in production. A later post on distributed storage will cover the various volume types in detail.
