SaltStack: A Powerful Automation Tool

1. SaltStack Environment Preparation

First host: linux-node1, acting as both salt-master and salt-minion

```bash
[root@linux-node1 ~]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
10.0.0.7 linux-node1
10.0.0.8 linux-node2
[root@linux-node1 ~]# cat /etc/redhat-release
CentOS release 6.7 (Final)
[root@linux-node1 ~]# uname -m
x86_64
[root@linux-node1 ~]# uname -r
2.6.32-573.el6.x86_64
[root@linux-node1 ~]# uname -a
Linux linux-node1 2.6.32-573.el6.x86_64 #1 SMP Thu Jul 23 15:44:03 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux
```

Second host: linux-node2, acting only as salt-minion

```bash
[root@linux-node2 ~]# uname -r
2.6.32-573.el6.x86_64
[root@linux-node2 ~]# uname -m
x86_64
[root@linux-node2 ~]# uname -a
Linux linux-node2 2.6.32-573.el6.x86_64 #1 SMP Thu Jul 23 15:44:03 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux
[root@linux-node2 ~]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
10.0.0.7 linux-node1
10.0.0.8 linux-node2
```

2. Introduction to SaltStack

2.1 The Three Ways to Run Salt

  • Local: run locally on a single machine
  • Master/Minion
  • Salt SSH

2.2 Salt's Three Main Functions

  • Remote execution
  • Configuration management (state management)
  • Cloud management: Alibaba Cloud, AWS, and OpenStack all provide ready-made interfaces, so salt-cloud can be used to manage cloud hosts

3. Installing, Configuring, and Starting Salt

We install with yum here, and yum is recommended in production as well. The minion can be installed when the operating system is provisioned, or installed later via salt-ssh (covered in a later section).

  • linux-node1

```bash
[root@linux-node1 ~]# rpm -ivh http://mirrors.aliyun.com/epel/epel-release-latest-6.noarch.rpm
[root@linux-node1 ~]# yum install -y salt-master salt-minion
[root@linux-node1 ~]# chkconfig salt-master on
[root@linux-node1 ~]# chkconfig salt-minion on
```

  • linux-node2

```bash
[root@linux-node2 ~]# rpm -ivh http://mirrors.aliyun.com/epel/epel-release-latest-6.noarch.rpm
[root@linux-node2 ~]# yum install -y salt-minion
[root@linux-node2 ~]# chkconfig salt-minion on
```

Start the salt-master:

```bash
[root@linux-node1 ~]# /etc/init.d/salt-master start
Starting salt-master daemon:                               [  OK  ]
```

Edit both salt-minion config files to point them at the salt-master host. An IP address works here; if you have internal DNS you can use a hostname instead, which makes a future salt-master migration easier.

```bash
[root@linux-node1 ~]# sed -i '16s#\#master: salt#master: 10.0.0.7#g' /etc/salt/minion
[root@linux-node2 ~]# sed -i '16s#\#master: salt#master: 10.0.0.7#g' /etc/salt/minion
```

Note: the id in the config below is very important. In production it is typically set to the hostname (host-naming strategies come later). If it is left unset, it defaults to the FQDN.

```bash
[root@linux-node1 ~]# sed -n '68,74p' /etc/salt/minion
# Explicitly declare the id for this minion to use, if left commented the id
# will be the hostname as returned by the python call: socket.getfqdn()
# Since salt uses detached ids it is possible to run multiple minions on the
# same machine but with different ids, this can be useful for salt compute
# clusters.
#id:
```

Start the salt-master and salt-minion:

```bash
[root@linux-node1 ~]# /etc/init.d/salt-master start
[root@linux-node1 ~]# /etc/init.d/salt-minion start
```

4. SaltStack Authentication

When a minion starts for the first time, a private and public key appear on the minion side; the minion sends its public key to the master.

```bash
[root@linux-node2 minion]# pwd
/etc/salt/pki/minion
[root@linux-node2 minion]# ls
minion.pem minion.pub
```

The master also generates a key pair at startup; at this point the master needs to approve the minions' requests.

```bash
[root@linux-node1 master]# pwd
/etc/salt/pki/master
[root@linux-node1 master]# ls
master.pem master.pub minions minions_autosign minions_denied minions_pre minions_rejected
```

Use salt-key to view keys in their various states:

```bash
[root@linux-node1 pki]# salt-key
Accepted Keys:
Denied Keys:
Unaccepted Keys:
linux-node1
linux-node2
Rejected Keys:
```

Accept the two new keys. Here -A accepts all of them; you can also use -a to accept a specific minion, or use a wildcard pattern. For the full set of salt-key options, see salt-key --help.

```bash
[root@linux-node1 pki]# salt-key -A
The following keys are going to be accepted:
Unaccepted Keys:
linux-node1
linux-node2
Proceed? [n/Y] y
Key for minion linux-node1 accepted.
Key for minion linux-node2 accepted.
[root@linux-node1 pki]# salt-key
Accepted Keys:
linux-node1
linux-node2
Denied Keys:
Unaccepted Keys:
Rejected Keys:
```

Now the accepted minions' key files, named after each minion's id, can be seen on the master:

```bash
[root@linux-node1 master]# pwd
/etc/salt/pki/master
[root@linux-node1 master]# tree
.
├── master.pem
├── master.pub
├── minions
│   ├── linux-node1
│   └── linux-node2
├── minions_autosign
├── minions_denied
├── minions_pre
└── minions_rejected

5 directories, 4 files
```

The files in the master's minions directory are in fact the minions' public keys:

```bash
[root@linux-node1 minions]# cat linux-node1
-----BEGIN PUBLIC KEY-----
MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA7tnEScZ0vLwevAwFCQp5
kADzCOcZ3pHc+zFVugnzGCxtrmymwgV0QFARSqGQU9eWL/vaY2hz8YIwmPIU5Ri2
j+A0l8K15q2X2hgKepiU+qZG1Xc9EeAX/DPD+qynxXCd9EGMH32U1nQxlbnOwHUH
dDUbfAXf6Mxm/8/5VqNEWnx8ymug6N2MAWvJbLn2+24jhMxjeJrJRxz4nVTqOa4y
cOHiPqdwCaAUc9ul/sOp6VFlE+TsRQ3mcOHbYCDy9NgGmz3GNAtsdr6LcfEvYq4q
q78DK6Y5i5eEKsVbDT8BBP5I9D8YwL8fymFB8LcTPiiRlwPaAvgL2KeL10C9Q1z6
cwIDAQAB
-----END PUBLIC KEY-----
```

The master also sends its own public key to the minion:

```bash
[root@linux-node1 minion]# pwd
/etc/salt/pki/minion
[root@linux-node1 minion]# cat minion_master.pub
-----BEGIN PUBLIC KEY-----
MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAoPuOLwx+0cL+BKZZRmT4
JYhdGfC4ww5ku2Na8ZP4fPy73iZ5KXDG8z/fwsueHXssnsAgsY3EbyyjIa6Cx8Lh
a0T+N9U00olpHshOWUjy1kRmMjMYnveuU8cw0MDTZ327Ze6TEUfR9DbFCcz1uzCn
rCuCMUohtUA/ErwttAuERnaM5R7xZV4fG/eO8B0vXQv2nisJNIMRZbbCiaJTARir
ULqq8mpWIuqww3jZznef6R6WwhMCh+9vQTNVEXYropKQjm7cGgleQhUpRqPgtEw8
80qxybjMflOJZzOVTc1L72ah1s3unRReHU+olH+Zhxb2lb7/YpA2DoURf/b25M0h
6wIDAQAB
-----END PUBLIC KEY-----
```

5. Remote Execution with SaltStack

Use test.ping to check connectivity between the master and minions.
salt is the base command; '*' targets all minion hosts; test is the module and ping is one of the test module's methods. Single or double quotes both work here.

```bash
[root@linux-node1 minion]# salt '*' test.ping
linux-node2:
    True
linux-node1:
    True
```
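Glob targets such as '*' above follow shell-wildcard rules: Salt matches the pattern against minion ids with Python's fnmatch. A minimal Python sketch of how a target pattern selects ids, using the two minion ids from this setup:

```python
from fnmatch import fnmatch

# Minion ids as accepted on the master in this article's setup.
minions = ["linux-node1", "linux-node2"]

def match_target(pattern, ids):
    """Return the minion ids selected by a shell-glob target pattern."""
    return [mid for mid in ids if fnmatch(mid, pattern)]

print(match_target("*", minions))             # matches every minion
print(match_target("linux-node1*", minions))  # matches only linux-node1
```

This is only the glob form; -G (grains), -I (pillar), and -E (regex) targeting, used later in this article, follow their own matching rules.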

Use cmd.run to execute commands remotely; cmd is the module and run is one of its methods.

```bash
[root@linux-node1 minion]# salt '*' cmd.run 'w'
linux-node2:
    21:19:48 up 16:54, 2 users, load average: 0.00, 0.00, 0.00
    USER    TTY    FROM        LOGIN@   IDLE    JCPU   PCPU  WHAT
    root    tty1   -           29Feb16  62days  0.10s  0.10s -bash
    root    pts/2  10.0.0.1    19:30    39:12   0.04s  0.04s -bash
linux-node1:
    21:19:48 up 17:05, 2 users, load average: 0.12, 0.03, 0.01
    USER    TTY    FROM        LOGIN@   IDLE    JCPU   PCPU  WHAT
    root    tty1   -           29Feb16  8days   0.15s  0.15s -bash
    root    pts/2  10.0.0.1    19:30    1.00s   0.89s  0.78s /usr/bin/python
```

6. Configuration Management

6.1 Enabling Configuration Management

Edit the salt-master config file and uncomment lines 416 through 418.
file_roots defines where state files live; base is the base environment and must exist. Multiple environments (development, testing, production, etc.) are supported, as covered later.

```bash
[root@linux-node1 minion]# sed -n '416,418p' /etc/salt/master
file_roots:
  base:
    - /srv/salt
[root@linux-node1 minion]# mkdir /srv/salt
[root@linux-node1 minion]# /etc/init.d/salt-master restart
```
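For reference, a multi-environment file_roots layout might look like the following sketch. Only base is required; the dev and prod names and paths here are made-up examples:

```yaml
file_roots:
  base:
    - /srv/salt
  dev:
    - /srv/salt/dev
  prod:
    - /srv/salt/prod
```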

6.2 Installing a Simple Apache Service

Write apache.sls:

```bash
[root@linux-node1 salt]# pwd
/srv/salt
[root@linux-node1 salt]# cat apache.sls
apache-install:              # state ID
  pkg.installed:             # pkg: module; installed: function
    - names:                 # list of package names
      - httpd                # installed with yum
      - httpd-devel          # installed with yum
apache-service:              # state ID
  service.running:           # service: module; running: function
    - name: httpd            # the service that service.running manages
    - enable: True           # start on boot
    - reload: True           # allow reload instead of restart
```
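As an aside, the `- names` list above installs the packages one at a time; pkg.installed also accepts a `- pkgs` list, which resolves everything in a single yum transaction. A sketch of the same install state written that way:

```yaml
apache-install:
  pkg.installed:
    - pkgs:
      - httpd
      - httpd-devel
```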

Run the state file above. salt: the command; '*': all minions (targeting methods are detailed later); state: the module; sls: the method; apache: the state file to apply.

```bash
[root@linux-node1 salt]# salt '*' state.sls apache
linux-node2:
          ID: apache-install
    Function: pkg.installed
        Name: httpd
      Result: True
     Comment: Package httpd is already installed.
     Started: 23:26:15.045492
    Duration: 2256.368 ms
     Changes:
          ID: apache-install
    Function: pkg.installed
        Name: httpd-devel
      Result: True
     Comment: Package httpd-devel is already installed.
     Started: 23:26:17.302343
    Duration: 1.577 ms
     Changes:
          ID: apache-service
    Function: service.running
        Name: httpd
      Result: True
     Comment: Service httpd is already enabled, and is in the desired state
     Started: 23:26:17.305384
    Duration: 137.522 ms
     Changes:

    Summary
    Succeeded: 3
    Failed:    0
    Total states run: 3
linux-node1:
          ID: apache-install
    Function: pkg.installed
        Name: httpd
      Result: True
     Comment: Package httpd is already installed.
     Started: 23:26:15.152083
    Duration: 2307.265 ms
     Changes:
          ID: apache-install
    Function: pkg.installed
        Name: httpd-devel
      Result: True
     Comment: Package httpd-devel is already installed.
     Started: 23:26:17.459645
    Duration: 1.052 ms
     Changes:
          ID: apache-service
    Function: service.running
        Name: httpd
      Result: True
     Comment: Service httpd is already enabled, and is in the desired state
     Started: 23:26:17.462565
    Duration: 122.922 ms
     Changes:

    Summary
    Succeeded: 3
    Failed:    0
    Total states run: 3
```

Check the Apache service status:

```bash
[root@linux-node1 salt]# lsof -i:80
COMMAND  PID   USER  FD  TYPE DEVICE SIZE/OFF NODE NAME
httpd   8054 root    4u  IPv6  86585      0t0  TCP *:http (LISTEN)
httpd   8058 apache  4u  IPv6  86585      0t0  TCP *:http (LISTEN)
httpd   8059 apache  4u  IPv6  86585      0t0  TCP *:http (LISTEN)
httpd   8060 apache  4u  IPv6  86585      0t0  TCP *:http (LISTEN)
httpd   8061 apache  4u  IPv6  86585      0t0  TCP *:http (LISTEN)
httpd   8062 apache  4u  IPv6  86585      0t0  TCP *:http (LISTEN)
httpd   8063 apache  4u  IPv6  86585      0t0  TCP *:http (LISTEN)
httpd   8064 apache  4u  IPv6  86585      0t0  TCP *:http (LISTEN)
httpd   8065 apache  4u  IPv6  86585      0t0  TCP *:http (LISTEN)
```

6.3 Writing a Top File and Running the High State

top.sls is the default entry file; it must be named top.sls and must live in the base environment.

```bash
[root@linux-node1 salt]# cat top.sls
base:                 # the base environment
  'linux-*':          # target minions in the base environment
    - apache          # state to apply in the high state
```

Run the high state: Salt starts from top.sls, matches minions against the targets, and applies the mapped state files.

```bash
[root@linux-node1 salt]# salt '*' state.highstate
```

7. SaltStack's Data Systems

7.1 Grains

Grains collect system information when the minion starts, and only at startup. Grains are best suited to static attributes: a host's role (role), its disk count (disk_num), and similarly fixed properties. They can also be used to target minions.

7.1.1 Fetching Grains via Remote Execution

List all available grains keys:

```bash
[root@linux-node1 ~]# salt 'linux-node1*' grains.ls
linux-node1:
    - SSDs
    - biosreleasedate
    - biosversion
    - cpu_flags
    - cpu_model
    - cpuarch
    - domain
    - fqdn
    - fqdn_ip4
    - fqdn_ip6
    - gpus
    - host
    - hwaddr_interfaces
    - id
    - init
    - ip4_interfaces
    - ip6_interfaces
    - ip_interfaces
    - ipv4
    - ipv6
    - kernel
    - kernelrelease
    - locale_info
    - localhost
    - lsb_distrib_codename
    - lsb_distrib_id
    - lsb_distrib_release
    - machine_id
    - manufacturer
    - master
    - mdadm
    - mem_total
    - nodename
    - num_cpus
    - num_gpus
    - os
    - os_family
    - osarch
    - oscodename
    - osfinger
    - osfullname
    - osmajorrelease
    - osrelease
    - osrelease_info
    - path
    - productname
    - ps
    - pythonexecutable
    - pythonpath
    - pythonversion
    - saltpath
    - saltversion
    - saltversioninfo
    - selinux
    - serialnumber
    - server_id
    - shell
    - virtual
    - zmqversion
```

List all grains along with their values:

```bash
[root@linux-node1 ~]# salt 'linux-node1*' grains.items
linux-node1:
    SSDs:
    biosreleasedate:
        05/20/2014
    biosversion:
        6.00
    cpu_flags:
        - fpu
        - vme
        - de
        - pse
        - tsc
        - msr
        - pae
        - mce
        - cx8
        - apic
        - sep
        - mtrr
        - pge
        - mca
        - cmov
        - pat
        - pse36
        - clflush
        - dts
        - mmx
        - fxsr
        - sse
        - sse2
        - ss
        - syscall
        - nx
        - rdtscp
        - lm
        - constant_tsc
        - up
        - arch_perfmon
        - pebs
        - bts
        - xtopology
        - tsc_reliable
        - nonstop_tsc
        - aperfmperf
        - unfair_spinlock
        - pni
        - pclmulqdq
        - ssse3
        - cx16
        - sse4_1
        - sse4_2
        - popcnt
        - xsave
        - avx
        - hypervisor
        - lahf_lm
        - arat
        - epb
        - pln
        - pts
        - dts
    cpu_model:
        Intel(R) Core(TM) i3-2330M CPU @ 2.20GHz
    cpuarch:
        x86_64
    domain:
    fqdn:
        linux-node1
    fqdn_ip4:
        - 10.0.0.7
    fqdn_ip6:
    gpus:
        |_
          ----------
          model:
              SVGA II Adapter
          vendor:
              unknown
    host:
        linux-node1
    hwaddr_interfaces:
        ----------
        eth0:
            00:0c:29:2c:10:a1
        eth1:
            00:0c:29:2c:10:ab
        lo:
            00:00:00:00:00:00
    id:
        linux-node1
    init:
        upstart
    ip4_interfaces:
        ----------
        eth0:
            - 10.0.0.7
        eth1:
            - 172.16.1.7
        lo:
            - 127.0.0.1
    ip6_interfaces:
        ----------
        eth0:
            - fe80::20c:29ff:fe2c:10a1
        eth1:
            - fe80::20c:29ff:fe2c:10ab
        lo:
            - ::1
    ip_interfaces:
        ----------
        eth0:
            - 10.0.0.7
            - fe80::20c:29ff:fe2c:10a1
        eth1:
            - 172.16.1.7
            - fe80::20c:29ff:fe2c:10ab
        lo:
            - 127.0.0.1
            - ::1
    ipv4:
        - 10.0.0.7
        - 127.0.0.1
        - 172.16.1.7
    ipv6:
        - ::1
        - fe80::20c:29ff:fe2c:10a1
        - fe80::20c:29ff:fe2c:10ab
    kernel:
        Linux
    kernelrelease:
        2.6.32-573.el6.x86_64
    locale_info:
        ----------
        defaultencoding:
            UTF8
        defaultlanguage:
            zh_CN
        detectedencoding:
            UTF-8
    localhost:
        linux-node1
    lsb_distrib_codename:
        Final
    lsb_distrib_id:
        CentOS
    lsb_distrib_release:
        6.7
    machine_id:
        53d3f8757a7bdf1be87664bd00000012
    manufacturer:
        VMware, Inc.
    master:
        10.0.0.7
    mdadm:
    mem_total:
        992
    nodename:
        linux-node1
    num_cpus:
        1
    num_gpus:
        1
    os:
        CentOS
    os_family:
        RedHat
    osarch:
        x86_64
    oscodename:
        Final
    osfinger:
        CentOS-6
    osfullname:
        CentOS
    osmajorrelease:
        6
    osrelease:
        6.7
    osrelease_info:
        - 6
        - 7
    path:
        /sbin:/usr/sbin:/bin:/usr/bin
    productname:
        VMware Virtual Platform
    ps:
        ps -efH
    pythonexecutable:
        /usr/bin/python2.6
    pythonpath:
        - /usr/bin
        - /usr/lib64/python26.zip
        - /usr/lib64/python2.6
        - /usr/lib64/python2.6/plat-linux2
        - /usr/lib64/python2.6/lib-tk
        - /usr/lib64/python2.6/lib-old
        - /usr/lib64/python2.6/lib-dynload
        - /usr/lib64/python2.6/site-packages
        - /usr/lib64/python2.6/site-packages/gtk-2.0
        - /usr/lib/python2.6/site-packages
    pythonversion:
        - 2
        - 6
        - 6
        - final
        - 0
    saltpath:
        /usr/lib/python2.6/site-packages/salt
    saltversion:
        2015.5.8
    saltversioninfo:
        - 2015
        - 5
        - 8
        - 0
    selinux:
        ----------
        enabled:
            False
        enforced:
            Disabled
    serialnumber:
        VMware-56 4d 3d be 86 1f f0 55-7e 57 0a 5a a5 2c 10 a1
    server_id:
        1879729795
    shell:
        /bin/bash
    virtual:
        VMware
    zmqversion:
        3.2.5
```

Show a single grain: the get method prints just the value, while item prints the key name as well.

```bash
[root@linux-node1 ~]# salt 'linux-node1*' grains.item fqdn
linux-node1:
    ----------
    fqdn:
        linux-node1
[root@linux-node1 ~]# salt 'linux-node1*' grains.get fqdn_ip4
linux-node1:
    - 10.0.0.7
```
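grains.get also accepts a colon-separated path into nested grains, e.g. `salt '*' grains.get ip_interfaces:eth0`. A small Python sketch of that lookup over a dict shaped like the grains.items output shown earlier:

```python
def grains_get(grains, path, default=None):
    """Walk a colon-separated key path into a nested grains dict,
    returning `default` when any key along the path is missing."""
    node = grains
    for key in path.split(":"):
        if not isinstance(node, dict) or key not in node:
            return default
        node = node[key]
    return node

# A fragment of the grains shown above for linux-node1.
grains = {
    "fqdn": "linux-node1",
    "ip_interfaces": {"eth0": ["10.0.0.7"], "eth1": ["172.16.1.7"]},
}

print(grains_get(grains, "ip_interfaces:eth0"))  # ['10.0.0.7']
print(grains_get(grains, "fqdn"))                # linux-node1
```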

7.1.2 Targeting Minions with Grains

Try targeting minions by grain; -G selects targets by grain value:

```bash
[root@linux-node1 ~]# salt -G 'os:centos' grains.get fqdn
linux-node2:
    linux-node2
linux-node1:
    linux-node1
```

Edit the minion config file to set a simple grain by hand:

```bash
[root@linux-node1 ~]# sed -n '84,87p' /etc/salt/minion
grains:
  roles:
    - webserver
    - memcache
```

Restart the minion and test the hand-set grain:

```bash
[root@linux-node1 ~]# /etc/init.d/salt-minion restart
Stopping salt-minion daemon:                               [  OK  ]
Starting salt-minion daemon:                               [  OK  ]
[root@linux-node1 ~]# salt -G 'roles:memcache' cmd.run 'uptime'
linux-node1:
    20:43:25 up 1 day, 5:21, 2 users, load average: 0.15, 0.04, 0.01
```

Grains can also be added via /etc/salt/grains, which is read by default; simply write them into that file:

```bash
[root@linux-node2 ~]# cat /etc/salt/grains
app:
  nginx
[root@linux-node2 ~]# /etc/init.d/salt-minion restart
Stopping salt-minion daemon:                               [  OK  ]
Starting salt-minion daemon:                               [  OK  ]
[root@linux-node1 ~]# salt '*' grains.item app
linux-node2:
    ----------
    app:
        nginx
linux-node1:
    ---
```

7.1.3 Using Grains in the Top File

```bash
[root@linux-node1 salt]# cat top.sls
base:
  'app:nginx':          # match on this grain value
    - match: grain      # declare grain matching
    - apache
```

7.1.4 Using Grains in Jinja Templates

Detailed usage comes later; just a brief example here.

```yaml
keepalived-server:
  file.managed:
    - name: /etc/keepalived/keepalived.conf
    - source: salt://cluster/files/haproxy-outside-keepalived.conf
    - mode: 644
    - user: root
    - group: root
    - template: jinja
    {% if grains['fqdn'] == 'ip-172-31-43-148.eu-west-1.compute.internal' %}
    - ROUTID: haproxy_ha
    - ROLE: MASTER
    - PRIORITYID: 150
    {% elif grains['fqdn'] == 'ip-172-31-43-123.eu-west-1.compute.internal' %}
    - ROUTID: haproxy_ha
    - ROLE: BACKUP
    - PRIORITYID: 100
    {% endif %}
```

7.2 Pillar

7.2.1 Introduction to Pillar

Pillar is a very important component of Salt. It defines whatever data you need for specific minions, and that data can then be consumed by Salt's other components. Pillar was introduced in Salt 0.9.8. Once rendered, pillar is a nested dict: the top-level keys are minion IDs, each mapping to that minion's pillar data, which is itself key/value data. This highlights a defining property of pillar: the data is bound to a specific minion, and each minion can see only its own data, so pillar can carry sensitive values (by design, Salt uses a separate encrypted session for pillar precisely to keep sensitive data safe). Where can pillar be used?

**Sensitive data**

For example SSH keys and certificates. Because pillar uses a separate encrypted session, these values are never exposed to other minions.

**Variables**

Platform differences can be handled in pillar, for example setting the package name per operating system and referencing it from states.

**Any other data**

Anything else you need can go into pillar: user-to-UID mappings, minion roles, and so on.

7.2.2 Pillar Basics

Enable the master's built-in pillar data in the config file (it is off by default):

```bash
[root@linux-node1 ~]# sed -n '552p' /etc/salt/master
pillar_opts: True
[root@linux-node1 ~]# /etc/init.d/salt-master restart
Stopping salt-master daemon:                               [  OK  ]
Starting salt-master daemon:                               [  OK  ]
```

View the pillar items the master ships with. In production this is normally left disabled, since the built-in pillar data is of little use; pillar_opts is usually set to False and self-defined pillar data is used instead.

```bash
[root@linux-node1 ~]# salt 'linux-node1*' pillar.items
linux-node1:
    ----------
    master:
        ----------
        __role:
            master
        auth_mode:
            1
        auto_accept:
            False
        cache_sreqs:
            True
        cachedir:
            /var/cache/salt/master
        cli_summary:
            False
        client_acl:
            ----------
        client_acl_blacklist:
            ----------
        cluster_masters:
        cluster_mode:
            paranoid
        con_cache:
            False
        conf_file:
            /etc/salt/master
        config_dir:
            /etc/salt
        cython_enable:
            False
        daemon:
            True
        default_include:
            master.d/*.conf
        enable_gpu_grains:
            False
        enforce_mine_cache:
            False
        enumerate_proxy_minions:
            False
        environment:
            None
        event_return:
        event_return_blacklist:
        event_return_queue:
            0
        event_return_whitelist:
        ext_job_cache:
        ext_pillar:
        extension_modules:
            /var/cache/salt/extmods
        external_auth:
            ----------
        failhard:
            False
        file_buffer_size:
            1048576
        file_client:
            local
        file_ignore_glob:
            None
        file_ignore_regex:
            None
        file_recv:
            False
        file_recv_max_size:
            100
        file_roots:
            ----------
            base:
                - /srv/salt
        fileserver_backend:
            - roots
        fileserver_followsymlinks:
            True
        fileserver_ignoresymlinks:
            False
        fileserver_limit_traversal:
            False
        gather_job_timeout:
            10
        gitfs_base:
            master
        gitfs_env_blacklist:
        gitfs_env_whitelist:
        gitfs_insecure_auth:
            False
        gitfs_mountpoint:
        gitfs_passphrase:
        gitfs_password:
        gitfs_privkey:
        gitfs_pubkey:
        gitfs_remotes:
        gitfs_root:
        gitfs_user:
        hash_type:
            md5
        hgfs_base:
            default
        hgfs_branch_method:
            branches
        hgfs_env_blacklist:
        hgfs_env_whitelist:
        hgfs_mountpoint:
        hgfs_remotes:
        hgfs_root:
        id:
            linux-node1
        interface:
            0.0.0.0
        ioflo_console_logdir:
        ioflo_period:
            0.01
        ioflo_realtime:
            True
        ioflo_verbose:
            0
        ipv6:
            False
        jinja_lstrip_blocks:
            False
        jinja_trim_blocks:
            False
        job_cache:
            True
        keep_jobs:
            24
        key_logfile:
            /var/log/salt/key
        keysize:
            2048
        log_datefmt:
            %H:%M:%S
        log_datefmt_logfile:
            %Y-%m-%d %H:%M:%S
        log_file:
            /var/log/salt/master
        log_fmt_console:
            [%(levelname)-8s] %(message)s
        log_fmt_logfile:
            %(asctime)s,%(msecs)03.0f [%(name)-17s][%(levelname)-8s][%(process)d] %(message)s
        log_granular_levels:
            ----------
        log_level:
            warning
        loop_interval:
            60
        maintenance_floscript:
            /usr/lib/python2.6/site-packages/salt/daemons/flo/maint.flo
        master_floscript:
            /usr/lib/python2.6/site-packages/salt/daemons/flo/master.flo
        master_job_cache:
            local_cache
        master_pubkey_signature:
            master_pubkey_signature
        master_roots:
            ----------
            base:
                - /srv/salt-master
        master_sign_key_name:
            master_sign
        master_sign_pubkey:
            False
        master_tops:
            ----------
        master_use_pubkey_signature:
            False
        max_event_size:
            1048576
        max_minions:
            0
        max_open_files:
            100000
        minion_data_cache:
            True
        minionfs_blacklist:
        minionfs_env:
            base
        minionfs_mountpoint:
        minionfs_whitelist:
        nodegroups:
            ----------
        open_mode:
            False
        order_masters:
            False
        outputter_dirs:
        peer:
            ----------
        permissive_pki_access:
            False
        pidfile:
            /var/run/salt-master.pid
        pillar_opts:
            True
        pillar_roots:
            ----------
            base:
                - /srv/pillar
        pillar_safe_render_error:
            True
        pillar_source_merging_strategy:
            smart
        pillar_version:
            2
        pillarenv:
            None
        ping_on_rotate:
            False
        pki_dir:
            /etc/salt/pki/master
        preserve_minion_cache:
            False
        pub_hwm:
            1000
        publish_port:
            4505
        publish_session:
            86400
        queue_dirs:
        raet_alt_port:
            4511
        raet_clear_remotes:
            False
        raet_main:
            True
        raet_mutable:
            False
        raet_port:
            4506
        range_server:
            range:80
        reactor:
        reactor_refresh_interval:
            60
        reactor_worker_hwm:
            10000
        reactor_worker_threads:
            10
        renderer:
            yaml_jinja
        ret_port:
            4506
        root_dir:
            /
        rotate_aes_key:
            True
        runner_dirs:
        saltversion:
            2015.5.8
        search:
        search_index_interval:
            3600
        serial:
            msgpack
        show_jid:
            False
        show_timeout:
            True
        sign_pub_messages:
            False
        sock_dir:
            /var/run/salt/master
        sqlite_queue_dir:
            /var/cache/salt/master/queues
        ssh_passwd:
        ssh_port:
            22
        ssh_scan_ports:
            22
        ssh_scan_timeout:
            0.01
        ssh_sudo:
            False
        ssh_timeout:
            60
        ssh_user:
            root
        state_aggregate:
            False
        state_auto_order:
            True
        state_events:
            False
        state_output:
            full
        state_top:
            salt://top.sls
        state_top_saltenv:
            None
        state_verbose:
            True
        sudo_acl:
            False
        svnfs_branches:
            branches
        svnfs_env_blacklist:
        svnfs_env_whitelist:
        svnfs_mountpoint:
        svnfs_remotes:
        svnfs_root:
        svnfs_tags:
            tags
        svnfs_trunk:
            trunk
        syndic_dir:
            /var/cache/salt/master/syndics
        syndic_event_forward_timeout:
            0.5
        syndic_jid_forward_cache_hwm:
            100
        syndic_master:
        syndic_max_event_process_time:
            0.5
        syndic_wait:
            5
        timeout:
            5
        token_dir:
            /var/cache/salt/master/tokens
        token_expire:
            43200
        transport:
            zeromq
        user:
            root
        verify_env:
            True
        win_gitrepos:
            - https://github.com/saltstack/salt-winrepo.git
        win_repo:
            /srv/salt/win/repo
        win_repo_mastercachefile:
            /srv/salt/win/repo/winrepo.p
        worker_floscript:
            /usr/lib/python2.6/site-packages/salt/daemons/flo/worker.flo
        worker_threads:
            5
        zmq_filtering:
            False
```

7.2.3 Setting Up the Pillar Environment

Edit the master config to set pillar_roots. As you can see, pillar supports environments too; a base environment must likewise exist, and pillar also supports a top file that specifies which pillar data applies to which minions.

```bash
[root@linux-node1 ~]# sed -n '529,531p' /etc/salt/master
pillar_roots:
  base:
    - /srv/pillar
[root@linux-node1 ~]# /etc/init.d/salt-master restart
Stopping salt-master daemon:                               [  OK  ]
Starting salt-master daemon:                               [  OK  ]
```

7.2.4 Defining Pillar Data by Hand

```bash
[root@linux-node1 pillar]# pwd
/srv/pillar
[root@linux-node1 pillar]# cat apache.sls
{% if grains['os'] == 'CentOS' %}
apache: httpd
{% elif grains['os'] == 'Debian' %}
apache: apache2
{% endif %}
[root@linux-node1 pillar]# cat top.sls
base:
  'linux-node2*':
    - apache
[root@linux-node1 pillar]# salt '*' pillar.items
linux-node1:
    ----------
linux-node2:
    ----------
    apache:
        httpd
```

After changing pillar contents, refresh pillar on the minions:

```bash
[root@linux-node1 pillar]# salt '*' saltutil.refresh_pillar
linux-node2:
    True
linux-node1:
    True
```
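The jinja conditional in apache.sls boils down to mapping the os grain to a package name. A Python sketch of that logic, mirroring the sls above (other distros fall through and render no pillar data):

```python
def apache_pillar(os_grain):
    """Mirror the jinja conditional in apache.sls: pick the Apache
    package name from the minion's os grain."""
    if os_grain == "CentOS":
        return {"apache": "httpd"}
    elif os_grain == "Debian":
        return {"apache": "apache2"}
    return {}  # nothing rendered for other distros

print(apache_pillar("CentOS"))  # {'apache': 'httpd'}
print(apache_pillar("Debian"))  # {'apache': 'apache2'}
```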

7.2.5 Targeting Minions with Pillar

salt -I selects targets by pillar value:

```bash
[root@linux-node1 pillar]# salt -I 'apache:httpd' cmd.run 'cd /etc/salt && pwd'
linux-node2:
    /etc/salt
```
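Where pillar really pays off is inside state files: instead of hard-coding httpd, a state can reference the per-minion value. A sketch of the earlier Apache install state rewritten to read the package name from pillar (this assumes the apache pillar key defined above):

```yaml
apache-install:
  pkg.installed:
    - name: {{ pillar['apache'] }}
```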

7.3 Differences Between Grains and Pillar

  • Grains hold static data that rarely changes; pillar is the opposite and holds dynamic data.
  • Grains are stored on the minion and can be refreshed with saltutil.sync_grains; pillar lives on the master and is refreshed with saltutil.refresh_pillar.
  • A minion may modify its own grains, adding or deleting values, which makes grains useful for tasks like asset management. Pillar data is defined on the master for designated minions, visible only to those minions and not modifiable by them, which makes it suitable for sensitive data.