ZFS is a great filesystem with many, many features. But for all that, it is still easy to manage; in my opinion, easier than other filesystems. Managing storage is usually a high-risk task, which makes automation harder. Would you change the size of a critical filesystem with an automated method? If it is an ext3 filesystem on LVM and software RAID, maybe not. If it is on ZFS, a low-risk modification of the quota can be enough, e.g. zfs set quota=800g rpool/criticalfilesystem. That is easy to automate. Nowadays automation even becomes necessary, because the number of ZFS filesystems keeps growing. And if you want to use more features, you will likely need to set more ZFS properties.
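The same quota change can also be expressed declaratively with Puppet's zfs resource type (introduced in the next section). A sketch, assuming the filesystem already exists:

```puppet
# Low-risk, idempotent quota change; the filesystem name is
# the example from the text above.
zfs { 'rpool/criticalfilesystem':
  ensure => present,
  quota  => '800G',
}
```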
Puppet has its own resource type for ZFS, which is pretty good: it makes it easy to manage all the ZFS properties, for example:
zfs { 'rpool/dbfs':
  ensure     => present,
  mountpoint => '/dbfs',
  recordsize => '16K',
  logbias    => 'throughput',
}
But usually managing the ZFS layer is not enough: you also want to change the permissions of the mountpoint, which is done with the file resource type. If you change the file owner, you also have a dependency on a user resource, and so on. In short, real-world Puppet manifests for managing ZFS easily become more complex than expected.
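To illustrate the dependency chain, here is a sketch of what a "simple" database filesystem can grow into; the user, group, and property values are made-up examples, not part of any module:

```puppet
# The file resource depends on both the ZFS filesystem
# (the mountpoint must exist) and the user owning it.
user { 'mysql':
  ensure => present,
}

zfs { 'rpool/dbfs':
  ensure     => present,
  mountpoint => '/dbfs',
}

file { '/dbfs':
  ensure  => directory,
  owner   => 'mysql',
  group   => 'bin',
  require => [ Zfs['rpool/dbfs'], User['mysql'] ],
}
```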
Module: zfsdir
In the last 12 months I have written some internal manifests to manage the ZFS filesystems for our databases, and I am refactoring them now. The most general use cases I will move into my first public Puppet module. I call it zfsdir because it is mostly an abstraction of the zfs and file resource types.
Basic usage
zfsdir { 'rpool/test':
  ensure => present,
  zfs    => {
    'mountpoint' => '/test',
  },
  file   => {
    'owner' => 'mysql',
    'group' => 'bin',
  },
}
In my old manifests I made the Puppet configuration dynamic by marking the ZFS filesystems with custom ZFS properties. If Puppet found these custom properties, the filesystem got configured. This served us quite well, but it required that these properties were set on the target system before the Puppet manifests were applied. Additionally, in the last year I have become a big fan of hiera. I also want to manage the ZFS configuration with hiera, so that I can just add ZFS filesystems to the hiera files for a server or group. It also simplifies version control.
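Marking a filesystem works with user-defined ZFS properties, whose names must contain a colon; the property name below is purely hypothetical:

```
# Hypothetical marker property; Puppet would then configure
# every filesystem on which it finds such a property.
zfs set site:puppet_managed=on rpool/dbfs
zfs get site:puppet_managed rpool/dbfs
```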
Hiera
You only need to add a hash in hiera, for example if you use YAML:

---
zfsdirs:
  rpool/mysqlbin:
    ensure: 'present'
    zfs:
      mountpoint: '/mysqlbin'
      compression: 'on'
    file:
      owner: 'mysql'
      group: 'bin'
  rpool/mysqldata:
    zfs:
      mountpoint: '/mysqldata'
      recordsize: '16K'
      logbias: 'throughput'
    file:
      owner: 'mysql'
      group: 'bin'
  rpool/dumpdir:
    zfs:
      mountpoint: '/dumpdir'
    file:
      mode: '0777'
And in the manifests you only need to add the following two lines:

$zfsdirs = hiera_hash('zfsdirs', {})
create_resources(zfsdir, $zfsdirs)
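If you are on Puppet 4 or later, the same effect can be achieved without create_resources, using lookup and an each loop; this is a sketch using modern language features, not something taken from the module:

```puppet
# Hash merge across the hierarchy, then one zfsdir per entry;
# the splat operator expands each hash into resource attributes.
lookup('zfsdirs', Hash, 'hash', {}).each |String $name, Hash $params| {
  zfsdir { $name:
    * => $params,
  }
}
```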
Test the module:
# puppet apply manage-zfsdir2.pp
Notice: /Stage[main]/Main/Zfsdir[rpool/mysqldata]/Zfs[rpool/mysqldata]/ensure: created
Notice: /Stage[main]/Main/Zfsdir[rpool/mysqldata]/File[/mysqldata]/owner: owner changed 'root' to 'mysql'
Notice: /Stage[main]/Main/Zfsdir[rpool/mysqldata]/File[/mysqldata]/group: group changed 'root' to 'bin'
Notice: /Stage[main]/Main/Zfsdir[rpool/mysqlbin]/Zfs[rpool/mysqlbin]/ensure: created
Notice: /Stage[main]/Main/Zfsdir[rpool/mysqlbin]/File[/mysqlbin]/owner: owner changed 'root' to 'mysql'
Notice: /Stage[main]/Main/Zfsdir[rpool/mysqlbin]/File[/mysqlbin]/group: group changed 'root' to 'bin'
Notice: /Stage[main]/Main/Zfsdir[rpool/dumpdir]/Zfs[rpool/dumpdir]/ensure: created
Notice: /Stage[main]/Main/Zfsdir[rpool/dumpdir]/File[/dumpdir]/mode: mode changed '0755' to '0777'
Notice: Finished catalog run in 1.74 seconds

# zfs get all rpool/mysqldata | grep local
rpool/mysqldata  logbias     throughput  local
rpool/mysqldata  mountpoint  /mysqldata  local
rpool/mysqldata  recordsize  16K         local

# ls -all /mysqldata/
total 24
drwxr-xr-x   2 mysql bin    2 Jan  5 20:40 .
drwxr-xr-x  29 root  root  32 Jan  5 20:40 ..
The first version of this module can be found on GitHub:
Module source: Zfsdir