Remote Management of ZFS servers with Puppet and RAD

A few months ago I had the chance to test an Oracle ZFS Storage Appliance (ZFS SA), and the appliance made a very good impression in many areas. It especially reminded me that ZFS shines even more when you use it as NAS (Network Attached Storage), i.e. as a central file server which shares its storage capacity, for example via NFS.

But I did not really like the distributed storage configuration. A database server, for example, needs the correct ZFS properties set on the ZFS storage appliance via the web interface or the custom CLI, and also the corresponding NFS mount options in /etc/vfstab on the database server itself. Maybe this sounds like no big issue to you, for example if you are also the admin responsible for the storage appliance, or if you have a perfect collaboration with the storage team. But especially if you want to automate the storage configuration, this distribution adds significant complexity.
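
To illustrate the split, here is a hypothetical example (dataset, server name and mount options are made up, and I use plain zfs commands instead of the appliance CLI for brevity). The storage side gets its properties set:

# zfs set recordsize=1M rpool/project1/db
# zfs set logbias=throughput rpool/project1/db

while the database server carries a matching line in /etc/vfstab:

fileserver:/mnt/project1/db - /mnt/db nfs - yes rw,bg,hard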

Of course I wanted to manage this configuration with Puppet, just like a local ZFS filesystem.
I don't yet have a ZFS SA at work to deal with, but the availability of the new RAD REST interface in Solaris 11.3 motivated me to experiment with my own Puppet resource type to manage remote ZFS filesystems directly from the client server.

Please note: The Puppet provider described in the following examples was developed for the Solaris RAD API and not the Oracle ZFS SA API; therefore it currently does not support the ZFS SA.

The new Puppet resource type is called remote_zfs and is based on the local_zfs type which I published in the last post. To start using this type you need my radproviders Puppet module and an enabled remote RAD REST service; see the document DOC-918902 by Gary Pennington and Glynn Foster.
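
As a minimal sketch, enabling the remote RAD instance on the ZFS server looks like this (the HTTPS port configuration itself is described in DOC-918902 and not repeated here):

# svcadm enable svc:/system/rad:remote
# svcs rad:remote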

After you have configured the HTTPS port in the SMF manifest (e.g. 12303), you need to configure that address and the credentials in rad_config.json in:

radproviders/lib/puppet_x/mzachh/rad/rad_config.json
{
    "default" : "zfsserver1",
    "connections" : {
        "zfsserver1" : {
            "address" : "https://zfsserver1.example.com:12303",
            "verify_ssl" : "false",
            "auth" : {
                "username" : "root",
                "password" : "password1",
                "scheme" : "pam",
                "preserve" : true,
                "timeout" : -1
            }
        }
    }
}
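
You can test these connection settings independently of Puppet. As a sketch, assuming the standard RAD REST authentication endpoint of Solaris 11.3, the following curl call logs in with the same payload as the auth block above:

# curl -k -c cookie.txt -X POST -H "Content-Type: application/json" \
    -d '{"username":"root","password":"password1","scheme":"pam","preserve":true,"timeout":-1}' \
    https://zfsserver1.example.com:12303/api/com.oracle.solaris.rad.authentication/1.0/Session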

Now you can start using the new type:

create-fs.pp
remote_zfs { "rpool/project1/video":
  ensure      => present,
  mountpoint  => "/mnt/project1/video",
  compression => "on"
}

As you see, it uses the same resource attributes as the original zfs type. In the following extended example you can see that a simple resource dependency (require => Remote_zfs[...]) is enough to tie the configuration of the networked ZFS server to the client server:

create-fs.pp
remote_zfs { "rpool/project1/video":
  ensure      => present,
  compression => "on",
  recordsize  => "1M",
  logbias     => "throughput",
  sharenfs    => "on",
  mountpoint  => "/mnt/project1/video"
}

mount { "/mnt/video":
  require => Remote_zfs["rpool/project1/video"],
  device  => "fileserver:/mnt/project1/video",
  fstype  => "nfs",
  ensure  => "mounted",
  options => "-",
  atboot  => false
}
# puppet apply create-fs.pp
Notice: /Stage[main]/Main/Remote_zfs[fileserver#rpool/project1/video]/ensure: created
Notice: /Stage[main]/Main/Mount[/mnt/video]/ensure: ensure changed 'unmounted' to 'mounted'
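
A quick sanity check after the run could look like this (the first command on the client, the second on the ZFS server):

# df -n /mnt/video
# zfs get compression,recordsize rpool/project1/video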

You can use the root account in the config file, but you probably don't want to distribute the root password of your central file server to all client servers. Luckily, you can use a non-root user by delegating ZFS permissions:

# useradd zfsadmin1
# passwd zfsadmin1 (set a password)
# mkdir /mnt/project1
# chown -R zfsadmin1 /mnt/project1
# zfs create rpool/project1
# zfs allow zfsadmin1 compression,create,destroy,mount,mountpoint,share,recordsize,logbias,sharenfs rpool/project1
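
You can verify the delegation directly with zfs allow:

# zfs allow rpool/project1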

If you now set the zfsadmin1 user in rad_config.json, the Puppet provider uses the non-root user.
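
For example, the auth block from the configuration above would change to (password shortened):

            "auth" : {
                "username" : "zfsadmin1",
                "password" : "...",
                "scheme" : "pam",
                "preserve" : true,
                "timeout" : -1
            }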

If managing the storage directly with Puppet is too scary for you (which is understandable and fine), you could use Puppet in read-only (noop) mode. That way you can still use Puppet reporting, and your user only needs read permissions:

create-fs.pp
remote_zfs { "rpool/project1/video":
  ensure      => present,
  noop        => true,
  compression => "on"
}
# puppet apply create-fs.pp
Notice: /Stage[main]/Main/Remote_zfs[rpool/project1/video]/compression: current_value off, should be on (noop)
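
Instead of the noop metaparameter on each resource, you can also run the whole manifest in noop mode:

# puppet apply --noop create-fs.pp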

If you have more than one networked ZFS server, setting a default connection in rad_config.json is not enough. But you can encode the connection identifier into the resource name as <connection identifier>#<filesystem>, for example:

create-fs.pp
remote_zfs { "zfsserver2#rpool/project1/video":
  ensure      => present,
  mountpoint  => "/mnt/project1/video",
  compression => "on"
}
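
This assumes that rad_config.json contains a matching zfsserver2 entry, for example (passwords shortened):

{
    "default" : "zfsserver1",
    "connections" : {
        "zfsserver1" : {
            "address" : "https://zfsserver1.example.com:12303",
            "verify_ssl" : "false",
            "auth" : { "username" : "zfsadmin1", "password" : "...", "scheme" : "pam", "preserve" : true, "timeout" : -1 }
        },
        "zfsserver2" : {
            "address" : "https://zfsserver2.example.com:12303",
            "verify_ssl" : "false",
            "auth" : { "username" : "zfsadmin1", "password" : "...", "scheme" : "pam", "preserve" : true, "timeout" : -1 }
        }
    }
}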

The code is still quite new and not yet pushed to Puppet Forge; you can get it from GitHub:

Do you think this could be useful? Feel free to leave a comment.
