This role can automatically create any number of zpools. To do so, you must specify at least their names and geometry with the `zfs_zpools` dictionary, for example:
```yaml
zfs_zpools:
  - name: "nvme"
    geometry: "mirror nvme0n1 nvme1n1"
```
This will create a zpool named `nvme` in mirror mode with the two drives `nvme0n1` and `nvme1n1`. It will also apply the following default options when creating the zpool:
```
-O compression=lz4 -O atime=off -O xattr=sa -O mountpoint=none -O acltype=posixacl
```
If you want different options, you can set them by adding the `options` keyword, for example:
```yaml
zfs_zpools:
  - name: "nvme"
    geometry: "mirror nvme0n1 nvme1n1"
    options: "-O compression=lz4 -O atime=off -O xattr=sa -O mountpoint=none -O acltype=posixacl"
```
After the zpool has been created, this role will export it and import it back with the `-d /dev/disk/by-id` option, so that future imports use the device ids.
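Concretely, that corresponds to running something like the following, using the pool from the example above (a sketch of the equivalent commands, not necessarily the role's exact tasks):

```shell
# Re-import the pool so its devices are referenced by id rather than
# by their /dev/sdX or /dev/nvme* names (sketch; the role may differ)
zpool export nvme
zpool import -d /dev/disk/by-id nvme
```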
This role can also create any number of datasets with any ZFS properties. The syntax is:
```yaml
zfs_dataset_list:
  - name: "tank/something"
    properties:
      mountpoint: "/media/something"
      recordsize: "1M"
  - name: "tank/something_else"
```
The previous example will simply result in the following `zfs` commands:
```
zfs create tank/something -o mountpoint=/media/something -o recordsize=1M
zfs create tank/something_else
```
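The mapping from list entries to commands can be sketched in a few lines of Python (the `build_zfs_create` helper is hypothetical and for illustration only; the role's own tasks may assemble the command differently):

```python
# Sketch: turn one zfs_dataset_list entry into a `zfs create` command line.
# build_zfs_create is a hypothetical helper, not part of the role.

def build_zfs_create(dataset):
    """Build a `zfs create` command from one zfs_dataset_list entry."""
    cmd = ["zfs", "create", dataset["name"]]
    # Each key/value pair under `properties` becomes a `-o key=value` flag.
    for key, value in dataset.get("properties", {}).items():
        cmd += ["-o", f"{key}={value}"]
    return " ".join(cmd)

datasets = [
    {"name": "tank/something",
     "properties": {"mountpoint": "/media/something", "recordsize": "1M"}},
    {"name": "tank/something_else"},
]

for d in datasets:
    print(build_zfs_create(d))
# zfs create tank/something -o mountpoint=/media/something -o recordsize=1M
# zfs create tank/something_else
```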
You can adjust the maximum size of the ARC, which is the maximum amount of memory that ZFS can use at any given time. The default is 1/8 of the total memory. You can adjust this setting by changing the `max_arc_size` variable. Since Ansible reports the host memory in MiB through `ansible_memtotal_mb`, while `max_arc_size` is expressed in bytes, set the variable like this to give ZoL the amount of RAM that you want:
```yaml
# 1/2 of RAM
max_arc_size: "{{ ansible_memtotal_mb * 524288 }}"
# 40% of RAM
max_arc_size: "{{ ansible_memtotal_mb * 419430 }}"
# 1/3 of RAM
max_arc_size: "{{ ansible_memtotal_mb * 349525 }}"
# 1/4 of RAM
max_arc_size: "{{ ansible_memtotal_mb * 262144 }}"
# 1/5 of RAM
max_arc_size: "{{ ansible_memtotal_mb * 209715 }}"
# 1/6 of RAM
max_arc_size: "{{ ansible_memtotal_mb * 174763 }}"
# 1/7 of RAM
max_arc_size: "{{ ansible_memtotal_mb * 149797 }}"
# 1/8 of RAM
max_arc_size: "{{ ansible_memtotal_mb * 131072 }}"
# 1/16 of RAM
max_arc_size: "{{ ansible_memtotal_mb * 65536 }}"
# 1/32 of RAM
max_arc_size: "{{ ansible_memtotal_mb * 32768 }}"
# 1/64 of RAM
max_arc_size: "{{ ansible_memtotal_mb * 16384 }}"
```
This is the same thing as `max_arc_size: {{ ansible_memtotal_mb * 1024 * 1024 / [RATIO] }}`, for example:

```
{{ ansible_memtotal_mb * 1024 * 1024 / 4 }} = {{ ansible_memtotal_mb * 262144 }}
```
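The rounded multipliers in the table above can be checked with a few lines of Python (illustrative only):

```python
# Each multiplier above is just 1 MiB (1048576 bytes) divided by the
# ratio, rounded to the nearest integer.
MIB = 1024 * 1024  # ansible_memtotal_mb is in MiB, max_arc_size in bytes

ratios = {2: 524288, 2.5: 419430, 3: 349525, 4: 262144, 5: 209715,
          6: 174763, 7: 149797, 8: 131072, 16: 65536, 32: 32768, 64: 16384}

for ratio, multiplier in ratios.items():
    assert round(MIB / ratio) == multiplier
```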
By default, ZFS sets `zfs_arc_dnode_limit_percent` to 10% and `zfs_arc_meta_limit_percent` to 75%.
From experience, most of the time it doesn't make any sense to limit ZFS for either value, so they are both set to 95% of the maximum memory in this role.
You can adjust both parameters with these variables if needed:

```yaml
zfs_arc_meta_limit_percent: 95
zfs_arc_dnode_limit_percent: 95
```
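On the kernel side, these percentages correspond to ZFS module parameters; a modprobe configuration equivalent to the role's defaults would look something like this (the file path is an assumption, not necessarily what the role writes):

```
# /etc/modprobe.d/zfs.conf (hypothetical path)
options zfs zfs_arc_meta_limit_percent=95
options zfs zfs_arc_dnode_limit_percent=95
```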
Currently, you can set the `zfs_txg_timeout` module option with this variable:

```yaml
zfs_txg_timeout: n
```

where `n` is a number of seconds (the default is 5).
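As a kernel module option, this typically translates to a modprobe entry like the following, and the running value can be inspected under `/sys` (the file path is an assumption; the role may write it elsewhere):

```
# /etc/modprobe.d/zfs.conf (hypothetical path), e.g. for a 10 s timeout:
options zfs zfs_txg_timeout=10

# Current value at runtime:
#   cat /sys/module/zfs/parameters/zfs_txg_timeout
```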
On Ubuntu, a crontab entry runs a scrub on the second Sunday of every month at midnight, which disrupts normal server usage. By default, this role deactivates this scrub; you can leave it active by setting this variable:

```yaml
zfs_auto_scrub: True
```
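For reference, on Ubuntu this job is shipped by the `zfsutils-linux` package and looks roughly like this (the exact line may vary between releases; check the file on your host):

```
# /etc/cron.d/zfsutils-linux
# Scrub all pools the second Sunday of every month.
24 0 8-14 * * root [ $(date +\%w) -eq 0 ] && [ -x /usr/lib/zfs-linux/scrub ] && /usr/lib/zfs-linux/scrub
```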