Installing ZFS on Debian Linux
Saturday, 6 Sep 2014
ZFS is arguably one of the most reliable and advanced filesystems available, with over a decade of stable implementations in the Solaris and FreeBSD operating systems.
apt fails
For several years a port has been available, including Debian packages. Following the official install steps, we install the required GPG keys and then retrieve the packages from the archive:
apt-get upgrade -y
gpg --keyserver pool.sks-keyservers.net --recv-keys 9A55B33CA71C1E00
gpg --keyserver pgp.mit.edu --recv-keys 0AB9E991C6AF658B
echo deb http://archive.zfsonlinux.org/debian wheezy-daily main > \
  /etc/apt/sources.list.d/zfsonlinux.list
apt-get update
apt-get install debian-zfs -y
And… sadly… this doesn’t work in GCE.
Building from source
Instead, we’ll need to build from source:
cd /root
apt-get upgrade -y
apt-get install -y build-essential gawk alien fakeroot linux-headers-$(uname -r)
apt-get install -y zlib1g-dev uuid-dev libblkid-dev libselinux-dev parted lsscsi wget
wget http://archive.zfsonlinux.org/downloads/zfsonlinux/spl/spl-0.6.3.tar.gz
wget http://archive.zfsonlinux.org/downloads/zfsonlinux/zfs/zfs-0.6.3.tar.gz
tar xzf spl*.gz
tar xzf zfs*.gz
cd ./spl*
./configure && make deb-utils deb-kmod
dpkg -i *.deb
cd ../zfs*
./configure && make deb-utils deb-kmod
cd ..
mkdir packages
mv spl*/*.deb zfs*/*.deb packages/
dpkg -i packages/*.deb
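With the packages installed, it’s worth a quick sanity check that the kernel module actually loads and reports the version we just built. A sketch (guarded so it is a harmless no-op on a machine without ZFS installed):

```shell
# Sanity check: load the module and read back its version.
# /sys/module/zfs/version is populated once the module is loaded.
if command -v modprobe >/dev/null 2>&1 && modinfo zfs >/dev/null 2>&1; then
    modprobe zfs
    cat /sys/module/zfs/version   # should report 0.6.3 for this build
else
    echo "zfs module not installed; skipping check"
fi
```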
Create a GCE instance
gcloud compute instances create \
  --zone $GCE_ZONE \
  --project $GCE_PROJECT \
  --boot-disk-type pd-standard \
  --machine-type n1-standard-2 \
  --format=yaml \
  d1
YAML output
Created [https://www.googleapis.com/compute/v1/projects/sw-lab/zones/europe-west1-b/instances/d1].
---
canIpForward: false
creationTimestamp: '2014-09-06T05:18:10.677-07:00'
disks:
- autoDelete: true
  boot: true
  deviceName: persistent-disk-0
  index: 0
  kind: compute#attachedDisk
  mode: READ_WRITE
  source: d1
  type: PERSISTENT
id: '3133832163030713339'
kind: compute#instance
machineType: n1-standard-2
metadata:
  fingerprint: HmQ7mkgzeTI=
  kind: compute#metadata
name: d1
networkInterfaces:
- accessConfigs:
  - kind: compute#accessConfig
    name: external-nat
    natIP: 130.211.60.134
    type: ONE_TO_ONE_NAT
  name: nic0
  network: default
  networkIP: 10.240.59.42
scheduling:
  automaticRestart: true
  onHostMaintenance: MIGRATE
selfLink: https://www.googleapis.com/compute/v1/projects/sw-lab/zones/europe-west1-b/instances/d1
serviceAccounts:
- email: 47829432879432-jfdhagfdgasyifsa@developer.gserviceaccount.com
  scopes:
  - https://www.googleapis.com/auth/devstorage.read_only
status: RUNNING
tags:
  fingerprint: 42WmSpB8rSM=
zone: europe-west1-b
Create Disks for Mirroring
gcloud compute disks create \
  --zone $GCE_ZONE \
  --project $GCE_PROJECT \
  --size 200 \
  --type pd-standard \
  --format=yaml \
  z1m0 z1m1
YAML output
Created [https://www.googleapis.com/compute/v1/projects/sw-lab/zones/europe-west1-b/disks/z1m0].
Created [https://www.googleapis.com/compute/v1/projects/sw-lab/zones/europe-west1-b/disks/z1m1].
---
creationTimestamp: '2014-09-06T05:16:57.968-07:00'
id: '16004603353646764437'
kind: compute#disk
name: z1m0
selfLink: https://www.googleapis.com/compute/v1/projects/sw-lab/zones/europe-west1-b/disks/z1m0
sizeGb: '200'
status: READY
type: pd-standard
zone: europe-west1-b
---
creationTimestamp: '2014-09-06T05:16:58.085-07:00'
id: '3611550562822565338'
kind: compute#disk
name: z1m1
selfLink: https://www.googleapis.com/compute/v1/projects/sw-lab/zones/europe-west1-b/disks/z1m1
sizeGb: '200'
status: READY
type: pd-standard
zone: europe-west1-b
Attach the Disks
gcloud compute instances attach-disk \
  --zone $GCE_ZONE \
  --project $GCE_PROJECT \
  --disk z1m0 \
  --device-name z1m0 \
  --format=yaml \
  d1
YAML output
Updated [https://www.googleapis.com/compute/v1/projects/sw-lab/zones/europe-west1-b/instances/d1].
---
canIpForward: false
creationTimestamp: '2014-09-06T04:30:04.764-07:00'
disks:
- autoDelete: true
  boot: true
  deviceName: persistent-disk-0
  index: 0
  kind: compute#attachedDisk
  mode: READ_WRITE
  source: d1
  type: PERSISTENT
- autoDelete: false
  deviceName: z1m0
  index: 1
  kind: compute#attachedDisk
  mode: READ_WRITE
  source: z1m0
  type: PERSISTENT
id: '197810110694391657'
kind: compute#instance
machineType: g1-small
metadata:
  fingerprint: HmQ7mkgzeTI=
  kind: compute#metadata
name: d1
networkInterfaces:
- accessConfigs:
  - kind: compute#accessConfig
    name: external-nat
    natIP: 130.211.107.123
    type: ONE_TO_ONE_NAT
  name: nic0
  network: default
  networkIP: 10.240.88.55
scheduling:
  automaticRestart: true
  onHostMaintenance: MIGRATE
selfLink: https://www.googleapis.com/compute/v1/projects/sw-lab/zones/europe-west1-b/instances/d1
serviceAccounts:
- email: 47829432879432-jfdhagfdgasyifsa@developer.gserviceaccount.com
  scopes:
  - https://www.googleapis.com/auth/devstorage.read_only
status: RUNNING
tags:
  fingerprint: 42WmSpB8rSM=
zone: europe-west1-b
gcloud compute instances attach-disk \
  --zone $GCE_ZONE \
  --project $GCE_PROJECT \
  --disk z1m1 \
  --device-name z1m1 \
  --format=yaml \
  d1
YAML output
Updated [https://www.googleapis.com/compute/v1/projects/sw-lab/zones/europe-west1-b/instances/d1].
---
canIpForward: false
creationTimestamp: '2014-09-06T04:30:04.764-07:00'
disks:
- autoDelete: true
  boot: true
  deviceName: persistent-disk-0
  index: 0
  kind: compute#attachedDisk
  mode: READ_WRITE
  source: d1
  type: PERSISTENT
- autoDelete: false
  deviceName: z1m0
  index: 1
  kind: compute#attachedDisk
  mode: READ_WRITE
  source: z1m0
  type: PERSISTENT
- autoDelete: false
  deviceName: z1m1
  index: 2
  kind: compute#attachedDisk
  mode: READ_WRITE
  source: z1m1
  type: PERSISTENT
id: '197810110694391657'
kind: compute#instance
machineType: g1-small
metadata:
  fingerprint: HmQ7mkgzeTI=
  kind: compute#metadata
name: d1
networkInterfaces:
- accessConfigs:
  - kind: compute#accessConfig
    name: external-nat
    natIP: 130.211.107.123
    type: ONE_TO_ONE_NAT
  name: nic0
  network: default
  networkIP: 10.240.88.55
scheduling:
  automaticRestart: true
  onHostMaintenance: MIGRATE
selfLink: https://www.googleapis.com/compute/v1/projects/sw-lab/zones/europe-west1-b/instances/d1
serviceAccounts:
- email: 47829432879432-jfdhagfdgasyifsa@developer.gserviceaccount.com
  scopes:
  - https://www.googleapis.com/auth/devstorage.read_only
status: RUNNING
tags:
  fingerprint: 42WmSpB8rSM=
zone: europe-west1-b
Inside the Instance
Let’s ssh in and see the two named disk devices in our GCE instance:
root@d1:/home/dch# ls -l /dev/disk/by-id/go*
lrwxrwxrwx 1 root root  9 Sep  6 12:20 /dev/disk/by-id/google-persistent-disk-0 -> ../../sda
lrwxrwxrwx 1 root root 10 Sep  6 12:21 /dev/disk/by-id/google-persistent-disk-0-part1 -> ../../sda1
lrwxrwxrwx 1 root root  9 Sep  6 12:23 /dev/disk/by-id/google-z1m0 -> ../../sdb
lrwxrwxrwx 1 root root  9 Sep  6 12:24 /dev/disk/by-id/google-z1m1 -> ../../sdc
Now assemble them into a mirrored zpool. We’ll also set a few properties that make the pool more useful: disabling access-time updates and enabling compression on the root dataset, which shares the same name as the pool by default.
# zpool create -f -o ashift=12 zroot mirror /dev/disk/by-id/google-z1*
# zpool list
NAME    SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
zroot   199G   604K   199G     0%  1.00x  ONLINE  -
# zpool status
  pool: zroot
 state: ONLINE
  scan: none requested
config:

        NAME             STATE     READ WRITE CKSUM
        zroot            ONLINE       0     0     0
          mirror-0       ONLINE       0     0     0
            google-z1m0  ONLINE       0     0     0
            google-z1m1  ONLINE       0     0     0

errors: No known data errors
# zfs set compression=lz4 zroot
# zfs set atime=off zroot
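The zfs get subcommand confirms the properties took effect; a quick check, guarded here so the snippet is safe to paste on a machine without ZFS:

```shell
# Read back the properties we just set on the root dataset
if command -v zfs >/dev/null 2>&1; then
    zfs get compression,atime zroot
else
    echo "zfs not installed; skipping"
fi
```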
To detach the disks, first unmount, snapshot, or otherwise quiesce the pool in the instance, and then:
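The cleanest way to quiesce is to export the pool from inside the instance: zpool export unmounts every dataset and releases the disks. A guarded sketch:

```shell
# Release the disks so GCE can detach them safely
# (no-op on a machine without the ZFS tools installed)
if command -v zpool >/dev/null 2>&1; then
    zpool export zroot
else
    echo "zpool not installed; skipping"
fi
```

With the pool exported, the disks can be detached and deleted: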
gcloud compute instances detach-disk d1 \
  --zone $GCE_ZONE \
  --project $GCE_PROJECT \
  --format=yaml \
  --disk z1m0
gcloud compute instances detach-disk d1 \
  --zone $GCE_ZONE \
  --project $GCE_PROJECT \
  --format=yaml \
  --disk z1m1
gcloud compute disks delete --quiet \
  --zone $GCE_ZONE \
  --project $GCE_PROJECT \
  z1m0 z1m1