Setting up a FreeBSD 10.1 box within vagrant
Sunday, 7 Sep 2014
Here’s a few brief notes on setting up a FreeBSD 10.1 vagrant config based on the current beta. Once 10.1 is released, I’ll turn this into an ansible config, and create a vagrantcloud box.
config
- 4GB RAM if you're going to use jails etc, otherwise 1GB is probably enough
- 20 GB split disk called zroot
- use VT + EPT virtualisation engine
- remove soundcard, usb hub
- add a serial port
- boot from CD
install
- enable IPv6
- use DHCP for everything
- use zfs everywhere
- your hardware clock will not be set to UTC
- use the UTC timezone anyway
Before you reboot, let’s make some further changes in the shell. Firstly, we can optimise zfs a wee bit, and grab a snapshot to roll back to if we do stupid things during install.
zfs
# zfs set atime=off zroot
# zfs set compression=lz4 zroot
# zfs set checksum=sha256 zroot
# set SNAP=zroot@`date -u "+%Y%m%d-%H%M"`:post-install
# zfs snapshot -r $SNAP
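Should a later step go badly wrong, that snapshot gives you an escape hatch. Note that zfs rollback works per-dataset (its -r flag destroys any newer snapshots of that dataset, it does not recurse into children), so a Bourne-shell loop over the pool is needed; the snapshot name below is illustrative, substitute whatever $SNAP expanded to:

```shell
# list the snapshots taken above
zfs list -t snapshot

# roll every dataset in the pool back to the post-install snapshot
# (-r here destroys any snapshots taken after the target one)
for fs in $(zfs list -H -o name -r zroot); do
    zfs rollback -r "${fs}@20140907-1200:post-install"
done
```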
packages
# pkg install -y mosh rsync open-vm-tools-nox11 python27 sudo \
vim-lite mDNSResponder_nss
/etc/rc.conf
hostname="ice.skunkwerks.at"
# services
ntpd_enable="YES"
sendmail_enable="NONE"
sshd_enable="YES"
zfs_enable="YES"
# zeroconf
mdnsd_enable="YES"
mdnsresponderposix_enable="YES"
mdnsresponderposix_flags="-f /usr/local/etc/mdnsresponder.conf"
zeroconf aka bonjour
Zeroconf is great because it allows local servers and services to advertise their capabilities over the local network, without needing a central DNS server. With virtual machines and vagrant workflows, you will find resources such as ssh access are on different IPs depending on what network you are working from, or which hosts were started first.
Being able to look hosts and services up using DNS makes this really easy. Most UNIX-like OSes include a dns-sd service discovery tool that does this for you, enumerating advertised domains, services, and servers, and obviously you can include this functionality in any applications you write, as both DNS and mDNS are readily available. For example, I can ssh to wintermute.local without needing to know what DHCP address has been assigned to that instance.
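For instance, the dns-sd tool's browse and lookup modes show the advertised records directly; the instance name ice matches the mdnsresponder.conf example further down:

```shell
# browse for ssh services advertised on the local network
# (runs until interrupted with Ctrl-C)
dns-sd -B _ssh._tcp local.

# resolve one advertised instance to its hostname, port, and TXT record
dns-sd -L ice _ssh._tcp local.
```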
Instead of remembering and maintaining IP addresses for machines and services, you can simply have each server publish them over mDNS when it starts up, making them immediately available to any machine with zeroconf-enhanced DNS lookups.
To support local domain lookups, you’ll need to enable mDNS and change the nameserver lookup method to try mDNS before falling back to DNS. This works OOTB on Apple systems, and is trivial to enable for most other UNIX platforms.
/etc/nsswitch.conf
Replace the ^hosts: line like so:
hosts: files mdns_minimal [NOTFOUND=return] dns mdns
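With that in place, .local lookups resolve through nsswitch like any other name; wintermute here is the hypothetical instance from the ssh example above:

```shell
# resolve a .local hostname via the new mdns-aware lookup order
getent hosts wintermute.local

# anything using the standard resolver now works too
ping -c 1 wintermute.local
```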
That's the lookup end taken care of; now we need to get the records generated and broadcast when the vagrant box starts up. Sadly, with FreeBSD 10.1, a number of annoying little things changed. The mdnsresponderposix package changed its name, and the flags and rc.conf settings have also changed since 10.0. The config file format is completely barmy, but it is what it is.
/usr/local/etc/mdnsresponder.conf
The simple and clear config file format below is no longer accepted:

#name #type #domain #port #text
ice _ssh._tcp local. 2200 "FreeBSD 10.1 amd64"
thaw _ssh._tcp local. 22 "Abandon Hope"

Instead, each record now spans multiple lines (name, type, port, then text):

ice
_ssh._tcp
2200
FreeBSD 10.1 amd64

thaw
_ssh._tcp
22
Abandon Hope, all Ye Who Enter Here
Adding a vagrant user, and ssh keys
First, we need a vagrant user for login:
$ echo vagrant | sudo pw useradd -n vagrant -s /usr/local/bin/bash -m -G wheel -h 0
Then populate /root/.ssh/ and /home/vagrant/.ssh/ with the well-known vagrant insecure public key:
# mkdir -m 0700 /root/.ssh /home/vagrant/.ssh
# fetch -o /root/.ssh/authorized_keys https://raw.githubusercontent.com/mitchellh/vagrant/master/keys/vagrant.pub
# cp /root/.ssh/authorized_keys /home/vagrant/.ssh/
# chmod 0600 /root/.ssh/authorized_keys /home/vagrant/.ssh/authorized_keys
# chown -R root:wheel /root/.ssh
# chown -R vagrant:vagrant /home/vagrant/.ssh
/boot/loader.conf
autoboot_delay="2"
RAMdisk for speed
Create your mountpoint via sudo mkdir -m 0775 /ramdisk, then add a simple line to /etc/fstab:
none /ramdisk tmpfs rw,size=3221225472 0 0
The actual RAM used depends on the data in the RAMdisk, but you should set a sensible limit so your ramdisk can't bring down the machine itself. The size parameter is in bytes, so the above size is 3 GiB. I leave a minimum of 1 GiB free to ensure my OS doesn't completely tank if I accidentally fill up the entire ramdisk.
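Rather than typing the byte count from memory, the shell can compute it; this just confirms the fstab value above really is 3 GiB:

```shell
# 3 GiB expressed in bytes, for the tmpfs size= parameter
echo $((3 * 1024 * 1024 * 1024))
```

Once the line is in /etc/fstab, sudo mount /ramdisk activates it without a reboot.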
sudo
I put sudo configs in their own file, /usr/local/etc/sudoers.d/ansible for example.
# ansible managed
%wheel ALL=(ALL) NOPASSWD: ALL
%ansible ALL=(ALL) NOPASSWD: ALL
vagrant ALL=(ALL) NOPASSWD: ALL
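A syntax error in a sudoers drop-in can lock you out of sudo entirely, so it's worth validating the file before relying on it; the path matches the ansible example above:

```shell
# parse-check the drop-in file; reports whether it parses OK
visudo -c -f /usr/local/etc/sudoers.d/ansible
```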
ssh
This needs to be appended to /etc/ssh/sshd_config; then, without closing your existing ssh session, run service sshd restart. sshd is cowardly and won't restart if the config isn't valid, but you can still easily lock yourself out if you're not very careful.
Port 2200
UseDNS no
PermitRootLogin no
PasswordAuthentication no
ChallengeResponseAuthentication no
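A belt-and-braces approach is to have sshd parse-check the config itself before restarting, then confirm a fresh login on the new port before closing the old session; the hostname and port here follow the examples above:

```shell
# exits non-zero and prints the offending line if the config is invalid
sshd -t && service sshd restart

# from another terminal: verify a new session before logging out of the old one
ssh -p 2200 vagrant@ice.local echo ok
```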
Boxing for Vagrant
Finally, we want to package the new VM as a vagrant box. Just before shutting down the box, we will do some cleanup:
- remove unnecessary packages and their cache directory
- remove all the host private ssh keys; they'll be recreated on next boot
- remove logfiles and temporary files
- fill up remaining space with zeros and then delete it
- snapshot the system at this point
# pkg clean -y
# rm -rf /var/cache/pkg/*
# rm -rf /etc/ssh/ssh_host_*
# rm -rf /var/log/* /var/tmp/* /tmp/*
# dd if=/dev/zero of=filler bs=1m; rm filler
# set SNAP=zroot@`date -u "+%Y%m%d-%H%M"`:post-config
# zfs snapshot -r $SNAP
Then shut down the VM and VMware Fusion completely, and prepare the VM
for vagrant. This process is very simple, but I’m amazed it isn’t documented
anywhere. In brief, a vagrant box is simply a tarball of the files that
comprise a VM, and a little metadata.json
file containing the type of VM
this was created from. The hardest part is working out where your VM is
located.
From the directory of your Vagrantfile
, there’s a hidden folder tree,
similar to this one:
/ramdisk/.vagrant/machines/default/vmware_fusion/72740eda-2302-4cfd-b0bb-82413f74a7f0
However, it's possible this directory was customised somewhat; you should always be able to find it via find . -type f -name '*.vmx'. Change to that directory and run the commands below. There can be several different .vmdk files, but usually only the first one (shortest name) needs to be shrunk; the rest will spit out spurious warnings if processed.
rm -f *.nvram *.log *.plist plist nvram
vmware-vdiskmanager -d $YOUR.vmdk
vmware-vdiskmanager -k $YOUR.vmdk
echo '{"provider":"vmware_desktop"}' > metadata.json
tar cvzf /tmp/freebsd-10.1.box *
vagrant box add freebsd-10.1 /tmp/freebsd-10.1.box
The vmware-vdiskmanager steps defragment (-d) and shrink (-k) the virtual disk, reclaiming the tremendous amount of space freed by our zeroing out above; the tool will spew out a few errors along the way, but don't worry.
The vagrant box add step simply imports the newly made box back into ~/.vagrant.d/boxes/.../ and unpacks it again. Look for the *.vmx files in there.
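A quick way to confirm the round trip worked is to boot a throwaway instance of the freshly added box; the box name matches the add step above:

```shell
# bring up a disposable instance of the new box and check it responds
mkdir /tmp/boxtest && cd /tmp/boxtest
vagrant init freebsd-10.1
vagrant up --provider vmware_fusion
vagrant ssh -c 'uname -rm'
vagrant destroy -f
```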
If you can find the RAM, doing the entire vagrant build and packaging within a ramdisk is a delightfully quick experience.