ZFS snapshots and clones

If you’re new to ZFS, you probably want to start here.


ZFS RAID is not your backup

Besides being a fantastic alternative to hardware RAID, ZFS provides other useful features. However, while features are neat, all that really matters is what problems they solve. RAID protects against data loss in the event of disk failure, but it does not protect your data from many other catastrophes. It is never safe to treat RAID as your backup solution, even if it is ZFS with scrubbing.

ZFS snapshots and clones are powerful; it’s awesome

The problems that ZFS snapshots can solve are probably not as mind-blowing today as they were when I first learned about them in 2010, but even today I am still impressed.

Problem 1: File system corruption
  Pro-active steps: Take a snapshot
  Recovery cost: Copy uncorrupted data from snapshot or clone
  Traditional method: Restore from backup

Problem 2: Human error (deleted or overwritten data)
  Pro-active steps: Take a snapshot
  Recovery cost: Copy correct version of file from snapshot or clone
  Traditional method: Restore from backup

Problem 3: Live backup of running VM
  Pro-active steps: Take a snapshot (you’ll want write-cache off for this)
  Recovery cost: Make a clone (takes less than 1 minute) and run VM from new clone
  Traditional method: Stop VM or use scripts/tools to perform backup

Problem 4: Create test environment
  Pro-active steps: Take a snapshot and make a clone of it
  Recovery cost: Clone only takes up as much space as is newly written in test; delete the clone whenever you want
  Traditional method: Full duplication of production
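The snapshot and clone steps in the table boil down to a handful of commands. A minimal sketch, assuming a pool named tank and a dataset tank/data (both names are placeholders):

```shell
# Take a point-in-time snapshot of the dataset (problems 1 and 2):
zfs snapshot tank/data@before-upgrade

# Copy individual files back out of the read-only snapshot directory:
ls /tank/data/.zfs/snapshot/before-upgrade/

# Clone the snapshot into a writable dataset (problems 3 and 4);
# the clone initially consumes no extra space:
zfs clone tank/data@before-upgrade tank/data-test

# When you are done with the test environment, destroy the clone:
zfs destroy tank/data-test
```

Snapshots are instant and read-only; a clone is a writable fork of a snapshot that shares all unmodified blocks with it, which is why problems 3 and 4 cost so little.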

Continue reading ZFS snapshots and clones

Proxmox VE 3.2 software RAID using MDADM


Proxmox is presently my GUI of choice for using KVM. However, during Proxmox’s ISO install you are only given the choice of which disk to install to, not the disk layout. There are other articles on how to do this, but for VE 3.2 there is no single place you can find instructions, because the partition table type changed from MSDOS to GPT.

These instructions only concern fresh installs of Proxmox VE 3.2 from the ISO, because if you upgraded (performed a dist-upgrade) you will retain the disk layout of 3.1.

Proxmox VE 3.2 software RAID

Fresh install of Proxmox VE 3.2

My setup

Install type: Bare metal
Proxmox version: 3.2-5a885216-5, built on 2014-05-06
Install disk: /dev/sda (WD 1TB Black)
Mirrored disk: /dev/sdb (WD 1TB Black)
RAID setup: mdadm --level=1 (mirror)
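For reference, a mirror like the one above is created with mdadm roughly as follows. This is a sketch, not the exact procedure from the article; the partition numbers and array name are illustrative:

```shell
# Create a RAID1 (mirror) array from matching partitions on both disks:
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2

# Watch the initial sync progress:
cat /proc/mdstat

# Record the array so it assembles automatically at boot:
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
```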

Continue reading Proxmox VE 3.2 software RAID using MDADM

Supermicro IPMI – password vulnerability


I love Supermicro; they make great boards and some of my favorite chassis. Typically I like to build my own servers so I’m not stuck buying hard drives just to get trays, or subject to back doors out of the box. I build most servers from parts so I can pick the hardware I like and make sure I’m using what I consider to be the newest stable set. However, I recently stumbled across the fact that on older versions of Supermicro IPMI firmware, the system will simply give you the admin password.

The problem

IPMI is a standard remote management interface typically built into server-class motherboards. This means you can remotely:

  • Power cycle the unit
  • Change some setup/BIOS options
  • Monitor sensors (temperature, fan speeds, etc.)
  • Open a console as if you plugged into VGA
  • Access the machine, even if it is off
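All of the above is scriptable with a standard IPMI client such as ipmitool. A sketch, with the BMC address and the common factory-default ADMIN/ADMIN credentials as placeholders:

```shell
# Query the power state over the network (lanplus = IPMI v2.0):
ipmitool -I lanplus -H 192.168.1.50 -U ADMIN -P ADMIN chassis power status

# Power cycle the unit, even if the OS is hung:
ipmitool -I lanplus -H 192.168.1.50 -U ADMIN -P ADMIN chassis power cycle

# Read the sensors (temperatures, fan speeds, voltages):
ipmitool -I lanplus -H 192.168.1.50 -U ADMIN -P ADMIN sensor list
```

The power these commands carry is exactly why a firmware bug that hands out the admin password is so serious.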

Continue reading Supermicro IPMI – password vulnerability

ZFS Basics – An introduction to understanding ZFS


If you work with storage applications or storage hardware, there’s a good chance you’ve heard of ZFS. ZFS is essentially a software implementation of RAID, but in my experience it’s the most reliable software RAID I’ve worked with.

Traditional RAID instead of ZFS

Comparison to standard RAID

Over the years I’ve worked with several implementations of hardware RAID, and for the most part they are pretty equal. However, most hardware RAID implementations I’ve seen (mine included) aren’t really done well. Before I move on to ZFS RAID, I’m going to cover the basic problems I’ve come across with hardware RAID setups which contributed to my switch to ZFS. In the list below, RAID = “hardware RAID”.

  1. RAID controllers are typically more expensive than HBAs
  2. Many RAID users do not properly set their cache settings, on top of the fact that most cards do not come with a BBU. Lots of admins get frustrated with throughput and force write-back without a BBU
  3. RAID controllers rarely keep up with drive capacity
  4. Sometimes the implementation is proprietary, which can make your setup less scalable (limited RAID sets, inability to mix/match nested RAID, or difficulty expanding existing sets)
  5. Most user interfaces I have worked with for hardware RAID were poor; i.e. option ROMs on the card that can’t see full disk names, or OS-specific utilities that are buggy or available only for select OS installs
  6. I’ve yet to see a RAID card that allows you to perform a scan for errors like the ZFS scrub. I’m not saying they don’t exist, I just haven’t seen them
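For contrast, point 6 with ZFS is a one-liner: a scrub reads every allocated block in the pool and verifies it against its checksum while the pool stays online (the pool name tank is a placeholder):

```shell
# Verify the checksums of all data in the pool, in the background:
zpool scrub tank

# Check scrub progress and any errors it has found or repaired:
zpool status tank
```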

Continue reading ZFS Basics – An introduction to understanding ZFS