At work I have put together a couple of ZFS builds using external storage enclosures. Recently, we had a smaller need: an easy-to-manage all-in-one box that still maintained high enough I/O for virtualization.

We cut our build down from our standard 44-drive 6U setup to a 20-drive 2U setup (plus 4 drives for the O/S and cache).

The initial build consisted of:

Overall, the build was very straightforward. I wanted direct disk access (no expanders) and a minimum of 8 vdevs (though I ended up with 10, and can support 11 if I internally mount the SSD). OpenIndiana (build 151a7 at time of install) has fantastic driver support for all of the hardware above, directly at install time.

The install process was also very straightforward, though I added a few extras for a little redundancy in some areas:

  1. Install OpenIndiana (via bootable USB).
  2. Disable NWAM.
  3. Configure Management Network LAG.
  4. Configure Storage Area Network LAG.
  5. Set static IPs.
  6. Restart Network.
  7. Install Napp-It.
  8. Create Pool (10x mirror vdevs).
  9. Done.
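For anyone following along, steps 2–8 boil down to a handful of commands. This is a sketch only: the interface names (igb0–igb3), aggregation names, addresses, pool name, and disk IDs below are placeholders I made up, and yours will certainly differ.

```shell
# Step 2: disable NWAM and switch to manual network configuration
svcadm disable svc:/network/physical:nwam
svcadm enable svc:/network/physical:default

# Step 3: management LAG from two NICs (interface names are examples)
dladm create-aggr -l igb0 -l igb1 mgmt0

# Step 4: storage-network LAG from the other two NICs
dladm create-aggr -l igb2 -l igb3 san0

# Step 5: static IPs on each aggregation (addresses are examples)
ipadm create-if mgmt0
ipadm create-addr -T static -a 192.168.1.10/24 mgmt0/v4
ipadm create-if san0
ipadm create-addr -T static -a 10.0.0.10/24 san0/v4

# Step 6: restart networking so everything comes up clean
svcadm restart svc:/network/physical:default

# Step 8: pool of 10 mirrored vdevs (disk IDs are placeholders)
zpool create tank \
  mirror c2t0d0  c2t1d0  \
  mirror c2t2d0  c2t3d0  \
  mirror c2t4d0  c2t5d0  \
  mirror c2t6d0  c2t7d0  \
  mirror c2t8d0  c2t9d0  \
  mirror c2t10d0 c2t11d0 \
  mirror c2t12d0 c2t13d0 \
  mirror c2t14d0 c2t15d0 \
  mirror c2t16d0 c2t17d0 \
  mirror c2t18d0 c2t19d0
```

Napp-It (step 7) is installed via its own wget one-liner from the napp-it site, so I've left it out of the sketch.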

All of these steps I’ve previously written about, so there isn’t much to show. The process is amazingly quick; starting from an assembled server, I can have a box up and running in under an hour.


7 Responses to “New Build: OpenIndiana + Napp-It”

  1. Mike on January 18th, 2013 3:48 pm

    If I had to do it all over again with a slightly larger budget, about the only changes I would make are adding a bit more RAM, 8x 8GB (64GB total), and, since the board has spare PCIe slots, possibly splitting the Intel Quad-Port Gigabit Ethernet Adapter into two Dual-Port Gigabit Ethernet Adapters.

  2. Josh on February 13th, 2013 3:10 am

    Quick question on this build: when you say RAID1 and RAID10, did you actually set those up as hardware RAID sets, or did you do the ZFS equivalent (i.e. ZFS mirror and RAID-Z)? I’m starting to experiment with building our own storage, and I’ve been reading a lot about ZFS. Everything I’ve read says it’s better to leave the drives as JBODs and let ZFS handle everything. Sounds like you’ve done a lot of storage work, so I’d be interested in your take on this.

  3. mike on February 13th, 2013 12:01 pm


    Good question, and the answer is… both.

    I use hardware RAID1 (2 disks) for the operating system itself, which is not part of the ZFS storage pool. Then I leave the remaining disks as JBOD and use ZFS’s implementation of RAID10 for all other disks (20 disks, mirrored vdevs), giving ZFS direct access to all of the disks in the ZFS storage pool.

    NOTE: It is possible to mirror the rpool (root pool), but it is a lot more complicated — and so far I am not overly satisfied with the way it is done.
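The extra fiddliness comes from the boot loader: attaching a second disk to the rpool does not make it bootable by itself. Roughly, the procedure on OpenIndiana looks like the sketch below; the disk IDs are placeholders, and you should verify each step against your own system before running it.

```shell
# Copy the boot disk's partition table to the new disk
# (rpool vdevs must be slices, typically s0, of equal size)
prtvtoc /dev/rdsk/c3t0d0s2 | fmthard -s - /dev/rdsk/c3t1d0s2

# Attach the second disk to the root pool, turning it into a mirror
zpool attach -f rpool c3t0d0s0 c3t1d0s0

# Watch for the resilver to complete
zpool status rpool

# Then install GRUB on the new disk by hand, or it won't boot
installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c3t1d0s0
```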

    So, you are correct — it is best to let ZFS handle everything, but I also prefer to have the extra security of hardware RAID on the O/S drive(s) only.

    I have been very happy with this configuration, and consider it “the best of both worlds”.

  4. Josh on February 14th, 2013 12:05 am

    Thanks! That’s good to know. I just finished building my first OpenIndiana storage box today (a small one – only 6TB raw) to test with. I’ll probably end up replacing our aging Dell SAN with an OpenIndiana based solution.

    I opted to go with the mirrored rpool, so we’ll see how that works out. Thanks for the insights and the advice! It’s been a long time since I used Solaris. It’s nice to be getting back into it.

  5. mike on February 15th, 2013 5:04 pm

    I’m not sure what Dell SAN you have, but I have seen significantly better speed in my OI build above than the average PS6000 series Dell storage unit.

    It’s nice to know that for a fraction of the price I can get leaps and bounds better performance.

  6. Stephen on May 12th, 2014 5:59 pm

    Just a quick question regarding the rpool build. I’m working on an OI151a8 / Napp-it build on a Dell 1950/MD1000 array with Western Digital RE3 1TB drives in the array as a JBOD. It seems that I will be able to mirror the rpool on the system drives.

    As a beginner to building this type of array, what are the main advantages of using the built-in mirroring over the ZFS rpool mirror? Or is that really an apples to apples comparison considering the Dell hardware that I’m using?

    My goal is to create a few more arrays in the future based on SuperMicro builds (such as yours) if this proves successful. FAR cheaper than the Equalogic SANs that we have to buy and pay for maintenance.

    It’s been a fun project so far. I have two Micron SSDs to use for write caching (although I haven’t figured out how to set that up completely yet). I can’t wait to see what the performance difference is between this build and our PS4000 SANs!

  7. mike on May 22nd, 2014 11:22 am

    @Stephen There is little advantage to mirroring the root pool using ZFS versus the built-in RAID on your Dell server. The important bit is that your primary disk is mirrored; whether you do that with hardware RAID or ZFS, both work great.

    I typically like hardware RAID for my boot disk because, with most hardware RAID controllers, if one disk goes bad you simply pull it, put in a new one, and the array starts rebuilding. No additional interaction required.

    As for performance, I replaced a Dell PS6000E with this SuperMicro build for virtual machine storage, and the improvements were massive.

    The Micron SSDs make a great read cache, but for a write cache I would look at getting something with better throughput. Some of the newer Intel drives, or a RAM-based drive, are typically your better options. You don’t need much capacity either; a 50–100GB SLOG goes a long way.
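    For reference, once you pick the devices, wiring them into the pool is a one-liner each. The pool name and disk IDs below are placeholders; log devices can (and should) be mirrored, while cache devices cannot be.

    ```shell
    # Mirrored SLOG (separate ZFS intent log) for synchronous writes
    zpool add tank log mirror c4t0d0 c4t1d0

    # L2ARC read cache device
    zpool add tank cache c4t2d0
    ```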
