The RAIDzilla II project

Contents

  1. Background
  2. RAIDzilla II - the next generation
  3. Hardware
  4. Software
  5. Discoveries and changes during build and test
  6. Other decisions
  7. Backups
  8. Pricing
  9. Pictures
  10. Hardware monitoring
  11. dmesg output
  12. Filesystem information
  13. Exported filesystems
  14. Benchmarks

Note: Links in this article which refer to an off-site page (such as a manufacturer) will open in a new browser tab or window (depending on which browser you are using).

2016 Update

The RAIDzilla II design described in this article has been upgraded to the RAIDzilla 2.5, which has a 128TB capacity and additional features. You can read all about it here.

Background

My original RAIDzillas were more than five years old, and I decided it was time to build replacement units. Rather than simply replacing the drives in the old RAIDzillas, I decided to implement many of the things I'd learned from the previous generation. This page describes what I've learned over the past 5+ years and how I engineered the RAIDzilla II, taking that info into consideration.

RAIDzilla II - the next generation

The original RAIDzillas provided about 4TB of usable storage each, exported as two 2TB filesystems.

First on the list of changes was "lots more disk space". The design goal was to equip each RAIDzilla II with 32TB. Other goals (beyond "use current hardware") were:

Hardware

Chassis

After discussing my requirements with CI Design, I again selected a CI Design chassis - the NSR 316 - for the RAIDzilla II. Of particular importance was the "no two the same" issue I'd run into with the original RAIDzilla's SR316 chassis - two of them were a couple of inches deeper than the third and they had various other internal differences as well. Also, I mentioned to them that I'd had a problem where I couldn't replace the CD-ROM drives that I purchased as part of the chassis with other DVD-ROM drives, due to the chassis cutout being too small for the DVD-ROM drive tray. CI Design assured me that all of that sort of thing was in the past and that the NSR 316 would be consistent across its production run. I decided to place an initial order for 4 chassis rather than ordering them one-at-a-time, "just in case" (no pun intended).

Motherboard, CPUs and Memory

I selected the Supermicro X8DTH-iF, as it supported the latest (at the time) Intel Xeon CPUs, the 5500/5600 family. It also has 7 PCIe x8 slots, so the limitations I encountered when positioning boards in the original RAIDzilla would not be a problem on the new generation. This board also includes Ethernet-based KVM redirection, so I wouldn't have to make do with the serial console port as I did on the original RAIDzilla. I purchased the board without the optional 8-port SAS controller, as I was going to use an add-in controller.

For the CPUs, I selected the Intel Xeon E5520 as I felt it represented the best value for the dollar at the time. It is still a cost-effective processor - there's a jump of $150 to the next one in the series, the E5530. The 5600 family was not yet available in quantity at the time I made this decision and thus was prohibitively expensive.

Unlike every other boxed Intel CPU I've purchased, the E5520 did not come with a heatsink or fan. Instead, I needed to purchase something called Intel Thermal Solution STS100C, which is a very fancy name for a fan.

As I mentioned in my original RAIDzilla article, I'm partial to Kingston memory. I selected what I felt would be the optimum Kingston part for this system, the KVR1333D3D4R9S/8GHA 8GB module. I installed 6 pieces of this memory for a total of 48GB RAM, filling exactly half of the memory slots on the motherboard. The Supermicro manual indicated that this configuration would provide maximum memory performance. The Kingston part was not listed on the Supermicro web site as a tested, compatible memory module. However, when the memory arrived from Kingston, I discovered that these modules were actually built by Hynix, with part number HMT31GR7AFR4C-H9, which is on the Supermicro tested memory list.

Expansion cards

I decided to stick with 3Ware controllers, since they have worked flawlessly in the original RAIDzillas. I wanted to use a 16-port card to conserve slot space in the chassis and so I could use a single battery backup unit rather than the dual controller / dual BBU setup in the first-generation RAIDzilla. I selected the 9650SE-16ML controller with the BBU-MODULE-04 battery backup unit. This version of the battery comes with a separate slot bracket and card to mount the battery away from the controller, along with an extension cable. This let me locate the battery in an area with excellent airflow, which will help prolong the life of the battery. Some people may wonder why I used an expensive RAID controller card and then didn't use any of its RAID features. I used the 3Ware card because of its excellent support under FreeBSD, including the 3DM2 web management interface. The combination of the 3Ware card and battery backup unit also provides additional write caching performance, as well as providing a full set of diagnostic LEDs for each disk drive. The other choices would have been some other brand of RAID controller or a "dumb" controller. Having read a number of horror stories about other controllers on the FreeBSD mailing lists, I decided to stick with a brand of controller that I knew worked well.
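
Besides the 3DM2 web interface, the controller can also be queried from the FreeBSD command line with LSI's tw_cli utility (available in the Ports Collection). A quick sketch, assuming the card shows up as controller /c0:
tw_cli /c0 show            # summary of units, ports and the BBU on the first controller
tw_cli /c0/bbu show all    # detailed battery backup unit status (charge, temperature, health)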

In order to connect a tape drive for backup, I used a Dell SAS 5/E card (which is actually a re-badged LSI 1068-series card). This card provides two external SAS connectors.

Later on in the install, I added a 256GB OCZ Z-Drive R2 P84 PCIe RAID SSD card. More on this later in the article.

Drives

Storage of user data is on 16 Western Digital WD2003FYYS 2TB enterprise drives, connected to the 3Ware controller. The operating system is kept on a mirrored pair of Western Digital WD3200BEKT 320GB 7200RPM notebook drives. I needed to use notebook drives as the space available for mounting them (above the 16 drive bays) is quite limited - enterprise-grade 2.5" drives are somewhat thicker and wouldn't fit. Also, I didn't need ultimate performance for the OS drives as there is no user data stored on them. I initially used a generic tray-load DVD drive to load the operating system, but I discovered that the chassis cover would press on the top of the DVD drive, which would cause problems when trying to open the tray. I eventually switched to a slot-load drive, the LITEON DL-8ATS.

Software

As in the original RAIDzilla, I'm using FreeBSD as the operating system. This time I started with FreeBSD 8. This version provided a reasonably up-to-date implementation of the ZFS filesystem without being on the bleeding edge of FreeBSD development (which eventually became FreeBSD 9.x in January, 2012). I've been tracking 8-STABLE (currently based on 8.3) since then, and this version achieved ZFS "feature parity" with the FreeBSD development branch some time ago.

Go into detail here about some/all ports, samba / aio module, etc.

Microsoft Windows

When I was assembling and testing the first RAIDzilla II, I needed to install Windows on it temporarily to update the firmware on the SAS 5/E card (for some reason there was no DOS-based or standalone BIOS flash utility for this card). I captured this amusing display from Windows Task Manager and thought I'd share it with you:

[Windows Task Manager]

Discoveries and changes during build and test

While building and testing the first batch of RAIDzilla II's, I discovered a number of things. This section discusses some of the more memorable items.

Drive bay numbering

As I mentioned above, the SR316 chassis used in the original RAIDzilla had a "no two the same" issue, which made subsequent builds more of an adventure than I'd expected. I had hoped that by purchasing 4 NSR 316 chassis in one order I could avoid that on the RAIDzilla II. The first chassis went together with no big surprises, but when I started testing the second one, I found that the disk drive activity and status lights weren't working. It turns out that the intelligent disk backplane boards (one for each row of four drives) weren't preconfigured in a logical manner. The 3Ware controller uses a serial protocol called EPCT to communicate with the backplane boards and operate the LEDs. The backplane boards have a large number of switches and jumpers and the second chassis had those switches in a seemingly random pattern. I had to reconfigure the boards so that the lights matched the actual drive locations - drive 0 is at the bottom left and the numbers increase as you move to the right and then bottom to top. [This is the opposite of the backplanes in the original RAIDzilla's SR316, which use a right-to-left numbering scheme.]

Intruder alert!

The Supermicro motherboard has a chassis intrusion connector, which connects to a switch located in the case. I didn't notice that I wasn't receiving an intrusion alert when I was testing the first system with the cover off, but when I installed the cover, I received an intruder alert. It isn't particularly useful to know when the burglars put your system back together. Fortunately, it was a simple matter to re-wire the switch in the case to reverse the alarm condition.

SATA cables - minimum order, lead time

When I built the original RAIDzilla, I ordered custom SATA cables from CS Electronics, where I'd always received great service. I contacted them and asked for a quote on some custom length SATA cables (for the OS disk drives and DVD-ROM drive). I was surprised to hear back from them that they now had a minimum order of 25 pieces for each custom length and that there would be a 6-8 week delay before receiving the cables (I assume they now outsource these to a foreign factory). I decided to rummage around in my parts box to find cables that were close enough to the lengths I needed.

Unwanted ejection of SSD modules

The OCZ Z-Drive R2 SSD cards that I used have an unpleasant issue with their flash memory modules, which mount to the board using notebook SODIMM-type connectors. Two of the 5 boards I received were DOA, with one or more of the flash modules not detected during BIOS initialization. Once these cards had been replaced, I thought things were going to be OK. Unfortunately, one of the cards in a production RAIDzilla II developed this fault after months of trouble-free operation. To make matters worse, if I moved the four flash modules from one card to another, the new card would complain that I was trying to create more RAID devices than the license allowed. This is apparently a "feature" of the LSI BIOS. Since I was using the SSD as a ZFS log device (ZIL), with the board out of commission any attempt to write to the ZFS pool would cause the system to panic. The version of ZFS I was using at the time (15) did not allow a ZIL to be detached from the pool, so there was no way to recover other than to copy all of the data to a spare RAIDzilla II and re-initialize the pool on the first RAIDzilla. Since then, FreeBSD has been updated to the latest public ZFS version (28 + feature flags) and it should now be possible to detach the ZIL. I don't know why the cards have this problem - the SODIMM connector is a proven technology. OCZ's newer designs no longer use connectors for the flash - it is soldered directly to the main circuit board.
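
For reference, with the newer ZFS version a separate log device can be added and removed on the fly. A minimal sketch, assuming the pool is named "data" and the SSD carries the glabel "ssd0" (see the glabel section below):
zpool add data log label/ssd0      # attach the SSD as a dedicated intent log (slog)
zpool status data                  # verify the log vdev shows up under "logs"
zpool remove data label/ssd0       # detach the log device (supported from pool version 19 onward)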

OCZ's technical support has been excellent and I've always received replacement boards quickly. The Z-Drive R2 has reached end-of-life status and warranty replacements will be with some other model of drive. This is complicated by the fact that I'm using FreeBSD, which doesn't have a driver for most of OCZ's newer PCIe drives, which use OCZ's SuperScale controller. OCZ tech support was very accommodating and replaced the failed drive with a 320GB VeloDrive which uses the LSI SAS 2004 controller, which is supported under FreeBSD.

After tuning the performance of the ZFS pool, the pool is actually faster than the SSD - the OCZ Z-Drive R2 only has a burst write speed of 500MB/sec (sustained rate of 250MB/sec), while the disk-based ZFS pool has a sustained write speed of 600MB/sec or so. However, the system still benefits from having a separate ZIL device, so I'm leaving the SSD cards in the RAIDzillas. It does mean that a future SSD failure won't have a significant impact on the system's performance. The VeloDrive units that I've been getting as warranty replacements are about twice as fast as the Z-Drive R2 units.

Change to slot-load DVD drive

As I mentioned earlier, I discovered that the tray-load DVD-ROM drive I was using would not always open reliably once installed in the chassis. This seems to be due to the chassis cover pressing on the top of the drive and interfering with the opening mechanism. Rather than trying to solve this problem, I switched to slot-load DVD-ROM drives which avoid the problem altogether. The major difficulty was in locating a slot-load drive that came with a front bezel - the vast majority of slot-load drives go into notebooks, where a front bezel isn't used (the notebook case provides the slot cover).

glabeling of drives

Due to the order in which FreeBSD detects disk devices, removing the SSD card so I could return it for repair renumbered all 16 of the data drives. They changed from da1-da16 to da0-da15. Rather than hoping ZFS could deal with this sort of thing, I decided to use the glabel utility to place user-defined labels on each of the drives and built the ZFS pool using those labels, rather than the hardware device names. In the output of a "zpool" command later in this article, you can see that the pool is built with drives named "label/twdX" (the FreeBSD device name for the 3Ware controller is "twa", so I used "twd" to indicate that it was a 3Ware drive) and ssd0 (for the SSD-based ZIL device). Physical labels with these names are also attached to the individual drives.
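
The labeling itself is a one-time operation per drive, performed before the pool is created. A sketch of the procedure (the daX device numbers are illustrative and the pool name "data" is an assumption):
glabel label twd0 da0       # write the label "twd0" into the last sector of da0
glabel label twd1 da1       # ...and so on for the remaining data drives
glabel label ssd0 da16      # label the SSD used for the ZIL
glabel status               # confirm the label/twdX and label/ssd0 entries exist
zpool create data raidz1 label/twd0 label/twd1 label/twd2 label/twd3 label/twd4 log label/ssd0
Since the label lives in the drive's last sector, it follows the drive no matter which daX number FreeBSD assigns to it on a given boot (the remaining vdevs and the hot spare are added the same way).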

Leaving some screws out of the lid

Once I had the RAIDzilla II's installed in my rackmount cabinet, I needed to replace the DVD drives as I described above. I didn't really want to pull each system out of the cabinet to do this, so I figured I'd simply remove the top cover to access the drive. Unfortunately, in addition to the convenient thumbscrew on the back of the top cover, there are two small screws, one on each side of the chassis, which also hold the cover in place. I managed to remove these using a very short ratcheting screwdriver, but decided to leave them out when I replaced the cover. They're not really necessary and just make it hard to service the equipment.

The mysterious SMBus and PMBus and their relation to I2C

As I mentioned above, I wanted to be able to perform remote monitoring of every sensor in the system. That's relatively easy for motherboard voltages and the various fan speeds, but I also wanted to collect data from the power supplies. The Supermicro motherboard manual describes a JPI2C1 connector located next to the power supply connectors and describes it as "Power Supply SMBbus I2C Header". This led me on a wild goose chase to track down both hardware and software information.

First, I had to get up to speed on the relationship between I2C, SMBus and PMBus. I2C is the physical interface that the 2 buses use. SMBus is the general system management bus for monitoring voltages, fan speeds and so on. Since the remote management feature of this motherboard supports IPMI (a standardized way to access this data) I did not need to worry about collecting and decoding this information - I could just query the remote management interface for it. PMBus was a whole other story - there is apparently no standard location for this and the Supermicro remote management has no idea how to collect the data from a CI Design (actually 3Y Power - see below) power supply. This was going to get interesting...

Next, I needed to confirm that the power supplies in the NSR316 actually supported PMBus and whether or not one of the loose power supply connectors was actually compatible with the header I'd found on the Supermicro motherboard. I called CI Design tech support and asked them, and got a "We don't know - we'll check and get back to you" answer (to be fair, it was just before closing time at CI Design and they'd have to contact their Asian branch to get the data). I thanked them and started researching things on my own. Pulling out one of the hot-swap power supplies, I determined it was actually a 3Y Power YH5821-1ACR with two YM-2821A supplies installed and was able to determine from the datasheets that the connector I'd found was indeed the PMBus connector, with the same pinout as the motherboard was expecting. Easy - plug it in and instant monitoring! Or...

The Supermicro remote management interface didn't show any PMBus information for the power supplies, even with the connector plugged in. I started searching for information in various places - the Supermicro knowledge base, Linux SMBus drivers and so on. Finally, I found an advertisement for rackmount servers from a well-known provider of FreeBSD-based hardware, and that ad listed PMBus as one of the supported items. I wrote to them and explained that I was building my own systems, but was very interested in their PMBus support and would be willing to pay to obtain the software they used to interface the PMBus to FreeBSD. I got a quick response of "We'll check and get back to you", followed a day or two later by a sheepish "We found that the hardware supports PMBus but we don't monitor it - sorry". Back to square one...

Armed with enough information to be dangerous, I started using ipmitool to poke around the SMBus to try to find the power supplies. During this experimentation, I managed to hang the system, turn off the fans, and generally cause havoc on the SMBus. Fortunately, a hard power cycle would clear up any of these. I finally located the power supplies and was able to collect some product data from them, as shown here:

ipmitool i2c bus=3 0xa0 0x40 0x0
 01 00 00 00 01 08 00 f6 01 07 00 d3 33 59 20 50
 4f 57 45 52 c8 59 4d 2d 32 38 32 31 41 d1 4f 4d
 45 47 41 38 32 31 41 4d 50 32 30 30 52 31 34 07
 d9 30 30 d2 30 30 30 32 30 30 30 35 30 38 30 39

starting at 0x0c - "3Y POWER?YM-2821AMP200R14??00?000200050809"

ipmitool i2c bus=3 0xa0 0x40 0x40
 30 30 30 33 31 38 c0 c0 c1 9d 00 02 1e 94 4c d0
 03 03 e8 3c 0a 28 23 90 33 50 46 20 67 2f 3f 12
 1e e8 49 34 03 00 01 01 02 13 04 e6 01 b0 04 8c
 04 d4 04 96 00 d0 07 a4 ce 01 02 13 ab 3f 80 f4

"000318"

I'd located the fixed product data, but not the live monitoring data, when I noticed that Supermicro had released a Java-based utility named SMCIPMITool, which (among other things) claimed to support PMBus. Experimenting with it, I found that (like ipmitool) I had to specify some arcane values in order for it to find my power supplies. It turns out that the commands are "pminfo 7 b0" for the first power supply and "pminfo 7 b2" for the second power supply. With those commands, I am able to retrieve a reasonable amount of information from the power supplies, for example:
 Item                           |                Value 
 ----                           |                ----- 
 Status                         |     [STATUS OK](00h) 
 AC Input Voltage               |                0.0 V 
 AC Input Current               |                0.0 A 
 DC 12V Output Voltage          |              12.13 V 
 DC 12V Output Current          |               15.5 A 
 Temperature 1                  |             47C/117F 
 Temperature 2                  |              23C/73F 
 Fan 1                          |             4200 RPM 
 Fan 2                          |             4200 RPM 
 DC 12V Output Power            |                184 W 
 AC Input Power                 |                196 W 
 PMBus Revision                 |               0x0011 
 PWS Serial Number              |      
 PWS Module Number              |                      
 PWS Revision                   |                
Note that some of the data reports as "0.0" and some simply isn't present. This is probably due to the SMCIPMITool utility expecting a Supermicro power supply.
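
If you want to try this yourself, the invocation is roughly as follows (the BMC address and credentials here are placeholders - substitute your own):
SMCIPMITool 192.168.1.50 ADMIN ADMIN pminfo 7 b0     # first power supply module
SMCIPMITool 192.168.1.50 ADMIN ADMIN pminfo 7 b2     # second power supply module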

A tale of two motherboards

I did my initial testing with an X8DTH-iF motherboard and a pair of 2GB memory modules (to keep costs down). When I was satisfied that this board would work for the RAIDzilla II, I ordered more motherboards as well as the 8GB memory modules shown in the parts list.

I was surprised to see that the newer motherboard was slightly different, with some connectors unpopulated (most obvious was the 2nd serial port, COM2). I went ahead and installed it anyway, and was surprised to find that it also reported a different set of sensors than the first board. In particular, while the first board reported "Not available" for the temperature of each memory module (expected, as the test memory didn't have thermal sensors), the second board didn't even show that - it was as if the board didn't know that there might be sensors there.

I contacted Supermicro support, who were quite helpful. They told me to re-flash the IPMI (remote management) firmware and select the "reset to defaults" option when flashing. I wasn't particularly happy with needing to re-enter the whole IPMI configuration, but went along. After re-flashing, the memory temperature sensors were now being reported properly.

This does point out one problem with Supermicro's BIOS and IPMI update procedures, though. The BIOS upgrade forcibly resets the configuration to the default values. If I manually flash the BIOS instead of using the provided batch file and specify that the configuration should be retained, the system will POST under the new BIOS but then complain that the settings are corrupt and that it is loading defaults. As there are something like 12 pages of configuration options, this is quite annoying. The IPMI instructions also say to use the "reset to defaults" option. Fortunately, I haven't had to use that option once I performed the initial IPMI firmware update - subsequent updates have worked properly without needing the reset.

The actual flashing of the various components (including the 3Ware controller) works quite well overall - the IPMI and the 3Ware firmware can be updated via their respective web interfaces. The BIOS has to be flashed using a bootable USB stick with MS-DOS on it, which I plug into one of the USB ports on the front of the RAIDzilla II.

"zfs autoreplace" doesn't

One thing that I wasn't aware of when I designed the ZFS pool was the fact that the ZFS autoreplace setting is purely cosmetic, at least under FreeBSD - if a drive fails, the pool will sit there in a degraded state and not attempt to use the hot spare that was provisioned as part of the pool. There has been some discussion on the FreeBSD mailing lists regarding adding support for ZFS autoreplace to devd, the system state change monitoring process. As far as I can tell, that hasn't happened yet.

I thought about redesigning the storage pool from scratch, using 4 raidz1 vdevs with 4 drives each. In theory, that would give me the same amount of data space I have now, but 25% higher disk I/O performance. Currently there are three 5-drive raidz1 vdevs and 1 hot spare, giving 12 drives' worth of usable space (a raidz1 is similar to RAID 5, requiring one additional drive per vdev). The new layout would also have 12 usable drives, so I'd be getting higher performance out of the same hardware, and since autoreplace doesn't do anything, not giving up any reliability. However, benchmark tests didn't show much improvement with the redesigned layout. Apparently ZFS prefers raidz vdevs whose width is 2^n + p, where p is the number of parity drives (1 for RAIDZ1, 2 for RAIDZ2, 3 for RAIDZ3) - in other words, a power-of-two number of data drives per vdev. Thus, my 5-drive vdevs were already optimal. Also, if the servers were in a remote data center where I wouldn't be able to swap a failed drive within a few hours, the current layout is better, as I could SSH in to the server and manually replace the failed drive in the pool with the hot spare without needing to physically visit the site.
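
When a drive does fail, the replacement has to be done by hand. A sketch of the procedure (the pool name "data" and the drive labels are assumptions - substitute the actual degraded drive and the hot spare):
zpool status data                             # identify the degraded device
zpool replace data label/twd5 label/twd15     # start resilvering onto the hot spare
zpool detach data label/twd5                  # drop the dead drive once the resilver completes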

Unwanted auto-spindown on WD3200BEKT drives

One annoying thing I discovered was that Western Digital ships the WD3200BEKT drives with automatic spindown enabled, so I was seeing rapidly-incrementing load cycle counters in the SMART monitoring reports. I used the wdidle3 utility from WD to stop the drives from spinning down, and that put an end to the load cycle counter silliness.
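
wdidle3 is a DOS utility, so it has to be run from a bootable MS-DOS USB stick with the drive attached to a plain SATA port. As I recall, the relevant switches are:
wdidle3 /r      (report the current idle timer setting)
wdidle3 /d      (disable the idle timer)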

Hot power supplies

An ongoing issue is the temperature of the power supplies. I feel that they are running at too high a temperature, particularly the upper power supply. This is at least partially due to poor airflow caused by the power supply enclosure not having enough vent slots. While there are a few holes in the power supply sleeve which allow some air into the lower power supply module, there are no substantial holes to provide air to the upper power supply module (the only hole is an apparently-unrelated ¼" hole on top of the sleeve).

I removed the power supply sleeve and wiring harness (which is a major project as it involves removing the rear sub-frame from the chassis and nearly two dozen screws) and carefully drilled a row of holes across the top of the power supply sleeve where the upper power supply has its air inlet vents. I also added holes on the right side of the sleeve to provide more airflow to the lower power supply.

While I had the power supply out, I took the opportunity to clean up the wiring harness by removing unused connectors and their associated cables from the harness. I also relocated the motherboard power and PMBus cable loops to provide better airflow, as you can see in the picture below.

The power supplies still report excessively high temperatures to my monitoring software, but they do feel much cooler to the touch than RAIDzilla II's with unmodified power supplies.

Some people may ask why I didn't just switch to a different power supply with better ventilation. Unfortunately, the wiring harness from the power supply to the rest of the components has been heavily modified by CI Design, so alternatives such as the Zippy G1W2-5860V3V won't be a drop-in replacement.

smartd problems

At some point well after the new 'zillas were placed into service, I started seeing smartd reporting things like "Read Summary SMART Error Log Failed" at random intervals. This was apparently the 3Ware controller's way of saying "Not now, Cato!" when it was busy. I'm not sure if a controller firmware or smartd update started causing this problem. I made some changes to smartd 5.43 to silently ignore these errors until a retry limit was hit, at which point it would revert to the previous behavior. This worked, as the next time it polled the drive it would normally be successful. After updating both the controller firmware and smartd to the latest versions (released a few months ago) the problem no longer happens.
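
For reference, drives behind the 3Ware controller are monitored through the controller by addressing them as "3ware,N" in smartd.conf. A couple of sample lines (the notification address is a placeholder):
# First two drives behind the 9650SE; repeat for the remaining ports
/dev/twa0 -d 3ware,0 -a -m root@localhost
/dev/twa0 -d 3ware,1 -a -m root@localhost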

Other decisions

Here I'll briefly discuss some other design decisions I made, mostly of the form "why didn't you..."

Why not use a Backblaze Pod?

The two are targeted at completely different things - the Backblaze Pod isn't concerned with speed (CPU, disk drive or network), since the Pod users are at "the other end of the Internet" - if somebody has a 2.5Mbit/sec cable Internet connection, they simply don't care if the Pod can process data faster than that. The RAIDzilla II was designed to have excellent in-box performance (the continuous I/O rate for either reads, writes, or a mixture is well over 500MB/second) and very good ("wire speed") Gigabit Ethernet performance.

I once joked to a friend of mine that while the Backblaze Pod philosophy was "how little can you spend", the RAIDzilla II philosophy was "how much can you spend". While that isn't completely true (the current cost of a drive-less 'zilla II is only a little above $3000 as of February, 2013), the Pod definitely packs more storage into a given space than the RAIDzilla II, and thus represents an ideal answer to Backblaze's needs.

I wrote up a more detailed comparison which you can read here.

Why so much CPU and memory?

Two reasons. First, I do a reasonably large amount of processing on the RAIDzilla II itself. There are various media encoding processes, local checksumming of the entire contents, and so on. Since local disk access is at least four times faster than network access (assuming 500MB/sec for local access and 125MB/sec for Gigabit Ethernet), doing this processing locally instead of over the network yields substantial performance gains.

Second, I wanted to be able to re-use the entire RAIDzilla (except for the drives) for many years to come. There's no guarantee that upgrade CPUs in the E5500 / E5600 series - or this type of memory - will be available in the future. So the idea was to build a system in the maximum configuration and be able to use it for at least 10 years, with a mid-life replacement of the 2TB drives with 4TB or 6TB models. This is approximately twice the usable life I obtained from the original RAIDzillas, though all 3 of those are still operating, providing excellent service [and low storage capacity!] for a friend of mine.

Why not 10Gigabit Ethernet?

Again, two reasons. The first is cost - 10Gb cards are expensive, and FreeBSD doesn't support that many different models. On top of that I'd need special cables and an Ethernet switch with multiple 10Gbit ports. Second, what will the RAIDzilla II talk to at 10Gbit/sec? Certainly not the desktop PCs that are the main providers and consumers of data on the 'zillas. The only benefit would be for mirroring one 'zilla to another. That mostly happens during the initial synchronization - it is very rare for the nightly sync job to update more than 100GB.

I do expect the price of 10Gb Ethernet cards to come down over the next few years (this has already been happening since I built the first RAIDzilla II), so I may re-visit this in a few years.

As of April, 2013 it is possible to purchase the Intel X540-T1 10GbE adapter (1 copper RJ-45 port) for around $350, and the Netgear XS708E 8-port 10GbE switch for a little over $800. I am in the process of converting my RAIDzilla II's to 10GbE and will update this article with performance results at some point. Simple tests with iperf are encouraging, showing over 4Gbit/sec with a single connection and 9.88Gbit/sec using three connections.
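
For the curious, those iperf numbers came from tests along these lines (the second 'zilla's hostname is a placeholder):
iperf -s                  # on the receiving 'zilla
iperf -c rz2 -P 3         # on the sender: three parallel TCP streams over the 10GbE link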

Have you seen The Register's Unitrends appliance article?

Yes, I have. [Article here.] This is very interesting - it looks like a RAIDzilla II in a different case, but with lots of bells and whistles added in software. Looking at the article and one of the pictures, I see some very interesting similarities - same motherboard, same E5520 CPUs, same 48GB of RAM (in the same configuration) and the same 9650SE-16ML disk controller in the same motherboard slot. There are differences, aside from the software - they're using the less-expensive STS100A heatsink instead of the STS100C I use in the 'zillas, they're apparently using a simple single LED per drive bay instead of the EPCT bus for multiple LEDs, and I don't think the case is as good as the NSR 316.

This is probably a case of "great minds think alike", but it would be amusing if instead it was "imitation is the sincerest form of flattery", particularly since the Unitrends box sells for $45,000 (according to the article in The Register).

Backups

It is always important to have multiple backup copies of your data, in separate locations. Note that the first floor and second floor of the same building do not count as different locations. If your data isn't worth backing up, why are you storing it in the first place? There's an (older) interesting article on SmallNetBuilder entitled "RAID Is Not Backup" which is definitely worth reading. In the discussion of that article, I posted a reply mentioning the (at the time) recent data loss that blu-ray.com experienced. Here is an excerpt from their initial post (since updated as they were able to perform data recovery on much of their data):
Originally Posted by www.blu-ray.com

We are extremely sad to let you know that we've experienced 7 weeks of database loss. 7 weeks ago we moved to a new much improved server, but unfortunately earlier today the hard drives of the database crashed (was using RAID). On the old server we did daily backups, but since we changed server and setup, the old server backup solution didn't work anymore. We have been discussing the new backup system on a daily basis, but hadn't yet implemented it, so the timing couldn't have been worse.

What is missing the last 7 weeks

DVD database additions/updates
All products except Blu-ray movies
Price trackings
Collection updates
HT Gallery updates
Movie ratings
Movie reviews
All forum activity
New user registrations
Cast & crew
Any other database data other than mentioned below

Fortunately, they were eventually able to recover almost all of their data (apparently the forum posts were one of the unrecoverable items). But it took them two weeks to recover to that point. If this was your business and not an enthusiast site, do you think you would still be in business after two weeks of not being able to access your vital business information?

Please Note: I am NOT trying to single out or pick on blu-ray.com here - they were just honest and forthcoming enough to put their experiences up on their site for the world to read and learn from. This sort of thing happens all the time to all sorts of organizations and is often not disclosed to the public, instead ending up in a footnote to a Form 10-Q SEC report.

The Ostrich syndrome

In which the ostrich puts its head in the sand* and thinks it is hidden. This seems to be the approach many organizations take regarding their backups. I didn't want to be one of them - I fully appreciate the need for good backups.

* Note: This isn't actually what the ostrich does - it puts its head and neck on the sand and then sits down, the idea being that it will thus look like a bush and be able to hide.

Hardware

I had been using a pair of Quantum SuperLoader 3 libraries for my previous-generation RAIDzillas. These each had a SDLT600 drive and 16 slots, giving a total uncompressed capacity of 9.6TB - perfect for backing up a pair of 4TB 'zillas. I needed to look at something with either a higher-capacity drive, more slots, or both. Due to some extremely bad experiences with Quantum support (their "lifetime firmware updates" moved behind a paywall, they declined to honor the lifetime media warranty, and so forth), I selected a Dell PowerVault TL4000 which has an LTO-4 (800GB uncompressed) drive and 48 slots for media. This unit (with options, controller, and 100 blank tapes) had a list price of around $15,000, although I can tell you I didn't pay nearly that much for it. I also sent the quote to the people I'd been dealing with at Quantum, with a snarky comment. Quantum eventually restored access to the firmware updates, though I have still had bad experiences trying to get media replaced under warranty. I strongly doubt my buying a competitor's product had anything to do with their making firmware available again, but it may be that enough of their customers made the same decision and they eventually noticed.

This particular TL4000 has a usable capacity of 35.2TB (uncompressed). It dedicates one slot to a cleaning cartridge and three other slots to loading / unloading media, so there are 44 usable slots, each holding an 800GB tape. Unlike some other drive types, modern LTO drives have a very good compression policy - if the data doesn't get smaller when compressed, it writes it out as uncompressed. This choice is made on a (tape) block-by-block basis, so you get the best possible tape capacity. Some competing formats will actually use more tape when storing incompressible data with compression enabled, due to the overhead of the decompression table.

I also ordered a bunch of Imation DataGuard tape storage / transport cases which hold 20 LTO tapes per case, used to transport and store tapes for offsite backup, as well as the barcode labels needed for the library to identify which tape is loaded.

Out of site, out of mind

I'm fortunate to have some dark (private) fiber between here and an office building around 4 blocks away. Even better, that office building is served from a different electrical substation than here, so a power failure might only affect one site or the other.

I installed a RAIDzilla II at that offsite location and synchronize with it nightly over dedicated Gigabit Ethernet fiber, so the synchronization happens just as rapidly as a local synchronization on the same LAN would. For historical reasons I use rdiff-backup to synchronize the systems. The way I have it configured, I can instantly access a copy of the data as of the last nightly synchronization run, and also have access to all data that changed (added, deleted, modified) over the past 30 days. It meets my needs, so I never really investigated alternatives. If I were starting from scratch, I'd investigate the FreeBSD HAST (Highly Available STorage) facility.
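
The nightly job boils down to an rdiff-backup push over SSH plus pruning of old increments. A simplified sketch (the "offsite" hostname and paths are placeholders for my actual setup):
rdiff-backup /data offsite::/backup/data                     # push tonight's increment over SSH
rdiff-backup --remove-older-than 30D offsite::/backup/data   # keep only the last 30 days of increments
rdiff-backup -r 7D offsite::/backup/data/somefile /tmp/somefile   # pull back a file as it existed 7 days ago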

Tape backup software (or whaddya mean, 'Unknown file system'?!?!)

Armed with a tape drive with lots of tapes and a half-full RAIDzilla II, I typed the following command line into the console to start a backup:
dump -0uaL -C 32 -b 32 -f /dev/nsa0 /data
But instead of getting data moving happily to tape, I was greeted with the error message:
dump: /data: unknown file system
How unpleasant. On thinking it over, I realized that dump only works on UFS / UFS2 filesystems. Thus started the search for a dump replacement that supported ZFS.

My first thought was to ask other FreeBSD users what they were using. I started a discussion thread titled "Backup solution for ginormous ZFS pool?" and waited for responses. What I got were mostly comments that ZFS didn't need to be backed up, or that a copy of the data on another ZFS server (which I was already doing with my offsite system) would be sufficient.

I was being stubborn, since I already had the tape library, drives and storage cases. One of the possible solutions mentioned was the AMANDA backup software, which has a pair of desirable features: it is free, and it runs on FreeBSD. Unfortunately, it has a very complicated configuration - it is mostly targeted at an environment where multiple systems do backups to a central storage host and then trigger (or wait for) an AMANDA backup job to run. Since I was actually trying to use less functionality, I figured it would be easier. It was still quite complicated, and rather than continue fighting with it, I decided to ask a number of the companies providing commercial AMANDA support for a quote for some consulting help in setting it up. The responses I got fell into 3 categories:

  1. No response at all
  2. Saying it was possible, but they didn't have the time to undertake the project
  3. Telling me that they'd be glad to do the work (for about as much as the tape library cost me in the first place!), but that it required a Linux system as the central backup host

Building another RAIDzilla II and running Linux on it simply to stage the data seemed like a waste of resources. The tape drive runs at a higher data rate than Gigabit Ethernet, so something like an NFS mount wouldn't work as the drive would drop out of streaming each time it went to fetch more data.

Good old tar

The default tar (tape archive) utility provided with FreeBSD is bsdtar, which doesn't support multi-tape archives. Fortunately, GNU tar (gtar) is available in the FreeBSD Ports Collection, so it was easy to install. It does support multi-tape backups, and with a simple command line, I was able to perform a backup:
gtar cvMbf 8192 /dev/nsa0 /data
I might investigate alternatives in the future, but for now gtar is working fine. The only features it is missing that I care about are automatic loading of the next tape and the ability to restart from the beginning of a tape in case of an I/O error.
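
For completeness, a restore uses the matching multi-volume options. A sketch (GNU tar strips the leading "/" when writing, so the files come back under ./data relative to the extraction directory):
mt -f /dev/nsa0 rewind                    # start from the beginning of the first tape
gtar xvMbf 8192 /dev/nsa0 -C /restore     # extract, prompting for the next tape as needed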

Once the backup is complete, I remove its tapes from the library and package them up in a DataGuard case (the biggest backup so far has been 17 tapes which fits nicely in a 20-tape case), place a tamper-evident seal on the case and schedule a pickup to take the case to a secure offsite storage facility in another state.

Pricing

The following table shows the pricing data for the RAIDzilla II at various times. As you can see, the price has been falling steadily. Almost all of the items are still available from sellers (not discontinued) except for the OCZ SSD and the particular DVD drive used.

                                                     ------ Apr 2010 ------    ------ Jul 2011 ------    ------ Feb 2013 ------
Part Number           Manufacturer             Qty     Each    Total  Note(s)    Each    Total  Note(s)    Each    Total  Note(s)
NSR 316               CI Design                  1     $920     $920             $920     $920    [1]      $920     $920    [1]
X8DTH-iF              Supermicro                 1     $464     $464             $450     $450             $245     $245    [2]
E5520                 Intel                      2     $380     $760             $400     $800              $75     $150    [3]
STS100C               Intel                      2      $35      $70              $35      $70              $32      $64    [2]
KVR1333D3D4R9S/8GHA   Kingston                   6     $501    $3006             $170    $1020              $68     $408    [2]
OCZSSDPX-ZD2P84256G   OCZ Technology             1       $0       $0    [4]     $1200    $1200            $1200    $1200    [1]
9650SE-16ML           3Ware                      1     $900     $900             $800     $800             $780     $780
BBU-MODULE-04         3Ware                      1      $60      $60             $100     $100             $124     $124    [2]
CBL-SFF8087-05M       3Ware                      4      $15      $60              $15      $60              $10      $40    [2]
DL-8ATS               LITEON                     1      $40      $40    [5]       $50      $50              $50      $50    [1]
SAS 5/E               Dell                       1      $85      $85    [3]       $75      $75    [2]       $50      $50    [2]
WD3200BEKT            Western Digital            2      $73     $146              $55     $110              $60     $120
WD2003FYYS            Western Digital           16     $373    $5968             $220    $3520             $195    $3120
Miscellaneous         Cables / labels / etc.     1      $50      $50              $50      $50              $50      $50
Total Cost                                                    $12529                    $9225                     $7321

Table Notes

  1. Item from inventory, hence unchanged price
  2. Item from eBay seller, purchased as a "New" item
  3. Item from eBay seller, purchased as a "Used" item
  4. Not included in the initial build
  5. Price is for tray-load DS-8A4S

Currently (February 2013), a RAIDzilla II with no data drives and without the SSD card costs $3001, which is extremely competitive with high-end 16-bay "BYOD" (Bring Your Own Drive) units from manufacturers such as QNAP (TS-EC1679U-RP), Thecus (N16000) and so on. Those products add a fancy user interface and support some additional protocols / features I'm not particularly interested in (MacOS clients, iSCSI, etc.), but the RAIDzilla II uses faster CPUs and has far more memory than those commercial units.

Enough talk, let's see the pictures!

Each of the images is clickable to display a higher-resolution version.


[Rack-mounted RAIDzilla II's]

A pair of RAIDzilla II's, rack-mounted. The gray "ears" on either side are reducers, as the RAIDzilla II is a 19" device while the cabinet is a 23" (telco standard) rack. The green LEDs on each hot-swap drive carrier indicate "installed and no errors", while the blue LEDs indicate drive activity (the top right drive doesn't have its blue LED lit as it is the "hot" spare). The LEDs to the right of the blue ones (which appear blue in this photograph) are red, and only illuminate when there is a drive error, or when the drives are reset as part of the controller's BIOS initialization.


[Internal view]

This is an annotated view of the internals of the RAIDzilla II. The major components are numbered as follows:

Keeping cable clutter to a minimum is essential for good airflow. This is especially important in the hot-swap drive bay. Air is pulled across the disk drives by the 3 cooling fans (item 12). The CPU coolers (item 9) also move air from the front to the back, where it is finally exhausted by the two rear exhaust fans (item 13). Each expansion card bracket (and filler plate in an empty slot) is perforated to allow airflow across the expansion card.

In a room with an ambient temperature of 72°F, the disk drives have temperatures ranging from 76°F to 78°F. The RAID controller battery backup module is 77°F. The CPUs and memory modules run somewhat warmer; the memory temperatures range from 82°F to 95°F - still well within the normal operating range.


[Modified cooling]

This is a close-up view of the modified power supply cooling and cabling layout. Vent holes have been drilled on the top of the power supply sleeve (and on the side, not visible in this picture) to allow the upper power supply's fans to actually pull in cool air. The metal stripes you can see on the right of the holes are the air inlet vents on the upper power supply module. Though it's hard to make out in this picture, there's a gap of about ¼" between the circuit board on the left of the holes and the top of the power supply sleeve.

Also, the wiring has been neatened substantially - a large bundle of unused connectors was disconnected from the harness and removed, and the slack loops for the motherboard power cables and PMBus cables have been relocated to further improve airflow.


Hardware monitoring

As I mentioned earlier, I wanted to be able to monitor all of the RAIDzilla sensors from a central management station. The following section shows a snapshot of the data as displayed on my management station. The data shown here is static and dates from the time this web page was first created. Additionally, the "Links" section has had the hotlinks (to the live system) removed and only the daily graphs are shown. The actual management station has additional graphs for weekly, monthly and yearly data.
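
The sensor table below is essentially what ipmitool returns when pointed at the motherboard's BMC over the network. A typical collection command (the BMC hostname and credentials are placeholders):
ipmitool -I lanplus -H rz1-ipmi -U ADMIN -P ADMIN sensor      # one line per sensor, with thresholds
ipmitool -I lanplus -H rz1-ipmi -U ADMIN -P ADMIN sel list    # dump the system event log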

rz1.glaver.org Management Data

Links

OOB Interface
3DM2 Interface

Sensor Data

Sensor name        Value        Type         Stat    Low Irr.    Low Crit    Low Warn    High Warn   High Crit   High Irr.
----------------   ----------   ----------   -----   ---------   --------    --------    ---------   ---------   ---------
System Temp      | 28.000     | degrees C  | ok    | -9.000    | -7.000    | -5.000    | 75.000    | 77.000    | 79.000    
CPU1 Vcore       | 1.064      | Volts      | ok    | 0.808     | 0.816     | 0.824     | 1.352     | 1.360     | 1.368     
CPU2 Vcore       | 1.048      | Volts      | ok    | 0.808     | 0.816     | 0.824     | 1.352     | 1.360     | 1.368     
CPU1 VTT         | 1.104      | Volts      | ok    | 0.808     | 0.816     | 0.824     | 1.512     | 1.520     | 1.528     
CPU2 VTT         | 1.128      | Volts      | ok    | 0.808     | 0.816     | 0.824     | 1.512     | 1.520     | 1.528     
CPU1 DIMM        | 1.512      | Volts      | ok    | 1.288     | 1.296     | 1.304     | 1.656     | 1.664     | 1.672     
CPU2 DIMM        | 1.512      | Volts      | ok    | 1.288     | 1.296     | 1.304     | 1.656     | 1.664     | 1.672     
+1.5V            | 1.512      | Volts      | ok    | 1.320     | 1.328     | 1.336     | 1.656     | 1.664     | 1.672     
+1.8V            | 1.824      | Volts      | ok    | 1.592     | 1.600     | 1.608     | 1.976     | 1.984     | 1.992     
+5V              | 5.056      | Volts      | ok    | 4.416     | 4.448     | 4.480     | 5.536     | 5.568     | 5.600     
+12V             | 12.137     | Volts      | ok    | 10.600    | 10.653    | 10.706    | 13.250    | 13.303    | 13.356    
+1.1V            | 1.104      | Volts      | ok    | 0.960     | 0.968     | 0.976     | 1.216     | 1.224     | 1.232     
+3.3V            | 3.240      | Volts      | ok    | 2.880     | 2.904     | 2.928     | 3.648     | 3.672     | 3.696     
+3.3VSB          | 3.288      | Volts      | ok    | 2.880     | 2.904     | 2.928     | 3.648     | 3.672     | 3.696     
VBAT             | 3.264      | Volts      | ok    | 2.880     | 2.904     | 2.928     | 3.648     | 3.672     | 3.696     
Fan1             | na         | RPM        | na    | 405.000   | 540.000   | 675.000   | 34155.000 | 34290.000 | 34425.000 
Fan2             | 4860.000   | RPM        | ok    | 405.000   | 540.000   | 675.000   | 34155.000 | 34290.000 | 34425.000 
Fan3             | 4860.000   | RPM        | ok    | 405.000   | 540.000   | 675.000   | 34155.000 | 34290.000 | 34425.000 
Fan4             | 4860.000   | RPM        | ok    | 405.000   | 540.000   | 675.000   | 34155.000 | 34290.000 | 34425.000 
Fan5             | 6480.000   | RPM        | ok    | 405.000   | 540.000   | 675.000   | 34155.000 | 34290.000 | 34425.000 
Fan6             | 6480.000   | RPM        | ok    | 405.000   | 540.000   | 675.000   | 34155.000 | 34290.000 | 34425.000 
Fan7             | 4185.000   | RPM        | ok    | 405.000   | 540.000   | 675.000   | 34155.000 | 34290.000 | 34425.000 
Fan8             | 4320.000   | RPM        | ok    | 405.000   | 540.000   | 675.000   | 34155.000 | 34290.000 | 34425.000 
P1-DIMM1A Temp   | 28.000     | degrees C  | ok    | -9.000    | -7.000    | -5.000    | 80.000    | 85.000    | 90.000    
P1-DIMM2A Temp   | 29.000     | degrees C  | ok    | -9.000    | -7.000    | -5.000    | 80.000    | 85.000    | 90.000    
P1-DIMM3A Temp   | 30.000     | degrees C  | ok    | -9.000    | -7.000    | -5.000    | 80.000    | 85.000    | 90.000    
P2-DIMM1A Temp   | 33.000     | degrees C  | ok    | -9.000    | -7.000    | -5.000    | 80.000    | 85.000    | 90.000    
P2-DIMM2A Temp   | 35.000     | degrees C  | ok    | -9.000    | -7.000    | -5.000    | 80.000    | 85.000    | 90.000    
P2-DIMM3A Temp   | 35.000     | degrees C  | ok    | -9.000    | -7.000    | -5.000    | 80.000    | 85.000    | 90.000    

Power Supply Status

PS A Status
 
 Item                           |                Value 
 ----                           |                ----- 
 DC 12V Output Voltage          |              11.85 V 
 DC 12V Output Current          |               16.5 A 
 Temperature 1                  |             46C/115F 
 Temperature 2                  |              29C/84F 
 Fan 1                          |             5000 RPM 
 Fan 2                          |             4896 RPM 
 DC 12V Output Power            |                195 W 
 AC Input Power                 |                235 W 
 PMBus Revision                 |               0x0011 
 
PS B Status
 
 Item                           |                Value 
 ----                           |                ----- 
 DC 12V Output Voltage          |               12.1 V 
 DC 12V Output Current          |               13.5 A 
 Temperature 1                  |             66C/151F 
 Temperature 2                  |              29C/84F 
 Fan 1                          |             4600 RPM 
 Fan 2                          |             4600 RPM 
 DC 12V Output Power            |                163 W 
 AC Input Power                 |                201 W 
 PMBus Revision                 |               0x0011 
 

System Event Log

 No.   Date         Time       Message
----   ----------   --------   ----------------------------------------------------------------------------------------------
SEL has no entries

Temperature

Daily Temperature

Fan speed

Daily Fans

Voltage

Daily Voltages

dmesg output

For the computer geeks out there, this is a "dmesg" output of the system booting up, listing the installed hardware:

Copyright (c) 1992-2013 The FreeBSD Project.
Copyright (c) 1979, 1980, 1983, 1986, 1988, 1989, 1991, 1992, 1993, 1994
	The Regents of the University of California. All rights reserved.
FreeBSD is a registered trademark of The FreeBSD Foundation.
FreeBSD 8.4-PRERELEASE #0 r250981M: Sat May 25 09:48:16 EDT 2013
    terry@rz1.glaver.org:/usr/obj/usr/src/sys/RAIDZILLA2 amd64
gcc version 4.2.1 20070831 patched [FreeBSD]
Timecounter "i8254" frequency 1193182 Hz quality 0
CPU: Intel(R) Xeon(R) CPU           E5520  @ 2.27GHz (2275.82-MHz K8-class CPU)
  Origin = "GenuineIntel"  Id = 0x106a5  Family = 6  Model = 1a  Stepping = 5
  Features=0xbfebfbff<FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CLFLUSH,DTS,ACPI,MMX,FXSR,SSE,SSE2,SS,HTT,TM,PBE>
  Features2=0x9ce3bd<SSE3,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,CX16,xTPR,PDCM,DCA,SSE4.1,SSE4.2,POPCNT>
  AMD Features=0x28100800<SYSCALL,NX,RDTSCP,LM>
  AMD Features2=0x1<LAHF>
  TSC: P-state invariant
real memory  = 51543801856 (49156 MB)
avail memory = 49690439680 (47388 MB)
ACPI APIC Table: <SUPERM APIC1635>
FreeBSD/SMP: Multiprocessor System Detected: 16 CPUs
FreeBSD/SMP: 2 package(s) x 4 core(s) x 2 SMT threads
 cpu0 (BSP): APIC ID:  0
 cpu1 (AP): APIC ID:  1
 cpu2 (AP): APIC ID:  2
 cpu3 (AP): APIC ID:  3
 cpu4 (AP): APIC ID:  4
 cpu5 (AP): APIC ID:  5
 cpu6 (AP): APIC ID:  6
 cpu7 (AP): APIC ID:  7
 cpu8 (AP): APIC ID: 16
 cpu9 (AP): APIC ID: 17
 cpu10 (AP): APIC ID: 18
 cpu11 (AP): APIC ID: 19
 cpu12 (AP): APIC ID: 20
 cpu13 (AP): APIC ID: 21
 cpu14 (AP): APIC ID: 22
 cpu15 (AP): APIC ID: 23
ioapic0 <Version 2.0> irqs 0-23 on motherboard
ioapic1 <Version 2.0> irqs 24-47 on motherboard
ioapic2 <Version 2.0> irqs 48-71 on motherboard
ichwd module loaded
kbd1 at kbdmux0
acpi0: <SMCI > on motherboard
acpi0: Overriding SCI Interrupt from IRQ 9 to IRQ 20
acpi0: [ITHREAD]
acpi0: Power Button (fixed)
acpi0: reservation of 400, 100 (3) failed
Timecounter "ACPI-safe" frequency 3579545 Hz quality 850
acpi_timer0: <24-bit timer at 3.579545MHz> port 0x808-0x80b on acpi0
cpu0: <ACPI CPU> on acpi0
cpu1: <ACPI CPU> on acpi0
cpu2: <ACPI CPU> on acpi0
cpu3: <ACPI CPU> on acpi0
cpu4: <ACPI CPU> on acpi0
cpu5: <ACPI CPU> on acpi0
cpu6: <ACPI CPU> on acpi0
cpu7: <ACPI CPU> on acpi0
cpu8: <ACPI CPU> on acpi0
cpu9: <ACPI CPU> on acpi0
cpu10: <ACPI CPU> on acpi0
cpu11: <ACPI CPU> on acpi0
cpu12: <ACPI CPU> on acpi0
cpu13: <ACPI CPU> on acpi0
cpu14: <ACPI CPU> on acpi0
cpu15: <ACPI CPU> on acpi0
pcib0: <ACPI Host-PCI bridge> port 0xcf8-0xcff on acpi0
pci0: <ACPI PCI bus> on pcib0
pcib1: <ACPI PCI-PCI bridge> at device 1.0 on pci0
pci1: <ACPI PCI bus> on pcib1
igb0: <Intel(R) PRO/1000 Network Connection version - 2.3.9 - 8> port 0xcc00-0xcc1f mem 0xfaee0000-0xfaefffff,0xfaec0000-0xfaedffff,0xfae9c000-0xfae9ffff irq 28 at device 0.0 on pci1
igb0: Using MSIX interrupts with 9 vectors
igb0: Ethernet address: 00:25:90:01:25:70
igb0: [ITHREAD]
igb0: Bound queue 0 to cpu 0
igb0: [ITHREAD]
igb0: Bound queue 1 to cpu 1
igb0: [ITHREAD]
igb0: Bound queue 2 to cpu 2
igb0: [ITHREAD]
igb0: Bound queue 3 to cpu 3
igb0: [ITHREAD]
igb0: Bound queue 4 to cpu 4
igb0: [ITHREAD]
igb0: Bound queue 5 to cpu 5
igb0: [ITHREAD]
igb0: Bound queue 6 to cpu 6
igb0: [ITHREAD]
igb0: Bound queue 7 to cpu 7
igb0: [ITHREAD]
igb1: <Intel(R) PRO/1000 Network Connection version - 2.3.9 - 8> port 0xc800-0xc81f mem 0xfae20000-0xfae3ffff,0xfae00000-0xfae1ffff,0xfaddc000-0xfaddffff irq 40 at device 0.1 on pci1
igb1: Using MSIX interrupts with 9 vectors
igb1: Ethernet address: 00:25:90:01:25:71
igb1: [ITHREAD]
igb1: Bound queue 0 to cpu 8
igb1: [ITHREAD]
igb1: Bound queue 1 to cpu 9
igb1: [ITHREAD]
igb1: Bound queue 2 to cpu 10
igb1: [ITHREAD]
igb1: Bound queue 3 to cpu 11
igb1: [ITHREAD]
igb1: Bound queue 4 to cpu 12
igb1: [ITHREAD]
igb1: Bound queue 5 to cpu 13
igb1: [ITHREAD]
igb1: Bound queue 6 to cpu 14
igb1: [ITHREAD]
igb1: Bound queue 7 to cpu 15
igb1: [ITHREAD]
pcib2: <ACPI PCI-PCI bridge> at device 3.0 on pci0
pci3: <ACPI PCI bus> on pcib2
ix0: <Intel(R) PRO/10GbE PCI-Express Network Driver, Version - 2.5.8> mem 0xf8e00000-0xf8ffffff,0xf8dfc000-0xf8dfffff irq 24 at device 0.0 on pci3
ix0: Using MSIX interrupts with 9 vectors
ix0: [ITHREAD]
ix0: [ITHREAD]
ix0: [ITHREAD]
ix0: [ITHREAD]
ix0: [ITHREAD]
ix0: [ITHREAD]
ix0: [ITHREAD]
ix0: [ITHREAD]
ix0: [ITHREAD]
ix0: Ethernet address: a0:36:9f:1d:65:c2
ix0: PCI Express Bus: Speed 5.0Gb/s Width x8
ix0: link state changed to UP
pcib3: <ACPI PCI-PCI bridge> at device 5.0 on pci0
pci5: <ACPI PCI bus> on pcib3
pcib4: <ACPI PCI-PCI bridge> at device 7.0 on pci0
pci6: <ACPI PCI bus> on pcib4
mps0: <LSI SAS2004> port 0xd800-0xd8ff mem 0xfbcf0000-0xfbcfffff irq 30 at device 0.0 on pci6
mps0: Firmware: 16.00.00.00, Driver: 14.00.00.01-fbsd
mps0: IOCCapabilities: 185c<ScsiTaskFull,DiagTrace,SnapBuf,EEDP,TransRetry,IR>
mps0: [ITHREAD]
pcib5: <ACPI PCI-PCI bridge> at device 9.0 on pci0
pci7: <ACPI PCI bus> on pcib5
pci0: <base peripheral, interrupt controller> at device 20.0 (no driver attached)
pci0: <base peripheral, interrupt controller> at device 20.1 (no driver attached)
pci0: <base peripheral, interrupt controller> at device 20.2 (no driver attached)
pci0: <base peripheral, interrupt controller> at device 20.3 (no driver attached)
pci0: <base peripheral> at device 22.0 (no driver attached)
pci0: <base peripheral> at device 22.1 (no driver attached)
pci0: <base peripheral> at device 22.2 (no driver attached)
pci0: <base peripheral> at device 22.3 (no driver attached)
pci0: <base peripheral> at device 22.4 (no driver attached)
pci0: <base peripheral> at device 22.5 (no driver attached)
pci0: <base peripheral> at device 22.6 (no driver attached)
pci0: <base peripheral> at device 22.7 (no driver attached)
uhci0: <Intel 82801JI (ICH10) USB controller USB-D> port 0xaf80-0xaf9f irq 16 at device 26.0 on pci0
uhci0: [ITHREAD]
uhci0: LegSup = 0x2f00
usbus0 on uhci0
uhci1: <Intel 82801JI (ICH10) USB controller USB-E> port 0xaf40-0xaf5f irq 21 at device 26.1 on pci0
uhci1: [ITHREAD]
uhci1: LegSup = 0x2f00
usbus1 on uhci1
uhci2: <Intel 82801JI (ICH10) USB controller USB-F> port 0xaf20-0xaf3f irq 19 at device 26.2 on pci0
uhci2: [ITHREAD]
uhci2: LegSup = 0x2f00
usbus2 on uhci2
ehci0: <Intel 82801JI (ICH10) USB 2.0 controller USB-B> mem 0xfbeda000-0xfbeda3ff irq 18 at device 26.7 on pci0
ehci0: [ITHREAD]
usbus3: EHCI version 1.0
usbus3 on ehci0
uhci3: <Intel 82801JI (ICH10) USB controller USB-A> port 0xaf00-0xaf1f irq 23 at device 29.0 on pci0
uhci3: [ITHREAD]
uhci3: LegSup = 0x2f00
usbus4 on uhci3
uhci4: <Intel 82801JI (ICH10) USB controller USB-B> port 0xaec0-0xaedf irq 19 at device 29.1 on pci0
uhci4: [ITHREAD]
uhci4: LegSup = 0x2f00
usbus5 on uhci4
uhci5: <Intel 82801JI (ICH10) USB controller USB-C> port 0xaea0-0xaebf irq 18 at device 29.2 on pci0
uhci5: [ITHREAD]
uhci5: LegSup = 0x2f00
usbus6 on uhci5
ehci1: <Intel 82801JI (ICH10) USB 2.0 controller USB-A> mem 0xfbed8000-0xfbed83ff irq 23 at device 29.7 on pci0
ehci1: [ITHREAD]
usbus7: EHCI version 1.0
usbus7 on ehci1
pcib6: <ACPI PCI-PCI bridge> at device 30.0 on pci0
pci8: <ACPI PCI bus> on pcib6
vgapci0: <VGA-compatible display> mem 0xf9000000-0xf9ffffff,0xfaffc000-0xfaffffff,0xfb000000-0xfb7fffff irq 16 at device 4.0 on pci8
isab0: <PCI-ISA bridge> at device 31.0 on pci0
isa0: <ISA bus> on isab0
ahci0: <Intel ICH10 AHCI SATA controller> port 0xaff0-0xaff7,0xafac-0xafaf,0xafe0-0xafe7,0xafa8-0xafab,0xae80-0xae9f mem 0xfbed6000-0xfbed67ff irq 19 at device 31.2 on pci0
ahci0: [ITHREAD]
ahci0: AHCI v1.20 with 6 3Gbps ports, Port Multiplier not supported
ahcich0: <AHCI channel> at channel 0 on ahci0
ahcich0: [ITHREAD]
ahcich1: <AHCI channel> at channel 1 on ahci0
ahcich1: [ITHREAD]
ahcich2: <AHCI channel> at channel 2 on ahci0
ahcich2: [ITHREAD]
ahcich3: <AHCI channel> at channel 3 on ahci0
ahcich3: [ITHREAD]
ahcich4: <AHCI channel> at channel 4 on ahci0
ahcich4: [ITHREAD]
ahcich5: <AHCI channel> at channel 5 on ahci0
ahcich5: [ITHREAD]
ichsmb0: <Intel 82801JI (ICH10) SMBus controller> port 0x400-0x41f mem 0xfbed4000-0xfbed40ff irq 18 at device 31.3 on pci0
ichsmb0: [ITHREAD]
smbus0: <System Management Bus> on ichsmb0
smb0: <SMBus generic I/O> on smbus0
pcib7: <ACPI Host-PCI bridge> on acpi0
pci128: <ACPI PCI bus> on pcib7
pcib8: <PCI-PCI bridge> at device 0.0 on pci128
pci129: <PCI bus> on pcib8
pcib9: <ACPI PCI-PCI bridge> at device 1.0 on pci128
pci130: <ACPI PCI bus> on pcib9
pcib10: <ACPI PCI-PCI bridge> at device 3.0 on pci128
pci131: <ACPI PCI bus> on pcib10
pcib11: <PCI-PCI bridge> at device 0.0 on pci131
pci132: <PCI bus> on pcib11
mpt0: <LSILogic SAS/SATA Adapter> port 0xe800-0xe8ff mem 0xf7dec000-0xf7deffff,0xf7df0000-0xf7dfffff irq 48 at device 8.0 on pci132
mpt0: [ITHREAD]
mpt0: MPI Version=1.5.13.0
pcib12: <ACPI PCI-PCI bridge> at device 5.0 on pci128
pci133: <ACPI PCI bus> on pcib12
pcib13: <ACPI PCI-PCI bridge> at device 7.0 on pci128
pci134: <ACPI PCI bus> on pcib13
3ware device driver for 9000 series storage controllers, version: 3.80.06.003
twa0: <3ware 9000 series Storage Controller> port 0xfc00-0xfcff mem 0xf4000000-0xf5ffffff,0xf7fde000-0xf7fdefff irq 54 at device 0.0 on pci134
twa0: [ITHREAD]
twa0: INFO: (0x15: 0x1300): Controller details:: Model 9650SE-16ML, 16 ports, Firmware FE9X 4.10.00.027, BIOS BE9X 4.08.00.004
pcib14: <ACPI PCI-PCI bridge> at device 9.0 on pci128
pci135: <ACPI PCI bus> on pcib14
pci128: <base peripheral, interrupt controller> at device 20.0 (no driver attached)
pci128: <base peripheral, interrupt controller> at device 20.1 (no driver attached)
pci128: <base peripheral, interrupt controller> at device 20.2 (no driver attached)
pci128: <base peripheral, interrupt controller> at device 20.3 (no driver attached)
pci128: <base peripheral> at device 22.0 (no driver attached)
pci128: <base peripheral> at device 22.1 (no driver attached)
pci128: <base peripheral> at device 22.2 (no driver attached)
pci128: <base peripheral> at device 22.3 (no driver attached)
pci128: <base peripheral> at device 22.4 (no driver attached)
pci128: <base peripheral> at device 22.5 (no driver attached)
pci128: <base peripheral> at device 22.6 (no driver attached)
pci128: <base peripheral> at device 22.7 (no driver attached)
acpi_button0: <Power Button> on acpi0
ipmi0: <IPMI System Interface> port 0xca2-0xca3 on acpi0
ipmi0: KCS mode found at io 0xca2 on acpi
atrtc0: <AT realtime clock> port 0x70-0x71 irq 8 on acpi0
atkbdc0: <Keyboard controller (i8042)> port 0x60,0x64 irq 1 on acpi0
atkbd0: <AT Keyboard> irq 1 on atkbdc0
kbd0 at atkbd0
atkbd0: [GIANT-LOCKED]
atkbd0: [ITHREAD]
psm0: <PS/2 Mouse> irq 12 on atkbdc0
psm0: [GIANT-LOCKED]
psm0: [ITHREAD]
psm0: model IntelliMouse Explorer, device ID 4
uart0: <16550 or compatible> port 0x3f8-0x3ff irq 4 flags 0x10 on acpi0
uart0: [FILTER]
uart1: <16550 or compatible> port 0x2f8-0x2ff irq 3 on acpi0
uart1: [FILTER]
acpi_hpet0: <High Precision Event Timer> iomem 0xfed00000-0xfed003ff on acpi0
Timecounter "HPET" frequency 14318180 Hz quality 900
qpi0: <QPI system bus> on motherboard
pcib15: <QPI Host-PCI bridge> pcibus 255 on qpi0
pci255: <PCI bus> on pcib15
pcib16: <QPI Host-PCI bridge> pcibus 254 on qpi0
pci254: <PCI bus> on pcib16
ichwd0 on isa0
ichwd0: ICH WDT present but disabled in BIOS or hardware
device_attach: ichwd0 attach returned 6
ipmi1: <IPMI System Interface> on isa0
device_attach: ipmi1 attach returned 16
ichwd0 at port 0x830-0x837,0x860-0x87f on isa0
ichwd0: ICH WDT present but disabled in BIOS or hardware
device_attach: ichwd0 attach returned 6
ipmi1: <IPMI System Interface> on isa0
device_attach: ipmi1 attach returned 16
orm0: <ISA Option ROMs> at iomem 0xc0000-0xc7fff,0xd1000-0xd1fff,0xd2000-0xd3fff,0xd4000-0xd47ff on isa0
sc0: <System console> at flags 0x100 on isa0
sc0: VGA <16 virtual consoles, flags=0x300>
vga0: <Generic ISA VGA> at port 0x3c0-0x3df iomem 0xa0000-0xbffff on isa0
coretemp0: <CPU On-Die Thermal Sensors> on cpu0
est0: <Enhanced SpeedStep Frequency Control> on cpu0
p4tcc0: <CPU Frequency Thermal Control> on cpu0
coretemp1: <CPU On-Die Thermal Sensors> on cpu1
est1: <Enhanced SpeedStep Frequency Control> on cpu1
p4tcc1: <CPU Frequency Thermal Control> on cpu1
coretemp2: <CPU On-Die Thermal Sensors> on cpu2
est2: <Enhanced SpeedStep Frequency Control> on cpu2
p4tcc2: <CPU Frequency Thermal Control> on cpu2
coretemp3: <CPU On-Die Thermal Sensors> on cpu3
est3: <Enhanced SpeedStep Frequency Control> on cpu3
p4tcc3: <CPU Frequency Thermal Control> on cpu3
coretemp4: <CPU On-Die Thermal Sensors> on cpu4
est4: <Enhanced SpeedStep Frequency Control> on cpu4
p4tcc4: <CPU Frequency Thermal Control> on cpu4
coretemp5: <CPU On-Die Thermal Sensors> on cpu5
est5: <Enhanced SpeedStep Frequency Control> on cpu5
p4tcc5: <CPU Frequency Thermal Control> on cpu5
coretemp6: <CPU On-Die Thermal Sensors> on cpu6
est6: <Enhanced SpeedStep Frequency Control> on cpu6
p4tcc6: <CPU Frequency Thermal Control> on cpu6
coretemp7: <CPU On-Die Thermal Sensors> on cpu7
est7: <Enhanced SpeedStep Frequency Control> on cpu7
p4tcc7: <CPU Frequency Thermal Control> on cpu7
coretemp8: <CPU On-Die Thermal Sensors> on cpu8
est8: <Enhanced SpeedStep Frequency Control> on cpu8
p4tcc8: <CPU Frequency Thermal Control> on cpu8
coretemp9: <CPU On-Die Thermal Sensors> on cpu9
est9: <Enhanced SpeedStep Frequency Control> on cpu9
p4tcc9: <CPU Frequency Thermal Control> on cpu9
coretemp10: <CPU On-Die Thermal Sensors> on cpu10
est10: <Enhanced SpeedStep Frequency Control> on cpu10
p4tcc10: <CPU Frequency Thermal Control> on cpu10
coretemp11: <CPU On-Die Thermal Sensors> on cpu11
est11: <Enhanced SpeedStep Frequency Control> on cpu11
p4tcc11: <CPU Frequency Thermal Control> on cpu11
coretemp12: <CPU On-Die Thermal Sensors> on cpu12
est12: <Enhanced SpeedStep Frequency Control> on cpu12
p4tcc12: <CPU Frequency Thermal Control> on cpu12
coretemp13: <CPU On-Die Thermal Sensors> on cpu13
est13: <Enhanced SpeedStep Frequency Control> on cpu13
p4tcc13: <CPU Frequency Thermal Control> on cpu13
coretemp14: <CPU On-Die Thermal Sensors> on cpu14
est14: <Enhanced SpeedStep Frequency Control> on cpu14
p4tcc14: <CPU Frequency Thermal Control> on cpu14
coretemp15: <CPU On-Die Thermal Sensors> on cpu15
est15: <Enhanced SpeedStep Frequency Control> on cpu15
p4tcc15: <CPU Frequency Thermal Control> on cpu15
ZFS filesystem version: 5
ZFS storage pool version: features support (5000)
Timecounters tick every 1.000 msec
usbus0: 12Mbps Full Speed USB v1.0
usbus1: 12Mbps Full Speed USB v1.0
usbus2: 12Mbps Full Speed USB v1.0
usbus3: 480Mbps High Speed USB v2.0
usbus4: 12Mbps Full Speed USB v1.0
usbus5: 12Mbps Full Speed USB v1.0
usbus6: 12Mbps Full Speed USB v1.0
usbus7: 480Mbps High Speed USB v2.0
ugen0.1: <Intel> at usbus0
uhub0: <Intel UHCI root HUB, class 9/0, rev 1.00/1.00, addr 1> on usbus0
ugen1.1: <Intel> at usbus1
uhub1: <Intel UHCI root HUB, class 9/0, rev 1.00/1.00, addr 1> on usbus1
ugen2.1: <Intel> at usbus2
uhub2: <Intel UHCI root HUB, class 9/0, rev 1.00/1.00, addr 1> on usbus2
ugen3.1: <Intel> at usbus3
uhub3: <Intel EHCI root HUB, class 9/0, rev 2.00/1.00, addr 1> on usbus3
ugen4.1: <Intel> at usbus4
uhub4: <Intel UHCI root HUB, class 9/0, rev 1.00/1.00, addr 1> on usbus4
ugen5.1: <Intel> at usbus5
uhub5: <Intel UHCI root HUB, class 9/0, rev 1.00/1.00, addr 1> on usbus5
ugen6.1: <Intel> at usbus6
uhub6: <Intel UHCI root HUB, class 9/0, rev 1.00/1.00, addr 1> on usbus6
ugen7.1: <Intel> at usbus7
uhub7: <Intel EHCI root HUB, class 9/0, rev 2.00/1.00, addr 1> on usbus7
ipmi0: IPMI device rev. 1, firmware rev. 2.08, version 2.0
ipmi0: Number of channels 2
ipmi0: Attached watchdog
uhub0: 2 ports with 2 removable, self powered
uhub1: 2 ports with 2 removable, self powered
uhub2: 2 ports with 2 removable, self powered
uhub4: 2 ports with 2 removable, self powered
uhub5: 2 ports with 2 removable, self powered
uhub6: 2 ports with 2 removable, self powered
uhub3: 6 ports with 6 removable, self powered
uhub7: 6 ports with 6 removable, self powered
cd0 at ahcich1 bus 0 scbus2 target 0 lun 0
cd0: <Slimtype DVD A  DL8ATS XP59> Removable CD-ROM SCSI-0 device 
cd0: 150.000MB/s transfers (SATA 1.x, UDMA5, ATAPI 12bytes, PIO 8192bytes)
cd0: Attempt to query device size failed: NOT READY, Medium not present - tray open
sa0 at mpt0 bus 0 scbus7 target 0 lun 0
sa0: <IBM ULT3580-HH4 C7Q1> Removable Sequential Access SCSI-3 device 
sa0: 300.000MB/s transfers
sa0: Command Queueing enabled
ch0 at mpt0 bus 0 scbus7 target 0 lun 1
ch0: <IBM 3573-TL B.60> Removable Changer SCSI-5 device 
ch0: 300.000MB/s transfers
ch0: Command Queueing enabled
ch0: 44 slots, 1 drive, 1 picker, 3 portals
ada0 at ahcich0 bus 0 scbus1 target 0 lun 0
ada0: <WDC WD3200BEKT-60V5T1 12.01A12> ATA-8 SATA 2.x device
ada0: 300.000MB/s transfers (SATA 2.x, UDMA5, PIO 8192bytes)
ada0: Command Queueing enabled
ada0: 305245MB (625142448 512 byte sectors: 16H 63S/T 16383C)
ada1 at ahcich2 bus 0 scbus3 target 0 lun 0
ada1: <WDC WD3200BEKT-60V5T1 12.01A12> ATA-8 SATA 2.x device
ada1: 300.000MB/s transfers (SATA 2.x, UDMA5, PIO 8192bytes)
ada1: Command Queueing enabled
ada1: 305245MB (625142448 512 byte sectors: 16H 63S/T 16383C)
da0 at mps0 bus 0 scbus0 target 0 lun 0
da0: <LSI Logical Volume 3000> Fixed Direct Access SCSI-6 device 
da0: 150.000MB/s transfers
da0: Command Queueing enabled
da0: 301360MB (617185280 512 byte sectors: 255H 63S/T 38418C)
da1 at twa0 bus 0 scbus8 target 0 lun 0
da1: <AMCC 9650SE-16M DISK 4.10> Fixed Direct Access SCSI-5 device 
da1: 100.000MB/s transfers
da1: 1907338MB (3906228224 512 byte sectors: 255H 63S/T 243151C)
da2 at twa0 bus 0 scbus8 target 1 lun 0
da2: <AMCC 9650SE-16M DISK 4.10> Fixed Direct Access SCSI-5 device 
da2: 100.000MB/s transfers
da2: 1907338MB (3906228224 512 byte sectors: 255H 63S/T 243151C)
da3 at twa0 bus 0 scbus8 target 2 lun 0
da3: <AMCC 9650SE-16M DISK 4.10> Fixed Direct Access SCSI-5 device 
da3: 100.000MB/s transfers
da3: 1907338MB (3906228224 512 byte sectors: 255H 63S/T 243151C)
da4 at twa0 bus 0 scbus8 target 3 lun 0
da4: <AMCC 9650SE-16M DISK 4.10> Fixed Direct Access SCSI-5 device 
da4: 100.000MB/s transfers
da4: 1907338MB (3906228224 512 byte sectors: 255H 63S/T 243151C)
da5 at twa0 bus 0 scbus8 target 4 lun 0
da5: <AMCC 9650SE-16M DISK 4.10> Fixed Direct Access SCSI-5 device 
da5: 100.000MB/s transfers
da5: 1907338MB (3906228224 512 byte sectors: 255H 63S/T 243151C)
da6 at twa0 bus 0 scbus8 target 5 lun 0
da6: <AMCC 9650SE-16M DISK 4.10> Fixed Direct Access SCSI-5 device 
da6: 100.000MB/s transfers
da6: 1907338MB (3906228224 512 byte sectors: 255H 63S/T 243151C)
da7 at twa0 bus 0 scbus8 target 6 lun 0
da7: <AMCC 9650SE-16M DISK 4.10> Fixed Direct Access SCSI-5 device 
da7: 100.000MB/s transfers
da7: 1907338MB (3906228224 512 byte sectors: 255H 63S/T 243151C)
da8 at twa0 bus 0 scbus8 target 7 lun 0
da8: <AMCC 9650SE-16M DISK 4.10> Fixed Direct Access SCSI-5 device 
da8: 100.000MB/s transfers
da8: 1907338MB (3906228224 512 byte sectors: 255H 63S/T 243151C)
da9 at twa0 bus 0 scbus8 target 8 lun 0
da9: <AMCC 9650SE-16M DISK 4.10> Fixed Direct Access SCSI-5 device 
da9: 100.000MB/s transfers
da9: 1907338MB (3906228224 512 byte sectors: 255H 63S/T 243151C)
da10 at twa0 bus 0 scbus8 target 9 lun 0
da10: <AMCC 9650SE-16M DISK 4.10> Fixed Direct Access SCSI-5 device 
da10: 100.000MB/s transfers
da10: 1907338MB (3906228224 512 byte sectors: 255H 63S/T 243151C)
da11 at twa0 bus 0 scbus8 target 10 lun 0
da11: <AMCC 9650SE-16M DISK 4.10> Fixed Direct Access SCSI-5 device 
da11: 100.000MB/s transfers
da11: 1907338MB (3906228224 512 byte sectors: 255H 63S/T 243151C)
da12 at twa0 bus 0 scbus8 target 11 lun 0
da12: <AMCC 9650SE-16M DISK 4.10> Fixed Direct Access SCSI-5 device 
da12: 100.000MB/s transfers
da12: 1907338MB (3906228224 512 byte sectors: 255H 63S/T 243151C)
da13 at twa0 bus 0 scbus8 target 12 lun 0
da13: <AMCC 9650SE-16M DISK 4.10> Fixed Direct Access SCSI-5 device 
da13: 100.000MB/s transfers
da13: 1907338MB (3906228224 512 byte sectors: 255H 63S/T 243151C)
da14 at twa0 bus 0 scbus8 target 13 lun 0
da14: <AMCC 9650SE-16M DISK 4.10> Fixed Direct Access SCSI-5 device 
da14: 100.000MB/s transfers
da14: 1907338MB (3906228224 512 byte sectors: 255H 63S/T 243151C)
da15 at twa0 bus 0 scbus8 target 14 lun 0
da15: <AMCC 9650SE-16M DISK 4.10> Fixed Direct Access SCSI-5 device 
da15: 100.000MB/s transfers
da15: 1907338MB (3906228224 512 byte sectors: 255H 63S/T 243151C)
da16 at twa0 bus 0 scbus8 target 15 lun 0
da16: <AMCC 9650SE-16M DISK 4.10> Fixed Direct Access SCSI-5 device 
da16: 100.000MB/s transfers
da16: 1907338MB (3906228224 512 byte sectors: 255H 63S/T 243151C)
SMP: AP CPU #1 Launched!
SMP: AP CPU #2 Launched!
SMP: AP CPU #3 Launched!
SMP: AP CPU #4 Launched!
SMP: AP CPU #5 Launched!
SMP: AP CPU #6 Launched!
SMP: AP CPU #7 Launched!
SMP: AP CPU #8 Launched!
SMP: AP CPU #9 Launched!
SMP: AP CPU #10 Launched!
SMP: AP CPU #11 Launched!
SMP: AP CPU #12 Launched!
SMP: AP CPU #13 Launched!
SMP: AP CPU #14 Launched!
SMP: AP CPU #15 Launched!
GEOM_MIRROR: Device mirror/gm0 launched (2/2).
Trying to mount root from ufs:/dev/mirror/gm0s1a

Filesystem information

The df output below shows the active filesystems. The /data filesystem is exported via NFS (to other FreeBSD systems) and via Samba (to PC clients). The zpool status and zpool list commands show the state of the ZFS pool:

[0:1] rz1:~> df
Filesystem         1K-blocks          Used      Avail Capacity  Mounted on
/dev/mirror/gm0s1a     8122126      550078    6922278     7%    /
devfs                        1           1          0   100%    /dev
/dev/mirror/gm0s1d   132109852      174074  121366990     0%    /var
/dev/mirror/gm0s1e    32494668           6   29895090     0%    /var/crash
/dev/mirror/gm0s1f    32494668     4628180   25266916    15%    /usr
/dev/mirror/gm0s1g    32494668          80   29895016     0%    /tmp
/dev/mirror/gm0s1h    32494668           4   29895092     0%    /spare
procfs                       4           4          0   100%    /proc
data               22955143746 13989950655 8965193090    61%    /data

[0:2] rz1:~> zpool status
  pool: data
 state: ONLINE
  scan: scrub repaired 0 in 7h23m with 0 errors on Thu Feb  7 04:44:32 2013
config:

        NAME             STATE     READ WRITE CKSUM
        data             ONLINE       0     0     0
          raidz1-0       ONLINE       0     0     0
            label/twd0   ONLINE       0     0     0
            label/twd1   ONLINE       0     0     0
            label/twd2   ONLINE       0     0     0
            label/twd3   ONLINE       0     0     0
            label/twd4   ONLINE       0     0     0
          raidz1-1       ONLINE       0     0     0
            label/twd5   ONLINE       0     0     0
            label/twd6   ONLINE       0     0     0
            label/twd7   ONLINE       0     0     0
            label/twd8   ONLINE       0     0     0
            label/twd9   ONLINE       0     0     0
          raidz1-2       ONLINE       0     0     0
            label/twd10  ONLINE       0     0     0
            label/twd11  ONLINE       0     0     0
            label/twd12  ONLINE       0     0     0
            label/twd13  ONLINE       0     0     0
            label/twd14  ONLINE       0     0     0
        logs
          label/ssd0     ONLINE       0     0     0
        spares
          label/twd15    AVAIL   

errors: No known data errors

[0:3] rz1:~> zpool list
NAME   SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
data  27.2T  16.3T  10.9T    59%  1.00x  ONLINE  -
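
For reference, a pool with the layout shown above (three 5-disk raidz1 vdevs, a separate log device, and a hot spare) could be created with something along these lines. This is only a sketch using the glabel names visible in the status output; it is not necessarily the exact command used to build this pool:

zpool create data \
    raidz1 label/twd0 label/twd1 label/twd2 label/twd3 label/twd4 \
    raidz1 label/twd5 label/twd6 label/twd7 label/twd8 label/twd9 \
    raidz1 label/twd10 label/twd11 label/twd12 label/twd13 label/twd14 \
    log label/ssd0 \
    spare label/twd15

Additional vdevs, log devices, or spares can also be attached to an existing pool later with zpool add.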

zpool maintenance performance is quite good - a scrub of the whole 16+TB of data runs at slightly under 700MB/second:

[0:4] rz1:~> zpool status
  pool: data
 state: ONLINE
  scan: scrub in progress since Thu Feb  7 18:43:17 2013
        15.4T scanned out of 16.4T at 673M/s, 0h25m to go
        0 repaired, 94.04% done
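
For anyone wanting to reproduce this, a scrub is started and monitored with the standard zpool commands, and FreeBSD's periodic(8) can also run one on a schedule. This is a sketch; the exact scheduling used on the RAIDzillas may differ:

zpool scrub data           # start a scrub of the "data" pool
zpool status data          # check progress (the output shown above)

# in /etc/periodic.conf, to let periodic(8) handle scheduling:
daily_scrub_zfs_enable="YES"
daily_scrub_zfs_default_threshold="35"   # days between scrubs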

One thing that will completely kill scrub performance under FreeBSD is enabling ZFS deduplication. I don't know why this is, as the RAIDzillas have plenty of free memory and the disks in the pool report only about 10% I/O utilization (compared with 90%+ during a scrub of a pool without deduplication). Instead of completing in 6 to 7 hours, a scrub takes more like 6 to 7 days. I discussed the issue with a number of FreeBSD developers and tried the various suggestions they made, but nothing improved scrub performance on pools with deduplication enabled. Fortunately (?), the data on my RAIDzillas doesn't lend itself to deduplication - it provides only about a 1.02x (2%) gain - so I was able to rebuild the pools without deduplication and get my performance back.
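
Before enabling deduplication on a pool (or rebuilding one to get rid of it), it is worth estimating how much space it would actually save. A sketch of the commands involved, assuming the same "data" pool:

zdb -S data                # simulate dedup and print an estimated ratio
zpool list data            # the DEDUP column shows the ratio on a live pool
zfs set dedup=off data     # stop deduplicating new writes

Note that turning deduplication off only affects data written afterwards; blocks already in the dedup table stay there until they are rewritten, which is why rebuilding the pool was the only way to fully recover scrub performance.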

Exported filesystems

Here is a Windows 7 view showing (among other systems) a pair of RAIDzilla II's (Y: and Z:) with 21TB of storage on each. Please note that all of these systems are firewalled from the Internet, so don't bother trying to come visit - it will only annoy me and cause me to make unhappy noises at your ISP.

[Windows view of two RAIDzilla II's]
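
As mentioned in the filesystem information above, /data is exported both via NFS and via Samba. A minimal sketch of what that configuration could look like is shown below; the network numbers and share options are placeholders, not the actual settings used on the RAIDzillas:

# /etc/exports (NFS) - export /data to one client network
/data -maproot=root -network 192.168.1.0 -mask 255.255.255.0

# /etc/rc.conf - enable the NFS server
nfs_server_enable="YES"
rpcbind_enable="YES"
mountd_enable="YES"

# smb.conf (Samba) - share the same filesystem to PC clients
[data]
    path = /data
    read only = no
    browseable = yes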

Benchmarks

Here are some quick benchmarks I ran. Note that this is only testing one of the [...]
[Sample iozone graph showing 500MB/sec writes]
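
The graph above was produced from iozone output. A command line along these lines will generate comparable numbers; this is a sketch, and the file size, test selection, and output path are assumptions rather than the exact parameters used here:

iozone -R -a -g 128g -i 0 -i 1 -f /data/iozone.tmp -b iozone-results.xls

Here -a runs the automatic test matrix, -g caps the maximum file size (picking a size larger than the 48GB of RAM avoids simply measuring the ARC), -i 0 -i 1 selects the write/rewrite and read/reread tests, and -R/-b write an Excel-style report for graphing.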

Note: URLs on this page were verified as of February, 2013. External web sites tend to restructure their data at random times. If you notice a broken link, please report it via the "Contact Us" link below so we can fix it.