
Re: Saturday's SILUG meeting



On Fri, 2014-01-24 at 17:47 -0600, Robert G. (Doc) Savage wrote:
> On Sun, 2014-01-19 at 16:57 -0600, Robert G. (Doc) Savage wrote:
> > I'm now zeroizing all five drives in the new /dev/md1 array and
> > have written the zfs pool creation script. It took 18.5 hours to zeroize
> > nine 300G drives on a system with 8 cores. The pod has 4 cores, so five
> > 4T drives may take several days to zeroize at a 10.xx load factor.
> 
> It took just over four days to zeroize all five of them.

Final update:

I had to zeroize four drives from the original /dev/md0 array; the fifth I swapped for a replacement Seagate sent me for another drive that was DOA. For some reason CentOS kept trying to assemble a /dev/md127 array from the leftover metadata partitions of three of the /dev/md0 drives. To make it stop, I had to remount /dev/sda1 read-write and comment out the /dev/md0 mount entry in /etc/fstab.
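
(For the archives, the workaround boiled down to roughly the following -- a sketch rather than an exact transcript, and it assumes /dev/sda1 is the root filesystem; the number of the stray array may differ on another box:)

# mount -o remount,rw /
# mdadm --stop /dev/md127
# vi /etc/fstab    (comment out the /dev/md0 line)

Running "mdadm --zero-superblock" against the old /dev/md0 member partitions should also keep the auto-assembly from coming back after a reboot.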

With all that prep behind me, with ten 4T drives I was able to create a very large ZFS pool:
# zpool create -f pod raidz2 <list of SATA drives from /dev/disk/by-id/>
# zpool list
NAME   SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
pod   36.2T  1.66M  36.2T     0%  1.00x  ONLINE  -
"raidz2" in ZFS should be equivalent to RAID6 in a more conventional software array, so I'm a bit surprised to see 36.2T free space. With about 3.6T of formatted space on each drive, I expected to see about (10 - 2) x 3.6T = 28.8T. When fully populated with three ranks of 15 drives, the total space may be almost 163T. By the time I'm ready to start populating the second and third ranks, I might be able to buy drives larger than 4T. :-)

Moving terabytes from one machine to another takes an annoyingly long time even with Gigabit Ethernet, so I'm now in the market for affordable 10GE NICs and switches.
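
(Back of the envelope: GbE moves about 112 MB/s on the wire at best, so a single terabyte takes roughly 1,000,000 MB / 112 MB/s, or about 2.5 hours, and ten terabytes is over a day of continuous transfer before any protocol or disk overhead.)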

--Doc