Re: Preparing for zfs filesystem
On Mon, 2013-06-17 at 00:44 -0500, Robert G. (Doc) Savage wrote:
> Failing to get any response, I'm falling back to my old standby: dd:
>
> # dd if=/dev/zero of=/dev/sdb bs=1M
>
> I have nine xterm windows open running this for /dev/sdb through
> /dev/sdj. Not surprisingly, this is a H-U-G-E I/O load on the system;
> top reports a load average of ~30. For an 8-core system with 32GB of
> RAM, it's positively *groaning* under that load. I have no idea how
> long it will take.
Well, it took about 18-1/2 hours altogether. That's with a hardware RAID
card driving four 15K SCSI3 drives on one channel and five on the other.
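(For anyone repeating this, the nine xterms could also be a single shell
loop with one backgrounded dd per drive; a sketch, not the exact commands
I ran:)

# for d in /dev/sd[b-j]; do dd if=/dev/zero of=$d bs=1M & done
# wait

The glob expands to the nine SCSI drives, each dd runs in the background,
and 'wait' blocks until the last one finishes.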
Now they all look like:
Disk /dev/sdb: 300.0 GB, 299965284352 bytes
255 heads, 63 sectors/track, 36468 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Before doing anything, I needed to change SELinux from enforcing to
permissive in /etc/selinux/config. This is necessary until Red Hat makes
ZFS a supported filesystem. To make the change effective I had to reboot.
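(For reference, the change is a one-line edit, and 'setenforce 0' flips
the running system immediately; a sketch:)

# setenforce 0
# sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config
# getenforce

After that, getenforce should report Permissive; the reboot just makes
sure everything starts up that way.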
Because I have two SCSI3 channels, and one of those 300GB hard drives
(/dev/sda) is boot & root, I have to explicitly declare the zpool members
using the names under /dev/disk/by-id/. These are soft links, created
dynamically at boot time, that point to /dev/sd[b-j]. Note that when I
have a SATA drive in the eSATA external "toaster" adapter, it's detected
as /dev/sda and all the SCSI drives get bumped down one drive letter
(/dev/sd[c-k]). That's not a problem, because each by-id link follows its
drive to the new /dev name, so the pool still finds the right disks.
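(The by-id names can be read straight out of that directory; a sketch,
and the exact link names will depend on the controller:)

# ls -l /dev/disk/by-id/ | grep scsi-SAdaptec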
# zpool create -f pub raidz2 \
/dev/disk/by-id/scsi-SAdaptec_0-1_4CABCE14 \
/dev/disk/by-id/scsi-SAdaptec_0-2_660FCE14 \
/dev/disk/by-id/scsi-SAdaptec_0-3_5A13DE14 \
/dev/disk/by-id/scsi-SAdaptec_0-4_DEA3EE14 \
/dev/disk/by-id/scsi-SAdaptec_1-0_92C3FE14 \
/dev/disk/by-id/scsi-SAdaptec_1-1_551BFE14 \
/dev/disk/by-id/scsi-SAdaptec_1-2_62E00E14 \
/dev/disk/by-id/scsi-SAdaptec_1-3_49941E14 \
/dev/disk/by-id/scsi-SAdaptec_1-4_5B0C1E14
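(For anyone repeating this, 'zpool status pub' reads back the raidz2
layout and 'zfs list pub' the usable space; a sketch:)

# zpool status pub
# zfs list pub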
I'd no sooner hit "Enter" than I had a 1.9T array mounted at /pub. (That
figure checks out: raidz2 keeps two drives' worth of parity, so nine
~280 GiB drives leave roughly 7 x 280 GiB, or about 1.9T, usable.) Or did
I? I pulled up 'man zfs' and took another look at 'zfs create'.
Explicitly turning dedup=on sounded like a good idea. So did making the
filesystem shareable via NFS.
# zfs create -p -o dedup=on -o sharenfs=on pub
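(Since the pool's root dataset already exists at that point, the usual
route is 'zfs set', with 'zfs get' to confirm the properties took; a
sketch of that alternative:)

# zfs set dedup=on pub
# zfs set sharenfs=on pub
# zfs get dedup,sharenfs pub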
That's it. I hope. I'm using rsync to migrate my repos from their
temporary home on the 4T drive (/dev/sda1, mounted at /mnt). The drive
lights on the RAID array are blinking like mad. :-)
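(The copy itself is nothing fancier than an archive-mode rsync; a sketch,
with 'repos' standing in for the actual directory names on /mnt:)

# rsync -avP /mnt/repos/ /pub/repos/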
--Doc