
Re: Creating a zfs filesystem in CentOS 6.4



Also, I have no idea about your .ko issue, and I'm nowhere near a terminal to check it for you.

On May 13, 2013 7:18 PM, <dsavage@peaknet.net> wrote:
I knew I should have taken notes at the recent SILUG meeting when Steve
was showing us how this was done... :-(  I want to convert a RAID5 array
from ext4 to zfs.

If you don't want to read a bunch of background info, jump directly to my
questions above my signature block.

----------------------------------------------------------------

I'm using a SuperMicro dual quad-Athlon motherboard with an Adaptec
zero-channel RAID card. I would like to use the on-board RAID firmware,
but it's old and limited to volumes under 2TB. Instead I use software RAID
to merge nine 300GB SCSI3 drives on the Adaptec's two ports into a single
meta-drive (md127p1).
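
I don't have the exact mdadm invocation I used handy, but it was something
along these lines (device list from memory):

# mdadm --create /dev/md127 --level=5 --raid-devices=9 \
        /dev/sd[b-j]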

At the software level each drive looks like this:

# fdisk -l /dev/sdb

        Disk /dev/sdb: 300.0 GB, 299965284352 bytes
        255 heads, 63 sectors/track, 36468 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x0005204c

           Device Boot      Start         End      Blocks   Id  System
        /dev/sdb1               1       36468   292929178+  83  Linux

At the hardware level they look like this:

        md/raid:md127: device sdg operational as raid disk 5
        md/raid:md127: device sdf operational as raid disk 4
        md/raid:md127: device sdh operational as raid disk 6
        md/raid:md127: device sdj operational as raid disk 8
        md/raid:md127: device sdi operational as raid disk 7
        md/raid:md127: device sdc operational as raid disk 1
        md/raid:md127: device sdb operational as raid disk 0
        md/raid:md127: device sde operational as raid disk 3
        md/raid:md127: device sdd operational as raid disk 2
        md/raid:md127: allocated 9574kB
        md/raid:md127: raid level 5 active with 9 out of 9 devices, algorithm 2
        RAID conf printout:
         --- level:5 rd:9 wd:9
         disk 0, o:1, dev:sdb
         disk 1, o:1, dev:sdc
         disk 2, o:1, dev:sdd
         disk 3, o:1, dev:sde
         disk 4, o:1, dev:sdf
         disk 5, o:1, dev:sdg
         disk 6, o:1, dev:sdh
         disk 7, o:1, dev:sdi
         disk 8, o:1, dev:sdj
        md127: detected capacity change from 0 to 2399712313344
        md127: p1
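
For anyone who wants to double-check the array itself, the usual md status
commands apply:

# cat /proc/mdstat
        # mdadm --detail /dev/md127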

I mention this because the ZFS documentation seems to think the number of
SCSI channels is important. That said, I downloaded the dkms-enabled ZFS
package like Steve did and installed it on my CentOS 6.4 server.

# yum localinstall --nogpgcheck \
        http://archive.zfsonlinux.org/epel/zfs-release-1-2.el6.noarch.rpm
        # yum -y install zfs
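
As I understand it, the zfs package pulls in dkms bits that are supposed to
build the kernel modules at install time. Something like this should show
whether anything actually got built, though I haven't dug into it yet:

# dkms status
        # rpm -q dkms kernel-devel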

Then I tried something simple (and intuitive):

# mkfs zfs /dev/md127p1
        mke2fs 1.41.12 (17-May-2010)
        mkfs.ext2: invalid blocks count - /dev/md127p1

Oops! I forgot that zfs isn't integrated into mkfs (yet). I seem to recall
a 'create' command. So I tried this:

# zfs create /dev/md127p1
        Failed to load ZFS module stack.
        Load the module manually by running 'insmod <location>/zfs.ko' as root.
        Failed to load ZFS module stack.
        Load the module manually by running 'insmod <location>/zfs.ko' as root.
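
Rereading the docs, I suspect my command was off anyway: pools get created
with 'zpool create', and 'zfs create' makes datasets inside an existing pool
rather than working on a raw device. Once the module loads, I'm guessing it
would look more like this ('tank' is just a placeholder pool name):

# zpool create tank /dev/md127p1
        # zfs create tank/data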

The error message is at least promising, but there's no zfs.ko anywhere on my
disk. Do I have to build one? Something to do with dkms? Who knows?
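
If dkms is what's supposed to produce the zfs.ko, my untested guess is that
the module gets built for the running kernel and loaded along these lines:

# dkms autoinstall
        # modprobe zfs
        # lsmod | grep -E 'zfs|spl'

I gather dkms needs kernel-devel for the running kernel before it will build
anything, so maybe that's the missing piece on my box.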

--Doc Savage
  Fairview Heights, IL
