Re: Happiness is...
On Mon, 2013-09-16 at 09:07 -0500, Kevin Thomas wrote:
> Thanks for the info. I noticed in a previous email, you said " It'd
> be a lot better if/when Red Hat
> integrates it into the mainstream." Do you think this will eventually
> happen, or do you think Red Hat wants nothing to do with supporting
> zfs? I've considered using zfs on my NAS at home (currently running
> Scientific Linux 6.4), but I don't really want to rebuild it and I
> only have 4 GB of memory in it and from what I understand, you need
> lots of RAM to get the full benefit of zfs. Right now, I have the OS
> on one disk, four 320 GB disks in a RAID 0 storing the data, and a
> 1 TB USB 2.0 external hard drive attached that stores the backups
> that rsync handles for me.
> All my partitions are formatted as ext4.
Kevin,
With "only" 4GB of RAM you can only have a 500GB ZFS array with
deduplication. That's the only real benefit of ZFS compared to soft RAID
arrays built with ext4 and mdadm.
Right now the only safe way to do ZFS deduplication is to use on-board
registered ECC memory -- 8GB for every 1TB of storage. That means a
server motherboard with lotsa slots. Mine has two CPU sockets and
sixteen DIMM slots, filled with 2GB modules for a total of 32GB. I have
a 2TB array, which needs 16GB of RAM to cache the deduplication table
that maps hashes of the block contents to their block locations. The
other 16GB is used by CentOS 6.4 and its buffers.
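Back-of-the-envelope, if you want to play with the numbers (a quick
Python sketch that just applies that 8GB-per-1TB rule of thumb; it's
not anything official from the ZFS docs):

    # Rule of thumb from above: ~8 GB of RAM per 1 TB of pool capacity
    # to keep the whole dedup table in memory.
    GB_PER_TB = 8

    def dedup_ram_gb(pool_tb):
        """RAM (in GB) to hold the dedup table for a pool of pool_tb TB."""
        return pool_tb * GB_PER_TB

    print(dedup_ram_gb(0.5))  # 4.0  -> Kevin's 4GB box tops out near 500GB
    print(dedup_ram_gb(2))    # 16   -> my 2TB array eats half of my 32GB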
I think the most practical way to get away from using on-board RAM for
this cache would be to dedicate an uber-fast SSD of equivalent capacity
for that purpose (what ZFS calls an L2ARC cache device). A
fully-populated 140TB Backblaze storage pod would need roughly a 1.2TB
SSD to run ZFS with full deduplication.
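Same arithmetic for the pod, for what it's worth (again just the rule
of thumb above, in Python):

    # 8 GB per TB applied to a fully-populated 140TB Backblaze pod.
    pod_tb = 140
    cache_gb = pod_tb * 8      # 1120 GB
    print(cache_gb / 1000.0)   # ~1.12 TB, hence the ~1.2TB SSD figure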
The only way that could ever happen would be for Red Hat to apply its
considerable filesystem and kernel developer resources to ZFS.
--Doc
-
To unsubscribe, send email to majordomo@silug.org with
"unsubscribe silug-discuss" in the body.