  • ZFS is fine with external disks and indirect access to hardware, in my limited experience. Performance won’t be as good as it could be, but data integrity shouldn’t be a problem. If I didn’t need the space and this were my primary storage array, I’d probably opt for the increased reliability of 2 mirror vdevs. I’ve done something similar to what OP is suggesting with LVM combining multiple disks on my off-site backup though. I combined 1T+3T+4T disks into a single 8TB volume (see the sketch below). Deliciously bastardized, not a single integrity issue, and no hardware failures either over the several years it ran.
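
    Roughly, the LVM concatenation looked like this - a minimal sketch with hypothetical device names and a hypothetical volume group name, so adjust to your disks:

        pvcreate /dev/sdb /dev/sdc /dev/sdd            # the 1T, 3T and 4T disks
        vgcreate backupvg /dev/sdb /dev/sdc /dev/sdd   # one volume group spanning all three
        lvcreate -l 100%FREE -n backuplv backupvg      # a single ~8TB linear volume
        mkfs.ext4 /dev/backupvg/backuplv               # then format and mount like any disk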


  • The simplest way to get exactly what you want is to use LVM to create a linear volume (equivalent to JBOD) from the two 6TB disks, then create a zpool with a single RAIDz1 vdev from that volume and the other 2 12TB disks. You could use mdraid to do a RAID0 as you suggested too; the result would be similar.
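
    For instance, a minimal sketch of that setup, assuming hypothetical device names (sda/sdb being the 6TB disks, sdc/sdd the 12TB ones):

        pvcreate /dev/sda /dev/sdb                  # the two 6TB disks
        vgcreate joinvg /dev/sda /dev/sdb
        lvcreate -l 100%FREE -n twelve joinvg       # one ~12TB linear (JBOD) volume
        zpool create tank raidz1 /dev/joinvg/twelve /dev/sdc /dev/sdd   # RAIDz1 over 3x ~12TB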

    You could also do it all with ZFS, albeit with more lost space. You could create a zpool with 2 vdevs: one a 6TB mirror comprised of the 2 6TB drives, the other a 12TB mirror. The redundancy in ZFS is at the vdev level. A zpool contains one or more vdevs and combines their space like a JBOD. You can mix and match the size and type of the vdevs: mirrors with RAIDz, just mirrors, just RAIDz, etc. My suggestion of two mirrors, 6TB and 12TB, results in 18TB usable space. This is straightforward, easy to manage and easy to expand - you just add another vdev to the pool with whatever topology you like. If you want to maximize the space with what you’ve got, you can do your idea instead. It’s got a bit more setup and a bit less redundancy but it’ll work fine.
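
    The all-ZFS version is a couple of commands (again with hypothetical device and pool names):

        zpool create tank mirror /dev/sda /dev/sdb   # the 6TB mirror vdev
        zpool add tank mirror /dev/sdc /dev/sdd      # the 12TB mirror vdev -> 18TB usable
        # expanding later is the same: zpool add tank <topology> <disks...>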




  • So you’ve listed some important cons. I don’t see the why outweighing those cons. If the why is “I really wanna play with this,” then perhaps that outweighs the cons.

    BTW on production servers we often don’t do updates at all. That’s because updates can break things in ways beyond what’s expected. Instead we apply updates to the base OS in a preproduction environment, build an image out of it, test it, and send that image to the data centers where our production servers are. We test it some more in a staging environment. Then the update becomes: spin up new VMs in the production environment from the new image and destroy the old VMs.
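
    For illustration only - the original doesn’t name any tools - that flow looks roughly like this with a Packer/Terraform-style toolchain:

        packer build base-image.pkr.hcl   # bake the updated base OS into a new image
        # test in preproduction, ship the image to the DCs, test again in staging
        terraform apply                   # roll production: new VMs from the new image
        # destroy the old VMs once the new ones are healthy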




  • You’re right, raidz expansion is brand new and I probably wouldn’t use it for a few years. I was referring to adding new redundant vdevs to an existing pool, which has always been supported as far as I know. E.g. if you have an existing raidz or mirror, you can add another raidz or mirror vdev to the pool. The pool size grows by the usable size of the new vdev. It’s just zpool add thepool mirror disk1 disk2. The downside is that it results in less usable space - e.g. two raidz1 vdevs remove 2 disks from the usable space, whereas Unraid-raid would remove 1. For example, if you have 3x 3TB and 3x 4TB disks, you’d end up with 14TB usable space with ZFS and 17TB with Unraid. On the flip side, the two raidz1 vdevs would have higher reliability since you can have one disk die in each vdev.
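
    As a sketch (hypothetical pool and device names), growing a pool that already has a 3x 3TB raidz1 vdev with a second raidz1 made of the 3x 4TB disks:

        zpool add tank raidz1 /dev/sdd /dev/sde /dev/sdf
        # ZFS usable:    (3-1)x3TB + (3-1)x4TB = 14TB
        # Unraid usable: 3+3+3+4+4 = 17TB, with the largest (4TB) disk as parity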

    > just clicking a button on a webUI.

    No question. I think TrueNAS offers this too.

    > ZFS also recently had a major data loss bug so I’m not sure safer is accurate.

    Imagine how many of those would be found in Unraid-raid if it were used as widely and under similar loads as ZFS. My argument isn’t that there aren’t bugs in storage systems. There are, and the more eyes that have seen the code and the more users that have lost data over more years, the fewer bugs remain. Assuming similar competence of the system developers, ZFS being much older and run under production loads makes it likely to contain fewer data-eating bugs than Unraid.




  • This is what I meant. ☝️ If they had merely wrapped LVM/mdraid or ZFS in nice packaging, my argument wouldn’t stand - they would have had data reliability equivalent to TrueNAS.

    As a software developer (who’s looked at ZFS’s source to chase a bug), I would not dare write my own redundant storage system. I feel like storage is a complex area with tons of hard-learned gotchas, and similar to cryptography, the best practice is not to roll your own unless truly necessary. This is not your run-of-the-mill web app, and mistakes eat data. Potentially data with bite marks that gets backed up, eventually fully replacing the original before it’s caught. I don’t have data for this, but I bet the proportion of Unraid users with eaten data is significantly higher than the equivalent for solutions built on industry-standard systems. The average web UI user probably isn’t browsing through their ods/xlsx files regularly to check whether some 5 became a 13.




  • Yeah, I guess that’s the niche. I would still not trust their homegrown raid scheme though. Making storage systems that don’t eat data is hard; making one without bugs is impossible. Bugs are found by eating someone’s data and get fixed over time, at a rate scaled by the size of the userbase. As a result, industry-standard systems like mdraid, LVM, ZFS, and more recently Btrfs - used in data centers and production applications - are statistically guaranteed to eat less data than Unraid’s homegrown solution. I’ve heard Unraid now supports those systems too, so if I had to use it, I’d probably be using ZFS for the storage.


  • Yeah, I’ve read about that, but I couldn’t buy it because you could achieve similar results with LVM, ZFS etc., albeit with a bit more thought. For example, I used to have a mirror (RAID1) comprised of 1TB, 3TB, 4TB and 8TB disks. The 1, 3 and 4TB disks were concatenated into an 8TB linear volume (JBOD) and that was then mirrored with the 8TB disk (RAID1). All using standard battle-tested software - LVM, mdraid and Ext4. I got 8TB usable from it, the same as I’d have gotten in Unraid, and the redundancy was equivalent too.

    With ZFS things are even simpler. Build whatever redundant scheme you have disks for, using whatever redundancy scheme makes sense for those disks. You can combine multiple schemes, e.g. a 1TB + 1TB mirror and a RAIDz1 with 3x 3TB disks, all adding up to 7TB of nice contiguous usable space with all the data integrity guarantees of ZFS (see the sketch below). Heck, if you need to do some 3-disks-in-a-trenchcoat trickery to utilize your obsolete hardware like I did, you can use LVM for that and give the result to ZFS. When you’re ready to expand, buy disks for whatever redundancy scheme you like and just add them to your ZFS pool. No fuss. You like living dangerously? Add disks without redundancy. Can’t afford redundancy now but you’d like it later? Add disks without redundancy now, add redundancy later.
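
    A minimal sketch of that combined scheme and the later expansion options, with hypothetical device names:

        zpool create tank mirror /dev/sda /dev/sdb raidz1 /dev/sdc /dev/sdd /dev/sde   # 1TB + 6TB = 7TB usable
        zpool add tank /dev/sdf              # living dangerously: a lone disk, no redundancy
        zpool attach tank /dev/sdf /dev/sdg  # add redundancy later by mirroring that disk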


  • I’m sure there are reasons for using Unraid but the original funky raid alternative they marketed has always struck me as extremely fishy. The kind of solution developed by folks who didn’t know enough about the best practices in storage and decided to roll their own. I guess people like web interfaces too. Personally I’d never use it. Get Debian Stable or Ubuntu LTS, learn some Docker, Ansible and Prometheus, deploy and never touch until you break it or the hardware breaks. Throw Webmin on it if you like dancing bears too.


  • Agreed. Or at least I hope so. There’s plenty of time for people to become aware that inflation has decreased, for pharmacare and dental care to get introduced, etc. If they strengthen labor too, that’d be great.

    If they do something marketable around housing and maybe smack some corpo household names a bit, that’d be even more amazing. Love me some Galen Weston and Mirko Bibic tears. I won’t hold my breath on that one but I can dream.