Message from @Tervy

Discord ID: 539085765217353728


2019-01-27 13:28:23 UTC  

its not that great

2019-01-27 13:28:41 UTC  

dunno if other raid modes work differently on ZFS

2019-01-27 13:34:56 UTC  

oh yeah i meant to send this to you day or two ago @porco https://www.indiegogo.com/projects/turris-mox-modular-open-source-router#/ seen that yet ?

2019-01-27 13:35:19 UTC  

lettuce in your router?

2019-01-27 13:35:48 UTC  

their advert video is hilarious

2019-01-27 14:10:46 UTC  

@Tervy should I just run 4 disk RAID10 <:thonkang:327933449597878312>

2019-01-27 14:11:02 UTC  

ask /r/datahoarder

2019-01-27 14:11:03 UTC  

:P

2019-01-27 14:11:22 UTC  

and tbh it's down to what features you need, etc

2019-01-27 14:11:37 UTC  

i use mostly off-site systems these days

2019-01-27 14:11:59 UTC  

i've only learned not to run raid5

2019-01-27 14:12:02 UTC  

and never built more than 2 larger nas setups

2019-01-27 14:12:22 UTC  

where someone else did most of the "operating system" work

2019-01-27 14:12:29 UTC  

I have two disks and could increase that to 3-4 right now

2019-01-27 14:12:38 UTC  

but i have 8 bays and want to expand in the future

2019-01-27 14:12:48 UTC  

this one person said MDADM+LVM

2019-01-27 14:12:52 UTC  

i have no idea what that is lmao

2019-01-27 14:13:41 UTC  

mdadm is raid management and monitoring software which is quite nice

2019-01-27 14:14:13 UTC  

he pretty much suggested software based raid system

2019-01-27 14:14:22 UTC  

not hardware based

2019-01-27 14:14:24 UTC  

gay

2019-01-27 14:19:38 UTC  

then ofc there is the option of raid 6 madness, which has the most fault tolerance :D

2019-01-27 14:19:45 UTC  

and requires 4 drives minimum

2019-01-27 14:19:57 UTC  

well, with 4 drives raid 6 gives about the same usable capacity as raid10

2019-01-27 14:20:03 UTC  

but yeah, i'm not a person with enough knowledge to answer this question

2019-01-27 14:20:08 UTC  

raid5/6 has a lot of write overhead because of parity

2019-01-27 14:20:16 UTC  

and they're slower to restore because of parity calculations

2019-01-27 14:20:17 UTC  

hence the 4 drives
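The capacity and parity-overhead points above can be sketched numerically. This is a hypothetical illustration using the textbook usable-capacity formulas and the commonly quoted small-write IO penalty factors (2 for RAID10, 4 for RAID5, 6 for RAID6), not benchmarks of any specific setup:

```python
# Rough comparison of usable capacity and small-write IO penalty
# for common RAID levels, given n equal-sized disks. Numbers are
# the textbook values, not measurements of any real controller.

def usable_disks(level: str, n: int) -> int:
    """Usable capacity, in units of whole disks, for n equal disks."""
    if level == "raid10":
        assert n >= 4 and n % 2 == 0
        return n // 2          # everything is mirrored once
    if level == "raid5":
        assert n >= 3
        return n - 1           # one disk's worth of parity
    if level == "raid6":
        assert n >= 4          # hence the 4-drive minimum
        return n - 2           # two disks' worth of parity
    raise ValueError(level)

# Classic small random-write penalty (IOs issued per logical write):
# RAID10 = 2 (write both mirrors), RAID5 = 4 (read old data + parity,
# write new data + parity), RAID6 = 6 (same, but two parity blocks).
WRITE_PENALTY = {"raid10": 2, "raid5": 4, "raid6": 6}

for level in ("raid10", "raid5", "raid6"):
    print(level, usable_disks(level, 4), WRITE_PENALTY[level])
```

With 4 disks, RAID10 and RAID6 both leave 2 disks of usable space, which is why they look "pretty equal" on capacity; the difference is the parity write overhead.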

2019-01-27 14:20:26 UTC  

4head

2019-01-27 14:20:34 UTC  

raid5 has a high chance of failure when one disk fails and you try to rebuild the array

2019-01-27 14:20:54 UTC  

that's a meme anyways

2019-01-27 14:21:03 UTC  

people quote the absolute maximum error rate for that

2019-01-27 14:21:14 UTC  

if your disk is at that point it was already dying anyway
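The "absolute maximum error rate" argument being dismissed here goes like this. Assuming the pessimistic spec-sheet URE rate of 1 error per 1e14 bits read and treating every bit as an independent trial (both questionable assumptions, which is exactly the point), a small sketch:

```python
# The naive "RAID5 rebuild is doomed" calculation people quote:
# probability of reading every bit on the surviving disks without
# a single unrecoverable read error (URE), assuming independent
# errors at the worst-case consumer spec-sheet rate.

URE_RATE = 1e-14  # errors per bit read (typical consumer-drive spec)

def rebuild_success_prob(disk_tb: float, surviving_disks: int) -> float:
    """Chance of a full-array read with zero UREs, under the naive model."""
    bits_read = surviving_disks * disk_tb * 1e12 * 8
    return (1.0 - URE_RATE) ** bits_read

# Rebuilding a 4-disk RAID5 of 4 TB drives means reading 3 full disks.
print(f"{rebuild_success_prob(4.0, 3):.3f}")
```

The naive model spits out well under a 50% success chance for that array, which is the scary number that gets quoted; real drives either do much better than the spec-sheet rate or are already failing outright.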

2019-01-27 14:22:31 UTC  

doesn't the rebuild fail completely if a single error occurs?

2019-01-27 14:22:55 UTC  

yes

2019-01-27 14:23:03 UTC  

sucks

2019-01-27 14:23:09 UTC  

not gonna risk it then

2019-01-27 14:23:20 UTC  

but think about it

2019-01-27 14:23:21 UTC  

that means

2019-01-27 14:23:39 UTC  

where i work i'm not too sure how it's set up, but we run weekly full integrity scans of all data