Replacing a failed drive in a NAS

Posted by: PeterE on 30 January 2017

I have a QNAP TS-412 NAS comprising 2x2TB hard drives operating as a mirrored pair. One of the drives appears to have failed or to be failing, from what I can make out in QFinder Pro.

To replace the failed/failing drive is it simply a matter of taking out the old drive, putting in a new drive and sitting back whilst the system does its thing?  Or is it more techy than that? Can someone please help?

Posted on: 30 January 2017 by David Hendon
PeterE posted:

I have a QNAP TS-412 NAS comprising 2x2TB hard drives operating as a mirrored pair. One of the drives appears to have failed or to be failing, from what I can make out in QFinder Pro.

To replace the failed/failing drive is it simply a matter of taking out the old drive, putting in a new drive and sitting back whilst the system does its thing?  Or is it more techy than that? Can someone please help?

Google is your friend in cases like this. I googled QNAP NAS replace failed drive and there it all is.  I think you can just plug in the new drive. Turn it on again and leave it to rebuild, but it does depend on your exact configuration which is why I suggest googling the answer and checking out your exact setup.

best

David

Posted on: 30 January 2017 by Cbr600

Suggest you ensure the replacement drive is matched in model and spec, to ensure correct future system operation.

Buying "similar" drives might be troublesome.

Posted on: 30 January 2017 by Peter Dinh

A drive of the same size (not smaller) is fine; there is no need for the exact same model, as the mirroring mechanism does not care which hard drive it is.

Posted on: 30 January 2017 by Bart

I would strongly suggest that you rely on the QNAP documentation.  Have you read it?

Posted on: 30 January 2017 by trickydickie

I'd also ensure I had an up-to-date backup before doing anything.

Posted on: 31 January 2017 by PeterE

Bart - I spent a few hours on the QNAP website trying to find something relevant to my situation. It looks like replacing a failed drive is known in the trade as rebuilding the RAID, and my drive arrangement is RAID 1. According to QNAP, rebuilding the RAID is just a matter of taking one drive out, putting a new one in and then waiting for the system to do its thing.
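For the curious, QNAP boxes run Linux software RAID (md) under the hood, and while a rebuild is running the progress shows up in /proc/mdstat. A minimal sketch of reading that progress; the array name md0 and the sample text are illustrative, not captured from a real TS-412:

```python
import re

# Sample /proc/mdstat output during a RAID 1 rebuild (illustrative text,
# not taken from a real TS-412).
SAMPLE_MDSTAT = """\
Personalities : [raid1]
md0 : active raid1 sda3[2] sdb3[1]
      1951945600 blocks [2/1] [_U]
      [=====>...............]  recovery = 28.7% (560318848/1951945600) finish=112.3min speed=206512K/sec

unused devices: <none>
"""

def rebuild_progress(mdstat_text):
    """Return (percent_done, minutes_remaining) if a rebuild is running, else None."""
    m = re.search(r"recovery\s*=\s*([\d.]+)%.*?finish=([\d.]+)min", mdstat_text)
    if not m:
        return None
    return float(m.group(1)), float(m.group(2))

print(rebuild_progress(SAMPLE_MDSTAT))  # (28.7, 112.3)
```

On a live box you would read the real file with `open("/proc/mdstat").read()` over SSH; the QTS web interface shows the same information more comfortably.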

Trickydickie - I understand (from Wikipedia) that in my two-drive RAID 1 arrangement, each drive acts as a backup for the other.

Posted on: 31 January 2017 by trickydickie
PeterE posted:

Trickydickie - I understand (from Wikipedia) that in my two-drive RAID 1 arrangement, each drive acts as a backup for the other.

RAID allows the device to continue running once a disk has failed, but it is not a backup.

A change such as replacing a drive has the potential to go wrong, so best practice is to have a recovery plan in case of need; a full backup is that recovery plan.

It is more than worthwhile to back up as a matter of course: you never know how the NAS could become corrupted, and corruption can hit both disks.
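To make that concrete: the backup doesn't have to be elaborate. A minimal sketch of a one-way mirror that copies only new or changed files; the size/mtime heuristic is my own choice, and in practice rsync or QNAP's own backup apps do this job better:

```python
import os
import shutil

def mirror_tree(src, dst):
    """Copy files from src to dst, skipping files whose size and mtime are unchanged.
    Returns the number of files copied."""
    copied = 0
    for root, _dirs, files in os.walk(src):
        rel = os.path.relpath(root, src)
        target_dir = os.path.join(dst, rel)
        os.makedirs(target_dir, exist_ok=True)
        for name in files:
            s = os.path.join(root, name)
            d = os.path.join(target_dir, name)
            if os.path.exists(d):
                ss, ds = os.stat(s), os.stat(d)
                # Unchanged size and a destination at least as new: skip.
                if ss.st_size == ds.st_size and ss.st_mtime <= ds.st_mtime:
                    continue
            shutil.copy2(s, d)  # copy2 preserves timestamps, so reruns skip it
            copied += 1
    return copied
```

Note this never deletes anything on the destination, which is arguably what you want from a recovery copy.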

Posted on: 31 January 2017 by intothevoid
PeterE posted:

Bart - I spent a few hours on the QNAP website trying to find something relevant to my situation. It looks like replacing a failed drive is known in the trade as rebuilding the RAID, and my drive arrangement is RAID 1. According to QNAP, rebuilding the RAID is just a matter of taking one drive out, putting a new one in and then waiting for the system to do its thing.

Yes, that's exactly it. Don't worry about the make of the drive, but do ensure the new drive is at least the size of the faulty disk.

Rebuilding RAID takes a while, depending on the size of your disk, so be patient. And don't interrupt it.

I once replaced a 1.5TB disk with a 2TB disk in a RAID 5 array and it took nigh on 24 hours to complete.
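As a rough rule of thumb, a mirror rebuild has to copy the whole disk sequentially, so the best case is capacity divided by sustained write speed; a busy NAS, or a parity (RAID 5/6) rebuild that must read every other disk, will take longer. A quick back-of-envelope sketch:

```python
def rebuild_hours(capacity_tb, mb_per_sec):
    """Rough lower bound on rebuild time: a whole-disk sequential copy
    at a given sustained rate (MB/s)."""
    bytes_total = capacity_tb * 1e12
    seconds = bytes_total / (mb_per_sec * 1e6)
    return seconds / 3600

# A 2TB mirror at ~100 MB/s sustained: about 5.6 hours at best.
print(round(rebuild_hours(2, 100), 1))
```

The 100 MB/s figure is an assumption for a typical consumer SATA drive; real rebuilds that compete with normal NAS use can easily take several times the lower bound, which is consistent with the ~24 hours above.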


Posted on: 31 January 2017 by jon h

First question -- do you have a backup of your QNAP? It is not unknown for RAID controllers to "spit the dummy" when asked to do a rebuild and to trash the entire RAID. Dell controllers did this for a while.

So although your RAID is degraded, in that it is now running on a single disc, risk analysis says you should ensure you have a full backup of that QNAP *before* you attempt to rebuild the RAID.

Posted on: 31 January 2017 by Cbr600
Peter Dinh posted:

A drive of the same size (not smaller) is fine; there is no need for the exact same model, as the mirroring mechanism does not care which hard drive it is.

Surely you have to ensure the same spec, such as drives spinning at the same speed, etc.?

Posted on: 31 January 2017 by jon h

Nope. I have a two-disc Synology with 3TB drives. One failed. Replaced it with a 6TB drive. Rebuilt the mirror. Pulled the second 3TB and replaced it with a second 6TB. Rebuilt. Then grew the volume to 6TB.

Posted on: 31 January 2017 by banzai
Cbr600 posted:
Peter Dinh posted:

A drive of the same size (not smaller) is fine; there is no need for the exact same model, as the mirroring mechanism does not care which hard drive it is.

Surely you have to ensure the same spec, such as drives spinning at the same speed, etc.?

If you know how a Linux system writes to disk, you will understand what Peter means.

Replacing a failed hard drive in RAID 1 is really simple, and the procedure is well proven; it is designed for the mass consumer market. Otherwise nobody would buy a NAS.

Posted on: 31 January 2017 by Hmack
jon honeyball posted:

Nope. I have a two-disc Synology with 3TB drives. One failed. Replaced it with a 6TB drive. Rebuilt the mirror. Pulled the second 3TB and replaced it with a second 6TB. Rebuilt. Then grew the volume to 6TB.

Not directly relevant to this post, because I have never had a drive fail on me. However, I have had a NAS motherboard failure (Synology NAS), and I was able to simply slot my mirrored twin drives into a new (different model) Synology NAS; the Synology software got them up and running pretty much seamlessly.

I did of course approach the Synology support team for advice (the chaps I spoke to were excellent) before going down that route, and I would advise you to do the same with QNAP if anything like this ever happens to you.

Posted on: 31 January 2017 by jon h

I have had 6 drives fail in the last 6 months. But then I have got over 300TB of storage.

Posted on: 31 January 2017 by Cbr600
banzai posted:
Cbr600 posted:
Peter Dinh posted:

A drive of the same size (not smaller) is fine; there is no need for the exact same model, as the mirroring mechanism does not care which hard drive it is.

Surely you have to ensure the same spec, such as drives spinning at the same speed, etc.?

If you know how a Linux system writes to disk, you will understand what Peter means.

Replacing a failed hard drive in RAID 1 is really simple, and the procedure is well proven; it is designed for the mass consumer market. Otherwise nobody would buy a NAS.

Banzai, I have a few NAS units, including a 6-bay 12TB and a 6-bay 18TB, both running RAID 5.

I also have smaller 4-bay NAS units, one with a failed drive; that unit uses 7200rpm disks. My point is that if I bought a same-size drive at 5400rpm, surely that would cause problems with the NAS?

Posted on: 31 January 2017 by Cbr600
jon honeyball posted:

I have had 6 drives fail in the last 6 months. But then I have got over 300TB of storage.

Hello big boy!!

Posted on: 31 January 2017 by intothevoid
jon honeyball posted:

.... But then I have got over 300TB of storage.


Posted on: 31 January 2017 by David Hendon
jon honeyball posted:

I have had 6 drives fail in the last 6 months. But then I have got over 300TB of storage.

Well one does need to have a bit of choice when one wants to listen to some music doesn't one? And one loves each and every one of those half a million ripped CDs....

Posted on: 31 January 2017 by Ian_S

Personally I don't think RAID-5/6 is a great idea in consumer gear. It causes the data spread on the disks to look almost random, and random I/O is the worst I/O for SATA drives. They really don't do it very well. They get hot, overheat, and heat is the killer of all disk drives. In enterprise disk arrays, SATA drives are often throttled back when used heavily to reduce likelihood of failure. Performance at that point is ****.

What SATA drives are good for is streaming... sequential access to big files. This is because they are generally high density, which lends itself to these workloads. 

If you have a 2 bay NAS then there's not much choice, but in 4-bay plus then there is. Personally I would recommend sticking with RAID-1 (mirrored) drive sets and then split the load by media type. For example, keep video rips and music on different drive sets, especially if multiple people are going to stream stuff at the same time. 

Of course if you can afford to stuff your NAS full of SSDs then it's all different. At that point you want to level usage across drives as evenly as possible to balance write wear; random I/O matters much less. Then RAID 5/6 makes more sense and you get the maximum amount of space too.
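For anyone weighing up the space trade-off, the usable capacity per RAID level is easy to tabulate (this assumes identical drives; real arrays lose a little more to metadata):

```python
def usable_drives(n, level):
    """Usable capacity in units of one drive, for n identical drives."""
    if level == "raid1":   # everything mirrored: one drive's worth per pair
        return n // 2
    if level == "raid5":   # one drive's worth of capacity goes to parity
        return n - 1
    if level == "raid6":   # two drives' worth of capacity goes to parity
        return n - 2
    raise ValueError(level)

# Four 4TB drives: RAID-1 pairs give 8TB usable, RAID-5 12TB, RAID-6 8TB.
for level in ("raid1", "raid5", "raid6"):
    print(level, usable_drives(4, level) * 4, "TB")
```

So in a 4-bay box, RAID-6 costs you nothing in space versus two mirrored pairs; the choice is really about rebuild behaviour and I/O pattern, as above.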

As for keeping SATA drives matched in spec: in theory it shouldn't matter, but in practice it may. Regardless, all drives should be on the NAS vendor's approved list, else they may not behave well (they don't power down, or power down too often, which can shorten drive life). A slower drive in an array will slow the whole array. In a RAID-1 array it may also mean the quicker drive does more read work: if the NAS sends reads to the least busy drive, the slower one does less, so wear may be skewed towards the quicker drive. In RAID 5/6 a slow drive simply slows everything down, as the data is spread among the drives rather than each holding a full copy.
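The read-skew point can be illustrated with a toy simulation: if every read goes to whichever drive will be free sooner, the quicker drive ends up serving proportionally more of them. The service times here are made up, and real NAS schedulers are more sophisticated:

```python
def read_share(fast_ms, slow_ms, n_reads):
    """Deterministic 'send the read to the least busy drive' simulation.
    Each drive accumulates service time; every read goes to whichever
    drive is free sooner. Returns the fraction of reads the fast drive served."""
    busy = {"fast": 0.0, "slow": 0.0}
    cost = {"fast": fast_ms, "slow": slow_ms}
    served = {"fast": 0, "slow": 0}
    for _ in range(n_reads):
        target = min(busy, key=busy.get)  # least busy drive gets the read
        busy[target] += cost[target]
        served[target] += 1
    return served["fast"] / n_reads

# A drive that is twice as quick ends up serving about two thirds of the reads.
print(round(read_share(5.0, 10.0, 3000), 2))  # 0.67
```

With matched drives the split is exactly 50/50, which is the wear-levelling argument for keeping the pair similar even when the RAID layer doesn't insist on it.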

Does any of this make SQ differences? Well I guess that depends on whether you believe that increased use of drive actuators creates more EMF which in turn might leak through to critical analog components.

Posted on: 31 January 2017 by banzai
Cbr600 posted:
banzai posted:
Cbr600 posted:
Peter Dinh posted:

A drive of the same size (not smaller) is fine; there is no need for the exact same model, as the mirroring mechanism does not care which hard drive it is.

Surely you have to ensure the same spec, such as drives spinning at the same speed, etc.?

If you know how a Linux system writes to disk, you will understand what Peter means.

Replacing a failed hard drive in RAID 1 is really simple, and the procedure is well proven; it is designed for the mass consumer market. Otherwise nobody would buy a NAS.

Banzai, I have a few NAS units, including a 6-bay 12TB and a 6-bay 18TB, both running RAID 5.

I also have smaller 4-bay NAS units, one with a failed drive; that unit uses 7200rpm disks. My point is that if I bought a same-size drive at 5400rpm, surely that would cause problems with the NAS?

QNAP have told us not to worry about different rpm and cache sizes. If the drives are on the QNAP disk compatibility list, they should work together.

Posted on: 31 January 2017 by jon h

Adding another 120TB later this week. It is staggering how cheap this stuff is.

Posted on: 01 February 2017 by Ravenswood10
jon honeyball posted:

Adding another 120TB later this week. It is staggering how cheap this stuff is.

Blimey, that's enough for the British Library! I had one fail in a two-drive RAID configuration and just swapped in a new drive of the same make and size as the failed one. Just slotted the new one in and the NAS took over and did its thing. I also have another NAS drive hidden away elsewhere in the house, just in case some light-fingered person walks off with my QNAP.

Posted on: 01 February 2017 by Bart
Ravenswood10 posted:
I also have another NAS drive hidden away elsewhere in the house, just in case some light-fingered person walks off with my QNAP.
 

Of all the things I might worry about a visitor stealing, my NAS is not high on the list. Thieves are unlikely to target it anyway.

Posted on: 01 February 2017 by Guy007
jon honeyball posted:

I have had 6 drives fail in the last 6 months. But then I have got over 300Tb of storage.

Were they all the same disk make/size that failed? What % of your total drive count were they? How long had they been running before failure? What RAID setup do you use? Which NAS shell/s?

Thanks in advance :-)

Posted on: 01 February 2017 by Peter Dinh
Guy007 posted:
jon honeyball posted:

I have had 6 drives fail in the last 6 months. But then I have got over 300TB of storage.

Were they all the same disk make/size that failed? What % of your total drive count were they? How long had they been running before failure? What RAID setup do you use? Which NAS shell/s?

Thanks in advance :-)

Yes, that is really exceptional. I have never had a hard drive fail on me, and that goes back to the late '80s.