
    RAID 5 disk became disconnected from array

    dunagh

      I created a RAID 5 with four disks and one of them has fallen off the array. All the HDs are new and the RAID is only two days old. I can no longer boot Windows 10 (so I can't use the Windows application) and I am trying to figure out how to rebuild my array through the EFI shell or the Legacy ROM. I don't see any option to select the drive that has fallen off and reattach it. Can someone point me in the right direction?

       

      The RAID utility in the EFI shell is AMD RAID utility Rev. 1.0.0.49

       

      Thanks!

        • Re: RAID 5 disk became disconnected from array
          yurtesen

          If one disk has dropped off, you should be able to boot Windows 10 if you disconnect that disk (make sure to disconnect the right one).

          I have a similar problem now, and that's why I am here. When I reconnect the disk, Windows 10 crashes at boot. If I reconnect the disk after Windows 10 boots and go to RAIDXpert, I have the option to rebuild the array. But if I try to rebuild it, the machine freezes and nothing happens. I tried a different SATA cable, a different port, etc., and nothing worked. Let me know if you manage to rebuild your array.

          • Re: RAID 5 disk became disconnected from array
            bgks

            dunagh:

             

            0.) RAID5 is not the best place for a Windows system ("Boot Partition", "Boot Volume").

             

            0.1) RAID5 does not perform well for random-access writes, and Windows does quite a lot of random-access writes. ReadyBoost might help against this. Though performance is not the main reason Windows refuses to be installed onto a Dynamic Disk software RAID5: installation onto a Dynamic Disk software RAID1, which does not suffer from this issue, is denied as well. Anyway, AFAIK Windows 10 does not provide Dynamic Disk RAID5 at all.

             

            0.2.1) With a certain probability, future Windows 10 upgrades will purge the required RAID drivers from the installation: you won't be able to boot the system, as the system is no longer capable of recognising the RAID as a RAID. IIRC the behavior is even more confusing: Windows boots once into some migration state and will continue to work fine as long as it is only resumed from some energy-saving state. The second full reboot fails because the drivers have been lost. Maybe a manual reinstallation of the drivers immediately after the first boot following the upgrade helps against this destructive behavior. If Windows is installed from an installation medium that is not up to date, the destructive upgrade might be run by the installation process itself, or almost immediately after the installation. Whether anything gets destroyed might depend on the exact version of the "floppy disk" drivers that were installed during the installation process. For a complete mess, boot Windows from a VHD (or VHDX) on the RAID: as of now (1703) the upgrade routine does not support VHD installations; for an upgrade you'd need to attach and boot the VHD in Hyper-V, but Hyper-V considers the RAID drivers junk ...
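
            If you want to try that manual driver reinstallation right after the first post-upgrade boot, a minimal sketch from an elevated command prompt would look like the following; the path C:\AMD\RAID\rcraid.inf is an assumption, point it at wherever your extracted AMD driver package actually lives:

                rem Re-register and install the AMD RAID driver package
                rem (INF path is an assumption -- adjust to your extracted package)
                pnputil /add-driver C:\AMD\RAID\rcraid.inf /install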

             

            0.2.2) As of now (1703) the repair system does not provide any means to load or reload the RAID drivers. However, none of the provided repair mechanisms did anything worse in my hands than fail to repair anything. You can detect missing RAID drivers as the cause of the boot failure by booting (from installation media) not into the repair branch of the setup procedure, but into the custom installation branch. Don't be put off by the label "Install Now", but take care to select the option "Custom". Quit the installation procedure after the disk configuration step, before the actual installation would start. The disk setup of the installation routine allows you to load the RAID drivers and will reassemble the RAID properly. Afterwards, a (secret) Shift+F10 will provide you with a command shell and read (as well as write) access to the RAID as a RAID. This helps for backup purposes. It might also help to patch the missing RAID drivers into the upgraded system, though I wasn't able to accomplish that.
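
            A minimal sketch of what that Shift+F10 shell allows, assuming the RAID driver package sits on a USB stick at D:\drivers and the installed Windows lives on C: (both drive letters are assumptions; check yours with diskpart):

                rem Make the RAID visible inside this WinPE session
                drvload D:\drivers\rcraid.inf

                rem Offline-inject the driver into the installed Windows on C:
                rem (standard DISM servicing; no guarantee the upgraded
                rem system will actually boot with it afterwards)
                dism /Image:C:\ /Add-Driver /Driver:D:\drivers /Recurse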

             

            1.) A RAID5 with one single physical disk lost should signal a "critical" status, but stay fully operational -- including bootable. If you cannot boot the system on your RAID5, I'd rather check for the drivers (0.2.2).
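
            You can check from that same installation-media shell whether the RAID driver is still present in the installed system; C: as the Windows volume is again an assumption:

                rem List third-party drivers in the offline Windows and filter for the RAID one
                dism /Image:C:\ /Get-Drivers | findstr /i "raid rcraid"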

             

            2.1) With access through RAIDXpert you'd see a tab "Rebuild" in the Logical Drive View. In the case of one lost drive it would provide an option to rebuild the RAID, but warn that the rebuild would destroy all data. This warning is disturbing, as it is impossible to (re)build a RAID5 from two drives, while no data is destroyed when a missing third drive is replaced for a rebuild. I never tried what really happens. Just push the help button ...

             

            2.2) The regular way to (safely) rebuild the RAID5 after one physical drive has been lost is:

             

            * Locate which physical drive has been lost (this is easy if you have no disks connected other than those the RAID was made of).

             

            * Make sure that the respective drive is (physically) available. You'll see the drive listed twice: once as a missing member of the RAID and once as an unassigned physical drive. Physical drives that are assigned to the RAID are not listed as physical drives by the RAID utility (this differs from the RAIDXpert view, which uses color codes in its "Graphic View").

             

            * Make the respective drive a "Spare Drive". Afterwards the rebuild (actually a resynchronisation) should start automatically. Note: the resync will collide with backups run in parallel. As the resync stresses the disks, the probability of a further disk failure -- now the fatal one -- increases. Run the backup first (see the sketch below), then trigger the resync.
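
            A minimal backup sketch for the Shift+F10 shell of 0.2.2, to run before triggering the resync; C: (the degraded RAID volume) and E: (an external backup disk) are assumed drive letters:

                rem Mirror the user data off the degraded array before stressing it with a resync
                rem /R:1 /W:1 keeps robocopy from stalling on unreadable files
                robocopy C:\Users E:\Backup\Users /MIR /R:1 /W:1 /LOG:E:\backup.log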