RAID 5, 6 & 10 Recovery

RAID 5 Data Recovery

No Fix - No Fee!

Our experts have extensive experience recovering data from RAID servers. With 25 years' experience in the data recovery industry, we can help you securely recover your data.

Software Fault From £495

2-4 Days

Mechanical Fault From £895

2-4 Days

Critical Service From £995

1-2 Days

Need help recovering your data?

Call us on 0114 3392028 or use the form below to make an enquiry.
Chat with us
Monday-Friday: 9am-6pm

RAID 5 Data Recovery Specialists (25+ years)

We provide engineering-grade recovery for hardware and software RAIDs across desktops, servers, DAS and NAS, from small offices to multi-bay racks of up to 32 disks. We work for home users, SMEs, large enterprises, and the public sector.

How to submit your array:
Label each disk with its slot/position (and enclosure/controller model), place each drive in an anti-static bag, cushion in a padded envelope or small box, include your contact details, and post or drop it in. We clone every member first behind hardware write-blockers and reconstruct the array virtually; originals are never written to.


UK NAS brands we frequently see (with representative popular models)

  1. Synology – DS224+, DS423+, DS723+, DS923+, DS1522+; RS1221+, RS4021xs+

  2. QNAP – TS-464, TS-473A, TS-873A, TVS-h674; rack TS-x53DU/x73AU

  3. Western Digital (WD) – My Cloud EX2 Ultra, EX4100, PR4100

  4. Seagate / LaCie – LaCie 2big/6big/12big; Seagate IronWolf NAS enclosures

  5. Buffalo – TeraStation 3420/5420/6420; LinkStation 220/520

  6. NETGEAR – ReadyNAS RN424, RN524X, RN628X

  7. TerraMaster – F2-423, F4-423, F5-422; U-series racks

  8. ASUSTOR – AS5304T, AS6604T, LOCKERSTOR 4/8

  9. Thecus – N5810PRO, N7710-G

  10. Drobo (legacy) – 5N2, B810n

  11. iXsystems (TrueNAS) – Mini X/XL, R-series

  12. LenovoEMC (legacy) – ix2/ix4, px4

  13. Zyxel – NAS326, NAS542

  14. D-Link – DNS-320L, DNS-340L

  15. Promise – VTrak/Vess NAS series

Rack servers commonly deployed with RAID-5 / RAID-10 (representative)

  1. Dell EMC PowerEdge – R540/R640/R650/R750; T440/T640

  2. HPE ProLiant – DL360/DL380 Gen10/Gen11; ML350

  3. Lenovo ThinkSystem – SR630/SR650; ST550

  4. Supermicro SuperServer – 1029/2029/AS-series

  5. Fujitsu PRIMERGY – RX2540; TX2550

  6. Cisco UCS C-Series – C220/C240

  7. NetApp / HPE StoreEasy (Windows NAS servers) – StoreEasy 1460/1660/1860

  8. Dell Precision – 5820/7820/7920 workstations (scratch RAID-10/5)

  9. HP Z-Series – Z4/Z6/Z8 workstations

  10. Areca – ARC-1883/1886 series controllers

  11. Adaptec by Microchip – ASR-71605/8405/8805

  12. Promise VTrak – E/J DAS sets

  13. Infortrend EonStor – GS/GX/Gene (DAS/SAN)

  14. HPE MSA / Dell ME4 – entry SANs with RAID-5/6 LUNs

  15. HighPoint NVMe RAID – workstation striping/mirroring


Why RAID-5 / RAID-6 / RAID-10 fails — and our approach

  • RAID-5: single parity (rotating P). Tolerates one member failure; unrecoverable read errors (UREs) during rebuild are common.

  • RAID-6: dual parity P+Q (Reed-Solomon in GF(2⁸)). Tolerates two simultaneous failures; rebuilds are heavier.

  • RAID-10: a stripe of mirror sets. Tolerates one failure per mirror set; two failures in the same mirror set cause data loss.

Recovery requires discovering exact geometry (member order, stripe unit size, parity rotation for 5/6, start offsets, sector size 512e/4Kn), then building a read-only virtual array over clones before any filesystem work.
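The first geometry check is a parity-consistency scan: on a true RAID-5 set, every offset XORs to zero across the members (one member at each offset holds the parity of the others), so non-zero rows flag stale members, wrong start offsets, or write-hole stripes. A minimal sketch over small in-memory images (illustrative only — real members are multi-terabyte clones read in streams, not byte strings):

```python
def parity_scan(members, block=4096):
    """XOR all RAID-5 member images block-by-block.

    For a consistent set, every offset XORs to zero, because one member
    at each offset holds the parity of the others. Returns a list of
    (offset, nonzero_byte_count) for rows that fail the check.
    """
    size = min(len(m) for m in members)  # trim to smallest common size
    bad = []
    for off in range(0, size - size % block, block):
        acc = bytearray(block)
        for m in members:
            for i, b in enumerate(m[off:off + block]):
                acc[i] ^= b
        nz = sum(1 for b in acc if b)
        if nz:
            bad.append((off, nz))
    return bad
```

A clean scan confirms membership and offsets; member order and stripe size still have to be inferred from content (headers, filesystem signatures), since XOR is order-independent.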


Professional RAID workflow (what we actually do)

  1. Forensic intake – serials/slots, controller metadata export (LSI/Adaptec/Areca), NAS mdadm/LVM info; SMART baselines.

  2. Clone all members – hardware imagers (PC-3000/Atola/DDI) with unstable-media profiles: head-select, zone priority, reverse passes, adaptive timeouts; error maps per member.

  3. Geometry & parity analysis – infer order/stripe/offsets; for RAID-6 compute P/Q candidates, validate with headers and parity equations.

  4. Virtual assembly – build a read-only array over the clones; normalise 512e/4Kn; trim to smallest common size; for RAID-10 identify mirror pairs and intra-pair ordering.

  5. Filesystem / LUN repair – NTFS/ReFS, APFS/HFS+, EXT/XFS/ZFS/Btrfs; journal/intent replay, B-tree & bitmap repair, orphan grafting.

  6. Encryption layers – BitLocker/FileVault/LUKS/SED decrypted on the image with client keys; otherwise recover plaintext remnants only.

  7. Verification & delivery – SHA-256 manifests, sample-open testing (DB/video/mail), secure handover.


Top 75 RAID-5/6/10 errors we recover — with the lab process

Geometry, parity & configuration (RAID-5/6/10)

  1. Unknown disk order → parity/consistency scans and header alignment; pick order with highest validity score.

  2. Unknown stripe unit → test 16–1024 KB; select size that maximises header alignment and parity correctness.

  3. Unknown start offsets → detect GPT/superblocks; adjust member offsets until superblocks align.

  4. Mixed 512e/4Kn → translate LBAs during build; re-align partitions.

  5. Foreign configs conflicting (LSI/Adaptec/Areca) → parse generations; choose last coherent epoch; ignore stale sets.

  6. Accidental re-initialisation → discard new headers; reconstruct old map from deep scans.

  7. Disk reorder after maintenance → correlation of stripe tails/heads; validate with filesystem signatures.

  8. Stripe size changed mid-life → segment array by epoch; export from coherent spans.

  9. Wrong RAID level detected by controller → manual virtualisation as true 5/6/10 using content analysis.

  10. RAID-10 mirror-pair map lost → detect mirrored pairs via block similarity; assign pair order.
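The mirror-pair detection in item 10 can be illustrated with a toy similarity match. This is a simplified sketch over in-memory images with a hypothetical helper name; real analysis samples many regions across each member and tolerates stale spans:

```python
import itertools

def find_mirror_pairs(members, sample=4096):
    """Group RAID-10 members into mirror pairs by raw block similarity.

    Mirrored members are (near-)identical outside stale regions, so the
    pair map falls out of pairwise byte-agreement scores; pairs are then
    matched greedily, highest score first.
    """
    scores = {}
    for a, b in itertools.combinations(range(len(members)), 2):
        m1, m2 = members[a][:sample], members[b][:sample]
        agree = sum(x == y for x, y in zip(m1, m2))
        scores[(a, b)] = agree / min(len(m1), len(m2))
    pairs, used = [], set()
    for (a, b), _ in sorted(scores.items(), key=lambda kv: -kv[1]):
        if a not in used and b not in used:
            pairs.append((a, b))
            used |= {a, b}
    return pairs
```

Once the pairs are known, intra-pair ordering (which copy is newer) is decided separately, using journals and timestamps.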

Member/media failures (HDD/SSD/NVMe)

  1. Head crash on a member → donor HSA; per-head imaging; fill holes with parity (5/6) or mirror (10).

  2. Two members failed (RAID-6) → clone both; solve P/Q to reconstruct missing stripes.

  3. Two members failed (RAID-5) → clone all; salvage partials via carving; parity cannot fully restore.

  4. Multiple weak members → targeted small-block cloning; build union of best sectors; parity/mirror completes.

  5. Translator corruption (HDD) → rebuild from P/G-lists; clone; repair array from there.

  6. Spindle seizure/stiction → mechanical intervention; image good zones; parity/mirror fills gaps.

  7. SSD controller lock → SAFE/ROM mode; extract FTL (L2P) map; clone namespace.

  8. NAND wear / ECC maxed → majority-vote reads; per-channel isolation; accept residual holes filled by parity/mirror.

  9. NVMe namespace hidden → enumerate via admin commands; clone each namespace.

  10. Intermittent timeouts → thermal control, reduced QD; stabilise for imaging.
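The "parity fills gaps" step used throughout this section reduces, for RAID-5, to XOR-ing the surviving members: since every offset XORs to zero across the full set, any single member equals the XOR of the rest. A minimal sketch, assuming members are in-memory byte strings:

```python
def rebuild_missing_member(members, missing_idx):
    """Reconstruct one lost RAID-5 member as the XOR of the survivors.

    `members` is the full list of member images with the lost one set
    to None; the result holds both its data blocks and parity blocks.
    """
    survivors = [m for i, m in enumerate(members) if i != missing_idx]
    size = min(len(m) for m in survivors)
    out = bytearray(size)
    for m in survivors:
        for i in range(size):
            out[i] ^= m[i]
    return bytes(out)
```

The same identity fills individual read-error holes: a bad sector on one clone is recomputed from the matching sectors of the others, which is why per-member error maps matter.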

Controller / cache / firmware

  1. BBU failure (write-back lost) → write-hole stripes; select stripe versions using FS logs/journals.

  2. Controller firmware bug → detect inconsistent stripes; roll back to pre-bug epoch.

  3. Foreign import overwrote headers → recover old map from member content; ignore new metadata.

  4. Intel RST fakeraid mis-assembly → rebuild manually from raw members.

  5. Areca/Promise metadata mismatch → parse binary headers; pick consistent generation.

Rebuild & expansion problems

  1. Rebuild aborted on URE (RAID-5) → image failing member using tiny blocks; parity completes remaining sectors.

  2. Rebuild from stale member → journals/USN/APFS snapshots to pick the newest consistent source; redo virtually.

  3. Expand/reshape interrupted → locate pre/post-expand maps; export from last consistent.

  4. Wrong-size replacement drive → virtual member trimmed to common size; export below min capacity.

  5. Hot-spare promoted at wrong time → detect epoch; exclude inconsistent writes; re-assemble.

Parity math (RAID-5/6)

  1. Unknown parity rotation → classify left-/right-, symmetric/asymmetric by testing stripe equations.

  2. Parity disk stale → recompute parity from data members; trust majority.

  3. Q parity corrupt (RAID-6) → recompute in GF(2⁸); select consistent P/Q pair per stripe.

  4. Write-hole after power loss → per-stripe decision using FS logs and majority content.

  5. Parity placed at wrong offset → detect repeating misalignment; re-map rotation.
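The P/Q arithmetic above can be made concrete with a small GF(2⁸) sketch, using generator g = 2 and reduction polynomial 0x11D as in the common RAID-6 layout (an illustration over toy blocks, not production code — real implementations use lookup tables):

```python
def gf_mul(a, b):
    """Multiply in GF(2^8) modulo x^8 + x^4 + x^3 + x^2 + 1 (0x11D)."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0x100:
            a ^= 0x11D
        b >>= 1
    return r

def gf_pow(a, e):
    r = 1
    for _ in range(e):
        r = gf_mul(r, a)
    return r

def gf_inv(a):
    return gf_pow(a, 254)  # a^(2^8 - 2) is the inverse in GF(2^8)

def pq_parity(data):
    """P = XOR of data blocks; Q = sum over i of g^i * D_i, g = 2."""
    n = len(data[0])
    p, q = bytearray(n), bytearray(n)
    for i, d in enumerate(data):
        coef = gf_pow(2, i)
        for j in range(n):
            p[j] ^= d[j]
            q[j] ^= gf_mul(coef, d[j])
    return bytes(p), bytes(q)

def recover_two(data, p, q, x, y):
    """Solve for two lost data members D_x, D_y from P and Q."""
    n = len(p)
    pp, qq = bytearray(p), bytearray(q)
    for i, d in enumerate(data):
        if i in (x, y) or d is None:
            continue
        c = gf_pow(2, i)
        for j in range(n):
            pp[j] ^= d[j]             # pp becomes D_x ^ D_y
            qq[j] ^= gf_mul(c, d[j])  # qq becomes g^x*D_x ^ g^y*D_y
    gx, gy = gf_pow(2, x), gf_pow(2, y)
    k = gf_inv(gx ^ gy)
    dy = bytes(gf_mul(k, gf_mul(gx, pp[j]) ^ qq[j]) for j in range(n))
    dx = bytes(pp[j] ^ dy[j] for j in range(n))
    return dx, dy
```

This is why RAID-6 survives two failed members where RAID-5 cannot: the pair of equations (one from P, one from Q) has a unique solution per stripe, and recomputing a corrupt Q is just re-running the sum against the data members.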

NAS stacks (Synology/QNAP/…)

  1. Synology mdadm + LVM layers damaged → assemble md on clones; reattach LVM; mount EXT4/Btrfs readonly.

  2. QNAP mdraid metadata conflict → choose coherent superblocks; reconstruct LVM2; export.

  3. Btrfs metadata duplication corrupt → use surviving metadata mirror to rebuild root; extract data.

  4. DSM/QTS update rewrote md headers → select pre-update generation; mount on image.

  5. iSCSI LUN on NAS broken → reconstruct target mapping; recover LUN; fix FS within.

Filesystems on top of RAID

  1. NTFS $MFT/$MFTMirr divergence → reconcile; replay $LogFile; repair $Bitmap.

  2. ReFS CoW inconsistency → roll back via shadow copies; export stable view.

  3. APFS object map/spacemap damage → rebuild OMAP/spacemap; enumerate snapshots; export.

  4. HFS+ catalog/extent B-tree corruption → rebuild trees; graft orphans.

  5. EXT superblock/journal damage → use backup superblocks; fsck with replay on the image.

  6. XFS log corruption → xfs_repair with log zeroing (-L) when indicated; salvage.

  7. ZFS pool with missing vdev → import readonly; zdb map blocks; copy files.

Partitioning & boot metadata

  1. GPT header lost → rebuild from secondary GPT & FS signatures.

  2. Hybrid MBR/GPT mismatch → normalise to GPT; re-align partitions.

  3. Protective MBR only → recover GPT from backup; validate boundaries.

Encryption layers

  1. BitLocker over RAID → use recovery key; decrypt LUN image; then repair FS.

  2. FileVault/APFS → decrypt clone with passphrase/recovery key; rebuild container.

  3. LUKS/dm-crypt → header/backup header; PBKDF; decrypt; mount readonly.

  4. SED (self-encrypting drives) → unlock with credentials; otherwise carve plaintext remnants only.

Administrative & human errors

  1. Deleted array / “factory reset” → deep scan members; restore prior metadata; assemble virtually.

  2. Disk pulled live then reinserted wrongly → fix order via parity/content; rebuild virtual set.

  3. Initialized a member by mistake → ignore new label; recover old partitions from deep scan.

  4. Converted level mid-fault → discard new meta; revert to last stable 5/6/10 layout.

  5. Cloned wrong way during replacement → determine source/target via timestamps; choose correct side.

I/O path & environment

  1. SATA/SAS CRC storms → stabilise cables/backplane; re-image; discard inconsistent writes.

  2. Expander link flaps → lock lanes; rehost if needed; clone.

  3. PSU brownouts → bench PSU; safe rails; image.

  4. Thermal throttling (NVMe arrays) → active cooling; reduced QD; staged imaging.

  5. USB/UASP DAS resets → fall back to BOT; powered hub; image.

Virtualisation & datastores on RAID

  1. VMFS header damage → rebuild VMFS structures; stitch VMDKs; mount guests.

  2. VMDK descriptor missing → regenerate from flat; reattach.

  3. Hyper-V AVHDX chain broken → fix parent IDs; merge diffs on image.

  4. Thin-provision confusion → honour allocation maps; ignore zero-fills.

  5. iSCSI multipath confusion → rebuild target map; mount LUN on image.

  6. CSV (Cluster Shared Volume) corruption → NTFS/ReFS repair on image; recover VMs.

Application/data layer

  1. Exchange/Outlook PST/OST corruption → low-level store repair; export to new PSTs.

  2. SQL/SQLite with good WAL → replay WAL; export consistent DB.

  3. Large MP4/MOV unplayable → rebuild moov/mdat; GOP re-index; re-mux.

  4. Photo libraries (Lightroom/Photos/FCPX) → repair catalogs; relink originals.

  5. Time Machine sparsebundle on RAID → reconstruct band map; mount DMG; restore.


Why choose Sheffield Data Recovery

  • 25+ years of RAID engineering across controllers (LSI/Avago, Adaptec, Areca, Promise, Intel RST) and NAS stacks (mdadm/LVM/Btrfs/ZFS).

  • Clone-first, read-only process; advanced imagers and donor inventory for HDD/SSD/NVMe.

  • Deep expertise from parity math (P/Q in GF(2⁸)) to filesystem internals and hypervisor datastores.

  • Forensic discipline: SHA-256 manifests, sample-open verification, documented results.

  • Free diagnostics with clear, evidence-based options before work begins.

Contact our Sheffield RAID engineers today for a free diagnostic. Package the disks safely (anti-static + padded envelope/small box, labelled by slot/order) and post or drop them in—we’ll take it from there.

Contact Us

Tell us about your issue and we'll get back to you.