RAID 5 Data Recovery Specialists (25+ years)
We provide engineering-grade recovery for hardware and software RAIDs across desktops, servers, DAS and NAS, from small offices to multi-bay racks of up to 32 disks. We work for home users, SMEs, large enterprises, and the public sector.
How to submit your array:
Label each disk with its slot/position (and the enclosure/controller model), place each drive in an anti-static bag, cushion them in a padded envelope or small box, include your contact details, and post the package or drop it in. We clone every member first behind hardware write-blockers and reconstruct the array virtually; the originals are never written to.
UK NAS brands we frequently see (with representative popular models)
- Synology – DS224+, DS423+, DS723+, DS923+, DS1522+; RS1221+, RS4021xs+
- QNAP – TS-464, TS-473A, TS-873A, TVS-h674; rack TS-x53DU/x73AU
- Western Digital (WD) – My Cloud EX2 Ultra, EX4100, PR4100
- Seagate / LaCie – LaCie 2big/6big/12big; Seagate IronWolf NAS enclosures
- Buffalo – TeraStation 3420/5420/6420; LinkStation 220/520
- NETGEAR – ReadyNAS RN424, RN524X, RN628X
- TerraMaster – F2-423, F4-423, F5-422; U-series racks
- ASUSTOR – AS5304T, AS6604T, LOCKERSTOR 4/8
- Thecus – N5810PRO, N7710-G
- Drobo (legacy) – 5N2, B810n
- iXsystems (TrueNAS) – Mini X/XL, R-series
- LenovoEMC (legacy) – ix2/ix4, px4
- Zyxel – NAS326, NAS542
- D-Link – DNS-320L, DNS-340L
- Promise – VTrak/Vess NAS series
Rack servers, workstations and RAID hardware commonly deployed with RAID-5 / RAID-10 (representative)
- Dell EMC PowerEdge – R540/R640/R650/R750; T440/T640
- HPE ProLiant – DL360/DL380 Gen10/Gen11; ML350
- Lenovo ThinkSystem – SR630/SR650; ST550
- Supermicro SuperServer – 1029/2029/AS-series
- Fujitsu PRIMERGY – RX2540; TX2550
- Cisco UCS C-Series – C220/C240
- NetApp / HPE StoreEasy (Windows NAS servers) – StoreEasy 1460/1660/1860
- Dell Precision – 5820/7820/7920 workstations (scratch RAID-10/5)
- HP Z-Series – Z4/Z6/Z8 workstations
- Areca – ARC-1883/1886 series controllers
- Adaptec by Microchip – ASR-71605/8405/8805
- Promise VTrak – E/J DAS sets
- Infortrend EonStor – GS/GX/Gene (DAS/SAN)
- HPE MSA / Dell ME4 – entry SANs with RAID-5/6 LUNs
- HighPoint NVMe RAID – workstation striping/mirroring
Why RAID-5 / RAID-6 / RAID-10 fails — and our approach
- RAID-5: single parity (rotating P). Tolerates one member failure; unrecoverable read errors (UREs) during rebuild are common.
- RAID-6: dual parity P+Q (Reed-Solomon over GF(2⁸)). Tolerates two simultaneous failures; rebuilds are heavier.
- RAID-10: stripes of mirrors (mirrored pairs striped together). Tolerates one failure per mirror set; two failures in the same mirror set cause data loss.
Recovery requires discovering exact geometry (member order, stripe unit size, parity rotation for 5/6, start offsets, sector size 512e/4Kn), then building a read-only virtual array over clones before any filesystem work.
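To make the single-parity relationship concrete: in RAID-5, any one missing member equals the XOR of the surviving members at the same offsets. The sketch below is a toy illustration only, assuming hypothetical clone file names and a fixed 64 KiB stripe unit; real cases need the true geometry discovered first.

```python
# Minimal RAID-5 illustration: with single parity, a missing member equals
# the XOR of all surviving members at the same offset, stripe unit by
# stripe unit. Inputs are hypothetical clone images; the stripe unit is an
# assumption for the example.
from functools import reduce

STRIPE_UNIT = 64 * 1024  # assumed stripe unit size in bytes

def xor_blocks(blocks: list[bytes]) -> bytes:
    """XOR equal-length blocks together, byte by byte."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

def rebuild_missing_member(surviving_images: list[str], output_image: str) -> None:
    """Recreate the missing member of a RAID-5 set from the surviving clones."""
    sources = [open(path, "rb") for path in surviving_images]
    try:
        with open(output_image, "wb") as out:
            while True:
                chunks = [src.read(STRIPE_UNIT) for src in sources]
                shortest = min(len(c) for c in chunks)
                if shortest == 0:
                    break  # reached the end of the smallest clone
                out.write(xor_blocks([c[:shortest] for c in chunks]))
    finally:
        for src in sources:
            src.close()

# Example (hypothetical file names):
# rebuild_missing_member(["disk0.img", "disk2.img"], "disk1_rebuilt.img")
```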
Professional RAID workflow (what we actually do)
- Forensic intake – serials/slots, controller metadata export (LSI/Adaptec/Areca), NAS mdadm/LVM info; SMART baselines.
- Clone all members – hardware imagers (PC-3000/Atola/DDI) with unstable-media profiles: head-select, zone priority, reverse passes, adaptive timeouts; error maps per member.
- Geometry & parity analysis – infer order/stripe/offsets; for RAID-6, compute P/Q candidates and validate with headers and parity equations.
- Virtual assembly – build a read-only array over the clones; normalise 512e/4Kn; trim to the smallest common size; for RAID-10, identify mirror pairs and intra-pair ordering.
- Filesystem / LUN repair – NTFS/ReFS, APFS/HFS+, EXT/XFS/ZFS/Btrfs; journal/intent replay, B-tree & bitmap repair, orphan grafting.
- Encryption layers – BitLocker/FileVault/LUKS/SED decrypted on the image with client keys; otherwise recover plaintext remnants only.
- Verification & delivery – SHA-256 manifests, sample-open testing (DB/video/mail), secure handover (a manifest sketch follows this list).
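For the verification step, the manifest is nothing exotic: every recovered file is hashed with SHA-256 and listed so the client can re-verify the copy independently. A minimal sketch (standard library only, hypothetical paths):

```python
# Minimal sketch of a SHA-256 delivery manifest: hash every recovered file
# and write one "hash  relative/path" line per file.
import hashlib
from pathlib import Path

def sha256_file(path: Path, chunk_size: int = 1 << 20) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def write_manifest(recovered_root: str, manifest_path: str) -> None:
    root = Path(recovered_root)
    with open(manifest_path, "w", encoding="utf-8") as out:
        for file in sorted(p for p in root.rglob("*") if p.is_file()):
            out.write(f"{sha256_file(file)}  {file.relative_to(root)}\n")

# Example (hypothetical paths):
# write_manifest("/cases/12345/recovered", "/cases/12345/manifest.sha256")
```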
Top 75 RAID-5/6/10 errors we recover — with the lab process
Geometry, parity & configuration (RAID-5/6/10)
- Unknown disk order → parity/consistency scans and header alignment; pick the order with the highest validity score.
- Unknown stripe unit → test 16–1024 KB; select the size that maximises header alignment and parity correctness.
- Unknown start offsets → detect GPT/superblocks; adjust member offsets until superblocks align (a signature-scan sketch follows this list).
- Mixed 512e/4Kn → translate LBAs during the build; re-align partitions.
- Foreign configs conflicting (LSI/Adaptec/Areca) → parse generations; choose the last coherent epoch; ignore stale sets.
- Accidental re-initialisation → discard new headers; reconstruct the old map from deep scans.
- Disk reorder after maintenance → correlate stripe tails/heads; validate with filesystem signatures.
- Stripe size changed mid-life → segment the array by epoch; export from coherent spans.
- Wrong RAID level detected by controller → manual virtualisation as the true 5/6/10 layout using content analysis.
- RAID-10 mirror-pair map lost → detect mirrored pairs via block similarity; assign pair order.
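The "detect GPT/superblocks" step can be illustrated with a small signature scan over a member clone. The sketch below (hypothetical image path, assumed 512-byte sectors, a deliberately short signature list) simply reports where well-known on-disk magic values sit, which helps establish which member holds sector 0 and where the data area begins.

```python
# Minimal sketch: scan a member clone at sector granularity for well-known
# on-disk signatures. Image path, sector size and scan window are assumptions.

SECTOR = 512  # assume 512e members; use 4096 for 4Kn

SIGNATURES = {
    "GPT header":       lambda s: s.startswith(b"EFI PART"),
    "NTFS boot sector": lambda s: s[3:11] == b"NTFS    ",
    "MBR/VBR 55AA":     lambda s: s[510:512] == b"\x55\xaa",
}

def scan_member(image_path: str, max_bytes: int = 64 * 1024 * 1024):
    """Yield (byte_offset, signature_name) hits in the first max_bytes of a clone."""
    with open(image_path, "rb") as img:
        offset = 0
        while offset < max_bytes:
            sector = img.read(SECTOR)
            if len(sector) < SECTOR:
                break
            for name, matches in SIGNATURES.items():
                if matches(sector):
                    yield offset, name
            offset += SECTOR

# Example (hypothetical clone):
# for off, name in scan_member("member0.img"):
#     print(f"{name} at byte offset {off}")
```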
Member/media failures (HDD/SSD/NVMe)
- Head crash on a member → donor HSA; per-head imaging; fill holes with parity (5/6) or mirror (10).
- Two members failed (RAID-6) → clone both; solve P/Q to reconstruct missing stripes.
- Two members failed (RAID-5) → clone all; salvage partials via carving; parity cannot fully restore.
- Multiple weak members → targeted small-block cloning; build union of best sectors; parity/mirror completes.
- Translator corruption (HDD) → rebuild from P/G-lists; clone; repair array from there.
- Spindle seizure/stiction → mechanical intervention; image good zones; parity/mirror fills gaps.
- SSD controller lock → SAFE/ROM mode; extract FTL (L2P) map; clone namespace.
- NAND wear / ECC maxed → majority-vote reads; per-channel isolation; accept residual holes filled by parity/mirror.
- NVMe namespace hidden → enumerate via admin commands; clone each namespace.
- Intermittent timeouts → thermal control, reduced QD; stabilise for imaging.
Controller / cache / firmware
- BBU failure (write-back lost) → write-hole stripes; select stripe versions using FS logs/journals.
- Controller firmware bug → detect inconsistent stripes; roll back to pre-bug epoch.
- Foreign import overwrote headers → recover old map from member content; ignore new metadata.
- Intel RST fakeraid mis-assembly → rebuild manually from raw members.
- Areca/Promise metadata mismatch → parse binary headers; pick consistent generation.
Rebuild & expansion problems
- Rebuild aborted on URE (RAID-5) → image failing member using tiny blocks; parity completes remaining sectors.
- Rebuild from stale member → journals/USN/APFS snapshots to pick the newest consistent source; redo virtually.
- Expand/reshape interrupted → locate pre/post-expand maps; export from last consistent layout.
- Wrong-size replacement drive → virtual member trimmed to common size; export below min capacity.
- Hot-spare promoted at wrong time → detect epoch; exclude inconsistent writes; re-assemble.
Parity math (RAID-5/6)
- Unknown parity rotation → classify left-/right-, symmetric/asymmetric by testing stripe equations.
- Parity disk stale → recompute parity from data members; trust majority.
- Q parity corrupt (RAID-6) → recompute in GF(2⁸); select consistent P/Q pair per stripe (a GF(2⁸) sketch follows this list).
- Write-hole after power loss → per-stripe decision using FS logs and majority content.
- Parity placed at wrong offset → detect repeating misalignment; re-map rotation.
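For readers who want the P/Q maths made explicit, here is a minimal sketch of the RAID-6 syndromes over GF(2⁸) as used by Linux md (field polynomial 0x11D, generator 2): P is the plain XOR of the data blocks and Q = Σ gⁱ·Dᵢ. The byte strings in the example are toy stand-ins for real stripe units.

```python
# Minimal sketch of RAID-6 P/Q syndromes over GF(2^8) with polynomial 0x11D:
# P is the XOR of the data blocks, Q = sum_i g^i * D_i with generator g = 2.

def gf_mul2(x: int) -> int:
    """Multiply a GF(2^8) element by the generator 2 (polynomial 0x11D)."""
    x <<= 1
    if x & 0x100:
        x ^= 0x11D
    return x & 0xFF

def pq_syndromes(data_blocks: list[bytes]) -> tuple[bytes, bytes]:
    """Compute the P and Q parity blocks for one stripe (equal-length data blocks)."""
    length = len(data_blocks[0])
    p = bytearray(length)
    q = bytearray(length)
    # Horner's scheme: process members from the highest index down, so each
    # accumulated Q term is multiplied by g once per remaining member.
    for block in reversed(data_blocks):
        for i in range(length):
            p[i] ^= block[i]
            q[i] = gf_mul2(q[i]) ^ block[i]
    return bytes(p), bytes(q)

# Toy example with three 4-byte "stripe units":
# p, q = pq_syndromes([b"\x01\x02\x03\x04", b"\x10\x20\x30\x40", b"\xaa\xbb\xcc\xdd"])
```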
NAS stacks (Synology/QNAP/…)
- Synology mdadm + LVM layers damaged → assemble md on clones; reattach LVM; mount EXT4/Btrfs read-only.
- QNAP mdraid metadata conflict → choose coherent superblocks; reconstruct LVM2; export.
- Btrfs metadata duplication corrupt → use surviving metadata mirror to rebuild root; extract data.
- DSM/QTS update rewrote md headers → select pre-update generation; mount on the image (a superblock-scan sketch follows this list).
- iSCSI LUN on NAS broken → reconstruct target mapping; recover LUN; fix FS within.
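Selecting a coherent md generation starts with finding the superblocks on each member clone. A hedged sketch (hypothetical image path and scan window; field offsets follow Linux's struct mdp_superblock_1 for v1.x metadata, which NAS vendors typically use as version 1.2, 4 KiB into the member partition):

```python
# Minimal sketch: look for Linux md v1.x superblocks on a member clone by
# scanning at 4 KiB granularity for the magic 0xA92B4EFC and reading a few
# geometry fields. Image path and scan window are assumptions.
import struct

MD_MAGIC = 0xA92B4EFC

def find_md_superblocks(image_path: str, scan_bytes: int = 16 * 1024 * 1024):
    """Yield (offset, level, chunk_sectors, raid_disks) for each md superblock found."""
    with open(image_path, "rb") as img:
        data = img.read(scan_bytes)
    for offset in range(0, len(data), 4096):      # 4 KiB steps suit typical partition alignment
        block = data[offset:offset + 96]          # only the first fields are needed here
        if len(block) < 96:
            break
        if struct.unpack_from("<I", block, 0)[0] != MD_MAGIC:
            continue
        level = struct.unpack_from("<i", block, 72)[0]   # RAID level (e.g. 5, 6, 10)
        chunk = struct.unpack_from("<I", block, 88)[0]   # chunk size in 512-byte sectors
        disks = struct.unpack_from("<I", block, 92)[0]   # member count
        yield offset, level, chunk, disks

# Example (hypothetical clone of one NAS member):
# for off, lvl, chunk, n in find_md_superblocks("bay1.img"):
#     print(f"md superblock at {off}: RAID-{lvl}, chunk {chunk * 512 // 1024} KiB, {n} disks")
```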
Filesystems on top of RAID
- NTFS $MFT/$MFTMirr divergence → reconcile; replay $LogFile; repair $Bitmap.
- ReFS CoW inconsistency → roll back via shadow copies; export stable view.
- APFS object map/spacemap damage → rebuild OMAP/spacemap; enumerate snapshots; export.
- HFS+ catalog/extent B-tree corruption → rebuild trees; graft orphans.
- EXT superblock/journal damage → use backup superblocks (candidate offsets sketched after this list); fsck with replay on the image.
- XFS log corruption → xfs_repair with log zeroing when indicated; salvage.
- ZFS pool with missing vdev → import read-only; map blocks with zdb; copy files.
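For the EXT backup-superblock item, the usual candidate locations are easy to probe. The sketch below assumes a 4 KiB filesystem block size and the default sparse_super layout; the image path is hypothetical, and real work parses the superblock rather than trusting default offsets.

```python
# Minimal sketch: check an EXT2/3/4 volume image for the primary superblock
# and the common backup copies. Assumes 4 KiB blocks and the default
# sparse_super groups (1, 3, 5, 7, 9, 25, ...); offsets are the usual
# mke2fs defaults, not a substitute for parsing the superblock.
import struct

EXT_MAGIC = 0xEF53
BLOCK_SIZE = 4096                              # assumed filesystem block size
BLOCKS_PER_GROUP = 8 * BLOCK_SIZE              # default: one block bitmap = 32768 blocks
BACKUP_GROUPS = [1, 3, 5, 7, 9, 25, 27, 49]    # first few sparse_super backup groups

def superblock_candidates():
    """Yield (label, byte_offset) for the primary superblock and common backups."""
    yield "primary", 1024
    for group in BACKUP_GROUPS:
        yield f"backup (group {group})", group * BLOCKS_PER_GROUP * BLOCK_SIZE

def check_ext_superblocks(image_path: str):
    with open(image_path, "rb") as img:
        for label, offset in superblock_candidates():
            img.seek(offset + 56)              # s_magic lives 56 bytes into the superblock
            raw = img.read(2)
            if len(raw) == 2 and struct.unpack("<H", raw)[0] == EXT_MAGIC:
                print(f"{label}: valid magic at byte {offset}")

# Example (hypothetical virtually assembled volume image):
# check_ext_superblocks("array_volume.img")
```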
Partitioning & boot metadata
- GPT header lost → rebuild from secondary GPT & FS signatures (a backup-header sketch follows this list).
- Hybrid MBR/GPT mismatch → normalise to GPT; re-align partitions.
- Protective MBR only → recover GPT from backup; validate boundaries.
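Rebuilding from the secondary GPT relies on the backup header stored in the last sector. A minimal sketch (assumed 512-byte logical sectors, hypothetical image path) that locates it and reports where the backup partition entry array lives:

```python
# Minimal sketch: read the backup GPT header from the last logical sector,
# check the "EFI PART" signature, and report the backup entry array location.
import struct

SECTOR = 512  # assume 512e; 4Kn disks keep the header in a 4096-byte sector

def read_backup_gpt(image_path: str):
    with open(image_path, "rb") as img:
        img.seek(-SECTOR, 2)                  # last logical sector of the image
        hdr = img.read(SECTOR)
    if hdr[0:8] != b"EFI PART":
        return None
    entries_lba, = struct.unpack_from("<Q", hdr, 72)   # start of partition entry array
    num_entries, = struct.unpack_from("<I", hdr, 80)
    entry_size,  = struct.unpack_from("<I", hdr, 84)
    return {"entries_lba": entries_lba,
            "num_entries": num_entries,
            "entry_size": entry_size}

# Example (hypothetical image of the virtually assembled array):
# print(read_backup_gpt("array.img"))
```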
Encryption layers
- BitLocker over RAID → use recovery key; decrypt LUN image; then repair FS.
- FileVault/APFS → decrypt clone with passphrase/recovery key; rebuild container.
- LUKS/dm-crypt → header/backup header; PBKDF; decrypt; mount read-only (a header-probe sketch follows this list).
- SED (self-encrypting drives) → unlock with credentials; otherwise carve plaintext remnants only.
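Before any decryption attempt it helps to confirm what the container actually is. A minimal LUKS probe (hypothetical image path) that checks the on-disk magic and version:

```python
# Minimal sketch: confirm whether a volume (or virtually assembled LUN image)
# is LUKS-encrypted and which on-disk format version it uses.
import struct

LUKS_MAGIC = b"LUKS\xba\xbe"

def probe_luks(image_path: str):
    with open(image_path, "rb") as img:
        hdr = img.read(8)
    if hdr[:6] != LUKS_MAGIC:
        return None
    version, = struct.unpack(">H", hdr[6:8])   # big-endian version field
    return version   # 1 = LUKS1, 2 = LUKS2 (LUKS2 also keeps a secondary header copy)

# Example (hypothetical clone of the encrypted LUN):
# print(probe_luks("lun0.img"))   # -> 1, 2, or None if not LUKS
```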
Administrative & human errors
- Deleted array / “factory reset” → deep scan members; restore prior metadata; assemble virtually.
- Disk pulled live then reinserted wrongly → fix order via parity/content; rebuild virtual set.
- Initialised a member by mistake → ignore new label; recover old partitions from deep scan.
- Converted level mid-fault → discard new meta; revert to last stable 5/6/10 layout.
- Cloned wrong way during replacement → determine source/target via timestamps; choose correct side.
I/O path & environment
- SATA/SAS CRC storms → stabilise cables/backplane; re-image; discard inconsistent writes.
- Expander link flaps → lock lanes; rehost if needed; clone.
- PSU brownouts → bench PSU; safe rails; image.
- Thermal throttling (NVMe arrays) → active cooling; reduced QD; staged imaging.
- USB/UASP DAS resets → fall back to BOT; powered hub; image.
Virtualisation & datastores on RAID
- VMFS header damage → rebuild VMFS structures; stitch VMDKs; mount guests.
- VMDK descriptor missing → regenerate from the flat extent; reattach (a descriptor sketch follows this list).
- Hyper-V AVHDX chain broken → fix parent IDs; merge diffs on image.
- Thin-provision confusion → honour allocation maps; ignore zero-fills.
- iSCSI multipath confusion → rebuild target map; mount LUN on image.
- CSV (Cluster Shared Volume) corruption → NTFS/ReFS repair on image; recover VMs.
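For the missing-descriptor case, a plain-text VMDK descriptor can be regenerated for a surviving "-flat.vmdk" extent. The sketch below is a hedged template rather than VMware tooling: the createType, adapter type, hardware version and geometry values are assumptions that should be cross-checked against a descriptor from a healthy VM on the same host before use.

```python
# Hedged sketch: rebuild a minimal plain-text VMDK descriptor for a surviving
# "-flat.vmdk" extent on a VMFS datastore. Values below are assumptions to be
# matched against a known-good descriptor from the same environment.
import os

def make_descriptor(flat_path: str) -> str:
    sectors = os.path.getsize(flat_path) // 512     # extent size in 512-byte sectors
    cylinders = sectors // (255 * 63)               # conventional CHS fiction
    flat_name = os.path.basename(flat_path)
    return f"""# Disk DescriptorFile
version=1
encoding="UTF-8"
CID=fffffffe
parentCID=ffffffff
createType="vmfs"

# Extent description
RW {sectors} VMFS "{flat_name}"

# The Disk Data Base
#DDB

ddb.adapterType = "lsilogic"
ddb.geometry.cylinders = "{cylinders}"
ddb.geometry.heads = "255"
ddb.geometry.sectors = "63"
ddb.virtualHWVersion = "14"
"""

# Example (hypothetical paths):
# with open("recovered.vmdk", "w") as f:
#     f.write(make_descriptor("recovered-flat.vmdk"))
```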
Application/data layer
- Exchange/Outlook PST/OST corruption → low-level store repair; export to new PSTs.
- SQL/SQLite with good WAL → replay WAL; export consistent DB.
- Large MP4/MOV unplayable → rebuild moov/mdat; GOP re-index; re-mux (a box-walk sketch follows this list).
- Photo libraries (Lightroom/Photos/FCPX) → repair catalogs; relink originals.
- Time Machine sparsebundle on RAID → reconstruct band map; mount DMG; restore.
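A quick way to triage an unplayable MP4/MOV is to walk its top-level boxes: a file with 'mdat' but no 'moov' is the classic repairable case. A minimal sketch (hypothetical file path):

```python
# Minimal sketch: walk the top-level boxes of an MP4/MOV file. The container
# is a sequence of [32-bit size][4-char type] boxes; size 1 means a 64-bit
# size follows, size 0 means "runs to end of file".
import struct

def list_top_level_boxes(path: str):
    boxes = []
    with open(path, "rb") as f:
        f.seek(0, 2)
        end = f.tell()
        offset = 0
        while offset + 8 <= end:
            f.seek(offset)
            size, box_type = struct.unpack(">I4s", f.read(8))
            if size == 1:                   # 64-bit "largesize" follows the type
                size, = struct.unpack(">Q", f.read(8))
            elif size == 0:                 # box extends to the end of the file
                size = end - offset
            if size < 8:
                break                       # implausible size: header is corrupt
            boxes.append((box_type.decode("latin-1"), offset, size))
            offset += size
    return boxes

# Example (hypothetical recovered clip):
# for name, off, size in list_top_level_boxes("clip0001.mp4"):
#     print(name, off, size)
```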
Why choose Sheffield Data Recovery
- 25+ years of RAID engineering across controllers (LSI/Avago, Adaptec, Areca, Promise, Intel RST) and NAS stacks (mdadm/LVM/Btrfs/ZFS).
- Clone-first, read-only process; advanced imagers and donor inventory for HDD/SSD/NVMe.
- Deep expertise from parity math (P/Q in GF(2⁸)) to filesystem internals and hypervisor datastores.
- Forensic discipline: SHA-256 manifests, sample-open verification, documented results.
- Free diagnostics with clear, evidence-based options before work begins.
Contact our Sheffield RAID engineers today for a free diagnostic. Package the disks safely (anti-static + padded envelope/small box, labelled by slot/order) and post or drop them in—we’ll take it from there.




