Fixed the ‘Crashed’ Basic volume of Synology NAS


I did not know why my NAS reported one of its Basic volumes as 'Crashed', and I could not write files to it. At first I thought it would be an easy task, so I logged into the system directly through SSH, unmounted the filesystem, and ran 'e2fsck' on it. It found some errors and fixed them, and after that I could mount the volume again and write to it without any problem.
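
Concretely, that first quick fix looked roughly like this (a sketch; the crashed volume is /dev/md3, mounted at /volume2, as the output further below shows):

  umount /volume2            # take the crashed volume offline
  e2fsck -f /dev/md3         # check and repair the ext4 filesystem
  mount /dev/md3 /volume2    # remount; writes work again, at least until the next reboot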

It was fixed, right? Yes, until I rebooted the NAS.

After the reboot, the volume was still marked as 'Crashed' and the filesystem had become read-only again. This time I wanted to fix it permanently.

When I tried to assemble the RAID array, I got the error message below:

  dsm> mdadm -Av /dev/md3 /dev/sdd3
  mdadm: looking for devices for /dev/md3
  mdadm: /dev/sdd3 is identified as a member of /dev/md3, slot 0.
  mdadm: device 0 in /dev/md3 has wrong state in superblock, but /dev/sdd3 seems ok
  mdadm: added /dev/sdd3 to /dev/md3 as 0
  mdadm: /dev/md3 has been started with 1 drive.
  dsm> e2fsck /dev/md3
  e2fsck 1.42.6 (21-Sep-2012)
  1.42.6-5644: is cleanly umounted, 809/91193344 files, 359249805/364756736 blocks
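
Since DSM had the array assembled (read-only) when the volume was marked as crashed, it has to be unmounted and stopped before it can be reassembled like this; the steps before the output above would look something like (a sketch):

  umount /volume2        # the filesystem must not be mounted
  mdadm -S /dev/md3      # stop the array so it can be reassembled from /dev/sdd3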

I checked the RAID array and the disk partition, but could not find any issue:

  dsm> mdadm -D /dev/md3
  /dev/md3:
  Version : 1.2
  Creation Time : Thu Feb 4 15:03:34 2016
  Raid Level : raid1
  Array Size : 1459026944 (1391.44 GiB 1494.04 GB)
  Used Dev Size : 1459026944 (1391.44 GiB 1494.04 GB)
  Raid Devices : 1
  Total Devices : 1
  Persistence : Superblock is persistent

  Update Time : Sat Dec 30 16:13:47 2017
  State : clean
  Active Devices : 1
  Working Devices : 1
  Failed Devices : 0
  Spare Devices : 0

  Name : Gen8:3
  UUID : d0796c7b:c9b0ab70:211e65c0:843891e2
  Events : 40

  Number Major Minor RaidDevice State
  0 8 51 0 active sync /dev/sdd3
  dsm> mdadm -E /dev/sdd3
  /dev/sdd3:
  Magic : a92b4efc
  Version : 1.2
  Feature Map : 0x0
  Array UUID : d0796c7b:c9b0ab70:211e65c0:843891e2
  Name : Gen8:3
  Creation Time : Thu Feb 4 15:03:34 2016
  Raid Level : raid1
  Raid Devices : 1

  Avail Dev Size : 2918053888 (1391.44 GiB 1494.04 GB)
  Array Size : 2918053888 (1391.44 GiB 1494.04 GB)
  Data Offset : 2048 sectors
  Super Offset : 8 sectors
  State : clean
  Device UUID : a9eafb18:84f3974f:42751688:7136418d

  Update Time : Sat Dec 30 16:59:19 2017
  Checksum : faa9b8c3 - correct
  Events : 40

  Device Role : Active device 0
  Array State : A ('A' == active, '.' == missing)
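
Both the assembled array and the member superblock reported a clean state. Another place worth checking in this situation is the kernel's runtime view of the array; assuming DSM's kernel exposes the standard md sysfs attributes (these paths come from the mainline md driver, not from Synology documentation), that check would look like:

  cat /sys/block/md3/md/array_state        # overall array state, e.g. 'clean' or 'active'
  cat /sys/block/md3/md/dev-sdd3/state     # per-member state, e.g. 'in_sync' or 'faulty'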

Yet for some unknown reason, /proc/mdstat always showed the wrong state:

  dsm> cat /proc/mdstat
  Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
  md3 : active raid1 sdd3[0](E)
  1459026944 blocks super 1.2 [1/1] [E]

  md2 : active raid1 sdc3[0]
  17175149551 blocks super 1.2 [1/1] [U]

  md1 : active raid1 sdc2[0] sdd2[1]
  2097088 blocks [12/2] [UU__________]

  md0 : active raid1 sdc1[0] sdd1[1]
  2490176 blocks [12/2] [UU__________]

  unused devices: <none>

There was an '[E]' flag, and I thought it might mean 'Error'. So how could I clear it?
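
A quick way to spot which arrays carry this flag is a simple grep (sketch):

  grep '(E)' /proc/mdstat      # lists members flagged with (E), e.g. 'sdd3[0](E)'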

I searched and read lots of posts, and the one below saved me:

recovering-a-raid-array-in-e-state-on-a-synology-nas

I had found similar solutions before reaching this post, but I did not dare to try them because I had not copied all the data off the volume yet. The information above gave me more confidence, so I ran the commands below:

  dsm> mdadm -Cf /dev/md3 -e1.2 -n1 -l1 /dev/sdd3 -ud0796c7b:c9b0ab70:211e65c0:843891e2
  mdadm: /dev/sdd3 appears to be part of a raid array:
  level=raid1 devices=1 ctime=Thu Feb 4 15:03:34 2016
  Continue creating array? y
  mdadm: array /dev/md3 started.
  dsm> cat /proc/mdstat
  Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
  md3 : active raid1 sdd3[0]
  1459026944 blocks super 1.2 [1/1] [U]

  md2 : active raid1 sdc3[0]
  17175149551 blocks super 1.2 [1/1] [U]

  md1 : active raid1 sdc2[0] sdd2[1]
  2097088 blocks [12/2] [UU__________]

  md0 : active raid1 sdc1[0] sdd1[1]
  2490176 blocks [12/2] [UU__________]

  unused devices: <none>
  dsm> e2fsck -pvf -C0 /dev/md3

  809 inodes used (0.00%, out of 91193344)
  5 non-contiguous files (0.6%)
  1 non-contiguous directory (0.1%)
  # of inodes with ind/dind/tind blocks: 0/0/0
  Extent depth histogram: 613/84/104
  359249805 blocks used (98.49%, out of 364756736)
  0 bad blocks
  180 large files

  673 regular files
  127 directories
  0 character device files
  0 block device files
  0 fifos
  0 links
  0 symbolic links (0 fast symbolic links)
  0 sockets
  ------------
  800 files
  dsm> cat /etc/fstab
  none /proc proc defaults 0 0
  /dev/root / ext4 defaults 1 1
  /dev/md3 /volume2 ext4 usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,synoacl 0 0
  /dev/md2 /volume1 ext4 usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,synoacl 0 0
  dsm> mount /dev/md3
  dsm> df -h
  Filesystem Size Used Avail Use% Mounted on
  /dev/root 2.3G 676M 1.6G 31% /
  /tmp 2.0G 120K 2.0G 1% /tmp
  /run 2.0G 2.5M 2.0G 1% /run
  /dev/shm 2.0G 0 2.0G 0% /dev/shm
  none 4.0K 0 4.0K 0% /sys/fs/cgroup
  /dev/bus/usb 2.0G 4.0K 2.0G 1% /proc/bus/usb
  /dev/md2 16T 14T 2.5T 85% /volume1
  /dev/md3 1.4T 1.4T 21G 99% /volume2
  dsm> ls /volume2
  @eaDir @tmp aquota.group aquota.user downloads lost+found synoquota.db
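
To recap the key step: the array is re-created in place with exactly the same metadata version, RAID level, member count and array UUID as before, so the data area on /dev/sdd3 is left untouched and only the superblock state is rewritten. A commented sketch of the same command, with the values taken from the mdadm -D/-E output above:

  # Dangerous if any of these values differ from the original array; back up first if you can.
  #   -C     create a new array          -f   force (a superblock already exists)
  #   -e1.2  same metadata version       -l1  RAID1
  #   -n1    a single member (Basic)     -u   reuse the original array UUID
  mdadm -Cf /dev/md3 -e1.2 -n1 -l1 -u d0796c7b:c9b0ab70:211e65c0:843891e2 /dev/sdd3

  # Then verify the filesystem and remount it via the existing /etc/fstab entry.
  e2fsck -pvf -C0 /dev/md3
  mount /dev/md3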

After that I checked the volume state again, and the annoying 'Crashed' had become 'Normal'.

 
  • Posted by NeilZhang on 30/12/2017 23:13:20
  • Repost please keep this link: https://www.dbcloudsvc.com/blogs/life/fixed-the-crashed-basic-volume-of-synology-nas/