Add space to the Synology NAS in the VM


I have done similar tasks more than three times, so I thought I had better write the procedure down; in the future I can use it as a reference and just follow it step by step.

The steps below were only tested on DSM 5.x (5.2 in my case). I will not upgrade it to DSM 6.x, so a 6.x system may get a blog post of its own later.

My NAS environment:

HP Microserver Gen8 + P212 Smart Array + Highpoint X4 external enclosure, so I can add at most 8 disks to the RAID array; this time I added the seventh one.

ESXi 6.0 runs on the Gen8, and the Synology NAS is one of the VMs.

A new hard disk will arrive next month, and then I will convert the array to RAID 6.

Before starting the task, we should know that it consists of the following steps:

  1. Add the disk to the raid array
  2. Expand/extend the logical drive in the array
  3. Extend the datastore and the VMDK file in the ESXi
  4. Extend the volume of the Synology NAS

Another way would be to add another basic volume to the Synology NAS, but I prefer to keep all my files on one file system, as I do not want to copy or move files around in the future.

The first step is really time-consuming -- in fact, this time it lasted about 4-5 days!

Below are the detailed commands for every step:

  1. Add the disk to the raid array
    [root@esxi:/opt/hp/hpssacli/bin] ./hpssacli controller all show config

    Smart Array P212 in Slot 1 (sn: PACCP9SZ2DQB )

       Port Name: 1I

       Port Name: 2E

       array A (SATA, Unused Space: 0 MB)

          logicaldrive 1 (13.6 TB, RAID 5, OK)

          physicaldrive 1I:0:1 (port 1I:box 0:bay 1, SATA, 3 TB, OK)
          physicaldrive 1I:0:2 (port 1I:box 0:bay 2, SATA, 3 TB, OK)
          physicaldrive 1I:0:3 (port 1I:box 0:bay 3, SATA, 3 TB, OK)
          physicaldrive 1I:0:4 (port 1I:box 0:bay 4, SATA, 3 TB, OK)
          physicaldrive 2E:0:5 (port 2E:box 0:bay 5, SATA, 3 TB, OK)
          physicaldrive 2E:0:6 (port 2E:box 0:bay 6, SATA, 3 TB, OK)

       unassigned

          physicaldrive 2E:0:7 (port 2E:box 0:bay 7, SATA, 3 TB, OK)

    SEP (Vendor ID PMCSIERA, Model SRC 8x6G) 250 (WWID: 500143800976949F)

    [root@esxi:/opt/hp/hpssacli/bin] ./hpssacli controller slot=1 array A add drives=2E:0:7
    [root@esxi:/opt/hp/hpssacli/bin] ./hpssacli controller slot=1 modify expandpriority=high
    [root@esxi:/opt/hp/hpssacli/bin] date; ./hpssacli controller all show config detail|grep Transforming
    Sat Dec 23 11:39:39 UTC 2017
    Status: Transforming, 0.58% complete

    This step will last several days depending on your array size, and after that a parity initialization starts automatically:

    [root@esxi:~] /opt/hp/hpssacli/bin/hpssacli controller all show config detail|grep Progress
    Parity Initialization Status: In Progress
    Parity Initialization Progress: 0% complete
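
    Since both the transformation and the parity initialization run for hours to days, a simple polling loop saves retyping the check. This is only a minimal sketch in plain sh, assuming the same hpssacli path as above:

        # Poll the transformation / parity-initialization progress every 10 minutes
        # (Ctrl-C to stop; hpssacli path as installed by the HP bundle)
        while true; do
            date
            /opt/hp/hpssacli/bin/hpssacli controller all show config detail | grep -E 'Transforming|Progress'
            sleep 600
        done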

    After several hours we could extend the logical drive.

  2. Expand/extend the logical drive in the array
    [root@esxi:~] /opt/hp/hpssacli/bin/hpssacli controller slot=1 logicaldrive 1 modify size=max forced
    [root@esxi:~] /opt/hp/hpssacli/bin/hpssacli controller all show config detail|grep Progress
    Parity Initialization Status: In Progress
    Parity Initialization Progress: 5% complete

    When it finished, the size of the logical drive had been increased.

    [root@esxi:~] /opt/hp/hpssacli/bin/hpssacli controller all show config

    Smart Array P212 in Slot 1 (sn: PACCP9SZ2DQB )

       Port Name: 1I

       Port Name: 2E

       array A (SATA, Unused Space: 0 MB)

          logicaldrive 1 (16.4 TB, RAID 5, OK)

          physicaldrive 1I:0:1 (port 1I:box 0:bay 1, SATA, 3 TB, OK)
          physicaldrive 1I:0:2 (port 1I:box 0:bay 2, SATA, 3 TB, OK)
          physicaldrive 1I:0:3 (port 1I:box 0:bay 3, SATA, 3 TB, OK)
          physicaldrive 1I:0:4 (port 1I:box 0:bay 4, SATA, 3 TB, OK)
          physicaldrive 2E:0:5 (port 2E:box 0:bay 5, SATA, 3 TB, OK)
          physicaldrive 2E:0:6 (port 2E:box 0:bay 6, SATA, 3 TB, OK)
          physicaldrive 2E:0:7 (port 2E:box 0:bay 7, SATA, 3 TB, OK)

    SEP (Vendor ID PMCSIERA, Model SRC 8x6G) 250 (WWID: 500143800976949F)
  3. Extend the datastore and the VMDK file in the ESXi
    Checked and confirmed the new size had been recognized (screenshots omitted). From the 'Properties' of the Datastore, we could increase the size easily; I selected 'Maximum available space'. And we could increase the size of the VMDK file directly through the client.
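
    If you prefer the shell to the GUI, the VMDK itself can also be grown with vmkfstools. This is only a sketch, not what I ran; the path is the dsm disk from this setup, and the 16 TB target size is an assumption you must adjust to your own array:

        # Grow the virtual disk in place to 16 TB (the VM must be powered off);
        # vmkfstools -X can only extend a VMDK, never shrink it
        vmkfstools -X 16384g /vmfs/volumes/FRAID5/dsm/dsm.vmdk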
  4. Extend the volume of the Synology NAS
    To finish this step safely, I worked on the soft RAID array from another VM, then detached the disk and opened it in the NAS system again.

    [root@esxi:~] vim-cmd vmsvc/getallvms
    Vmid  Name         File                                  Guest OS               Version  Annotation
    10    ora12c       [FRAID5] ora12c/ora12c.vmx            rhel7_64Guest          vmx-11
    11    dsm          [FRAID5] dsm/dsm.vmx                  other26xLinux64Guest   vmx-11
    12    rac-node2    [FRAID5] rac12-node2/rac12-node2.vmx  rhel6_64Guest          vmx-11
    13    oracc12c     [SSD] oracc12c/oracc12c.vmx           rhel6_64Guest          vmx-11
    15    rac12-leaf1  [FRAID5] rac12-leaf1/rac12-leaf1.vmx  rhel6_64Guest          vmx-11
    16    rac12-leaf2  [FRAID5] rac12-leaf2/rac12-leaf2.vmx  rhel6_64Guest          vmx-11
    18    ddns         [FRAID5] ddns/ddns.vmx                rhel6_64Guest          vmx-11
    20    RController  [SSD] RController/RController.vmx     winNetEnterpriseGuest  vmx-11
    9     rac-node1    [FRAID5] rac12-node1/rac12-node1.vmx  rhel6_64Guest          vmx-11

    I got the Vmids of all the VMs because I could not attach such a large disk to a VM directly through the client, and added it to the VM named ora12c:

    [root@esxi:~] vim-cmd vmsvc/device.diskaddexisting 10 /vmfs/volumes/FRAID5/dsm/dsm.vmdk 0 5

    I already had five disks (0-4) on SCSI controller 0, so I assigned unit number 5 to the vmdk file. Please make sure the NAS system is stopped before running the following commands; a quick sanity check from the ESXi shell is sketched below.
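
    A minimal sketch of such a check, assuming the Vmids listed above (11 = dsm, 10 = ora12c) and the stock vim-cmd subcommands:

        # Power off the DSM VM so its disk is not in use while we modify it
        vim-cmd vmsvc/power.off 11
        # Verify it reports "Powered off"
        vim-cmd vmsvc/power.getstate 11
        # List ora12c's devices to see which SCSI unit numbers are already taken
        vim-cmd vmsvc/device.getdevices 10 | grep -E 'unitNumber|\.vmdk'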

    [root@ora12c ~]# fdisk -l
    ....................
    WARNING: fdisk GPT support is currently new, and therefore in an experimental phase. Use at your own discretion.

    Disk /dev/sdf: 17592.2 GB, 17592186044416 bytes, 34359738368 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk label type: gpt

    #  Start    End          Size   Type        Name
    1  2048     4982527      2.4G   Linux RAID
    2  4982528  9176831      2G     Linux RAID
    3  9437184  30408703966  14.2T  Linux RAID  Linux RAID
    ......................
    Disk /dev/md127: 15564.4 GB, 15564423544320 bytes, 30399264735 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    ......................
    [root@ora12c ~]# gdisk -l /dev/sdf
    GPT fdisk (gdisk) version 0.8.6

    Partition table scan:
      MBR: protective
      BSD: not present
      APM: not present
      GPT: present

    Found valid GPT with protective MBR; using GPT.
    Disk /dev/sdf: 34359738368 sectors, 16.0 TiB
    Logical sector size: 512 bytes
    Disk identifier (GUID): 3FA697A2-5A88-45AD-89B3-70C227AF71AE
    Partition table holds up to 128 entries
    First usable sector is 34, last usable sector is 34359738334
    Partitions will be aligned on 256-sector boundaries
    Total free space is 3951296734 sectors (1.8 TiB)

    Number  Start (sector)  End (sector)  Size      Code  Name
    1       2048            4982527       2.4 GiB   FD00
    2       4982528         9176831       2.0 GiB   FD00
    3       9437184         30408703966   14.2 TiB  FD00  Linux RAID
    [root@ora12c ~]# mdadm --detail /dev/md127
    /dev/md127:
            Version : 1.2
      Creation Time : Tue Jan 12 00:23:11 2016
         Raid Level : raid1
         Array Size : 15199632367 (14495.50 GiB 15564.42 GB)
      Used Dev Size : 15199632367 (14495.50 GiB 15564.42 GB)
       Raid Devices : 1
      Total Devices : 1
        Persistence : Superblock is persistent

        Update Time : Fri Dec 29 22:15:58 2017
              State : clean
     Active Devices : 1
    Working Devices : 1
     Failed Devices : 0
      Spare Devices : 0

               Name : Gen8:2
               UUID : e3b94737:7549dd5b:afe0a119:b9080857
             Events : 54

        Number  Major  Minor  RaidDevice  State
        0       8      83     0           active sync  /dev/sdf3
    [root@ora12c ~]# mdadm -S /dev/md127
    mdadm: stopped /dev/md127
    [root@ora12c ~]# gdisk /dev/sdf
    GPT fdisk (gdisk) version 0.8.6

    Partition table scan:
      MBR: protective
      BSD: not present
      APM: not present
      GPT: present

    Found valid GPT with protective MBR; using GPT.

    Command (? for help): p
    Disk /dev/sdf: 34359738368 sectors, 16.0 TiB
    Logical sector size: 512 bytes
    Disk identifier (GUID): 3FA697A2-5A88-45AD-89B3-70C227AF71AE
    Partition table holds up to 128 entries
    First usable sector is 34, last usable sector is 34359738334
    Partitions will be aligned on 256-sector boundaries
    Total free space is 3951296734 sectors (1.8 TiB)

    Number  Start (sector)  End (sector)  Size      Code  Name
    1       2048            4982527       2.4 GiB   FD00
    2       4982528         9176831       2.0 GiB   FD00
    3       9437184         30408703966   14.2 TiB  FD00  Linux RAID

    Command (? for help): d
    Partition number (1-3): 3

    Command (? for help): n
    Partition number (3-128, default 3): 3
    First sector (34-34359738334, default = 9176832) or {+-}size{KMGTP}: 9437184
    Last sector (9437184-34359738334, default = 34359738334) or {+-}size{KMGTP}:
    Current type is 'Linux filesystem'
    Hex code or GUID (L to show codes, Enter = 8300): FD00
    Changed type of partition to 'Linux RAID'

    Command (? for help): p
    Disk /dev/sdf: 34359738368 sectors, 16.0 TiB
    Logical sector size: 512 bytes
    Disk identifier (GUID): 3FA697A2-5A88-45AD-89B3-70C227AF71AE
    Partition table holds up to 128 entries
    First usable sector is 34, last usable sector is 34359738334
    Partitions will be aligned on 256-sector boundaries
    Total free space is 262366 sectors (128.1 MiB)

    Number  Start (sector)  End (sector)  Size      Code  Name
    1       2048            4982527       2.4 GiB   FD00
    2       4982528         9176831       2.0 GiB   FD00
    3       9437184         34359738334   16.0 TiB  FD00  Linux RAID

    Command (? for help): w

    Final checks complete. About to write GPT data. THIS WILL OVERWRITE EXISTING
    PARTITIONS!!

    Do you want to proceed? (Y/N): y
    OK; writing new GUID partition table (GPT) to /dev/sdf.
    Warning: The kernel is still using the old partition table.
    The new table will be used at the next reboot.
    The operation has completed successfully.

    The most important thing is the start sector of the recreated partition: it MUST be the same as before (9437184 here), otherwise the RAID member and the data on it will be lost.
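
    Before deleting the old partition, record its exact start sector so you can compare it after recreating the partition. One quick way, assuming sgdisk is installed (it ships in the same gptfdisk package as gdisk):

        # Print the details of partition 3; the 'First sector' value (9437184 here)
        # must be reused verbatim when the partition is recreated
        sgdisk -i 3 /dev/sdf | grep 'First sector'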

    [root@ora12c ~]# blockdev --rereadpt /dev/sdf
    blockdev: ioctl error on BLKRRPART: Device or resource busy
    [root@ora12c ~]# mdadm -R /dev/md127
    [root@ora12c ~]# mdadm -S /dev/md127
    mdadm: stopped /dev/md127
    [root@ora12c ~]# blockdev --rereadpt /dev/sdf
    [root@ora12c ~]# mdadm -R /dev/md127
    [root@ora12c ~]# mdadm --grow /dev/md127 --size=max
    mdadm: component size of /dev/md127 has been set to 17175149551K
    unfreeze
    [root@ora12c ~]# mdadm --detail /dev/md127
    /dev/md127:
            Version : 1.2
      Creation Time : Tue Jan 12 00:23:11 2016
         Raid Level : raid1
         Array Size : 17175149551 (16379.50 GiB 17587.35 GB)
      Used Dev Size : 17175149551 (16379.50 GiB 17587.35 GB)
       Raid Devices : 1
      Total Devices : 1
        Persistence : Superblock is persistent

        Update Time : Fri Dec 29 22:41:22 2017
              State : clean
     Active Devices : 1
    Working Devices : 1
     Failed Devices : 0
      Spare Devices : 0

               Name : Gen8:2
               UUID : e3b94737:7549dd5b:afe0a119:b9080857
             Events : 58

        Number  Major  Minor  RaidDevice  State
        0       8      83     0           active sync  /dev/sdf3
    [root@ora12c ~]# e2fsck -f /dev/md127
    e2fsck 1.42.9 (28-Dec-2013)
    Pass 1: Checking inodes, blocks, and sizes
    Inode 470941892 has INDEX_FL flag set on filesystem without htree support.
    Clear HTree index<y>? yes
    Inode 557581448 has INDEX_FL flag set on filesystem without htree support.
    Clear HTree index<y>? yes
    Inode 557582191 has INDEX_FL flag set on filesystem without htree support.
    Clear HTree index<y>? yes
    Inode 557582540 has INDEX_FL flag set on filesystem without htree support.
    Clear HTree index<y>? yes
    Inode 557583296 has INDEX_FL flag set on filesystem without htree support.
    Clear HTree index<y>? yes
    Pass 2: Checking directory structure
    Pass 3: Checking directory connectivity
    Pass 4: Checking reference counts
    Pass 5: Checking group summary information

    1.42.6-5644: ***** FILE SYSTEM WAS MODIFIED *****
    1.42.6-5644: 14818/949977088 files (3.6% non-contiguous), 3501742017/3799908091 blocks
    [root@ora12c ~]# resize2fs /dev/md127
    resize2fs 1.42.9 (28-Dec-2013)
    Resizing the filesystem on /dev/md127 to 4293787387 (4k) blocks.
    The filesystem on /dev/md127 is now 4293787387 blocks long.

    [root@ora12c ~]# ls /mnt
    dsm iso test
    [root@ora12c ~]# mount /dev/md127 /mnt/dsm
    [root@ora12c ~]# df -h /mnt/dsm
    Filesystem  Size  Used  Avail  Use%  Mounted on
    /dev/md127  16T   13T   3.0T   82%   /mnt/dsm
    [root@ora12c ~]# umount /dev/md127

    The first blockdev --rereadpt failed because the md array was still holding /dev/sdf, which is why the array had to be stopped first. Note that resize2fs requires the filesystem to have been checked beforehand, which is why e2fsck -f is run; you might not get any errors from it.

Done!

Now you can enjoy the large space in the NAS system.

 
  • Posted by NeilZhang on 31/12/2017 02:41:25
  • Repost please keep this link: https://www.dbcloudsvc.com/blogs/life/add-space-to-the-synology-nas-in-the-vm/