Root partition is four times larger than what I specified

I'm running Ubuntu Server 18.04 with 4× 2 TB disks in software RAID 0. On the panel's OS installation web page I specified 4 partitions: /boot 500 MB, root 15 GB, swap 4 GB, and /home with all the rest. However, the root partition appears to be 60 GB instead of 15 GB. Any help would be greatly appreciated.

$ lsblk -o NAME,SIZE,FSTYPE,TYPE,MOUNTPOINT

NAME     SIZE FSTYPE            TYPE  MOUNTPOINT
sda      1.8T                   disk
├─sda1     1M                   part
├─sda2   500M linux_raid_member part
│ └─md2  500M ext4              raid1 /boot
├─sda3    15G linux_raid_member part
│ └─md3   60G ext4              raid0 /
├─sda4     4G linux_raid_member part
│ └─md4    4G swap              raid1 [SWAP]
└─sda5   1.8T linux_raid_member part
  └─md5  7.2T xfs               raid0 /home
sdb      1.8T                   disk
├─sdb1     1M                   part
├─sdb2   500M linux_raid_member part
│ └─md2  500M ext4              raid1 /boot
├─sdb3    15G linux_raid_member part
│ └─md3   60G ext4              raid0 /
├─sdb4     4G linux_raid_member part
│ └─md4    4G swap              raid1 [SWAP]
└─sdb5   1.8T linux_raid_member part
  └─md5  7.2T xfs               raid0 /home
sdc      1.8T                   disk
├─sdc1     1M                   part
├─sdc2   500M linux_raid_member part
│ └─md2  500M ext4              raid1 /boot
├─sdc3    15G linux_raid_member part
│ └─md3   60G ext4              raid0 /
├─sdc4     4G linux_raid_member part
│ └─md4    4G swap              raid1 [SWAP]
└─sdc5   1.8T linux_raid_member part
  └─md5  7.2T xfs               raid0 /home
sdd      1.8T                   disk
├─sdd1     1M                   part
├─sdd2   500M linux_raid_member part
│ └─md2  500M ext4              raid1 /boot
├─sdd3    15G linux_raid_member part
│ └─md3   60G ext4              raid0 /
├─sdd4     4G linux_raid_member part
│ └─md4    4G swap              raid1 [SWAP]
└─sdd5   1.8T linux_raid_member part
  └─md5  7.2T xfs               raid0 /home

──

$ cat /proc/mdstat 

Personalities : [raid0] [raid1] [linear] [multipath] [raid6] [raid5] [raid4] [raid10]
md5 : active raid0 sdb5[1] sdd5[0] sda5[3] sdc5[2]
      7731781632 blocks super 1.2 512k chunks

md4 : active raid1 sdb4[1] sdd4[0] sdc4[2] sda4[3]
      4189184 blocks super 1.2 [4/4] [UUUU]

md3 : active raid0 sdb3[1] sdd3[0] sdc3[2] sda3[3]
      62877696 blocks super 1.2 512k chunks

md2 : active raid1 sdb2[1] sdd2[0] sda2[3] sdc2[2]
      511936 blocks [4/4] [UUUU]

unused devices: <none>

──

$ sudo mdadm -D /dev/md*
mdadm: /dev/md does not appear to be an md device
/dev/md2:
           Version : 0.90
     Creation Time : Sat Aug 25 13:13:01 2018
        Raid Level : raid1
        Array Size : 511936 (499.94 MiB 524.22 MB)
     Used Dev Size : 511936 (499.94 MiB 524.22 MB)
      Raid Devices : 4
     Total Devices : 4
   Preferred Minor : 2
       Persistence : Superblock is persistent

       Update Time : Sat Aug 25 13:24:49 2018
             State : clean
    Active Devices : 4
   Working Devices : 4
    Failed Devices : 0
     Spare Devices : 0

Consistency Policy : resync

              UUID : 6b099ee4:971af168:d096250b:d3276e37 (local to host 2037643)
            Events : 0.268

    Number   Major   Minor   RaidDevice State
       0       8       50        0      active sync   /dev/sdd2
       1       8       18        1      active sync   /dev/sdb2
       2       8       34        2      active sync   /dev/sdc2
       3       8        2        3      active sync   /dev/sda2
/dev/md3:
           Version : 1.2
     Creation Time : Sat Aug 25 13:13:01 2018
        Raid Level : raid0
        Array Size : 62877696 (59.96 GiB 64.39 GB)
      Raid Devices : 4
     Total Devices : 4
       Persistence : Superblock is persistent

       Update Time : Sat Aug 25 13:13:01 2018
             State : clean
    Active Devices : 4
   Working Devices : 4
    Failed Devices : 0
     Spare Devices : 0

        Chunk Size : 512K

Consistency Policy : none

              Name : 2037643:3  (local to host 2037643)
              UUID : da299594:601a8ed6:c0f59007:8576c2d7
            Events : 0

    Number   Major   Minor   RaidDevice State
       0       8       51        0      active sync   /dev/sdd3
       1       8       19        1      active sync   /dev/sdb3
       2       8       35        2      active sync   /dev/sdc3
       3       8        3        3      active sync   /dev/sda3
/dev/md4:
           Version : 1.2
     Creation Time : Sat Aug 25 13:13:02 2018
        Raid Level : raid1
        Array Size : 4189184 (4.00 GiB 4.29 GB)
     Used Dev Size : 4189184 (4.00 GiB 4.29 GB)
      Raid Devices : 4
     Total Devices : 4
       Persistence : Superblock is persistent

       Update Time : Sat Aug 25 13:24:30 2018
             State : clean
    Active Devices : 4
   Working Devices : 4
    Failed Devices : 0
     Spare Devices : 0

Consistency Policy : resync

              Name : 2037643:4  (local to host 2037643)
              UUID : 903b6b29:879bab06:0f6c92af:8b417b6d
            Events : 19

    Number   Major   Minor   RaidDevice State
       0       8       52        0      active sync   /dev/sdd4
       1       8       20        1      active sync   /dev/sdb4
       2       8       36        2      active sync   /dev/sdc4
       3       8        4        3      active sync   /dev/sda4
/dev/md5:
           Version : 1.2
     Creation Time : Sat Aug 25 13:13:02 2018
        Raid Level : raid0
        Array Size : 7731781632 (7373.60 GiB 7917.34 GB)
      Raid Devices : 4
     Total Devices : 4
       Persistence : Superblock is persistent

       Update Time : Sat Aug 25 13:13:02 2018
             State : clean
    Active Devices : 4
   Working Devices : 4
    Failed Devices : 0
     Spare Devices : 0

        Chunk Size : 512K

Consistency Policy : none

              Name : 2037643:5  (local to host 2037643)
              UUID : 39165199:f7b432ff:8c4db0ec:54c5397e
            Events : 0

    Number   Major   Minor   RaidDevice State
       0       8       53        0      active sync   /dev/sdd5
       1       8       21        1      active sync   /dev/sdb5
       2       8       37        2      active sync   /dev/sdc5
       3       8        5        3      active sync   /dev/sda5
asked 25 August 2018 at 07:44

1 Answer

You have 4 disks in RAID 0, so the array is 4 × 15 GB = 60 GB.

lsblk -o NAME,SIZE,FSTYPE,TYPE,MOUNTPOINT

shows how / is striped across your 4 disks.

To end up with a 15 GB root, you would need to choose 3.75 GB per disk (4 × 3.75 GB = 15 GB).
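The arithmetic can be checked directly against the output in the question; a minimal sketch, using the sizes reported by `lsblk` and `mdadm -D` above (RAID0 capacity is the sum of the members, so a 15 GB partition on each of 4 disks yields a ~60 GB array):

```python
# RAID0 capacity is the sum of the member devices' capacities;
# a RAID1 mirror, by contrast, has the capacity of a single member.
members = 4
member_size_gib = 15            # per-disk partition size chosen in the installer

raid0_size_gib = members * member_size_gib
print(raid0_size_gib)           # 60 -- why md3 reports ~60 GiB, not 15

# Cross-check against /proc/mdstat, which reports sizes in 1 KiB blocks:
md3_blocks = 62877696
print(round(md3_blocks / 1024**2, 2))   # 59.96 GiB, matching mdadm -D

# To get a 15 GiB root on a 4-member RAID0, each partition must be 15/4:
print(member_size_gib / members)        # 3.75 GiB per disk
```

Alternatively, if you want the root filesystem to match the size you enter per disk (and gain redundancy), choose RAID1 for that array instead of RAID0.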

answered 28 October 2019 at 02:18
