Resize (extend) Ceph OSDs


On my 3-node Ceph cluster the OSDs have become too small:

# ceph -s
  cluster:
    id:     73d05f3d-4612-4a22-b9d3-97eabfc75bc5
    health: HEALTH_WARN
            3 nearfull osd(s)
            1 pool(s) nearfull
            mon NODE1 is low on available space

  services:
    mon: 3 daemons, quorum NODE2,NODE3,NODE1
    mgr: NODE1(active), standbys: NODE2, NODE3
    osd: 3 osds: 3 up, 3 in

  data:
    pools:   1 pools, 100 pgs
    objects: 9.75k objects, 36.2GiB
    usage:   111GiB used, 14.9GiB / 126GiB avail
    pgs:     100 active+clean

  io:
    client:   7.66KiB/s rd, 1.01MiB/s wr, 0op/s rd, 75op/s wr

# ceph osd df
ID CLASS WEIGHT  REWEIGHT SIZE    USE     AVAIL   %USE  VAR  PGS 
 0   hdd 0.04089  1.00000 41.9GiB 37.1GiB 4.81GiB 88.53 1.00 100 
 1   ssd 0.04089  1.00000 41.9GiB 37.0GiB 4.87GiB 88.38 1.00 100 
 2   ssd 0.04089  1.00000 41.9GiB 37.0GiB 4.87GiB 88.38 1.00 100 
                    TOTAL  126GiB  111GiB 14.5GiB 88.43          
MIN/MAX VAR: 1.00/1.00  STDDEV: 0.07

So I've extended the LVM volume backing each OSD to 60 GB.
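
For reference, the resize itself was done with plain LVM commands on each node, roughly like this (VGNAME/LVNAME are placeholders for the actual volume group and logical volume of each OSD, not my real names):

# lvextend -L 60G /dev/VGNAME/LVNAME
# lvs -o lv_name,lv_size VGNAME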

Unfortunately, Ceph didn't pick up the change; the output stayed the same for quite a while.

However, after rebooting all machines (not simultaneously), the output is now:

# ceph -s
  cluster:
    id:     73d05f3d-4612-4a22-b9d3-97eabfc75bc5
    health: HEALTH_WARN
            3 backfillfull osd(s)
            1 pool(s) backfillfull
            mon NODE1 is low on available space

  services:
    mon: 3 daemons, quorum NODE2,NODE3,NODE1
    mgr: NODE3(active), standbys: NODE1, NODE2
    osd: 3 osds: 3 up, 3 in

  data:
    pools:   1 pools, 100 pgs
    objects: 9.79k objects, 36.4GiB
    usage:   165GiB used, 14.5GiB / 180GiB avail
    pgs:     100 active+clean

  io:
    client:   210KiB/s wr, 0op/s rd, 18op/s wr

# ceph osd df
ID CLASS WEIGHT  REWEIGHT SIZE   USE     AVAIL   %USE  VAR  PGS 
 0   hdd 0.04089  1.00000  60GiB 55.2GiB 4.81GiB 91.99 1.00 100 
 1   ssd 0.04089  1.00000  60GiB 55.1GiB 4.87GiB 91.89 1.00 100 
 2   ssd 0.04089  1.00000  60GiB 55.1GiB 4.87GiB 91.89 1.00 100 
                    TOTAL 180GiB  165GiB 14.5GiB 91.92          
MIN/MAX VAR: 1.00/1.00  STDDEV: 0.05

So Ceph did pick up the new size, but the available space hasn't changed: per-OSD USE simply jumped from ~37 GiB to ~55 GiB while AVAIL stayed at ~4.8 GiB, even though the pool still holds roughly the same ~36 GiB of data. That makes no sense to me.

Can anyone explain what's going on here, and how I can fix it?

Or, can I at least safely reduce the LVM volumes back to their original size?
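
In case it's relevant: the only thing I've found so far that sounds related is ceph-bluestore-tool bluefs-bdev-expand, which I have not run, and I don't know whether it applies to my setup or is safe on this version. Assuming BlueStore OSDs and using osd.0 as an example, it would presumably look something like this, with the daemon stopped:

# systemctl stop ceph-osd@0
# ceph-bluestore-tool bluefs-bdev-expand --path /var/lib/ceph/osd/ceph-0
# systemctl start ceph-osd@0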

ceph version 12.2.12 (1436006594665279fe734b4c15d7e08c13ebd777) luminous (stable)


