
missing stratis pool after update to Fedora 39: thin_repair out of metadata space #3520

Closed
erickj opened this issue Jan 2, 2024 · 48 comments

@erickj

erickj commented Jan 2, 2024

Hello, I've just updated to Fedora 39 and have noticed that 1 of 3 stratis pools has disappeared from stratis management. How can I recover this data?

Running stratis version:

stratisd-3.6.3-1.fc39.x86_64
stratis-cli-3.6.0-1.fc39.noarch

The missing pool is the net.ejjohnson.home pool, listed below in the stratis report output under partially_constructed_pools.

$ stratis report
{
    "name_to_pool_uuid_map": {
        "net.ejjohnson.home": "7e18ddcd-9924-4c92-b926-100a7498630b"
    },
    "partially_constructed_pools": [
        {
            "devices": [
                {
                    "device_uuid": "5335c8c3-df0e-4f29-a241-edc58f384e21",
                    "devnode": "/dev/sdc",
                    "major": 8,
                    "minor": 32,
                    "pool_uuid": "7e18ddcd-9924-4c92-b926-100a7498630b"
                }
            ],
            "pool_uuid": "7e18ddcd-9924-4c92-b926-100a7498630b"
        }
    ],
    "path_to_ids_map": {
        "/dev/sdc": [
            "7e18ddcd-9924-4c92-b926-100a7498630b",
            "5335c8c3-df0e-4f29-a241-edc58f384e21"
        ]
    },
    "pools": [
        {
            "available_actions": "fully_operational",
            "blockdevs": {
                "cachedevs": [],
                "datadevs": [
                    {
                        "blksizes": "base: BLKSSSZGET: 512 bytes, BLKPBSZGET: 4096 bytes, crypt: None",
                        "in_use": true,
                        "path": "/dev/sdb",
                        "size": "7814037168 sectors",
                        "uuid": "5c9a2e88-cac0-4f38-a988-d72347894123"
                    },
                    {
                        "blksizes": "base: BLKSSSZGET: 512 bytes, BLKPBSZGET: 4096 bytes, crypt: None",
                        "in_use": true,
                        "path": "/dev/sda",
                        "size": "7814037168 sectors",
                        "uuid": "ec2d0e73-64fd-437a-ac4b-f5800248f44a"
                    }
                ]
            },
            "filesystems": [
                {
                    "name": "fs_raw",
                    "size": "4294967296 sectors",
                    "size_limit": "Not set",
                    "used": "885516140544 bytes",
                    "uuid": "e8071df3-346a-4753-bda1-524c84d037f9"
                }
            ],
            "fs_limit": 100,
            "name": "io.vos",
            "uuid": "8d86f3f6-8666-490b-99b0-b5c6a5fc7986"
        },
        {
            "available_actions": "fully_operational",
            "blockdevs": {
                "cachedevs": [],
                "datadevs": [
                    {
                        "blksizes": "base: BLKSSSZGET: 512 bytes, BLKPBSZGET: 512 bytes, crypt: None",
                        "in_use": true,
                        "path": "/dev/sdd",
                        "size": "250069680 sectors",
                        "uuid": "a3928e05-964a-4e65-8b76-5ea557ee6ff0"
                    }
                ]
            },
            "filesystems": [
                {
                    "name": "tmp",
                    "size": "2147483648 sectors",
                    "size_limit": "Not set",
                    "used": "2157969408 bytes",
                    "uuid": "8bec6004-cfe7-4820-99c3-1a827d37ba7f"
                }
            ],
            "fs_limit": 100,
            "name": "local.volatile",
            "uuid": "c70a5b86-fa15-45b3-a71e-4a9f78d1e340"
        }
    ],
    "stopped_pools": []
}

Looking at the block devices, the missing pool should be on sdc:

$ lsblk
NAME                                                                                        MAJ:MIN RM   SIZE RO TYPE    MOUNTPOINTS
sda                                                                                           8:0    0   3.6T  0 disk    
└─stratis-1-private-8d86f3f68666490b99b0b5c6a5fc7986-physical-originsub                     253:12   0   7.3T  0 stratis 
  ├─stratis-1-private-8d86f3f68666490b99b0b5c6a5fc7986-flex-thinmeta                        253:13   0   7.4G  0 stratis 
  │ └─stratis-1-private-8d86f3f68666490b99b0b5c6a5fc7986-thinpool-pool                      253:15   0   7.3T  0 stratis 
  │   └─stratis-1-8d86f3f68666490b99b0b5c6a5fc7986-thin-fs-e8071df3346a4753bda1524c84d037f9 253:17   0     2T  0 stratis /mnt/io.vos_raw
  ├─stratis-1-private-8d86f3f68666490b99b0b5c6a5fc7986-flex-thindata                        253:14   0   7.3T  0 stratis 
  │ └─stratis-1-private-8d86f3f68666490b99b0b5c6a5fc7986-thinpool-pool                      253:15   0   7.3T  0 stratis 
  │   └─stratis-1-8d86f3f68666490b99b0b5c6a5fc7986-thin-fs-e8071df3346a4753bda1524c84d037f9 253:17   0     2T  0 stratis /mnt/io.vos_raw
  └─stratis-1-private-8d86f3f68666490b99b0b5c6a5fc7986-flex-mdv                             253:16   0    16M  0 stratis 
sdb                                                                                           8:16   0   3.6T  0 disk    
└─stratis-1-private-8d86f3f68666490b99b0b5c6a5fc7986-physical-originsub                     253:12   0   7.3T  0 stratis 
  ├─stratis-1-private-8d86f3f68666490b99b0b5c6a5fc7986-flex-thinmeta                        253:13   0   7.4G  0 stratis 
  │ └─stratis-1-private-8d86f3f68666490b99b0b5c6a5fc7986-thinpool-pool                      253:15   0   7.3T  0 stratis 
  │   └─stratis-1-8d86f3f68666490b99b0b5c6a5fc7986-thin-fs-e8071df3346a4753bda1524c84d037f9 253:17   0     2T  0 stratis /mnt/io.vos_raw
  ├─stratis-1-private-8d86f3f68666490b99b0b5c6a5fc7986-flex-thindata                        253:14   0   7.3T  0 stratis 
  │ └─stratis-1-private-8d86f3f68666490b99b0b5c6a5fc7986-thinpool-pool                      253:15   0   7.3T  0 stratis 
  │   └─stratis-1-8d86f3f68666490b99b0b5c6a5fc7986-thin-fs-e8071df3346a4753bda1524c84d037f9 253:17   0     2T  0 stratis /mnt/io.vos_raw
  └─stratis-1-private-8d86f3f68666490b99b0b5c6a5fc7986-flex-mdv                             253:16   0    16M  0 stratis 
sdc                                                                                           8:32   0   2.7T  0 disk    
└─stratis-1-private-7e18ddcd99244c92b926100a7498630b-physical-originsub                     253:3    0   2.7T  0 stratis 
  ├─stratis-1-private-7e18ddcd99244c92b926100a7498630b-flex-thinmeta                        253:4    0   2.8G  0 stratis 
  └─stratis-1-private-7e18ddcd99244c92b926100a7498630b-flex-thinmetaspare                   253:5    0    16M  0 stratis 
sdd                                                                                           8:48   0 119.2G  0 disk    
└─stratis-1-private-c70a5b86fa1545b3a71e4a9f78d1e340-physical-originsub                     253:6    0 119.2G  0 stratis 
  ├─stratis-1-private-c70a5b86fa1545b3a71e4a9f78d1e340-flex-thinmeta                        253:7    0   112M  0 stratis 
  │ └─stratis-1-private-c70a5b86fa1545b3a71e4a9f78d1e340-thinpool-pool                      253:9    0 119.1G  0 stratis 
  │   └─stratis-1-c70a5b86fa1545b3a71e4a9f78d1e340-thin-fs-8bec6004cfe7482099c31a827d37ba7f 253:11   0     1T  0 stratis /opt/volatile/tmp
  ├─stratis-1-private-c70a5b86fa1545b3a71e4a9f78d1e340-flex-thindata                        253:8    0 119.1G  0 stratis 
  │ └─stratis-1-private-c70a5b86fa1545b3a71e4a9f78d1e340-thinpool-pool                      253:9    0 119.1G  0 stratis 
  │   └─stratis-1-c70a5b86fa1545b3a71e4a9f78d1e340-thin-fs-8bec6004cfe7482099c31a827d37ba7f 253:11   0     1T  0 stratis /opt/volatile/tmp
  └─stratis-1-private-c70a5b86fa1545b3a71e4a9f78d1e340-flex-mdv                             253:10   0    16M  0 stratis 
sde                                                                                           8:64   1  57.3G  0 disk    /run/media/erick/Sandisk-Ultra
zram0                                                                                       252:0    0     8G  0 disk    [SWAP]
nvme0n1                                                                                     259:0    0 465.8G  0 disk    
├─nvme0n1p1                                                                                 259:1    0     1G  0 part    /boot
└─nvme0n1p2                                                                                 259:2    0 464.8G  0 part    
  └─luks-5f8f4e1c-e8f2-4329-bc24-f56bf78cf515                                               253:0    0 464.8G  0 crypt   
    ├─fedora-root                                                                           253:1    0   445G  0 lvm     /
    └─fedora-swap                                                                           253:2    0  15.7G  0 lvm     [SWAP]
@erickj
Author

erickj commented Jan 2, 2024

The failure to set up the pool is logged here:

Jan 02 02:52:03 jupiter stratisd[293912]: [2024-01-02T01:52:03Z INFO  stratisd::engine::strat_engine::liminal::identify] Stratis block device with Stratis pool UUID: "7e18ddcd-9924-4c92-b926-100a7498630b", Stratis device UUID: "5335c8c3-df0e-4f29-a241-edc58f384e21", device number: "8:32", devnode: "/dev/sdc" discovered during initial search
Jan 02 02:52:03 jupiter stratisd[293912]: [2024-01-02T01:52:03Z INFO  stratisd::engine::strat_engine::liminal::identify] Stratis block device with Stratis pool UUID: "c70a5b86-fa15-45b3-a71e-4a9f78d1e340", Stratis device UUID: "a3928e05-964a-4e65-8b76-5ea557ee6ff0", device number: "8:48", devnode: "/dev/sdd" discovered during initial search
Jan 02 02:52:03 jupiter stratisd[293912]: [2024-01-02T01:52:03Z INFO  stratisd::engine::strat_engine::liminal::device_info] Device information Stratis device description: Stratis pool UUID: "7e18ddcd-9924-4c92-b926-100a7498630b", Stratis device UUID: "5335c8c3-df0e-4f29-a241-edc58f384e21", device number: "8:32", devnode: "/dev/sdc" discovered and inserted into the set for its pool UUID
Jan 02 02:52:06 jupiter stratisd[293912]: [2024-01-02T01:52:06Z WARN  stratisd::engine::strat_engine::thinpool::thinpool] Thin check failed: Command failed: cmd: "/usr/sbin/thin_check" "-q" "/dev/dm-4", exit reason: 64 stdout:  stderr:
Jan 02 02:53:04 jupiter stratisd[293912]: [2024-01-02T01:53:04Z INFO  stratisd::engine::strat_engine::liminal::liminal] Attempt to set up pool failed, but it may be possible to set up the pool later, if the situation changes: An attempt to set up pool with UUID 7e18ddcd-9924-4c92-b926-100a7498630b from the assembled devices failed; Command failed: cmd: "/usr/sbin/thin_repair" "-i" "/dev/dm-4" "-o" "/dev/dm-5", exit reason: 64 stdout:  stderr: output error: value error: out of metadata space
Jan 02 02:53:04 jupiter stratisd[293912]:     
Jan 02 02:53:04 jupiter stratisd[293912]: [2024-01-02T01:53:04Z INFO  stratisd::engine::strat_engine::liminal::device_info] Device information Stratis device description: Stratis pool UUID: "c70a5b86-fa15-45b3-a71e-4a9f78d1e340", Stratis device UUID: "a3928e05-964a-4e65-8b76-5ea557ee6ff0", device number: "8:48", devnode: "/dev/sdd" discovered and inserted into the set for its pool UUID

These errors appear only in the most recent journalctl boot log (after upgrading to Fedora 39). Prior to this boot the stratis pool was set up successfully:

Jan 01 23:23:12 jupiter stratisd[115614]: [2024-01-01T22:23:12Z INFO  stratisd::engine::strat_engine::liminal::liminal] Pool with name "net.ejjohnson.home" and UUID "7e18ddcd-9924-4c92-b926-100a7498630b" set up

Can the out of metadata space error be explained here? More specifically, what exactly is out of space, and can it be repaired manually?

There is very little info I can find on the web about this error, and none of it is in the context of repairing a volume under Stratis management. Any information is appreciated.

erickj changed the title from "missing stratis pool after update to Fedora 39" to "missing stratis pool after update to Fedora 39: thin_repair out of metadata space" on Jan 2, 2024
mulkieran self-assigned this on Jan 2, 2024
@mulkieran
Member

@erickj Thanks for the information. Can you report the results of running thin_check /dev/dm-4 without the -q option? We would like to understand what prompted the thin_repair invocation, which is not a standard part of pool setup.

@mulkieran
Member

mulkieran commented Jan 2, 2024

@erickj What is happening is that the thin_check call failed, and consequently a thin_repair action was initiated. The thin repair action was making use of your backup metadata device and thin_repair seems to have reported that device as too small, almost certainly because it is too small (16 M).
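
As a rough sanity check (assuming 512-byte sectors, which is what the other sizes here suggest), the 16 M spare that lsblk shows corresponds to only 32768 sectors:

> echo "$(( 32768 * 512 / 1024 / 1024 )) MiB"
16 MiB

That is almost certainly too small to hold a repaired copy of the metadata from a thinmeta device that lsblk shows as roughly 2.8G.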

@erickj
Author

erickj commented Jan 2, 2024

Can you report the results of running thin_check /dev/dm-4

$ sudo thin_check /dev/dm-4
Device or resource busy (os error 16)
15:59:14 [erick@jupiter:~] 
$ echo $?
64
15:59:25 [erick@jupiter:~] 
$ ls -l /dev/dm-4
brw-rw----. 1 root disk 253, 4 Jan  2 14:59 /dev/dm-4

Device or resource busy... does something need to be unmounted before running this?

@mulkieran
Member

mulkieran commented Jan 2, 2024

@erickj Again, thanks. Please install the stratisd-tools package. This should make the tool stratis-dumpmetadata available. Run this tool on /dev/sdc as > stratis-dumpmetadata /dev/sdc --only pool. This should yield the pool-level metadata, which determines the size of the thin metadata spare device. Please post that. Our goal is to figure out how you can adjust that pool-level metadata so that the spare metadata device is large enough to support the thin repair action. We will also investigate the "device or resource busy" error, but it is probably due to a behavior of thin_repair and may prove a side issue.
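
Concretely, that would be something like the following (the redirect and the file name are only a suggestion to make the output easy to attach here):

> dnf install stratisd-tools
> stratis-dumpmetadata /dev/sdc --only pool > pool-metadata.json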

@erickj
Author

erickj commented Jan 2, 2024

@mulkieran thanks for following up on this with me. Here is the output:

$ sudo stratis-dumpmetadata /dev/sdc --only pool
{
  "backstore": {
    "cap": {
      "allocs": [
        [
          0,
          5860524032
        ]
      ]
    },
    "data_tier": {
      "blockdev": {
        "allocs": [
          [
            {
              "length": 5860524032,
              "parent": "5335c8c3-df0e-4f29-a241-edc58f384e21",
              "start": 8192
            }
          ]
        ],
        "devs": [
          {
            "hardware_info": "0x50014ee2baf068c0",
            "uuid": "5335c8c3-df0e-4f29-a241-edc58f384e21"
          }
        ]
      }
    }
  },
  "flex_devs": {
    "meta_dev": [
      [
        1638400,
        32768
      ]
    ],
    "thin_data_dev": [
      [
        65536,
        1572864
      ],
      [
        7471104,
        5853052928
      ]
    ],
    "thin_meta_dev": [
      [
        0,
        32768
      ],
      [
        1671168,
        5799936
      ]
    ],
    "thin_meta_dev_spare": [
      [
        32768,
        32768
      ]
    ]
  },
  "name": "net.ejjohnson.home",
  "started": true,
  "thinpool_dev": {
    "data_block_size": 2048,
    "enable_overprov": true,
    "feature_args": [
      "skip_block_zeroing",
      "no_discard_passdown",
      "error_if_no_space"
    ],
    "fs_limit": 100
  }
}

@mulkieran
Member

@erickj

Thanks again. Unfortunately, I think the first idea will not work because /dev/sdc is fully allocated. I ran a script to summarize the output, and it looks like the following:

Data Tier:
 5335c8c3-df0e-4f29-a241-edc58f384e21 8192 5860524032

Cap:
5860524032

Flex:
         0      32768      32768 thin_meta_dev
     32768      32768      65536 thin_meta_dev_spare
     65536    1572864    1638400 thin_data_dev
   1638400      32768    1671168 meta_dev
   1671168    5799936    7471104 thin_meta_dev
   7471104 5853052928 5860524032 thin_data_dev

   5832704 thin_meta_dev
     32768 thin_meta_dev_spare
5854625792 thin_data_dev
     32768 meta_dev

Flex Total: 5860524032

Notice how "5860524032" recurs. It looks like everything on /dev/sdc has been allocated to the cap device, and then everything available in the cap device has been redistributed to the thinpool.

I would like to verify that, though.
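
(For reference, the summary above can be reproduced from the dumped JSON with jq; this is a sketch of one way to do it, not the exact script I ran, and it assumes the output of stratis-dumpmetadata /dev/sdc --only pool has been saved to pool.json:)

> jq '.flex_devs | map_values(map(.[1]) | add)' pool.json
{
  "meta_dev": 32768,
  "thin_data_dev": 5854625792,
  "thin_meta_dev": 5832704,
  "thin_meta_dev_spare": 32768
}
> jq '[.flex_devs[][] | .[1]] | add' pool.json
5860524032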

Please post the following:
> stratis-dumpmetadata /dev/sdb

That will also show the signature buffer of the device which will contain the precise size as understood by Stratis.

Also please post the output of > dmsetup table. This will have a bunch of output, but we can sort through it. It will probably confirm that the thin meta device is set up properly. But we will certainly verify that before proceeding.

If it turns out that there is no space to expand and that the thin meta device is set up properly (which is likely), then I propose that we first try simply restarting to bring the pool up and, if that fails, repairing the thin meta device using the thin tools. But it would be best to confirm the situation before we move further.
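
For context, the manual repair path would be roughly of this shape (not something to run yet; the scratch file path and size are placeholders, and we would first confirm which dm device is the thin metadata device):

> truncate -s 4G /var/tmp/thinmeta-scratch.img
> losetup -f --show /var/tmp/thinmeta-scratch.img   # prints the allocated loop device, e.g. /dev/loop0
> thin_repair -i <thin meta device> -o /dev/loop0
> thin_check /dev/loop0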

@erickj
Author

erickj commented Jan 3, 2024

I've pasted the output below

(edit: the output below was run for /dev/sdb instead of sdc, copied from your comment above. At first I thought you had asked for it for comparison, but upon rereading your comment I think sdb was a typo and I should have collected the data for sdc... I will rerun the command when I get home from work later today.)

stratis-dumpmetadata /dev/sdb

$ sudo stratis-dumpmetadata /dev/sdb
[sudo] password for erick: 
Signature block: 

Header:
StaticHeader {
    blkdev_size: BlockdevSize(
        Sectors(7814037168),
    ),
    identifiers: StratisIdentifiers {
        pool_uuid: PoolUuid(
            8d86f3f6-8666-490b-99b0-b5c6a5fc7986,
        ),
        device_uuid: DevUuid(
            5c9a2e88-cac0-4f38-a988-d72347894123,
        ),
    },
    mda_size: MDASize(
        Sectors(2032),
    ),
    reserved_size: ReservedSize(
        Sectors(6144),
    ),
    flags: 0,
    initialization_time: 2019-02-16T18:41:03Z,
}


BDA {
    header: StaticHeader {
        blkdev_size: BlockdevSize(
            Sectors(7814037168),
        ),
        identifiers: StratisIdentifiers {
            pool_uuid: PoolUuid(
                8d86f3f6-8666-490b-99b0-b5c6a5fc7986,
            ),
            device_uuid: DevUuid(
                5c9a2e88-cac0-4f38-a988-d72347894123,
            ),
        },
        mda_size: MDASize(
            Sectors(2032),
        ),
        reserved_size: ReservedSize(
            Sectors(6144),
        ),
        flags: 0,
        initialization_time: 2019-02-16T18:41:03Z,
    },
    regions: MDARegions {
        region_size: MDARegionSize(
            Sectors(508),
        ),
        mda_headers: [
            Some(
                MDAHeader {
                    last_updated: 2022-09-01T22:10:22.764622724Z,
                    used: MetaDataSize(
                        Bytes(789),
                    ),
                    data_crc: 2454305371,
                },
            ),
            Some(
                MDAHeader {
                    last_updated: 2022-06-22T22:03:05.643630287Z,
                    used: MetaDataSize(
                        Bytes(774),
                    ),
                    data_crc: 1521748339,
                },
            ),
        ],
    },
}

Pool metadata:
{
  "backstore": {
    "cap": {
      "allocs": [
        [
          0,
          15628056576
        ]
      ]
    },
    "data_tier": {
      "blockdev": {
        "allocs": [
          [
            {
              "length": 7814028976,
              "parent": "5c9a2e88-cac0-4f38-a988-d72347894123",
              "start": 8192
            },
            {
              "length": 7814027600,
              "parent": "ec2d0e73-64fd-437a-ac4b-f5800248f44a",
              "start": 8192
            }
          ]
        ],
        "devs": [
          {
            "hardware_info": "0x50014ee265a84a78",
            "uuid": "5c9a2e88-cac0-4f38-a988-d72347894123"
          },
          {
            "hardware_info": "0x50014ee21052f2c0",
            "uuid": "ec2d0e73-64fd-437a-ac4b-f5800248f44a"
          }
        ]
      }
    }
  },
  "flex_devs": {
    "meta_dev": [
      [
        1638400,
        32768
      ]
    ],
    "thin_data_dev": [
      [
        65536,
        1572864
      ],
      [
        17235968,
        15610820608
      ]
    ],
    "thin_meta_dev": [
      [
        0,
        32768
      ],
      [
        1671168,
        15564800
      ]
    ],
    "thin_meta_dev_spare": [
      [
        32768,
        32768
      ]
    ]
  },
  "name": "io.vos",
  "started": true,
  "thinpool_dev": {
    "data_block_size": 2048,
    "enable_overprov": true,
    "feature_args": [
      "skip_block_zeroing",
      "no_discard_passdown"
    ],
    "fs_limit": 100
  }
}

dmsetup table

$ sudo dmsetup table
fedora-root: 0 933232640 linear 253:0 32999424
fedora-swap: 0 32997376 linear 253:0 2048
luks-5f8f4e1c-e8f2-4329-bc24-f56bf78cf515: 0 974669824 crypt aes-xts-plain64 00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000 0 259:2 4096 1 allow_discards
stratis-1-8d86f3f68666490b99b0b5c6a5fc7986-thin-fs-e8071df3346a4753bda1524c84d037f9: 0 4294967296 thin 253:6 0
stratis-1-c70a5b86fa1545b3a71e4a9f78d1e340-thin-fs-8bec6004cfe7482099c31a827d37ba7f: 0 2147483648 thin 253:15 1
stratis-1-private-7e18ddcd99244c92b926100a7498630b-flex-thinmeta: 0 32768 linear 253:9 0
stratis-1-private-7e18ddcd99244c92b926100a7498630b-flex-thinmeta: 32768 5799936 linear 253:9 1671168
stratis-1-private-7e18ddcd99244c92b926100a7498630b-flex-thinmetaspare: 0 32768 linear 253:9 32768
stratis-1-private-7e18ddcd99244c92b926100a7498630b-physical-originsub: 0 5860524032 linear 8:32 8192
stratis-1-private-8d86f3f68666490b99b0b5c6a5fc7986-flex-mdv: 0 32768 linear 253:3 1638400
stratis-1-private-8d86f3f68666490b99b0b5c6a5fc7986-flex-thindata: 0 1572864 linear 253:3 65536
stratis-1-private-8d86f3f68666490b99b0b5c6a5fc7986-flex-thindata: 1572864 15610820608 linear 253:3 17235968
stratis-1-private-8d86f3f68666490b99b0b5c6a5fc7986-flex-thinmeta: 0 32768 linear 253:3 0
stratis-1-private-8d86f3f68666490b99b0b5c6a5fc7986-flex-thinmeta: 32768 15564800 linear 253:3 1671168
stratis-1-private-8d86f3f68666490b99b0b5c6a5fc7986-physical-originsub: 0 7814028976 linear 8:16 8192
stratis-1-private-8d86f3f68666490b99b0b5c6a5fc7986-physical-originsub: 7814028976 7814027600 linear 8:0 8192
stratis-1-private-8d86f3f68666490b99b0b5c6a5fc7986-thinpool-pool: 0 15612393472 thin-pool 253:4 253:5 2048 7623239 2 skip_block_zeroing no_discard_passdown 
stratis-1-private-c70a5b86fa1545b3a71e4a9f78d1e340-flex-mdv: 0 32768 linear 253:12 1638400
stratis-1-private-c70a5b86fa1545b3a71e4a9f78d1e340-flex-thindata: 0 1572864 linear 253:12 65536
stratis-1-private-c70a5b86fa1545b3a71e4a9f78d1e340-flex-thindata: 1572864 248193024 linear 253:12 1867776
stratis-1-private-c70a5b86fa1545b3a71e4a9f78d1e340-flex-thinmeta: 0 32768 linear 253:12 0
stratis-1-private-c70a5b86fa1545b3a71e4a9f78d1e340-flex-thinmeta: 32768 196608 linear 253:12 1671168
stratis-1-private-c70a5b86fa1545b3a71e4a9f78d1e340-physical-originsub: 0 250060800 linear 8:48 8192
stratis-1-private-c70a5b86fa1545b3a71e4a9f78d1e340-thinpool-pool: 0 249765888 thin-pool 253:13 253:14 2048 121956 2 skip_block_zeroing no_discard_passdown 

@mulkieran
Member

@erickj

You are correct, sorry about the typo.

stratis-1-private-7e18ddcd99244c92b926100a7498630b-flex-thinmeta: 0 32768 linear 253:9 0
stratis-1-private-7e18ddcd99244c92b926100a7498630b-flex-thinmeta: 32768 5799936 linear 253:9 1671168

is the table for the thin meta device, and it certainly looks correctly set up, so the table itself is likely not what is causing thin_check to report an error.
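
As a cross-check, the two segments sum to the size lsblk reports for that device, assuming 512-byte sectors:

> echo "$(( (32768 + 5799936) * 512 / 1024 / 1024 )) MiB"
2848 MiB

which matches the ~2.8G thinmeta device in your lsblk output.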

@erickj
Author

erickj commented Jan 3, 2024

sorry about the typo.

Absolutely not a problem. I'm very grateful for the support and happy to help debug.

Here is the updated output for sdc:

$ sudo stratis-dumpmetadata /dev/sdc
Signature block: 

Header:
StaticHeader {
    blkdev_size: BlockdevSize(
        Sectors(5860533168),
    ),
    identifiers: StratisIdentifiers {
        pool_uuid: PoolUuid(
            7e18ddcd-9924-4c92-b926-100a7498630b,
        ),
        device_uuid: DevUuid(
            5335c8c3-df0e-4f29-a241-edc58f384e21,
        ),
    },
    mda_size: MDASize(
        Sectors(2032),
    ),
    reserved_size: ReservedSize(
        Sectors(6144),
    ),
    flags: 0,
    initialization_time: 2019-08-04T14:18:01Z,
}


BDA {
    header: StaticHeader {
        blkdev_size: BlockdevSize(
            Sectors(5860533168),
        ),
        identifiers: StratisIdentifiers {
            pool_uuid: PoolUuid(
                7e18ddcd-9924-4c92-b926-100a7498630b,
            ),
            device_uuid: DevUuid(
                5335c8c3-df0e-4f29-a241-edc58f384e21,
            ),
        },
        mda_size: MDASize(
            Sectors(2032),
        ),
        reserved_size: ReservedSize(
            Sectors(6144),
        ),
        flags: 0,
        initialization_time: 2019-08-04T14:18:01Z,
    },
    regions: MDARegions {
        region_size: MDARegionSize(
            Sectors(508),
        ),
        mda_headers: [
            Some(
                MDAHeader {
                    last_updated: 2022-09-01T22:10:23.213216632Z,
                    used: MetaDataSize(
                        Bytes(629),
                    ),
                    data_crc: 1972146702,
                },
            ),
            Some(
                MDAHeader {
                    last_updated: 2023-08-27T19:04:07.507231043Z,
                    used: MetaDataSize(
                        Bytes(649),
                    ),
                    data_crc: 321540343,
                },
            ),
        ],
    },
}

Pool metadata:
{
  "backstore": {
    "cap": {
      "allocs": [
        [
          0,
          5860524032
        ]
      ]
    },
    "data_tier": {
      "blockdev": {
        "allocs": [
          [
            {
              "length": 5860524032,
              "parent": "5335c8c3-df0e-4f29-a241-edc58f384e21",
              "start": 8192
            }
          ]
        ],
        "devs": [
          {
            "hardware_info": "0x50014ee2baf068c0",
            "uuid": "5335c8c3-df0e-4f29-a241-edc58f384e21"
          }
        ]
      }
    }
  },
  "flex_devs": {
    "meta_dev": [
      [
        1638400,
        32768
      ]
    ],
    "thin_data_dev": [
      [
        65536,
        1572864
      ],
      [
        7471104,
        5853052928
      ]
    ],
    "thin_meta_dev": [
      [
        0,
        32768
      ],
      [
        1671168,
        5799936
      ]
    ],
    "thin_meta_dev_spare": [
      [
        32768,
        32768
      ]
    ]
  },
  "name": "net.ejjohnson.home",
  "started": true,
  "thinpool_dev": {
    "data_block_size": 2048,
    "enable_overprov": true,
    "feature_args": [
      "skip_block_zeroing",
      "no_discard_passdown",
      "error_if_no_space"
    ],
    "fs_limit": 100
  }
}

@drckeefe
Member

drckeefe commented Jan 3, 2024

@erickj would you also be able to provide dmsetup status?

@erickj
Author

erickj commented Jan 3, 2024

@drckeefe thank you for taking a look. The following is the output:

$ sudo dmsetup status
fedora-root: 0 933232640 linear 
fedora-swap: 0 32997376 linear 
luks-5f8f4e1c-e8f2-4329-bc24-f56bf78cf515: 0 974669824 crypt 
stratis-1-8d86f3f68666490b99b0b5c6a5fc7986-thin-fs-e8071df3346a4753bda1524c84d037f9: 0 4294967296 thin 1729523712 4227903487
stratis-1-c70a5b86fa1545b3a71e4a9f78d1e340-thin-fs-8bec6004cfe7482099c31a827d37ba7f: 0 2147483648 thin 4214784 2083176447
stratis-1-private-7e18ddcd99244c92b926100a7498630b-flex-thinmeta: 0 32768 linear 
stratis-1-private-7e18ddcd99244c92b926100a7498630b-flex-thinmeta: 32768 5799936 linear 
stratis-1-private-7e18ddcd99244c92b926100a7498630b-flex-thinmetaspare: 0 32768 linear 
stratis-1-private-7e18ddcd99244c92b926100a7498630b-physical-originsub: 0 5860524032 linear 
stratis-1-private-8d86f3f68666490b99b0b5c6a5fc7986-flex-mdv: 0 32768 linear 
stratis-1-private-8d86f3f68666490b99b0b5c6a5fc7986-flex-thindata: 0 1572864 linear 
stratis-1-private-8d86f3f68666490b99b0b5c6a5fc7986-flex-thindata: 1572864 15610820608 linear 
stratis-1-private-8d86f3f68666490b99b0b5c6a5fc7986-flex-thinmeta: 0 32768 linear 
stratis-1-private-8d86f3f68666490b99b0b5c6a5fc7986-flex-thinmeta: 32768 15564800 linear 
stratis-1-private-8d86f3f68666490b99b0b5c6a5fc7986-physical-originsub: 0 7814028976 linear 
stratis-1-private-8d86f3f68666490b99b0b5c6a5fc7986-physical-originsub: 7814028976 7814027600 linear 
stratis-1-private-8d86f3f68666490b99b0b5c6a5fc7986-thinpool-pool: 0 15612393472 thin-pool 0 10842/1949696 844494/7623239 - rw no_discard_passdown queue_if_no_space - 1024 
stratis-1-private-c70a5b86fa1545b3a71e4a9f78d1e340-flex-mdv: 0 32768 linear 
stratis-1-private-c70a5b86fa1545b3a71e4a9f78d1e340-flex-thindata: 0 1572864 linear 
stratis-1-private-c70a5b86fa1545b3a71e4a9f78d1e340-flex-thindata: 1572864 248193024 linear 
stratis-1-private-c70a5b86fa1545b3a71e4a9f78d1e340-flex-thinmeta: 0 32768 linear 
stratis-1-private-c70a5b86fa1545b3a71e4a9f78d1e340-flex-thinmeta: 32768 196608 linear 
stratis-1-private-c70a5b86fa1545b3a71e4a9f78d1e340-physical-originsub: 0 250060800 linear 
stratis-1-private-c70a5b86fa1545b3a71e4a9f78d1e340-thinpool-pool: 0 249765888 thin-pool 0 2899/28672 2058/121956 - rw no_discard_passdown queue_if_no_space - 1024 

@bgurney-rh
Member

One thing that I realized is that the thin-pool device affected here won't be created, most likely because of running out of metadata space. But why did it run out?

Can you search for kernel events (i.e., journalctl -k) with the source "kernel: device-mapper" that may correspond to the devices and the time at which the out-of-space event occurred?

Here is a quick example of the log event on a test system, where I created a pool on a small test device, and it resulted in a "low water mark" event:

kernel: device-mapper: thin: 253:3: reached low water mark for data device: sending event.
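
For example, something along these lines should surface any such thin-pool events in the current boot (just one possible invocation; adjust the boot selector as needed):

> journalctl -k -g 'device-mapper: thin'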

@erickj
Author

erickj commented Jan 3, 2024

Thanks @bgurney-rh, I've copied the output for the last 4 boots (spanning back to the upgrade to F39). I see no logs similar to what you've called out:

$ journalctl -b -3 -k -g device-mapper
Jan 01 23:32:59 jupiter kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 01 23:32:59 jupiter kernel: device-mapper: uevent: version 1.0.3
Jan 01 23:32:59 jupiter kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 02 02:52:01 jupiter systemd[1]: Listening on dm-event.socket - Device-mapper event daemon FIFOs.
23:48:33 [erick@jupiter:~] 
$ journalctl -b -2 -k -g device-mapper
Jan 02 14:58:56 jupiter kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 02 14:58:56 jupiter kernel: device-mapper: uevent: version 1.0.3
Jan 02 14:58:56 jupiter kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 02 14:59:29 jupiter systemd[1]: Listening on dm-event.socket - Device-mapper event daemon FIFOs.
23:48:46 [erick@jupiter:~] 
$ journalctl -b -1 -k -g device-mapper
Jan 02 16:19:23 jupiter kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 02 16:19:23 jupiter kernel: device-mapper: uevent: version 1.0.3
Jan 02 16:19:23 jupiter kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 02 16:19:49 jupiter systemd[1]: Listening on dm-event.socket - Device-mapper event daemon FIFOs.
23:48:49 [erick@jupiter:~] 
$ journalctl -b 0 -k -g device-mapper
Jan 03 20:30:40 jupiter kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 03 20:30:40 jupiter kernel: device-mapper: uevent: version 1.0.3
Jan 03 20:30:40 jupiter kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 03 20:31:08 jupiter systemd[1]: Listening on dm-event.socket - Device-mapper event daemon FIFOs.

@mulkieran
Member

@erickj Thanks for the further information. I've confirmed that the pool is fully allocated, so there isn't any space to allocate more room for the thin meta spare.

Since we have confirmed that the meta device is set up properly, and we know the values used in the device-mapper table, you should try to see whether this is just a transient problem. To do that, you can stop the pool and start it again.

> stratis pool stop --uuid="7e18ddcd-9924-4c92-b926-100a7498630b"
<wait>
> stratis report
< if the pool is in the stopped pools list>
  > stratis pool start --uuid="7e18ddcd-9924-4c92-b926-100a7498630b"
  < Find out what is the status of the pool now >
<else>
  <await further instructions, may be necessary to tear it down and reset it up using dmsetup commands>

@erickj
Author

erickj commented Jan 4, 2024

Thanks @mulkieran. I've done as you suggested. Running stratis pool stop ... does indeed result in the pool appearing in the stopped pools list in stratis report.

Starting the pool again fails with the same thin_repair error I've reported above: stratisd failed to perform the operation that you requested. It returned the following information via the D-Bus: ERROR: An attempt to set up pool with UUID 7e18ddcd-9924-4c92-b926-100a7498630b from the assembled devices failed; Command failed: cmd: "/usr/sbin/thin_repair" "-i" "/dev/dm-10" "-o" "/dev/dm-11", exit reason: 64 stdout: stderr: output error: value error: out of metadata space

Stratis report shows the pool still in the partially constructed state.

Is there any documentation on how to set up the stratis pool again using dmsetup commands?

All output copied below:

$ sudo stratis pool stop --uuid="7e18ddcd-9924-4c92-b926-100a7498630b"
[sudo] password for erick: 
01:44:25 [erick@jupiter:~] 
$ stratis report
{
    "name_to_pool_uuid_map": {
        "net.ejjohnson.home": "7e18ddcd-9924-4c92-b926-100a7498630b"
    },
    "partially_constructed_pools": [],
    "path_to_ids_map": {
        "/dev/sdc": [
            "7e18ddcd-9924-4c92-b926-100a7498630b",
            "5335c8c3-df0e-4f29-a241-edc58f384e21"
        ]
    },
    "pools": [
        {
            "available_actions": "fully_operational",
            "blockdevs": {
                "cachedevs": [],
                "datadevs": [
                    {
                        "blksizes": "base: BLKSSSZGET: 512 bytes, BLKPBSZGET: 4096 bytes, crypt: None",
                        "in_use": true,
                        "path": "/dev/sdb",
                        "size": "7814037168 sectors",
                        "uuid": "5c9a2e88-cac0-4f38-a988-d72347894123"
                    },
                    {
                        "blksizes": "base: BLKSSSZGET: 512 bytes, BLKPBSZGET: 4096 bytes, crypt: None",
                        "in_use": true,
                        "path": "/dev/sda",
                        "size": "7814037168 sectors",
                        "uuid": "ec2d0e73-64fd-437a-ac4b-f5800248f44a"
                    }
                ]
            },
            "filesystems": [
                {
                    "name": "fs_raw",
                    "size": "4294967296 sectors",
                    "size_limit": "Not set",
                    "used": "885516140544 bytes",
                    "uuid": "e8071df3-346a-4753-bda1-524c84d037f9"
                }
            ],
            "fs_limit": 100,
            "name": "io.vos",
            "uuid": "8d86f3f6-8666-490b-99b0-b5c6a5fc7986"
        },
        {
            "available_actions": "fully_operational",
            "blockdevs": {
                "cachedevs": [],
                "datadevs": [
                    {
                        "blksizes": "base: BLKSSSZGET: 512 bytes, BLKPBSZGET: 512 bytes, crypt: None",
                        "in_use": true,
                        "path": "/dev/sdd",
                        "size": "250069680 sectors",
                        "uuid": "a3928e05-964a-4e65-8b76-5ea557ee6ff0"
                    }
                ]
            },
            "filesystems": [
                {
                    "name": "tmp",
                    "size": "2147483648 sectors",
                    "size_limit": "Not set",
                    "used": "2157969408 bytes",
                    "uuid": "8bec6004-cfe7-4820-99c3-1a827d37ba7f"
                }
            ],
            "fs_limit": 100,
            "name": "local.volatile",
            "uuid": "c70a5b86-fa15-45b3-a71e-4a9f78d1e340"
        }
    ],
    "stopped_pools": [
        {
            "devices": [
                {
                    "device_uuid": "5335c8c3-df0e-4f29-a241-edc58f384e21",
                    "devnode": "/dev/sdc",
                    "major": 8,
                    "minor": 32,
                    "pool_uuid": "7e18ddcd-9924-4c92-b926-100a7498630b"
                }
            ],
            "pool_uuid": "7e18ddcd-9924-4c92-b926-100a7498630b"
        }
    ]
}
01:44:31 [erick@jupiter:~] 
$ sudo stratis pool start --uuid="7e18ddcd-9924-4c92-b926-100a7498630b"
Execution failed:
stratisd failed to perform the operation that you requested. It returned the following information via the D-Bus: ERROR: An attempt to set up pool with UUID 7e18ddcd-9924-4c92-b926-100a7498630b from the assembled devices failed; Command failed: cmd: "/usr/sbin/thin_repair" "-i" "/dev/dm-10" "-o" "/dev/dm-11", exit reason: 64 stdout:  stderr: output error: value error: out of metadata space
. 

01:45:56 [erick@jupiter:~] 
$ sudo stratis report
{
    "name_to_pool_uuid_map": {
        "net.ejjohnson.home": "7e18ddcd-9924-4c92-b926-100a7498630b"
    },
    "partially_constructed_pools": [
        {
            "devices": [
                {
                    "device_uuid": "5335c8c3-df0e-4f29-a241-edc58f384e21",
                    "devnode": "/dev/sdc",
                    "major": 8,
                    "minor": 32,
                    "pool_uuid": "7e18ddcd-9924-4c92-b926-100a7498630b"
                }
            ],
            "pool_uuid": "7e18ddcd-9924-4c92-b926-100a7498630b"
        }
    ],
    "path_to_ids_map": {
        "/dev/sdc": [
            "7e18ddcd-9924-4c92-b926-100a7498630b",
            "5335c8c3-df0e-4f29-a241-edc58f384e21"
        ]
    },
    "pools": [
        {
            "available_actions": "fully_operational",
            "blockdevs": {
                "cachedevs": [],
                "datadevs": [
                    {
                        "blksizes": "base: BLKSSSZGET: 512 bytes, BLKPBSZGET: 512 bytes, crypt: None",
                        "in_use": true,
                        "path": "/dev/sdd",
                        "size": "250069680 sectors",
                        "uuid": "a3928e05-964a-4e65-8b76-5ea557ee6ff0"
                    }
                ]
            },
            "filesystems": [
                {
                    "name": "tmp",
                    "size": "2147483648 sectors",
                    "size_limit": "Not set",
                    "used": "2157969408 bytes",
                    "uuid": "8bec6004-cfe7-4820-99c3-1a827d37ba7f"
                }
            ],
            "fs_limit": 100,
            "name": "local.volatile",
            "uuid": "c70a5b86-fa15-45b3-a71e-4a9f78d1e340"
        },
        {
            "available_actions": "fully_operational",
            "blockdevs": {
                "cachedevs": [],
                "datadevs": [
                    {
                        "blksizes": "base: BLKSSSZGET: 512 bytes, BLKPBSZGET: 4096 bytes, crypt: None",
                        "in_use": true,
                        "path": "/dev/sdb",
                        "size": "7814037168 sectors",
                        "uuid": "5c9a2e88-cac0-4f38-a988-d72347894123"
                    },
                    {
                        "blksizes": "base: BLKSSSZGET: 512 bytes, BLKPBSZGET: 4096 bytes, crypt: None",
                        "in_use": true,
                        "path": "/dev/sda",
                        "size": "7814037168 sectors",
                        "uuid": "ec2d0e73-64fd-437a-ac4b-f5800248f44a"
                    }
                ]
            },
            "filesystems": [
                {
                    "name": "fs_raw",
                    "size": "4294967296 sectors",
                    "size_limit": "Not set",
                    "used": "885516140544 bytes",
                    "uuid": "e8071df3-346a-4753-bda1-524c84d037f9"
                }
            ],
            "fs_limit": 100,
            "name": "io.vos",
            "uuid": "8d86f3f6-8666-490b-99b0-b5c6a5fc7986"
        }
    ],
    "stopped_pools": []
}

@mulkieran
Member

@erickj That it is stoppable is good. You could set up the pool with dmsetup commands, and that is generally what I want to try. But I don't want to be precipitate and ruin the thin-meta device by moving too fast.
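
For the record, "setting up with dmsetup commands" essentially means recreating each device from the tables we captured above, starting with the physical-originsub device and then the flex devices on top of it. Purely as an illustration of the shape of it (not to be run now; the 253:N numbers would have to match whatever device number the originsub actually receives when it is recreated):

> dmsetup create stratis-1-private-7e18ddcd99244c92b926100a7498630b-physical-originsub \
    --table '0 5860524032 linear 8:32 8192'
> dmsetup create stratis-1-private-7e18ddcd99244c92b926100a7498630b-flex-thinmeta <<'EOF'
0 32768 linear 253:9 0
32768 5799936 linear 253:9 1671168
EOF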

Please do the following one more time:

  • Post the output of lsblk.
  • Post the output of dmsetup table.
  • Stop your pool.

I know this seems redundant, but I want to make absolutely sure the device numbers match up properly between the output of dmsetup and lsblk before any operations are performed.

@erickj
Author

erickj commented Jan 4, 2024

No worries about the redundancy; I'm still very appreciative of the help.

It was unclear to me whether you wanted to see the lsblk and dmsetup table output before or after stopping the pool, so I've copied both below.

Order of output:

  • lsblk
  • dmsetup table
  • stopped the pool
  • dmsetup table
  • lsblk

output:

$ lsblk
NAME                                                                                        MAJ:MIN RM   SIZE RO TYPE    MOUNTPOINTS
sda                                                                                           8:0    0   3.6T  0 disk    
└─stratis-1-private-8d86f3f68666490b99b0b5c6a5fc7986-physical-originsub                     253:12   0   7.3T  0 stratis 
  ├─stratis-1-private-8d86f3f68666490b99b0b5c6a5fc7986-flex-thinmeta                        253:13   0   7.4G  0 stratis 
  │ └─stratis-1-private-8d86f3f68666490b99b0b5c6a5fc7986-thinpool-pool                      253:15   0   7.3T  0 stratis 
  │   └─stratis-1-8d86f3f68666490b99b0b5c6a5fc7986-thin-fs-e8071df3346a4753bda1524c84d037f9 253:17   0     2T  0 stratis /mnt/io.vos_raw
  ├─stratis-1-private-8d86f3f68666490b99b0b5c6a5fc7986-flex-thindata                        253:14   0   7.3T  0 stratis 
  │ └─stratis-1-private-8d86f3f68666490b99b0b5c6a5fc7986-thinpool-pool                      253:15   0   7.3T  0 stratis 
  │   └─stratis-1-8d86f3f68666490b99b0b5c6a5fc7986-thin-fs-e8071df3346a4753bda1524c84d037f9 253:17   0     2T  0 stratis /mnt/io.vos_raw
  └─stratis-1-private-8d86f3f68666490b99b0b5c6a5fc7986-flex-mdv                             253:16   0    16M  0 stratis 
sdb                                                                                           8:16   0   3.6T  0 disk    
└─stratis-1-private-8d86f3f68666490b99b0b5c6a5fc7986-physical-originsub                     253:12   0   7.3T  0 stratis 
  ├─stratis-1-private-8d86f3f68666490b99b0b5c6a5fc7986-flex-thinmeta                        253:13   0   7.4G  0 stratis 
  │ └─stratis-1-private-8d86f3f68666490b99b0b5c6a5fc7986-thinpool-pool                      253:15   0   7.3T  0 stratis 
  │   └─stratis-1-8d86f3f68666490b99b0b5c6a5fc7986-thin-fs-e8071df3346a4753bda1524c84d037f9 253:17   0     2T  0 stratis /mnt/io.vos_raw
  ├─stratis-1-private-8d86f3f68666490b99b0b5c6a5fc7986-flex-thindata                        253:14   0   7.3T  0 stratis 
  │ └─stratis-1-private-8d86f3f68666490b99b0b5c6a5fc7986-thinpool-pool                      253:15   0   7.3T  0 stratis 
  │   └─stratis-1-8d86f3f68666490b99b0b5c6a5fc7986-thin-fs-e8071df3346a4753bda1524c84d037f9 253:17   0     2T  0 stratis /mnt/io.vos_raw
  └─stratis-1-private-8d86f3f68666490b99b0b5c6a5fc7986-flex-mdv                             253:16   0    16M  0 stratis 
sdc                                                                                           8:32   0   2.7T  0 disk    
└─stratis-1-private-7e18ddcd99244c92b926100a7498630b-physical-originsub                     253:9    0   2.7T  0 stratis 
  ├─stratis-1-private-7e18ddcd99244c92b926100a7498630b-flex-thinmeta                        253:10   0   2.8G  0 stratis 
  └─stratis-1-private-7e18ddcd99244c92b926100a7498630b-flex-thinmetaspare                   253:11   0    16M  0 stratis 
sdd                                                                                           8:48   0 119.2G  0 disk    
└─stratis-1-private-c70a5b86fa1545b3a71e4a9f78d1e340-physical-originsub                     253:3    0 119.2G  0 stratis 
  ├─stratis-1-private-c70a5b86fa1545b3a71e4a9f78d1e340-flex-thinmeta                        253:4    0   112M  0 stratis 
  │ └─stratis-1-private-c70a5b86fa1545b3a71e4a9f78d1e340-thinpool-pool                      253:6    0 119.1G  0 stratis 
  │   └─stratis-1-c70a5b86fa1545b3a71e4a9f78d1e340-thin-fs-8bec6004cfe7482099c31a827d37ba7f 253:8    0     1T  0 stratis /opt/volatile/tmp
  ├─stratis-1-private-c70a5b86fa1545b3a71e4a9f78d1e340-flex-thindata                        253:5    0 119.1G  0 stratis 
  │ └─stratis-1-private-c70a5b86fa1545b3a71e4a9f78d1e340-thinpool-pool                      253:6    0 119.1G  0 stratis 
  │   └─stratis-1-c70a5b86fa1545b3a71e4a9f78d1e340-thin-fs-8bec6004cfe7482099c31a827d37ba7f 253:8    0     1T  0 stratis /opt/volatile/tmp
  └─stratis-1-private-c70a5b86fa1545b3a71e4a9f78d1e340-flex-mdv                             253:7    0    16M  0 stratis 
sde                                                                                           8:64   1  57.3G  0 disk    /run/media/erick/Sandisk-Ultra
zram0                                                                                       252:0    0     8G  0 disk    [SWAP]
nvme0n1                                                                                     259:0    0 465.8G  0 disk    
├─nvme0n1p1                                                                                 259:1    0     1G  0 part    /boot
└─nvme0n1p2                                                                                 259:2    0 464.8G  0 part    
  └─luks-5f8f4e1c-e8f2-4329-bc24-f56bf78cf515                                               253:0    0 464.8G  0 crypt   
    ├─fedora-root                                                                           253:1    0   445G  0 lvm     /
    └─fedora-swap                                                                           253:2    0  15.7G  0 lvm     [SWAP]
02:34:47 [erick@jupiter:~] 
$ sudo dmsetup table
[sudo] password for erick: 
fedora-root: 0 933232640 linear 253:0 32999424
fedora-swap: 0 32997376 linear 253:0 2048
luks-5f8f4e1c-e8f2-4329-bc24-f56bf78cf515: 0 974669824 crypt aes-xts-plain64 00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000 0 259:2 4096 1 allow_discards
stratis-1-8d86f3f68666490b99b0b5c6a5fc7986-thin-fs-e8071df3346a4753bda1524c84d037f9: 0 4294967296 thin 253:15 0
stratis-1-c70a5b86fa1545b3a71e4a9f78d1e340-thin-fs-8bec6004cfe7482099c31a827d37ba7f: 0 2147483648 thin 253:6 1
stratis-1-private-7e18ddcd99244c92b926100a7498630b-flex-thinmeta: 0 32768 linear 253:9 0
stratis-1-private-7e18ddcd99244c92b926100a7498630b-flex-thinmeta: 32768 5799936 linear 253:9 1671168
stratis-1-private-7e18ddcd99244c92b926100a7498630b-flex-thinmetaspare: 0 32768 linear 253:9 32768
stratis-1-private-7e18ddcd99244c92b926100a7498630b-physical-originsub: 0 5860524032 linear 8:32 8192
stratis-1-private-8d86f3f68666490b99b0b5c6a5fc7986-flex-mdv: 0 32768 linear 253:12 1638400
stratis-1-private-8d86f3f68666490b99b0b5c6a5fc7986-flex-thindata: 0 1572864 linear 253:12 65536
stratis-1-private-8d86f3f68666490b99b0b5c6a5fc7986-flex-thindata: 1572864 15610820608 linear 253:12 17235968
stratis-1-private-8d86f3f68666490b99b0b5c6a5fc7986-flex-thinmeta: 0 32768 linear 253:12 0
stratis-1-private-8d86f3f68666490b99b0b5c6a5fc7986-flex-thinmeta: 32768 15564800 linear 253:12 1671168
stratis-1-private-8d86f3f68666490b99b0b5c6a5fc7986-physical-originsub: 0 7814028976 linear 8:16 8192
stratis-1-private-8d86f3f68666490b99b0b5c6a5fc7986-physical-originsub: 7814028976 7814027600 linear 8:0 8192
stratis-1-private-8d86f3f68666490b99b0b5c6a5fc7986-thinpool-pool: 0 15612393472 thin-pool 253:13 253:14 2048 7623239 2 skip_block_zeroing no_discard_passdown 
stratis-1-private-c70a5b86fa1545b3a71e4a9f78d1e340-flex-mdv: 0 32768 linear 253:3 1638400
stratis-1-private-c70a5b86fa1545b3a71e4a9f78d1e340-flex-thindata: 0 1572864 linear 253:3 65536
stratis-1-private-c70a5b86fa1545b3a71e4a9f78d1e340-flex-thindata: 1572864 248193024 linear 253:3 1867776
stratis-1-private-c70a5b86fa1545b3a71e4a9f78d1e340-flex-thinmeta: 0 32768 linear 253:3 0
stratis-1-private-c70a5b86fa1545b3a71e4a9f78d1e340-flex-thinmeta: 32768 196608 linear 253:3 1671168
stratis-1-private-c70a5b86fa1545b3a71e4a9f78d1e340-physical-originsub: 0 250060800 linear 8:48 8192
stratis-1-private-c70a5b86fa1545b3a71e4a9f78d1e340-thinpool-pool: 0 249765888 thin-pool 253:4 253:5 2048 121956 2 skip_block_zeroing no_discard_passdown 
02:34:58 [erick@jupiter:~] 
$ sudo stratis pool stop --uuid="7e18ddcd-9924-4c92-b926-100a7498630b"
02:35:06 [erick@jupiter:~] 
$ sudo dmsetup table
fedora-root: 0 933232640 linear 253:0 32999424
fedora-swap: 0 32997376 linear 253:0 2048
luks-5f8f4e1c-e8f2-4329-bc24-f56bf78cf515: 0 974669824 crypt aes-xts-plain64 00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000 0 259:2 4096 1 allow_discards
stratis-1-8d86f3f68666490b99b0b5c6a5fc7986-thin-fs-e8071df3346a4753bda1524c84d037f9: 0 4294967296 thin 253:15 0
stratis-1-c70a5b86fa1545b3a71e4a9f78d1e340-thin-fs-8bec6004cfe7482099c31a827d37ba7f: 0 2147483648 thin 253:6 1
stratis-1-private-8d86f3f68666490b99b0b5c6a5fc7986-flex-mdv: 0 32768 linear 253:12 1638400
stratis-1-private-8d86f3f68666490b99b0b5c6a5fc7986-flex-thindata: 0 1572864 linear 253:12 65536
stratis-1-private-8d86f3f68666490b99b0b5c6a5fc7986-flex-thindata: 1572864 15610820608 linear 253:12 17235968
stratis-1-private-8d86f3f68666490b99b0b5c6a5fc7986-flex-thinmeta: 0 32768 linear 253:12 0
stratis-1-private-8d86f3f68666490b99b0b5c6a5fc7986-flex-thinmeta: 32768 15564800 linear 253:12 1671168
stratis-1-private-8d86f3f68666490b99b0b5c6a5fc7986-physical-originsub: 0 7814028976 linear 8:16 8192
stratis-1-private-8d86f3f68666490b99b0b5c6a5fc7986-physical-originsub: 7814028976 7814027600 linear 8:0 8192
stratis-1-private-8d86f3f68666490b99b0b5c6a5fc7986-thinpool-pool: 0 15612393472 thin-pool 253:13 253:14 2048 7623239 2 skip_block_zeroing no_discard_passdown 
stratis-1-private-c70a5b86fa1545b3a71e4a9f78d1e340-flex-mdv: 0 32768 linear 253:3 1638400
stratis-1-private-c70a5b86fa1545b3a71e4a9f78d1e340-flex-thindata: 0 1572864 linear 253:3 65536
stratis-1-private-c70a5b86fa1545b3a71e4a9f78d1e340-flex-thindata: 1572864 248193024 linear 253:3 1867776
stratis-1-private-c70a5b86fa1545b3a71e4a9f78d1e340-flex-thinmeta: 0 32768 linear 253:3 0
stratis-1-private-c70a5b86fa1545b3a71e4a9f78d1e340-flex-thinmeta: 32768 196608 linear 253:3 1671168
stratis-1-private-c70a5b86fa1545b3a71e4a9f78d1e340-physical-originsub: 0 250060800 linear 8:48 8192
stratis-1-private-c70a5b86fa1545b3a71e4a9f78d1e340-thinpool-pool: 0 249765888 thin-pool 253:4 253:5 2048 121956 2 skip_block_zeroing no_discard_passdown 
02:35:13 [erick@jupiter:~] 
$ lsblk
NAME                                                                                        MAJ:MIN RM   SIZE RO TYPE    MOUNTPOINTS
sda                                                                                           8:0    0   3.6T  0 disk    
└─stratis-1-private-8d86f3f68666490b99b0b5c6a5fc7986-physical-originsub                     253:12   0   7.3T  0 stratis 
  ├─stratis-1-private-8d86f3f68666490b99b0b5c6a5fc7986-flex-thinmeta                        253:13   0   7.4G  0 stratis 
  │ └─stratis-1-private-8d86f3f68666490b99b0b5c6a5fc7986-thinpool-pool                      253:15   0   7.3T  0 stratis 
  │   └─stratis-1-8d86f3f68666490b99b0b5c6a5fc7986-thin-fs-e8071df3346a4753bda1524c84d037f9 253:17   0     2T  0 stratis /mnt/io.vos_raw
  ├─stratis-1-private-8d86f3f68666490b99b0b5c6a5fc7986-flex-thindata                        253:14   0   7.3T  0 stratis 
  │ └─stratis-1-private-8d86f3f68666490b99b0b5c6a5fc7986-thinpool-pool                      253:15   0   7.3T  0 stratis 
  │   └─stratis-1-8d86f3f68666490b99b0b5c6a5fc7986-thin-fs-e8071df3346a4753bda1524c84d037f9 253:17   0     2T  0 stratis /mnt/io.vos_raw
  └─stratis-1-private-8d86f3f68666490b99b0b5c6a5fc7986-flex-mdv                             253:16   0    16M  0 stratis 
sdb                                                                                           8:16   0   3.6T  0 disk    
└─stratis-1-private-8d86f3f68666490b99b0b5c6a5fc7986-physical-originsub                     253:12   0   7.3T  0 stratis 
  ├─stratis-1-private-8d86f3f68666490b99b0b5c6a5fc7986-flex-thinmeta                        253:13   0   7.4G  0 stratis 
  │ └─stratis-1-private-8d86f3f68666490b99b0b5c6a5fc7986-thinpool-pool                      253:15   0   7.3T  0 stratis 
  │   └─stratis-1-8d86f3f68666490b99b0b5c6a5fc7986-thin-fs-e8071df3346a4753bda1524c84d037f9 253:17   0     2T  0 stratis /mnt/io.vos_raw
  ├─stratis-1-private-8d86f3f68666490b99b0b5c6a5fc7986-flex-thindata                        253:14   0   7.3T  0 stratis 
  │ └─stratis-1-private-8d86f3f68666490b99b0b5c6a5fc7986-thinpool-pool                      253:15   0   7.3T  0 stratis 
  │   └─stratis-1-8d86f3f68666490b99b0b5c6a5fc7986-thin-fs-e8071df3346a4753bda1524c84d037f9 253:17   0     2T  0 stratis /mnt/io.vos_raw
  └─stratis-1-private-8d86f3f68666490b99b0b5c6a5fc7986-flex-mdv                             253:16   0    16M  0 stratis 
sdc                                                                                           8:32   0   2.7T  0 disk    
sdd                                                                                           8:48   0 119.2G  0 disk    
└─stratis-1-private-c70a5b86fa1545b3a71e4a9f78d1e340-physical-originsub                     253:3    0 119.2G  0 stratis 
  ├─stratis-1-private-c70a5b86fa1545b3a71e4a9f78d1e340-flex-thinmeta                        253:4    0   112M  0 stratis 
  │ └─stratis-1-private-c70a5b86fa1545b3a71e4a9f78d1e340-thinpool-pool                      253:6    0 119.1G  0 stratis 
  │   └─stratis-1-c70a5b86fa1545b3a71e4a9f78d1e340-thin-fs-8bec6004cfe7482099c31a827d37ba7f 253:8    0     1T  0 stratis /opt/volatile/tmp
  ├─stratis-1-private-c70a5b86fa1545b3a71e4a9f78d1e340-flex-thindata                        253:5    0 119.1G  0 stratis 
  │ └─stratis-1-private-c70a5b86fa1545b3a71e4a9f78d1e340-thinpool-pool                      253:6    0 119.1G  0 stratis 
  │   └─stratis-1-c70a5b86fa1545b3a71e4a9f78d1e340-thin-fs-8bec6004cfe7482099c31a827d37ba7f 253:8    0     1T  0 stratis /opt/volatile/tmp
  └─stratis-1-private-c70a5b86fa1545b3a71e4a9f78d1e340-flex-mdv                             253:7    0    16M  0 stratis 
sde                                                                                           8:64   1  57.3G  0 disk    /run/media/erick/Sandisk-Ultra
zram0                                                                                       252:0    0     8G  0 disk    [SWAP]
nvme0n1                                                                                     259:0    0 465.8G  0 disk    
├─nvme0n1p1                                                                                 259:1    0     1G  0 part    /boot
└─nvme0n1p2                                                                                 259:2    0 464.8G  0 part    
  └─luks-5f8f4e1c-e8f2-4329-bc24-f56bf78cf515                                               253:0    0 464.8G  0 crypt   
    ├─fedora-root                                                                           253:1    0   445G  0 lvm     /
    └─fedora-swap                                                                           253:2    0  15.7G  0 lvm     [SWAP]

@mulkieran
Copy link
Member

@erickj Thanks. That's perfect. Everything checks out exactly, right up until the thin_repair failure. It will require many fewer steps on your side if we provide you with a stratisd rpm on COPR containing a patched version of stratisd that exposes the thin_check output when you use stratisd to start the stopped pool. Once we've gathered that information, it may be possible for us to give you another patched version of stratisd that would allow you to set up this particular pool. Please let me know if you need instructions for doing a COPR installation, or if you have any other concerns. We'll let you know when the COPR rpm is ready.
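
For reference, a COPR installation of a test build like this usually amounts to a couple of dnf commands; the owner/project name below is only a placeholder, and we'll point you at the real repo when the rpm is ready:

$ sudo dnf copr enable <copr-owner>/<stratisd-test-build>   # placeholder project name, not the real repo
$ sudo dnf downgrade stratisd                               # or dnf upgrade stratisd, depending on how the test build is versioned
$ sudo systemctl restart stratisd                           # make sure the patched daemon is the one running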

@mulkieran
Copy link
Member

@erickj
Copy link
Author

erickj commented Jan 4, 2024

Thanks @mulkieran - before I install the patched build from COPR, I just have a question about restoring the system state to the mainline package. Will dnf history undo X be the appropriate command to revert to the mainline stratisd packages after we're done with the patched versions, or will there be a more involved process to revert?

@erickj
Copy link
Author

erickj commented Jan 4, 2024

I went ahead and installed the package; I assume reverting to the previous version won't be a problem.

I've stopped/started the pool again and see the following warnings now in the journalctl logs:

Jan 04 21:08:13 jupiter stratisd[1916]: [2024-01-04T20:08:13Z WARN  stratisd::engine::strat_engine::thinpool::thinpool] Thin check failed: Command failed: cmd: "/usr/sbin/thin_check" "/dev/dm-16", exit reason: 64 stdout: TRANSACTION_ID=0
Jan 04 21:08:13 jupiter stratisd[1916]:     METADATA_FREE_BLOCKS=712163
Jan 04 21:08:13 jupiter stratisd[1916]:      stderr: Checking thin metadata
Jan 04 21:08:13 jupiter stratisd[1916]:     device details tree
Jan 04 21:08:13 jupiter stratisd[1916]:     mapping tree
Jan 04 21:08:13 jupiter stratisd[1916]:     data space map
Jan 04 21:08:13 jupiter stratisd[1916]:     metadata space map
Jan 04 21:08:13 jupiter stratisd[1916]:     metadata space map: block out of bounds
Jan 04 21:08:13 jupiter stratisd[1916]:     
Jan 04 21:09:07 jupiter stratisd[1916]: [2024-01-04T20:09:07Z INFO  stratisd::engine::strat_engine::liminal::liminal] Attempt to set up pool failed, but it may be possible to set up the pool later, if the situation changes: An attempt to set up pool with UUID 7e18ddcd-9924-4c92-b926-100a7498630b from the assembled devices failed; Command failed: cmd: "/usr/sbin/thin_repair" "-i" "/dev/dm-16" "-o" "/dev/dm-17", exit reason: 64 stdout:  stderr: output error: value error: out of metadata space
Jan 04 21:09:07 jupiter stratisd[1916]:  

If there are further steps to enable more detailed debug logging, or any other details required, please let me know.

@mulkieran
Copy link
Member

mulkieran commented Jan 4, 2024

@erickj The COPR package is masquerading as a pre-release of 3.6.3, so I believe that if you update the package you will get the regular released package back. But it is quite acceptable to keep running with this test package: except for the change that we're taking advantage of, it is indistinguishable in its behavior from the regularly released package.
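
Concretely, getting back to the stock Fedora package afterwards should amount to something like the following sketch (the COPR project name is again a placeholder):

$ sudo dnf copr disable <copr-owner>/<stratisd-test-build>   # placeholder project name
$ sudo dnf upgrade stratisd                                  # pulls the regular fc39 build back in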

@mulkieran
Copy link
Member

@erickj And, if that succeeds, could you run the command again with the xml format specified

> thin_dump --format xml /dev/dm-16

and store that information in a safe place for further processing?

@erickj
Copy link
Author

erickj commented Jan 5, 2024

I've added 2 files to the attached tarball:

  • sudo thin_dump --format human_readable /dev/dm-16 > thin_dump.dm-16.txt
  • sudo thin_dump --format xml /dev/dm-16 > thin_dump.dm-16.xml

thin_dump.tar.gz

@erickj
Copy link
Author

erickj commented Jan 5, 2024

I've done some archaeology on thin_check

@mulkieran thanks for this.

For some background on the update path here, which may or may not prove useful: I had previously been running Fedora 37, about 1 month past its EOL.

On the same evening, Jan 1, I updated to Fedora 38 and then to Fedora 39. I quickly verified the F38 upgrade (this includes the journalctl logs above, which show the successful set up of the pool), and then immediately upgraded to F39 that same evening.

The dnf history of the 2 back-to-back system-upgrade events below shows the stratisd & device-mapper-persistent-data package version update history:

$ dnf history info 438 | grep -E "device-mapper-persistent-data|stratisd"
    Upgrade       device-mapper-persistent-data-1.0.6-2.fc39.x86_64                   @fedora
    Upgraded      device-mapper-persistent-data-0.9.0-10.fc38.x86_64                  @@System
    Upgrade       stratisd-3.6.3-1.fc39.x86_64                                        @updates
    Upgraded      stratisd-3.6.3-1.fc38.x86_64                                        @@System
02:17:09 [erick@jupiter:/tmp] 
$ dnf history info 437 | grep -E "device-mapper-persistent-data|stratisd"
    Upgrade       device-mapper-persistent-data-0.9.0-10.fc38.x86_64                   @fedora
    Upgraded      device-mapper-persistent-data-0.9.0-8.fc37.x86_64                    @@System
    Upgrade       stratisd-3.6.3-1.fc38.x86_64                                         @updates
    Upgraded      stratisd-3.6.2-1.fc37.x86_64                                         @@System

I've double-checked the journalctl logs and verified that the first instance of the log message indicating a failure with thin_check/thin_repair is on the first boot immediately after upgrading to F39. The boot logs on F38 and prior all indicate the pool is set up successfully.

@mulkieran
Copy link
Member

@erickj Thanks that is quite helpful.

@mulkieran
Copy link
Member

@erickj We are continuing to work on this. My current plan is that we provide you with the ability to update your Stratis pool-level metadata to carve out space for a thin meta spare. If we can demonstrate that this works, it will be the easiest path for you, as the pool ought to come up and, if thin_check ever turns up a problem again, stratisd will be able to run thin_repair on that pool with an appropriately sized target device. But we need to put in some more work so that we know we are able to fix up the thin pool metadata even if that approach does not work.

@erickj
Copy link
Author

erickj commented Jan 6, 2024

@mulkieran thank you for the continued communication on this issue. It's very much appreciated.

@mulkieran
Copy link
Member

@erickj Can you run thin_metadata_pack on the device? That will allow the structure of the metadata to be inspected more closely, to try to understand the cause of the thin_check error.

> thin_metadata_pack --input /dev/mapper/stratis-1-private-7e18ddcd99244c92b926100a7498630b-flex-thinmeta --output thinmeta.pack

And thank you for your continued patience.

@erickj
Copy link
Author

erickj commented Jan 7, 2024

thank you for the follow up @mulkieran

thinmeta.pack.tar.gz
The file is attached (I needed to tar.gz it to upload a GitHub-supported file type).

Additionally, as an aside: I see something odd with the thin_metadata_pack command. --input is documented in the man page as you've given it in your example, but I needed to use the short flag form -i:

$ sudo thin_metadata_pack --input /dev/mapper/stratis-1-private-7e18ddcd99244c92b926100a7498630b-flex-thinmeta --output thinmeta.pack
[sudo] password for erick: 
error: unexpected argument '--input' found

Usage: thin_metadata_pack [OPTIONS] -i <DEV> -o <FILE>

For more information, try '--help'.

@mulkieran
Copy link
Member

Thanks for uploading the data. I filed a PR upstream to address the missing long options; thanks for mentioning that.

@mulkieran
Copy link
Member

@erickj It seems like my initial plan of adjusting the Stratis metadata may not work. There is a recommended step that you can proceed with now, which ought to gain some ground toward the goal of getting the pool set up. The step is to run thin_repair, taking the set-up metadata device as your input and choosing an external device not on the pool as your output. The external device should be at least as large as the thin meta device. We expect this step to succeed. When that step is done, you should run thin_check on the external device. You should also expect thin_check to succeed without errors on the repaired metadata. Please let me know how that goes.
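
As a concrete sketch, with a spare disk or partition standing in as the external device (/dev/sdX1 below is only a placeholder for whatever device you choose), the two steps would be roughly:

$ sudo thin_repair -i /dev/mapper/stratis-1-private-7e18ddcd99244c92b926100a7498630b-flex-thinmeta -o /dev/sdX1
$ sudo thin_check /dev/sdX1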

@mingnus
Copy link

mingnus commented Jan 8, 2024

Hi @erickj,

I've checked the thinmeta you provided. Basically there's no problem with the metadata, except for some non-zero bytes in the unused region of the index block, which is unexpected, so the new thin_check v1.0.6 treats it as an error. It's unusual, although not harmful, so I would like to know how it happened:

  • Which kernel/distro did you use to create the affected pool?
  • Were the two other pools created at the same time? It seems like these two are not affected, so I assume the creation times are different.

@mingnus
Copy link

mingnus commented Jan 8, 2024

@erickj It seems like my initial plan of adjusting the Stratis metadata may not work. There is a step that you can proceed with now that is recommended and ought to gain some ground toward the goal of getting the pool set up. The step is to run thin_repair taking the set up meta data device as your input and choosing an external device not on the pool as your output. The external device should be at least as large as the thin meta device.

It's still okay to adopt the original plan, since only the first half of the thinmeta is currently in use, so thin_repair will work in this case (I've confirmed locally). Furthermore, as we have the thinmeta backup in both packed and xml form, we can even manually rebuild the thinmeta onto the adjusted volumes, without needing to run thin_repair through stratisd.
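
For example, a manual rebuild from the xml dump you already saved would look roughly like this (the output path is a placeholder for the adjusted thinmeta volume):

$ sudo thin_restore -i thin_dump.dm-16.xml -o /dev/mapper/<adjusted-thinmeta-volume>
$ sudo thin_check /dev/mapper/<adjusted-thinmeta-volume>   # sanity-check the rebuilt metadata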

@mulkieran
Copy link
Member

@erickj It seems like my initial plan of adjusting the Stratis metadata may not work. There is a step that you can proceed with now that is recommended and ought to gain some ground toward the goal of getting the pool set up. The step is to run thin_repair taking the set up meta data device as your input and choosing an external device not on the pool as your output. The external device should be at least as large as the thin meta device.

It's still okay to adopt the original plan since only the first-half of thinmeta is being used currently, so thin_repair will work in this case (i've confirmed locally). Furthermore, as we have the thinmeta backup in both packed and xml form, we can even manually rebuild the thinmeta onto the adjusted volumes, without the need of running thin_repair through stratisd.

@mingnus thanks for the further information!

@erickj
Copy link
Author

erickj commented Jan 8, 2024

@mingnus thanks for the questions; I hope the answers here provide some benefit:

Which kernel/distro did you use to create the affected pool?

The pool was created ~5 years ago on the same workstation it currently runs on. At the time, the installed OS was Fedora ~29 (or whatever the mainline Fedora server version was in January 2019). The pool has been upgraded from Fedora ~29 to Fedora 39 through each major Fedora release over the last 5 years without issue, until the F39 upgrade.

Were the two other pools created at the same time? It seems like these two are not affected, so i assume the creation time is different

The io.vos pool was created on the exact same day (back to back) as the pool that's currently having issues. It has gone through the exact same upgrade path as the problematic pool.

edit:

The following dnf history shows the exact versions the initial pool creation was made with: stratisd-1.0.2-1 and device-mapper-persistent-data-0.7.6-2 on F29. The pool creation date would have been in Feb 2019, coincident with the stratis package install.

If there are any other packages of interest please let me know.

$ dnf history info 1 | grep -E "device-mapper-persistent-data"
    Install device-mapper-persistent-data-0.7.6-2.fc29.x86_64                 @anaconda
20:57:30 [erick@jupiter:/tmp] 
$ dnf history info 28
Transaction ID : 28
Begin time     : Sat 16 Feb 2019 07:31:44 PM CET
Begin rpmdb    : 1774:bc384c81736c6944adad86bae8eb80a089987f52
End time       : Sat 16 Feb 2019 07:31:46 PM CET (2 seconds)
End rpmdb      : 1784:ebcca33b236a505eb60e5a70f5a7992d8aeb90f1
User           : Erick <erick>
Return-Code    : Success
Releasever     : 29
Command Line   : install stratisd stratis-cli
Comment        : 
Packages Altered:
    Install stratis-cli-1.0.2-1.module_2606+e3b6bb92.x86_64     @updates-modular
    Install stratisd-1.0.2-1.module_2603+e1359ef8.x86_64        @updates-modular

@erickj
Copy link
Author

erickj commented Jan 8, 2024

@erickj It seems like my initial plan of adjusting the Stratis metadata may not work. There is a step that you can proceed with now that is recommended and ought to gain some ground toward the goal of getting the pool set up. The step is to run thin_repair taking the set up meta data device as your input and choosing an external device not on the pool as your output. The external device should be at least as large as the thin meta device. We expect this step to succeed. When that step is done, you should run thin_check on the external device. You should also expect thin_check to succeed without errors on the repaired metadata. Plz let me know how that goes.

@mulkieran thank you very much for the follow up. Unfortunately I'm not quite sure what the first step to take here is, with regard to this comment and the follow up from @mingnus above (re: adopting the original plan). Would you mind further clarifying the action to take?

@mulkieran
Copy link
Member

mulkieran commented Jan 8, 2024

IGNORE THIS COMMENT

You have two choices:

  1. I will provide you with a stratisd COPR release that just skips calling thin_check. We would then expect the pool to come up fine. You would then manually use another thin tool to restore the fixed metadata onto the existing thin device.

Pros:

  • Your pool will likely be up very soon.

Cons:

  • You will have to do the manual restore of the thin metadata to the thin meta device. If you don't do this, and then you revert to using a regular version of stratisd, your pool will fail to come up the next time it has to be started.
  • Your pool will not have a spare thin meta device. If there is a similar failure sometime in the future, the situation will be the same.

Basically, this is a somewhat fragile solution, because your pool ends up not quite right, and refinements of the solution are also a bit fragile.

  2. We provide you with a tool to overwrite the Stratis pool-level metadata and the JSON of what to write. You do this while the pool is down, and stratisd will bring up the pool by doing a thin_repair onto the thin meta spare.

Pros:

  • If it works, not much for you to do and your pool is in a permanently improved state.

Cons:

  • We need to complete and test the metadata overwrite.
  • The amount of changed behavior of stratisd is larger.

Let me know what you think and feel free to ask me for any clarification or make a request. There are a bunch of potential refinements, but that's the basics.

@mulkieran
Copy link
Member

@erickj I'm sorry, somehow I didn't properly tag you on the previous comment. But I came up with a better plan in the interim, so you should ignore the previous comment.

At present the pool is partially set up and the thin meta device is available. The error that caused the thin_check failure is innocuous. So, you should be able to proceed directly to fix the metadata on the thin meta device. The procedure should be straightforward:

  1. Follow the procedure from the thinp tools README[1], changing the things that need to be changed:
thin_dump --repair /dev/mapper/my_metadata > repaired.xml
thin_restore -i repaired.xml -o /dev/mapper/my_metadata

I doubt that the --repair option will do anything, due to the nature of what caused thin_check to fail, but you might as well use it. You can compare it to the unrepaired dump that you got previously and will probably find that they are identical. thin_restore should leave out the strange non-zero values.

  2. Stop the pool.
  3. Start the pool. Because the thin metadata has been restored, the pool should come up properly. (See the sketch of the full sequence just below.)
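
Adapted to this pool, and assuming /dev/dm-16 is still the device that thin_check ran against in the log above, the whole sequence would look roughly like:

$ sudo thin_dump --repair /dev/dm-16 > repaired.xml
$ sudo thin_restore -i repaired.xml -o /dev/dm-16
$ sudo stratis pool stop --uuid=7e18ddcd-9924-4c92-b926-100a7498630b
$ sudo stratis pool start --uuid=7e18ddcd-9924-4c92-b926-100a7498630b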

There is a possibility that the thin_restore operation will not work, because the thin meta device appears to be in use. If that is the case, then I will provide you with another version of stratisd that will just bring up the meta device. Let me know how the above steps go first, though.

If the above succeeds, then your pool will be relatively stable for a while. But there is still the problem of the too-small thin metadata spare, which could cause you a problem in the future. We would fix that by overwriting your Stratis pool-level metadata. However, working that out will take longer to write and test, so you may prefer to do the steps above right now.

[1] https://github.com/jthornber/thin-provisioning-tools

@mingnus
Copy link

mingnus commented Jan 9, 2024

Which kernel/distro did you use to create the affected pool?

The pool was created ~5 years ago on the same workstation as it currently runs. At the time the installed OS was Fedora ~29 (or whatever the mainline Fedora server version was in January 2019). The pool has been upgraded from Fedora ~29 to Fedora 39 through each major Fedora release over the last 5 years without issue until the F39 upgrade.

Thank you for the detailed information!

Were the two other pools created at the same time? It seems like these two are not affected, so i assume the creation time is different

The io.vos pool was created on the exact same day (back to back) as the pool that's currently having issues. It has gone through the exact same upgrade path as the problematic pool.

That's outside my expectations, since the kernel driver must have zero-initialized every block, but now only one pool is having issues.

the following dnf history shows the exact versions the initial pool creation was made with, stratisd-1.0.2-1 and device-mapper-persistent-data-0.7.6-2 on F29. The install date would have been in Feb 2019, coincident with the stratis package install.

If there are any other packages of interest please let me know.

The formal releases of device-mapper-persistent-data are not expected to create non-zero bytes, and there's no sign that the affected thinmeta was previously rebuilt by thin_repair or thin_restore, so I would like to rule out userland tools, unless you had tried any informal builds of your own.

I think it could be some sort of in-memory data degradation, and since it's not harmful, maybe I'll allow those junk bytes in future releases.

@mingnus
Copy link

mingnus commented Jan 9, 2024

@erickj I'm sorry, somehow I didn't properly tag you on the previous comment. But I came up with a better plan in the interim, so you should ignore the previous comment.

At present the pool is partially setup and the thin meta device is available. The error that caused the thin_check failure is innocuous. So, you should be able to proceed directly to fix the meta data on the thin meta device. The procedure should be straightforward:

(... skipped)

If the above succeeds then your pool will be relatively stable for a while. But there is still the problem of the too small thin metadata spare which could cause you a problem in future. We would fix that by overwriting your Stratis pool-level metadata. However, working out that will take longer to write and test, and so you may prefer to do the steps above right now.

Agreed that the above is the simplest & easiest way. Considering that the small metadata spare could cause inconveniences in the future, I would personally prefer overwriting the Stratis pool-level metadata to enlarge the metadata spare volume. The affected pool is just 250069680 sectors (if I'm not mistaken), so reducing the size of the thin metadata to 50% (1.4 GB) is still sufficient. Anyway, we're open to both options.

Applying the --repair option to thin_dump is not necessary in this case where thin_check doesn't report missing mappings. The missing mappings error looks like this:

TRANSACTION_ID=1
METADATA_FREE_BLOCKS=2479957
Checking thin metadata [                                 ] Remaining 0s, mapping tree 
Thin device 1 has 2 errors and is missing 1234 mappings, while expected 56789
Check of mappings failed

Without missing mappings, the --repair option just extends the running time but doesn't change the output. However, if you haven't examined the thinmeta with thin_check, then running thin_dump with --repair ensures the consistency of the output.

@mulkieran
Copy link
Member

@erickj Could you let us know what your status is?

@erickj
Copy link
Author

erickj commented Jan 10, 2024

@mulkieran apologies for the late reply; I was unavailable yesterday.

The good news is that your suggestions above seem to have worked.

$ sudo thin_dump /dev/dm-16 > dm-16.xml
$ sudo thin_dump --repair /dev/dm-16 > repaired.dm-16.xml
$ diff dm-16.xml repaired.dm-16.xml                        # produces no diff as you thought
$ sudo thin_restore -i repaired.dm-16.xml -o /dev/dm-16
$ sudo stratis pool stop --uuid="7e18ddcd-9924-4c92-b926-100a7498630b"
$ sudo stratis pool start --uuid="7e18ddcd-9924-4c92-b926-100a7498630b"
$ stratis pool list --uuid 7e18ddcd-9924-4c92-b926-100a7498630b
UUID: 7e18ddcd-9924-4c92-b926-100a7498630b
Name: net.ejjohnson.home
Alerts: 1
     WS001: All devices fully allocated
Actions Allowed: fully_operational
Cache: No
Filesystem Limit: 100
Allows Overprovisioning: Yes
Key Description: unencrypted
Clevis Configuration: unencrypted
Space Usage:
Fully Allocated: Yes
    Size: 2.73 TiB
    Allocated: 2.73 TiB
    Used: 2.50 TiB

Remounting the filesystem has succeeded and the drive is accessible again. Thank you very very much for the help with this issue 🙏

Just a few remaining questions:

  1. Is there any other data (or anything else) that I can provide which would be of use to you to prevent this issue from affecting other users?
  2. Is it safe to update again to the current mainline stratisd version, 3.6.3-1.fc39?
  3. If the issue does reappear (since your comments mentioned that stability may still be an issue until patches can be released), are the previously analyzed journalctl logs sufficient to diagnose the identical issue, and can I expect the same steps to repair the pool again? Or would it be best to reopen a ticket to diagnose any future instability?

@mingnus re:

so I would like to rule out the possibilities of userland tools, unless you had tried any informal built by your own.

No, AFAIR no other tools have been used to manipulate the filesystem

@mulkieran
Copy link
Member

@erickj I'm pleased to hear that your pool is back up.

Regarding question (2) I believe that it will be safe for you to reinstall the current version of stratisd.

Regarding question (1), there are really three issues that affected you, in sequence. The first was that there were stray non-zero bytes in a particular region of the thin metadata, on this pool only. The second was that the new version of thin_check detected these stray bytes, which it had not previously done. The third was that, when stratisd ran thin_repair, the target device on your pool was too small. I cannot guess why those stray bytes appeared, and it may be very hard to discover that. Regarding whether thin_check should report an error on this condition, I am uncertain, and @mingnus is best able to make that decision. Regarding the third problem, that the thin meta spare device is too small to be usable as a target: we are working on developing a remediation that will be safe and well tested, and also a way of identifying this problem for any other users.

I expect we will close this issue in about a week, assuming your pool continues to do well. I've opened a new issue[1] for the remediation task.

Thanks for your patience and clear communication around all of this.

[1] stratis-storage/project#683

@mingnus
Copy link

mingnus commented Jan 11, 2024

@erickj @mulkieran yes, I'll remove the constraint from thin_check.

mingnus added a commit to mingnus/thin-provisioning-tools that referenced this issue Jan 12, 2024
Previously, we assumed unused entries in the index block were all
zero-initialized, leading to issues while loading the block with
unexpected bytes and a valid checksum [1]. The updated approach loads
index entries based on actual size information from superblock and
therefore improves compatibility.

[1] stratis-storage/stratisd#3520
@refi64
Copy link

refi64 commented Nov 3, 2024

So I seem to have hit a very similar issue on stratis 3.7.3, specifically because I apparently exhausted all my free space (again!) by trying to upgrade to F41 with only 7GiB of space left in the pool. However, I couldn't run any repair tools manually because I also got the Device or resource busy (os error 16) error; in particular, this was before dbus was started (so no normal stratisd), and stratisd-min seemed to move on to setting up the pool after the error, but stratis-fstab-setup kept failing anyway.

The fix was to boot into the initramfs and do a stratis-min pool start & stratis-min pool stop manually. It's not clear to me why the thin_repair run automatically during boot wasn't doing anything; maybe it just took too long?
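
For anyone else landing here, roughly what that amounted to is sketched below; the exact kernel argument and the stratis-min argument syntax are assumptions on my part and may vary by setup and stratisd version:

  • Add rd.break (or rd.break=pre-mount) to the kernel command line to drop into the dracut emergency shell.
  • In that shell, run something like:

stratis-min pool start <pool-uuid>   # identifier syntax is an assumption; check stratis-min pool start --help
stratis-min pool stop <pool-uuid>
exit                                 # continue the normal boot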

(This isn't entirely relevant to the original issue, but this is basically the only Google search result for partially_constructed_pools, so I figured it would be worth noting here for future reference.)
