I needed to upgrade a drive and so pulled it out. Installed a new one and now I am trying to format it for use. First step was to remove the Windows partitions (and I even ran dd of /dev/zero over the drive) and then lay down the VDO volume:

~]# vdo create --name=odin_vdo_bay1 --device=/dev/disk/by-id/ata-SAMSUNG_SSD_PM851_mSATA_512GB_S1EWNYAF609306 --activate=enabled --compression=enabled --deduplication=enabled --vdoLogicalSize=750G --writePolicy=async --verbose
    pvcreate --config devices/scan_lvs=1 -qq --test /dev/disk/by-id/ata-SAMSUNG_SSD_PM851_mSATA_512GB_S1EWNYAF609306
vdo: ERROR - Device /dev/disk/by-id/ata-SAMSUNG_SSD_PM851_mSATA_512GB_S1EWNYAF609306 excluded by a filter

Lots of hits on Google about this "excluded by a filter." message, but no answers that make sense, or answers to the posts. I attempted many clean-up attempts and even gave a reboot a try.

I wondered if I could at least format it with a regular file system, but it won't do that either:

~]# fdisk /dev/sdb
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Device does not contain a recognized partition table
Building a new DOS disklabel with disk identifier 0xbcff3e20.

Disk /dev/sdb: 512.1 GB, 512110190592 bytes, 1000215216 sectors
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Partition 1 of type Linux and of size 477 GiB is set

The partition table has been altered!
Calling ioctl() to re-read partition table.

Making the file system then fails with:

/dev/sdb1 is apparently in use by the system; will not make a filesystem

~]# lsblk -a
└─centos_odin-home 253:7 0 17.2G 0 lvm

~]# wipefs /dev/sdb
0x1fe dos

~]# wipefs -a /dev/sdb
wipefs: error: /dev/sdb: probing initialization failed: Device or resource busy

~]# wipefs -a /dev/sdb --force
/dev/sdb: 2 bytes were erased at offset 0x000001fe (dos): 55 aa
/dev/sdb: calling ioctl to re-read partition table:
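For what it's worth, the "dos" signature wipefs keeps reporting is just the two-byte MBR boot marker (0x55 0xaa) at offset 0x1fe. A quick way to verify whether it is really gone is to read those two bytes back directly; here is a sketch against a scratch file rather than the real disk (on the actual system the target would be the disk, e.g. /dev/sdb):

```shell
# Scratch-file demo: plant the MBR boot signature that wipefs reports as
# "dos", then read it back as hex. A temp file stands in for the disk.
img=$(mktemp)
truncate -s 1M "$img"
# \125\252 is octal for 0x55 0xaa, the two-byte DOS/MBR boot marker:
printf '\125\252' | dd of="$img" bs=1 seek=510 conv=notrunc 2>/dev/null
# Read the two bytes at offset 510 (0x1fe) back as hex:
sig=$(od -An -tx1 -j 510 -N 2 "$img" | tr -d ' ')
echo "$sig"   # prints: 55aa
rm -f "$img"
```

If this still prints 55aa after a wipe, the signature survived (or something rewrote it).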
I don't know what to make of this. I don't know if this is a VDO issue or some subordinate step VDO uses when it creates a new volume. I have a posting to the CentOS Community forum here:

I have installed a cluster with 3x Dell R730 servers using Debian Bookworm's cephadm. Ceph is currently at 16.2.7+ds-4+b2, which is the same version string as for cephadm. The cluster is up and mostly healthy, but warns that it doesn't have any OSDs.

I took a few SSDs off other clusters, wiped them, and tried to add them. This is how they appear to the first node (names and serial numbers shortened for clarity and privacy):

HOST PATH     TYPE DEVICE ID                    SIZE  AVAILABLE REJECT REASONS
S1   /dev/sda ssd  Samsung_SSD_860_EVO_1TB_S32T 1000G           locked
S1   /dev/sdb ssd  Samsung_SSD_860_EVO_1TB_S37M 1000G           locked
S2   /dev/sda ssd  Samsung_SSD_860_EVO_1TB_S35E 1000G           locked
S2   /dev/sdb ssd  Samsung_SSD_860_EVO_1TB_S35A 1000G           locked
S3   /dev/sda ssd  Samsung_SSD_860_EVO_1TB_S36V 1000G           locked
S3   /dev/sdb ssd  Samsung_SSD_860_EVO_1TB_S33M 1000G           locked

I used sgdisk -Z, and ceph orch device zap s1 /dev/sdb --force. Even rebooting all of the nodes at the same time made no difference.

Running ceph orch device zap tries to do lvm zap --destroy /dev/sdb, which in turn calls wipefs. This spits out some output, and this seems to indicate that the devices are actually, well, locked:

/usr/bin/podman: stderr -> Zapping: /dev/sdb
/usr/bin/podman: stderr stderr: wipefs: error: /dev/sdb: probing initialization failed: Device or resource busy
/usr/bin/podman: stderr -> failed to wipefs device, will try again to workaround probable race condition

The fact that the devices are locked led me to believe that somehow the management was competing with itself trying to bring them up as OSDs, so I ran ceph orch apply osd --all-available-devices --unmanaged=true (and then rebooted everything again). Right after rebooting (after setting unmanaged=true), the disks show up thus:

S1 /dev/sda ssd Samsung_SSD_860_EVO_1TB_S32T 1000G Insufficient space (<10 extents) on vgs, LVM detected, locked

Now they don't bomb when zapping, but they remain in that state and can't be added.
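The "failed to wipefs device, will try again to workaround probable race condition" line suggests the tooling already retries around the busy device. As a generic sketch of that retry-on-busy pattern (the attempt count and delay here are made-up placeholder values, not what ceph-volume actually uses):

```shell
# Generic retry loop: keep re-running a command until it succeeds, giving
# up after a fixed number of attempts. 5 attempts / 0.2s delay are
# arbitrary placeholders, not ceph-volume's real values.
retry() {
    n=0
    until "$@"; do
        n=$((n + 1))
        [ "$n" -ge 5 ] && return 1   # give up after 5 failed attempts
        sleep 0.2                    # let udev/LVM release the device
    done
}

retry true && echo "succeeded"            # a command that works passes through
retry false || echo "gave up after retries"
```

The point is that a transient "Device or resource busy" can clear on its own once whatever scanned the device lets go; a persistent one, as in this case, means something (LVM, device-mapper, a leftover OSD) still holds the disk open.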