Nov 8 01:18:13.047685 kernel: Linux version 6.6.113-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Nov 7 22:45:04 -00 2025
Nov 8 01:18:13.047699 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=480a02cf7f2001774aa495c3e719d4173e968e6839485a7d2b207ef2facca472
Nov 8 01:18:13.047707 kernel: BIOS-provided physical RAM map:
Nov 8 01:18:13.047711 kernel: BIOS-e820: [mem 0x0000000000000000-0x00000000000997ff] usable
Nov 8 01:18:13.047715 kernel: BIOS-e820: [mem 0x0000000000099800-0x000000000009ffff] reserved
Nov 8 01:18:13.047719 kernel: BIOS-e820: [mem 0x00000000000e0000-0x00000000000fffff] reserved
Nov 8 01:18:13.047724 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003fffffff] usable
Nov 8 01:18:13.047728 kernel: BIOS-e820: [mem 0x0000000040000000-0x00000000403fffff] reserved
Nov 8 01:18:13.047732 kernel: BIOS-e820: [mem 0x0000000040400000-0x0000000081b2cfff] usable
Nov 8 01:18:13.047736 kernel: BIOS-e820: [mem 0x0000000081b2d000-0x0000000081b2dfff] ACPI NVS
Nov 8 01:18:13.047740 kernel: BIOS-e820: [mem 0x0000000081b2e000-0x0000000081b2efff] reserved
Nov 8 01:18:13.047745 kernel: BIOS-e820: [mem 0x0000000081b2f000-0x000000008afccfff] usable
Nov 8 01:18:13.047750 kernel: BIOS-e820: [mem 0x000000008afcd000-0x000000008c0b1fff] reserved
Nov 8 01:18:13.047754 kernel: BIOS-e820: [mem 0x000000008c0b2000-0x000000008c23afff] usable
Nov 8 01:18:13.047759 kernel: BIOS-e820: [mem 0x000000008c23b000-0x000000008c66cfff] ACPI NVS
Nov 8 01:18:13.047764 kernel: BIOS-e820: [mem 0x000000008c66d000-0x000000008eefefff] reserved
Nov 8 01:18:13.047770 kernel: BIOS-e820: [mem 0x000000008eeff000-0x000000008eefffff] usable
Nov 8 01:18:13.047775 kernel: BIOS-e820: [mem 0x000000008ef00000-0x000000008fffffff] reserved
Nov 8 01:18:13.047779 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Nov 8 01:18:13.047784 kernel: BIOS-e820: [mem 0x00000000fe000000-0x00000000fe010fff] reserved
Nov 8 01:18:13.047789 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec00fff] reserved
Nov 8 01:18:13.047793 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved
Nov 8 01:18:13.047798 kernel: BIOS-e820: [mem 0x00000000ff000000-0x00000000ffffffff] reserved
Nov 8 01:18:13.047802 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000086effffff] usable
Nov 8 01:18:13.047807 kernel: NX (Execute Disable) protection: active
Nov 8 01:18:13.047812 kernel: APIC: Static calls initialized
Nov 8 01:18:13.047816 kernel: SMBIOS 3.2.1 present.
Nov 8 01:18:13.047821 kernel: DMI: Supermicro SYS-5019C-MR-PH004/X11SCM-F, BIOS 1.9 09/16/2022
Nov 8 01:18:13.047827 kernel: tsc: Detected 3400.000 MHz processor
Nov 8 01:18:13.047832 kernel: tsc: Detected 3399.906 MHz TSC
Nov 8 01:18:13.047836 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Nov 8 01:18:13.047841 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Nov 8 01:18:13.047846 kernel: last_pfn = 0x86f000 max_arch_pfn = 0x400000000
Nov 8 01:18:13.047851 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 23), built from 10 variable MTRRs
Nov 8 01:18:13.047856 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Nov 8 01:18:13.047861 kernel: last_pfn = 0x8ef00 max_arch_pfn = 0x400000000
Nov 8 01:18:13.047866 kernel: Using GB pages for direct mapping
Nov 8 01:18:13.047871 kernel: ACPI: Early table checksum verification disabled
Nov 8 01:18:13.047876 kernel: ACPI: RSDP 0x00000000000F05B0 000024 (v02 SUPERM)
Nov 8 01:18:13.047881 kernel: ACPI: XSDT 0x000000008C54E0C8 00010C (v01 SUPERM SUPERM 01072009 AMI 00010013)
Nov 8 01:18:13.047888 kernel: ACPI: FACP 0x000000008C58A670 000114 (v06 01072009 AMI 00010013)
Nov 8 01:18:13.047893 kernel: ACPI: DSDT 0x000000008C54E268 03C404 (v02 SUPERM SMCI--MB 01072009 INTL 20160527)
Nov 8 01:18:13.047898 kernel: ACPI: FACS 0x000000008C66CF80 000040
Nov 8 01:18:13.047904 kernel: ACPI: APIC 0x000000008C58A788 00012C (v04 01072009 AMI 00010013)
Nov 8 01:18:13.047910 kernel: ACPI: FPDT 0x000000008C58A8B8 000044 (v01 01072009 AMI 00010013)
Nov 8 01:18:13.047915 kernel: ACPI: FIDT 0x000000008C58A900 00009C (v01 SUPERM SMCI--MB 01072009 AMI 00010013)
Nov 8 01:18:13.047920 kernel: ACPI: MCFG 0x000000008C58A9A0 00003C (v01 SUPERM SMCI--MB 01072009 MSFT 00000097)
Nov 8 01:18:13.047925 kernel: ACPI: SPMI 0x000000008C58A9E0 000041 (v05 SUPERM SMCI--MB 00000000 AMI. 00000000)
Nov 8 01:18:13.047930 kernel: ACPI: SSDT 0x000000008C58AA28 001B1C (v02 CpuRef CpuSsdt 00003000 INTL 20160527)
Nov 8 01:18:13.047935 kernel: ACPI: SSDT 0x000000008C58C548 0031C6 (v02 SaSsdt SaSsdt 00003000 INTL 20160527)
Nov 8 01:18:13.047940 kernel: ACPI: SSDT 0x000000008C58F710 00232B (v02 PegSsd PegSsdt 00001000 INTL 20160527)
Nov 8 01:18:13.047946 kernel: ACPI: HPET 0x000000008C591A40 000038 (v01 SUPERM SMCI--MB 00000002 01000013)
Nov 8 01:18:13.047951 kernel: ACPI: SSDT 0x000000008C591A78 000FAE (v02 SUPERM Ther_Rvp 00001000 INTL 20160527)
Nov 8 01:18:13.047956 kernel: ACPI: SSDT 0x000000008C592A28 0008F4 (v02 INTEL xh_mossb 00000000 INTL 20160527)
Nov 8 01:18:13.047961 kernel: ACPI: UEFI 0x000000008C593320 000042 (v01 SUPERM SMCI--MB 00000002 01000013)
Nov 8 01:18:13.047966 kernel: ACPI: LPIT 0x000000008C593368 000094 (v01 SUPERM SMCI--MB 00000002 01000013)
Nov 8 01:18:13.047971 kernel: ACPI: SSDT 0x000000008C593400 0027DE (v02 SUPERM PtidDevc 00001000 INTL 20160527)
Nov 8 01:18:13.047977 kernel: ACPI: SSDT 0x000000008C595BE0 0014E2 (v02 SUPERM TbtTypeC 00000000 INTL 20160527)
Nov 8 01:18:13.047982 kernel: ACPI: DBGP 0x000000008C5970C8 000034 (v01 SUPERM SMCI--MB 00000002 01000013)
Nov 8 01:18:13.047987 kernel: ACPI: DBG2 0x000000008C597100 000054 (v00 SUPERM SMCI--MB 00000002 01000013)
Nov 8 01:18:13.047993 kernel: ACPI: SSDT 0x000000008C597158 001B67 (v02 SUPERM UsbCTabl 00001000 INTL 20160527)
Nov 8 01:18:13.047998 kernel: ACPI: DMAR 0x000000008C598CC0 000070 (v01 INTEL EDK2 00000002 01000013)
Nov 8 01:18:13.048003 kernel: ACPI: SSDT 0x000000008C598D30 000144 (v02 Intel ADebTabl 00001000 INTL 20160527)
Nov 8 01:18:13.048008 kernel: ACPI: TPM2 0x000000008C598E78 000034 (v04 SUPERM SMCI--MB 00000001 AMI 00000000)
Nov 8 01:18:13.048013 kernel: ACPI: SSDT 0x000000008C598EB0 000D8F (v02 INTEL SpsNm 00000002 INTL 20160527)
Nov 8 01:18:13.048018 kernel: ACPI: WSMT 0x000000008C599C40 000028 (v01 SUPERM 01072009 AMI 00010013)
Nov 8 01:18:13.048023 kernel: ACPI: EINJ 0x000000008C599C68 000130 (v01 AMI AMI.EINJ 00000000 AMI. 00000000)
Nov 8 01:18:13.048028 kernel: ACPI: ERST 0x000000008C599D98 000230 (v01 AMIER AMI.ERST 00000000 AMI. 00000000)
Nov 8 01:18:13.048034 kernel: ACPI: BERT 0x000000008C599FC8 000030 (v01 AMI AMI.BERT 00000000 AMI. 00000000)
Nov 8 01:18:13.048039 kernel: ACPI: HEST 0x000000008C599FF8 00027C (v01 AMI AMI.HEST 00000000 AMI. 00000000)
Nov 8 01:18:13.048044 kernel: ACPI: SSDT 0x000000008C59A278 000162 (v01 SUPERM SMCCDN 00000000 INTL 20181221)
Nov 8 01:18:13.048049 kernel: ACPI: Reserving FACP table memory at [mem 0x8c58a670-0x8c58a783]
Nov 8 01:18:13.048055 kernel: ACPI: Reserving DSDT table memory at [mem 0x8c54e268-0x8c58a66b]
Nov 8 01:18:13.048060 kernel: ACPI: Reserving FACS table memory at [mem 0x8c66cf80-0x8c66cfbf]
Nov 8 01:18:13.048065 kernel: ACPI: Reserving APIC table memory at [mem 0x8c58a788-0x8c58a8b3]
Nov 8 01:18:13.048070 kernel: ACPI: Reserving FPDT table memory at [mem 0x8c58a8b8-0x8c58a8fb]
Nov 8 01:18:13.048075 kernel: ACPI: Reserving FIDT table memory at [mem 0x8c58a900-0x8c58a99b]
Nov 8 01:18:13.048081 kernel: ACPI: Reserving MCFG table memory at [mem 0x8c58a9a0-0x8c58a9db]
Nov 8 01:18:13.048086 kernel: ACPI: Reserving SPMI table memory at [mem 0x8c58a9e0-0x8c58aa20]
Nov 8 01:18:13.048091 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58aa28-0x8c58c543]
Nov 8 01:18:13.048096 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58c548-0x8c58f70d]
Nov 8 01:18:13.048101 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58f710-0x8c591a3a]
Nov 8 01:18:13.048106 kernel: ACPI: Reserving HPET table memory at [mem 0x8c591a40-0x8c591a77]
Nov 8 01:18:13.048111 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c591a78-0x8c592a25]
Nov 8 01:18:13.048116 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c592a28-0x8c59331b]
Nov 8 01:18:13.048121 kernel: ACPI: Reserving UEFI table memory at [mem 0x8c593320-0x8c593361]
Nov 8 01:18:13.048127 kernel: ACPI: Reserving LPIT table memory at [mem 0x8c593368-0x8c5933fb]
Nov 8 01:18:13.048132 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c593400-0x8c595bdd]
Nov 8 01:18:13.048137 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c595be0-0x8c5970c1]
Nov 8 01:18:13.048142 kernel: ACPI: Reserving DBGP table memory at [mem 0x8c5970c8-0x8c5970fb]
Nov 8 01:18:13.048147 kernel: ACPI: Reserving DBG2 table memory at [mem 0x8c597100-0x8c597153]
Nov 8 01:18:13.048152 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c597158-0x8c598cbe]
Nov 8 01:18:13.048157 kernel: ACPI: Reserving DMAR table memory at [mem 0x8c598cc0-0x8c598d2f]
Nov 8 01:18:13.048162 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c598d30-0x8c598e73]
Nov 8 01:18:13.048167 kernel: ACPI: Reserving TPM2 table memory at [mem 0x8c598e78-0x8c598eab]
Nov 8 01:18:13.048173 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c598eb0-0x8c599c3e]
Nov 8 01:18:13.048178 kernel: ACPI: Reserving WSMT table memory at [mem 0x8c599c40-0x8c599c67]
Nov 8 01:18:13.048183 kernel: ACPI: Reserving EINJ table memory at [mem 0x8c599c68-0x8c599d97]
Nov 8 01:18:13.048188 kernel: ACPI: Reserving ERST table memory at [mem 0x8c599d98-0x8c599fc7]
Nov 8 01:18:13.048193 kernel: ACPI: Reserving BERT table memory at [mem 0x8c599fc8-0x8c599ff7]
Nov 8 01:18:13.048198 kernel: ACPI: Reserving HEST table memory at [mem 0x8c599ff8-0x8c59a273]
Nov 8 01:18:13.048203 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c59a278-0x8c59a3d9]
Nov 8 01:18:13.048208 kernel: No NUMA configuration found
Nov 8 01:18:13.048213 kernel: Faking a node at [mem 0x0000000000000000-0x000000086effffff]
Nov 8 01:18:13.048218 kernel: NODE_DATA(0) allocated [mem 0x86effa000-0x86effffff]
Nov 8 01:18:13.048224 kernel: Zone ranges:
Nov 8 01:18:13.048230 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Nov 8 01:18:13.048235 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Nov 8 01:18:13.048240 kernel: Normal [mem 0x0000000100000000-0x000000086effffff]
Nov 8 01:18:13.048245 kernel: Movable zone start for each node
Nov 8 01:18:13.048250 kernel: Early memory node ranges
Nov 8 01:18:13.048255 kernel: node 0: [mem 0x0000000000001000-0x0000000000098fff]
Nov 8 01:18:13.048260 kernel: node 0: [mem 0x0000000000100000-0x000000003fffffff]
Nov 8 01:18:13.048265 kernel: node 0: [mem 0x0000000040400000-0x0000000081b2cfff]
Nov 8 01:18:13.048271 kernel: node 0: [mem 0x0000000081b2f000-0x000000008afccfff]
Nov 8 01:18:13.048276 kernel: node 0: [mem 0x000000008c0b2000-0x000000008c23afff]
Nov 8 01:18:13.048281 kernel: node 0: [mem 0x000000008eeff000-0x000000008eefffff]
Nov 8 01:18:13.048289 kernel: node 0: [mem 0x0000000100000000-0x000000086effffff]
Nov 8 01:18:13.048299 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000086effffff]
Nov 8 01:18:13.048304 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 8 01:18:13.048310 kernel: On node 0, zone DMA: 103 pages in unavailable ranges
Nov 8 01:18:13.048315 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges
Nov 8 01:18:13.048322 kernel: On node 0, zone DMA32: 2 pages in unavailable ranges
Nov 8 01:18:13.048327 kernel: On node 0, zone DMA32: 4325 pages in unavailable ranges
Nov 8 01:18:13.048333 kernel: On node 0, zone DMA32: 11460 pages in unavailable ranges
Nov 8 01:18:13.048338 kernel: On node 0, zone Normal: 4352 pages in unavailable ranges
Nov 8 01:18:13.048344 kernel: On node 0, zone Normal: 4096 pages in unavailable ranges
Nov 8 01:18:13.048349 kernel: ACPI: PM-Timer IO Port: 0x1808
Nov 8 01:18:13.048354 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1])
Nov 8 01:18:13.048360 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1])
Nov 8 01:18:13.048365 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1])
Nov 8 01:18:13.048372 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1])
Nov 8 01:18:13.048377 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1])
Nov 8 01:18:13.048383 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1])
Nov 8 01:18:13.048388 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1])
Nov 8 01:18:13.048393 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1])
Nov 8 01:18:13.048399 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1])
Nov 8 01:18:13.048404 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1])
Nov 8 01:18:13.048409 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1])
Nov 8 01:18:13.048415 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1])
Nov 8 01:18:13.048421 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1])
Nov 8 01:18:13.048427 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1])
Nov 8 01:18:13.048432 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1])
Nov 8 01:18:13.048437 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1])
Nov 8 01:18:13.048443 kernel: IOAPIC[0]: apic_id 2, version 32, address 0xfec00000, GSI 0-119
Nov 8 01:18:13.048448 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Nov 8 01:18:13.048454 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Nov 8 01:18:13.048459 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Nov 8 01:18:13.048465 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Nov 8 01:18:13.048470 kernel: TSC deadline timer available
Nov 8 01:18:13.048477 kernel: smpboot: Allowing 16 CPUs, 0 hotplug CPUs
Nov 8 01:18:13.048482 kernel: [mem 0x90000000-0xdfffffff] available for PCI devices
Nov 8 01:18:13.048488 kernel: Booting paravirtualized kernel on bare hardware
Nov 8 01:18:13.048493 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Nov 8 01:18:13.048499 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1
Nov 8 01:18:13.048504 kernel: percpu: Embedded 58 pages/cpu s196712 r8192 d32664 u262144
Nov 8 01:18:13.048510 kernel: pcpu-alloc: s196712 r8192 d32664 u262144 alloc=1*2097152
Nov 8 01:18:13.048515 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15
Nov 8 01:18:13.048522 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=480a02cf7f2001774aa495c3e719d4173e968e6839485a7d2b207ef2facca472
Nov 8 01:18:13.048528 kernel: random: crng init done
Nov 8 01:18:13.048533 kernel: Dentry cache hash table entries: 4194304 (order: 13, 33554432 bytes, linear)
Nov 8 01:18:13.048539 kernel: Inode-cache hash table entries: 2097152 (order: 12, 16777216 bytes, linear)
Nov 8 01:18:13.048544 kernel: Fallback order for Node 0: 0
Nov 8 01:18:13.048549 kernel: Built 1 zonelists, mobility grouping on. Total pages: 8232415
Nov 8 01:18:13.048555 kernel: Policy zone: Normal
Nov 8 01:18:13.048560 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 8 01:18:13.048566 kernel: software IO TLB: area num 16.
Nov 8 01:18:13.048572 kernel: Memory: 32720304K/33452980K available (12288K kernel code, 2288K rwdata, 22748K rodata, 42880K init, 2320K bss, 732416K reserved, 0K cma-reserved)
Nov 8 01:18:13.048578 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1
Nov 8 01:18:13.048583 kernel: ftrace: allocating 37980 entries in 149 pages
Nov 8 01:18:13.048589 kernel: ftrace: allocated 149 pages with 4 groups
Nov 8 01:18:13.048595 kernel: Dynamic Preempt: voluntary
Nov 8 01:18:13.048601 kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 8 01:18:13.048607 kernel: rcu: RCU event tracing is enabled.
Nov 8 01:18:13.048612 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16.
Nov 8 01:18:13.048618 kernel: Trampoline variant of Tasks RCU enabled.
Nov 8 01:18:13.048624 kernel: Rude variant of Tasks RCU enabled.
Nov 8 01:18:13.048629 kernel: Tracing variant of Tasks RCU enabled.
Nov 8 01:18:13.048635 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 8 01:18:13.048640 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16
Nov 8 01:18:13.048646 kernel: NR_IRQS: 33024, nr_irqs: 2184, preallocated irqs: 16
Nov 8 01:18:13.048651 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Nov 8 01:18:13.048657 kernel: Console: colour dummy device 80x25
Nov 8 01:18:13.048662 kernel: printk: console [tty0] enabled
Nov 8 01:18:13.048668 kernel: printk: console [ttyS1] enabled
Nov 8 01:18:13.048674 kernel: ACPI: Core revision 20230628
Nov 8 01:18:13.048680 kernel: hpet: HPET dysfunctional in PC10. Force disabled.
Nov 8 01:18:13.048685 kernel: APIC: Switch to symmetric I/O mode setup
Nov 8 01:18:13.048691 kernel: DMAR: Host address width 39
Nov 8 01:18:13.048696 kernel: DMAR: DRHD base: 0x000000fed91000 flags: 0x1
Nov 8 01:18:13.048702 kernel: DMAR: dmar0: reg_base_addr fed91000 ver 1:0 cap d2008c40660462 ecap f050da
Nov 8 01:18:13.048707 kernel: DMAR: RMRR base: 0x0000008cf18000 end: 0x0000008d161fff
Nov 8 01:18:13.048713 kernel: DMAR-IR: IOAPIC id 2 under DRHD base 0xfed91000 IOMMU 0
Nov 8 01:18:13.048718 kernel: DMAR-IR: HPET id 0 under DRHD base 0xfed91000
Nov 8 01:18:13.048725 kernel: DMAR-IR: Queued invalidation will be enabled to support x2apic and Intr-remapping.
Nov 8 01:18:13.048730 kernel: DMAR-IR: Enabled IRQ remapping in x2apic mode
Nov 8 01:18:13.048736 kernel: x2apic enabled
Nov 8 01:18:13.048741 kernel: APIC: Switched APIC routing to: cluster x2apic
Nov 8 01:18:13.048747 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x3101f59f5e6, max_idle_ns: 440795259996 ns
Nov 8 01:18:13.048752 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 6799.81 BogoMIPS (lpj=3399906)
Nov 8 01:18:13.048758 kernel: CPU0: Thermal monitoring enabled (TM1)
Nov 8 01:18:13.048763 kernel: process: using mwait in idle threads
Nov 8 01:18:13.048769 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Nov 8 01:18:13.048775 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
Nov 8 01:18:13.048780 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Nov 8 01:18:13.048786 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit
Nov 8 01:18:13.048791 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall
Nov 8 01:18:13.048797 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS
Nov 8 01:18:13.048802 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT
Nov 8 01:18:13.048808 kernel: RETBleed: Mitigation: Enhanced IBRS
Nov 8 01:18:13.048813 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Nov 8 01:18:13.048818 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Nov 8 01:18:13.048824 kernel: TAA: Mitigation: TSX disabled
Nov 8 01:18:13.048829 kernel: MMIO Stale Data: Mitigation: Clear CPU buffers
Nov 8 01:18:13.048835 kernel: SRBDS: Mitigation: Microcode
Nov 8 01:18:13.048841 kernel: GDS: Mitigation: Microcode
Nov 8 01:18:13.048846 kernel: active return thunk: its_return_thunk
Nov 8 01:18:13.048852 kernel: ITS: Mitigation: Aligned branch/return thunks
Nov 8 01:18:13.048857 kernel: VMSCAPE: Mitigation: IBPB before exit to userspace
Nov 8 01:18:13.048863 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Nov 8 01:18:13.048868 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Nov 8 01:18:13.048873 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Nov 8 01:18:13.048879 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Nov 8 01:18:13.048884 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Nov 8 01:18:13.048890 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Nov 8 01:18:13.048895 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Nov 8 01:18:13.048901 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Nov 8 01:18:13.048907 kernel: x86/fpu: Enabled xstate features 0x1f, context size is 960 bytes, using 'compacted' format.
Nov 8 01:18:13.048912 kernel: Freeing SMP alternatives memory: 32K
Nov 8 01:18:13.048918 kernel: pid_max: default: 32768 minimum: 301
Nov 8 01:18:13.048923 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Nov 8 01:18:13.048928 kernel: landlock: Up and running.
Nov 8 01:18:13.048934 kernel: SELinux: Initializing.
Nov 8 01:18:13.048939 kernel: Mount-cache hash table entries: 65536 (order: 7, 524288 bytes, linear)
Nov 8 01:18:13.048945 kernel: Mountpoint-cache hash table entries: 65536 (order: 7, 524288 bytes, linear)
Nov 8 01:18:13.048950 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd)
Nov 8 01:18:13.048955 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Nov 8 01:18:13.048962 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Nov 8 01:18:13.048968 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Nov 8 01:18:13.048973 kernel: Performance Events: PEBS fmt3+, Skylake events, 32-deep LBR, full-width counters, Intel PMU driver.
Nov 8 01:18:13.048979 kernel: ... version:                4
Nov 8 01:18:13.048984 kernel: ... bit width:              48
Nov 8 01:18:13.048989 kernel: ... generic registers:      4
Nov 8 01:18:13.048995 kernel: ... value mask:             0000ffffffffffff
Nov 8 01:18:13.049000 kernel: ... max period:             00007fffffffffff
Nov 8 01:18:13.049006 kernel: ... fixed-purpose events:   3
Nov 8 01:18:13.049012 kernel: ... event mask:             000000070000000f
Nov 8 01:18:13.049018 kernel: signal: max sigframe size: 2032
Nov 8 01:18:13.049023 kernel: Estimated ratio of average max frequency by base frequency (times 1024): 1445
Nov 8 01:18:13.049029 kernel: rcu: Hierarchical SRCU implementation.
Nov 8 01:18:13.049034 kernel: rcu: Max phase no-delay instances is 400.
Nov 8 01:18:13.049040 kernel: NMI watchdog: Enabled. Permanently consumes one hw-PMU counter.
Nov 8 01:18:13.049045 kernel: smp: Bringing up secondary CPUs ...
Nov 8 01:18:13.049050 kernel: smpboot: x86: Booting SMP configuration:
Nov 8 01:18:13.049056 kernel: .... node #0, CPUs: #1 #2 #3 #4 #5 #6 #7 #8 #9 #10 #11 #12 #13 #14 #15
Nov 8 01:18:13.049063 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Nov 8 01:18:13.049068 kernel: smp: Brought up 1 node, 16 CPUs
Nov 8 01:18:13.049074 kernel: smpboot: Max logical packages: 1
Nov 8 01:18:13.049079 kernel: smpboot: Total of 16 processors activated (108796.99 BogoMIPS)
Nov 8 01:18:13.049085 kernel: devtmpfs: initialized
Nov 8 01:18:13.049090 kernel: x86/mm: Memory block size: 128MB
Nov 8 01:18:13.049096 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x81b2d000-0x81b2dfff] (4096 bytes)
Nov 8 01:18:13.049101 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x8c23b000-0x8c66cfff] (4399104 bytes)
Nov 8 01:18:13.049107 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 8 01:18:13.049113 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear)
Nov 8 01:18:13.049119 kernel: pinctrl core: initialized pinctrl subsystem
Nov 8 01:18:13.049124 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 8 01:18:13.049130 kernel: audit: initializing netlink subsys (disabled)
Nov 8 01:18:13.049135 kernel: audit: type=2000 audit(1762564687.040:1): state=initialized audit_enabled=0 res=1
Nov 8 01:18:13.049140 kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 8 01:18:13.049146 kernel: thermal_sys: Registered thermal governor 'user_space'
Nov 8 01:18:13.049151 kernel: cpuidle: using governor menu
Nov 8 01:18:13.049158 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 8 01:18:13.049163 kernel: dca service started, version 1.12.1
Nov 8 01:18:13.049169 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000)
Nov 8 01:18:13.049174 kernel: PCI: Using configuration type 1 for base access
Nov 8 01:18:13.049180 kernel: ENERGY_PERF_BIAS: Set to 'normal', was 'performance'
Nov 8 01:18:13.049185 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Nov 8 01:18:13.049191 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Nov 8 01:18:13.049196 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Nov 8 01:18:13.049202 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Nov 8 01:18:13.049208 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Nov 8 01:18:13.049214 kernel: ACPI: Added _OSI(Module Device)
Nov 8 01:18:13.049219 kernel: ACPI: Added _OSI(Processor Device)
Nov 8 01:18:13.049225 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 8 01:18:13.049230 kernel: ACPI: 12 ACPI AML tables successfully acquired and loaded
Nov 8 01:18:13.049236 kernel: ACPI: Dynamic OEM Table Load:
Nov 8 01:18:13.049241 kernel: ACPI: SSDT 0xFFFF9E2081B57800 000400 (v02 PmRef Cpu0Cst 00003001 INTL 20160527)
Nov 8 01:18:13.049247 kernel: ACPI: Dynamic OEM Table Load:
Nov 8 01:18:13.049252 kernel: ACPI: SSDT 0xFFFF9E2081B4F800 000683 (v02 PmRef Cpu0Ist 00003000 INTL 20160527)
Nov 8 01:18:13.049259 kernel: ACPI: Dynamic OEM Table Load:
Nov 8 01:18:13.049264 kernel: ACPI: SSDT 0xFFFF9E2080247500 0000F4 (v02 PmRef Cpu0Psd 00003000 INTL 20160527)
Nov 8 01:18:13.049270 kernel: ACPI: Dynamic OEM Table Load:
Nov 8 01:18:13.049275 kernel: ACPI: SSDT 0xFFFF9E2081E79000 0005FC (v02 PmRef ApIst 00003000 INTL 20160527)
Nov 8 01:18:13.049280 kernel: ACPI: Dynamic OEM Table Load:
Nov 8 01:18:13.049287 kernel: ACPI: SSDT 0xFFFF9E208012A000 000AB0 (v02 PmRef ApPsd 00003000 INTL 20160527)
Nov 8 01:18:13.049293 kernel: ACPI: Dynamic OEM Table Load:
Nov 8 01:18:13.049299 kernel: ACPI: SSDT 0xFFFF9E2081B55400 00030A (v02 PmRef ApCst 00003000 INTL 20160527)
Nov 8 01:18:13.049304 kernel: ACPI: _OSC evaluated successfully for all CPUs
Nov 8 01:18:13.049309 kernel: ACPI: Interpreter enabled
Nov 8 01:18:13.049316 kernel: ACPI: PM: (supports S0 S5)
Nov 8 01:18:13.049321 kernel: ACPI: Using IOAPIC for interrupt routing
Nov 8 01:18:13.049327 kernel: HEST: Enabling Firmware First mode for corrected errors.
Nov 8 01:18:13.049332 kernel: mce: [Firmware Bug]: Ignoring request to disable invalid MCA bank 14.
Nov 8 01:18:13.049338 kernel: HEST: Table parsing has been initialized.
Nov 8 01:18:13.049343 kernel: GHES: APEI firmware first mode is enabled by APEI bit and WHEA _OSC.
Nov 8 01:18:13.049349 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Nov 8 01:18:13.049354 kernel: PCI: Using E820 reservations for host bridge windows
Nov 8 01:18:13.049360 kernel: ACPI: Enabled 9 GPEs in block 00 to 7F
Nov 8 01:18:13.049366 kernel: ACPI: \_SB_.PCI0.XDCI.USBC: New power resource
Nov 8 01:18:13.049372 kernel: ACPI: \_SB_.PCI0.SAT0.VOL0.V0PR: New power resource
Nov 8 01:18:13.049377 kernel: ACPI: \_SB_.PCI0.SAT0.VOL1.V1PR: New power resource
Nov 8 01:18:13.049383 kernel: ACPI: \_SB_.PCI0.SAT0.VOL2.V2PR: New power resource
Nov 8 01:18:13.049388 kernel: ACPI: \_SB_.PCI0.CNVW.WRST: New power resource
Nov 8 01:18:13.049394 kernel: ACPI: \_TZ_.FN00: New power resource
Nov 8 01:18:13.049399 kernel: ACPI: \_TZ_.FN01: New power resource
Nov 8 01:18:13.049405 kernel: ACPI: \_TZ_.FN02: New power resource
Nov 8 01:18:13.049410 kernel: ACPI: \_TZ_.FN03: New power resource
Nov 8 01:18:13.049417 kernel: ACPI: \_TZ_.FN04: New power resource
Nov 8 01:18:13.049422 kernel: ACPI: \PIN_: New power resource
Nov 8 01:18:13.049428 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-fe])
Nov 8 01:18:13.049504 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Nov 8 01:18:13.049560 kernel: acpi PNP0A08:00: _OSC: platform does not support [AER]
Nov 8 01:18:13.049609 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability LTR]
Nov 8 01:18:13.049617 kernel: PCI host bridge to bus 0000:00
Nov 8 01:18:13.049671 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Nov 8 01:18:13.049716 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Nov 8 01:18:13.049760 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Nov 8 01:18:13.049803 kernel: pci_bus 0000:00: root bus resource [mem 0x90000000-0xdfffffff window]
Nov 8 01:18:13.049847 kernel: pci_bus 0000:00: root bus resource [mem 0xfc800000-0xfe7fffff window]
Nov 8 01:18:13.049889 kernel: pci_bus 0000:00: root bus resource [bus 00-fe]
Nov 8 01:18:13.049950 kernel: pci 0000:00:00.0: [8086:3e31] type 00 class 0x060000
Nov 8 01:18:13.050011 kernel: pci 0000:00:01.0: [8086:1901] type 01 class 0x060400
Nov 8 01:18:13.050063 kernel: pci 0000:00:01.0: PME# supported from D0 D3hot D3cold
Nov 8 01:18:13.050116 kernel: pci 0000:00:08.0: [8086:1911] type 00 class 0x088000
Nov 8 01:18:13.050166 kernel: pci 0000:00:08.0: reg 0x10: [mem 0x9551f000-0x9551ffff 64bit]
Nov 8 01:18:13.050220 kernel: pci 0000:00:12.0: [8086:a379] type 00 class 0x118000
Nov 8 01:18:13.050270 kernel: pci 0000:00:12.0: reg 0x10: [mem 0x9551e000-0x9551efff 64bit]
Nov 8 01:18:13.050329 kernel: pci 0000:00:14.0: [8086:a36d] type 00 class 0x0c0330
Nov 8 01:18:13.050380 kernel: pci 0000:00:14.0: reg 0x10: [mem 0x95500000-0x9550ffff 64bit]
Nov 8 01:18:13.050430 kernel: pci 0000:00:14.0: PME# supported from D3hot D3cold
Nov 8 01:18:13.050482 kernel: pci 0000:00:14.2: [8086:a36f] type 00 class 0x050000
Nov 8 01:18:13.050533 kernel: pci 0000:00:14.2: reg 0x10: [mem 0x95512000-0x95513fff 64bit]
Nov 8 01:18:13.050580 kernel: pci 0000:00:14.2: reg 0x18: [mem 0x9551d000-0x9551dfff 64bit]
Nov 8 01:18:13.050636 kernel: pci 0000:00:15.0: [8086:a368] type 00 class 0x0c8000
Nov 8 01:18:13.050685 kernel: pci 0000:00:15.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
Nov 8 01:18:13.050741 kernel: pci 0000:00:15.1: [8086:a369] type 00 class 0x0c8000
Nov 8 01:18:13.050791 kernel: pci 0000:00:15.1: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
Nov 8 01:18:13.050844 kernel: pci 0000:00:16.0: [8086:a360] type 00 class 0x078000
Nov 8 01:18:13.050893 kernel: pci 0000:00:16.0: reg 0x10: [mem 0x9551a000-0x9551afff 64bit]
Nov 8 01:18:13.050944 kernel: pci 0000:00:16.0: PME# supported from D3hot
Nov 8 01:18:13.050996 kernel: pci 0000:00:16.1: [8086:a361] type 00 class 0x078000
Nov 8 01:18:13.051055 kernel: pci 0000:00:16.1: reg 0x10: [mem 0x95519000-0x95519fff 64bit]
Nov 8 01:18:13.051106 kernel: pci 0000:00:16.1: PME# supported from D3hot
Nov 8 01:18:13.051159 kernel: pci 0000:00:16.4: [8086:a364] type 00 class 0x078000
Nov 8 01:18:13.051208 kernel: pci 0000:00:16.4: reg 0x10: [mem 0x95518000-0x95518fff 64bit]
Nov 8 01:18:13.051257 kernel: pci 0000:00:16.4: PME# supported from D3hot
Nov 8 01:18:13.051323 kernel: pci 0000:00:17.0: [8086:a352] type 00 class 0x010601
Nov 8 01:18:13.051374 kernel: pci 0000:00:17.0: reg 0x10: [mem 0x95510000-0x95511fff]
Nov 8 01:18:13.051423 kernel: pci 0000:00:17.0: reg 0x14: [mem 0x95517000-0x955170ff]
Nov 8 01:18:13.051473 kernel: pci 0000:00:17.0: reg 0x18: [io 0x6050-0x6057]
Nov 8 01:18:13.051521 kernel: pci 0000:00:17.0: reg 0x1c: [io 0x6040-0x6043]
Nov 8 01:18:13.051571 kernel: pci 0000:00:17.0: reg 0x20: [io 0x6020-0x603f]
Nov 8 01:18:13.051622 kernel: pci 0000:00:17.0: reg 0x24: [mem 0x95516000-0x955167ff]
Nov 8 01:18:13.051672 kernel: pci 0000:00:17.0: PME# supported from D3hot
Nov 8 01:18:13.051725 kernel: pci 0000:00:1b.0: [8086:a340] type 01 class 0x060400
Nov 8 01:18:13.051775 kernel: pci 0000:00:1b.0: PME# supported from D0 D3hot D3cold
Nov 8 01:18:13.051834 kernel: pci 0000:00:1b.4: [8086:a32c] type 01 class 0x060400
Nov 8 01:18:13.051887 kernel: pci 0000:00:1b.4: PME# supported from D0 D3hot D3cold
Nov 8 01:18:13.051941 kernel: pci 0000:00:1b.5: [8086:a32d] type 01 class 0x060400
Nov 8 01:18:13.051991 kernel: pci 0000:00:1b.5: PME# supported from D0 D3hot D3cold
Nov 8 01:18:13.052046 kernel: pci 0000:00:1c.0: [8086:a338] type 01 class 0x060400
Nov 8 01:18:13.052095 kernel: pci 0000:00:1c.0: PME# supported from D0 D3hot D3cold
Nov 8 01:18:13.052149 kernel: pci 0000:00:1c.3: [8086:a33b] type 01 class 0x060400
Nov 8 01:18:13.052201 kernel: pci 0000:00:1c.3: PME# supported from D0 D3hot D3cold
Nov 8 01:18:13.052255 kernel: pci 0000:00:1e.0: [8086:a328] type 00 class 0x078000
Nov 8 01:18:13.052331 kernel: pci 0000:00:1e.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
Nov 8 01:18:13.052385 kernel: pci 0000:00:1f.0: [8086:a309] type 00 class 0x060100
Nov 8 01:18:13.052437 kernel: pci 0000:00:1f.4: [8086:a323] type 00 class 0x0c0500
Nov 8 01:18:13.052486 kernel: pci 0000:00:1f.4: reg 0x10: [mem 0x95514000-0x955140ff 64bit]
Nov 8 01:18:13.052539 kernel: pci 0000:00:1f.4: reg 0x20: [io 0xefa0-0xefbf]
Nov 8 01:18:13.052595 kernel: pci 0000:00:1f.5: [8086:a324] type 00 class 0x0c8000
Nov 8 01:18:13.052645 kernel: pci 0000:00:1f.5: reg 0x10: [mem 0xfe010000-0xfe010fff]
Nov 8 01:18:13.052703 kernel: pci 0000:01:00.0: [15b3:1015] type 00 class 0x020000
Nov 8 01:18:13.052754 kernel: pci 0000:01:00.0: reg 0x10: [mem 0x92000000-0x93ffffff 64bit pref]
Nov 8 01:18:13.052806 kernel: pci 0000:01:00.0: reg 0x30: [mem 0x95200000-0x952fffff pref]
Nov 8 01:18:13.052856 kernel: pci 0000:01:00.0: PME# supported from D3cold
Nov 8 01:18:13.052910 kernel: pci 0000:01:00.0: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref]
Nov 8 01:18:13.052960 kernel: pci 0000:01:00.0: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs)
Nov 8 01:18:13.053016 kernel: pci 0000:01:00.1: [15b3:1015] type 00 class 0x020000
Nov 8 01:18:13.053068 kernel: pci 0000:01:00.1: reg 0x10: [mem 0x90000000-0x91ffffff 64bit pref]
Nov 8 01:18:13.053118 kernel: pci 0000:01:00.1: reg 0x30: [mem 0x95100000-0x951fffff pref]
Nov 8 01:18:13.053169 kernel: pci 0000:01:00.1: PME# supported from D3cold
Nov 8 01:18:13.053219 kernel: pci 0000:01:00.1: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref]
Nov 8 01:18:13.053273 kernel: pci 0000:01:00.1: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs)
Nov 8 01:18:13.053326 kernel: pci 0000:00:01.0: PCI bridge to [bus 01]
Nov 8 01:18:13.053376 kernel: pci 0000:00:01.0: bridge window [mem 0x95100000-0x952fffff]
Nov 8 01:18:13.053425 kernel: pci 0000:00:01.0: bridge window [mem 0x90000000-0x93ffffff 64bit pref]
Nov 8 01:18:13.053477 kernel: pci 0000:00:1b.0: PCI bridge to [bus 02]
Nov 8 01:18:13.053532 kernel: pci
0000:03:00.0: working around ROM BAR overlap defect Nov 8 01:18:13.053584 kernel: pci 0000:03:00.0: [8086:1533] type 00 class 0x020000 Nov 8 01:18:13.053638 kernel: pci 0000:03:00.0: reg 0x10: [mem 0x95400000-0x9547ffff] Nov 8 01:18:13.053689 kernel: pci 0000:03:00.0: reg 0x18: [io 0x5000-0x501f] Nov 8 01:18:13.053740 kernel: pci 0000:03:00.0: reg 0x1c: [mem 0x95480000-0x95483fff] Nov 8 01:18:13.053790 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold Nov 8 01:18:13.053841 kernel: pci 0000:00:1b.4: PCI bridge to [bus 03] Nov 8 01:18:13.053890 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] Nov 8 01:18:13.053941 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff] Nov 8 01:18:13.053998 kernel: pci 0000:04:00.0: working around ROM BAR overlap defect Nov 8 01:18:13.054050 kernel: pci 0000:04:00.0: [8086:1533] type 00 class 0x020000 Nov 8 01:18:13.054101 kernel: pci 0000:04:00.0: reg 0x10: [mem 0x95300000-0x9537ffff] Nov 8 01:18:13.054151 kernel: pci 0000:04:00.0: reg 0x18: [io 0x4000-0x401f] Nov 8 01:18:13.054203 kernel: pci 0000:04:00.0: reg 0x1c: [mem 0x95380000-0x95383fff] Nov 8 01:18:13.054254 kernel: pci 0000:04:00.0: PME# supported from D0 D3hot D3cold Nov 8 01:18:13.054307 kernel: pci 0000:00:1b.5: PCI bridge to [bus 04] Nov 8 01:18:13.054357 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] Nov 8 01:18:13.054410 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff] Nov 8 01:18:13.054461 kernel: pci 0000:00:1c.0: PCI bridge to [bus 05] Nov 8 01:18:13.054518 kernel: pci 0000:06:00.0: [1a03:1150] type 01 class 0x060400 Nov 8 01:18:13.054571 kernel: pci 0000:06:00.0: enabling Extended Tags Nov 8 01:18:13.054623 kernel: pci 0000:06:00.0: supports D1 D2 Nov 8 01:18:13.054674 kernel: pci 0000:06:00.0: PME# supported from D0 D1 D2 D3hot D3cold Nov 8 01:18:13.054725 kernel: pci 0000:00:1c.3: PCI bridge to [bus 06-07] Nov 8 01:18:13.054778 kernel: pci 0000:00:1c.3: bridge window [io 0x3000-0x3fff] Nov 8 
01:18:13.054827 kernel: pci 0000:00:1c.3: bridge window [mem 0x94000000-0x950fffff] Nov 8 01:18:13.054883 kernel: pci_bus 0000:07: extended config space not accessible Nov 8 01:18:13.054941 kernel: pci 0000:07:00.0: [1a03:2000] type 00 class 0x030000 Nov 8 01:18:13.054995 kernel: pci 0000:07:00.0: reg 0x10: [mem 0x94000000-0x94ffffff] Nov 8 01:18:13.055050 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x95000000-0x9501ffff] Nov 8 01:18:13.055104 kernel: pci 0000:07:00.0: reg 0x18: [io 0x3000-0x307f] Nov 8 01:18:13.055160 kernel: pci 0000:07:00.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Nov 8 01:18:13.055212 kernel: pci 0000:07:00.0: supports D1 D2 Nov 8 01:18:13.055266 kernel: pci 0000:07:00.0: PME# supported from D0 D1 D2 D3hot D3cold Nov 8 01:18:13.055321 kernel: pci 0000:06:00.0: PCI bridge to [bus 07] Nov 8 01:18:13.055374 kernel: pci 0000:06:00.0: bridge window [io 0x3000-0x3fff] Nov 8 01:18:13.055426 kernel: pci 0000:06:00.0: bridge window [mem 0x94000000-0x950fffff] Nov 8 01:18:13.055434 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 0 Nov 8 01:18:13.055440 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 1 Nov 8 01:18:13.055448 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 0 Nov 8 01:18:13.055454 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 0 Nov 8 01:18:13.055460 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 0 Nov 8 01:18:13.055465 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 0 Nov 8 01:18:13.055471 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 0 Nov 8 01:18:13.055477 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 0 Nov 8 01:18:13.055483 kernel: iommu: Default domain type: Translated Nov 8 01:18:13.055489 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Nov 8 01:18:13.055495 kernel: PCI: Using ACPI for IRQ routing Nov 8 01:18:13.055501 kernel: PCI: pci_cache_line_size set to 64 bytes Nov 8 01:18:13.055507 kernel: e820: reserve RAM 
buffer [mem 0x00099800-0x0009ffff] Nov 8 01:18:13.055513 kernel: e820: reserve RAM buffer [mem 0x81b2d000-0x83ffffff] Nov 8 01:18:13.055518 kernel: e820: reserve RAM buffer [mem 0x8afcd000-0x8bffffff] Nov 8 01:18:13.055524 kernel: e820: reserve RAM buffer [mem 0x8c23b000-0x8fffffff] Nov 8 01:18:13.055530 kernel: e820: reserve RAM buffer [mem 0x8ef00000-0x8fffffff] Nov 8 01:18:13.055535 kernel: e820: reserve RAM buffer [mem 0x86f000000-0x86fffffff] Nov 8 01:18:13.055587 kernel: pci 0000:07:00.0: vgaarb: setting as boot VGA device Nov 8 01:18:13.055641 kernel: pci 0000:07:00.0: vgaarb: bridge control possible Nov 8 01:18:13.055695 kernel: pci 0000:07:00.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Nov 8 01:18:13.055704 kernel: vgaarb: loaded Nov 8 01:18:13.055710 kernel: clocksource: Switched to clocksource tsc-early Nov 8 01:18:13.055716 kernel: VFS: Disk quotas dquot_6.6.0 Nov 8 01:18:13.055722 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Nov 8 01:18:13.055728 kernel: pnp: PnP ACPI init Nov 8 01:18:13.055779 kernel: system 00:00: [mem 0x40000000-0x403fffff] has been reserved Nov 8 01:18:13.055830 kernel: pnp 00:02: [dma 0 disabled] Nov 8 01:18:13.055884 kernel: pnp 00:03: [dma 0 disabled] Nov 8 01:18:13.055936 kernel: system 00:04: [io 0x0680-0x069f] has been reserved Nov 8 01:18:13.055983 kernel: system 00:04: [io 0x164e-0x164f] has been reserved Nov 8 01:18:13.056032 kernel: system 00:05: [mem 0xfed10000-0xfed17fff] has been reserved Nov 8 01:18:13.056078 kernel: system 00:05: [mem 0xfed18000-0xfed18fff] has been reserved Nov 8 01:18:13.056124 kernel: system 00:05: [mem 0xfed19000-0xfed19fff] has been reserved Nov 8 01:18:13.056172 kernel: system 00:05: [mem 0xe0000000-0xefffffff] has been reserved Nov 8 01:18:13.056218 kernel: system 00:05: [mem 0xfed20000-0xfed3ffff] has been reserved Nov 8 01:18:13.056266 kernel: system 00:05: [mem 0xfed90000-0xfed93fff] could not be reserved Nov 8 01:18:13.056316 kernel: system 
00:05: [mem 0xfed45000-0xfed8ffff] has been reserved Nov 8 01:18:13.056362 kernel: system 00:05: [mem 0xfee00000-0xfeefffff] could not be reserved Nov 8 01:18:13.056414 kernel: system 00:06: [io 0x1800-0x18fe] could not be reserved Nov 8 01:18:13.056460 kernel: system 00:06: [mem 0xfd000000-0xfd69ffff] has been reserved Nov 8 01:18:13.056509 kernel: system 00:06: [mem 0xfd6c0000-0xfd6cffff] has been reserved Nov 8 01:18:13.056554 kernel: system 00:06: [mem 0xfd6f0000-0xfdffffff] has been reserved Nov 8 01:18:13.056599 kernel: system 00:06: [mem 0xfe000000-0xfe01ffff] could not be reserved Nov 8 01:18:13.056645 kernel: system 00:06: [mem 0xfe200000-0xfe7fffff] has been reserved Nov 8 01:18:13.056690 kernel: system 00:06: [mem 0xff000000-0xffffffff] has been reserved Nov 8 01:18:13.056741 kernel: system 00:07: [io 0x2000-0x20fe] has been reserved Nov 8 01:18:13.056750 kernel: pnp: PnP ACPI: found 9 devices Nov 8 01:18:13.056758 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Nov 8 01:18:13.056764 kernel: NET: Registered PF_INET protocol family Nov 8 01:18:13.056770 kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear) Nov 8 01:18:13.056776 kernel: tcp_listen_portaddr_hash hash table entries: 16384 (order: 6, 262144 bytes, linear) Nov 8 01:18:13.056782 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Nov 8 01:18:13.056788 kernel: TCP established hash table entries: 262144 (order: 9, 2097152 bytes, linear) Nov 8 01:18:13.056794 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Nov 8 01:18:13.056800 kernel: TCP: Hash tables configured (established 262144 bind 65536) Nov 8 01:18:13.056805 kernel: UDP hash table entries: 16384 (order: 7, 524288 bytes, linear) Nov 8 01:18:13.056813 kernel: UDP-Lite hash table entries: 16384 (order: 7, 524288 bytes, linear) Nov 8 01:18:13.056819 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Nov 8 
01:18:13.056824 kernel: NET: Registered PF_XDP protocol family Nov 8 01:18:13.056874 kernel: pci 0000:00:15.0: BAR 0: assigned [mem 0x95515000-0x95515fff 64bit] Nov 8 01:18:13.056925 kernel: pci 0000:00:15.1: BAR 0: assigned [mem 0x9551b000-0x9551bfff 64bit] Nov 8 01:18:13.056974 kernel: pci 0000:00:1e.0: BAR 0: assigned [mem 0x9551c000-0x9551cfff 64bit] Nov 8 01:18:13.057027 kernel: pci 0000:01:00.0: BAR 7: no space for [mem size 0x00800000 64bit pref] Nov 8 01:18:13.057079 kernel: pci 0000:01:00.0: BAR 7: failed to assign [mem size 0x00800000 64bit pref] Nov 8 01:18:13.057134 kernel: pci 0000:01:00.1: BAR 7: no space for [mem size 0x00800000 64bit pref] Nov 8 01:18:13.057184 kernel: pci 0000:01:00.1: BAR 7: failed to assign [mem size 0x00800000 64bit pref] Nov 8 01:18:13.057235 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Nov 8 01:18:13.057291 kernel: pci 0000:00:01.0: bridge window [mem 0x95100000-0x952fffff] Nov 8 01:18:13.057342 kernel: pci 0000:00:01.0: bridge window [mem 0x90000000-0x93ffffff 64bit pref] Nov 8 01:18:13.057393 kernel: pci 0000:00:1b.0: PCI bridge to [bus 02] Nov 8 01:18:13.057446 kernel: pci 0000:00:1b.4: PCI bridge to [bus 03] Nov 8 01:18:13.057496 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] Nov 8 01:18:13.057545 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff] Nov 8 01:18:13.057595 kernel: pci 0000:00:1b.5: PCI bridge to [bus 04] Nov 8 01:18:13.057644 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] Nov 8 01:18:13.057694 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff] Nov 8 01:18:13.057743 kernel: pci 0000:00:1c.0: PCI bridge to [bus 05] Nov 8 01:18:13.057796 kernel: pci 0000:06:00.0: PCI bridge to [bus 07] Nov 8 01:18:13.057847 kernel: pci 0000:06:00.0: bridge window [io 0x3000-0x3fff] Nov 8 01:18:13.057898 kernel: pci 0000:06:00.0: bridge window [mem 0x94000000-0x950fffff] Nov 8 01:18:13.057947 kernel: pci 0000:00:1c.3: PCI bridge to [bus 06-07] Nov 8 01:18:13.057999 
kernel: pci 0000:00:1c.3: bridge window [io 0x3000-0x3fff] Nov 8 01:18:13.058049 kernel: pci 0000:00:1c.3: bridge window [mem 0x94000000-0x950fffff] Nov 8 01:18:13.058094 kernel: pci_bus 0000:00: Some PCI device resources are unassigned, try booting with pci=realloc Nov 8 01:18:13.058139 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Nov 8 01:18:13.058182 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Nov 8 01:18:13.058229 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Nov 8 01:18:13.058273 kernel: pci_bus 0000:00: resource 7 [mem 0x90000000-0xdfffffff window] Nov 8 01:18:13.058320 kernel: pci_bus 0000:00: resource 8 [mem 0xfc800000-0xfe7fffff window] Nov 8 01:18:13.058369 kernel: pci_bus 0000:01: resource 1 [mem 0x95100000-0x952fffff] Nov 8 01:18:13.058417 kernel: pci_bus 0000:01: resource 2 [mem 0x90000000-0x93ffffff 64bit pref] Nov 8 01:18:13.058467 kernel: pci_bus 0000:03: resource 0 [io 0x5000-0x5fff] Nov 8 01:18:13.058516 kernel: pci_bus 0000:03: resource 1 [mem 0x95400000-0x954fffff] Nov 8 01:18:13.058568 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff] Nov 8 01:18:13.058615 kernel: pci_bus 0000:04: resource 1 [mem 0x95300000-0x953fffff] Nov 8 01:18:13.058665 kernel: pci_bus 0000:06: resource 0 [io 0x3000-0x3fff] Nov 8 01:18:13.058712 kernel: pci_bus 0000:06: resource 1 [mem 0x94000000-0x950fffff] Nov 8 01:18:13.058760 kernel: pci_bus 0000:07: resource 0 [io 0x3000-0x3fff] Nov 8 01:18:13.058808 kernel: pci_bus 0000:07: resource 1 [mem 0x94000000-0x950fffff] Nov 8 01:18:13.058817 kernel: PCI: CLS 64 bytes, default 64 Nov 8 01:18:13.058824 kernel: DMAR: No ATSR found Nov 8 01:18:13.058830 kernel: DMAR: No SATC found Nov 8 01:18:13.058836 kernel: DMAR: dmar0: Using Queued invalidation Nov 8 01:18:13.058886 kernel: pci 0000:00:00.0: Adding to iommu group 0 Nov 8 01:18:13.058937 kernel: pci 0000:00:01.0: Adding to iommu group 1 Nov 8 01:18:13.058987 kernel: pci 0000:00:08.0: Adding to iommu group 2 Nov 8 
01:18:13.059037 kernel: pci 0000:00:12.0: Adding to iommu group 3 Nov 8 01:18:13.059088 kernel: pci 0000:00:14.0: Adding to iommu group 4 Nov 8 01:18:13.059138 kernel: pci 0000:00:14.2: Adding to iommu group 4 Nov 8 01:18:13.059187 kernel: pci 0000:00:15.0: Adding to iommu group 5 Nov 8 01:18:13.059236 kernel: pci 0000:00:15.1: Adding to iommu group 5 Nov 8 01:18:13.059288 kernel: pci 0000:00:16.0: Adding to iommu group 6 Nov 8 01:18:13.059338 kernel: pci 0000:00:16.1: Adding to iommu group 6 Nov 8 01:18:13.059388 kernel: pci 0000:00:16.4: Adding to iommu group 6 Nov 8 01:18:13.059436 kernel: pci 0000:00:17.0: Adding to iommu group 7 Nov 8 01:18:13.059486 kernel: pci 0000:00:1b.0: Adding to iommu group 8 Nov 8 01:18:13.059537 kernel: pci 0000:00:1b.4: Adding to iommu group 9 Nov 8 01:18:13.059588 kernel: pci 0000:00:1b.5: Adding to iommu group 10 Nov 8 01:18:13.059637 kernel: pci 0000:00:1c.0: Adding to iommu group 11 Nov 8 01:18:13.059686 kernel: pci 0000:00:1c.3: Adding to iommu group 12 Nov 8 01:18:13.059735 kernel: pci 0000:00:1e.0: Adding to iommu group 13 Nov 8 01:18:13.059785 kernel: pci 0000:00:1f.0: Adding to iommu group 14 Nov 8 01:18:13.059835 kernel: pci 0000:00:1f.4: Adding to iommu group 14 Nov 8 01:18:13.059884 kernel: pci 0000:00:1f.5: Adding to iommu group 14 Nov 8 01:18:13.059938 kernel: pci 0000:01:00.0: Adding to iommu group 1 Nov 8 01:18:13.059989 kernel: pci 0000:01:00.1: Adding to iommu group 1 Nov 8 01:18:13.060041 kernel: pci 0000:03:00.0: Adding to iommu group 15 Nov 8 01:18:13.060092 kernel: pci 0000:04:00.0: Adding to iommu group 16 Nov 8 01:18:13.060144 kernel: pci 0000:06:00.0: Adding to iommu group 17 Nov 8 01:18:13.060197 kernel: pci 0000:07:00.0: Adding to iommu group 17 Nov 8 01:18:13.060205 kernel: DMAR: Intel(R) Virtualization Technology for Directed I/O Nov 8 01:18:13.060211 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Nov 8 01:18:13.060219 kernel: software IO TLB: mapped [mem 
0x0000000086fcd000-0x000000008afcd000] (64MB) Nov 8 01:18:13.060225 kernel: RAPL PMU: API unit is 2^-32 Joules, 3 fixed counters, 655360 ms ovfl timer Nov 8 01:18:13.060231 kernel: RAPL PMU: hw unit of domain pp0-core 2^-14 Joules Nov 8 01:18:13.060237 kernel: RAPL PMU: hw unit of domain package 2^-14 Joules Nov 8 01:18:13.060243 kernel: RAPL PMU: hw unit of domain dram 2^-14 Joules Nov 8 01:18:13.060299 kernel: platform rtc_cmos: registered platform RTC device (no PNP device found) Nov 8 01:18:13.060308 kernel: Initialise system trusted keyrings Nov 8 01:18:13.060314 kernel: workingset: timestamp_bits=39 max_order=23 bucket_order=0 Nov 8 01:18:13.060322 kernel: Key type asymmetric registered Nov 8 01:18:13.060327 kernel: Asymmetric key parser 'x509' registered Nov 8 01:18:13.060333 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Nov 8 01:18:13.060339 kernel: io scheduler mq-deadline registered Nov 8 01:18:13.060345 kernel: io scheduler kyber registered Nov 8 01:18:13.060351 kernel: io scheduler bfq registered Nov 8 01:18:13.060401 kernel: pcieport 0000:00:01.0: PME: Signaling with IRQ 121 Nov 8 01:18:13.060451 kernel: pcieport 0000:00:1b.0: PME: Signaling with IRQ 122 Nov 8 01:18:13.060501 kernel: pcieport 0000:00:1b.4: PME: Signaling with IRQ 123 Nov 8 01:18:13.060552 kernel: pcieport 0000:00:1b.5: PME: Signaling with IRQ 124 Nov 8 01:18:13.060602 kernel: pcieport 0000:00:1c.0: PME: Signaling with IRQ 125 Nov 8 01:18:13.060651 kernel: pcieport 0000:00:1c.3: PME: Signaling with IRQ 126 Nov 8 01:18:13.060706 kernel: thermal LNXTHERM:00: registered as thermal_zone0 Nov 8 01:18:13.060715 kernel: ACPI: thermal: Thermal Zone [TZ00] (28 C) Nov 8 01:18:13.060721 kernel: ERST: Error Record Serialization Table (ERST) support is initialized. 
Nov 8 01:18:13.060727 kernel: pstore: Using crash dump compression: deflate Nov 8 01:18:13.060735 kernel: pstore: Registered erst as persistent store backend Nov 8 01:18:13.060741 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Nov 8 01:18:13.060747 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Nov 8 01:18:13.060753 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Nov 8 01:18:13.060759 kernel: 00:03: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Nov 8 01:18:13.060765 kernel: hpet_acpi_add: no address or irqs in _CRS Nov 8 01:18:13.060813 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x1B, rev-id 16) Nov 8 01:18:13.060822 kernel: i8042: PNP: No PS/2 controller found. Nov 8 01:18:13.060868 kernel: rtc_cmos rtc_cmos: RTC can wake from S4 Nov 8 01:18:13.060916 kernel: rtc_cmos rtc_cmos: registered as rtc0 Nov 8 01:18:13.060962 kernel: rtc_cmos rtc_cmos: setting system clock to 2025-11-08T01:18:11 UTC (1762564691) Nov 8 01:18:13.061007 kernel: rtc_cmos rtc_cmos: alarms up to one month, y3k, 114 bytes nvram Nov 8 01:18:13.061015 kernel: intel_pstate: Intel P-state driver initializing Nov 8 01:18:13.061022 kernel: intel_pstate: Disabling energy efficiency optimization Nov 8 01:18:13.061027 kernel: intel_pstate: HWP enabled Nov 8 01:18:13.061033 kernel: vesafb: mode is 1024x768x8, linelength=1024, pages=0 Nov 8 01:18:13.061039 kernel: vesafb: scrolling: redraw Nov 8 01:18:13.061047 kernel: vesafb: Pseudocolor: size=0:8:8:8, shift=0:0:0:0 Nov 8 01:18:13.061053 kernel: vesafb: framebuffer at 0x94000000, mapped to 0x000000008793c84d, using 768k, total 768k Nov 8 01:18:13.061058 kernel: Console: switching to colour frame buffer device 128x48 Nov 8 01:18:13.061064 kernel: fb0: VESA VGA frame buffer device Nov 8 01:18:13.061070 kernel: NET: Registered PF_INET6 protocol family Nov 8 01:18:13.061076 kernel: Segment Routing with IPv6 Nov 8 01:18:13.061082 kernel: In-situ OAM (IOAM) with IPv6 Nov 8 
01:18:13.061088 kernel: NET: Registered PF_PACKET protocol family Nov 8 01:18:13.061093 kernel: Key type dns_resolver registered Nov 8 01:18:13.061100 kernel: microcode: Current revision: 0x000000fc Nov 8 01:18:13.061106 kernel: microcode: Updated early from: 0x000000f4 Nov 8 01:18:13.061112 kernel: microcode: Microcode Update Driver: v2.2. Nov 8 01:18:13.061118 kernel: IPI shorthand broadcast: enabled Nov 8 01:18:13.061124 kernel: sched_clock: Marking stable (1568000685, 1369131625)->(4408011168, -1470878858) Nov 8 01:18:13.061129 kernel: registered taskstats version 1 Nov 8 01:18:13.061135 kernel: Loading compiled-in X.509 certificates Nov 8 01:18:13.061141 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.113-flatcar: cf7a35a152685ec84a621291e4ce58c959319dfd' Nov 8 01:18:13.061147 kernel: Key type .fscrypt registered Nov 8 01:18:13.061153 kernel: Key type fscrypt-provisioning registered Nov 8 01:18:13.061159 kernel: ima: Allocated hash algorithm: sha1 Nov 8 01:18:13.061165 kernel: ima: No architecture policies found Nov 8 01:18:13.061171 kernel: clk: Disabling unused clocks Nov 8 01:18:13.061177 kernel: Freeing unused kernel image (initmem) memory: 42880K Nov 8 01:18:13.061182 kernel: Write protecting the kernel read-only data: 36864k Nov 8 01:18:13.061188 kernel: Freeing unused kernel image (rodata/data gap) memory: 1828K Nov 8 01:18:13.061194 kernel: Run /init as init process Nov 8 01:18:13.061200 kernel: with arguments: Nov 8 01:18:13.061207 kernel: /init Nov 8 01:18:13.061212 kernel: with environment: Nov 8 01:18:13.061218 kernel: HOME=/ Nov 8 01:18:13.061224 kernel: TERM=linux Nov 8 01:18:13.061231 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) 
Nov 8 01:18:13.061238 systemd[1]: Detected architecture x86-64. Nov 8 01:18:13.061244 systemd[1]: Running in initrd. Nov 8 01:18:13.061250 systemd[1]: No hostname configured, using default hostname. Nov 8 01:18:13.061257 systemd[1]: Hostname set to . Nov 8 01:18:13.061263 systemd[1]: Initializing machine ID from random generator. Nov 8 01:18:13.061269 systemd[1]: Queued start job for default target initrd.target. Nov 8 01:18:13.061275 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 8 01:18:13.061281 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 8 01:18:13.061290 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Nov 8 01:18:13.061296 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 8 01:18:13.061303 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Nov 8 01:18:13.061310 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Nov 8 01:18:13.061317 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Nov 8 01:18:13.061323 kernel: tsc: Refined TSC clocksource calibration: 3407.998 MHz Nov 8 01:18:13.061329 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd208cfc, max_idle_ns: 440795283699 ns Nov 8 01:18:13.061335 kernel: clocksource: Switched to clocksource tsc Nov 8 01:18:13.061341 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Nov 8 01:18:13.061348 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 8 01:18:13.061355 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 8 01:18:13.061361 systemd[1]: Reached target paths.target - Path Units. 
Nov 8 01:18:13.061367 systemd[1]: Reached target slices.target - Slice Units. Nov 8 01:18:13.061373 systemd[1]: Reached target swap.target - Swaps. Nov 8 01:18:13.061379 systemd[1]: Reached target timers.target - Timer Units. Nov 8 01:18:13.061385 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Nov 8 01:18:13.061391 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 8 01:18:13.061397 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Nov 8 01:18:13.061404 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Nov 8 01:18:13.061410 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 8 01:18:13.061416 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 8 01:18:13.061422 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 8 01:18:13.061428 systemd[1]: Reached target sockets.target - Socket Units. Nov 8 01:18:13.061434 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Nov 8 01:18:13.061441 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 8 01:18:13.061447 systemd[1]: Finished network-cleanup.service - Network Cleanup. Nov 8 01:18:13.061454 systemd[1]: Starting systemd-fsck-usr.service... Nov 8 01:18:13.061460 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 8 01:18:13.061476 systemd-journald[268]: Collecting audit messages is disabled. Nov 8 01:18:13.061491 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 8 01:18:13.061498 systemd-journald[268]: Journal started Nov 8 01:18:13.061512 systemd-journald[268]: Runtime Journal (/run/log/journal/ac093926137148df86804e545a58dbfa) is 8.0M, max 639.9M, 631.9M free. Nov 8 01:18:13.104305 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Nov 8 01:18:13.104344 systemd-modules-load[270]: Inserted module 'overlay' Nov 8 01:18:13.133849 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Nov 8 01:18:13.199536 systemd[1]: Started systemd-journald.service - Journal Service. Nov 8 01:18:13.199550 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Nov 8 01:18:13.199559 kernel: Bridge firewalling registered Nov 8 01:18:13.176008 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 8 01:18:13.194633 systemd-modules-load[270]: Inserted module 'br_netfilter' Nov 8 01:18:13.211623 systemd[1]: Finished systemd-fsck-usr.service. Nov 8 01:18:13.236570 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 8 01:18:13.261600 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 8 01:18:13.284554 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 8 01:18:13.296990 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 8 01:18:13.298557 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 8 01:18:13.300256 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 8 01:18:13.305576 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 8 01:18:13.306192 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 8 01:18:13.306307 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 8 01:18:13.307092 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 8 01:18:13.307987 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Nov 8 01:18:13.311078 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 8 01:18:13.322527 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 8 01:18:13.325673 systemd-resolved[302]: Positive Trust Anchors: Nov 8 01:18:13.325679 systemd-resolved[302]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 8 01:18:13.325702 systemd-resolved[302]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 8 01:18:13.327288 systemd-resolved[302]: Defaulting to hostname 'linux'. Nov 8 01:18:13.355554 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 8 01:18:13.372674 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 8 01:18:13.406568 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... 
Nov 8 01:18:13.527376 dracut-cmdline[308]: dracut-dracut-053 Nov 8 01:18:13.534538 dracut-cmdline[308]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=480a02cf7f2001774aa495c3e719d4173e968e6839485a7d2b207ef2facca472 Nov 8 01:18:13.739327 kernel: SCSI subsystem initialized Nov 8 01:18:13.762317 kernel: Loading iSCSI transport class v2.0-870. Nov 8 01:18:13.785347 kernel: iscsi: registered transport (tcp) Nov 8 01:18:13.817678 kernel: iscsi: registered transport (qla4xxx) Nov 8 01:18:13.817695 kernel: QLogic iSCSI HBA Driver Nov 8 01:18:13.850615 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Nov 8 01:18:13.876636 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Nov 8 01:18:13.935079 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Nov 8 01:18:13.935102 kernel: device-mapper: uevent: version 1.0.3 Nov 8 01:18:13.954624 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Nov 8 01:18:14.012359 kernel: raid6: avx2x4 gen() 53421 MB/s Nov 8 01:18:14.044360 kernel: raid6: avx2x2 gen() 53257 MB/s Nov 8 01:18:14.080628 kernel: raid6: avx2x1 gen() 45292 MB/s Nov 8 01:18:14.080645 kernel: raid6: using algorithm avx2x4 gen() 53421 MB/s Nov 8 01:18:14.127696 kernel: raid6: .... 
xor() 18532 MB/s, rmw enabled Nov 8 01:18:14.127713 kernel: raid6: using avx2x2 recovery algorithm Nov 8 01:18:14.168291 kernel: xor: automatically using best checksumming function avx Nov 8 01:18:14.286323 kernel: Btrfs loaded, zoned=no, fsverity=no Nov 8 01:18:14.291691 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Nov 8 01:18:14.319609 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 8 01:18:14.326493 systemd-udevd[494]: Using default interface naming scheme 'v255'. Nov 8 01:18:14.330401 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 8 01:18:14.363514 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Nov 8 01:18:14.410959 dracut-pre-trigger[507]: rd.md=0: removing MD RAID activation Nov 8 01:18:14.428781 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Nov 8 01:18:14.452637 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 8 01:18:14.538475 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 8 01:18:14.572607 kernel: pps_core: LinuxPPS API ver. 1 registered Nov 8 01:18:14.572675 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Nov 8 01:18:14.593291 kernel: cryptd: max_cpu_qlen set to 1000 Nov 8 01:18:14.593335 kernel: libata version 3.00 loaded. Nov 8 01:18:14.615309 kernel: PTP clock support registered Nov 8 01:18:14.615365 kernel: ACPI: bus type USB registered Nov 8 01:18:14.637573 kernel: usbcore: registered new interface driver usbfs Nov 8 01:18:14.644642 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Nov 8 01:18:14.691382 kernel: usbcore: registered new interface driver hub Nov 8 01:18:14.691397 kernel: usbcore: registered new device driver usb Nov 8 01:18:14.691406 kernel: AVX2 version of gcm_enc/dec engaged. 
Nov 8 01:18:14.677113 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Nov 8 01:18:14.756395 kernel: AES CTR mode by8 optimization enabled
Nov 8 01:18:14.756432 kernel: ahci 0000:00:17.0: version 3.0
Nov 8 01:18:14.756783 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller
Nov 8 01:18:14.757092 kernel: ahci 0000:00:17.0: AHCI 0001.0301 32 slots 7 ports 6 Gbps 0x7f impl SATA mode
Nov 8 01:18:14.757407 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 1
Nov 8 01:18:14.757689 kernel: ahci 0000:00:17.0: flags: 64bit ncq sntf clo only pio slum part ems deso sadm sds apst
Nov 8 01:18:14.757982 kernel: xhci_hcd 0000:00:14.0: hcc params 0x200077c1 hci version 0x110 quirks 0x0000000000009810
Nov 8 01:18:14.708519 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 8 01:18:15.140405 kernel: scsi host0: ahci
Nov 8 01:18:15.140489 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller
Nov 8 01:18:15.140562 kernel: scsi host1: ahci
Nov 8 01:18:15.140627 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 2
Nov 8 01:18:15.140693 kernel: scsi host2: ahci
Nov 8 01:18:15.140759 kernel: xhci_hcd 0000:00:14.0: Host supports USB 3.1 Enhanced SuperSpeed
Nov 8 01:18:15.140826 kernel: scsi host3: ahci
Nov 8 01:18:15.140888 kernel: hub 1-0:1.0: USB hub found
Nov 8 01:18:15.140963 kernel: scsi host4: ahci
Nov 8 01:18:15.141027 kernel: hub 1-0:1.0: 16 ports detected
Nov 8 01:18:15.141095 kernel: scsi host5: ahci
Nov 8 01:18:15.141158 kernel: hub 2-0:1.0: USB hub found
Nov 8 01:18:15.141235 kernel: scsi host6: ahci
Nov 8 01:18:15.141301 kernel: hub 2-0:1.0: 10 ports detected
Nov 8 01:18:15.141371 kernel: ata1: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516100 irq 127
Nov 8 01:18:15.141380 kernel: ata2: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516180 irq 127
Nov 8 01:18:15.141387 kernel: ata3: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516200 irq 127
Nov 8 01:18:15.141394 kernel: ata4: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516280 irq 127
Nov 8 01:18:15.141401 kernel: ata5: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516300 irq 127
Nov 8 01:18:15.141408 kernel: ata6: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516380 irq 127
Nov 8 01:18:15.141418 kernel: ata7: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516400 irq 127
Nov 8 01:18:15.141425 kernel: usb 1-14: new high-speed USB device number 2 using xhci_hcd
Nov 8 01:18:15.141440 kernel: igb: Intel(R) Gigabit Ethernet Network Driver
Nov 8 01:18:15.045166 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 8 01:18:15.180401 kernel: igb: Copyright (c) 2007-2014 Intel Corporation.
Nov 8 01:18:15.180413 kernel: igb 0000:03:00.0: added PHC on eth0
Nov 8 01:18:15.166541 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 8 01:18:15.278388 kernel: igb 0000:03:00.0: Intel(R) Gigabit Ethernet Network Connection
Nov 8 01:18:15.278749 kernel: igb 0000:03:00.0: eth0: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:6a:32:40
Nov 8 01:18:15.279014 kernel: igb 0000:03:00.0: eth0: PBA No: 010000-000
Nov 8 01:18:15.279322 kernel: igb 0000:03:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s)
Nov 8 01:18:15.279622 kernel: mlx5_core 0000:01:00.0: firmware version: 14.27.1016
Nov 8 01:18:15.279928 kernel: hub 1-14:1.0: USB hub found
Nov 8 01:18:15.280263 kernel: mlx5_core 0000:01:00.0: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link)
Nov 8 01:18:15.280564 kernel: hub 1-14:1.0: 4 ports detected
Nov 8 01:18:15.280887 kernel: igb 0000:04:00.0: added PHC on eth1
Nov 8 01:18:15.281138 kernel: igb 0000:04:00.0: Intel(R) Gigabit Ethernet Network Connection
Nov 8 01:18:15.281314 kernel: igb 0000:04:00.0: eth1: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:6a:32:41
Nov 8 01:18:15.281486 kernel: igb 0000:04:00.0: eth1: PBA No: 010000-000
Nov 8 01:18:15.281658 kernel: igb 0000:04:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s)
Nov 8 01:18:15.245698 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Nov 8 01:18:15.422557 kernel: ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
Nov 8 01:18:15.422574 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Nov 8 01:18:15.422584 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Nov 8 01:18:15.422593 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Nov 8 01:18:15.245813 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 8 01:18:15.497336 kernel: ata3: SATA link down (SStatus 0 SControl 300)
Nov 8 01:18:15.497351 kernel: ata2: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
Nov 8 01:18:15.497360 kernel: ata1.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133
Nov 8 01:18:15.497370 kernel: ata7: SATA link down (SStatus 0 SControl 300)
Nov 8 01:18:15.497379 kernel: ata2.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133
Nov 8 01:18:15.443096 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 8 01:18:15.604379 kernel: ata1.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA
Nov 8 01:18:15.604392 kernel: mlx5_core 0000:01:00.0: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384)
Nov 8 01:18:15.604483 kernel: ata2.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA
Nov 8 01:18:15.604492 kernel: mlx5_core 0000:01:00.0: Port module event: module 0, Cable plugged
Nov 8 01:18:15.604567 kernel: ata1.00: Features: NCQ-prio
Nov 8 01:18:15.604576 kernel: ata2.00: Features: NCQ-prio
Nov 8 01:18:15.555531 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Nov 8 01:18:15.685386 kernel: ata1.00: configured for UDMA/133
Nov 8 01:18:15.685399 kernel: usb 1-14.1: new low-speed USB device number 3 using xhci_hcd
Nov 8 01:18:15.685417 kernel: scsi 0:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5
Nov 8 01:18:15.685499 kernel: ata2.00: configured for UDMA/133
Nov 8 01:18:15.685508 kernel: scsi 1:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5
Nov 8 01:18:15.616318 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 8 01:18:15.616430 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 8 01:18:15.709379 kernel: igb 0000:04:00.0 eno2: renamed from eth1
Nov 8 01:18:15.685997 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Nov 8 01:18:15.939660 kernel: ata2.00: Enabling discard_zeroes_data
Nov 8 01:18:15.939681 kernel: ata1.00: Enabling discard_zeroes_data
Nov 8 01:18:15.939691 kernel: sd 1:0:0:0: [sda] 937703088 512-byte logical blocks: (480 GB/447 GiB)
Nov 8 01:18:15.939803 kernel: sd 0:0:0:0: [sdb] 937703088 512-byte logical blocks: (480 GB/447 GiB)
Nov 8 01:18:15.939888 kernel: igb 0000:03:00.0 eno1: renamed from eth0
Nov 8 01:18:15.939976 kernel: sd 1:0:0:0: [sda] 4096-byte physical blocks
Nov 8 01:18:15.940053 kernel: sd 0:0:0:0: [sdb] 4096-byte physical blocks
Nov 8 01:18:15.940130 kernel: sd 1:0:0:0: [sda] Write Protect is off
Nov 8 01:18:15.940205 kernel: sd 0:0:0:0: [sdb] Write Protect is off
Nov 8 01:18:15.940283 kernel: sd 1:0:0:0: [sda] Mode Sense: 00 3a 00 00
Nov 8 01:18:15.940364 kernel: sd 0:0:0:0: [sdb] Mode Sense: 00 3a 00 00
Nov 8 01:18:15.940439 kernel: sd 1:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Nov 8 01:18:15.940514 kernel: sd 0:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Nov 8 01:18:15.940589 kernel: sd 1:0:0:0: [sda] Preferred minimum I/O size 4096 bytes
Nov 8 01:18:15.940664 kernel: sd 0:0:0:0: [sdb] Preferred minimum I/O size 4096 bytes
Nov 8 01:18:15.940739 kernel: ata2.00: Enabling discard_zeroes_data
Nov 8 01:18:15.940751 kernel: ata1.00: Enabling discard_zeroes_data
Nov 8 01:18:15.940761 kernel: sd 0:0:0:0: [sdb] Attached SCSI disk
Nov 8 01:18:15.940836 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Nov 8 01:18:15.923747 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 8 01:18:16.107898 kernel: mlx5_core 0000:01:00.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic)
Nov 8 01:18:16.108022 kernel: GPT:9289727 != 937703087
Nov 8 01:18:16.108039 kernel: mlx5_core 0000:01:00.1: firmware version: 14.27.1016
Nov 8 01:18:16.108162 kernel: GPT:Alternate GPT header not at the end of the disk.
Nov 8 01:18:16.108176 kernel: GPT:9289727 != 937703087
Nov 8 01:18:16.108189 kernel: mlx5_core 0000:01:00.1: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link)
Nov 8 01:18:16.108304 kernel: GPT: Use GNU Parted to correct GPT errors.
Nov 8 01:18:16.108319 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Nov 8 01:18:16.108333 kernel: sd 1:0:0:0: [sda] Attached SCSI disk
Nov 8 01:18:16.108444 kernel: hid: raw HID events driver (C) Jiri Kosina
Nov 8 01:18:16.108402 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Nov 8 01:18:16.169664 kernel: usbcore: registered new interface driver usbhid
Nov 8 01:18:16.169679 kernel: usbhid: USB HID core driver
Nov 8 01:18:16.169688 kernel: BTRFS: device fsid a2737782-a37e-42f9-8b56-489a87f47acc devid 1 transid 35 /dev/sda3 scanned by (udev-worker) (668)
Nov 8 01:18:16.169696 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by (udev-worker) (669)
Nov 8 01:18:16.170324 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.0/0003:0557:2419.0001/input/input0
Nov 8 01:18:16.190766 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Micron_5300_MTFDDAK480TDT EFI-SYSTEM.
Nov 8 01:18:16.225249 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Micron_5300_MTFDDAK480TDT ROOT.
Nov 8 01:18:16.288390 kernel: mlx5_core 0000:01:00.1: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384)
Nov 8 01:18:16.288487 kernel: mlx5_core 0000:01:00.1: Port module event: module 1, Cable plugged
Nov 8 01:18:16.263433 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 8 01:18:16.394181 kernel: hid-generic 0003:0557:2419.0001: input,hidraw0: USB HID v1.00 Keyboard [HID 0557:2419] on usb-0000:00:14.0-14.1/input0
Nov 8 01:18:16.394340 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.1/0003:0557:2419.0002/input/input1
Nov 8 01:18:16.394350 kernel: hid-generic 0003:0557:2419.0002: input,hidraw1: USB HID v1.00 Mouse [HID 0557:2419] on usb-0000:00:14.0-14.1/input1
Nov 8 01:18:16.362728 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Micron_5300_MTFDDAK480TDT USR-A.
Nov 8 01:18:16.405533 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Micron_5300_MTFDDAK480TDT USR-A.
Nov 8 01:18:16.409244 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Micron_5300_MTFDDAK480TDT OEM.
Nov 8 01:18:16.455483 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Nov 8 01:18:16.474572 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 8 01:18:16.515372 kernel: ata2.00: Enabling discard_zeroes_data
Nov 8 01:18:16.515388 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Nov 8 01:18:16.515517 disk-uuid[720]: Primary Header is updated.
Nov 8 01:18:16.515517 disk-uuid[720]: Secondary Entries is updated.
Nov 8 01:18:16.515517 disk-uuid[720]: Secondary Header is updated.
Nov 8 01:18:16.533710 kernel: ata2.00: Enabling discard_zeroes_data
Nov 8 01:18:16.549524 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 8 01:18:16.638403 kernel: GPT:disk_guids don't match.
Nov 8 01:18:16.638415 kernel: GPT: Use GNU Parted to correct GPT errors.
Nov 8 01:18:16.638426 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Nov 8 01:18:16.638433 kernel: ata2.00: Enabling discard_zeroes_data
Nov 8 01:18:16.638440 kernel: mlx5_core 0000:01:00.1: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic)
Nov 8 01:18:16.638530 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Nov 8 01:18:16.661346 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: renamed from eth0
Nov 8 01:18:16.690335 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: renamed from eth1
Nov 8 01:18:17.567597 kernel: ata2.00: Enabling discard_zeroes_data
Nov 8 01:18:17.587089 disk-uuid[721]: The operation has completed successfully.
Nov 8 01:18:17.595433 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Nov 8 01:18:17.630933 systemd[1]: disk-uuid.service: Deactivated successfully.
Nov 8 01:18:17.630984 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Nov 8 01:18:17.662600 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Nov 8 01:18:17.687486 sh[749]: Success
Nov 8 01:18:17.697410 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Nov 8 01:18:17.749851 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Nov 8 01:18:17.771425 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Nov 8 01:18:17.779622 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Nov 8 01:18:17.856850 kernel: BTRFS info (device dm-0): first mount of filesystem a2737782-a37e-42f9-8b56-489a87f47acc
Nov 8 01:18:17.856872 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Nov 8 01:18:17.879027 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Nov 8 01:18:17.898727 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Nov 8 01:18:17.917490 kernel: BTRFS info (device dm-0): using free space tree
Nov 8 01:18:17.957318 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Nov 8 01:18:17.958903 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Nov 8 01:18:17.968751 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Nov 8 01:18:17.976575 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Nov 8 01:18:18.127556 kernel: BTRFS info (device sda6): first mount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e
Nov 8 01:18:18.127570 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Nov 8 01:18:18.127579 kernel: BTRFS info (device sda6): using free space tree
Nov 8 01:18:18.127586 kernel: BTRFS info (device sda6): enabling ssd optimizations
Nov 8 01:18:18.127593 kernel: BTRFS info (device sda6): auto enabling async discard
Nov 8 01:18:18.127604 kernel: BTRFS info (device sda6): last unmount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e
Nov 8 01:18:18.123633 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Nov 8 01:18:18.138838 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Nov 8 01:18:18.174510 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Nov 8 01:18:18.185705 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 8 01:18:18.225423 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 8 01:18:18.240861 systemd-networkd[933]: lo: Link UP
Nov 8 01:18:18.244014 ignition[910]: Ignition 2.19.0
Nov 8 01:18:18.240863 systemd-networkd[933]: lo: Gained carrier
Nov 8 01:18:18.244018 ignition[910]: Stage: fetch-offline
Nov 8 01:18:18.243478 systemd-networkd[933]: Enumeration completed
Nov 8 01:18:18.244038 ignition[910]: no configs at "/usr/lib/ignition/base.d"
Nov 8 01:18:18.243574 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 8 01:18:18.244043 ignition[910]: no config dir at "/usr/lib/ignition/base.platform.d/packet"
Nov 8 01:18:18.244187 systemd-networkd[933]: eno1: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 8 01:18:18.244097 ignition[910]: parsed url from cmdline: ""
Nov 8 01:18:18.246220 unknown[910]: fetched base config from "system"
Nov 8 01:18:18.244099 ignition[910]: no config URL provided
Nov 8 01:18:18.246232 unknown[910]: fetched user config from "system"
Nov 8 01:18:18.244102 ignition[910]: reading system config file "/usr/lib/ignition/user.ign"
Nov 8 01:18:18.256689 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 8 01:18:18.244125 ignition[910]: parsing config with SHA512: 6e9eb37395a719163a6edd5ec05d4e2d2bd74d77c2931bc2a56712b2ac858d33f169269ae288eb510f87d8837f6e26a890cd07634be73968690ab08c35959fe4
Nov 8 01:18:18.274773 systemd-networkd[933]: eno2: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 8 01:18:18.246907 ignition[910]: fetch-offline: fetch-offline passed
Nov 8 01:18:18.275515 systemd[1]: Reached target network.target - Network.
Nov 8 01:18:18.246910 ignition[910]: POST message to Packet Timeline
Nov 8 01:18:18.288580 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Nov 8 01:18:18.246913 ignition[910]: POST Status error: resource requires networking
Nov 8 01:18:18.305735 systemd-networkd[933]: enp1s0f0np0: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 8 01:18:18.246957 ignition[910]: Ignition finished successfully
Nov 8 01:18:18.306680 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Nov 8 01:18:18.336982 ignition[947]: Ignition 2.19.0
Nov 8 01:18:18.515470 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: Link up
Nov 8 01:18:18.510350 systemd-networkd[933]: enp1s0f1np1: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 8 01:18:18.337001 ignition[947]: Stage: kargs
Nov 8 01:18:18.337490 ignition[947]: no configs at "/usr/lib/ignition/base.d"
Nov 8 01:18:18.337520 ignition[947]: no config dir at "/usr/lib/ignition/base.platform.d/packet"
Nov 8 01:18:18.340010 ignition[947]: kargs: kargs passed
Nov 8 01:18:18.340023 ignition[947]: POST message to Packet Timeline
Nov 8 01:18:18.340058 ignition[947]: GET https://metadata.packet.net/metadata: attempt #1
Nov 8 01:18:18.341923 ignition[947]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:34577->[::1]:53: read: connection refused
Nov 8 01:18:18.542764 ignition[947]: GET https://metadata.packet.net/metadata: attempt #2
Nov 8 01:18:18.543654 ignition[947]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:46579->[::1]:53: read: connection refused
Nov 8 01:18:18.725324 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: Link up
Nov 8 01:18:18.726493 systemd-networkd[933]: eno1: Link UP
Nov 8 01:18:18.726624 systemd-networkd[933]: eno2: Link UP
Nov 8 01:18:18.726741 systemd-networkd[933]: enp1s0f0np0: Link UP
Nov 8 01:18:18.726890 systemd-networkd[933]: enp1s0f0np0: Gained carrier
Nov 8 01:18:18.737435 systemd-networkd[933]: enp1s0f1np1: Link UP
Nov 8 01:18:18.759417 systemd-networkd[933]: enp1s0f0np0: DHCPv4 address 139.178.94.41/31, gateway 139.178.94.40 acquired from 145.40.83.140
Nov 8 01:18:18.944191 ignition[947]: GET https://metadata.packet.net/metadata: attempt #3
Nov 8 01:18:18.945255 ignition[947]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:42971->[::1]:53: read: connection refused
Nov 8 01:18:19.521049 systemd-networkd[933]: enp1s0f1np1: Gained carrier
Nov 8 01:18:19.745794 ignition[947]: GET https://metadata.packet.net/metadata: attempt #4
Nov 8 01:18:19.746925 ignition[947]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:46753->[::1]:53: read: connection refused
Nov 8 01:18:20.032933 systemd-networkd[933]: enp1s0f0np0: Gained IPv6LL
Nov 8 01:18:21.348642 ignition[947]: GET https://metadata.packet.net/metadata: attempt #5
Nov 8 01:18:21.349623 ignition[947]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:55583->[::1]:53: read: connection refused
Nov 8 01:18:21.376809 systemd-networkd[933]: enp1s0f1np1: Gained IPv6LL
Nov 8 01:18:24.552267 ignition[947]: GET https://metadata.packet.net/metadata: attempt #6
Nov 8 01:18:25.745557 ignition[947]: GET result: OK
Nov 8 01:18:26.150433 ignition[947]: Ignition finished successfully
Nov 8 01:18:26.155596 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Nov 8 01:18:26.184532 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Nov 8 01:18:26.190790 ignition[967]: Ignition 2.19.0
Nov 8 01:18:26.190795 ignition[967]: Stage: disks
Nov 8 01:18:26.190906 ignition[967]: no configs at "/usr/lib/ignition/base.d"
Nov 8 01:18:26.190914 ignition[967]: no config dir at "/usr/lib/ignition/base.platform.d/packet"
Nov 8 01:18:26.191494 ignition[967]: disks: disks passed
Nov 8 01:18:26.191497 ignition[967]: POST message to Packet Timeline
Nov 8 01:18:26.191506 ignition[967]: GET https://metadata.packet.net/metadata: attempt #1
Nov 8 01:18:27.062457 ignition[967]: GET result: OK
Nov 8 01:18:27.499542 ignition[967]: Ignition finished successfully
Nov 8 01:18:27.502045 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Nov 8 01:18:27.518551 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Nov 8 01:18:27.537715 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Nov 8 01:18:27.558707 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 8 01:18:27.580706 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 8 01:18:27.601602 systemd[1]: Reached target basic.target - Basic System.
Nov 8 01:18:27.631543 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Nov 8 01:18:27.672240 systemd-fsck[984]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Nov 8 01:18:27.681896 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Nov 8 01:18:27.704561 systemd[1]: Mounting sysroot.mount - /sysroot...
Nov 8 01:18:27.808289 kernel: EXT4-fs (sda9): mounted filesystem 3cd35b5c-4e0e-45c1-abc9-cf70eebd42df r/w with ordered data mode. Quota mode: none.
Nov 8 01:18:27.808571 systemd[1]: Mounted sysroot.mount - /sysroot.
Nov 8 01:18:27.818807 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Nov 8 01:18:27.835582 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 8 01:18:27.861223 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Nov 8 01:18:27.909346 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by mount (993)
Nov 8 01:18:27.909360 kernel: BTRFS info (device sda6): first mount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e
Nov 8 01:18:27.877823 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Nov 8 01:18:27.991634 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Nov 8 01:18:27.991645 kernel: BTRFS info (device sda6): using free space tree
Nov 8 01:18:27.991656 kernel: BTRFS info (device sda6): enabling ssd optimizations
Nov 8 01:18:27.991663 kernel: BTRFS info (device sda6): auto enabling async discard
Nov 8 01:18:28.002915 systemd[1]: Starting flatcar-static-network.service - Flatcar Static Network Agent...
Nov 8 01:18:28.010745 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Nov 8 01:18:28.010839 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 8 01:18:28.085428 coreos-metadata[995]: Nov 08 01:18:28.068 INFO Fetching https://metadata.packet.net/metadata: Attempt #1
Nov 8 01:18:28.030279 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 8 01:18:28.118392 coreos-metadata[1011]: Nov 08 01:18:28.090 INFO Fetching https://metadata.packet.net/metadata: Attempt #1
Nov 8 01:18:28.057541 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Nov 8 01:18:28.079097 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Nov 8 01:18:28.150549 initrd-setup-root[1025]: cut: /sysroot/etc/passwd: No such file or directory
Nov 8 01:18:28.161401 initrd-setup-root[1032]: cut: /sysroot/etc/group: No such file or directory
Nov 8 01:18:28.172392 initrd-setup-root[1039]: cut: /sysroot/etc/shadow: No such file or directory
Nov 8 01:18:28.182401 initrd-setup-root[1046]: cut: /sysroot/etc/gshadow: No such file or directory
Nov 8 01:18:28.197522 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Nov 8 01:18:28.222528 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Nov 8 01:18:28.259504 kernel: BTRFS info (device sda6): last unmount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e
Nov 8 01:18:28.242120 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Nov 8 01:18:28.269145 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Nov 8 01:18:28.290691 ignition[1113]: INFO : Ignition 2.19.0
Nov 8 01:18:28.290691 ignition[1113]: INFO : Stage: mount
Nov 8 01:18:28.297434 ignition[1113]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 8 01:18:28.297434 ignition[1113]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet"
Nov 8 01:18:28.297434 ignition[1113]: INFO : mount: mount passed
Nov 8 01:18:28.297434 ignition[1113]: INFO : POST message to Packet Timeline
Nov 8 01:18:28.297434 ignition[1113]: INFO : GET https://metadata.packet.net/metadata: attempt #1
Nov 8 01:18:28.295231 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Nov 8 01:18:29.083431 coreos-metadata[1011]: Nov 08 01:18:29.083 INFO Fetch successful
Nov 8 01:18:29.165592 systemd[1]: flatcar-static-network.service: Deactivated successfully.
Nov 8 01:18:29.165652 systemd[1]: Finished flatcar-static-network.service - Flatcar Static Network Agent.
Nov 8 01:18:29.248204 ignition[1113]: INFO : GET result: OK
Nov 8 01:18:29.395776 coreos-metadata[995]: Nov 08 01:18:29.395 INFO Fetch successful
Nov 8 01:18:29.455421 coreos-metadata[995]: Nov 08 01:18:29.455 INFO wrote hostname ci-4081.3.6-n-8acfe54808 to /sysroot/etc/hostname
Nov 8 01:18:29.457109 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Nov 8 01:18:30.104490 ignition[1113]: INFO : Ignition finished successfully
Nov 8 01:18:30.107565 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Nov 8 01:18:30.140514 systemd[1]: Starting ignition-files.service - Ignition (files)...
Nov 8 01:18:30.150647 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 8 01:18:30.215309 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (1139)
Nov 8 01:18:30.244879 kernel: BTRFS info (device sda6): first mount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e
Nov 8 01:18:30.244895 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Nov 8 01:18:30.262663 kernel: BTRFS info (device sda6): using free space tree
Nov 8 01:18:30.300821 kernel: BTRFS info (device sda6): enabling ssd optimizations
Nov 8 01:18:30.300837 kernel: BTRFS info (device sda6): auto enabling async discard
Nov 8 01:18:30.313996 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 8 01:18:30.341130 ignition[1156]: INFO : Ignition 2.19.0
Nov 8 01:18:30.341130 ignition[1156]: INFO : Stage: files
Nov 8 01:18:30.356563 ignition[1156]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 8 01:18:30.356563 ignition[1156]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet"
Nov 8 01:18:30.356563 ignition[1156]: DEBUG : files: compiled without relabeling support, skipping
Nov 8 01:18:30.356563 ignition[1156]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Nov 8 01:18:30.356563 ignition[1156]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Nov 8 01:18:30.356563 ignition[1156]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Nov 8 01:18:30.356563 ignition[1156]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Nov 8 01:18:30.356563 ignition[1156]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Nov 8 01:18:30.356563 ignition[1156]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Nov 8 01:18:30.356563 ignition[1156]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
Nov 8 01:18:30.345736 unknown[1156]: wrote ssh authorized keys file for user: core
Nov 8 01:18:30.490576 ignition[1156]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Nov 8 01:18:30.490576 ignition[1156]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Nov 8 01:18:30.490576 ignition[1156]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Nov 8 01:18:30.490576 ignition[1156]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Nov 8 01:18:30.490576 ignition[1156]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Nov 8 01:18:30.490576 ignition[1156]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Nov 8 01:18:30.490576 ignition[1156]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 8 01:18:30.490576 ignition[1156]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 8 01:18:30.490576 ignition[1156]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 8 01:18:30.490576 ignition[1156]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 8 01:18:30.490576 ignition[1156]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Nov 8 01:18:30.490576 ignition[1156]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Nov 8 01:18:30.490576 ignition[1156]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Nov 8 01:18:30.490576 ignition[1156]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Nov 8 01:18:30.490576 ignition[1156]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Nov 8 01:18:30.739645 ignition[1156]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1
Nov 8 01:18:31.002461 ignition[1156]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Nov 8 01:18:32.219592 ignition[1156]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Nov 8 01:18:32.219592 ignition[1156]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Nov 8 01:18:32.249539 ignition[1156]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 8 01:18:32.249539 ignition[1156]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 8 01:18:32.249539 ignition[1156]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Nov 8 01:18:32.249539 ignition[1156]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Nov 8 01:18:32.249539 ignition[1156]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Nov 8 01:18:32.249539 ignition[1156]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Nov 8 01:18:32.249539 ignition[1156]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Nov 8 01:18:32.249539 ignition[1156]: INFO : files: files passed
Nov 8 01:18:32.249539 ignition[1156]: INFO : POST message to Packet Timeline
Nov 8 01:18:32.249539 ignition[1156]: INFO : GET https://metadata.packet.net/metadata: attempt #1
Nov 8 01:18:33.231129 ignition[1156]: INFO : GET result: OK
Nov 8 01:18:33.612817 ignition[1156]: INFO : Ignition finished successfully
Nov 8 01:18:33.616867 systemd[1]: Finished ignition-files.service - Ignition (files).
Nov 8 01:18:33.645562 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Nov 8 01:18:33.655893 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Nov 8 01:18:33.665647 systemd[1]: ignition-quench.service: Deactivated successfully.
Nov 8 01:18:33.665707 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Nov 8 01:18:33.698876 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 8 01:18:33.718816 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Nov 8 01:18:33.758481 initrd-setup-root-after-ignition[1195]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 8 01:18:33.758481 initrd-setup-root-after-ignition[1195]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Nov 8 01:18:33.755533 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Nov 8 01:18:33.809596 initrd-setup-root-after-ignition[1199]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 8 01:18:33.832548 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Nov 8 01:18:33.832609 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Nov 8 01:18:33.850710 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Nov 8 01:18:33.871490 systemd[1]: Reached target initrd.target - Initrd Default Target.
Nov 8 01:18:33.892823 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Nov 8 01:18:33.907720 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Nov 8 01:18:33.984667 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 8 01:18:34.017031 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Nov 8 01:18:34.036785 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Nov 8 01:18:34.040518 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 8 01:18:34.071610 systemd[1]: Stopped target timers.target - Timer Units.
Nov 8 01:18:34.089668 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Nov 8 01:18:34.089864 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 8 01:18:34.117148 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Nov 8 01:18:34.137923 systemd[1]: Stopped target basic.target - Basic System.
Nov 8 01:18:34.156882 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Nov 8 01:18:34.175018 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 8 01:18:34.195901 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Nov 8 01:18:34.216897 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Nov 8 01:18:34.237022 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 8 01:18:34.257901 systemd[1]: Stopped target sysinit.target - System Initialization.
Nov 8 01:18:34.279022 systemd[1]: Stopped target local-fs.target - Local File Systems.
Nov 8 01:18:34.299006 systemd[1]: Stopped target swap.target - Swaps.
Nov 8 01:18:34.317786 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Nov 8 01:18:34.318194 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Nov 8 01:18:34.344108 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Nov 8 01:18:34.363923 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 8 01:18:34.384775 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Nov 8 01:18:34.385240 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 8 01:18:34.407890 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Nov 8 01:18:34.408312 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Nov 8 01:18:34.439854 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Nov 8 01:18:34.440339 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 8 01:18:34.460098 systemd[1]: Stopped target paths.target - Path Units.
Nov 8 01:18:34.477766 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Nov 8 01:18:34.478186 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 8 01:18:34.498912 systemd[1]: Stopped target slices.target - Slice Units.
Nov 8 01:18:34.516910 systemd[1]: Stopped target sockets.target - Socket Units.
Nov 8 01:18:34.535867 systemd[1]: iscsid.socket: Deactivated successfully.
Nov 8 01:18:34.536176 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Nov 8 01:18:34.556043 systemd[1]: iscsiuio.socket: Deactivated successfully.
Nov 8 01:18:34.556383 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 8 01:18:34.578952 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Nov 8 01:18:34.579373 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 8 01:18:34.598001 systemd[1]: ignition-files.service: Deactivated successfully.
Nov 8 01:18:34.710434 ignition[1221]: INFO : Ignition 2.19.0
Nov 8 01:18:34.710434 ignition[1221]: INFO : Stage: umount
Nov 8 01:18:34.710434 ignition[1221]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 8 01:18:34.710434 ignition[1221]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet"
Nov 8 01:18:34.710434 ignition[1221]: INFO : umount: umount passed
Nov 8 01:18:34.710434 ignition[1221]: INFO : POST message to Packet Timeline
Nov 8 01:18:34.710434 ignition[1221]: INFO : GET https://metadata.packet.net/metadata: attempt #1
Nov 8 01:18:34.598401 systemd[1]: Stopped ignition-files.service - Ignition (files).
Nov 8 01:18:34.615870 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Nov 8 01:18:34.616244 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Nov 8 01:18:34.650550 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Nov 8 01:18:34.670369 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Nov 8 01:18:34.670470 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 8 01:18:34.705614 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Nov 8 01:18:34.718491 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Nov 8 01:18:34.718907 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 8 01:18:34.736886 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Nov 8 01:18:34.737250 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 8 01:18:34.769935 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Nov 8 01:18:34.771796 systemd[1]: sysroot-boot.service: Deactivated successfully.
Nov 8 01:18:34.772053 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Nov 8 01:18:34.787230 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Nov 8 01:18:34.787495 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Nov 8 01:18:35.678340 ignition[1221]: INFO : GET result: OK
Nov 8 01:18:36.105163 ignition[1221]: INFO : Ignition finished successfully
Nov 8 01:18:36.107929 systemd[1]: ignition-mount.service: Deactivated successfully.
Nov 8 01:18:36.108226 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Nov 8 01:18:36.126639 systemd[1]: Stopped target network.target - Network.
Nov 8 01:18:36.142577 systemd[1]: ignition-disks.service: Deactivated successfully.
Nov 8 01:18:36.142782 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Nov 8 01:18:36.160686 systemd[1]: ignition-kargs.service: Deactivated successfully.
Nov 8 01:18:36.160854 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Nov 8 01:18:36.179702 systemd[1]: ignition-setup.service: Deactivated successfully.
Nov 8 01:18:36.179863 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Nov 8 01:18:36.197706 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Nov 8 01:18:36.197873 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Nov 8 01:18:36.215801 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Nov 8 01:18:36.215973 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Nov 8 01:18:36.235223 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Nov 8 01:18:36.246426 systemd-networkd[933]: enp1s0f0np0: DHCPv6 lease lost
Nov 8 01:18:36.253510 systemd-networkd[933]: enp1s0f1np1: DHCPv6 lease lost
Nov 8 01:18:36.253781 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Nov 8 01:18:36.272443 systemd[1]: systemd-resolved.service: Deactivated successfully.
Nov 8 01:18:36.272730 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Nov 8 01:18:36.291530 systemd[1]: systemd-networkd.service: Deactivated successfully.
Nov 8 01:18:36.291882 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Nov 8 01:18:36.311950 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Nov 8 01:18:36.312072 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Nov 8 01:18:36.347431 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Nov 8 01:18:36.370441 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Nov 8 01:18:36.370484 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 8 01:18:36.390675 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 8 01:18:36.390765 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Nov 8 01:18:36.409789 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 8 01:18:36.409952 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Nov 8 01:18:36.429806 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Nov 8 01:18:36.429973 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 8 01:18:36.449042 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 8 01:18:36.471690 systemd[1]: systemd-udevd.service: Deactivated successfully.
Nov 8 01:18:36.472170 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 8 01:18:36.506392 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Nov 8 01:18:36.506542 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Nov 8 01:18:36.508802 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Nov 8 01:18:36.508904 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 8 01:18:36.536566 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Nov 8 01:18:36.536730 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Nov 8 01:18:36.566873 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Nov 8 01:18:36.567042 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Nov 8 01:18:36.606480 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Nov 8 01:18:36.606651 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 8 01:18:36.647559 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Nov 8 01:18:36.677333 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Nov 8 01:18:36.897492 systemd-journald[268]: Received SIGTERM from PID 1 (systemd).
Nov 8 01:18:36.677377 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 8 01:18:36.696592 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 8 01:18:36.696734 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 8 01:18:36.719688 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Nov 8 01:18:36.719938 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Nov 8 01:18:36.755136 systemd[1]: network-cleanup.service: Deactivated successfully.
Nov 8 01:18:36.755506 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Nov 8 01:18:36.770521 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Nov 8 01:18:36.806789 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Nov 8 01:18:36.830843 systemd[1]: Switching root.
Nov 8 01:18:36.991476 systemd-journald[268]: Journal stopped
Nov 8 01:18:13.047685 kernel: Linux version 6.6.113-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Nov 7 22:45:04 -00 2025
Nov 8 01:18:13.047699 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=480a02cf7f2001774aa495c3e719d4173e968e6839485a7d2b207ef2facca472
Nov 8 01:18:13.047707 kernel: BIOS-provided physical RAM map:
Nov 8 01:18:13.047711 kernel: BIOS-e820: [mem 0x0000000000000000-0x00000000000997ff] usable
Nov 8 01:18:13.047715 kernel: BIOS-e820: [mem 0x0000000000099800-0x000000000009ffff] reserved
Nov 8 01:18:13.047719 kernel: BIOS-e820: [mem 0x00000000000e0000-0x00000000000fffff] reserved
Nov 8 01:18:13.047724 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003fffffff] usable
Nov 8 01:18:13.047728 kernel: BIOS-e820: [mem 0x0000000040000000-0x00000000403fffff] reserved
Nov 8 01:18:13.047732 kernel: BIOS-e820: [mem 0x0000000040400000-0x0000000081b2cfff] usable
Nov 8 01:18:13.047736 kernel: BIOS-e820: [mem 0x0000000081b2d000-0x0000000081b2dfff] ACPI NVS
Nov 8 01:18:13.047740 kernel: BIOS-e820: [mem 0x0000000081b2e000-0x0000000081b2efff] reserved
Nov 8 01:18:13.047745 kernel: BIOS-e820: [mem 0x0000000081b2f000-0x000000008afccfff] usable
Nov 8 01:18:13.047750 kernel: BIOS-e820: [mem 0x000000008afcd000-0x000000008c0b1fff] reserved
Nov 8 01:18:13.047754 kernel: BIOS-e820: [mem 0x000000008c0b2000-0x000000008c23afff] usable
Nov 8 01:18:13.047759 kernel: BIOS-e820: [mem 0x000000008c23b000-0x000000008c66cfff] ACPI NVS
Nov 8 01:18:13.047764 kernel: BIOS-e820: [mem 0x000000008c66d000-0x000000008eefefff] reserved
Nov 8 01:18:13.047770 kernel: BIOS-e820: [mem 0x000000008eeff000-0x000000008eefffff] usable
Nov 8 01:18:13.047775 kernel: BIOS-e820: [mem 0x000000008ef00000-0x000000008fffffff] reserved
Nov 8 01:18:13.047779 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Nov 8 01:18:13.047784 kernel: BIOS-e820: [mem 0x00000000fe000000-0x00000000fe010fff] reserved
Nov 8 01:18:13.047789 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec00fff] reserved
Nov 8 01:18:13.047793 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved
Nov 8 01:18:13.047798 kernel: BIOS-e820: [mem 0x00000000ff000000-0x00000000ffffffff] reserved
Nov 8 01:18:13.047802 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000086effffff] usable
Nov 8 01:18:13.047807 kernel: NX (Execute Disable) protection: active
Nov 8 01:18:13.047812 kernel: APIC: Static calls initialized
Nov 8 01:18:13.047816 kernel: SMBIOS 3.2.1 present.
Nov 8 01:18:13.047821 kernel: DMI: Supermicro SYS-5019C-MR-PH004/X11SCM-F, BIOS 1.9 09/16/2022
Nov 8 01:18:13.047827 kernel: tsc: Detected 3400.000 MHz processor
Nov 8 01:18:13.047832 kernel: tsc: Detected 3399.906 MHz TSC
Nov 8 01:18:13.047836 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Nov 8 01:18:13.047841 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Nov 8 01:18:13.047846 kernel: last_pfn = 0x86f000 max_arch_pfn = 0x400000000
Nov 8 01:18:13.047851 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 23), built from 10 variable MTRRs
Nov 8 01:18:13.047856 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Nov 8 01:18:13.047861 kernel: last_pfn = 0x8ef00 max_arch_pfn = 0x400000000
Nov 8 01:18:13.047866 kernel: Using GB pages for direct mapping
Nov 8 01:18:13.047871 kernel: ACPI: Early table checksum verification disabled
Nov 8 01:18:13.047876 kernel: ACPI: RSDP 0x00000000000F05B0 000024 (v02 SUPERM)
Nov 8 01:18:13.047881 kernel: ACPI: XSDT 0x000000008C54E0C8 00010C (v01 SUPERM SUPERM 01072009 AMI 00010013)
Nov 8 01:18:13.047888 kernel: ACPI: FACP 0x000000008C58A670 000114 (v06 01072009 AMI 00010013)
Nov 8 01:18:13.047893 kernel: ACPI: DSDT 0x000000008C54E268 03C404 (v02 SUPERM SMCI--MB 01072009 INTL 20160527)
Nov 8 01:18:13.047898 kernel: ACPI: FACS 0x000000008C66CF80 000040
Nov 8 01:18:13.047904 kernel: ACPI: APIC 0x000000008C58A788 00012C (v04 01072009 AMI 00010013)
Nov 8 01:18:13.047910 kernel: ACPI: FPDT 0x000000008C58A8B8 000044 (v01 01072009 AMI 00010013)
Nov 8 01:18:13.047915 kernel: ACPI: FIDT 0x000000008C58A900 00009C (v01 SUPERM SMCI--MB 01072009 AMI 00010013)
Nov 8 01:18:13.047920 kernel: ACPI: MCFG 0x000000008C58A9A0 00003C (v01 SUPERM SMCI--MB 01072009 MSFT 00000097)
Nov 8 01:18:13.047925 kernel: ACPI: SPMI 0x000000008C58A9E0 000041 (v05 SUPERM SMCI--MB 00000000 AMI. 00000000)
Nov 8 01:18:13.047930 kernel: ACPI: SSDT 0x000000008C58AA28 001B1C (v02 CpuRef CpuSsdt 00003000 INTL 20160527)
Nov 8 01:18:13.047935 kernel: ACPI: SSDT 0x000000008C58C548 0031C6 (v02 SaSsdt SaSsdt 00003000 INTL 20160527)
Nov 8 01:18:13.047940 kernel: ACPI: SSDT 0x000000008C58F710 00232B (v02 PegSsd PegSsdt 00001000 INTL 20160527)
Nov 8 01:18:13.047946 kernel: ACPI: HPET 0x000000008C591A40 000038 (v01 SUPERM SMCI--MB 00000002 01000013)
Nov 8 01:18:13.047951 kernel: ACPI: SSDT 0x000000008C591A78 000FAE (v02 SUPERM Ther_Rvp 00001000 INTL 20160527)
Nov 8 01:18:13.047956 kernel: ACPI: SSDT 0x000000008C592A28 0008F4 (v02 INTEL xh_mossb 00000000 INTL 20160527)
Nov 8 01:18:13.047961 kernel: ACPI: UEFI 0x000000008C593320 000042 (v01 SUPERM SMCI--MB 00000002 01000013)
Nov 8 01:18:13.047966 kernel: ACPI: LPIT 0x000000008C593368 000094 (v01 SUPERM SMCI--MB 00000002 01000013)
Nov 8 01:18:13.047971 kernel: ACPI: SSDT 0x000000008C593400 0027DE (v02 SUPERM PtidDevc 00001000 INTL 20160527)
Nov 8 01:18:13.047977 kernel: ACPI: SSDT 0x000000008C595BE0 0014E2 (v02 SUPERM TbtTypeC 00000000 INTL 20160527)
Nov 8 01:18:13.047982 kernel: ACPI: DBGP 0x000000008C5970C8 000034 (v01 SUPERM SMCI--MB 00000002 01000013)
Nov 8 01:18:13.047987 kernel: ACPI: DBG2 0x000000008C597100 000054 (v00 SUPERM SMCI--MB 00000002 01000013)
Nov 8 01:18:13.047993 kernel: ACPI: SSDT 0x000000008C597158 001B67 (v02 SUPERM UsbCTabl 00001000 INTL 20160527)
Nov 8 01:18:13.047998 kernel: ACPI: DMAR 0x000000008C598CC0 000070 (v01 INTEL EDK2 00000002 01000013)
Nov 8 01:18:13.048003 kernel: ACPI: SSDT 0x000000008C598D30 000144 (v02 Intel ADebTabl 00001000 INTL 20160527)
Nov 8 01:18:13.048008 kernel: ACPI: TPM2 0x000000008C598E78 000034 (v04 SUPERM SMCI--MB 00000001 AMI 00000000)
Nov 8 01:18:13.048013 kernel: ACPI: SSDT 0x000000008C598EB0 000D8F (v02 INTEL SpsNm 00000002 INTL 20160527)
Nov 8 01:18:13.048018 kernel: ACPI: WSMT 0x000000008C599C40 000028 (v01 SUPERM 01072009 AMI 00010013)
Nov 8 01:18:13.048023 kernel: ACPI: EINJ 0x000000008C599C68 000130 (v01 AMI AMI.EINJ 00000000 AMI. 00000000)
Nov 8 01:18:13.048028 kernel: ACPI: ERST 0x000000008C599D98 000230 (v01 AMIER AMI.ERST 00000000 AMI. 00000000)
Nov 8 01:18:13.048034 kernel: ACPI: BERT 0x000000008C599FC8 000030 (v01 AMI AMI.BERT 00000000 AMI. 00000000)
Nov 8 01:18:13.048039 kernel: ACPI: HEST 0x000000008C599FF8 00027C (v01 AMI AMI.HEST 00000000 AMI. 00000000)
Nov 8 01:18:13.048044 kernel: ACPI: SSDT 0x000000008C59A278 000162 (v01 SUPERM SMCCDN 00000000 INTL 20181221)
Nov 8 01:18:13.048049 kernel: ACPI: Reserving FACP table memory at [mem 0x8c58a670-0x8c58a783]
Nov 8 01:18:13.048055 kernel: ACPI: Reserving DSDT table memory at [mem 0x8c54e268-0x8c58a66b]
Nov 8 01:18:13.048060 kernel: ACPI: Reserving FACS table memory at [mem 0x8c66cf80-0x8c66cfbf]
Nov 8 01:18:13.048065 kernel: ACPI: Reserving APIC table memory at [mem 0x8c58a788-0x8c58a8b3]
Nov 8 01:18:13.048070 kernel: ACPI: Reserving FPDT table memory at [mem 0x8c58a8b8-0x8c58a8fb]
Nov 8 01:18:13.048075 kernel: ACPI: Reserving FIDT table memory at [mem 0x8c58a900-0x8c58a99b]
Nov 8 01:18:13.048081 kernel: ACPI: Reserving MCFG table memory at [mem 0x8c58a9a0-0x8c58a9db]
Nov 8 01:18:13.048086 kernel: ACPI: Reserving SPMI table memory at [mem 0x8c58a9e0-0x8c58aa20]
Nov 8 01:18:13.048091 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58aa28-0x8c58c543]
Nov 8 01:18:13.048096 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58c548-0x8c58f70d]
Nov 8 01:18:13.048101 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58f710-0x8c591a3a]
Nov 8 01:18:13.048106 kernel: ACPI: Reserving HPET table memory at [mem 0x8c591a40-0x8c591a77]
Nov 8 01:18:13.048111 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c591a78-0x8c592a25]
Nov 8 01:18:13.048116 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c592a28-0x8c59331b]
Nov 8 01:18:13.048121 kernel: ACPI: Reserving UEFI table memory at [mem 0x8c593320-0x8c593361]
Nov 8 01:18:13.048127 kernel: ACPI: Reserving LPIT table memory at [mem 0x8c593368-0x8c5933fb]
Nov 8 01:18:13.048132 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c593400-0x8c595bdd]
Nov 8 01:18:13.048137 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c595be0-0x8c5970c1]
Nov 8 01:18:13.048142 kernel: ACPI: Reserving DBGP table memory at [mem 0x8c5970c8-0x8c5970fb]
Nov 8 01:18:13.048147 kernel: ACPI: Reserving DBG2 table memory at [mem 0x8c597100-0x8c597153]
Nov 8 01:18:13.048152 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c597158-0x8c598cbe]
Nov 8 01:18:13.048157 kernel: ACPI: Reserving DMAR table memory at [mem 0x8c598cc0-0x8c598d2f]
Nov 8 01:18:13.048162 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c598d30-0x8c598e73]
Nov 8 01:18:13.048167 kernel: ACPI: Reserving TPM2 table memory at [mem 0x8c598e78-0x8c598eab]
Nov 8 01:18:13.048173 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c598eb0-0x8c599c3e]
Nov 8 01:18:13.048178 kernel: ACPI: Reserving WSMT table memory at [mem 0x8c599c40-0x8c599c67]
Nov 8 01:18:13.048183 kernel: ACPI: Reserving EINJ table memory at [mem 0x8c599c68-0x8c599d97]
Nov 8 01:18:13.048188 kernel: ACPI: Reserving ERST table memory at [mem 0x8c599d98-0x8c599fc7]
Nov 8 01:18:13.048193 kernel: ACPI: Reserving BERT table memory at [mem 0x8c599fc8-0x8c599ff7]
Nov 8 01:18:13.048198 kernel: ACPI: Reserving HEST table memory at [mem 0x8c599ff8-0x8c59a273]
Nov 8 01:18:13.048203 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c59a278-0x8c59a3d9]
Nov 8 01:18:13.048208 kernel: No NUMA configuration found
Nov 8 01:18:13.048213 kernel: Faking a node at [mem 0x0000000000000000-0x000000086effffff]
Nov 8 01:18:13.048218 kernel: NODE_DATA(0) allocated [mem 0x86effa000-0x86effffff]
Nov 8 01:18:13.048224 kernel: Zone ranges:
Nov 8 01:18:13.048230 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Nov 8 01:18:13.048235 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Nov 8 01:18:13.048240 kernel: Normal [mem 0x0000000100000000-0x000000086effffff]
Nov 8 01:18:13.048245 kernel: Movable zone start for each node
Nov 8 01:18:13.048250 kernel: Early memory node ranges
Nov 8 01:18:13.048255 kernel: node 0: [mem 0x0000000000001000-0x0000000000098fff]
Nov 8 01:18:13.048260 kernel: node 0: [mem 0x0000000000100000-0x000000003fffffff]
Nov 8 01:18:13.048265 kernel: node 0: [mem 0x0000000040400000-0x0000000081b2cfff]
Nov 8 01:18:13.048271 kernel: node 0: [mem 0x0000000081b2f000-0x000000008afccfff]
Nov 8 01:18:13.048276 kernel: node 0: [mem 0x000000008c0b2000-0x000000008c23afff]
Nov 8 01:18:13.048281 kernel: node 0: [mem 0x000000008eeff000-0x000000008eefffff]
Nov 8 01:18:13.048289 kernel: node 0: [mem 0x0000000100000000-0x000000086effffff]
Nov 8 01:18:13.048299 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000086effffff]
Nov 8 01:18:13.048304 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 8 01:18:13.048310 kernel: On node 0, zone DMA: 103 pages in unavailable ranges
Nov 8 01:18:13.048315 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges
Nov 8 01:18:13.048322 kernel: On node 0, zone DMA32: 2 pages in unavailable ranges
Nov 8 01:18:13.048327 kernel: On node 0, zone DMA32: 4325 pages in unavailable ranges
Nov 8 01:18:13.048333 kernel: On node 0, zone DMA32: 11460 pages in unavailable ranges
Nov 8 01:18:13.048338 kernel: On node 0, zone Normal: 4352 pages in unavailable ranges
Nov 8 01:18:13.048344 kernel: On node 0, zone Normal: 4096 pages in unavailable ranges
Nov 8 01:18:13.048349 kernel: ACPI: PM-Timer IO Port: 0x1808
Nov 8 01:18:13.048354 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1])
Nov 8 01:18:13.048360 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1])
Nov 8 01:18:13.048365 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1])
Nov 8 01:18:13.048372 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1])
Nov 8 01:18:13.048377 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1])
Nov 8 01:18:13.048383 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1])
Nov 8 01:18:13.048388 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1])
Nov 8 01:18:13.048393 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1])
Nov 8 01:18:13.048399 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1])
Nov 8 01:18:13.048404 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1])
Nov 8 01:18:13.048409 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1])
Nov 8 01:18:13.048415 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1])
Nov 8 01:18:13.048421 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1])
Nov 8 01:18:13.048427 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1])
Nov 8 01:18:13.048432 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1])
Nov 8 01:18:13.048437 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1])
Nov 8 01:18:13.048443 kernel: IOAPIC[0]: apic_id 2, version 32, address 0xfec00000, GSI 0-119
Nov 8 01:18:13.048448 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Nov 8 01:18:13.048454 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Nov 8 01:18:13.048459 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Nov 8 01:18:13.048465 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Nov 8 01:18:13.048470 kernel: TSC deadline timer available
Nov 8 01:18:13.048477 kernel: smpboot: Allowing 16 CPUs, 0 hotplug CPUs
Nov 8 01:18:13.048482 kernel: [mem 0x90000000-0xdfffffff] available for PCI devices
Nov 8 01:18:13.048488 kernel: Booting paravirtualized kernel on bare hardware
Nov 8 01:18:13.048493 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Nov 8 01:18:13.048499 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1
Nov 8 01:18:13.048504 kernel: percpu: Embedded 58 pages/cpu s196712 r8192 d32664 u262144
Nov 8 01:18:13.048510 kernel: pcpu-alloc: s196712 r8192 d32664 u262144 alloc=1*2097152
Nov 8 01:18:13.048515 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15
Nov 8 01:18:13.048522 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=480a02cf7f2001774aa495c3e719d4173e968e6839485a7d2b207ef2facca472
Nov 8 01:18:13.048528 kernel: random: crng init done
Nov 8 01:18:13.048533 kernel: Dentry cache hash table entries: 4194304 (order: 13, 33554432 bytes, linear)
Nov 8 01:18:13.048539 kernel: Inode-cache hash table entries: 2097152 (order: 12, 16777216 bytes, linear)
Nov 8 01:18:13.048544 kernel: Fallback order for Node 0: 0
Nov 8 01:18:13.048549 kernel: Built 1 zonelists, mobility grouping on. Total pages: 8232415
Nov 8 01:18:13.048555 kernel: Policy zone: Normal
Nov 8 01:18:13.048560 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 8 01:18:13.048566 kernel: software IO TLB: area num 16.
Nov 8 01:18:13.048572 kernel: Memory: 32720304K/33452980K available (12288K kernel code, 2288K rwdata, 22748K rodata, 42880K init, 2320K bss, 732416K reserved, 0K cma-reserved)
Nov 8 01:18:13.048578 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1
Nov 8 01:18:13.048583 kernel: ftrace: allocating 37980 entries in 149 pages
Nov 8 01:18:13.048589 kernel: ftrace: allocated 149 pages with 4 groups
Nov 8 01:18:13.048595 kernel: Dynamic Preempt: voluntary
Nov 8 01:18:13.048601 kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 8 01:18:13.048607 kernel: rcu: RCU event tracing is enabled.
Nov 8 01:18:13.048612 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16.
Nov 8 01:18:13.048618 kernel: Trampoline variant of Tasks RCU enabled.
Nov 8 01:18:13.048624 kernel: Rude variant of Tasks RCU enabled.
Nov 8 01:18:13.048629 kernel: Tracing variant of Tasks RCU enabled.
Nov 8 01:18:13.048635 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 8 01:18:13.048640 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16
Nov 8 01:18:13.048646 kernel: NR_IRQS: 33024, nr_irqs: 2184, preallocated irqs: 16
Nov 8 01:18:13.048651 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Nov 8 01:18:13.048657 kernel: Console: colour dummy device 80x25
Nov 8 01:18:13.048662 kernel: printk: console [tty0] enabled
Nov 8 01:18:13.048668 kernel: printk: console [ttyS1] enabled
Nov 8 01:18:13.048674 kernel: ACPI: Core revision 20230628
Nov 8 01:18:13.048680 kernel: hpet: HPET dysfunctional in PC10. Force disabled.
Nov 8 01:18:13.048685 kernel: APIC: Switch to symmetric I/O mode setup
Nov 8 01:18:13.048691 kernel: DMAR: Host address width 39
Nov 8 01:18:13.048696 kernel: DMAR: DRHD base: 0x000000fed91000 flags: 0x1
Nov 8 01:18:13.048702 kernel: DMAR: dmar0: reg_base_addr fed91000 ver 1:0 cap d2008c40660462 ecap f050da
Nov 8 01:18:13.048707 kernel: DMAR: RMRR base: 0x0000008cf18000 end: 0x0000008d161fff
Nov 8 01:18:13.048713 kernel: DMAR-IR: IOAPIC id 2 under DRHD base 0xfed91000 IOMMU 0
Nov 8 01:18:13.048718 kernel: DMAR-IR: HPET id 0 under DRHD base 0xfed91000
Nov 8 01:18:13.048725 kernel: DMAR-IR: Queued invalidation will be enabled to support x2apic and Intr-remapping.
Nov 8 01:18:13.048730 kernel: DMAR-IR: Enabled IRQ remapping in x2apic mode
Nov 8 01:18:13.048736 kernel: x2apic enabled
Nov 8 01:18:13.048741 kernel: APIC: Switched APIC routing to: cluster x2apic
Nov 8 01:18:13.048747 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x3101f59f5e6, max_idle_ns: 440795259996 ns
Nov 8 01:18:13.048752 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 6799.81 BogoMIPS (lpj=3399906)
Nov 8 01:18:13.048758 kernel: CPU0: Thermal monitoring enabled (TM1)
Nov 8 01:18:13.048763 kernel: process: using mwait in idle threads
Nov 8 01:18:13.048769 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Nov 8 01:18:13.048775 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
Nov 8 01:18:13.048780 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Nov 8 01:18:13.048786 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit
Nov 8 01:18:13.048791 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall
Nov 8 01:18:13.048797 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS
Nov 8 01:18:13.048802 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT
Nov 8 01:18:13.048808 kernel: RETBleed: Mitigation: Enhanced IBRS
Nov 8 01:18:13.048813 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Nov 8 01:18:13.048818 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Nov 8 01:18:13.048824 kernel: TAA: Mitigation: TSX disabled
Nov 8 01:18:13.048829 kernel: MMIO Stale Data: Mitigation: Clear CPU buffers
Nov 8 01:18:13.048835 kernel: SRBDS: Mitigation: Microcode
Nov 8 01:18:13.048841 kernel: GDS: Mitigation: Microcode
Nov 8 01:18:13.048846 kernel: active return thunk: its_return_thunk
Nov 8 01:18:13.048852 kernel: ITS: Mitigation: Aligned branch/return thunks
Nov 8 01:18:13.048857 kernel: VMSCAPE: Mitigation: IBPB before exit to userspace
Nov 8 01:18:13.048863 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Nov 8 01:18:13.048868 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Nov 8 01:18:13.048873 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Nov 8 01:18:13.048879 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Nov 8 01:18:13.048884 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Nov 8 01:18:13.048890 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Nov 8 01:18:13.048895 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Nov 8 01:18:13.048901 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Nov 8 01:18:13.048907 kernel: x86/fpu: Enabled xstate features 0x1f, context size is 960 bytes, using 'compacted' format.
Nov 8 01:18:13.048912 kernel: Freeing SMP alternatives memory: 32K
Nov 8 01:18:13.048918 kernel: pid_max: default: 32768 minimum: 301
Nov 8 01:18:13.048923 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Nov 8 01:18:13.048928 kernel: landlock: Up and running.
Nov 8 01:18:13.048934 kernel: SELinux: Initializing.
Nov 8 01:18:13.048939 kernel: Mount-cache hash table entries: 65536 (order: 7, 524288 bytes, linear)
Nov 8 01:18:13.048945 kernel: Mountpoint-cache hash table entries: 65536 (order: 7, 524288 bytes, linear)
Nov 8 01:18:13.048950 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd)
Nov 8 01:18:13.048955 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Nov 8 01:18:13.048962 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Nov 8 01:18:13.048968 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Nov 8 01:18:13.048973 kernel: Performance Events: PEBS fmt3+, Skylake events, 32-deep LBR, full-width counters, Intel PMU driver.
Nov 8 01:18:13.048979 kernel: ... version: 4
Nov 8 01:18:13.048984 kernel: ... bit width: 48
Nov 8 01:18:13.048989 kernel: ... generic registers: 4
Nov 8 01:18:13.048995 kernel: ... value mask: 0000ffffffffffff
Nov 8 01:18:13.049000 kernel: ... max period: 00007fffffffffff
Nov 8 01:18:13.049006 kernel: ... fixed-purpose events: 3
Nov 8 01:18:13.049012 kernel: ... event mask: 000000070000000f
Nov 8 01:18:13.049018 kernel: signal: max sigframe size: 2032
Nov 8 01:18:13.049023 kernel: Estimated ratio of average max frequency by base frequency (times 1024): 1445
Nov 8 01:18:13.049029 kernel: rcu: Hierarchical SRCU implementation.
Nov 8 01:18:13.049034 kernel: rcu: Max phase no-delay instances is 400.
Nov 8 01:18:13.049040 kernel: NMI watchdog: Enabled. Permanently consumes one hw-PMU counter.
Nov 8 01:18:13.049045 kernel: smp: Bringing up secondary CPUs ...
Nov 8 01:18:13.049050 kernel: smpboot: x86: Booting SMP configuration:
Nov 8 01:18:13.049056 kernel: .... node #0, CPUs: #1 #2 #3 #4 #5 #6 #7 #8 #9 #10 #11 #12 #13 #14 #15
Nov 8 01:18:13.049063 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Nov 8 01:18:13.049068 kernel: smp: Brought up 1 node, 16 CPUs
Nov 8 01:18:13.049074 kernel: smpboot: Max logical packages: 1
Nov 8 01:18:13.049079 kernel: smpboot: Total of 16 processors activated (108796.99 BogoMIPS)
Nov 8 01:18:13.049085 kernel: devtmpfs: initialized
Nov 8 01:18:13.049090 kernel: x86/mm: Memory block size: 128MB
Nov 8 01:18:13.049096 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x81b2d000-0x81b2dfff] (4096 bytes)
Nov 8 01:18:13.049101 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x8c23b000-0x8c66cfff] (4399104 bytes)
Nov 8 01:18:13.049107 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 8 01:18:13.049113 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear)
Nov 8 01:18:13.049119 kernel: pinctrl core: initialized pinctrl subsystem
Nov 8 01:18:13.049124 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 8 01:18:13.049130 kernel: audit: initializing netlink subsys (disabled)
Nov 8 01:18:13.049135 kernel: audit: type=2000 audit(1762564687.040:1): state=initialized audit_enabled=0 res=1
Nov
8 01:18:13.049140 kernel: thermal_sys: Registered thermal governor 'step_wise' Nov 8 01:18:13.049146 kernel: thermal_sys: Registered thermal governor 'user_space' Nov 8 01:18:13.049151 kernel: cpuidle: using governor menu Nov 8 01:18:13.049158 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Nov 8 01:18:13.049163 kernel: dca service started, version 1.12.1 Nov 8 01:18:13.049169 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000) Nov 8 01:18:13.049174 kernel: PCI: Using configuration type 1 for base access Nov 8 01:18:13.049180 kernel: ENERGY_PERF_BIAS: Set to 'normal', was 'performance' Nov 8 01:18:13.049185 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Nov 8 01:18:13.049191 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Nov 8 01:18:13.049196 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Nov 8 01:18:13.049202 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Nov 8 01:18:13.049208 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Nov 8 01:18:13.049214 kernel: ACPI: Added _OSI(Module Device) Nov 8 01:18:13.049219 kernel: ACPI: Added _OSI(Processor Device) Nov 8 01:18:13.049225 kernel: ACPI: Added _OSI(Processor Aggregator Device) Nov 8 01:18:13.049230 kernel: ACPI: 12 ACPI AML tables successfully acquired and loaded Nov 8 01:18:13.049236 kernel: ACPI: Dynamic OEM Table Load: Nov 8 01:18:13.049241 kernel: ACPI: SSDT 0xFFFF9E2081B57800 000400 (v02 PmRef Cpu0Cst 00003001 INTL 20160527) Nov 8 01:18:13.049247 kernel: ACPI: Dynamic OEM Table Load: Nov 8 01:18:13.049252 kernel: ACPI: SSDT 0xFFFF9E2081B4F800 000683 (v02 PmRef Cpu0Ist 00003000 INTL 20160527) Nov 8 01:18:13.049259 kernel: ACPI: Dynamic OEM Table Load: Nov 8 01:18:13.049264 kernel: ACPI: SSDT 0xFFFF9E2080247500 0000F4 (v02 PmRef Cpu0Psd 00003000 INTL 20160527) Nov 8 01:18:13.049270 kernel: ACPI: Dynamic OEM Table Load: Nov 
8 01:18:13.049275 kernel: ACPI: SSDT 0xFFFF9E2081E79000 0005FC (v02 PmRef ApIst 00003000 INTL 20160527) Nov 8 01:18:13.049280 kernel: ACPI: Dynamic OEM Table Load: Nov 8 01:18:13.049287 kernel: ACPI: SSDT 0xFFFF9E208012A000 000AB0 (v02 PmRef ApPsd 00003000 INTL 20160527) Nov 8 01:18:13.049293 kernel: ACPI: Dynamic OEM Table Load: Nov 8 01:18:13.049299 kernel: ACPI: SSDT 0xFFFF9E2081B55400 00030A (v02 PmRef ApCst 00003000 INTL 20160527) Nov 8 01:18:13.049304 kernel: ACPI: _OSC evaluated successfully for all CPUs Nov 8 01:18:13.049309 kernel: ACPI: Interpreter enabled Nov 8 01:18:13.049316 kernel: ACPI: PM: (supports S0 S5) Nov 8 01:18:13.049321 kernel: ACPI: Using IOAPIC for interrupt routing Nov 8 01:18:13.049327 kernel: HEST: Enabling Firmware First mode for corrected errors. Nov 8 01:18:13.049332 kernel: mce: [Firmware Bug]: Ignoring request to disable invalid MCA bank 14. Nov 8 01:18:13.049338 kernel: HEST: Table parsing has been initialized. Nov 8 01:18:13.049343 kernel: GHES: APEI firmware first mode is enabled by APEI bit and WHEA _OSC. 
Nov 8 01:18:13.049349 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Nov 8 01:18:13.049354 kernel: PCI: Using E820 reservations for host bridge windows Nov 8 01:18:13.049360 kernel: ACPI: Enabled 9 GPEs in block 00 to 7F Nov 8 01:18:13.049366 kernel: ACPI: \_SB_.PCI0.XDCI.USBC: New power resource Nov 8 01:18:13.049372 kernel: ACPI: \_SB_.PCI0.SAT0.VOL0.V0PR: New power resource Nov 8 01:18:13.049377 kernel: ACPI: \_SB_.PCI0.SAT0.VOL1.V1PR: New power resource Nov 8 01:18:13.049383 kernel: ACPI: \_SB_.PCI0.SAT0.VOL2.V2PR: New power resource Nov 8 01:18:13.049388 kernel: ACPI: \_SB_.PCI0.CNVW.WRST: New power resource Nov 8 01:18:13.049394 kernel: ACPI: \_TZ_.FN00: New power resource Nov 8 01:18:13.049399 kernel: ACPI: \_TZ_.FN01: New power resource Nov 8 01:18:13.049405 kernel: ACPI: \_TZ_.FN02: New power resource Nov 8 01:18:13.049410 kernel: ACPI: \_TZ_.FN03: New power resource Nov 8 01:18:13.049417 kernel: ACPI: \_TZ_.FN04: New power resource Nov 8 01:18:13.049422 kernel: ACPI: \PIN_: New power resource Nov 8 01:18:13.049428 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-fe]) Nov 8 01:18:13.049504 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Nov 8 01:18:13.049560 kernel: acpi PNP0A08:00: _OSC: platform does not support [AER] Nov 8 01:18:13.049609 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability LTR] Nov 8 01:18:13.049617 kernel: PCI host bridge to bus 0000:00 Nov 8 01:18:13.049671 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Nov 8 01:18:13.049716 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Nov 8 01:18:13.049760 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Nov 8 01:18:13.049803 kernel: pci_bus 0000:00: root bus resource [mem 0x90000000-0xdfffffff window] Nov 8 01:18:13.049847 kernel: pci_bus 0000:00: root bus resource [mem 0xfc800000-0xfe7fffff 
window] Nov 8 01:18:13.049889 kernel: pci_bus 0000:00: root bus resource [bus 00-fe] Nov 8 01:18:13.049950 kernel: pci 0000:00:00.0: [8086:3e31] type 00 class 0x060000 Nov 8 01:18:13.050011 kernel: pci 0000:00:01.0: [8086:1901] type 01 class 0x060400 Nov 8 01:18:13.050063 kernel: pci 0000:00:01.0: PME# supported from D0 D3hot D3cold Nov 8 01:18:13.050116 kernel: pci 0000:00:08.0: [8086:1911] type 00 class 0x088000 Nov 8 01:18:13.050166 kernel: pci 0000:00:08.0: reg 0x10: [mem 0x9551f000-0x9551ffff 64bit] Nov 8 01:18:13.050220 kernel: pci 0000:00:12.0: [8086:a379] type 00 class 0x118000 Nov 8 01:18:13.050270 kernel: pci 0000:00:12.0: reg 0x10: [mem 0x9551e000-0x9551efff 64bit] Nov 8 01:18:13.050329 kernel: pci 0000:00:14.0: [8086:a36d] type 00 class 0x0c0330 Nov 8 01:18:13.050380 kernel: pci 0000:00:14.0: reg 0x10: [mem 0x95500000-0x9550ffff 64bit] Nov 8 01:18:13.050430 kernel: pci 0000:00:14.0: PME# supported from D3hot D3cold Nov 8 01:18:13.050482 kernel: pci 0000:00:14.2: [8086:a36f] type 00 class 0x050000 Nov 8 01:18:13.050533 kernel: pci 0000:00:14.2: reg 0x10: [mem 0x95512000-0x95513fff 64bit] Nov 8 01:18:13.050580 kernel: pci 0000:00:14.2: reg 0x18: [mem 0x9551d000-0x9551dfff 64bit] Nov 8 01:18:13.050636 kernel: pci 0000:00:15.0: [8086:a368] type 00 class 0x0c8000 Nov 8 01:18:13.050685 kernel: pci 0000:00:15.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit] Nov 8 01:18:13.050741 kernel: pci 0000:00:15.1: [8086:a369] type 00 class 0x0c8000 Nov 8 01:18:13.050791 kernel: pci 0000:00:15.1: reg 0x10: [mem 0x00000000-0x00000fff 64bit] Nov 8 01:18:13.050844 kernel: pci 0000:00:16.0: [8086:a360] type 00 class 0x078000 Nov 8 01:18:13.050893 kernel: pci 0000:00:16.0: reg 0x10: [mem 0x9551a000-0x9551afff 64bit] Nov 8 01:18:13.050944 kernel: pci 0000:00:16.0: PME# supported from D3hot Nov 8 01:18:13.050996 kernel: pci 0000:00:16.1: [8086:a361] type 00 class 0x078000 Nov 8 01:18:13.051055 kernel: pci 0000:00:16.1: reg 0x10: [mem 0x95519000-0x95519fff 64bit] Nov 8 
01:18:13.051106 kernel: pci 0000:00:16.1: PME# supported from D3hot Nov 8 01:18:13.051159 kernel: pci 0000:00:16.4: [8086:a364] type 00 class 0x078000 Nov 8 01:18:13.051208 kernel: pci 0000:00:16.4: reg 0x10: [mem 0x95518000-0x95518fff 64bit] Nov 8 01:18:13.051257 kernel: pci 0000:00:16.4: PME# supported from D3hot Nov 8 01:18:13.051323 kernel: pci 0000:00:17.0: [8086:a352] type 00 class 0x010601 Nov 8 01:18:13.051374 kernel: pci 0000:00:17.0: reg 0x10: [mem 0x95510000-0x95511fff] Nov 8 01:18:13.051423 kernel: pci 0000:00:17.0: reg 0x14: [mem 0x95517000-0x955170ff] Nov 8 01:18:13.051473 kernel: pci 0000:00:17.0: reg 0x18: [io 0x6050-0x6057] Nov 8 01:18:13.051521 kernel: pci 0000:00:17.0: reg 0x1c: [io 0x6040-0x6043] Nov 8 01:18:13.051571 kernel: pci 0000:00:17.0: reg 0x20: [io 0x6020-0x603f] Nov 8 01:18:13.051622 kernel: pci 0000:00:17.0: reg 0x24: [mem 0x95516000-0x955167ff] Nov 8 01:18:13.051672 kernel: pci 0000:00:17.0: PME# supported from D3hot Nov 8 01:18:13.051725 kernel: pci 0000:00:1b.0: [8086:a340] type 01 class 0x060400 Nov 8 01:18:13.051775 kernel: pci 0000:00:1b.0: PME# supported from D0 D3hot D3cold Nov 8 01:18:13.051834 kernel: pci 0000:00:1b.4: [8086:a32c] type 01 class 0x060400 Nov 8 01:18:13.051887 kernel: pci 0000:00:1b.4: PME# supported from D0 D3hot D3cold Nov 8 01:18:13.051941 kernel: pci 0000:00:1b.5: [8086:a32d] type 01 class 0x060400 Nov 8 01:18:13.051991 kernel: pci 0000:00:1b.5: PME# supported from D0 D3hot D3cold Nov 8 01:18:13.052046 kernel: pci 0000:00:1c.0: [8086:a338] type 01 class 0x060400 Nov 8 01:18:13.052095 kernel: pci 0000:00:1c.0: PME# supported from D0 D3hot D3cold Nov 8 01:18:13.052149 kernel: pci 0000:00:1c.3: [8086:a33b] type 01 class 0x060400 Nov 8 01:18:13.052201 kernel: pci 0000:00:1c.3: PME# supported from D0 D3hot D3cold Nov 8 01:18:13.052255 kernel: pci 0000:00:1e.0: [8086:a328] type 00 class 0x078000 Nov 8 01:18:13.052331 kernel: pci 0000:00:1e.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit] Nov 8 01:18:13.052385 
kernel: pci 0000:00:1f.0: [8086:a309] type 00 class 0x060100 Nov 8 01:18:13.052437 kernel: pci 0000:00:1f.4: [8086:a323] type 00 class 0x0c0500 Nov 8 01:18:13.052486 kernel: pci 0000:00:1f.4: reg 0x10: [mem 0x95514000-0x955140ff 64bit] Nov 8 01:18:13.052539 kernel: pci 0000:00:1f.4: reg 0x20: [io 0xefa0-0xefbf] Nov 8 01:18:13.052595 kernel: pci 0000:00:1f.5: [8086:a324] type 00 class 0x0c8000 Nov 8 01:18:13.052645 kernel: pci 0000:00:1f.5: reg 0x10: [mem 0xfe010000-0xfe010fff] Nov 8 01:18:13.052703 kernel: pci 0000:01:00.0: [15b3:1015] type 00 class 0x020000 Nov 8 01:18:13.052754 kernel: pci 0000:01:00.0: reg 0x10: [mem 0x92000000-0x93ffffff 64bit pref] Nov 8 01:18:13.052806 kernel: pci 0000:01:00.0: reg 0x30: [mem 0x95200000-0x952fffff pref] Nov 8 01:18:13.052856 kernel: pci 0000:01:00.0: PME# supported from D3cold Nov 8 01:18:13.052910 kernel: pci 0000:01:00.0: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref] Nov 8 01:18:13.052960 kernel: pci 0000:01:00.0: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs) Nov 8 01:18:13.053016 kernel: pci 0000:01:00.1: [15b3:1015] type 00 class 0x020000 Nov 8 01:18:13.053068 kernel: pci 0000:01:00.1: reg 0x10: [mem 0x90000000-0x91ffffff 64bit pref] Nov 8 01:18:13.053118 kernel: pci 0000:01:00.1: reg 0x30: [mem 0x95100000-0x951fffff pref] Nov 8 01:18:13.053169 kernel: pci 0000:01:00.1: PME# supported from D3cold Nov 8 01:18:13.053219 kernel: pci 0000:01:00.1: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref] Nov 8 01:18:13.053273 kernel: pci 0000:01:00.1: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs) Nov 8 01:18:13.053326 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Nov 8 01:18:13.053376 kernel: pci 0000:00:01.0: bridge window [mem 0x95100000-0x952fffff] Nov 8 01:18:13.053425 kernel: pci 0000:00:01.0: bridge window [mem 0x90000000-0x93ffffff 64bit pref] Nov 8 01:18:13.053477 kernel: pci 0000:00:1b.0: PCI bridge to [bus 02] Nov 8 01:18:13.053532 kernel: pci 
0000:03:00.0: working around ROM BAR overlap defect Nov 8 01:18:13.053584 kernel: pci 0000:03:00.0: [8086:1533] type 00 class 0x020000 Nov 8 01:18:13.053638 kernel: pci 0000:03:00.0: reg 0x10: [mem 0x95400000-0x9547ffff] Nov 8 01:18:13.053689 kernel: pci 0000:03:00.0: reg 0x18: [io 0x5000-0x501f] Nov 8 01:18:13.053740 kernel: pci 0000:03:00.0: reg 0x1c: [mem 0x95480000-0x95483fff] Nov 8 01:18:13.053790 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold Nov 8 01:18:13.053841 kernel: pci 0000:00:1b.4: PCI bridge to [bus 03] Nov 8 01:18:13.053890 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] Nov 8 01:18:13.053941 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff] Nov 8 01:18:13.053998 kernel: pci 0000:04:00.0: working around ROM BAR overlap defect Nov 8 01:18:13.054050 kernel: pci 0000:04:00.0: [8086:1533] type 00 class 0x020000 Nov 8 01:18:13.054101 kernel: pci 0000:04:00.0: reg 0x10: [mem 0x95300000-0x9537ffff] Nov 8 01:18:13.054151 kernel: pci 0000:04:00.0: reg 0x18: [io 0x4000-0x401f] Nov 8 01:18:13.054203 kernel: pci 0000:04:00.0: reg 0x1c: [mem 0x95380000-0x95383fff] Nov 8 01:18:13.054254 kernel: pci 0000:04:00.0: PME# supported from D0 D3hot D3cold Nov 8 01:18:13.054307 kernel: pci 0000:00:1b.5: PCI bridge to [bus 04] Nov 8 01:18:13.054357 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] Nov 8 01:18:13.054410 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff] Nov 8 01:18:13.054461 kernel: pci 0000:00:1c.0: PCI bridge to [bus 05] Nov 8 01:18:13.054518 kernel: pci 0000:06:00.0: [1a03:1150] type 01 class 0x060400 Nov 8 01:18:13.054571 kernel: pci 0000:06:00.0: enabling Extended Tags Nov 8 01:18:13.054623 kernel: pci 0000:06:00.0: supports D1 D2 Nov 8 01:18:13.054674 kernel: pci 0000:06:00.0: PME# supported from D0 D1 D2 D3hot D3cold Nov 8 01:18:13.054725 kernel: pci 0000:00:1c.3: PCI bridge to [bus 06-07] Nov 8 01:18:13.054778 kernel: pci 0000:00:1c.3: bridge window [io 0x3000-0x3fff] Nov 8 
01:18:13.054827 kernel: pci 0000:00:1c.3: bridge window [mem 0x94000000-0x950fffff] Nov 8 01:18:13.054883 kernel: pci_bus 0000:07: extended config space not accessible Nov 8 01:18:13.054941 kernel: pci 0000:07:00.0: [1a03:2000] type 00 class 0x030000 Nov 8 01:18:13.054995 kernel: pci 0000:07:00.0: reg 0x10: [mem 0x94000000-0x94ffffff] Nov 8 01:18:13.055050 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x95000000-0x9501ffff] Nov 8 01:18:13.055104 kernel: pci 0000:07:00.0: reg 0x18: [io 0x3000-0x307f] Nov 8 01:18:13.055160 kernel: pci 0000:07:00.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Nov 8 01:18:13.055212 kernel: pci 0000:07:00.0: supports D1 D2 Nov 8 01:18:13.055266 kernel: pci 0000:07:00.0: PME# supported from D0 D1 D2 D3hot D3cold Nov 8 01:18:13.055321 kernel: pci 0000:06:00.0: PCI bridge to [bus 07] Nov 8 01:18:13.055374 kernel: pci 0000:06:00.0: bridge window [io 0x3000-0x3fff] Nov 8 01:18:13.055426 kernel: pci 0000:06:00.0: bridge window [mem 0x94000000-0x950fffff] Nov 8 01:18:13.055434 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 0 Nov 8 01:18:13.055440 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 1 Nov 8 01:18:13.055448 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 0 Nov 8 01:18:13.055454 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 0 Nov 8 01:18:13.055460 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 0 Nov 8 01:18:13.055465 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 0 Nov 8 01:18:13.055471 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 0 Nov 8 01:18:13.055477 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 0 Nov 8 01:18:13.055483 kernel: iommu: Default domain type: Translated Nov 8 01:18:13.055489 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Nov 8 01:18:13.055495 kernel: PCI: Using ACPI for IRQ routing Nov 8 01:18:13.055501 kernel: PCI: pci_cache_line_size set to 64 bytes Nov 8 01:18:13.055507 kernel: e820: reserve RAM 
buffer [mem 0x00099800-0x0009ffff] Nov 8 01:18:13.055513 kernel: e820: reserve RAM buffer [mem 0x81b2d000-0x83ffffff] Nov 8 01:18:13.055518 kernel: e820: reserve RAM buffer [mem 0x8afcd000-0x8bffffff] Nov 8 01:18:13.055524 kernel: e820: reserve RAM buffer [mem 0x8c23b000-0x8fffffff] Nov 8 01:18:13.055530 kernel: e820: reserve RAM buffer [mem 0x8ef00000-0x8fffffff] Nov 8 01:18:13.055535 kernel: e820: reserve RAM buffer [mem 0x86f000000-0x86fffffff] Nov 8 01:18:13.055587 kernel: pci 0000:07:00.0: vgaarb: setting as boot VGA device Nov 8 01:18:13.055641 kernel: pci 0000:07:00.0: vgaarb: bridge control possible Nov 8 01:18:13.055695 kernel: pci 0000:07:00.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Nov 8 01:18:13.055704 kernel: vgaarb: loaded Nov 8 01:18:13.055710 kernel: clocksource: Switched to clocksource tsc-early Nov 8 01:18:13.055716 kernel: VFS: Disk quotas dquot_6.6.0 Nov 8 01:18:13.055722 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Nov 8 01:18:13.055728 kernel: pnp: PnP ACPI init Nov 8 01:18:13.055779 kernel: system 00:00: [mem 0x40000000-0x403fffff] has been reserved Nov 8 01:18:13.055830 kernel: pnp 00:02: [dma 0 disabled] Nov 8 01:18:13.055884 kernel: pnp 00:03: [dma 0 disabled] Nov 8 01:18:13.055936 kernel: system 00:04: [io 0x0680-0x069f] has been reserved Nov 8 01:18:13.055983 kernel: system 00:04: [io 0x164e-0x164f] has been reserved Nov 8 01:18:13.056032 kernel: system 00:05: [mem 0xfed10000-0xfed17fff] has been reserved Nov 8 01:18:13.056078 kernel: system 00:05: [mem 0xfed18000-0xfed18fff] has been reserved Nov 8 01:18:13.056124 kernel: system 00:05: [mem 0xfed19000-0xfed19fff] has been reserved Nov 8 01:18:13.056172 kernel: system 00:05: [mem 0xe0000000-0xefffffff] has been reserved Nov 8 01:18:13.056218 kernel: system 00:05: [mem 0xfed20000-0xfed3ffff] has been reserved Nov 8 01:18:13.056266 kernel: system 00:05: [mem 0xfed90000-0xfed93fff] could not be reserved Nov 8 01:18:13.056316 kernel: system 
00:05: [mem 0xfed45000-0xfed8ffff] has been reserved Nov 8 01:18:13.056362 kernel: system 00:05: [mem 0xfee00000-0xfeefffff] could not be reserved Nov 8 01:18:13.056414 kernel: system 00:06: [io 0x1800-0x18fe] could not be reserved Nov 8 01:18:13.056460 kernel: system 00:06: [mem 0xfd000000-0xfd69ffff] has been reserved Nov 8 01:18:13.056509 kernel: system 00:06: [mem 0xfd6c0000-0xfd6cffff] has been reserved Nov 8 01:18:13.056554 kernel: system 00:06: [mem 0xfd6f0000-0xfdffffff] has been reserved Nov 8 01:18:13.056599 kernel: system 00:06: [mem 0xfe000000-0xfe01ffff] could not be reserved Nov 8 01:18:13.056645 kernel: system 00:06: [mem 0xfe200000-0xfe7fffff] has been reserved Nov 8 01:18:13.056690 kernel: system 00:06: [mem 0xff000000-0xffffffff] has been reserved Nov 8 01:18:13.056741 kernel: system 00:07: [io 0x2000-0x20fe] has been reserved Nov 8 01:18:13.056750 kernel: pnp: PnP ACPI: found 9 devices Nov 8 01:18:13.056758 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Nov 8 01:18:13.056764 kernel: NET: Registered PF_INET protocol family Nov 8 01:18:13.056770 kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear) Nov 8 01:18:13.056776 kernel: tcp_listen_portaddr_hash hash table entries: 16384 (order: 6, 262144 bytes, linear) Nov 8 01:18:13.056782 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Nov 8 01:18:13.056788 kernel: TCP established hash table entries: 262144 (order: 9, 2097152 bytes, linear) Nov 8 01:18:13.056794 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Nov 8 01:18:13.056800 kernel: TCP: Hash tables configured (established 262144 bind 65536) Nov 8 01:18:13.056805 kernel: UDP hash table entries: 16384 (order: 7, 524288 bytes, linear) Nov 8 01:18:13.056813 kernel: UDP-Lite hash table entries: 16384 (order: 7, 524288 bytes, linear) Nov 8 01:18:13.056819 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Nov 8 
01:18:13.056824 kernel: NET: Registered PF_XDP protocol family Nov 8 01:18:13.056874 kernel: pci 0000:00:15.0: BAR 0: assigned [mem 0x95515000-0x95515fff 64bit] Nov 8 01:18:13.056925 kernel: pci 0000:00:15.1: BAR 0: assigned [mem 0x9551b000-0x9551bfff 64bit] Nov 8 01:18:13.056974 kernel: pci 0000:00:1e.0: BAR 0: assigned [mem 0x9551c000-0x9551cfff 64bit] Nov 8 01:18:13.057027 kernel: pci 0000:01:00.0: BAR 7: no space for [mem size 0x00800000 64bit pref] Nov 8 01:18:13.057079 kernel: pci 0000:01:00.0: BAR 7: failed to assign [mem size 0x00800000 64bit pref] Nov 8 01:18:13.057134 kernel: pci 0000:01:00.1: BAR 7: no space for [mem size 0x00800000 64bit pref] Nov 8 01:18:13.057184 kernel: pci 0000:01:00.1: BAR 7: failed to assign [mem size 0x00800000 64bit pref] Nov 8 01:18:13.057235 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Nov 8 01:18:13.057291 kernel: pci 0000:00:01.0: bridge window [mem 0x95100000-0x952fffff] Nov 8 01:18:13.057342 kernel: pci 0000:00:01.0: bridge window [mem 0x90000000-0x93ffffff 64bit pref] Nov 8 01:18:13.057393 kernel: pci 0000:00:1b.0: PCI bridge to [bus 02] Nov 8 01:18:13.057446 kernel: pci 0000:00:1b.4: PCI bridge to [bus 03] Nov 8 01:18:13.057496 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] Nov 8 01:18:13.057545 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff] Nov 8 01:18:13.057595 kernel: pci 0000:00:1b.5: PCI bridge to [bus 04] Nov 8 01:18:13.057644 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] Nov 8 01:18:13.057694 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff] Nov 8 01:18:13.057743 kernel: pci 0000:00:1c.0: PCI bridge to [bus 05] Nov 8 01:18:13.057796 kernel: pci 0000:06:00.0: PCI bridge to [bus 07] Nov 8 01:18:13.057847 kernel: pci 0000:06:00.0: bridge window [io 0x3000-0x3fff] Nov 8 01:18:13.057898 kernel: pci 0000:06:00.0: bridge window [mem 0x94000000-0x950fffff] Nov 8 01:18:13.057947 kernel: pci 0000:00:1c.3: PCI bridge to [bus 06-07] Nov 8 01:18:13.057999 
kernel: pci 0000:00:1c.3: bridge window [io 0x3000-0x3fff] Nov 8 01:18:13.058049 kernel: pci 0000:00:1c.3: bridge window [mem 0x94000000-0x950fffff] Nov 8 01:18:13.058094 kernel: pci_bus 0000:00: Some PCI device resources are unassigned, try booting with pci=realloc Nov 8 01:18:13.058139 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Nov 8 01:18:13.058182 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Nov 8 01:18:13.058229 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Nov 8 01:18:13.058273 kernel: pci_bus 0000:00: resource 7 [mem 0x90000000-0xdfffffff window] Nov 8 01:18:13.058320 kernel: pci_bus 0000:00: resource 8 [mem 0xfc800000-0xfe7fffff window] Nov 8 01:18:13.058369 kernel: pci_bus 0000:01: resource 1 [mem 0x95100000-0x952fffff] Nov 8 01:18:13.058417 kernel: pci_bus 0000:01: resource 2 [mem 0x90000000-0x93ffffff 64bit pref] Nov 8 01:18:13.058467 kernel: pci_bus 0000:03: resource 0 [io 0x5000-0x5fff] Nov 8 01:18:13.058516 kernel: pci_bus 0000:03: resource 1 [mem 0x95400000-0x954fffff] Nov 8 01:18:13.058568 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff] Nov 8 01:18:13.058615 kernel: pci_bus 0000:04: resource 1 [mem 0x95300000-0x953fffff] Nov 8 01:18:13.058665 kernel: pci_bus 0000:06: resource 0 [io 0x3000-0x3fff] Nov 8 01:18:13.058712 kernel: pci_bus 0000:06: resource 1 [mem 0x94000000-0x950fffff] Nov 8 01:18:13.058760 kernel: pci_bus 0000:07: resource 0 [io 0x3000-0x3fff] Nov 8 01:18:13.058808 kernel: pci_bus 0000:07: resource 1 [mem 0x94000000-0x950fffff] Nov 8 01:18:13.058817 kernel: PCI: CLS 64 bytes, default 64 Nov 8 01:18:13.058824 kernel: DMAR: No ATSR found Nov 8 01:18:13.058830 kernel: DMAR: No SATC found Nov 8 01:18:13.058836 kernel: DMAR: dmar0: Using Queued invalidation Nov 8 01:18:13.058886 kernel: pci 0000:00:00.0: Adding to iommu group 0 Nov 8 01:18:13.058937 kernel: pci 0000:00:01.0: Adding to iommu group 1 Nov 8 01:18:13.058987 kernel: pci 0000:00:08.0: Adding to iommu group 2 Nov 8 
01:18:13.059037 kernel: pci 0000:00:12.0: Adding to iommu group 3 Nov 8 01:18:13.059088 kernel: pci 0000:00:14.0: Adding to iommu group 4 Nov 8 01:18:13.059138 kernel: pci 0000:00:14.2: Adding to iommu group 4 Nov 8 01:18:13.059187 kernel: pci 0000:00:15.0: Adding to iommu group 5 Nov 8 01:18:13.059236 kernel: pci 0000:00:15.1: Adding to iommu group 5 Nov 8 01:18:13.059288 kernel: pci 0000:00:16.0: Adding to iommu group 6 Nov 8 01:18:13.059338 kernel: pci 0000:00:16.1: Adding to iommu group 6 Nov 8 01:18:13.059388 kernel: pci 0000:00:16.4: Adding to iommu group 6 Nov 8 01:18:13.059436 kernel: pci 0000:00:17.0: Adding to iommu group 7 Nov 8 01:18:13.059486 kernel: pci 0000:00:1b.0: Adding to iommu group 8 Nov 8 01:18:13.059537 kernel: pci 0000:00:1b.4: Adding to iommu group 9 Nov 8 01:18:13.059588 kernel: pci 0000:00:1b.5: Adding to iommu group 10 Nov 8 01:18:13.059637 kernel: pci 0000:00:1c.0: Adding to iommu group 11 Nov 8 01:18:13.059686 kernel: pci 0000:00:1c.3: Adding to iommu group 12 Nov 8 01:18:13.059735 kernel: pci 0000:00:1e.0: Adding to iommu group 13 Nov 8 01:18:13.059785 kernel: pci 0000:00:1f.0: Adding to iommu group 14 Nov 8 01:18:13.059835 kernel: pci 0000:00:1f.4: Adding to iommu group 14 Nov 8 01:18:13.059884 kernel: pci 0000:00:1f.5: Adding to iommu group 14 Nov 8 01:18:13.059938 kernel: pci 0000:01:00.0: Adding to iommu group 1 Nov 8 01:18:13.059989 kernel: pci 0000:01:00.1: Adding to iommu group 1 Nov 8 01:18:13.060041 kernel: pci 0000:03:00.0: Adding to iommu group 15 Nov 8 01:18:13.060092 kernel: pci 0000:04:00.0: Adding to iommu group 16 Nov 8 01:18:13.060144 kernel: pci 0000:06:00.0: Adding to iommu group 17 Nov 8 01:18:13.060197 kernel: pci 0000:07:00.0: Adding to iommu group 17 Nov 8 01:18:13.060205 kernel: DMAR: Intel(R) Virtualization Technology for Directed I/O Nov 8 01:18:13.060211 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Nov 8 01:18:13.060219 kernel: software IO TLB: mapped [mem 
0x0000000086fcd000-0x000000008afcd000] (64MB) Nov 8 01:18:13.060225 kernel: RAPL PMU: API unit is 2^-32 Joules, 3 fixed counters, 655360 ms ovfl timer Nov 8 01:18:13.060231 kernel: RAPL PMU: hw unit of domain pp0-core 2^-14 Joules Nov 8 01:18:13.060237 kernel: RAPL PMU: hw unit of domain package 2^-14 Joules Nov 8 01:18:13.060243 kernel: RAPL PMU: hw unit of domain dram 2^-14 Joules Nov 8 01:18:13.060299 kernel: platform rtc_cmos: registered platform RTC device (no PNP device found) Nov 8 01:18:13.060308 kernel: Initialise system trusted keyrings Nov 8 01:18:13.060314 kernel: workingset: timestamp_bits=39 max_order=23 bucket_order=0 Nov 8 01:18:13.060322 kernel: Key type asymmetric registered Nov 8 01:18:13.060327 kernel: Asymmetric key parser 'x509' registered Nov 8 01:18:13.060333 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Nov 8 01:18:13.060339 kernel: io scheduler mq-deadline registered Nov 8 01:18:13.060345 kernel: io scheduler kyber registered Nov 8 01:18:13.060351 kernel: io scheduler bfq registered Nov 8 01:18:13.060401 kernel: pcieport 0000:00:01.0: PME: Signaling with IRQ 121 Nov 8 01:18:13.060451 kernel: pcieport 0000:00:1b.0: PME: Signaling with IRQ 122 Nov 8 01:18:13.060501 kernel: pcieport 0000:00:1b.4: PME: Signaling with IRQ 123 Nov 8 01:18:13.060552 kernel: pcieport 0000:00:1b.5: PME: Signaling with IRQ 124 Nov 8 01:18:13.060602 kernel: pcieport 0000:00:1c.0: PME: Signaling with IRQ 125 Nov 8 01:18:13.060651 kernel: pcieport 0000:00:1c.3: PME: Signaling with IRQ 126 Nov 8 01:18:13.060706 kernel: thermal LNXTHERM:00: registered as thermal_zone0 Nov 8 01:18:13.060715 kernel: ACPI: thermal: Thermal Zone [TZ00] (28 C) Nov 8 01:18:13.060721 kernel: ERST: Error Record Serialization Table (ERST) support is initialized. 
Nov 8 01:18:13.060727 kernel: pstore: Using crash dump compression: deflate Nov 8 01:18:13.060735 kernel: pstore: Registered erst as persistent store backend Nov 8 01:18:13.060741 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Nov 8 01:18:13.060747 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Nov 8 01:18:13.060753 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Nov 8 01:18:13.060759 kernel: 00:03: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Nov 8 01:18:13.060765 kernel: hpet_acpi_add: no address or irqs in _CRS Nov 8 01:18:13.060813 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x1B, rev-id 16) Nov 8 01:18:13.060822 kernel: i8042: PNP: No PS/2 controller found. Nov 8 01:18:13.060868 kernel: rtc_cmos rtc_cmos: RTC can wake from S4 Nov 8 01:18:13.060916 kernel: rtc_cmos rtc_cmos: registered as rtc0 Nov 8 01:18:13.060962 kernel: rtc_cmos rtc_cmos: setting system clock to 2025-11-08T01:18:11 UTC (1762564691) Nov 8 01:18:13.061007 kernel: rtc_cmos rtc_cmos: alarms up to one month, y3k, 114 bytes nvram Nov 8 01:18:13.061015 kernel: intel_pstate: Intel P-state driver initializing Nov 8 01:18:13.061022 kernel: intel_pstate: Disabling energy efficiency optimization Nov 8 01:18:13.061027 kernel: intel_pstate: HWP enabled Nov 8 01:18:13.061033 kernel: vesafb: mode is 1024x768x8, linelength=1024, pages=0 Nov 8 01:18:13.061039 kernel: vesafb: scrolling: redraw Nov 8 01:18:13.061047 kernel: vesafb: Pseudocolor: size=0:8:8:8, shift=0:0:0:0 Nov 8 01:18:13.061053 kernel: vesafb: framebuffer at 0x94000000, mapped to 0x000000008793c84d, using 768k, total 768k Nov 8 01:18:13.061058 kernel: Console: switching to colour frame buffer device 128x48 Nov 8 01:18:13.061064 kernel: fb0: VESA VGA frame buffer device Nov 8 01:18:13.061070 kernel: NET: Registered PF_INET6 protocol family Nov 8 01:18:13.061076 kernel: Segment Routing with IPv6 Nov 8 01:18:13.061082 kernel: In-situ OAM (IOAM) with IPv6 Nov 8 
01:18:13.061088 kernel: NET: Registered PF_PACKET protocol family Nov 8 01:18:13.061093 kernel: Key type dns_resolver registered Nov 8 01:18:13.061100 kernel: microcode: Current revision: 0x000000fc Nov 8 01:18:13.061106 kernel: microcode: Updated early from: 0x000000f4 Nov 8 01:18:13.061112 kernel: microcode: Microcode Update Driver: v2.2. Nov 8 01:18:13.061118 kernel: IPI shorthand broadcast: enabled Nov 8 01:18:13.061124 kernel: sched_clock: Marking stable (1568000685, 1369131625)->(4408011168, -1470878858) Nov 8 01:18:13.061129 kernel: registered taskstats version 1 Nov 8 01:18:13.061135 kernel: Loading compiled-in X.509 certificates Nov 8 01:18:13.061141 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.113-flatcar: cf7a35a152685ec84a621291e4ce58c959319dfd' Nov 8 01:18:13.061147 kernel: Key type .fscrypt registered Nov 8 01:18:13.061153 kernel: Key type fscrypt-provisioning registered Nov 8 01:18:13.061159 kernel: ima: Allocated hash algorithm: sha1 Nov 8 01:18:13.061165 kernel: ima: No architecture policies found Nov 8 01:18:13.061171 kernel: clk: Disabling unused clocks Nov 8 01:18:13.061177 kernel: Freeing unused kernel image (initmem) memory: 42880K Nov 8 01:18:13.061182 kernel: Write protecting the kernel read-only data: 36864k Nov 8 01:18:13.061188 kernel: Freeing unused kernel image (rodata/data gap) memory: 1828K Nov 8 01:18:13.061194 kernel: Run /init as init process Nov 8 01:18:13.061200 kernel: with arguments: Nov 8 01:18:13.061207 kernel: /init Nov 8 01:18:13.061212 kernel: with environment: Nov 8 01:18:13.061218 kernel: HOME=/ Nov 8 01:18:13.061224 kernel: TERM=linux Nov 8 01:18:13.061231 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) 
Nov 8 01:18:13.061238 systemd[1]: Detected architecture x86-64. Nov 8 01:18:13.061244 systemd[1]: Running in initrd. Nov 8 01:18:13.061250 systemd[1]: No hostname configured, using default hostname. Nov 8 01:18:13.061257 systemd[1]: Hostname set to . Nov 8 01:18:13.061263 systemd[1]: Initializing machine ID from random generator. Nov 8 01:18:13.061269 systemd[1]: Queued start job for default target initrd.target. Nov 8 01:18:13.061275 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 8 01:18:13.061281 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 8 01:18:13.061290 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Nov 8 01:18:13.061296 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 8 01:18:13.061303 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Nov 8 01:18:13.061310 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Nov 8 01:18:13.061317 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Nov 8 01:18:13.061323 kernel: tsc: Refined TSC clocksource calibration: 3407.998 MHz Nov 8 01:18:13.061329 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd208cfc, max_idle_ns: 440795283699 ns Nov 8 01:18:13.061335 kernel: clocksource: Switched to clocksource tsc Nov 8 01:18:13.061341 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Nov 8 01:18:13.061348 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 8 01:18:13.061355 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 8 01:18:13.061361 systemd[1]: Reached target paths.target - Path Units. 
Nov 8 01:18:13.061367 systemd[1]: Reached target slices.target - Slice Units. Nov 8 01:18:13.061373 systemd[1]: Reached target swap.target - Swaps. Nov 8 01:18:13.061379 systemd[1]: Reached target timers.target - Timer Units. Nov 8 01:18:13.061385 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Nov 8 01:18:13.061391 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 8 01:18:13.061397 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Nov 8 01:18:13.061404 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Nov 8 01:18:13.061410 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 8 01:18:13.061416 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 8 01:18:13.061422 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 8 01:18:13.061428 systemd[1]: Reached target sockets.target - Socket Units. Nov 8 01:18:13.061434 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Nov 8 01:18:13.061441 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 8 01:18:13.061447 systemd[1]: Finished network-cleanup.service - Network Cleanup. Nov 8 01:18:13.061454 systemd[1]: Starting systemd-fsck-usr.service... Nov 8 01:18:13.061460 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 8 01:18:13.061476 systemd-journald[268]: Collecting audit messages is disabled. Nov 8 01:18:13.061491 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 8 01:18:13.061498 systemd-journald[268]: Journal started Nov 8 01:18:13.061512 systemd-journald[268]: Runtime Journal (/run/log/journal/ac093926137148df86804e545a58dbfa) is 8.0M, max 639.9M, 631.9M free. Nov 8 01:18:13.104305 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Nov 8 01:18:13.104344 systemd-modules-load[270]: Inserted module 'overlay' Nov 8 01:18:13.133849 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Nov 8 01:18:13.199536 systemd[1]: Started systemd-journald.service - Journal Service. Nov 8 01:18:13.199550 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Nov 8 01:18:13.199559 kernel: Bridge firewalling registered Nov 8 01:18:13.176008 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 8 01:18:13.194633 systemd-modules-load[270]: Inserted module 'br_netfilter' Nov 8 01:18:13.211623 systemd[1]: Finished systemd-fsck-usr.service. Nov 8 01:18:13.236570 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 8 01:18:13.261600 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 8 01:18:13.284554 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 8 01:18:13.296990 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 8 01:18:13.298557 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 8 01:18:13.300256 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 8 01:18:13.305576 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 8 01:18:13.306192 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 8 01:18:13.306307 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 8 01:18:13.307092 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 8 01:18:13.307987 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Nov 8 01:18:13.311078 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 8 01:18:13.322527 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 8 01:18:13.325673 systemd-resolved[302]: Positive Trust Anchors: Nov 8 01:18:13.325679 systemd-resolved[302]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 8 01:18:13.325702 systemd-resolved[302]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 8 01:18:13.327288 systemd-resolved[302]: Defaulting to hostname 'linux'. Nov 8 01:18:13.355554 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 8 01:18:13.372674 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 8 01:18:13.406568 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... 
Nov 8 01:18:13.527376 dracut-cmdline[308]: dracut-dracut-053 Nov 8 01:18:13.534538 dracut-cmdline[308]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=480a02cf7f2001774aa495c3e719d4173e968e6839485a7d2b207ef2facca472 Nov 8 01:18:13.739327 kernel: SCSI subsystem initialized Nov 8 01:18:13.762317 kernel: Loading iSCSI transport class v2.0-870. Nov 8 01:18:13.785347 kernel: iscsi: registered transport (tcp) Nov 8 01:18:13.817678 kernel: iscsi: registered transport (qla4xxx) Nov 8 01:18:13.817695 kernel: QLogic iSCSI HBA Driver Nov 8 01:18:13.850615 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Nov 8 01:18:13.876636 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Nov 8 01:18:13.935079 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Nov 8 01:18:13.935102 kernel: device-mapper: uevent: version 1.0.3 Nov 8 01:18:13.954624 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Nov 8 01:18:14.012359 kernel: raid6: avx2x4 gen() 53421 MB/s Nov 8 01:18:14.044360 kernel: raid6: avx2x2 gen() 53257 MB/s Nov 8 01:18:14.080628 kernel: raid6: avx2x1 gen() 45292 MB/s Nov 8 01:18:14.080645 kernel: raid6: using algorithm avx2x4 gen() 53421 MB/s Nov 8 01:18:14.127696 kernel: raid6: .... 
xor() 18532 MB/s, rmw enabled Nov 8 01:18:14.127713 kernel: raid6: using avx2x2 recovery algorithm Nov 8 01:18:14.168291 kernel: xor: automatically using best checksumming function avx Nov 8 01:18:14.286323 kernel: Btrfs loaded, zoned=no, fsverity=no Nov 8 01:18:14.291691 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Nov 8 01:18:14.319609 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 8 01:18:14.326493 systemd-udevd[494]: Using default interface naming scheme 'v255'. Nov 8 01:18:14.330401 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 8 01:18:14.363514 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Nov 8 01:18:14.410959 dracut-pre-trigger[507]: rd.md=0: removing MD RAID activation Nov 8 01:18:14.428781 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Nov 8 01:18:14.452637 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 8 01:18:14.538475 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 8 01:18:14.572607 kernel: pps_core: LinuxPPS API ver. 1 registered Nov 8 01:18:14.572675 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Nov 8 01:18:14.593291 kernel: cryptd: max_cpu_qlen set to 1000 Nov 8 01:18:14.593335 kernel: libata version 3.00 loaded. Nov 8 01:18:14.615309 kernel: PTP clock support registered Nov 8 01:18:14.615365 kernel: ACPI: bus type USB registered Nov 8 01:18:14.637573 kernel: usbcore: registered new interface driver usbfs Nov 8 01:18:14.644642 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Nov 8 01:18:14.691382 kernel: usbcore: registered new interface driver hub Nov 8 01:18:14.691397 kernel: usbcore: registered new device driver usb Nov 8 01:18:14.691406 kernel: AVX2 version of gcm_enc/dec engaged. 
Nov 8 01:18:14.677113 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Nov 8 01:18:14.756395 kernel: AES CTR mode by8 optimization enabled Nov 8 01:18:14.756432 kernel: ahci 0000:00:17.0: version 3.0 Nov 8 01:18:14.756783 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Nov 8 01:18:14.757092 kernel: ahci 0000:00:17.0: AHCI 0001.0301 32 slots 7 ports 6 Gbps 0x7f impl SATA mode Nov 8 01:18:14.757407 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 1 Nov 8 01:18:14.757689 kernel: ahci 0000:00:17.0: flags: 64bit ncq sntf clo only pio slum part ems deso sadm sds apst Nov 8 01:18:14.757982 kernel: xhci_hcd 0000:00:14.0: hcc params 0x200077c1 hci version 0x110 quirks 0x0000000000009810 Nov 8 01:18:14.708519 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Nov 8 01:18:15.140405 kernel: scsi host0: ahci Nov 8 01:18:15.140489 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Nov 8 01:18:15.140562 kernel: scsi host1: ahci Nov 8 01:18:15.140627 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 2 Nov 8 01:18:15.140693 kernel: scsi host2: ahci Nov 8 01:18:15.140759 kernel: xhci_hcd 0000:00:14.0: Host supports USB 3.1 Enhanced SuperSpeed Nov 8 01:18:15.140826 kernel: scsi host3: ahci Nov 8 01:18:15.140888 kernel: hub 1-0:1.0: USB hub found Nov 8 01:18:15.140963 kernel: scsi host4: ahci Nov 8 01:18:15.141027 kernel: hub 1-0:1.0: 16 ports detected Nov 8 01:18:15.141095 kernel: scsi host5: ahci Nov 8 01:18:15.141158 kernel: hub 2-0:1.0: USB hub found Nov 8 01:18:15.141235 kernel: scsi host6: ahci Nov 8 01:18:15.141301 kernel: hub 2-0:1.0: 10 ports detected Nov 8 01:18:15.141371 kernel: ata1: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516100 irq 127 Nov 8 01:18:15.141380 kernel: ata2: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516180 irq 127 Nov 8 01:18:15.141387 kernel: ata3: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516200 irq 127 Nov 8 
01:18:15.141394 kernel: ata4: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516280 irq 127 Nov 8 01:18:15.141401 kernel: ata5: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516300 irq 127 Nov 8 01:18:15.141408 kernel: ata6: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516380 irq 127 Nov 8 01:18:15.141418 kernel: ata7: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516400 irq 127 Nov 8 01:18:15.141425 kernel: usb 1-14: new high-speed USB device number 2 using xhci_hcd Nov 8 01:18:15.141440 kernel: igb: Intel(R) Gigabit Ethernet Network Driver Nov 8 01:18:15.045166 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 8 01:18:15.180401 kernel: igb: Copyright (c) 2007-2014 Intel Corporation. Nov 8 01:18:15.180413 kernel: igb 0000:03:00.0: added PHC on eth0 Nov 8 01:18:15.166541 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 8 01:18:15.278388 kernel: igb 0000:03:00.0: Intel(R) Gigabit Ethernet Network Connection Nov 8 01:18:15.278749 kernel: igb 0000:03:00.0: eth0: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:6a:32:40 Nov 8 01:18:15.279014 kernel: igb 0000:03:00.0: eth0: PBA No: 010000-000 Nov 8 01:18:15.279322 kernel: igb 0000:03:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s) Nov 8 01:18:15.279622 kernel: mlx5_core 0000:01:00.0: firmware version: 14.27.1016 Nov 8 01:18:15.279928 kernel: hub 1-14:1.0: USB hub found Nov 8 01:18:15.280263 kernel: mlx5_core 0000:01:00.0: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Nov 8 01:18:15.280564 kernel: hub 1-14:1.0: 4 ports detected Nov 8 01:18:15.280887 kernel: igb 0000:04:00.0: added PHC on eth1 Nov 8 01:18:15.281138 kernel: igb 0000:04:00.0: Intel(R) Gigabit Ethernet Network Connection Nov 8 01:18:15.281314 kernel: igb 0000:04:00.0: eth1: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:6a:32:41 Nov 8 01:18:15.281486 kernel: igb 0000:04:00.0: eth1: PBA No: 010000-000 Nov 8 01:18:15.281658 kernel: igb 0000:04:00.0: Using MSI-X interrupts. 
4 rx queue(s), 4 tx queue(s) Nov 8 01:18:15.245698 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 8 01:18:15.422557 kernel: ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Nov 8 01:18:15.422574 kernel: ata6: SATA link down (SStatus 0 SControl 300) Nov 8 01:18:15.422584 kernel: ata4: SATA link down (SStatus 0 SControl 300) Nov 8 01:18:15.422593 kernel: ata5: SATA link down (SStatus 0 SControl 300) Nov 8 01:18:15.245813 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 8 01:18:15.497336 kernel: ata3: SATA link down (SStatus 0 SControl 300) Nov 8 01:18:15.497351 kernel: ata2: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Nov 8 01:18:15.497360 kernel: ata1.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133 Nov 8 01:18:15.497370 kernel: ata7: SATA link down (SStatus 0 SControl 300) Nov 8 01:18:15.497379 kernel: ata2.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133 Nov 8 01:18:15.443096 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 8 01:18:15.604379 kernel: ata1.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Nov 8 01:18:15.604392 kernel: mlx5_core 0000:01:00.0: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384) Nov 8 01:18:15.604483 kernel: ata2.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Nov 8 01:18:15.604492 kernel: mlx5_core 0000:01:00.0: Port module event: module 0, Cable plugged Nov 8 01:18:15.604567 kernel: ata1.00: Features: NCQ-prio Nov 8 01:18:15.604576 kernel: ata2.00: Features: NCQ-prio Nov 8 01:18:15.555531 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... 
Nov 8 01:18:15.685386 kernel: ata1.00: configured for UDMA/133 Nov 8 01:18:15.685399 kernel: usb 1-14.1: new low-speed USB device number 3 using xhci_hcd Nov 8 01:18:15.685417 kernel: scsi 0:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5 Nov 8 01:18:15.685499 kernel: ata2.00: configured for UDMA/133 Nov 8 01:18:15.685508 kernel: scsi 1:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5 Nov 8 01:18:15.616318 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 8 01:18:15.616430 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 8 01:18:15.709379 kernel: igb 0000:04:00.0 eno2: renamed from eth1 Nov 8 01:18:15.685997 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Nov 8 01:18:15.939660 kernel: ata2.00: Enabling discard_zeroes_data Nov 8 01:18:15.939681 kernel: ata1.00: Enabling discard_zeroes_data Nov 8 01:18:15.939691 kernel: sd 1:0:0:0: [sda] 937703088 512-byte logical blocks: (480 GB/447 GiB) Nov 8 01:18:15.939803 kernel: sd 0:0:0:0: [sdb] 937703088 512-byte logical blocks: (480 GB/447 GiB) Nov 8 01:18:15.939888 kernel: igb 0000:03:00.0 eno1: renamed from eth0 Nov 8 01:18:15.939976 kernel: sd 1:0:0:0: [sda] 4096-byte physical blocks Nov 8 01:18:15.940053 kernel: sd 0:0:0:0: [sdb] 4096-byte physical blocks Nov 8 01:18:15.940130 kernel: sd 1:0:0:0: [sda] Write Protect is off Nov 8 01:18:15.940205 kernel: sd 0:0:0:0: [sdb] Write Protect is off Nov 8 01:18:15.940283 kernel: sd 1:0:0:0: [sda] Mode Sense: 00 3a 00 00 Nov 8 01:18:15.940364 kernel: sd 0:0:0:0: [sdb] Mode Sense: 00 3a 00 00 Nov 8 01:18:15.940439 kernel: sd 1:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Nov 8 01:18:15.940514 kernel: sd 0:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Nov 8 01:18:15.940589 kernel: sd 1:0:0:0: [sda] Preferred minimum I/O size 4096 bytes Nov 8 01:18:15.940664 kernel: sd 0:0:0:0: [sdb] Preferred minimum 
I/O size 4096 bytes Nov 8 01:18:15.940739 kernel: ata2.00: Enabling discard_zeroes_data Nov 8 01:18:15.940751 kernel: ata1.00: Enabling discard_zeroes_data Nov 8 01:18:15.940761 kernel: sd 0:0:0:0: [sdb] Attached SCSI disk Nov 8 01:18:15.940836 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Nov 8 01:18:15.923747 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 8 01:18:16.107898 kernel: mlx5_core 0000:01:00.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Nov 8 01:18:16.108022 kernel: GPT:9289727 != 937703087 Nov 8 01:18:16.108039 kernel: mlx5_core 0000:01:00.1: firmware version: 14.27.1016 Nov 8 01:18:16.108162 kernel: GPT:Alternate GPT header not at the end of the disk. Nov 8 01:18:16.108176 kernel: GPT:9289727 != 937703087 Nov 8 01:18:16.108189 kernel: mlx5_core 0000:01:00.1: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Nov 8 01:18:16.108304 kernel: GPT: Use GNU Parted to correct GPT errors. Nov 8 01:18:16.108319 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Nov 8 01:18:16.108333 kernel: sd 1:0:0:0: [sda] Attached SCSI disk Nov 8 01:18:16.108444 kernel: hid: raw HID events driver (C) Jiri Kosina Nov 8 01:18:16.108402 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Nov 8 01:18:16.169664 kernel: usbcore: registered new interface driver usbhid Nov 8 01:18:16.169679 kernel: usbhid: USB HID core driver Nov 8 01:18:16.169688 kernel: BTRFS: device fsid a2737782-a37e-42f9-8b56-489a87f47acc devid 1 transid 35 /dev/sda3 scanned by (udev-worker) (668) Nov 8 01:18:16.169696 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by (udev-worker) (669) Nov 8 01:18:16.170324 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.0/0003:0557:2419.0001/input/input0 Nov 8 01:18:16.190766 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Micron_5300_MTFDDAK480TDT EFI-SYSTEM. 
Nov 8 01:18:16.225249 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Micron_5300_MTFDDAK480TDT ROOT. Nov 8 01:18:16.288390 kernel: mlx5_core 0000:01:00.1: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384) Nov 8 01:18:16.288487 kernel: mlx5_core 0000:01:00.1: Port module event: module 1, Cable plugged Nov 8 01:18:16.263433 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 8 01:18:16.394181 kernel: hid-generic 0003:0557:2419.0001: input,hidraw0: USB HID v1.00 Keyboard [HID 0557:2419] on usb-0000:00:14.0-14.1/input0 Nov 8 01:18:16.394340 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.1/0003:0557:2419.0002/input/input1 Nov 8 01:18:16.394350 kernel: hid-generic 0003:0557:2419.0002: input,hidraw1: USB HID v1.00 Mouse [HID 0557:2419] on usb-0000:00:14.0-14.1/input1 Nov 8 01:18:16.362728 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Micron_5300_MTFDDAK480TDT USR-A. Nov 8 01:18:16.405533 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Micron_5300_MTFDDAK480TDT USR-A. Nov 8 01:18:16.409244 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Micron_5300_MTFDDAK480TDT OEM. Nov 8 01:18:16.455483 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Nov 8 01:18:16.474572 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 8 01:18:16.515372 kernel: ata2.00: Enabling discard_zeroes_data Nov 8 01:18:16.515388 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Nov 8 01:18:16.515517 disk-uuid[720]: Primary Header is updated. Nov 8 01:18:16.515517 disk-uuid[720]: Secondary Entries is updated. Nov 8 01:18:16.515517 disk-uuid[720]: Secondary Header is updated. 
Nov 8 01:18:16.533710 kernel: ata2.00: Enabling discard_zeroes_data Nov 8 01:18:16.549524 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 8 01:18:16.638403 kernel: GPT:disk_guids don't match. Nov 8 01:18:16.638415 kernel: GPT: Use GNU Parted to correct GPT errors. Nov 8 01:18:16.638426 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Nov 8 01:18:16.638433 kernel: ata2.00: Enabling discard_zeroes_data Nov 8 01:18:16.638440 kernel: mlx5_core 0000:01:00.1: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Nov 8 01:18:16.638530 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Nov 8 01:18:16.661346 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: renamed from eth0 Nov 8 01:18:16.690335 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: renamed from eth1 Nov 8 01:18:17.567597 kernel: ata2.00: Enabling discard_zeroes_data Nov 8 01:18:17.587089 disk-uuid[721]: The operation has completed successfully. Nov 8 01:18:17.595433 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Nov 8 01:18:17.630933 systemd[1]: disk-uuid.service: Deactivated successfully. Nov 8 01:18:17.630984 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Nov 8 01:18:17.662600 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Nov 8 01:18:17.687486 sh[749]: Success Nov 8 01:18:17.697410 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Nov 8 01:18:17.749851 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Nov 8 01:18:17.771425 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Nov 8 01:18:17.779622 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Nov 8 01:18:17.856850 kernel: BTRFS info (device dm-0): first mount of filesystem a2737782-a37e-42f9-8b56-489a87f47acc Nov 8 01:18:17.856872 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Nov 8 01:18:17.879027 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Nov 8 01:18:17.898727 kernel: BTRFS info (device dm-0): disabling log replay at mount time Nov 8 01:18:17.917490 kernel: BTRFS info (device dm-0): using free space tree Nov 8 01:18:17.957318 kernel: BTRFS info (device dm-0): enabling ssd optimizations Nov 8 01:18:17.958903 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Nov 8 01:18:17.968751 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Nov 8 01:18:17.976575 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Nov 8 01:18:18.127556 kernel: BTRFS info (device sda6): first mount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e Nov 8 01:18:18.127570 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Nov 8 01:18:18.127579 kernel: BTRFS info (device sda6): using free space tree Nov 8 01:18:18.127586 kernel: BTRFS info (device sda6): enabling ssd optimizations Nov 8 01:18:18.127593 kernel: BTRFS info (device sda6): auto enabling async discard Nov 8 01:18:18.127604 kernel: BTRFS info (device sda6): last unmount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e Nov 8 01:18:18.123633 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Nov 8 01:18:18.138838 systemd[1]: Finished ignition-setup.service - Ignition (setup). Nov 8 01:18:18.174510 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Nov 8 01:18:18.185705 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. 
Nov 8 01:18:18.225423 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 8 01:18:18.240861 systemd-networkd[933]: lo: Link UP Nov 8 01:18:18.244014 ignition[910]: Ignition 2.19.0 Nov 8 01:18:18.240863 systemd-networkd[933]: lo: Gained carrier Nov 8 01:18:18.244018 ignition[910]: Stage: fetch-offline Nov 8 01:18:18.243478 systemd-networkd[933]: Enumeration completed Nov 8 01:18:18.244038 ignition[910]: no configs at "/usr/lib/ignition/base.d" Nov 8 01:18:18.243574 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 8 01:18:18.244043 ignition[910]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Nov 8 01:18:18.244187 systemd-networkd[933]: eno1: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 8 01:18:18.244097 ignition[910]: parsed url from cmdline: "" Nov 8 01:18:18.246220 unknown[910]: fetched base config from "system" Nov 8 01:18:18.244099 ignition[910]: no config URL provided Nov 8 01:18:18.246232 unknown[910]: fetched user config from "system" Nov 8 01:18:18.244102 ignition[910]: reading system config file "/usr/lib/ignition/user.ign" Nov 8 01:18:18.256689 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Nov 8 01:18:18.244125 ignition[910]: parsing config with SHA512: 6e9eb37395a719163a6edd5ec05d4e2d2bd74d77c2931bc2a56712b2ac858d33f169269ae288eb510f87d8837f6e26a890cd07634be73968690ab08c35959fe4 Nov 8 01:18:18.274773 systemd-networkd[933]: eno2: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 8 01:18:18.246907 ignition[910]: fetch-offline: fetch-offline passed Nov 8 01:18:18.275515 systemd[1]: Reached target network.target - Network. Nov 8 01:18:18.246910 ignition[910]: POST message to Packet Timeline Nov 8 01:18:18.288580 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). 
Nov 8 01:18:18.246913 ignition[910]: POST Status error: resource requires networking Nov 8 01:18:18.305735 systemd-networkd[933]: enp1s0f0np0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 8 01:18:18.246957 ignition[910]: Ignition finished successfully Nov 8 01:18:18.306680 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Nov 8 01:18:18.336982 ignition[947]: Ignition 2.19.0 Nov 8 01:18:18.515470 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: Link up Nov 8 01:18:18.510350 systemd-networkd[933]: enp1s0f1np1: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 8 01:18:18.337001 ignition[947]: Stage: kargs Nov 8 01:18:18.337490 ignition[947]: no configs at "/usr/lib/ignition/base.d" Nov 8 01:18:18.337520 ignition[947]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Nov 8 01:18:18.340010 ignition[947]: kargs: kargs passed Nov 8 01:18:18.340023 ignition[947]: POST message to Packet Timeline Nov 8 01:18:18.340058 ignition[947]: GET https://metadata.packet.net/metadata: attempt #1 Nov 8 01:18:18.341923 ignition[947]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:34577->[::1]:53: read: connection refused Nov 8 01:18:18.542764 ignition[947]: GET https://metadata.packet.net/metadata: attempt #2 Nov 8 01:18:18.543654 ignition[947]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:46579->[::1]:53: read: connection refused Nov 8 01:18:18.725324 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: Link up Nov 8 01:18:18.726493 systemd-networkd[933]: eno1: Link UP Nov 8 01:18:18.726624 systemd-networkd[933]: eno2: Link UP Nov 8 01:18:18.726741 systemd-networkd[933]: enp1s0f0np0: Link UP Nov 8 01:18:18.726890 systemd-networkd[933]: enp1s0f0np0: Gained carrier Nov 8 01:18:18.737435 systemd-networkd[933]: enp1s0f1np1: Link UP Nov 8 01:18:18.759417 systemd-networkd[933]: enp1s0f0np0: DHCPv4 
address 139.178.94.41/31, gateway 139.178.94.40 acquired from 145.40.83.140 Nov 8 01:18:18.944191 ignition[947]: GET https://metadata.packet.net/metadata: attempt #3 Nov 8 01:18:18.945255 ignition[947]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:42971->[::1]:53: read: connection refused Nov 8 01:18:19.521049 systemd-networkd[933]: enp1s0f1np1: Gained carrier Nov 8 01:18:19.745794 ignition[947]: GET https://metadata.packet.net/metadata: attempt #4 Nov 8 01:18:19.746925 ignition[947]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:46753->[::1]:53: read: connection refused Nov 8 01:18:20.032933 systemd-networkd[933]: enp1s0f0np0: Gained IPv6LL Nov 8 01:18:21.348642 ignition[947]: GET https://metadata.packet.net/metadata: attempt #5 Nov 8 01:18:21.349623 ignition[947]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:55583->[::1]:53: read: connection refused Nov 8 01:18:21.376809 systemd-networkd[933]: enp1s0f1np1: Gained IPv6LL Nov 8 01:18:24.552267 ignition[947]: GET https://metadata.packet.net/metadata: attempt #6 Nov 8 01:18:25.745557 ignition[947]: GET result: OK Nov 8 01:18:26.150433 ignition[947]: Ignition finished successfully Nov 8 01:18:26.155596 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Nov 8 01:18:26.184532 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Nov 8 01:18:26.190790 ignition[967]: Ignition 2.19.0
Nov 8 01:18:26.190795 ignition[967]: Stage: disks
Nov 8 01:18:26.190906 ignition[967]: no configs at "/usr/lib/ignition/base.d"
Nov 8 01:18:26.190914 ignition[967]: no config dir at "/usr/lib/ignition/base.platform.d/packet"
Nov 8 01:18:26.191494 ignition[967]: disks: disks passed
Nov 8 01:18:26.191497 ignition[967]: POST message to Packet Timeline
Nov 8 01:18:26.191506 ignition[967]: GET https://metadata.packet.net/metadata: attempt #1
Nov 8 01:18:27.062457 ignition[967]: GET result: OK
Nov 8 01:18:27.499542 ignition[967]: Ignition finished successfully
Nov 8 01:18:27.502045 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Nov 8 01:18:27.518551 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Nov 8 01:18:27.537715 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Nov 8 01:18:27.558707 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 8 01:18:27.580706 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 8 01:18:27.601602 systemd[1]: Reached target basic.target - Basic System.
Nov 8 01:18:27.631543 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Nov 8 01:18:27.672240 systemd-fsck[984]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Nov 8 01:18:27.681896 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Nov 8 01:18:27.704561 systemd[1]: Mounting sysroot.mount - /sysroot...
Nov 8 01:18:27.808289 kernel: EXT4-fs (sda9): mounted filesystem 3cd35b5c-4e0e-45c1-abc9-cf70eebd42df r/w with ordered data mode. Quota mode: none.
Nov 8 01:18:27.808571 systemd[1]: Mounted sysroot.mount - /sysroot.
Nov 8 01:18:27.818807 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Nov 8 01:18:27.835582 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 8 01:18:27.861223 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Nov 8 01:18:27.909346 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by mount (993)
Nov 8 01:18:27.909360 kernel: BTRFS info (device sda6): first mount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e
Nov 8 01:18:27.877823 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Nov 8 01:18:27.991634 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Nov 8 01:18:27.991645 kernel: BTRFS info (device sda6): using free space tree
Nov 8 01:18:27.991656 kernel: BTRFS info (device sda6): enabling ssd optimizations
Nov 8 01:18:27.991663 kernel: BTRFS info (device sda6): auto enabling async discard
Nov 8 01:18:28.002915 systemd[1]: Starting flatcar-static-network.service - Flatcar Static Network Agent...
Nov 8 01:18:28.010745 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Nov 8 01:18:28.010839 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 8 01:18:28.085428 coreos-metadata[995]: Nov 08 01:18:28.068 INFO Fetching https://metadata.packet.net/metadata: Attempt #1
Nov 8 01:18:28.030279 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 8 01:18:28.118392 coreos-metadata[1011]: Nov 08 01:18:28.090 INFO Fetching https://metadata.packet.net/metadata: Attempt #1
Nov 8 01:18:28.057541 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Nov 8 01:18:28.079097 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Nov 8 01:18:28.150549 initrd-setup-root[1025]: cut: /sysroot/etc/passwd: No such file or directory
Nov 8 01:18:28.161401 initrd-setup-root[1032]: cut: /sysroot/etc/group: No such file or directory
Nov 8 01:18:28.172392 initrd-setup-root[1039]: cut: /sysroot/etc/shadow: No such file or directory
Nov 8 01:18:28.182401 initrd-setup-root[1046]: cut: /sysroot/etc/gshadow: No such file or directory
Nov 8 01:18:28.197522 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Nov 8 01:18:28.222528 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Nov 8 01:18:28.259504 kernel: BTRFS info (device sda6): last unmount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e
Nov 8 01:18:28.242120 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Nov 8 01:18:28.269145 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Nov 8 01:18:28.290691 ignition[1113]: INFO : Ignition 2.19.0
Nov 8 01:18:28.290691 ignition[1113]: INFO : Stage: mount
Nov 8 01:18:28.297434 ignition[1113]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 8 01:18:28.297434 ignition[1113]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet"
Nov 8 01:18:28.297434 ignition[1113]: INFO : mount: mount passed
Nov 8 01:18:28.297434 ignition[1113]: INFO : POST message to Packet Timeline
Nov 8 01:18:28.297434 ignition[1113]: INFO : GET https://metadata.packet.net/metadata: attempt #1
Nov 8 01:18:28.295231 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Nov 8 01:18:29.083431 coreos-metadata[1011]: Nov 08 01:18:29.083 INFO Fetch successful
Nov 8 01:18:29.165592 systemd[1]: flatcar-static-network.service: Deactivated successfully.
Nov 8 01:18:29.165652 systemd[1]: Finished flatcar-static-network.service - Flatcar Static Network Agent.
Nov 8 01:18:29.248204 ignition[1113]: INFO : GET result: OK
Nov 8 01:18:29.395776 coreos-metadata[995]: Nov 08 01:18:29.395 INFO Fetch successful
Nov 8 01:18:29.455421 coreos-metadata[995]: Nov 08 01:18:29.455 INFO wrote hostname ci-4081.3.6-n-8acfe54808 to /sysroot/etc/hostname
Nov 8 01:18:29.457109 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Nov 8 01:18:30.104490 ignition[1113]: INFO : Ignition finished successfully
Nov 8 01:18:30.107565 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Nov 8 01:18:30.140514 systemd[1]: Starting ignition-files.service - Ignition (files)...
Nov 8 01:18:30.150647 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 8 01:18:30.215309 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (1139)
Nov 8 01:18:30.244879 kernel: BTRFS info (device sda6): first mount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e
Nov 8 01:18:30.244895 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Nov 8 01:18:30.262663 kernel: BTRFS info (device sda6): using free space tree
Nov 8 01:18:30.300821 kernel: BTRFS info (device sda6): enabling ssd optimizations
Nov 8 01:18:30.300837 kernel: BTRFS info (device sda6): auto enabling async discard
Nov 8 01:18:30.313996 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 8 01:18:30.341130 ignition[1156]: INFO : Ignition 2.19.0
Nov 8 01:18:30.341130 ignition[1156]: INFO : Stage: files
Nov 8 01:18:30.356563 ignition[1156]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 8 01:18:30.356563 ignition[1156]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet"
Nov 8 01:18:30.356563 ignition[1156]: DEBUG : files: compiled without relabeling support, skipping
Nov 8 01:18:30.356563 ignition[1156]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Nov 8 01:18:30.356563 ignition[1156]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Nov 8 01:18:30.356563 ignition[1156]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Nov 8 01:18:30.356563 ignition[1156]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Nov 8 01:18:30.356563 ignition[1156]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Nov 8 01:18:30.356563 ignition[1156]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Nov 8 01:18:30.356563 ignition[1156]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
Nov 8 01:18:30.345736 unknown[1156]: wrote ssh authorized keys file for user: core
Nov 8 01:18:30.490576 ignition[1156]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Nov 8 01:18:30.490576 ignition[1156]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Nov 8 01:18:30.490576 ignition[1156]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Nov 8 01:18:30.490576 ignition[1156]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Nov 8 01:18:30.490576 ignition[1156]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Nov 8 01:18:30.490576 ignition[1156]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Nov 8 01:18:30.490576 ignition[1156]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 8 01:18:30.490576 ignition[1156]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 8 01:18:30.490576 ignition[1156]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 8 01:18:30.490576 ignition[1156]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 8 01:18:30.490576 ignition[1156]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Nov 8 01:18:30.490576 ignition[1156]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Nov 8 01:18:30.490576 ignition[1156]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Nov 8 01:18:30.490576 ignition[1156]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Nov 8 01:18:30.490576 ignition[1156]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Nov 8 01:18:30.739645 ignition[1156]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1
Nov 8 01:18:31.002461 ignition[1156]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Nov 8 01:18:32.219592 ignition[1156]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Nov 8 01:18:32.219592 ignition[1156]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Nov 8 01:18:32.249539 ignition[1156]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 8 01:18:32.249539 ignition[1156]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 8 01:18:32.249539 ignition[1156]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Nov 8 01:18:32.249539 ignition[1156]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Nov 8 01:18:32.249539 ignition[1156]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Nov 8 01:18:32.249539 ignition[1156]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Nov 8 01:18:32.249539 ignition[1156]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Nov 8 01:18:32.249539 ignition[1156]: INFO : files: files passed
Nov 8 01:18:32.249539 ignition[1156]: INFO : POST message to Packet Timeline
Nov 8 01:18:32.249539 ignition[1156]: INFO : GET https://metadata.packet.net/metadata: attempt #1
Nov 8 01:18:33.231129 ignition[1156]: INFO : GET result: OK
Nov 8 01:18:33.612817 ignition[1156]: INFO : Ignition finished successfully
Nov 8 01:18:33.616867 systemd[1]: Finished ignition-files.service - Ignition (files).
Nov 8 01:18:33.645562 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Nov 8 01:18:33.655893 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Nov 8 01:18:33.665647 systemd[1]: ignition-quench.service: Deactivated successfully.
Nov 8 01:18:33.665707 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Nov 8 01:18:33.698876 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 8 01:18:33.718816 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Nov 8 01:18:33.758481 initrd-setup-root-after-ignition[1195]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 8 01:18:33.758481 initrd-setup-root-after-ignition[1195]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Nov 8 01:18:33.755533 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Nov 8 01:18:33.809596 initrd-setup-root-after-ignition[1199]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 8 01:18:33.832548 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Nov 8 01:18:33.832609 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Nov 8 01:18:33.850710 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Nov 8 01:18:33.871490 systemd[1]: Reached target initrd.target - Initrd Default Target.
Nov 8 01:18:33.892823 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Nov 8 01:18:33.907720 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Nov 8 01:18:33.984667 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 8 01:18:34.017031 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Nov 8 01:18:34.036785 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Nov 8 01:18:34.040518 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 8 01:18:34.071610 systemd[1]: Stopped target timers.target - Timer Units.
Nov 8 01:18:34.089668 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Nov 8 01:18:34.089864 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 8 01:18:34.117148 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Nov 8 01:18:34.137923 systemd[1]: Stopped target basic.target - Basic System.
Nov 8 01:18:34.156882 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Nov 8 01:18:34.175018 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 8 01:18:34.195901 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Nov 8 01:18:34.216897 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Nov 8 01:18:34.237022 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 8 01:18:34.257901 systemd[1]: Stopped target sysinit.target - System Initialization.
Nov 8 01:18:34.279022 systemd[1]: Stopped target local-fs.target - Local File Systems.
Nov 8 01:18:34.299006 systemd[1]: Stopped target swap.target - Swaps.
Nov 8 01:18:34.317786 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Nov 8 01:18:34.318194 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Nov 8 01:18:34.344108 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Nov 8 01:18:34.363923 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 8 01:18:34.384775 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Nov 8 01:18:34.385240 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 8 01:18:34.407890 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Nov 8 01:18:34.408312 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Nov 8 01:18:34.439854 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Nov 8 01:18:34.440339 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 8 01:18:34.460098 systemd[1]: Stopped target paths.target - Path Units.
Nov 8 01:18:34.477766 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Nov 8 01:18:34.478186 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 8 01:18:34.498912 systemd[1]: Stopped target slices.target - Slice Units.
Nov 8 01:18:34.516910 systemd[1]: Stopped target sockets.target - Socket Units.
Nov 8 01:18:34.535867 systemd[1]: iscsid.socket: Deactivated successfully.
Nov 8 01:18:34.536176 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Nov 8 01:18:34.556043 systemd[1]: iscsiuio.socket: Deactivated successfully.
Nov 8 01:18:34.556383 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 8 01:18:34.578952 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Nov 8 01:18:34.579373 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 8 01:18:34.598001 systemd[1]: ignition-files.service: Deactivated successfully.
Nov 8 01:18:34.710434 ignition[1221]: INFO : Ignition 2.19.0
Nov 8 01:18:34.710434 ignition[1221]: INFO : Stage: umount
Nov 8 01:18:34.710434 ignition[1221]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 8 01:18:34.710434 ignition[1221]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet"
Nov 8 01:18:34.710434 ignition[1221]: INFO : umount: umount passed
Nov 8 01:18:34.710434 ignition[1221]: INFO : POST message to Packet Timeline
Nov 8 01:18:34.710434 ignition[1221]: INFO : GET https://metadata.packet.net/metadata: attempt #1
Nov 8 01:18:34.598401 systemd[1]: Stopped ignition-files.service - Ignition (files).
Nov 8 01:18:34.615870 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Nov 8 01:18:34.616244 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Nov 8 01:18:34.650550 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Nov 8 01:18:34.670369 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Nov 8 01:18:34.670470 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 8 01:18:34.705614 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Nov 8 01:18:34.718491 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Nov 8 01:18:34.718907 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 8 01:18:34.736886 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Nov 8 01:18:34.737250 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 8 01:18:34.769935 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Nov 8 01:18:34.771796 systemd[1]: sysroot-boot.service: Deactivated successfully.
Nov 8 01:18:34.772053 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Nov 8 01:18:34.787230 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Nov 8 01:18:34.787495 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Nov 8 01:18:35.678340 ignition[1221]: INFO : GET result: OK
Nov 8 01:18:36.105163 ignition[1221]: INFO : Ignition finished successfully
Nov 8 01:18:36.107929 systemd[1]: ignition-mount.service: Deactivated successfully.
Nov 8 01:18:36.108226 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Nov 8 01:18:36.126639 systemd[1]: Stopped target network.target - Network.
Nov 8 01:18:36.142577 systemd[1]: ignition-disks.service: Deactivated successfully.
Nov 8 01:18:36.142782 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Nov 8 01:18:36.160686 systemd[1]: ignition-kargs.service: Deactivated successfully.
Nov 8 01:18:36.160854 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Nov 8 01:18:36.179702 systemd[1]: ignition-setup.service: Deactivated successfully.
Nov 8 01:18:36.179863 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Nov 8 01:18:36.197706 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Nov 8 01:18:36.197873 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Nov 8 01:18:36.215801 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Nov 8 01:18:36.215973 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Nov 8 01:18:36.235223 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Nov 8 01:18:36.246426 systemd-networkd[933]: enp1s0f0np0: DHCPv6 lease lost
Nov 8 01:18:36.253510 systemd-networkd[933]: enp1s0f1np1: DHCPv6 lease lost
Nov 8 01:18:36.253781 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Nov 8 01:18:36.272443 systemd[1]: systemd-resolved.service: Deactivated successfully.
Nov 8 01:18:36.272730 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Nov 8 01:18:36.291530 systemd[1]: systemd-networkd.service: Deactivated successfully.
Nov 8 01:18:36.291882 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Nov 8 01:18:36.311950 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Nov 8 01:18:36.312072 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Nov 8 01:18:36.347431 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Nov 8 01:18:36.370441 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Nov 8 01:18:36.370484 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 8 01:18:36.390675 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 8 01:18:36.390765 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Nov 8 01:18:36.409789 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 8 01:18:36.409952 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Nov 8 01:18:36.429806 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Nov 8 01:18:36.429973 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 8 01:18:36.449042 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 8 01:18:36.471690 systemd[1]: systemd-udevd.service: Deactivated successfully.
Nov 8 01:18:36.472170 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 8 01:18:36.506392 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Nov 8 01:18:36.506542 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Nov 8 01:18:36.508802 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Nov 8 01:18:36.508904 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 8 01:18:36.536566 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Nov 8 01:18:36.536730 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Nov 8 01:18:36.566873 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Nov 8 01:18:36.567042 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Nov 8 01:18:36.606480 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Nov 8 01:18:36.606651 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 8 01:18:36.647559 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Nov 8 01:18:36.677333 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Nov 8 01:18:36.897492 systemd-journald[268]: Received SIGTERM from PID 1 (systemd).
Nov 8 01:18:36.677377 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 8 01:18:36.696592 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 8 01:18:36.696734 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 8 01:18:36.719688 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Nov 8 01:18:36.719938 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Nov 8 01:18:36.755136 systemd[1]: network-cleanup.service: Deactivated successfully.
Nov 8 01:18:36.755506 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Nov 8 01:18:36.770521 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Nov 8 01:18:36.806789 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Nov 8 01:18:36.830843 systemd[1]: Switching root.
Nov 8 01:18:36.991476 systemd-journald[268]: Journal stopped
Nov 8 01:18:39.597398 kernel: SELinux: policy capability network_peer_controls=1
Nov 8 01:18:39.597412 kernel: SELinux: policy capability open_perms=1
Nov 8 01:18:39.597420 kernel: SELinux: policy capability extended_socket_class=1
Nov 8 01:18:39.597426 kernel: SELinux: policy capability always_check_network=0
Nov 8 01:18:39.597432 kernel: SELinux: policy capability cgroup_seclabel=1
Nov 8 01:18:39.597437 kernel: SELinux: policy capability nnp_nosuid_transition=1
Nov 8 01:18:39.597444 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Nov 8 01:18:39.597449 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Nov 8 01:18:39.597454 kernel: audit: type=1403 audit(1762564717.217:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Nov 8 01:18:39.597461 systemd[1]: Successfully loaded SELinux policy in 160.749ms.
Nov 8 01:18:39.597469 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 7.098ms.
Nov 8 01:18:39.597476 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Nov 8 01:18:39.597483 systemd[1]: Detected architecture x86-64.
Nov 8 01:18:39.597489 systemd[1]: Detected first boot.
Nov 8 01:18:39.597495 systemd[1]: Hostname set to .
Nov 8 01:18:39.597503 systemd[1]: Initializing machine ID from random generator.
Nov 8 01:18:39.597510 zram_generator::config[1271]: No configuration found.
Nov 8 01:18:39.597516 systemd[1]: Populated /etc with preset unit settings.
Nov 8 01:18:39.597523 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Nov 8 01:18:39.597529 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Nov 8 01:18:39.597535 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Nov 8 01:18:39.597542 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Nov 8 01:18:39.597549 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Nov 8 01:18:39.597556 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Nov 8 01:18:39.597563 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Nov 8 01:18:39.597569 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Nov 8 01:18:39.597576 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Nov 8 01:18:39.597582 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Nov 8 01:18:39.597589 systemd[1]: Created slice user.slice - User and Session Slice.
Nov 8 01:18:39.597596 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 8 01:18:39.597603 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 8 01:18:39.597609 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Nov 8 01:18:39.597616 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Nov 8 01:18:39.597623 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Nov 8 01:18:39.597629 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 8 01:18:39.597636 systemd[1]: Expecting device dev-ttyS1.device - /dev/ttyS1...
Nov 8 01:18:39.597642 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 8 01:18:39.597650 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Nov 8 01:18:39.597656 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Nov 8 01:18:39.597663 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Nov 8 01:18:39.597671 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Nov 8 01:18:39.597678 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 8 01:18:39.597685 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 8 01:18:39.597692 systemd[1]: Reached target slices.target - Slice Units.
Nov 8 01:18:39.597699 systemd[1]: Reached target swap.target - Swaps.
Nov 8 01:18:39.597706 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Nov 8 01:18:39.597715 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Nov 8 01:18:39.597722 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 8 01:18:39.597729 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 8 01:18:39.597735 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 8 01:18:39.597743 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Nov 8 01:18:39.597750 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Nov 8 01:18:39.597757 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Nov 8 01:18:39.597764 systemd[1]: Mounting media.mount - External Media Directory...
Nov 8 01:18:39.597770 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 8 01:18:39.597777 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Nov 8 01:18:39.597784 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Nov 8 01:18:39.597792 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Nov 8 01:18:39.597799 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Nov 8 01:18:39.597806 systemd[1]: Reached target machines.target - Containers.
Nov 8 01:18:39.597813 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Nov 8 01:18:39.597820 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 8 01:18:39.597827 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 8 01:18:39.597833 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Nov 8 01:18:39.597840 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 8 01:18:39.597847 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Nov 8 01:18:39.597855 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 8 01:18:39.597862 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Nov 8 01:18:39.597868 kernel: ACPI: bus type drm_connector registered
Nov 8 01:18:39.597875 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 8 01:18:39.597881 kernel: fuse: init (API version 7.39)
Nov 8 01:18:39.597888 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Nov 8 01:18:39.597895 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Nov 8 01:18:39.597902 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Nov 8 01:18:39.597909 kernel: loop: module loaded
Nov 8 01:18:39.597916 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Nov 8 01:18:39.597923 systemd[1]: Stopped systemd-fsck-usr.service.
Nov 8 01:18:39.597929 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 8 01:18:39.597944 systemd-journald[1375]: Collecting audit messages is disabled.
Nov 8 01:18:39.597960 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 8 01:18:39.597967 systemd-journald[1375]: Journal started
Nov 8 01:18:39.597981 systemd-journald[1375]: Runtime Journal (/run/log/journal/a26b0b17e01f4062b3f21a9db132b962) is 8.0M, max 639.9M, 631.9M free.
Nov 8 01:18:37.777349 systemd[1]: Queued start job for default target multi-user.target.
Nov 8 01:18:37.792258 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Nov 8 01:18:37.792548 systemd[1]: systemd-journald.service: Deactivated successfully.
Nov 8 01:18:39.649355 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Nov 8 01:18:39.683361 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Nov 8 01:18:39.716331 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 8 01:18:39.749637 systemd[1]: verity-setup.service: Deactivated successfully.
Nov 8 01:18:39.749671 systemd[1]: Stopped verity-setup.service.
Nov 8 01:18:39.812338 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 8 01:18:39.833487 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 8 01:18:39.843880 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Nov 8 01:18:39.854578 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Nov 8 01:18:39.864566 systemd[1]: Mounted media.mount - External Media Directory.
Nov 8 01:18:39.874433 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Nov 8 01:18:39.884801 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Nov 8 01:18:39.894815 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Nov 8 01:18:39.905187 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Nov 8 01:18:39.917176 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 8 01:18:39.929214 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 8 01:18:39.929626 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Nov 8 01:18:39.941239 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 8 01:18:39.941732 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 8 01:18:39.953443 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 8 01:18:39.953861 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Nov 8 01:18:39.964232 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 8 01:18:39.964692 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 8 01:18:39.976231 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Nov 8 01:18:39.976650 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Nov 8 01:18:39.987224 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 8 01:18:39.987632 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 8 01:18:39.998327 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 8 01:18:40.009216 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Nov 8 01:18:40.021190 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Nov 8 01:18:40.033198 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 8 01:18:40.069487 systemd[1]: Reached target network-pre.target - Preparation for Network.
Nov 8 01:18:40.093572 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Nov 8 01:18:40.104105 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Nov 8 01:18:40.113484 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Nov 8 01:18:40.113507 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 8 01:18:40.124250 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Nov 8 01:18:40.136506 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Nov 8 01:18:40.148982 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Nov 8 01:18:40.158602 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 8 01:18:40.160606 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Nov 8 01:18:40.170945 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Nov 8 01:18:40.181426 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 8 01:18:40.182093 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Nov 8 01:18:40.186485 systemd-journald[1375]: Time spent on flushing to /var/log/journal/a26b0b17e01f4062b3f21a9db132b962 is 12.838ms for 1368 entries.
Nov 8 01:18:40.186485 systemd-journald[1375]: System Journal (/var/log/journal/a26b0b17e01f4062b3f21a9db132b962) is 8.0M, max 195.6M, 187.6M free.
Nov 8 01:18:40.224104 systemd-journald[1375]: Received client request to flush runtime journal.
Nov 8 01:18:40.213430 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 8 01:18:40.233813 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 8 01:18:40.244093 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Nov 8 01:18:40.270835 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Nov 8 01:18:40.288135 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Nov 8 01:18:40.297288 kernel: loop0: detected capacity change from 0 to 140768
Nov 8 01:18:40.308454 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Nov 8 01:18:40.319473 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Nov 8 01:18:40.335643 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Nov 8 01:18:40.349331 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Nov 8 01:18:40.359604 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Nov 8 01:18:40.370506 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Nov 8 01:18:40.381497 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 8 01:18:40.397502 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Nov 8 01:18:40.403299 kernel: loop1: detected capacity change from 0 to 224512
Nov 8 01:18:40.415561 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Nov 8 01:18:40.443537 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Nov 8 01:18:40.455010 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 8 01:18:40.469624 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Nov 8 01:18:40.470054 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Nov 8 01:18:40.482341 kernel: loop2: detected capacity change from 0 to 8
Nov 8 01:18:40.482708 udevadm[1411]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Nov 8 01:18:40.549296 kernel: loop3: detected capacity change from 0 to 142488
Nov 8 01:18:40.558332 systemd-tmpfiles[1425]: ACLs are not supported, ignoring.
Nov 8 01:18:40.558343 systemd-tmpfiles[1425]: ACLs are not supported, ignoring.
Nov 8 01:18:40.561391 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 8 01:18:40.575508 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Nov 8 01:18:40.586786 ldconfig[1401]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Nov 8 01:18:40.597680 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 8 01:18:40.609502 systemd-udevd[1432]: Using default interface naming scheme 'v255'.
Nov 8 01:18:40.609599 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Nov 8 01:18:40.627637 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 8 01:18:40.640299 kernel: loop4: detected capacity change from 0 to 140768
Nov 8 01:18:40.664612 systemd[1]: Condition check resulted in dev-ttyS1.device - /dev/ttyS1 being skipped.
Nov 8 01:18:40.671303 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 35 scanned by (udev-worker) (1488)
Nov 8 01:18:40.680296 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input2
Nov 8 01:18:40.680352 kernel: loop5: detected capacity change from 0 to 224512
Nov 8 01:18:40.699304 kernel: ACPI: button: Sleep Button [SLPB]
Nov 8 01:18:40.721255 kernel: loop6: detected capacity change from 0 to 8
Nov 8 01:18:40.721303 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Nov 8 01:18:40.729360 kernel: loop7: detected capacity change from 0 to 142488
Nov 8 01:18:40.735290 kernel: ACPI: button: Power Button [PWRF]
Nov 8 01:18:40.745335 (sd-merge)[1436]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-packet'.
Nov 8 01:18:40.745576 (sd-merge)[1436]: Merged extensions into '/usr'.
Nov 8 01:18:40.804145 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 8 01:18:40.818406 kernel: IPMI message handler: version 39.2
Nov 8 01:18:40.818471 kernel: mousedev: PS/2 mouse device common for all mice
Nov 8 01:18:40.818493 kernel: i801_smbus 0000:00:1f.4: SPD Write Disable is set
Nov 8 01:18:40.818654 kernel: i801_smbus 0000:00:1f.4: SMBus using PCI interrupt
Nov 8 01:18:40.819292 kernel: i2c i2c-0: 2/4 memory slots populated (from DMI)
Nov 8 01:18:40.820695 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Micron_5300_MTFDDAK480TDT OEM.
Nov 8 01:18:40.910294 kernel: ipmi device interface
Nov 8 01:18:40.910350 kernel: iTCO_vendor_support: vendor-support=0
Nov 8 01:18:40.921353 systemd[1]: Reloading requested from client PID 1407 ('systemd-sysext') (unit systemd-sysext.service)...
Nov 8 01:18:40.921361 systemd[1]: Reloading...
Nov 8 01:18:40.970513 kernel: mei_me 0000:00:16.4: Device doesn't have valid ME Interface
Nov 8 01:18:40.970706 kernel: mei_me 0000:00:16.0: Device doesn't have valid ME Interface
Nov 8 01:18:40.977294 zram_generator::config[1547]: No configuration found.
Nov 8 01:18:41.019327 kernel: ipmi_si: IPMI System Interface driver
Nov 8 01:18:41.019377 kernel: iTCO_wdt iTCO_wdt: Found a Intel PCH TCO device (Version=6, TCOBASE=0x0400)
Nov 8 01:18:41.033196 kernel: ipmi_si dmi-ipmi-si.0: ipmi_platform: probing via SMBIOS
Nov 8 01:18:41.033308 kernel: iTCO_wdt iTCO_wdt: initialized. heartbeat=30 sec (nowayout=0)
Nov 8 01:18:41.033391 kernel: ipmi_platform: ipmi_si: SMBIOS: io 0xca2 regsize 1 spacing 1 irq 0
Nov 8 01:18:41.033402 kernel: ipmi_si: Adding SMBIOS-specified kcs state machine
Nov 8 01:18:41.033457 kernel: ipmi_si IPI0001:00: ipmi_platform: probing via ACPI
Nov 8 01:18:41.051872 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Nov 8 01:18:41.108531 systemd[1]: Reloading finished in 186 ms.
Nov 8 01:18:41.144097 kernel: ipmi_si IPI0001:00: ipmi_platform: [io 0x0ca2] regsize 1 spacing 1 irq 0
Nov 8 01:18:41.163543 kernel: ipmi_si dmi-ipmi-si.0: Removing SMBIOS-specified kcs state machine in favor of ACPI
Nov 8 01:18:41.179945 kernel: ipmi_si: Adding ACPI-specified kcs state machine
Nov 8 01:18:41.191327 kernel: ipmi_si: Trying ACPI-specified kcs state machine at i/o address 0xca2, slave address 0x20, irq 0
Nov 8 01:18:41.242490 kernel: intel_rapl_common: Found RAPL domain package
Nov 8 01:18:41.242529 kernel: intel_rapl_common: Found RAPL domain core
Nov 8 01:18:41.242540 kernel: ipmi_si IPI0001:00: The BMC does not support clearing the recv irq bit, compensating, but the BMC needs to be fixed.
Nov 8 01:18:41.242637 kernel: intel_rapl_common: Found RAPL domain dram
Nov 8 01:18:41.291330 kernel: ipmi_si IPI0001:00: IPMI message handler: Found new BMC (man_id: 0x002a7c, prod_id: 0x1b0f, dev_id: 0x20)
Nov 8 01:18:41.342640 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Nov 8 01:18:41.371292 kernel: ipmi_si IPI0001:00: IPMI kcs interface initialized
Nov 8 01:18:41.388316 kernel: ipmi_ssif: IPMI SSIF Interface driver
Nov 8 01:18:41.397454 systemd[1]: Starting ensure-sysext.service...
Nov 8 01:18:41.405949 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Nov 8 01:18:41.416930 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 8 01:18:41.436925 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Nov 8 01:18:41.447321 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 8 01:18:41.447782 systemd-tmpfiles[1614]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Nov 8 01:18:41.448215 systemd-tmpfiles[1614]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Nov 8 01:18:41.449010 systemd-tmpfiles[1614]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Nov 8 01:18:41.449316 systemd-tmpfiles[1614]: ACLs are not supported, ignoring.
Nov 8 01:18:41.449404 systemd-tmpfiles[1614]: ACLs are not supported, ignoring.
Nov 8 01:18:41.451790 systemd-tmpfiles[1614]: Detected autofs mount point /boot during canonicalization of boot.
Nov 8 01:18:41.451797 systemd-tmpfiles[1614]: Skipping /boot
Nov 8 01:18:41.456739 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Nov 8 01:18:41.457574 systemd-tmpfiles[1614]: Detected autofs mount point /boot during canonicalization of boot.
Nov 8 01:18:41.457581 systemd-tmpfiles[1614]: Skipping /boot
Nov 8 01:18:41.467538 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Nov 8 01:18:41.467733 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Nov 8 01:18:41.470895 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Nov 8 01:18:41.470996 systemd[1]: Reloading requested from client PID 1612 ('systemctl') (unit ensure-sysext.service)...
Nov 8 01:18:41.471002 systemd[1]: Reloading...
Nov 8 01:18:41.480717 lvm[1627]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Nov 8 01:18:41.514302 zram_generator::config[1657]: No configuration found.
Nov 8 01:18:41.520124 systemd-networkd[1517]: lo: Link UP
Nov 8 01:18:41.520127 systemd-networkd[1517]: lo: Gained carrier
Nov 8 01:18:41.522899 systemd-networkd[1517]: bond0: netdev ready
Nov 8 01:18:41.523888 systemd-networkd[1517]: Enumeration completed
Nov 8 01:18:41.525704 systemd-networkd[1517]: enp1s0f0np0: Configuring with /etc/systemd/network/10-0c:42:a1:15:b6:dc.network.
Nov 8 01:18:41.581166 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Nov 8 01:18:41.636581 systemd[1]: Reloading finished in 165 ms.
Nov 8 01:18:41.655074 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 8 01:18:41.676655 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 8 01:18:41.692068 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 8 01:18:41.702437 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: Link up
Nov 8 01:18:41.720586 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Nov 8 01:18:41.725293 kernel: bond0: (slave enp1s0f0np0): Enslaving as a backup interface with an up link
Nov 8 01:18:41.725887 systemd-networkd[1517]: enp1s0f1np1: Configuring with /etc/systemd/network/10-0c:42:a1:15:b6:dd.network.
Nov 8 01:18:41.752722 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 8 01:18:41.777940 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Nov 8 01:18:41.791122 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Nov 8 01:18:41.803800 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Nov 8 01:18:41.813246 augenrules[1738]: No rules
Nov 8 01:18:41.815218 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Nov 8 01:18:41.817183 lvm[1734]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Nov 8 01:18:41.827032 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Nov 8 01:18:41.839558 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 8 01:18:41.850075 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Nov 8 01:18:41.861957 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Nov 8 01:18:41.877667 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Nov 8 01:18:41.885445 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: Link up
Nov 8 01:18:41.904067 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Nov 8 01:18:41.907637 systemd-networkd[1517]: bond0: Configuring with /etc/systemd/network/05-bond0.network.
Nov 8 01:18:41.908347 kernel: bond0: (slave enp1s0f1np1): Enslaving as a backup interface with an up link
Nov 8 01:18:41.909198 systemd-networkd[1517]: enp1s0f0np0: Link UP
Nov 8 01:18:41.909371 systemd-networkd[1517]: enp1s0f0np0: Gained carrier
Nov 8 01:18:41.930347 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond
Nov 8 01:18:41.937751 systemd-networkd[1517]: enp1s0f1np1: Reconfiguring with /etc/systemd/network/10-0c:42:a1:15:b6:dc.network.
Nov 8 01:18:41.937906 systemd-networkd[1517]: enp1s0f1np1: Link UP
Nov 8 01:18:41.938068 systemd-networkd[1517]: enp1s0f1np1: Gained carrier
Nov 8 01:18:41.938626 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Nov 8 01:18:41.951463 systemd-networkd[1517]: bond0: Link UP
Nov 8 01:18:41.951657 systemd-networkd[1517]: bond0: Gained carrier
Nov 8 01:18:41.953989 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 8 01:18:41.954162 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 8 01:18:41.960052 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 8 01:18:41.970301 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 8 01:18:41.981988 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 8 01:18:41.991194 systemd-resolved[1745]: Positive Trust Anchors:
Nov 8 01:18:41.991201 systemd-resolved[1745]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 8 01:18:41.991224 systemd-resolved[1745]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 8 01:18:41.991444 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 8 01:18:41.992354 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Nov 8 01:18:41.993983 systemd-resolved[1745]: Using system hostname 'ci-4081.3.6-n-8acfe54808'.
Nov 8 01:18:42.002399 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Nov 8 01:18:42.002495 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 8 01:18:42.003473 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 8 01:18:42.013772 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 8 01:18:42.013851 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 8 01:18:42.043574 kernel: bond0: (slave enp1s0f0np0): link status definitely up, 25000 Mbps full duplex
Nov 8 01:18:42.043593 kernel: bond0: active interface up!
Nov 8 01:18:42.059673 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 8 01:18:42.059745 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 8 01:18:42.070651 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 8 01:18:42.070724 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 8 01:18:42.080641 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Nov 8 01:18:42.091159 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Nov 8 01:18:42.104822 systemd[1]: Reached target network.target - Network.
Nov 8 01:18:42.113427 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 8 01:18:42.124428 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 8 01:18:42.124540 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 8 01:18:42.141522 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 8 01:18:42.151957 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 8 01:18:42.170110 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 8 01:18:42.181112 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 8 01:18:42.181184 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Nov 8 01:18:42.181234 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 8 01:18:42.181344 kernel: bond0: (slave enp1s0f1np1): link status definitely up, 25000 Mbps full duplex
Nov 8 01:18:42.181795 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 8 01:18:42.181866 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 8 01:18:42.203643 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 8 01:18:42.203713 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 8 01:18:42.214566 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 8 01:18:42.214634 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 8 01:18:42.226462 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 8 01:18:42.226585 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 8 01:18:42.240520 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 8 01:18:42.250951 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Nov 8 01:18:42.260915 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 8 01:18:42.271956 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 8 01:18:42.281444 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 8 01:18:42.281559 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Nov 8 01:18:42.281609 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 8 01:18:42.282258 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 8 01:18:42.282367 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 8 01:18:42.293767 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 8 01:18:42.293835 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Nov 8 01:18:42.303595 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 8 01:18:42.303664 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 8 01:18:42.314576 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 8 01:18:42.314645 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 8 01:18:42.325257 systemd[1]: Finished ensure-sysext.service.
Nov 8 01:18:42.334771 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 8 01:18:42.334803 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 8 01:18:42.344478 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Nov 8 01:18:42.378865 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Nov 8 01:18:42.389449 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 8 01:18:42.399417 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Nov 8 01:18:42.410387 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Nov 8 01:18:42.421364 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Nov 8 01:18:42.432355 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Nov 8 01:18:42.432370 systemd[1]: Reached target paths.target - Path Units.
Nov 8 01:18:42.440363 systemd[1]: Reached target time-set.target - System Time Set.
Nov 8 01:18:42.450427 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Nov 8 01:18:42.460394 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Nov 8 01:18:42.471355 systemd[1]: Reached target timers.target - Timer Units.
Nov 8 01:18:42.480007 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Nov 8 01:18:42.490022 systemd[1]: Starting docker.socket - Docker Socket for the API...
Nov 8 01:18:42.500188 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Nov 8 01:18:42.509630 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Nov 8 01:18:42.519421 systemd[1]: Reached target sockets.target - Socket Units.
Nov 8 01:18:42.529366 systemd[1]: Reached target basic.target - Basic System.
Nov 8 01:18:42.537385 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Nov 8 01:18:42.537401 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Nov 8 01:18:42.543388 systemd[1]: Starting containerd.service - containerd container runtime...
Nov 8 01:18:42.554039 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Nov 8 01:18:42.563890 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Nov 8 01:18:42.572957 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Nov 8 01:18:42.575790 coreos-metadata[1787]: Nov 08 01:18:42.575 INFO Fetching https://metadata.packet.net/metadata: Attempt #1
Nov 8 01:18:42.583064 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Nov 8 01:18:42.584775 jq[1791]: false
Nov 8 01:18:42.585525 dbus-daemon[1788]: [system] SELinux support is enabled
Nov 8 01:18:42.592421 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Nov 8 01:18:42.593056 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Nov 8 01:18:42.601161 extend-filesystems[1793]: Found loop4
Nov 8 01:18:42.601161 extend-filesystems[1793]: Found loop5
Nov 8 01:18:42.658481 kernel: EXT4-fs (sda9): resizing filesystem from 553472 to 116605649 blocks
Nov 8 01:18:42.658499 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 35 scanned by (udev-worker) (1488)
Nov 8 01:18:42.658515 extend-filesystems[1793]: Found loop6
Nov 8 01:18:42.658515 extend-filesystems[1793]: Found loop7
Nov 8 01:18:42.658515 extend-filesystems[1793]: Found sda
Nov 8 01:18:42.658515 extend-filesystems[1793]: Found sda1
Nov 8 01:18:42.658515 extend-filesystems[1793]: Found sda2
Nov 8 01:18:42.658515 extend-filesystems[1793]: Found sda3
Nov 8 01:18:42.658515 extend-filesystems[1793]: Found usr
Nov 8 01:18:42.658515 extend-filesystems[1793]: Found sda4
Nov 8 01:18:42.658515 extend-filesystems[1793]: Found sda6
Nov 8 01:18:42.658515 extend-filesystems[1793]: Found sda7
Nov 8 01:18:42.658515 extend-filesystems[1793]: Found sda9
Nov 8 01:18:42.658515 extend-filesystems[1793]: Checking size of /dev/sda9
Nov 8 01:18:42.658515 extend-filesystems[1793]: Resized partition /dev/sda9
Nov 8 01:18:42.603098 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Nov 8 01:18:42.771594 extend-filesystems[1803]: resize2fs 1.47.1 (20-May-2024)
Nov 8 01:18:42.667459 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Nov 8 01:18:42.702396 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Nov 8 01:18:42.741433 systemd[1]: Starting systemd-logind.service - User Login Management...
Nov 8 01:18:42.759846 systemd[1]: Starting tcsd.service - TCG Core Services Daemon...
Nov 8 01:18:42.771724 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Nov 8 01:18:42.772122 systemd[1]: Starting update-engine.service - Update Engine...
Nov 8 01:18:42.780115 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Nov 8 01:18:42.780877 systemd-logind[1816]: Watching system buttons on /dev/input/event3 (Power Button)
Nov 8 01:18:42.780888 systemd-logind[1816]: Watching system buttons on /dev/input/event2 (Sleep Button)
Nov 8 01:18:42.780899 systemd-logind[1816]: Watching system buttons on /dev/input/event0 (HID 0557:2419)
Nov 8 01:18:42.781198 systemd-logind[1816]: New seat seat0.
Nov 8 01:18:42.803462 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Nov 8 01:18:42.807132 jq[1819]: true
Nov 8 01:18:42.810912 update_engine[1818]: I20251108 01:18:42.810851 1818 main.cc:92] Flatcar Update Engine starting
Nov 8 01:18:42.811504 update_engine[1818]: I20251108 01:18:42.811465 1818 update_check_scheduler.cc:74] Next update check in 11m50s
Nov 8 01:18:42.815831 systemd[1]: Started systemd-logind.service - User Login Management.
Nov 8 01:18:42.836483 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Nov 8 01:18:42.836595 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Nov 8 01:18:42.836790 systemd[1]: motdgen.service: Deactivated successfully.
Nov 8 01:18:42.836893 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Nov 8 01:18:42.846782 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Nov 8 01:18:42.846897 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Nov 8 01:18:42.883281 jq[1822]: true Nov 8 01:18:42.884100 (ntainerd)[1823]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Nov 8 01:18:42.888853 dbus-daemon[1788]: [system] Successfully activated service 'org.freedesktop.systemd1' Nov 8 01:18:42.891669 tar[1821]: linux-amd64/LICENSE Nov 8 01:18:42.891806 tar[1821]: linux-amd64/helm Nov 8 01:18:42.893033 systemd[1]: tcsd.service: Skipped due to 'exec-condition'. Nov 8 01:18:42.893129 systemd[1]: Condition check resulted in tcsd.service - TCG Core Services Daemon being skipped. Nov 8 01:18:42.900577 systemd[1]: Started update-engine.service - Update Engine. Nov 8 01:18:42.911167 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Nov 8 01:18:42.911339 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Nov 8 01:18:42.922408 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Nov 8 01:18:42.922534 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Nov 8 01:18:42.934018 sshd_keygen[1815]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Nov 8 01:18:42.943513 systemd[1]: Started locksmithd.service - Cluster reboot manager. Nov 8 01:18:42.945066 bash[1850]: Updated "/home/core/.ssh/authorized_keys" Nov 8 01:18:42.956025 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Nov 8 01:18:42.965562 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. 
Nov 8 01:18:42.968420 locksmithd[1852]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Nov 8 01:18:42.988550 systemd[1]: Starting issuegen.service - Generate /run/issue... Nov 8 01:18:42.997608 systemd[1]: Starting sshkeys.service... Nov 8 01:18:43.005826 systemd[1]: issuegen.service: Deactivated successfully. Nov 8 01:18:43.005964 systemd[1]: Finished issuegen.service - Generate /run/issue. Nov 8 01:18:43.017797 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Nov 8 01:18:43.029942 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Nov 8 01:18:43.041416 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Nov 8 01:18:43.052811 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Nov 8 01:18:43.063765 coreos-metadata[1880]: Nov 08 01:18:43.063 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Nov 8 01:18:43.065654 systemd[1]: Started getty@tty1.service - Getty on tty1. Nov 8 01:18:43.066584 containerd[1823]: time="2025-11-08T01:18:43.066544093Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Nov 8 01:18:43.074468 systemd[1]: Started serial-getty@ttyS1.service - Serial Getty on ttyS1. Nov 8 01:18:43.079088 containerd[1823]: time="2025-11-08T01:18:43.079068315Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Nov 8 01:18:43.079831 containerd[1823]: time="2025-11-08T01:18:43.079813530Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.113-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Nov 8 01:18:43.079856 containerd[1823]: time="2025-11-08T01:18:43.079830375Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Nov 8 01:18:43.079856 containerd[1823]: time="2025-11-08T01:18:43.079840306Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Nov 8 01:18:43.079937 containerd[1823]: time="2025-11-08T01:18:43.079927324Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Nov 8 01:18:43.079965 containerd[1823]: time="2025-11-08T01:18:43.079938785Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Nov 8 01:18:43.079987 containerd[1823]: time="2025-11-08T01:18:43.079973440Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Nov 8 01:18:43.079987 containerd[1823]: time="2025-11-08T01:18:43.079982014Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Nov 8 01:18:43.080082 containerd[1823]: time="2025-11-08T01:18:43.080069441Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Nov 8 01:18:43.080108 containerd[1823]: time="2025-11-08T01:18:43.080083330Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 Nov 8 01:18:43.080108 containerd[1823]: time="2025-11-08T01:18:43.080091948Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Nov 8 01:18:43.080108 containerd[1823]: time="2025-11-08T01:18:43.080097630Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Nov 8 01:18:43.080168 containerd[1823]: time="2025-11-08T01:18:43.080140694Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Nov 8 01:18:43.080276 containerd[1823]: time="2025-11-08T01:18:43.080268564Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Nov 8 01:18:43.080346 containerd[1823]: time="2025-11-08T01:18:43.080336134Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Nov 8 01:18:43.080364 containerd[1823]: time="2025-11-08T01:18:43.080345355Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Nov 8 01:18:43.080396 containerd[1823]: time="2025-11-08T01:18:43.080388277Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Nov 8 01:18:43.080421 containerd[1823]: time="2025-11-08T01:18:43.080414594Z" level=info msg="metadata content store policy set" policy=shared Nov 8 01:18:43.084487 systemd[1]: Reached target getty.target - Login Prompts. Nov 8 01:18:43.104667 containerd[1823]: time="2025-11-08T01:18:43.104628097Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." 
type=io.containerd.gc.v1 Nov 8 01:18:43.104667 containerd[1823]: time="2025-11-08T01:18:43.104662670Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Nov 8 01:18:43.104709 containerd[1823]: time="2025-11-08T01:18:43.104674612Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Nov 8 01:18:43.104709 containerd[1823]: time="2025-11-08T01:18:43.104684268Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Nov 8 01:18:43.104709 containerd[1823]: time="2025-11-08T01:18:43.104692566Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Nov 8 01:18:43.104809 containerd[1823]: time="2025-11-08T01:18:43.104766000Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Nov 8 01:18:43.104895 containerd[1823]: time="2025-11-08T01:18:43.104886161Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Nov 8 01:18:43.104978 containerd[1823]: time="2025-11-08T01:18:43.104941171Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Nov 8 01:18:43.104978 containerd[1823]: time="2025-11-08T01:18:43.104951128Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Nov 8 01:18:43.104978 containerd[1823]: time="2025-11-08T01:18:43.104958759Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Nov 8 01:18:43.104978 containerd[1823]: time="2025-11-08T01:18:43.104966142Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." 
type=io.containerd.service.v1 Nov 8 01:18:43.104978 containerd[1823]: time="2025-11-08T01:18:43.104974490Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Nov 8 01:18:43.105057 containerd[1823]: time="2025-11-08T01:18:43.104981363Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Nov 8 01:18:43.105057 containerd[1823]: time="2025-11-08T01:18:43.104989072Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Nov 8 01:18:43.105057 containerd[1823]: time="2025-11-08T01:18:43.104997166Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Nov 8 01:18:43.105057 containerd[1823]: time="2025-11-08T01:18:43.105004588Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Nov 8 01:18:43.105057 containerd[1823]: time="2025-11-08T01:18:43.105011278Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Nov 8 01:18:43.105057 containerd[1823]: time="2025-11-08T01:18:43.105017376Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Nov 8 01:18:43.105057 containerd[1823]: time="2025-11-08T01:18:43.105028924Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Nov 8 01:18:43.105057 containerd[1823]: time="2025-11-08T01:18:43.105036783Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Nov 8 01:18:43.105057 containerd[1823]: time="2025-11-08T01:18:43.105044629Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." 
type=io.containerd.grpc.v1 Nov 8 01:18:43.105057 containerd[1823]: time="2025-11-08T01:18:43.105056526Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Nov 8 01:18:43.105197 containerd[1823]: time="2025-11-08T01:18:43.105064078Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Nov 8 01:18:43.105197 containerd[1823]: time="2025-11-08T01:18:43.105071344Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Nov 8 01:18:43.105197 containerd[1823]: time="2025-11-08T01:18:43.105078170Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Nov 8 01:18:43.105197 containerd[1823]: time="2025-11-08T01:18:43.105085344Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Nov 8 01:18:43.105197 containerd[1823]: time="2025-11-08T01:18:43.105092226Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Nov 8 01:18:43.105197 containerd[1823]: time="2025-11-08T01:18:43.105100520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Nov 8 01:18:43.105197 containerd[1823]: time="2025-11-08T01:18:43.105110038Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Nov 8 01:18:43.105197 containerd[1823]: time="2025-11-08T01:18:43.105116858Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Nov 8 01:18:43.105197 containerd[1823]: time="2025-11-08T01:18:43.105123709Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Nov 8 01:18:43.105197 containerd[1823]: time="2025-11-08T01:18:43.105132041Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." 
type=io.containerd.transfer.v1 Nov 8 01:18:43.105197 containerd[1823]: time="2025-11-08T01:18:43.105145649Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Nov 8 01:18:43.105197 containerd[1823]: time="2025-11-08T01:18:43.105152607Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Nov 8 01:18:43.105197 containerd[1823]: time="2025-11-08T01:18:43.105158369Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Nov 8 01:18:43.105197 containerd[1823]: time="2025-11-08T01:18:43.105182467Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Nov 8 01:18:43.105391 containerd[1823]: time="2025-11-08T01:18:43.105192624Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Nov 8 01:18:43.105391 containerd[1823]: time="2025-11-08T01:18:43.105200006Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Nov 8 01:18:43.105391 containerd[1823]: time="2025-11-08T01:18:43.105206722Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Nov 8 01:18:43.105391 containerd[1823]: time="2025-11-08T01:18:43.105212123Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Nov 8 01:18:43.105391 containerd[1823]: time="2025-11-08T01:18:43.105218881Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Nov 8 01:18:43.105391 containerd[1823]: time="2025-11-08T01:18:43.105225139Z" level=info msg="NRI interface is disabled by configuration." 
Nov 8 01:18:43.105391 containerd[1823]: time="2025-11-08T01:18:43.105231168Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Nov 8 01:18:43.105488 containerd[1823]: time="2025-11-08T01:18:43.105394736Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} 
MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Nov 8 01:18:43.105488 containerd[1823]: time="2025-11-08T01:18:43.105428376Z" level=info msg="Connect containerd service" Nov 8 01:18:43.105488 containerd[1823]: time="2025-11-08T01:18:43.105446731Z" level=info msg="using legacy CRI server" Nov 8 01:18:43.105488 containerd[1823]: time="2025-11-08T01:18:43.105451156Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Nov 8 01:18:43.105606 containerd[1823]: time="2025-11-08T01:18:43.105501147Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Nov 8 01:18:43.105824 containerd[1823]: time="2025-11-08T01:18:43.105789814Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 8 01:18:43.105908 containerd[1823]: time="2025-11-08T01:18:43.105881527Z" level=info msg="Start subscribing containerd event" Nov 8 01:18:43.105930 containerd[1823]: time="2025-11-08T01:18:43.105916608Z" level=info msg="Start recovering state" Nov 8 01:18:43.106000 containerd[1823]: time="2025-11-08T01:18:43.105965035Z" 
level=info msg="Start event monitor" Nov 8 01:18:43.106000 containerd[1823]: time="2025-11-08T01:18:43.105968943Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Nov 8 01:18:43.106000 containerd[1823]: time="2025-11-08T01:18:43.105973173Z" level=info msg="Start snapshots syncer" Nov 8 01:18:43.106000 containerd[1823]: time="2025-11-08T01:18:43.105998246Z" level=info msg=serving... address=/run/containerd/containerd.sock Nov 8 01:18:43.106059 containerd[1823]: time="2025-11-08T01:18:43.106002200Z" level=info msg="Start cni network conf syncer for default" Nov 8 01:18:43.106059 containerd[1823]: time="2025-11-08T01:18:43.106025865Z" level=info msg="Start streaming server" Nov 8 01:18:43.106086 containerd[1823]: time="2025-11-08T01:18:43.106063981Z" level=info msg="containerd successfully booted in 0.040150s" Nov 8 01:18:43.106102 systemd[1]: Started containerd.service - containerd container runtime. Nov 8 01:18:43.136383 systemd-networkd[1517]: bond0: Gained IPv6LL Nov 8 01:18:43.193328 tar[1821]: linux-amd64/README.md Nov 8 01:18:43.204500 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Nov 8 01:18:43.229506 kernel: EXT4-fs (sda9): resized filesystem to 116605649 Nov 8 01:18:43.255761 extend-filesystems[1803]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Nov 8 01:18:43.255761 extend-filesystems[1803]: old_desc_blocks = 1, new_desc_blocks = 56 Nov 8 01:18:43.255761 extend-filesystems[1803]: The filesystem on /dev/sda9 is now 116605649 (4k) blocks long. Nov 8 01:18:43.297375 extend-filesystems[1793]: Resized filesystem in /dev/sda9 Nov 8 01:18:43.297375 extend-filesystems[1793]: Found sdb Nov 8 01:18:43.256256 systemd[1]: extend-filesystems.service: Deactivated successfully. Nov 8 01:18:43.256352 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Nov 8 01:18:43.329517 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. 
Nov 8 01:18:43.341901 systemd[1]: Reached target network-online.target - Network is Online. Nov 8 01:18:43.364434 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 01:18:43.375965 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Nov 8 01:18:43.395577 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Nov 8 01:18:44.164170 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 01:18:44.175941 (kubelet)[1919]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 8 01:18:44.552017 kernel: mlx5_core 0000:01:00.0: lag map: port 1:1 port 2:2 Nov 8 01:18:44.552170 kernel: mlx5_core 0000:01:00.0: shared_fdb:0 mode:queue_affinity Nov 8 01:18:44.655294 kernel: mlx5_core 0000:01:00.0: lag map: port 1:2 port 2:2 Nov 8 01:18:44.661603 kubelet[1919]: E1108 01:18:44.661585 1919 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 8 01:18:44.663015 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 8 01:18:44.663101 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 8 01:18:44.686348 kernel: mlx5_core 0000:01:00.0: lag map: port 1:1 port 2:2 Nov 8 01:18:45.164987 systemd-resolved[1745]: Clock change detected. Flushing caches. Nov 8 01:18:45.165201 systemd-timesyncd[1780]: Contacted time server 142.202.190.19:123 (0.flatcar.pool.ntp.org). Nov 8 01:18:45.165330 systemd-timesyncd[1780]: Initial clock synchronization to Sat 2025-11-08 01:18:45.164837 UTC. Nov 8 01:18:45.774231 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. 
Nov 8 01:18:45.789829 systemd[1]: Started sshd@0-139.178.94.41:22-139.178.68.195:53658.service - OpenSSH per-connection server daemon (139.178.68.195:53658). Nov 8 01:18:45.870675 sshd[1939]: Accepted publickey for core from 139.178.68.195 port 53658 ssh2: RSA SHA256:CDEH3Gh6VSwb5luG5uhujouIqwp740QGMGXihV+mnVQ Nov 8 01:18:45.872712 sshd[1939]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 01:18:45.878393 systemd-logind[1816]: New session 1 of user core. Nov 8 01:18:45.879536 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Nov 8 01:18:45.900798 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Nov 8 01:18:45.914746 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Nov 8 01:18:45.943835 systemd[1]: Starting user@500.service - User Manager for UID 500... Nov 8 01:18:45.954648 (systemd)[1943]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 8 01:18:46.033339 systemd[1943]: Queued start job for default target default.target. Nov 8 01:18:46.041150 systemd[1943]: Created slice app.slice - User Application Slice. Nov 8 01:18:46.041164 systemd[1943]: Reached target paths.target - Paths. Nov 8 01:18:46.041172 systemd[1943]: Reached target timers.target - Timers. Nov 8 01:18:46.041837 systemd[1943]: Starting dbus.socket - D-Bus User Message Bus Socket... Nov 8 01:18:46.047367 systemd[1943]: Listening on dbus.socket - D-Bus User Message Bus Socket. Nov 8 01:18:46.047396 systemd[1943]: Reached target sockets.target - Sockets. Nov 8 01:18:46.047406 systemd[1943]: Reached target basic.target - Basic System. Nov 8 01:18:46.047427 systemd[1943]: Reached target default.target - Main User Target. Nov 8 01:18:46.047444 systemd[1943]: Startup finished in 88ms. Nov 8 01:18:46.047550 systemd[1]: Started user@500.service - User Manager for UID 500. 
Nov 8 01:18:46.059531 systemd[1]: Started session-1.scope - Session 1 of User core. Nov 8 01:18:46.125279 systemd[1]: Started sshd@1-139.178.94.41:22-139.178.68.195:39258.service - OpenSSH per-connection server daemon (139.178.68.195:39258). Nov 8 01:18:46.162430 sshd[1955]: Accepted publickey for core from 139.178.68.195 port 39258 ssh2: RSA SHA256:CDEH3Gh6VSwb5luG5uhujouIqwp740QGMGXihV+mnVQ Nov 8 01:18:46.163131 sshd[1955]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 01:18:46.165728 systemd-logind[1816]: New session 2 of user core. Nov 8 01:18:46.178667 systemd[1]: Started session-2.scope - Session 2 of User core. Nov 8 01:18:46.213730 coreos-metadata[1787]: Nov 08 01:18:46.213 INFO Fetch successful Nov 8 01:18:46.235545 sshd[1955]: pam_unix(sshd:session): session closed for user core Nov 8 01:18:46.243053 systemd[1]: sshd@1-139.178.94.41:22-139.178.68.195:39258.service: Deactivated successfully. Nov 8 01:18:46.243862 systemd[1]: session-2.scope: Deactivated successfully. Nov 8 01:18:46.244464 systemd-logind[1816]: Session 2 logged out. Waiting for processes to exit. Nov 8 01:18:46.245146 systemd[1]: Started sshd@2-139.178.94.41:22-139.178.68.195:39268.service - OpenSSH per-connection server daemon (139.178.68.195:39268). Nov 8 01:18:46.258173 systemd-logind[1816]: Removed session 2. Nov 8 01:18:46.264672 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Nov 8 01:18:46.275907 systemd[1]: Starting packet-phone-home.service - Report Success to Packet... Nov 8 01:18:46.288620 sshd[1962]: Accepted publickey for core from 139.178.68.195 port 39268 ssh2: RSA SHA256:CDEH3Gh6VSwb5luG5uhujouIqwp740QGMGXihV+mnVQ Nov 8 01:18:46.289563 sshd[1962]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 01:18:46.292680 systemd-logind[1816]: New session 3 of user core. Nov 8 01:18:46.293662 systemd[1]: Started session-3.scope - Session 3 of User core. 
Nov 8 01:18:46.357647 sshd[1962]: pam_unix(sshd:session): session closed for user core Nov 8 01:18:46.359031 systemd[1]: sshd@2-139.178.94.41:22-139.178.68.195:39268.service: Deactivated successfully. Nov 8 01:18:46.359886 systemd[1]: session-3.scope: Deactivated successfully. Nov 8 01:18:46.360535 systemd-logind[1816]: Session 3 logged out. Waiting for processes to exit. Nov 8 01:18:46.361239 systemd-logind[1816]: Removed session 3. Nov 8 01:18:46.450679 coreos-metadata[1880]: Nov 08 01:18:46.450 INFO Fetch successful Nov 8 01:18:46.534376 unknown[1880]: wrote ssh authorized keys file for user: core Nov 8 01:18:46.556842 update-ssh-keys[1975]: Updated "/home/core/.ssh/authorized_keys" Nov 8 01:18:46.557110 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Nov 8 01:18:46.569519 systemd[1]: Finished sshkeys.service. Nov 8 01:18:46.666227 systemd[1]: Finished packet-phone-home.service - Report Success to Packet. Nov 8 01:18:46.678291 systemd[1]: Reached target multi-user.target - Multi-User System. Nov 8 01:18:46.687891 systemd[1]: Startup finished in 1.796s (kernel) + 25.161s (initrd) + 10.156s (userspace) = 37.114s. Nov 8 01:18:46.707858 login[1892]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Nov 8 01:18:46.711225 systemd-logind[1816]: New session 4 of user core. Nov 8 01:18:46.726578 systemd[1]: Started session-4.scope - Session 4 of User core. Nov 8 01:18:46.734233 login[1888]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Nov 8 01:18:46.737056 systemd-logind[1816]: New session 5 of user core. Nov 8 01:18:46.737948 systemd[1]: Started session-5.scope - Session 5 of User core. Nov 8 01:18:54.387999 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Nov 8 01:18:54.404760 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Nov 8 01:18:54.651648 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 01:18:54.654107 (kubelet)[2011]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 8 01:18:54.680949 kubelet[2011]: E1108 01:18:54.680869 2011 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 8 01:18:54.683316 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 8 01:18:54.683400 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 8 01:18:56.381765 systemd[1]: Started sshd@3-139.178.94.41:22-139.178.68.195:57260.service - OpenSSH per-connection server daemon (139.178.68.195:57260). Nov 8 01:18:56.406884 sshd[2031]: Accepted publickey for core from 139.178.68.195 port 57260 ssh2: RSA SHA256:CDEH3Gh6VSwb5luG5uhujouIqwp740QGMGXihV+mnVQ Nov 8 01:18:56.407782 sshd[2031]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 01:18:56.411318 systemd-logind[1816]: New session 6 of user core. Nov 8 01:18:56.422706 systemd[1]: Started session-6.scope - Session 6 of User core. Nov 8 01:18:56.477481 sshd[2031]: pam_unix(sshd:session): session closed for user core Nov 8 01:18:56.489058 systemd[1]: sshd@3-139.178.94.41:22-139.178.68.195:57260.service: Deactivated successfully. Nov 8 01:18:56.489830 systemd[1]: session-6.scope: Deactivated successfully. Nov 8 01:18:56.490513 systemd-logind[1816]: Session 6 logged out. Waiting for processes to exit. Nov 8 01:18:56.491206 systemd[1]: Started sshd@4-139.178.94.41:22-139.178.68.195:57264.service - OpenSSH per-connection server daemon (139.178.68.195:57264). 
Nov 8 01:18:56.491673 systemd-logind[1816]: Removed session 6. Nov 8 01:18:56.519420 sshd[2038]: Accepted publickey for core from 139.178.68.195 port 57264 ssh2: RSA SHA256:CDEH3Gh6VSwb5luG5uhujouIqwp740QGMGXihV+mnVQ Nov 8 01:18:56.522877 sshd[2038]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 01:18:56.534368 systemd-logind[1816]: New session 7 of user core. Nov 8 01:18:56.545939 systemd[1]: Started session-7.scope - Session 7 of User core. Nov 8 01:18:56.606389 sshd[2038]: pam_unix(sshd:session): session closed for user core Nov 8 01:18:56.624599 systemd[1]: sshd@4-139.178.94.41:22-139.178.68.195:57264.service: Deactivated successfully. Nov 8 01:18:56.628339 systemd[1]: session-7.scope: Deactivated successfully. Nov 8 01:18:56.631809 systemd-logind[1816]: Session 7 logged out. Waiting for processes to exit. Nov 8 01:18:56.657551 systemd[1]: Started sshd@5-139.178.94.41:22-139.178.68.195:57278.service - OpenSSH per-connection server daemon (139.178.68.195:57278). Nov 8 01:18:56.660406 systemd-logind[1816]: Removed session 7. Nov 8 01:18:56.709534 sshd[2045]: Accepted publickey for core from 139.178.68.195 port 57278 ssh2: RSA SHA256:CDEH3Gh6VSwb5luG5uhujouIqwp740QGMGXihV+mnVQ Nov 8 01:18:56.710182 sshd[2045]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 01:18:56.712731 systemd-logind[1816]: New session 8 of user core. Nov 8 01:18:56.719755 systemd[1]: Started session-8.scope - Session 8 of User core. Nov 8 01:18:56.772886 sshd[2045]: pam_unix(sshd:session): session closed for user core Nov 8 01:18:56.788375 systemd[1]: sshd@5-139.178.94.41:22-139.178.68.195:57278.service: Deactivated successfully. Nov 8 01:18:56.789342 systemd[1]: session-8.scope: Deactivated successfully. Nov 8 01:18:56.790048 systemd-logind[1816]: Session 8 logged out. Waiting for processes to exit. 
Nov 8 01:18:56.790621 systemd[1]: Started sshd@6-139.178.94.41:22-139.178.68.195:57284.service - OpenSSH per-connection server daemon (139.178.68.195:57284). Nov 8 01:18:56.791094 systemd-logind[1816]: Removed session 8. Nov 8 01:18:56.818038 sshd[2052]: Accepted publickey for core from 139.178.68.195 port 57284 ssh2: RSA SHA256:CDEH3Gh6VSwb5luG5uhujouIqwp740QGMGXihV+mnVQ Nov 8 01:18:56.818930 sshd[2052]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 01:18:56.822454 systemd-logind[1816]: New session 9 of user core. Nov 8 01:18:56.834780 systemd[1]: Started session-9.scope - Session 9 of User core. Nov 8 01:18:56.901227 sudo[2055]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Nov 8 01:18:56.901376 sudo[2055]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 8 01:18:56.926559 sudo[2055]: pam_unix(sudo:session): session closed for user root Nov 8 01:18:56.927730 sshd[2052]: pam_unix(sshd:session): session closed for user core Nov 8 01:18:56.942231 systemd[1]: sshd@6-139.178.94.41:22-139.178.68.195:57284.service: Deactivated successfully. Nov 8 01:18:56.943658 systemd[1]: session-9.scope: Deactivated successfully. Nov 8 01:18:56.944987 systemd-logind[1816]: Session 9 logged out. Waiting for processes to exit. Nov 8 01:18:56.946445 systemd[1]: Started sshd@7-139.178.94.41:22-139.178.68.195:57300.service - OpenSSH per-connection server daemon (139.178.68.195:57300). Nov 8 01:18:56.947578 systemd-logind[1816]: Removed session 9. Nov 8 01:18:56.998716 sshd[2060]: Accepted publickey for core from 139.178.68.195 port 57300 ssh2: RSA SHA256:CDEH3Gh6VSwb5luG5uhujouIqwp740QGMGXihV+mnVQ Nov 8 01:18:57.001110 sshd[2060]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 01:18:57.008557 systemd-logind[1816]: New session 10 of user core. Nov 8 01:18:57.022029 systemd[1]: Started session-10.scope - Session 10 of User core. 
Nov 8 01:18:57.081444 sudo[2064]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Nov 8 01:18:57.081610 sudo[2064]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 8 01:18:57.083670 sudo[2064]: pam_unix(sudo:session): session closed for user root Nov 8 01:18:57.086359 sudo[2063]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Nov 8 01:18:57.086581 sudo[2063]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 8 01:18:57.103814 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Nov 8 01:18:57.105035 auditctl[2067]: No rules Nov 8 01:18:57.105287 systemd[1]: audit-rules.service: Deactivated successfully. Nov 8 01:18:57.105422 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Nov 8 01:18:57.107146 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Nov 8 01:18:57.123544 augenrules[2085]: No rules Nov 8 01:18:57.123844 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Nov 8 01:18:57.124423 sudo[2063]: pam_unix(sudo:session): session closed for user root Nov 8 01:18:57.125522 sshd[2060]: pam_unix(sshd:session): session closed for user core Nov 8 01:18:57.134236 systemd[1]: sshd@7-139.178.94.41:22-139.178.68.195:57300.service: Deactivated successfully. Nov 8 01:18:57.135158 systemd[1]: session-10.scope: Deactivated successfully. Nov 8 01:18:57.136058 systemd-logind[1816]: Session 10 logged out. Waiting for processes to exit. Nov 8 01:18:57.136855 systemd[1]: Started sshd@8-139.178.94.41:22-139.178.68.195:57306.service - OpenSSH per-connection server daemon (139.178.68.195:57306). Nov 8 01:18:57.137390 systemd-logind[1816]: Removed session 10. 
Nov 8 01:18:57.168784 sshd[2093]: Accepted publickey for core from 139.178.68.195 port 57306 ssh2: RSA SHA256:CDEH3Gh6VSwb5luG5uhujouIqwp740QGMGXihV+mnVQ Nov 8 01:18:57.169860 sshd[2093]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 01:18:57.173837 systemd-logind[1816]: New session 11 of user core. Nov 8 01:18:57.190785 systemd[1]: Started session-11.scope - Session 11 of User core. Nov 8 01:18:57.243439 sudo[2096]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 8 01:18:57.243783 sudo[2096]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 8 01:18:57.529845 systemd[1]: Starting docker.service - Docker Application Container Engine... Nov 8 01:18:57.529902 (dockerd)[2122]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Nov 8 01:18:57.869295 dockerd[2122]: time="2025-11-08T01:18:57.869201214Z" level=info msg="Starting up" Nov 8 01:18:57.937566 dockerd[2122]: time="2025-11-08T01:18:57.937462797Z" level=info msg="Loading containers: start." Nov 8 01:18:58.032480 kernel: Initializing XFRM netlink socket Nov 8 01:18:58.088851 systemd-networkd[1517]: docker0: Link UP Nov 8 01:18:58.111861 dockerd[2122]: time="2025-11-08T01:18:58.111802834Z" level=info msg="Loading containers: done." 
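The dockerd lines above use logfmt-style `key=value` pairs with quoted values (`time="..." level=info msg="Starting up"`). Because the quoting follows shell conventions, Python's `shlex` can tokenize them — a sketch, not a full logfmt parser (no escaped quotes inside values):

```python
import shlex

def parse_logfmt(line):
    """Parse a dockerd-style 'key=value key="quoted value"' line into a dict."""
    out = {}
    for token in shlex.split(line):
        key, _, value = token.partition("=")
        out[key] = value
    return out

rec = parse_logfmt('time="2025-11-08T01:18:57.869201214Z" level=info msg="Starting up"')
print(rec["level"], "-", rec["msg"])  # -> info - Starting up
```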
Nov 8 01:18:58.120719 dockerd[2122]: time="2025-11-08T01:18:58.120678168Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 8 01:18:58.120776 dockerd[2122]: time="2025-11-08T01:18:58.120724112Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Nov 8 01:18:58.120795 dockerd[2122]: time="2025-11-08T01:18:58.120776684Z" level=info msg="Daemon has completed initialization" Nov 8 01:18:58.120801 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck113178771-merged.mount: Deactivated successfully. Nov 8 01:18:58.134403 dockerd[2122]: time="2025-11-08T01:18:58.134350994Z" level=info msg="API listen on /run/docker.sock" Nov 8 01:18:58.134496 systemd[1]: Started docker.service - Docker Application Container Engine. Nov 8 01:18:58.929693 containerd[1823]: time="2025-11-08T01:18:58.929644033Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\"" Nov 8 01:18:59.514124 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1117680357.mount: Deactivated successfully. 
Nov 8 01:19:00.867903 containerd[1823]: time="2025-11-08T01:19:00.867853411Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 01:19:00.868114 containerd[1823]: time="2025-11-08T01:19:00.868058346Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.9: active requests=0, bytes read=28837916" Nov 8 01:19:00.868484 containerd[1823]: time="2025-11-08T01:19:00.868445186Z" level=info msg="ImageCreate event name:\"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 01:19:00.870058 containerd[1823]: time="2025-11-08T01:19:00.870019938Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 01:19:00.870689 containerd[1823]: time="2025-11-08T01:19:00.870647247Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.9\" with image id \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\", size \"28834515\" in 1.94098063s" Nov 8 01:19:00.870689 containerd[1823]: time="2025-11-08T01:19:00.870666375Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\" returns image reference \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\"" Nov 8 01:19:00.871023 containerd[1823]: time="2025-11-08T01:19:00.871011030Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\"" Nov 8 01:19:02.380122 containerd[1823]: time="2025-11-08T01:19:02.380072676Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.9\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 01:19:02.380358 containerd[1823]: time="2025-11-08T01:19:02.380216710Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.9: active requests=0, bytes read=24787027" Nov 8 01:19:02.380693 containerd[1823]: time="2025-11-08T01:19:02.380679469Z" level=info msg="ImageCreate event name:\"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 01:19:02.382290 containerd[1823]: time="2025-11-08T01:19:02.382276047Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 01:19:02.382949 containerd[1823]: time="2025-11-08T01:19:02.382934292Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.9\" with image id \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\", size \"26421706\" in 1.511907587s" Nov 8 01:19:02.382983 containerd[1823]: time="2025-11-08T01:19:02.382952465Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\" returns image reference \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\"" Nov 8 01:19:02.383197 containerd[1823]: time="2025-11-08T01:19:02.383187132Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\"" Nov 8 01:19:03.500158 containerd[1823]: time="2025-11-08T01:19:03.500104492Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 01:19:03.500373 containerd[1823]: time="2025-11-08T01:19:03.500301241Z" level=info 
msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.9: active requests=0, bytes read=19176289" Nov 8 01:19:03.500779 containerd[1823]: time="2025-11-08T01:19:03.500744980Z" level=info msg="ImageCreate event name:\"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 01:19:03.502375 containerd[1823]: time="2025-11-08T01:19:03.502341089Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 01:19:03.503501 containerd[1823]: time="2025-11-08T01:19:03.503457944Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.9\" with image id \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\", size \"20810986\" in 1.120255539s" Nov 8 01:19:03.503501 containerd[1823]: time="2025-11-08T01:19:03.503477416Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\" returns image reference \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\"" Nov 8 01:19:03.503752 containerd[1823]: time="2025-11-08T01:19:03.503739675Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\"" Nov 8 01:19:04.566950 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2035264569.mount: Deactivated successfully. 
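Each `Pulled image` entry above reports a byte size and a wall-clock duration (sometimes in `ms`, sometimes in `s`), which is enough to estimate registry throughput per pull. A hedged sketch, with quotes shown unescaped for readability (the journal escapes them as `\"`):

```python
import re

def pull_stats(msg):
    """Extract (size_bytes, seconds) from a containerd 'Pulled image ... in <t>' message."""
    m = re.search(r'size "(\d+)" in ([0-9.]+)(ms|s)\b', msg)
    size = int(m.group(1))
    seconds = float(m.group(2)) * (0.001 if m.group(3) == "ms" else 1.0)
    return size, seconds

msg = ('Pulled image "registry.k8s.io/kube-apiserver:v1.32.9" '
       'with size "28834515" in 1.94098063s')
size, secs = pull_stats(msg)
print(f"{size / secs / 1e6:.1f} MB/s")  # ~14.9 MB/s for the apiserver pull above
```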
Nov 8 01:19:04.759713 containerd[1823]: time="2025-11-08T01:19:04.759687067Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 01:19:04.759973 containerd[1823]: time="2025-11-08T01:19:04.759923374Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.9: active requests=0, bytes read=30924206" Nov 8 01:19:04.760272 containerd[1823]: time="2025-11-08T01:19:04.760255236Z" level=info msg="ImageCreate event name:\"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 01:19:04.761208 containerd[1823]: time="2025-11-08T01:19:04.761195902Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 01:19:04.761878 containerd[1823]: time="2025-11-08T01:19:04.761862494Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.9\" with image id \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\", repo tag \"registry.k8s.io/kube-proxy:v1.32.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\", size \"30923225\" in 1.258105545s" Nov 8 01:19:04.761925 containerd[1823]: time="2025-11-08T01:19:04.761882097Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\" returns image reference \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\"" Nov 8 01:19:04.762182 containerd[1823]: time="2025-11-08T01:19:04.762170412Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Nov 8 01:19:04.933895 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Nov 8 01:19:04.944750 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
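The "Scheduled restart job, restart counter is at 2" entry above is systemd's restart machinery re-queuing the failing kubelet unit. Restarts are only allowed while the unit stays under its start-rate limit; a minimal sketch of that check, assuming the stock defaults (StartLimitIntervalSec=10s, StartLimitBurst=5) rather than this unit's actual settings:

```python
def start_allowed(start_times, now, interval=10.0, burst=5):
    """systemd-style start rate limiting: refuse a (re)start once `burst`
    starts have already happened within the last `interval` seconds."""
    recent = [t for t in start_times if now - t < interval]
    return len(recent) < burst

# Five starts in the last five seconds exhausts the default burst of 5.
print(start_allowed([0.0, 1.0, 2.0, 3.0, 4.0], now=5.0))  # -> False
```

Once the limit is hit, systemd marks the unit failed with `start-limit-hit` instead of restarting again; that never happens in this log because the restarts are spaced out.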
Nov 8 01:19:05.178330 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 01:19:05.180559 (kubelet)[2375]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 8 01:19:05.199196 kubelet[2375]: E1108 01:19:05.199103 2375 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 8 01:19:05.200243 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 8 01:19:05.200325 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 8 01:19:05.396227 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1108694152.mount: Deactivated successfully. Nov 8 01:19:05.926131 containerd[1823]: time="2025-11-08T01:19:05.926075704Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 01:19:05.926348 containerd[1823]: time="2025-11-08T01:19:05.926308732Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Nov 8 01:19:05.926765 containerd[1823]: time="2025-11-08T01:19:05.926723448Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 01:19:05.928382 containerd[1823]: time="2025-11-08T01:19:05.928338385Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 01:19:05.929015 containerd[1823]: time="2025-11-08T01:19:05.928973797Z" 
level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.16678754s" Nov 8 01:19:05.929015 containerd[1823]: time="2025-11-08T01:19:05.928989638Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Nov 8 01:19:05.929238 containerd[1823]: time="2025-11-08T01:19:05.929225901Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Nov 8 01:19:06.417320 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1883905753.mount: Deactivated successfully. Nov 8 01:19:06.418677 containerd[1823]: time="2025-11-08T01:19:06.418632824Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 01:19:06.418862 containerd[1823]: time="2025-11-08T01:19:06.418815992Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Nov 8 01:19:06.419222 containerd[1823]: time="2025-11-08T01:19:06.419175549Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 01:19:06.420232 containerd[1823]: time="2025-11-08T01:19:06.420192457Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 01:19:06.420733 containerd[1823]: time="2025-11-08T01:19:06.420692716Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id 
\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 491.451624ms" Nov 8 01:19:06.420733 containerd[1823]: time="2025-11-08T01:19:06.420707873Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Nov 8 01:19:06.421113 containerd[1823]: time="2025-11-08T01:19:06.421084996Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Nov 8 01:19:06.854311 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount408593884.mount: Deactivated successfully. Nov 8 01:19:07.966315 containerd[1823]: time="2025-11-08T01:19:07.966244506Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 01:19:07.966612 containerd[1823]: time="2025-11-08T01:19:07.966424257Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57682056" Nov 8 01:19:07.966945 containerd[1823]: time="2025-11-08T01:19:07.966933628Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 01:19:07.969012 containerd[1823]: time="2025-11-08T01:19:07.968974650Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 01:19:07.969625 containerd[1823]: time="2025-11-08T01:19:07.969608913Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest 
\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 1.548509463s" Nov 8 01:19:07.969667 containerd[1823]: time="2025-11-08T01:19:07.969626496Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Nov 8 01:19:09.807513 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 01:19:09.821863 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 01:19:09.837816 systemd[1]: Reloading requested from client PID 2549 ('systemctl') (unit session-11.scope)... Nov 8 01:19:09.837824 systemd[1]: Reloading... Nov 8 01:19:09.903544 zram_generator::config[2588]: No configuration found. Nov 8 01:19:09.968984 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 8 01:19:10.030120 systemd[1]: Reloading finished in 192 ms. Nov 8 01:19:10.076000 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 01:19:10.077226 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 01:19:10.078280 systemd[1]: kubelet.service: Deactivated successfully. Nov 8 01:19:10.078384 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 01:19:10.079244 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 01:19:10.312342 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 01:19:10.315112 (kubelet)[2658]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 8 01:19:10.337352 kubelet[2658]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 8 01:19:10.337352 kubelet[2658]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 8 01:19:10.337352 kubelet[2658]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 8 01:19:10.337672 kubelet[2658]: I1108 01:19:10.337345 2658 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 8 01:19:10.541305 kubelet[2658]: I1108 01:19:10.541285 2658 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Nov 8 01:19:10.541305 kubelet[2658]: I1108 01:19:10.541301 2658 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 8 01:19:10.541450 kubelet[2658]: I1108 01:19:10.541445 2658 server.go:954] "Client rotation is on, will bootstrap in background" Nov 8 01:19:10.564921 kubelet[2658]: E1108 01:19:10.564880 2658 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://139.178.94.41:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 139.178.94.41:6443: connect: connection refused" logger="UnhandledError" Nov 8 01:19:10.565600 kubelet[2658]: I1108 01:19:10.565520 2658 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 8 01:19:10.569614 kubelet[2658]: E1108 01:19:10.569572 2658 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service 
runtime.v1.RuntimeService" Nov 8 01:19:10.569614 kubelet[2658]: I1108 01:19:10.569586 2658 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Nov 8 01:19:10.578177 kubelet[2658]: I1108 01:19:10.578140 2658 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Nov 8 01:19:10.579188 kubelet[2658]: I1108 01:19:10.579146 2658 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 8 01:19:10.579277 kubelet[2658]: I1108 01:19:10.579161 2658 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.6-n-8acfe54808","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyM
anagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 8 01:19:10.579277 kubelet[2658]: I1108 01:19:10.579256 2658 topology_manager.go:138] "Creating topology manager with none policy" Nov 8 01:19:10.579277 kubelet[2658]: I1108 01:19:10.579262 2658 container_manager_linux.go:304] "Creating device plugin manager" Nov 8 01:19:10.579366 kubelet[2658]: I1108 01:19:10.579324 2658 state_mem.go:36] "Initialized new in-memory state store" Nov 8 01:19:10.582534 kubelet[2658]: I1108 01:19:10.582489 2658 kubelet.go:446] "Attempting to sync node with API server" Nov 8 01:19:10.582579 kubelet[2658]: I1108 01:19:10.582539 2658 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 8 01:19:10.582579 kubelet[2658]: I1108 01:19:10.582567 2658 kubelet.go:352] "Adding apiserver pod source" Nov 8 01:19:10.582579 kubelet[2658]: I1108 01:19:10.582574 2658 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 8 01:19:10.584199 kubelet[2658]: W1108 01:19:10.584154 2658 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://139.178.94.41:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 139.178.94.41:6443: connect: connection refused Nov 8 01:19:10.584199 kubelet[2658]: W1108 01:19:10.584159 2658 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://139.178.94.41:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.6-n-8acfe54808&limit=500&resourceVersion=0": dial tcp 139.178.94.41:6443: connect: connection refused Nov 8 01:19:10.584262 kubelet[2658]: E1108 01:19:10.584199 2658 reflector.go:166] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://139.178.94.41:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 139.178.94.41:6443: connect: connection refused" logger="UnhandledError" Nov 8 01:19:10.584262 kubelet[2658]: E1108 01:19:10.584223 2658 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://139.178.94.41:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.6-n-8acfe54808&limit=500&resourceVersion=0\": dial tcp 139.178.94.41:6443: connect: connection refused" logger="UnhandledError" Nov 8 01:19:10.585066 kubelet[2658]: I1108 01:19:10.585055 2658 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Nov 8 01:19:10.585365 kubelet[2658]: I1108 01:19:10.585334 2658 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 8 01:19:10.587124 kubelet[2658]: W1108 01:19:10.587092 2658 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
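The container-manager dump a few entries back encodes each hard eviction threshold as either an absolute `Quantity` (e.g. memory.available < 100Mi) or a `Percentage` of capacity (e.g. nodefs.available < 10%). A hedged sketch of how one such signal evaluates — simplified; the real logic lives in the kubelet's eviction manager:

```python
def threshold_exceeded(available, capacity, quantity=None, percentage=None):
    """Evaluate one hard-eviction signal: trip when `available` falls below
    either an absolute byte quantity or a fraction of `capacity`."""
    limit = quantity if quantity is not None else capacity * percentage
    return available < limit

# memory.available < 100Mi with 8 GiB free of 64 GiB: no eviction pressure
print(threshold_exceeded(8 << 30, 64 << 30, quantity=100 << 20))  # -> False
# nodefs.available < 10% with 5 GiB free of 100 GiB: eviction pressure
print(threshold_exceeded(5 << 30, 100 << 30, percentage=0.10))    # -> True
```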
Nov 8 01:19:10.600327 kubelet[2658]: I1108 01:19:10.600138 2658 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 8 01:19:10.600327 kubelet[2658]: I1108 01:19:10.600221 2658 server.go:1287] "Started kubelet" Nov 8 01:19:10.600709 kubelet[2658]: I1108 01:19:10.600360 2658 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Nov 8 01:19:10.603709 kubelet[2658]: I1108 01:19:10.603631 2658 server.go:479] "Adding debug handlers to kubelet server" Nov 8 01:19:10.614620 kubelet[2658]: I1108 01:19:10.614571 2658 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 8 01:19:10.614682 kubelet[2658]: I1108 01:19:10.614638 2658 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 8 01:19:10.614799 kubelet[2658]: E1108 01:19:10.614741 2658 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-8acfe54808\" not found" Nov 8 01:19:10.615010 kubelet[2658]: I1108 01:19:10.614989 2658 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 8 01:19:10.615088 kubelet[2658]: I1108 01:19:10.615035 2658 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 8 01:19:10.615088 kubelet[2658]: I1108 01:19:10.615085 2658 reconciler.go:26] "Reconciler: start to sync state" Nov 8 01:19:10.620558 kubelet[2658]: I1108 01:19:10.620345 2658 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 8 01:19:10.621090 kubelet[2658]: E1108 01:19:10.620868 2658 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.94.41:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-8acfe54808?timeout=10s\": dial tcp 139.178.94.41:6443: connect: connection refused" interval="200ms" Nov 8 01:19:10.621090 kubelet[2658]: I1108 01:19:10.621082 2658 server.go:243] "Starting to serve the podresources 
API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 8 01:19:10.621377 kubelet[2658]: W1108 01:19:10.621307 2658 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://139.178.94.41:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.94.41:6443: connect: connection refused Nov 8 01:19:10.621466 kubelet[2658]: E1108 01:19:10.621390 2658 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://139.178.94.41:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 139.178.94.41:6443: connect: connection refused" logger="UnhandledError" Nov 8 01:19:10.621466 kubelet[2658]: I1108 01:19:10.621419 2658 factory.go:221] Registration of the systemd container factory successfully Nov 8 01:19:10.621613 kubelet[2658]: I1108 01:19:10.621559 2658 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 8 01:19:10.623340 kubelet[2658]: I1108 01:19:10.623315 2658 factory.go:221] Registration of the containerd container factory successfully Nov 8 01:19:10.625078 kubelet[2658]: E1108 01:19:10.622653 2658 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://139.178.94.41:6443/api/v1/namespaces/default/events\": dial tcp 139.178.94.41:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.6-n-8acfe54808.1875e34d5d25f2f1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.6-n-8acfe54808,UID:ci-4081.3.6-n-8acfe54808,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.6-n-8acfe54808,},FirstTimestamp:2025-11-08 01:19:10.600172273 
+0000 UTC m=+0.282851185,LastTimestamp:2025-11-08 01:19:10.600172273 +0000 UTC m=+0.282851185,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.6-n-8acfe54808,}" Nov 8 01:19:10.625480 kubelet[2658]: E1108 01:19:10.625455 2658 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 8 01:19:10.636858 kubelet[2658]: I1108 01:19:10.636832 2658 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 8 01:19:10.637727 kubelet[2658]: I1108 01:19:10.637682 2658 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Nov 8 01:19:10.637727 kubelet[2658]: I1108 01:19:10.637700 2658 status_manager.go:227] "Starting to sync pod status with apiserver" Nov 8 01:19:10.637727 kubelet[2658]: I1108 01:19:10.637715 2658 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Nov 8 01:19:10.637727 kubelet[2658]: I1108 01:19:10.637723 2658 kubelet.go:2382] "Starting kubelet main sync loop" Nov 8 01:19:10.637871 kubelet[2658]: E1108 01:19:10.637762 2658 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 8 01:19:10.638133 kubelet[2658]: W1108 01:19:10.638091 2658 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://139.178.94.41:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.94.41:6443: connect: connection refused Nov 8 01:19:10.638133 kubelet[2658]: E1108 01:19:10.638127 2658 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://139.178.94.41:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 139.178.94.41:6443: connect: connection refused" logger="UnhandledError" Nov 8 01:19:10.665436 kubelet[2658]: I1108 01:19:10.665367 2658 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 8 01:19:10.665436 kubelet[2658]: I1108 01:19:10.665400 2658 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 8 01:19:10.665436 kubelet[2658]: I1108 01:19:10.665438 2658 state_mem.go:36] "Initialized new in-memory state store" Nov 8 01:19:10.667443 kubelet[2658]: I1108 01:19:10.667406 2658 policy_none.go:49] "None policy: Start" Nov 8 01:19:10.667443 kubelet[2658]: I1108 01:19:10.667424 2658 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 8 01:19:10.667443 kubelet[2658]: I1108 01:19:10.667436 2658 state_mem.go:35] "Initializing new in-memory state store" Nov 8 01:19:10.670208 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Nov 8 01:19:10.690109 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
Nov 8 01:19:10.691671 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Nov 8 01:19:10.704118 kubelet[2658]: I1108 01:19:10.704081 2658 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 8 01:19:10.704217 kubelet[2658]: I1108 01:19:10.704169 2658 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 8 01:19:10.704217 kubelet[2658]: I1108 01:19:10.704176 2658 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 8 01:19:10.704269 kubelet[2658]: I1108 01:19:10.704263 2658 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 8 01:19:10.705313 kubelet[2658]: E1108 01:19:10.705256 2658 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Nov 8 01:19:10.705527 kubelet[2658]: E1108 01:19:10.705346 2658 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.6-n-8acfe54808\" not found" Nov 8 01:19:10.759099 systemd[1]: Created slice kubepods-burstable-pod8ed860fee0c2cfb053318ecd34d773bf.slice - libcontainer container kubepods-burstable-pod8ed860fee0c2cfb053318ecd34d773bf.slice. Nov 8 01:19:10.788377 kubelet[2658]: E1108 01:19:10.788269 2658 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-8acfe54808\" not found" node="ci-4081.3.6-n-8acfe54808" Nov 8 01:19:10.796052 systemd[1]: Created slice kubepods-burstable-pod817c7a33eecb9286ced884c8fff83be2.slice - libcontainer container kubepods-burstable-pod817c7a33eecb9286ced884c8fff83be2.slice. 
Nov 8 01:19:10.810661 kubelet[2658]: E1108 01:19:10.809137 2658 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-8acfe54808\" not found" node="ci-4081.3.6-n-8acfe54808" Nov 8 01:19:10.811949 kubelet[2658]: I1108 01:19:10.811869 2658 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-8acfe54808" Nov 8 01:19:10.812739 kubelet[2658]: E1108 01:19:10.812627 2658 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://139.178.94.41:6443/api/v1/nodes\": dial tcp 139.178.94.41:6443: connect: connection refused" node="ci-4081.3.6-n-8acfe54808" Nov 8 01:19:10.817377 systemd[1]: Created slice kubepods-burstable-podddcbb5510ef164eb94d20503edec2033.slice - libcontainer container kubepods-burstable-podddcbb5510ef164eb94d20503edec2033.slice. Nov 8 01:19:10.821666 kubelet[2658]: E1108 01:19:10.821578 2658 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-8acfe54808\" not found" node="ci-4081.3.6-n-8acfe54808" Nov 8 01:19:10.822092 kubelet[2658]: E1108 01:19:10.821990 2658 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.94.41:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-8acfe54808?timeout=10s\": dial tcp 139.178.94.41:6443: connect: connection refused" interval="400ms" Nov 8 01:19:10.916795 kubelet[2658]: I1108 01:19:10.916704 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ddcbb5510ef164eb94d20503edec2033-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-8acfe54808\" (UID: \"ddcbb5510ef164eb94d20503edec2033\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-8acfe54808" Nov 8 01:19:10.916795 kubelet[2658]: I1108 01:19:10.916811 2658 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ddcbb5510ef164eb94d20503edec2033-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.6-n-8acfe54808\" (UID: \"ddcbb5510ef164eb94d20503edec2033\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-8acfe54808" Nov 8 01:19:10.917213 kubelet[2658]: I1108 01:19:10.916932 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ddcbb5510ef164eb94d20503edec2033-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.6-n-8acfe54808\" (UID: \"ddcbb5510ef164eb94d20503edec2033\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-8acfe54808" Nov 8 01:19:10.917213 kubelet[2658]: I1108 01:19:10.917018 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/817c7a33eecb9286ced884c8fff83be2-ca-certs\") pod \"kube-apiserver-ci-4081.3.6-n-8acfe54808\" (UID: \"817c7a33eecb9286ced884c8fff83be2\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-8acfe54808" Nov 8 01:19:10.917213 kubelet[2658]: I1108 01:19:10.917076 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/817c7a33eecb9286ced884c8fff83be2-k8s-certs\") pod \"kube-apiserver-ci-4081.3.6-n-8acfe54808\" (UID: \"817c7a33eecb9286ced884c8fff83be2\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-8acfe54808" Nov 8 01:19:10.917213 kubelet[2658]: I1108 01:19:10.917126 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ddcbb5510ef164eb94d20503edec2033-ca-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-8acfe54808\" (UID: \"ddcbb5510ef164eb94d20503edec2033\") " 
pod="kube-system/kube-controller-manager-ci-4081.3.6-n-8acfe54808" Nov 8 01:19:10.917213 kubelet[2658]: I1108 01:19:10.917191 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ddcbb5510ef164eb94d20503edec2033-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.6-n-8acfe54808\" (UID: \"ddcbb5510ef164eb94d20503edec2033\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-8acfe54808" Nov 8 01:19:10.917787 kubelet[2658]: I1108 01:19:10.917242 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8ed860fee0c2cfb053318ecd34d773bf-kubeconfig\") pod \"kube-scheduler-ci-4081.3.6-n-8acfe54808\" (UID: \"8ed860fee0c2cfb053318ecd34d773bf\") " pod="kube-system/kube-scheduler-ci-4081.3.6-n-8acfe54808" Nov 8 01:19:10.917787 kubelet[2658]: I1108 01:19:10.917292 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/817c7a33eecb9286ced884c8fff83be2-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.6-n-8acfe54808\" (UID: \"817c7a33eecb9286ced884c8fff83be2\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-8acfe54808" Nov 8 01:19:11.017776 kubelet[2658]: I1108 01:19:11.017712 2658 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-8acfe54808" Nov 8 01:19:11.018704 kubelet[2658]: E1108 01:19:11.018627 2658 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://139.178.94.41:6443/api/v1/nodes\": dial tcp 139.178.94.41:6443: connect: connection refused" node="ci-4081.3.6-n-8acfe54808" Nov 8 01:19:11.090674 containerd[1823]: time="2025-11-08T01:19:11.090615178Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.6-n-8acfe54808,Uid:8ed860fee0c2cfb053318ecd34d773bf,Namespace:kube-system,Attempt:0,}" Nov 8 01:19:11.114039 containerd[1823]: time="2025-11-08T01:19:11.113984886Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.6-n-8acfe54808,Uid:817c7a33eecb9286ced884c8fff83be2,Namespace:kube-system,Attempt:0,}" Nov 8 01:19:11.123134 containerd[1823]: time="2025-11-08T01:19:11.123120335Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.6-n-8acfe54808,Uid:ddcbb5510ef164eb94d20503edec2033,Namespace:kube-system,Attempt:0,}" Nov 8 01:19:11.223420 kubelet[2658]: E1108 01:19:11.223224 2658 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.94.41:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-8acfe54808?timeout=10s\": dial tcp 139.178.94.41:6443: connect: connection refused" interval="800ms" Nov 8 01:19:11.423761 kubelet[2658]: I1108 01:19:11.423690 2658 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-8acfe54808" Nov 8 01:19:11.424644 kubelet[2658]: E1108 01:19:11.424459 2658 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://139.178.94.41:6443/api/v1/nodes\": dial tcp 139.178.94.41:6443: connect: connection refused" node="ci-4081.3.6-n-8acfe54808" Nov 8 01:19:11.548259 kubelet[2658]: W1108 01:19:11.548168 2658 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://139.178.94.41:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.94.41:6443: connect: connection refused Nov 8 01:19:11.548259 kubelet[2658]: E1108 01:19:11.548211 2658 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get 
\"https://139.178.94.41:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 139.178.94.41:6443: connect: connection refused" logger="UnhandledError" Nov 8 01:19:11.636361 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount293698715.mount: Deactivated successfully. Nov 8 01:19:11.638191 containerd[1823]: time="2025-11-08T01:19:11.638145762Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 8 01:19:11.638368 containerd[1823]: time="2025-11-08T01:19:11.638319381Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Nov 8 01:19:11.638867 containerd[1823]: time="2025-11-08T01:19:11.638832092Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 8 01:19:11.639339 containerd[1823]: time="2025-11-08T01:19:11.639295676Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Nov 8 01:19:11.639386 containerd[1823]: time="2025-11-08T01:19:11.639376092Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 8 01:19:11.639831 containerd[1823]: time="2025-11-08T01:19:11.639792183Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Nov 8 01:19:11.639865 containerd[1823]: time="2025-11-08T01:19:11.639808146Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 8 01:19:11.641840 containerd[1823]: 
time="2025-11-08T01:19:11.641799641Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 8 01:19:11.642677 containerd[1823]: time="2025-11-08T01:19:11.642635692Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 528.589385ms" Nov 8 01:19:11.643014 containerd[1823]: time="2025-11-08T01:19:11.642971997Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 552.288169ms" Nov 8 01:19:11.644505 containerd[1823]: time="2025-11-08T01:19:11.644446214Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 521.286421ms" Nov 8 01:19:11.690375 kubelet[2658]: W1108 01:19:11.690335 2658 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://139.178.94.41:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.94.41:6443: connect: connection refused Nov 8 01:19:11.690375 kubelet[2658]: E1108 01:19:11.690377 2658 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to 
watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://139.178.94.41:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 139.178.94.41:6443: connect: connection refused" logger="UnhandledError" Nov 8 01:19:11.741031 containerd[1823]: time="2025-11-08T01:19:11.740979367Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 01:19:11.741031 containerd[1823]: time="2025-11-08T01:19:11.741015660Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 01:19:11.741178 containerd[1823]: time="2025-11-08T01:19:11.741151178Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 01:19:11.741214 containerd[1823]: time="2025-11-08T01:19:11.741177058Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 01:19:11.741214 containerd[1823]: time="2025-11-08T01:19:11.741184800Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 01:19:11.741248 containerd[1823]: time="2025-11-08T01:19:11.741229423Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 01:19:11.741248 containerd[1823]: time="2025-11-08T01:19:11.741226019Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 01:19:11.741291 containerd[1823]: time="2025-11-08T01:19:11.741272955Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 01:19:11.741855 containerd[1823]: time="2025-11-08T01:19:11.741821671Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 01:19:11.741855 containerd[1823]: time="2025-11-08T01:19:11.741846738Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 01:19:11.741897 containerd[1823]: time="2025-11-08T01:19:11.741857455Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 01:19:11.741920 containerd[1823]: time="2025-11-08T01:19:11.741899176Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 01:19:11.762731 systemd[1]: Started cri-containerd-29c449b5f45dae4bfb235a30d02477cff742c3d85f3aa4fe5f572dc3e5815b8f.scope - libcontainer container 29c449b5f45dae4bfb235a30d02477cff742c3d85f3aa4fe5f572dc3e5815b8f. Nov 8 01:19:11.763614 systemd[1]: Started cri-containerd-40b0b0d5fbb6821b13c1b1f690b77178320e56f373d89dd179956605fa862466.scope - libcontainer container 40b0b0d5fbb6821b13c1b1f690b77178320e56f373d89dd179956605fa862466. Nov 8 01:19:11.764430 systemd[1]: Started cri-containerd-9a5c3c27610b658bd032a272400416758770a6aedc67abd70bd8b6527a5ff420.scope - libcontainer container 9a5c3c27610b658bd032a272400416758770a6aedc67abd70bd8b6527a5ff420. 
Nov 8 01:19:11.789156 containerd[1823]: time="2025-11-08T01:19:11.789125425Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.6-n-8acfe54808,Uid:8ed860fee0c2cfb053318ecd34d773bf,Namespace:kube-system,Attempt:0,} returns sandbox id \"29c449b5f45dae4bfb235a30d02477cff742c3d85f3aa4fe5f572dc3e5815b8f\"" Nov 8 01:19:11.789667 containerd[1823]: time="2025-11-08T01:19:11.789645693Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.6-n-8acfe54808,Uid:817c7a33eecb9286ced884c8fff83be2,Namespace:kube-system,Attempt:0,} returns sandbox id \"40b0b0d5fbb6821b13c1b1f690b77178320e56f373d89dd179956605fa862466\"" Nov 8 01:19:11.790725 containerd[1823]: time="2025-11-08T01:19:11.790704961Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.6-n-8acfe54808,Uid:ddcbb5510ef164eb94d20503edec2033,Namespace:kube-system,Attempt:0,} returns sandbox id \"9a5c3c27610b658bd032a272400416758770a6aedc67abd70bd8b6527a5ff420\"" Nov 8 01:19:11.790918 containerd[1823]: time="2025-11-08T01:19:11.790899958Z" level=info msg="CreateContainer within sandbox \"40b0b0d5fbb6821b13c1b1f690b77178320e56f373d89dd179956605fa862466\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 8 01:19:11.790971 containerd[1823]: time="2025-11-08T01:19:11.790946538Z" level=info msg="CreateContainer within sandbox \"29c449b5f45dae4bfb235a30d02477cff742c3d85f3aa4fe5f572dc3e5815b8f\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 8 01:19:11.791720 containerd[1823]: time="2025-11-08T01:19:11.791705504Z" level=info msg="CreateContainer within sandbox \"9a5c3c27610b658bd032a272400416758770a6aedc67abd70bd8b6527a5ff420\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 8 01:19:11.803698 containerd[1823]: time="2025-11-08T01:19:11.803631169Z" level=info msg="CreateContainer within sandbox 
\"29c449b5f45dae4bfb235a30d02477cff742c3d85f3aa4fe5f572dc3e5815b8f\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"37543871212c851547673b67e6d2efefced09336cca80bd50c4ec2b46c6c6417\"" Nov 8 01:19:11.803950 containerd[1823]: time="2025-11-08T01:19:11.803907105Z" level=info msg="StartContainer for \"37543871212c851547673b67e6d2efefced09336cca80bd50c4ec2b46c6c6417\"" Nov 8 01:19:11.804803 containerd[1823]: time="2025-11-08T01:19:11.804761657Z" level=info msg="CreateContainer within sandbox \"40b0b0d5fbb6821b13c1b1f690b77178320e56f373d89dd179956605fa862466\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"c2b83e786926b9a8909377016f5665d20ec5085079ff6156e05b1f0d3caf45ca\"" Nov 8 01:19:11.804925 containerd[1823]: time="2025-11-08T01:19:11.804911994Z" level=info msg="StartContainer for \"c2b83e786926b9a8909377016f5665d20ec5085079ff6156e05b1f0d3caf45ca\"" Nov 8 01:19:11.808024 containerd[1823]: time="2025-11-08T01:19:11.808004560Z" level=info msg="CreateContainer within sandbox \"9a5c3c27610b658bd032a272400416758770a6aedc67abd70bd8b6527a5ff420\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"c4edd4ed5ca4f130a96cdbd215b65c8f6d7afb83fc342629f8b9416936a14e21\"" Nov 8 01:19:11.808282 containerd[1823]: time="2025-11-08T01:19:11.808268766Z" level=info msg="StartContainer for \"c4edd4ed5ca4f130a96cdbd215b65c8f6d7afb83fc342629f8b9416936a14e21\"" Nov 8 01:19:11.825820 systemd[1]: Started cri-containerd-37543871212c851547673b67e6d2efefced09336cca80bd50c4ec2b46c6c6417.scope - libcontainer container 37543871212c851547673b67e6d2efefced09336cca80bd50c4ec2b46c6c6417. Nov 8 01:19:11.826428 systemd[1]: Started cri-containerd-c2b83e786926b9a8909377016f5665d20ec5085079ff6156e05b1f0d3caf45ca.scope - libcontainer container c2b83e786926b9a8909377016f5665d20ec5085079ff6156e05b1f0d3caf45ca. 
Nov 8 01:19:11.827960 systemd[1]: Started cri-containerd-c4edd4ed5ca4f130a96cdbd215b65c8f6d7afb83fc342629f8b9416936a14e21.scope - libcontainer container c4edd4ed5ca4f130a96cdbd215b65c8f6d7afb83fc342629f8b9416936a14e21. Nov 8 01:19:11.849824 containerd[1823]: time="2025-11-08T01:19:11.849799725Z" level=info msg="StartContainer for \"c2b83e786926b9a8909377016f5665d20ec5085079ff6156e05b1f0d3caf45ca\" returns successfully" Nov 8 01:19:11.849909 containerd[1823]: time="2025-11-08T01:19:11.849800082Z" level=info msg="StartContainer for \"37543871212c851547673b67e6d2efefced09336cca80bd50c4ec2b46c6c6417\" returns successfully" Nov 8 01:19:11.852663 containerd[1823]: time="2025-11-08T01:19:11.852641581Z" level=info msg="StartContainer for \"c4edd4ed5ca4f130a96cdbd215b65c8f6d7afb83fc342629f8b9416936a14e21\" returns successfully" Nov 8 01:19:12.228060 kubelet[2658]: I1108 01:19:12.227140 2658 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-8acfe54808" Nov 8 01:19:12.419981 kubelet[2658]: E1108 01:19:12.419960 2658 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081.3.6-n-8acfe54808\" not found" node="ci-4081.3.6-n-8acfe54808" Nov 8 01:19:12.520700 kubelet[2658]: I1108 01:19:12.520612 2658 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081.3.6-n-8acfe54808" Nov 8 01:19:12.520700 kubelet[2658]: E1108 01:19:12.520633 2658 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4081.3.6-n-8acfe54808\": node \"ci-4081.3.6-n-8acfe54808\" not found" Nov 8 01:19:12.584227 kubelet[2658]: I1108 01:19:12.584113 2658 apiserver.go:52] "Watching apiserver" Nov 8 01:19:12.616163 kubelet[2658]: I1108 01:19:12.616090 2658 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 8 01:19:12.616163 kubelet[2658]: I1108 01:19:12.616129 2658 kubelet.go:3194] "Creating a mirror pod for static pod" 
pod="kube-system/kube-scheduler-ci-4081.3.6-n-8acfe54808" Nov 8 01:19:12.626784 kubelet[2658]: E1108 01:19:12.626710 2658 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.6-n-8acfe54808\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081.3.6-n-8acfe54808" Nov 8 01:19:12.627077 kubelet[2658]: I1108 01:19:12.626794 2658 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-n-8acfe54808" Nov 8 01:19:12.631577 kubelet[2658]: E1108 01:19:12.631518 2658 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.6-n-8acfe54808\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081.3.6-n-8acfe54808" Nov 8 01:19:12.631577 kubelet[2658]: I1108 01:19:12.631578 2658 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-8acfe54808" Nov 8 01:19:12.637302 kubelet[2658]: E1108 01:19:12.637234 2658 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081.3.6-n-8acfe54808\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-8acfe54808" Nov 8 01:19:12.644375 kubelet[2658]: I1108 01:19:12.644346 2658 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.6-n-8acfe54808" Nov 8 01:19:12.645922 kubelet[2658]: I1108 01:19:12.645898 2658 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-n-8acfe54808" Nov 8 01:19:12.646730 kubelet[2658]: E1108 01:19:12.646700 2658 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.6-n-8acfe54808\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081.3.6-n-8acfe54808" Nov 8 
01:19:12.647632 kubelet[2658]: I1108 01:19:12.647611 2658 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-8acfe54808" Nov 8 01:19:12.648774 kubelet[2658]: E1108 01:19:12.648713 2658 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.6-n-8acfe54808\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081.3.6-n-8acfe54808" Nov 8 01:19:12.649827 kubelet[2658]: E1108 01:19:12.649756 2658 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081.3.6-n-8acfe54808\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-8acfe54808" Nov 8 01:19:13.650332 kubelet[2658]: I1108 01:19:13.650280 2658 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.6-n-8acfe54808" Nov 8 01:19:13.651617 kubelet[2658]: I1108 01:19:13.650394 2658 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-n-8acfe54808" Nov 8 01:19:13.662194 kubelet[2658]: W1108 01:19:13.660758 2658 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Nov 8 01:19:13.662194 kubelet[2658]: W1108 01:19:13.661565 2658 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Nov 8 01:19:14.768576 systemd[1]: Reloading requested from client PID 2973 ('systemctl') (unit session-11.scope)... Nov 8 01:19:14.768584 systemd[1]: Reloading... Nov 8 01:19:14.817569 zram_generator::config[3012]: No configuration found. 
Nov 8 01:19:14.884869 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 8 01:19:14.953592 systemd[1]: Reloading finished in 184 ms. Nov 8 01:19:14.979309 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 01:19:14.985161 systemd[1]: kubelet.service: Deactivated successfully. Nov 8 01:19:14.985272 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 01:19:14.993799 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 01:19:15.231361 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 01:19:15.233656 (kubelet)[3076]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 8 01:19:15.256576 kubelet[3076]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 8 01:19:15.256576 kubelet[3076]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 8 01:19:15.256576 kubelet[3076]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Nov 8 01:19:15.256874 kubelet[3076]: I1108 01:19:15.256618 3076 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Nov 8 01:19:15.260076 kubelet[3076]: I1108 01:19:15.260036 3076 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Nov 8 01:19:15.260076 kubelet[3076]: I1108 01:19:15.260046 3076 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Nov 8 01:19:15.260195 kubelet[3076]: I1108 01:19:15.260165 3076 server.go:954] "Client rotation is on, will bootstrap in background"
Nov 8 01:19:15.260870 kubelet[3076]: I1108 01:19:15.260859 3076 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Nov 8 01:19:15.262058 kubelet[3076]: I1108 01:19:15.262025 3076 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Nov 8 01:19:15.264406 kubelet[3076]: E1108 01:19:15.264369 3076 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Nov 8 01:19:15.264406 kubelet[3076]: I1108 01:19:15.264398 3076 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Nov 8 01:19:15.271355 kubelet[3076]: I1108 01:19:15.271318 3076 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Nov 8 01:19:15.271424 kubelet[3076]: I1108 01:19:15.271408 3076 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Nov 8 01:19:15.271584 kubelet[3076]: I1108 01:19:15.271423 3076 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.6-n-8acfe54808","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Nov 8 01:19:15.271584 kubelet[3076]: I1108 01:19:15.271558 3076 topology_manager.go:138] "Creating topology manager with none policy"
Nov 8 01:19:15.271584 kubelet[3076]: I1108 01:19:15.271565 3076 container_manager_linux.go:304] "Creating device plugin manager"
Nov 8 01:19:15.271680 kubelet[3076]: I1108 01:19:15.271593 3076 state_mem.go:36] "Initialized new in-memory state store"
Nov 8 01:19:15.271701 kubelet[3076]: I1108 01:19:15.271689 3076 kubelet.go:446] "Attempting to sync node with API server"
Nov 8 01:19:15.271718 kubelet[3076]: I1108 01:19:15.271700 3076 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Nov 8 01:19:15.271718 kubelet[3076]: I1108 01:19:15.271710 3076 kubelet.go:352] "Adding apiserver pod source"
Nov 8 01:19:15.271718 kubelet[3076]: I1108 01:19:15.271718 3076 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Nov 8 01:19:15.272105 kubelet[3076]: I1108 01:19:15.272091 3076 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Nov 8 01:19:15.272377 kubelet[3076]: I1108 01:19:15.272340 3076 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Nov 8 01:19:15.272713 kubelet[3076]: I1108 01:19:15.272676 3076 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Nov 8 01:19:15.272713 kubelet[3076]: I1108 01:19:15.272697 3076 server.go:1287] "Started kubelet"
Nov 8 01:19:15.272802 kubelet[3076]: I1108 01:19:15.272747 3076 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Nov 8 01:19:15.273657 kubelet[3076]: I1108 01:19:15.272768 3076 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Nov 8 01:19:15.273657 kubelet[3076]: I1108 01:19:15.273622 3076 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Nov 8 01:19:15.274556 kubelet[3076]: E1108 01:19:15.274548 3076 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Nov 8 01:19:15.274632 kubelet[3076]: I1108 01:19:15.274619 3076 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Nov 8 01:19:15.274632 kubelet[3076]: I1108 01:19:15.274628 3076 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Nov 8 01:19:15.274708 kubelet[3076]: I1108 01:19:15.274648 3076 volume_manager.go:297] "Starting Kubelet Volume Manager"
Nov 8 01:19:15.274708 kubelet[3076]: I1108 01:19:15.274668 3076 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Nov 8 01:19:15.274708 kubelet[3076]: E1108 01:19:15.274673 3076 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-8acfe54808\" not found"
Nov 8 01:19:15.274708 kubelet[3076]: I1108 01:19:15.274697 3076 server.go:479] "Adding debug handlers to kubelet server"
Nov 8 01:19:15.274815 kubelet[3076]: I1108 01:19:15.274759 3076 reconciler.go:26] "Reconciler: start to sync state"
Nov 8 01:19:15.275045 kubelet[3076]: I1108 01:19:15.275037 3076 factory.go:221] Registration of the systemd container factory successfully
Nov 8 01:19:15.275088 kubelet[3076]: I1108 01:19:15.275077 3076 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Nov 8 01:19:15.275570 kubelet[3076]: I1108 01:19:15.275561 3076 factory.go:221] Registration of the containerd container factory successfully
Nov 8 01:19:15.281300 kubelet[3076]: I1108 01:19:15.281273 3076 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Nov 8 01:19:15.281795 kubelet[3076]: I1108 01:19:15.281787 3076 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Nov 8 01:19:15.281849 kubelet[3076]: I1108 01:19:15.281801 3076 status_manager.go:227] "Starting to sync pod status with apiserver"
Nov 8 01:19:15.281849 kubelet[3076]: I1108 01:19:15.281815 3076 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Nov 8 01:19:15.281849 kubelet[3076]: I1108 01:19:15.281822 3076 kubelet.go:2382] "Starting kubelet main sync loop"
Nov 8 01:19:15.281913 kubelet[3076]: E1108 01:19:15.281856 3076 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Nov 8 01:19:15.289463 kubelet[3076]: I1108 01:19:15.289418 3076 cpu_manager.go:221] "Starting CPU manager" policy="none"
Nov 8 01:19:15.289463 kubelet[3076]: I1108 01:19:15.289430 3076 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Nov 8 01:19:15.289463 kubelet[3076]: I1108 01:19:15.289441 3076 state_mem.go:36] "Initialized new in-memory state store"
Nov 8 01:19:15.289569 kubelet[3076]: I1108 01:19:15.289534 3076 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Nov 8 01:19:15.289569 kubelet[3076]: I1108 01:19:15.289541 3076 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Nov 8 01:19:15.289569 kubelet[3076]: I1108 01:19:15.289552 3076 policy_none.go:49] "None policy: Start"
Nov 8 01:19:15.289569 kubelet[3076]: I1108 01:19:15.289558 3076 memory_manager.go:186] "Starting memorymanager" policy="None"
Nov 8 01:19:15.289569 kubelet[3076]: I1108 01:19:15.289563 3076 state_mem.go:35] "Initializing new in-memory state store"
Nov 8 01:19:15.289640 kubelet[3076]: I1108 01:19:15.289620 3076 state_mem.go:75] "Updated machine memory state"
Nov 8 01:19:15.291388 kubelet[3076]: I1108 01:19:15.291352 3076 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Nov 8 01:19:15.291445 kubelet[3076]: I1108 01:19:15.291439 3076 eviction_manager.go:189] "Eviction manager: starting control loop"
Nov 8 01:19:15.291465 kubelet[3076]: I1108 01:19:15.291448 3076 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Nov 8 01:19:15.291582 kubelet[3076]: I1108 01:19:15.291528 3076 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Nov 8 01:19:15.291807 kubelet[3076]: E1108 01:19:15.291797 3076 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Nov 8 01:19:15.383885 kubelet[3076]: I1108 01:19:15.383756 3076 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-8acfe54808"
Nov 8 01:19:15.383885 kubelet[3076]: I1108 01:19:15.383867 3076 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-n-8acfe54808"
Nov 8 01:19:15.384265 kubelet[3076]: I1108 01:19:15.383969 3076 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.6-n-8acfe54808"
Nov 8 01:19:15.391109 kubelet[3076]: W1108 01:19:15.391020 3076 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Nov 8 01:19:15.391588 kubelet[3076]: W1108 01:19:15.391504 3076 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Nov 8 01:19:15.391588 kubelet[3076]: W1108 01:19:15.391569 3076 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Nov 8 01:19:15.391891 kubelet[3076]: E1108 01:19:15.391610 3076 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.6-n-8acfe54808\" already exists" pod="kube-system/kube-scheduler-ci-4081.3.6-n-8acfe54808"
Nov 8 01:19:15.391891 kubelet[3076]: E1108 01:19:15.391682 3076 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.6-n-8acfe54808\" already exists" pod="kube-system/kube-apiserver-ci-4081.3.6-n-8acfe54808"
Nov 8 01:19:15.398416 kubelet[3076]: I1108 01:19:15.398317 3076 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-8acfe54808"
Nov 8 01:19:15.407588 kubelet[3076]: I1108 01:19:15.407545 3076 kubelet_node_status.go:124] "Node was previously registered" node="ci-4081.3.6-n-8acfe54808"
Nov 8 01:19:15.407788 kubelet[3076]: I1108 01:19:15.407691 3076 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081.3.6-n-8acfe54808"
Nov 8 01:19:15.576909 kubelet[3076]: I1108 01:19:15.576656 3076 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ddcbb5510ef164eb94d20503edec2033-ca-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-8acfe54808\" (UID: \"ddcbb5510ef164eb94d20503edec2033\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-8acfe54808"
Nov 8 01:19:15.576909 kubelet[3076]: I1108 01:19:15.576762 3076 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8ed860fee0c2cfb053318ecd34d773bf-kubeconfig\") pod \"kube-scheduler-ci-4081.3.6-n-8acfe54808\" (UID: \"8ed860fee0c2cfb053318ecd34d773bf\") " pod="kube-system/kube-scheduler-ci-4081.3.6-n-8acfe54808"
Nov 8 01:19:15.576909 kubelet[3076]: I1108 01:19:15.576846 3076 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/817c7a33eecb9286ced884c8fff83be2-k8s-certs\") pod \"kube-apiserver-ci-4081.3.6-n-8acfe54808\" (UID: \"817c7a33eecb9286ced884c8fff83be2\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-8acfe54808"
Nov 8 01:19:15.576909 kubelet[3076]: I1108 01:19:15.576898 3076 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/817c7a33eecb9286ced884c8fff83be2-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.6-n-8acfe54808\" (UID: \"817c7a33eecb9286ced884c8fff83be2\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-8acfe54808"
Nov 8 01:19:15.577432 kubelet[3076]: I1108 01:19:15.576955 3076 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ddcbb5510ef164eb94d20503edec2033-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.6-n-8acfe54808\" (UID: \"ddcbb5510ef164eb94d20503edec2033\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-8acfe54808"
Nov 8 01:19:15.577432 kubelet[3076]: I1108 01:19:15.577005 3076 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ddcbb5510ef164eb94d20503edec2033-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-8acfe54808\" (UID: \"ddcbb5510ef164eb94d20503edec2033\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-8acfe54808"
Nov 8 01:19:15.577432 kubelet[3076]: I1108 01:19:15.577051 3076 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ddcbb5510ef164eb94d20503edec2033-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.6-n-8acfe54808\" (UID: \"ddcbb5510ef164eb94d20503edec2033\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-8acfe54808"
Nov 8 01:19:15.577432 kubelet[3076]: I1108 01:19:15.577108 3076 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ddcbb5510ef164eb94d20503edec2033-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.6-n-8acfe54808\" (UID: \"ddcbb5510ef164eb94d20503edec2033\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-8acfe54808"
Nov 8 01:19:15.577432 kubelet[3076]: I1108 01:19:15.577156 3076 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/817c7a33eecb9286ced884c8fff83be2-ca-certs\") pod \"kube-apiserver-ci-4081.3.6-n-8acfe54808\" (UID: \"817c7a33eecb9286ced884c8fff83be2\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-8acfe54808"
Nov 8 01:19:16.272654 kubelet[3076]: I1108 01:19:16.272635 3076 apiserver.go:52] "Watching apiserver"
Nov 8 01:19:16.274775 kubelet[3076]: I1108 01:19:16.274739 3076 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Nov 8 01:19:16.287330 kubelet[3076]: I1108 01:19:16.287236 3076 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.6-n-8acfe54808"
Nov 8 01:19:16.287621 kubelet[3076]: I1108 01:19:16.287420 3076 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-8acfe54808"
Nov 8 01:19:16.295548 kubelet[3076]: W1108 01:19:16.295464 3076 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Nov 8 01:19:16.295752 kubelet[3076]: E1108 01:19:16.295622 3076 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081.3.6-n-8acfe54808\" already exists" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-8acfe54808"
Nov 8 01:19:16.296134 kubelet[3076]: W1108 01:19:16.296044 3076 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Nov 8 01:19:16.296302 kubelet[3076]: E1108 01:19:16.296144 3076 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.6-n-8acfe54808\" already exists" pod="kube-system/kube-scheduler-ci-4081.3.6-n-8acfe54808"
Nov 8 01:19:16.310406 kubelet[3076]: I1108 01:19:16.310260 3076 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.3.6-n-8acfe54808" podStartSLOduration=3.3102486190000002 podStartE2EDuration="3.310248619s" podCreationTimestamp="2025-11-08 01:19:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 01:19:16.310194827 +0000 UTC m=+1.074800689" watchObservedRunningTime="2025-11-08 01:19:16.310248619 +0000 UTC m=+1.074854474"
Nov 8 01:19:16.336026 kubelet[3076]: I1108 01:19:16.335967 3076 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-8acfe54808" podStartSLOduration=1.335953656 podStartE2EDuration="1.335953656s" podCreationTimestamp="2025-11-08 01:19:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 01:19:16.314415505 +0000 UTC m=+1.079021372" watchObservedRunningTime="2025-11-08 01:19:16.335953656 +0000 UTC m=+1.100559514"
Nov 8 01:19:16.340560 kubelet[3076]: I1108 01:19:16.340526 3076 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.3.6-n-8acfe54808" podStartSLOduration=3.3405137590000002 podStartE2EDuration="3.340513759s" podCreationTimestamp="2025-11-08 01:19:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 01:19:16.336049805 +0000 UTC m=+1.100655663" watchObservedRunningTime="2025-11-08 01:19:16.340513759 +0000 UTC m=+1.105119613"
Nov 8 01:19:20.592684 kubelet[3076]: I1108 01:19:20.592616 3076 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Nov 8 01:19:20.594096 kubelet[3076]: I1108 01:19:20.593780 3076 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Nov 8 01:19:20.594314 containerd[1823]: time="2025-11-08T01:19:20.593295593Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Nov 8 01:19:21.279500 systemd[1]: Created slice kubepods-besteffort-pod2935ff3c_9a0b_4481_b012_b54ac703e007.slice - libcontainer container kubepods-besteffort-pod2935ff3c_9a0b_4481_b012_b54ac703e007.slice.
Nov 8 01:19:21.317083 kubelet[3076]: I1108 01:19:21.316974 3076 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2935ff3c-9a0b-4481-b012-b54ac703e007-lib-modules\") pod \"kube-proxy-jqws6\" (UID: \"2935ff3c-9a0b-4481-b012-b54ac703e007\") " pod="kube-system/kube-proxy-jqws6"
Nov 8 01:19:21.317334 kubelet[3076]: I1108 01:19:21.317109 3076 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/2935ff3c-9a0b-4481-b012-b54ac703e007-kube-proxy\") pod \"kube-proxy-jqws6\" (UID: \"2935ff3c-9a0b-4481-b012-b54ac703e007\") " pod="kube-system/kube-proxy-jqws6"
Nov 8 01:19:21.317334 kubelet[3076]: I1108 01:19:21.317194 3076 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2935ff3c-9a0b-4481-b012-b54ac703e007-xtables-lock\") pod \"kube-proxy-jqws6\" (UID: \"2935ff3c-9a0b-4481-b012-b54ac703e007\") " pod="kube-system/kube-proxy-jqws6"
Nov 8 01:19:21.317334 kubelet[3076]: I1108 01:19:21.317298 3076 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dmcl9\" (UniqueName: \"kubernetes.io/projected/2935ff3c-9a0b-4481-b012-b54ac703e007-kube-api-access-dmcl9\") pod \"kube-proxy-jqws6\" (UID: \"2935ff3c-9a0b-4481-b012-b54ac703e007\") " pod="kube-system/kube-proxy-jqws6"
Nov 8 01:19:21.597196 containerd[1823]: time="2025-11-08T01:19:21.596995705Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jqws6,Uid:2935ff3c-9a0b-4481-b012-b54ac703e007,Namespace:kube-system,Attempt:0,}"
Nov 8 01:19:21.608060 containerd[1823]: time="2025-11-08T01:19:21.608016655Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 8 01:19:21.608264 containerd[1823]: time="2025-11-08T01:19:21.608217360Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 8 01:19:21.608264 containerd[1823]: time="2025-11-08T01:19:21.608234900Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 8 01:19:21.608307 containerd[1823]: time="2025-11-08T01:19:21.608276520Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 8 01:19:21.627803 systemd[1]: Started cri-containerd-60c486b55a546e41da71b8b61408d62cd13c7f7a967f36c3583a4aec0fea1892.scope - libcontainer container 60c486b55a546e41da71b8b61408d62cd13c7f7a967f36c3583a4aec0fea1892.
Nov 8 01:19:21.638149 containerd[1823]: time="2025-11-08T01:19:21.638125185Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jqws6,Uid:2935ff3c-9a0b-4481-b012-b54ac703e007,Namespace:kube-system,Attempt:0,} returns sandbox id \"60c486b55a546e41da71b8b61408d62cd13c7f7a967f36c3583a4aec0fea1892\""
Nov 8 01:19:21.639386 containerd[1823]: time="2025-11-08T01:19:21.639373618Z" level=info msg="CreateContainer within sandbox \"60c486b55a546e41da71b8b61408d62cd13c7f7a967f36c3583a4aec0fea1892\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Nov 8 01:19:21.646187 containerd[1823]: time="2025-11-08T01:19:21.646147120Z" level=info msg="CreateContainer within sandbox \"60c486b55a546e41da71b8b61408d62cd13c7f7a967f36c3583a4aec0fea1892\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"1fdfd6b497db928ec33eab4aa75eca225c0f3b3a8c2bef0d63ed0b7885abeae9\""
Nov 8 01:19:21.646404 containerd[1823]: time="2025-11-08T01:19:21.646392583Z" level=info msg="StartContainer for \"1fdfd6b497db928ec33eab4aa75eca225c0f3b3a8c2bef0d63ed0b7885abeae9\""
Nov 8 01:19:21.680785 systemd[1]: Started cri-containerd-1fdfd6b497db928ec33eab4aa75eca225c0f3b3a8c2bef0d63ed0b7885abeae9.scope - libcontainer container 1fdfd6b497db928ec33eab4aa75eca225c0f3b3a8c2bef0d63ed0b7885abeae9.
Nov 8 01:19:21.703460 containerd[1823]: time="2025-11-08T01:19:21.703419165Z" level=info msg="StartContainer for \"1fdfd6b497db928ec33eab4aa75eca225c0f3b3a8c2bef0d63ed0b7885abeae9\" returns successfully"
Nov 8 01:19:21.724829 systemd[1]: Created slice kubepods-besteffort-podc10e8725_e8f1_4e19_8654_e9bb044f7584.slice - libcontainer container kubepods-besteffort-podc10e8725_e8f1_4e19_8654_e9bb044f7584.slice.
Nov 8 01:19:21.820518 kubelet[3076]: I1108 01:19:21.820393 3076 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dg4qw\" (UniqueName: \"kubernetes.io/projected/c10e8725-e8f1-4e19-8654-e9bb044f7584-kube-api-access-dg4qw\") pod \"tigera-operator-7dcd859c48-dnk46\" (UID: \"c10e8725-e8f1-4e19-8654-e9bb044f7584\") " pod="tigera-operator/tigera-operator-7dcd859c48-dnk46"
Nov 8 01:19:21.820518 kubelet[3076]: I1108 01:19:21.820513 3076 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/c10e8725-e8f1-4e19-8654-e9bb044f7584-var-lib-calico\") pod \"tigera-operator-7dcd859c48-dnk46\" (UID: \"c10e8725-e8f1-4e19-8654-e9bb044f7584\") " pod="tigera-operator/tigera-operator-7dcd859c48-dnk46"
Nov 8 01:19:22.027982 containerd[1823]: time="2025-11-08T01:19:22.027868419Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-dnk46,Uid:c10e8725-e8f1-4e19-8654-e9bb044f7584,Namespace:tigera-operator,Attempt:0,}"
Nov 8 01:19:22.038161 containerd[1823]: time="2025-11-08T01:19:22.038119845Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 8 01:19:22.038161 containerd[1823]: time="2025-11-08T01:19:22.038146189Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 8 01:19:22.038161 containerd[1823]: time="2025-11-08T01:19:22.038153434Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 8 01:19:22.038279 containerd[1823]: time="2025-11-08T01:19:22.038193819Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 8 01:19:22.059723 systemd[1]: Started cri-containerd-f5c9ac42286f190b3126ecb70b6866f94e8c4aa7451903d1cf8823d575ff5b92.scope - libcontainer container f5c9ac42286f190b3126ecb70b6866f94e8c4aa7451903d1cf8823d575ff5b92.
Nov 8 01:19:22.082604 containerd[1823]: time="2025-11-08T01:19:22.082581862Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-dnk46,Uid:c10e8725-e8f1-4e19-8654-e9bb044f7584,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"f5c9ac42286f190b3126ecb70b6866f94e8c4aa7451903d1cf8823d575ff5b92\""
Nov 8 01:19:22.083363 containerd[1823]: time="2025-11-08T01:19:22.083349870Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\""
Nov 8 01:19:22.444846 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1363073425.mount: Deactivated successfully.
Nov 8 01:19:23.509690 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount370852372.mount: Deactivated successfully.
Nov 8 01:19:23.933150 containerd[1823]: time="2025-11-08T01:19:23.933126242Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 8 01:19:23.933373 containerd[1823]: time="2025-11-08T01:19:23.933338427Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691"
Nov 8 01:19:23.934146 containerd[1823]: time="2025-11-08T01:19:23.933936469Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 8 01:19:23.935950 containerd[1823]: time="2025-11-08T01:19:23.935906783Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 8 01:19:23.936323 containerd[1823]: time="2025-11-08T01:19:23.936303866Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 1.85293478s"
Nov 8 01:19:23.936376 containerd[1823]: time="2025-11-08T01:19:23.936322689Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\""
Nov 8 01:19:23.937269 containerd[1823]: time="2025-11-08T01:19:23.937254162Z" level=info msg="CreateContainer within sandbox \"f5c9ac42286f190b3126ecb70b6866f94e8c4aa7451903d1cf8823d575ff5b92\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Nov 8 01:19:23.941106 containerd[1823]: time="2025-11-08T01:19:23.941091883Z" level=info msg="CreateContainer within sandbox \"f5c9ac42286f190b3126ecb70b6866f94e8c4aa7451903d1cf8823d575ff5b92\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"02e14c97a4e954a5a7fc93fe461e4020fa0d38474d303d391ef56668591de06d\""
Nov 8 01:19:23.941308 containerd[1823]: time="2025-11-08T01:19:23.941294998Z" level=info msg="StartContainer for \"02e14c97a4e954a5a7fc93fe461e4020fa0d38474d303d391ef56668591de06d\""
Nov 8 01:19:23.968730 systemd[1]: Started cri-containerd-02e14c97a4e954a5a7fc93fe461e4020fa0d38474d303d391ef56668591de06d.scope - libcontainer container 02e14c97a4e954a5a7fc93fe461e4020fa0d38474d303d391ef56668591de06d.
Nov 8 01:19:23.980879 containerd[1823]: time="2025-11-08T01:19:23.980850762Z" level=info msg="StartContainer for \"02e14c97a4e954a5a7fc93fe461e4020fa0d38474d303d391ef56668591de06d\" returns successfully"
Nov 8 01:19:24.326672 kubelet[3076]: I1108 01:19:24.326399 3076 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-jqws6" podStartSLOduration=3.326360692 podStartE2EDuration="3.326360692s" podCreationTimestamp="2025-11-08 01:19:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 01:19:22.341120637 +0000 UTC m=+7.105726654" watchObservedRunningTime="2025-11-08 01:19:24.326360692 +0000 UTC m=+9.090966641"
Nov 8 01:19:24.327701 kubelet[3076]: I1108 01:19:24.326748 3076 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-dnk46" podStartSLOduration=1.473091353 podStartE2EDuration="3.326722953s" podCreationTimestamp="2025-11-08 01:19:21 +0000 UTC" firstStartedPulling="2025-11-08 01:19:22.08310415 +0000 UTC m=+6.847710008" lastFinishedPulling="2025-11-08 01:19:23.936735754 +0000 UTC m=+8.701341608" observedRunningTime="2025-11-08 01:19:24.326699832 +0000 UTC m=+9.091305798" watchObservedRunningTime="2025-11-08 01:19:24.326722953 +0000 UTC m=+9.091328860"
Nov 8 01:19:27.577988 update_engine[1818]: I20251108 01:19:27.577822 1818 update_attempter.cc:509] Updating boot flags...
Nov 8 01:19:27.619525 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 35 scanned by (udev-worker) (3564)
Nov 8 01:19:27.648511 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 35 scanned by (udev-worker) (3563)
Nov 8 01:19:28.713699 sudo[2096]: pam_unix(sudo:session): session closed for user root
Nov 8 01:19:28.714695 sshd[2093]: pam_unix(sshd:session): session closed for user core
Nov 8 01:19:28.716488 systemd[1]: sshd@8-139.178.94.41:22-139.178.68.195:57306.service: Deactivated successfully.
Nov 8 01:19:28.717413 systemd[1]: session-11.scope: Deactivated successfully.
Nov 8 01:19:28.717515 systemd[1]: session-11.scope: Consumed 3.452s CPU time, 167.6M memory peak, 0B memory swap peak.
Nov 8 01:19:28.718090 systemd-logind[1816]: Session 11 logged out. Waiting for processes to exit.
Nov 8 01:19:28.718621 systemd-logind[1816]: Removed session 11.
Nov 8 01:19:32.725307 systemd[1]: Created slice kubepods-besteffort-podd58dc192_12b2_457d_b769_a24dd96828ed.slice - libcontainer container kubepods-besteffort-podd58dc192_12b2_457d_b769_a24dd96828ed.slice.
Nov 8 01:19:32.793334 kubelet[3076]: I1108 01:19:32.793228 3076 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/d58dc192-12b2-457d-b769-a24dd96828ed-typha-certs\") pod \"calico-typha-6fd46dc96-zptrq\" (UID: \"d58dc192-12b2-457d-b769-a24dd96828ed\") " pod="calico-system/calico-typha-6fd46dc96-zptrq"
Nov 8 01:19:32.793334 kubelet[3076]: I1108 01:19:32.793321 3076 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bqr2w\" (UniqueName: \"kubernetes.io/projected/d58dc192-12b2-457d-b769-a24dd96828ed-kube-api-access-bqr2w\") pod \"calico-typha-6fd46dc96-zptrq\" (UID: \"d58dc192-12b2-457d-b769-a24dd96828ed\") " pod="calico-system/calico-typha-6fd46dc96-zptrq"
Nov 8 01:19:32.794455 kubelet[3076]: I1108 01:19:32.793387 3076 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d58dc192-12b2-457d-b769-a24dd96828ed-tigera-ca-bundle\") pod \"calico-typha-6fd46dc96-zptrq\" (UID: \"d58dc192-12b2-457d-b769-a24dd96828ed\") " pod="calico-system/calico-typha-6fd46dc96-zptrq"
Nov 8 01:19:32.931433 systemd[1]: Created slice kubepods-besteffort-podedb2eb02_2839_4c3e_97cc_07952850c60e.slice - libcontainer container kubepods-besteffort-podedb2eb02_2839_4c3e_97cc_07952850c60e.slice.
Nov 8 01:19:32.995736 kubelet[3076]: I1108 01:19:32.995529 3076 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/edb2eb02-2839-4c3e-97cc-07952850c60e-flexvol-driver-host\") pod \"calico-node-956wh\" (UID: \"edb2eb02-2839-4c3e-97cc-07952850c60e\") " pod="calico-system/calico-node-956wh" Nov 8 01:19:32.995736 kubelet[3076]: I1108 01:19:32.995616 3076 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/edb2eb02-2839-4c3e-97cc-07952850c60e-var-run-calico\") pod \"calico-node-956wh\" (UID: \"edb2eb02-2839-4c3e-97cc-07952850c60e\") " pod="calico-system/calico-node-956wh" Nov 8 01:19:32.995736 kubelet[3076]: I1108 01:19:32.995670 3076 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/edb2eb02-2839-4c3e-97cc-07952850c60e-cni-bin-dir\") pod \"calico-node-956wh\" (UID: \"edb2eb02-2839-4c3e-97cc-07952850c60e\") " pod="calico-system/calico-node-956wh" Nov 8 01:19:32.995736 kubelet[3076]: I1108 01:19:32.995717 3076 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/edb2eb02-2839-4c3e-97cc-07952850c60e-node-certs\") pod \"calico-node-956wh\" (UID: \"edb2eb02-2839-4c3e-97cc-07952850c60e\") " pod="calico-system/calico-node-956wh" Nov 8 01:19:32.996227 kubelet[3076]: I1108 01:19:32.995771 3076 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/edb2eb02-2839-4c3e-97cc-07952850c60e-cni-log-dir\") pod \"calico-node-956wh\" (UID: \"edb2eb02-2839-4c3e-97cc-07952850c60e\") " pod="calico-system/calico-node-956wh" Nov 8 01:19:32.996227 kubelet[3076]: I1108 01:19:32.995817 3076 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/edb2eb02-2839-4c3e-97cc-07952850c60e-var-lib-calico\") pod \"calico-node-956wh\" (UID: \"edb2eb02-2839-4c3e-97cc-07952850c60e\") " pod="calico-system/calico-node-956wh" Nov 8 01:19:32.996227 kubelet[3076]: I1108 01:19:32.995870 3076 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/edb2eb02-2839-4c3e-97cc-07952850c60e-tigera-ca-bundle\") pod \"calico-node-956wh\" (UID: \"edb2eb02-2839-4c3e-97cc-07952850c60e\") " pod="calico-system/calico-node-956wh" Nov 8 01:19:32.996227 kubelet[3076]: I1108 01:19:32.995912 3076 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/edb2eb02-2839-4c3e-97cc-07952850c60e-lib-modules\") pod \"calico-node-956wh\" (UID: \"edb2eb02-2839-4c3e-97cc-07952850c60e\") " pod="calico-system/calico-node-956wh" Nov 8 01:19:32.996227 kubelet[3076]: I1108 01:19:32.995956 3076 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/edb2eb02-2839-4c3e-97cc-07952850c60e-policysync\") pod \"calico-node-956wh\" (UID: \"edb2eb02-2839-4c3e-97cc-07952850c60e\") " pod="calico-system/calico-node-956wh" Nov 8 01:19:32.996697 kubelet[3076]: I1108 01:19:32.996004 3076 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mql6g\" (UniqueName: \"kubernetes.io/projected/edb2eb02-2839-4c3e-97cc-07952850c60e-kube-api-access-mql6g\") pod \"calico-node-956wh\" (UID: \"edb2eb02-2839-4c3e-97cc-07952850c60e\") " pod="calico-system/calico-node-956wh" Nov 8 01:19:32.996697 kubelet[3076]: I1108 01:19:32.996047 3076 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/edb2eb02-2839-4c3e-97cc-07952850c60e-cni-net-dir\") pod \"calico-node-956wh\" (UID: \"edb2eb02-2839-4c3e-97cc-07952850c60e\") " pod="calico-system/calico-node-956wh" Nov 8 01:19:32.996697 kubelet[3076]: I1108 01:19:32.996091 3076 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/edb2eb02-2839-4c3e-97cc-07952850c60e-xtables-lock\") pod \"calico-node-956wh\" (UID: \"edb2eb02-2839-4c3e-97cc-07952850c60e\") " pod="calico-system/calico-node-956wh" Nov 8 01:19:33.028991 containerd[1823]: time="2025-11-08T01:19:33.028908739Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6fd46dc96-zptrq,Uid:d58dc192-12b2-457d-b769-a24dd96828ed,Namespace:calico-system,Attempt:0,}" Nov 8 01:19:33.039757 containerd[1823]: time="2025-11-08T01:19:33.039711296Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 01:19:33.039975 containerd[1823]: time="2025-11-08T01:19:33.039932947Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 01:19:33.039975 containerd[1823]: time="2025-11-08T01:19:33.039942954Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 01:19:33.040052 containerd[1823]: time="2025-11-08T01:19:33.040013141Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 01:19:33.059680 systemd[1]: Started cri-containerd-21f1afec21247d424187aee65d508e29353688feff118e11b80844d9de6ed136.scope - libcontainer container 21f1afec21247d424187aee65d508e29353688feff118e11b80844d9de6ed136. 
Nov 8 01:19:33.083576 kubelet[3076]: E1108 01:19:33.083544 3076 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rhnl5" podUID="b57117d3-8237-4f8a-aa85-534eb9568949" Nov 8 01:19:33.095061 containerd[1823]: time="2025-11-08T01:19:33.095034962Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6fd46dc96-zptrq,Uid:d58dc192-12b2-457d-b769-a24dd96828ed,Namespace:calico-system,Attempt:0,} returns sandbox id \"21f1afec21247d424187aee65d508e29353688feff118e11b80844d9de6ed136\"" Nov 8 01:19:33.096028 containerd[1823]: time="2025-11-08T01:19:33.096010545Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Nov 8 01:19:33.097116 kubelet[3076]: I1108 01:19:33.097102 3076 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b57117d3-8237-4f8a-aa85-534eb9568949-kubelet-dir\") pod \"csi-node-driver-rhnl5\" (UID: \"b57117d3-8237-4f8a-aa85-534eb9568949\") " pod="calico-system/csi-node-driver-rhnl5" Nov 8 01:19:33.097152 kubelet[3076]: I1108 01:19:33.097122 3076 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/b57117d3-8237-4f8a-aa85-534eb9568949-varrun\") pod \"csi-node-driver-rhnl5\" (UID: \"b57117d3-8237-4f8a-aa85-534eb9568949\") " pod="calico-system/csi-node-driver-rhnl5" Nov 8 01:19:33.097152 kubelet[3076]: I1108 01:19:33.097133 3076 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/b57117d3-8237-4f8a-aa85-534eb9568949-registration-dir\") pod \"csi-node-driver-rhnl5\" (UID: \"b57117d3-8237-4f8a-aa85-534eb9568949\") " 
pod="calico-system/csi-node-driver-rhnl5" Nov 8 01:19:33.097214 kubelet[3076]: I1108 01:19:33.097163 3076 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/b57117d3-8237-4f8a-aa85-534eb9568949-socket-dir\") pod \"csi-node-driver-rhnl5\" (UID: \"b57117d3-8237-4f8a-aa85-534eb9568949\") " pod="calico-system/csi-node-driver-rhnl5" Nov 8 01:19:33.097254 kubelet[3076]: I1108 01:19:33.097226 3076 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f8skx\" (UniqueName: \"kubernetes.io/projected/b57117d3-8237-4f8a-aa85-534eb9568949-kube-api-access-f8skx\") pod \"csi-node-driver-rhnl5\" (UID: \"b57117d3-8237-4f8a-aa85-534eb9568949\") " pod="calico-system/csi-node-driver-rhnl5" Nov 8 01:19:33.097453 kubelet[3076]: E1108 01:19:33.097444 3076 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 01:19:33.097453 kubelet[3076]: W1108 01:19:33.097452 3076 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 01:19:33.097529 kubelet[3076]: E1108 01:19:33.097477 3076 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 01:19:33.097653 kubelet[3076]: E1108 01:19:33.097644 3076 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 01:19:33.097653 kubelet[3076]: W1108 01:19:33.097651 3076 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 01:19:33.097754 kubelet[3076]: E1108 01:19:33.097660 3076 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 01:19:33.097785 kubelet[3076]: E1108 01:19:33.097777 3076 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 01:19:33.097785 kubelet[3076]: W1108 01:19:33.097782 3076 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 01:19:33.097833 kubelet[3076]: E1108 01:19:33.097788 3076 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 01:19:33.209824 kubelet[3076]: E1108 01:19:33.209681 3076 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 01:19:33.209824 kubelet[3076]: W1108 01:19:33.209717 3076 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 01:19:33.210160 kubelet[3076]: E1108 01:19:33.209834 3076 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 01:19:33.210541 kubelet[3076]: E1108 01:19:33.210466 3076 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 01:19:33.210541 kubelet[3076]: W1108 01:19:33.210533 3076 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 01:19:33.210852 kubelet[3076]: E1108 01:19:33.210637 3076 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 01:19:33.211136 kubelet[3076]: E1108 01:19:33.211105 3076 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 01:19:33.211136 kubelet[3076]: W1108 01:19:33.211134 3076 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 01:19:33.211350 kubelet[3076]: E1108 01:19:33.211178 3076 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 01:19:33.211743 kubelet[3076]: E1108 01:19:33.211690 3076 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 01:19:33.211743 kubelet[3076]: W1108 01:19:33.211718 3076 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 01:19:33.212075 kubelet[3076]: E1108 01:19:33.211810 3076 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 01:19:33.212208 kubelet[3076]: E1108 01:19:33.212175 3076 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 01:19:33.212308 kubelet[3076]: W1108 01:19:33.212210 3076 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 01:19:33.212308 kubelet[3076]: E1108 01:19:33.212281 3076 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 01:19:33.212816 kubelet[3076]: E1108 01:19:33.212757 3076 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 01:19:33.212816 kubelet[3076]: W1108 01:19:33.212786 3076 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 01:19:33.213047 kubelet[3076]: E1108 01:19:33.212824 3076 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Nov 8 01:19:33.239002 containerd[1823]: time="2025-11-08T01:19:33.238942886Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-956wh,Uid:edb2eb02-2839-4c3e-97cc-07952850c60e,Namespace:calico-system,Attempt:0,}"
Nov 8 01:19:33.251906 containerd[1823]: time="2025-11-08T01:19:33.251830196Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 8 01:19:33.251906 containerd[1823]: time="2025-11-08T01:19:33.251858130Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 8 01:19:33.251906 containerd[1823]: time="2025-11-08T01:19:33.251865472Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 8 01:19:33.252001 containerd[1823]: time="2025-11-08T01:19:33.251911294Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 8 01:19:33.265663 systemd[1]: Started cri-containerd-8e0d3b2f15473c7fbdf86f56a1ddc17ada55d003954571ed1e30259342227a83.scope - libcontainer container 8e0d3b2f15473c7fbdf86f56a1ddc17ada55d003954571ed1e30259342227a83.
Nov 8 01:19:33.275042 containerd[1823]: time="2025-11-08T01:19:33.274989186Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-956wh,Uid:edb2eb02-2839-4c3e-97cc-07952850c60e,Namespace:calico-system,Attempt:0,} returns sandbox id \"8e0d3b2f15473c7fbdf86f56a1ddc17ada55d003954571ed1e30259342227a83\""
Nov 8 01:19:35.282235 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1257381860.mount: Deactivated successfully.
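The repeated kubelet errors above all come from one failure: FlexVolume plugin probing execs /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the `init` argument, the binary does not exist, so the captured stdout is empty, and decoding an empty byte slice with Go's encoding/json produces exactly the logged "unexpected end of JSON input". A minimal sketch of that failure mode — the `DriverStatus` shape here is simplified from the FlexVolume driver convention, not the kubelet's actual struct:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// DriverStatus approximates the JSON a FlexVolume driver is expected
// to print on stdout (simplified; field names follow the FlexVolume
// convention, not the kubelet source).
type DriverStatus struct {
	Status       string          `json:"status"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

// parseDriverOutput reproduces the logged failure: when the driver
// binary is missing, its "output" is the empty string, and
// json.Unmarshal on empty input fails with the exact error in the log.
func parseDriverOutput(out string) (*DriverStatus, error) {
	var st DriverStatus
	if err := json.Unmarshal([]byte(out), &st); err != nil {
		return nil, err
	}
	return &st, nil
}

func main() {
	_, err := parseDriverOutput("") // empty stdout, as in the log
	fmt.Println(err)                // unexpected end of JSON input
}
```

The errors are harmless noise unless a FlexVolume driver is actually expected in that directory; a driver that does exist would silence them by printing a well-formed status JSON for `init`.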
Nov 8 01:19:35.282519 kubelet[3076]: E1108 01:19:35.282495 3076 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rhnl5" podUID="b57117d3-8237-4f8a-aa85-534eb9568949"
Nov 8 01:19:35.639636 containerd[1823]: time="2025-11-08T01:19:35.639583769Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 8 01:19:35.639851 containerd[1823]: time="2025-11-08T01:19:35.639794188Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35234628"
Nov 8 01:19:35.640135 containerd[1823]: time="2025-11-08T01:19:35.640122945Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 8 01:19:35.641036 containerd[1823]: time="2025-11-08T01:19:35.641025645Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 8 01:19:35.641415 containerd[1823]: time="2025-11-08T01:19:35.641404545Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 2.545371846s"
Nov 8 01:19:35.641438 containerd[1823]: time="2025-11-08T01:19:35.641418667Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\""
Nov 8 01:19:35.641886 containerd[1823]: time="2025-11-08T01:19:35.641877736Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\""
Nov 8 01:19:35.644757 containerd[1823]: time="2025-11-08T01:19:35.644740732Z" level=info msg="CreateContainer within sandbox \"21f1afec21247d424187aee65d508e29353688feff118e11b80844d9de6ed136\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Nov 8 01:19:35.648904 containerd[1823]: time="2025-11-08T01:19:35.648886287Z" level=info msg="CreateContainer within sandbox \"21f1afec21247d424187aee65d508e29353688feff118e11b80844d9de6ed136\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"a2dc4bed296930deea96f28dbe4c753745f334f161a16403421c08fd94711b80\""
Nov 8 01:19:35.649084 containerd[1823]: time="2025-11-08T01:19:35.649070963Z" level=info msg="StartContainer for \"a2dc4bed296930deea96f28dbe4c753745f334f161a16403421c08fd94711b80\""
Nov 8 01:19:35.675843 systemd[1]: Started cri-containerd-a2dc4bed296930deea96f28dbe4c753745f334f161a16403421c08fd94711b80.scope - libcontainer container a2dc4bed296930deea96f28dbe4c753745f334f161a16403421c08fd94711b80.
Nov 8 01:19:35.702908 containerd[1823]: time="2025-11-08T01:19:35.702851884Z" level=info msg="StartContainer for \"a2dc4bed296930deea96f28dbe4c753745f334f161a16403421c08fd94711b80\" returns successfully"
Nov 8 01:19:36.356685 kubelet[3076]: I1108 01:19:36.356565 3076 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6fd46dc96-zptrq" podStartSLOduration=1.810547683 podStartE2EDuration="4.356526115s" podCreationTimestamp="2025-11-08 01:19:32 +0000 UTC" firstStartedPulling="2025-11-08 01:19:33.095821493 +0000 UTC m=+17.860427360" lastFinishedPulling="2025-11-08 01:19:35.641799935 +0000 UTC m=+20.406405792" observedRunningTime="2025-11-08 01:19:36.356128191 +0000 UTC m=+21.120734138" watchObservedRunningTime="2025-11-08 01:19:36.356526115 +0000 UTC m=+21.121132099"
Nov 8 01:19:36.405089 kubelet[3076]: E1108 01:19:36.405022 3076 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 8 01:19:36.405089 kubelet[3076]: W1108 01:19:36.405075 3076 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 8 01:19:36.405515 kubelet[3076]: E1108 01:19:36.405135 3076 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Nov 8 01:19:36.435883 kubelet[3076]: E1108 01:19:36.435827 3076 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 8 01:19:36.435883 kubelet[3076]: W1108 01:19:36.435856 3076 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 8 01:19:36.436093 kubelet[3076]: E1108 01:19:36.435884 3076 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 01:19:36.972247 containerd[1823]: time="2025-11-08T01:19:36.972222984Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 01:19:36.972485 containerd[1823]: time="2025-11-08T01:19:36.972414567Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4446754" Nov 8 01:19:36.972811 containerd[1823]: time="2025-11-08T01:19:36.972798207Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 01:19:36.973752 containerd[1823]: time="2025-11-08T01:19:36.973739588Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 01:19:36.974549 containerd[1823]: time="2025-11-08T01:19:36.974502485Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.332610708s" Nov 8 01:19:36.974549 containerd[1823]: time="2025-11-08T01:19:36.974523502Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Nov 8 01:19:36.975491 containerd[1823]: time="2025-11-08T01:19:36.975477464Z" level=info msg="CreateContainer within sandbox \"8e0d3b2f15473c7fbdf86f56a1ddc17ada55d003954571ed1e30259342227a83\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Nov 8 01:19:36.980506 containerd[1823]: time="2025-11-08T01:19:36.980484723Z" level=info msg="CreateContainer within sandbox \"8e0d3b2f15473c7fbdf86f56a1ddc17ada55d003954571ed1e30259342227a83\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"b0537f7ac967cf3305085b1a1b6a1d468b44a1925874f58d187bbb9d155aaead\"" Nov 8 01:19:36.980777 containerd[1823]: time="2025-11-08T01:19:36.980764552Z" level=info msg="StartContainer for \"b0537f7ac967cf3305085b1a1b6a1d468b44a1925874f58d187bbb9d155aaead\"" Nov 8 01:19:37.001604 systemd[1]: Started cri-containerd-b0537f7ac967cf3305085b1a1b6a1d468b44a1925874f58d187bbb9d155aaead.scope - libcontainer container b0537f7ac967cf3305085b1a1b6a1d468b44a1925874f58d187bbb9d155aaead. Nov 8 01:19:37.018507 systemd[1]: cri-containerd-b0537f7ac967cf3305085b1a1b6a1d468b44a1925874f58d187bbb9d155aaead.scope: Deactivated successfully. Nov 8 01:19:37.027231 containerd[1823]: time="2025-11-08T01:19:37.027177041Z" level=info msg="StartContainer for \"b0537f7ac967cf3305085b1a1b6a1d468b44a1925874f58d187bbb9d155aaead\" returns successfully" Nov 8 01:19:37.272635 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b0537f7ac967cf3305085b1a1b6a1d468b44a1925874f58d187bbb9d155aaead-rootfs.mount: Deactivated successfully. 
Nov 8 01:19:37.283369 kubelet[3076]: E1108 01:19:37.283275 3076 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rhnl5" podUID="b57117d3-8237-4f8a-aa85-534eb9568949" Nov 8 01:19:37.459248 containerd[1823]: time="2025-11-08T01:19:37.459181461Z" level=info msg="shim disconnected" id=b0537f7ac967cf3305085b1a1b6a1d468b44a1925874f58d187bbb9d155aaead namespace=k8s.io Nov 8 01:19:37.459248 containerd[1823]: time="2025-11-08T01:19:37.459217936Z" level=warning msg="cleaning up after shim disconnected" id=b0537f7ac967cf3305085b1a1b6a1d468b44a1925874f58d187bbb9d155aaead namespace=k8s.io Nov 8 01:19:37.459248 containerd[1823]: time="2025-11-08T01:19:37.459226336Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 8 01:19:38.348129 containerd[1823]: time="2025-11-08T01:19:38.348044360Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Nov 8 01:19:39.283528 kubelet[3076]: E1108 01:19:39.283389 3076 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rhnl5" podUID="b57117d3-8237-4f8a-aa85-534eb9568949" Nov 8 01:19:40.468844 containerd[1823]: time="2025-11-08T01:19:40.468795009Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 01:19:40.469080 containerd[1823]: time="2025-11-08T01:19:40.469024644Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859" Nov 8 01:19:40.469396 containerd[1823]: time="2025-11-08T01:19:40.469357003Z" level=info msg="ImageCreate event 
name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 01:19:40.470342 containerd[1823]: time="2025-11-08T01:19:40.470302943Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 01:19:40.470752 containerd[1823]: time="2025-11-08T01:19:40.470710365Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 2.122590493s" Nov 8 01:19:40.470752 containerd[1823]: time="2025-11-08T01:19:40.470727511Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Nov 8 01:19:40.472183 containerd[1823]: time="2025-11-08T01:19:40.472166790Z" level=info msg="CreateContainer within sandbox \"8e0d3b2f15473c7fbdf86f56a1ddc17ada55d003954571ed1e30259342227a83\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Nov 8 01:19:40.476948 containerd[1823]: time="2025-11-08T01:19:40.476897805Z" level=info msg="CreateContainer within sandbox \"8e0d3b2f15473c7fbdf86f56a1ddc17ada55d003954571ed1e30259342227a83\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"3c6b6a1ed1b1307057f51746eb5e81942e78b656307cd648d927fb43bdaa179a\"" Nov 8 01:19:40.477154 containerd[1823]: time="2025-11-08T01:19:40.477140541Z" level=info msg="StartContainer for \"3c6b6a1ed1b1307057f51746eb5e81942e78b656307cd648d927fb43bdaa179a\"" Nov 8 01:19:40.523944 systemd[1]: Started 
cri-containerd-3c6b6a1ed1b1307057f51746eb5e81942e78b656307cd648d927fb43bdaa179a.scope - libcontainer container 3c6b6a1ed1b1307057f51746eb5e81942e78b656307cd648d927fb43bdaa179a. Nov 8 01:19:40.583784 containerd[1823]: time="2025-11-08T01:19:40.583748113Z" level=info msg="StartContainer for \"3c6b6a1ed1b1307057f51746eb5e81942e78b656307cd648d927fb43bdaa179a\" returns successfully" Nov 8 01:19:41.171794 systemd[1]: cri-containerd-3c6b6a1ed1b1307057f51746eb5e81942e78b656307cd648d927fb43bdaa179a.scope: Deactivated successfully. Nov 8 01:19:41.180152 kubelet[3076]: I1108 01:19:41.180134 3076 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Nov 8 01:19:41.182845 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3c6b6a1ed1b1307057f51746eb5e81942e78b656307cd648d927fb43bdaa179a-rootfs.mount: Deactivated successfully. Nov 8 01:19:41.197389 systemd[1]: Created slice kubepods-besteffort-pode679c708_5dc2_455f_8f76_0a5b47442761.slice - libcontainer container kubepods-besteffort-pode679c708_5dc2_455f_8f76_0a5b47442761.slice. Nov 8 01:19:41.200534 systemd[1]: Created slice kubepods-besteffort-pod52c8a02c_ccf4_4fe7_85ad_567be91064d6.slice - libcontainer container kubepods-besteffort-pod52c8a02c_ccf4_4fe7_85ad_567be91064d6.slice. Nov 8 01:19:41.204140 systemd[1]: Created slice kubepods-burstable-pod94b31311_6cc0_4ac6_9640_c851d1b5747b.slice - libcontainer container kubepods-burstable-pod94b31311_6cc0_4ac6_9640_c851d1b5747b.slice. Nov 8 01:19:41.208139 systemd[1]: Created slice kubepods-besteffort-pod905477ed_861d_42e3_890a_431b3428dc2e.slice - libcontainer container kubepods-besteffort-pod905477ed_861d_42e3_890a_431b3428dc2e.slice. Nov 8 01:19:41.211270 systemd[1]: Created slice kubepods-besteffort-podbb14ec11_f019_40a0_9b63_589cf025cfb4.slice - libcontainer container kubepods-besteffort-podbb14ec11_f019_40a0_9b63_589cf025cfb4.slice. 
Nov 8 01:19:41.214302 systemd[1]: Created slice kubepods-burstable-pod4cca2fc5_207d_4bfe_9520_a30e2cf67473.slice - libcontainer container kubepods-burstable-pod4cca2fc5_207d_4bfe_9520_a30e2cf67473.slice. Nov 8 01:19:41.217421 systemd[1]: Created slice kubepods-besteffort-pod71f5fc0d_399b_4a93_8104_f8dd3ea1c5df.slice - libcontainer container kubepods-besteffort-pod71f5fc0d_399b_4a93_8104_f8dd3ea1c5df.slice. Nov 8 01:19:41.264787 kubelet[3076]: I1108 01:19:41.264703 3076 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t8jnb\" (UniqueName: \"kubernetes.io/projected/71f5fc0d-399b-4a93-8104-f8dd3ea1c5df-kube-api-access-t8jnb\") pod \"goldmane-666569f655-dmnhl\" (UID: \"71f5fc0d-399b-4a93-8104-f8dd3ea1c5df\") " pod="calico-system/goldmane-666569f655-dmnhl" Nov 8 01:19:41.264787 kubelet[3076]: I1108 01:19:41.264763 3076 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e679c708-5dc2-455f-8f76-0a5b47442761-calico-apiserver-certs\") pod \"calico-apiserver-d6d97687b-82v26\" (UID: \"e679c708-5dc2-455f-8f76-0a5b47442761\") " pod="calico-apiserver/calico-apiserver-d6d97687b-82v26" Nov 8 01:19:41.264787 kubelet[3076]: I1108 01:19:41.264790 3076 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/52c8a02c-ccf4-4fe7-85ad-567be91064d6-whisker-ca-bundle\") pod \"whisker-5b656c9f86-xhj4p\" (UID: \"52c8a02c-ccf4-4fe7-85ad-567be91064d6\") " pod="calico-system/whisker-5b656c9f86-xhj4p" Nov 8 01:19:41.265076 kubelet[3076]: I1108 01:19:41.264814 3076 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5jc8\" (UniqueName: \"kubernetes.io/projected/905477ed-861d-42e3-890a-431b3428dc2e-kube-api-access-z5jc8\") pod \"calico-kube-controllers-7c65d4465b-6mb2v\" 
(UID: \"905477ed-861d-42e3-890a-431b3428dc2e\") " pod="calico-system/calico-kube-controllers-7c65d4465b-6mb2v" Nov 8 01:19:41.265076 kubelet[3076]: I1108 01:19:41.264838 3076 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ws2x2\" (UniqueName: \"kubernetes.io/projected/e679c708-5dc2-455f-8f76-0a5b47442761-kube-api-access-ws2x2\") pod \"calico-apiserver-d6d97687b-82v26\" (UID: \"e679c708-5dc2-455f-8f76-0a5b47442761\") " pod="calico-apiserver/calico-apiserver-d6d97687b-82v26" Nov 8 01:19:41.265076 kubelet[3076]: I1108 01:19:41.264864 3076 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/94b31311-6cc0-4ac6-9640-c851d1b5747b-config-volume\") pod \"coredns-668d6bf9bc-8s8cb\" (UID: \"94b31311-6cc0-4ac6-9640-c851d1b5747b\") " pod="kube-system/coredns-668d6bf9bc-8s8cb" Nov 8 01:19:41.265076 kubelet[3076]: I1108 01:19:41.264888 3076 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78br2\" (UniqueName: \"kubernetes.io/projected/4cca2fc5-207d-4bfe-9520-a30e2cf67473-kube-api-access-78br2\") pod \"coredns-668d6bf9bc-k4vsk\" (UID: \"4cca2fc5-207d-4bfe-9520-a30e2cf67473\") " pod="kube-system/coredns-668d6bf9bc-k4vsk" Nov 8 01:19:41.265076 kubelet[3076]: I1108 01:19:41.264912 3076 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwb4j\" (UniqueName: \"kubernetes.io/projected/52c8a02c-ccf4-4fe7-85ad-567be91064d6-kube-api-access-mwb4j\") pod \"whisker-5b656c9f86-xhj4p\" (UID: \"52c8a02c-ccf4-4fe7-85ad-567be91064d6\") " pod="calico-system/whisker-5b656c9f86-xhj4p" Nov 8 01:19:41.265304 kubelet[3076]: I1108 01:19:41.264935 3076 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: 
\"kubernetes.io/secret/bb14ec11-f019-40a0-9b63-589cf025cfb4-calico-apiserver-certs\") pod \"calico-apiserver-d6d97687b-lt4rt\" (UID: \"bb14ec11-f019-40a0-9b63-589cf025cfb4\") " pod="calico-apiserver/calico-apiserver-d6d97687b-lt4rt" Nov 8 01:19:41.265304 kubelet[3076]: I1108 01:19:41.264992 3076 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n5755\" (UniqueName: \"kubernetes.io/projected/bb14ec11-f019-40a0-9b63-589cf025cfb4-kube-api-access-n5755\") pod \"calico-apiserver-d6d97687b-lt4rt\" (UID: \"bb14ec11-f019-40a0-9b63-589cf025cfb4\") " pod="calico-apiserver/calico-apiserver-d6d97687b-lt4rt" Nov 8 01:19:41.265304 kubelet[3076]: I1108 01:19:41.265044 3076 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4cca2fc5-207d-4bfe-9520-a30e2cf67473-config-volume\") pod \"coredns-668d6bf9bc-k4vsk\" (UID: \"4cca2fc5-207d-4bfe-9520-a30e2cf67473\") " pod="kube-system/coredns-668d6bf9bc-k4vsk" Nov 8 01:19:41.265304 kubelet[3076]: I1108 01:19:41.265074 3076 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/905477ed-861d-42e3-890a-431b3428dc2e-tigera-ca-bundle\") pod \"calico-kube-controllers-7c65d4465b-6mb2v\" (UID: \"905477ed-861d-42e3-890a-431b3428dc2e\") " pod="calico-system/calico-kube-controllers-7c65d4465b-6mb2v" Nov 8 01:19:41.265304 kubelet[3076]: I1108 01:19:41.265108 3076 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zp2q\" (UniqueName: \"kubernetes.io/projected/94b31311-6cc0-4ac6-9640-c851d1b5747b-kube-api-access-9zp2q\") pod \"coredns-668d6bf9bc-8s8cb\" (UID: \"94b31311-6cc0-4ac6-9640-c851d1b5747b\") " pod="kube-system/coredns-668d6bf9bc-8s8cb" Nov 8 01:19:41.265546 kubelet[3076]: I1108 01:19:41.265135 3076 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/52c8a02c-ccf4-4fe7-85ad-567be91064d6-whisker-backend-key-pair\") pod \"whisker-5b656c9f86-xhj4p\" (UID: \"52c8a02c-ccf4-4fe7-85ad-567be91064d6\") " pod="calico-system/whisker-5b656c9f86-xhj4p" Nov 8 01:19:41.265546 kubelet[3076]: I1108 01:19:41.265173 3076 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/71f5fc0d-399b-4a93-8104-f8dd3ea1c5df-goldmane-key-pair\") pod \"goldmane-666569f655-dmnhl\" (UID: \"71f5fc0d-399b-4a93-8104-f8dd3ea1c5df\") " pod="calico-system/goldmane-666569f655-dmnhl" Nov 8 01:19:41.265546 kubelet[3076]: I1108 01:19:41.265197 3076 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71f5fc0d-399b-4a93-8104-f8dd3ea1c5df-config\") pod \"goldmane-666569f655-dmnhl\" (UID: \"71f5fc0d-399b-4a93-8104-f8dd3ea1c5df\") " pod="calico-system/goldmane-666569f655-dmnhl" Nov 8 01:19:41.265546 kubelet[3076]: I1108 01:19:41.265226 3076 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/71f5fc0d-399b-4a93-8104-f8dd3ea1c5df-goldmane-ca-bundle\") pod \"goldmane-666569f655-dmnhl\" (UID: \"71f5fc0d-399b-4a93-8104-f8dd3ea1c5df\") " pod="calico-system/goldmane-666569f655-dmnhl" Nov 8 01:19:41.298262 systemd[1]: Created slice kubepods-besteffort-podb57117d3_8237_4f8a_aa85_534eb9568949.slice - libcontainer container kubepods-besteffort-podb57117d3_8237_4f8a_aa85_534eb9568949.slice. 
Nov 8 01:19:41.303883 containerd[1823]: time="2025-11-08T01:19:41.303790004Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rhnl5,Uid:b57117d3-8237-4f8a-aa85-534eb9568949,Namespace:calico-system,Attempt:0,}" Nov 8 01:19:41.500725 containerd[1823]: time="2025-11-08T01:19:41.500527711Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d6d97687b-82v26,Uid:e679c708-5dc2-455f-8f76-0a5b47442761,Namespace:calico-apiserver,Attempt:0,}" Nov 8 01:19:41.503530 containerd[1823]: time="2025-11-08T01:19:41.503463275Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5b656c9f86-xhj4p,Uid:52c8a02c-ccf4-4fe7-85ad-567be91064d6,Namespace:calico-system,Attempt:0,}" Nov 8 01:19:41.508179 containerd[1823]: time="2025-11-08T01:19:41.508089947Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-8s8cb,Uid:94b31311-6cc0-4ac6-9640-c851d1b5747b,Namespace:kube-system,Attempt:0,}" Nov 8 01:19:41.510912 containerd[1823]: time="2025-11-08T01:19:41.510876497Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7c65d4465b-6mb2v,Uid:905477ed-861d-42e3-890a-431b3428dc2e,Namespace:calico-system,Attempt:0,}" Nov 8 01:19:41.513358 containerd[1823]: time="2025-11-08T01:19:41.513329855Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d6d97687b-lt4rt,Uid:bb14ec11-f019-40a0-9b63-589cf025cfb4,Namespace:calico-apiserver,Attempt:0,}" Nov 8 01:19:41.516752 containerd[1823]: time="2025-11-08T01:19:41.516740659Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-k4vsk,Uid:4cca2fc5-207d-4bfe-9520-a30e2cf67473,Namespace:kube-system,Attempt:0,}" Nov 8 01:19:41.519284 containerd[1823]: time="2025-11-08T01:19:41.519272535Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-dmnhl,Uid:71f5fc0d-399b-4a93-8104-f8dd3ea1c5df,Namespace:calico-system,Attempt:0,}" Nov 8 01:19:41.558674 containerd[1823]: 
time="2025-11-08T01:19:41.558569556Z" level=info msg="shim disconnected" id=3c6b6a1ed1b1307057f51746eb5e81942e78b656307cd648d927fb43bdaa179a namespace=k8s.io Nov 8 01:19:41.558674 containerd[1823]: time="2025-11-08T01:19:41.558647814Z" level=warning msg="cleaning up after shim disconnected" id=3c6b6a1ed1b1307057f51746eb5e81942e78b656307cd648d927fb43bdaa179a namespace=k8s.io Nov 8 01:19:41.558674 containerd[1823]: time="2025-11-08T01:19:41.558653812Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 8 01:19:41.599512 containerd[1823]: time="2025-11-08T01:19:41.599432439Z" level=error msg="Failed to destroy network for sandbox \"c23ccb3f481f7ac2cb70e2a185fddcffcc01a9e22b20d93516dd42cd04bf7e4e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 01:19:41.599776 containerd[1823]: time="2025-11-08T01:19:41.599754137Z" level=error msg="encountered an error cleaning up failed sandbox \"c23ccb3f481f7ac2cb70e2a185fddcffcc01a9e22b20d93516dd42cd04bf7e4e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 01:19:41.599819 containerd[1823]: time="2025-11-08T01:19:41.599802491Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rhnl5,Uid:b57117d3-8237-4f8a-aa85-534eb9568949,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c23ccb3f481f7ac2cb70e2a185fddcffcc01a9e22b20d93516dd42cd04bf7e4e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 01:19:41.600020 kubelet[3076]: E1108 01:19:41.599988 3076 log.go:32] "RunPodSandbox from runtime service 
failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c23ccb3f481f7ac2cb70e2a185fddcffcc01a9e22b20d93516dd42cd04bf7e4e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 01:19:41.600094 kubelet[3076]: E1108 01:19:41.600054 3076 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c23ccb3f481f7ac2cb70e2a185fddcffcc01a9e22b20d93516dd42cd04bf7e4e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-rhnl5" Nov 8 01:19:41.600094 kubelet[3076]: E1108 01:19:41.600072 3076 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c23ccb3f481f7ac2cb70e2a185fddcffcc01a9e22b20d93516dd42cd04bf7e4e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-rhnl5" Nov 8 01:19:41.600141 kubelet[3076]: E1108 01:19:41.600102 3076 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-rhnl5_calico-system(b57117d3-8237-4f8a-aa85-534eb9568949)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-rhnl5_calico-system(b57117d3-8237-4f8a-aa85-534eb9568949)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c23ccb3f481f7ac2cb70e2a185fddcffcc01a9e22b20d93516dd42cd04bf7e4e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="calico-system/csi-node-driver-rhnl5" podUID="b57117d3-8237-4f8a-aa85-534eb9568949" Nov 8 01:19:41.606143 containerd[1823]: time="2025-11-08T01:19:41.606108475Z" level=error msg="Failed to destroy network for sandbox \"d69af856e8d666ad0a915f1771cea05f0f3849c007e6a6a72c76b07367eb6938\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 01:19:41.606397 containerd[1823]: time="2025-11-08T01:19:41.606377609Z" level=error msg="encountered an error cleaning up failed sandbox \"d69af856e8d666ad0a915f1771cea05f0f3849c007e6a6a72c76b07367eb6938\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 01:19:41.606441 containerd[1823]: time="2025-11-08T01:19:41.606424833Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d6d97687b-82v26,Uid:e679c708-5dc2-455f-8f76-0a5b47442761,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d69af856e8d666ad0a915f1771cea05f0f3849c007e6a6a72c76b07367eb6938\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 01:19:41.606506 containerd[1823]: time="2025-11-08T01:19:41.606482274Z" level=error msg="Failed to destroy network for sandbox \"1cfd32b089cc048b4b8713dc0807cbb25825434bad6866a627652e142b11e21f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 01:19:41.606617 kubelet[3076]: E1108 01:19:41.606595 3076 log.go:32] "RunPodSandbox from runtime service 
failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d69af856e8d666ad0a915f1771cea05f0f3849c007e6a6a72c76b07367eb6938\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 01:19:41.606660 kubelet[3076]: E1108 01:19:41.606635 3076 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d69af856e8d666ad0a915f1771cea05f0f3849c007e6a6a72c76b07367eb6938\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-d6d97687b-82v26" Nov 8 01:19:41.606660 kubelet[3076]: E1108 01:19:41.606651 3076 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d69af856e8d666ad0a915f1771cea05f0f3849c007e6a6a72c76b07367eb6938\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-d6d97687b-82v26" Nov 8 01:19:41.606700 kubelet[3076]: E1108 01:19:41.606681 3076 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-d6d97687b-82v26_calico-apiserver(e679c708-5dc2-455f-8f76-0a5b47442761)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-d6d97687b-82v26_calico-apiserver(e679c708-5dc2-455f-8f76-0a5b47442761)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d69af856e8d666ad0a915f1771cea05f0f3849c007e6a6a72c76b07367eb6938\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-d6d97687b-82v26" podUID="e679c708-5dc2-455f-8f76-0a5b47442761" Nov 8 01:19:41.606740 containerd[1823]: time="2025-11-08T01:19:41.606651580Z" level=error msg="encountered an error cleaning up failed sandbox \"1cfd32b089cc048b4b8713dc0807cbb25825434bad6866a627652e142b11e21f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 01:19:41.606740 containerd[1823]: time="2025-11-08T01:19:41.606681193Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-8s8cb,Uid:94b31311-6cc0-4ac6-9640-c851d1b5747b,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1cfd32b089cc048b4b8713dc0807cbb25825434bad6866a627652e142b11e21f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 01:19:41.606794 kubelet[3076]: E1108 01:19:41.606776 3076 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1cfd32b089cc048b4b8713dc0807cbb25825434bad6866a627652e142b11e21f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 01:19:41.606815 kubelet[3076]: E1108 01:19:41.606804 3076 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1cfd32b089cc048b4b8713dc0807cbb25825434bad6866a627652e142b11e21f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container 
is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-8s8cb" Nov 8 01:19:41.606836 kubelet[3076]: E1108 01:19:41.606820 3076 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1cfd32b089cc048b4b8713dc0807cbb25825434bad6866a627652e142b11e21f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-8s8cb" Nov 8 01:19:41.606857 kubelet[3076]: E1108 01:19:41.606841 3076 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-8s8cb_kube-system(94b31311-6cc0-4ac6-9640-c851d1b5747b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-8s8cb_kube-system(94b31311-6cc0-4ac6-9640-c851d1b5747b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1cfd32b089cc048b4b8713dc0807cbb25825434bad6866a627652e142b11e21f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-8s8cb" podUID="94b31311-6cc0-4ac6-9640-c851d1b5747b" Nov 8 01:19:41.609116 containerd[1823]: time="2025-11-08T01:19:41.609057779Z" level=error msg="Failed to destroy network for sandbox \"d5d77b94f607cd110b5cd8da19a9750bd5391dd749052477e1157e7186dea50a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 01:19:41.609371 containerd[1823]: time="2025-11-08T01:19:41.609351511Z" level=error msg="encountered an error cleaning up failed sandbox \"d5d77b94f607cd110b5cd8da19a9750bd5391dd749052477e1157e7186dea50a\", marking sandbox state 
as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 01:19:41.609421 containerd[1823]: time="2025-11-08T01:19:41.609393316Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-dmnhl,Uid:71f5fc0d-399b-4a93-8104-f8dd3ea1c5df,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d5d77b94f607cd110b5cd8da19a9750bd5391dd749052477e1157e7186dea50a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 01:19:41.609495 containerd[1823]: time="2025-11-08T01:19:41.609429097Z" level=error msg="Failed to destroy network for sandbox \"52967611b1523d15183c72b35fd1d746943d53d20100623e1972f13c4d687ba8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 01:19:41.609614 kubelet[3076]: E1108 01:19:41.609563 3076 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d5d77b94f607cd110b5cd8da19a9750bd5391dd749052477e1157e7186dea50a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 01:19:41.609614 kubelet[3076]: E1108 01:19:41.609606 3076 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d5d77b94f607cd110b5cd8da19a9750bd5391dd749052477e1157e7186dea50a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-dmnhl" Nov 8 01:19:41.609668 kubelet[3076]: E1108 01:19:41.609626 3076 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d5d77b94f607cd110b5cd8da19a9750bd5391dd749052477e1157e7186dea50a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-dmnhl" Nov 8 01:19:41.609689 containerd[1823]: time="2025-11-08T01:19:41.609636029Z" level=error msg="encountered an error cleaning up failed sandbox \"52967611b1523d15183c72b35fd1d746943d53d20100623e1972f13c4d687ba8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 01:19:41.609689 containerd[1823]: time="2025-11-08T01:19:41.609660011Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7c65d4465b-6mb2v,Uid:905477ed-861d-42e3-890a-431b3428dc2e,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"52967611b1523d15183c72b35fd1d746943d53d20100623e1972f13c4d687ba8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 01:19:41.609741 kubelet[3076]: E1108 01:19:41.609658 3076 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-dmnhl_calico-system(71f5fc0d-399b-4a93-8104-f8dd3ea1c5df)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-dmnhl_calico-system(71f5fc0d-399b-4a93-8104-f8dd3ea1c5df)\\\": rpc error: code = Unknown desc = failed 
to setup network for sandbox \\\"d5d77b94f607cd110b5cd8da19a9750bd5391dd749052477e1157e7186dea50a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-dmnhl" podUID="71f5fc0d-399b-4a93-8104-f8dd3ea1c5df" Nov 8 01:19:41.609775 kubelet[3076]: E1108 01:19:41.609734 3076 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"52967611b1523d15183c72b35fd1d746943d53d20100623e1972f13c4d687ba8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 01:19:41.609775 kubelet[3076]: E1108 01:19:41.609762 3076 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"52967611b1523d15183c72b35fd1d746943d53d20100623e1972f13c4d687ba8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7c65d4465b-6mb2v" Nov 8 01:19:41.609812 kubelet[3076]: E1108 01:19:41.609776 3076 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"52967611b1523d15183c72b35fd1d746943d53d20100623e1972f13c4d687ba8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7c65d4465b-6mb2v" Nov 8 01:19:41.609834 kubelet[3076]: E1108 01:19:41.609803 3076 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"calico-kube-controllers-7c65d4465b-6mb2v_calico-system(905477ed-861d-42e3-890a-431b3428dc2e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7c65d4465b-6mb2v_calico-system(905477ed-861d-42e3-890a-431b3428dc2e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"52967611b1523d15183c72b35fd1d746943d53d20100623e1972f13c4d687ba8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7c65d4465b-6mb2v" podUID="905477ed-861d-42e3-890a-431b3428dc2e" Nov 8 01:19:41.610046 containerd[1823]: time="2025-11-08T01:19:41.610028739Z" level=error msg="Failed to destroy network for sandbox \"71140fbb207e0bcb7b820b7bb714e68bc014a20f50dbfb3ccb1046f552d5ce56\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 01:19:41.610186 containerd[1823]: time="2025-11-08T01:19:41.610173161Z" level=error msg="encountered an error cleaning up failed sandbox \"71140fbb207e0bcb7b820b7bb714e68bc014a20f50dbfb3ccb1046f552d5ce56\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 01:19:41.610211 containerd[1823]: time="2025-11-08T01:19:41.610199663Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-k4vsk,Uid:4cca2fc5-207d-4bfe-9520-a30e2cf67473,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"71140fbb207e0bcb7b820b7bb714e68bc014a20f50dbfb3ccb1046f552d5ce56\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that 
the calico/node container is running and has mounted /var/lib/calico/" Nov 8 01:19:41.610284 kubelet[3076]: E1108 01:19:41.610270 3076 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"71140fbb207e0bcb7b820b7bb714e68bc014a20f50dbfb3ccb1046f552d5ce56\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 01:19:41.610308 kubelet[3076]: E1108 01:19:41.610291 3076 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"71140fbb207e0bcb7b820b7bb714e68bc014a20f50dbfb3ccb1046f552d5ce56\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-k4vsk" Nov 8 01:19:41.610308 kubelet[3076]: E1108 01:19:41.610301 3076 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"71140fbb207e0bcb7b820b7bb714e68bc014a20f50dbfb3ccb1046f552d5ce56\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-k4vsk" Nov 8 01:19:41.610347 kubelet[3076]: E1108 01:19:41.610323 3076 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-k4vsk_kube-system(4cca2fc5-207d-4bfe-9520-a30e2cf67473)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-k4vsk_kube-system(4cca2fc5-207d-4bfe-9520-a30e2cf67473)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"71140fbb207e0bcb7b820b7bb714e68bc014a20f50dbfb3ccb1046f552d5ce56\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-k4vsk" podUID="4cca2fc5-207d-4bfe-9520-a30e2cf67473" Nov 8 01:19:41.611780 containerd[1823]: time="2025-11-08T01:19:41.611763755Z" level=error msg="Failed to destroy network for sandbox \"dd101ccb60de921100f49f9134c9a5775fdaa63bd9547c0d8989f67406b41943\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 01:19:41.611950 containerd[1823]: time="2025-11-08T01:19:41.611938124Z" level=error msg="encountered an error cleaning up failed sandbox \"dd101ccb60de921100f49f9134c9a5775fdaa63bd9547c0d8989f67406b41943\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 01:19:41.611974 containerd[1823]: time="2025-11-08T01:19:41.611959974Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5b656c9f86-xhj4p,Uid:52c8a02c-ccf4-4fe7-85ad-567be91064d6,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"dd101ccb60de921100f49f9134c9a5775fdaa63bd9547c0d8989f67406b41943\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 01:19:41.612071 kubelet[3076]: E1108 01:19:41.612032 3076 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dd101ccb60de921100f49f9134c9a5775fdaa63bd9547c0d8989f67406b41943\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 01:19:41.612071 kubelet[3076]: E1108 01:19:41.612052 3076 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dd101ccb60de921100f49f9134c9a5775fdaa63bd9547c0d8989f67406b41943\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5b656c9f86-xhj4p" Nov 8 01:19:41.612071 kubelet[3076]: E1108 01:19:41.612065 3076 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dd101ccb60de921100f49f9134c9a5775fdaa63bd9547c0d8989f67406b41943\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5b656c9f86-xhj4p" Nov 8 01:19:41.612133 kubelet[3076]: E1108 01:19:41.612083 3076 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-5b656c9f86-xhj4p_calico-system(52c8a02c-ccf4-4fe7-85ad-567be91064d6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-5b656c9f86-xhj4p_calico-system(52c8a02c-ccf4-4fe7-85ad-567be91064d6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dd101ccb60de921100f49f9134c9a5775fdaa63bd9547c0d8989f67406b41943\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5b656c9f86-xhj4p" podUID="52c8a02c-ccf4-4fe7-85ad-567be91064d6" Nov 8 01:19:41.614037 containerd[1823]: 
time="2025-11-08T01:19:41.614022243Z" level=error msg="Failed to destroy network for sandbox \"621fe7e83147e59f8fe5d4e9291a93dbc64708c159488560d565d88b39042f30\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 01:19:41.614178 containerd[1823]: time="2025-11-08T01:19:41.614163287Z" level=error msg="encountered an error cleaning up failed sandbox \"621fe7e83147e59f8fe5d4e9291a93dbc64708c159488560d565d88b39042f30\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 01:19:41.614200 containerd[1823]: time="2025-11-08T01:19:41.614189180Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d6d97687b-lt4rt,Uid:bb14ec11-f019-40a0-9b63-589cf025cfb4,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"621fe7e83147e59f8fe5d4e9291a93dbc64708c159488560d565d88b39042f30\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 01:19:41.614277 kubelet[3076]: E1108 01:19:41.614261 3076 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"621fe7e83147e59f8fe5d4e9291a93dbc64708c159488560d565d88b39042f30\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 01:19:41.614304 kubelet[3076]: E1108 01:19:41.614289 3076 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"621fe7e83147e59f8fe5d4e9291a93dbc64708c159488560d565d88b39042f30\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-d6d97687b-lt4rt" Nov 8 01:19:41.614304 kubelet[3076]: E1108 01:19:41.614300 3076 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"621fe7e83147e59f8fe5d4e9291a93dbc64708c159488560d565d88b39042f30\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-d6d97687b-lt4rt" Nov 8 01:19:41.614341 kubelet[3076]: E1108 01:19:41.614321 3076 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-d6d97687b-lt4rt_calico-apiserver(bb14ec11-f019-40a0-9b63-589cf025cfb4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-d6d97687b-lt4rt_calico-apiserver(bb14ec11-f019-40a0-9b63-589cf025cfb4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"621fe7e83147e59f8fe5d4e9291a93dbc64708c159488560d565d88b39042f30\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-d6d97687b-lt4rt" podUID="bb14ec11-f019-40a0-9b63-589cf025cfb4" Nov 8 01:19:42.358593 kubelet[3076]: I1108 01:19:42.358418 3076 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c23ccb3f481f7ac2cb70e2a185fddcffcc01a9e22b20d93516dd42cd04bf7e4e" Nov 8 01:19:42.359967 containerd[1823]: time="2025-11-08T01:19:42.359888931Z" level=info msg="StopPodSandbox for 
\"c23ccb3f481f7ac2cb70e2a185fddcffcc01a9e22b20d93516dd42cd04bf7e4e\"" Nov 8 01:19:42.360355 containerd[1823]: time="2025-11-08T01:19:42.360302178Z" level=info msg="Ensure that sandbox c23ccb3f481f7ac2cb70e2a185fddcffcc01a9e22b20d93516dd42cd04bf7e4e in task-service has been cleanup successfully" Nov 8 01:19:42.364907 kubelet[3076]: I1108 01:19:42.364837 3076 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="52967611b1523d15183c72b35fd1d746943d53d20100623e1972f13c4d687ba8" Nov 8 01:19:42.365266 containerd[1823]: time="2025-11-08T01:19:42.364900668Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Nov 8 01:19:42.365770 containerd[1823]: time="2025-11-08T01:19:42.365755887Z" level=info msg="StopPodSandbox for \"52967611b1523d15183c72b35fd1d746943d53d20100623e1972f13c4d687ba8\"" Nov 8 01:19:42.365916 containerd[1823]: time="2025-11-08T01:19:42.365875645Z" level=info msg="Ensure that sandbox 52967611b1523d15183c72b35fd1d746943d53d20100623e1972f13c4d687ba8 in task-service has been cleanup successfully" Nov 8 01:19:42.366049 kubelet[3076]: I1108 01:19:42.366037 3076 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="621fe7e83147e59f8fe5d4e9291a93dbc64708c159488560d565d88b39042f30" Nov 8 01:19:42.366302 containerd[1823]: time="2025-11-08T01:19:42.366290743Z" level=info msg="StopPodSandbox for \"621fe7e83147e59f8fe5d4e9291a93dbc64708c159488560d565d88b39042f30\"" Nov 8 01:19:42.366404 containerd[1823]: time="2025-11-08T01:19:42.366391934Z" level=info msg="Ensure that sandbox 621fe7e83147e59f8fe5d4e9291a93dbc64708c159488560d565d88b39042f30 in task-service has been cleanup successfully" Nov 8 01:19:42.366582 kubelet[3076]: I1108 01:19:42.366567 3076 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1cfd32b089cc048b4b8713dc0807cbb25825434bad6866a627652e142b11e21f" Nov 8 01:19:42.366794 containerd[1823]: time="2025-11-08T01:19:42.366779854Z" level=info 
msg="StopPodSandbox for \"1cfd32b089cc048b4b8713dc0807cbb25825434bad6866a627652e142b11e21f\"" Nov 8 01:19:42.366884 containerd[1823]: time="2025-11-08T01:19:42.366873202Z" level=info msg="Ensure that sandbox 1cfd32b089cc048b4b8713dc0807cbb25825434bad6866a627652e142b11e21f in task-service has been cleanup successfully" Nov 8 01:19:42.367131 kubelet[3076]: I1108 01:19:42.367117 3076 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dd101ccb60de921100f49f9134c9a5775fdaa63bd9547c0d8989f67406b41943" Nov 8 01:19:42.367437 containerd[1823]: time="2025-11-08T01:19:42.367415463Z" level=info msg="StopPodSandbox for \"dd101ccb60de921100f49f9134c9a5775fdaa63bd9547c0d8989f67406b41943\"" Nov 8 01:19:42.367588 containerd[1823]: time="2025-11-08T01:19:42.367571931Z" level=info msg="Ensure that sandbox dd101ccb60de921100f49f9134c9a5775fdaa63bd9547c0d8989f67406b41943 in task-service has been cleanup successfully" Nov 8 01:19:42.367630 kubelet[3076]: I1108 01:19:42.367620 3076 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d69af856e8d666ad0a915f1771cea05f0f3849c007e6a6a72c76b07367eb6938" Nov 8 01:19:42.367934 containerd[1823]: time="2025-11-08T01:19:42.367915931Z" level=info msg="StopPodSandbox for \"d69af856e8d666ad0a915f1771cea05f0f3849c007e6a6a72c76b07367eb6938\"" Nov 8 01:19:42.368056 containerd[1823]: time="2025-11-08T01:19:42.368044883Z" level=info msg="Ensure that sandbox d69af856e8d666ad0a915f1771cea05f0f3849c007e6a6a72c76b07367eb6938 in task-service has been cleanup successfully" Nov 8 01:19:42.368267 kubelet[3076]: I1108 01:19:42.368256 3076 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d5d77b94f607cd110b5cd8da19a9750bd5391dd749052477e1157e7186dea50a" Nov 8 01:19:42.368620 containerd[1823]: time="2025-11-08T01:19:42.368604287Z" level=info msg="StopPodSandbox for \"d5d77b94f607cd110b5cd8da19a9750bd5391dd749052477e1157e7186dea50a\"" Nov 8 01:19:42.368804 
containerd[1823]: time="2025-11-08T01:19:42.368724868Z" level=info msg="Ensure that sandbox d5d77b94f607cd110b5cd8da19a9750bd5391dd749052477e1157e7186dea50a in task-service has been cleanup successfully" Nov 8 01:19:42.368876 kubelet[3076]: I1108 01:19:42.368861 3076 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="71140fbb207e0bcb7b820b7bb714e68bc014a20f50dbfb3ccb1046f552d5ce56" Nov 8 01:19:42.369265 containerd[1823]: time="2025-11-08T01:19:42.369236952Z" level=info msg="StopPodSandbox for \"71140fbb207e0bcb7b820b7bb714e68bc014a20f50dbfb3ccb1046f552d5ce56\"" Nov 8 01:19:42.369434 containerd[1823]: time="2025-11-08T01:19:42.369421528Z" level=info msg="Ensure that sandbox 71140fbb207e0bcb7b820b7bb714e68bc014a20f50dbfb3ccb1046f552d5ce56 in task-service has been cleanup successfully" Nov 8 01:19:42.383302 containerd[1823]: time="2025-11-08T01:19:42.383271840Z" level=error msg="StopPodSandbox for \"621fe7e83147e59f8fe5d4e9291a93dbc64708c159488560d565d88b39042f30\" failed" error="failed to destroy network for sandbox \"621fe7e83147e59f8fe5d4e9291a93dbc64708c159488560d565d88b39042f30\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 01:19:42.383395 containerd[1823]: time="2025-11-08T01:19:42.383272405Z" level=error msg="StopPodSandbox for \"52967611b1523d15183c72b35fd1d746943d53d20100623e1972f13c4d687ba8\" failed" error="failed to destroy network for sandbox \"52967611b1523d15183c72b35fd1d746943d53d20100623e1972f13c4d687ba8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 01:19:42.383395 containerd[1823]: time="2025-11-08T01:19:42.383272494Z" level=error msg="StopPodSandbox for \"c23ccb3f481f7ac2cb70e2a185fddcffcc01a9e22b20d93516dd42cd04bf7e4e\" failed" 
error="failed to destroy network for sandbox \"c23ccb3f481f7ac2cb70e2a185fddcffcc01a9e22b20d93516dd42cd04bf7e4e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 01:19:42.383436 kubelet[3076]: E1108 01:19:42.383410 3076 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"621fe7e83147e59f8fe5d4e9291a93dbc64708c159488560d565d88b39042f30\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="621fe7e83147e59f8fe5d4e9291a93dbc64708c159488560d565d88b39042f30" Nov 8 01:19:42.383470 kubelet[3076]: E1108 01:19:42.383438 3076 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"52967611b1523d15183c72b35fd1d746943d53d20100623e1972f13c4d687ba8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="52967611b1523d15183c72b35fd1d746943d53d20100623e1972f13c4d687ba8" Nov 8 01:19:42.383470 kubelet[3076]: E1108 01:19:42.383450 3076 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c23ccb3f481f7ac2cb70e2a185fddcffcc01a9e22b20d93516dd42cd04bf7e4e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c23ccb3f481f7ac2cb70e2a185fddcffcc01a9e22b20d93516dd42cd04bf7e4e" Nov 8 01:19:42.383551 containerd[1823]: time="2025-11-08T01:19:42.383430216Z" level=error msg="StopPodSandbox for 
\"dd101ccb60de921100f49f9134c9a5775fdaa63bd9547c0d8989f67406b41943\" failed" error="failed to destroy network for sandbox \"dd101ccb60de921100f49f9134c9a5775fdaa63bd9547c0d8989f67406b41943\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 01:19:42.383551 containerd[1823]: time="2025-11-08T01:19:42.383508392Z" level=error msg="StopPodSandbox for \"d69af856e8d666ad0a915f1771cea05f0f3849c007e6a6a72c76b07367eb6938\" failed" error="failed to destroy network for sandbox \"d69af856e8d666ad0a915f1771cea05f0f3849c007e6a6a72c76b07367eb6938\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 01:19:42.383588 kubelet[3076]: E1108 01:19:42.383455 3076 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"621fe7e83147e59f8fe5d4e9291a93dbc64708c159488560d565d88b39042f30"} Nov 8 01:19:42.383588 kubelet[3076]: E1108 01:19:42.383504 3076 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"dd101ccb60de921100f49f9134c9a5775fdaa63bd9547c0d8989f67406b41943\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="dd101ccb60de921100f49f9134c9a5775fdaa63bd9547c0d8989f67406b41943" Nov 8 01:19:42.383588 kubelet[3076]: E1108 01:19:42.383509 3076 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"bb14ec11-f019-40a0-9b63-589cf025cfb4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"621fe7e83147e59f8fe5d4e9291a93dbc64708c159488560d565d88b39042f30\\\": plugin 
type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 01:19:42.383588 kubelet[3076]: E1108 01:19:42.383518 3076 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"dd101ccb60de921100f49f9134c9a5775fdaa63bd9547c0d8989f67406b41943"} Nov 8 01:19:42.383588 kubelet[3076]: E1108 01:19:42.383524 3076 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"bb14ec11-f019-40a0-9b63-589cf025cfb4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"621fe7e83147e59f8fe5d4e9291a93dbc64708c159488560d565d88b39042f30\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-d6d97687b-lt4rt" podUID="bb14ec11-f019-40a0-9b63-589cf025cfb4" Nov 8 01:19:42.383708 kubelet[3076]: E1108 01:19:42.383463 3076 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"52967611b1523d15183c72b35fd1d746943d53d20100623e1972f13c4d687ba8"} Nov 8 01:19:42.383708 kubelet[3076]: E1108 01:19:42.383476 3076 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c23ccb3f481f7ac2cb70e2a185fddcffcc01a9e22b20d93516dd42cd04bf7e4e"} Nov 8 01:19:42.383708 kubelet[3076]: E1108 01:19:42.383547 3076 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b57117d3-8237-4f8a-aa85-534eb9568949\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c23ccb3f481f7ac2cb70e2a185fddcffcc01a9e22b20d93516dd42cd04bf7e4e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that 
the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 01:19:42.383708 kubelet[3076]: E1108 01:19:42.383549 3076 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"905477ed-861d-42e3-890a-431b3428dc2e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"52967611b1523d15183c72b35fd1d746943d53d20100623e1972f13c4d687ba8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 01:19:42.383708 kubelet[3076]: E1108 01:19:42.383557 3076 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b57117d3-8237-4f8a-aa85-534eb9568949\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c23ccb3f481f7ac2cb70e2a185fddcffcc01a9e22b20d93516dd42cd04bf7e4e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-rhnl5" podUID="b57117d3-8237-4f8a-aa85-534eb9568949" Nov 8 01:19:42.383822 kubelet[3076]: E1108 01:19:42.383560 3076 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"905477ed-861d-42e3-890a-431b3428dc2e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"52967611b1523d15183c72b35fd1d746943d53d20100623e1972f13c4d687ba8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7c65d4465b-6mb2v" podUID="905477ed-861d-42e3-890a-431b3428dc2e" Nov 8 01:19:42.383822 kubelet[3076]: E1108 01:19:42.383534 3076 
kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"52c8a02c-ccf4-4fe7-85ad-567be91064d6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"dd101ccb60de921100f49f9134c9a5775fdaa63bd9547c0d8989f67406b41943\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 01:19:42.383822 kubelet[3076]: E1108 01:19:42.383578 3076 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"52c8a02c-ccf4-4fe7-85ad-567be91064d6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"dd101ccb60de921100f49f9134c9a5775fdaa63bd9547c0d8989f67406b41943\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5b656c9f86-xhj4p" podUID="52c8a02c-ccf4-4fe7-85ad-567be91064d6" Nov 8 01:19:42.383907 kubelet[3076]: E1108 01:19:42.383589 3076 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d69af856e8d666ad0a915f1771cea05f0f3849c007e6a6a72c76b07367eb6938\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d69af856e8d666ad0a915f1771cea05f0f3849c007e6a6a72c76b07367eb6938" Nov 8 01:19:42.383907 kubelet[3076]: E1108 01:19:42.383604 3076 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d69af856e8d666ad0a915f1771cea05f0f3849c007e6a6a72c76b07367eb6938"} Nov 8 01:19:42.383907 kubelet[3076]: E1108 01:19:42.383616 3076 kuberuntime_manager.go:1146] 
"killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e679c708-5dc2-455f-8f76-0a5b47442761\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d69af856e8d666ad0a915f1771cea05f0f3849c007e6a6a72c76b07367eb6938\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 01:19:42.383907 kubelet[3076]: E1108 01:19:42.383626 3076 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e679c708-5dc2-455f-8f76-0a5b47442761\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d69af856e8d666ad0a915f1771cea05f0f3849c007e6a6a72c76b07367eb6938\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-d6d97687b-82v26" podUID="e679c708-5dc2-455f-8f76-0a5b47442761" Nov 8 01:19:42.385544 containerd[1823]: time="2025-11-08T01:19:42.385509507Z" level=error msg="StopPodSandbox for \"1cfd32b089cc048b4b8713dc0807cbb25825434bad6866a627652e142b11e21f\" failed" error="failed to destroy network for sandbox \"1cfd32b089cc048b4b8713dc0807cbb25825434bad6866a627652e142b11e21f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 01:19:42.385686 kubelet[3076]: E1108 01:19:42.385639 3076 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1cfd32b089cc048b4b8713dc0807cbb25825434bad6866a627652e142b11e21f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1cfd32b089cc048b4b8713dc0807cbb25825434bad6866a627652e142b11e21f" Nov 8 01:19:42.385713 kubelet[3076]: E1108 01:19:42.385691 3076 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"1cfd32b089cc048b4b8713dc0807cbb25825434bad6866a627652e142b11e21f"} Nov 8 01:19:42.385713 kubelet[3076]: E1108 01:19:42.385707 3076 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"94b31311-6cc0-4ac6-9640-c851d1b5747b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1cfd32b089cc048b4b8713dc0807cbb25825434bad6866a627652e142b11e21f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 01:19:42.385778 kubelet[3076]: E1108 01:19:42.385718 3076 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"94b31311-6cc0-4ac6-9640-c851d1b5747b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1cfd32b089cc048b4b8713dc0807cbb25825434bad6866a627652e142b11e21f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-8s8cb" podUID="94b31311-6cc0-4ac6-9640-c851d1b5747b" Nov 8 01:19:42.385967 containerd[1823]: time="2025-11-08T01:19:42.385950808Z" level=error msg="StopPodSandbox for \"d5d77b94f607cd110b5cd8da19a9750bd5391dd749052477e1157e7186dea50a\" failed" error="failed to destroy network for sandbox \"d5d77b94f607cd110b5cd8da19a9750bd5391dd749052477e1157e7186dea50a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Nov 8 01:19:42.386024 kubelet[3076]: E1108 01:19:42.386012 3076 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d5d77b94f607cd110b5cd8da19a9750bd5391dd749052477e1157e7186dea50a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d5d77b94f607cd110b5cd8da19a9750bd5391dd749052477e1157e7186dea50a" Nov 8 01:19:42.386045 kubelet[3076]: E1108 01:19:42.386030 3076 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d5d77b94f607cd110b5cd8da19a9750bd5391dd749052477e1157e7186dea50a"} Nov 8 01:19:42.386065 kubelet[3076]: E1108 01:19:42.386045 3076 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"71f5fc0d-399b-4a93-8104-f8dd3ea1c5df\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d5d77b94f607cd110b5cd8da19a9750bd5391dd749052477e1157e7186dea50a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 01:19:42.386065 kubelet[3076]: E1108 01:19:42.386055 3076 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"71f5fc0d-399b-4a93-8104-f8dd3ea1c5df\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d5d77b94f607cd110b5cd8da19a9750bd5391dd749052477e1157e7186dea50a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-dmnhl" 
podUID="71f5fc0d-399b-4a93-8104-f8dd3ea1c5df" Nov 8 01:19:42.386259 containerd[1823]: time="2025-11-08T01:19:42.386245896Z" level=error msg="StopPodSandbox for \"71140fbb207e0bcb7b820b7bb714e68bc014a20f50dbfb3ccb1046f552d5ce56\" failed" error="failed to destroy network for sandbox \"71140fbb207e0bcb7b820b7bb714e68bc014a20f50dbfb3ccb1046f552d5ce56\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 01:19:42.386334 kubelet[3076]: E1108 01:19:42.386326 3076 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"71140fbb207e0bcb7b820b7bb714e68bc014a20f50dbfb3ccb1046f552d5ce56\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="71140fbb207e0bcb7b820b7bb714e68bc014a20f50dbfb3ccb1046f552d5ce56" Nov 8 01:19:42.386357 kubelet[3076]: E1108 01:19:42.386337 3076 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"71140fbb207e0bcb7b820b7bb714e68bc014a20f50dbfb3ccb1046f552d5ce56"} Nov 8 01:19:42.386357 kubelet[3076]: E1108 01:19:42.386348 3076 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4cca2fc5-207d-4bfe-9520-a30e2cf67473\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"71140fbb207e0bcb7b820b7bb714e68bc014a20f50dbfb3ccb1046f552d5ce56\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 01:19:42.386403 kubelet[3076]: E1108 01:19:42.386358 3076 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for 
\"4cca2fc5-207d-4bfe-9520-a30e2cf67473\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"71140fbb207e0bcb7b820b7bb714e68bc014a20f50dbfb3ccb1046f552d5ce56\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-k4vsk" podUID="4cca2fc5-207d-4bfe-9520-a30e2cf67473" Nov 8 01:19:42.487249 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d5d77b94f607cd110b5cd8da19a9750bd5391dd749052477e1157e7186dea50a-shm.mount: Deactivated successfully. Nov 8 01:19:42.487684 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-71140fbb207e0bcb7b820b7bb714e68bc014a20f50dbfb3ccb1046f552d5ce56-shm.mount: Deactivated successfully. Nov 8 01:19:42.488001 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-621fe7e83147e59f8fe5d4e9291a93dbc64708c159488560d565d88b39042f30-shm.mount: Deactivated successfully. Nov 8 01:19:42.488301 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1cfd32b089cc048b4b8713dc0807cbb25825434bad6866a627652e142b11e21f-shm.mount: Deactivated successfully. Nov 8 01:19:42.488622 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-dd101ccb60de921100f49f9134c9a5775fdaa63bd9547c0d8989f67406b41943-shm.mount: Deactivated successfully. Nov 8 01:19:42.488928 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d69af856e8d666ad0a915f1771cea05f0f3849c007e6a6a72c76b07367eb6938-shm.mount: Deactivated successfully. Nov 8 01:19:42.489244 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-52967611b1523d15183c72b35fd1d746943d53d20100623e1972f13c4d687ba8-shm.mount: Deactivated successfully. Nov 8 01:19:42.489593 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c23ccb3f481f7ac2cb70e2a185fddcffcc01a9e22b20d93516dd42cd04bf7e4e-shm.mount: Deactivated successfully. 
Nov 8 01:19:45.546029 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1894815166.mount: Deactivated successfully. Nov 8 01:19:45.562649 containerd[1823]: time="2025-11-08T01:19:45.562603016Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 01:19:45.562809 containerd[1823]: time="2025-11-08T01:19:45.562784935Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Nov 8 01:19:45.563155 containerd[1823]: time="2025-11-08T01:19:45.563143282Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 01:19:45.563987 containerd[1823]: time="2025-11-08T01:19:45.563971222Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 01:19:45.564354 containerd[1823]: time="2025-11-08T01:19:45.564336538Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 3.199371496s" Nov 8 01:19:45.564354 containerd[1823]: time="2025-11-08T01:19:45.564352043Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Nov 8 01:19:45.567668 containerd[1823]: time="2025-11-08T01:19:45.567651421Z" level=info msg="CreateContainer within sandbox \"8e0d3b2f15473c7fbdf86f56a1ddc17ada55d003954571ed1e30259342227a83\" for container 
&ContainerMetadata{Name:calico-node,Attempt:0,}" Nov 8 01:19:45.589974 containerd[1823]: time="2025-11-08T01:19:45.589927612Z" level=info msg="CreateContainer within sandbox \"8e0d3b2f15473c7fbdf86f56a1ddc17ada55d003954571ed1e30259342227a83\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"08b49b61efe95b5062ab56bfd82b43f9a9405f9a45590dd9d51728e3c22de233\"" Nov 8 01:19:45.590305 containerd[1823]: time="2025-11-08T01:19:45.590254516Z" level=info msg="StartContainer for \"08b49b61efe95b5062ab56bfd82b43f9a9405f9a45590dd9d51728e3c22de233\"" Nov 8 01:19:45.625882 systemd[1]: Started cri-containerd-08b49b61efe95b5062ab56bfd82b43f9a9405f9a45590dd9d51728e3c22de233.scope - libcontainer container 08b49b61efe95b5062ab56bfd82b43f9a9405f9a45590dd9d51728e3c22de233. Nov 8 01:19:45.654838 containerd[1823]: time="2025-11-08T01:19:45.654810135Z" level=info msg="StartContainer for \"08b49b61efe95b5062ab56bfd82b43f9a9405f9a45590dd9d51728e3c22de233\" returns successfully" Nov 8 01:19:45.723377 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Nov 8 01:19:45.723430 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Nov 8 01:19:45.765049 containerd[1823]: time="2025-11-08T01:19:45.765019479Z" level=info msg="StopPodSandbox for \"dd101ccb60de921100f49f9134c9a5775fdaa63bd9547c0d8989f67406b41943\"" Nov 8 01:19:45.808643 containerd[1823]: 2025-11-08 01:19:45.790 [INFO][4613] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="dd101ccb60de921100f49f9134c9a5775fdaa63bd9547c0d8989f67406b41943" Nov 8 01:19:45.808643 containerd[1823]: 2025-11-08 01:19:45.790 [INFO][4613] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="dd101ccb60de921100f49f9134c9a5775fdaa63bd9547c0d8989f67406b41943" iface="eth0" netns="/var/run/netns/cni-06f7ea7d-592a-fa6d-38f9-2f399bde9cd5" Nov 8 01:19:45.808643 containerd[1823]: 2025-11-08 01:19:45.790 [INFO][4613] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="dd101ccb60de921100f49f9134c9a5775fdaa63bd9547c0d8989f67406b41943" iface="eth0" netns="/var/run/netns/cni-06f7ea7d-592a-fa6d-38f9-2f399bde9cd5" Nov 8 01:19:45.808643 containerd[1823]: 2025-11-08 01:19:45.790 [INFO][4613] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="dd101ccb60de921100f49f9134c9a5775fdaa63bd9547c0d8989f67406b41943" iface="eth0" netns="/var/run/netns/cni-06f7ea7d-592a-fa6d-38f9-2f399bde9cd5" Nov 8 01:19:45.808643 containerd[1823]: 2025-11-08 01:19:45.790 [INFO][4613] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="dd101ccb60de921100f49f9134c9a5775fdaa63bd9547c0d8989f67406b41943" Nov 8 01:19:45.808643 containerd[1823]: 2025-11-08 01:19:45.790 [INFO][4613] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dd101ccb60de921100f49f9134c9a5775fdaa63bd9547c0d8989f67406b41943" Nov 8 01:19:45.808643 containerd[1823]: 2025-11-08 01:19:45.801 [INFO][4640] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="dd101ccb60de921100f49f9134c9a5775fdaa63bd9547c0d8989f67406b41943" HandleID="k8s-pod-network.dd101ccb60de921100f49f9134c9a5775fdaa63bd9547c0d8989f67406b41943" Workload="ci--4081.3.6--n--8acfe54808-k8s-whisker--5b656c9f86--xhj4p-eth0" Nov 8 01:19:45.808643 containerd[1823]: 2025-11-08 01:19:45.801 [INFO][4640] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 01:19:45.808643 containerd[1823]: 2025-11-08 01:19:45.801 [INFO][4640] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 01:19:45.808643 containerd[1823]: 2025-11-08 01:19:45.804 [WARNING][4640] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="dd101ccb60de921100f49f9134c9a5775fdaa63bd9547c0d8989f67406b41943" HandleID="k8s-pod-network.dd101ccb60de921100f49f9134c9a5775fdaa63bd9547c0d8989f67406b41943" Workload="ci--4081.3.6--n--8acfe54808-k8s-whisker--5b656c9f86--xhj4p-eth0" Nov 8 01:19:45.808643 containerd[1823]: 2025-11-08 01:19:45.804 [INFO][4640] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="dd101ccb60de921100f49f9134c9a5775fdaa63bd9547c0d8989f67406b41943" HandleID="k8s-pod-network.dd101ccb60de921100f49f9134c9a5775fdaa63bd9547c0d8989f67406b41943" Workload="ci--4081.3.6--n--8acfe54808-k8s-whisker--5b656c9f86--xhj4p-eth0" Nov 8 01:19:45.808643 containerd[1823]: 2025-11-08 01:19:45.805 [INFO][4640] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 01:19:45.808643 containerd[1823]: 2025-11-08 01:19:45.807 [INFO][4613] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="dd101ccb60de921100f49f9134c9a5775fdaa63bd9547c0d8989f67406b41943" Nov 8 01:19:45.808945 containerd[1823]: time="2025-11-08T01:19:45.808700355Z" level=info msg="TearDown network for sandbox \"dd101ccb60de921100f49f9134c9a5775fdaa63bd9547c0d8989f67406b41943\" successfully" Nov 8 01:19:45.808945 containerd[1823]: time="2025-11-08T01:19:45.808722655Z" level=info msg="StopPodSandbox for \"dd101ccb60de921100f49f9134c9a5775fdaa63bd9547c0d8989f67406b41943\" returns successfully" Nov 8 01:19:45.899744 kubelet[3076]: I1108 01:19:45.899640 3076 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mwb4j\" (UniqueName: \"kubernetes.io/projected/52c8a02c-ccf4-4fe7-85ad-567be91064d6-kube-api-access-mwb4j\") pod \"52c8a02c-ccf4-4fe7-85ad-567be91064d6\" (UID: \"52c8a02c-ccf4-4fe7-85ad-567be91064d6\") " Nov 8 01:19:45.899744 kubelet[3076]: I1108 01:19:45.899752 3076 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: 
\"kubernetes.io/secret/52c8a02c-ccf4-4fe7-85ad-567be91064d6-whisker-backend-key-pair\") pod \"52c8a02c-ccf4-4fe7-85ad-567be91064d6\" (UID: \"52c8a02c-ccf4-4fe7-85ad-567be91064d6\") " Nov 8 01:19:45.900688 kubelet[3076]: I1108 01:19:45.899834 3076 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/52c8a02c-ccf4-4fe7-85ad-567be91064d6-whisker-ca-bundle\") pod \"52c8a02c-ccf4-4fe7-85ad-567be91064d6\" (UID: \"52c8a02c-ccf4-4fe7-85ad-567be91064d6\") " Nov 8 01:19:45.900806 kubelet[3076]: I1108 01:19:45.900704 3076 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/52c8a02c-ccf4-4fe7-85ad-567be91064d6-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "52c8a02c-ccf4-4fe7-85ad-567be91064d6" (UID: "52c8a02c-ccf4-4fe7-85ad-567be91064d6"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 8 01:19:45.905353 kubelet[3076]: I1108 01:19:45.905251 3076 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52c8a02c-ccf4-4fe7-85ad-567be91064d6-kube-api-access-mwb4j" (OuterVolumeSpecName: "kube-api-access-mwb4j") pod "52c8a02c-ccf4-4fe7-85ad-567be91064d6" (UID: "52c8a02c-ccf4-4fe7-85ad-567be91064d6"). InnerVolumeSpecName "kube-api-access-mwb4j". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 8 01:19:45.905353 kubelet[3076]: I1108 01:19:45.905285 3076 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52c8a02c-ccf4-4fe7-85ad-567be91064d6-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "52c8a02c-ccf4-4fe7-85ad-567be91064d6" (UID: "52c8a02c-ccf4-4fe7-85ad-567be91064d6"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 8 01:19:46.000643 kubelet[3076]: I1108 01:19:46.000541 3076 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/52c8a02c-ccf4-4fe7-85ad-567be91064d6-whisker-backend-key-pair\") on node \"ci-4081.3.6-n-8acfe54808\" DevicePath \"\"" Nov 8 01:19:46.000643 kubelet[3076]: I1108 01:19:46.000607 3076 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/52c8a02c-ccf4-4fe7-85ad-567be91064d6-whisker-ca-bundle\") on node \"ci-4081.3.6-n-8acfe54808\" DevicePath \"\"" Nov 8 01:19:46.000643 kubelet[3076]: I1108 01:19:46.000637 3076 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mwb4j\" (UniqueName: \"kubernetes.io/projected/52c8a02c-ccf4-4fe7-85ad-567be91064d6-kube-api-access-mwb4j\") on node \"ci-4081.3.6-n-8acfe54808\" DevicePath \"\"" Nov 8 01:19:46.389785 systemd[1]: Removed slice kubepods-besteffort-pod52c8a02c_ccf4_4fe7_85ad_567be91064d6.slice - libcontainer container kubepods-besteffort-pod52c8a02c_ccf4_4fe7_85ad_567be91064d6.slice. Nov 8 01:19:46.397349 kubelet[3076]: I1108 01:19:46.397318 3076 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-956wh" podStartSLOduration=2.108412934 podStartE2EDuration="14.397306782s" podCreationTimestamp="2025-11-08 01:19:32 +0000 UTC" firstStartedPulling="2025-11-08 01:19:33.275838004 +0000 UTC m=+18.040443863" lastFinishedPulling="2025-11-08 01:19:45.564731856 +0000 UTC m=+30.329337711" observedRunningTime="2025-11-08 01:19:46.396893056 +0000 UTC m=+31.161498918" watchObservedRunningTime="2025-11-08 01:19:46.397306782 +0000 UTC m=+31.161912637" Nov 8 01:19:46.418329 systemd[1]: Created slice kubepods-besteffort-podd30571cb_e438_453f_8b20_303beb52e470.slice - libcontainer container kubepods-besteffort-podd30571cb_e438_453f_8b20_303beb52e470.slice. 
Nov 8 01:19:46.504514 kubelet[3076]: I1108 01:19:46.504335 3076 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d30571cb-e438-453f-8b20-303beb52e470-whisker-ca-bundle\") pod \"whisker-6668fb7f88-v6px7\" (UID: \"d30571cb-e438-453f-8b20-303beb52e470\") " pod="calico-system/whisker-6668fb7f88-v6px7" Nov 8 01:19:46.504514 kubelet[3076]: I1108 01:19:46.504457 3076 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kt56d\" (UniqueName: \"kubernetes.io/projected/d30571cb-e438-453f-8b20-303beb52e470-kube-api-access-kt56d\") pod \"whisker-6668fb7f88-v6px7\" (UID: \"d30571cb-e438-453f-8b20-303beb52e470\") " pod="calico-system/whisker-6668fb7f88-v6px7" Nov 8 01:19:46.504885 kubelet[3076]: I1108 01:19:46.504574 3076 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/d30571cb-e438-453f-8b20-303beb52e470-whisker-backend-key-pair\") pod \"whisker-6668fb7f88-v6px7\" (UID: \"d30571cb-e438-453f-8b20-303beb52e470\") " pod="calico-system/whisker-6668fb7f88-v6px7" Nov 8 01:19:46.562714 systemd[1]: run-netns-cni\x2d06f7ea7d\x2d592a\x2dfa6d\x2d38f9\x2d2f399bde9cd5.mount: Deactivated successfully. Nov 8 01:19:46.563073 systemd[1]: var-lib-kubelet-pods-52c8a02c\x2dccf4\x2d4fe7\x2d85ad\x2d567be91064d6-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dmwb4j.mount: Deactivated successfully. Nov 8 01:19:46.563394 systemd[1]: var-lib-kubelet-pods-52c8a02c\x2dccf4\x2d4fe7\x2d85ad\x2d567be91064d6-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Nov 8 01:19:46.721848 containerd[1823]: time="2025-11-08T01:19:46.721659489Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6668fb7f88-v6px7,Uid:d30571cb-e438-453f-8b20-303beb52e470,Namespace:calico-system,Attempt:0,}" Nov 8 01:19:46.871942 systemd-networkd[1517]: cali9438ed01242: Link UP Nov 8 01:19:46.872136 systemd-networkd[1517]: cali9438ed01242: Gained carrier Nov 8 01:19:46.882158 containerd[1823]: 2025-11-08 01:19:46.737 [INFO][4676] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 8 01:19:46.882158 containerd[1823]: 2025-11-08 01:19:46.744 [INFO][4676] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--8acfe54808-k8s-whisker--6668fb7f88--v6px7-eth0 whisker-6668fb7f88- calico-system d30571cb-e438-453f-8b20-303beb52e470 861 0 2025-11-08 01:19:46 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:6668fb7f88 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081.3.6-n-8acfe54808 whisker-6668fb7f88-v6px7 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali9438ed01242 [] [] }} ContainerID="e9bb67234d917f8edc7c417de64f78ba1889f56fd0bfa289c1850e1d93855e0f" Namespace="calico-system" Pod="whisker-6668fb7f88-v6px7" WorkloadEndpoint="ci--4081.3.6--n--8acfe54808-k8s-whisker--6668fb7f88--v6px7-" Nov 8 01:19:46.882158 containerd[1823]: 2025-11-08 01:19:46.745 [INFO][4676] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e9bb67234d917f8edc7c417de64f78ba1889f56fd0bfa289c1850e1d93855e0f" Namespace="calico-system" Pod="whisker-6668fb7f88-v6px7" WorkloadEndpoint="ci--4081.3.6--n--8acfe54808-k8s-whisker--6668fb7f88--v6px7-eth0" Nov 8 01:19:46.882158 containerd[1823]: 2025-11-08 01:19:46.798 [INFO][4696] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="e9bb67234d917f8edc7c417de64f78ba1889f56fd0bfa289c1850e1d93855e0f" HandleID="k8s-pod-network.e9bb67234d917f8edc7c417de64f78ba1889f56fd0bfa289c1850e1d93855e0f" Workload="ci--4081.3.6--n--8acfe54808-k8s-whisker--6668fb7f88--v6px7-eth0" Nov 8 01:19:46.882158 containerd[1823]: 2025-11-08 01:19:46.799 [INFO][4696] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="e9bb67234d917f8edc7c417de64f78ba1889f56fd0bfa289c1850e1d93855e0f" HandleID="k8s-pod-network.e9bb67234d917f8edc7c417de64f78ba1889f56fd0bfa289c1850e1d93855e0f" Workload="ci--4081.3.6--n--8acfe54808-k8s-whisker--6668fb7f88--v6px7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000714530), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-8acfe54808", "pod":"whisker-6668fb7f88-v6px7", "timestamp":"2025-11-08 01:19:46.798883078 +0000 UTC"}, Hostname:"ci-4081.3.6-n-8acfe54808", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 01:19:46.882158 containerd[1823]: 2025-11-08 01:19:46.799 [INFO][4696] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 01:19:46.882158 containerd[1823]: 2025-11-08 01:19:46.799 [INFO][4696] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 01:19:46.882158 containerd[1823]: 2025-11-08 01:19:46.799 [INFO][4696] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-8acfe54808' Nov 8 01:19:46.882158 containerd[1823]: 2025-11-08 01:19:46.814 [INFO][4696] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e9bb67234d917f8edc7c417de64f78ba1889f56fd0bfa289c1850e1d93855e0f" host="ci-4081.3.6-n-8acfe54808" Nov 8 01:19:46.882158 containerd[1823]: 2025-11-08 01:19:46.823 [INFO][4696] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-8acfe54808" Nov 8 01:19:46.882158 containerd[1823]: 2025-11-08 01:19:46.832 [INFO][4696] ipam/ipam.go 511: Trying affinity for 192.168.24.64/26 host="ci-4081.3.6-n-8acfe54808" Nov 8 01:19:46.882158 containerd[1823]: 2025-11-08 01:19:46.836 [INFO][4696] ipam/ipam.go 158: Attempting to load block cidr=192.168.24.64/26 host="ci-4081.3.6-n-8acfe54808" Nov 8 01:19:46.882158 containerd[1823]: 2025-11-08 01:19:46.841 [INFO][4696] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.24.64/26 host="ci-4081.3.6-n-8acfe54808" Nov 8 01:19:46.882158 containerd[1823]: 2025-11-08 01:19:46.842 [INFO][4696] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.24.64/26 handle="k8s-pod-network.e9bb67234d917f8edc7c417de64f78ba1889f56fd0bfa289c1850e1d93855e0f" host="ci-4081.3.6-n-8acfe54808" Nov 8 01:19:46.882158 containerd[1823]: 2025-11-08 01:19:46.845 [INFO][4696] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.e9bb67234d917f8edc7c417de64f78ba1889f56fd0bfa289c1850e1d93855e0f Nov 8 01:19:46.882158 containerd[1823]: 2025-11-08 01:19:46.862 [INFO][4696] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.24.64/26 handle="k8s-pod-network.e9bb67234d917f8edc7c417de64f78ba1889f56fd0bfa289c1850e1d93855e0f" host="ci-4081.3.6-n-8acfe54808" Nov 8 01:19:46.882158 containerd[1823]: 2025-11-08 01:19:46.866 [INFO][4696] ipam/ipam.go 1262: Successfully claimed 
IPs: [192.168.24.65/26] block=192.168.24.64/26 handle="k8s-pod-network.e9bb67234d917f8edc7c417de64f78ba1889f56fd0bfa289c1850e1d93855e0f" host="ci-4081.3.6-n-8acfe54808" Nov 8 01:19:46.882158 containerd[1823]: 2025-11-08 01:19:46.866 [INFO][4696] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.24.65/26] handle="k8s-pod-network.e9bb67234d917f8edc7c417de64f78ba1889f56fd0bfa289c1850e1d93855e0f" host="ci-4081.3.6-n-8acfe54808" Nov 8 01:19:46.882158 containerd[1823]: 2025-11-08 01:19:46.866 [INFO][4696] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 01:19:46.882158 containerd[1823]: 2025-11-08 01:19:46.866 [INFO][4696] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.24.65/26] IPv6=[] ContainerID="e9bb67234d917f8edc7c417de64f78ba1889f56fd0bfa289c1850e1d93855e0f" HandleID="k8s-pod-network.e9bb67234d917f8edc7c417de64f78ba1889f56fd0bfa289c1850e1d93855e0f" Workload="ci--4081.3.6--n--8acfe54808-k8s-whisker--6668fb7f88--v6px7-eth0" Nov 8 01:19:46.882838 containerd[1823]: 2025-11-08 01:19:46.867 [INFO][4676] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e9bb67234d917f8edc7c417de64f78ba1889f56fd0bfa289c1850e1d93855e0f" Namespace="calico-system" Pod="whisker-6668fb7f88-v6px7" WorkloadEndpoint="ci--4081.3.6--n--8acfe54808-k8s-whisker--6668fb7f88--v6px7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--8acfe54808-k8s-whisker--6668fb7f88--v6px7-eth0", GenerateName:"whisker-6668fb7f88-", Namespace:"calico-system", SelfLink:"", UID:"d30571cb-e438-453f-8b20-303beb52e470", ResourceVersion:"861", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 1, 19, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6668fb7f88", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-8acfe54808", ContainerID:"", Pod:"whisker-6668fb7f88-v6px7", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.24.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali9438ed01242", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 01:19:46.882838 containerd[1823]: 2025-11-08 01:19:46.867 [INFO][4676] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.24.65/32] ContainerID="e9bb67234d917f8edc7c417de64f78ba1889f56fd0bfa289c1850e1d93855e0f" Namespace="calico-system" Pod="whisker-6668fb7f88-v6px7" WorkloadEndpoint="ci--4081.3.6--n--8acfe54808-k8s-whisker--6668fb7f88--v6px7-eth0" Nov 8 01:19:46.882838 containerd[1823]: 2025-11-08 01:19:46.867 [INFO][4676] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9438ed01242 ContainerID="e9bb67234d917f8edc7c417de64f78ba1889f56fd0bfa289c1850e1d93855e0f" Namespace="calico-system" Pod="whisker-6668fb7f88-v6px7" WorkloadEndpoint="ci--4081.3.6--n--8acfe54808-k8s-whisker--6668fb7f88--v6px7-eth0" Nov 8 01:19:46.882838 containerd[1823]: 2025-11-08 01:19:46.872 [INFO][4676] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e9bb67234d917f8edc7c417de64f78ba1889f56fd0bfa289c1850e1d93855e0f" Namespace="calico-system" Pod="whisker-6668fb7f88-v6px7" WorkloadEndpoint="ci--4081.3.6--n--8acfe54808-k8s-whisker--6668fb7f88--v6px7-eth0" Nov 8 01:19:46.882838 containerd[1823]: 2025-11-08 01:19:46.872 [INFO][4676] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID 
to endpoint ContainerID="e9bb67234d917f8edc7c417de64f78ba1889f56fd0bfa289c1850e1d93855e0f" Namespace="calico-system" Pod="whisker-6668fb7f88-v6px7" WorkloadEndpoint="ci--4081.3.6--n--8acfe54808-k8s-whisker--6668fb7f88--v6px7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--8acfe54808-k8s-whisker--6668fb7f88--v6px7-eth0", GenerateName:"whisker-6668fb7f88-", Namespace:"calico-system", SelfLink:"", UID:"d30571cb-e438-453f-8b20-303beb52e470", ResourceVersion:"861", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 1, 19, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6668fb7f88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-8acfe54808", ContainerID:"e9bb67234d917f8edc7c417de64f78ba1889f56fd0bfa289c1850e1d93855e0f", Pod:"whisker-6668fb7f88-v6px7", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.24.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali9438ed01242", MAC:"06:3f:05:0d:9a:ad", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 01:19:46.882838 containerd[1823]: 2025-11-08 01:19:46.879 [INFO][4676] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e9bb67234d917f8edc7c417de64f78ba1889f56fd0bfa289c1850e1d93855e0f" Namespace="calico-system" 
Pod="whisker-6668fb7f88-v6px7" WorkloadEndpoint="ci--4081.3.6--n--8acfe54808-k8s-whisker--6668fb7f88--v6px7-eth0" Nov 8 01:19:46.891713 containerd[1823]: time="2025-11-08T01:19:46.891671974Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 01:19:46.891713 containerd[1823]: time="2025-11-08T01:19:46.891704557Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 01:19:46.891713 containerd[1823]: time="2025-11-08T01:19:46.891712118Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 01:19:46.891861 containerd[1823]: time="2025-11-08T01:19:46.891756820Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 01:19:46.906933 systemd[1]: Started cri-containerd-e9bb67234d917f8edc7c417de64f78ba1889f56fd0bfa289c1850e1d93855e0f.scope - libcontainer container e9bb67234d917f8edc7c417de64f78ba1889f56fd0bfa289c1850e1d93855e0f. 
Nov 8 01:19:46.947810 containerd[1823]: time="2025-11-08T01:19:46.947780087Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6668fb7f88-v6px7,Uid:d30571cb-e438-453f-8b20-303beb52e470,Namespace:calico-system,Attempt:0,} returns sandbox id \"e9bb67234d917f8edc7c417de64f78ba1889f56fd0bfa289c1850e1d93855e0f\"" Nov 8 01:19:46.948521 containerd[1823]: time="2025-11-08T01:19:46.948509474Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 8 01:19:46.952479 kernel: bpftool[4910]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Nov 8 01:19:47.109755 systemd-networkd[1517]: vxlan.calico: Link UP Nov 8 01:19:47.109758 systemd-networkd[1517]: vxlan.calico: Gained carrier Nov 8 01:19:47.284189 kubelet[3076]: I1108 01:19:47.284169 3076 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="52c8a02c-ccf4-4fe7-85ad-567be91064d6" path="/var/lib/kubelet/pods/52c8a02c-ccf4-4fe7-85ad-567be91064d6/volumes" Nov 8 01:19:47.324438 containerd[1823]: time="2025-11-08T01:19:47.324414677Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 01:19:47.324911 containerd[1823]: time="2025-11-08T01:19:47.324893171Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 8 01:19:47.324964 containerd[1823]: time="2025-11-08T01:19:47.324942115Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 8 01:19:47.325121 kubelet[3076]: E1108 01:19:47.325089 3076 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 01:19:47.325154 kubelet[3076]: E1108 01:19:47.325131 3076 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 01:19:47.325295 kubelet[3076]: E1108 01:19:47.325277 3076 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:f499b8e984d248faa7e4f138b84f7ce2,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kt56d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessage
Policy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6668fb7f88-v6px7_calico-system(d30571cb-e438-453f-8b20-303beb52e470): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 8 01:19:47.327414 containerd[1823]: time="2025-11-08T01:19:47.327377447Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 8 01:19:47.384994 kubelet[3076]: I1108 01:19:47.384939 3076 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 8 01:19:47.700696 containerd[1823]: time="2025-11-08T01:19:47.700459389Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 01:19:47.701383 containerd[1823]: time="2025-11-08T01:19:47.701351643Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 8 01:19:47.701465 containerd[1823]: time="2025-11-08T01:19:47.701416501Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 8 01:19:47.701586 kubelet[3076]: E1108 01:19:47.701518 3076 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 
8 01:19:47.701629 kubelet[3076]: E1108 01:19:47.701587 3076 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 01:19:47.701729 kubelet[3076]: E1108 01:19:47.701652 3076 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kt56d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*f
alse,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6668fb7f88-v6px7_calico-system(d30571cb-e438-453f-8b20-303beb52e470): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 8 01:19:47.702833 kubelet[3076]: E1108 01:19:47.702795 3076 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6668fb7f88-v6px7" podUID="d30571cb-e438-453f-8b20-303beb52e470" Nov 8 01:19:48.391197 kubelet[3076]: E1108 01:19:48.391044 3076 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": 
failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6668fb7f88-v6px7" podUID="d30571cb-e438-453f-8b20-303beb52e470" Nov 8 01:19:48.849790 systemd-networkd[1517]: cali9438ed01242: Gained IPv6LL Nov 8 01:19:49.105813 systemd-networkd[1517]: vxlan.calico: Gained IPv6LL Nov 8 01:19:50.862539 kubelet[3076]: I1108 01:19:50.862369 3076 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 8 01:19:53.283007 containerd[1823]: time="2025-11-08T01:19:53.282895682Z" level=info msg="StopPodSandbox for \"621fe7e83147e59f8fe5d4e9291a93dbc64708c159488560d565d88b39042f30\"" Nov 8 01:19:53.321501 containerd[1823]: 2025-11-08 01:19:53.305 [INFO][5111] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="621fe7e83147e59f8fe5d4e9291a93dbc64708c159488560d565d88b39042f30" Nov 8 01:19:53.321501 containerd[1823]: 2025-11-08 01:19:53.305 [INFO][5111] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="621fe7e83147e59f8fe5d4e9291a93dbc64708c159488560d565d88b39042f30" iface="eth0" netns="/var/run/netns/cni-2a25ca27-b12f-aefe-7fd8-b86360de1099" Nov 8 01:19:53.321501 containerd[1823]: 2025-11-08 01:19:53.305 [INFO][5111] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="621fe7e83147e59f8fe5d4e9291a93dbc64708c159488560d565d88b39042f30" iface="eth0" netns="/var/run/netns/cni-2a25ca27-b12f-aefe-7fd8-b86360de1099" Nov 8 01:19:53.321501 containerd[1823]: 2025-11-08 01:19:53.305 [INFO][5111] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="621fe7e83147e59f8fe5d4e9291a93dbc64708c159488560d565d88b39042f30" iface="eth0" netns="/var/run/netns/cni-2a25ca27-b12f-aefe-7fd8-b86360de1099" Nov 8 01:19:53.321501 containerd[1823]: 2025-11-08 01:19:53.305 [INFO][5111] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="621fe7e83147e59f8fe5d4e9291a93dbc64708c159488560d565d88b39042f30" Nov 8 01:19:53.321501 containerd[1823]: 2025-11-08 01:19:53.305 [INFO][5111] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="621fe7e83147e59f8fe5d4e9291a93dbc64708c159488560d565d88b39042f30" Nov 8 01:19:53.321501 containerd[1823]: 2025-11-08 01:19:53.315 [INFO][5127] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="621fe7e83147e59f8fe5d4e9291a93dbc64708c159488560d565d88b39042f30" HandleID="k8s-pod-network.621fe7e83147e59f8fe5d4e9291a93dbc64708c159488560d565d88b39042f30" Workload="ci--4081.3.6--n--8acfe54808-k8s-calico--apiserver--d6d97687b--lt4rt-eth0" Nov 8 01:19:53.321501 containerd[1823]: 2025-11-08 01:19:53.315 [INFO][5127] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 01:19:53.321501 containerd[1823]: 2025-11-08 01:19:53.315 [INFO][5127] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 01:19:53.321501 containerd[1823]: 2025-11-08 01:19:53.319 [WARNING][5127] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="621fe7e83147e59f8fe5d4e9291a93dbc64708c159488560d565d88b39042f30" HandleID="k8s-pod-network.621fe7e83147e59f8fe5d4e9291a93dbc64708c159488560d565d88b39042f30" Workload="ci--4081.3.6--n--8acfe54808-k8s-calico--apiserver--d6d97687b--lt4rt-eth0" Nov 8 01:19:53.321501 containerd[1823]: 2025-11-08 01:19:53.319 [INFO][5127] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="621fe7e83147e59f8fe5d4e9291a93dbc64708c159488560d565d88b39042f30" HandleID="k8s-pod-network.621fe7e83147e59f8fe5d4e9291a93dbc64708c159488560d565d88b39042f30" Workload="ci--4081.3.6--n--8acfe54808-k8s-calico--apiserver--d6d97687b--lt4rt-eth0" Nov 8 01:19:53.321501 containerd[1823]: 2025-11-08 01:19:53.319 [INFO][5127] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 01:19:53.321501 containerd[1823]: 2025-11-08 01:19:53.320 [INFO][5111] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="621fe7e83147e59f8fe5d4e9291a93dbc64708c159488560d565d88b39042f30" Nov 8 01:19:53.321797 containerd[1823]: time="2025-11-08T01:19:53.321580795Z" level=info msg="TearDown network for sandbox \"621fe7e83147e59f8fe5d4e9291a93dbc64708c159488560d565d88b39042f30\" successfully" Nov 8 01:19:53.321797 containerd[1823]: time="2025-11-08T01:19:53.321596963Z" level=info msg="StopPodSandbox for \"621fe7e83147e59f8fe5d4e9291a93dbc64708c159488560d565d88b39042f30\" returns successfully" Nov 8 01:19:53.322014 containerd[1823]: time="2025-11-08T01:19:53.321974740Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d6d97687b-lt4rt,Uid:bb14ec11-f019-40a0-9b63-589cf025cfb4,Namespace:calico-apiserver,Attempt:1,}" Nov 8 01:19:53.323329 systemd[1]: run-netns-cni\x2d2a25ca27\x2db12f\x2daefe\x2d7fd8\x2db86360de1099.mount: Deactivated successfully. 
Nov 8 01:19:53.372622 systemd-networkd[1517]: calidd2f6e7c867: Link UP Nov 8 01:19:53.372774 systemd-networkd[1517]: calidd2f6e7c867: Gained carrier Nov 8 01:19:53.400162 containerd[1823]: 2025-11-08 01:19:53.341 [INFO][5143] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--8acfe54808-k8s-calico--apiserver--d6d97687b--lt4rt-eth0 calico-apiserver-d6d97687b- calico-apiserver bb14ec11-f019-40a0-9b63-589cf025cfb4 897 0 2025-11-08 01:19:29 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:d6d97687b projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.6-n-8acfe54808 calico-apiserver-d6d97687b-lt4rt eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calidd2f6e7c867 [] [] }} ContainerID="2a025f2d10717f6bb95583ab354eb820bd1e127364b7b72643f26790f3a43925" Namespace="calico-apiserver" Pod="calico-apiserver-d6d97687b-lt4rt" WorkloadEndpoint="ci--4081.3.6--n--8acfe54808-k8s-calico--apiserver--d6d97687b--lt4rt-" Nov 8 01:19:53.400162 containerd[1823]: 2025-11-08 01:19:53.341 [INFO][5143] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2a025f2d10717f6bb95583ab354eb820bd1e127364b7b72643f26790f3a43925" Namespace="calico-apiserver" Pod="calico-apiserver-d6d97687b-lt4rt" WorkloadEndpoint="ci--4081.3.6--n--8acfe54808-k8s-calico--apiserver--d6d97687b--lt4rt-eth0" Nov 8 01:19:53.400162 containerd[1823]: 2025-11-08 01:19:53.354 [INFO][5163] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2a025f2d10717f6bb95583ab354eb820bd1e127364b7b72643f26790f3a43925" HandleID="k8s-pod-network.2a025f2d10717f6bb95583ab354eb820bd1e127364b7b72643f26790f3a43925" Workload="ci--4081.3.6--n--8acfe54808-k8s-calico--apiserver--d6d97687b--lt4rt-eth0" Nov 8 01:19:53.400162 
containerd[1823]: 2025-11-08 01:19:53.354 [INFO][5163] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="2a025f2d10717f6bb95583ab354eb820bd1e127364b7b72643f26790f3a43925" HandleID="k8s-pod-network.2a025f2d10717f6bb95583ab354eb820bd1e127364b7b72643f26790f3a43925" Workload="ci--4081.3.6--n--8acfe54808-k8s-calico--apiserver--d6d97687b--lt4rt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002e7100), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.6-n-8acfe54808", "pod":"calico-apiserver-d6d97687b-lt4rt", "timestamp":"2025-11-08 01:19:53.35405161 +0000 UTC"}, Hostname:"ci-4081.3.6-n-8acfe54808", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 01:19:53.400162 containerd[1823]: 2025-11-08 01:19:53.354 [INFO][5163] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 01:19:53.400162 containerd[1823]: 2025-11-08 01:19:53.354 [INFO][5163] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 01:19:53.400162 containerd[1823]: 2025-11-08 01:19:53.354 [INFO][5163] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-8acfe54808' Nov 8 01:19:53.400162 containerd[1823]: 2025-11-08 01:19:53.357 [INFO][5163] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2a025f2d10717f6bb95583ab354eb820bd1e127364b7b72643f26790f3a43925" host="ci-4081.3.6-n-8acfe54808" Nov 8 01:19:53.400162 containerd[1823]: 2025-11-08 01:19:53.360 [INFO][5163] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-8acfe54808" Nov 8 01:19:53.400162 containerd[1823]: 2025-11-08 01:19:53.362 [INFO][5163] ipam/ipam.go 511: Trying affinity for 192.168.24.64/26 host="ci-4081.3.6-n-8acfe54808" Nov 8 01:19:53.400162 containerd[1823]: 2025-11-08 01:19:53.363 [INFO][5163] ipam/ipam.go 158: Attempting to load block cidr=192.168.24.64/26 host="ci-4081.3.6-n-8acfe54808" Nov 8 01:19:53.400162 containerd[1823]: 2025-11-08 01:19:53.364 [INFO][5163] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.24.64/26 host="ci-4081.3.6-n-8acfe54808" Nov 8 01:19:53.400162 containerd[1823]: 2025-11-08 01:19:53.364 [INFO][5163] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.24.64/26 handle="k8s-pod-network.2a025f2d10717f6bb95583ab354eb820bd1e127364b7b72643f26790f3a43925" host="ci-4081.3.6-n-8acfe54808" Nov 8 01:19:53.400162 containerd[1823]: 2025-11-08 01:19:53.365 [INFO][5163] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.2a025f2d10717f6bb95583ab354eb820bd1e127364b7b72643f26790f3a43925 Nov 8 01:19:53.400162 containerd[1823]: 2025-11-08 01:19:53.367 [INFO][5163] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.24.64/26 handle="k8s-pod-network.2a025f2d10717f6bb95583ab354eb820bd1e127364b7b72643f26790f3a43925" host="ci-4081.3.6-n-8acfe54808" Nov 8 01:19:53.400162 containerd[1823]: 2025-11-08 01:19:53.370 [INFO][5163] ipam/ipam.go 1262: Successfully claimed 
IPs: [192.168.24.66/26] block=192.168.24.64/26 handle="k8s-pod-network.2a025f2d10717f6bb95583ab354eb820bd1e127364b7b72643f26790f3a43925" host="ci-4081.3.6-n-8acfe54808" Nov 8 01:19:53.400162 containerd[1823]: 2025-11-08 01:19:53.370 [INFO][5163] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.24.66/26] handle="k8s-pod-network.2a025f2d10717f6bb95583ab354eb820bd1e127364b7b72643f26790f3a43925" host="ci-4081.3.6-n-8acfe54808" Nov 8 01:19:53.400162 containerd[1823]: 2025-11-08 01:19:53.370 [INFO][5163] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 01:19:53.400162 containerd[1823]: 2025-11-08 01:19:53.370 [INFO][5163] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.24.66/26] IPv6=[] ContainerID="2a025f2d10717f6bb95583ab354eb820bd1e127364b7b72643f26790f3a43925" HandleID="k8s-pod-network.2a025f2d10717f6bb95583ab354eb820bd1e127364b7b72643f26790f3a43925" Workload="ci--4081.3.6--n--8acfe54808-k8s-calico--apiserver--d6d97687b--lt4rt-eth0" Nov 8 01:19:53.402351 containerd[1823]: 2025-11-08 01:19:53.371 [INFO][5143] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2a025f2d10717f6bb95583ab354eb820bd1e127364b7b72643f26790f3a43925" Namespace="calico-apiserver" Pod="calico-apiserver-d6d97687b-lt4rt" WorkloadEndpoint="ci--4081.3.6--n--8acfe54808-k8s-calico--apiserver--d6d97687b--lt4rt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--8acfe54808-k8s-calico--apiserver--d6d97687b--lt4rt-eth0", GenerateName:"calico-apiserver-d6d97687b-", Namespace:"calico-apiserver", SelfLink:"", UID:"bb14ec11-f019-40a0-9b63-589cf025cfb4", ResourceVersion:"897", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 1, 19, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", 
"pod-template-hash":"d6d97687b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-8acfe54808", ContainerID:"", Pod:"calico-apiserver-d6d97687b-lt4rt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.24.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calidd2f6e7c867", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 01:19:53.402351 containerd[1823]: 2025-11-08 01:19:53.371 [INFO][5143] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.24.66/32] ContainerID="2a025f2d10717f6bb95583ab354eb820bd1e127364b7b72643f26790f3a43925" Namespace="calico-apiserver" Pod="calico-apiserver-d6d97687b-lt4rt" WorkloadEndpoint="ci--4081.3.6--n--8acfe54808-k8s-calico--apiserver--d6d97687b--lt4rt-eth0" Nov 8 01:19:53.402351 containerd[1823]: 2025-11-08 01:19:53.371 [INFO][5143] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidd2f6e7c867 ContainerID="2a025f2d10717f6bb95583ab354eb820bd1e127364b7b72643f26790f3a43925" Namespace="calico-apiserver" Pod="calico-apiserver-d6d97687b-lt4rt" WorkloadEndpoint="ci--4081.3.6--n--8acfe54808-k8s-calico--apiserver--d6d97687b--lt4rt-eth0" Nov 8 01:19:53.402351 containerd[1823]: 2025-11-08 01:19:53.372 [INFO][5143] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2a025f2d10717f6bb95583ab354eb820bd1e127364b7b72643f26790f3a43925" Namespace="calico-apiserver" Pod="calico-apiserver-d6d97687b-lt4rt" 
WorkloadEndpoint="ci--4081.3.6--n--8acfe54808-k8s-calico--apiserver--d6d97687b--lt4rt-eth0" Nov 8 01:19:53.402351 containerd[1823]: 2025-11-08 01:19:53.373 [INFO][5143] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2a025f2d10717f6bb95583ab354eb820bd1e127364b7b72643f26790f3a43925" Namespace="calico-apiserver" Pod="calico-apiserver-d6d97687b-lt4rt" WorkloadEndpoint="ci--4081.3.6--n--8acfe54808-k8s-calico--apiserver--d6d97687b--lt4rt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--8acfe54808-k8s-calico--apiserver--d6d97687b--lt4rt-eth0", GenerateName:"calico-apiserver-d6d97687b-", Namespace:"calico-apiserver", SelfLink:"", UID:"bb14ec11-f019-40a0-9b63-589cf025cfb4", ResourceVersion:"897", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 1, 19, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"d6d97687b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-8acfe54808", ContainerID:"2a025f2d10717f6bb95583ab354eb820bd1e127364b7b72643f26790f3a43925", Pod:"calico-apiserver-d6d97687b-lt4rt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.24.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calidd2f6e7c867", MAC:"f2:f3:ba:2c:99:4d", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 01:19:53.402351 containerd[1823]: 2025-11-08 01:19:53.393 [INFO][5143] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2a025f2d10717f6bb95583ab354eb820bd1e127364b7b72643f26790f3a43925" Namespace="calico-apiserver" Pod="calico-apiserver-d6d97687b-lt4rt" WorkloadEndpoint="ci--4081.3.6--n--8acfe54808-k8s-calico--apiserver--d6d97687b--lt4rt-eth0" Nov 8 01:19:53.414216 containerd[1823]: time="2025-11-08T01:19:53.414166259Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 01:19:53.414216 containerd[1823]: time="2025-11-08T01:19:53.414211112Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 01:19:53.414426 containerd[1823]: time="2025-11-08T01:19:53.414413987Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 01:19:53.414476 containerd[1823]: time="2025-11-08T01:19:53.414463046Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 01:19:53.435661 systemd[1]: Started cri-containerd-2a025f2d10717f6bb95583ab354eb820bd1e127364b7b72643f26790f3a43925.scope - libcontainer container 2a025f2d10717f6bb95583ab354eb820bd1e127364b7b72643f26790f3a43925. 
Nov 8 01:19:53.458631 containerd[1823]: time="2025-11-08T01:19:53.458607600Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d6d97687b-lt4rt,Uid:bb14ec11-f019-40a0-9b63-589cf025cfb4,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"2a025f2d10717f6bb95583ab354eb820bd1e127364b7b72643f26790f3a43925\"" Nov 8 01:19:53.459355 containerd[1823]: time="2025-11-08T01:19:53.459343220Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 01:19:53.836120 containerd[1823]: time="2025-11-08T01:19:53.835996703Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 01:19:53.837033 containerd[1823]: time="2025-11-08T01:19:53.836960597Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 01:19:53.837070 containerd[1823]: time="2025-11-08T01:19:53.837025601Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 01:19:53.837183 kubelet[3076]: E1108 01:19:53.837129 3076 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 01:19:53.837183 kubelet[3076]: E1108 01:19:53.837167 3076 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not 
found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 01:19:53.837391 kubelet[3076]: E1108 01:19:53.837274 3076 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n5755,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-d6d97687b-lt4rt_calico-apiserver(bb14ec11-f019-40a0-9b63-589cf025cfb4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 01:19:53.838788 kubelet[3076]: E1108 01:19:53.838742 3076 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d6d97687b-lt4rt" podUID="bb14ec11-f019-40a0-9b63-589cf025cfb4" Nov 8 01:19:54.284055 containerd[1823]: time="2025-11-08T01:19:54.283951058Z" level=info msg="StopPodSandbox for \"d5d77b94f607cd110b5cd8da19a9750bd5391dd749052477e1157e7186dea50a\"" Nov 8 01:19:54.406273 
containerd[1823]: 2025-11-08 01:19:54.351 [INFO][5243] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d5d77b94f607cd110b5cd8da19a9750bd5391dd749052477e1157e7186dea50a" Nov 8 01:19:54.406273 containerd[1823]: 2025-11-08 01:19:54.352 [INFO][5243] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d5d77b94f607cd110b5cd8da19a9750bd5391dd749052477e1157e7186dea50a" iface="eth0" netns="/var/run/netns/cni-c9456cce-84e6-4a22-99c9-7d85beec8190" Nov 8 01:19:54.406273 containerd[1823]: 2025-11-08 01:19:54.352 [INFO][5243] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d5d77b94f607cd110b5cd8da19a9750bd5391dd749052477e1157e7186dea50a" iface="eth0" netns="/var/run/netns/cni-c9456cce-84e6-4a22-99c9-7d85beec8190" Nov 8 01:19:54.406273 containerd[1823]: 2025-11-08 01:19:54.353 [INFO][5243] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="d5d77b94f607cd110b5cd8da19a9750bd5391dd749052477e1157e7186dea50a" iface="eth0" netns="/var/run/netns/cni-c9456cce-84e6-4a22-99c9-7d85beec8190" Nov 8 01:19:54.406273 containerd[1823]: 2025-11-08 01:19:54.353 [INFO][5243] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d5d77b94f607cd110b5cd8da19a9750bd5391dd749052477e1157e7186dea50a" Nov 8 01:19:54.406273 containerd[1823]: 2025-11-08 01:19:54.353 [INFO][5243] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d5d77b94f607cd110b5cd8da19a9750bd5391dd749052477e1157e7186dea50a" Nov 8 01:19:54.406273 containerd[1823]: 2025-11-08 01:19:54.396 [INFO][5261] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="d5d77b94f607cd110b5cd8da19a9750bd5391dd749052477e1157e7186dea50a" HandleID="k8s-pod-network.d5d77b94f607cd110b5cd8da19a9750bd5391dd749052477e1157e7186dea50a" Workload="ci--4081.3.6--n--8acfe54808-k8s-goldmane--666569f655--dmnhl-eth0" Nov 8 01:19:54.406273 containerd[1823]: 2025-11-08 01:19:54.396 [INFO][5261] ipam/ipam_plugin.go 377: 
About to acquire host-wide IPAM lock. Nov 8 01:19:54.406273 containerd[1823]: 2025-11-08 01:19:54.396 [INFO][5261] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 01:19:54.406273 containerd[1823]: 2025-11-08 01:19:54.402 [WARNING][5261] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="d5d77b94f607cd110b5cd8da19a9750bd5391dd749052477e1157e7186dea50a" HandleID="k8s-pod-network.d5d77b94f607cd110b5cd8da19a9750bd5391dd749052477e1157e7186dea50a" Workload="ci--4081.3.6--n--8acfe54808-k8s-goldmane--666569f655--dmnhl-eth0" Nov 8 01:19:54.406273 containerd[1823]: 2025-11-08 01:19:54.402 [INFO][5261] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="d5d77b94f607cd110b5cd8da19a9750bd5391dd749052477e1157e7186dea50a" HandleID="k8s-pod-network.d5d77b94f607cd110b5cd8da19a9750bd5391dd749052477e1157e7186dea50a" Workload="ci--4081.3.6--n--8acfe54808-k8s-goldmane--666569f655--dmnhl-eth0" Nov 8 01:19:54.406273 containerd[1823]: 2025-11-08 01:19:54.403 [INFO][5261] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 01:19:54.406273 containerd[1823]: 2025-11-08 01:19:54.405 [INFO][5243] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="d5d77b94f607cd110b5cd8da19a9750bd5391dd749052477e1157e7186dea50a" Nov 8 01:19:54.406850 containerd[1823]: time="2025-11-08T01:19:54.406373714Z" level=info msg="TearDown network for sandbox \"d5d77b94f607cd110b5cd8da19a9750bd5391dd749052477e1157e7186dea50a\" successfully" Nov 8 01:19:54.406850 containerd[1823]: time="2025-11-08T01:19:54.406394313Z" level=info msg="StopPodSandbox for \"d5d77b94f607cd110b5cd8da19a9750bd5391dd749052477e1157e7186dea50a\" returns successfully" Nov 8 01:19:54.406850 containerd[1823]: time="2025-11-08T01:19:54.406770970Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-dmnhl,Uid:71f5fc0d-399b-4a93-8104-f8dd3ea1c5df,Namespace:calico-system,Attempt:1,}" Nov 8 01:19:54.407554 kubelet[3076]: E1108 01:19:54.407536 3076 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d6d97687b-lt4rt" podUID="bb14ec11-f019-40a0-9b63-589cf025cfb4" Nov 8 01:19:54.409136 systemd[1]: run-netns-cni\x2dc9456cce\x2d84e6\x2d4a22\x2d99c9\x2d7d85beec8190.mount: Deactivated successfully. 
Nov 8 01:19:54.472451 systemd-networkd[1517]: calif0cf643ba91: Link UP Nov 8 01:19:54.472570 systemd-networkd[1517]: calif0cf643ba91: Gained carrier Nov 8 01:19:54.478430 containerd[1823]: 2025-11-08 01:19:54.442 [INFO][5282] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--8acfe54808-k8s-goldmane--666569f655--dmnhl-eth0 goldmane-666569f655- calico-system 71f5fc0d-399b-4a93-8104-f8dd3ea1c5df 907 0 2025-11-08 01:19:31 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081.3.6-n-8acfe54808 goldmane-666569f655-dmnhl eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calif0cf643ba91 [] [] }} ContainerID="d44767dc05b1879928797b6649f389c344b076589325ee4581dce2647f0bbe7a" Namespace="calico-system" Pod="goldmane-666569f655-dmnhl" WorkloadEndpoint="ci--4081.3.6--n--8acfe54808-k8s-goldmane--666569f655--dmnhl-" Nov 8 01:19:54.478430 containerd[1823]: 2025-11-08 01:19:54.443 [INFO][5282] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d44767dc05b1879928797b6649f389c344b076589325ee4581dce2647f0bbe7a" Namespace="calico-system" Pod="goldmane-666569f655-dmnhl" WorkloadEndpoint="ci--4081.3.6--n--8acfe54808-k8s-goldmane--666569f655--dmnhl-eth0" Nov 8 01:19:54.478430 containerd[1823]: 2025-11-08 01:19:54.454 [INFO][5304] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d44767dc05b1879928797b6649f389c344b076589325ee4581dce2647f0bbe7a" HandleID="k8s-pod-network.d44767dc05b1879928797b6649f389c344b076589325ee4581dce2647f0bbe7a" Workload="ci--4081.3.6--n--8acfe54808-k8s-goldmane--666569f655--dmnhl-eth0" Nov 8 01:19:54.478430 containerd[1823]: 2025-11-08 01:19:54.454 [INFO][5304] ipam/ipam_plugin.go 275: Auto assigning IP 
ContainerID="d44767dc05b1879928797b6649f389c344b076589325ee4581dce2647f0bbe7a" HandleID="k8s-pod-network.d44767dc05b1879928797b6649f389c344b076589325ee4581dce2647f0bbe7a" Workload="ci--4081.3.6--n--8acfe54808-k8s-goldmane--666569f655--dmnhl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004eaa0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-8acfe54808", "pod":"goldmane-666569f655-dmnhl", "timestamp":"2025-11-08 01:19:54.454634782 +0000 UTC"}, Hostname:"ci-4081.3.6-n-8acfe54808", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 01:19:54.478430 containerd[1823]: 2025-11-08 01:19:54.454 [INFO][5304] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 01:19:54.478430 containerd[1823]: 2025-11-08 01:19:54.454 [INFO][5304] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 01:19:54.478430 containerd[1823]: 2025-11-08 01:19:54.454 [INFO][5304] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-8acfe54808' Nov 8 01:19:54.478430 containerd[1823]: 2025-11-08 01:19:54.458 [INFO][5304] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d44767dc05b1879928797b6649f389c344b076589325ee4581dce2647f0bbe7a" host="ci-4081.3.6-n-8acfe54808" Nov 8 01:19:54.478430 containerd[1823]: 2025-11-08 01:19:54.460 [INFO][5304] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-8acfe54808" Nov 8 01:19:54.478430 containerd[1823]: 2025-11-08 01:19:54.463 [INFO][5304] ipam/ipam.go 511: Trying affinity for 192.168.24.64/26 host="ci-4081.3.6-n-8acfe54808" Nov 8 01:19:54.478430 containerd[1823]: 2025-11-08 01:19:54.463 [INFO][5304] ipam/ipam.go 158: Attempting to load block cidr=192.168.24.64/26 host="ci-4081.3.6-n-8acfe54808" Nov 8 01:19:54.478430 containerd[1823]: 2025-11-08 01:19:54.465 [INFO][5304] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.24.64/26 host="ci-4081.3.6-n-8acfe54808" Nov 8 01:19:54.478430 containerd[1823]: 2025-11-08 01:19:54.465 [INFO][5304] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.24.64/26 handle="k8s-pod-network.d44767dc05b1879928797b6649f389c344b076589325ee4581dce2647f0bbe7a" host="ci-4081.3.6-n-8acfe54808" Nov 8 01:19:54.478430 containerd[1823]: 2025-11-08 01:19:54.466 [INFO][5304] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.d44767dc05b1879928797b6649f389c344b076589325ee4581dce2647f0bbe7a Nov 8 01:19:54.478430 containerd[1823]: 2025-11-08 01:19:54.468 [INFO][5304] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.24.64/26 handle="k8s-pod-network.d44767dc05b1879928797b6649f389c344b076589325ee4581dce2647f0bbe7a" host="ci-4081.3.6-n-8acfe54808" Nov 8 01:19:54.478430 containerd[1823]: 2025-11-08 01:19:54.470 [INFO][5304] ipam/ipam.go 1262: Successfully claimed 
IPs: [192.168.24.67/26] block=192.168.24.64/26 handle="k8s-pod-network.d44767dc05b1879928797b6649f389c344b076589325ee4581dce2647f0bbe7a" host="ci-4081.3.6-n-8acfe54808" Nov 8 01:19:54.478430 containerd[1823]: 2025-11-08 01:19:54.470 [INFO][5304] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.24.67/26] handle="k8s-pod-network.d44767dc05b1879928797b6649f389c344b076589325ee4581dce2647f0bbe7a" host="ci-4081.3.6-n-8acfe54808" Nov 8 01:19:54.478430 containerd[1823]: 2025-11-08 01:19:54.470 [INFO][5304] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 01:19:54.478430 containerd[1823]: 2025-11-08 01:19:54.470 [INFO][5304] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.24.67/26] IPv6=[] ContainerID="d44767dc05b1879928797b6649f389c344b076589325ee4581dce2647f0bbe7a" HandleID="k8s-pod-network.d44767dc05b1879928797b6649f389c344b076589325ee4581dce2647f0bbe7a" Workload="ci--4081.3.6--n--8acfe54808-k8s-goldmane--666569f655--dmnhl-eth0" Nov 8 01:19:54.478918 containerd[1823]: 2025-11-08 01:19:54.471 [INFO][5282] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d44767dc05b1879928797b6649f389c344b076589325ee4581dce2647f0bbe7a" Namespace="calico-system" Pod="goldmane-666569f655-dmnhl" WorkloadEndpoint="ci--4081.3.6--n--8acfe54808-k8s-goldmane--666569f655--dmnhl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--8acfe54808-k8s-goldmane--666569f655--dmnhl-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"71f5fc0d-399b-4a93-8104-f8dd3ea1c5df", ResourceVersion:"907", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 1, 19, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-8acfe54808", ContainerID:"", Pod:"goldmane-666569f655-dmnhl", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.24.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calif0cf643ba91", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 01:19:54.478918 containerd[1823]: 2025-11-08 01:19:54.471 [INFO][5282] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.24.67/32] ContainerID="d44767dc05b1879928797b6649f389c344b076589325ee4581dce2647f0bbe7a" Namespace="calico-system" Pod="goldmane-666569f655-dmnhl" WorkloadEndpoint="ci--4081.3.6--n--8acfe54808-k8s-goldmane--666569f655--dmnhl-eth0" Nov 8 01:19:54.478918 containerd[1823]: 2025-11-08 01:19:54.471 [INFO][5282] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif0cf643ba91 ContainerID="d44767dc05b1879928797b6649f389c344b076589325ee4581dce2647f0bbe7a" Namespace="calico-system" Pod="goldmane-666569f655-dmnhl" WorkloadEndpoint="ci--4081.3.6--n--8acfe54808-k8s-goldmane--666569f655--dmnhl-eth0" Nov 8 01:19:54.478918 containerd[1823]: 2025-11-08 01:19:54.472 [INFO][5282] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d44767dc05b1879928797b6649f389c344b076589325ee4581dce2647f0bbe7a" Namespace="calico-system" Pod="goldmane-666569f655-dmnhl" WorkloadEndpoint="ci--4081.3.6--n--8acfe54808-k8s-goldmane--666569f655--dmnhl-eth0" Nov 8 01:19:54.478918 containerd[1823]: 2025-11-08 01:19:54.472 [INFO][5282] cni-plugin/k8s.go 446: Added Mac, interface name, and active 
container ID to endpoint ContainerID="d44767dc05b1879928797b6649f389c344b076589325ee4581dce2647f0bbe7a" Namespace="calico-system" Pod="goldmane-666569f655-dmnhl" WorkloadEndpoint="ci--4081.3.6--n--8acfe54808-k8s-goldmane--666569f655--dmnhl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--8acfe54808-k8s-goldmane--666569f655--dmnhl-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"71f5fc0d-399b-4a93-8104-f8dd3ea1c5df", ResourceVersion:"907", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 1, 19, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-8acfe54808", ContainerID:"d44767dc05b1879928797b6649f389c344b076589325ee4581dce2647f0bbe7a", Pod:"goldmane-666569f655-dmnhl", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.24.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calif0cf643ba91", MAC:"ce:b1:c9:ad:ae:2f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 01:19:54.478918 containerd[1823]: 2025-11-08 01:19:54.477 [INFO][5282] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d44767dc05b1879928797b6649f389c344b076589325ee4581dce2647f0bbe7a" Namespace="calico-system" 
Pod="goldmane-666569f655-dmnhl" WorkloadEndpoint="ci--4081.3.6--n--8acfe54808-k8s-goldmane--666569f655--dmnhl-eth0" Nov 8 01:19:54.486682 containerd[1823]: time="2025-11-08T01:19:54.486601598Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 01:19:54.486682 containerd[1823]: time="2025-11-08T01:19:54.486674257Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 01:19:54.486682 containerd[1823]: time="2025-11-08T01:19:54.486681895Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 01:19:54.486799 containerd[1823]: time="2025-11-08T01:19:54.486726502Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 01:19:54.511081 systemd[1]: Started cri-containerd-d44767dc05b1879928797b6649f389c344b076589325ee4581dce2647f0bbe7a.scope - libcontainer container d44767dc05b1879928797b6649f389c344b076589325ee4581dce2647f0bbe7a. 
Nov 8 01:19:54.594100 containerd[1823]: time="2025-11-08T01:19:54.594068293Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-dmnhl,Uid:71f5fc0d-399b-4a93-8104-f8dd3ea1c5df,Namespace:calico-system,Attempt:1,} returns sandbox id \"d44767dc05b1879928797b6649f389c344b076589325ee4581dce2647f0bbe7a\"" Nov 8 01:19:54.595871 containerd[1823]: time="2025-11-08T01:19:54.595201001Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 8 01:19:54.978434 containerd[1823]: time="2025-11-08T01:19:54.978378806Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 01:19:54.979015 containerd[1823]: time="2025-11-08T01:19:54.978967584Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 8 01:19:54.979046 containerd[1823]: time="2025-11-08T01:19:54.979016859Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 8 01:19:54.979181 kubelet[3076]: E1108 01:19:54.979108 3076 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 01:19:54.979181 kubelet[3076]: E1108 01:19:54.979141 3076 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 01:19:54.979408 kubelet[3076]: E1108 01:19:54.979224 3076 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t8jnb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-dmnhl_calico-system(71f5fc0d-399b-4a93-8104-f8dd3ea1c5df): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 8 01:19:54.980382 kubelet[3076]: E1108 01:19:54.980363 3076 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-dmnhl" podUID="71f5fc0d-399b-4a93-8104-f8dd3ea1c5df" Nov 8 01:19:55.058613 systemd-networkd[1517]: calidd2f6e7c867: Gained IPv6LL Nov 8 01:19:55.286549 containerd[1823]: time="2025-11-08T01:19:55.286266169Z" level=info msg="StopPodSandbox for \"d69af856e8d666ad0a915f1771cea05f0f3849c007e6a6a72c76b07367eb6938\"" Nov 8 
01:19:55.354344 containerd[1823]: 2025-11-08 01:19:55.332 [INFO][5379] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d69af856e8d666ad0a915f1771cea05f0f3849c007e6a6a72c76b07367eb6938" Nov 8 01:19:55.354344 containerd[1823]: 2025-11-08 01:19:55.332 [INFO][5379] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d69af856e8d666ad0a915f1771cea05f0f3849c007e6a6a72c76b07367eb6938" iface="eth0" netns="/var/run/netns/cni-b9e45a1c-baef-df1f-018d-f937428fe5ef" Nov 8 01:19:55.354344 containerd[1823]: 2025-11-08 01:19:55.333 [INFO][5379] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d69af856e8d666ad0a915f1771cea05f0f3849c007e6a6a72c76b07367eb6938" iface="eth0" netns="/var/run/netns/cni-b9e45a1c-baef-df1f-018d-f937428fe5ef" Nov 8 01:19:55.354344 containerd[1823]: 2025-11-08 01:19:55.333 [INFO][5379] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="d69af856e8d666ad0a915f1771cea05f0f3849c007e6a6a72c76b07367eb6938" iface="eth0" netns="/var/run/netns/cni-b9e45a1c-baef-df1f-018d-f937428fe5ef" Nov 8 01:19:55.354344 containerd[1823]: 2025-11-08 01:19:55.333 [INFO][5379] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d69af856e8d666ad0a915f1771cea05f0f3849c007e6a6a72c76b07367eb6938" Nov 8 01:19:55.354344 containerd[1823]: 2025-11-08 01:19:55.333 [INFO][5379] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d69af856e8d666ad0a915f1771cea05f0f3849c007e6a6a72c76b07367eb6938" Nov 8 01:19:55.354344 containerd[1823]: 2025-11-08 01:19:55.345 [INFO][5397] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="d69af856e8d666ad0a915f1771cea05f0f3849c007e6a6a72c76b07367eb6938" HandleID="k8s-pod-network.d69af856e8d666ad0a915f1771cea05f0f3849c007e6a6a72c76b07367eb6938" Workload="ci--4081.3.6--n--8acfe54808-k8s-calico--apiserver--d6d97687b--82v26-eth0" Nov 8 01:19:55.354344 containerd[1823]: 2025-11-08 01:19:55.345 [INFO][5397] 
ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 01:19:55.354344 containerd[1823]: 2025-11-08 01:19:55.345 [INFO][5397] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 01:19:55.354344 containerd[1823]: 2025-11-08 01:19:55.351 [WARNING][5397] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="d69af856e8d666ad0a915f1771cea05f0f3849c007e6a6a72c76b07367eb6938" HandleID="k8s-pod-network.d69af856e8d666ad0a915f1771cea05f0f3849c007e6a6a72c76b07367eb6938" Workload="ci--4081.3.6--n--8acfe54808-k8s-calico--apiserver--d6d97687b--82v26-eth0" Nov 8 01:19:55.354344 containerd[1823]: 2025-11-08 01:19:55.351 [INFO][5397] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="d69af856e8d666ad0a915f1771cea05f0f3849c007e6a6a72c76b07367eb6938" HandleID="k8s-pod-network.d69af856e8d666ad0a915f1771cea05f0f3849c007e6a6a72c76b07367eb6938" Workload="ci--4081.3.6--n--8acfe54808-k8s-calico--apiserver--d6d97687b--82v26-eth0" Nov 8 01:19:55.354344 containerd[1823]: 2025-11-08 01:19:55.352 [INFO][5397] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 01:19:55.354344 containerd[1823]: 2025-11-08 01:19:55.353 [INFO][5379] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="d69af856e8d666ad0a915f1771cea05f0f3849c007e6a6a72c76b07367eb6938" Nov 8 01:19:55.354699 containerd[1823]: time="2025-11-08T01:19:55.354394184Z" level=info msg="TearDown network for sandbox \"d69af856e8d666ad0a915f1771cea05f0f3849c007e6a6a72c76b07367eb6938\" successfully" Nov 8 01:19:55.354699 containerd[1823]: time="2025-11-08T01:19:55.354414865Z" level=info msg="StopPodSandbox for \"d69af856e8d666ad0a915f1771cea05f0f3849c007e6a6a72c76b07367eb6938\" returns successfully" Nov 8 01:19:55.354828 containerd[1823]: time="2025-11-08T01:19:55.354808804Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d6d97687b-82v26,Uid:e679c708-5dc2-455f-8f76-0a5b47442761,Namespace:calico-apiserver,Attempt:1,}" Nov 8 01:19:55.405016 systemd-networkd[1517]: califb795ee947e: Link UP Nov 8 01:19:55.405134 systemd-networkd[1517]: califb795ee947e: Gained carrier Nov 8 01:19:55.409814 kubelet[3076]: E1108 01:19:55.409789 3076 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d6d97687b-lt4rt" podUID="bb14ec11-f019-40a0-9b63-589cf025cfb4" Nov 8 01:19:55.409814 kubelet[3076]: E1108 01:19:55.409789 3076 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": 
ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-dmnhl" podUID="71f5fc0d-399b-4a93-8104-f8dd3ea1c5df" Nov 8 01:19:55.410182 systemd[1]: run-netns-cni\x2db9e45a1c\x2dbaef\x2ddf1f\x2d018d\x2df937428fe5ef.mount: Deactivated successfully. Nov 8 01:19:55.411465 containerd[1823]: 2025-11-08 01:19:55.374 [INFO][5412] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--8acfe54808-k8s-calico--apiserver--d6d97687b--82v26-eth0 calico-apiserver-d6d97687b- calico-apiserver e679c708-5dc2-455f-8f76-0a5b47442761 922 0 2025-11-08 01:19:29 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:d6d97687b projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.6-n-8acfe54808 calico-apiserver-d6d97687b-82v26 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] califb795ee947e [] [] }} ContainerID="c605a94ae9038480f73ffad235af3591e3af4383e17f31600daa51cda4eee32d" Namespace="calico-apiserver" Pod="calico-apiserver-d6d97687b-82v26" WorkloadEndpoint="ci--4081.3.6--n--8acfe54808-k8s-calico--apiserver--d6d97687b--82v26-" Nov 8 01:19:55.411465 containerd[1823]: 2025-11-08 01:19:55.374 [INFO][5412] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c605a94ae9038480f73ffad235af3591e3af4383e17f31600daa51cda4eee32d" Namespace="calico-apiserver" Pod="calico-apiserver-d6d97687b-82v26" WorkloadEndpoint="ci--4081.3.6--n--8acfe54808-k8s-calico--apiserver--d6d97687b--82v26-eth0" Nov 8 01:19:55.411465 containerd[1823]: 2025-11-08 01:19:55.386 [INFO][5432] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c605a94ae9038480f73ffad235af3591e3af4383e17f31600daa51cda4eee32d" 
HandleID="k8s-pod-network.c605a94ae9038480f73ffad235af3591e3af4383e17f31600daa51cda4eee32d" Workload="ci--4081.3.6--n--8acfe54808-k8s-calico--apiserver--d6d97687b--82v26-eth0" Nov 8 01:19:55.411465 containerd[1823]: 2025-11-08 01:19:55.386 [INFO][5432] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="c605a94ae9038480f73ffad235af3591e3af4383e17f31600daa51cda4eee32d" HandleID="k8s-pod-network.c605a94ae9038480f73ffad235af3591e3af4383e17f31600daa51cda4eee32d" Workload="ci--4081.3.6--n--8acfe54808-k8s-calico--apiserver--d6d97687b--82v26-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00068b960), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.6-n-8acfe54808", "pod":"calico-apiserver-d6d97687b-82v26", "timestamp":"2025-11-08 01:19:55.386258398 +0000 UTC"}, Hostname:"ci-4081.3.6-n-8acfe54808", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 01:19:55.411465 containerd[1823]: 2025-11-08 01:19:55.386 [INFO][5432] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 01:19:55.411465 containerd[1823]: 2025-11-08 01:19:55.386 [INFO][5432] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 01:19:55.411465 containerd[1823]: 2025-11-08 01:19:55.386 [INFO][5432] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-8acfe54808' Nov 8 01:19:55.411465 containerd[1823]: 2025-11-08 01:19:55.390 [INFO][5432] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c605a94ae9038480f73ffad235af3591e3af4383e17f31600daa51cda4eee32d" host="ci-4081.3.6-n-8acfe54808" Nov 8 01:19:55.411465 containerd[1823]: 2025-11-08 01:19:55.393 [INFO][5432] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-8acfe54808" Nov 8 01:19:55.411465 containerd[1823]: 2025-11-08 01:19:55.395 [INFO][5432] ipam/ipam.go 511: Trying affinity for 192.168.24.64/26 host="ci-4081.3.6-n-8acfe54808" Nov 8 01:19:55.411465 containerd[1823]: 2025-11-08 01:19:55.396 [INFO][5432] ipam/ipam.go 158: Attempting to load block cidr=192.168.24.64/26 host="ci-4081.3.6-n-8acfe54808" Nov 8 01:19:55.411465 containerd[1823]: 2025-11-08 01:19:55.397 [INFO][5432] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.24.64/26 host="ci-4081.3.6-n-8acfe54808" Nov 8 01:19:55.411465 containerd[1823]: 2025-11-08 01:19:55.397 [INFO][5432] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.24.64/26 handle="k8s-pod-network.c605a94ae9038480f73ffad235af3591e3af4383e17f31600daa51cda4eee32d" host="ci-4081.3.6-n-8acfe54808" Nov 8 01:19:55.411465 containerd[1823]: 2025-11-08 01:19:55.398 [INFO][5432] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.c605a94ae9038480f73ffad235af3591e3af4383e17f31600daa51cda4eee32d Nov 8 01:19:55.411465 containerd[1823]: 2025-11-08 01:19:55.400 [INFO][5432] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.24.64/26 handle="k8s-pod-network.c605a94ae9038480f73ffad235af3591e3af4383e17f31600daa51cda4eee32d" host="ci-4081.3.6-n-8acfe54808" Nov 8 01:19:55.411465 containerd[1823]: 2025-11-08 01:19:55.403 [INFO][5432] ipam/ipam.go 1262: Successfully claimed 
IPs: [192.168.24.68/26] block=192.168.24.64/26 handle="k8s-pod-network.c605a94ae9038480f73ffad235af3591e3af4383e17f31600daa51cda4eee32d" host="ci-4081.3.6-n-8acfe54808" Nov 8 01:19:55.411465 containerd[1823]: 2025-11-08 01:19:55.403 [INFO][5432] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.24.68/26] handle="k8s-pod-network.c605a94ae9038480f73ffad235af3591e3af4383e17f31600daa51cda4eee32d" host="ci-4081.3.6-n-8acfe54808" Nov 8 01:19:55.411465 containerd[1823]: 2025-11-08 01:19:55.403 [INFO][5432] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 01:19:55.411465 containerd[1823]: 2025-11-08 01:19:55.403 [INFO][5432] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.24.68/26] IPv6=[] ContainerID="c605a94ae9038480f73ffad235af3591e3af4383e17f31600daa51cda4eee32d" HandleID="k8s-pod-network.c605a94ae9038480f73ffad235af3591e3af4383e17f31600daa51cda4eee32d" Workload="ci--4081.3.6--n--8acfe54808-k8s-calico--apiserver--d6d97687b--82v26-eth0" Nov 8 01:19:55.411853 containerd[1823]: 2025-11-08 01:19:55.404 [INFO][5412] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c605a94ae9038480f73ffad235af3591e3af4383e17f31600daa51cda4eee32d" Namespace="calico-apiserver" Pod="calico-apiserver-d6d97687b-82v26" WorkloadEndpoint="ci--4081.3.6--n--8acfe54808-k8s-calico--apiserver--d6d97687b--82v26-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--8acfe54808-k8s-calico--apiserver--d6d97687b--82v26-eth0", GenerateName:"calico-apiserver-d6d97687b-", Namespace:"calico-apiserver", SelfLink:"", UID:"e679c708-5dc2-455f-8f76-0a5b47442761", ResourceVersion:"922", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 1, 19, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", 
"pod-template-hash":"d6d97687b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-8acfe54808", ContainerID:"", Pod:"calico-apiserver-d6d97687b-82v26", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.24.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"califb795ee947e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 01:19:55.411853 containerd[1823]: 2025-11-08 01:19:55.404 [INFO][5412] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.24.68/32] ContainerID="c605a94ae9038480f73ffad235af3591e3af4383e17f31600daa51cda4eee32d" Namespace="calico-apiserver" Pod="calico-apiserver-d6d97687b-82v26" WorkloadEndpoint="ci--4081.3.6--n--8acfe54808-k8s-calico--apiserver--d6d97687b--82v26-eth0" Nov 8 01:19:55.411853 containerd[1823]: 2025-11-08 01:19:55.404 [INFO][5412] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califb795ee947e ContainerID="c605a94ae9038480f73ffad235af3591e3af4383e17f31600daa51cda4eee32d" Namespace="calico-apiserver" Pod="calico-apiserver-d6d97687b-82v26" WorkloadEndpoint="ci--4081.3.6--n--8acfe54808-k8s-calico--apiserver--d6d97687b--82v26-eth0" Nov 8 01:19:55.411853 containerd[1823]: 2025-11-08 01:19:55.405 [INFO][5412] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c605a94ae9038480f73ffad235af3591e3af4383e17f31600daa51cda4eee32d" Namespace="calico-apiserver" Pod="calico-apiserver-d6d97687b-82v26" 
WorkloadEndpoint="ci--4081.3.6--n--8acfe54808-k8s-calico--apiserver--d6d97687b--82v26-eth0" Nov 8 01:19:55.411853 containerd[1823]: 2025-11-08 01:19:55.405 [INFO][5412] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c605a94ae9038480f73ffad235af3591e3af4383e17f31600daa51cda4eee32d" Namespace="calico-apiserver" Pod="calico-apiserver-d6d97687b-82v26" WorkloadEndpoint="ci--4081.3.6--n--8acfe54808-k8s-calico--apiserver--d6d97687b--82v26-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--8acfe54808-k8s-calico--apiserver--d6d97687b--82v26-eth0", GenerateName:"calico-apiserver-d6d97687b-", Namespace:"calico-apiserver", SelfLink:"", UID:"e679c708-5dc2-455f-8f76-0a5b47442761", ResourceVersion:"922", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 1, 19, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"d6d97687b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-8acfe54808", ContainerID:"c605a94ae9038480f73ffad235af3591e3af4383e17f31600daa51cda4eee32d", Pod:"calico-apiserver-d6d97687b-82v26", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.24.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"califb795ee947e", MAC:"92:8e:6a:f2:08:a6", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 01:19:55.411853 containerd[1823]: 2025-11-08 01:19:55.409 [INFO][5412] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c605a94ae9038480f73ffad235af3591e3af4383e17f31600daa51cda4eee32d" Namespace="calico-apiserver" Pod="calico-apiserver-d6d97687b-82v26" WorkloadEndpoint="ci--4081.3.6--n--8acfe54808-k8s-calico--apiserver--d6d97687b--82v26-eth0" Nov 8 01:19:55.420572 containerd[1823]: time="2025-11-08T01:19:55.420283760Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 01:19:55.420572 containerd[1823]: time="2025-11-08T01:19:55.420540028Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 01:19:55.420572 containerd[1823]: time="2025-11-08T01:19:55.420550084Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 01:19:55.420711 containerd[1823]: time="2025-11-08T01:19:55.420608650Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 01:19:55.452025 systemd[1]: Started cri-containerd-c605a94ae9038480f73ffad235af3591e3af4383e17f31600daa51cda4eee32d.scope - libcontainer container c605a94ae9038480f73ffad235af3591e3af4383e17f31600daa51cda4eee32d. 
Nov 8 01:19:55.511440 containerd[1823]: time="2025-11-08T01:19:55.511419925Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d6d97687b-82v26,Uid:e679c708-5dc2-455f-8f76-0a5b47442761,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"c605a94ae9038480f73ffad235af3591e3af4383e17f31600daa51cda4eee32d\"" Nov 8 01:19:55.512096 containerd[1823]: time="2025-11-08T01:19:55.512083758Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 01:19:55.698828 systemd-networkd[1517]: calif0cf643ba91: Gained IPv6LL Nov 8 01:19:55.885892 containerd[1823]: time="2025-11-08T01:19:55.885769570Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 01:19:55.886697 containerd[1823]: time="2025-11-08T01:19:55.886605865Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 01:19:55.886744 containerd[1823]: time="2025-11-08T01:19:55.886692301Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 01:19:55.886828 kubelet[3076]: E1108 01:19:55.886778 3076 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 01:19:55.886828 kubelet[3076]: E1108 01:19:55.886806 3076 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 01:19:55.886908 kubelet[3076]: E1108 01:19:55.886886 3076 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ws2x2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-d6d97687b-82v26_calico-apiserver(e679c708-5dc2-455f-8f76-0a5b47442761): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 01:19:55.888036 kubelet[3076]: E1108 01:19:55.888019 3076 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d6d97687b-82v26" podUID="e679c708-5dc2-455f-8f76-0a5b47442761" Nov 8 01:19:56.283439 containerd[1823]: time="2025-11-08T01:19:56.283399350Z" level=info msg="StopPodSandbox for \"71140fbb207e0bcb7b820b7bb714e68bc014a20f50dbfb3ccb1046f552d5ce56\"" Nov 8 01:19:56.283439 
containerd[1823]: time="2025-11-08T01:19:56.283432014Z" level=info msg="StopPodSandbox for \"1cfd32b089cc048b4b8713dc0807cbb25825434bad6866a627652e142b11e21f\"" Nov 8 01:19:56.283684 containerd[1823]: time="2025-11-08T01:19:56.283431920Z" level=info msg="StopPodSandbox for \"c23ccb3f481f7ac2cb70e2a185fddcffcc01a9e22b20d93516dd42cd04bf7e4e\"" Nov 8 01:19:56.283684 containerd[1823]: time="2025-11-08T01:19:56.283605032Z" level=info msg="StopPodSandbox for \"52967611b1523d15183c72b35fd1d746943d53d20100623e1972f13c4d687ba8\"" Nov 8 01:19:56.338503 containerd[1823]: 2025-11-08 01:19:56.320 [INFO][5540] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="71140fbb207e0bcb7b820b7bb714e68bc014a20f50dbfb3ccb1046f552d5ce56" Nov 8 01:19:56.338503 containerd[1823]: 2025-11-08 01:19:56.320 [INFO][5540] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="71140fbb207e0bcb7b820b7bb714e68bc014a20f50dbfb3ccb1046f552d5ce56" iface="eth0" netns="/var/run/netns/cni-d78dbc04-edc3-b55b-c249-7462eecddc0d" Nov 8 01:19:56.338503 containerd[1823]: 2025-11-08 01:19:56.320 [INFO][5540] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="71140fbb207e0bcb7b820b7bb714e68bc014a20f50dbfb3ccb1046f552d5ce56" iface="eth0" netns="/var/run/netns/cni-d78dbc04-edc3-b55b-c249-7462eecddc0d" Nov 8 01:19:56.338503 containerd[1823]: 2025-11-08 01:19:56.320 [INFO][5540] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="71140fbb207e0bcb7b820b7bb714e68bc014a20f50dbfb3ccb1046f552d5ce56" iface="eth0" netns="/var/run/netns/cni-d78dbc04-edc3-b55b-c249-7462eecddc0d" Nov 8 01:19:56.338503 containerd[1823]: 2025-11-08 01:19:56.320 [INFO][5540] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="71140fbb207e0bcb7b820b7bb714e68bc014a20f50dbfb3ccb1046f552d5ce56" Nov 8 01:19:56.338503 containerd[1823]: 2025-11-08 01:19:56.320 [INFO][5540] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="71140fbb207e0bcb7b820b7bb714e68bc014a20f50dbfb3ccb1046f552d5ce56" Nov 8 01:19:56.338503 containerd[1823]: 2025-11-08 01:19:56.332 [INFO][5603] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="71140fbb207e0bcb7b820b7bb714e68bc014a20f50dbfb3ccb1046f552d5ce56" HandleID="k8s-pod-network.71140fbb207e0bcb7b820b7bb714e68bc014a20f50dbfb3ccb1046f552d5ce56" Workload="ci--4081.3.6--n--8acfe54808-k8s-coredns--668d6bf9bc--k4vsk-eth0" Nov 8 01:19:56.338503 containerd[1823]: 2025-11-08 01:19:56.332 [INFO][5603] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 01:19:56.338503 containerd[1823]: 2025-11-08 01:19:56.332 [INFO][5603] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 01:19:56.338503 containerd[1823]: 2025-11-08 01:19:56.336 [WARNING][5603] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="71140fbb207e0bcb7b820b7bb714e68bc014a20f50dbfb3ccb1046f552d5ce56" HandleID="k8s-pod-network.71140fbb207e0bcb7b820b7bb714e68bc014a20f50dbfb3ccb1046f552d5ce56" Workload="ci--4081.3.6--n--8acfe54808-k8s-coredns--668d6bf9bc--k4vsk-eth0" Nov 8 01:19:56.338503 containerd[1823]: 2025-11-08 01:19:56.336 [INFO][5603] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="71140fbb207e0bcb7b820b7bb714e68bc014a20f50dbfb3ccb1046f552d5ce56" HandleID="k8s-pod-network.71140fbb207e0bcb7b820b7bb714e68bc014a20f50dbfb3ccb1046f552d5ce56" Workload="ci--4081.3.6--n--8acfe54808-k8s-coredns--668d6bf9bc--k4vsk-eth0" Nov 8 01:19:56.338503 containerd[1823]: 2025-11-08 01:19:56.337 [INFO][5603] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 01:19:56.338503 containerd[1823]: 2025-11-08 01:19:56.337 [INFO][5540] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="71140fbb207e0bcb7b820b7bb714e68bc014a20f50dbfb3ccb1046f552d5ce56" Nov 8 01:19:56.338904 containerd[1823]: time="2025-11-08T01:19:56.338577844Z" level=info msg="TearDown network for sandbox \"71140fbb207e0bcb7b820b7bb714e68bc014a20f50dbfb3ccb1046f552d5ce56\" successfully" Nov 8 01:19:56.338904 containerd[1823]: time="2025-11-08T01:19:56.338594357Z" level=info msg="StopPodSandbox for \"71140fbb207e0bcb7b820b7bb714e68bc014a20f50dbfb3ccb1046f552d5ce56\" returns successfully" Nov 8 01:19:56.338995 containerd[1823]: time="2025-11-08T01:19:56.338981501Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-k4vsk,Uid:4cca2fc5-207d-4bfe-9520-a30e2cf67473,Namespace:kube-system,Attempt:1,}" Nov 8 01:19:56.341069 systemd[1]: run-netns-cni\x2dd78dbc04\x2dedc3\x2db55b\x2dc249\x2d7462eecddc0d.mount: Deactivated successfully. 
Nov 8 01:19:56.345382 containerd[1823]: 2025-11-08 01:19:56.319 [INFO][5541] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c23ccb3f481f7ac2cb70e2a185fddcffcc01a9e22b20d93516dd42cd04bf7e4e" Nov 8 01:19:56.345382 containerd[1823]: 2025-11-08 01:19:56.319 [INFO][5541] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c23ccb3f481f7ac2cb70e2a185fddcffcc01a9e22b20d93516dd42cd04bf7e4e" iface="eth0" netns="/var/run/netns/cni-c651ede8-6a0a-3637-bb90-e3161b17efbd" Nov 8 01:19:56.345382 containerd[1823]: 2025-11-08 01:19:56.320 [INFO][5541] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c23ccb3f481f7ac2cb70e2a185fddcffcc01a9e22b20d93516dd42cd04bf7e4e" iface="eth0" netns="/var/run/netns/cni-c651ede8-6a0a-3637-bb90-e3161b17efbd" Nov 8 01:19:56.345382 containerd[1823]: 2025-11-08 01:19:56.320 [INFO][5541] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="c23ccb3f481f7ac2cb70e2a185fddcffcc01a9e22b20d93516dd42cd04bf7e4e" iface="eth0" netns="/var/run/netns/cni-c651ede8-6a0a-3637-bb90-e3161b17efbd" Nov 8 01:19:56.345382 containerd[1823]: 2025-11-08 01:19:56.320 [INFO][5541] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c23ccb3f481f7ac2cb70e2a185fddcffcc01a9e22b20d93516dd42cd04bf7e4e" Nov 8 01:19:56.345382 containerd[1823]: 2025-11-08 01:19:56.320 [INFO][5541] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c23ccb3f481f7ac2cb70e2a185fddcffcc01a9e22b20d93516dd42cd04bf7e4e" Nov 8 01:19:56.345382 containerd[1823]: 2025-11-08 01:19:56.334 [INFO][5601] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="c23ccb3f481f7ac2cb70e2a185fddcffcc01a9e22b20d93516dd42cd04bf7e4e" HandleID="k8s-pod-network.c23ccb3f481f7ac2cb70e2a185fddcffcc01a9e22b20d93516dd42cd04bf7e4e" Workload="ci--4081.3.6--n--8acfe54808-k8s-csi--node--driver--rhnl5-eth0" Nov 8 01:19:56.345382 containerd[1823]: 2025-11-08 01:19:56.334 [INFO][5601] 
ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 01:19:56.345382 containerd[1823]: 2025-11-08 01:19:56.337 [INFO][5601] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 01:19:56.345382 containerd[1823]: 2025-11-08 01:19:56.342 [WARNING][5601] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="c23ccb3f481f7ac2cb70e2a185fddcffcc01a9e22b20d93516dd42cd04bf7e4e" HandleID="k8s-pod-network.c23ccb3f481f7ac2cb70e2a185fddcffcc01a9e22b20d93516dd42cd04bf7e4e" Workload="ci--4081.3.6--n--8acfe54808-k8s-csi--node--driver--rhnl5-eth0" Nov 8 01:19:56.345382 containerd[1823]: 2025-11-08 01:19:56.342 [INFO][5601] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="c23ccb3f481f7ac2cb70e2a185fddcffcc01a9e22b20d93516dd42cd04bf7e4e" HandleID="k8s-pod-network.c23ccb3f481f7ac2cb70e2a185fddcffcc01a9e22b20d93516dd42cd04bf7e4e" Workload="ci--4081.3.6--n--8acfe54808-k8s-csi--node--driver--rhnl5-eth0" Nov 8 01:19:56.345382 containerd[1823]: 2025-11-08 01:19:56.343 [INFO][5601] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 01:19:56.345382 containerd[1823]: 2025-11-08 01:19:56.344 [INFO][5541] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="c23ccb3f481f7ac2cb70e2a185fddcffcc01a9e22b20d93516dd42cd04bf7e4e" Nov 8 01:19:56.345737 containerd[1823]: time="2025-11-08T01:19:56.345451566Z" level=info msg="TearDown network for sandbox \"c23ccb3f481f7ac2cb70e2a185fddcffcc01a9e22b20d93516dd42cd04bf7e4e\" successfully" Nov 8 01:19:56.345737 containerd[1823]: time="2025-11-08T01:19:56.345469214Z" level=info msg="StopPodSandbox for \"c23ccb3f481f7ac2cb70e2a185fddcffcc01a9e22b20d93516dd42cd04bf7e4e\" returns successfully" Nov 8 01:19:56.345906 containerd[1823]: time="2025-11-08T01:19:56.345891074Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rhnl5,Uid:b57117d3-8237-4f8a-aa85-534eb9568949,Namespace:calico-system,Attempt:1,}" Nov 8 01:19:56.349470 containerd[1823]: 2025-11-08 01:19:56.321 [INFO][5542] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1cfd32b089cc048b4b8713dc0807cbb25825434bad6866a627652e142b11e21f" Nov 8 01:19:56.349470 containerd[1823]: 2025-11-08 01:19:56.321 [INFO][5542] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="1cfd32b089cc048b4b8713dc0807cbb25825434bad6866a627652e142b11e21f" iface="eth0" netns="/var/run/netns/cni-fbe61ff2-96cc-d634-f7e5-69f1ac12318d" Nov 8 01:19:56.349470 containerd[1823]: 2025-11-08 01:19:56.321 [INFO][5542] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="1cfd32b089cc048b4b8713dc0807cbb25825434bad6866a627652e142b11e21f" iface="eth0" netns="/var/run/netns/cni-fbe61ff2-96cc-d634-f7e5-69f1ac12318d" Nov 8 01:19:56.349470 containerd[1823]: 2025-11-08 01:19:56.322 [INFO][5542] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="1cfd32b089cc048b4b8713dc0807cbb25825434bad6866a627652e142b11e21f" iface="eth0" netns="/var/run/netns/cni-fbe61ff2-96cc-d634-f7e5-69f1ac12318d" Nov 8 01:19:56.349470 containerd[1823]: 2025-11-08 01:19:56.322 [INFO][5542] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1cfd32b089cc048b4b8713dc0807cbb25825434bad6866a627652e142b11e21f" Nov 8 01:19:56.349470 containerd[1823]: 2025-11-08 01:19:56.322 [INFO][5542] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1cfd32b089cc048b4b8713dc0807cbb25825434bad6866a627652e142b11e21f" Nov 8 01:19:56.349470 containerd[1823]: 2025-11-08 01:19:56.336 [INFO][5613] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="1cfd32b089cc048b4b8713dc0807cbb25825434bad6866a627652e142b11e21f" HandleID="k8s-pod-network.1cfd32b089cc048b4b8713dc0807cbb25825434bad6866a627652e142b11e21f" Workload="ci--4081.3.6--n--8acfe54808-k8s-coredns--668d6bf9bc--8s8cb-eth0" Nov 8 01:19:56.349470 containerd[1823]: 2025-11-08 01:19:56.336 [INFO][5613] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 01:19:56.349470 containerd[1823]: 2025-11-08 01:19:56.343 [INFO][5613] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 01:19:56.349470 containerd[1823]: 2025-11-08 01:19:56.347 [WARNING][5613] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1cfd32b089cc048b4b8713dc0807cbb25825434bad6866a627652e142b11e21f" HandleID="k8s-pod-network.1cfd32b089cc048b4b8713dc0807cbb25825434bad6866a627652e142b11e21f" Workload="ci--4081.3.6--n--8acfe54808-k8s-coredns--668d6bf9bc--8s8cb-eth0" Nov 8 01:19:56.349470 containerd[1823]: 2025-11-08 01:19:56.347 [INFO][5613] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="1cfd32b089cc048b4b8713dc0807cbb25825434bad6866a627652e142b11e21f" HandleID="k8s-pod-network.1cfd32b089cc048b4b8713dc0807cbb25825434bad6866a627652e142b11e21f" Workload="ci--4081.3.6--n--8acfe54808-k8s-coredns--668d6bf9bc--8s8cb-eth0" Nov 8 01:19:56.349470 containerd[1823]: 2025-11-08 01:19:56.347 [INFO][5613] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 01:19:56.349470 containerd[1823]: 2025-11-08 01:19:56.348 [INFO][5542] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1cfd32b089cc048b4b8713dc0807cbb25825434bad6866a627652e142b11e21f" Nov 8 01:19:56.349757 containerd[1823]: time="2025-11-08T01:19:56.349555095Z" level=info msg="TearDown network for sandbox \"1cfd32b089cc048b4b8713dc0807cbb25825434bad6866a627652e142b11e21f\" successfully" Nov 8 01:19:56.349757 containerd[1823]: time="2025-11-08T01:19:56.349574109Z" level=info msg="StopPodSandbox for \"1cfd32b089cc048b4b8713dc0807cbb25825434bad6866a627652e142b11e21f\" returns successfully" Nov 8 01:19:56.349951 containerd[1823]: time="2025-11-08T01:19:56.349936502Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-8s8cb,Uid:94b31311-6cc0-4ac6-9640-c851d1b5747b,Namespace:kube-system,Attempt:1,}" Nov 8 01:19:56.353467 containerd[1823]: 2025-11-08 01:19:56.324 [INFO][5543] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="52967611b1523d15183c72b35fd1d746943d53d20100623e1972f13c4d687ba8" Nov 8 01:19:56.353467 containerd[1823]: 2025-11-08 01:19:56.324 [INFO][5543] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="52967611b1523d15183c72b35fd1d746943d53d20100623e1972f13c4d687ba8" iface="eth0" netns="/var/run/netns/cni-1096cf4f-6e4b-dcf4-6736-a556b90082fc" Nov 8 01:19:56.353467 containerd[1823]: 2025-11-08 01:19:56.324 [INFO][5543] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="52967611b1523d15183c72b35fd1d746943d53d20100623e1972f13c4d687ba8" iface="eth0" netns="/var/run/netns/cni-1096cf4f-6e4b-dcf4-6736-a556b90082fc" Nov 8 01:19:56.353467 containerd[1823]: 2025-11-08 01:19:56.325 [INFO][5543] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="52967611b1523d15183c72b35fd1d746943d53d20100623e1972f13c4d687ba8" iface="eth0" netns="/var/run/netns/cni-1096cf4f-6e4b-dcf4-6736-a556b90082fc" Nov 8 01:19:56.353467 containerd[1823]: 2025-11-08 01:19:56.325 [INFO][5543] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="52967611b1523d15183c72b35fd1d746943d53d20100623e1972f13c4d687ba8" Nov 8 01:19:56.353467 containerd[1823]: 2025-11-08 01:19:56.325 [INFO][5543] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="52967611b1523d15183c72b35fd1d746943d53d20100623e1972f13c4d687ba8" Nov 8 01:19:56.353467 containerd[1823]: 2025-11-08 01:19:56.336 [INFO][5619] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="52967611b1523d15183c72b35fd1d746943d53d20100623e1972f13c4d687ba8" HandleID="k8s-pod-network.52967611b1523d15183c72b35fd1d746943d53d20100623e1972f13c4d687ba8" Workload="ci--4081.3.6--n--8acfe54808-k8s-calico--kube--controllers--7c65d4465b--6mb2v-eth0" Nov 8 01:19:56.353467 containerd[1823]: 2025-11-08 01:19:56.336 [INFO][5619] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 01:19:56.353467 containerd[1823]: 2025-11-08 01:19:56.347 [INFO][5619] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 01:19:56.353467 containerd[1823]: 2025-11-08 01:19:56.351 [WARNING][5619] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="52967611b1523d15183c72b35fd1d746943d53d20100623e1972f13c4d687ba8" HandleID="k8s-pod-network.52967611b1523d15183c72b35fd1d746943d53d20100623e1972f13c4d687ba8" Workload="ci--4081.3.6--n--8acfe54808-k8s-calico--kube--controllers--7c65d4465b--6mb2v-eth0" Nov 8 01:19:56.353467 containerd[1823]: 2025-11-08 01:19:56.351 [INFO][5619] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="52967611b1523d15183c72b35fd1d746943d53d20100623e1972f13c4d687ba8" HandleID="k8s-pod-network.52967611b1523d15183c72b35fd1d746943d53d20100623e1972f13c4d687ba8" Workload="ci--4081.3.6--n--8acfe54808-k8s-calico--kube--controllers--7c65d4465b--6mb2v-eth0" Nov 8 01:19:56.353467 containerd[1823]: 2025-11-08 01:19:56.351 [INFO][5619] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 01:19:56.353467 containerd[1823]: 2025-11-08 01:19:56.352 [INFO][5543] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="52967611b1523d15183c72b35fd1d746943d53d20100623e1972f13c4d687ba8" Nov 8 01:19:56.353773 containerd[1823]: time="2025-11-08T01:19:56.353567981Z" level=info msg="TearDown network for sandbox \"52967611b1523d15183c72b35fd1d746943d53d20100623e1972f13c4d687ba8\" successfully" Nov 8 01:19:56.353773 containerd[1823]: time="2025-11-08T01:19:56.353589195Z" level=info msg="StopPodSandbox for \"52967611b1523d15183c72b35fd1d746943d53d20100623e1972f13c4d687ba8\" returns successfully" Nov 8 01:19:56.354030 containerd[1823]: time="2025-11-08T01:19:56.354017385Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7c65d4465b-6mb2v,Uid:905477ed-861d-42e3-890a-431b3428dc2e,Namespace:calico-system,Attempt:1,}" Nov 8 01:19:56.398382 systemd-networkd[1517]: cali4d4e1c1c153: Link UP Nov 8 01:19:56.398529 systemd-networkd[1517]: cali4d4e1c1c153: Gained carrier Nov 8 01:19:56.403870 containerd[1823]: 2025-11-08 01:19:56.362 [INFO][5646] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--8acfe54808-k8s-coredns--668d6bf9bc--k4vsk-eth0 coredns-668d6bf9bc- kube-system 4cca2fc5-207d-4bfe-9520-a30e2cf67473 943 0 2025-11-08 01:19:21 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.6-n-8acfe54808 coredns-668d6bf9bc-k4vsk eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali4d4e1c1c153 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="f2c6927e5993d745114560aae8f01dacf17575ac71b5025b5903564480453c34" Namespace="kube-system" Pod="coredns-668d6bf9bc-k4vsk" WorkloadEndpoint="ci--4081.3.6--n--8acfe54808-k8s-coredns--668d6bf9bc--k4vsk-" Nov 8 01:19:56.403870 containerd[1823]: 2025-11-08 01:19:56.362 [INFO][5646] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="f2c6927e5993d745114560aae8f01dacf17575ac71b5025b5903564480453c34" Namespace="kube-system" Pod="coredns-668d6bf9bc-k4vsk" WorkloadEndpoint="ci--4081.3.6--n--8acfe54808-k8s-coredns--668d6bf9bc--k4vsk-eth0" Nov 8 01:19:56.403870 containerd[1823]: 2025-11-08 01:19:56.379 [INFO][5722] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f2c6927e5993d745114560aae8f01dacf17575ac71b5025b5903564480453c34" HandleID="k8s-pod-network.f2c6927e5993d745114560aae8f01dacf17575ac71b5025b5903564480453c34" Workload="ci--4081.3.6--n--8acfe54808-k8s-coredns--668d6bf9bc--k4vsk-eth0" Nov 8 01:19:56.403870 containerd[1823]: 2025-11-08 01:19:56.379 [INFO][5722] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="f2c6927e5993d745114560aae8f01dacf17575ac71b5025b5903564480453c34" HandleID="k8s-pod-network.f2c6927e5993d745114560aae8f01dacf17575ac71b5025b5903564480453c34" Workload="ci--4081.3.6--n--8acfe54808-k8s-coredns--668d6bf9bc--k4vsk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001395d0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.6-n-8acfe54808", "pod":"coredns-668d6bf9bc-k4vsk", "timestamp":"2025-11-08 01:19:56.379134594 +0000 UTC"}, Hostname:"ci-4081.3.6-n-8acfe54808", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 01:19:56.403870 containerd[1823]: 2025-11-08 01:19:56.379 [INFO][5722] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 01:19:56.403870 containerd[1823]: 2025-11-08 01:19:56.379 [INFO][5722] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 01:19:56.403870 containerd[1823]: 2025-11-08 01:19:56.379 [INFO][5722] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-8acfe54808' Nov 8 01:19:56.403870 containerd[1823]: 2025-11-08 01:19:56.382 [INFO][5722] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f2c6927e5993d745114560aae8f01dacf17575ac71b5025b5903564480453c34" host="ci-4081.3.6-n-8acfe54808" Nov 8 01:19:56.403870 containerd[1823]: 2025-11-08 01:19:56.385 [INFO][5722] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-8acfe54808" Nov 8 01:19:56.403870 containerd[1823]: 2025-11-08 01:19:56.387 [INFO][5722] ipam/ipam.go 511: Trying affinity for 192.168.24.64/26 host="ci-4081.3.6-n-8acfe54808" Nov 8 01:19:56.403870 containerd[1823]: 2025-11-08 01:19:56.388 [INFO][5722] ipam/ipam.go 158: Attempting to load block cidr=192.168.24.64/26 host="ci-4081.3.6-n-8acfe54808" Nov 8 01:19:56.403870 containerd[1823]: 2025-11-08 01:19:56.389 [INFO][5722] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.24.64/26 host="ci-4081.3.6-n-8acfe54808" Nov 8 01:19:56.403870 containerd[1823]: 2025-11-08 01:19:56.390 [INFO][5722] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.24.64/26 handle="k8s-pod-network.f2c6927e5993d745114560aae8f01dacf17575ac71b5025b5903564480453c34" host="ci-4081.3.6-n-8acfe54808" Nov 8 01:19:56.403870 containerd[1823]: 2025-11-08 01:19:56.390 [INFO][5722] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.f2c6927e5993d745114560aae8f01dacf17575ac71b5025b5903564480453c34 Nov 8 01:19:56.403870 containerd[1823]: 2025-11-08 01:19:56.392 [INFO][5722] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.24.64/26 handle="k8s-pod-network.f2c6927e5993d745114560aae8f01dacf17575ac71b5025b5903564480453c34" host="ci-4081.3.6-n-8acfe54808" Nov 8 01:19:56.403870 containerd[1823]: 2025-11-08 01:19:56.395 [INFO][5722] ipam/ipam.go 1262: Successfully claimed 
IPs: [192.168.24.69/26] block=192.168.24.64/26 handle="k8s-pod-network.f2c6927e5993d745114560aae8f01dacf17575ac71b5025b5903564480453c34" host="ci-4081.3.6-n-8acfe54808" Nov 8 01:19:56.403870 containerd[1823]: 2025-11-08 01:19:56.395 [INFO][5722] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.24.69/26] handle="k8s-pod-network.f2c6927e5993d745114560aae8f01dacf17575ac71b5025b5903564480453c34" host="ci-4081.3.6-n-8acfe54808" Nov 8 01:19:56.403870 containerd[1823]: 2025-11-08 01:19:56.395 [INFO][5722] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 01:19:56.403870 containerd[1823]: 2025-11-08 01:19:56.395 [INFO][5722] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.24.69/26] IPv6=[] ContainerID="f2c6927e5993d745114560aae8f01dacf17575ac71b5025b5903564480453c34" HandleID="k8s-pod-network.f2c6927e5993d745114560aae8f01dacf17575ac71b5025b5903564480453c34" Workload="ci--4081.3.6--n--8acfe54808-k8s-coredns--668d6bf9bc--k4vsk-eth0" Nov 8 01:19:56.404372 containerd[1823]: 2025-11-08 01:19:56.396 [INFO][5646] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f2c6927e5993d745114560aae8f01dacf17575ac71b5025b5903564480453c34" Namespace="kube-system" Pod="coredns-668d6bf9bc-k4vsk" WorkloadEndpoint="ci--4081.3.6--n--8acfe54808-k8s-coredns--668d6bf9bc--k4vsk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--8acfe54808-k8s-coredns--668d6bf9bc--k4vsk-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"4cca2fc5-207d-4bfe-9520-a30e2cf67473", ResourceVersion:"943", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 1, 19, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-8acfe54808", ContainerID:"", Pod:"coredns-668d6bf9bc-k4vsk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.24.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4d4e1c1c153", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 01:19:56.404372 containerd[1823]: 2025-11-08 01:19:56.397 [INFO][5646] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.24.69/32] ContainerID="f2c6927e5993d745114560aae8f01dacf17575ac71b5025b5903564480453c34" Namespace="kube-system" Pod="coredns-668d6bf9bc-k4vsk" WorkloadEndpoint="ci--4081.3.6--n--8acfe54808-k8s-coredns--668d6bf9bc--k4vsk-eth0" Nov 8 01:19:56.404372 containerd[1823]: 2025-11-08 01:19:56.397 [INFO][5646] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4d4e1c1c153 ContainerID="f2c6927e5993d745114560aae8f01dacf17575ac71b5025b5903564480453c34" Namespace="kube-system" Pod="coredns-668d6bf9bc-k4vsk" WorkloadEndpoint="ci--4081.3.6--n--8acfe54808-k8s-coredns--668d6bf9bc--k4vsk-eth0" Nov 8 01:19:56.404372 containerd[1823]: 2025-11-08 01:19:56.398 [INFO][5646] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="f2c6927e5993d745114560aae8f01dacf17575ac71b5025b5903564480453c34" Namespace="kube-system" Pod="coredns-668d6bf9bc-k4vsk" WorkloadEndpoint="ci--4081.3.6--n--8acfe54808-k8s-coredns--668d6bf9bc--k4vsk-eth0" Nov 8 01:19:56.404372 containerd[1823]: 2025-11-08 01:19:56.398 [INFO][5646] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f2c6927e5993d745114560aae8f01dacf17575ac71b5025b5903564480453c34" Namespace="kube-system" Pod="coredns-668d6bf9bc-k4vsk" WorkloadEndpoint="ci--4081.3.6--n--8acfe54808-k8s-coredns--668d6bf9bc--k4vsk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--8acfe54808-k8s-coredns--668d6bf9bc--k4vsk-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"4cca2fc5-207d-4bfe-9520-a30e2cf67473", ResourceVersion:"943", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 1, 19, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-8acfe54808", ContainerID:"f2c6927e5993d745114560aae8f01dacf17575ac71b5025b5903564480453c34", Pod:"coredns-668d6bf9bc-k4vsk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.24.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4d4e1c1c153", MAC:"52:59:76:21:51:85", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 01:19:56.404372 containerd[1823]: 2025-11-08 01:19:56.402 [INFO][5646] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f2c6927e5993d745114560aae8f01dacf17575ac71b5025b5903564480453c34" Namespace="kube-system" Pod="coredns-668d6bf9bc-k4vsk" WorkloadEndpoint="ci--4081.3.6--n--8acfe54808-k8s-coredns--668d6bf9bc--k4vsk-eth0" Nov 8 01:19:56.411821 kubelet[3076]: E1108 01:19:56.411800 3076 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d6d97687b-82v26" podUID="e679c708-5dc2-455f-8f76-0a5b47442761" Nov 8 01:19:56.411821 kubelet[3076]: E1108 01:19:56.411801 3076 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": 
ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-dmnhl" podUID="71f5fc0d-399b-4a93-8104-f8dd3ea1c5df" Nov 8 01:19:56.412248 containerd[1823]: time="2025-11-08T01:19:56.412210609Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 01:19:56.412248 containerd[1823]: time="2025-11-08T01:19:56.412238375Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 01:19:56.412248 containerd[1823]: time="2025-11-08T01:19:56.412245462Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 01:19:56.412345 containerd[1823]: time="2025-11-08T01:19:56.412297846Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 01:19:56.413429 systemd[1]: run-netns-cni\x2dfbe61ff2\x2d96cc\x2dd634\x2df7e5\x2d69f1ac12318d.mount: Deactivated successfully. Nov 8 01:19:56.413514 systemd[1]: run-netns-cni\x2d1096cf4f\x2d6e4b\x2ddcf4\x2d6736\x2da556b90082fc.mount: Deactivated successfully. Nov 8 01:19:56.413572 systemd[1]: run-netns-cni\x2dc651ede8\x2d6a0a\x2d3637\x2dbb90\x2de3161b17efbd.mount: Deactivated successfully. Nov 8 01:19:56.431712 systemd[1]: Started cri-containerd-f2c6927e5993d745114560aae8f01dacf17575ac71b5025b5903564480453c34.scope - libcontainer container f2c6927e5993d745114560aae8f01dacf17575ac71b5025b5903564480453c34. 
Nov 8 01:19:56.454044 containerd[1823]: time="2025-11-08T01:19:56.454020219Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-k4vsk,Uid:4cca2fc5-207d-4bfe-9520-a30e2cf67473,Namespace:kube-system,Attempt:1,} returns sandbox id \"f2c6927e5993d745114560aae8f01dacf17575ac71b5025b5903564480453c34\"" Nov 8 01:19:56.455233 containerd[1823]: time="2025-11-08T01:19:56.455220626Z" level=info msg="CreateContainer within sandbox \"f2c6927e5993d745114560aae8f01dacf17575ac71b5025b5903564480453c34\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 8 01:19:56.459752 containerd[1823]: time="2025-11-08T01:19:56.459734574Z" level=info msg="CreateContainer within sandbox \"f2c6927e5993d745114560aae8f01dacf17575ac71b5025b5903564480453c34\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"51f6448c73c6cf7bc91b75cc7892ad171a5c92f544f4e718bbf15d54ba942f76\"" Nov 8 01:19:56.460010 containerd[1823]: time="2025-11-08T01:19:56.459998515Z" level=info msg="StartContainer for \"51f6448c73c6cf7bc91b75cc7892ad171a5c92f544f4e718bbf15d54ba942f76\"" Nov 8 01:19:56.479635 systemd[1]: Started cri-containerd-51f6448c73c6cf7bc91b75cc7892ad171a5c92f544f4e718bbf15d54ba942f76.scope - libcontainer container 51f6448c73c6cf7bc91b75cc7892ad171a5c92f544f4e718bbf15d54ba942f76. 
Nov 8 01:19:56.492481 containerd[1823]: time="2025-11-08T01:19:56.492451398Z" level=info msg="StartContainer for \"51f6448c73c6cf7bc91b75cc7892ad171a5c92f544f4e718bbf15d54ba942f76\" returns successfully" Nov 8 01:19:56.499655 systemd-networkd[1517]: cali73a95ba6d53: Link UP Nov 8 01:19:56.499829 systemd-networkd[1517]: cali73a95ba6d53: Gained carrier Nov 8 01:19:56.507396 containerd[1823]: 2025-11-08 01:19:56.367 [INFO][5662] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--8acfe54808-k8s-csi--node--driver--rhnl5-eth0 csi-node-driver- calico-system b57117d3-8237-4f8a-aa85-534eb9568949 942 0 2025-11-08 01:19:33 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081.3.6-n-8acfe54808 csi-node-driver-rhnl5 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali73a95ba6d53 [] [] }} ContainerID="7db6b3d4272c53227d46d93c72420841fdba3c5ddde35a9378c4c7000f1c6eb0" Namespace="calico-system" Pod="csi-node-driver-rhnl5" WorkloadEndpoint="ci--4081.3.6--n--8acfe54808-k8s-csi--node--driver--rhnl5-" Nov 8 01:19:56.507396 containerd[1823]: 2025-11-08 01:19:56.368 [INFO][5662] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7db6b3d4272c53227d46d93c72420841fdba3c5ddde35a9378c4c7000f1c6eb0" Namespace="calico-system" Pod="csi-node-driver-rhnl5" WorkloadEndpoint="ci--4081.3.6--n--8acfe54808-k8s-csi--node--driver--rhnl5-eth0" Nov 8 01:19:56.507396 containerd[1823]: 2025-11-08 01:19:56.385 [INFO][5737] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7db6b3d4272c53227d46d93c72420841fdba3c5ddde35a9378c4c7000f1c6eb0" 
HandleID="k8s-pod-network.7db6b3d4272c53227d46d93c72420841fdba3c5ddde35a9378c4c7000f1c6eb0" Workload="ci--4081.3.6--n--8acfe54808-k8s-csi--node--driver--rhnl5-eth0" Nov 8 01:19:56.507396 containerd[1823]: 2025-11-08 01:19:56.385 [INFO][5737] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="7db6b3d4272c53227d46d93c72420841fdba3c5ddde35a9378c4c7000f1c6eb0" HandleID="k8s-pod-network.7db6b3d4272c53227d46d93c72420841fdba3c5ddde35a9378c4c7000f1c6eb0" Workload="ci--4081.3.6--n--8acfe54808-k8s-csi--node--driver--rhnl5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f450), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-8acfe54808", "pod":"csi-node-driver-rhnl5", "timestamp":"2025-11-08 01:19:56.385310799 +0000 UTC"}, Hostname:"ci-4081.3.6-n-8acfe54808", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 01:19:56.507396 containerd[1823]: 2025-11-08 01:19:56.385 [INFO][5737] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 01:19:56.507396 containerd[1823]: 2025-11-08 01:19:56.395 [INFO][5737] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 01:19:56.507396 containerd[1823]: 2025-11-08 01:19:56.395 [INFO][5737] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-8acfe54808' Nov 8 01:19:56.507396 containerd[1823]: 2025-11-08 01:19:56.484 [INFO][5737] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7db6b3d4272c53227d46d93c72420841fdba3c5ddde35a9378c4c7000f1c6eb0" host="ci-4081.3.6-n-8acfe54808" Nov 8 01:19:56.507396 containerd[1823]: 2025-11-08 01:19:56.486 [INFO][5737] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-8acfe54808" Nov 8 01:19:56.507396 containerd[1823]: 2025-11-08 01:19:56.488 [INFO][5737] ipam/ipam.go 511: Trying affinity for 192.168.24.64/26 host="ci-4081.3.6-n-8acfe54808" Nov 8 01:19:56.507396 containerd[1823]: 2025-11-08 01:19:56.489 [INFO][5737] ipam/ipam.go 158: Attempting to load block cidr=192.168.24.64/26 host="ci-4081.3.6-n-8acfe54808" Nov 8 01:19:56.507396 containerd[1823]: 2025-11-08 01:19:56.490 [INFO][5737] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.24.64/26 host="ci-4081.3.6-n-8acfe54808" Nov 8 01:19:56.507396 containerd[1823]: 2025-11-08 01:19:56.490 [INFO][5737] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.24.64/26 handle="k8s-pod-network.7db6b3d4272c53227d46d93c72420841fdba3c5ddde35a9378c4c7000f1c6eb0" host="ci-4081.3.6-n-8acfe54808" Nov 8 01:19:56.507396 containerd[1823]: 2025-11-08 01:19:56.491 [INFO][5737] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.7db6b3d4272c53227d46d93c72420841fdba3c5ddde35a9378c4c7000f1c6eb0 Nov 8 01:19:56.507396 containerd[1823]: 2025-11-08 01:19:56.494 [INFO][5737] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.24.64/26 handle="k8s-pod-network.7db6b3d4272c53227d46d93c72420841fdba3c5ddde35a9378c4c7000f1c6eb0" host="ci-4081.3.6-n-8acfe54808" Nov 8 01:19:56.507396 containerd[1823]: 2025-11-08 01:19:56.497 [INFO][5737] ipam/ipam.go 1262: Successfully claimed 
IPs: [192.168.24.70/26] block=192.168.24.64/26 handle="k8s-pod-network.7db6b3d4272c53227d46d93c72420841fdba3c5ddde35a9378c4c7000f1c6eb0" host="ci-4081.3.6-n-8acfe54808" Nov 8 01:19:56.507396 containerd[1823]: 2025-11-08 01:19:56.497 [INFO][5737] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.24.70/26] handle="k8s-pod-network.7db6b3d4272c53227d46d93c72420841fdba3c5ddde35a9378c4c7000f1c6eb0" host="ci-4081.3.6-n-8acfe54808" Nov 8 01:19:56.507396 containerd[1823]: 2025-11-08 01:19:56.497 [INFO][5737] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 01:19:56.507396 containerd[1823]: 2025-11-08 01:19:56.497 [INFO][5737] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.24.70/26] IPv6=[] ContainerID="7db6b3d4272c53227d46d93c72420841fdba3c5ddde35a9378c4c7000f1c6eb0" HandleID="k8s-pod-network.7db6b3d4272c53227d46d93c72420841fdba3c5ddde35a9378c4c7000f1c6eb0" Workload="ci--4081.3.6--n--8acfe54808-k8s-csi--node--driver--rhnl5-eth0" Nov 8 01:19:56.507862 containerd[1823]: 2025-11-08 01:19:56.498 [INFO][5662] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7db6b3d4272c53227d46d93c72420841fdba3c5ddde35a9378c4c7000f1c6eb0" Namespace="calico-system" Pod="csi-node-driver-rhnl5" WorkloadEndpoint="ci--4081.3.6--n--8acfe54808-k8s-csi--node--driver--rhnl5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--8acfe54808-k8s-csi--node--driver--rhnl5-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b57117d3-8237-4f8a-aa85-534eb9568949", ResourceVersion:"942", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 1, 19, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", 
"pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-8acfe54808", ContainerID:"", Pod:"csi-node-driver-rhnl5", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.24.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali73a95ba6d53", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 01:19:56.507862 containerd[1823]: 2025-11-08 01:19:56.498 [INFO][5662] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.24.70/32] ContainerID="7db6b3d4272c53227d46d93c72420841fdba3c5ddde35a9378c4c7000f1c6eb0" Namespace="calico-system" Pod="csi-node-driver-rhnl5" WorkloadEndpoint="ci--4081.3.6--n--8acfe54808-k8s-csi--node--driver--rhnl5-eth0" Nov 8 01:19:56.507862 containerd[1823]: 2025-11-08 01:19:56.498 [INFO][5662] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali73a95ba6d53 ContainerID="7db6b3d4272c53227d46d93c72420841fdba3c5ddde35a9378c4c7000f1c6eb0" Namespace="calico-system" Pod="csi-node-driver-rhnl5" WorkloadEndpoint="ci--4081.3.6--n--8acfe54808-k8s-csi--node--driver--rhnl5-eth0" Nov 8 01:19:56.507862 containerd[1823]: 2025-11-08 01:19:56.499 [INFO][5662] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7db6b3d4272c53227d46d93c72420841fdba3c5ddde35a9378c4c7000f1c6eb0" Namespace="calico-system" Pod="csi-node-driver-rhnl5" WorkloadEndpoint="ci--4081.3.6--n--8acfe54808-k8s-csi--node--driver--rhnl5-eth0" Nov 8 01:19:56.507862 containerd[1823]: 2025-11-08 01:19:56.500 
[INFO][5662] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7db6b3d4272c53227d46d93c72420841fdba3c5ddde35a9378c4c7000f1c6eb0" Namespace="calico-system" Pod="csi-node-driver-rhnl5" WorkloadEndpoint="ci--4081.3.6--n--8acfe54808-k8s-csi--node--driver--rhnl5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--8acfe54808-k8s-csi--node--driver--rhnl5-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b57117d3-8237-4f8a-aa85-534eb9568949", ResourceVersion:"942", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 1, 19, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-8acfe54808", ContainerID:"7db6b3d4272c53227d46d93c72420841fdba3c5ddde35a9378c4c7000f1c6eb0", Pod:"csi-node-driver-rhnl5", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.24.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali73a95ba6d53", MAC:"b2:b1:72:33:39:0b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 01:19:56.507862 containerd[1823]: 2025-11-08 01:19:56.505 [INFO][5662] 
cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7db6b3d4272c53227d46d93c72420841fdba3c5ddde35a9378c4c7000f1c6eb0" Namespace="calico-system" Pod="csi-node-driver-rhnl5" WorkloadEndpoint="ci--4081.3.6--n--8acfe54808-k8s-csi--node--driver--rhnl5-eth0" Nov 8 01:19:56.523851 containerd[1823]: time="2025-11-08T01:19:56.523586449Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 01:19:56.523851 containerd[1823]: time="2025-11-08T01:19:56.523840667Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 01:19:56.523851 containerd[1823]: time="2025-11-08T01:19:56.523848960Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 01:19:56.523994 containerd[1823]: time="2025-11-08T01:19:56.523892338Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 01:19:56.538632 systemd[1]: Started cri-containerd-7db6b3d4272c53227d46d93c72420841fdba3c5ddde35a9378c4c7000f1c6eb0.scope - libcontainer container 7db6b3d4272c53227d46d93c72420841fdba3c5ddde35a9378c4c7000f1c6eb0. 
Nov 8 01:19:56.549512 containerd[1823]: time="2025-11-08T01:19:56.549489402Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rhnl5,Uid:b57117d3-8237-4f8a-aa85-534eb9568949,Namespace:calico-system,Attempt:1,} returns sandbox id \"7db6b3d4272c53227d46d93c72420841fdba3c5ddde35a9378c4c7000f1c6eb0\"" Nov 8 01:19:56.550151 containerd[1823]: time="2025-11-08T01:19:56.550140082Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 8 01:19:56.599252 systemd-networkd[1517]: calibfe1b8f9d64: Link UP Nov 8 01:19:56.600721 systemd-networkd[1517]: calibfe1b8f9d64: Gained carrier Nov 8 01:19:56.606888 containerd[1823]: 2025-11-08 01:19:56.372 [INFO][5681] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--8acfe54808-k8s-coredns--668d6bf9bc--8s8cb-eth0 coredns-668d6bf9bc- kube-system 94b31311-6cc0-4ac6-9640-c851d1b5747b 944 0 2025-11-08 01:19:21 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.6-n-8acfe54808 coredns-668d6bf9bc-8s8cb eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calibfe1b8f9d64 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="45ae86a40a9555be61bcf71e24b58ca6701875e1c76202efc90ff6ed741e1d2b" Namespace="kube-system" Pod="coredns-668d6bf9bc-8s8cb" WorkloadEndpoint="ci--4081.3.6--n--8acfe54808-k8s-coredns--668d6bf9bc--8s8cb-" Nov 8 01:19:56.606888 containerd[1823]: 2025-11-08 01:19:56.372 [INFO][5681] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="45ae86a40a9555be61bcf71e24b58ca6701875e1c76202efc90ff6ed741e1d2b" Namespace="kube-system" Pod="coredns-668d6bf9bc-8s8cb" WorkloadEndpoint="ci--4081.3.6--n--8acfe54808-k8s-coredns--668d6bf9bc--8s8cb-eth0" Nov 8 01:19:56.606888 containerd[1823]: 2025-11-08 01:19:56.386 [INFO][5743] 
ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="45ae86a40a9555be61bcf71e24b58ca6701875e1c76202efc90ff6ed741e1d2b" HandleID="k8s-pod-network.45ae86a40a9555be61bcf71e24b58ca6701875e1c76202efc90ff6ed741e1d2b" Workload="ci--4081.3.6--n--8acfe54808-k8s-coredns--668d6bf9bc--8s8cb-eth0" Nov 8 01:19:56.606888 containerd[1823]: 2025-11-08 01:19:56.386 [INFO][5743] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="45ae86a40a9555be61bcf71e24b58ca6701875e1c76202efc90ff6ed741e1d2b" HandleID="k8s-pod-network.45ae86a40a9555be61bcf71e24b58ca6701875e1c76202efc90ff6ed741e1d2b" Workload="ci--4081.3.6--n--8acfe54808-k8s-coredns--668d6bf9bc--8s8cb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f610), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.6-n-8acfe54808", "pod":"coredns-668d6bf9bc-8s8cb", "timestamp":"2025-11-08 01:19:56.386086076 +0000 UTC"}, Hostname:"ci-4081.3.6-n-8acfe54808", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 01:19:56.606888 containerd[1823]: 2025-11-08 01:19:56.386 [INFO][5743] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 01:19:56.606888 containerd[1823]: 2025-11-08 01:19:56.497 [INFO][5743] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 01:19:56.606888 containerd[1823]: 2025-11-08 01:19:56.497 [INFO][5743] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-8acfe54808' Nov 8 01:19:56.606888 containerd[1823]: 2025-11-08 01:19:56.584 [INFO][5743] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.45ae86a40a9555be61bcf71e24b58ca6701875e1c76202efc90ff6ed741e1d2b" host="ci-4081.3.6-n-8acfe54808" Nov 8 01:19:56.606888 containerd[1823]: 2025-11-08 01:19:56.587 [INFO][5743] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-8acfe54808" Nov 8 01:19:56.606888 containerd[1823]: 2025-11-08 01:19:56.589 [INFO][5743] ipam/ipam.go 511: Trying affinity for 192.168.24.64/26 host="ci-4081.3.6-n-8acfe54808" Nov 8 01:19:56.606888 containerd[1823]: 2025-11-08 01:19:56.590 [INFO][5743] ipam/ipam.go 158: Attempting to load block cidr=192.168.24.64/26 host="ci-4081.3.6-n-8acfe54808" Nov 8 01:19:56.606888 containerd[1823]: 2025-11-08 01:19:56.591 [INFO][5743] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.24.64/26 host="ci-4081.3.6-n-8acfe54808" Nov 8 01:19:56.606888 containerd[1823]: 2025-11-08 01:19:56.591 [INFO][5743] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.24.64/26 handle="k8s-pod-network.45ae86a40a9555be61bcf71e24b58ca6701875e1c76202efc90ff6ed741e1d2b" host="ci-4081.3.6-n-8acfe54808" Nov 8 01:19:56.606888 containerd[1823]: 2025-11-08 01:19:56.592 [INFO][5743] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.45ae86a40a9555be61bcf71e24b58ca6701875e1c76202efc90ff6ed741e1d2b Nov 8 01:19:56.606888 containerd[1823]: 2025-11-08 01:19:56.594 [INFO][5743] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.24.64/26 handle="k8s-pod-network.45ae86a40a9555be61bcf71e24b58ca6701875e1c76202efc90ff6ed741e1d2b" host="ci-4081.3.6-n-8acfe54808" Nov 8 01:19:56.606888 containerd[1823]: 2025-11-08 01:19:56.597 [INFO][5743] ipam/ipam.go 1262: Successfully claimed 
IPs: [192.168.24.71/26] block=192.168.24.64/26 handle="k8s-pod-network.45ae86a40a9555be61bcf71e24b58ca6701875e1c76202efc90ff6ed741e1d2b" host="ci-4081.3.6-n-8acfe54808" Nov 8 01:19:56.606888 containerd[1823]: 2025-11-08 01:19:56.597 [INFO][5743] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.24.71/26] handle="k8s-pod-network.45ae86a40a9555be61bcf71e24b58ca6701875e1c76202efc90ff6ed741e1d2b" host="ci-4081.3.6-n-8acfe54808" Nov 8 01:19:56.606888 containerd[1823]: 2025-11-08 01:19:56.597 [INFO][5743] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 01:19:56.606888 containerd[1823]: 2025-11-08 01:19:56.597 [INFO][5743] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.24.71/26] IPv6=[] ContainerID="45ae86a40a9555be61bcf71e24b58ca6701875e1c76202efc90ff6ed741e1d2b" HandleID="k8s-pod-network.45ae86a40a9555be61bcf71e24b58ca6701875e1c76202efc90ff6ed741e1d2b" Workload="ci--4081.3.6--n--8acfe54808-k8s-coredns--668d6bf9bc--8s8cb-eth0" Nov 8 01:19:56.607303 containerd[1823]: 2025-11-08 01:19:56.598 [INFO][5681] cni-plugin/k8s.go 418: Populated endpoint ContainerID="45ae86a40a9555be61bcf71e24b58ca6701875e1c76202efc90ff6ed741e1d2b" Namespace="kube-system" Pod="coredns-668d6bf9bc-8s8cb" WorkloadEndpoint="ci--4081.3.6--n--8acfe54808-k8s-coredns--668d6bf9bc--8s8cb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--8acfe54808-k8s-coredns--668d6bf9bc--8s8cb-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"94b31311-6cc0-4ac6-9640-c851d1b5747b", ResourceVersion:"944", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 1, 19, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-8acfe54808", ContainerID:"", Pod:"coredns-668d6bf9bc-8s8cb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.24.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibfe1b8f9d64", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 01:19:56.607303 containerd[1823]: 2025-11-08 01:19:56.598 [INFO][5681] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.24.71/32] ContainerID="45ae86a40a9555be61bcf71e24b58ca6701875e1c76202efc90ff6ed741e1d2b" Namespace="kube-system" Pod="coredns-668d6bf9bc-8s8cb" WorkloadEndpoint="ci--4081.3.6--n--8acfe54808-k8s-coredns--668d6bf9bc--8s8cb-eth0" Nov 8 01:19:56.607303 containerd[1823]: 2025-11-08 01:19:56.598 [INFO][5681] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibfe1b8f9d64 ContainerID="45ae86a40a9555be61bcf71e24b58ca6701875e1c76202efc90ff6ed741e1d2b" Namespace="kube-system" Pod="coredns-668d6bf9bc-8s8cb" WorkloadEndpoint="ci--4081.3.6--n--8acfe54808-k8s-coredns--668d6bf9bc--8s8cb-eth0" Nov 8 01:19:56.607303 containerd[1823]: 2025-11-08 01:19:56.599 [INFO][5681] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="45ae86a40a9555be61bcf71e24b58ca6701875e1c76202efc90ff6ed741e1d2b" Namespace="kube-system" Pod="coredns-668d6bf9bc-8s8cb" WorkloadEndpoint="ci--4081.3.6--n--8acfe54808-k8s-coredns--668d6bf9bc--8s8cb-eth0" Nov 8 01:19:56.607303 containerd[1823]: 2025-11-08 01:19:56.599 [INFO][5681] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="45ae86a40a9555be61bcf71e24b58ca6701875e1c76202efc90ff6ed741e1d2b" Namespace="kube-system" Pod="coredns-668d6bf9bc-8s8cb" WorkloadEndpoint="ci--4081.3.6--n--8acfe54808-k8s-coredns--668d6bf9bc--8s8cb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--8acfe54808-k8s-coredns--668d6bf9bc--8s8cb-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"94b31311-6cc0-4ac6-9640-c851d1b5747b", ResourceVersion:"944", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 1, 19, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-8acfe54808", ContainerID:"45ae86a40a9555be61bcf71e24b58ca6701875e1c76202efc90ff6ed741e1d2b", Pod:"coredns-668d6bf9bc-8s8cb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.24.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibfe1b8f9d64", MAC:"06:cb:92:cc:4f:a1", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 01:19:56.607303 containerd[1823]: 2025-11-08 01:19:56.606 [INFO][5681] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="45ae86a40a9555be61bcf71e24b58ca6701875e1c76202efc90ff6ed741e1d2b" Namespace="kube-system" Pod="coredns-668d6bf9bc-8s8cb" WorkloadEndpoint="ci--4081.3.6--n--8acfe54808-k8s-coredns--668d6bf9bc--8s8cb-eth0" Nov 8 01:19:56.615744 containerd[1823]: time="2025-11-08T01:19:56.615665437Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 01:19:56.615912 containerd[1823]: time="2025-11-08T01:19:56.615868443Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 01:19:56.615912 containerd[1823]: time="2025-11-08T01:19:56.615878390Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 01:19:56.615967 containerd[1823]: time="2025-11-08T01:19:56.615922883Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 01:19:56.634685 systemd[1]: Started cri-containerd-45ae86a40a9555be61bcf71e24b58ca6701875e1c76202efc90ff6ed741e1d2b.scope - libcontainer container 45ae86a40a9555be61bcf71e24b58ca6701875e1c76202efc90ff6ed741e1d2b. 
Nov 8 01:19:56.656828 containerd[1823]: time="2025-11-08T01:19:56.656782575Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-8s8cb,Uid:94b31311-6cc0-4ac6-9640-c851d1b5747b,Namespace:kube-system,Attempt:1,} returns sandbox id \"45ae86a40a9555be61bcf71e24b58ca6701875e1c76202efc90ff6ed741e1d2b\"" Nov 8 01:19:56.657895 containerd[1823]: time="2025-11-08T01:19:56.657881190Z" level=info msg="CreateContainer within sandbox \"45ae86a40a9555be61bcf71e24b58ca6701875e1c76202efc90ff6ed741e1d2b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 8 01:19:56.662428 containerd[1823]: time="2025-11-08T01:19:56.662407164Z" level=info msg="CreateContainer within sandbox \"45ae86a40a9555be61bcf71e24b58ca6701875e1c76202efc90ff6ed741e1d2b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7838cf500c29d05a0f64f2eefd66bdb30171d1343a3b37bffb5e3da860b54e48\"" Nov 8 01:19:56.662705 containerd[1823]: time="2025-11-08T01:19:56.662691237Z" level=info msg="StartContainer for \"7838cf500c29d05a0f64f2eefd66bdb30171d1343a3b37bffb5e3da860b54e48\"" Nov 8 01:19:56.689656 systemd[1]: Started cri-containerd-7838cf500c29d05a0f64f2eefd66bdb30171d1343a3b37bffb5e3da860b54e48.scope - libcontainer container 7838cf500c29d05a0f64f2eefd66bdb30171d1343a3b37bffb5e3da860b54e48. 
Nov 8 01:19:56.699186 systemd-networkd[1517]: califfc42f7f3ae: Link UP Nov 8 01:19:56.699454 systemd-networkd[1517]: califfc42f7f3ae: Gained carrier Nov 8 01:19:56.704243 containerd[1823]: time="2025-11-08T01:19:56.704218601Z" level=info msg="StartContainer for \"7838cf500c29d05a0f64f2eefd66bdb30171d1343a3b37bffb5e3da860b54e48\" returns successfully" Nov 8 01:19:56.705656 containerd[1823]: 2025-11-08 01:19:56.378 [INFO][5700] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--8acfe54808-k8s-calico--kube--controllers--7c65d4465b--6mb2v-eth0 calico-kube-controllers-7c65d4465b- calico-system 905477ed-861d-42e3-890a-431b3428dc2e 945 0 2025-11-08 01:19:33 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7c65d4465b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081.3.6-n-8acfe54808 calico-kube-controllers-7c65d4465b-6mb2v eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] califfc42f7f3ae [] [] }} ContainerID="fd4903fa5605973f4168a1122a266f845477f9981d701663bb9b899feb8a9f9c" Namespace="calico-system" Pod="calico-kube-controllers-7c65d4465b-6mb2v" WorkloadEndpoint="ci--4081.3.6--n--8acfe54808-k8s-calico--kube--controllers--7c65d4465b--6mb2v-" Nov 8 01:19:56.705656 containerd[1823]: 2025-11-08 01:19:56.379 [INFO][5700] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="fd4903fa5605973f4168a1122a266f845477f9981d701663bb9b899feb8a9f9c" Namespace="calico-system" Pod="calico-kube-controllers-7c65d4465b-6mb2v" WorkloadEndpoint="ci--4081.3.6--n--8acfe54808-k8s-calico--kube--controllers--7c65d4465b--6mb2v-eth0" Nov 8 01:19:56.705656 containerd[1823]: 2025-11-08 01:19:56.393 [INFO][5757] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="fd4903fa5605973f4168a1122a266f845477f9981d701663bb9b899feb8a9f9c" HandleID="k8s-pod-network.fd4903fa5605973f4168a1122a266f845477f9981d701663bb9b899feb8a9f9c" Workload="ci--4081.3.6--n--8acfe54808-k8s-calico--kube--controllers--7c65d4465b--6mb2v-eth0" Nov 8 01:19:56.705656 containerd[1823]: 2025-11-08 01:19:56.393 [INFO][5757] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="fd4903fa5605973f4168a1122a266f845477f9981d701663bb9b899feb8a9f9c" HandleID="k8s-pod-network.fd4903fa5605973f4168a1122a266f845477f9981d701663bb9b899feb8a9f9c" Workload="ci--4081.3.6--n--8acfe54808-k8s-calico--kube--controllers--7c65d4465b--6mb2v-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001a5830), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-8acfe54808", "pod":"calico-kube-controllers-7c65d4465b-6mb2v", "timestamp":"2025-11-08 01:19:56.393545899 +0000 UTC"}, Hostname:"ci-4081.3.6-n-8acfe54808", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 01:19:56.705656 containerd[1823]: 2025-11-08 01:19:56.393 [INFO][5757] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 01:19:56.705656 containerd[1823]: 2025-11-08 01:19:56.597 [INFO][5757] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 01:19:56.705656 containerd[1823]: 2025-11-08 01:19:56.597 [INFO][5757] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-8acfe54808' Nov 8 01:19:56.705656 containerd[1823]: 2025-11-08 01:19:56.684 [INFO][5757] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.fd4903fa5605973f4168a1122a266f845477f9981d701663bb9b899feb8a9f9c" host="ci-4081.3.6-n-8acfe54808" Nov 8 01:19:56.705656 containerd[1823]: 2025-11-08 01:19:56.687 [INFO][5757] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-8acfe54808" Nov 8 01:19:56.705656 containerd[1823]: 2025-11-08 01:19:56.689 [INFO][5757] ipam/ipam.go 511: Trying affinity for 192.168.24.64/26 host="ci-4081.3.6-n-8acfe54808" Nov 8 01:19:56.705656 containerd[1823]: 2025-11-08 01:19:56.690 [INFO][5757] ipam/ipam.go 158: Attempting to load block cidr=192.168.24.64/26 host="ci-4081.3.6-n-8acfe54808" Nov 8 01:19:56.705656 containerd[1823]: 2025-11-08 01:19:56.691 [INFO][5757] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.24.64/26 host="ci-4081.3.6-n-8acfe54808" Nov 8 01:19:56.705656 containerd[1823]: 2025-11-08 01:19:56.691 [INFO][5757] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.24.64/26 handle="k8s-pod-network.fd4903fa5605973f4168a1122a266f845477f9981d701663bb9b899feb8a9f9c" host="ci-4081.3.6-n-8acfe54808" Nov 8 01:19:56.705656 containerd[1823]: 2025-11-08 01:19:56.692 [INFO][5757] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.fd4903fa5605973f4168a1122a266f845477f9981d701663bb9b899feb8a9f9c Nov 8 01:19:56.705656 containerd[1823]: 2025-11-08 01:19:56.693 [INFO][5757] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.24.64/26 handle="k8s-pod-network.fd4903fa5605973f4168a1122a266f845477f9981d701663bb9b899feb8a9f9c" host="ci-4081.3.6-n-8acfe54808" Nov 8 01:19:56.705656 containerd[1823]: 2025-11-08 01:19:56.697 [INFO][5757] ipam/ipam.go 1262: Successfully claimed 
IPs: [192.168.24.72/26] block=192.168.24.64/26 handle="k8s-pod-network.fd4903fa5605973f4168a1122a266f845477f9981d701663bb9b899feb8a9f9c" host="ci-4081.3.6-n-8acfe54808" Nov 8 01:19:56.705656 containerd[1823]: 2025-11-08 01:19:56.697 [INFO][5757] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.24.72/26] handle="k8s-pod-network.fd4903fa5605973f4168a1122a266f845477f9981d701663bb9b899feb8a9f9c" host="ci-4081.3.6-n-8acfe54808" Nov 8 01:19:56.705656 containerd[1823]: 2025-11-08 01:19:56.697 [INFO][5757] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 01:19:56.705656 containerd[1823]: 2025-11-08 01:19:56.697 [INFO][5757] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.24.72/26] IPv6=[] ContainerID="fd4903fa5605973f4168a1122a266f845477f9981d701663bb9b899feb8a9f9c" HandleID="k8s-pod-network.fd4903fa5605973f4168a1122a266f845477f9981d701663bb9b899feb8a9f9c" Workload="ci--4081.3.6--n--8acfe54808-k8s-calico--kube--controllers--7c65d4465b--6mb2v-eth0" Nov 8 01:19:56.706135 containerd[1823]: 2025-11-08 01:19:56.698 [INFO][5700] cni-plugin/k8s.go 418: Populated endpoint ContainerID="fd4903fa5605973f4168a1122a266f845477f9981d701663bb9b899feb8a9f9c" Namespace="calico-system" Pod="calico-kube-controllers-7c65d4465b-6mb2v" WorkloadEndpoint="ci--4081.3.6--n--8acfe54808-k8s-calico--kube--controllers--7c65d4465b--6mb2v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--8acfe54808-k8s-calico--kube--controllers--7c65d4465b--6mb2v-eth0", GenerateName:"calico-kube-controllers-7c65d4465b-", Namespace:"calico-system", SelfLink:"", UID:"905477ed-861d-42e3-890a-431b3428dc2e", ResourceVersion:"945", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 1, 19, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", 
"k8s-app":"calico-kube-controllers", "pod-template-hash":"7c65d4465b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-8acfe54808", ContainerID:"", Pod:"calico-kube-controllers-7c65d4465b-6mb2v", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.24.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"califfc42f7f3ae", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 01:19:56.706135 containerd[1823]: 2025-11-08 01:19:56.698 [INFO][5700] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.24.72/32] ContainerID="fd4903fa5605973f4168a1122a266f845477f9981d701663bb9b899feb8a9f9c" Namespace="calico-system" Pod="calico-kube-controllers-7c65d4465b-6mb2v" WorkloadEndpoint="ci--4081.3.6--n--8acfe54808-k8s-calico--kube--controllers--7c65d4465b--6mb2v-eth0" Nov 8 01:19:56.706135 containerd[1823]: 2025-11-08 01:19:56.698 [INFO][5700] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califfc42f7f3ae ContainerID="fd4903fa5605973f4168a1122a266f845477f9981d701663bb9b899feb8a9f9c" Namespace="calico-system" Pod="calico-kube-controllers-7c65d4465b-6mb2v" WorkloadEndpoint="ci--4081.3.6--n--8acfe54808-k8s-calico--kube--controllers--7c65d4465b--6mb2v-eth0" Nov 8 01:19:56.706135 containerd[1823]: 2025-11-08 01:19:56.699 [INFO][5700] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fd4903fa5605973f4168a1122a266f845477f9981d701663bb9b899feb8a9f9c" Namespace="calico-system" 
Pod="calico-kube-controllers-7c65d4465b-6mb2v" WorkloadEndpoint="ci--4081.3.6--n--8acfe54808-k8s-calico--kube--controllers--7c65d4465b--6mb2v-eth0" Nov 8 01:19:56.706135 containerd[1823]: 2025-11-08 01:19:56.699 [INFO][5700] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="fd4903fa5605973f4168a1122a266f845477f9981d701663bb9b899feb8a9f9c" Namespace="calico-system" Pod="calico-kube-controllers-7c65d4465b-6mb2v" WorkloadEndpoint="ci--4081.3.6--n--8acfe54808-k8s-calico--kube--controllers--7c65d4465b--6mb2v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--8acfe54808-k8s-calico--kube--controllers--7c65d4465b--6mb2v-eth0", GenerateName:"calico-kube-controllers-7c65d4465b-", Namespace:"calico-system", SelfLink:"", UID:"905477ed-861d-42e3-890a-431b3428dc2e", ResourceVersion:"945", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 1, 19, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7c65d4465b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-8acfe54808", ContainerID:"fd4903fa5605973f4168a1122a266f845477f9981d701663bb9b899feb8a9f9c", Pod:"calico-kube-controllers-7c65d4465b-6mb2v", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.24.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.calico-kube-controllers"}, InterfaceName:"califfc42f7f3ae", MAC:"ee:c6:c0:97:b5:65", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 01:19:56.706135 containerd[1823]: 2025-11-08 01:19:56.703 [INFO][5700] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="fd4903fa5605973f4168a1122a266f845477f9981d701663bb9b899feb8a9f9c" Namespace="calico-system" Pod="calico-kube-controllers-7c65d4465b-6mb2v" WorkloadEndpoint="ci--4081.3.6--n--8acfe54808-k8s-calico--kube--controllers--7c65d4465b--6mb2v-eth0" Nov 8 01:19:56.715958 containerd[1823]: time="2025-11-08T01:19:56.715911385Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 01:19:56.715958 containerd[1823]: time="2025-11-08T01:19:56.715946336Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 01:19:56.715958 containerd[1823]: time="2025-11-08T01:19:56.715954050Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 01:19:56.716083 containerd[1823]: time="2025-11-08T01:19:56.716002537Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 01:19:56.721575 systemd-networkd[1517]: califb795ee947e: Gained IPv6LL Nov 8 01:19:56.741639 systemd[1]: Started cri-containerd-fd4903fa5605973f4168a1122a266f845477f9981d701663bb9b899feb8a9f9c.scope - libcontainer container fd4903fa5605973f4168a1122a266f845477f9981d701663bb9b899feb8a9f9c. 
Nov 8 01:19:56.764432 containerd[1823]: time="2025-11-08T01:19:56.764408510Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7c65d4465b-6mb2v,Uid:905477ed-861d-42e3-890a-431b3428dc2e,Namespace:calico-system,Attempt:1,} returns sandbox id \"fd4903fa5605973f4168a1122a266f845477f9981d701663bb9b899feb8a9f9c\"" Nov 8 01:19:56.923270 containerd[1823]: time="2025-11-08T01:19:56.923219999Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 01:19:56.923759 containerd[1823]: time="2025-11-08T01:19:56.923703417Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 8 01:19:56.923815 containerd[1823]: time="2025-11-08T01:19:56.923756542Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 8 01:19:56.923880 kubelet[3076]: E1108 01:19:56.923856 3076 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 01:19:56.923921 kubelet[3076]: E1108 01:19:56.923890 3076 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 01:19:56.924059 kubelet[3076]: E1108 01:19:56.924038 3076 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f8skx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-rhnl5_calico-system(b57117d3-8237-4f8a-aa85-534eb9568949): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 8 01:19:56.924131 containerd[1823]: time="2025-11-08T01:19:56.924114576Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 8 01:19:57.305283 containerd[1823]: time="2025-11-08T01:19:57.304172495Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 01:19:57.306211 containerd[1823]: time="2025-11-08T01:19:57.305980771Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 8 01:19:57.306573 containerd[1823]: time="2025-11-08T01:19:57.306109311Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 8 01:19:57.306941 kubelet[3076]: E1108 01:19:57.306806 3076 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 01:19:57.306941 kubelet[3076]: E1108 01:19:57.306888 3076 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 
01:19:57.307686 kubelet[3076]: E1108 01:19:57.307325 3076 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z5jc8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7c65d4465b-6mb2v_calico-system(905477ed-861d-42e3-890a-431b3428dc2e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 8 01:19:57.308882 kubelet[3076]: E1108 01:19:57.308604 3076 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7c65d4465b-6mb2v" podUID="905477ed-861d-42e3-890a-431b3428dc2e" Nov 8 01:19:57.309006 containerd[1823]: time="2025-11-08T01:19:57.308744002Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 8 01:19:57.432017 
kubelet[3076]: E1108 01:19:57.431891 3076 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7c65d4465b-6mb2v" podUID="905477ed-861d-42e3-890a-431b3428dc2e" Nov 8 01:19:57.436212 kubelet[3076]: E1108 01:19:57.436152 3076 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d6d97687b-82v26" podUID="e679c708-5dc2-455f-8f76-0a5b47442761" Nov 8 01:19:57.452883 kubelet[3076]: I1108 01:19:57.452836 3076 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-8s8cb" podStartSLOduration=36.452822396 podStartE2EDuration="36.452822396s" podCreationTimestamp="2025-11-08 01:19:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 01:19:57.443574874 +0000 UTC m=+42.208180767" watchObservedRunningTime="2025-11-08 01:19:57.452822396 +0000 UTC m=+42.217428262" Nov 8 01:19:57.459977 kubelet[3076]: I1108 01:19:57.459938 3076 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/coredns-668d6bf9bc-k4vsk" podStartSLOduration=36.459922422 podStartE2EDuration="36.459922422s" podCreationTimestamp="2025-11-08 01:19:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 01:19:57.459625374 +0000 UTC m=+42.224231244" watchObservedRunningTime="2025-11-08 01:19:57.459922422 +0000 UTC m=+42.224528281" Nov 8 01:19:57.553765 systemd-networkd[1517]: cali73a95ba6d53: Gained IPv6LL Nov 8 01:19:57.693520 containerd[1823]: time="2025-11-08T01:19:57.693389979Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 01:19:57.694186 containerd[1823]: time="2025-11-08T01:19:57.694139753Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 8 01:19:57.694263 containerd[1823]: time="2025-11-08T01:19:57.694207894Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 8 01:19:57.694315 kubelet[3076]: E1108 01:19:57.694294 3076 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 01:19:57.694355 kubelet[3076]: E1108 01:19:57.694324 3076 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 01:19:57.694429 kubelet[3076]: E1108 01:19:57.694410 3076 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f8skx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault
,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-rhnl5_calico-system(b57117d3-8237-4f8a-aa85-534eb9568949): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 8 01:19:57.695465 kubelet[3076]: E1108 01:19:57.695448 3076 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-rhnl5" podUID="b57117d3-8237-4f8a-aa85-534eb9568949" Nov 8 01:19:58.129637 systemd-networkd[1517]: calibfe1b8f9d64: Gained IPv6LL Nov 8 01:19:58.193663 systemd-networkd[1517]: cali4d4e1c1c153: Gained IPv6LL Nov 8 01:19:58.193849 systemd-networkd[1517]: califfc42f7f3ae: Gained IPv6LL Nov 8 01:19:58.440622 kubelet[3076]: E1108 01:19:58.440334 3076 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7c65d4465b-6mb2v" podUID="905477ed-861d-42e3-890a-431b3428dc2e" Nov 8 01:19:58.441590 kubelet[3076]: E1108 01:19:58.441378 3076 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-rhnl5" podUID="b57117d3-8237-4f8a-aa85-534eb9568949" Nov 8 01:20:03.285264 containerd[1823]: time="2025-11-08T01:20:03.285168212Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 8 01:20:04.064292 containerd[1823]: time="2025-11-08T01:20:04.064223918Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 01:20:04.064772 containerd[1823]: time="2025-11-08T01:20:04.064753618Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: 
code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 8 01:20:04.064822 containerd[1823]: time="2025-11-08T01:20:04.064785145Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 8 01:20:04.064923 kubelet[3076]: E1108 01:20:04.064866 3076 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 01:20:04.064923 kubelet[3076]: E1108 01:20:04.064907 3076 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 01:20:04.065140 kubelet[3076]: E1108 01:20:04.065002 3076 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:f499b8e984d248faa7e4f138b84f7ce2,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kt56d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6668fb7f88-v6px7_calico-system(d30571cb-e438-453f-8b20-303beb52e470): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 8 01:20:04.066485 containerd[1823]: time="2025-11-08T01:20:04.066467182Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 8 
01:20:04.412522 containerd[1823]: time="2025-11-08T01:20:04.412391524Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 01:20:04.413349 containerd[1823]: time="2025-11-08T01:20:04.413271444Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 8 01:20:04.413378 containerd[1823]: time="2025-11-08T01:20:04.413343807Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 8 01:20:04.413443 kubelet[3076]: E1108 01:20:04.413417 3076 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 01:20:04.413536 kubelet[3076]: E1108 01:20:04.413451 3076 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 01:20:04.413625 kubelet[3076]: E1108 01:20:04.413559 3076 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kt56d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6668fb7f88-v6px7_calico-system(d30571cb-e438-453f-8b20-303beb52e470): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 8 01:20:04.414709 kubelet[3076]: E1108 01:20:04.414663 3076 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6668fb7f88-v6px7" podUID="d30571cb-e438-453f-8b20-303beb52e470" Nov 8 01:20:09.285822 containerd[1823]: time="2025-11-08T01:20:09.285741806Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 01:20:09.628215 containerd[1823]: time="2025-11-08T01:20:09.628131455Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 01:20:09.629145 containerd[1823]: time="2025-11-08T01:20:09.629059035Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 01:20:09.629176 containerd[1823]: time="2025-11-08T01:20:09.629153262Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 
01:20:09.629293 kubelet[3076]: E1108 01:20:09.629268 3076 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 01:20:09.629488 kubelet[3076]: E1108 01:20:09.629304 3076 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 01:20:09.629488 kubelet[3076]: E1108 01:20:09.629380 3076 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n5755,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-d6d97687b-lt4rt_calico-apiserver(bb14ec11-f019-40a0-9b63-589cf025cfb4): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 01:20:09.630508 kubelet[3076]: E1108 01:20:09.630496 3076 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d6d97687b-lt4rt" podUID="bb14ec11-f019-40a0-9b63-589cf025cfb4" Nov 8 01:20:11.290926 containerd[1823]: time="2025-11-08T01:20:11.290836036Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 8 01:20:11.662233 containerd[1823]: time="2025-11-08T01:20:11.662153866Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 01:20:11.663063 containerd[1823]: time="2025-11-08T01:20:11.663001439Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 8 01:20:11.663096 containerd[1823]: time="2025-11-08T01:20:11.663065901Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 8 01:20:11.663188 kubelet[3076]: E1108 01:20:11.663161 3076 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": 
ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 01:20:11.663360 kubelet[3076]: E1108 01:20:11.663190 3076 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 01:20:11.663360 kubelet[3076]: E1108 01:20:11.663336 3076 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t8jnb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubP
ath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-dmnhl_calico-system(71f5fc0d-399b-4a93-8104-f8dd3ea1c5df): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 8 01:20:11.663451 containerd[1823]: time="2025-11-08T01:20:11.663387076Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 8 01:20:11.664468 kubelet[3076]: E1108 01:20:11.664457 3076 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-dmnhl" podUID="71f5fc0d-399b-4a93-8104-f8dd3ea1c5df" Nov 8 01:20:12.019642 containerd[1823]: time="2025-11-08T01:20:12.019422002Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 01:20:12.020298 containerd[1823]: time="2025-11-08T01:20:12.020224523Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 8 01:20:12.020332 containerd[1823]: time="2025-11-08T01:20:12.020300714Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 8 01:20:12.020398 kubelet[3076]: E1108 01:20:12.020373 3076 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 01:20:12.020440 kubelet[3076]: E1108 01:20:12.020409 3076 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 01:20:12.020602 kubelet[3076]: E1108 01:20:12.020581 3076 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f8skx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-rhnl5_calico-system(b57117d3-8237-4f8a-aa85-534eb9568949): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 8 01:20:12.020693 containerd[1823]: time="2025-11-08T01:20:12.020672200Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 8 01:20:12.357578 containerd[1823]: time="2025-11-08T01:20:12.357499318Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 01:20:12.358222 containerd[1823]: time="2025-11-08T01:20:12.358136780Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 8 01:20:12.358222 containerd[1823]: time="2025-11-08T01:20:12.358201515Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 8 01:20:12.358324 kubelet[3076]: E1108 01:20:12.358272 3076 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 01:20:12.358324 kubelet[3076]: E1108 01:20:12.358301 3076 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 01:20:12.358476 kubelet[3076]: E1108 01:20:12.358449 3076 kuberuntime_manager.go:1341] "Unhandled Error" 
err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z5jc8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7c65d4465b-6mb2v_calico-system(905477ed-861d-42e3-890a-431b3428dc2e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 8 01:20:12.358599 containerd[1823]: time="2025-11-08T01:20:12.358565385Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 8 01:20:12.360252 kubelet[3076]: E1108 01:20:12.360240 3076 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7c65d4465b-6mb2v" podUID="905477ed-861d-42e3-890a-431b3428dc2e" Nov 8 01:20:12.718798 
containerd[1823]: time="2025-11-08T01:20:12.718587071Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 01:20:12.719359 containerd[1823]: time="2025-11-08T01:20:12.719329555Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 8 01:20:12.719428 containerd[1823]: time="2025-11-08T01:20:12.719351173Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 8 01:20:12.719506 kubelet[3076]: E1108 01:20:12.719469 3076 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 01:20:12.719740 kubelet[3076]: E1108 01:20:12.719514 3076 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 01:20:12.719740 kubelet[3076]: E1108 01:20:12.719669 3076 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f8skx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-rhnl5_calico-system(b57117d3-8237-4f8a-aa85-534eb9568949): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 8 01:20:12.719849 containerd[1823]: time="2025-11-08T01:20:12.719731375Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 01:20:12.720806 kubelet[3076]: E1108 01:20:12.720791 3076 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-rhnl5" podUID="b57117d3-8237-4f8a-aa85-534eb9568949" Nov 8 01:20:13.078068 containerd[1823]: time="2025-11-08T01:20:13.077824644Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 01:20:13.078609 containerd[1823]: time="2025-11-08T01:20:13.078580364Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 01:20:13.078675 containerd[1823]: time="2025-11-08T01:20:13.078645494Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 01:20:13.078790 kubelet[3076]: E1108 01:20:13.078751 3076 
log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 01:20:13.078840 kubelet[3076]: E1108 01:20:13.078798 3076 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 01:20:13.078908 kubelet[3076]: E1108 01:20:13.078887 3076 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ws2x2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-d6d97687b-82v26_calico-apiserver(e679c708-5dc2-455f-8f76-0a5b47442761): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 01:20:13.080004 kubelet[3076]: E1108 01:20:13.079987 3076 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d6d97687b-82v26" podUID="e679c708-5dc2-455f-8f76-0a5b47442761" Nov 8 01:20:15.279043 containerd[1823]: time="2025-11-08T01:20:15.278947621Z" level=info msg="StopPodSandbox for \"52967611b1523d15183c72b35fd1d746943d53d20100623e1972f13c4d687ba8\"" Nov 8 01:20:15.291980 kubelet[3076]: E1108 01:20:15.291888 3076 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6668fb7f88-v6px7" 
podUID="d30571cb-e438-453f-8b20-303beb52e470" Nov 8 01:20:15.370258 containerd[1823]: 2025-11-08 01:20:15.341 [WARNING][6105] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="52967611b1523d15183c72b35fd1d746943d53d20100623e1972f13c4d687ba8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--8acfe54808-k8s-calico--kube--controllers--7c65d4465b--6mb2v-eth0", GenerateName:"calico-kube-controllers-7c65d4465b-", Namespace:"calico-system", SelfLink:"", UID:"905477ed-861d-42e3-890a-431b3428dc2e", ResourceVersion:"1016", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 1, 19, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7c65d4465b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-8acfe54808", ContainerID:"fd4903fa5605973f4168a1122a266f845477f9981d701663bb9b899feb8a9f9c", Pod:"calico-kube-controllers-7c65d4465b-6mb2v", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.24.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"califfc42f7f3ae", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 01:20:15.370258 containerd[1823]: 2025-11-08 01:20:15.341 
[INFO][6105] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="52967611b1523d15183c72b35fd1d746943d53d20100623e1972f13c4d687ba8" Nov 8 01:20:15.370258 containerd[1823]: 2025-11-08 01:20:15.341 [INFO][6105] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="52967611b1523d15183c72b35fd1d746943d53d20100623e1972f13c4d687ba8" iface="eth0" netns="" Nov 8 01:20:15.370258 containerd[1823]: 2025-11-08 01:20:15.341 [INFO][6105] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="52967611b1523d15183c72b35fd1d746943d53d20100623e1972f13c4d687ba8" Nov 8 01:20:15.370258 containerd[1823]: 2025-11-08 01:20:15.341 [INFO][6105] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="52967611b1523d15183c72b35fd1d746943d53d20100623e1972f13c4d687ba8" Nov 8 01:20:15.370258 containerd[1823]: 2025-11-08 01:20:15.359 [INFO][6123] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="52967611b1523d15183c72b35fd1d746943d53d20100623e1972f13c4d687ba8" HandleID="k8s-pod-network.52967611b1523d15183c72b35fd1d746943d53d20100623e1972f13c4d687ba8" Workload="ci--4081.3.6--n--8acfe54808-k8s-calico--kube--controllers--7c65d4465b--6mb2v-eth0" Nov 8 01:20:15.370258 containerd[1823]: 2025-11-08 01:20:15.360 [INFO][6123] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 01:20:15.370258 containerd[1823]: 2025-11-08 01:20:15.360 [INFO][6123] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 01:20:15.370258 containerd[1823]: 2025-11-08 01:20:15.366 [WARNING][6123] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="52967611b1523d15183c72b35fd1d746943d53d20100623e1972f13c4d687ba8" HandleID="k8s-pod-network.52967611b1523d15183c72b35fd1d746943d53d20100623e1972f13c4d687ba8" Workload="ci--4081.3.6--n--8acfe54808-k8s-calico--kube--controllers--7c65d4465b--6mb2v-eth0" Nov 8 01:20:15.370258 containerd[1823]: 2025-11-08 01:20:15.366 [INFO][6123] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="52967611b1523d15183c72b35fd1d746943d53d20100623e1972f13c4d687ba8" HandleID="k8s-pod-network.52967611b1523d15183c72b35fd1d746943d53d20100623e1972f13c4d687ba8" Workload="ci--4081.3.6--n--8acfe54808-k8s-calico--kube--controllers--7c65d4465b--6mb2v-eth0" Nov 8 01:20:15.370258 containerd[1823]: 2025-11-08 01:20:15.367 [INFO][6123] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 01:20:15.370258 containerd[1823]: 2025-11-08 01:20:15.368 [INFO][6105] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="52967611b1523d15183c72b35fd1d746943d53d20100623e1972f13c4d687ba8" Nov 8 01:20:15.370806 containerd[1823]: time="2025-11-08T01:20:15.370259217Z" level=info msg="TearDown network for sandbox \"52967611b1523d15183c72b35fd1d746943d53d20100623e1972f13c4d687ba8\" successfully" Nov 8 01:20:15.370806 containerd[1823]: time="2025-11-08T01:20:15.370289809Z" level=info msg="StopPodSandbox for \"52967611b1523d15183c72b35fd1d746943d53d20100623e1972f13c4d687ba8\" returns successfully" Nov 8 01:20:15.370872 containerd[1823]: time="2025-11-08T01:20:15.370803546Z" level=info msg="RemovePodSandbox for \"52967611b1523d15183c72b35fd1d746943d53d20100623e1972f13c4d687ba8\"" Nov 8 01:20:15.370872 containerd[1823]: time="2025-11-08T01:20:15.370834002Z" level=info msg="Forcibly stopping sandbox \"52967611b1523d15183c72b35fd1d746943d53d20100623e1972f13c4d687ba8\"" Nov 8 01:20:15.432115 containerd[1823]: 2025-11-08 01:20:15.402 [WARNING][6149] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="52967611b1523d15183c72b35fd1d746943d53d20100623e1972f13c4d687ba8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--8acfe54808-k8s-calico--kube--controllers--7c65d4465b--6mb2v-eth0", GenerateName:"calico-kube-controllers-7c65d4465b-", Namespace:"calico-system", SelfLink:"", UID:"905477ed-861d-42e3-890a-431b3428dc2e", ResourceVersion:"1016", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 1, 19, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7c65d4465b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-8acfe54808", ContainerID:"fd4903fa5605973f4168a1122a266f845477f9981d701663bb9b899feb8a9f9c", Pod:"calico-kube-controllers-7c65d4465b-6mb2v", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.24.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"califfc42f7f3ae", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 01:20:15.432115 containerd[1823]: 2025-11-08 01:20:15.402 [INFO][6149] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="52967611b1523d15183c72b35fd1d746943d53d20100623e1972f13c4d687ba8" Nov 8 01:20:15.432115 containerd[1823]: 2025-11-08 01:20:15.402 [INFO][6149] 
cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="52967611b1523d15183c72b35fd1d746943d53d20100623e1972f13c4d687ba8" iface="eth0" netns="" Nov 8 01:20:15.432115 containerd[1823]: 2025-11-08 01:20:15.402 [INFO][6149] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="52967611b1523d15183c72b35fd1d746943d53d20100623e1972f13c4d687ba8" Nov 8 01:20:15.432115 containerd[1823]: 2025-11-08 01:20:15.402 [INFO][6149] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="52967611b1523d15183c72b35fd1d746943d53d20100623e1972f13c4d687ba8" Nov 8 01:20:15.432115 containerd[1823]: 2025-11-08 01:20:15.421 [INFO][6167] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="52967611b1523d15183c72b35fd1d746943d53d20100623e1972f13c4d687ba8" HandleID="k8s-pod-network.52967611b1523d15183c72b35fd1d746943d53d20100623e1972f13c4d687ba8" Workload="ci--4081.3.6--n--8acfe54808-k8s-calico--kube--controllers--7c65d4465b--6mb2v-eth0" Nov 8 01:20:15.432115 containerd[1823]: 2025-11-08 01:20:15.421 [INFO][6167] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 01:20:15.432115 containerd[1823]: 2025-11-08 01:20:15.421 [INFO][6167] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 01:20:15.432115 containerd[1823]: 2025-11-08 01:20:15.428 [WARNING][6167] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="52967611b1523d15183c72b35fd1d746943d53d20100623e1972f13c4d687ba8" HandleID="k8s-pod-network.52967611b1523d15183c72b35fd1d746943d53d20100623e1972f13c4d687ba8" Workload="ci--4081.3.6--n--8acfe54808-k8s-calico--kube--controllers--7c65d4465b--6mb2v-eth0" Nov 8 01:20:15.432115 containerd[1823]: 2025-11-08 01:20:15.428 [INFO][6167] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="52967611b1523d15183c72b35fd1d746943d53d20100623e1972f13c4d687ba8" HandleID="k8s-pod-network.52967611b1523d15183c72b35fd1d746943d53d20100623e1972f13c4d687ba8" Workload="ci--4081.3.6--n--8acfe54808-k8s-calico--kube--controllers--7c65d4465b--6mb2v-eth0" Nov 8 01:20:15.432115 containerd[1823]: 2025-11-08 01:20:15.429 [INFO][6167] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 01:20:15.432115 containerd[1823]: 2025-11-08 01:20:15.430 [INFO][6149] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="52967611b1523d15183c72b35fd1d746943d53d20100623e1972f13c4d687ba8" Nov 8 01:20:15.432577 containerd[1823]: time="2025-11-08T01:20:15.432156012Z" level=info msg="TearDown network for sandbox \"52967611b1523d15183c72b35fd1d746943d53d20100623e1972f13c4d687ba8\" successfully" Nov 8 01:20:15.433939 containerd[1823]: time="2025-11-08T01:20:15.433927233Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"52967611b1523d15183c72b35fd1d746943d53d20100623e1972f13c4d687ba8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Nov 8 01:20:15.433975 containerd[1823]: time="2025-11-08T01:20:15.433955366Z" level=info msg="RemovePodSandbox \"52967611b1523d15183c72b35fd1d746943d53d20100623e1972f13c4d687ba8\" returns successfully" Nov 8 01:20:15.434271 containerd[1823]: time="2025-11-08T01:20:15.434257434Z" level=info msg="StopPodSandbox for \"d5d77b94f607cd110b5cd8da19a9750bd5391dd749052477e1157e7186dea50a\"" Nov 8 01:20:15.467955 containerd[1823]: 2025-11-08 01:20:15.451 [WARNING][6193] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="d5d77b94f607cd110b5cd8da19a9750bd5391dd749052477e1157e7186dea50a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--8acfe54808-k8s-goldmane--666569f655--dmnhl-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"71f5fc0d-399b-4a93-8104-f8dd3ea1c5df", ResourceVersion:"956", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 1, 19, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-8acfe54808", ContainerID:"d44767dc05b1879928797b6649f389c344b076589325ee4581dce2647f0bbe7a", Pod:"goldmane-666569f655-dmnhl", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.24.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, 
InterfaceName:"calif0cf643ba91", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 01:20:15.467955 containerd[1823]: 2025-11-08 01:20:15.451 [INFO][6193] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d5d77b94f607cd110b5cd8da19a9750bd5391dd749052477e1157e7186dea50a" Nov 8 01:20:15.467955 containerd[1823]: 2025-11-08 01:20:15.451 [INFO][6193] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d5d77b94f607cd110b5cd8da19a9750bd5391dd749052477e1157e7186dea50a" iface="eth0" netns="" Nov 8 01:20:15.467955 containerd[1823]: 2025-11-08 01:20:15.451 [INFO][6193] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d5d77b94f607cd110b5cd8da19a9750bd5391dd749052477e1157e7186dea50a" Nov 8 01:20:15.467955 containerd[1823]: 2025-11-08 01:20:15.451 [INFO][6193] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d5d77b94f607cd110b5cd8da19a9750bd5391dd749052477e1157e7186dea50a" Nov 8 01:20:15.467955 containerd[1823]: 2025-11-08 01:20:15.461 [INFO][6208] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="d5d77b94f607cd110b5cd8da19a9750bd5391dd749052477e1157e7186dea50a" HandleID="k8s-pod-network.d5d77b94f607cd110b5cd8da19a9750bd5391dd749052477e1157e7186dea50a" Workload="ci--4081.3.6--n--8acfe54808-k8s-goldmane--666569f655--dmnhl-eth0" Nov 8 01:20:15.467955 containerd[1823]: 2025-11-08 01:20:15.461 [INFO][6208] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 01:20:15.467955 containerd[1823]: 2025-11-08 01:20:15.461 [INFO][6208] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 01:20:15.467955 containerd[1823]: 2025-11-08 01:20:15.465 [WARNING][6208] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d5d77b94f607cd110b5cd8da19a9750bd5391dd749052477e1157e7186dea50a" HandleID="k8s-pod-network.d5d77b94f607cd110b5cd8da19a9750bd5391dd749052477e1157e7186dea50a" Workload="ci--4081.3.6--n--8acfe54808-k8s-goldmane--666569f655--dmnhl-eth0" Nov 8 01:20:15.467955 containerd[1823]: 2025-11-08 01:20:15.465 [INFO][6208] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="d5d77b94f607cd110b5cd8da19a9750bd5391dd749052477e1157e7186dea50a" HandleID="k8s-pod-network.d5d77b94f607cd110b5cd8da19a9750bd5391dd749052477e1157e7186dea50a" Workload="ci--4081.3.6--n--8acfe54808-k8s-goldmane--666569f655--dmnhl-eth0" Nov 8 01:20:15.467955 containerd[1823]: 2025-11-08 01:20:15.466 [INFO][6208] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 01:20:15.467955 containerd[1823]: 2025-11-08 01:20:15.467 [INFO][6193] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d5d77b94f607cd110b5cd8da19a9750bd5391dd749052477e1157e7186dea50a" Nov 8 01:20:15.467955 containerd[1823]: time="2025-11-08T01:20:15.467943572Z" level=info msg="TearDown network for sandbox \"d5d77b94f607cd110b5cd8da19a9750bd5391dd749052477e1157e7186dea50a\" successfully" Nov 8 01:20:15.467955 containerd[1823]: time="2025-11-08T01:20:15.467959707Z" level=info msg="StopPodSandbox for \"d5d77b94f607cd110b5cd8da19a9750bd5391dd749052477e1157e7186dea50a\" returns successfully" Nov 8 01:20:15.468385 containerd[1823]: time="2025-11-08T01:20:15.468258873Z" level=info msg="RemovePodSandbox for \"d5d77b94f607cd110b5cd8da19a9750bd5391dd749052477e1157e7186dea50a\"" Nov 8 01:20:15.468385 containerd[1823]: time="2025-11-08T01:20:15.468274663Z" level=info msg="Forcibly stopping sandbox \"d5d77b94f607cd110b5cd8da19a9750bd5391dd749052477e1157e7186dea50a\"" Nov 8 01:20:15.502673 containerd[1823]: 2025-11-08 01:20:15.485 [WARNING][6233] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d5d77b94f607cd110b5cd8da19a9750bd5391dd749052477e1157e7186dea50a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--8acfe54808-k8s-goldmane--666569f655--dmnhl-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"71f5fc0d-399b-4a93-8104-f8dd3ea1c5df", ResourceVersion:"956", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 1, 19, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-8acfe54808", ContainerID:"d44767dc05b1879928797b6649f389c344b076589325ee4581dce2647f0bbe7a", Pod:"goldmane-666569f655-dmnhl", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.24.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calif0cf643ba91", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 01:20:15.502673 containerd[1823]: 2025-11-08 01:20:15.485 [INFO][6233] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d5d77b94f607cd110b5cd8da19a9750bd5391dd749052477e1157e7186dea50a" Nov 8 01:20:15.502673 containerd[1823]: 2025-11-08 01:20:15.485 [INFO][6233] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="d5d77b94f607cd110b5cd8da19a9750bd5391dd749052477e1157e7186dea50a" iface="eth0" netns="" Nov 8 01:20:15.502673 containerd[1823]: 2025-11-08 01:20:15.485 [INFO][6233] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d5d77b94f607cd110b5cd8da19a9750bd5391dd749052477e1157e7186dea50a" Nov 8 01:20:15.502673 containerd[1823]: 2025-11-08 01:20:15.485 [INFO][6233] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d5d77b94f607cd110b5cd8da19a9750bd5391dd749052477e1157e7186dea50a" Nov 8 01:20:15.502673 containerd[1823]: 2025-11-08 01:20:15.495 [INFO][6249] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="d5d77b94f607cd110b5cd8da19a9750bd5391dd749052477e1157e7186dea50a" HandleID="k8s-pod-network.d5d77b94f607cd110b5cd8da19a9750bd5391dd749052477e1157e7186dea50a" Workload="ci--4081.3.6--n--8acfe54808-k8s-goldmane--666569f655--dmnhl-eth0" Nov 8 01:20:15.502673 containerd[1823]: 2025-11-08 01:20:15.495 [INFO][6249] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 01:20:15.502673 containerd[1823]: 2025-11-08 01:20:15.495 [INFO][6249] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 01:20:15.502673 containerd[1823]: 2025-11-08 01:20:15.499 [WARNING][6249] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d5d77b94f607cd110b5cd8da19a9750bd5391dd749052477e1157e7186dea50a" HandleID="k8s-pod-network.d5d77b94f607cd110b5cd8da19a9750bd5391dd749052477e1157e7186dea50a" Workload="ci--4081.3.6--n--8acfe54808-k8s-goldmane--666569f655--dmnhl-eth0" Nov 8 01:20:15.502673 containerd[1823]: 2025-11-08 01:20:15.499 [INFO][6249] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="d5d77b94f607cd110b5cd8da19a9750bd5391dd749052477e1157e7186dea50a" HandleID="k8s-pod-network.d5d77b94f607cd110b5cd8da19a9750bd5391dd749052477e1157e7186dea50a" Workload="ci--4081.3.6--n--8acfe54808-k8s-goldmane--666569f655--dmnhl-eth0" Nov 8 01:20:15.502673 containerd[1823]: 2025-11-08 01:20:15.501 [INFO][6249] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 01:20:15.502673 containerd[1823]: 2025-11-08 01:20:15.501 [INFO][6233] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d5d77b94f607cd110b5cd8da19a9750bd5391dd749052477e1157e7186dea50a" Nov 8 01:20:15.502673 containerd[1823]: time="2025-11-08T01:20:15.502671401Z" level=info msg="TearDown network for sandbox \"d5d77b94f607cd110b5cd8da19a9750bd5391dd749052477e1157e7186dea50a\" successfully" Nov 8 01:20:15.504690 containerd[1823]: time="2025-11-08T01:20:15.504666629Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d5d77b94f607cd110b5cd8da19a9750bd5391dd749052477e1157e7186dea50a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Nov 8 01:20:15.504737 containerd[1823]: time="2025-11-08T01:20:15.504703876Z" level=info msg="RemovePodSandbox \"d5d77b94f607cd110b5cd8da19a9750bd5391dd749052477e1157e7186dea50a\" returns successfully" Nov 8 01:20:15.504969 containerd[1823]: time="2025-11-08T01:20:15.504958043Z" level=info msg="StopPodSandbox for \"dd101ccb60de921100f49f9134c9a5775fdaa63bd9547c0d8989f67406b41943\"" Nov 8 01:20:15.539719 containerd[1823]: 2025-11-08 01:20:15.522 [WARNING][6274] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="dd101ccb60de921100f49f9134c9a5775fdaa63bd9547c0d8989f67406b41943" WorkloadEndpoint="ci--4081.3.6--n--8acfe54808-k8s-whisker--5b656c9f86--xhj4p-eth0" Nov 8 01:20:15.539719 containerd[1823]: 2025-11-08 01:20:15.522 [INFO][6274] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="dd101ccb60de921100f49f9134c9a5775fdaa63bd9547c0d8989f67406b41943" Nov 8 01:20:15.539719 containerd[1823]: 2025-11-08 01:20:15.522 [INFO][6274] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="dd101ccb60de921100f49f9134c9a5775fdaa63bd9547c0d8989f67406b41943" iface="eth0" netns="" Nov 8 01:20:15.539719 containerd[1823]: 2025-11-08 01:20:15.522 [INFO][6274] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="dd101ccb60de921100f49f9134c9a5775fdaa63bd9547c0d8989f67406b41943" Nov 8 01:20:15.539719 containerd[1823]: 2025-11-08 01:20:15.522 [INFO][6274] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dd101ccb60de921100f49f9134c9a5775fdaa63bd9547c0d8989f67406b41943" Nov 8 01:20:15.539719 containerd[1823]: 2025-11-08 01:20:15.532 [INFO][6288] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="dd101ccb60de921100f49f9134c9a5775fdaa63bd9547c0d8989f67406b41943" HandleID="k8s-pod-network.dd101ccb60de921100f49f9134c9a5775fdaa63bd9547c0d8989f67406b41943" Workload="ci--4081.3.6--n--8acfe54808-k8s-whisker--5b656c9f86--xhj4p-eth0" Nov 8 01:20:15.539719 containerd[1823]: 2025-11-08 01:20:15.532 [INFO][6288] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 01:20:15.539719 containerd[1823]: 2025-11-08 01:20:15.532 [INFO][6288] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 01:20:15.539719 containerd[1823]: 2025-11-08 01:20:15.536 [WARNING][6288] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="dd101ccb60de921100f49f9134c9a5775fdaa63bd9547c0d8989f67406b41943" HandleID="k8s-pod-network.dd101ccb60de921100f49f9134c9a5775fdaa63bd9547c0d8989f67406b41943" Workload="ci--4081.3.6--n--8acfe54808-k8s-whisker--5b656c9f86--xhj4p-eth0" Nov 8 01:20:15.539719 containerd[1823]: 2025-11-08 01:20:15.537 [INFO][6288] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="dd101ccb60de921100f49f9134c9a5775fdaa63bd9547c0d8989f67406b41943" HandleID="k8s-pod-network.dd101ccb60de921100f49f9134c9a5775fdaa63bd9547c0d8989f67406b41943" Workload="ci--4081.3.6--n--8acfe54808-k8s-whisker--5b656c9f86--xhj4p-eth0" Nov 8 01:20:15.539719 containerd[1823]: 2025-11-08 01:20:15.538 [INFO][6288] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 01:20:15.539719 containerd[1823]: 2025-11-08 01:20:15.538 [INFO][6274] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="dd101ccb60de921100f49f9134c9a5775fdaa63bd9547c0d8989f67406b41943" Nov 8 01:20:15.539971 containerd[1823]: time="2025-11-08T01:20:15.539713625Z" level=info msg="TearDown network for sandbox \"dd101ccb60de921100f49f9134c9a5775fdaa63bd9547c0d8989f67406b41943\" successfully" Nov 8 01:20:15.539971 containerd[1823]: time="2025-11-08T01:20:15.539730480Z" level=info msg="StopPodSandbox for \"dd101ccb60de921100f49f9134c9a5775fdaa63bd9547c0d8989f67406b41943\" returns successfully" Nov 8 01:20:15.540032 containerd[1823]: time="2025-11-08T01:20:15.540020969Z" level=info msg="RemovePodSandbox for \"dd101ccb60de921100f49f9134c9a5775fdaa63bd9547c0d8989f67406b41943\"" Nov 8 01:20:15.540060 containerd[1823]: time="2025-11-08T01:20:15.540037746Z" level=info msg="Forcibly stopping sandbox \"dd101ccb60de921100f49f9134c9a5775fdaa63bd9547c0d8989f67406b41943\"" Nov 8 01:20:15.573924 containerd[1823]: 2025-11-08 01:20:15.557 [WARNING][6315] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up 
ContainerID="dd101ccb60de921100f49f9134c9a5775fdaa63bd9547c0d8989f67406b41943" WorkloadEndpoint="ci--4081.3.6--n--8acfe54808-k8s-whisker--5b656c9f86--xhj4p-eth0" Nov 8 01:20:15.573924 containerd[1823]: 2025-11-08 01:20:15.557 [INFO][6315] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="dd101ccb60de921100f49f9134c9a5775fdaa63bd9547c0d8989f67406b41943" Nov 8 01:20:15.573924 containerd[1823]: 2025-11-08 01:20:15.557 [INFO][6315] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="dd101ccb60de921100f49f9134c9a5775fdaa63bd9547c0d8989f67406b41943" iface="eth0" netns="" Nov 8 01:20:15.573924 containerd[1823]: 2025-11-08 01:20:15.557 [INFO][6315] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="dd101ccb60de921100f49f9134c9a5775fdaa63bd9547c0d8989f67406b41943" Nov 8 01:20:15.573924 containerd[1823]: 2025-11-08 01:20:15.557 [INFO][6315] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dd101ccb60de921100f49f9134c9a5775fdaa63bd9547c0d8989f67406b41943" Nov 8 01:20:15.573924 containerd[1823]: 2025-11-08 01:20:15.567 [INFO][6332] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="dd101ccb60de921100f49f9134c9a5775fdaa63bd9547c0d8989f67406b41943" HandleID="k8s-pod-network.dd101ccb60de921100f49f9134c9a5775fdaa63bd9547c0d8989f67406b41943" Workload="ci--4081.3.6--n--8acfe54808-k8s-whisker--5b656c9f86--xhj4p-eth0" Nov 8 01:20:15.573924 containerd[1823]: 2025-11-08 01:20:15.567 [INFO][6332] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 01:20:15.573924 containerd[1823]: 2025-11-08 01:20:15.567 [INFO][6332] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 01:20:15.573924 containerd[1823]: 2025-11-08 01:20:15.571 [WARNING][6332] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="dd101ccb60de921100f49f9134c9a5775fdaa63bd9547c0d8989f67406b41943" HandleID="k8s-pod-network.dd101ccb60de921100f49f9134c9a5775fdaa63bd9547c0d8989f67406b41943" Workload="ci--4081.3.6--n--8acfe54808-k8s-whisker--5b656c9f86--xhj4p-eth0" Nov 8 01:20:15.573924 containerd[1823]: 2025-11-08 01:20:15.571 [INFO][6332] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="dd101ccb60de921100f49f9134c9a5775fdaa63bd9547c0d8989f67406b41943" HandleID="k8s-pod-network.dd101ccb60de921100f49f9134c9a5775fdaa63bd9547c0d8989f67406b41943" Workload="ci--4081.3.6--n--8acfe54808-k8s-whisker--5b656c9f86--xhj4p-eth0" Nov 8 01:20:15.573924 containerd[1823]: 2025-11-08 01:20:15.572 [INFO][6332] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 01:20:15.573924 containerd[1823]: 2025-11-08 01:20:15.573 [INFO][6315] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="dd101ccb60de921100f49f9134c9a5775fdaa63bd9547c0d8989f67406b41943" Nov 8 01:20:15.574184 containerd[1823]: time="2025-11-08T01:20:15.573931429Z" level=info msg="TearDown network for sandbox \"dd101ccb60de921100f49f9134c9a5775fdaa63bd9547c0d8989f67406b41943\" successfully" Nov 8 01:20:15.575403 containerd[1823]: time="2025-11-08T01:20:15.575388938Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"dd101ccb60de921100f49f9134c9a5775fdaa63bd9547c0d8989f67406b41943\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Nov 8 01:20:15.575441 containerd[1823]: time="2025-11-08T01:20:15.575414437Z" level=info msg="RemovePodSandbox \"dd101ccb60de921100f49f9134c9a5775fdaa63bd9547c0d8989f67406b41943\" returns successfully" Nov 8 01:20:15.575712 containerd[1823]: time="2025-11-08T01:20:15.575697684Z" level=info msg="StopPodSandbox for \"71140fbb207e0bcb7b820b7bb714e68bc014a20f50dbfb3ccb1046f552d5ce56\"" Nov 8 01:20:15.611122 containerd[1823]: 2025-11-08 01:20:15.593 [WARNING][6356] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="71140fbb207e0bcb7b820b7bb714e68bc014a20f50dbfb3ccb1046f552d5ce56" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--8acfe54808-k8s-coredns--668d6bf9bc--k4vsk-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"4cca2fc5-207d-4bfe-9520-a30e2cf67473", ResourceVersion:"1002", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 1, 19, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-8acfe54808", ContainerID:"f2c6927e5993d745114560aae8f01dacf17575ac71b5025b5903564480453c34", Pod:"coredns-668d6bf9bc-k4vsk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.24.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4d4e1c1c153", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 01:20:15.611122 containerd[1823]: 2025-11-08 01:20:15.594 [INFO][6356] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="71140fbb207e0bcb7b820b7bb714e68bc014a20f50dbfb3ccb1046f552d5ce56" Nov 8 01:20:15.611122 containerd[1823]: 2025-11-08 01:20:15.594 [INFO][6356] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="71140fbb207e0bcb7b820b7bb714e68bc014a20f50dbfb3ccb1046f552d5ce56" iface="eth0" netns="" Nov 8 01:20:15.611122 containerd[1823]: 2025-11-08 01:20:15.594 [INFO][6356] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="71140fbb207e0bcb7b820b7bb714e68bc014a20f50dbfb3ccb1046f552d5ce56" Nov 8 01:20:15.611122 containerd[1823]: 2025-11-08 01:20:15.594 [INFO][6356] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="71140fbb207e0bcb7b820b7bb714e68bc014a20f50dbfb3ccb1046f552d5ce56" Nov 8 01:20:15.611122 containerd[1823]: 2025-11-08 01:20:15.604 [INFO][6370] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="71140fbb207e0bcb7b820b7bb714e68bc014a20f50dbfb3ccb1046f552d5ce56" HandleID="k8s-pod-network.71140fbb207e0bcb7b820b7bb714e68bc014a20f50dbfb3ccb1046f552d5ce56" Workload="ci--4081.3.6--n--8acfe54808-k8s-coredns--668d6bf9bc--k4vsk-eth0" Nov 8 01:20:15.611122 containerd[1823]: 2025-11-08 01:20:15.604 [INFO][6370] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Nov 8 01:20:15.611122 containerd[1823]: 2025-11-08 01:20:15.604 [INFO][6370] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 01:20:15.611122 containerd[1823]: 2025-11-08 01:20:15.608 [WARNING][6370] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="71140fbb207e0bcb7b820b7bb714e68bc014a20f50dbfb3ccb1046f552d5ce56" HandleID="k8s-pod-network.71140fbb207e0bcb7b820b7bb714e68bc014a20f50dbfb3ccb1046f552d5ce56" Workload="ci--4081.3.6--n--8acfe54808-k8s-coredns--668d6bf9bc--k4vsk-eth0" Nov 8 01:20:15.611122 containerd[1823]: 2025-11-08 01:20:15.608 [INFO][6370] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="71140fbb207e0bcb7b820b7bb714e68bc014a20f50dbfb3ccb1046f552d5ce56" HandleID="k8s-pod-network.71140fbb207e0bcb7b820b7bb714e68bc014a20f50dbfb3ccb1046f552d5ce56" Workload="ci--4081.3.6--n--8acfe54808-k8s-coredns--668d6bf9bc--k4vsk-eth0" Nov 8 01:20:15.611122 containerd[1823]: 2025-11-08 01:20:15.609 [INFO][6370] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 01:20:15.611122 containerd[1823]: 2025-11-08 01:20:15.610 [INFO][6356] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="71140fbb207e0bcb7b820b7bb714e68bc014a20f50dbfb3ccb1046f552d5ce56" Nov 8 01:20:15.611496 containerd[1823]: time="2025-11-08T01:20:15.611150761Z" level=info msg="TearDown network for sandbox \"71140fbb207e0bcb7b820b7bb714e68bc014a20f50dbfb3ccb1046f552d5ce56\" successfully" Nov 8 01:20:15.611496 containerd[1823]: time="2025-11-08T01:20:15.611171186Z" level=info msg="StopPodSandbox for \"71140fbb207e0bcb7b820b7bb714e68bc014a20f50dbfb3ccb1046f552d5ce56\" returns successfully" Nov 8 01:20:15.611496 containerd[1823]: time="2025-11-08T01:20:15.611464015Z" level=info msg="RemovePodSandbox for \"71140fbb207e0bcb7b820b7bb714e68bc014a20f50dbfb3ccb1046f552d5ce56\"" Nov 8 01:20:15.611496 containerd[1823]: time="2025-11-08T01:20:15.611491069Z" level=info msg="Forcibly stopping sandbox \"71140fbb207e0bcb7b820b7bb714e68bc014a20f50dbfb3ccb1046f552d5ce56\"" Nov 8 01:20:15.648774 containerd[1823]: 2025-11-08 01:20:15.630 [WARNING][6395] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="71140fbb207e0bcb7b820b7bb714e68bc014a20f50dbfb3ccb1046f552d5ce56" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--8acfe54808-k8s-coredns--668d6bf9bc--k4vsk-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"4cca2fc5-207d-4bfe-9520-a30e2cf67473", ResourceVersion:"1002", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 1, 19, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-8acfe54808", ContainerID:"f2c6927e5993d745114560aae8f01dacf17575ac71b5025b5903564480453c34", Pod:"coredns-668d6bf9bc-k4vsk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.24.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4d4e1c1c153", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 01:20:15.648774 containerd[1823]: 2025-11-08 
01:20:15.630 [INFO][6395] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="71140fbb207e0bcb7b820b7bb714e68bc014a20f50dbfb3ccb1046f552d5ce56" Nov 8 01:20:15.648774 containerd[1823]: 2025-11-08 01:20:15.630 [INFO][6395] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="71140fbb207e0bcb7b820b7bb714e68bc014a20f50dbfb3ccb1046f552d5ce56" iface="eth0" netns="" Nov 8 01:20:15.648774 containerd[1823]: 2025-11-08 01:20:15.630 [INFO][6395] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="71140fbb207e0bcb7b820b7bb714e68bc014a20f50dbfb3ccb1046f552d5ce56" Nov 8 01:20:15.648774 containerd[1823]: 2025-11-08 01:20:15.630 [INFO][6395] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="71140fbb207e0bcb7b820b7bb714e68bc014a20f50dbfb3ccb1046f552d5ce56" Nov 8 01:20:15.648774 containerd[1823]: 2025-11-08 01:20:15.641 [INFO][6414] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="71140fbb207e0bcb7b820b7bb714e68bc014a20f50dbfb3ccb1046f552d5ce56" HandleID="k8s-pod-network.71140fbb207e0bcb7b820b7bb714e68bc014a20f50dbfb3ccb1046f552d5ce56" Workload="ci--4081.3.6--n--8acfe54808-k8s-coredns--668d6bf9bc--k4vsk-eth0" Nov 8 01:20:15.648774 containerd[1823]: 2025-11-08 01:20:15.641 [INFO][6414] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 01:20:15.648774 containerd[1823]: 2025-11-08 01:20:15.641 [INFO][6414] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 01:20:15.648774 containerd[1823]: 2025-11-08 01:20:15.645 [WARNING][6414] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="71140fbb207e0bcb7b820b7bb714e68bc014a20f50dbfb3ccb1046f552d5ce56" HandleID="k8s-pod-network.71140fbb207e0bcb7b820b7bb714e68bc014a20f50dbfb3ccb1046f552d5ce56" Workload="ci--4081.3.6--n--8acfe54808-k8s-coredns--668d6bf9bc--k4vsk-eth0" Nov 8 01:20:15.648774 containerd[1823]: 2025-11-08 01:20:15.645 [INFO][6414] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="71140fbb207e0bcb7b820b7bb714e68bc014a20f50dbfb3ccb1046f552d5ce56" HandleID="k8s-pod-network.71140fbb207e0bcb7b820b7bb714e68bc014a20f50dbfb3ccb1046f552d5ce56" Workload="ci--4081.3.6--n--8acfe54808-k8s-coredns--668d6bf9bc--k4vsk-eth0" Nov 8 01:20:15.648774 containerd[1823]: 2025-11-08 01:20:15.647 [INFO][6414] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 01:20:15.648774 containerd[1823]: 2025-11-08 01:20:15.647 [INFO][6395] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="71140fbb207e0bcb7b820b7bb714e68bc014a20f50dbfb3ccb1046f552d5ce56" Nov 8 01:20:15.649124 containerd[1823]: time="2025-11-08T01:20:15.648796858Z" level=info msg="TearDown network for sandbox \"71140fbb207e0bcb7b820b7bb714e68bc014a20f50dbfb3ccb1046f552d5ce56\" successfully" Nov 8 01:20:15.650387 containerd[1823]: time="2025-11-08T01:20:15.650372322Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"71140fbb207e0bcb7b820b7bb714e68bc014a20f50dbfb3ccb1046f552d5ce56\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Nov 8 01:20:15.650419 containerd[1823]: time="2025-11-08T01:20:15.650398358Z" level=info msg="RemovePodSandbox \"71140fbb207e0bcb7b820b7bb714e68bc014a20f50dbfb3ccb1046f552d5ce56\" returns successfully" Nov 8 01:20:15.650679 containerd[1823]: time="2025-11-08T01:20:15.650667510Z" level=info msg="StopPodSandbox for \"c23ccb3f481f7ac2cb70e2a185fddcffcc01a9e22b20d93516dd42cd04bf7e4e\"" Nov 8 01:20:15.687356 containerd[1823]: 2025-11-08 01:20:15.669 [WARNING][6436] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="c23ccb3f481f7ac2cb70e2a185fddcffcc01a9e22b20d93516dd42cd04bf7e4e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--8acfe54808-k8s-csi--node--driver--rhnl5-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b57117d3-8237-4f8a-aa85-534eb9568949", ResourceVersion:"1061", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 1, 19, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-8acfe54808", ContainerID:"7db6b3d4272c53227d46d93c72420841fdba3c5ddde35a9378c4c7000f1c6eb0", Pod:"csi-node-driver-rhnl5", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.24.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali73a95ba6d53", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 01:20:15.687356 containerd[1823]: 2025-11-08 01:20:15.669 [INFO][6436] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c23ccb3f481f7ac2cb70e2a185fddcffcc01a9e22b20d93516dd42cd04bf7e4e" Nov 8 01:20:15.687356 containerd[1823]: 2025-11-08 01:20:15.669 [INFO][6436] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c23ccb3f481f7ac2cb70e2a185fddcffcc01a9e22b20d93516dd42cd04bf7e4e" iface="eth0" netns="" Nov 8 01:20:15.687356 containerd[1823]: 2025-11-08 01:20:15.669 [INFO][6436] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c23ccb3f481f7ac2cb70e2a185fddcffcc01a9e22b20d93516dd42cd04bf7e4e" Nov 8 01:20:15.687356 containerd[1823]: 2025-11-08 01:20:15.669 [INFO][6436] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c23ccb3f481f7ac2cb70e2a185fddcffcc01a9e22b20d93516dd42cd04bf7e4e" Nov 8 01:20:15.687356 containerd[1823]: 2025-11-08 01:20:15.679 [INFO][6450] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="c23ccb3f481f7ac2cb70e2a185fddcffcc01a9e22b20d93516dd42cd04bf7e4e" HandleID="k8s-pod-network.c23ccb3f481f7ac2cb70e2a185fddcffcc01a9e22b20d93516dd42cd04bf7e4e" Workload="ci--4081.3.6--n--8acfe54808-k8s-csi--node--driver--rhnl5-eth0" Nov 8 01:20:15.687356 containerd[1823]: 2025-11-08 01:20:15.679 [INFO][6450] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 01:20:15.687356 containerd[1823]: 2025-11-08 01:20:15.679 [INFO][6450] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 01:20:15.687356 containerd[1823]: 2025-11-08 01:20:15.684 [WARNING][6450] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c23ccb3f481f7ac2cb70e2a185fddcffcc01a9e22b20d93516dd42cd04bf7e4e" HandleID="k8s-pod-network.c23ccb3f481f7ac2cb70e2a185fddcffcc01a9e22b20d93516dd42cd04bf7e4e" Workload="ci--4081.3.6--n--8acfe54808-k8s-csi--node--driver--rhnl5-eth0" Nov 8 01:20:15.687356 containerd[1823]: 2025-11-08 01:20:15.684 [INFO][6450] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="c23ccb3f481f7ac2cb70e2a185fddcffcc01a9e22b20d93516dd42cd04bf7e4e" HandleID="k8s-pod-network.c23ccb3f481f7ac2cb70e2a185fddcffcc01a9e22b20d93516dd42cd04bf7e4e" Workload="ci--4081.3.6--n--8acfe54808-k8s-csi--node--driver--rhnl5-eth0" Nov 8 01:20:15.687356 containerd[1823]: 2025-11-08 01:20:15.685 [INFO][6450] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 01:20:15.687356 containerd[1823]: 2025-11-08 01:20:15.686 [INFO][6436] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c23ccb3f481f7ac2cb70e2a185fddcffcc01a9e22b20d93516dd42cd04bf7e4e" Nov 8 01:20:15.687356 containerd[1823]: time="2025-11-08T01:20:15.687348771Z" level=info msg="TearDown network for sandbox \"c23ccb3f481f7ac2cb70e2a185fddcffcc01a9e22b20d93516dd42cd04bf7e4e\" successfully" Nov 8 01:20:15.687689 containerd[1823]: time="2025-11-08T01:20:15.687365665Z" level=info msg="StopPodSandbox for \"c23ccb3f481f7ac2cb70e2a185fddcffcc01a9e22b20d93516dd42cd04bf7e4e\" returns successfully" Nov 8 01:20:15.687689 containerd[1823]: time="2025-11-08T01:20:15.687635309Z" level=info msg="RemovePodSandbox for \"c23ccb3f481f7ac2cb70e2a185fddcffcc01a9e22b20d93516dd42cd04bf7e4e\"" Nov 8 01:20:15.687689 containerd[1823]: time="2025-11-08T01:20:15.687651575Z" level=info msg="Forcibly stopping sandbox \"c23ccb3f481f7ac2cb70e2a185fddcffcc01a9e22b20d93516dd42cd04bf7e4e\"" Nov 8 01:20:15.730645 containerd[1823]: 2025-11-08 01:20:15.713 [WARNING][6475] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c23ccb3f481f7ac2cb70e2a185fddcffcc01a9e22b20d93516dd42cd04bf7e4e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--8acfe54808-k8s-csi--node--driver--rhnl5-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b57117d3-8237-4f8a-aa85-534eb9568949", ResourceVersion:"1061", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 1, 19, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-8acfe54808", ContainerID:"7db6b3d4272c53227d46d93c72420841fdba3c5ddde35a9378c4c7000f1c6eb0", Pod:"csi-node-driver-rhnl5", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.24.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali73a95ba6d53", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 01:20:15.730645 containerd[1823]: 2025-11-08 01:20:15.713 [INFO][6475] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c23ccb3f481f7ac2cb70e2a185fddcffcc01a9e22b20d93516dd42cd04bf7e4e" Nov 8 01:20:15.730645 containerd[1823]: 2025-11-08 01:20:15.713 [INFO][6475] cni-plugin/dataplane_linux.go 555: CleanUpNamespace 
called with no netns name, ignoring. ContainerID="c23ccb3f481f7ac2cb70e2a185fddcffcc01a9e22b20d93516dd42cd04bf7e4e" iface="eth0" netns="" Nov 8 01:20:15.730645 containerd[1823]: 2025-11-08 01:20:15.713 [INFO][6475] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c23ccb3f481f7ac2cb70e2a185fddcffcc01a9e22b20d93516dd42cd04bf7e4e" Nov 8 01:20:15.730645 containerd[1823]: 2025-11-08 01:20:15.713 [INFO][6475] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c23ccb3f481f7ac2cb70e2a185fddcffcc01a9e22b20d93516dd42cd04bf7e4e" Nov 8 01:20:15.730645 containerd[1823]: 2025-11-08 01:20:15.723 [INFO][6492] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="c23ccb3f481f7ac2cb70e2a185fddcffcc01a9e22b20d93516dd42cd04bf7e4e" HandleID="k8s-pod-network.c23ccb3f481f7ac2cb70e2a185fddcffcc01a9e22b20d93516dd42cd04bf7e4e" Workload="ci--4081.3.6--n--8acfe54808-k8s-csi--node--driver--rhnl5-eth0" Nov 8 01:20:15.730645 containerd[1823]: 2025-11-08 01:20:15.723 [INFO][6492] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 01:20:15.730645 containerd[1823]: 2025-11-08 01:20:15.723 [INFO][6492] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 01:20:15.730645 containerd[1823]: 2025-11-08 01:20:15.727 [WARNING][6492] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c23ccb3f481f7ac2cb70e2a185fddcffcc01a9e22b20d93516dd42cd04bf7e4e" HandleID="k8s-pod-network.c23ccb3f481f7ac2cb70e2a185fddcffcc01a9e22b20d93516dd42cd04bf7e4e" Workload="ci--4081.3.6--n--8acfe54808-k8s-csi--node--driver--rhnl5-eth0" Nov 8 01:20:15.730645 containerd[1823]: 2025-11-08 01:20:15.727 [INFO][6492] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="c23ccb3f481f7ac2cb70e2a185fddcffcc01a9e22b20d93516dd42cd04bf7e4e" HandleID="k8s-pod-network.c23ccb3f481f7ac2cb70e2a185fddcffcc01a9e22b20d93516dd42cd04bf7e4e" Workload="ci--4081.3.6--n--8acfe54808-k8s-csi--node--driver--rhnl5-eth0" Nov 8 01:20:15.730645 containerd[1823]: 2025-11-08 01:20:15.729 [INFO][6492] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 01:20:15.730645 containerd[1823]: 2025-11-08 01:20:15.729 [INFO][6475] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c23ccb3f481f7ac2cb70e2a185fddcffcc01a9e22b20d93516dd42cd04bf7e4e" Nov 8 01:20:15.730645 containerd[1823]: time="2025-11-08T01:20:15.730621925Z" level=info msg="TearDown network for sandbox \"c23ccb3f481f7ac2cb70e2a185fddcffcc01a9e22b20d93516dd42cd04bf7e4e\" successfully" Nov 8 01:20:15.732196 containerd[1823]: time="2025-11-08T01:20:15.732153542Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c23ccb3f481f7ac2cb70e2a185fddcffcc01a9e22b20d93516dd42cd04bf7e4e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Nov 8 01:20:15.732196 containerd[1823]: time="2025-11-08T01:20:15.732178930Z" level=info msg="RemovePodSandbox \"c23ccb3f481f7ac2cb70e2a185fddcffcc01a9e22b20d93516dd42cd04bf7e4e\" returns successfully" Nov 8 01:20:15.732445 containerd[1823]: time="2025-11-08T01:20:15.732400805Z" level=info msg="StopPodSandbox for \"d69af856e8d666ad0a915f1771cea05f0f3849c007e6a6a72c76b07367eb6938\"" Nov 8 01:20:15.771047 containerd[1823]: 2025-11-08 01:20:15.752 [WARNING][6520] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="d69af856e8d666ad0a915f1771cea05f0f3849c007e6a6a72c76b07367eb6938" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--8acfe54808-k8s-calico--apiserver--d6d97687b--82v26-eth0", GenerateName:"calico-apiserver-d6d97687b-", Namespace:"calico-apiserver", SelfLink:"", UID:"e679c708-5dc2-455f-8f76-0a5b47442761", ResourceVersion:"1007", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 1, 19, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"d6d97687b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-8acfe54808", ContainerID:"c605a94ae9038480f73ffad235af3591e3af4383e17f31600daa51cda4eee32d", Pod:"calico-apiserver-d6d97687b-82v26", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.24.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"califb795ee947e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 01:20:15.771047 containerd[1823]: 2025-11-08 01:20:15.753 [INFO][6520] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d69af856e8d666ad0a915f1771cea05f0f3849c007e6a6a72c76b07367eb6938" Nov 8 01:20:15.771047 containerd[1823]: 2025-11-08 01:20:15.753 [INFO][6520] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d69af856e8d666ad0a915f1771cea05f0f3849c007e6a6a72c76b07367eb6938" iface="eth0" netns="" Nov 8 01:20:15.771047 containerd[1823]: 2025-11-08 01:20:15.753 [INFO][6520] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d69af856e8d666ad0a915f1771cea05f0f3849c007e6a6a72c76b07367eb6938" Nov 8 01:20:15.771047 containerd[1823]: 2025-11-08 01:20:15.753 [INFO][6520] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d69af856e8d666ad0a915f1771cea05f0f3849c007e6a6a72c76b07367eb6938" Nov 8 01:20:15.771047 containerd[1823]: 2025-11-08 01:20:15.763 [INFO][6535] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="d69af856e8d666ad0a915f1771cea05f0f3849c007e6a6a72c76b07367eb6938" HandleID="k8s-pod-network.d69af856e8d666ad0a915f1771cea05f0f3849c007e6a6a72c76b07367eb6938" Workload="ci--4081.3.6--n--8acfe54808-k8s-calico--apiserver--d6d97687b--82v26-eth0" Nov 8 01:20:15.771047 containerd[1823]: 2025-11-08 01:20:15.763 [INFO][6535] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 01:20:15.771047 containerd[1823]: 2025-11-08 01:20:15.763 [INFO][6535] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 01:20:15.771047 containerd[1823]: 2025-11-08 01:20:15.767 [WARNING][6535] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d69af856e8d666ad0a915f1771cea05f0f3849c007e6a6a72c76b07367eb6938" HandleID="k8s-pod-network.d69af856e8d666ad0a915f1771cea05f0f3849c007e6a6a72c76b07367eb6938" Workload="ci--4081.3.6--n--8acfe54808-k8s-calico--apiserver--d6d97687b--82v26-eth0" Nov 8 01:20:15.771047 containerd[1823]: 2025-11-08 01:20:15.767 [INFO][6535] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="d69af856e8d666ad0a915f1771cea05f0f3849c007e6a6a72c76b07367eb6938" HandleID="k8s-pod-network.d69af856e8d666ad0a915f1771cea05f0f3849c007e6a6a72c76b07367eb6938" Workload="ci--4081.3.6--n--8acfe54808-k8s-calico--apiserver--d6d97687b--82v26-eth0" Nov 8 01:20:15.771047 containerd[1823]: 2025-11-08 01:20:15.769 [INFO][6535] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 01:20:15.771047 containerd[1823]: 2025-11-08 01:20:15.770 [INFO][6520] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d69af856e8d666ad0a915f1771cea05f0f3849c007e6a6a72c76b07367eb6938" Nov 8 01:20:15.771333 containerd[1823]: time="2025-11-08T01:20:15.771051884Z" level=info msg="TearDown network for sandbox \"d69af856e8d666ad0a915f1771cea05f0f3849c007e6a6a72c76b07367eb6938\" successfully" Nov 8 01:20:15.771333 containerd[1823]: time="2025-11-08T01:20:15.771067404Z" level=info msg="StopPodSandbox for \"d69af856e8d666ad0a915f1771cea05f0f3849c007e6a6a72c76b07367eb6938\" returns successfully" Nov 8 01:20:15.771372 containerd[1823]: time="2025-11-08T01:20:15.771336770Z" level=info msg="RemovePodSandbox for \"d69af856e8d666ad0a915f1771cea05f0f3849c007e6a6a72c76b07367eb6938\"" Nov 8 01:20:15.771372 containerd[1823]: time="2025-11-08T01:20:15.771353183Z" level=info msg="Forcibly stopping sandbox \"d69af856e8d666ad0a915f1771cea05f0f3849c007e6a6a72c76b07367eb6938\"" Nov 8 01:20:15.807157 containerd[1823]: 2025-11-08 01:20:15.789 [WARNING][6561] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d69af856e8d666ad0a915f1771cea05f0f3849c007e6a6a72c76b07367eb6938" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--8acfe54808-k8s-calico--apiserver--d6d97687b--82v26-eth0", GenerateName:"calico-apiserver-d6d97687b-", Namespace:"calico-apiserver", SelfLink:"", UID:"e679c708-5dc2-455f-8f76-0a5b47442761", ResourceVersion:"1007", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 1, 19, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"d6d97687b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-8acfe54808", ContainerID:"c605a94ae9038480f73ffad235af3591e3af4383e17f31600daa51cda4eee32d", Pod:"calico-apiserver-d6d97687b-82v26", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.24.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"califb795ee947e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 01:20:15.807157 containerd[1823]: 2025-11-08 01:20:15.789 [INFO][6561] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d69af856e8d666ad0a915f1771cea05f0f3849c007e6a6a72c76b07367eb6938" Nov 8 01:20:15.807157 containerd[1823]: 2025-11-08 01:20:15.789 [INFO][6561] cni-plugin/dataplane_linux.go 555: CleanUpNamespace 
called with no netns name, ignoring. ContainerID="d69af856e8d666ad0a915f1771cea05f0f3849c007e6a6a72c76b07367eb6938" iface="eth0" netns="" Nov 8 01:20:15.807157 containerd[1823]: 2025-11-08 01:20:15.789 [INFO][6561] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d69af856e8d666ad0a915f1771cea05f0f3849c007e6a6a72c76b07367eb6938" Nov 8 01:20:15.807157 containerd[1823]: 2025-11-08 01:20:15.789 [INFO][6561] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d69af856e8d666ad0a915f1771cea05f0f3849c007e6a6a72c76b07367eb6938" Nov 8 01:20:15.807157 containerd[1823]: 2025-11-08 01:20:15.799 [INFO][6579] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="d69af856e8d666ad0a915f1771cea05f0f3849c007e6a6a72c76b07367eb6938" HandleID="k8s-pod-network.d69af856e8d666ad0a915f1771cea05f0f3849c007e6a6a72c76b07367eb6938" Workload="ci--4081.3.6--n--8acfe54808-k8s-calico--apiserver--d6d97687b--82v26-eth0" Nov 8 01:20:15.807157 containerd[1823]: 2025-11-08 01:20:15.799 [INFO][6579] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 01:20:15.807157 containerd[1823]: 2025-11-08 01:20:15.799 [INFO][6579] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 01:20:15.807157 containerd[1823]: 2025-11-08 01:20:15.804 [WARNING][6579] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d69af856e8d666ad0a915f1771cea05f0f3849c007e6a6a72c76b07367eb6938" HandleID="k8s-pod-network.d69af856e8d666ad0a915f1771cea05f0f3849c007e6a6a72c76b07367eb6938" Workload="ci--4081.3.6--n--8acfe54808-k8s-calico--apiserver--d6d97687b--82v26-eth0" Nov 8 01:20:15.807157 containerd[1823]: 2025-11-08 01:20:15.804 [INFO][6579] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="d69af856e8d666ad0a915f1771cea05f0f3849c007e6a6a72c76b07367eb6938" HandleID="k8s-pod-network.d69af856e8d666ad0a915f1771cea05f0f3849c007e6a6a72c76b07367eb6938" Workload="ci--4081.3.6--n--8acfe54808-k8s-calico--apiserver--d6d97687b--82v26-eth0" Nov 8 01:20:15.807157 containerd[1823]: 2025-11-08 01:20:15.805 [INFO][6579] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 01:20:15.807157 containerd[1823]: 2025-11-08 01:20:15.806 [INFO][6561] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d69af856e8d666ad0a915f1771cea05f0f3849c007e6a6a72c76b07367eb6938" Nov 8 01:20:15.807157 containerd[1823]: time="2025-11-08T01:20:15.807107342Z" level=info msg="TearDown network for sandbox \"d69af856e8d666ad0a915f1771cea05f0f3849c007e6a6a72c76b07367eb6938\" successfully" Nov 8 01:20:15.808623 containerd[1823]: time="2025-11-08T01:20:15.808582182Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d69af856e8d666ad0a915f1771cea05f0f3849c007e6a6a72c76b07367eb6938\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Nov 8 01:20:15.808623 containerd[1823]: time="2025-11-08T01:20:15.808607430Z" level=info msg="RemovePodSandbox \"d69af856e8d666ad0a915f1771cea05f0f3849c007e6a6a72c76b07367eb6938\" returns successfully" Nov 8 01:20:15.808861 containerd[1823]: time="2025-11-08T01:20:15.808848883Z" level=info msg="StopPodSandbox for \"1cfd32b089cc048b4b8713dc0807cbb25825434bad6866a627652e142b11e21f\"" Nov 8 01:20:15.844486 containerd[1823]: 2025-11-08 01:20:15.827 [WARNING][6605] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="1cfd32b089cc048b4b8713dc0807cbb25825434bad6866a627652e142b11e21f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--8acfe54808-k8s-coredns--668d6bf9bc--8s8cb-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"94b31311-6cc0-4ac6-9640-c851d1b5747b", ResourceVersion:"998", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 1, 19, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-8acfe54808", ContainerID:"45ae86a40a9555be61bcf71e24b58ca6701875e1c76202efc90ff6ed741e1d2b", Pod:"coredns-668d6bf9bc-8s8cb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.24.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibfe1b8f9d64", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 01:20:15.844486 containerd[1823]: 2025-11-08 01:20:15.827 [INFO][6605] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1cfd32b089cc048b4b8713dc0807cbb25825434bad6866a627652e142b11e21f" Nov 8 01:20:15.844486 containerd[1823]: 2025-11-08 01:20:15.827 [INFO][6605] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1cfd32b089cc048b4b8713dc0807cbb25825434bad6866a627652e142b11e21f" iface="eth0" netns="" Nov 8 01:20:15.844486 containerd[1823]: 2025-11-08 01:20:15.827 [INFO][6605] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1cfd32b089cc048b4b8713dc0807cbb25825434bad6866a627652e142b11e21f" Nov 8 01:20:15.844486 containerd[1823]: 2025-11-08 01:20:15.827 [INFO][6605] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1cfd32b089cc048b4b8713dc0807cbb25825434bad6866a627652e142b11e21f" Nov 8 01:20:15.844486 containerd[1823]: 2025-11-08 01:20:15.837 [INFO][6619] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="1cfd32b089cc048b4b8713dc0807cbb25825434bad6866a627652e142b11e21f" HandleID="k8s-pod-network.1cfd32b089cc048b4b8713dc0807cbb25825434bad6866a627652e142b11e21f" Workload="ci--4081.3.6--n--8acfe54808-k8s-coredns--668d6bf9bc--8s8cb-eth0" Nov 8 01:20:15.844486 containerd[1823]: 2025-11-08 01:20:15.837 [INFO][6619] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Nov 8 01:20:15.844486 containerd[1823]: 2025-11-08 01:20:15.837 [INFO][6619] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 01:20:15.844486 containerd[1823]: 2025-11-08 01:20:15.841 [WARNING][6619] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="1cfd32b089cc048b4b8713dc0807cbb25825434bad6866a627652e142b11e21f" HandleID="k8s-pod-network.1cfd32b089cc048b4b8713dc0807cbb25825434bad6866a627652e142b11e21f" Workload="ci--4081.3.6--n--8acfe54808-k8s-coredns--668d6bf9bc--8s8cb-eth0" Nov 8 01:20:15.844486 containerd[1823]: 2025-11-08 01:20:15.841 [INFO][6619] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="1cfd32b089cc048b4b8713dc0807cbb25825434bad6866a627652e142b11e21f" HandleID="k8s-pod-network.1cfd32b089cc048b4b8713dc0807cbb25825434bad6866a627652e142b11e21f" Workload="ci--4081.3.6--n--8acfe54808-k8s-coredns--668d6bf9bc--8s8cb-eth0" Nov 8 01:20:15.844486 containerd[1823]: 2025-11-08 01:20:15.842 [INFO][6619] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 01:20:15.844486 containerd[1823]: 2025-11-08 01:20:15.843 [INFO][6605] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="1cfd32b089cc048b4b8713dc0807cbb25825434bad6866a627652e142b11e21f" Nov 8 01:20:15.844779 containerd[1823]: time="2025-11-08T01:20:15.844486395Z" level=info msg="TearDown network for sandbox \"1cfd32b089cc048b4b8713dc0807cbb25825434bad6866a627652e142b11e21f\" successfully" Nov 8 01:20:15.844779 containerd[1823]: time="2025-11-08T01:20:15.844503143Z" level=info msg="StopPodSandbox for \"1cfd32b089cc048b4b8713dc0807cbb25825434bad6866a627652e142b11e21f\" returns successfully" Nov 8 01:20:15.844812 containerd[1823]: time="2025-11-08T01:20:15.844791731Z" level=info msg="RemovePodSandbox for \"1cfd32b089cc048b4b8713dc0807cbb25825434bad6866a627652e142b11e21f\"" Nov 8 01:20:15.844812 containerd[1823]: time="2025-11-08T01:20:15.844807771Z" level=info msg="Forcibly stopping sandbox \"1cfd32b089cc048b4b8713dc0807cbb25825434bad6866a627652e142b11e21f\"" Nov 8 01:20:15.879396 containerd[1823]: 2025-11-08 01:20:15.862 [WARNING][6646] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1cfd32b089cc048b4b8713dc0807cbb25825434bad6866a627652e142b11e21f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--8acfe54808-k8s-coredns--668d6bf9bc--8s8cb-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"94b31311-6cc0-4ac6-9640-c851d1b5747b", ResourceVersion:"998", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 1, 19, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-8acfe54808", ContainerID:"45ae86a40a9555be61bcf71e24b58ca6701875e1c76202efc90ff6ed741e1d2b", Pod:"coredns-668d6bf9bc-8s8cb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.24.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibfe1b8f9d64", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 01:20:15.879396 containerd[1823]: 2025-11-08 
01:20:15.862 [INFO][6646] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1cfd32b089cc048b4b8713dc0807cbb25825434bad6866a627652e142b11e21f" Nov 8 01:20:15.879396 containerd[1823]: 2025-11-08 01:20:15.863 [INFO][6646] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1cfd32b089cc048b4b8713dc0807cbb25825434bad6866a627652e142b11e21f" iface="eth0" netns="" Nov 8 01:20:15.879396 containerd[1823]: 2025-11-08 01:20:15.863 [INFO][6646] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1cfd32b089cc048b4b8713dc0807cbb25825434bad6866a627652e142b11e21f" Nov 8 01:20:15.879396 containerd[1823]: 2025-11-08 01:20:15.863 [INFO][6646] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1cfd32b089cc048b4b8713dc0807cbb25825434bad6866a627652e142b11e21f" Nov 8 01:20:15.879396 containerd[1823]: 2025-11-08 01:20:15.872 [INFO][6661] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="1cfd32b089cc048b4b8713dc0807cbb25825434bad6866a627652e142b11e21f" HandleID="k8s-pod-network.1cfd32b089cc048b4b8713dc0807cbb25825434bad6866a627652e142b11e21f" Workload="ci--4081.3.6--n--8acfe54808-k8s-coredns--668d6bf9bc--8s8cb-eth0" Nov 8 01:20:15.879396 containerd[1823]: 2025-11-08 01:20:15.873 [INFO][6661] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 01:20:15.879396 containerd[1823]: 2025-11-08 01:20:15.873 [INFO][6661] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 01:20:15.879396 containerd[1823]: 2025-11-08 01:20:15.876 [WARNING][6661] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1cfd32b089cc048b4b8713dc0807cbb25825434bad6866a627652e142b11e21f" HandleID="k8s-pod-network.1cfd32b089cc048b4b8713dc0807cbb25825434bad6866a627652e142b11e21f" Workload="ci--4081.3.6--n--8acfe54808-k8s-coredns--668d6bf9bc--8s8cb-eth0" Nov 8 01:20:15.879396 containerd[1823]: 2025-11-08 01:20:15.876 [INFO][6661] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="1cfd32b089cc048b4b8713dc0807cbb25825434bad6866a627652e142b11e21f" HandleID="k8s-pod-network.1cfd32b089cc048b4b8713dc0807cbb25825434bad6866a627652e142b11e21f" Workload="ci--4081.3.6--n--8acfe54808-k8s-coredns--668d6bf9bc--8s8cb-eth0" Nov 8 01:20:15.879396 containerd[1823]: 2025-11-08 01:20:15.878 [INFO][6661] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 01:20:15.879396 containerd[1823]: 2025-11-08 01:20:15.878 [INFO][6646] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1cfd32b089cc048b4b8713dc0807cbb25825434bad6866a627652e142b11e21f" Nov 8 01:20:15.879700 containerd[1823]: time="2025-11-08T01:20:15.879437554Z" level=info msg="TearDown network for sandbox \"1cfd32b089cc048b4b8713dc0807cbb25825434bad6866a627652e142b11e21f\" successfully" Nov 8 01:20:15.880895 containerd[1823]: time="2025-11-08T01:20:15.880880772Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1cfd32b089cc048b4b8713dc0807cbb25825434bad6866a627652e142b11e21f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Nov 8 01:20:15.880924 containerd[1823]: time="2025-11-08T01:20:15.880905450Z" level=info msg="RemovePodSandbox \"1cfd32b089cc048b4b8713dc0807cbb25825434bad6866a627652e142b11e21f\" returns successfully" Nov 8 01:20:15.881172 containerd[1823]: time="2025-11-08T01:20:15.881156678Z" level=info msg="StopPodSandbox for \"621fe7e83147e59f8fe5d4e9291a93dbc64708c159488560d565d88b39042f30\"" Nov 8 01:20:15.915668 containerd[1823]: 2025-11-08 01:20:15.898 [WARNING][6685] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="621fe7e83147e59f8fe5d4e9291a93dbc64708c159488560d565d88b39042f30" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--8acfe54808-k8s-calico--apiserver--d6d97687b--lt4rt-eth0", GenerateName:"calico-apiserver-d6d97687b-", Namespace:"calico-apiserver", SelfLink:"", UID:"bb14ec11-f019-40a0-9b63-589cf025cfb4", ResourceVersion:"933", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 1, 19, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"d6d97687b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-8acfe54808", ContainerID:"2a025f2d10717f6bb95583ab354eb820bd1e127364b7b72643f26790f3a43925", Pod:"calico-apiserver-d6d97687b-lt4rt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.24.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calidd2f6e7c867", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 01:20:15.915668 containerd[1823]: 2025-11-08 01:20:15.898 [INFO][6685] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="621fe7e83147e59f8fe5d4e9291a93dbc64708c159488560d565d88b39042f30" Nov 8 01:20:15.915668 containerd[1823]: 2025-11-08 01:20:15.898 [INFO][6685] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="621fe7e83147e59f8fe5d4e9291a93dbc64708c159488560d565d88b39042f30" iface="eth0" netns="" Nov 8 01:20:15.915668 containerd[1823]: 2025-11-08 01:20:15.898 [INFO][6685] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="621fe7e83147e59f8fe5d4e9291a93dbc64708c159488560d565d88b39042f30" Nov 8 01:20:15.915668 containerd[1823]: 2025-11-08 01:20:15.898 [INFO][6685] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="621fe7e83147e59f8fe5d4e9291a93dbc64708c159488560d565d88b39042f30" Nov 8 01:20:15.915668 containerd[1823]: 2025-11-08 01:20:15.909 [INFO][6704] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="621fe7e83147e59f8fe5d4e9291a93dbc64708c159488560d565d88b39042f30" HandleID="k8s-pod-network.621fe7e83147e59f8fe5d4e9291a93dbc64708c159488560d565d88b39042f30" Workload="ci--4081.3.6--n--8acfe54808-k8s-calico--apiserver--d6d97687b--lt4rt-eth0" Nov 8 01:20:15.915668 containerd[1823]: 2025-11-08 01:20:15.909 [INFO][6704] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 01:20:15.915668 containerd[1823]: 2025-11-08 01:20:15.909 [INFO][6704] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 01:20:15.915668 containerd[1823]: 2025-11-08 01:20:15.913 [WARNING][6704] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="621fe7e83147e59f8fe5d4e9291a93dbc64708c159488560d565d88b39042f30" HandleID="k8s-pod-network.621fe7e83147e59f8fe5d4e9291a93dbc64708c159488560d565d88b39042f30" Workload="ci--4081.3.6--n--8acfe54808-k8s-calico--apiserver--d6d97687b--lt4rt-eth0" Nov 8 01:20:15.915668 containerd[1823]: 2025-11-08 01:20:15.913 [INFO][6704] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="621fe7e83147e59f8fe5d4e9291a93dbc64708c159488560d565d88b39042f30" HandleID="k8s-pod-network.621fe7e83147e59f8fe5d4e9291a93dbc64708c159488560d565d88b39042f30" Workload="ci--4081.3.6--n--8acfe54808-k8s-calico--apiserver--d6d97687b--lt4rt-eth0" Nov 8 01:20:15.915668 containerd[1823]: 2025-11-08 01:20:15.914 [INFO][6704] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 01:20:15.915668 containerd[1823]: 2025-11-08 01:20:15.914 [INFO][6685] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="621fe7e83147e59f8fe5d4e9291a93dbc64708c159488560d565d88b39042f30" Nov 8 01:20:15.915972 containerd[1823]: time="2025-11-08T01:20:15.915692614Z" level=info msg="TearDown network for sandbox \"621fe7e83147e59f8fe5d4e9291a93dbc64708c159488560d565d88b39042f30\" successfully" Nov 8 01:20:15.915972 containerd[1823]: time="2025-11-08T01:20:15.915709319Z" level=info msg="StopPodSandbox for \"621fe7e83147e59f8fe5d4e9291a93dbc64708c159488560d565d88b39042f30\" returns successfully" Nov 8 01:20:15.916005 containerd[1823]: time="2025-11-08T01:20:15.915972404Z" level=info msg="RemovePodSandbox for \"621fe7e83147e59f8fe5d4e9291a93dbc64708c159488560d565d88b39042f30\"" Nov 8 01:20:15.916005 containerd[1823]: time="2025-11-08T01:20:15.915988469Z" level=info msg="Forcibly stopping sandbox \"621fe7e83147e59f8fe5d4e9291a93dbc64708c159488560d565d88b39042f30\"" Nov 8 01:20:15.950363 containerd[1823]: 2025-11-08 01:20:15.933 [WARNING][6731] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="621fe7e83147e59f8fe5d4e9291a93dbc64708c159488560d565d88b39042f30" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--8acfe54808-k8s-calico--apiserver--d6d97687b--lt4rt-eth0", GenerateName:"calico-apiserver-d6d97687b-", Namespace:"calico-apiserver", SelfLink:"", UID:"bb14ec11-f019-40a0-9b63-589cf025cfb4", ResourceVersion:"933", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 1, 19, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"d6d97687b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-8acfe54808", ContainerID:"2a025f2d10717f6bb95583ab354eb820bd1e127364b7b72643f26790f3a43925", Pod:"calico-apiserver-d6d97687b-lt4rt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.24.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calidd2f6e7c867", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 01:20:15.950363 containerd[1823]: 2025-11-08 01:20:15.933 [INFO][6731] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="621fe7e83147e59f8fe5d4e9291a93dbc64708c159488560d565d88b39042f30" Nov 8 01:20:15.950363 containerd[1823]: 2025-11-08 01:20:15.933 [INFO][6731] cni-plugin/dataplane_linux.go 555: CleanUpNamespace 
called with no netns name, ignoring. ContainerID="621fe7e83147e59f8fe5d4e9291a93dbc64708c159488560d565d88b39042f30" iface="eth0" netns="" Nov 8 01:20:15.950363 containerd[1823]: 2025-11-08 01:20:15.933 [INFO][6731] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="621fe7e83147e59f8fe5d4e9291a93dbc64708c159488560d565d88b39042f30" Nov 8 01:20:15.950363 containerd[1823]: 2025-11-08 01:20:15.933 [INFO][6731] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="621fe7e83147e59f8fe5d4e9291a93dbc64708c159488560d565d88b39042f30" Nov 8 01:20:15.950363 containerd[1823]: 2025-11-08 01:20:15.943 [INFO][6749] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="621fe7e83147e59f8fe5d4e9291a93dbc64708c159488560d565d88b39042f30" HandleID="k8s-pod-network.621fe7e83147e59f8fe5d4e9291a93dbc64708c159488560d565d88b39042f30" Workload="ci--4081.3.6--n--8acfe54808-k8s-calico--apiserver--d6d97687b--lt4rt-eth0" Nov 8 01:20:15.950363 containerd[1823]: 2025-11-08 01:20:15.944 [INFO][6749] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 01:20:15.950363 containerd[1823]: 2025-11-08 01:20:15.944 [INFO][6749] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 01:20:15.950363 containerd[1823]: 2025-11-08 01:20:15.947 [WARNING][6749] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="621fe7e83147e59f8fe5d4e9291a93dbc64708c159488560d565d88b39042f30" HandleID="k8s-pod-network.621fe7e83147e59f8fe5d4e9291a93dbc64708c159488560d565d88b39042f30" Workload="ci--4081.3.6--n--8acfe54808-k8s-calico--apiserver--d6d97687b--lt4rt-eth0" Nov 8 01:20:15.950363 containerd[1823]: 2025-11-08 01:20:15.947 [INFO][6749] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="621fe7e83147e59f8fe5d4e9291a93dbc64708c159488560d565d88b39042f30" HandleID="k8s-pod-network.621fe7e83147e59f8fe5d4e9291a93dbc64708c159488560d565d88b39042f30" Workload="ci--4081.3.6--n--8acfe54808-k8s-calico--apiserver--d6d97687b--lt4rt-eth0" Nov 8 01:20:15.950363 containerd[1823]: 2025-11-08 01:20:15.948 [INFO][6749] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 01:20:15.950363 containerd[1823]: 2025-11-08 01:20:15.949 [INFO][6731] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="621fe7e83147e59f8fe5d4e9291a93dbc64708c159488560d565d88b39042f30" Nov 8 01:20:15.950673 containerd[1823]: time="2025-11-08T01:20:15.950392034Z" level=info msg="TearDown network for sandbox \"621fe7e83147e59f8fe5d4e9291a93dbc64708c159488560d565d88b39042f30\" successfully" Nov 8 01:20:15.951696 containerd[1823]: time="2025-11-08T01:20:15.951682941Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"621fe7e83147e59f8fe5d4e9291a93dbc64708c159488560d565d88b39042f30\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Nov 8 01:20:15.951723 containerd[1823]: time="2025-11-08T01:20:15.951710047Z" level=info msg="RemovePodSandbox \"621fe7e83147e59f8fe5d4e9291a93dbc64708c159488560d565d88b39042f30\" returns successfully" Nov 8 01:20:24.284678 kubelet[3076]: E1108 01:20:24.284549 3076 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d6d97687b-lt4rt" podUID="bb14ec11-f019-40a0-9b63-589cf025cfb4" Nov 8 01:20:24.286033 kubelet[3076]: E1108 01:20:24.285613 3076 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-rhnl5" podUID="b57117d3-8237-4f8a-aa85-534eb9568949" Nov 8 01:20:25.286061 kubelet[3076]: E1108 01:20:25.285956 3076 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-dmnhl" podUID="71f5fc0d-399b-4a93-8104-f8dd3ea1c5df" Nov 8 01:20:26.284173 kubelet[3076]: E1108 01:20:26.284060 3076 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d6d97687b-82v26" podUID="e679c708-5dc2-455f-8f76-0a5b47442761" Nov 8 01:20:27.285200 kubelet[3076]: E1108 01:20:27.285122 3076 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7c65d4465b-6mb2v" podUID="905477ed-861d-42e3-890a-431b3428dc2e" Nov 8 01:20:29.286009 containerd[1823]: time="2025-11-08T01:20:29.285880753Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 8 
01:20:29.641358 containerd[1823]: time="2025-11-08T01:20:29.641303561Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 01:20:29.641862 containerd[1823]: time="2025-11-08T01:20:29.641794390Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 8 01:20:29.641862 containerd[1823]: time="2025-11-08T01:20:29.641844968Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 8 01:20:29.642002 kubelet[3076]: E1108 01:20:29.641954 3076 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 01:20:29.642002 kubelet[3076]: E1108 01:20:29.641989 3076 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 01:20:29.642248 kubelet[3076]: E1108 01:20:29.642053 3076 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:f499b8e984d248faa7e4f138b84f7ce2,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kt56d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6668fb7f88-v6px7_calico-system(d30571cb-e438-453f-8b20-303beb52e470): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 8 01:20:29.643903 containerd[1823]: time="2025-11-08T01:20:29.643805120Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 8 
01:20:29.985488 containerd[1823]: time="2025-11-08T01:20:29.985387991Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 01:20:29.985971 containerd[1823]: time="2025-11-08T01:20:29.985912767Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 8 01:20:29.986006 containerd[1823]: time="2025-11-08T01:20:29.985965757Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 8 01:20:29.986125 kubelet[3076]: E1108 01:20:29.986076 3076 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 01:20:29.986125 kubelet[3076]: E1108 01:20:29.986109 3076 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 01:20:29.986198 kubelet[3076]: E1108 01:20:29.986179 3076 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kt56d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6668fb7f88-v6px7_calico-system(d30571cb-e438-453f-8b20-303beb52e470): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 8 01:20:29.987337 kubelet[3076]: E1108 01:20:29.987319 3076 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6668fb7f88-v6px7" podUID="d30571cb-e438-453f-8b20-303beb52e470" Nov 8 01:20:36.283408 containerd[1823]: time="2025-11-08T01:20:36.283373168Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 8 01:20:36.621361 containerd[1823]: time="2025-11-08T01:20:36.621298973Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 01:20:36.621811 containerd[1823]: time="2025-11-08T01:20:36.621762870Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 8 01:20:36.621865 containerd[1823]: time="2025-11-08T01:20:36.621816173Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 8 01:20:36.621952 kubelet[3076]: E1108 
01:20:36.621928 3076 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 01:20:36.622183 kubelet[3076]: E1108 01:20:36.621961 3076 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 01:20:36.622183 kubelet[3076]: E1108 01:20:36.622054 3076 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f8skx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,R
eadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-rhnl5_calico-system(b57117d3-8237-4f8a-aa85-534eb9568949): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 8 01:20:36.623764 containerd[1823]: time="2025-11-08T01:20:36.623734110Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 8 01:20:36.968344 containerd[1823]: time="2025-11-08T01:20:36.968083307Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 01:20:36.969060 containerd[1823]: time="2025-11-08T01:20:36.968975245Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 8 01:20:36.969060 containerd[1823]: time="2025-11-08T01:20:36.969041287Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 8 
01:20:36.969228 kubelet[3076]: E1108 01:20:36.969159 3076 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 01:20:36.969228 kubelet[3076]: E1108 01:20:36.969195 3076 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 01:20:36.969323 kubelet[3076]: E1108 01:20:36.969301 3076 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f8skx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-rhnl5_calico-system(b57117d3-8237-4f8a-aa85-534eb9568949): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 8 01:20:36.970429 kubelet[3076]: E1108 01:20:36.970413 3076 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-rhnl5" podUID="b57117d3-8237-4f8a-aa85-534eb9568949" Nov 8 01:20:37.285194 containerd[1823]: time="2025-11-08T01:20:37.284966944Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 01:20:37.624530 containerd[1823]: time="2025-11-08T01:20:37.624426972Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 01:20:37.625438 containerd[1823]: time="2025-11-08T01:20:37.625412927Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 01:20:37.625522 containerd[1823]: time="2025-11-08T01:20:37.625479021Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 01:20:37.625646 kubelet[3076]: E1108 01:20:37.625626 3076 
log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 01:20:37.625847 kubelet[3076]: E1108 01:20:37.625656 3076 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 01:20:37.625847 kubelet[3076]: E1108 01:20:37.625729 3076 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n5755,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-d6d97687b-lt4rt_calico-apiserver(bb14ec11-f019-40a0-9b63-589cf025cfb4): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 01:20:37.627788 kubelet[3076]: E1108 01:20:37.627733 3076 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d6d97687b-lt4rt" podUID="bb14ec11-f019-40a0-9b63-589cf025cfb4" Nov 8 01:20:38.283530 containerd[1823]: time="2025-11-08T01:20:38.283506927Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 8 01:20:38.654051 containerd[1823]: time="2025-11-08T01:20:38.653951813Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 01:20:38.655025 containerd[1823]: time="2025-11-08T01:20:38.654998728Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 8 01:20:38.655124 containerd[1823]: time="2025-11-08T01:20:38.655077914Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 8 01:20:38.655203 kubelet[3076]: E1108 01:20:38.655158 3076 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": 
ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 01:20:38.655203 kubelet[3076]: E1108 01:20:38.655187 3076 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 01:20:38.655357 kubelet[3076]: E1108 01:20:38.655274 3076 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t8jnb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubP
ath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-dmnhl_calico-system(71f5fc0d-399b-4a93-8104-f8dd3ea1c5df): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 8 01:20:38.656615 kubelet[3076]: E1108 01:20:38.656593 3076 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": 
ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-dmnhl" podUID="71f5fc0d-399b-4a93-8104-f8dd3ea1c5df" Nov 8 01:20:41.285329 containerd[1823]: time="2025-11-08T01:20:41.285244071Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 01:20:41.618239 containerd[1823]: time="2025-11-08T01:20:41.618112164Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 01:20:41.619013 containerd[1823]: time="2025-11-08T01:20:41.618934656Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 01:20:41.619013 containerd[1823]: time="2025-11-08T01:20:41.619002644Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 01:20:41.619173 kubelet[3076]: E1108 01:20:41.619116 3076 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 01:20:41.619173 kubelet[3076]: E1108 01:20:41.619151 3076 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 01:20:41.619435 kubelet[3076]: E1108 01:20:41.619307 3076 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ws2x2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-d6d97687b-82v26_calico-apiserver(e679c708-5dc2-455f-8f76-0a5b47442761): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 01:20:41.619543 containerd[1823]: time="2025-11-08T01:20:41.619372115Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 8 01:20:41.620496 kubelet[3076]: E1108 01:20:41.620477 3076 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d6d97687b-82v26" podUID="e679c708-5dc2-455f-8f76-0a5b47442761" Nov 8 01:20:41.987219 containerd[1823]: 
time="2025-11-08T01:20:41.986961410Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 01:20:41.987840 containerd[1823]: time="2025-11-08T01:20:41.987774115Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 8 01:20:41.987879 containerd[1823]: time="2025-11-08T01:20:41.987842128Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 8 01:20:41.987985 kubelet[3076]: E1108 01:20:41.987926 3076 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 01:20:41.987985 kubelet[3076]: E1108 01:20:41.987957 3076 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 01:20:41.988067 kubelet[3076]: E1108 01:20:41.988029 3076 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z5jc8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7c65d4465b-6mb2v_calico-system(905477ed-861d-42e3-890a-431b3428dc2e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 8 01:20:41.989181 kubelet[3076]: E1108 01:20:41.989168 3076 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7c65d4465b-6mb2v" podUID="905477ed-861d-42e3-890a-431b3428dc2e" Nov 8 01:20:42.285718 kubelet[3076]: E1108 01:20:42.285460 3076 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: 
\"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6668fb7f88-v6px7" podUID="d30571cb-e438-453f-8b20-303beb52e470" Nov 8 01:20:50.284923 kubelet[3076]: E1108 01:20:50.284790 3076 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-dmnhl" podUID="71f5fc0d-399b-4a93-8104-f8dd3ea1c5df" Nov 8 01:20:51.291358 kubelet[3076]: E1108 01:20:51.291273 3076 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: 
not found\"" pod="calico-apiserver/calico-apiserver-d6d97687b-lt4rt" podUID="bb14ec11-f019-40a0-9b63-589cf025cfb4" Nov 8 01:20:51.292426 kubelet[3076]: E1108 01:20:51.292013 3076 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-rhnl5" podUID="b57117d3-8237-4f8a-aa85-534eb9568949" Nov 8 01:20:56.284831 kubelet[3076]: E1108 01:20:56.284744 3076 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d6d97687b-82v26" podUID="e679c708-5dc2-455f-8f76-0a5b47442761" Nov 8 01:20:56.286290 kubelet[3076]: E1108 01:20:56.285658 3076 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with 
ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6668fb7f88-v6px7" podUID="d30571cb-e438-453f-8b20-303beb52e470" Nov 8 01:20:57.284951 kubelet[3076]: E1108 01:20:57.284863 3076 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7c65d4465b-6mb2v" podUID="905477ed-861d-42e3-890a-431b3428dc2e" Nov 8 01:21:04.285093 kubelet[3076]: E1108 01:21:04.284968 3076 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d6d97687b-lt4rt" podUID="bb14ec11-f019-40a0-9b63-589cf025cfb4" Nov 8 01:21:05.284128 kubelet[3076]: E1108 01:21:05.284075 3076 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-dmnhl" podUID="71f5fc0d-399b-4a93-8104-f8dd3ea1c5df" Nov 8 01:21:05.284594 kubelet[3076]: E1108 01:21:05.284542 3076 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-rhnl5" podUID="b57117d3-8237-4f8a-aa85-534eb9568949" Nov 8 01:21:08.283017 kubelet[3076]: E1108 01:21:08.282942 3076 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7c65d4465b-6mb2v" podUID="905477ed-861d-42e3-890a-431b3428dc2e" Nov 8 01:21:08.283340 kubelet[3076]: E1108 01:21:08.283246 3076 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6668fb7f88-v6px7" podUID="d30571cb-e438-453f-8b20-303beb52e470" Nov 8 01:21:09.284604 kubelet[3076]: E1108 01:21:09.284518 3076 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d6d97687b-82v26" podUID="e679c708-5dc2-455f-8f76-0a5b47442761" Nov 8 01:21:16.283143 kubelet[3076]: E1108 01:21:16.283097 3076 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-dmnhl" podUID="71f5fc0d-399b-4a93-8104-f8dd3ea1c5df" Nov 8 01:21:16.283143 kubelet[3076]: E1108 01:21:16.283097 3076 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d6d97687b-lt4rt" podUID="bb14ec11-f019-40a0-9b63-589cf025cfb4" Nov 8 01:21:16.283614 kubelet[3076]: E1108 01:21:16.283328 3076 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-rhnl5" podUID="b57117d3-8237-4f8a-aa85-534eb9568949" Nov 8 01:21:19.284017 containerd[1823]: time="2025-11-08T01:21:19.283926846Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 8 01:21:19.649003 containerd[1823]: time="2025-11-08T01:21:19.648940863Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 01:21:19.649555 containerd[1823]: time="2025-11-08T01:21:19.649459540Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 8 01:21:19.649555 containerd[1823]: time="2025-11-08T01:21:19.649488940Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 8 01:21:19.649709 kubelet[3076]: E1108 01:21:19.649671 3076 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 01:21:19.649924 kubelet[3076]: E1108 01:21:19.649715 3076 kuberuntime_image.go:55] "Failed to 
pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 01:21:19.649924 kubelet[3076]: E1108 01:21:19.649801 3076 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:f499b8e984d248faa7e4f138b84f7ce2,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kt56d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6668fb7f88-v6px7_calico-system(d30571cb-e438-453f-8b20-303beb52e470): ErrImagePull: rpc error: 
code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 8 01:21:19.651519 containerd[1823]: time="2025-11-08T01:21:19.651460218Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 8 01:21:20.013354 containerd[1823]: time="2025-11-08T01:21:20.013095585Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 01:21:20.018870 containerd[1823]: time="2025-11-08T01:21:20.018804545Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 8 01:21:20.018910 containerd[1823]: time="2025-11-08T01:21:20.018869949Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 8 01:21:20.019018 kubelet[3076]: E1108 01:21:20.018949 3076 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 01:21:20.019018 kubelet[3076]: E1108 01:21:20.018981 3076 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 01:21:20.019083 kubelet[3076]: E1108 01:21:20.019044 3076 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kt56d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in 
pod whisker-6668fb7f88-v6px7_calico-system(d30571cb-e438-453f-8b20-303beb52e470): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 8 01:21:20.020266 kubelet[3076]: E1108 01:21:20.020208 3076 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6668fb7f88-v6px7" podUID="d30571cb-e438-453f-8b20-303beb52e470" Nov 8 01:21:22.285348 containerd[1823]: time="2025-11-08T01:21:22.285220461Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 01:21:22.629503 containerd[1823]: time="2025-11-08T01:21:22.629420241Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 01:21:22.630386 containerd[1823]: time="2025-11-08T01:21:22.630309213Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 01:21:22.630386 containerd[1823]: 
time="2025-11-08T01:21:22.630375512Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 01:21:22.630577 kubelet[3076]: E1108 01:21:22.630526 3076 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 01:21:22.630577 kubelet[3076]: E1108 01:21:22.630558 3076 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 01:21:22.630785 kubelet[3076]: E1108 01:21:22.630631 3076 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ws2x2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-d6d97687b-82v26_calico-apiserver(e679c708-5dc2-455f-8f76-0a5b47442761): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 01:21:22.632359 kubelet[3076]: E1108 01:21:22.632320 3076 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d6d97687b-82v26" podUID="e679c708-5dc2-455f-8f76-0a5b47442761" Nov 8 01:21:23.285450 containerd[1823]: time="2025-11-08T01:21:23.285344696Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 8 01:21:23.633740 containerd[1823]: time="2025-11-08T01:21:23.633666409Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 01:21:23.634350 containerd[1823]: time="2025-11-08T01:21:23.634274798Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 8 01:21:23.634410 containerd[1823]: time="2025-11-08T01:21:23.634339694Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 8 01:21:23.634502 kubelet[3076]: E1108 01:21:23.634437 3076 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 01:21:23.634684 kubelet[3076]: E1108 01:21:23.634504 3076 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 01:21:23.634684 kubelet[3076]: E1108 01:21:23.634596 3076 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z5jc8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},
LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7c65d4465b-6mb2v_calico-system(905477ed-861d-42e3-890a-431b3428dc2e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 8 01:21:23.635709 kubelet[3076]: E1108 01:21:23.635676 3076 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7c65d4465b-6mb2v" podUID="905477ed-861d-42e3-890a-431b3428dc2e" Nov 8 01:21:28.283296 containerd[1823]: time="2025-11-08T01:21:28.283271895Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 8 01:21:28.630447 containerd[1823]: time="2025-11-08T01:21:28.630388246Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 01:21:28.635033 containerd[1823]: time="2025-11-08T01:21:28.634965101Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 8 01:21:28.635071 containerd[1823]: time="2025-11-08T01:21:28.635028200Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 8 01:21:28.635185 kubelet[3076]: E1108 01:21:28.635140 3076 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 01:21:28.635185 kubelet[3076]: E1108 01:21:28.635171 3076 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 01:21:28.635400 kubelet[3076]: E1108 01:21:28.635247 3076 
kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t8jnb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-dmnhl_calico-system(71f5fc0d-399b-4a93-8104-f8dd3ea1c5df): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 8 01:21:28.636521 kubelet[3076]: E1108 01:21:28.636453 3076 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-dmnhl" podUID="71f5fc0d-399b-4a93-8104-f8dd3ea1c5df" Nov 8 01:21:29.282697 containerd[1823]: time="2025-11-08T01:21:29.282653590Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 01:21:29.658040 containerd[1823]: time="2025-11-08T01:21:29.657951961Z" level=info msg="trying next host - response was 
http.StatusNotFound" host=ghcr.io Nov 8 01:21:29.659054 containerd[1823]: time="2025-11-08T01:21:29.659013664Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 01:21:29.659094 containerd[1823]: time="2025-11-08T01:21:29.659067711Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 01:21:29.659238 kubelet[3076]: E1108 01:21:29.659215 3076 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 01:21:29.659369 kubelet[3076]: E1108 01:21:29.659248 3076 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 01:21:29.659369 kubelet[3076]: E1108 01:21:29.659321 3076 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n5755,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-d6d97687b-lt4rt_calico-apiserver(bb14ec11-f019-40a0-9b63-589cf025cfb4): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 01:21:29.660566 kubelet[3076]: E1108 01:21:29.660525 3076 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d6d97687b-lt4rt" podUID="bb14ec11-f019-40a0-9b63-589cf025cfb4" Nov 8 01:21:31.285596 containerd[1823]: time="2025-11-08T01:21:31.285494797Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 8 01:21:31.636164 containerd[1823]: time="2025-11-08T01:21:31.636108622Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 01:21:31.636447 containerd[1823]: time="2025-11-08T01:21:31.636431023Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 8 01:21:31.636567 containerd[1823]: time="2025-11-08T01:21:31.636508807Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 8 01:21:31.636695 kubelet[3076]: E1108 01:21:31.636620 3076 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 01:21:31.636695 kubelet[3076]: E1108 01:21:31.636675 3076 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 01:21:31.636891 kubelet[3076]: E1108 01:21:31.636771 3076 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f8skx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeE
scalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-rhnl5_calico-system(b57117d3-8237-4f8a-aa85-534eb9568949): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 8 01:21:31.638374 containerd[1823]: time="2025-11-08T01:21:31.638343521Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 8 01:21:31.991154 containerd[1823]: time="2025-11-08T01:21:31.990888177Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 01:21:31.991798 containerd[1823]: time="2025-11-08T01:21:31.991708066Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 8 01:21:31.991889 containerd[1823]: time="2025-11-08T01:21:31.991799970Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 8 01:21:31.991988 kubelet[3076]: E1108 01:21:31.991913 3076 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 01:21:31.991988 kubelet[3076]: E1108 01:21:31.991982 3076 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 01:21:31.992158 kubelet[3076]: E1108 01:21:31.992105 3076 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f8skx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,Terminatio
nMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-rhnl5_calico-system(b57117d3-8237-4f8a-aa85-534eb9568949): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 8 01:21:31.993462 kubelet[3076]: E1108 01:21:31.993400 3076 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-rhnl5" podUID="b57117d3-8237-4f8a-aa85-534eb9568949" Nov 8 01:21:32.283391 kubelet[3076]: E1108 01:21:32.283252 3076 pod_workers.go:1301] "Error 
syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6668fb7f88-v6px7" podUID="d30571cb-e438-453f-8b20-303beb52e470" Nov 8 01:21:35.284938 kubelet[3076]: E1108 01:21:35.284874 3076 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7c65d4465b-6mb2v" podUID="905477ed-861d-42e3-890a-431b3428dc2e" Nov 8 01:21:38.284666 kubelet[3076]: E1108 01:21:38.284527 3076 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d6d97687b-82v26" podUID="e679c708-5dc2-455f-8f76-0a5b47442761" Nov 8 01:21:42.282557 kubelet[3076]: E1108 01:21:42.282505 3076 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-dmnhl" podUID="71f5fc0d-399b-4a93-8104-f8dd3ea1c5df" Nov 8 01:21:43.284250 kubelet[3076]: E1108 01:21:43.284161 3076 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-rhnl5" podUID="b57117d3-8237-4f8a-aa85-534eb9568949" Nov 8 01:21:44.284506 
kubelet[3076]: E1108 01:21:44.284415 3076 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d6d97687b-lt4rt" podUID="bb14ec11-f019-40a0-9b63-589cf025cfb4" Nov 8 01:21:47.285515 kubelet[3076]: E1108 01:21:47.285381 3076 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6668fb7f88-v6px7" podUID="d30571cb-e438-453f-8b20-303beb52e470" Nov 8 01:21:49.285286 kubelet[3076]: E1108 01:21:49.285194 3076 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7c65d4465b-6mb2v" podUID="905477ed-861d-42e3-890a-431b3428dc2e" Nov 8 01:21:53.285226 kubelet[3076]: E1108 01:21:53.285140 3076 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d6d97687b-82v26" podUID="e679c708-5dc2-455f-8f76-0a5b47442761" Nov 8 01:21:54.286330 kubelet[3076]: E1108 01:21:54.286228 3076 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" 
pod="calico-system/csi-node-driver-rhnl5" podUID="b57117d3-8237-4f8a-aa85-534eb9568949" Nov 8 01:21:56.282913 kubelet[3076]: E1108 01:21:56.282884 3076 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d6d97687b-lt4rt" podUID="bb14ec11-f019-40a0-9b63-589cf025cfb4" Nov 8 01:21:56.282913 kubelet[3076]: E1108 01:21:56.282896 3076 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-dmnhl" podUID="71f5fc0d-399b-4a93-8104-f8dd3ea1c5df" Nov 8 01:21:59.283884 kubelet[3076]: E1108 01:21:59.283803 3076 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6668fb7f88-v6px7" podUID="d30571cb-e438-453f-8b20-303beb52e470" Nov 8 01:22:01.285085 kubelet[3076]: E1108 01:22:01.284955 3076 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7c65d4465b-6mb2v" podUID="905477ed-861d-42e3-890a-431b3428dc2e" Nov 8 01:22:07.283049 kubelet[3076]: E1108 01:22:07.283016 3076 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-dmnhl" podUID="71f5fc0d-399b-4a93-8104-f8dd3ea1c5df" Nov 8 01:22:07.283049 kubelet[3076]: E1108 01:22:07.283035 3076 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": 
ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d6d97687b-82v26" podUID="e679c708-5dc2-455f-8f76-0a5b47442761" Nov 8 01:22:09.283322 kubelet[3076]: E1108 01:22:09.283272 3076 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-rhnl5" podUID="b57117d3-8237-4f8a-aa85-534eb9568949" Nov 8 01:22:10.284305 kubelet[3076]: E1108 01:22:10.284219 3076 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" 
pod="calico-apiserver/calico-apiserver-d6d97687b-lt4rt" podUID="bb14ec11-f019-40a0-9b63-589cf025cfb4" Nov 8 01:22:12.283900 kubelet[3076]: E1108 01:22:12.283862 3076 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6668fb7f88-v6px7" podUID="d30571cb-e438-453f-8b20-303beb52e470" Nov 8 01:22:13.283915 kubelet[3076]: E1108 01:22:13.283847 3076 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7c65d4465b-6mb2v" podUID="905477ed-861d-42e3-890a-431b3428dc2e" Nov 8 01:22:18.285203 kubelet[3076]: E1108 01:22:18.285093 3076 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with 
ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-dmnhl" podUID="71f5fc0d-399b-4a93-8104-f8dd3ea1c5df" Nov 8 01:22:19.285000 kubelet[3076]: E1108 01:22:19.284920 3076 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d6d97687b-82v26" podUID="e679c708-5dc2-455f-8f76-0a5b47442761" Nov 8 01:22:24.283237 kubelet[3076]: E1108 01:22:24.283208 3076 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7c65d4465b-6mb2v" podUID="905477ed-861d-42e3-890a-431b3428dc2e" Nov 8 01:22:24.283612 kubelet[3076]: E1108 01:22:24.283391 3076 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-rhnl5" podUID="b57117d3-8237-4f8a-aa85-534eb9568949" Nov 8 01:22:25.283390 kubelet[3076]: E1108 01:22:25.283340 3076 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d6d97687b-lt4rt" podUID="bb14ec11-f019-40a0-9b63-589cf025cfb4" Nov 8 01:22:26.285457 kubelet[3076]: E1108 01:22:26.285331 3076 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not 
found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6668fb7f88-v6px7" podUID="d30571cb-e438-453f-8b20-303beb52e470" Nov 8 01:22:33.284626 kubelet[3076]: E1108 01:22:33.284505 3076 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-dmnhl" podUID="71f5fc0d-399b-4a93-8104-f8dd3ea1c5df" Nov 8 01:22:34.284710 kubelet[3076]: E1108 01:22:34.284617 3076 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d6d97687b-82v26" podUID="e679c708-5dc2-455f-8f76-0a5b47442761" Nov 8 01:22:35.287304 kubelet[3076]: E1108 01:22:35.287196 3076 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off 
pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-rhnl5" podUID="b57117d3-8237-4f8a-aa85-534eb9568949" Nov 8 01:22:36.283270 kubelet[3076]: E1108 01:22:36.283244 3076 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7c65d4465b-6mb2v" podUID="905477ed-861d-42e3-890a-431b3428dc2e" Nov 8 01:22:40.285354 kubelet[3076]: E1108 01:22:40.285249 3076 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d6d97687b-lt4rt" podUID="bb14ec11-f019-40a0-9b63-589cf025cfb4" Nov 8 01:22:40.286651 containerd[1823]: time="2025-11-08T01:22:40.286056021Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 8 01:22:40.677666 containerd[1823]: time="2025-11-08T01:22:40.677569145Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 01:22:40.678603 containerd[1823]: time="2025-11-08T01:22:40.678575408Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 8 01:22:40.678660 containerd[1823]: time="2025-11-08T01:22:40.678627332Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 8 01:22:40.678743 kubelet[3076]: E1108 01:22:40.678719 3076 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 01:22:40.678787 kubelet[3076]: E1108 01:22:40.678752 3076 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 01:22:40.678857 kubelet[3076]: E1108 01:22:40.678837 3076 kuberuntime_manager.go:1341] "Unhandled 
Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:f499b8e984d248faa7e4f138b84f7ce2,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kt56d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6668fb7f88-v6px7_calico-system(d30571cb-e438-453f-8b20-303beb52e470): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 8 01:22:40.680354 containerd[1823]: time="2025-11-08T01:22:40.680339771Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 8 01:22:41.037249 containerd[1823]: time="2025-11-08T01:22:41.036985314Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 01:22:41.037826 containerd[1823]: time="2025-11-08T01:22:41.037753423Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 8 01:22:41.037864 containerd[1823]: time="2025-11-08T01:22:41.037816953Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 8 01:22:41.037977 kubelet[3076]: E1108 01:22:41.037926 3076 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 01:22:41.037977 kubelet[3076]: E1108 01:22:41.037956 3076 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 01:22:41.038078 kubelet[3076]: E1108 01:22:41.038025 3076 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kt56d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6668fb7f88-v6px7_calico-system(d30571cb-e438-453f-8b20-303beb52e470): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 8 01:22:41.039258 kubelet[3076]: E1108 01:22:41.039210 3076 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6668fb7f88-v6px7" podUID="d30571cb-e438-453f-8b20-303beb52e470" Nov 8 01:22:46.283592 kubelet[3076]: E1108 01:22:46.283566 3076 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-rhnl5" podUID="b57117d3-8237-4f8a-aa85-534eb9568949" Nov 8 01:22:48.288449 kubelet[3076]: E1108 01:22:48.288098 3076 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-dmnhl" podUID="71f5fc0d-399b-4a93-8104-f8dd3ea1c5df" Nov 8 01:22:49.285032 containerd[1823]: time="2025-11-08T01:22:49.285009528Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 8 01:22:49.677225 containerd[1823]: time="2025-11-08T01:22:49.677089457Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 01:22:49.678072 containerd[1823]: time="2025-11-08T01:22:49.678002999Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 8 01:22:49.678109 containerd[1823]: time="2025-11-08T01:22:49.678072644Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 8 01:22:49.678207 kubelet[3076]: E1108 01:22:49.678152 3076 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 01:22:49.678207 kubelet[3076]: E1108 01:22:49.678185 3076 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 01:22:49.678411 kubelet[3076]: E1108 01:22:49.678330 3076 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z5jc8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},
LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7c65d4465b-6mb2v_calico-system(905477ed-861d-42e3-890a-431b3428dc2e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 8 01:22:49.678479 containerd[1823]: time="2025-11-08T01:22:49.678383445Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 01:22:49.679505 kubelet[3076]: E1108 01:22:49.679469 3076 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and 
unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7c65d4465b-6mb2v" podUID="905477ed-861d-42e3-890a-431b3428dc2e" Nov 8 01:22:50.046760 containerd[1823]: time="2025-11-08T01:22:50.046497987Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 01:22:50.047482 containerd[1823]: time="2025-11-08T01:22:50.047445088Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 01:22:50.047533 containerd[1823]: time="2025-11-08T01:22:50.047516235Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 01:22:50.047649 kubelet[3076]: E1108 01:22:50.047607 3076 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 01:22:50.047649 kubelet[3076]: E1108 01:22:50.047638 3076 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 01:22:50.047789 kubelet[3076]: E1108 01:22:50.047714 3076 kuberuntime_manager.go:1341] "Unhandled Error" 
err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ws2x2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-d6d97687b-82v26_calico-apiserver(e679c708-5dc2-455f-8f76-0a5b47442761): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 01:22:50.048905 kubelet[3076]: E1108 01:22:50.048863 3076 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d6d97687b-82v26" podUID="e679c708-5dc2-455f-8f76-0a5b47442761" Nov 8 01:22:51.285724 containerd[1823]: time="2025-11-08T01:22:51.285630205Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 01:22:51.671387 containerd[1823]: 
time="2025-11-08T01:22:51.671245997Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 01:22:51.672162 containerd[1823]: time="2025-11-08T01:22:51.672091816Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 01:22:51.672162 containerd[1823]: time="2025-11-08T01:22:51.672144011Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 01:22:51.672279 kubelet[3076]: E1108 01:22:51.672257 3076 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 01:22:51.672460 kubelet[3076]: E1108 01:22:51.672289 3076 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 01:22:51.672460 kubelet[3076]: E1108 01:22:51.672359 3076 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n5755,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-d6d97687b-lt4rt_calico-apiserver(bb14ec11-f019-40a0-9b63-589cf025cfb4): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 01:22:51.673543 kubelet[3076]: E1108 01:22:51.673529 3076 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d6d97687b-lt4rt" podUID="bb14ec11-f019-40a0-9b63-589cf025cfb4" Nov 8 01:22:55.291734 kubelet[3076]: E1108 01:22:55.291635 3076 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6668fb7f88-v6px7" podUID="d30571cb-e438-453f-8b20-303beb52e470" Nov 8 01:23:01.285081 kubelet[3076]: E1108 01:23:01.284971 3076 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7c65d4465b-6mb2v" podUID="905477ed-861d-42e3-890a-431b3428dc2e" Nov 8 01:23:01.286067 containerd[1823]: time="2025-11-08T01:23:01.285430001Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 8 01:23:01.698573 containerd[1823]: time="2025-11-08T01:23:01.698425614Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 01:23:01.699444 containerd[1823]: time="2025-11-08T01:23:01.699417621Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 8 01:23:01.699544 containerd[1823]: time="2025-11-08T01:23:01.699512202Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 8 01:23:01.699626 kubelet[3076]: E1108 01:23:01.699605 3076 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 01:23:01.699669 kubelet[3076]: E1108 01:23:01.699636 3076 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 01:23:01.699879 kubelet[3076]: E1108 01:23:01.699827 3076 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f8skx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices
:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-rhnl5_calico-system(b57117d3-8237-4f8a-aa85-534eb9568949): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 8 01:23:01.699962 containerd[1823]: time="2025-11-08T01:23:01.699913182Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 8 01:23:02.062848 containerd[1823]: time="2025-11-08T01:23:02.062584988Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 01:23:02.063421 containerd[1823]: time="2025-11-08T01:23:02.063391538Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 8 01:23:02.063545 containerd[1823]: time="2025-11-08T01:23:02.063464258Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 8 01:23:02.063635 kubelet[3076]: E1108 01:23:02.063576 3076 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 01:23:02.063635 kubelet[3076]: E1108 01:23:02.063609 3076 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve 
reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 01:23:02.063773 kubelet[3076]: E1108 01:23:02.063749 3076 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t8jnb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-dmnhl_calico-system(71f5fc0d-399b-4a93-8104-f8dd3ea1c5df): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 8 01:23:02.063868 containerd[1823]: time="2025-11-08T01:23:02.063776709Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 8 01:23:02.064925 kubelet[3076]: E1108 01:23:02.064913 3076 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": 
ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-dmnhl" podUID="71f5fc0d-399b-4a93-8104-f8dd3ea1c5df" Nov 8 01:23:02.283421 kubelet[3076]: E1108 01:23:02.283399 3076 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d6d97687b-82v26" podUID="e679c708-5dc2-455f-8f76-0a5b47442761" Nov 8 01:23:02.442091 containerd[1823]: time="2025-11-08T01:23:02.441967315Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 01:23:02.442777 containerd[1823]: time="2025-11-08T01:23:02.442749981Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 8 01:23:02.442877 containerd[1823]: time="2025-11-08T01:23:02.442814021Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 8 01:23:02.442966 kubelet[3076]: E1108 01:23:02.442909 3076 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 01:23:02.442966 kubelet[3076]: E1108 01:23:02.442940 3076 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 01:23:02.443129 kubelet[3076]: E1108 01:23:02.443007 3076 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f8skx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{
Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-rhnl5_calico-system(b57117d3-8237-4f8a-aa85-534eb9568949): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 8 01:23:02.444186 kubelet[3076]: E1108 01:23:02.444140 3076 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-rhnl5" podUID="b57117d3-8237-4f8a-aa85-534eb9568949" Nov 8 01:23:04.284637 kubelet[3076]: E1108 01:23:04.284465 3076 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d6d97687b-lt4rt" podUID="bb14ec11-f019-40a0-9b63-589cf025cfb4" Nov 8 01:23:06.285328 kubelet[3076]: E1108 01:23:06.285227 3076 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6668fb7f88-v6px7" podUID="d30571cb-e438-453f-8b20-303beb52e470" Nov 8 01:23:13.286831 kubelet[3076]: E1108 01:23:13.286694 3076 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" 
pod="calico-system/goldmane-666569f655-dmnhl" podUID="71f5fc0d-399b-4a93-8104-f8dd3ea1c5df" Nov 8 01:23:15.285635 kubelet[3076]: E1108 01:23:15.285543 3076 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-rhnl5" podUID="b57117d3-8237-4f8a-aa85-534eb9568949" Nov 8 01:23:16.284154 kubelet[3076]: E1108 01:23:16.284083 3076 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d6d97687b-82v26" podUID="e679c708-5dc2-455f-8f76-0a5b47442761" Nov 8 01:23:16.284154 kubelet[3076]: E1108 01:23:16.284083 3076 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: 
\"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7c65d4465b-6mb2v" podUID="905477ed-861d-42e3-890a-431b3428dc2e" Nov 8 01:23:18.282738 kubelet[3076]: E1108 01:23:18.282691 3076 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d6d97687b-lt4rt" podUID="bb14ec11-f019-40a0-9b63-589cf025cfb4" Nov 8 01:23:21.285566 kubelet[3076]: E1108 01:23:21.285525 3076 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6668fb7f88-v6px7" podUID="d30571cb-e438-453f-8b20-303beb52e470" Nov 8 01:23:28.284039 kubelet[3076]: E1108 01:23:28.283985 3076 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-dmnhl" podUID="71f5fc0d-399b-4a93-8104-f8dd3ea1c5df" Nov 8 01:23:28.284039 kubelet[3076]: E1108 01:23:28.283996 3076 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7c65d4465b-6mb2v" podUID="905477ed-861d-42e3-890a-431b3428dc2e" Nov 8 01:23:28.284866 kubelet[3076]: E1108 01:23:28.284398 3076 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not 
found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-rhnl5" podUID="b57117d3-8237-4f8a-aa85-534eb9568949" Nov 8 01:23:30.283932 kubelet[3076]: E1108 01:23:30.283880 3076 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d6d97687b-lt4rt" podUID="bb14ec11-f019-40a0-9b63-589cf025cfb4" Nov 8 01:23:30.283932 kubelet[3076]: E1108 01:23:30.283896 3076 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d6d97687b-82v26" podUID="e679c708-5dc2-455f-8f76-0a5b47442761" Nov 8 01:23:35.284042 kubelet[3076]: E1108 01:23:35.283924 3076 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for 
\"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6668fb7f88-v6px7" podUID="d30571cb-e438-453f-8b20-303beb52e470" Nov 8 01:23:40.283511 kubelet[3076]: E1108 01:23:40.283488 3076 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-dmnhl" podUID="71f5fc0d-399b-4a93-8104-f8dd3ea1c5df" Nov 8 01:23:40.283511 kubelet[3076]: E1108 01:23:40.283494 3076 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7c65d4465b-6mb2v" podUID="905477ed-861d-42e3-890a-431b3428dc2e" Nov 8 01:23:41.285576 kubelet[3076]: E1108 01:23:41.285428 3076 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-rhnl5" podUID="b57117d3-8237-4f8a-aa85-534eb9568949" Nov 8 01:23:45.283968 kubelet[3076]: E1108 01:23:45.283926 3076 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d6d97687b-82v26" podUID="e679c708-5dc2-455f-8f76-0a5b47442761" Nov 8 01:23:45.284437 kubelet[3076]: E1108 01:23:45.284001 3076 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d6d97687b-lt4rt" podUID="bb14ec11-f019-40a0-9b63-589cf025cfb4" Nov 8 01:23:49.286378 kubelet[3076]: E1108 01:23:49.286222 3076 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6668fb7f88-v6px7" podUID="d30571cb-e438-453f-8b20-303beb52e470" Nov 8 01:23:53.283634 kubelet[3076]: E1108 01:23:53.283551 3076 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7c65d4465b-6mb2v" podUID="905477ed-861d-42e3-890a-431b3428dc2e" Nov 8 01:23:55.286098 kubelet[3076]: E1108 01:23:55.285996 3076 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-dmnhl" podUID="71f5fc0d-399b-4a93-8104-f8dd3ea1c5df" Nov 8 01:23:55.287157 kubelet[3076]: E1108 01:23:55.286912 3076 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-rhnl5" podUID="b57117d3-8237-4f8a-aa85-534eb9568949" Nov 8 
01:23:56.283740 kubelet[3076]: E1108 01:23:56.283677 3076 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d6d97687b-82v26" podUID="e679c708-5dc2-455f-8f76-0a5b47442761" Nov 8 01:23:59.283877 kubelet[3076]: E1108 01:23:59.283832 3076 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d6d97687b-lt4rt" podUID="bb14ec11-f019-40a0-9b63-589cf025cfb4" Nov 8 01:24:00.286515 kubelet[3076]: E1108 01:24:00.286386 3076 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed 
to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6668fb7f88-v6px7" podUID="d30571cb-e438-453f-8b20-303beb52e470" Nov 8 01:24:08.285266 kubelet[3076]: E1108 01:24:08.285165 3076 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-dmnhl" podUID="71f5fc0d-399b-4a93-8104-f8dd3ea1c5df" Nov 8 01:24:08.285266 kubelet[3076]: E1108 01:24:08.285230 3076 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7c65d4465b-6mb2v" podUID="905477ed-861d-42e3-890a-431b3428dc2e" Nov 8 01:24:09.285313 kubelet[3076]: E1108 01:24:09.285202 3076 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": 
failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-rhnl5" podUID="b57117d3-8237-4f8a-aa85-534eb9568949" Nov 8 01:24:11.284434 kubelet[3076]: E1108 01:24:11.284337 3076 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d6d97687b-lt4rt" podUID="bb14ec11-f019-40a0-9b63-589cf025cfb4" Nov 8 01:24:11.284434 kubelet[3076]: E1108 01:24:11.284361 3076 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d6d97687b-82v26" podUID="e679c708-5dc2-455f-8f76-0a5b47442761" Nov 8 01:24:15.286856 kubelet[3076]: E1108 
01:24:15.286716 3076 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6668fb7f88-v6px7" podUID="d30571cb-e438-453f-8b20-303beb52e470" Nov 8 01:24:19.283788 kubelet[3076]: E1108 01:24:19.283677 3076 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7c65d4465b-6mb2v" podUID="905477ed-861d-42e3-890a-431b3428dc2e" Nov 8 01:24:22.282921 kubelet[3076]: E1108 01:24:22.282896 3076 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-dmnhl" podUID="71f5fc0d-399b-4a93-8104-f8dd3ea1c5df" Nov 8 01:24:22.282921 kubelet[3076]: E1108 01:24:22.282899 3076 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d6d97687b-82v26" podUID="e679c708-5dc2-455f-8f76-0a5b47442761" Nov 8 01:24:22.283258 kubelet[3076]: E1108 01:24:22.282942 3076 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d6d97687b-lt4rt" podUID="bb14ec11-f019-40a0-9b63-589cf025cfb4" Nov 8 01:24:24.286565 kubelet[3076]: E1108 01:24:24.286429 3076 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-rhnl5" podUID="b57117d3-8237-4f8a-aa85-534eb9568949" Nov 8 01:24:29.286200 kubelet[3076]: E1108 01:24:29.286086 3076 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6668fb7f88-v6px7" podUID="d30571cb-e438-453f-8b20-303beb52e470" Nov 8 01:24:30.283434 kubelet[3076]: E1108 01:24:30.283410 3076 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": 
ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7c65d4465b-6mb2v" podUID="905477ed-861d-42e3-890a-431b3428dc2e" Nov 8 01:24:33.284160 kubelet[3076]: E1108 01:24:33.284077 3076 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d6d97687b-82v26" podUID="e679c708-5dc2-455f-8f76-0a5b47442761" Nov 8 01:24:37.283717 kubelet[3076]: E1108 01:24:37.283681 3076 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d6d97687b-lt4rt" podUID="bb14ec11-f019-40a0-9b63-589cf025cfb4" Nov 8 01:24:37.283717 kubelet[3076]: E1108 01:24:37.283684 3076 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and 
unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-dmnhl" podUID="71f5fc0d-399b-4a93-8104-f8dd3ea1c5df" Nov 8 01:24:37.284176 kubelet[3076]: E1108 01:24:37.283946 3076 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-rhnl5" podUID="b57117d3-8237-4f8a-aa85-534eb9568949" Nov 8 01:24:43.285782 kubelet[3076]: E1108 01:24:43.285579 3076 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6668fb7f88-v6px7" podUID="d30571cb-e438-453f-8b20-303beb52e470" Nov 8 01:24:45.285743 kubelet[3076]: E1108 01:24:45.285651 3076 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7c65d4465b-6mb2v" podUID="905477ed-861d-42e3-890a-431b3428dc2e" Nov 8 01:24:47.284505 kubelet[3076]: E1108 01:24:47.284397 3076 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d6d97687b-82v26" podUID="e679c708-5dc2-455f-8f76-0a5b47442761" Nov 8 01:24:48.285191 kubelet[3076]: E1108 01:24:48.285098 3076 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-dmnhl" podUID="71f5fc0d-399b-4a93-8104-f8dd3ea1c5df" Nov 8 01:24:49.285721 kubelet[3076]: E1108 01:24:49.285657 3076 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d6d97687b-lt4rt" podUID="bb14ec11-f019-40a0-9b63-589cf025cfb4" Nov 8 01:24:50.282983 kubelet[3076]: E1108 01:24:50.282955 3076 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-rhnl5" podUID="b57117d3-8237-4f8a-aa85-534eb9568949" Nov 8 01:24:56.283791 kubelet[3076]: E1108 01:24:56.283706 3076 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6668fb7f88-v6px7" podUID="d30571cb-e438-453f-8b20-303beb52e470" Nov 8 01:24:58.284463 kubelet[3076]: E1108 01:24:58.284325 3076 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7c65d4465b-6mb2v" podUID="905477ed-861d-42e3-890a-431b3428dc2e" Nov 8 01:25:00.283335 kubelet[3076]: E1108 01:25:00.283262 3076 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d6d97687b-82v26" podUID="e679c708-5dc2-455f-8f76-0a5b47442761" Nov 8 01:25:03.283996 kubelet[3076]: E1108 01:25:03.283896 3076 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-dmnhl" podUID="71f5fc0d-399b-4a93-8104-f8dd3ea1c5df" Nov 8 01:25:04.283461 kubelet[3076]: E1108 01:25:04.283435 3076 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d6d97687b-lt4rt" podUID="bb14ec11-f019-40a0-9b63-589cf025cfb4" Nov 8 01:25:04.283678 kubelet[3076]: E1108 01:25:04.283629 3076 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-rhnl5" podUID="b57117d3-8237-4f8a-aa85-534eb9568949" Nov 8 01:25:09.291898 kubelet[3076]: E1108 01:25:09.291722 3076 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6668fb7f88-v6px7" podUID="d30571cb-e438-453f-8b20-303beb52e470" Nov 8 01:25:12.283303 kubelet[3076]: E1108 01:25:12.283272 3076 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7c65d4465b-6mb2v" podUID="905477ed-861d-42e3-890a-431b3428dc2e" Nov 8 01:25:13.284729 kubelet[3076]: E1108 01:25:13.284697 3076 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d6d97687b-82v26" podUID="e679c708-5dc2-455f-8f76-0a5b47442761" Nov 8 01:25:17.284429 kubelet[3076]: E1108 01:25:17.284336 3076 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-dmnhl" podUID="71f5fc0d-399b-4a93-8104-f8dd3ea1c5df" Nov 8 01:25:17.285658 kubelet[3076]: E1108 01:25:17.285518 3076 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" 
with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-rhnl5" podUID="b57117d3-8237-4f8a-aa85-534eb9568949" Nov 8 01:25:19.284236 kubelet[3076]: E1108 01:25:19.284185 3076 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d6d97687b-lt4rt" podUID="bb14ec11-f019-40a0-9b63-589cf025cfb4" Nov 8 01:25:20.285883 kubelet[3076]: E1108 01:25:20.285781 3076 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6668fb7f88-v6px7" podUID="d30571cb-e438-453f-8b20-303beb52e470" Nov 8 01:25:23.284861 kubelet[3076]: E1108 01:25:23.284769 3076 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7c65d4465b-6mb2v" podUID="905477ed-861d-42e3-890a-431b3428dc2e" Nov 8 01:25:28.284764 kubelet[3076]: E1108 01:25:28.284660 3076 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d6d97687b-82v26" podUID="e679c708-5dc2-455f-8f76-0a5b47442761" Nov 8 01:25:28.286034 kubelet[3076]: E1108 
01:25:28.285703 3076 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-rhnl5" podUID="b57117d3-8237-4f8a-aa85-534eb9568949" Nov 8 01:25:29.284446 kubelet[3076]: E1108 01:25:29.284328 3076 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-dmnhl" podUID="71f5fc0d-399b-4a93-8104-f8dd3ea1c5df" Nov 8 01:25:33.278222 systemd[1]: Started sshd@9-139.178.94.41:22-139.178.68.195:42016.service - OpenSSH per-connection server daemon (139.178.68.195:42016). 
Nov 8 01:25:33.285658 containerd[1823]: time="2025-11-08T01:25:33.285540656Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 8 01:25:33.309041 sshd[7287]: Accepted publickey for core from 139.178.68.195 port 42016 ssh2: RSA SHA256:CDEH3Gh6VSwb5luG5uhujouIqwp740QGMGXihV+mnVQ Nov 8 01:25:33.309859 sshd[7287]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 01:25:33.312489 systemd-logind[1816]: New session 12 of user core. Nov 8 01:25:33.318769 systemd[1]: Started session-12.scope - Session 12 of User core. Nov 8 01:25:33.451122 sshd[7287]: pam_unix(sshd:session): session closed for user core Nov 8 01:25:33.452604 systemd[1]: sshd@9-139.178.94.41:22-139.178.68.195:42016.service: Deactivated successfully. Nov 8 01:25:33.453505 systemd[1]: session-12.scope: Deactivated successfully. Nov 8 01:25:33.454113 systemd-logind[1816]: Session 12 logged out. Waiting for processes to exit. Nov 8 01:25:33.454538 systemd-logind[1816]: Removed session 12. 
Nov 8 01:25:33.671516 containerd[1823]: time="2025-11-08T01:25:33.671445446Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 01:25:33.672198 containerd[1823]: time="2025-11-08T01:25:33.672176164Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 8 01:25:33.672279 containerd[1823]: time="2025-11-08T01:25:33.672230632Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 8 01:25:33.672383 kubelet[3076]: E1108 01:25:33.672353 3076 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 01:25:33.672595 kubelet[3076]: E1108 01:25:33.672392 3076 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 01:25:33.672595 kubelet[3076]: E1108 01:25:33.672461 3076 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:f499b8e984d248faa7e4f138b84f7ce2,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kt56d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6668fb7f88-v6px7_calico-system(d30571cb-e438-453f-8b20-303beb52e470): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 8 01:25:33.673983 containerd[1823]: time="2025-11-08T01:25:33.673940037Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 8 
01:25:34.039786 containerd[1823]: time="2025-11-08T01:25:34.039677443Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 01:25:34.040252 containerd[1823]: time="2025-11-08T01:25:34.040158398Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 8 01:25:34.040252 containerd[1823]: time="2025-11-08T01:25:34.040224152Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 8 01:25:34.040351 kubelet[3076]: E1108 01:25:34.040331 3076 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 01:25:34.040389 kubelet[3076]: E1108 01:25:34.040360 3076 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 01:25:34.040477 kubelet[3076]: E1108 01:25:34.040436 3076 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kt56d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6668fb7f88-v6px7_calico-system(d30571cb-e438-453f-8b20-303beb52e470): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 8 01:25:34.041722 kubelet[3076]: E1108 01:25:34.041661 3076 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6668fb7f88-v6px7" podUID="d30571cb-e438-453f-8b20-303beb52e470" Nov 8 01:25:34.283805 containerd[1823]: time="2025-11-08T01:25:34.283782386Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 01:25:34.637302 containerd[1823]: time="2025-11-08T01:25:34.637232178Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 01:25:34.637856 containerd[1823]: time="2025-11-08T01:25:34.637782170Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 01:25:34.637908 containerd[1823]: time="2025-11-08T01:25:34.637853205Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 
01:25:34.637982 kubelet[3076]: E1108 01:25:34.637951 3076 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 01:25:34.638032 kubelet[3076]: E1108 01:25:34.637993 3076 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 01:25:34.638116 kubelet[3076]: E1108 01:25:34.638092 3076 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n5755,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-d6d97687b-lt4rt_calico-apiserver(bb14ec11-f019-40a0-9b63-589cf025cfb4): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 01:25:34.639286 kubelet[3076]: E1108 01:25:34.639238 3076 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d6d97687b-lt4rt" podUID="bb14ec11-f019-40a0-9b63-589cf025cfb4" Nov 8 01:25:38.285187 containerd[1823]: time="2025-11-08T01:25:38.285102130Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 8 01:25:38.465820 systemd[1]: Started sshd@10-139.178.94.41:22-139.178.68.195:42030.service - OpenSSH per-connection server daemon (139.178.68.195:42030). Nov 8 01:25:38.513756 sshd[7320]: Accepted publickey for core from 139.178.68.195 port 42030 ssh2: RSA SHA256:CDEH3Gh6VSwb5luG5uhujouIqwp740QGMGXihV+mnVQ Nov 8 01:25:38.515718 sshd[7320]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 01:25:38.522171 systemd-logind[1816]: New session 13 of user core. Nov 8 01:25:38.539827 systemd[1]: Started session-13.scope - Session 13 of User core. 
Nov 8 01:25:38.637827 containerd[1823]: time="2025-11-08T01:25:38.637717557Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 01:25:38.638607 containerd[1823]: time="2025-11-08T01:25:38.638550484Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 8 01:25:38.638738 containerd[1823]: time="2025-11-08T01:25:38.638625037Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 8 01:25:38.638828 kubelet[3076]: E1108 01:25:38.638786 3076 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 01:25:38.639222 kubelet[3076]: E1108 01:25:38.638840 3076 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 01:25:38.639222 kubelet[3076]: E1108 01:25:38.638989 3076 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z5jc8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7c65d4465b-6mb2v_calico-system(905477ed-861d-42e3-890a-431b3428dc2e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 8 01:25:38.640214 kubelet[3076]: E1108 01:25:38.640160 3076 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7c65d4465b-6mb2v" podUID="905477ed-861d-42e3-890a-431b3428dc2e" Nov 8 01:25:38.675102 sshd[7320]: pam_unix(sshd:session): session closed for user core Nov 8 01:25:38.676888 systemd[1]: sshd@10-139.178.94.41:22-139.178.68.195:42030.service: Deactivated 
successfully. Nov 8 01:25:38.677772 systemd[1]: session-13.scope: Deactivated successfully. Nov 8 01:25:38.678118 systemd-logind[1816]: Session 13 logged out. Waiting for processes to exit. Nov 8 01:25:38.678577 systemd-logind[1816]: Removed session 13. Nov 8 01:25:39.284728 containerd[1823]: time="2025-11-08T01:25:39.284611698Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 01:25:39.670742 containerd[1823]: time="2025-11-08T01:25:39.670564574Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 01:25:39.671588 containerd[1823]: time="2025-11-08T01:25:39.671519612Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 01:25:39.671588 containerd[1823]: time="2025-11-08T01:25:39.671572439Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 01:25:39.671710 kubelet[3076]: E1108 01:25:39.671656 3076 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 01:25:39.671710 kubelet[3076]: E1108 01:25:39.671695 3076 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 01:25:39.671909 
kubelet[3076]: E1108 01:25:39.671797 3076 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ws2x2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-d6d97687b-82v26_calico-apiserver(e679c708-5dc2-455f-8f76-0a5b47442761): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 01:25:39.673024 kubelet[3076]: E1108 01:25:39.672981 3076 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d6d97687b-82v26" podUID="e679c708-5dc2-455f-8f76-0a5b47442761" Nov 8 01:25:41.285757 kubelet[3076]: E1108 01:25:41.285648 3076 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling 
image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-rhnl5" podUID="b57117d3-8237-4f8a-aa85-534eb9568949" Nov 8 01:25:43.283387 containerd[1823]: time="2025-11-08T01:25:43.283344125Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 8 01:25:43.665258 containerd[1823]: time="2025-11-08T01:25:43.665167197Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 01:25:43.666042 containerd[1823]: time="2025-11-08T01:25:43.665971499Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 8 01:25:43.666099 containerd[1823]: time="2025-11-08T01:25:43.666041641Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 8 01:25:43.666212 kubelet[3076]: E1108 01:25:43.666154 3076 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve 
reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 01:25:43.666212 kubelet[3076]: E1108 01:25:43.666189 3076 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 01:25:43.666414 kubelet[3076]: E1108 01:25:43.666269 3076 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t8jnb,ReadOnly:true,Mount
Path:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-dmnhl_calico-system(71f5fc0d-399b-4a93-8104-f8dd3ea1c5df): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 8 01:25:43.667453 kubelet[3076]: E1108 01:25:43.667436 3076 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-dmnhl" podUID="71f5fc0d-399b-4a93-8104-f8dd3ea1c5df" Nov 8 01:25:43.690904 systemd[1]: Started sshd@11-139.178.94.41:22-139.178.68.195:41768.service - OpenSSH per-connection server daemon (139.178.68.195:41768). Nov 8 01:25:43.717844 sshd[7347]: Accepted publickey for core from 139.178.68.195 port 41768 ssh2: RSA SHA256:CDEH3Gh6VSwb5luG5uhujouIqwp740QGMGXihV+mnVQ Nov 8 01:25:43.718652 sshd[7347]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 01:25:43.721251 systemd-logind[1816]: New session 14 of user core. Nov 8 01:25:43.732736 systemd[1]: Started session-14.scope - Session 14 of User core. Nov 8 01:25:43.813583 sshd[7347]: pam_unix(sshd:session): session closed for user core Nov 8 01:25:43.824096 systemd[1]: sshd@11-139.178.94.41:22-139.178.68.195:41768.service: Deactivated successfully. Nov 8 01:25:43.824891 systemd[1]: session-14.scope: Deactivated successfully. Nov 8 01:25:43.825523 systemd-logind[1816]: Session 14 logged out. Waiting for processes to exit. Nov 8 01:25:43.826148 systemd[1]: Started sshd@12-139.178.94.41:22-139.178.68.195:41778.service - OpenSSH per-connection server daemon (139.178.68.195:41778). Nov 8 01:25:43.826581 systemd-logind[1816]: Removed session 14. Nov 8 01:25:43.852758 sshd[7374]: Accepted publickey for core from 139.178.68.195 port 41778 ssh2: RSA SHA256:CDEH3Gh6VSwb5luG5uhujouIqwp740QGMGXihV+mnVQ Nov 8 01:25:43.853952 sshd[7374]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 01:25:43.856913 systemd-logind[1816]: New session 15 of user core. Nov 8 01:25:43.868566 systemd[1]: Started session-15.scope - Session 15 of User core. 
Nov 8 01:25:43.962400 sshd[7374]: pam_unix(sshd:session): session closed for user core Nov 8 01:25:43.972564 systemd[1]: sshd@12-139.178.94.41:22-139.178.68.195:41778.service: Deactivated successfully. Nov 8 01:25:43.973675 systemd[1]: session-15.scope: Deactivated successfully. Nov 8 01:25:43.974300 systemd-logind[1816]: Session 15 logged out. Waiting for processes to exit. Nov 8 01:25:43.974979 systemd[1]: Started sshd@13-139.178.94.41:22-139.178.68.195:41780.service - OpenSSH per-connection server daemon (139.178.68.195:41780). Nov 8 01:25:43.975332 systemd-logind[1816]: Removed session 15. Nov 8 01:25:44.001530 sshd[7400]: Accepted publickey for core from 139.178.68.195 port 41780 ssh2: RSA SHA256:CDEH3Gh6VSwb5luG5uhujouIqwp740QGMGXihV+mnVQ Nov 8 01:25:44.002273 sshd[7400]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 01:25:44.005186 systemd-logind[1816]: New session 16 of user core. Nov 8 01:25:44.019618 systemd[1]: Started session-16.scope - Session 16 of User core. Nov 8 01:25:44.145995 sshd[7400]: pam_unix(sshd:session): session closed for user core Nov 8 01:25:44.147630 systemd[1]: sshd@13-139.178.94.41:22-139.178.68.195:41780.service: Deactivated successfully. Nov 8 01:25:44.148545 systemd[1]: session-16.scope: Deactivated successfully. Nov 8 01:25:44.149212 systemd-logind[1816]: Session 16 logged out. Waiting for processes to exit. Nov 8 01:25:44.149783 systemd-logind[1816]: Removed session 16. 
Nov 8 01:25:45.283682 kubelet[3076]: E1108 01:25:45.283645 3076 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6668fb7f88-v6px7" podUID="d30571cb-e438-453f-8b20-303beb52e470" Nov 8 01:25:49.174421 systemd[1]: Started sshd@14-139.178.94.41:22-139.178.68.195:41784.service - OpenSSH per-connection server daemon (139.178.68.195:41784). Nov 8 01:25:49.204906 sshd[7432]: Accepted publickey for core from 139.178.68.195 port 41784 ssh2: RSA SHA256:CDEH3Gh6VSwb5luG5uhujouIqwp740QGMGXihV+mnVQ Nov 8 01:25:49.205691 sshd[7432]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 01:25:49.208428 systemd-logind[1816]: New session 17 of user core. Nov 8 01:25:49.209057 systemd[1]: Started session-17.scope - Session 17 of User core. Nov 8 01:25:49.339912 sshd[7432]: pam_unix(sshd:session): session closed for user core Nov 8 01:25:49.341741 systemd[1]: sshd@14-139.178.94.41:22-139.178.68.195:41784.service: Deactivated successfully. Nov 8 01:25:49.342853 systemd[1]: session-17.scope: Deactivated successfully. Nov 8 01:25:49.343754 systemd-logind[1816]: Session 17 logged out. 
Waiting for processes to exit. Nov 8 01:25:49.344435 systemd-logind[1816]: Removed session 17. Nov 8 01:25:50.285797 kubelet[3076]: E1108 01:25:50.285567 3076 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7c65d4465b-6mb2v" podUID="905477ed-861d-42e3-890a-431b3428dc2e" Nov 8 01:25:50.286331 kubelet[3076]: E1108 01:25:50.286305 3076 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d6d97687b-lt4rt" podUID="bb14ec11-f019-40a0-9b63-589cf025cfb4" Nov 8 01:25:52.284868 kubelet[3076]: E1108 01:25:52.284729 3076 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d6d97687b-82v26" 
podUID="e679c708-5dc2-455f-8f76-0a5b47442761" Nov 8 01:25:54.354183 systemd[1]: Started sshd@15-139.178.94.41:22-139.178.68.195:59318.service - OpenSSH per-connection server daemon (139.178.68.195:59318). Nov 8 01:25:54.384980 sshd[7490]: Accepted publickey for core from 139.178.68.195 port 59318 ssh2: RSA SHA256:CDEH3Gh6VSwb5luG5uhujouIqwp740QGMGXihV+mnVQ Nov 8 01:25:54.386286 sshd[7490]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 01:25:54.390521 systemd-logind[1816]: New session 18 of user core. Nov 8 01:25:54.405843 systemd[1]: Started session-18.scope - Session 18 of User core. Nov 8 01:25:54.523581 sshd[7490]: pam_unix(sshd:session): session closed for user core Nov 8 01:25:54.525161 systemd[1]: sshd@15-139.178.94.41:22-139.178.68.195:59318.service: Deactivated successfully. Nov 8 01:25:54.526126 systemd[1]: session-18.scope: Deactivated successfully. Nov 8 01:25:54.526854 systemd-logind[1816]: Session 18 logged out. Waiting for processes to exit. Nov 8 01:25:54.527353 systemd-logind[1816]: Removed session 18. 
Nov 8 01:25:55.286108 containerd[1823]: time="2025-11-08T01:25:55.286019437Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 8 01:25:55.635211 containerd[1823]: time="2025-11-08T01:25:55.635183189Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 01:25:55.635756 containerd[1823]: time="2025-11-08T01:25:55.635738344Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 8 01:25:55.635800 containerd[1823]: time="2025-11-08T01:25:55.635784523Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 8 01:25:55.635916 kubelet[3076]: E1108 01:25:55.635893 3076 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 01:25:55.636125 kubelet[3076]: E1108 01:25:55.635925 3076 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 01:25:55.636125 kubelet[3076]: E1108 01:25:55.636000 3076 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f8skx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-rhnl5_calico-system(b57117d3-8237-4f8a-aa85-534eb9568949): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 8 01:25:55.637774 containerd[1823]: time="2025-11-08T01:25:55.637763036Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 8 01:25:55.991930 containerd[1823]: time="2025-11-08T01:25:55.991675222Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 01:25:55.992654 containerd[1823]: time="2025-11-08T01:25:55.992537020Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 8 01:25:55.992654 containerd[1823]: time="2025-11-08T01:25:55.992601220Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 8 01:25:55.992799 kubelet[3076]: E1108 01:25:55.992769 3076 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 01:25:55.992873 kubelet[3076]: E1108 01:25:55.992832 3076 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 01:25:55.992992 kubelet[3076]: E1108 
01:25:55.992950 3076 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f8skx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
csi-node-driver-rhnl5_calico-system(b57117d3-8237-4f8a-aa85-534eb9568949): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 8 01:25:55.994114 kubelet[3076]: E1108 01:25:55.994065 3076 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-rhnl5" podUID="b57117d3-8237-4f8a-aa85-534eb9568949" Nov 8 01:25:56.284310 kubelet[3076]: E1108 01:25:56.284184 3076 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed 
to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6668fb7f88-v6px7" podUID="d30571cb-e438-453f-8b20-303beb52e470" Nov 8 01:25:58.283190 kubelet[3076]: E1108 01:25:58.283135 3076 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-dmnhl" podUID="71f5fc0d-399b-4a93-8104-f8dd3ea1c5df" Nov 8 01:25:59.537889 systemd[1]: Started sshd@16-139.178.94.41:22-139.178.68.195:59326.service - OpenSSH per-connection server daemon (139.178.68.195:59326). Nov 8 01:25:59.571285 sshd[7517]: Accepted publickey for core from 139.178.68.195 port 59326 ssh2: RSA SHA256:CDEH3Gh6VSwb5luG5uhujouIqwp740QGMGXihV+mnVQ Nov 8 01:25:59.572099 sshd[7517]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 01:25:59.574956 systemd-logind[1816]: New session 19 of user core. Nov 8 01:25:59.596786 systemd[1]: Started session-19.scope - Session 19 of User core. Nov 8 01:25:59.697761 sshd[7517]: pam_unix(sshd:session): session closed for user core Nov 8 01:25:59.700268 systemd[1]: sshd@16-139.178.94.41:22-139.178.68.195:59326.service: Deactivated successfully. Nov 8 01:25:59.701783 systemd[1]: session-19.scope: Deactivated successfully. Nov 8 01:25:59.702868 systemd-logind[1816]: Session 19 logged out. Waiting for processes to exit. Nov 8 01:25:59.703729 systemd-logind[1816]: Removed session 19. 
Nov 8 01:26:01.283197 kubelet[3076]: E1108 01:26:01.283160 3076 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7c65d4465b-6mb2v" podUID="905477ed-861d-42e3-890a-431b3428dc2e"
Nov 8 01:26:04.707953 systemd[1]: Started sshd@17-139.178.94.41:22-139.178.68.195:56764.service - OpenSSH per-connection server daemon (139.178.68.195:56764).
Nov 8 01:26:04.735520 sshd[7543]: Accepted publickey for core from 139.178.68.195 port 56764 ssh2: RSA SHA256:CDEH3Gh6VSwb5luG5uhujouIqwp740QGMGXihV+mnVQ
Nov 8 01:26:04.736378 sshd[7543]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 01:26:04.739163 systemd-logind[1816]: New session 20 of user core.
Nov 8 01:26:04.755654 systemd[1]: Started session-20.scope - Session 20 of User core.
Nov 8 01:26:04.863217 sshd[7543]: pam_unix(sshd:session): session closed for user core
Nov 8 01:26:04.878217 systemd[1]: sshd@17-139.178.94.41:22-139.178.68.195:56764.service: Deactivated successfully.
Nov 8 01:26:04.879062 systemd[1]: session-20.scope: Deactivated successfully.
Nov 8 01:26:04.879778 systemd-logind[1816]: Session 20 logged out. Waiting for processes to exit.
Nov 8 01:26:04.880483 systemd[1]: Started sshd@18-139.178.94.41:22-139.178.68.195:56776.service - OpenSSH per-connection server daemon (139.178.68.195:56776).
Nov 8 01:26:04.880893 systemd-logind[1816]: Removed session 20.
Nov 8 01:26:04.906804 sshd[7569]: Accepted publickey for core from 139.178.68.195 port 56776 ssh2: RSA SHA256:CDEH3Gh6VSwb5luG5uhujouIqwp740QGMGXihV+mnVQ
Nov 8 01:26:04.907512 sshd[7569]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 01:26:04.910119 systemd-logind[1816]: New session 21 of user core.
Nov 8 01:26:04.923718 systemd[1]: Started session-21.scope - Session 21 of User core.
Nov 8 01:26:05.079651 sshd[7569]: pam_unix(sshd:session): session closed for user core
Nov 8 01:26:05.095224 systemd[1]: sshd@18-139.178.94.41:22-139.178.68.195:56776.service: Deactivated successfully.
Nov 8 01:26:05.096117 systemd[1]: session-21.scope: Deactivated successfully.
Nov 8 01:26:05.096928 systemd-logind[1816]: Session 21 logged out. Waiting for processes to exit.
Nov 8 01:26:05.097617 systemd[1]: Started sshd@19-139.178.94.41:22-139.178.68.195:56790.service - OpenSSH per-connection server daemon (139.178.68.195:56790).
Nov 8 01:26:05.098175 systemd-logind[1816]: Removed session 21.
Nov 8 01:26:05.125829 sshd[7594]: Accepted publickey for core from 139.178.68.195 port 56790 ssh2: RSA SHA256:CDEH3Gh6VSwb5luG5uhujouIqwp740QGMGXihV+mnVQ
Nov 8 01:26:05.126857 sshd[7594]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 01:26:05.130292 systemd-logind[1816]: New session 22 of user core.
Nov 8 01:26:05.146741 systemd[1]: Started session-22.scope - Session 22 of User core.
Nov 8 01:26:05.283591 kubelet[3076]: E1108 01:26:05.283560 3076 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d6d97687b-82v26" podUID="e679c708-5dc2-455f-8f76-0a5b47442761"
Nov 8 01:26:05.283890 kubelet[3076]: E1108 01:26:05.283601 3076 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d6d97687b-lt4rt" podUID="bb14ec11-f019-40a0-9b63-589cf025cfb4"
Nov 8 01:26:05.762666 sshd[7594]: pam_unix(sshd:session): session closed for user core
Nov 8 01:26:05.776401 systemd[1]: sshd@19-139.178.94.41:22-139.178.68.195:56790.service: Deactivated successfully.
Nov 8 01:26:05.777279 systemd[1]: session-22.scope: Deactivated successfully.
Nov 8 01:26:05.778001 systemd-logind[1816]: Session 22 logged out. Waiting for processes to exit.
Nov 8 01:26:05.778640 systemd[1]: Started sshd@20-139.178.94.41:22-139.178.68.195:56798.service - OpenSSH per-connection server daemon (139.178.68.195:56798).
Nov 8 01:26:05.779165 systemd-logind[1816]: Removed session 22.
Nov 8 01:26:05.805321 sshd[7627]: Accepted publickey for core from 139.178.68.195 port 56798 ssh2: RSA SHA256:CDEH3Gh6VSwb5luG5uhujouIqwp740QGMGXihV+mnVQ
Nov 8 01:26:05.806160 sshd[7627]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 01:26:05.809055 systemd-logind[1816]: New session 23 of user core.
Nov 8 01:26:05.828668 systemd[1]: Started session-23.scope - Session 23 of User core.
Nov 8 01:26:05.992753 sshd[7627]: pam_unix(sshd:session): session closed for user core
Nov 8 01:26:06.015500 systemd[1]: sshd@20-139.178.94.41:22-139.178.68.195:56798.service: Deactivated successfully.
Nov 8 01:26:06.016448 systemd[1]: session-23.scope: Deactivated successfully.
Nov 8 01:26:06.017206 systemd-logind[1816]: Session 23 logged out. Waiting for processes to exit.
Nov 8 01:26:06.017968 systemd[1]: Started sshd@21-139.178.94.41:22-139.178.68.195:56804.service - OpenSSH per-connection server daemon (139.178.68.195:56804).
Nov 8 01:26:06.018540 systemd-logind[1816]: Removed session 23.
Nov 8 01:26:06.046420 sshd[7654]: Accepted publickey for core from 139.178.68.195 port 56804 ssh2: RSA SHA256:CDEH3Gh6VSwb5luG5uhujouIqwp740QGMGXihV+mnVQ
Nov 8 01:26:06.047274 sshd[7654]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 01:26:06.050316 systemd-logind[1816]: New session 24 of user core.
Nov 8 01:26:06.059674 systemd[1]: Started session-24.scope - Session 24 of User core.
Nov 8 01:26:06.144654 sshd[7654]: pam_unix(sshd:session): session closed for user core
Nov 8 01:26:06.146251 systemd[1]: sshd@21-139.178.94.41:22-139.178.68.195:56804.service: Deactivated successfully.
Nov 8 01:26:06.147200 systemd[1]: session-24.scope: Deactivated successfully.
Nov 8 01:26:06.147963 systemd-logind[1816]: Session 24 logged out. Waiting for processes to exit.
Nov 8 01:26:06.148459 systemd-logind[1816]: Removed session 24.
Nov 8 01:26:07.283603 kubelet[3076]: E1108 01:26:07.283539 3076 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-rhnl5" podUID="b57117d3-8237-4f8a-aa85-534eb9568949"
Nov 8 01:26:07.284090 kubelet[3076]: E1108 01:26:07.283657 3076 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6668fb7f88-v6px7" podUID="d30571cb-e438-453f-8b20-303beb52e470"
Nov 8 01:26:11.170992 systemd[1]: Started sshd@22-139.178.94.41:22-139.178.68.195:56810.service - OpenSSH per-connection server daemon (139.178.68.195:56810).
Nov 8 01:26:11.200439 sshd[7689]: Accepted publickey for core from 139.178.68.195 port 56810 ssh2: RSA SHA256:CDEH3Gh6VSwb5luG5uhujouIqwp740QGMGXihV+mnVQ
Nov 8 01:26:11.201144 sshd[7689]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 01:26:11.203787 systemd-logind[1816]: New session 25 of user core.
Nov 8 01:26:11.222726 systemd[1]: Started session-25.scope - Session 25 of User core.
Nov 8 01:26:11.355023 sshd[7689]: pam_unix(sshd:session): session closed for user core
Nov 8 01:26:11.357605 systemd[1]: sshd@22-139.178.94.41:22-139.178.68.195:56810.service: Deactivated successfully.
Nov 8 01:26:11.359173 systemd[1]: session-25.scope: Deactivated successfully.
Nov 8 01:26:11.360403 systemd-logind[1816]: Session 25 logged out. Waiting for processes to exit.
Nov 8 01:26:11.361401 systemd-logind[1816]: Removed session 25.
Nov 8 01:26:12.283926 kubelet[3076]: E1108 01:26:12.283852 3076 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-dmnhl" podUID="71f5fc0d-399b-4a93-8104-f8dd3ea1c5df"
Nov 8 01:26:16.284504 kubelet[3076]: E1108 01:26:16.284368 3076 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7c65d4465b-6mb2v" podUID="905477ed-861d-42e3-890a-431b3428dc2e"
Nov 8 01:26:16.381410 systemd[1]: Started sshd@23-139.178.94.41:22-139.178.68.195:35850.service - OpenSSH per-connection server daemon (139.178.68.195:35850).
Nov 8 01:26:16.417861 sshd[7718]: Accepted publickey for core from 139.178.68.195 port 35850 ssh2: RSA SHA256:CDEH3Gh6VSwb5luG5uhujouIqwp740QGMGXihV+mnVQ
Nov 8 01:26:16.419054 sshd[7718]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 01:26:16.422579 systemd-logind[1816]: New session 26 of user core.
Nov 8 01:26:16.431695 systemd[1]: Started session-26.scope - Session 26 of User core.
Nov 8 01:26:16.533396 sshd[7718]: pam_unix(sshd:session): session closed for user core
Nov 8 01:26:16.535099 systemd[1]: sshd@23-139.178.94.41:22-139.178.68.195:35850.service: Deactivated successfully.
Nov 8 01:26:16.536091 systemd[1]: session-26.scope: Deactivated successfully.
Nov 8 01:26:16.536871 systemd-logind[1816]: Session 26 logged out. Waiting for processes to exit.
Nov 8 01:26:16.537446 systemd-logind[1816]: Removed session 26.
Nov 8 01:26:17.291026 kubelet[3076]: E1108 01:26:17.290943 3076 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d6d97687b-lt4rt" podUID="bb14ec11-f019-40a0-9b63-589cf025cfb4"
Nov 8 01:26:19.283857 kubelet[3076]: E1108 01:26:19.283806 3076 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d6d97687b-82v26" podUID="e679c708-5dc2-455f-8f76-0a5b47442761"
Nov 8 01:26:20.283039 kubelet[3076]: E1108 01:26:20.283013 3076 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-rhnl5" podUID="b57117d3-8237-4f8a-aa85-534eb9568949"
Nov 8 01:26:21.285857 kubelet[3076]: E1108 01:26:21.285750 3076 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6668fb7f88-v6px7" podUID="d30571cb-e438-453f-8b20-303beb52e470"
Nov 8 01:26:21.551435 systemd[1]: Started sshd@24-139.178.94.41:22-139.178.68.195:35854.service - OpenSSH per-connection server daemon (139.178.68.195:35854).
Nov 8 01:26:21.596167 sshd[7795]: Accepted publickey for core from 139.178.68.195 port 35854 ssh2: RSA SHA256:CDEH3Gh6VSwb5luG5uhujouIqwp740QGMGXihV+mnVQ
Nov 8 01:26:21.598857 sshd[7795]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 01:26:21.609679 systemd-logind[1816]: New session 27 of user core.
Nov 8 01:26:21.633952 systemd[1]: Started session-27.scope - Session 27 of User core.
Nov 8 01:26:21.744315 sshd[7795]: pam_unix(sshd:session): session closed for user core
Nov 8 01:26:21.746439 systemd[1]: sshd@24-139.178.94.41:22-139.178.68.195:35854.service: Deactivated successfully.
Nov 8 01:26:21.747494 systemd[1]: session-27.scope: Deactivated successfully.
Nov 8 01:26:21.747968 systemd-logind[1816]: Session 27 logged out. Waiting for processes to exit.
Nov 8 01:26:21.748634 systemd-logind[1816]: Removed session 27.