Nov 1 01:15:07.039551 kernel: Linux version 6.6.113-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Oct 31 22:41:55 -00 2025 Nov 1 01:15:07.039565 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=ade41980c48607de3d2d18dc444731ec5388853e3a75ed2d5a13ce616b36f478 Nov 1 01:15:07.039572 kernel: BIOS-provided physical RAM map: Nov 1 01:15:07.039577 kernel: BIOS-e820: [mem 0x0000000000000000-0x00000000000997ff] usable Nov 1 01:15:07.039580 kernel: BIOS-e820: [mem 0x0000000000099800-0x000000000009ffff] reserved Nov 1 01:15:07.039584 kernel: BIOS-e820: [mem 0x00000000000e0000-0x00000000000fffff] reserved Nov 1 01:15:07.039589 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003fffffff] usable Nov 1 01:15:07.039593 kernel: BIOS-e820: [mem 0x0000000040000000-0x00000000403fffff] reserved Nov 1 01:15:07.039598 kernel: BIOS-e820: [mem 0x0000000040400000-0x0000000081b25fff] usable Nov 1 01:15:07.039602 kernel: BIOS-e820: [mem 0x0000000081b26000-0x0000000081b26fff] ACPI NVS Nov 1 01:15:07.039606 kernel: BIOS-e820: [mem 0x0000000081b27000-0x0000000081b27fff] reserved Nov 1 01:15:07.039611 kernel: BIOS-e820: [mem 0x0000000081b28000-0x000000008afccfff] usable Nov 1 01:15:07.039615 kernel: BIOS-e820: [mem 0x000000008afcd000-0x000000008c0b1fff] reserved Nov 1 01:15:07.039620 kernel: BIOS-e820: [mem 0x000000008c0b2000-0x000000008c23afff] usable Nov 1 01:15:07.039625 kernel: BIOS-e820: [mem 0x000000008c23b000-0x000000008c66cfff] ACPI NVS Nov 1 01:15:07.039630 kernel: BIOS-e820: [mem 0x000000008c66d000-0x000000008eefefff] reserved Nov 1 01:15:07.039635 kernel: BIOS-e820: [mem 
0x000000008eeff000-0x000000008eefffff] usable Nov 1 01:15:07.039640 kernel: BIOS-e820: [mem 0x000000008ef00000-0x000000008fffffff] reserved Nov 1 01:15:07.039645 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved Nov 1 01:15:07.039649 kernel: BIOS-e820: [mem 0x00000000fe000000-0x00000000fe010fff] reserved Nov 1 01:15:07.039654 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec00fff] reserved Nov 1 01:15:07.039658 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved Nov 1 01:15:07.039663 kernel: BIOS-e820: [mem 0x00000000ff000000-0x00000000ffffffff] reserved Nov 1 01:15:07.039668 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000086effffff] usable Nov 1 01:15:07.039672 kernel: NX (Execute Disable) protection: active Nov 1 01:15:07.039677 kernel: APIC: Static calls initialized Nov 1 01:15:07.039681 kernel: SMBIOS 3.2.1 present. Nov 1 01:15:07.039686 kernel: DMI: Supermicro SYS-5019C-MR-PH004/X11SCM-F, BIOS 1.9 09/16/2022 Nov 1 01:15:07.039692 kernel: tsc: Detected 3400.000 MHz processor Nov 1 01:15:07.039697 kernel: tsc: Detected 3399.906 MHz TSC Nov 1 01:15:07.039701 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Nov 1 01:15:07.039706 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Nov 1 01:15:07.039711 kernel: last_pfn = 0x86f000 max_arch_pfn = 0x400000000 Nov 1 01:15:07.039716 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 23), built from 10 variable MTRRs Nov 1 01:15:07.039721 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Nov 1 01:15:07.039726 kernel: last_pfn = 0x8ef00 max_arch_pfn = 0x400000000 Nov 1 01:15:07.039730 kernel: Using GB pages for direct mapping Nov 1 01:15:07.039736 kernel: ACPI: Early table checksum verification disabled Nov 1 01:15:07.039741 kernel: ACPI: RSDP 0x00000000000F05B0 000024 (v02 SUPERM) Nov 1 01:15:07.039746 kernel: ACPI: XSDT 0x000000008C54E0C8 00010C (v01 SUPERM SUPERM 01072009 AMI 00010013) Nov 1 01:15:07.039753 kernel: ACPI: FACP 
0x000000008C58A670 000114 (v06 01072009 AMI 00010013) Nov 1 01:15:07.039758 kernel: ACPI: DSDT 0x000000008C54E268 03C404 (v02 SUPERM SMCI--MB 01072009 INTL 20160527) Nov 1 01:15:07.039763 kernel: ACPI: FACS 0x000000008C66CF80 000040 Nov 1 01:15:07.039768 kernel: ACPI: APIC 0x000000008C58A788 00012C (v04 01072009 AMI 00010013) Nov 1 01:15:07.039774 kernel: ACPI: FPDT 0x000000008C58A8B8 000044 (v01 01072009 AMI 00010013) Nov 1 01:15:07.039779 kernel: ACPI: FIDT 0x000000008C58A900 00009C (v01 SUPERM SMCI--MB 01072009 AMI 00010013) Nov 1 01:15:07.039784 kernel: ACPI: MCFG 0x000000008C58A9A0 00003C (v01 SUPERM SMCI--MB 01072009 MSFT 00000097) Nov 1 01:15:07.039789 kernel: ACPI: SPMI 0x000000008C58A9E0 000041 (v05 SUPERM SMCI--MB 00000000 AMI. 00000000) Nov 1 01:15:07.039794 kernel: ACPI: SSDT 0x000000008C58AA28 001B1C (v02 CpuRef CpuSsdt 00003000 INTL 20160527) Nov 1 01:15:07.039799 kernel: ACPI: SSDT 0x000000008C58C548 0031C6 (v02 SaSsdt SaSsdt 00003000 INTL 20160527) Nov 1 01:15:07.039804 kernel: ACPI: SSDT 0x000000008C58F710 00232B (v02 PegSsd PegSsdt 00001000 INTL 20160527) Nov 1 01:15:07.039810 kernel: ACPI: HPET 0x000000008C591A40 000038 (v01 SUPERM SMCI--MB 00000002 01000013) Nov 1 01:15:07.039815 kernel: ACPI: SSDT 0x000000008C591A78 000FAE (v02 SUPERM Ther_Rvp 00001000 INTL 20160527) Nov 1 01:15:07.039820 kernel: ACPI: SSDT 0x000000008C592A28 0008F4 (v02 INTEL xh_mossb 00000000 INTL 20160527) Nov 1 01:15:07.039826 kernel: ACPI: UEFI 0x000000008C593320 000042 (v01 SUPERM SMCI--MB 00000002 01000013) Nov 1 01:15:07.039831 kernel: ACPI: LPIT 0x000000008C593368 000094 (v01 SUPERM SMCI--MB 00000002 01000013) Nov 1 01:15:07.039836 kernel: ACPI: SSDT 0x000000008C593400 0027DE (v02 SUPERM PtidDevc 00001000 INTL 20160527) Nov 1 01:15:07.039841 kernel: ACPI: SSDT 0x000000008C595BE0 0014E2 (v02 SUPERM TbtTypeC 00000000 INTL 20160527) Nov 1 01:15:07.039846 kernel: ACPI: DBGP 0x000000008C5970C8 000034 (v01 SUPERM SMCI--MB 00000002 01000013) Nov 1 01:15:07.039851 kernel: 
ACPI: DBG2 0x000000008C597100 000054 (v00 SUPERM SMCI--MB 00000002 01000013) Nov 1 01:15:07.039857 kernel: ACPI: SSDT 0x000000008C597158 001B67 (v02 SUPERM UsbCTabl 00001000 INTL 20160527) Nov 1 01:15:07.039862 kernel: ACPI: DMAR 0x000000008C598CC0 000070 (v01 INTEL EDK2 00000002 01000013) Nov 1 01:15:07.039867 kernel: ACPI: SSDT 0x000000008C598D30 000144 (v02 Intel ADebTabl 00001000 INTL 20160527) Nov 1 01:15:07.039872 kernel: ACPI: TPM2 0x000000008C598E78 000034 (v04 SUPERM SMCI--MB 00000001 AMI 00000000) Nov 1 01:15:07.039877 kernel: ACPI: SSDT 0x000000008C598EB0 000D8F (v02 INTEL SpsNm 00000002 INTL 20160527) Nov 1 01:15:07.039882 kernel: ACPI: WSMT 0x000000008C599C40 000028 (v01 SUPERM 01072009 AMI 00010013) Nov 1 01:15:07.039887 kernel: ACPI: EINJ 0x000000008C599C68 000130 (v01 AMI AMI.EINJ 00000000 AMI. 00000000) Nov 1 01:15:07.039893 kernel: ACPI: ERST 0x000000008C599D98 000230 (v01 AMIER AMI.ERST 00000000 AMI. 00000000) Nov 1 01:15:07.039899 kernel: ACPI: BERT 0x000000008C599FC8 000030 (v01 AMI AMI.BERT 00000000 AMI. 00000000) Nov 1 01:15:07.039904 kernel: ACPI: HEST 0x000000008C599FF8 00027C (v01 AMI AMI.HEST 00000000 AMI. 
00000000) Nov 1 01:15:07.039909 kernel: ACPI: SSDT 0x000000008C59A278 000162 (v01 SUPERM SMCCDN 00000000 INTL 20181221) Nov 1 01:15:07.039914 kernel: ACPI: Reserving FACP table memory at [mem 0x8c58a670-0x8c58a783] Nov 1 01:15:07.039919 kernel: ACPI: Reserving DSDT table memory at [mem 0x8c54e268-0x8c58a66b] Nov 1 01:15:07.039924 kernel: ACPI: Reserving FACS table memory at [mem 0x8c66cf80-0x8c66cfbf] Nov 1 01:15:07.039929 kernel: ACPI: Reserving APIC table memory at [mem 0x8c58a788-0x8c58a8b3] Nov 1 01:15:07.039934 kernel: ACPI: Reserving FPDT table memory at [mem 0x8c58a8b8-0x8c58a8fb] Nov 1 01:15:07.039939 kernel: ACPI: Reserving FIDT table memory at [mem 0x8c58a900-0x8c58a99b] Nov 1 01:15:07.039946 kernel: ACPI: Reserving MCFG table memory at [mem 0x8c58a9a0-0x8c58a9db] Nov 1 01:15:07.039951 kernel: ACPI: Reserving SPMI table memory at [mem 0x8c58a9e0-0x8c58aa20] Nov 1 01:15:07.039956 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58aa28-0x8c58c543] Nov 1 01:15:07.039961 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58c548-0x8c58f70d] Nov 1 01:15:07.039966 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58f710-0x8c591a3a] Nov 1 01:15:07.039971 kernel: ACPI: Reserving HPET table memory at [mem 0x8c591a40-0x8c591a77] Nov 1 01:15:07.039976 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c591a78-0x8c592a25] Nov 1 01:15:07.039981 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c592a28-0x8c59331b] Nov 1 01:15:07.039986 kernel: ACPI: Reserving UEFI table memory at [mem 0x8c593320-0x8c593361] Nov 1 01:15:07.039992 kernel: ACPI: Reserving LPIT table memory at [mem 0x8c593368-0x8c5933fb] Nov 1 01:15:07.039997 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c593400-0x8c595bdd] Nov 1 01:15:07.040002 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c595be0-0x8c5970c1] Nov 1 01:15:07.040007 kernel: ACPI: Reserving DBGP table memory at [mem 0x8c5970c8-0x8c5970fb] Nov 1 01:15:07.040012 kernel: ACPI: Reserving DBG2 table memory at [mem 
0x8c597100-0x8c597153] Nov 1 01:15:07.040017 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c597158-0x8c598cbe] Nov 1 01:15:07.040022 kernel: ACPI: Reserving DMAR table memory at [mem 0x8c598cc0-0x8c598d2f] Nov 1 01:15:07.040027 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c598d30-0x8c598e73] Nov 1 01:15:07.040032 kernel: ACPI: Reserving TPM2 table memory at [mem 0x8c598e78-0x8c598eab] Nov 1 01:15:07.040038 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c598eb0-0x8c599c3e] Nov 1 01:15:07.040043 kernel: ACPI: Reserving WSMT table memory at [mem 0x8c599c40-0x8c599c67] Nov 1 01:15:07.040048 kernel: ACPI: Reserving EINJ table memory at [mem 0x8c599c68-0x8c599d97] Nov 1 01:15:07.040053 kernel: ACPI: Reserving ERST table memory at [mem 0x8c599d98-0x8c599fc7] Nov 1 01:15:07.040058 kernel: ACPI: Reserving BERT table memory at [mem 0x8c599fc8-0x8c599ff7] Nov 1 01:15:07.040063 kernel: ACPI: Reserving HEST table memory at [mem 0x8c599ff8-0x8c59a273] Nov 1 01:15:07.040068 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c59a278-0x8c59a3d9] Nov 1 01:15:07.040073 kernel: No NUMA configuration found Nov 1 01:15:07.040078 kernel: Faking a node at [mem 0x0000000000000000-0x000000086effffff] Nov 1 01:15:07.040083 kernel: NODE_DATA(0) allocated [mem 0x86effa000-0x86effffff] Nov 1 01:15:07.040090 kernel: Zone ranges: Nov 1 01:15:07.040095 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Nov 1 01:15:07.040100 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Nov 1 01:15:07.040105 kernel: Normal [mem 0x0000000100000000-0x000000086effffff] Nov 1 01:15:07.040110 kernel: Movable zone start for each node Nov 1 01:15:07.040115 kernel: Early memory node ranges Nov 1 01:15:07.040120 kernel: node 0: [mem 0x0000000000001000-0x0000000000098fff] Nov 1 01:15:07.040125 kernel: node 0: [mem 0x0000000000100000-0x000000003fffffff] Nov 1 01:15:07.040130 kernel: node 0: [mem 0x0000000040400000-0x0000000081b25fff] Nov 1 01:15:07.040136 kernel: node 0: [mem 
0x0000000081b28000-0x000000008afccfff] Nov 1 01:15:07.040142 kernel: node 0: [mem 0x000000008c0b2000-0x000000008c23afff] Nov 1 01:15:07.040146 kernel: node 0: [mem 0x000000008eeff000-0x000000008eefffff] Nov 1 01:15:07.040152 kernel: node 0: [mem 0x0000000100000000-0x000000086effffff] Nov 1 01:15:07.040160 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000086effffff] Nov 1 01:15:07.040167 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Nov 1 01:15:07.040172 kernel: On node 0, zone DMA: 103 pages in unavailable ranges Nov 1 01:15:07.040177 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges Nov 1 01:15:07.040184 kernel: On node 0, zone DMA32: 2 pages in unavailable ranges Nov 1 01:15:07.040189 kernel: On node 0, zone DMA32: 4325 pages in unavailable ranges Nov 1 01:15:07.040194 kernel: On node 0, zone DMA32: 11460 pages in unavailable ranges Nov 1 01:15:07.040200 kernel: On node 0, zone Normal: 4352 pages in unavailable ranges Nov 1 01:15:07.040208 kernel: On node 0, zone Normal: 4096 pages in unavailable ranges Nov 1 01:15:07.040213 kernel: ACPI: PM-Timer IO Port: 0x1808 Nov 1 01:15:07.040239 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1]) Nov 1 01:15:07.040244 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1]) Nov 1 01:15:07.040250 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1]) Nov 1 01:15:07.040270 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1]) Nov 1 01:15:07.040275 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1]) Nov 1 01:15:07.040281 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1]) Nov 1 01:15:07.040286 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1]) Nov 1 01:15:07.040292 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1]) Nov 1 01:15:07.040297 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1]) Nov 1 01:15:07.040302 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1]) Nov 1 01:15:07.040308 kernel: ACPI: LAPIC_NMI 
(acpi_id[0x0b] high edge lint[0x1]) Nov 1 01:15:07.040313 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1]) Nov 1 01:15:07.040319 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1]) Nov 1 01:15:07.040325 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1]) Nov 1 01:15:07.040330 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1]) Nov 1 01:15:07.040335 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1]) Nov 1 01:15:07.040341 kernel: IOAPIC[0]: apic_id 2, version 32, address 0xfec00000, GSI 0-119 Nov 1 01:15:07.040346 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Nov 1 01:15:07.040352 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Nov 1 01:15:07.040357 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Nov 1 01:15:07.040363 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Nov 1 01:15:07.040369 kernel: TSC deadline timer available Nov 1 01:15:07.040375 kernel: smpboot: Allowing 16 CPUs, 0 hotplug CPUs Nov 1 01:15:07.040380 kernel: [mem 0x90000000-0xdfffffff] available for PCI devices Nov 1 01:15:07.040386 kernel: Booting paravirtualized kernel on bare hardware Nov 1 01:15:07.040391 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Nov 1 01:15:07.040397 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1 Nov 1 01:15:07.040402 kernel: percpu: Embedded 58 pages/cpu s196712 r8192 d32664 u262144 Nov 1 01:15:07.040408 kernel: pcpu-alloc: s196712 r8192 d32664 u262144 alloc=1*2097152 Nov 1 01:15:07.040413 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15 Nov 1 01:15:07.040420 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 
flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=ade41980c48607de3d2d18dc444731ec5388853e3a75ed2d5a13ce616b36f478 Nov 1 01:15:07.040426 kernel: random: crng init done Nov 1 01:15:07.040431 kernel: Dentry cache hash table entries: 4194304 (order: 13, 33554432 bytes, linear) Nov 1 01:15:07.040436 kernel: Inode-cache hash table entries: 2097152 (order: 12, 16777216 bytes, linear) Nov 1 01:15:07.040442 kernel: Fallback order for Node 0: 0 Nov 1 01:15:07.040447 kernel: Built 1 zonelists, mobility grouping on. Total pages: 8232415 Nov 1 01:15:07.040453 kernel: Policy zone: Normal Nov 1 01:15:07.040458 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Nov 1 01:15:07.040464 kernel: software IO TLB: area num 16. Nov 1 01:15:07.040470 kernel: Memory: 32720296K/33452980K available (12288K kernel code, 2288K rwdata, 22748K rodata, 42884K init, 2316K bss, 732424K reserved, 0K cma-reserved) Nov 1 01:15:07.040476 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1 Nov 1 01:15:07.040481 kernel: ftrace: allocating 37980 entries in 149 pages Nov 1 01:15:07.040486 kernel: ftrace: allocated 149 pages with 4 groups Nov 1 01:15:07.040492 kernel: Dynamic Preempt: voluntary Nov 1 01:15:07.040497 kernel: rcu: Preemptible hierarchical RCU implementation. Nov 1 01:15:07.040503 kernel: rcu: RCU event tracing is enabled. Nov 1 01:15:07.040509 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16. Nov 1 01:15:07.040515 kernel: Trampoline variant of Tasks RCU enabled. Nov 1 01:15:07.040521 kernel: Rude variant of Tasks RCU enabled. Nov 1 01:15:07.040526 kernel: Tracing variant of Tasks RCU enabled. Nov 1 01:15:07.040531 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Nov 1 01:15:07.040537 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16 Nov 1 01:15:07.040542 kernel: NR_IRQS: 33024, nr_irqs: 2184, preallocated irqs: 16 Nov 1 01:15:07.040548 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Nov 1 01:15:07.040553 kernel: Console: colour dummy device 80x25 Nov 1 01:15:07.040558 kernel: printk: console [tty0] enabled Nov 1 01:15:07.040564 kernel: printk: console [ttyS1] enabled Nov 1 01:15:07.040570 kernel: ACPI: Core revision 20230628 Nov 1 01:15:07.040576 kernel: hpet: HPET dysfunctional in PC10. Force disabled. Nov 1 01:15:07.040581 kernel: APIC: Switch to symmetric I/O mode setup Nov 1 01:15:07.040586 kernel: DMAR: Host address width 39 Nov 1 01:15:07.040592 kernel: DMAR: DRHD base: 0x000000fed91000 flags: 0x1 Nov 1 01:15:07.040597 kernel: DMAR: dmar0: reg_base_addr fed91000 ver 1:0 cap d2008c40660462 ecap f050da Nov 1 01:15:07.040603 kernel: DMAR: RMRR base: 0x0000008cf18000 end: 0x0000008d161fff Nov 1 01:15:07.040608 kernel: DMAR-IR: IOAPIC id 2 under DRHD base 0xfed91000 IOMMU 0 Nov 1 01:15:07.040614 kernel: DMAR-IR: HPET id 0 under DRHD base 0xfed91000 Nov 1 01:15:07.040620 kernel: DMAR-IR: Queued invalidation will be enabled to support x2apic and Intr-remapping. Nov 1 01:15:07.040626 kernel: DMAR-IR: Enabled IRQ remapping in x2apic mode Nov 1 01:15:07.040631 kernel: x2apic enabled Nov 1 01:15:07.040637 kernel: APIC: Switched APIC routing to: cluster x2apic Nov 1 01:15:07.040642 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x3101f59f5e6, max_idle_ns: 440795259996 ns Nov 1 01:15:07.040648 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 
6799.81 BogoMIPS (lpj=3399906) Nov 1 01:15:07.040653 kernel: CPU0: Thermal monitoring enabled (TM1) Nov 1 01:15:07.040659 kernel: process: using mwait in idle threads Nov 1 01:15:07.040664 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Nov 1 01:15:07.040670 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4 Nov 1 01:15:07.040676 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Nov 1 01:15:07.040681 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit Nov 1 01:15:07.040687 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall Nov 1 01:15:07.040692 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS Nov 1 01:15:07.040697 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT Nov 1 01:15:07.040703 kernel: RETBleed: Mitigation: Enhanced IBRS Nov 1 01:15:07.040708 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Nov 1 01:15:07.040713 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Nov 1 01:15:07.040719 kernel: TAA: Mitigation: TSX disabled Nov 1 01:15:07.040724 kernel: MMIO Stale Data: Mitigation: Clear CPU buffers Nov 1 01:15:07.040730 kernel: SRBDS: Mitigation: Microcode Nov 1 01:15:07.040736 kernel: GDS: Mitigation: Microcode Nov 1 01:15:07.040741 kernel: active return thunk: its_return_thunk Nov 1 01:15:07.040747 kernel: ITS: Mitigation: Aligned branch/return thunks Nov 1 01:15:07.040752 kernel: VMSCAPE: Mitigation: IBPB before exit to userspace Nov 1 01:15:07.040757 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Nov 1 01:15:07.040763 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Nov 1 01:15:07.040768 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Nov 1 01:15:07.040773 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers' Nov 1 01:15:07.040779 kernel: x86/fpu: Supporting 
XSAVE feature 0x010: 'MPX CSR' Nov 1 01:15:07.040784 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Nov 1 01:15:07.040791 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64 Nov 1 01:15:07.040796 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64 Nov 1 01:15:07.040801 kernel: x86/fpu: Enabled xstate features 0x1f, context size is 960 bytes, using 'compacted' format. Nov 1 01:15:07.040807 kernel: Freeing SMP alternatives memory: 32K Nov 1 01:15:07.040812 kernel: pid_max: default: 32768 minimum: 301 Nov 1 01:15:07.040818 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Nov 1 01:15:07.040823 kernel: landlock: Up and running. Nov 1 01:15:07.040828 kernel: SELinux: Initializing. Nov 1 01:15:07.040834 kernel: Mount-cache hash table entries: 65536 (order: 7, 524288 bytes, linear) Nov 1 01:15:07.040839 kernel: Mountpoint-cache hash table entries: 65536 (order: 7, 524288 bytes, linear) Nov 1 01:15:07.040845 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd) Nov 1 01:15:07.040850 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Nov 1 01:15:07.040857 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Nov 1 01:15:07.040862 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Nov 1 01:15:07.040868 kernel: Performance Events: PEBS fmt3+, Skylake events, 32-deep LBR, full-width counters, Intel PMU driver. Nov 1 01:15:07.040873 kernel: ... version: 4 Nov 1 01:15:07.040879 kernel: ... bit width: 48 Nov 1 01:15:07.040884 kernel: ... generic registers: 4 Nov 1 01:15:07.040890 kernel: ... value mask: 0000ffffffffffff Nov 1 01:15:07.040895 kernel: ... max period: 00007fffffffffff Nov 1 01:15:07.040900 kernel: ... fixed-purpose events: 3 Nov 1 01:15:07.040907 kernel: ... 
event mask: 000000070000000f Nov 1 01:15:07.040912 kernel: signal: max sigframe size: 2032 Nov 1 01:15:07.040918 kernel: Estimated ratio of average max frequency by base frequency (times 1024): 1445 Nov 1 01:15:07.040923 kernel: rcu: Hierarchical SRCU implementation. Nov 1 01:15:07.040929 kernel: rcu: Max phase no-delay instances is 400. Nov 1 01:15:07.040934 kernel: NMI watchdog: Enabled. Permanently consumes one hw-PMU counter. Nov 1 01:15:07.040940 kernel: smp: Bringing up secondary CPUs ... Nov 1 01:15:07.040945 kernel: smpboot: x86: Booting SMP configuration: Nov 1 01:15:07.040950 kernel: .... node #0, CPUs: #1 #2 #3 #4 #5 #6 #7 #8 #9 #10 #11 #12 #13 #14 #15 Nov 1 01:15:07.040957 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Nov 1 01:15:07.040963 kernel: smp: Brought up 1 node, 16 CPUs Nov 1 01:15:07.040968 kernel: smpboot: Max logical packages: 1 Nov 1 01:15:07.040974 kernel: smpboot: Total of 16 processors activated (108796.99 BogoMIPS) Nov 1 01:15:07.040979 kernel: devtmpfs: initialized Nov 1 01:15:07.040985 kernel: x86/mm: Memory block size: 128MB Nov 1 01:15:07.040990 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x81b26000-0x81b26fff] (4096 bytes) Nov 1 01:15:07.040996 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x8c23b000-0x8c66cfff] (4399104 bytes) Nov 1 01:15:07.041002 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Nov 1 01:15:07.041008 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear) Nov 1 01:15:07.041013 kernel: pinctrl core: initialized pinctrl subsystem Nov 1 01:15:07.041018 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Nov 1 01:15:07.041024 kernel: audit: initializing netlink subsys (disabled) Nov 1 01:15:07.041029 kernel: audit: type=2000 audit(1761959701.040:1): state=initialized audit_enabled=0 res=1 Nov 
1 01:15:07.041035 kernel: thermal_sys: Registered thermal governor 'step_wise' Nov 1 01:15:07.041040 kernel: thermal_sys: Registered thermal governor 'user_space' Nov 1 01:15:07.041046 kernel: cpuidle: using governor menu Nov 1 01:15:07.041052 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Nov 1 01:15:07.041058 kernel: dca service started, version 1.12.1 Nov 1 01:15:07.041063 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000) Nov 1 01:15:07.041068 kernel: PCI: Using configuration type 1 for base access Nov 1 01:15:07.041074 kernel: ENERGY_PERF_BIAS: Set to 'normal', was 'performance' Nov 1 01:15:07.041079 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Nov 1 01:15:07.041085 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Nov 1 01:15:07.041090 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Nov 1 01:15:07.041096 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Nov 1 01:15:07.041102 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Nov 1 01:15:07.041107 kernel: ACPI: Added _OSI(Module Device) Nov 1 01:15:07.041113 kernel: ACPI: Added _OSI(Processor Device) Nov 1 01:15:07.041118 kernel: ACPI: Added _OSI(Processor Aggregator Device) Nov 1 01:15:07.041124 kernel: ACPI: 12 ACPI AML tables successfully acquired and loaded Nov 1 01:15:07.041129 kernel: ACPI: Dynamic OEM Table Load: Nov 1 01:15:07.041135 kernel: ACPI: SSDT 0xFFFF933441B54000 000400 (v02 PmRef Cpu0Cst 00003001 INTL 20160527) Nov 1 01:15:07.041140 kernel: ACPI: Dynamic OEM Table Load: Nov 1 01:15:07.041146 kernel: ACPI: SSDT 0xFFFF933441B4A800 000683 (v02 PmRef Cpu0Ist 00003000 INTL 20160527) Nov 1 01:15:07.041152 kernel: ACPI: Dynamic OEM Table Load: Nov 1 01:15:07.041157 kernel: ACPI: SSDT 0xFFFF933440247600 0000F4 (v02 PmRef Cpu0Psd 00003000 INTL 20160527) Nov 1 01:15:07.041163 kernel: ACPI: Dynamic OEM Table Load: Nov 
1 01:15:07.041168 kernel: ACPI: SSDT 0xFFFF933441E7A000 0005FC (v02 PmRef ApIst 00003000 INTL 20160527) Nov 1 01:15:07.041173 kernel: ACPI: Dynamic OEM Table Load: Nov 1 01:15:07.041179 kernel: ACPI: SSDT 0xFFFF93344012F000 000AB0 (v02 PmRef ApPsd 00003000 INTL 20160527) Nov 1 01:15:07.041184 kernel: ACPI: Dynamic OEM Table Load: Nov 1 01:15:07.041189 kernel: ACPI: SSDT 0xFFFF933441B50400 00030A (v02 PmRef ApCst 00003000 INTL 20160527) Nov 1 01:15:07.041195 kernel: ACPI: _OSC evaluated successfully for all CPUs Nov 1 01:15:07.041200 kernel: ACPI: Interpreter enabled Nov 1 01:15:07.041208 kernel: ACPI: PM: (supports S0 S5) Nov 1 01:15:07.041232 kernel: ACPI: Using IOAPIC for interrupt routing Nov 1 01:15:07.041238 kernel: HEST: Enabling Firmware First mode for corrected errors. Nov 1 01:15:07.041243 kernel: mce: [Firmware Bug]: Ignoring request to disable invalid MCA bank 14. Nov 1 01:15:07.041249 kernel: HEST: Table parsing has been initialized. Nov 1 01:15:07.041269 kernel: GHES: APEI firmware first mode is enabled by APEI bit and WHEA _OSC. 
Nov 1 01:15:07.041275 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Nov 1 01:15:07.041280 kernel: PCI: Using E820 reservations for host bridge windows Nov 1 01:15:07.041286 kernel: ACPI: Enabled 9 GPEs in block 00 to 7F Nov 1 01:15:07.041292 kernel: ACPI: \_SB_.PCI0.XDCI.USBC: New power resource Nov 1 01:15:07.041298 kernel: ACPI: \_SB_.PCI0.SAT0.VOL0.V0PR: New power resource Nov 1 01:15:07.041303 kernel: ACPI: \_SB_.PCI0.SAT0.VOL1.V1PR: New power resource Nov 1 01:15:07.041309 kernel: ACPI: \_SB_.PCI0.SAT0.VOL2.V2PR: New power resource Nov 1 01:15:07.041314 kernel: ACPI: \_SB_.PCI0.CNVW.WRST: New power resource Nov 1 01:15:07.041320 kernel: ACPI: \_TZ_.FN00: New power resource Nov 1 01:15:07.041325 kernel: ACPI: \_TZ_.FN01: New power resource Nov 1 01:15:07.041330 kernel: ACPI: \_TZ_.FN02: New power resource Nov 1 01:15:07.041336 kernel: ACPI: \_TZ_.FN03: New power resource Nov 1 01:15:07.041342 kernel: ACPI: \_TZ_.FN04: New power resource Nov 1 01:15:07.041348 kernel: ACPI: \PIN_: New power resource Nov 1 01:15:07.041353 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-fe]) Nov 1 01:15:07.041425 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Nov 1 01:15:07.041481 kernel: acpi PNP0A08:00: _OSC: platform does not support [AER] Nov 1 01:15:07.041531 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability LTR] Nov 1 01:15:07.041539 kernel: PCI host bridge to bus 0000:00 Nov 1 01:15:07.041592 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Nov 1 01:15:07.041638 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Nov 1 01:15:07.041680 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Nov 1 01:15:07.041724 kernel: pci_bus 0000:00: root bus resource [mem 0x90000000-0xdfffffff window] Nov 1 01:15:07.041766 kernel: pci_bus 0000:00: root bus resource [mem 0xfc800000-0xfe7fffff 
window] Nov 1 01:15:07.041809 kernel: pci_bus 0000:00: root bus resource [bus 00-fe] Nov 1 01:15:07.041871 kernel: pci 0000:00:00.0: [8086:3e31] type 00 class 0x060000 Nov 1 01:15:07.041931 kernel: pci 0000:00:01.0: [8086:1901] type 01 class 0x060400 Nov 1 01:15:07.041983 kernel: pci 0000:00:01.0: PME# supported from D0 D3hot D3cold Nov 1 01:15:07.042036 kernel: pci 0000:00:08.0: [8086:1911] type 00 class 0x088000 Nov 1 01:15:07.042087 kernel: pci 0000:00:08.0: reg 0x10: [mem 0x9551f000-0x9551ffff 64bit] Nov 1 01:15:07.042139 kernel: pci 0000:00:12.0: [8086:a379] type 00 class 0x118000 Nov 1 01:15:07.042189 kernel: pci 0000:00:12.0: reg 0x10: [mem 0x9551e000-0x9551efff 64bit] Nov 1 01:15:07.042282 kernel: pci 0000:00:14.0: [8086:a36d] type 00 class 0x0c0330 Nov 1 01:15:07.042332 kernel: pci 0000:00:14.0: reg 0x10: [mem 0x95500000-0x9550ffff 64bit] Nov 1 01:15:07.042381 kernel: pci 0000:00:14.0: PME# supported from D3hot D3cold Nov 1 01:15:07.042434 kernel: pci 0000:00:14.2: [8086:a36f] type 00 class 0x050000 Nov 1 01:15:07.042483 kernel: pci 0000:00:14.2: reg 0x10: [mem 0x95512000-0x95513fff 64bit] Nov 1 01:15:07.042532 kernel: pci 0000:00:14.2: reg 0x18: [mem 0x9551d000-0x9551dfff 64bit] Nov 1 01:15:07.042589 kernel: pci 0000:00:15.0: [8086:a368] type 00 class 0x0c8000 Nov 1 01:15:07.042639 kernel: pci 0000:00:15.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit] Nov 1 01:15:07.042695 kernel: pci 0000:00:15.1: [8086:a369] type 00 class 0x0c8000 Nov 1 01:15:07.042746 kernel: pci 0000:00:15.1: reg 0x10: [mem 0x00000000-0x00000fff 64bit] Nov 1 01:15:07.042799 kernel: pci 0000:00:16.0: [8086:a360] type 00 class 0x078000 Nov 1 01:15:07.042848 kernel: pci 0000:00:16.0: reg 0x10: [mem 0x9551a000-0x9551afff 64bit] Nov 1 01:15:07.042900 kernel: pci 0000:00:16.0: PME# supported from D3hot Nov 1 01:15:07.042952 kernel: pci 0000:00:16.1: [8086:a361] type 00 class 0x078000 Nov 1 01:15:07.043013 kernel: pci 0000:00:16.1: reg 0x10: [mem 0x95519000-0x95519fff 64bit] Nov 1 
01:15:07.043064 kernel: pci 0000:00:16.1: PME# supported from D3hot Nov 1 01:15:07.043118 kernel: pci 0000:00:16.4: [8086:a364] type 00 class 0x078000 Nov 1 01:15:07.043168 kernel: pci 0000:00:16.4: reg 0x10: [mem 0x95518000-0x95518fff 64bit] Nov 1 01:15:07.043243 kernel: pci 0000:00:16.4: PME# supported from D3hot Nov 1 01:15:07.043313 kernel: pci 0000:00:17.0: [8086:a352] type 00 class 0x010601 Nov 1 01:15:07.043364 kernel: pci 0000:00:17.0: reg 0x10: [mem 0x95510000-0x95511fff] Nov 1 01:15:07.043412 kernel: pci 0000:00:17.0: reg 0x14: [mem 0x95517000-0x955170ff] Nov 1 01:15:07.043462 kernel: pci 0000:00:17.0: reg 0x18: [io 0x6050-0x6057] Nov 1 01:15:07.043510 kernel: pci 0000:00:17.0: reg 0x1c: [io 0x6040-0x6043] Nov 1 01:15:07.043559 kernel: pci 0000:00:17.0: reg 0x20: [io 0x6020-0x603f] Nov 1 01:15:07.043611 kernel: pci 0000:00:17.0: reg 0x24: [mem 0x95516000-0x955167ff] Nov 1 01:15:07.043659 kernel: pci 0000:00:17.0: PME# supported from D3hot Nov 1 01:15:07.043716 kernel: pci 0000:00:1b.0: [8086:a340] type 01 class 0x060400 Nov 1 01:15:07.043767 kernel: pci 0000:00:1b.0: PME# supported from D0 D3hot D3cold Nov 1 01:15:07.043824 kernel: pci 0000:00:1b.4: [8086:a32c] type 01 class 0x060400 Nov 1 01:15:07.043877 kernel: pci 0000:00:1b.4: PME# supported from D0 D3hot D3cold Nov 1 01:15:07.043931 kernel: pci 0000:00:1b.5: [8086:a32d] type 01 class 0x060400 Nov 1 01:15:07.043981 kernel: pci 0000:00:1b.5: PME# supported from D0 D3hot D3cold Nov 1 01:15:07.044035 kernel: pci 0000:00:1c.0: [8086:a338] type 01 class 0x060400 Nov 1 01:15:07.044085 kernel: pci 0000:00:1c.0: PME# supported from D0 D3hot D3cold Nov 1 01:15:07.044139 kernel: pci 0000:00:1c.3: [8086:a33b] type 01 class 0x060400 Nov 1 01:15:07.044190 kernel: pci 0000:00:1c.3: PME# supported from D0 D3hot D3cold Nov 1 01:15:07.044283 kernel: pci 0000:00:1e.0: [8086:a328] type 00 class 0x078000 Nov 1 01:15:07.044333 kernel: pci 0000:00:1e.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit] Nov 1 01:15:07.044387 
kernel: pci 0000:00:1f.0: [8086:a309] type 00 class 0x060100 Nov 1 01:15:07.044440 kernel: pci 0000:00:1f.4: [8086:a323] type 00 class 0x0c0500 Nov 1 01:15:07.044491 kernel: pci 0000:00:1f.4: reg 0x10: [mem 0x95514000-0x955140ff 64bit] Nov 1 01:15:07.044544 kernel: pci 0000:00:1f.4: reg 0x20: [io 0xefa0-0xefbf] Nov 1 01:15:07.044598 kernel: pci 0000:00:1f.5: [8086:a324] type 00 class 0x0c8000 Nov 1 01:15:07.044649 kernel: pci 0000:00:1f.5: reg 0x10: [mem 0xfe010000-0xfe010fff] Nov 1 01:15:07.044704 kernel: pci 0000:01:00.0: [15b3:1015] type 00 class 0x020000 Nov 1 01:15:07.044756 kernel: pci 0000:01:00.0: reg 0x10: [mem 0x92000000-0x93ffffff 64bit pref] Nov 1 01:15:07.044806 kernel: pci 0000:01:00.0: reg 0x30: [mem 0x95200000-0x952fffff pref] Nov 1 01:15:07.044860 kernel: pci 0000:01:00.0: PME# supported from D3cold Nov 1 01:15:07.044909 kernel: pci 0000:01:00.0: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref] Nov 1 01:15:07.044961 kernel: pci 0000:01:00.0: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs) Nov 1 01:15:07.045017 kernel: pci 0000:01:00.1: [15b3:1015] type 00 class 0x020000 Nov 1 01:15:07.045067 kernel: pci 0000:01:00.1: reg 0x10: [mem 0x90000000-0x91ffffff 64bit pref] Nov 1 01:15:07.045118 kernel: pci 0000:01:00.1: reg 0x30: [mem 0x95100000-0x951fffff pref] Nov 1 01:15:07.045168 kernel: pci 0000:01:00.1: PME# supported from D3cold Nov 1 01:15:07.045244 kernel: pci 0000:01:00.1: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref] Nov 1 01:15:07.045309 kernel: pci 0000:01:00.1: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs) Nov 1 01:15:07.045360 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Nov 1 01:15:07.045409 kernel: pci 0000:00:01.0: bridge window [mem 0x95100000-0x952fffff] Nov 1 01:15:07.045460 kernel: pci 0000:00:01.0: bridge window [mem 0x90000000-0x93ffffff 64bit pref] Nov 1 01:15:07.045510 kernel: pci 0000:00:1b.0: PCI bridge to [bus 02] Nov 1 01:15:07.045566 kernel: pci 
0000:03:00.0: working around ROM BAR overlap defect Nov 1 01:15:07.045618 kernel: pci 0000:03:00.0: [8086:1533] type 00 class 0x020000 Nov 1 01:15:07.045670 kernel: pci 0000:03:00.0: reg 0x10: [mem 0x95400000-0x9547ffff] Nov 1 01:15:07.045722 kernel: pci 0000:03:00.0: reg 0x18: [io 0x5000-0x501f] Nov 1 01:15:07.045773 kernel: pci 0000:03:00.0: reg 0x1c: [mem 0x95480000-0x95483fff] Nov 1 01:15:07.045824 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold Nov 1 01:15:07.045874 kernel: pci 0000:00:1b.4: PCI bridge to [bus 03] Nov 1 01:15:07.045924 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] Nov 1 01:15:07.045973 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff] Nov 1 01:15:07.046032 kernel: pci 0000:04:00.0: working around ROM BAR overlap defect Nov 1 01:15:07.046084 kernel: pci 0000:04:00.0: [8086:1533] type 00 class 0x020000 Nov 1 01:15:07.046135 kernel: pci 0000:04:00.0: reg 0x10: [mem 0x95300000-0x9537ffff] Nov 1 01:15:07.046186 kernel: pci 0000:04:00.0: reg 0x18: [io 0x4000-0x401f] Nov 1 01:15:07.046284 kernel: pci 0000:04:00.0: reg 0x1c: [mem 0x95380000-0x95383fff] Nov 1 01:15:07.046338 kernel: pci 0000:04:00.0: PME# supported from D0 D3hot D3cold Nov 1 01:15:07.046388 kernel: pci 0000:00:1b.5: PCI bridge to [bus 04] Nov 1 01:15:07.046441 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] Nov 1 01:15:07.046490 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff] Nov 1 01:15:07.046540 kernel: pci 0000:00:1c.0: PCI bridge to [bus 05] Nov 1 01:15:07.046599 kernel: pci 0000:06:00.0: [1a03:1150] type 01 class 0x060400 Nov 1 01:15:07.046651 kernel: pci 0000:06:00.0: enabling Extended Tags Nov 1 01:15:07.046703 kernel: pci 0000:06:00.0: supports D1 D2 Nov 1 01:15:07.046753 kernel: pci 0000:06:00.0: PME# supported from D0 D1 D2 D3hot D3cold Nov 1 01:15:07.046804 kernel: pci 0000:00:1c.3: PCI bridge to [bus 06-07] Nov 1 01:15:07.046855 kernel: pci 0000:00:1c.3: bridge window [io 0x3000-0x3fff] Nov 1 
01:15:07.046905 kernel: pci 0000:00:1c.3: bridge window [mem 0x94000000-0x950fffff] Nov 1 01:15:07.046960 kernel: pci_bus 0000:07: extended config space not accessible Nov 1 01:15:07.047018 kernel: pci 0000:07:00.0: [1a03:2000] type 00 class 0x030000 Nov 1 01:15:07.047073 kernel: pci 0000:07:00.0: reg 0x10: [mem 0x94000000-0x94ffffff] Nov 1 01:15:07.047125 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x95000000-0x9501ffff] Nov 1 01:15:07.047178 kernel: pci 0000:07:00.0: reg 0x18: [io 0x3000-0x307f] Nov 1 01:15:07.047260 kernel: pci 0000:07:00.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Nov 1 01:15:07.047334 kernel: pci 0000:07:00.0: supports D1 D2 Nov 1 01:15:07.047387 kernel: pci 0000:07:00.0: PME# supported from D0 D1 D2 D3hot D3cold Nov 1 01:15:07.047438 kernel: pci 0000:06:00.0: PCI bridge to [bus 07] Nov 1 01:15:07.047489 kernel: pci 0000:06:00.0: bridge window [io 0x3000-0x3fff] Nov 1 01:15:07.047540 kernel: pci 0000:06:00.0: bridge window [mem 0x94000000-0x950fffff] Nov 1 01:15:07.047548 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 0 Nov 1 01:15:07.047554 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 1 Nov 1 01:15:07.047562 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 0 Nov 1 01:15:07.047568 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 0 Nov 1 01:15:07.047573 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 0 Nov 1 01:15:07.047579 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 0 Nov 1 01:15:07.047585 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 0 Nov 1 01:15:07.047591 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 0 Nov 1 01:15:07.047597 kernel: iommu: Default domain type: Translated Nov 1 01:15:07.047602 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Nov 1 01:15:07.047608 kernel: PCI: Using ACPI for IRQ routing Nov 1 01:15:07.047615 kernel: PCI: pci_cache_line_size set to 64 bytes Nov 1 01:15:07.047621 kernel: e820: reserve RAM 
buffer [mem 0x00099800-0x0009ffff] Nov 1 01:15:07.047626 kernel: e820: reserve RAM buffer [mem 0x81b26000-0x83ffffff] Nov 1 01:15:07.047632 kernel: e820: reserve RAM buffer [mem 0x8afcd000-0x8bffffff] Nov 1 01:15:07.047638 kernel: e820: reserve RAM buffer [mem 0x8c23b000-0x8fffffff] Nov 1 01:15:07.047643 kernel: e820: reserve RAM buffer [mem 0x8ef00000-0x8fffffff] Nov 1 01:15:07.047649 kernel: e820: reserve RAM buffer [mem 0x86f000000-0x86fffffff] Nov 1 01:15:07.047700 kernel: pci 0000:07:00.0: vgaarb: setting as boot VGA device Nov 1 01:15:07.047753 kernel: pci 0000:07:00.0: vgaarb: bridge control possible Nov 1 01:15:07.047808 kernel: pci 0000:07:00.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Nov 1 01:15:07.047816 kernel: vgaarb: loaded Nov 1 01:15:07.047822 kernel: clocksource: Switched to clocksource tsc-early Nov 1 01:15:07.047828 kernel: VFS: Disk quotas dquot_6.6.0 Nov 1 01:15:07.047834 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Nov 1 01:15:07.047840 kernel: pnp: PnP ACPI init Nov 1 01:15:07.047891 kernel: system 00:00: [mem 0x40000000-0x403fffff] has been reserved Nov 1 01:15:07.047940 kernel: pnp 00:02: [dma 0 disabled] Nov 1 01:15:07.047994 kernel: pnp 00:03: [dma 0 disabled] Nov 1 01:15:07.048045 kernel: system 00:04: [io 0x0680-0x069f] has been reserved Nov 1 01:15:07.048092 kernel: system 00:04: [io 0x164e-0x164f] has been reserved Nov 1 01:15:07.048140 kernel: system 00:05: [mem 0xfed10000-0xfed17fff] has been reserved Nov 1 01:15:07.048186 kernel: system 00:05: [mem 0xfed18000-0xfed18fff] has been reserved Nov 1 01:15:07.048280 kernel: system 00:05: [mem 0xfed19000-0xfed19fff] has been reserved Nov 1 01:15:07.048328 kernel: system 00:05: [mem 0xe0000000-0xefffffff] has been reserved Nov 1 01:15:07.048373 kernel: system 00:05: [mem 0xfed20000-0xfed3ffff] has been reserved Nov 1 01:15:07.048421 kernel: system 00:05: [mem 0xfed90000-0xfed93fff] could not be reserved Nov 1 01:15:07.048466 kernel: system 
00:05: [mem 0xfed45000-0xfed8ffff] has been reserved Nov 1 01:15:07.048511 kernel: system 00:05: [mem 0xfee00000-0xfeefffff] could not be reserved Nov 1 01:15:07.048560 kernel: system 00:06: [io 0x1800-0x18fe] could not be reserved Nov 1 01:15:07.048607 kernel: system 00:06: [mem 0xfd000000-0xfd69ffff] has been reserved Nov 1 01:15:07.048654 kernel: system 00:06: [mem 0xfd6c0000-0xfd6cffff] has been reserved Nov 1 01:15:07.048699 kernel: system 00:06: [mem 0xfd6f0000-0xfdffffff] has been reserved Nov 1 01:15:07.048744 kernel: system 00:06: [mem 0xfe000000-0xfe01ffff] could not be reserved Nov 1 01:15:07.048788 kernel: system 00:06: [mem 0xfe200000-0xfe7fffff] has been reserved Nov 1 01:15:07.048833 kernel: system 00:06: [mem 0xff000000-0xffffffff] has been reserved Nov 1 01:15:07.048881 kernel: system 00:07: [io 0x2000-0x20fe] has been reserved Nov 1 01:15:07.048890 kernel: pnp: PnP ACPI: found 9 devices Nov 1 01:15:07.048898 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Nov 1 01:15:07.048905 kernel: NET: Registered PF_INET protocol family Nov 1 01:15:07.048911 kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear) Nov 1 01:15:07.048917 kernel: tcp_listen_portaddr_hash hash table entries: 16384 (order: 6, 262144 bytes, linear) Nov 1 01:15:07.048923 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Nov 1 01:15:07.048928 kernel: TCP established hash table entries: 262144 (order: 9, 2097152 bytes, linear) Nov 1 01:15:07.048934 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Nov 1 01:15:07.048940 kernel: TCP: Hash tables configured (established 262144 bind 65536) Nov 1 01:15:07.048946 kernel: UDP hash table entries: 16384 (order: 7, 524288 bytes, linear) Nov 1 01:15:07.048953 kernel: UDP-Lite hash table entries: 16384 (order: 7, 524288 bytes, linear) Nov 1 01:15:07.048959 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Nov 1 
01:15:07.048964 kernel: NET: Registered PF_XDP protocol family Nov 1 01:15:07.049016 kernel: pci 0000:00:15.0: BAR 0: assigned [mem 0x95515000-0x95515fff 64bit] Nov 1 01:15:07.049065 kernel: pci 0000:00:15.1: BAR 0: assigned [mem 0x9551b000-0x9551bfff 64bit] Nov 1 01:15:07.049116 kernel: pci 0000:00:1e.0: BAR 0: assigned [mem 0x9551c000-0x9551cfff 64bit] Nov 1 01:15:07.049168 kernel: pci 0000:01:00.0: BAR 7: no space for [mem size 0x00800000 64bit pref] Nov 1 01:15:07.049248 kernel: pci 0000:01:00.0: BAR 7: failed to assign [mem size 0x00800000 64bit pref] Nov 1 01:15:07.049321 kernel: pci 0000:01:00.1: BAR 7: no space for [mem size 0x00800000 64bit pref] Nov 1 01:15:07.049374 kernel: pci 0000:01:00.1: BAR 7: failed to assign [mem size 0x00800000 64bit pref] Nov 1 01:15:07.049424 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Nov 1 01:15:07.049473 kernel: pci 0000:00:01.0: bridge window [mem 0x95100000-0x952fffff] Nov 1 01:15:07.049523 kernel: pci 0000:00:01.0: bridge window [mem 0x90000000-0x93ffffff 64bit pref] Nov 1 01:15:07.049572 kernel: pci 0000:00:1b.0: PCI bridge to [bus 02] Nov 1 01:15:07.049625 kernel: pci 0000:00:1b.4: PCI bridge to [bus 03] Nov 1 01:15:07.049674 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] Nov 1 01:15:07.049724 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff] Nov 1 01:15:07.049773 kernel: pci 0000:00:1b.5: PCI bridge to [bus 04] Nov 1 01:15:07.049824 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] Nov 1 01:15:07.049872 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff] Nov 1 01:15:07.049922 kernel: pci 0000:00:1c.0: PCI bridge to [bus 05] Nov 1 01:15:07.049974 kernel: pci 0000:06:00.0: PCI bridge to [bus 07] Nov 1 01:15:07.050025 kernel: pci 0000:06:00.0: bridge window [io 0x3000-0x3fff] Nov 1 01:15:07.050076 kernel: pci 0000:06:00.0: bridge window [mem 0x94000000-0x950fffff] Nov 1 01:15:07.050126 kernel: pci 0000:00:1c.3: PCI bridge to [bus 06-07] Nov 1 01:15:07.050177 
kernel: pci 0000:00:1c.3: bridge window [io 0x3000-0x3fff] Nov 1 01:15:07.050253 kernel: pci 0000:00:1c.3: bridge window [mem 0x94000000-0x950fffff] Nov 1 01:15:07.050319 kernel: pci_bus 0000:00: Some PCI device resources are unassigned, try booting with pci=realloc Nov 1 01:15:07.050362 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Nov 1 01:15:07.050409 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Nov 1 01:15:07.050452 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Nov 1 01:15:07.050496 kernel: pci_bus 0000:00: resource 7 [mem 0x90000000-0xdfffffff window] Nov 1 01:15:07.050538 kernel: pci_bus 0000:00: resource 8 [mem 0xfc800000-0xfe7fffff window] Nov 1 01:15:07.050588 kernel: pci_bus 0000:01: resource 1 [mem 0x95100000-0x952fffff] Nov 1 01:15:07.050634 kernel: pci_bus 0000:01: resource 2 [mem 0x90000000-0x93ffffff 64bit pref] Nov 1 01:15:07.050684 kernel: pci_bus 0000:03: resource 0 [io 0x5000-0x5fff] Nov 1 01:15:07.050732 kernel: pci_bus 0000:03: resource 1 [mem 0x95400000-0x954fffff] Nov 1 01:15:07.050785 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff] Nov 1 01:15:07.050831 kernel: pci_bus 0000:04: resource 1 [mem 0x95300000-0x953fffff] Nov 1 01:15:07.050881 kernel: pci_bus 0000:06: resource 0 [io 0x3000-0x3fff] Nov 1 01:15:07.050926 kernel: pci_bus 0000:06: resource 1 [mem 0x94000000-0x950fffff] Nov 1 01:15:07.050974 kernel: pci_bus 0000:07: resource 0 [io 0x3000-0x3fff] Nov 1 01:15:07.051019 kernel: pci_bus 0000:07: resource 1 [mem 0x94000000-0x950fffff] Nov 1 01:15:07.051029 kernel: PCI: CLS 64 bytes, default 64 Nov 1 01:15:07.051035 kernel: DMAR: No ATSR found Nov 1 01:15:07.051041 kernel: DMAR: No SATC found Nov 1 01:15:07.051047 kernel: DMAR: dmar0: Using Queued invalidation Nov 1 01:15:07.051097 kernel: pci 0000:00:00.0: Adding to iommu group 0 Nov 1 01:15:07.051148 kernel: pci 0000:00:01.0: Adding to iommu group 1 Nov 1 01:15:07.051199 kernel: pci 0000:00:08.0: Adding to iommu group 2 Nov 1 
01:15:07.051291 kernel: pci 0000:00:12.0: Adding to iommu group 3 Nov 1 01:15:07.051343 kernel: pci 0000:00:14.0: Adding to iommu group 4 Nov 1 01:15:07.051393 kernel: pci 0000:00:14.2: Adding to iommu group 4 Nov 1 01:15:07.051441 kernel: pci 0000:00:15.0: Adding to iommu group 5 Nov 1 01:15:07.051490 kernel: pci 0000:00:15.1: Adding to iommu group 5 Nov 1 01:15:07.051538 kernel: pci 0000:00:16.0: Adding to iommu group 6 Nov 1 01:15:07.051587 kernel: pci 0000:00:16.1: Adding to iommu group 6 Nov 1 01:15:07.051636 kernel: pci 0000:00:16.4: Adding to iommu group 6 Nov 1 01:15:07.051685 kernel: pci 0000:00:17.0: Adding to iommu group 7 Nov 1 01:15:07.051736 kernel: pci 0000:00:1b.0: Adding to iommu group 8 Nov 1 01:15:07.051786 kernel: pci 0000:00:1b.4: Adding to iommu group 9 Nov 1 01:15:07.051835 kernel: pci 0000:00:1b.5: Adding to iommu group 10 Nov 1 01:15:07.051884 kernel: pci 0000:00:1c.0: Adding to iommu group 11 Nov 1 01:15:07.051934 kernel: pci 0000:00:1c.3: Adding to iommu group 12 Nov 1 01:15:07.051982 kernel: pci 0000:00:1e.0: Adding to iommu group 13 Nov 1 01:15:07.052032 kernel: pci 0000:00:1f.0: Adding to iommu group 14 Nov 1 01:15:07.052081 kernel: pci 0000:00:1f.4: Adding to iommu group 14 Nov 1 01:15:07.052133 kernel: pci 0000:00:1f.5: Adding to iommu group 14 Nov 1 01:15:07.052184 kernel: pci 0000:01:00.0: Adding to iommu group 1 Nov 1 01:15:07.052283 kernel: pci 0000:01:00.1: Adding to iommu group 1 Nov 1 01:15:07.052335 kernel: pci 0000:03:00.0: Adding to iommu group 15 Nov 1 01:15:07.052388 kernel: pci 0000:04:00.0: Adding to iommu group 16 Nov 1 01:15:07.052438 kernel: pci 0000:06:00.0: Adding to iommu group 17 Nov 1 01:15:07.052491 kernel: pci 0000:07:00.0: Adding to iommu group 17 Nov 1 01:15:07.052500 kernel: DMAR: Intel(R) Virtualization Technology for Directed I/O Nov 1 01:15:07.052506 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Nov 1 01:15:07.052513 kernel: software IO TLB: mapped [mem 
0x0000000086fcd000-0x000000008afcd000] (64MB) Nov 1 01:15:07.052519 kernel: RAPL PMU: API unit is 2^-32 Joules, 3 fixed counters, 655360 ms ovfl timer Nov 1 01:15:07.052525 kernel: RAPL PMU: hw unit of domain pp0-core 2^-14 Joules Nov 1 01:15:07.052531 kernel: RAPL PMU: hw unit of domain package 2^-14 Joules Nov 1 01:15:07.052537 kernel: RAPL PMU: hw unit of domain dram 2^-14 Joules Nov 1 01:15:07.052589 kernel: platform rtc_cmos: registered platform RTC device (no PNP device found) Nov 1 01:15:07.052598 kernel: Initialise system trusted keyrings Nov 1 01:15:07.052604 kernel: workingset: timestamp_bits=39 max_order=23 bucket_order=0 Nov 1 01:15:07.052612 kernel: Key type asymmetric registered Nov 1 01:15:07.052617 kernel: Asymmetric key parser 'x509' registered Nov 1 01:15:07.052623 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Nov 1 01:15:07.052629 kernel: io scheduler mq-deadline registered Nov 1 01:15:07.052635 kernel: io scheduler kyber registered Nov 1 01:15:07.052640 kernel: io scheduler bfq registered Nov 1 01:15:07.052688 kernel: pcieport 0000:00:01.0: PME: Signaling with IRQ 121 Nov 1 01:15:07.052738 kernel: pcieport 0000:00:1b.0: PME: Signaling with IRQ 122 Nov 1 01:15:07.052790 kernel: pcieport 0000:00:1b.4: PME: Signaling with IRQ 123 Nov 1 01:15:07.052840 kernel: pcieport 0000:00:1b.5: PME: Signaling with IRQ 124 Nov 1 01:15:07.052889 kernel: pcieport 0000:00:1c.0: PME: Signaling with IRQ 125 Nov 1 01:15:07.052939 kernel: pcieport 0000:00:1c.3: PME: Signaling with IRQ 126 Nov 1 01:15:07.052994 kernel: thermal LNXTHERM:00: registered as thermal_zone0 Nov 1 01:15:07.053003 kernel: ACPI: thermal: Thermal Zone [TZ00] (28 C) Nov 1 01:15:07.053009 kernel: ERST: Error Record Serialization Table (ERST) support is initialized. 
Nov 1 01:15:07.053015 kernel: pstore: Using crash dump compression: deflate Nov 1 01:15:07.053022 kernel: pstore: Registered erst as persistent store backend Nov 1 01:15:07.053028 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Nov 1 01:15:07.053034 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Nov 1 01:15:07.053039 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Nov 1 01:15:07.053045 kernel: 00:03: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Nov 1 01:15:07.053051 kernel: hpet_acpi_add: no address or irqs in _CRS Nov 1 01:15:07.053101 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x1B, rev-id 16) Nov 1 01:15:07.053110 kernel: i8042: PNP: No PS/2 controller found. Nov 1 01:15:07.053156 kernel: rtc_cmos rtc_cmos: RTC can wake from S4 Nov 1 01:15:07.053205 kernel: rtc_cmos rtc_cmos: registered as rtc0 Nov 1 01:15:07.053297 kernel: rtc_cmos rtc_cmos: setting system clock to 2025-11-01T01:15:05 UTC (1761959705) Nov 1 01:15:07.053344 kernel: rtc_cmos rtc_cmos: alarms up to one month, y3k, 114 bytes nvram Nov 1 01:15:07.053352 kernel: intel_pstate: Intel P-state driver initializing Nov 1 01:15:07.053358 kernel: intel_pstate: Disabling energy efficiency optimization Nov 1 01:15:07.053364 kernel: intel_pstate: HWP enabled Nov 1 01:15:07.053370 kernel: vesafb: mode is 1024x768x8, linelength=1024, pages=0 Nov 1 01:15:07.053378 kernel: vesafb: scrolling: redraw Nov 1 01:15:07.053384 kernel: vesafb: Pseudocolor: size=0:8:8:8, shift=0:0:0:0 Nov 1 01:15:07.053390 kernel: vesafb: framebuffer at 0x94000000, mapped to 0x000000003c719224, using 768k, total 768k Nov 1 01:15:07.053395 kernel: Console: switching to colour frame buffer device 128x48 Nov 1 01:15:07.053401 kernel: fb0: VESA VGA frame buffer device Nov 1 01:15:07.053407 kernel: NET: Registered PF_INET6 protocol family Nov 1 01:15:07.053413 kernel: Segment Routing with IPv6 Nov 1 01:15:07.053419 kernel: In-situ OAM (IOAM) with IPv6 Nov 1 
01:15:07.053424 kernel: NET: Registered PF_PACKET protocol family Nov 1 01:15:07.053430 kernel: Key type dns_resolver registered Nov 1 01:15:07.053437 kernel: microcode: Current revision: 0x000000fc Nov 1 01:15:07.053443 kernel: microcode: Updated early from: 0x000000f4 Nov 1 01:15:07.053448 kernel: microcode: Microcode Update Driver: v2.2. Nov 1 01:15:07.053454 kernel: IPI shorthand broadcast: enabled Nov 1 01:15:07.053460 kernel: sched_clock: Marking stable (1567000800, 1368982728)->(4406633217, -1470649689) Nov 1 01:15:07.053466 kernel: registered taskstats version 1 Nov 1 01:15:07.053471 kernel: Loading compiled-in X.509 certificates Nov 1 01:15:07.053477 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.113-flatcar: cc4975b6f5d9e3149f7a95c8552b8f9120c3a1f4' Nov 1 01:15:07.053483 kernel: Key type .fscrypt registered Nov 1 01:15:07.053490 kernel: Key type fscrypt-provisioning registered Nov 1 01:15:07.053495 kernel: ima: Allocated hash algorithm: sha1 Nov 1 01:15:07.053501 kernel: ima: No architecture policies found Nov 1 01:15:07.053507 kernel: clk: Disabling unused clocks Nov 1 01:15:07.053513 kernel: Freeing unused kernel image (initmem) memory: 42884K Nov 1 01:15:07.053518 kernel: Write protecting the kernel read-only data: 36864k Nov 1 01:15:07.053524 kernel: Freeing unused kernel image (rodata/data gap) memory: 1828K Nov 1 01:15:07.053530 kernel: Run /init as init process Nov 1 01:15:07.053536 kernel: with arguments: Nov 1 01:15:07.053542 kernel: /init Nov 1 01:15:07.053548 kernel: with environment: Nov 1 01:15:07.053554 kernel: HOME=/ Nov 1 01:15:07.053559 kernel: TERM=linux Nov 1 01:15:07.053566 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) 
Nov 1 01:15:07.053573 systemd[1]: Detected architecture x86-64. Nov 1 01:15:07.053579 systemd[1]: Running in initrd. Nov 1 01:15:07.053586 systemd[1]: No hostname configured, using default hostname. Nov 1 01:15:07.053592 systemd[1]: Hostname set to . Nov 1 01:15:07.053598 systemd[1]: Initializing machine ID from random generator. Nov 1 01:15:07.053604 systemd[1]: Queued start job for default target initrd.target. Nov 1 01:15:07.053610 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 1 01:15:07.053616 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 1 01:15:07.053623 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Nov 1 01:15:07.053629 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 1 01:15:07.053636 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Nov 1 01:15:07.053642 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Nov 1 01:15:07.053648 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Nov 1 01:15:07.053655 kernel: tsc: Refined TSC clocksource calibration: 3407.998 MHz Nov 1 01:15:07.053660 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd208cfc, max_idle_ns: 440795283699 ns Nov 1 01:15:07.053666 kernel: clocksource: Switched to clocksource tsc Nov 1 01:15:07.053672 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Nov 1 01:15:07.053679 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 1 01:15:07.053685 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 1 01:15:07.053691 systemd[1]: Reached target paths.target - Path Units. 
Nov 1 01:15:07.053698 systemd[1]: Reached target slices.target - Slice Units. Nov 1 01:15:07.053704 systemd[1]: Reached target swap.target - Swaps. Nov 1 01:15:07.053710 systemd[1]: Reached target timers.target - Timer Units. Nov 1 01:15:07.053716 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Nov 1 01:15:07.053721 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 1 01:15:07.053727 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Nov 1 01:15:07.053735 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Nov 1 01:15:07.053741 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 1 01:15:07.053747 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 1 01:15:07.053753 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 1 01:15:07.053759 systemd[1]: Reached target sockets.target - Socket Units. Nov 1 01:15:07.053764 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Nov 1 01:15:07.053770 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 1 01:15:07.053776 systemd[1]: Finished network-cleanup.service - Network Cleanup. Nov 1 01:15:07.053783 systemd[1]: Starting systemd-fsck-usr.service... Nov 1 01:15:07.053790 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 1 01:15:07.053806 systemd-journald[269]: Collecting audit messages is disabled. Nov 1 01:15:07.053820 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 1 01:15:07.053828 systemd-journald[269]: Journal started Nov 1 01:15:07.053841 systemd-journald[269]: Runtime Journal (/run/log/journal/325db1ff8c14485b8a3bc3e226cc3d8f) is 8.0M, max 639.9M, 631.9M free. Nov 1 01:15:07.067521 systemd-modules-load[271]: Inserted module 'overlay' Nov 1 01:15:07.088209 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Nov 1 01:15:07.117210 systemd[1]: Started systemd-journald.service - Journal Service. Nov 1 01:15:07.117217 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Nov 1 01:15:07.188318 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Nov 1 01:15:07.188331 kernel: Bridge firewalling registered Nov 1 01:15:07.177388 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 1 01:15:07.179306 systemd-modules-load[271]: Inserted module 'br_netfilter' Nov 1 01:15:07.199511 systemd[1]: Finished systemd-fsck-usr.service. Nov 1 01:15:07.220554 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 1 01:15:07.240566 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 1 01:15:07.270488 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 1 01:15:07.279882 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 1 01:15:07.308112 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 1 01:15:07.318417 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 1 01:15:07.324023 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 1 01:15:07.324782 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 1 01:15:07.325190 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 1 01:15:07.327511 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 1 01:15:07.328375 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 1 01:15:07.329701 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Nov 1 01:15:07.332463 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 1 01:15:07.343780 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Nov 1 01:15:07.345524 systemd-resolved[303]: Positive Trust Anchors: Nov 1 01:15:07.345529 systemd-resolved[303]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 1 01:15:07.345554 systemd-resolved[303]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 1 01:15:07.347154 systemd-resolved[303]: Defaulting to hostname 'linux'. Nov 1 01:15:07.371436 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 1 01:15:07.378499 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 1 01:15:07.501579 dracut-cmdline[308]: dracut-dracut-053 Nov 1 01:15:07.510423 dracut-cmdline[308]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=ade41980c48607de3d2d18dc444731ec5388853e3a75ed2d5a13ce616b36f478 Nov 1 01:15:07.711214 kernel: SCSI subsystem initialized Nov 1 01:15:07.735210 kernel: Loading iSCSI transport class v2.0-870. 
Nov 1 01:15:07.758209 kernel: iscsi: registered transport (tcp) Nov 1 01:15:07.790611 kernel: iscsi: registered transport (qla4xxx) Nov 1 01:15:07.790628 kernel: QLogic iSCSI HBA Driver Nov 1 01:15:07.823436 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Nov 1 01:15:07.857590 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Nov 1 01:15:07.914535 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Nov 1 01:15:07.914554 kernel: device-mapper: uevent: version 1.0.3 Nov 1 01:15:07.934202 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Nov 1 01:15:07.993273 kernel: raid6: avx2x4 gen() 52363 MB/s Nov 1 01:15:08.025235 kernel: raid6: avx2x2 gen() 52999 MB/s Nov 1 01:15:08.061586 kernel: raid6: avx2x1 gen() 45147 MB/s Nov 1 01:15:08.061602 kernel: raid6: using algorithm avx2x2 gen() 52999 MB/s Nov 1 01:15:08.108661 kernel: raid6: .... xor() 31264 MB/s, rmw enabled Nov 1 01:15:08.108678 kernel: raid6: using avx2x2 recovery algorithm Nov 1 01:15:08.150224 kernel: xor: automatically using best checksumming function avx Nov 1 01:15:08.268240 kernel: Btrfs loaded, zoned=no, fsverity=no Nov 1 01:15:08.274305 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Nov 1 01:15:08.305542 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 1 01:15:08.312611 systemd-udevd[495]: Using default interface naming scheme 'v255'. Nov 1 01:15:08.316287 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 1 01:15:08.338997 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Nov 1 01:15:08.396386 dracut-pre-trigger[508]: rd.md=0: removing MD RAID activation Nov 1 01:15:08.413536 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
Nov 1 01:15:08.439470 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 1 01:15:08.527459 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 1 01:15:08.570373 kernel: pps_core: LinuxPPS API ver. 1 registered Nov 1 01:15:08.570395 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Nov 1 01:15:08.540368 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Nov 1 01:15:08.600964 kernel: PTP clock support registered Nov 1 01:15:08.600983 kernel: cryptd: max_cpu_qlen set to 1000 Nov 1 01:15:08.573494 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 1 01:15:08.690524 kernel: libata version 3.00 loaded. Nov 1 01:15:08.690553 kernel: ACPI: bus type USB registered Nov 1 01:15:08.690571 kernel: AVX2 version of gcm_enc/dec engaged. Nov 1 01:15:08.690587 kernel: usbcore: registered new interface driver usbfs Nov 1 01:15:08.690602 kernel: usbcore: registered new interface driver hub Nov 1 01:15:08.690617 kernel: usbcore: registered new device driver usb Nov 1 01:15:08.573581 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 1 01:15:08.706300 kernel: AES CTR mode by8 optimization enabled Nov 1 01:15:08.690701 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 1 01:15:09.170336 kernel: ahci 0000:00:17.0: version 3.0 Nov 1 01:15:09.170437 kernel: igb: Intel(R) Gigabit Ethernet Network Driver Nov 1 01:15:09.170447 kernel: ahci 0000:00:17.0: AHCI 0001.0301 32 slots 7 ports 6 Gbps 0x7f impl SATA mode Nov 1 01:15:09.170520 kernel: igb: Copyright (c) 2007-2014 Intel Corporation. 
Nov 1 01:15:09.170529 kernel: ahci 0000:00:17.0: flags: 64bit ncq sntf clo only pio slum part ems deso sadm sds apst Nov 1 01:15:09.170594 kernel: igb 0000:03:00.0: added PHC on eth0 Nov 1 01:15:09.170667 kernel: scsi host0: ahci Nov 1 01:15:09.170733 kernel: igb 0000:03:00.0: Intel(R) Gigabit Ethernet Network Connection Nov 1 01:15:09.170799 kernel: scsi host1: ahci Nov 1 01:15:09.170861 kernel: igb 0000:03:00.0: eth0: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:6a:ef:9e Nov 1 01:15:09.170928 kernel: scsi host2: ahci Nov 1 01:15:09.170994 kernel: igb 0000:03:00.0: eth0: PBA No: 010000-000 Nov 1 01:15:09.171059 kernel: scsi host3: ahci Nov 1 01:15:09.171121 kernel: igb 0000:03:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s) Nov 1 01:15:09.171185 kernel: scsi host4: ahci Nov 1 01:15:09.171252 kernel: igb 0000:04:00.0: added PHC on eth1 Nov 1 01:15:09.171321 kernel: scsi host5: ahci Nov 1 01:15:09.171384 kernel: igb 0000:04:00.0: Intel(R) Gigabit Ethernet Network Connection Nov 1 01:15:09.171450 kernel: scsi host6: ahci Nov 1 01:15:09.171510 kernel: igb 0000:04:00.0: eth1: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:6a:ef:9f Nov 1 01:15:09.171575 kernel: ata1: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516100 irq 127 Nov 1 01:15:09.171583 kernel: igb 0000:04:00.0: eth1: PBA No: 010000-000 Nov 1 01:15:09.171646 kernel: ata2: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516180 irq 127 Nov 1 01:15:09.171655 kernel: igb 0000:04:00.0: Using MSI-X interrupts. 
4 rx queue(s), 4 tx queue(s) Nov 1 01:15:09.171717 kernel: ata3: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516200 irq 127 Nov 1 01:15:09.171727 kernel: ata4: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516280 irq 127 Nov 1 01:15:09.171735 kernel: ata5: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516300 irq 127 Nov 1 01:15:09.171742 kernel: ata6: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516380 irq 127 Nov 1 01:15:09.171749 kernel: ata7: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516400 irq 127 Nov 1 01:15:09.171757 kernel: mlx5_core 0000:01:00.0: firmware version: 14.27.1016 Nov 1 01:15:09.171824 kernel: mlx5_core 0000:01:00.0: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Nov 1 01:15:08.724287 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 1 01:15:08.724389 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 1 01:15:09.151780 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Nov 1 01:15:09.218412 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 1 01:15:09.237513 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Nov 1 01:15:09.247710 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Nov 1 01:15:09.247733 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 1 01:15:09.247757 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 1 01:15:09.257369 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Nov 1 01:15:09.331412 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 1 01:15:09.342407 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Nov 1 01:15:09.381393 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Nov 1 01:15:09.396694 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 1 01:15:09.466259 kernel: ata4: SATA link down (SStatus 0 SControl 300) Nov 1 01:15:09.466278 kernel: ata6: SATA link down (SStatus 0 SControl 300) Nov 1 01:15:09.466291 kernel: ata2: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Nov 1 01:15:09.466301 kernel: mlx5_core 0000:01:00.0: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384) Nov 1 01:15:09.466409 kernel: ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Nov 1 01:15:09.466419 kernel: mlx5_core 0000:01:00.0: Port module event: module 0, Cable plugged Nov 1 01:15:09.466498 kernel: ata7: SATA link down (SStatus 0 SControl 300) Nov 1 01:15:09.505207 kernel: ata2.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133 Nov 1 01:15:09.523248 kernel: ata5: SATA link down (SStatus 0 SControl 300) Nov 1 01:15:09.538252 kernel: ata1.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133 Nov 1 01:15:09.555242 kernel: ata3: SATA link down (SStatus 0 SControl 300) Nov 1 01:15:09.591244 kernel: ata1.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Nov 1 01:15:09.591260 kernel: ata2.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Nov 1 01:15:09.628238 kernel: ata1.00: Features: NCQ-prio Nov 1 01:15:09.643237 kernel: ata2.00: Features: NCQ-prio Nov 1 01:15:09.658256 kernel: mlx5_core 0000:01:00.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Nov 1 01:15:09.658341 kernel: ata1.00: configured for UDMA/133 Nov 1 01:15:09.662237 kernel: mlx5_core 0000:01:00.1: firmware version: 14.27.1016 Nov 1 01:15:09.662327 kernel: scsi 0:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5 Nov 1 01:15:09.668253 kernel: ata2.00: configured for UDMA/133 Nov 1 01:15:09.668269 kernel: mlx5_core 0000:01:00.1: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Nov 1 01:15:09.751274 kernel: scsi 1:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 
PQ: 0 ANSI: 5 Nov 1 01:15:09.787225 kernel: igb 0000:03:00.0 eno1: renamed from eth0 Nov 1 01:15:09.787349 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Nov 1 01:15:09.810031 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 1 Nov 1 01:15:09.842265 kernel: xhci_hcd 0000:00:14.0: hcc params 0x200077c1 hci version 0x110 quirks 0x0000000000009810 Nov 1 01:15:09.842353 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Nov 1 01:15:09.852094 kernel: igb 0000:04:00.0 eno2: renamed from eth1 Nov 1 01:15:09.852193 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 2 Nov 1 01:15:09.897185 kernel: xhci_hcd 0000:00:14.0: Host supports USB 3.1 Enhanced SuperSpeed Nov 1 01:15:09.908207 kernel: hub 1-0:1.0: USB hub found Nov 1 01:15:09.922254 kernel: hub 1-0:1.0: 16 ports detected Nov 1 01:15:09.960721 kernel: hub 2-0:1.0: USB hub found Nov 1 01:15:09.960830 kernel: mlx5_core 0000:01:00.1: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384) Nov 1 01:15:09.960922 kernel: hub 2-0:1.0: 10 ports detected Nov 1 01:15:09.975208 kernel: mlx5_core 0000:01:00.1: Port module event: module 1, Cable plugged Nov 1 01:15:10.018359 kernel: ata2.00: Enabling discard_zeroes_data Nov 1 01:15:10.018376 kernel: ata1.00: Enabling discard_zeroes_data Nov 1 01:15:10.018385 kernel: sd 1:0:0:0: [sda] 937703088 512-byte logical blocks: (480 GB/447 GiB) Nov 1 01:15:10.023086 kernel: sd 0:0:0:0: [sdb] 937703088 512-byte logical blocks: (480 GB/447 GiB) Nov 1 01:15:10.037999 kernel: sd 1:0:0:0: [sda] 4096-byte physical blocks Nov 1 01:15:10.038074 kernel: sd 0:0:0:0: [sdb] 4096-byte physical blocks Nov 1 01:15:10.044233 kernel: sd 1:0:0:0: [sda] Write Protect is off Nov 1 01:15:10.053265 kernel: sd 0:0:0:0: [sdb] Write Protect is off Nov 1 01:15:10.053336 kernel: sd 1:0:0:0: [sda] Mode Sense: 00 3a 00 00 Nov 1 01:15:10.058073 kernel: sd 0:0:0:0: [sdb] Mode Sense: 00 3a 00 00 Nov 1 01:15:10.078092 kernel: sd 1:0:0:0: [sda] Write cache: 
enabled, read cache: enabled, doesn't support DPO or FUA Nov 1 01:15:10.078182 kernel: sd 0:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Nov 1 01:15:10.087244 kernel: sd 1:0:0:0: [sda] Preferred minimum I/O size 4096 bytes Nov 1 01:15:10.096252 kernel: sd 0:0:0:0: [sdb] Preferred minimum I/O size 4096 bytes Nov 1 01:15:10.103271 kernel: ata2.00: Enabling discard_zeroes_data Nov 1 01:15:10.182021 kernel: usb 1-14: new high-speed USB device number 2 using xhci_hcd Nov 1 01:15:10.182047 kernel: ata1.00: Enabling discard_zeroes_data Nov 1 01:15:10.203108 kernel: mlx5_core 0000:01:00.1: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Nov 1 01:15:10.212262 kernel: sd 0:0:0:0: [sdb] Attached SCSI disk Nov 1 01:15:10.275209 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Nov 1 01:15:10.295734 kernel: GPT:9289727 != 937703087 Nov 1 01:15:10.311300 kernel: GPT:Alternate GPT header not at the end of the disk. Nov 1 01:15:10.324710 kernel: hub 1-14:1.0: USB hub found Nov 1 01:15:10.324807 kernel: GPT:9289727 != 937703087 Nov 1 01:15:10.324816 kernel: GPT: Use GNU Parted to correct GPT errors. Nov 1 01:15:10.324823 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Nov 1 01:15:10.324830 kernel: hub 1-14:1.0: 4 ports detected Nov 1 01:15:10.333464 kernel: sd 1:0:0:0: [sda] Attached SCSI disk Nov 1 01:15:10.423211 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: renamed from eth0 Nov 1 01:15:10.445250 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: renamed from eth2 Nov 1 01:15:10.452432 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Micron_5300_MTFDDAK480TDT EFI-SYSTEM. 
Nov 1 01:15:10.509485 kernel: BTRFS: device fsid 5d5360dd-ce7d-46d0-bc66-772f2084023b devid 1 transid 34 /dev/sda3 scanned by (udev-worker) (558) Nov 1 01:15:10.509503 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by (udev-worker) (553) Nov 1 01:15:10.500942 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Micron_5300_MTFDDAK480TDT ROOT. Nov 1 01:15:10.523576 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Micron_5300_MTFDDAK480TDT USR-A. Nov 1 01:15:10.556389 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Micron_5300_MTFDDAK480TDT USR-A. Nov 1 01:15:10.583344 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Micron_5300_MTFDDAK480TDT OEM. Nov 1 01:15:10.621578 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Nov 1 01:15:10.681314 kernel: usb 1-14.1: new low-speed USB device number 3 using xhci_hcd Nov 1 01:15:10.681336 kernel: ata2.00: Enabling discard_zeroes_data Nov 1 01:15:10.681344 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Nov 1 01:15:10.681351 kernel: ata2.00: Enabling discard_zeroes_data Nov 1 01:15:10.681358 disk-uuid[728]: Primary Header is updated. Nov 1 01:15:10.681358 disk-uuid[728]: Secondary Entries is updated. Nov 1 01:15:10.681358 disk-uuid[728]: Secondary Header is updated. 
Nov 1 01:15:10.733310 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Nov 1 01:15:10.733323 kernel: ata2.00: Enabling discard_zeroes_data Nov 1 01:15:10.733334 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Nov 1 01:15:10.763273 kernel: hid: raw HID events driver (C) Jiri Kosina Nov 1 01:15:10.784895 kernel: usbcore: registered new interface driver usbhid Nov 1 01:15:10.784924 kernel: usbhid: USB HID core driver Nov 1 01:15:10.827274 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.0/0003:0557:2419.0001/input/input0 Nov 1 01:15:10.922149 kernel: hid-generic 0003:0557:2419.0001: input,hidraw0: USB HID v1.00 Keyboard [HID 0557:2419] on usb-0000:00:14.0-14.1/input0 Nov 1 01:15:10.922280 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.1/0003:0557:2419.0002/input/input1 Nov 1 01:15:10.954591 kernel: hid-generic 0003:0557:2419.0002: input,hidraw1: USB HID v1.00 Mouse [HID 0557:2419] on usb-0000:00:14.0-14.1/input1 Nov 1 01:15:11.722372 kernel: ata2.00: Enabling discard_zeroes_data Nov 1 01:15:11.741775 disk-uuid[729]: The operation has completed successfully. Nov 1 01:15:11.751275 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Nov 1 01:15:11.776540 systemd[1]: disk-uuid.service: Deactivated successfully. Nov 1 01:15:11.776590 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Nov 1 01:15:11.812482 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Nov 1 01:15:11.850253 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Nov 1 01:15:11.850340 sh[746]: Success Nov 1 01:15:11.893822 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Nov 1 01:15:11.922488 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Nov 1 01:15:11.930552 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Nov 1 01:15:11.987097 kernel: BTRFS info (device dm-0): first mount of filesystem 5d5360dd-ce7d-46d0-bc66-772f2084023b Nov 1 01:15:11.987121 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Nov 1 01:15:12.008453 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Nov 1 01:15:12.027345 kernel: BTRFS info (device dm-0): disabling log replay at mount time Nov 1 01:15:12.045086 kernel: BTRFS info (device dm-0): using free space tree Nov 1 01:15:12.083280 kernel: BTRFS info (device dm-0): enabling ssd optimizations Nov 1 01:15:12.085905 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Nov 1 01:15:12.094660 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Nov 1 01:15:12.101477 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Nov 1 01:15:12.127407 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Nov 1 01:15:12.164252 kernel: BTRFS info (device sda6): first mount of filesystem b6c6a5a1-6657-40cc-8fa9-bb3050afe890 Nov 1 01:15:12.164296 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Nov 1 01:15:12.164319 kernel: BTRFS info (device sda6): using free space tree Nov 1 01:15:12.206492 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 1 01:15:12.274471 kernel: BTRFS info (device sda6): enabling ssd optimizations Nov 1 01:15:12.274488 kernel: BTRFS info (device sda6): auto enabling async discard Nov 1 01:15:12.274497 kernel: BTRFS info (device sda6): last unmount of filesystem b6c6a5a1-6657-40cc-8fa9-bb3050afe890 Nov 1 01:15:12.261588 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 1 01:15:12.284544 systemd[1]: Finished ignition-setup.service - Ignition (setup). 
Nov 1 01:15:12.295082 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Nov 1 01:15:12.304429 systemd-networkd[926]: lo: Link UP Nov 1 01:15:12.304431 systemd-networkd[926]: lo: Gained carrier Nov 1 01:15:12.306854 systemd-networkd[926]: Enumeration completed Nov 1 01:15:12.307571 systemd-networkd[926]: eno1: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 1 01:15:12.310456 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 1 01:15:12.335056 systemd-networkd[926]: eno2: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 1 01:15:12.342522 systemd[1]: Reached target network.target - Network. Nov 1 01:15:12.363662 systemd-networkd[926]: enp1s0f0np0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 1 01:15:12.404316 ignition[930]: Ignition 2.19.0 Nov 1 01:15:12.406492 unknown[930]: fetched base config from "system" Nov 1 01:15:12.404320 ignition[930]: Stage: fetch-offline Nov 1 01:15:12.406496 unknown[930]: fetched user config from "system" Nov 1 01:15:12.404339 ignition[930]: no configs at "/usr/lib/ignition/base.d" Nov 1 01:15:12.407563 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Nov 1 01:15:12.404345 ignition[930]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Nov 1 01:15:12.432601 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Nov 1 01:15:12.404400 ignition[930]: parsed url from cmdline: "" Nov 1 01:15:12.438451 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Nov 1 01:15:12.547336 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: Link up Nov 1 01:15:12.404402 ignition[930]: no config URL provided Nov 1 01:15:12.540257 systemd-networkd[926]: enp1s0f1np1: Configuring with /usr/lib/systemd/network/zz-default.network. 
Nov 1 01:15:12.404405 ignition[930]: reading system config file "/usr/lib/ignition/user.ign" Nov 1 01:15:12.404429 ignition[930]: parsing config with SHA512: 3136ae21beb052a0c7498d69f2b33503347654af4560866444cb508851c0528458e05c39363eb2ca4a8129c9999a460cd048b28a79666cd9aac2d58a495d9769 Nov 1 01:15:12.406713 ignition[930]: fetch-offline: fetch-offline passed Nov 1 01:15:12.406716 ignition[930]: POST message to Packet Timeline Nov 1 01:15:12.406718 ignition[930]: POST Status error: resource requires networking Nov 1 01:15:12.406752 ignition[930]: Ignition finished successfully Nov 1 01:15:12.454794 ignition[944]: Ignition 2.19.0 Nov 1 01:15:12.454801 ignition[944]: Stage: kargs Nov 1 01:15:12.454962 ignition[944]: no configs at "/usr/lib/ignition/base.d" Nov 1 01:15:12.454973 ignition[944]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Nov 1 01:15:12.455788 ignition[944]: kargs: kargs passed Nov 1 01:15:12.455792 ignition[944]: POST message to Packet Timeline Nov 1 01:15:12.455806 ignition[944]: GET https://metadata.packet.net/metadata: attempt #1 Nov 1 01:15:12.456397 ignition[944]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:35141->[::1]:53: read: connection refused Nov 1 01:15:12.656585 ignition[944]: GET https://metadata.packet.net/metadata: attempt #2 Nov 1 01:15:12.657003 ignition[944]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:52280->[::1]:53: read: connection refused Nov 1 01:15:12.720364 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: Link up Nov 1 01:15:12.717897 systemd-networkd[926]: eno1: Link UP Nov 1 01:15:12.718047 systemd-networkd[926]: eno2: Link UP Nov 1 01:15:12.718187 systemd-networkd[926]: enp1s0f0np0: Link UP Nov 1 01:15:12.718371 systemd-networkd[926]: enp1s0f0np0: Gained carrier Nov 1 01:15:12.733410 systemd-networkd[926]: enp1s0f1np1: Link UP Nov 1 01:15:12.756382 
systemd-networkd[926]: enp1s0f0np0: DHCPv4 address 139.178.94.199/31, gateway 139.178.94.198 acquired from 145.40.83.140 Nov 1 01:15:13.058223 ignition[944]: GET https://metadata.packet.net/metadata: attempt #3 Nov 1 01:15:13.059484 ignition[944]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:46159->[::1]:53: read: connection refused Nov 1 01:15:13.586011 systemd-networkd[926]: enp1s0f1np1: Gained carrier Nov 1 01:15:13.777830 systemd-networkd[926]: enp1s0f0np0: Gained IPv6LL Nov 1 01:15:13.859874 ignition[944]: GET https://metadata.packet.net/metadata: attempt #4 Nov 1 01:15:13.860986 ignition[944]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:59559->[::1]:53: read: connection refused Nov 1 01:15:14.929815 systemd-networkd[926]: enp1s0f1np1: Gained IPv6LL Nov 1 01:15:15.462361 ignition[944]: GET https://metadata.packet.net/metadata: attempt #5 Nov 1 01:15:15.463614 ignition[944]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:57609->[::1]:53: read: connection refused Nov 1 01:15:18.667243 ignition[944]: GET https://metadata.packet.net/metadata: attempt #6 Nov 1 01:15:19.760909 ignition[944]: GET result: OK Nov 1 01:15:20.616560 ignition[944]: Ignition finished successfully Nov 1 01:15:20.621585 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Nov 1 01:15:20.650474 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Nov 1 01:15:20.656747 ignition[964]: Ignition 2.19.0 Nov 1 01:15:20.656751 ignition[964]: Stage: disks Nov 1 01:15:20.656867 ignition[964]: no configs at "/usr/lib/ignition/base.d" Nov 1 01:15:20.656874 ignition[964]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Nov 1 01:15:20.657435 ignition[964]: disks: disks passed Nov 1 01:15:20.657438 ignition[964]: POST message to Packet Timeline Nov 1 01:15:20.657448 ignition[964]: GET https://metadata.packet.net/metadata: attempt #1 Nov 1 01:15:21.764948 ignition[964]: GET result: OK Nov 1 01:15:22.627632 ignition[964]: Ignition finished successfully Nov 1 01:15:22.630913 systemd[1]: Finished ignition-disks.service - Ignition (disks). Nov 1 01:15:22.646613 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Nov 1 01:15:22.664509 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Nov 1 01:15:22.685532 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 1 01:15:22.706613 systemd[1]: Reached target sysinit.target - System Initialization. Nov 1 01:15:22.727612 systemd[1]: Reached target basic.target - Basic System. Nov 1 01:15:22.762494 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Nov 1 01:15:22.800951 systemd-fsck[981]: ROOT: clean, 14/553520 files, 52654/553472 blocks Nov 1 01:15:22.810683 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Nov 1 01:15:22.833475 systemd[1]: Mounting sysroot.mount - /sysroot... Nov 1 01:15:22.936251 kernel: EXT4-fs (sda9): mounted filesystem cb9d31b8-5e00-461c-b45e-c304d1f8091c r/w with ordered data mode. Quota mode: none. Nov 1 01:15:22.936247 systemd[1]: Mounted sysroot.mount - /sysroot. Nov 1 01:15:22.945650 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Nov 1 01:15:22.969263 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Nov 1 01:15:23.015272 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by mount (990) Nov 1 01:15:23.015291 kernel: BTRFS info (device sda6): first mount of filesystem b6c6a5a1-6657-40cc-8fa9-bb3050afe890 Nov 1 01:15:23.035912 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Nov 1 01:15:23.054388 kernel: BTRFS info (device sda6): using free space tree Nov 1 01:15:23.076308 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Nov 1 01:15:23.125448 kernel: BTRFS info (device sda6): enabling ssd optimizations Nov 1 01:15:23.125460 kernel: BTRFS info (device sda6): auto enabling async discard Nov 1 01:15:23.116026 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Nov 1 01:15:23.143308 systemd[1]: Starting flatcar-static-network.service - Flatcar Static Network Agent... Nov 1 01:15:23.159302 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Nov 1 01:15:23.159320 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Nov 1 01:15:23.216438 coreos-metadata[1007]: Nov 01 01:15:23.208 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Nov 1 01:15:23.180246 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Nov 1 01:15:23.239495 coreos-metadata[1008]: Nov 01 01:15:23.208 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Nov 1 01:15:23.205436 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Nov 1 01:15:23.239441 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Nov 1 01:15:23.285388 initrd-setup-root[1022]: cut: /sysroot/etc/passwd: No such file or directory Nov 1 01:15:23.296270 initrd-setup-root[1029]: cut: /sysroot/etc/group: No such file or directory Nov 1 01:15:23.307317 initrd-setup-root[1036]: cut: /sysroot/etc/shadow: No such file or directory Nov 1 01:15:23.317279 initrd-setup-root[1043]: cut: /sysroot/etc/gshadow: No such file or directory Nov 1 01:15:23.335715 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Nov 1 01:15:23.358418 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Nov 1 01:15:23.394418 kernel: BTRFS info (device sda6): last unmount of filesystem b6c6a5a1-6657-40cc-8fa9-bb3050afe890 Nov 1 01:15:23.376923 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Nov 1 01:15:23.402984 systemd[1]: sysroot-oem.mount: Deactivated successfully. Nov 1 01:15:23.418383 ignition[1111]: INFO : Ignition 2.19.0 Nov 1 01:15:23.418383 ignition[1111]: INFO : Stage: mount Nov 1 01:15:23.418383 ignition[1111]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 1 01:15:23.418383 ignition[1111]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Nov 1 01:15:23.418383 ignition[1111]: INFO : mount: mount passed Nov 1 01:15:23.418383 ignition[1111]: INFO : POST message to Packet Timeline Nov 1 01:15:23.418383 ignition[1111]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Nov 1 01:15:23.421050 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Nov 1 01:15:24.278733 coreos-metadata[1007]: Nov 01 01:15:24.278 INFO Fetch successful Nov 1 01:15:24.290188 coreos-metadata[1008]: Nov 01 01:15:24.290 INFO Fetch successful Nov 1 01:15:24.320391 coreos-metadata[1007]: Nov 01 01:15:24.320 INFO wrote hostname ci-4081.3.6-n-61efafd0e9 to /sysroot/etc/hostname Nov 1 01:15:24.321686 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. 
Nov 1 01:15:24.346572 systemd[1]: flatcar-static-network.service: Deactivated successfully. Nov 1 01:15:24.346616 systemd[1]: Finished flatcar-static-network.service - Flatcar Static Network Agent. Nov 1 01:15:24.411256 ignition[1111]: INFO : GET result: OK Nov 1 01:15:24.823732 ignition[1111]: INFO : Ignition finished successfully Nov 1 01:15:24.825638 systemd[1]: Finished ignition-mount.service - Ignition (mount). Nov 1 01:15:24.854509 systemd[1]: Starting ignition-files.service - Ignition (files)... Nov 1 01:15:24.866554 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 1 01:15:24.924365 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (1137) Nov 1 01:15:24.954638 kernel: BTRFS info (device sda6): first mount of filesystem b6c6a5a1-6657-40cc-8fa9-bb3050afe890 Nov 1 01:15:24.954655 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Nov 1 01:15:24.972959 kernel: BTRFS info (device sda6): using free space tree Nov 1 01:15:25.012081 kernel: BTRFS info (device sda6): enabling ssd optimizations Nov 1 01:15:25.012097 kernel: BTRFS info (device sda6): auto enabling async discard Nov 1 01:15:25.026408 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Nov 1 01:15:25.051990 ignition[1154]: INFO : Ignition 2.19.0 Nov 1 01:15:25.051990 ignition[1154]: INFO : Stage: files Nov 1 01:15:25.067436 ignition[1154]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 1 01:15:25.067436 ignition[1154]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Nov 1 01:15:25.067436 ignition[1154]: DEBUG : files: compiled without relabeling support, skipping Nov 1 01:15:25.067436 ignition[1154]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Nov 1 01:15:25.067436 ignition[1154]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Nov 1 01:15:25.067436 ignition[1154]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Nov 1 01:15:25.067436 ignition[1154]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Nov 1 01:15:25.067436 ignition[1154]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Nov 1 01:15:25.067436 ignition[1154]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Nov 1 01:15:25.067436 ignition[1154]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Nov 1 01:15:25.057016 unknown[1154]: wrote ssh authorized keys file for user: core Nov 1 01:15:25.202520 ignition[1154]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Nov 1 01:15:25.202520 ignition[1154]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Nov 1 01:15:25.202520 ignition[1154]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Nov 1 01:15:25.202520 ignition[1154]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Nov 1 
01:15:25.202520 ignition[1154]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Nov 1 01:15:25.202520 ignition[1154]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Nov 1 01:15:25.202520 ignition[1154]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 1 01:15:25.202520 ignition[1154]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 1 01:15:25.202520 ignition[1154]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 1 01:15:25.202520 ignition[1154]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 1 01:15:25.202520 ignition[1154]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Nov 1 01:15:25.202520 ignition[1154]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Nov 1 01:15:25.202520 ignition[1154]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Nov 1 01:15:25.202520 ignition[1154]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Nov 1 01:15:25.202520 ignition[1154]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Nov 1 01:15:25.451559 ignition[1154]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET 
https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Nov 1 01:15:25.702836 ignition[1154]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Nov 1 01:15:26.617848 ignition[1154]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Nov 1 01:15:26.617848 ignition[1154]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Nov 1 01:15:26.647474 ignition[1154]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 1 01:15:26.647474 ignition[1154]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 1 01:15:26.647474 ignition[1154]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Nov 1 01:15:26.647474 ignition[1154]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Nov 1 01:15:26.647474 ignition[1154]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Nov 1 01:15:26.647474 ignition[1154]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Nov 1 01:15:26.647474 ignition[1154]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Nov 1 01:15:26.647474 ignition[1154]: INFO : files: files passed Nov 1 01:15:26.647474 ignition[1154]: INFO : POST message to Packet Timeline Nov 1 01:15:26.647474 ignition[1154]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Nov 1 01:15:27.472060 ignition[1154]: INFO : GET result: OK Nov 1 01:15:28.583693 ignition[1154]: INFO : Ignition finished successfully Nov 1 01:15:28.587845 systemd[1]: Finished ignition-files.service - Ignition (files). 
Nov 1 01:15:28.613469 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Nov 1 01:15:28.623829 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Nov 1 01:15:28.633510 systemd[1]: ignition-quench.service: Deactivated successfully. Nov 1 01:15:28.633566 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Nov 1 01:15:28.694769 initrd-setup-root-after-ignition[1194]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 1 01:15:28.694769 initrd-setup-root-after-ignition[1194]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Nov 1 01:15:28.734529 initrd-setup-root-after-ignition[1198]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 1 01:15:28.699583 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 1 01:15:28.721410 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Nov 1 01:15:28.763470 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Nov 1 01:15:28.808086 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Nov 1 01:15:28.808384 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Nov 1 01:15:28.828742 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Nov 1 01:15:28.848568 systemd[1]: Reached target initrd.target - Initrd Default Target. Nov 1 01:15:28.868720 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Nov 1 01:15:28.886423 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Nov 1 01:15:28.938585 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 1 01:15:28.968446 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... 
Nov 1 01:15:28.973760 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Nov 1 01:15:28.999493 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 1 01:15:29.020779 systemd[1]: Stopped target timers.target - Timer Units. Nov 1 01:15:29.038892 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Nov 1 01:15:29.039332 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 1 01:15:29.066040 systemd[1]: Stopped target initrd.target - Initrd Default Target. Nov 1 01:15:29.087902 systemd[1]: Stopped target basic.target - Basic System. Nov 1 01:15:29.105894 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Nov 1 01:15:29.123904 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Nov 1 01:15:29.144879 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Nov 1 01:15:29.165892 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Nov 1 01:15:29.185893 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Nov 1 01:15:29.206931 systemd[1]: Stopped target sysinit.target - System Initialization. Nov 1 01:15:29.227919 systemd[1]: Stopped target local-fs.target - Local File Systems. Nov 1 01:15:29.247936 systemd[1]: Stopped target swap.target - Swaps. Nov 1 01:15:29.267831 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Nov 1 01:15:29.268265 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Nov 1 01:15:29.303658 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Nov 1 01:15:29.313958 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 1 01:15:29.334800 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Nov 1 01:15:29.335257 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Nov 1 01:15:29.357801 systemd[1]: dracut-initqueue.service: Deactivated successfully. Nov 1 01:15:29.358223 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Nov 1 01:15:29.388914 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Nov 1 01:15:29.389394 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Nov 1 01:15:29.409145 systemd[1]: Stopped target paths.target - Path Units. Nov 1 01:15:29.427771 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Nov 1 01:15:29.432505 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 1 01:15:29.449936 systemd[1]: Stopped target slices.target - Slice Units. Nov 1 01:15:29.468914 systemd[1]: Stopped target sockets.target - Socket Units. Nov 1 01:15:29.487888 systemd[1]: iscsid.socket: Deactivated successfully. Nov 1 01:15:29.488223 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Nov 1 01:15:29.499106 systemd[1]: iscsiuio.socket: Deactivated successfully. Nov 1 01:15:29.499449 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 1 01:15:29.530012 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Nov 1 01:15:29.646336 ignition[1218]: INFO : Ignition 2.19.0 Nov 1 01:15:29.646336 ignition[1218]: INFO : Stage: umount Nov 1 01:15:29.646336 ignition[1218]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 1 01:15:29.646336 ignition[1218]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Nov 1 01:15:29.646336 ignition[1218]: INFO : umount: umount passed Nov 1 01:15:29.646336 ignition[1218]: INFO : POST message to Packet Timeline Nov 1 01:15:29.646336 ignition[1218]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Nov 1 01:15:29.530448 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. 
Nov 1 01:15:29.550007 systemd[1]: ignition-files.service: Deactivated successfully. Nov 1 01:15:29.550416 systemd[1]: Stopped ignition-files.service - Ignition (files). Nov 1 01:15:29.568028 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Nov 1 01:15:29.568450 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Nov 1 01:15:29.603500 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Nov 1 01:15:29.619021 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Nov 1 01:15:29.628563 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Nov 1 01:15:29.628691 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Nov 1 01:15:29.657645 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Nov 1 01:15:29.657849 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Nov 1 01:15:29.700657 systemd[1]: sysroot-boot.mount: Deactivated successfully. Nov 1 01:15:29.702682 systemd[1]: sysroot-boot.service: Deactivated successfully. Nov 1 01:15:29.702951 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Nov 1 01:15:29.718443 systemd[1]: initrd-cleanup.service: Deactivated successfully. Nov 1 01:15:29.718709 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Nov 1 01:15:30.828077 ignition[1218]: INFO : GET result: OK Nov 1 01:15:31.685708 ignition[1218]: INFO : Ignition finished successfully Nov 1 01:15:31.688787 systemd[1]: ignition-mount.service: Deactivated successfully. Nov 1 01:15:31.689092 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Nov 1 01:15:31.706642 systemd[1]: Stopped target network.target - Network. Nov 1 01:15:31.721448 systemd[1]: ignition-disks.service: Deactivated successfully. Nov 1 01:15:31.721637 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Nov 1 01:15:31.740589 systemd[1]: ignition-kargs.service: Deactivated successfully. 
Nov 1 01:15:31.740751 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Nov 1 01:15:31.759741 systemd[1]: ignition-setup.service: Deactivated successfully. Nov 1 01:15:31.759902 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Nov 1 01:15:31.778721 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Nov 1 01:15:31.778887 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Nov 1 01:15:31.797718 systemd[1]: initrd-setup-root.service: Deactivated successfully. Nov 1 01:15:31.797884 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Nov 1 01:15:31.816928 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Nov 1 01:15:31.826368 systemd-networkd[926]: enp1s0f1np1: DHCPv6 lease lost Nov 1 01:15:31.834682 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Nov 1 01:15:31.835391 systemd-networkd[926]: enp1s0f0np0: DHCPv6 lease lost Nov 1 01:15:31.853301 systemd[1]: systemd-resolved.service: Deactivated successfully. Nov 1 01:15:31.853584 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Nov 1 01:15:31.872378 systemd[1]: systemd-networkd.service: Deactivated successfully. Nov 1 01:15:31.872716 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Nov 1 01:15:31.892902 systemd[1]: systemd-networkd.socket: Deactivated successfully. Nov 1 01:15:31.893025 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Nov 1 01:15:31.925531 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Nov 1 01:15:31.941409 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Nov 1 01:15:31.941657 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 1 01:15:31.962710 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 1 01:15:31.962881 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. 
Nov 1 01:15:31.980708 systemd[1]: systemd-modules-load.service: Deactivated successfully. Nov 1 01:15:31.980876 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Nov 1 01:15:32.001708 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Nov 1 01:15:32.001874 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 1 01:15:32.020949 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 1 01:15:32.040648 systemd[1]: systemd-udevd.service: Deactivated successfully. Nov 1 01:15:32.041132 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 1 01:15:32.072273 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Nov 1 01:15:32.072420 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Nov 1 01:15:32.078752 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Nov 1 01:15:32.078859 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Nov 1 01:15:32.106475 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Nov 1 01:15:32.106622 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Nov 1 01:15:32.136893 systemd[1]: dracut-cmdline.service: Deactivated successfully. Nov 1 01:15:32.137068 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Nov 1 01:15:32.177362 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 1 01:15:32.177547 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 1 01:15:32.218315 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Nov 1 01:15:32.218358 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Nov 1 01:15:32.500414 systemd-journald[269]: Received SIGTERM from PID 1 (systemd). 
Nov 1 01:15:32.218383 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 1 01:15:32.247368 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Nov 1 01:15:32.247403 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 1 01:15:32.279410 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Nov 1 01:15:32.279489 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Nov 1 01:15:32.298487 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 1 01:15:32.298614 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 1 01:15:32.321409 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Nov 1 01:15:32.321635 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Nov 1 01:15:32.342971 systemd[1]: network-cleanup.service: Deactivated successfully. Nov 1 01:15:32.343258 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Nov 1 01:15:32.362712 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Nov 1 01:15:32.391587 systemd[1]: Starting initrd-switch-root.service - Switch Root... Nov 1 01:15:32.431065 systemd[1]: Switching root. 
Nov 1 01:15:32.611384 systemd-journald[269]: Journal stopped Nov 1 01:15:07.039551 kernel: Linux version 6.6.113-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Oct 31 22:41:55 -00 2025 Nov 1 01:15:07.039565 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=ade41980c48607de3d2d18dc444731ec5388853e3a75ed2d5a13ce616b36f478 Nov 1 01:15:07.039572 kernel: BIOS-provided physical RAM map: Nov 1 01:15:07.039577 kernel: BIOS-e820: [mem 0x0000000000000000-0x00000000000997ff] usable Nov 1 01:15:07.039580 kernel: BIOS-e820: [mem 0x0000000000099800-0x000000000009ffff] reserved Nov 1 01:15:07.039584 kernel: BIOS-e820: [mem 0x00000000000e0000-0x00000000000fffff] reserved Nov 1 01:15:07.039589 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003fffffff] usable Nov 1 01:15:07.039593 kernel: BIOS-e820: [mem 0x0000000040000000-0x00000000403fffff] reserved Nov 1 01:15:07.039598 kernel: BIOS-e820: [mem 0x0000000040400000-0x0000000081b25fff] usable Nov 1 01:15:07.039602 kernel: BIOS-e820: [mem 0x0000000081b26000-0x0000000081b26fff] ACPI NVS Nov 1 01:15:07.039606 kernel: BIOS-e820: [mem 0x0000000081b27000-0x0000000081b27fff] reserved Nov 1 01:15:07.039611 kernel: BIOS-e820: [mem 0x0000000081b28000-0x000000008afccfff] usable Nov 1 01:15:07.039615 kernel: BIOS-e820: [mem 0x000000008afcd000-0x000000008c0b1fff] reserved Nov 1 01:15:07.039620 kernel: BIOS-e820: [mem 0x000000008c0b2000-0x000000008c23afff] usable Nov 1 01:15:07.039625 kernel: BIOS-e820: [mem 0x000000008c23b000-0x000000008c66cfff] ACPI NVS Nov 1 01:15:07.039630 kernel: BIOS-e820: [mem 0x000000008c66d000-0x000000008eefefff] 
reserved Nov 1 01:15:07.039635 kernel: BIOS-e820: [mem 0x000000008eeff000-0x000000008eefffff] usable Nov 1 01:15:07.039640 kernel: BIOS-e820: [mem 0x000000008ef00000-0x000000008fffffff] reserved Nov 1 01:15:07.039645 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved Nov 1 01:15:07.039649 kernel: BIOS-e820: [mem 0x00000000fe000000-0x00000000fe010fff] reserved Nov 1 01:15:07.039654 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec00fff] reserved Nov 1 01:15:07.039658 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved Nov 1 01:15:07.039663 kernel: BIOS-e820: [mem 0x00000000ff000000-0x00000000ffffffff] reserved Nov 1 01:15:07.039668 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000086effffff] usable Nov 1 01:15:07.039672 kernel: NX (Execute Disable) protection: active Nov 1 01:15:07.039677 kernel: APIC: Static calls initialized Nov 1 01:15:07.039681 kernel: SMBIOS 3.2.1 present. Nov 1 01:15:07.039686 kernel: DMI: Supermicro SYS-5019C-MR-PH004/X11SCM-F, BIOS 1.9 09/16/2022 Nov 1 01:15:07.039692 kernel: tsc: Detected 3400.000 MHz processor Nov 1 01:15:07.039697 kernel: tsc: Detected 3399.906 MHz TSC Nov 1 01:15:07.039701 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Nov 1 01:15:07.039706 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Nov 1 01:15:07.039711 kernel: last_pfn = 0x86f000 max_arch_pfn = 0x400000000 Nov 1 01:15:07.039716 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 23), built from 10 variable MTRRs Nov 1 01:15:07.039721 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Nov 1 01:15:07.039726 kernel: last_pfn = 0x8ef00 max_arch_pfn = 0x400000000 Nov 1 01:15:07.039730 kernel: Using GB pages for direct mapping Nov 1 01:15:07.039736 kernel: ACPI: Early table checksum verification disabled Nov 1 01:15:07.039741 kernel: ACPI: RSDP 0x00000000000F05B0 000024 (v02 SUPERM) Nov 1 01:15:07.039746 kernel: ACPI: XSDT 0x000000008C54E0C8 00010C (v01 SUPERM SUPERM 01072009 
AMI 00010013) Nov 1 01:15:07.039753 kernel: ACPI: FACP 0x000000008C58A670 000114 (v06 01072009 AMI 00010013) Nov 1 01:15:07.039758 kernel: ACPI: DSDT 0x000000008C54E268 03C404 (v02 SUPERM SMCI--MB 01072009 INTL 20160527) Nov 1 01:15:07.039763 kernel: ACPI: FACS 0x000000008C66CF80 000040 Nov 1 01:15:07.039768 kernel: ACPI: APIC 0x000000008C58A788 00012C (v04 01072009 AMI 00010013) Nov 1 01:15:07.039774 kernel: ACPI: FPDT 0x000000008C58A8B8 000044 (v01 01072009 AMI 00010013) Nov 1 01:15:07.039779 kernel: ACPI: FIDT 0x000000008C58A900 00009C (v01 SUPERM SMCI--MB 01072009 AMI 00010013) Nov 1 01:15:07.039784 kernel: ACPI: MCFG 0x000000008C58A9A0 00003C (v01 SUPERM SMCI--MB 01072009 MSFT 00000097) Nov 1 01:15:07.039789 kernel: ACPI: SPMI 0x000000008C58A9E0 000041 (v05 SUPERM SMCI--MB 00000000 AMI. 00000000) Nov 1 01:15:07.039794 kernel: ACPI: SSDT 0x000000008C58AA28 001B1C (v02 CpuRef CpuSsdt 00003000 INTL 20160527) Nov 1 01:15:07.039799 kernel: ACPI: SSDT 0x000000008C58C548 0031C6 (v02 SaSsdt SaSsdt 00003000 INTL 20160527) Nov 1 01:15:07.039804 kernel: ACPI: SSDT 0x000000008C58F710 00232B (v02 PegSsd PegSsdt 00001000 INTL 20160527) Nov 1 01:15:07.039810 kernel: ACPI: HPET 0x000000008C591A40 000038 (v01 SUPERM SMCI--MB 00000002 01000013) Nov 1 01:15:07.039815 kernel: ACPI: SSDT 0x000000008C591A78 000FAE (v02 SUPERM Ther_Rvp 00001000 INTL 20160527) Nov 1 01:15:07.039820 kernel: ACPI: SSDT 0x000000008C592A28 0008F4 (v02 INTEL xh_mossb 00000000 INTL 20160527) Nov 1 01:15:07.039826 kernel: ACPI: UEFI 0x000000008C593320 000042 (v01 SUPERM SMCI--MB 00000002 01000013) Nov 1 01:15:07.039831 kernel: ACPI: LPIT 0x000000008C593368 000094 (v01 SUPERM SMCI--MB 00000002 01000013) Nov 1 01:15:07.039836 kernel: ACPI: SSDT 0x000000008C593400 0027DE (v02 SUPERM PtidDevc 00001000 INTL 20160527) Nov 1 01:15:07.039841 kernel: ACPI: SSDT 0x000000008C595BE0 0014E2 (v02 SUPERM TbtTypeC 00000000 INTL 20160527) Nov 1 01:15:07.039846 kernel: ACPI: DBGP 0x000000008C5970C8 000034 (v01 SUPERM 
SMCI--MB 00000002 01000013) Nov 1 01:15:07.039851 kernel: ACPI: DBG2 0x000000008C597100 000054 (v00 SUPERM SMCI--MB 00000002 01000013) Nov 1 01:15:07.039857 kernel: ACPI: SSDT 0x000000008C597158 001B67 (v02 SUPERM UsbCTabl 00001000 INTL 20160527) Nov 1 01:15:07.039862 kernel: ACPI: DMAR 0x000000008C598CC0 000070 (v01 INTEL EDK2 00000002 01000013) Nov 1 01:15:07.039867 kernel: ACPI: SSDT 0x000000008C598D30 000144 (v02 Intel ADebTabl 00001000 INTL 20160527) Nov 1 01:15:07.039872 kernel: ACPI: TPM2 0x000000008C598E78 000034 (v04 SUPERM SMCI--MB 00000001 AMI 00000000) Nov 1 01:15:07.039877 kernel: ACPI: SSDT 0x000000008C598EB0 000D8F (v02 INTEL SpsNm 00000002 INTL 20160527) Nov 1 01:15:07.039882 kernel: ACPI: WSMT 0x000000008C599C40 000028 (v01 SUPERM 01072009 AMI 00010013) Nov 1 01:15:07.039887 kernel: ACPI: EINJ 0x000000008C599C68 000130 (v01 AMI AMI.EINJ 00000000 AMI. 00000000) Nov 1 01:15:07.039893 kernel: ACPI: ERST 0x000000008C599D98 000230 (v01 AMIER AMI.ERST 00000000 AMI. 00000000) Nov 1 01:15:07.039899 kernel: ACPI: BERT 0x000000008C599FC8 000030 (v01 AMI AMI.BERT 00000000 AMI. 00000000) Nov 1 01:15:07.039904 kernel: ACPI: HEST 0x000000008C599FF8 00027C (v01 AMI AMI.HEST 00000000 AMI. 
00000000) Nov 1 01:15:07.039909 kernel: ACPI: SSDT 0x000000008C59A278 000162 (v01 SUPERM SMCCDN 00000000 INTL 20181221) Nov 1 01:15:07.039914 kernel: ACPI: Reserving FACP table memory at [mem 0x8c58a670-0x8c58a783] Nov 1 01:15:07.039919 kernel: ACPI: Reserving DSDT table memory at [mem 0x8c54e268-0x8c58a66b] Nov 1 01:15:07.039924 kernel: ACPI: Reserving FACS table memory at [mem 0x8c66cf80-0x8c66cfbf] Nov 1 01:15:07.039929 kernel: ACPI: Reserving APIC table memory at [mem 0x8c58a788-0x8c58a8b3] Nov 1 01:15:07.039934 kernel: ACPI: Reserving FPDT table memory at [mem 0x8c58a8b8-0x8c58a8fb] Nov 1 01:15:07.039939 kernel: ACPI: Reserving FIDT table memory at [mem 0x8c58a900-0x8c58a99b] Nov 1 01:15:07.039946 kernel: ACPI: Reserving MCFG table memory at [mem 0x8c58a9a0-0x8c58a9db] Nov 1 01:15:07.039951 kernel: ACPI: Reserving SPMI table memory at [mem 0x8c58a9e0-0x8c58aa20] Nov 1 01:15:07.039956 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58aa28-0x8c58c543] Nov 1 01:15:07.039961 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58c548-0x8c58f70d] Nov 1 01:15:07.039966 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58f710-0x8c591a3a] Nov 1 01:15:07.039971 kernel: ACPI: Reserving HPET table memory at [mem 0x8c591a40-0x8c591a77] Nov 1 01:15:07.039976 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c591a78-0x8c592a25] Nov 1 01:15:07.039981 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c592a28-0x8c59331b] Nov 1 01:15:07.039986 kernel: ACPI: Reserving UEFI table memory at [mem 0x8c593320-0x8c593361] Nov 1 01:15:07.039992 kernel: ACPI: Reserving LPIT table memory at [mem 0x8c593368-0x8c5933fb] Nov 1 01:15:07.039997 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c593400-0x8c595bdd] Nov 1 01:15:07.040002 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c595be0-0x8c5970c1] Nov 1 01:15:07.040007 kernel: ACPI: Reserving DBGP table memory at [mem 0x8c5970c8-0x8c5970fb] Nov 1 01:15:07.040012 kernel: ACPI: Reserving DBG2 table memory at [mem 
0x8c597100-0x8c597153] Nov 1 01:15:07.040017 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c597158-0x8c598cbe] Nov 1 01:15:07.040022 kernel: ACPI: Reserving DMAR table memory at [mem 0x8c598cc0-0x8c598d2f] Nov 1 01:15:07.040027 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c598d30-0x8c598e73] Nov 1 01:15:07.040032 kernel: ACPI: Reserving TPM2 table memory at [mem 0x8c598e78-0x8c598eab] Nov 1 01:15:07.040038 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c598eb0-0x8c599c3e] Nov 1 01:15:07.040043 kernel: ACPI: Reserving WSMT table memory at [mem 0x8c599c40-0x8c599c67] Nov 1 01:15:07.040048 kernel: ACPI: Reserving EINJ table memory at [mem 0x8c599c68-0x8c599d97] Nov 1 01:15:07.040053 kernel: ACPI: Reserving ERST table memory at [mem 0x8c599d98-0x8c599fc7] Nov 1 01:15:07.040058 kernel: ACPI: Reserving BERT table memory at [mem 0x8c599fc8-0x8c599ff7] Nov 1 01:15:07.040063 kernel: ACPI: Reserving HEST table memory at [mem 0x8c599ff8-0x8c59a273] Nov 1 01:15:07.040068 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c59a278-0x8c59a3d9] Nov 1 01:15:07.040073 kernel: No NUMA configuration found Nov 1 01:15:07.040078 kernel: Faking a node at [mem 0x0000000000000000-0x000000086effffff] Nov 1 01:15:07.040083 kernel: NODE_DATA(0) allocated [mem 0x86effa000-0x86effffff] Nov 1 01:15:07.040090 kernel: Zone ranges: Nov 1 01:15:07.040095 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Nov 1 01:15:07.040100 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Nov 1 01:15:07.040105 kernel: Normal [mem 0x0000000100000000-0x000000086effffff] Nov 1 01:15:07.040110 kernel: Movable zone start for each node Nov 1 01:15:07.040115 kernel: Early memory node ranges Nov 1 01:15:07.040120 kernel: node 0: [mem 0x0000000000001000-0x0000000000098fff] Nov 1 01:15:07.040125 kernel: node 0: [mem 0x0000000000100000-0x000000003fffffff] Nov 1 01:15:07.040130 kernel: node 0: [mem 0x0000000040400000-0x0000000081b25fff] Nov 1 01:15:07.040136 kernel: node 0: [mem 
0x0000000081b28000-0x000000008afccfff] Nov 1 01:15:07.040142 kernel: node 0: [mem 0x000000008c0b2000-0x000000008c23afff] Nov 1 01:15:07.040146 kernel: node 0: [mem 0x000000008eeff000-0x000000008eefffff] Nov 1 01:15:07.040152 kernel: node 0: [mem 0x0000000100000000-0x000000086effffff] Nov 1 01:15:07.040160 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000086effffff] Nov 1 01:15:07.040167 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Nov 1 01:15:07.040172 kernel: On node 0, zone DMA: 103 pages in unavailable ranges Nov 1 01:15:07.040177 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges Nov 1 01:15:07.040184 kernel: On node 0, zone DMA32: 2 pages in unavailable ranges Nov 1 01:15:07.040189 kernel: On node 0, zone DMA32: 4325 pages in unavailable ranges Nov 1 01:15:07.040194 kernel: On node 0, zone DMA32: 11460 pages in unavailable ranges Nov 1 01:15:07.040200 kernel: On node 0, zone Normal: 4352 pages in unavailable ranges Nov 1 01:15:07.040208 kernel: On node 0, zone Normal: 4096 pages in unavailable ranges Nov 1 01:15:07.040213 kernel: ACPI: PM-Timer IO Port: 0x1808 Nov 1 01:15:07.040239 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1]) Nov 1 01:15:07.040244 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1]) Nov 1 01:15:07.040250 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1]) Nov 1 01:15:07.040270 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1]) Nov 1 01:15:07.040275 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1]) Nov 1 01:15:07.040281 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1]) Nov 1 01:15:07.040286 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1]) Nov 1 01:15:07.040292 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1]) Nov 1 01:15:07.040297 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1]) Nov 1 01:15:07.040302 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1]) Nov 1 01:15:07.040308 kernel: ACPI: LAPIC_NMI 
(acpi_id[0x0b] high edge lint[0x1]) Nov 1 01:15:07.040313 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1]) Nov 1 01:15:07.040319 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1]) Nov 1 01:15:07.040325 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1]) Nov 1 01:15:07.040330 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1]) Nov 1 01:15:07.040335 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1]) Nov 1 01:15:07.040341 kernel: IOAPIC[0]: apic_id 2, version 32, address 0xfec00000, GSI 0-119 Nov 1 01:15:07.040346 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Nov 1 01:15:07.040352 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Nov 1 01:15:07.040357 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Nov 1 01:15:07.040363 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Nov 1 01:15:07.040369 kernel: TSC deadline timer available Nov 1 01:15:07.040375 kernel: smpboot: Allowing 16 CPUs, 0 hotplug CPUs Nov 1 01:15:07.040380 kernel: [mem 0x90000000-0xdfffffff] available for PCI devices Nov 1 01:15:07.040386 kernel: Booting paravirtualized kernel on bare hardware Nov 1 01:15:07.040391 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Nov 1 01:15:07.040397 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1 Nov 1 01:15:07.040402 kernel: percpu: Embedded 58 pages/cpu s196712 r8192 d32664 u262144 Nov 1 01:15:07.040408 kernel: pcpu-alloc: s196712 r8192 d32664 u262144 alloc=1*2097152 Nov 1 01:15:07.040413 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15 Nov 1 01:15:07.040420 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 
flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=ade41980c48607de3d2d18dc444731ec5388853e3a75ed2d5a13ce616b36f478 Nov 1 01:15:07.040426 kernel: random: crng init done Nov 1 01:15:07.040431 kernel: Dentry cache hash table entries: 4194304 (order: 13, 33554432 bytes, linear) Nov 1 01:15:07.040436 kernel: Inode-cache hash table entries: 2097152 (order: 12, 16777216 bytes, linear) Nov 1 01:15:07.040442 kernel: Fallback order for Node 0: 0 Nov 1 01:15:07.040447 kernel: Built 1 zonelists, mobility grouping on. Total pages: 8232415 Nov 1 01:15:07.040453 kernel: Policy zone: Normal Nov 1 01:15:07.040458 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Nov 1 01:15:07.040464 kernel: software IO TLB: area num 16. Nov 1 01:15:07.040470 kernel: Memory: 32720296K/33452980K available (12288K kernel code, 2288K rwdata, 22748K rodata, 42884K init, 2316K bss, 732424K reserved, 0K cma-reserved) Nov 1 01:15:07.040476 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1 Nov 1 01:15:07.040481 kernel: ftrace: allocating 37980 entries in 149 pages Nov 1 01:15:07.040486 kernel: ftrace: allocated 149 pages with 4 groups Nov 1 01:15:07.040492 kernel: Dynamic Preempt: voluntary Nov 1 01:15:07.040497 kernel: rcu: Preemptible hierarchical RCU implementation. Nov 1 01:15:07.040503 kernel: rcu: RCU event tracing is enabled. Nov 1 01:15:07.040509 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16. Nov 1 01:15:07.040515 kernel: Trampoline variant of Tasks RCU enabled. Nov 1 01:15:07.040521 kernel: Rude variant of Tasks RCU enabled. Nov 1 01:15:07.040526 kernel: Tracing variant of Tasks RCU enabled. Nov 1 01:15:07.040531 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Nov 1 01:15:07.040537 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16 Nov 1 01:15:07.040542 kernel: NR_IRQS: 33024, nr_irqs: 2184, preallocated irqs: 16 Nov 1 01:15:07.040548 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Nov 1 01:15:07.040553 kernel: Console: colour dummy device 80x25 Nov 1 01:15:07.040558 kernel: printk: console [tty0] enabled Nov 1 01:15:07.040564 kernel: printk: console [ttyS1] enabled Nov 1 01:15:07.040570 kernel: ACPI: Core revision 20230628 Nov 1 01:15:07.040576 kernel: hpet: HPET dysfunctional in PC10. Force disabled. Nov 1 01:15:07.040581 kernel: APIC: Switch to symmetric I/O mode setup Nov 1 01:15:07.040586 kernel: DMAR: Host address width 39 Nov 1 01:15:07.040592 kernel: DMAR: DRHD base: 0x000000fed91000 flags: 0x1 Nov 1 01:15:07.040597 kernel: DMAR: dmar0: reg_base_addr fed91000 ver 1:0 cap d2008c40660462 ecap f050da Nov 1 01:15:07.040603 kernel: DMAR: RMRR base: 0x0000008cf18000 end: 0x0000008d161fff Nov 1 01:15:07.040608 kernel: DMAR-IR: IOAPIC id 2 under DRHD base 0xfed91000 IOMMU 0 Nov 1 01:15:07.040614 kernel: DMAR-IR: HPET id 0 under DRHD base 0xfed91000 Nov 1 01:15:07.040620 kernel: DMAR-IR: Queued invalidation will be enabled to support x2apic and Intr-remapping. Nov 1 01:15:07.040626 kernel: DMAR-IR: Enabled IRQ remapping in x2apic mode Nov 1 01:15:07.040631 kernel: x2apic enabled Nov 1 01:15:07.040637 kernel: APIC: Switched APIC routing to: cluster x2apic Nov 1 01:15:07.040642 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x3101f59f5e6, max_idle_ns: 440795259996 ns Nov 1 01:15:07.040648 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 
6799.81 BogoMIPS (lpj=3399906) Nov 1 01:15:07.040653 kernel: CPU0: Thermal monitoring enabled (TM1) Nov 1 01:15:07.040659 kernel: process: using mwait in idle threads Nov 1 01:15:07.040664 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Nov 1 01:15:07.040670 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4 Nov 1 01:15:07.040676 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Nov 1 01:15:07.040681 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit Nov 1 01:15:07.040687 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall Nov 1 01:15:07.040692 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS Nov 1 01:15:07.040697 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT Nov 1 01:15:07.040703 kernel: RETBleed: Mitigation: Enhanced IBRS Nov 1 01:15:07.040708 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Nov 1 01:15:07.040713 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Nov 1 01:15:07.040719 kernel: TAA: Mitigation: TSX disabled Nov 1 01:15:07.040724 kernel: MMIO Stale Data: Mitigation: Clear CPU buffers Nov 1 01:15:07.040730 kernel: SRBDS: Mitigation: Microcode Nov 1 01:15:07.040736 kernel: GDS: Mitigation: Microcode Nov 1 01:15:07.040741 kernel: active return thunk: its_return_thunk Nov 1 01:15:07.040747 kernel: ITS: Mitigation: Aligned branch/return thunks Nov 1 01:15:07.040752 kernel: VMSCAPE: Mitigation: IBPB before exit to userspace Nov 1 01:15:07.040757 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Nov 1 01:15:07.040763 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Nov 1 01:15:07.040768 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Nov 1 01:15:07.040773 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers' Nov 1 01:15:07.040779 kernel: x86/fpu: Supporting 
XSAVE feature 0x010: 'MPX CSR' Nov 1 01:15:07.040784 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Nov 1 01:15:07.040791 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64 Nov 1 01:15:07.040796 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64 Nov 1 01:15:07.040801 kernel: x86/fpu: Enabled xstate features 0x1f, context size is 960 bytes, using 'compacted' format. Nov 1 01:15:07.040807 kernel: Freeing SMP alternatives memory: 32K Nov 1 01:15:07.040812 kernel: pid_max: default: 32768 minimum: 301 Nov 1 01:15:07.040818 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Nov 1 01:15:07.040823 kernel: landlock: Up and running. Nov 1 01:15:07.040828 kernel: SELinux: Initializing. Nov 1 01:15:07.040834 kernel: Mount-cache hash table entries: 65536 (order: 7, 524288 bytes, linear) Nov 1 01:15:07.040839 kernel: Mountpoint-cache hash table entries: 65536 (order: 7, 524288 bytes, linear) Nov 1 01:15:07.040845 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd) Nov 1 01:15:07.040850 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Nov 1 01:15:07.040857 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Nov 1 01:15:07.040862 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Nov 1 01:15:07.040868 kernel: Performance Events: PEBS fmt3+, Skylake events, 32-deep LBR, full-width counters, Intel PMU driver. Nov 1 01:15:07.040873 kernel: ... version: 4 Nov 1 01:15:07.040879 kernel: ... bit width: 48 Nov 1 01:15:07.040884 kernel: ... generic registers: 4 Nov 1 01:15:07.040890 kernel: ... value mask: 0000ffffffffffff Nov 1 01:15:07.040895 kernel: ... max period: 00007fffffffffff Nov 1 01:15:07.040900 kernel: ... fixed-purpose events: 3 Nov 1 01:15:07.040907 kernel: ... 
event mask: 000000070000000f Nov 1 01:15:07.040912 kernel: signal: max sigframe size: 2032 Nov 1 01:15:07.040918 kernel: Estimated ratio of average max frequency by base frequency (times 1024): 1445 Nov 1 01:15:07.040923 kernel: rcu: Hierarchical SRCU implementation. Nov 1 01:15:07.040929 kernel: rcu: Max phase no-delay instances is 400. Nov 1 01:15:07.040934 kernel: NMI watchdog: Enabled. Permanently consumes one hw-PMU counter. Nov 1 01:15:07.040940 kernel: smp: Bringing up secondary CPUs ... Nov 1 01:15:07.040945 kernel: smpboot: x86: Booting SMP configuration: Nov 1 01:15:07.040950 kernel: .... node #0, CPUs: #1 #2 #3 #4 #5 #6 #7 #8 #9 #10 #11 #12 #13 #14 #15 Nov 1 01:15:07.040957 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Nov 1 01:15:07.040963 kernel: smp: Brought up 1 node, 16 CPUs Nov 1 01:15:07.040968 kernel: smpboot: Max logical packages: 1 Nov 1 01:15:07.040974 kernel: smpboot: Total of 16 processors activated (108796.99 BogoMIPS) Nov 1 01:15:07.040979 kernel: devtmpfs: initialized Nov 1 01:15:07.040985 kernel: x86/mm: Memory block size: 128MB Nov 1 01:15:07.040990 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x81b26000-0x81b26fff] (4096 bytes) Nov 1 01:15:07.040996 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x8c23b000-0x8c66cfff] (4399104 bytes) Nov 1 01:15:07.041002 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Nov 1 01:15:07.041008 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear) Nov 1 01:15:07.041013 kernel: pinctrl core: initialized pinctrl subsystem Nov 1 01:15:07.041018 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Nov 1 01:15:07.041024 kernel: audit: initializing netlink subsys (disabled) Nov 1 01:15:07.041029 kernel: audit: type=2000 audit(1761959701.040:1): state=initialized audit_enabled=0 res=1 Nov 
1 01:15:07.041035 kernel: thermal_sys: Registered thermal governor 'step_wise' Nov 1 01:15:07.041040 kernel: thermal_sys: Registered thermal governor 'user_space' Nov 1 01:15:07.041046 kernel: cpuidle: using governor menu Nov 1 01:15:07.041052 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Nov 1 01:15:07.041058 kernel: dca service started, version 1.12.1 Nov 1 01:15:07.041063 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000) Nov 1 01:15:07.041068 kernel: PCI: Using configuration type 1 for base access Nov 1 01:15:07.041074 kernel: ENERGY_PERF_BIAS: Set to 'normal', was 'performance' Nov 1 01:15:07.041079 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Nov 1 01:15:07.041085 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Nov 1 01:15:07.041090 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Nov 1 01:15:07.041096 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Nov 1 01:15:07.041102 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Nov 1 01:15:07.041107 kernel: ACPI: Added _OSI(Module Device) Nov 1 01:15:07.041113 kernel: ACPI: Added _OSI(Processor Device) Nov 1 01:15:07.041118 kernel: ACPI: Added _OSI(Processor Aggregator Device) Nov 1 01:15:07.041124 kernel: ACPI: 12 ACPI AML tables successfully acquired and loaded Nov 1 01:15:07.041129 kernel: ACPI: Dynamic OEM Table Load: Nov 1 01:15:07.041135 kernel: ACPI: SSDT 0xFFFF933441B54000 000400 (v02 PmRef Cpu0Cst 00003001 INTL 20160527) Nov 1 01:15:07.041140 kernel: ACPI: Dynamic OEM Table Load: Nov 1 01:15:07.041146 kernel: ACPI: SSDT 0xFFFF933441B4A800 000683 (v02 PmRef Cpu0Ist 00003000 INTL 20160527) Nov 1 01:15:07.041152 kernel: ACPI: Dynamic OEM Table Load: Nov 1 01:15:07.041157 kernel: ACPI: SSDT 0xFFFF933440247600 0000F4 (v02 PmRef Cpu0Psd 00003000 INTL 20160527) Nov 1 01:15:07.041163 kernel: ACPI: Dynamic OEM Table Load: Nov 
1 01:15:07.041168 kernel: ACPI: SSDT 0xFFFF933441E7A000 0005FC (v02 PmRef ApIst 00003000 INTL 20160527) Nov 1 01:15:07.041173 kernel: ACPI: Dynamic OEM Table Load: Nov 1 01:15:07.041179 kernel: ACPI: SSDT 0xFFFF93344012F000 000AB0 (v02 PmRef ApPsd 00003000 INTL 20160527) Nov 1 01:15:07.041184 kernel: ACPI: Dynamic OEM Table Load: Nov 1 01:15:07.041189 kernel: ACPI: SSDT 0xFFFF933441B50400 00030A (v02 PmRef ApCst 00003000 INTL 20160527) Nov 1 01:15:07.041195 kernel: ACPI: _OSC evaluated successfully for all CPUs Nov 1 01:15:07.041200 kernel: ACPI: Interpreter enabled Nov 1 01:15:07.041208 kernel: ACPI: PM: (supports S0 S5) Nov 1 01:15:07.041232 kernel: ACPI: Using IOAPIC for interrupt routing Nov 1 01:15:07.041238 kernel: HEST: Enabling Firmware First mode for corrected errors. Nov 1 01:15:07.041243 kernel: mce: [Firmware Bug]: Ignoring request to disable invalid MCA bank 14. Nov 1 01:15:07.041249 kernel: HEST: Table parsing has been initialized. Nov 1 01:15:07.041269 kernel: GHES: APEI firmware first mode is enabled by APEI bit and WHEA _OSC. 
Nov 1 01:15:07.041275 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Nov 1 01:15:07.041280 kernel: PCI: Using E820 reservations for host bridge windows Nov 1 01:15:07.041286 kernel: ACPI: Enabled 9 GPEs in block 00 to 7F Nov 1 01:15:07.041292 kernel: ACPI: \_SB_.PCI0.XDCI.USBC: New power resource Nov 1 01:15:07.041298 kernel: ACPI: \_SB_.PCI0.SAT0.VOL0.V0PR: New power resource Nov 1 01:15:07.041303 kernel: ACPI: \_SB_.PCI0.SAT0.VOL1.V1PR: New power resource Nov 1 01:15:07.041309 kernel: ACPI: \_SB_.PCI0.SAT0.VOL2.V2PR: New power resource Nov 1 01:15:07.041314 kernel: ACPI: \_SB_.PCI0.CNVW.WRST: New power resource Nov 1 01:15:07.041320 kernel: ACPI: \_TZ_.FN00: New power resource Nov 1 01:15:07.041325 kernel: ACPI: \_TZ_.FN01: New power resource Nov 1 01:15:07.041330 kernel: ACPI: \_TZ_.FN02: New power resource Nov 1 01:15:07.041336 kernel: ACPI: \_TZ_.FN03: New power resource Nov 1 01:15:07.041342 kernel: ACPI: \_TZ_.FN04: New power resource Nov 1 01:15:07.041348 kernel: ACPI: \PIN_: New power resource Nov 1 01:15:07.041353 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-fe]) Nov 1 01:15:07.041425 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Nov 1 01:15:07.041481 kernel: acpi PNP0A08:00: _OSC: platform does not support [AER] Nov 1 01:15:07.041531 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability LTR] Nov 1 01:15:07.041539 kernel: PCI host bridge to bus 0000:00 Nov 1 01:15:07.041592 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Nov 1 01:15:07.041638 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Nov 1 01:15:07.041680 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Nov 1 01:15:07.041724 kernel: pci_bus 0000:00: root bus resource [mem 0x90000000-0xdfffffff window] Nov 1 01:15:07.041766 kernel: pci_bus 0000:00: root bus resource [mem 0xfc800000-0xfe7fffff 
window] Nov 1 01:15:07.041809 kernel: pci_bus 0000:00: root bus resource [bus 00-fe] Nov 1 01:15:07.041871 kernel: pci 0000:00:00.0: [8086:3e31] type 00 class 0x060000 Nov 1 01:15:07.041931 kernel: pci 0000:00:01.0: [8086:1901] type 01 class 0x060400 Nov 1 01:15:07.041983 kernel: pci 0000:00:01.0: PME# supported from D0 D3hot D3cold Nov 1 01:15:07.042036 kernel: pci 0000:00:08.0: [8086:1911] type 00 class 0x088000 Nov 1 01:15:07.042087 kernel: pci 0000:00:08.0: reg 0x10: [mem 0x9551f000-0x9551ffff 64bit] Nov 1 01:15:07.042139 kernel: pci 0000:00:12.0: [8086:a379] type 00 class 0x118000 Nov 1 01:15:07.042189 kernel: pci 0000:00:12.0: reg 0x10: [mem 0x9551e000-0x9551efff 64bit] Nov 1 01:15:07.042282 kernel: pci 0000:00:14.0: [8086:a36d] type 00 class 0x0c0330 Nov 1 01:15:07.042332 kernel: pci 0000:00:14.0: reg 0x10: [mem 0x95500000-0x9550ffff 64bit] Nov 1 01:15:07.042381 kernel: pci 0000:00:14.0: PME# supported from D3hot D3cold Nov 1 01:15:07.042434 kernel: pci 0000:00:14.2: [8086:a36f] type 00 class 0x050000 Nov 1 01:15:07.042483 kernel: pci 0000:00:14.2: reg 0x10: [mem 0x95512000-0x95513fff 64bit] Nov 1 01:15:07.042532 kernel: pci 0000:00:14.2: reg 0x18: [mem 0x9551d000-0x9551dfff 64bit] Nov 1 01:15:07.042589 kernel: pci 0000:00:15.0: [8086:a368] type 00 class 0x0c8000 Nov 1 01:15:07.042639 kernel: pci 0000:00:15.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit] Nov 1 01:15:07.042695 kernel: pci 0000:00:15.1: [8086:a369] type 00 class 0x0c8000 Nov 1 01:15:07.042746 kernel: pci 0000:00:15.1: reg 0x10: [mem 0x00000000-0x00000fff 64bit] Nov 1 01:15:07.042799 kernel: pci 0000:00:16.0: [8086:a360] type 00 class 0x078000 Nov 1 01:15:07.042848 kernel: pci 0000:00:16.0: reg 0x10: [mem 0x9551a000-0x9551afff 64bit] Nov 1 01:15:07.042900 kernel: pci 0000:00:16.0: PME# supported from D3hot Nov 1 01:15:07.042952 kernel: pci 0000:00:16.1: [8086:a361] type 00 class 0x078000 Nov 1 01:15:07.043013 kernel: pci 0000:00:16.1: reg 0x10: [mem 0x95519000-0x95519fff 64bit] Nov 1 
01:15:07.043064 kernel: pci 0000:00:16.1: PME# supported from D3hot Nov 1 01:15:07.043118 kernel: pci 0000:00:16.4: [8086:a364] type 00 class 0x078000 Nov 1 01:15:07.043168 kernel: pci 0000:00:16.4: reg 0x10: [mem 0x95518000-0x95518fff 64bit] Nov 1 01:15:07.043243 kernel: pci 0000:00:16.4: PME# supported from D3hot Nov 1 01:15:07.043313 kernel: pci 0000:00:17.0: [8086:a352] type 00 class 0x010601 Nov 1 01:15:07.043364 kernel: pci 0000:00:17.0: reg 0x10: [mem 0x95510000-0x95511fff] Nov 1 01:15:07.043412 kernel: pci 0000:00:17.0: reg 0x14: [mem 0x95517000-0x955170ff] Nov 1 01:15:07.043462 kernel: pci 0000:00:17.0: reg 0x18: [io 0x6050-0x6057] Nov 1 01:15:07.043510 kernel: pci 0000:00:17.0: reg 0x1c: [io 0x6040-0x6043] Nov 1 01:15:07.043559 kernel: pci 0000:00:17.0: reg 0x20: [io 0x6020-0x603f] Nov 1 01:15:07.043611 kernel: pci 0000:00:17.0: reg 0x24: [mem 0x95516000-0x955167ff] Nov 1 01:15:07.043659 kernel: pci 0000:00:17.0: PME# supported from D3hot Nov 1 01:15:07.043716 kernel: pci 0000:00:1b.0: [8086:a340] type 01 class 0x060400 Nov 1 01:15:07.043767 kernel: pci 0000:00:1b.0: PME# supported from D0 D3hot D3cold Nov 1 01:15:07.043824 kernel: pci 0000:00:1b.4: [8086:a32c] type 01 class 0x060400 Nov 1 01:15:07.043877 kernel: pci 0000:00:1b.4: PME# supported from D0 D3hot D3cold Nov 1 01:15:07.043931 kernel: pci 0000:00:1b.5: [8086:a32d] type 01 class 0x060400 Nov 1 01:15:07.043981 kernel: pci 0000:00:1b.5: PME# supported from D0 D3hot D3cold Nov 1 01:15:07.044035 kernel: pci 0000:00:1c.0: [8086:a338] type 01 class 0x060400 Nov 1 01:15:07.044085 kernel: pci 0000:00:1c.0: PME# supported from D0 D3hot D3cold Nov 1 01:15:07.044139 kernel: pci 0000:00:1c.3: [8086:a33b] type 01 class 0x060400 Nov 1 01:15:07.044190 kernel: pci 0000:00:1c.3: PME# supported from D0 D3hot D3cold Nov 1 01:15:07.044283 kernel: pci 0000:00:1e.0: [8086:a328] type 00 class 0x078000 Nov 1 01:15:07.044333 kernel: pci 0000:00:1e.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit] Nov 1 01:15:07.044387 
kernel: pci 0000:00:1f.0: [8086:a309] type 00 class 0x060100 Nov 1 01:15:07.044440 kernel: pci 0000:00:1f.4: [8086:a323] type 00 class 0x0c0500 Nov 1 01:15:07.044491 kernel: pci 0000:00:1f.4: reg 0x10: [mem 0x95514000-0x955140ff 64bit] Nov 1 01:15:07.044544 kernel: pci 0000:00:1f.4: reg 0x20: [io 0xefa0-0xefbf] Nov 1 01:15:07.044598 kernel: pci 0000:00:1f.5: [8086:a324] type 00 class 0x0c8000 Nov 1 01:15:07.044649 kernel: pci 0000:00:1f.5: reg 0x10: [mem 0xfe010000-0xfe010fff] Nov 1 01:15:07.044704 kernel: pci 0000:01:00.0: [15b3:1015] type 00 class 0x020000 Nov 1 01:15:07.044756 kernel: pci 0000:01:00.0: reg 0x10: [mem 0x92000000-0x93ffffff 64bit pref] Nov 1 01:15:07.044806 kernel: pci 0000:01:00.0: reg 0x30: [mem 0x95200000-0x952fffff pref] Nov 1 01:15:07.044860 kernel: pci 0000:01:00.0: PME# supported from D3cold Nov 1 01:15:07.044909 kernel: pci 0000:01:00.0: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref] Nov 1 01:15:07.044961 kernel: pci 0000:01:00.0: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs) Nov 1 01:15:07.045017 kernel: pci 0000:01:00.1: [15b3:1015] type 00 class 0x020000 Nov 1 01:15:07.045067 kernel: pci 0000:01:00.1: reg 0x10: [mem 0x90000000-0x91ffffff 64bit pref] Nov 1 01:15:07.045118 kernel: pci 0000:01:00.1: reg 0x30: [mem 0x95100000-0x951fffff pref] Nov 1 01:15:07.045168 kernel: pci 0000:01:00.1: PME# supported from D3cold Nov 1 01:15:07.045244 kernel: pci 0000:01:00.1: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref] Nov 1 01:15:07.045309 kernel: pci 0000:01:00.1: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs) Nov 1 01:15:07.045360 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Nov 1 01:15:07.045409 kernel: pci 0000:00:01.0: bridge window [mem 0x95100000-0x952fffff] Nov 1 01:15:07.045460 kernel: pci 0000:00:01.0: bridge window [mem 0x90000000-0x93ffffff 64bit pref] Nov 1 01:15:07.045510 kernel: pci 0000:00:1b.0: PCI bridge to [bus 02] Nov 1 01:15:07.045566 kernel: pci 
0000:03:00.0: working around ROM BAR overlap defect Nov 1 01:15:07.045618 kernel: pci 0000:03:00.0: [8086:1533] type 00 class 0x020000 Nov 1 01:15:07.045670 kernel: pci 0000:03:00.0: reg 0x10: [mem 0x95400000-0x9547ffff] Nov 1 01:15:07.045722 kernel: pci 0000:03:00.0: reg 0x18: [io 0x5000-0x501f] Nov 1 01:15:07.045773 kernel: pci 0000:03:00.0: reg 0x1c: [mem 0x95480000-0x95483fff] Nov 1 01:15:07.045824 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold Nov 1 01:15:07.045874 kernel: pci 0000:00:1b.4: PCI bridge to [bus 03] Nov 1 01:15:07.045924 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] Nov 1 01:15:07.045973 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff] Nov 1 01:15:07.046032 kernel: pci 0000:04:00.0: working around ROM BAR overlap defect Nov 1 01:15:07.046084 kernel: pci 0000:04:00.0: [8086:1533] type 00 class 0x020000 Nov 1 01:15:07.046135 kernel: pci 0000:04:00.0: reg 0x10: [mem 0x95300000-0x9537ffff] Nov 1 01:15:07.046186 kernel: pci 0000:04:00.0: reg 0x18: [io 0x4000-0x401f] Nov 1 01:15:07.046284 kernel: pci 0000:04:00.0: reg 0x1c: [mem 0x95380000-0x95383fff] Nov 1 01:15:07.046338 kernel: pci 0000:04:00.0: PME# supported from D0 D3hot D3cold Nov 1 01:15:07.046388 kernel: pci 0000:00:1b.5: PCI bridge to [bus 04] Nov 1 01:15:07.046441 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] Nov 1 01:15:07.046490 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff] Nov 1 01:15:07.046540 kernel: pci 0000:00:1c.0: PCI bridge to [bus 05] Nov 1 01:15:07.046599 kernel: pci 0000:06:00.0: [1a03:1150] type 01 class 0x060400 Nov 1 01:15:07.046651 kernel: pci 0000:06:00.0: enabling Extended Tags Nov 1 01:15:07.046703 kernel: pci 0000:06:00.0: supports D1 D2 Nov 1 01:15:07.046753 kernel: pci 0000:06:00.0: PME# supported from D0 D1 D2 D3hot D3cold Nov 1 01:15:07.046804 kernel: pci 0000:00:1c.3: PCI bridge to [bus 06-07] Nov 1 01:15:07.046855 kernel: pci 0000:00:1c.3: bridge window [io 0x3000-0x3fff] Nov 1 
01:15:07.046905 kernel: pci 0000:00:1c.3: bridge window [mem 0x94000000-0x950fffff] Nov 1 01:15:07.046960 kernel: pci_bus 0000:07: extended config space not accessible Nov 1 01:15:07.047018 kernel: pci 0000:07:00.0: [1a03:2000] type 00 class 0x030000 Nov 1 01:15:07.047073 kernel: pci 0000:07:00.0: reg 0x10: [mem 0x94000000-0x94ffffff] Nov 1 01:15:07.047125 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x95000000-0x9501ffff] Nov 1 01:15:07.047178 kernel: pci 0000:07:00.0: reg 0x18: [io 0x3000-0x307f] Nov 1 01:15:07.047260 kernel: pci 0000:07:00.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Nov 1 01:15:07.047334 kernel: pci 0000:07:00.0: supports D1 D2 Nov 1 01:15:07.047387 kernel: pci 0000:07:00.0: PME# supported from D0 D1 D2 D3hot D3cold Nov 1 01:15:07.047438 kernel: pci 0000:06:00.0: PCI bridge to [bus 07] Nov 1 01:15:07.047489 kernel: pci 0000:06:00.0: bridge window [io 0x3000-0x3fff] Nov 1 01:15:07.047540 kernel: pci 0000:06:00.0: bridge window [mem 0x94000000-0x950fffff] Nov 1 01:15:07.047548 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 0 Nov 1 01:15:07.047554 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 1 Nov 1 01:15:07.047562 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 0 Nov 1 01:15:07.047568 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 0 Nov 1 01:15:07.047573 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 0 Nov 1 01:15:07.047579 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 0 Nov 1 01:15:07.047585 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 0 Nov 1 01:15:07.047591 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 0 Nov 1 01:15:07.047597 kernel: iommu: Default domain type: Translated Nov 1 01:15:07.047602 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Nov 1 01:15:07.047608 kernel: PCI: Using ACPI for IRQ routing Nov 1 01:15:07.047615 kernel: PCI: pci_cache_line_size set to 64 bytes Nov 1 01:15:07.047621 kernel: e820: reserve RAM 
buffer [mem 0x00099800-0x0009ffff] Nov 1 01:15:07.047626 kernel: e820: reserve RAM buffer [mem 0x81b26000-0x83ffffff] Nov 1 01:15:07.047632 kernel: e820: reserve RAM buffer [mem 0x8afcd000-0x8bffffff] Nov 1 01:15:07.047638 kernel: e820: reserve RAM buffer [mem 0x8c23b000-0x8fffffff] Nov 1 01:15:07.047643 kernel: e820: reserve RAM buffer [mem 0x8ef00000-0x8fffffff] Nov 1 01:15:07.047649 kernel: e820: reserve RAM buffer [mem 0x86f000000-0x86fffffff] Nov 1 01:15:07.047700 kernel: pci 0000:07:00.0: vgaarb: setting as boot VGA device Nov 1 01:15:07.047753 kernel: pci 0000:07:00.0: vgaarb: bridge control possible Nov 1 01:15:07.047808 kernel: pci 0000:07:00.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Nov 1 01:15:07.047816 kernel: vgaarb: loaded Nov 1 01:15:07.047822 kernel: clocksource: Switched to clocksource tsc-early Nov 1 01:15:07.047828 kernel: VFS: Disk quotas dquot_6.6.0 Nov 1 01:15:07.047834 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Nov 1 01:15:07.047840 kernel: pnp: PnP ACPI init Nov 1 01:15:07.047891 kernel: system 00:00: [mem 0x40000000-0x403fffff] has been reserved Nov 1 01:15:07.047940 kernel: pnp 00:02: [dma 0 disabled] Nov 1 01:15:07.047994 kernel: pnp 00:03: [dma 0 disabled] Nov 1 01:15:07.048045 kernel: system 00:04: [io 0x0680-0x069f] has been reserved Nov 1 01:15:07.048092 kernel: system 00:04: [io 0x164e-0x164f] has been reserved Nov 1 01:15:07.048140 kernel: system 00:05: [mem 0xfed10000-0xfed17fff] has been reserved Nov 1 01:15:07.048186 kernel: system 00:05: [mem 0xfed18000-0xfed18fff] has been reserved Nov 1 01:15:07.048280 kernel: system 00:05: [mem 0xfed19000-0xfed19fff] has been reserved Nov 1 01:15:07.048328 kernel: system 00:05: [mem 0xe0000000-0xefffffff] has been reserved Nov 1 01:15:07.048373 kernel: system 00:05: [mem 0xfed20000-0xfed3ffff] has been reserved Nov 1 01:15:07.048421 kernel: system 00:05: [mem 0xfed90000-0xfed93fff] could not be reserved Nov 1 01:15:07.048466 kernel: system 
00:05: [mem 0xfed45000-0xfed8ffff] has been reserved Nov 1 01:15:07.048511 kernel: system 00:05: [mem 0xfee00000-0xfeefffff] could not be reserved Nov 1 01:15:07.048560 kernel: system 00:06: [io 0x1800-0x18fe] could not be reserved Nov 1 01:15:07.048607 kernel: system 00:06: [mem 0xfd000000-0xfd69ffff] has been reserved Nov 1 01:15:07.048654 kernel: system 00:06: [mem 0xfd6c0000-0xfd6cffff] has been reserved Nov 1 01:15:07.048699 kernel: system 00:06: [mem 0xfd6f0000-0xfdffffff] has been reserved Nov 1 01:15:07.048744 kernel: system 00:06: [mem 0xfe000000-0xfe01ffff] could not be reserved Nov 1 01:15:07.048788 kernel: system 00:06: [mem 0xfe200000-0xfe7fffff] has been reserved Nov 1 01:15:07.048833 kernel: system 00:06: [mem 0xff000000-0xffffffff] has been reserved Nov 1 01:15:07.048881 kernel: system 00:07: [io 0x2000-0x20fe] has been reserved Nov 1 01:15:07.048890 kernel: pnp: PnP ACPI: found 9 devices Nov 1 01:15:07.048898 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Nov 1 01:15:07.048905 kernel: NET: Registered PF_INET protocol family Nov 1 01:15:07.048911 kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear) Nov 1 01:15:07.048917 kernel: tcp_listen_portaddr_hash hash table entries: 16384 (order: 6, 262144 bytes, linear) Nov 1 01:15:07.048923 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Nov 1 01:15:07.048928 kernel: TCP established hash table entries: 262144 (order: 9, 2097152 bytes, linear) Nov 1 01:15:07.048934 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Nov 1 01:15:07.048940 kernel: TCP: Hash tables configured (established 262144 bind 65536) Nov 1 01:15:07.048946 kernel: UDP hash table entries: 16384 (order: 7, 524288 bytes, linear) Nov 1 01:15:07.048953 kernel: UDP-Lite hash table entries: 16384 (order: 7, 524288 bytes, linear) Nov 1 01:15:07.048959 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Nov 1 
01:15:07.048964 kernel: NET: Registered PF_XDP protocol family Nov 1 01:15:07.049016 kernel: pci 0000:00:15.0: BAR 0: assigned [mem 0x95515000-0x95515fff 64bit] Nov 1 01:15:07.049065 kernel: pci 0000:00:15.1: BAR 0: assigned [mem 0x9551b000-0x9551bfff 64bit] Nov 1 01:15:07.049116 kernel: pci 0000:00:1e.0: BAR 0: assigned [mem 0x9551c000-0x9551cfff 64bit] Nov 1 01:15:07.049168 kernel: pci 0000:01:00.0: BAR 7: no space for [mem size 0x00800000 64bit pref] Nov 1 01:15:07.049248 kernel: pci 0000:01:00.0: BAR 7: failed to assign [mem size 0x00800000 64bit pref] Nov 1 01:15:07.049321 kernel: pci 0000:01:00.1: BAR 7: no space for [mem size 0x00800000 64bit pref] Nov 1 01:15:07.049374 kernel: pci 0000:01:00.1: BAR 7: failed to assign [mem size 0x00800000 64bit pref] Nov 1 01:15:07.049424 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Nov 1 01:15:07.049473 kernel: pci 0000:00:01.0: bridge window [mem 0x95100000-0x952fffff] Nov 1 01:15:07.049523 kernel: pci 0000:00:01.0: bridge window [mem 0x90000000-0x93ffffff 64bit pref] Nov 1 01:15:07.049572 kernel: pci 0000:00:1b.0: PCI bridge to [bus 02] Nov 1 01:15:07.049625 kernel: pci 0000:00:1b.4: PCI bridge to [bus 03] Nov 1 01:15:07.049674 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] Nov 1 01:15:07.049724 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff] Nov 1 01:15:07.049773 kernel: pci 0000:00:1b.5: PCI bridge to [bus 04] Nov 1 01:15:07.049824 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] Nov 1 01:15:07.049872 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff] Nov 1 01:15:07.049922 kernel: pci 0000:00:1c.0: PCI bridge to [bus 05] Nov 1 01:15:07.049974 kernel: pci 0000:06:00.0: PCI bridge to [bus 07] Nov 1 01:15:07.050025 kernel: pci 0000:06:00.0: bridge window [io 0x3000-0x3fff] Nov 1 01:15:07.050076 kernel: pci 0000:06:00.0: bridge window [mem 0x94000000-0x950fffff] Nov 1 01:15:07.050126 kernel: pci 0000:00:1c.3: PCI bridge to [bus 06-07] Nov 1 01:15:07.050177 
kernel: pci 0000:00:1c.3: bridge window [io 0x3000-0x3fff] Nov 1 01:15:07.050253 kernel: pci 0000:00:1c.3: bridge window [mem 0x94000000-0x950fffff] Nov 1 01:15:07.050319 kernel: pci_bus 0000:00: Some PCI device resources are unassigned, try booting with pci=realloc Nov 1 01:15:07.050362 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Nov 1 01:15:07.050409 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Nov 1 01:15:07.050452 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Nov 1 01:15:07.050496 kernel: pci_bus 0000:00: resource 7 [mem 0x90000000-0xdfffffff window] Nov 1 01:15:07.050538 kernel: pci_bus 0000:00: resource 8 [mem 0xfc800000-0xfe7fffff window] Nov 1 01:15:07.050588 kernel: pci_bus 0000:01: resource 1 [mem 0x95100000-0x952fffff] Nov 1 01:15:07.050634 kernel: pci_bus 0000:01: resource 2 [mem 0x90000000-0x93ffffff 64bit pref] Nov 1 01:15:07.050684 kernel: pci_bus 0000:03: resource 0 [io 0x5000-0x5fff] Nov 1 01:15:07.050732 kernel: pci_bus 0000:03: resource 1 [mem 0x95400000-0x954fffff] Nov 1 01:15:07.050785 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff] Nov 1 01:15:07.050831 kernel: pci_bus 0000:04: resource 1 [mem 0x95300000-0x953fffff] Nov 1 01:15:07.050881 kernel: pci_bus 0000:06: resource 0 [io 0x3000-0x3fff] Nov 1 01:15:07.050926 kernel: pci_bus 0000:06: resource 1 [mem 0x94000000-0x950fffff] Nov 1 01:15:07.050974 kernel: pci_bus 0000:07: resource 0 [io 0x3000-0x3fff] Nov 1 01:15:07.051019 kernel: pci_bus 0000:07: resource 1 [mem 0x94000000-0x950fffff] Nov 1 01:15:07.051029 kernel: PCI: CLS 64 bytes, default 64 Nov 1 01:15:07.051035 kernel: DMAR: No ATSR found Nov 1 01:15:07.051041 kernel: DMAR: No SATC found Nov 1 01:15:07.051047 kernel: DMAR: dmar0: Using Queued invalidation Nov 1 01:15:07.051097 kernel: pci 0000:00:00.0: Adding to iommu group 0 Nov 1 01:15:07.051148 kernel: pci 0000:00:01.0: Adding to iommu group 1 Nov 1 01:15:07.051199 kernel: pci 0000:00:08.0: Adding to iommu group 2 Nov 1 
01:15:07.051291 kernel: pci 0000:00:12.0: Adding to iommu group 3 Nov 1 01:15:07.051343 kernel: pci 0000:00:14.0: Adding to iommu group 4 Nov 1 01:15:07.051393 kernel: pci 0000:00:14.2: Adding to iommu group 4 Nov 1 01:15:07.051441 kernel: pci 0000:00:15.0: Adding to iommu group 5 Nov 1 01:15:07.051490 kernel: pci 0000:00:15.1: Adding to iommu group 5 Nov 1 01:15:07.051538 kernel: pci 0000:00:16.0: Adding to iommu group 6 Nov 1 01:15:07.051587 kernel: pci 0000:00:16.1: Adding to iommu group 6 Nov 1 01:15:07.051636 kernel: pci 0000:00:16.4: Adding to iommu group 6 Nov 1 01:15:07.051685 kernel: pci 0000:00:17.0: Adding to iommu group 7 Nov 1 01:15:07.051736 kernel: pci 0000:00:1b.0: Adding to iommu group 8 Nov 1 01:15:07.051786 kernel: pci 0000:00:1b.4: Adding to iommu group 9 Nov 1 01:15:07.051835 kernel: pci 0000:00:1b.5: Adding to iommu group 10 Nov 1 01:15:07.051884 kernel: pci 0000:00:1c.0: Adding to iommu group 11 Nov 1 01:15:07.051934 kernel: pci 0000:00:1c.3: Adding to iommu group 12 Nov 1 01:15:07.051982 kernel: pci 0000:00:1e.0: Adding to iommu group 13 Nov 1 01:15:07.052032 kernel: pci 0000:00:1f.0: Adding to iommu group 14 Nov 1 01:15:07.052081 kernel: pci 0000:00:1f.4: Adding to iommu group 14 Nov 1 01:15:07.052133 kernel: pci 0000:00:1f.5: Adding to iommu group 14 Nov 1 01:15:07.052184 kernel: pci 0000:01:00.0: Adding to iommu group 1 Nov 1 01:15:07.052283 kernel: pci 0000:01:00.1: Adding to iommu group 1 Nov 1 01:15:07.052335 kernel: pci 0000:03:00.0: Adding to iommu group 15 Nov 1 01:15:07.052388 kernel: pci 0000:04:00.0: Adding to iommu group 16 Nov 1 01:15:07.052438 kernel: pci 0000:06:00.0: Adding to iommu group 17 Nov 1 01:15:07.052491 kernel: pci 0000:07:00.0: Adding to iommu group 17 Nov 1 01:15:07.052500 kernel: DMAR: Intel(R) Virtualization Technology for Directed I/O Nov 1 01:15:07.052506 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Nov 1 01:15:07.052513 kernel: software IO TLB: mapped [mem 
0x0000000086fcd000-0x000000008afcd000] (64MB) Nov 1 01:15:07.052519 kernel: RAPL PMU: API unit is 2^-32 Joules, 3 fixed counters, 655360 ms ovfl timer Nov 1 01:15:07.052525 kernel: RAPL PMU: hw unit of domain pp0-core 2^-14 Joules Nov 1 01:15:07.052531 kernel: RAPL PMU: hw unit of domain package 2^-14 Joules Nov 1 01:15:07.052537 kernel: RAPL PMU: hw unit of domain dram 2^-14 Joules Nov 1 01:15:07.052589 kernel: platform rtc_cmos: registered platform RTC device (no PNP device found) Nov 1 01:15:07.052598 kernel: Initialise system trusted keyrings Nov 1 01:15:07.052604 kernel: workingset: timestamp_bits=39 max_order=23 bucket_order=0 Nov 1 01:15:07.052612 kernel: Key type asymmetric registered Nov 1 01:15:07.052617 kernel: Asymmetric key parser 'x509' registered Nov 1 01:15:07.052623 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Nov 1 01:15:07.052629 kernel: io scheduler mq-deadline registered Nov 1 01:15:07.052635 kernel: io scheduler kyber registered Nov 1 01:15:07.052640 kernel: io scheduler bfq registered Nov 1 01:15:07.052688 kernel: pcieport 0000:00:01.0: PME: Signaling with IRQ 121 Nov 1 01:15:07.052738 kernel: pcieport 0000:00:1b.0: PME: Signaling with IRQ 122 Nov 1 01:15:07.052790 kernel: pcieport 0000:00:1b.4: PME: Signaling with IRQ 123 Nov 1 01:15:07.052840 kernel: pcieport 0000:00:1b.5: PME: Signaling with IRQ 124 Nov 1 01:15:07.052889 kernel: pcieport 0000:00:1c.0: PME: Signaling with IRQ 125 Nov 1 01:15:07.052939 kernel: pcieport 0000:00:1c.3: PME: Signaling with IRQ 126 Nov 1 01:15:07.052994 kernel: thermal LNXTHERM:00: registered as thermal_zone0 Nov 1 01:15:07.053003 kernel: ACPI: thermal: Thermal Zone [TZ00] (28 C) Nov 1 01:15:07.053009 kernel: ERST: Error Record Serialization Table (ERST) support is initialized. 
Nov 1 01:15:07.053015 kernel: pstore: Using crash dump compression: deflate
Nov 1 01:15:07.053022 kernel: pstore: Registered erst as persistent store backend
Nov 1 01:15:07.053028 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Nov 1 01:15:07.053034 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Nov 1 01:15:07.053039 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Nov 1 01:15:07.053045 kernel: 00:03: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Nov 1 01:15:07.053051 kernel: hpet_acpi_add: no address or irqs in _CRS
Nov 1 01:15:07.053101 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x1B, rev-id 16)
Nov 1 01:15:07.053110 kernel: i8042: PNP: No PS/2 controller found.
Nov 1 01:15:07.053156 kernel: rtc_cmos rtc_cmos: RTC can wake from S4
Nov 1 01:15:07.053205 kernel: rtc_cmos rtc_cmos: registered as rtc0
Nov 1 01:15:07.053297 kernel: rtc_cmos rtc_cmos: setting system clock to 2025-11-01T01:15:05 UTC (1761959705)
Nov 1 01:15:07.053344 kernel: rtc_cmos rtc_cmos: alarms up to one month, y3k, 114 bytes nvram
Nov 1 01:15:07.053352 kernel: intel_pstate: Intel P-state driver initializing
Nov 1 01:15:07.053358 kernel: intel_pstate: Disabling energy efficiency optimization
Nov 1 01:15:07.053364 kernel: intel_pstate: HWP enabled
Nov 1 01:15:07.053370 kernel: vesafb: mode is 1024x768x8, linelength=1024, pages=0
Nov 1 01:15:07.053378 kernel: vesafb: scrolling: redraw
Nov 1 01:15:07.053384 kernel: vesafb: Pseudocolor: size=0:8:8:8, shift=0:0:0:0
Nov 1 01:15:07.053390 kernel: vesafb: framebuffer at 0x94000000, mapped to 0x000000003c719224, using 768k, total 768k
Nov 1 01:15:07.053395 kernel: Console: switching to colour frame buffer device 128x48
Nov 1 01:15:07.053401 kernel: fb0: VESA VGA frame buffer device
Nov 1 01:15:07.053407 kernel: NET: Registered PF_INET6 protocol family
Nov 1 01:15:07.053413 kernel: Segment Routing with IPv6
Nov 1 01:15:07.053419 kernel: In-situ OAM (IOAM) with IPv6
Nov 1 01:15:07.053424 kernel: NET: Registered PF_PACKET protocol family
Nov 1 01:15:07.053430 kernel: Key type dns_resolver registered
Nov 1 01:15:07.053437 kernel: microcode: Current revision: 0x000000fc
Nov 1 01:15:07.053443 kernel: microcode: Updated early from: 0x000000f4
Nov 1 01:15:07.053448 kernel: microcode: Microcode Update Driver: v2.2.
Nov 1 01:15:07.053454 kernel: IPI shorthand broadcast: enabled
Nov 1 01:15:07.053460 kernel: sched_clock: Marking stable (1567000800, 1368982728)->(4406633217, -1470649689)
Nov 1 01:15:07.053466 kernel: registered taskstats version 1
Nov 1 01:15:07.053471 kernel: Loading compiled-in X.509 certificates
Nov 1 01:15:07.053477 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.113-flatcar: cc4975b6f5d9e3149f7a95c8552b8f9120c3a1f4'
Nov 1 01:15:07.053483 kernel: Key type .fscrypt registered
Nov 1 01:15:07.053490 kernel: Key type fscrypt-provisioning registered
Nov 1 01:15:07.053495 kernel: ima: Allocated hash algorithm: sha1
Nov 1 01:15:07.053501 kernel: ima: No architecture policies found
Nov 1 01:15:07.053507 kernel: clk: Disabling unused clocks
Nov 1 01:15:07.053513 kernel: Freeing unused kernel image (initmem) memory: 42884K
Nov 1 01:15:07.053518 kernel: Write protecting the kernel read-only data: 36864k
Nov 1 01:15:07.053524 kernel: Freeing unused kernel image (rodata/data gap) memory: 1828K
Nov 1 01:15:07.053530 kernel: Run /init as init process
Nov 1 01:15:07.053536 kernel: with arguments:
Nov 1 01:15:07.053542 kernel: /init
Nov 1 01:15:07.053548 kernel: with environment:
Nov 1 01:15:07.053554 kernel: HOME=/
Nov 1 01:15:07.053559 kernel: TERM=linux
Nov 1 01:15:07.053566 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Nov 1 01:15:07.053573 systemd[1]: Detected architecture x86-64.
Nov 1 01:15:07.053579 systemd[1]: Running in initrd.
Nov 1 01:15:07.053586 systemd[1]: No hostname configured, using default hostname.
Nov 1 01:15:07.053592 systemd[1]: Hostname set to .
Nov 1 01:15:07.053598 systemd[1]: Initializing machine ID from random generator.
Nov 1 01:15:07.053604 systemd[1]: Queued start job for default target initrd.target.
Nov 1 01:15:07.053610 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 1 01:15:07.053616 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 1 01:15:07.053623 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Nov 1 01:15:07.053629 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 1 01:15:07.053636 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Nov 1 01:15:07.053642 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Nov 1 01:15:07.053648 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Nov 1 01:15:07.053655 kernel: tsc: Refined TSC clocksource calibration: 3407.998 MHz
Nov 1 01:15:07.053660 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd208cfc, max_idle_ns: 440795283699 ns
Nov 1 01:15:07.053666 kernel: clocksource: Switched to clocksource tsc
Nov 1 01:15:07.053672 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Nov 1 01:15:07.053679 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 1 01:15:07.053685 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 1 01:15:07.053691 systemd[1]: Reached target paths.target - Path Units.
Nov 1 01:15:07.053698 systemd[1]: Reached target slices.target - Slice Units.
Nov 1 01:15:07.053704 systemd[1]: Reached target swap.target - Swaps.
Nov 1 01:15:07.053710 systemd[1]: Reached target timers.target - Timer Units.
Nov 1 01:15:07.053716 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Nov 1 01:15:07.053721 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 1 01:15:07.053727 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Nov 1 01:15:07.053735 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Nov 1 01:15:07.053741 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 1 01:15:07.053747 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 1 01:15:07.053753 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 1 01:15:07.053759 systemd[1]: Reached target sockets.target - Socket Units.
Nov 1 01:15:07.053764 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Nov 1 01:15:07.053770 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 1 01:15:07.053776 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Nov 1 01:15:07.053783 systemd[1]: Starting systemd-fsck-usr.service...
Nov 1 01:15:07.053790 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 1 01:15:07.053806 systemd-journald[269]: Collecting audit messages is disabled.
Nov 1 01:15:07.053820 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 1 01:15:07.053828 systemd-journald[269]: Journal started
Nov 1 01:15:07.053841 systemd-journald[269]: Runtime Journal (/run/log/journal/325db1ff8c14485b8a3bc3e226cc3d8f) is 8.0M, max 639.9M, 631.9M free.
Nov 1 01:15:07.067521 systemd-modules-load[271]: Inserted module 'overlay'
Nov 1 01:15:07.088209 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 1 01:15:07.117210 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 1 01:15:07.117217 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Nov 1 01:15:07.188318 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Nov 1 01:15:07.188331 kernel: Bridge firewalling registered
Nov 1 01:15:07.177388 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 1 01:15:07.179306 systemd-modules-load[271]: Inserted module 'br_netfilter'
Nov 1 01:15:07.199511 systemd[1]: Finished systemd-fsck-usr.service.
Nov 1 01:15:07.220554 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 1 01:15:07.240566 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 1 01:15:07.270488 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 1 01:15:07.279882 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 1 01:15:07.308112 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Nov 1 01:15:07.318417 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 1 01:15:07.324023 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 1 01:15:07.324782 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 1 01:15:07.325190 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 1 01:15:07.327511 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 1 01:15:07.328375 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 1 01:15:07.329701 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 1 01:15:07.332463 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 1 01:15:07.343780 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Nov 1 01:15:07.345524 systemd-resolved[303]: Positive Trust Anchors:
Nov 1 01:15:07.345529 systemd-resolved[303]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 1 01:15:07.345554 systemd-resolved[303]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 1 01:15:07.347154 systemd-resolved[303]: Defaulting to hostname 'linux'.
Nov 1 01:15:07.371436 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 1 01:15:07.378499 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 1 01:15:07.501579 dracut-cmdline[308]: dracut-dracut-053
Nov 1 01:15:07.510423 dracut-cmdline[308]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=ade41980c48607de3d2d18dc444731ec5388853e3a75ed2d5a13ce616b36f478
Nov 1 01:15:07.711214 kernel: SCSI subsystem initialized
Nov 1 01:15:07.735210 kernel: Loading iSCSI transport class v2.0-870.
Nov 1 01:15:07.758209 kernel: iscsi: registered transport (tcp)
Nov 1 01:15:07.790611 kernel: iscsi: registered transport (qla4xxx)
Nov 1 01:15:07.790628 kernel: QLogic iSCSI HBA Driver
Nov 1 01:15:07.823436 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Nov 1 01:15:07.857590 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Nov 1 01:15:07.914535 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Nov 1 01:15:07.914554 kernel: device-mapper: uevent: version 1.0.3
Nov 1 01:15:07.934202 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Nov 1 01:15:07.993273 kernel: raid6: avx2x4 gen() 52363 MB/s
Nov 1 01:15:08.025235 kernel: raid6: avx2x2 gen() 52999 MB/s
Nov 1 01:15:08.061586 kernel: raid6: avx2x1 gen() 45147 MB/s
Nov 1 01:15:08.061602 kernel: raid6: using algorithm avx2x2 gen() 52999 MB/s
Nov 1 01:15:08.108661 kernel: raid6: .... xor() 31264 MB/s, rmw enabled
Nov 1 01:15:08.108678 kernel: raid6: using avx2x2 recovery algorithm
Nov 1 01:15:08.150224 kernel: xor: automatically using best checksumming function avx
Nov 1 01:15:08.268240 kernel: Btrfs loaded, zoned=no, fsverity=no
Nov 1 01:15:08.274305 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Nov 1 01:15:08.305542 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 1 01:15:08.312611 systemd-udevd[495]: Using default interface naming scheme 'v255'.
Nov 1 01:15:08.316287 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 1 01:15:08.338997 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Nov 1 01:15:08.396386 dracut-pre-trigger[508]: rd.md=0: removing MD RAID activation
Nov 1 01:15:08.413536 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 1 01:15:08.439470 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 1 01:15:08.527459 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 1 01:15:08.570373 kernel: pps_core: LinuxPPS API ver. 1 registered
Nov 1 01:15:08.570395 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Nov 1 01:15:08.540368 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Nov 1 01:15:08.600964 kernel: PTP clock support registered
Nov 1 01:15:08.600983 kernel: cryptd: max_cpu_qlen set to 1000
Nov 1 01:15:08.573494 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Nov 1 01:15:08.690524 kernel: libata version 3.00 loaded.
Nov 1 01:15:08.690553 kernel: ACPI: bus type USB registered
Nov 1 01:15:08.690571 kernel: AVX2 version of gcm_enc/dec engaged.
Nov 1 01:15:08.690587 kernel: usbcore: registered new interface driver usbfs
Nov 1 01:15:08.690602 kernel: usbcore: registered new interface driver hub
Nov 1 01:15:08.690617 kernel: usbcore: registered new device driver usb
Nov 1 01:15:08.573581 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 1 01:15:08.706300 kernel: AES CTR mode by8 optimization enabled
Nov 1 01:15:08.690701 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 1 01:15:09.170336 kernel: ahci 0000:00:17.0: version 3.0
Nov 1 01:15:09.170437 kernel: igb: Intel(R) Gigabit Ethernet Network Driver
Nov 1 01:15:09.170447 kernel: ahci 0000:00:17.0: AHCI 0001.0301 32 slots 7 ports 6 Gbps 0x7f impl SATA mode
Nov 1 01:15:09.170520 kernel: igb: Copyright (c) 2007-2014 Intel Corporation.
Nov 1 01:15:09.170529 kernel: ahci 0000:00:17.0: flags: 64bit ncq sntf clo only pio slum part ems deso sadm sds apst
Nov 1 01:15:09.170594 kernel: igb 0000:03:00.0: added PHC on eth0
Nov 1 01:15:09.170667 kernel: scsi host0: ahci
Nov 1 01:15:09.170733 kernel: igb 0000:03:00.0: Intel(R) Gigabit Ethernet Network Connection
Nov 1 01:15:09.170799 kernel: scsi host1: ahci
Nov 1 01:15:09.170861 kernel: igb 0000:03:00.0: eth0: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:6a:ef:9e
Nov 1 01:15:09.170928 kernel: scsi host2: ahci
Nov 1 01:15:09.170994 kernel: igb 0000:03:00.0: eth0: PBA No: 010000-000
Nov 1 01:15:09.171059 kernel: scsi host3: ahci
Nov 1 01:15:09.171121 kernel: igb 0000:03:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s)
Nov 1 01:15:09.171185 kernel: scsi host4: ahci
Nov 1 01:15:09.171252 kernel: igb 0000:04:00.0: added PHC on eth1
Nov 1 01:15:09.171321 kernel: scsi host5: ahci
Nov 1 01:15:09.171384 kernel: igb 0000:04:00.0: Intel(R) Gigabit Ethernet Network Connection
Nov 1 01:15:09.171450 kernel: scsi host6: ahci
Nov 1 01:15:09.171510 kernel: igb 0000:04:00.0: eth1: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:6a:ef:9f
Nov 1 01:15:09.171575 kernel: ata1: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516100 irq 127
Nov 1 01:15:09.171583 kernel: igb 0000:04:00.0: eth1: PBA No: 010000-000
Nov 1 01:15:09.171646 kernel: ata2: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516180 irq 127
Nov 1 01:15:09.171655 kernel: igb 0000:04:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s)
Nov 1 01:15:09.171717 kernel: ata3: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516200 irq 127
Nov 1 01:15:09.171727 kernel: ata4: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516280 irq 127
Nov 1 01:15:09.171735 kernel: ata5: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516300 irq 127
Nov 1 01:15:09.171742 kernel: ata6: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516380 irq 127
Nov 1 01:15:09.171749 kernel: ata7: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516400 irq 127
Nov 1 01:15:09.171757 kernel: mlx5_core 0000:01:00.0: firmware version: 14.27.1016
Nov 1 01:15:09.171824 kernel: mlx5_core 0000:01:00.0: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link)
Nov 1 01:15:08.724287 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 1 01:15:08.724389 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 1 01:15:09.151780 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Nov 1 01:15:09.218412 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 1 01:15:09.237513 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Nov 1 01:15:09.247710 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 1 01:15:09.247733 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 1 01:15:09.247757 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 1 01:15:09.257369 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Nov 1 01:15:09.331412 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 1 01:15:09.342407 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Nov 1 01:15:09.381393 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 1 01:15:09.396694 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 1 01:15:09.466259 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Nov 1 01:15:09.466278 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Nov 1 01:15:09.466291 kernel: ata2: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
Nov 1 01:15:09.466301 kernel: mlx5_core 0000:01:00.0: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384)
Nov 1 01:15:09.466409 kernel: ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
Nov 1 01:15:09.466419 kernel: mlx5_core 0000:01:00.0: Port module event: module 0, Cable plugged
Nov 1 01:15:09.466498 kernel: ata7: SATA link down (SStatus 0 SControl 300)
Nov 1 01:15:09.505207 kernel: ata2.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133
Nov 1 01:15:09.523248 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Nov 1 01:15:09.538252 kernel: ata1.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133
Nov 1 01:15:09.555242 kernel: ata3: SATA link down (SStatus 0 SControl 300)
Nov 1 01:15:09.591244 kernel: ata1.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA
Nov 1 01:15:09.591260 kernel: ata2.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA
Nov 1 01:15:09.628238 kernel: ata1.00: Features: NCQ-prio
Nov 1 01:15:09.643237 kernel: ata2.00: Features: NCQ-prio
Nov 1 01:15:09.658256 kernel: mlx5_core 0000:01:00.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic)
Nov 1 01:15:09.658341 kernel: ata1.00: configured for UDMA/133
Nov 1 01:15:09.662237 kernel: mlx5_core 0000:01:00.1: firmware version: 14.27.1016
Nov 1 01:15:09.662327 kernel: scsi 0:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5
Nov 1 01:15:09.668253 kernel: ata2.00: configured for UDMA/133
Nov 1 01:15:09.668269 kernel: mlx5_core 0000:01:00.1: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link)
Nov 1 01:15:09.751274 kernel: scsi 1:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5
Nov 1 01:15:09.787225 kernel: igb 0000:03:00.0 eno1: renamed from eth0
Nov 1 01:15:09.787349 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller
Nov 1 01:15:09.810031 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 1
Nov 1 01:15:09.842265 kernel: xhci_hcd 0000:00:14.0: hcc params 0x200077c1 hci version 0x110 quirks 0x0000000000009810
Nov 1 01:15:09.842353 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller
Nov 1 01:15:09.852094 kernel: igb 0000:04:00.0 eno2: renamed from eth1
Nov 1 01:15:09.852193 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 2
Nov 1 01:15:09.897185 kernel: xhci_hcd 0000:00:14.0: Host supports USB 3.1 Enhanced SuperSpeed
Nov 1 01:15:09.908207 kernel: hub 1-0:1.0: USB hub found
Nov 1 01:15:09.922254 kernel: hub 1-0:1.0: 16 ports detected
Nov 1 01:15:09.960721 kernel: hub 2-0:1.0: USB hub found
Nov 1 01:15:09.960830 kernel: mlx5_core 0000:01:00.1: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384)
Nov 1 01:15:09.960922 kernel: hub 2-0:1.0: 10 ports detected
Nov 1 01:15:09.975208 kernel: mlx5_core 0000:01:00.1: Port module event: module 1, Cable plugged
Nov 1 01:15:10.018359 kernel: ata2.00: Enabling discard_zeroes_data
Nov 1 01:15:10.018376 kernel: ata1.00: Enabling discard_zeroes_data
Nov 1 01:15:10.018385 kernel: sd 1:0:0:0: [sda] 937703088 512-byte logical blocks: (480 GB/447 GiB)
Nov 1 01:15:10.023086 kernel: sd 0:0:0:0: [sdb] 937703088 512-byte logical blocks: (480 GB/447 GiB)
Nov 1 01:15:10.037999 kernel: sd 1:0:0:0: [sda] 4096-byte physical blocks
Nov 1 01:15:10.038074 kernel: sd 0:0:0:0: [sdb] 4096-byte physical blocks
Nov 1 01:15:10.044233 kernel: sd 1:0:0:0: [sda] Write Protect is off
Nov 1 01:15:10.053265 kernel: sd 0:0:0:0: [sdb] Write Protect is off
Nov 1 01:15:10.053336 kernel: sd 1:0:0:0: [sda] Mode Sense: 00 3a 00 00
Nov 1 01:15:10.058073 kernel: sd 0:0:0:0: [sdb] Mode Sense: 00 3a 00 00
Nov 1 01:15:10.078092 kernel: sd 1:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Nov 1 01:15:10.078182 kernel: sd 0:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Nov 1 01:15:10.087244 kernel: sd 1:0:0:0: [sda] Preferred minimum I/O size 4096 bytes
Nov 1 01:15:10.096252 kernel: sd 0:0:0:0: [sdb] Preferred minimum I/O size 4096 bytes
Nov 1 01:15:10.103271 kernel: ata2.00: Enabling discard_zeroes_data
Nov 1 01:15:10.182021 kernel: usb 1-14: new high-speed USB device number 2 using xhci_hcd
Nov 1 01:15:10.182047 kernel: ata1.00: Enabling discard_zeroes_data
Nov 1 01:15:10.203108 kernel: mlx5_core 0000:01:00.1: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic)
Nov 1 01:15:10.212262 kernel: sd 0:0:0:0: [sdb] Attached SCSI disk
Nov 1 01:15:10.275209 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Nov 1 01:15:10.295734 kernel: GPT:9289727 != 937703087
Nov 1 01:15:10.311300 kernel: GPT:Alternate GPT header not at the end of the disk.
Nov 1 01:15:10.324710 kernel: hub 1-14:1.0: USB hub found
Nov 1 01:15:10.324807 kernel: GPT:9289727 != 937703087
Nov 1 01:15:10.324816 kernel: GPT: Use GNU Parted to correct GPT errors.
Nov 1 01:15:10.324823 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Nov 1 01:15:10.324830 kernel: hub 1-14:1.0: 4 ports detected
Nov 1 01:15:10.333464 kernel: sd 1:0:0:0: [sda] Attached SCSI disk
Nov 1 01:15:10.423211 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: renamed from eth0
Nov 1 01:15:10.445250 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: renamed from eth2
Nov 1 01:15:10.452432 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Micron_5300_MTFDDAK480TDT EFI-SYSTEM.
Nov 1 01:15:10.509485 kernel: BTRFS: device fsid 5d5360dd-ce7d-46d0-bc66-772f2084023b devid 1 transid 34 /dev/sda3 scanned by (udev-worker) (558)
Nov 1 01:15:10.509503 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by (udev-worker) (553)
Nov 1 01:15:10.500942 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Micron_5300_MTFDDAK480TDT ROOT.
Nov 1 01:15:10.523576 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Micron_5300_MTFDDAK480TDT USR-A.
Nov 1 01:15:10.556389 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Micron_5300_MTFDDAK480TDT USR-A.
Nov 1 01:15:10.583344 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Micron_5300_MTFDDAK480TDT OEM.
Nov 1 01:15:10.621578 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Nov 1 01:15:10.681314 kernel: usb 1-14.1: new low-speed USB device number 3 using xhci_hcd
Nov 1 01:15:10.681336 kernel: ata2.00: Enabling discard_zeroes_data
Nov 1 01:15:10.681344 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Nov 1 01:15:10.681351 kernel: ata2.00: Enabling discard_zeroes_data
Nov 1 01:15:10.681358 disk-uuid[728]: Primary Header is updated.
Nov 1 01:15:10.681358 disk-uuid[728]: Secondary Entries is updated.
Nov 1 01:15:10.681358 disk-uuid[728]: Secondary Header is updated.
Nov 1 01:15:10.733310 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Nov 1 01:15:10.733323 kernel: ata2.00: Enabling discard_zeroes_data
Nov 1 01:15:10.733334 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Nov 1 01:15:10.763273 kernel: hid: raw HID events driver (C) Jiri Kosina
Nov 1 01:15:10.784895 kernel: usbcore: registered new interface driver usbhid
Nov 1 01:15:10.784924 kernel: usbhid: USB HID core driver
Nov 1 01:15:10.827274 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.0/0003:0557:2419.0001/input/input0
Nov 1 01:15:10.922149 kernel: hid-generic 0003:0557:2419.0001: input,hidraw0: USB HID v1.00 Keyboard [HID 0557:2419] on usb-0000:00:14.0-14.1/input0
Nov 1 01:15:10.922280 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.1/0003:0557:2419.0002/input/input1
Nov 1 01:15:10.954591 kernel: hid-generic 0003:0557:2419.0002: input,hidraw1: USB HID v1.00 Mouse [HID 0557:2419] on usb-0000:00:14.0-14.1/input1
Nov 1 01:15:11.722372 kernel: ata2.00: Enabling discard_zeroes_data
Nov 1 01:15:11.741775 disk-uuid[729]: The operation has completed successfully.
Nov 1 01:15:11.751275 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Nov 1 01:15:11.776540 systemd[1]: disk-uuid.service: Deactivated successfully.
Nov 1 01:15:11.776590 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Nov 1 01:15:11.812482 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Nov 1 01:15:11.850253 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Nov 1 01:15:11.850340 sh[746]: Success
Nov 1 01:15:11.893822 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Nov 1 01:15:11.922488 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Nov 1 01:15:11.930552 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Nov 1 01:15:11.987097 kernel: BTRFS info (device dm-0): first mount of filesystem 5d5360dd-ce7d-46d0-bc66-772f2084023b
Nov 1 01:15:11.987121 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Nov 1 01:15:12.008453 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Nov 1 01:15:12.027345 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Nov 1 01:15:12.045086 kernel: BTRFS info (device dm-0): using free space tree
Nov 1 01:15:12.083280 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Nov 1 01:15:12.085905 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Nov 1 01:15:12.094660 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Nov 1 01:15:12.101477 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Nov 1 01:15:12.127407 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Nov 1 01:15:12.164252 kernel: BTRFS info (device sda6): first mount of filesystem b6c6a5a1-6657-40cc-8fa9-bb3050afe890
Nov 1 01:15:12.164296 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Nov 1 01:15:12.164319 kernel: BTRFS info (device sda6): using free space tree
Nov 1 01:15:12.206492 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 1 01:15:12.274471 kernel: BTRFS info (device sda6): enabling ssd optimizations
Nov 1 01:15:12.274488 kernel: BTRFS info (device sda6): auto enabling async discard
Nov 1 01:15:12.274497 kernel: BTRFS info (device sda6): last unmount of filesystem b6c6a5a1-6657-40cc-8fa9-bb3050afe890
Nov 1 01:15:12.261588 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 1 01:15:12.284544 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Nov 1 01:15:12.295082 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Nov 1 01:15:12.304429 systemd-networkd[926]: lo: Link UP
Nov 1 01:15:12.304431 systemd-networkd[926]: lo: Gained carrier
Nov 1 01:15:12.306854 systemd-networkd[926]: Enumeration completed
Nov 1 01:15:12.307571 systemd-networkd[926]: eno1: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 1 01:15:12.310456 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 1 01:15:12.335056 systemd-networkd[926]: eno2: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 1 01:15:12.342522 systemd[1]: Reached target network.target - Network.
Nov 1 01:15:12.363662 systemd-networkd[926]: enp1s0f0np0: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 1 01:15:12.404316 ignition[930]: Ignition 2.19.0
Nov 1 01:15:12.406492 unknown[930]: fetched base config from "system"
Nov 1 01:15:12.404320 ignition[930]: Stage: fetch-offline
Nov 1 01:15:12.406496 unknown[930]: fetched user config from "system"
Nov 1 01:15:12.404339 ignition[930]: no configs at "/usr/lib/ignition/base.d"
Nov 1 01:15:12.407563 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 1 01:15:12.404345 ignition[930]: no config dir at "/usr/lib/ignition/base.platform.d/packet"
Nov 1 01:15:12.432601 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Nov 1 01:15:12.404400 ignition[930]: parsed url from cmdline: ""
Nov 1 01:15:12.438451 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Nov 1 01:15:12.547336 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: Link up
Nov 1 01:15:12.404402 ignition[930]: no config URL provided
Nov 1 01:15:12.540257 systemd-networkd[926]: enp1s0f1np1: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 1 01:15:12.404405 ignition[930]: reading system config file "/usr/lib/ignition/user.ign"
Nov 1 01:15:12.404429 ignition[930]: parsing config with SHA512: 3136ae21beb052a0c7498d69f2b33503347654af4560866444cb508851c0528458e05c39363eb2ca4a8129c9999a460cd048b28a79666cd9aac2d58a495d9769
Nov 1 01:15:12.406713 ignition[930]: fetch-offline: fetch-offline passed
Nov 1 01:15:12.406716 ignition[930]: POST message to Packet Timeline
Nov 1 01:15:12.406718 ignition[930]: POST Status error: resource requires networking
Nov 1 01:15:12.406752 ignition[930]: Ignition finished successfully
Nov 1 01:15:12.454794 ignition[944]: Ignition 2.19.0
Nov 1 01:15:12.454801 ignition[944]: Stage: kargs
Nov 1 01:15:12.454962 ignition[944]: no configs at "/usr/lib/ignition/base.d"
Nov 1 01:15:12.454973 ignition[944]: no config dir at "/usr/lib/ignition/base.platform.d/packet"
Nov 1 01:15:12.455788 ignition[944]: kargs: kargs passed
Nov 1 01:15:12.455792 ignition[944]: POST message to Packet Timeline
Nov 1 01:15:12.455806 ignition[944]: GET https://metadata.packet.net/metadata: attempt #1
Nov 1 01:15:12.456397 ignition[944]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:35141->[::1]:53: read: connection refused
Nov 1 01:15:12.656585 ignition[944]: GET https://metadata.packet.net/metadata: attempt #2
Nov 1 01:15:12.657003 ignition[944]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:52280->[::1]:53: read: connection refused
Nov 1 01:15:12.720364 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: Link up
Nov 1 01:15:12.717897 systemd-networkd[926]: eno1: Link UP
Nov 1 01:15:12.718047 systemd-networkd[926]: eno2: Link UP
Nov 1 01:15:12.718187 systemd-networkd[926]: enp1s0f0np0: Link UP
Nov 1 01:15:12.718371 systemd-networkd[926]: enp1s0f0np0: Gained carrier
Nov 1 01:15:12.733410 systemd-networkd[926]: enp1s0f1np1: Link UP
Nov 1 01:15:12.756382 systemd-networkd[926]: enp1s0f0np0: DHCPv4 address 139.178.94.199/31, gateway 139.178.94.198 acquired from 145.40.83.140
Nov 1 01:15:13.058223 ignition[944]: GET https://metadata.packet.net/metadata: attempt #3
Nov 1 01:15:13.059484 ignition[944]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:46159->[::1]:53: read: connection refused
Nov 1 01:15:13.586011 systemd-networkd[926]: enp1s0f1np1: Gained carrier
Nov 1 01:15:13.777830 systemd-networkd[926]: enp1s0f0np0: Gained IPv6LL
Nov 1 01:15:13.859874 ignition[944]: GET https://metadata.packet.net/metadata: attempt #4
Nov 1 01:15:13.860986 ignition[944]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:59559->[::1]:53: read: connection refused
Nov 1 01:15:14.929815 systemd-networkd[926]: enp1s0f1np1: Gained IPv6LL
Nov 1 01:15:15.462361 ignition[944]: GET https://metadata.packet.net/metadata: attempt #5
Nov 1 01:15:15.463614 ignition[944]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:57609->[::1]:53: read: connection refused
Nov 1 01:15:18.667243 ignition[944]: GET https://metadata.packet.net/metadata: attempt #6
Nov 1 01:15:19.760909 ignition[944]: GET result: OK
Nov 1 01:15:20.616560 ignition[944]: Ignition finished successfully
Nov 1 01:15:20.621585 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Nov 1 01:15:20.650474 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Nov 1 01:15:20.656747 ignition[964]: Ignition 2.19.0 Nov 1 01:15:20.656751 ignition[964]: Stage: disks Nov 1 01:15:20.656867 ignition[964]: no configs at "/usr/lib/ignition/base.d" Nov 1 01:15:20.656874 ignition[964]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Nov 1 01:15:20.657435 ignition[964]: disks: disks passed Nov 1 01:15:20.657438 ignition[964]: POST message to Packet Timeline Nov 1 01:15:20.657448 ignition[964]: GET https://metadata.packet.net/metadata: attempt #1 Nov 1 01:15:21.764948 ignition[964]: GET result: OK Nov 1 01:15:22.627632 ignition[964]: Ignition finished successfully Nov 1 01:15:22.630913 systemd[1]: Finished ignition-disks.service - Ignition (disks). Nov 1 01:15:22.646613 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Nov 1 01:15:22.664509 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Nov 1 01:15:22.685532 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 1 01:15:22.706613 systemd[1]: Reached target sysinit.target - System Initialization. Nov 1 01:15:22.727612 systemd[1]: Reached target basic.target - Basic System. Nov 1 01:15:22.762494 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Nov 1 01:15:22.800951 systemd-fsck[981]: ROOT: clean, 14/553520 files, 52654/553472 blocks Nov 1 01:15:22.810683 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Nov 1 01:15:22.833475 systemd[1]: Mounting sysroot.mount - /sysroot... Nov 1 01:15:22.936251 kernel: EXT4-fs (sda9): mounted filesystem cb9d31b8-5e00-461c-b45e-c304d1f8091c r/w with ordered data mode. Quota mode: none. Nov 1 01:15:22.936247 systemd[1]: Mounted sysroot.mount - /sysroot. Nov 1 01:15:22.945650 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Nov 1 01:15:22.969263 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Nov 1 01:15:23.015272 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by mount (990) Nov 1 01:15:23.015291 kernel: BTRFS info (device sda6): first mount of filesystem b6c6a5a1-6657-40cc-8fa9-bb3050afe890 Nov 1 01:15:23.035912 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Nov 1 01:15:23.054388 kernel: BTRFS info (device sda6): using free space tree Nov 1 01:15:23.076308 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Nov 1 01:15:23.125448 kernel: BTRFS info (device sda6): enabling ssd optimizations Nov 1 01:15:23.125460 kernel: BTRFS info (device sda6): auto enabling async discard Nov 1 01:15:23.116026 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Nov 1 01:15:23.143308 systemd[1]: Starting flatcar-static-network.service - Flatcar Static Network Agent... Nov 1 01:15:23.159302 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Nov 1 01:15:23.159320 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Nov 1 01:15:23.216438 coreos-metadata[1007]: Nov 01 01:15:23.208 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Nov 1 01:15:23.180246 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Nov 1 01:15:23.239495 coreos-metadata[1008]: Nov 01 01:15:23.208 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Nov 1 01:15:23.205436 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Nov 1 01:15:23.239441 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Nov 1 01:15:23.285388 initrd-setup-root[1022]: cut: /sysroot/etc/passwd: No such file or directory Nov 1 01:15:23.296270 initrd-setup-root[1029]: cut: /sysroot/etc/group: No such file or directory Nov 1 01:15:23.307317 initrd-setup-root[1036]: cut: /sysroot/etc/shadow: No such file or directory Nov 1 01:15:23.317279 initrd-setup-root[1043]: cut: /sysroot/etc/gshadow: No such file or directory Nov 1 01:15:23.335715 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Nov 1 01:15:23.358418 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Nov 1 01:15:23.394418 kernel: BTRFS info (device sda6): last unmount of filesystem b6c6a5a1-6657-40cc-8fa9-bb3050afe890 Nov 1 01:15:23.376923 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Nov 1 01:15:23.402984 systemd[1]: sysroot-oem.mount: Deactivated successfully. Nov 1 01:15:23.418383 ignition[1111]: INFO : Ignition 2.19.0 Nov 1 01:15:23.418383 ignition[1111]: INFO : Stage: mount Nov 1 01:15:23.418383 ignition[1111]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 1 01:15:23.418383 ignition[1111]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Nov 1 01:15:23.418383 ignition[1111]: INFO : mount: mount passed Nov 1 01:15:23.418383 ignition[1111]: INFO : POST message to Packet Timeline Nov 1 01:15:23.418383 ignition[1111]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Nov 1 01:15:23.421050 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Nov 1 01:15:24.278733 coreos-metadata[1007]: Nov 01 01:15:24.278 INFO Fetch successful Nov 1 01:15:24.290188 coreos-metadata[1008]: Nov 01 01:15:24.290 INFO Fetch successful Nov 1 01:15:24.320391 coreos-metadata[1007]: Nov 01 01:15:24.320 INFO wrote hostname ci-4081.3.6-n-61efafd0e9 to /sysroot/etc/hostname Nov 1 01:15:24.321686 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. 
Nov 1 01:15:24.346572 systemd[1]: flatcar-static-network.service: Deactivated successfully. Nov 1 01:15:24.346616 systemd[1]: Finished flatcar-static-network.service - Flatcar Static Network Agent. Nov 1 01:15:24.411256 ignition[1111]: INFO : GET result: OK Nov 1 01:15:24.823732 ignition[1111]: INFO : Ignition finished successfully Nov 1 01:15:24.825638 systemd[1]: Finished ignition-mount.service - Ignition (mount). Nov 1 01:15:24.854509 systemd[1]: Starting ignition-files.service - Ignition (files)... Nov 1 01:15:24.866554 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 1 01:15:24.924365 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (1137) Nov 1 01:15:24.954638 kernel: BTRFS info (device sda6): first mount of filesystem b6c6a5a1-6657-40cc-8fa9-bb3050afe890 Nov 1 01:15:24.954655 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Nov 1 01:15:24.972959 kernel: BTRFS info (device sda6): using free space tree Nov 1 01:15:25.012081 kernel: BTRFS info (device sda6): enabling ssd optimizations Nov 1 01:15:25.012097 kernel: BTRFS info (device sda6): auto enabling async discard Nov 1 01:15:25.026408 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Nov 1 01:15:25.051990 ignition[1154]: INFO : Ignition 2.19.0 Nov 1 01:15:25.051990 ignition[1154]: INFO : Stage: files Nov 1 01:15:25.067436 ignition[1154]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 1 01:15:25.067436 ignition[1154]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Nov 1 01:15:25.067436 ignition[1154]: DEBUG : files: compiled without relabeling support, skipping Nov 1 01:15:25.067436 ignition[1154]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Nov 1 01:15:25.067436 ignition[1154]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Nov 1 01:15:25.067436 ignition[1154]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Nov 1 01:15:25.067436 ignition[1154]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Nov 1 01:15:25.067436 ignition[1154]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Nov 1 01:15:25.067436 ignition[1154]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Nov 1 01:15:25.067436 ignition[1154]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Nov 1 01:15:25.057016 unknown[1154]: wrote ssh authorized keys file for user: core Nov 1 01:15:25.202520 ignition[1154]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Nov 1 01:15:25.202520 ignition[1154]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Nov 1 01:15:25.202520 ignition[1154]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Nov 1 01:15:25.202520 ignition[1154]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Nov 1 
01:15:25.202520 ignition[1154]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Nov 1 01:15:25.202520 ignition[1154]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Nov 1 01:15:25.202520 ignition[1154]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 1 01:15:25.202520 ignition[1154]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 1 01:15:25.202520 ignition[1154]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 1 01:15:25.202520 ignition[1154]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 1 01:15:25.202520 ignition[1154]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Nov 1 01:15:25.202520 ignition[1154]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Nov 1 01:15:25.202520 ignition[1154]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Nov 1 01:15:25.202520 ignition[1154]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Nov 1 01:15:25.202520 ignition[1154]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Nov 1 01:15:25.451559 ignition[1154]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET 
https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Nov 1 01:15:25.702836 ignition[1154]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Nov 1 01:15:26.617848 ignition[1154]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Nov 1 01:15:26.617848 ignition[1154]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Nov 1 01:15:26.647474 ignition[1154]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 1 01:15:26.647474 ignition[1154]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 1 01:15:26.647474 ignition[1154]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Nov 1 01:15:26.647474 ignition[1154]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Nov 1 01:15:26.647474 ignition[1154]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Nov 1 01:15:26.647474 ignition[1154]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Nov 1 01:15:26.647474 ignition[1154]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Nov 1 01:15:26.647474 ignition[1154]: INFO : files: files passed Nov 1 01:15:26.647474 ignition[1154]: INFO : POST message to Packet Timeline Nov 1 01:15:26.647474 ignition[1154]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Nov 1 01:15:27.472060 ignition[1154]: INFO : GET result: OK Nov 1 01:15:28.583693 ignition[1154]: INFO : Ignition finished successfully Nov 1 01:15:28.587845 systemd[1]: Finished ignition-files.service - Ignition (files). 
Nov 1 01:15:28.613469 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Nov 1 01:15:28.623829 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Nov 1 01:15:28.633510 systemd[1]: ignition-quench.service: Deactivated successfully. Nov 1 01:15:28.633566 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Nov 1 01:15:28.694769 initrd-setup-root-after-ignition[1194]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 1 01:15:28.694769 initrd-setup-root-after-ignition[1194]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Nov 1 01:15:28.734529 initrd-setup-root-after-ignition[1198]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 1 01:15:28.699583 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 1 01:15:28.721410 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Nov 1 01:15:28.763470 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Nov 1 01:15:28.808086 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Nov 1 01:15:28.808384 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Nov 1 01:15:28.828742 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Nov 1 01:15:28.848568 systemd[1]: Reached target initrd.target - Initrd Default Target. Nov 1 01:15:28.868720 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Nov 1 01:15:28.886423 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Nov 1 01:15:28.938585 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 1 01:15:28.968446 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... 
Nov 1 01:15:28.973760 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Nov 1 01:15:28.999493 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 1 01:15:29.020779 systemd[1]: Stopped target timers.target - Timer Units. Nov 1 01:15:29.038892 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Nov 1 01:15:29.039332 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 1 01:15:29.066040 systemd[1]: Stopped target initrd.target - Initrd Default Target. Nov 1 01:15:29.087902 systemd[1]: Stopped target basic.target - Basic System. Nov 1 01:15:29.105894 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Nov 1 01:15:29.123904 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Nov 1 01:15:29.144879 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Nov 1 01:15:29.165892 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Nov 1 01:15:29.185893 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Nov 1 01:15:29.206931 systemd[1]: Stopped target sysinit.target - System Initialization. Nov 1 01:15:29.227919 systemd[1]: Stopped target local-fs.target - Local File Systems. Nov 1 01:15:29.247936 systemd[1]: Stopped target swap.target - Swaps. Nov 1 01:15:29.267831 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Nov 1 01:15:29.268265 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Nov 1 01:15:29.303658 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Nov 1 01:15:29.313958 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 1 01:15:29.334800 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Nov 1 01:15:29.335257 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Nov 1 01:15:29.357801 systemd[1]: dracut-initqueue.service: Deactivated successfully. Nov 1 01:15:29.358223 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Nov 1 01:15:29.388914 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Nov 1 01:15:29.389394 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Nov 1 01:15:29.409145 systemd[1]: Stopped target paths.target - Path Units. Nov 1 01:15:29.427771 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Nov 1 01:15:29.432505 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 1 01:15:29.449936 systemd[1]: Stopped target slices.target - Slice Units. Nov 1 01:15:29.468914 systemd[1]: Stopped target sockets.target - Socket Units. Nov 1 01:15:29.487888 systemd[1]: iscsid.socket: Deactivated successfully. Nov 1 01:15:29.488223 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Nov 1 01:15:29.499106 systemd[1]: iscsiuio.socket: Deactivated successfully. Nov 1 01:15:29.499449 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 1 01:15:29.530012 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Nov 1 01:15:29.646336 ignition[1218]: INFO : Ignition 2.19.0 Nov 1 01:15:29.646336 ignition[1218]: INFO : Stage: umount Nov 1 01:15:29.646336 ignition[1218]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 1 01:15:29.646336 ignition[1218]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Nov 1 01:15:29.646336 ignition[1218]: INFO : umount: umount passed Nov 1 01:15:29.646336 ignition[1218]: INFO : POST message to Packet Timeline Nov 1 01:15:29.646336 ignition[1218]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Nov 1 01:15:29.530448 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. 
Nov 1 01:15:29.550007 systemd[1]: ignition-files.service: Deactivated successfully. Nov 1 01:15:29.550416 systemd[1]: Stopped ignition-files.service - Ignition (files). Nov 1 01:15:29.568028 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Nov 1 01:15:29.568450 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Nov 1 01:15:29.603500 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Nov 1 01:15:29.619021 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Nov 1 01:15:29.628563 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Nov 1 01:15:29.628691 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Nov 1 01:15:29.657645 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Nov 1 01:15:29.657849 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Nov 1 01:15:29.700657 systemd[1]: sysroot-boot.mount: Deactivated successfully. Nov 1 01:15:29.702682 systemd[1]: sysroot-boot.service: Deactivated successfully. Nov 1 01:15:29.702951 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Nov 1 01:15:29.718443 systemd[1]: initrd-cleanup.service: Deactivated successfully. Nov 1 01:15:29.718709 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Nov 1 01:15:30.828077 ignition[1218]: INFO : GET result: OK Nov 1 01:15:31.685708 ignition[1218]: INFO : Ignition finished successfully Nov 1 01:15:31.688787 systemd[1]: ignition-mount.service: Deactivated successfully. Nov 1 01:15:31.689092 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Nov 1 01:15:31.706642 systemd[1]: Stopped target network.target - Network. Nov 1 01:15:31.721448 systemd[1]: ignition-disks.service: Deactivated successfully. Nov 1 01:15:31.721637 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Nov 1 01:15:31.740589 systemd[1]: ignition-kargs.service: Deactivated successfully. 
Nov 1 01:15:31.740751 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Nov 1 01:15:31.759741 systemd[1]: ignition-setup.service: Deactivated successfully. Nov 1 01:15:31.759902 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Nov 1 01:15:31.778721 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Nov 1 01:15:31.778887 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Nov 1 01:15:31.797718 systemd[1]: initrd-setup-root.service: Deactivated successfully. Nov 1 01:15:31.797884 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Nov 1 01:15:31.816928 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Nov 1 01:15:31.826368 systemd-networkd[926]: enp1s0f1np1: DHCPv6 lease lost Nov 1 01:15:31.834682 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Nov 1 01:15:31.835391 systemd-networkd[926]: enp1s0f0np0: DHCPv6 lease lost Nov 1 01:15:31.853301 systemd[1]: systemd-resolved.service: Deactivated successfully. Nov 1 01:15:31.853584 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Nov 1 01:15:31.872378 systemd[1]: systemd-networkd.service: Deactivated successfully. Nov 1 01:15:31.872716 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Nov 1 01:15:31.892902 systemd[1]: systemd-networkd.socket: Deactivated successfully. Nov 1 01:15:31.893025 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Nov 1 01:15:31.925531 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Nov 1 01:15:31.941409 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Nov 1 01:15:31.941657 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 1 01:15:31.962710 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 1 01:15:31.962881 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. 
Nov 1 01:15:31.980708 systemd[1]: systemd-modules-load.service: Deactivated successfully. Nov 1 01:15:31.980876 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Nov 1 01:15:32.001708 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Nov 1 01:15:32.001874 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 1 01:15:32.020949 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 1 01:15:32.040648 systemd[1]: systemd-udevd.service: Deactivated successfully. Nov 1 01:15:32.041132 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 1 01:15:32.072273 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Nov 1 01:15:32.072420 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Nov 1 01:15:32.078752 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Nov 1 01:15:32.078859 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Nov 1 01:15:32.106475 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Nov 1 01:15:32.106622 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Nov 1 01:15:32.136893 systemd[1]: dracut-cmdline.service: Deactivated successfully. Nov 1 01:15:32.137068 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Nov 1 01:15:32.177362 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 1 01:15:32.177547 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 1 01:15:32.218315 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Nov 1 01:15:32.218358 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Nov 1 01:15:32.500414 systemd-journald[269]: Received SIGTERM from PID 1 (systemd). 
Nov 1 01:15:32.218383 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 1 01:15:32.247368 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Nov 1 01:15:32.247403 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 1 01:15:32.279410 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Nov 1 01:15:32.279489 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Nov 1 01:15:32.298487 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 1 01:15:32.298614 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 1 01:15:32.321409 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Nov 1 01:15:32.321635 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Nov 1 01:15:32.342971 systemd[1]: network-cleanup.service: Deactivated successfully. Nov 1 01:15:32.343258 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Nov 1 01:15:32.362712 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Nov 1 01:15:32.391587 systemd[1]: Starting initrd-switch-root.service - Switch Root... Nov 1 01:15:32.431065 systemd[1]: Switching root. 
Nov 1 01:15:32.611384 systemd-journald[269]: Journal stopped Nov 1 01:15:35.241742 kernel: SELinux: policy capability network_peer_controls=1 Nov 1 01:15:35.241757 kernel: SELinux: policy capability open_perms=1 Nov 1 01:15:35.241764 kernel: SELinux: policy capability extended_socket_class=1 Nov 1 01:15:35.241771 kernel: SELinux: policy capability always_check_network=0 Nov 1 01:15:35.241776 kernel: SELinux: policy capability cgroup_seclabel=1 Nov 1 01:15:35.241782 kernel: SELinux: policy capability nnp_nosuid_transition=1 Nov 1 01:15:35.241788 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Nov 1 01:15:35.241794 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Nov 1 01:15:35.241799 kernel: audit: type=1403 audit(1761959732.822:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Nov 1 01:15:35.241806 systemd[1]: Successfully loaded SELinux policy in 169.615ms. Nov 1 01:15:35.241814 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 7.072ms. Nov 1 01:15:35.241821 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Nov 1 01:15:35.241827 systemd[1]: Detected architecture x86-64. Nov 1 01:15:35.241833 systemd[1]: Detected first boot. Nov 1 01:15:35.241840 systemd[1]: Hostname set to . Nov 1 01:15:35.241847 systemd[1]: Initializing machine ID from random generator. Nov 1 01:15:35.241854 zram_generator::config[1269]: No configuration found. Nov 1 01:15:35.241861 systemd[1]: Populated /etc with preset unit settings. Nov 1 01:15:35.241870 systemd[1]: initrd-switch-root.service: Deactivated successfully. Nov 1 01:15:35.241876 systemd[1]: Stopped initrd-switch-root.service - Switch Root. 
Nov 1 01:15:35.241883 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Nov 1 01:15:35.241890 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Nov 1 01:15:35.241897 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Nov 1 01:15:35.241904 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Nov 1 01:15:35.241910 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Nov 1 01:15:35.241917 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Nov 1 01:15:35.241924 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Nov 1 01:15:35.241930 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Nov 1 01:15:35.241937 systemd[1]: Created slice user.slice - User and Session Slice. Nov 1 01:15:35.241944 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 1 01:15:35.241951 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 1 01:15:35.241957 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Nov 1 01:15:35.241964 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Nov 1 01:15:35.241970 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Nov 1 01:15:35.241977 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 1 01:15:35.241983 systemd[1]: Expecting device dev-ttyS1.device - /dev/ttyS1... Nov 1 01:15:35.241990 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 1 01:15:35.241997 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. 
Nov 1 01:15:35.242004 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Nov 1 01:15:35.242011 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Nov 1 01:15:35.242019 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Nov 1 01:15:35.242025 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 1 01:15:35.242032 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 1 01:15:35.242039 systemd[1]: Reached target slices.target - Slice Units. Nov 1 01:15:35.242047 systemd[1]: Reached target swap.target - Swaps. Nov 1 01:15:35.242054 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Nov 1 01:15:35.242060 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Nov 1 01:15:35.242067 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 1 01:15:35.242074 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 1 01:15:35.242080 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 1 01:15:35.242088 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Nov 1 01:15:35.242095 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Nov 1 01:15:35.242102 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Nov 1 01:15:35.242109 systemd[1]: Mounting media.mount - External Media Directory... Nov 1 01:15:35.242115 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 01:15:35.242122 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Nov 1 01:15:35.242129 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Nov 1 01:15:35.242137 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... 
Nov 1 01:15:35.242144 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Nov 1 01:15:35.242151 systemd[1]: Reached target machines.target - Containers. Nov 1 01:15:35.242159 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Nov 1 01:15:35.242166 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 1 01:15:35.242173 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 1 01:15:35.242180 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Nov 1 01:15:35.242186 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 1 01:15:35.242193 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 1 01:15:35.242201 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 1 01:15:35.242213 kernel: ACPI: bus type drm_connector registered Nov 1 01:15:35.242219 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Nov 1 01:15:35.242247 kernel: fuse: init (API version 7.39) Nov 1 01:15:35.242254 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 1 01:15:35.242260 kernel: loop: module loaded Nov 1 01:15:35.242280 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Nov 1 01:15:35.242287 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Nov 1 01:15:35.242295 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Nov 1 01:15:35.242302 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Nov 1 01:15:35.242308 systemd[1]: Stopped systemd-fsck-usr.service. 
Nov 1 01:15:35.242315 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 1 01:15:35.242330 systemd-journald[1373]: Collecting audit messages is disabled.
Nov 1 01:15:35.242345 systemd-journald[1373]: Journal started
Nov 1 01:15:35.242359 systemd-journald[1373]: Runtime Journal (/run/log/journal/2772665d5c634a3a9da27d5acfb88dee) is 8.0M, max 639.9M, 631.9M free.
Nov 1 01:15:33.383226 systemd[1]: Queued start job for default target multi-user.target.
Nov 1 01:15:33.397201 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Nov 1 01:15:33.397461 systemd[1]: systemd-journald.service: Deactivated successfully.
Nov 1 01:15:35.270244 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 1 01:15:35.305353 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Nov 1 01:15:35.338207 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Nov 1 01:15:35.371245 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 1 01:15:35.404700 systemd[1]: verity-setup.service: Deactivated successfully.
Nov 1 01:15:35.404727 systemd[1]: Stopped verity-setup.service.
Nov 1 01:15:35.467251 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 1 01:15:35.488379 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 1 01:15:35.497797 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Nov 1 01:15:35.507447 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Nov 1 01:15:35.517583 systemd[1]: Mounted media.mount - External Media Directory.
Nov 1 01:15:35.527584 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Nov 1 01:15:35.537509 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Nov 1 01:15:35.548492 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Nov 1 01:15:35.558570 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Nov 1 01:15:35.569533 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 1 01:15:35.580564 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 1 01:15:35.580640 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Nov 1 01:15:35.592541 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 1 01:15:35.592612 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 1 01:15:35.605551 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 1 01:15:35.605622 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Nov 1 01:15:35.615554 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 1 01:15:35.615624 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 1 01:15:35.627537 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Nov 1 01:15:35.627608 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Nov 1 01:15:35.637547 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 1 01:15:35.637616 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 1 01:15:35.647555 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 1 01:15:35.657546 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Nov 1 01:15:35.669564 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Nov 1 01:15:35.681558 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 1 01:15:35.697334 systemd[1]: Reached target network-pre.target - Preparation for Network.
Nov 1 01:15:35.717382 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Nov 1 01:15:35.728164 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Nov 1 01:15:35.738380 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Nov 1 01:15:35.738412 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 1 01:15:35.749721 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Nov 1 01:15:35.773607 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Nov 1 01:15:35.785465 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Nov 1 01:15:35.795524 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 1 01:15:35.797417 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Nov 1 01:15:35.808397 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Nov 1 01:15:35.819274 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 1 01:15:35.819906 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Nov 1 01:15:35.824153 systemd-journald[1373]: Time spent on flushing to /var/log/journal/2772665d5c634a3a9da27d5acfb88dee is 13.613ms for 1368 entries.
Nov 1 01:15:35.824153 systemd-journald[1373]: System Journal (/var/log/journal/2772665d5c634a3a9da27d5acfb88dee) is 8.0M, max 195.6M, 187.6M free.
Nov 1 01:15:35.863357 systemd-journald[1373]: Received client request to flush runtime journal.
Nov 1 01:15:35.837347 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 1 01:15:35.838102 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 1 01:15:35.847890 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Nov 1 01:15:35.857092 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Nov 1 01:15:35.868154 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Nov 1 01:15:35.894211 kernel: loop0: detected capacity change from 0 to 142488
Nov 1 01:15:35.894828 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Nov 1 01:15:35.905054 systemd-tmpfiles[1405]: ACLs are not supported, ignoring.
Nov 1 01:15:35.905079 systemd-tmpfiles[1405]: ACLs are not supported, ignoring.
Nov 1 01:15:35.919634 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Nov 1 01:15:35.933210 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Nov 1 01:15:35.944430 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Nov 1 01:15:35.955456 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Nov 1 01:15:35.966477 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Nov 1 01:15:35.983441 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 1 01:15:35.987269 kernel: loop1: detected capacity change from 0 to 224512
Nov 1 01:15:35.996442 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 1 01:15:36.010398 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Nov 1 01:15:36.032561 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Nov 1 01:15:36.049074 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Nov 1 01:15:36.055209 kernel: loop2: detected capacity change from 0 to 8
Nov 1 01:15:36.064842 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Nov 1 01:15:36.065318 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Nov 1 01:15:36.076820 udevadm[1408]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Nov 1 01:15:36.085631 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Nov 1 01:15:36.106277 kernel: loop3: detected capacity change from 0 to 140768
Nov 1 01:15:36.121448 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 1 01:15:36.129484 systemd-tmpfiles[1428]: ACLs are not supported, ignoring.
Nov 1 01:15:36.129495 systemd-tmpfiles[1428]: ACLs are not supported, ignoring.
Nov 1 01:15:36.132507 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 1 01:15:36.176283 kernel: loop4: detected capacity change from 0 to 142488
Nov 1 01:15:36.192314 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Nov 1 01:15:36.208198 ldconfig[1399]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Nov 1 01:15:36.209769 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Nov 1 01:15:36.218264 kernel: loop5: detected capacity change from 0 to 224512
Nov 1 01:15:36.254257 kernel: loop6: detected capacity change from 0 to 8
Nov 1 01:15:36.255447 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 1 01:15:36.273237 kernel: loop7: detected capacity change from 0 to 140768
Nov 1 01:15:36.281176 systemd-udevd[1436]: Using default interface naming scheme 'v255'.
Nov 1 01:15:36.284609 (sd-merge)[1433]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-packet'.
Nov 1 01:15:36.284850 (sd-merge)[1433]: Merged extensions into '/usr'.
Nov 1 01:15:36.287012 systemd[1]: Reloading requested from client PID 1404 ('systemd-sysext') (unit systemd-sysext.service)...
Nov 1 01:15:36.287019 systemd[1]: Reloading...
Nov 1 01:15:36.331258 zram_generator::config[1491]: No configuration found.
Nov 1 01:15:36.331313 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (1462)
Nov 1 01:15:36.353219 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input2
Nov 1 01:15:36.353282 kernel: mousedev: PS/2 mouse device common for all mice
Nov 1 01:15:36.366646 kernel: ACPI: button: Sleep Button [SLPB]
Nov 1 01:15:36.408249 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Nov 1 01:15:36.444321 kernel: IPMI message handler: version 39.2
Nov 1 01:15:36.444388 kernel: ACPI: button: Power Button [PWRF]
Nov 1 01:15:36.484288 kernel: i801_smbus 0000:00:1f.4: SPD Write Disable is set
Nov 1 01:15:36.484486 kernel: i801_smbus 0000:00:1f.4: SMBus using PCI interrupt
Nov 1 01:15:36.495162 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Nov 1 01:15:36.515218 kernel: i2c i2c-0: 2/4 memory slots populated (from DMI)
Nov 1 01:15:36.532210 kernel: ipmi device interface
Nov 1 01:15:36.533208 kernel: mei_me 0000:00:16.0: Device doesn't have valid ME Interface
Nov 1 01:15:36.533328 kernel: mei_me 0000:00:16.4: Device doesn't have valid ME Interface
Nov 1 01:15:36.564402 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Micron_5300_MTFDDAK480TDT OEM.
Nov 1 01:15:36.584333 systemd[1]: Condition check resulted in dev-ttyS1.device - /dev/ttyS1 being skipped.
Nov 1 01:15:36.584637 systemd[1]: Reloading finished in 297 ms.
Nov 1 01:15:36.585213 kernel: iTCO_vendor_support: vendor-support=0
Nov 1 01:15:36.627215 kernel: ipmi_si: IPMI System Interface driver
Nov 1 01:15:36.627245 kernel: ipmi_si dmi-ipmi-si.0: ipmi_platform: probing via SMBIOS
Nov 1 01:15:36.645315 kernel: ipmi_platform: ipmi_si: SMBIOS: io 0xca2 regsize 1 spacing 1 irq 0
Nov 1 01:15:36.661947 kernel: ipmi_si: Adding SMBIOS-specified kcs state machine
Nov 1 01:15:36.678555 kernel: ipmi_si IPI0001:00: ipmi_platform: probing via ACPI
Nov 1 01:15:36.691210 kernel: ipmi_si IPI0001:00: ipmi_platform: [io 0x0ca2] regsize 1 spacing 1 irq 0
Nov 1 01:15:36.708212 kernel: iTCO_wdt iTCO_wdt: Found a Intel PCH TCO device (Version=6, TCOBASE=0x0400)
Nov 1 01:15:36.724691 kernel: ipmi_si dmi-ipmi-si.0: Removing SMBIOS-specified kcs state machine in favor of ACPI
Nov 1 01:15:36.724834 kernel: iTCO_wdt iTCO_wdt: initialized. heartbeat=30 sec (nowayout=0)
Nov 1 01:15:36.724935 kernel: ipmi_si: Adding ACPI-specified kcs state machine
Nov 1 01:15:36.789488 kernel: ipmi_si: Trying ACPI-specified kcs state machine at i/o address 0xca2, slave address 0x20, irq 0
Nov 1 01:15:36.853053 kernel: intel_rapl_common: Found RAPL domain package
Nov 1 01:15:36.853090 kernel: intel_rapl_common: Found RAPL domain core
Nov 1 01:15:36.853111 kernel: ipmi_si IPI0001:00: The BMC does not support clearing the recv irq bit, compensating, but the BMC needs to be fixed.
Nov 1 01:15:36.853217 kernel: intel_rapl_common: Found RAPL domain dram
Nov 1 01:15:36.880209 kernel: ipmi_si IPI0001:00: IPMI message handler: Found new BMC (man_id: 0x002a7c, prod_id: 0x1b0f, dev_id: 0x20)
Nov 1 01:15:36.938480 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 1 01:15:36.950386 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Nov 1 01:15:36.985241 kernel: ipmi_si IPI0001:00: IPMI kcs interface initialized
Nov 1 01:15:37.002210 kernel: ipmi_ssif: IPMI SSIF Interface driver
Nov 1 01:15:37.007497 systemd[1]: Starting ensure-sysext.service...
Nov 1 01:15:37.015899 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Nov 1 01:15:37.028176 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 1 01:15:37.038837 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 1 01:15:37.039434 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 1 01:15:37.039677 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Nov 1 01:15:37.041408 systemd[1]: Reloading requested from client PID 1611 ('systemctl') (unit ensure-sysext.service)...
Nov 1 01:15:37.041415 systemd[1]: Reloading...
Nov 1 01:15:37.081253 zram_generator::config[1642]: No configuration found.
Nov 1 01:15:37.105918 systemd-tmpfiles[1615]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Nov 1 01:15:37.106153 systemd-tmpfiles[1615]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Nov 1 01:15:37.106729 systemd-tmpfiles[1615]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Nov 1 01:15:37.106921 systemd-tmpfiles[1615]: ACLs are not supported, ignoring.
Nov 1 01:15:37.106964 systemd-tmpfiles[1615]: ACLs are not supported, ignoring.
Nov 1 01:15:37.108900 systemd-tmpfiles[1615]: Detected autofs mount point /boot during canonicalization of boot.
Nov 1 01:15:37.108905 systemd-tmpfiles[1615]: Skipping /boot
Nov 1 01:15:37.113856 systemd-tmpfiles[1615]: Detected autofs mount point /boot during canonicalization of boot.
Nov 1 01:15:37.113860 systemd-tmpfiles[1615]: Skipping /boot
Nov 1 01:15:37.135076 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Nov 1 01:15:37.189924 systemd[1]: Reloading finished in 148 ms.
Nov 1 01:15:37.221426 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Nov 1 01:15:37.232417 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 1 01:15:37.243375 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 1 01:15:37.270380 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Nov 1 01:15:37.281333 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Nov 1 01:15:37.291746 augenrules[1723]: No rules
Nov 1 01:15:37.304705 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Nov 1 01:15:37.315969 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Nov 1 01:15:37.324004 lvm[1728]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Nov 1 01:15:37.329495 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 1 01:15:37.339897 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Nov 1 01:15:37.351146 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Nov 1 01:15:37.360838 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Nov 1 01:15:37.370478 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Nov 1 01:15:37.384336 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Nov 1 01:15:37.394500 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Nov 1 01:15:37.405492 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Nov 1 01:15:37.419321 systemd-networkd[1613]: lo: Link UP
Nov 1 01:15:37.419324 systemd-networkd[1613]: lo: Gained carrier
Nov 1 01:15:37.420462 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 1 01:15:37.421806 systemd-networkd[1613]: bond0: netdev ready
Nov 1 01:15:37.422728 systemd-networkd[1613]: Enumeration completed
Nov 1 01:15:37.430503 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 1 01:15:37.430624 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 1 01:15:37.434111 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Nov 1 01:15:37.434533 systemd-networkd[1613]: enp1s0f0np0: Configuring with /etc/systemd/network/10-1c:34:da:5c:42:d4.network.
Nov 1 01:15:37.446954 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 1 01:15:37.449112 lvm[1747]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Nov 1 01:15:37.457880 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 1 01:15:37.465872 systemd-resolved[1730]: Positive Trust Anchors:
Nov 1 01:15:37.465879 systemd-resolved[1730]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 1 01:15:37.465903 systemd-resolved[1730]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 1 01:15:37.468572 systemd-resolved[1730]: Using system hostname 'ci-4081.3.6-n-61efafd0e9'.
Nov 1 01:15:37.469879 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 1 01:15:37.480278 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 1 01:15:37.480991 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Nov 1 01:15:37.490298 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Nov 1 01:15:37.490365 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 1 01:15:37.490912 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 1 01:15:37.501896 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Nov 1 01:15:37.513575 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Nov 1 01:15:37.525568 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 1 01:15:37.525920 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 1 01:15:37.546200 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 1 01:15:37.546296 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 1 01:15:37.558623 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 1 01:15:37.558962 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 1 01:15:37.569526 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Nov 1 01:15:37.593635 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 1 01:15:37.594312 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 1 01:15:37.608285 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: Link up
Nov 1 01:15:37.621987 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 1 01:15:37.640221 kernel: bond0: (slave enp1s0f0np0): Enslaving as a backup interface with an up link
Nov 1 01:15:37.641286 systemd-networkd[1613]: enp1s0f1np1: Configuring with /etc/systemd/network/10-1c:34:da:5c:42:d5.network.
Nov 1 01:15:37.650468 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 1 01:15:37.662546 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 1 01:15:37.672345 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 1 01:15:37.673947 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Nov 1 01:15:37.685310 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Nov 1 01:15:37.685370 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 1 01:15:37.686039 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 1 01:15:37.686127 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 1 01:15:37.697550 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 1 01:15:37.697623 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 1 01:15:37.709616 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 1 01:15:37.709690 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 1 01:15:37.721950 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 1 01:15:37.722098 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 1 01:15:37.735033 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 1 01:15:37.757933 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Nov 1 01:15:37.770060 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 1 01:15:37.785222 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 1 01:15:37.816669 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 1 01:15:37.816936 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Nov 1 01:15:37.817126 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 1 01:15:37.817327 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: Link up
Nov 1 01:15:37.819703 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 1 01:15:37.820007 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 1 01:15:37.838424 systemd-networkd[1613]: bond0: Configuring with /etc/systemd/network/05-bond0.network.
Nov 1 01:15:37.839256 kernel: bond0: (slave enp1s0f1np1): Enslaving as a backup interface with an up link
Nov 1 01:15:37.839935 systemd-networkd[1613]: enp1s0f0np0: Link UP
Nov 1 01:15:37.840190 systemd-networkd[1613]: enp1s0f0np0: Gained carrier
Nov 1 01:15:37.858555 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 1 01:15:37.861213 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond
Nov 1 01:15:37.868935 systemd-networkd[1613]: enp1s0f1np1: Reconfiguring with /etc/systemd/network/10-1c:34:da:5c:42:d4.network.
Nov 1 01:15:37.869127 systemd-networkd[1613]: enp1s0f1np1: Link UP
Nov 1 01:15:37.869355 systemd-networkd[1613]: enp1s0f1np1: Gained carrier
Nov 1 01:15:37.871635 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 1 01:15:37.871738 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Nov 1 01:15:37.881581 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 1 01:15:37.881693 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 1 01:15:37.883393 systemd-networkd[1613]: bond0: Link UP
Nov 1 01:15:37.883553 systemd-networkd[1613]: bond0: Gained carrier
Nov 1 01:15:37.892444 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 1 01:15:37.892515 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 1 01:15:37.903175 systemd[1]: Finished ensure-sysext.service.
Nov 1 01:15:37.912696 systemd[1]: Reached target network.target - Network.
Nov 1 01:15:37.921241 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 1 01:15:37.932246 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 1 01:15:37.932308 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 1 01:15:37.945306 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Nov 1 01:15:37.969207 kernel: bond0: (slave enp1s0f0np0): link status definitely up, 10000 Mbps full duplex
Nov 1 01:15:37.969231 kernel: bond0: active interface up!
Nov 1 01:15:38.001276 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Nov 1 01:15:38.012363 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 1 01:15:38.022336 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Nov 1 01:15:38.033305 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Nov 1 01:15:38.044288 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Nov 1 01:15:38.055279 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Nov 1 01:15:38.055294 systemd[1]: Reached target paths.target - Path Units.
Nov 1 01:15:38.063269 systemd[1]: Reached target time-set.target - System Time Set.
Nov 1 01:15:38.073340 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Nov 1 01:15:38.090663 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Nov 1 01:15:38.102256 kernel: bond0: (slave enp1s0f1np1): link status definitely up, 10000 Mbps full duplex
Nov 1 01:15:38.112285 systemd[1]: Reached target timers.target - Timer Units.
Nov 1 01:15:38.120487 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Nov 1 01:15:38.130967 systemd[1]: Starting docker.socket - Docker Socket for the API...
Nov 1 01:15:38.140646 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Nov 1 01:15:38.150580 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Nov 1 01:15:38.160360 systemd[1]: Reached target sockets.target - Socket Units.
Nov 1 01:15:38.170279 systemd[1]: Reached target basic.target - Basic System.
Nov 1 01:15:38.178305 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Nov 1 01:15:38.178320 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Nov 1 01:15:38.190301 systemd[1]: Starting containerd.service - containerd container runtime...
Nov 1 01:15:38.200999 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Nov 1 01:15:38.210895 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Nov 1 01:15:38.219905 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Nov 1 01:15:38.223131 coreos-metadata[1780]: Nov 01 01:15:38.223 INFO Fetching https://metadata.packet.net/metadata: Attempt #1
Nov 1 01:15:38.229953 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Nov 1 01:15:38.231331 dbus-daemon[1781]: [system] SELinux support is enabled
Nov 1 01:15:38.231772 jq[1784]: false
Nov 1 01:15:38.239319 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Nov 1 01:15:38.239939 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Nov 1 01:15:38.247019 extend-filesystems[1786]: Found loop4
Nov 1 01:15:38.247019 extend-filesystems[1786]: Found loop5
Nov 1 01:15:38.307269 kernel: EXT4-fs (sda9): resizing filesystem from 553472 to 116605649 blocks
Nov 1 01:15:38.307287 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (1509)
Nov 1 01:15:38.307297 extend-filesystems[1786]: Found loop6
Nov 1 01:15:38.307297 extend-filesystems[1786]: Found loop7
Nov 1 01:15:38.307297 extend-filesystems[1786]: Found sda
Nov 1 01:15:38.307297 extend-filesystems[1786]: Found sda1
Nov 1 01:15:38.307297 extend-filesystems[1786]: Found sda2
Nov 1 01:15:38.307297 extend-filesystems[1786]: Found sda3
Nov 1 01:15:38.307297 extend-filesystems[1786]: Found usr
Nov 1 01:15:38.307297 extend-filesystems[1786]: Found sda4
Nov 1 01:15:38.307297 extend-filesystems[1786]: Found sda6
Nov 1 01:15:38.307297 extend-filesystems[1786]: Found sda7
Nov 1 01:15:38.307297 extend-filesystems[1786]: Found sda9
Nov 1 01:15:38.307297 extend-filesystems[1786]: Checking size of /dev/sda9
Nov 1 01:15:38.307297 extend-filesystems[1786]: Resized partition /dev/sda9
Nov 1 01:15:38.249989 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Nov 1 01:15:38.454367 extend-filesystems[1796]: resize2fs 1.47.1 (20-May-2024)
Nov 1 01:15:38.294759 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Nov 1 01:15:38.325372 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Nov 1 01:15:38.354332 systemd[1]: Starting systemd-logind.service - User Login Management...
Nov 1 01:15:38.376429 systemd-logind[1809]: Watching system buttons on /dev/input/event3 (Power Button)
Nov 1 01:15:38.376439 systemd-logind[1809]: Watching system buttons on /dev/input/event2 (Sleep Button)
Nov 1 01:15:38.472725 sshd_keygen[1808]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Nov 1 01:15:38.376450 systemd-logind[1809]: Watching system buttons on /dev/input/event0 (HID 0557:2419)
Nov 1 01:15:38.472832 update_engine[1811]: I20251101 01:15:38.422092 1811 main.cc:92] Flatcar Update Engine starting
Nov 1 01:15:38.472832 update_engine[1811]: I20251101 01:15:38.422842 1811 update_check_scheduler.cc:74] Next update check in 7m7s
Nov 1 01:15:38.376747 systemd-logind[1809]: New seat seat0.
Nov 1 01:15:38.472996 jq[1819]: true
Nov 1 01:15:38.379140 systemd[1]: Starting tcsd.service - TCG Core Services Daemon...
Nov 1 01:15:38.393471 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Nov 1 01:15:38.406278 systemd[1]: Starting update-engine.service - Update Engine...
Nov 1 01:15:38.414865 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Nov 1 01:15:38.454329 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Nov 1 01:15:38.465613 systemd[1]: Started systemd-logind.service - User Login Management.
Nov 1 01:15:38.494407 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Nov 1 01:15:38.494524 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Nov 1 01:15:38.494734 systemd[1]: motdgen.service: Deactivated successfully.
Nov 1 01:15:38.494831 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Nov 1 01:15:38.504722 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Nov 1 01:15:38.504822 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Nov 1 01:15:38.515417 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Nov 1 01:15:38.528346 (ntainerd)[1824]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Nov 1 01:15:38.529567 jq[1823]: true
Nov 1 01:15:38.531575 dbus-daemon[1781]: [system] Successfully activated service 'org.freedesktop.systemd1'
Nov 1 01:15:38.533027 tar[1821]: linux-amd64/LICENSE
Nov 1 01:15:38.533160 tar[1821]: linux-amd64/helm
Nov 1 01:15:38.539323 systemd[1]: tcsd.service: Skipped due to 'exec-condition'.
Nov 1 01:15:38.539428 systemd[1]: Condition check resulted in tcsd.service - TCG Core Services Daemon being skipped.
Nov 1 01:15:38.541223 systemd[1]: Started update-engine.service - Update Engine.
Nov 1 01:15:38.551574 systemd[1]: Starting issuegen.service - Generate /run/issue...
Nov 1 01:15:38.559307 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Nov 1 01:15:38.559402 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Nov 1 01:15:38.570376 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Nov 1 01:15:38.570488 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Nov 1 01:15:38.582028 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Nov 1 01:15:38.582894 bash[1851]: Updated "/home/core/.ssh/authorized_keys"
Nov 1 01:15:38.593180 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Nov 1 01:15:38.604495 systemd[1]: issuegen.service: Deactivated successfully.
Nov 1 01:15:38.604584 systemd[1]: Finished issuegen.service - Generate /run/issue.
Nov 1 01:15:38.614034 locksmithd[1859]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Nov 1 01:15:38.628449 systemd[1]: Starting sshkeys.service...
Nov 1 01:15:38.636108 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Nov 1 01:15:38.648194 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Nov 1 01:15:38.660053 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Nov 1 01:15:38.671615 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Nov 1 01:15:38.682677 coreos-metadata[1878]: Nov 01 01:15:38.682 INFO Fetching https://metadata.packet.net/metadata: Attempt #1
Nov 1 01:15:38.683832 systemd[1]: Started getty@tty1.service - Getty on tty1.
Nov 1 01:15:38.693104 systemd[1]: Started serial-getty@ttyS1.service - Serial Getty on ttyS1.
Nov 1 01:15:38.702525 systemd[1]: Reached target getty.target - Login Prompts.
Nov 1 01:15:38.703247 containerd[1824]: time="2025-11-01T01:15:38.703198773Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Nov 1 01:15:38.716109 containerd[1824]: time="2025-11-01T01:15:38.716092522Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Nov 1 01:15:38.716875 containerd[1824]: time="2025-11-01T01:15:38.716853306Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.113-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Nov 1 01:15:38.716916 containerd[1824]: time="2025-11-01T01:15:38.716874274Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Nov 1 01:15:38.716916 containerd[1824]: time="2025-11-01T01:15:38.716884184Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Nov 1 01:15:38.716973 containerd[1824]: time="2025-11-01T01:15:38.716966491Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Nov 1 01:15:38.717002 containerd[1824]: time="2025-11-01T01:15:38.716976581Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Nov 1 01:15:38.717031 containerd[1824]: time="2025-11-01T01:15:38.717008586Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Nov 1 01:15:38.717031 containerd[1824]: time="2025-11-01T01:15:38.717016873Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Nov 1 01:15:38.717117 containerd[1824]: time="2025-11-01T01:15:38.717105603Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Nov 1 01:15:38.717117 containerd[1824]: time="2025-11-01T01:15:38.717115337Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Nov 1 01:15:38.717169 containerd[1824]: time="2025-11-01T01:15:38.717122796Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Nov 1 01:15:38.717169 containerd[1824]: time="2025-11-01T01:15:38.717128140Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Nov 1 01:15:38.717228 containerd[1824]: time="2025-11-01T01:15:38.717168961Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Nov 1 01:15:38.717333 containerd[1824]: time="2025-11-01T01:15:38.717292952Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Nov 1 01:15:38.717363 containerd[1824]: time="2025-11-01T01:15:38.717348679Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Nov 1 01:15:38.717363 containerd[1824]: time="2025-11-01T01:15:38.717357730Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Nov 1 01:15:38.717408 containerd[1824]: time="2025-11-01T01:15:38.717399895Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Nov 1 01:15:38.717472 containerd[1824]: time="2025-11-01T01:15:38.717428221Z" level=info msg="metadata content store policy set" policy=shared
Nov 1 01:15:38.731168 containerd[1824]: time="2025-11-01T01:15:38.731111541Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Nov 1 01:15:38.731168 containerd[1824]: time="2025-11-01T01:15:38.731134643Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Nov 1 01:15:38.731168 containerd[1824]: time="2025-11-01T01:15:38.731145557Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Nov 1 01:15:38.731168 containerd[1824]: time="2025-11-01T01:15:38.731158157Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Nov 1 01:15:38.731168 containerd[1824]: time="2025-11-01T01:15:38.731167410Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Nov 1 01:15:38.731259 containerd[1824]: time="2025-11-01T01:15:38.731238325Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Nov 1 01:15:38.731404 containerd[1824]: time="2025-11-01T01:15:38.731362186Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Nov 1 01:15:38.731431 containerd[1824]: time="2025-11-01T01:15:38.731417243Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Nov 1 01:15:38.731431 containerd[1824]: time="2025-11-01T01:15:38.731427016Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Nov 1 01:15:38.731464 containerd[1824]: time="2025-11-01T01:15:38.731434761Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Nov 1 01:15:38.731464 containerd[1824]: time="2025-11-01T01:15:38.731442313Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Nov 1 01:15:38.731464 containerd[1824]: time="2025-11-01T01:15:38.731449320Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Nov 1 01:15:38.731464 containerd[1824]: time="2025-11-01T01:15:38.731456020Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Nov 1 01:15:38.731521 containerd[1824]: time="2025-11-01T01:15:38.731466807Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Nov 1 01:15:38.731521 containerd[1824]: time="2025-11-01T01:15:38.731476066Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Nov 1 01:15:38.731521 containerd[1824]: time="2025-11-01T01:15:38.731483324Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Nov 1 01:15:38.731521 containerd[1824]: time="2025-11-01T01:15:38.731489966Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Nov 1 01:15:38.731521 containerd[1824]: time="2025-11-01T01:15:38.731496563Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Nov 1 01:15:38.731521 containerd[1824]: time="2025-11-01T01:15:38.731507762Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Nov 1 01:15:38.731521 containerd[1824]: time="2025-11-01T01:15:38.731515152Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Nov 1 01:15:38.731613 containerd[1824]: time="2025-11-01T01:15:38.731521905Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Nov 1 01:15:38.731613 containerd[1824]: time="2025-11-01T01:15:38.731528882Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Nov 1 01:15:38.731613 containerd[1824]: time="2025-11-01T01:15:38.731535331Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Nov 1 01:15:38.731613 containerd[1824]: time="2025-11-01T01:15:38.731542765Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Nov 1 01:15:38.731613 containerd[1824]: time="2025-11-01T01:15:38.731549121Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Nov 1 01:15:38.731613 containerd[1824]: time="2025-11-01T01:15:38.731555856Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Nov 1 01:15:38.731613 containerd[1824]: time="2025-11-01T01:15:38.731562730Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Nov 1 01:15:38.731613 containerd[1824]: time="2025-11-01T01:15:38.731572214Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Nov 1 01:15:38.731613 containerd[1824]: time="2025-11-01T01:15:38.731581623Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Nov 1 01:15:38.731613 containerd[1824]: time="2025-11-01T01:15:38.731588450Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Nov 1 01:15:38.731613 containerd[1824]: time="2025-11-01T01:15:38.731595729Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Nov 1 01:15:38.731613 containerd[1824]: time="2025-11-01T01:15:38.731604293Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Nov 1 01:15:38.731772 containerd[1824]: time="2025-11-01T01:15:38.731615825Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Nov 1 01:15:38.731772 containerd[1824]: time="2025-11-01T01:15:38.731622650Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Nov 1 01:15:38.731772 containerd[1824]: time="2025-11-01T01:15:38.731628613Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Nov 1 01:15:38.731772 containerd[1824]: time="2025-11-01T01:15:38.731652094Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Nov 1 01:15:38.731772 containerd[1824]: time="2025-11-01T01:15:38.731662084Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Nov 1 01:15:38.731772 containerd[1824]: time="2025-11-01T01:15:38.731668286Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Nov 1 01:15:38.731772 containerd[1824]: time="2025-11-01T01:15:38.731675123Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Nov 1 01:15:38.731772 containerd[1824]: time="2025-11-01T01:15:38.731680562Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Nov 1 01:15:38.731772 containerd[1824]: time="2025-11-01T01:15:38.731687569Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Nov 1 01:15:38.731772 containerd[1824]: time="2025-11-01T01:15:38.731693674Z" level=info msg="NRI interface is disabled by configuration."
Nov 1 01:15:38.731772 containerd[1824]: time="2025-11-01T01:15:38.731699323Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Nov 1 01:15:38.731921 containerd[1824]: time="2025-11-01T01:15:38.731851522Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Nov 1 01:15:38.731921 containerd[1824]: time="2025-11-01T01:15:38.731883928Z" level=info msg="Connect containerd service"
Nov 1 01:15:38.731921 containerd[1824]: time="2025-11-01T01:15:38.731901119Z" level=info msg="using legacy CRI server"
Nov 1 01:15:38.731921 containerd[1824]: time="2025-11-01T01:15:38.731905502Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Nov 1 01:15:38.732035 containerd[1824]: time="2025-11-01T01:15:38.731966096Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Nov 1 01:15:38.732295 containerd[1824]: time="2025-11-01T01:15:38.732254821Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Nov 1 01:15:38.732420 containerd[1824]: time="2025-11-01T01:15:38.732367892Z" level=info msg="Start subscribing containerd event"
Nov 1 01:15:38.732420 containerd[1824]: time="2025-11-01T01:15:38.732399781Z" level=info msg="Start recovering state"
Nov 1 01:15:38.732461 containerd[1824]: time="2025-11-01T01:15:38.732427105Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Nov 1 01:15:38.732461 containerd[1824]: time="2025-11-01T01:15:38.732438382Z" level=info msg="Start event monitor"
Nov 1 01:15:38.732461 containerd[1824]: time="2025-11-01T01:15:38.732450466Z" level=info msg="Start snapshots syncer"
Nov 1 01:15:38.732461 containerd[1824]: time="2025-11-01T01:15:38.732452776Z" level=info msg=serving... address=/run/containerd/containerd.sock
Nov 1 01:15:38.732515 containerd[1824]: time="2025-11-01T01:15:38.732456992Z" level=info msg="Start cni network conf syncer for default"
Nov 1 01:15:38.732515 containerd[1824]: time="2025-11-01T01:15:38.732488182Z" level=info msg="Start streaming server"
Nov 1 01:15:38.732547 containerd[1824]: time="2025-11-01T01:15:38.732526807Z" level=info msg="containerd successfully booted in 0.029925s"
Nov 1 01:15:38.732581 systemd[1]: Started containerd.service - containerd container runtime.
Nov 1 01:15:38.792251 kernel: EXT4-fs (sda9): resized filesystem to 116605649
Nov 1 01:15:38.813941 extend-filesystems[1796]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required
Nov 1 01:15:38.813941 extend-filesystems[1796]: old_desc_blocks = 1, new_desc_blocks = 56
Nov 1 01:15:38.813941 extend-filesystems[1796]: The filesystem on /dev/sda9 is now 116605649 (4k) blocks long.
Nov 1 01:15:38.854297 extend-filesystems[1786]: Resized filesystem in /dev/sda9
Nov 1 01:15:38.854297 extend-filesystems[1786]: Found sdb
Nov 1 01:15:38.814641 systemd[1]: extend-filesystems.service: Deactivated successfully.
Nov 1 01:15:38.866339 tar[1821]: linux-amd64/README.md
Nov 1 01:15:38.814745 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Nov 1 01:15:38.871607 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Nov 1 01:15:39.057396 systemd-networkd[1613]: bond0: Gained IPv6LL
Nov 1 01:15:39.699128 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Nov 1 01:15:39.710829 systemd[1]: Reached target network-online.target - Network is Online.
Nov 1 01:15:39.729392 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 1 01:15:39.739893 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Nov 1 01:15:39.757936 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Nov 1 01:15:40.574263 kernel: mlx5_core 0000:01:00.0: lag map: port 1:1 port 2:2
Nov 1 01:15:40.574429 kernel: mlx5_core 0000:01:00.0: shared_fdb:0 mode:queue_affinity
Nov 1 01:15:40.594434 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 1 01:15:40.605746 (kubelet)[1918]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Nov 1 01:15:41.117817 kubelet[1918]: E1101 01:15:41.117786 1918 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 1 01:15:41.119028 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 1 01:15:41.119106 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 1 01:15:41.413449 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Nov 1 01:15:41.429532 systemd[1]: Started sshd@0-139.178.94.199:22-139.178.89.65:43374.service - OpenSSH per-connection server daemon (139.178.89.65:43374).
Nov 1 01:15:41.480141 sshd[1937]: Accepted publickey for core from 139.178.89.65 port 43374 ssh2: RSA SHA256:7Ytc58gyNID63TmFqpvnPOSHtyQqpisrsHUaCNkqIsk
Nov 1 01:15:41.481365 sshd[1937]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 1 01:15:41.487097 systemd-logind[1809]: New session 1 of user core.
Nov 1 01:15:41.487949 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Nov 1 01:15:41.514694 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Nov 1 01:15:41.527181 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Nov 1 01:15:41.553925 systemd[1]: Starting user@500.service - User Manager for UID 500...
Nov 1 01:15:41.570534 (systemd)[1941]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Nov 1 01:15:41.680784 systemd[1941]: Queued start job for default target default.target.
Nov 1 01:15:41.695896 systemd[1941]: Created slice app.slice - User Application Slice.
Nov 1 01:15:41.695910 systemd[1941]: Reached target paths.target - Paths.
Nov 1 01:15:41.695918 systemd[1941]: Reached target timers.target - Timers.
Nov 1 01:15:41.696558 systemd[1941]: Starting dbus.socket - D-Bus User Message Bus Socket...
Nov 1 01:15:41.702101 systemd[1941]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Nov 1 01:15:41.702129 systemd[1941]: Reached target sockets.target - Sockets.
Nov 1 01:15:41.702139 systemd[1941]: Reached target basic.target - Basic System.
Nov 1 01:15:41.702160 systemd[1941]: Reached target default.target - Main User Target.
Nov 1 01:15:41.702176 systemd[1941]: Startup finished in 123ms.
Nov 1 01:15:41.702327 systemd[1]: Started user@500.service - User Manager for UID 500.
Nov 1 01:15:41.714272 systemd[1]: Started session-1.scope - Session 1 of User core.
Nov 1 01:15:41.785498 systemd[1]: Started sshd@1-139.178.94.199:22-139.178.89.65:43384.service - OpenSSH per-connection server daemon (139.178.89.65:43384).
Nov 1 01:15:41.819448 sshd[1952]: Accepted publickey for core from 139.178.89.65 port 43384 ssh2: RSA SHA256:7Ytc58gyNID63TmFqpvnPOSHtyQqpisrsHUaCNkqIsk
Nov 1 01:15:41.820183 sshd[1952]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 1 01:15:41.822808 systemd-logind[1809]: New session 2 of user core.
Nov 1 01:15:41.838372 systemd[1]: Started session-2.scope - Session 2 of User core.
Nov 1 01:15:41.906324 sshd[1952]: pam_unix(sshd:session): session closed for user core
Nov 1 01:15:41.933629 systemd[1]: sshd@1-139.178.94.199:22-139.178.89.65:43384.service: Deactivated successfully.
Nov 1 01:15:41.937659 systemd[1]: session-2.scope: Deactivated successfully.
Nov 1 01:15:41.941112 systemd-logind[1809]: Session 2 logged out. Waiting for processes to exit.
Nov 1 01:15:41.956261 systemd[1]: Started sshd@2-139.178.94.199:22-139.178.89.65:43388.service - OpenSSH per-connection server daemon (139.178.89.65:43388).
Nov 1 01:15:41.970792 systemd-logind[1809]: Removed session 2.
Nov 1 01:15:42.001464 sshd[1959]: Accepted publickey for core from 139.178.89.65 port 43388 ssh2: RSA SHA256:7Ytc58gyNID63TmFqpvnPOSHtyQqpisrsHUaCNkqIsk
Nov 1 01:15:42.002939 sshd[1959]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 1 01:15:42.007845 systemd-logind[1809]: New session 3 of user core.
Nov 1 01:15:42.025612 systemd[1]: Started session-3.scope - Session 3 of User core.
Nov 1 01:15:42.107033 sshd[1959]: pam_unix(sshd:session): session closed for user core
Nov 1 01:15:42.113476 systemd[1]: sshd@2-139.178.94.199:22-139.178.89.65:43388.service: Deactivated successfully.
Nov 1 01:15:42.117432 systemd[1]: session-3.scope: Deactivated successfully.
Nov 1 01:15:42.120883 systemd-logind[1809]: Session 3 logged out. Waiting for processes to exit.
Nov 1 01:15:42.123947 systemd-logind[1809]: Removed session 3.
Nov 1 01:15:43.367639 systemd-timesyncd[1775]: Contacted time server 23.95.35.34:123 (0.flatcar.pool.ntp.org).
Nov 1 01:15:43.367796 systemd-timesyncd[1775]: Initial clock synchronization to Sat 2025-11-01 01:15:43.125345 UTC.
Nov 1 01:15:43.744138 login[1883]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Nov 1 01:15:43.744742 login[1884]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Nov 1 01:15:43.746757 systemd-logind[1809]: New session 5 of user core.
Nov 1 01:15:43.747515 systemd[1]: Started session-5.scope - Session 5 of User core.
Nov 1 01:15:43.748721 systemd-logind[1809]: New session 4 of user core.
Nov 1 01:15:43.749407 systemd[1]: Started session-4.scope - Session 4 of User core.
Nov 1 01:15:44.409780 coreos-metadata[1780]: Nov 01 01:15:44.409 INFO Fetch successful
Nov 1 01:15:44.546912 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Nov 1 01:15:44.548525 systemd[1]: Starting packet-phone-home.service - Report Success to Packet...
Nov 1 01:15:44.975481 systemd[1]: Finished packet-phone-home.service - Report Success to Packet.
Nov 1 01:15:44.989013 coreos-metadata[1878]: Nov 01 01:15:44.988 INFO Fetch successful
Nov 1 01:15:45.070646 unknown[1878]: wrote ssh authorized keys file for user: core
Nov 1 01:15:45.095118 update-ssh-keys[1997]: Updated "/home/core/.ssh/authorized_keys"
Nov 1 01:15:45.095462 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Nov 1 01:15:45.096071 systemd[1]: Finished sshkeys.service.
Nov 1 01:15:45.097067 systemd[1]: Reached target multi-user.target - Multi-User System.
Nov 1 01:15:45.097158 systemd[1]: Startup finished in 1.769s (kernel) + 26.783s (initrd) + 12.443s (userspace) = 40.995s.
Nov 1 01:15:51.180661 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Nov 1 01:15:51.198459 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 1 01:15:51.418891 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 1 01:15:51.421971 (kubelet)[2009]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Nov 1 01:15:51.456771 kubelet[2009]: E1101 01:15:51.456674 2009 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 1 01:15:51.459056 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 1 01:15:51.459141 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 1 01:15:51.961854 systemd[1]: Started sshd@3-139.178.94.199:22-139.178.89.65:33050.service - OpenSSH per-connection server daemon (139.178.89.65:33050).
Nov 1 01:15:51.992722 sshd[2028]: Accepted publickey for core from 139.178.89.65 port 33050 ssh2: RSA SHA256:7Ytc58gyNID63TmFqpvnPOSHtyQqpisrsHUaCNkqIsk
Nov 1 01:15:51.993479 sshd[2028]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 1 01:15:51.996063 systemd-logind[1809]: New session 6 of user core.
Nov 1 01:15:52.017516 systemd[1]: Started session-6.scope - Session 6 of User core.
Nov 1 01:15:52.069937 sshd[2028]: pam_unix(sshd:session): session closed for user core
Nov 1 01:15:52.081839 systemd[1]: sshd@3-139.178.94.199:22-139.178.89.65:33050.service: Deactivated successfully.
Nov 1 01:15:52.082618 systemd[1]: session-6.scope: Deactivated successfully.
Nov 1 01:15:52.083285 systemd-logind[1809]: Session 6 logged out. Waiting for processes to exit.
Nov 1 01:15:52.083954 systemd[1]: Started sshd@4-139.178.94.199:22-139.178.89.65:33062.service - OpenSSH per-connection server daemon (139.178.89.65:33062).
Nov 1 01:15:52.084456 systemd-logind[1809]: Removed session 6.
Nov 1 01:15:52.122993 sshd[2035]: Accepted publickey for core from 139.178.89.65 port 33062 ssh2: RSA SHA256:7Ytc58gyNID63TmFqpvnPOSHtyQqpisrsHUaCNkqIsk
Nov 1 01:15:52.123792 sshd[2035]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 1 01:15:52.126877 systemd-logind[1809]: New session 7 of user core.
Nov 1 01:15:52.136453 systemd[1]: Started session-7.scope - Session 7 of User core.
Nov 1 01:15:52.186621 sshd[2035]: pam_unix(sshd:session): session closed for user core
Nov 1 01:15:52.195839 systemd[1]: sshd@4-139.178.94.199:22-139.178.89.65:33062.service: Deactivated successfully.
Nov 1 01:15:52.196608 systemd[1]: session-7.scope: Deactivated successfully.
Nov 1 01:15:52.197280 systemd-logind[1809]: Session 7 logged out. Waiting for processes to exit.
Nov 1 01:15:52.197964 systemd[1]: Started sshd@5-139.178.94.199:22-139.178.89.65:33074.service - OpenSSH per-connection server daemon (139.178.89.65:33074).
Nov 1 01:15:52.198470 systemd-logind[1809]: Removed session 7.
Nov 1 01:15:52.229886 sshd[2042]: Accepted publickey for core from 139.178.89.65 port 33074 ssh2: RSA SHA256:7Ytc58gyNID63TmFqpvnPOSHtyQqpisrsHUaCNkqIsk
Nov 1 01:15:52.230533 sshd[2042]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 1 01:15:52.232879 systemd-logind[1809]: New session 8 of user core.
Nov 1 01:15:52.253505 systemd[1]: Started session-8.scope - Session 8 of User core.
Nov 1 01:15:52.305673 sshd[2042]: pam_unix(sshd:session): session closed for user core
Nov 1 01:15:52.319935 systemd[1]: sshd@5-139.178.94.199:22-139.178.89.65:33074.service: Deactivated successfully.
Nov 1 01:15:52.320753 systemd[1]: session-8.scope: Deactivated successfully.
Nov 1 01:15:52.321574 systemd-logind[1809]: Session 8 logged out. Waiting for processes to exit.
Nov 1 01:15:52.322286 systemd[1]: Started sshd@6-139.178.94.199:22-139.178.89.65:33082.service - OpenSSH per-connection server daemon (139.178.89.65:33082).
Nov 1 01:15:52.322997 systemd-logind[1809]: Removed session 8.
Nov 1 01:15:52.376765 sshd[2049]: Accepted publickey for core from 139.178.89.65 port 33082 ssh2: RSA SHA256:7Ytc58gyNID63TmFqpvnPOSHtyQqpisrsHUaCNkqIsk
Nov 1 01:15:52.378061 sshd[2049]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 1 01:15:52.382549 systemd-logind[1809]: New session 9 of user core.
Nov 1 01:15:52.401577 systemd[1]: Started session-9.scope - Session 9 of User core.
Nov 1 01:15:52.472444 sudo[2052]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Nov 1 01:15:52.472592 sudo[2052]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Nov 1 01:15:52.496152 sudo[2052]: pam_unix(sudo:session): session closed for user root
Nov 1 01:15:52.497643 sshd[2049]: pam_unix(sshd:session): session closed for user core
Nov 1 01:15:52.509476 systemd[1]: sshd@6-139.178.94.199:22-139.178.89.65:33082.service: Deactivated successfully.
Nov 1 01:15:52.510615 systemd[1]: session-9.scope: Deactivated successfully.
Nov 1 01:15:52.511636 systemd-logind[1809]: Session 9 logged out. Waiting for processes to exit.
Nov 1 01:15:52.512786 systemd[1]: Started sshd@7-139.178.94.199:22-139.178.89.65:33088.service - OpenSSH per-connection server daemon (139.178.89.65:33088).
Nov 1 01:15:52.513544 systemd-logind[1809]: Removed session 9.
Nov 1 01:15:52.573982 sshd[2057]: Accepted publickey for core from 139.178.89.65 port 33088 ssh2: RSA SHA256:7Ytc58gyNID63TmFqpvnPOSHtyQqpisrsHUaCNkqIsk
Nov 1 01:15:52.575304 sshd[2057]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 1 01:15:52.580013 systemd-logind[1809]: New session 10 of user core.
Nov 1 01:15:52.592627 systemd[1]: Started session-10.scope - Session 10 of User core.
Nov 1 01:15:52.656191 sudo[2061]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Nov 1 01:15:52.656343 sudo[2061]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Nov 1 01:15:52.658364 sudo[2061]: pam_unix(sudo:session): session closed for user root
Nov 1 01:15:52.660933 sudo[2060]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Nov 1 01:15:52.661082 sudo[2060]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Nov 1 01:15:52.688618 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Nov 1 01:15:52.690104 auditctl[2064]: No rules
Nov 1 01:15:52.690900 systemd[1]: audit-rules.service: Deactivated successfully.
Nov 1 01:15:52.691071 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Nov 1 01:15:52.692562 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Nov 1 01:15:52.731436 augenrules[2082]: No rules
Nov 1 01:15:52.732978 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Nov 1 01:15:52.735226 sudo[2060]: pam_unix(sudo:session): session closed for user root
Nov 1 01:15:52.738531 sshd[2057]: pam_unix(sshd:session): session closed for user core
Nov 1 01:15:52.767033 systemd[1]: sshd@7-139.178.94.199:22-139.178.89.65:33088.service: Deactivated successfully.
Nov 1 01:15:52.771025 systemd[1]: session-10.scope: Deactivated successfully.
Nov 1 01:15:52.774423 systemd-logind[1809]: Session 10 logged out. Waiting for processes to exit.
Nov 1 01:15:52.799135 systemd[1]: Started sshd@8-139.178.94.199:22-139.178.89.65:33092.service - OpenSSH per-connection server daemon (139.178.89.65:33092).
Nov 1 01:15:52.801589 systemd-logind[1809]: Removed session 10.
Nov 1 01:15:52.873664 sshd[2090]: Accepted publickey for core from 139.178.89.65 port 33092 ssh2: RSA SHA256:7Ytc58gyNID63TmFqpvnPOSHtyQqpisrsHUaCNkqIsk
Nov 1 01:15:52.874657 sshd[2090]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 1 01:15:52.878159 systemd-logind[1809]: New session 11 of user core.
Nov 1 01:15:52.893405 systemd[1]: Started session-11.scope - Session 11 of User core.
Nov 1 01:15:52.946233 sudo[2094]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Nov 1 01:15:52.946386 sudo[2094]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Nov 1 01:15:53.224449 systemd[1]: Starting docker.service - Docker Application Container Engine...
Nov 1 01:15:53.224554 (dockerd)[2120]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Nov 1 01:15:53.491653 dockerd[2120]: time="2025-11-01T01:15:53.491568089Z" level=info msg="Starting up"
Nov 1 01:15:53.560436 dockerd[2120]: time="2025-11-01T01:15:53.560391097Z" level=info msg="Loading containers: start."
Nov 1 01:15:53.645270 kernel: Initializing XFRM netlink socket
Nov 1 01:15:53.727982 systemd-networkd[1613]: docker0: Link UP
Nov 1 01:15:53.746219 dockerd[2120]: time="2025-11-01T01:15:53.746144278Z" level=info msg="Loading containers: done."
Nov 1 01:15:53.754278 dockerd[2120]: time="2025-11-01T01:15:53.754254270Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Nov 1 01:15:53.754347 dockerd[2120]: time="2025-11-01T01:15:53.754322033Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Nov 1 01:15:53.754398 dockerd[2120]: time="2025-11-01T01:15:53.754387864Z" level=info msg="Daemon has completed initialization"
Nov 1 01:15:53.754503 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3899042084-merged.mount: Deactivated successfully.
Nov 1 01:15:53.769185 dockerd[2120]: time="2025-11-01T01:15:53.769150709Z" level=info msg="API listen on /run/docker.sock"
Nov 1 01:15:53.769255 systemd[1]: Started docker.service - Docker Application Container Engine.
Nov 1 01:15:54.589595 containerd[1824]: time="2025-11-01T01:15:54.589512825Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\""
Nov 1 01:15:55.156106 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3709253723.mount: Deactivated successfully.
Nov 1 01:15:55.986805 containerd[1824]: time="2025-11-01T01:15:55.986779221Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 01:15:55.987024 containerd[1824]: time="2025-11-01T01:15:55.986953530Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.9: active requests=0, bytes read=28837916"
Nov 1 01:15:55.987435 containerd[1824]: time="2025-11-01T01:15:55.987420706Z" level=info msg="ImageCreate event name:\"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 01:15:55.989039 containerd[1824]: time="2025-11-01T01:15:55.989024861Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 01:15:55.989661 containerd[1824]: time="2025-11-01T01:15:55.989646133Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.9\" with image id \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\", size \"28834515\" in 1.400059025s"
Nov 1 01:15:55.989699 containerd[1824]: time="2025-11-01T01:15:55.989664900Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\" returns image reference \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\""
Nov 1 01:15:55.990023 containerd[1824]: time="2025-11-01T01:15:55.990011250Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\""
Nov 1 01:15:57.061580 containerd[1824]: time="2025-11-01T01:15:57.061555750Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 01:15:57.061836 containerd[1824]: time="2025-11-01T01:15:57.061801132Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.9: active requests=0, bytes read=24787027"
Nov 1 01:15:57.062149 containerd[1824]: time="2025-11-01T01:15:57.062134639Z" level=info msg="ImageCreate event name:\"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 01:15:57.063697 containerd[1824]: time="2025-11-01T01:15:57.063686306Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 01:15:57.064359 containerd[1824]: time="2025-11-01T01:15:57.064345003Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.9\" with image id \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\", size \"26421706\" in 1.07431923s"
Nov 1 01:15:57.064402 containerd[1824]: time="2025-11-01T01:15:57.064360796Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\" returns image reference \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\""
Nov 1 01:15:57.064699 containerd[1824]: time="2025-11-01T01:15:57.064657759Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\""
Nov 1 01:15:58.033137 containerd[1824]: time="2025-11-01T01:15:58.033112723Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 01:15:58.033364 containerd[1824]: time="2025-11-01T01:15:58.033349843Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.9: active requests=0, bytes read=19176289"
Nov 1 01:15:58.033755 containerd[1824]: time="2025-11-01T01:15:58.033745813Z" level=info msg="ImageCreate event name:\"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 01:15:58.035745 containerd[1824]: time="2025-11-01T01:15:58.035703111Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 01:15:58.036254 containerd[1824]: time="2025-11-01T01:15:58.036221259Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.9\" with image id \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\", size \"20810986\" in 971.544518ms"
Nov 1 01:15:58.036254 containerd[1824]: time="2025-11-01T01:15:58.036244054Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\" returns image reference \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\""
Nov 1 01:15:58.036504 containerd[1824]: time="2025-11-01T01:15:58.036478401Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\""
Nov 1 01:15:58.937477 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount699046852.mount: Deactivated successfully.
Nov 1 01:15:59.131411 containerd[1824]: time="2025-11-01T01:15:59.131383030Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 01:15:59.131641 containerd[1824]: time="2025-11-01T01:15:59.131620365Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.9: active requests=0, bytes read=30924206"
Nov 1 01:15:59.132066 containerd[1824]: time="2025-11-01T01:15:59.132053306Z" level=info msg="ImageCreate event name:\"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 01:15:59.133013 containerd[1824]: time="2025-11-01T01:15:59.133000604Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 01:15:59.133475 containerd[1824]: time="2025-11-01T01:15:59.133433472Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.9\" with image id \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\", repo tag \"registry.k8s.io/kube-proxy:v1.32.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\", size \"30923225\" in 1.096940002s"
Nov 1 01:15:59.133475 containerd[1824]: time="2025-11-01T01:15:59.133450187Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\" returns image reference \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\""
Nov 1 01:15:59.133761 containerd[1824]: time="2025-11-01T01:15:59.133712524Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Nov 1 01:15:59.577451 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1400118217.mount: Deactivated successfully.
Nov 1 01:16:00.101015 containerd[1824]: time="2025-11-01T01:16:00.100985479Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 01:16:00.101202 containerd[1824]: time="2025-11-01T01:16:00.101179403Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241"
Nov 1 01:16:00.101705 containerd[1824]: time="2025-11-01T01:16:00.101664441Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 01:16:00.103328 containerd[1824]: time="2025-11-01T01:16:00.103287143Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 01:16:00.103994 containerd[1824]: time="2025-11-01T01:16:00.103951729Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 970.220409ms"
Nov 1 01:16:00.103994 containerd[1824]: time="2025-11-01T01:16:00.103969670Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\""
Nov 1 01:16:00.104246 containerd[1824]: time="2025-11-01T01:16:00.104236785Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Nov 1 01:16:00.605536 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2155246753.mount: Deactivated successfully.
Nov 1 01:16:00.606783 containerd[1824]: time="2025-11-01T01:16:00.606744659Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 01:16:00.607019 containerd[1824]: time="2025-11-01T01:16:00.606974918Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
Nov 1 01:16:00.607457 containerd[1824]: time="2025-11-01T01:16:00.607424028Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 01:16:00.608513 containerd[1824]: time="2025-11-01T01:16:00.608475965Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 01:16:00.608969 containerd[1824]: time="2025-11-01T01:16:00.608936441Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 504.685762ms"
Nov 1 01:16:00.608969 containerd[1824]: time="2025-11-01T01:16:00.608965392Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Nov 1 01:16:00.609297 containerd[1824]: time="2025-11-01T01:16:00.609286632Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\""
Nov 1 01:16:01.129274 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3992221056.mount: Deactivated successfully.
Nov 1 01:16:01.679560 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Nov 1 01:16:01.696555 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 1 01:16:02.035136 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 1 01:16:02.037575 (kubelet)[2480]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Nov 1 01:16:02.059169 kubelet[2480]: E1101 01:16:02.059078 2480 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 1 01:16:02.060851 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 1 01:16:02.060955 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 1 01:16:02.395773 containerd[1824]: time="2025-11-01T01:16:02.395686076Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 01:16:02.395968 containerd[1824]: time="2025-11-01T01:16:02.395897914Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57682056"
Nov 1 01:16:02.396442 containerd[1824]: time="2025-11-01T01:16:02.396406119Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 01:16:02.398222 containerd[1824]: time="2025-11-01T01:16:02.398181700Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 01:16:02.398928 containerd[1824]: time="2025-11-01T01:16:02.398892738Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 1.789589479s"
Nov 1 01:16:02.398928 containerd[1824]: time="2025-11-01T01:16:02.398907178Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\""
Nov 1 01:16:04.061310 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 1 01:16:04.087527 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 1 01:16:04.102774 systemd[1]: Reloading requested from client PID 2551 ('systemctl') (unit session-11.scope)...
Nov 1 01:16:04.102782 systemd[1]: Reloading...
Nov 1 01:16:04.153297 zram_generator::config[2590]: No configuration found.
Nov 1 01:16:04.220796 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Nov 1 01:16:04.281496 systemd[1]: Reloading finished in 178 ms.
Nov 1 01:16:04.332600 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 1 01:16:04.333817 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 1 01:16:04.334884 systemd[1]: kubelet.service: Deactivated successfully.
Nov 1 01:16:04.334991 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 1 01:16:04.335832 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 1 01:16:04.578031 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 1 01:16:04.580511 (kubelet)[2660]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Nov 1 01:16:04.601846 kubelet[2660]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 1 01:16:04.601846 kubelet[2660]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Nov 1 01:16:04.601846 kubelet[2660]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 1 01:16:04.602075 kubelet[2660]: I1101 01:16:04.601843 2660 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Nov 1 01:16:04.907350 kubelet[2660]: I1101 01:16:04.907277 2660 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Nov 1 01:16:04.907350 kubelet[2660]: I1101 01:16:04.907302 2660 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Nov 1 01:16:04.907801 kubelet[2660]: I1101 01:16:04.907793 2660 server.go:954] "Client rotation is on, will bootstrap in background"
Nov 1 01:16:04.930128 kubelet[2660]: E1101 01:16:04.930116 2660 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://139.178.94.199:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 139.178.94.199:6443: connect: connection refused" logger="UnhandledError"
Nov 1 01:16:04.930821 kubelet[2660]: I1101 01:16:04.930790 2660 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Nov 1 01:16:04.934973 kubelet[2660]: E1101 01:16:04.934942 2660 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Nov 1 01:16:04.934973 kubelet[2660]: I1101 01:16:04.934971 2660 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Nov 1 01:16:04.943466 kubelet[2660]: I1101 01:16:04.943432 2660 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Nov 1 01:16:04.944575 kubelet[2660]: I1101 01:16:04.944523 2660 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Nov 1 01:16:04.944663 kubelet[2660]: I1101 01:16:04.944542 2660 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.6-n-61efafd0e9","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Nov 1 01:16:04.944663 kubelet[2660]: I1101 01:16:04.944637 2660 topology_manager.go:138] "Creating topology manager with none policy"
Nov 1 01:16:04.944663 kubelet[2660]: I1101 01:16:04.944643 2660 container_manager_linux.go:304] "Creating device plugin manager"
Nov 1 01:16:04.944785 kubelet[2660]: I1101 01:16:04.944707 2660 state_mem.go:36] "Initialized new in-memory state store"
Nov 1 01:16:04.948079 kubelet[2660]: I1101 01:16:04.948046 2660 kubelet.go:446] "Attempting to sync node with API server"
Nov 1 01:16:04.948079 kubelet[2660]: I1101 01:16:04.948060 2660 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Nov 1 01:16:04.948079 kubelet[2660]: I1101 01:16:04.948069 2660 kubelet.go:352] "Adding apiserver pod source"
Nov 1 01:16:04.948079 kubelet[2660]: I1101 01:16:04.948074 2660 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Nov 1 01:16:04.950698 kubelet[2660]: I1101 01:16:04.950687 2660 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Nov 1 01:16:04.950990 kubelet[2660]: I1101 01:16:04.950982 2660 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Nov 1 01:16:04.951889 kubelet[2660]: W1101 01:16:04.951880 2660 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Nov 1 01:16:04.951933 kubelet[2660]: W1101 01:16:04.951912 2660 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://139.178.94.199:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.6-n-61efafd0e9&limit=500&resourceVersion=0": dial tcp 139.178.94.199:6443: connect: connection refused
Nov 1 01:16:04.951933 kubelet[2660]: W1101 01:16:04.951913 2660 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://139.178.94.199:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 139.178.94.199:6443: connect: connection refused
Nov 1 01:16:04.952028 kubelet[2660]: E1101 01:16:04.951942 2660 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://139.178.94.199:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.6-n-61efafd0e9&limit=500&resourceVersion=0\": dial tcp 139.178.94.199:6443: connect: connection refused" logger="UnhandledError"
Nov 1 01:16:04.952028 kubelet[2660]: E1101 01:16:04.951945 2660 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://139.178.94.199:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 139.178.94.199:6443: connect: connection refused" logger="UnhandledError"
Nov 1 01:16:04.953197 kubelet[2660]: I1101 01:16:04.953189 2660 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Nov 1 01:16:04.953253 kubelet[2660]: I1101 01:16:04.953229 2660 server.go:1287] "Started kubelet"
Nov 1 01:16:04.953324 kubelet[2660]: I1101 01:16:04.953249 2660 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Nov 1 01:16:04.953345 kubelet[2660]: I1101 01:16:04.953296 2660 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Nov 1 01:16:04.953453 kubelet[2660]: I1101 01:16:04.953444 2660 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Nov 1 01:16:04.954244 kubelet[2660]: I1101 01:16:04.954235 2660 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Nov 1 01:16:04.954244 kubelet[2660]: I1101 01:16:04.954239 2660 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Nov 1 01:16:04.954308 kubelet[2660]: I1101 01:16:04.954246 2660 server.go:479] "Adding debug handlers to kubelet server"
Nov 1 01:16:04.954308 kubelet[2660]: I1101 01:16:04.954285 2660 volume_manager.go:297] "Starting Kubelet Volume Manager"
Nov 1 01:16:04.954308 kubelet[2660]: E1101 01:16:04.954294 2660 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-61efafd0e9\" not found"
Nov 1 01:16:04.954395 kubelet[2660]: I1101 01:16:04.954349 2660 reconciler.go:26] "Reconciler: start to sync state"
Nov 1 01:16:04.954528 kubelet[2660]: E1101 01:16:04.954502 2660 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.94.199:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-61efafd0e9?timeout=10s\": dial tcp 139.178.94.199:6443: connect: connection refused" interval="200ms"
Nov 1 01:16:04.954597 kubelet[2660]: W1101 01:16:04.954508 2660 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://139.178.94.199:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.94.199:6443: connect: connection refused
Nov 1 01:16:04.954597 kubelet[2660]: E1101 01:16:04.954560 2660 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://139.178.94.199:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 139.178.94.199:6443: connect: connection refused" logger="UnhandledError"
Nov 1 01:16:04.954674 kubelet[2660]: I1101 01:16:04.954656 2660 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Nov 1 01:16:04.957802 kubelet[2660]: I1101 01:16:04.957784 2660 factory.go:221] Registration of the systemd container factory successfully
Nov 1 01:16:04.957802 kubelet[2660]: E1101 01:16:04.957797 2660 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Nov 1 01:16:04.958069 kubelet[2660]: I1101 01:16:04.957871 2660 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Nov 1 01:16:04.958715 kubelet[2660]: I1101 01:16:04.958705 2660 factory.go:221] Registration of the containerd container factory successfully
Nov 1 01:16:04.959925 kubelet[2660]: E1101 01:16:04.958799 2660 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://139.178.94.199:6443/api/v1/namespaces/default/events\": dial tcp 139.178.94.199:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.6-n-61efafd0e9.1873bd122a90e210 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.6-n-61efafd0e9,UID:ci-4081.3.6-n-61efafd0e9,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.6-n-61efafd0e9,},FirstTimestamp:2025-11-01 01:16:04.953195024 +0000 UTC m=+0.370781380,LastTimestamp:2025-11-01 01:16:04.953195024 +0000 UTC m=+0.370781380,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.6-n-61efafd0e9,}"
Nov 1 01:16:04.965073 kubelet[2660]: I1101 01:16:04.965053 2660 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Nov 1 01:16:04.965656 kubelet[2660]: I1101 01:16:04.965627 2660 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Nov 1 01:16:04.965656 kubelet[2660]: I1101 01:16:04.965657 2660 status_manager.go:227] "Starting to sync pod status with apiserver"
Nov 1 01:16:04.965726 kubelet[2660]: I1101 01:16:04.965689 2660 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Nov 1 01:16:04.965726 kubelet[2660]: I1101 01:16:04.965696 2660 kubelet.go:2382] "Starting kubelet main sync loop"
Nov 1 01:16:04.965767 kubelet[2660]: E1101 01:16:04.965731 2660 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Nov 1 01:16:04.965926 kubelet[2660]: W1101 01:16:04.965894 2660 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://139.178.94.199:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.94.199:6443: connect: connection refused
Nov 1 01:16:04.965926 kubelet[2660]: E1101 01:16:04.965919 2660 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://139.178.94.199:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 139.178.94.199:6443: connect: connection refused" logger="UnhandledError"
Nov 1 01:16:04.975489 kubelet[2660]: I1101 01:16:04.975450 2660 cpu_manager.go:221] "Starting CPU manager" policy="none"
Nov 1 01:16:04.975489 kubelet[2660]: I1101 01:16:04.975457 2660 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Nov 1 01:16:04.975489 kubelet[2660]: I1101 01:16:04.975466 2660 state_mem.go:36] "Initialized new in-memory state store"
Nov 1 01:16:04.976611 kubelet[2660]: I1101 01:16:04.976577 2660 policy_none.go:49] "None policy: Start"
Nov 1 01:16:04.976611 kubelet[2660]: I1101 01:16:04.976584 2660 memory_manager.go:186] "Starting memorymanager" policy="None"
Nov 1 01:16:04.976611 kubelet[2660]: I1101 01:16:04.976590 2660 state_mem.go:35] "Initializing new in-memory state store"
Nov 1 01:16:04.979240 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Nov 1 01:16:04.999097 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Nov 1 01:16:05.001017 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Nov 1 01:16:05.017995 kubelet[2660]: I1101 01:16:05.017948 2660 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Nov 1 01:16:05.018134 kubelet[2660]: I1101 01:16:05.018095 2660 eviction_manager.go:189] "Eviction manager: starting control loop"
Nov 1 01:16:05.018134 kubelet[2660]: I1101 01:16:05.018106 2660 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Nov 1 01:16:05.018298 kubelet[2660]: I1101 01:16:05.018239 2660 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Nov 1 01:16:05.018720 kubelet[2660]: E1101 01:16:05.018677 2660 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Nov 1 01:16:05.018720 kubelet[2660]: E1101 01:16:05.018712 2660 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.6-n-61efafd0e9\" not found"
Nov 1 01:16:05.087976 systemd[1]: Created slice kubepods-burstable-pod83f29ebb19231a41d661a4961cb15721.slice - libcontainer container kubepods-burstable-pod83f29ebb19231a41d661a4961cb15721.slice.
Nov 1 01:16:05.100178 kubelet[2660]: E1101 01:16:05.100068 2660 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-61efafd0e9\" not found" node="ci-4081.3.6-n-61efafd0e9" Nov 1 01:16:05.106995 systemd[1]: Created slice kubepods-burstable-podfe2d94141ae551fe3a594b43a848f2df.slice - libcontainer container kubepods-burstable-podfe2d94141ae551fe3a594b43a848f2df.slice. Nov 1 01:16:05.122320 kubelet[2660]: I1101 01:16:05.122247 2660 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-61efafd0e9" Nov 1 01:16:05.123008 kubelet[2660]: E1101 01:16:05.122907 2660 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://139.178.94.199:6443/api/v1/nodes\": dial tcp 139.178.94.199:6443: connect: connection refused" node="ci-4081.3.6-n-61efafd0e9" Nov 1 01:16:05.125221 kubelet[2660]: E1101 01:16:05.125125 2660 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-61efafd0e9\" not found" node="ci-4081.3.6-n-61efafd0e9" Nov 1 01:16:05.132319 systemd[1]: Created slice kubepods-burstable-pod5d838412f02fb335609ab456ac379473.slice - libcontainer container kubepods-burstable-pod5d838412f02fb335609ab456ac379473.slice. 
Nov 1 01:16:05.136549 kubelet[2660]: E1101 01:16:05.136469 2660 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-61efafd0e9\" not found" node="ci-4081.3.6-n-61efafd0e9" Nov 1 01:16:05.156664 kubelet[2660]: E1101 01:16:05.156540 2660 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.94.199:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-61efafd0e9?timeout=10s\": dial tcp 139.178.94.199:6443: connect: connection refused" interval="400ms" Nov 1 01:16:05.256497 kubelet[2660]: I1101 01:16:05.256368 2660 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/83f29ebb19231a41d661a4961cb15721-k8s-certs\") pod \"kube-apiserver-ci-4081.3.6-n-61efafd0e9\" (UID: \"83f29ebb19231a41d661a4961cb15721\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-61efafd0e9" Nov 1 01:16:05.256776 kubelet[2660]: I1101 01:16:05.256507 2660 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fe2d94141ae551fe3a594b43a848f2df-ca-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-61efafd0e9\" (UID: \"fe2d94141ae551fe3a594b43a848f2df\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-61efafd0e9" Nov 1 01:16:05.256776 kubelet[2660]: I1101 01:16:05.256613 2660 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fe2d94141ae551fe3a594b43a848f2df-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.6-n-61efafd0e9\" (UID: \"fe2d94141ae551fe3a594b43a848f2df\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-61efafd0e9" Nov 1 01:16:05.256776 kubelet[2660]: I1101 01:16:05.256665 2660 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fe2d94141ae551fe3a594b43a848f2df-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-61efafd0e9\" (UID: \"fe2d94141ae551fe3a594b43a848f2df\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-61efafd0e9" Nov 1 01:16:05.256776 kubelet[2660]: I1101 01:16:05.256716 2660 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fe2d94141ae551fe3a594b43a848f2df-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.6-n-61efafd0e9\" (UID: \"fe2d94141ae551fe3a594b43a848f2df\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-61efafd0e9" Nov 1 01:16:05.256776 kubelet[2660]: I1101 01:16:05.256768 2660 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fe2d94141ae551fe3a594b43a848f2df-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.6-n-61efafd0e9\" (UID: \"fe2d94141ae551fe3a594b43a848f2df\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-61efafd0e9" Nov 1 01:16:05.257299 kubelet[2660]: I1101 01:16:05.256819 2660 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5d838412f02fb335609ab456ac379473-kubeconfig\") pod \"kube-scheduler-ci-4081.3.6-n-61efafd0e9\" (UID: \"5d838412f02fb335609ab456ac379473\") " pod="kube-system/kube-scheduler-ci-4081.3.6-n-61efafd0e9" Nov 1 01:16:05.257299 kubelet[2660]: I1101 01:16:05.256865 2660 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/83f29ebb19231a41d661a4961cb15721-ca-certs\") pod \"kube-apiserver-ci-4081.3.6-n-61efafd0e9\" (UID: \"83f29ebb19231a41d661a4961cb15721\") " 
pod="kube-system/kube-apiserver-ci-4081.3.6-n-61efafd0e9" Nov 1 01:16:05.257299 kubelet[2660]: I1101 01:16:05.256917 2660 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/83f29ebb19231a41d661a4961cb15721-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.6-n-61efafd0e9\" (UID: \"83f29ebb19231a41d661a4961cb15721\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-61efafd0e9" Nov 1 01:16:05.331083 kubelet[2660]: I1101 01:16:05.331019 2660 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-61efafd0e9" Nov 1 01:16:05.331908 kubelet[2660]: E1101 01:16:05.331823 2660 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://139.178.94.199:6443/api/v1/nodes\": dial tcp 139.178.94.199:6443: connect: connection refused" node="ci-4081.3.6-n-61efafd0e9" Nov 1 01:16:05.403310 containerd[1824]: time="2025-11-01T01:16:05.403176985Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.6-n-61efafd0e9,Uid:83f29ebb19231a41d661a4961cb15721,Namespace:kube-system,Attempt:0,}" Nov 1 01:16:05.426655 containerd[1824]: time="2025-11-01T01:16:05.426632354Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.6-n-61efafd0e9,Uid:fe2d94141ae551fe3a594b43a848f2df,Namespace:kube-system,Attempt:0,}" Nov 1 01:16:05.437569 containerd[1824]: time="2025-11-01T01:16:05.437555503Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.6-n-61efafd0e9,Uid:5d838412f02fb335609ab456ac379473,Namespace:kube-system,Attempt:0,}" Nov 1 01:16:05.557780 kubelet[2660]: E1101 01:16:05.557709 2660 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.94.199:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-61efafd0e9?timeout=10s\": dial tcp 139.178.94.199:6443: connect: 
connection refused" interval="800ms" Nov 1 01:16:05.734009 kubelet[2660]: I1101 01:16:05.733960 2660 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-61efafd0e9" Nov 1 01:16:05.734279 kubelet[2660]: E1101 01:16:05.734251 2660 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://139.178.94.199:6443/api/v1/nodes\": dial tcp 139.178.94.199:6443: connect: connection refused" node="ci-4081.3.6-n-61efafd0e9" Nov 1 01:16:05.826703 kubelet[2660]: W1101 01:16:05.826601 2660 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://139.178.94.199:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 139.178.94.199:6443: connect: connection refused Nov 1 01:16:05.826703 kubelet[2660]: E1101 01:16:05.826643 2660 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://139.178.94.199:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 139.178.94.199:6443: connect: connection refused" logger="UnhandledError" Nov 1 01:16:05.895649 kubelet[2660]: W1101 01:16:05.895582 2660 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://139.178.94.199:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.94.199:6443: connect: connection refused Nov 1 01:16:05.895649 kubelet[2660]: E1101 01:16:05.895626 2660 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://139.178.94.199:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 139.178.94.199:6443: connect: connection refused" logger="UnhandledError" Nov 1 01:16:05.944583 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount2963366136.mount: Deactivated successfully. Nov 1 01:16:05.946161 containerd[1824]: time="2025-11-01T01:16:05.946140873Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 1 01:16:05.946826 containerd[1824]: time="2025-11-01T01:16:05.946808891Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 1 01:16:05.947026 containerd[1824]: time="2025-11-01T01:16:05.946994226Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Nov 1 01:16:05.947395 containerd[1824]: time="2025-11-01T01:16:05.947381112Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 1 01:16:05.947706 containerd[1824]: time="2025-11-01T01:16:05.947657140Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Nov 1 01:16:05.948065 containerd[1824]: time="2025-11-01T01:16:05.948051356Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 1 01:16:05.948154 containerd[1824]: time="2025-11-01T01:16:05.948136653Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Nov 1 01:16:05.949189 containerd[1824]: time="2025-11-01T01:16:05.949176465Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} 
labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 1 01:16:05.950850 containerd[1824]: time="2025-11-01T01:16:05.950837964Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 513.252233ms" Nov 1 01:16:05.951594 containerd[1824]: time="2025-11-01T01:16:05.951581032Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 548.237227ms" Nov 1 01:16:05.952695 containerd[1824]: time="2025-11-01T01:16:05.952682536Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 526.00724ms" Nov 1 01:16:06.064213 containerd[1824]: time="2025-11-01T01:16:06.064156976Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 01:16:06.064213 containerd[1824]: time="2025-11-01T01:16:06.064191965Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 01:16:06.064213 containerd[1824]: time="2025-11-01T01:16:06.064206866Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 01:16:06.064342 containerd[1824]: time="2025-11-01T01:16:06.064264445Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 01:16:06.064684 containerd[1824]: time="2025-11-01T01:16:06.064466442Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 01:16:06.064684 containerd[1824]: time="2025-11-01T01:16:06.064657461Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 01:16:06.064684 containerd[1824]: time="2025-11-01T01:16:06.064680626Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 01:16:06.064790 containerd[1824]: time="2025-11-01T01:16:06.064689148Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 01:16:06.064790 containerd[1824]: time="2025-11-01T01:16:06.064686117Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 01:16:06.064790 containerd[1824]: time="2025-11-01T01:16:06.064694400Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 01:16:06.064790 containerd[1824]: time="2025-11-01T01:16:06.064735450Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 01:16:06.064790 containerd[1824]: time="2025-11-01T01:16:06.064737644Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 01:16:06.091385 systemd[1]: Started cri-containerd-5193caed446f6ec280efd612f937cf7469c9a4d9a6c48a93f78a4a45e4991e11.scope - libcontainer container 5193caed446f6ec280efd612f937cf7469c9a4d9a6c48a93f78a4a45e4991e11. Nov 1 01:16:06.092216 systemd[1]: Started cri-containerd-8b5ed62f2f176f8d562d419d7fd75daabf930d18aaaaf52c7da8b2511848c54b.scope - libcontainer container 8b5ed62f2f176f8d562d419d7fd75daabf930d18aaaaf52c7da8b2511848c54b. Nov 1 01:16:06.093066 systemd[1]: Started cri-containerd-b8fdc2f1c0a32963504646944c8ba4efdd398a03d97c00cecffa2694303e62b2.scope - libcontainer container b8fdc2f1c0a32963504646944c8ba4efdd398a03d97c00cecffa2694303e62b2. Nov 1 01:16:06.121238 containerd[1824]: time="2025-11-01T01:16:06.121198110Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.6-n-61efafd0e9,Uid:83f29ebb19231a41d661a4961cb15721,Namespace:kube-system,Attempt:0,} returns sandbox id \"b8fdc2f1c0a32963504646944c8ba4efdd398a03d97c00cecffa2694303e62b2\"" Nov 1 01:16:06.122754 containerd[1824]: time="2025-11-01T01:16:06.122736233Z" level=info msg="CreateContainer within sandbox \"b8fdc2f1c0a32963504646944c8ba4efdd398a03d97c00cecffa2694303e62b2\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 1 01:16:06.125146 containerd[1824]: time="2025-11-01T01:16:06.125124031Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.6-n-61efafd0e9,Uid:5d838412f02fb335609ab456ac379473,Namespace:kube-system,Attempt:0,} returns sandbox id \"5193caed446f6ec280efd612f937cf7469c9a4d9a6c48a93f78a4a45e4991e11\"" Nov 1 01:16:06.126101 containerd[1824]: time="2025-11-01T01:16:06.126081264Z" level=info msg="CreateContainer within sandbox \"5193caed446f6ec280efd612f937cf7469c9a4d9a6c48a93f78a4a45e4991e11\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 1 01:16:06.126612 containerd[1824]: time="2025-11-01T01:16:06.126599699Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.6-n-61efafd0e9,Uid:fe2d94141ae551fe3a594b43a848f2df,Namespace:kube-system,Attempt:0,} returns sandbox id \"8b5ed62f2f176f8d562d419d7fd75daabf930d18aaaaf52c7da8b2511848c54b\"" Nov 1 01:16:06.127416 containerd[1824]: time="2025-11-01T01:16:06.127399769Z" level=info msg="CreateContainer within sandbox \"8b5ed62f2f176f8d562d419d7fd75daabf930d18aaaaf52c7da8b2511848c54b\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 1 01:16:06.128124 containerd[1824]: time="2025-11-01T01:16:06.128111542Z" level=info msg="CreateContainer within sandbox \"b8fdc2f1c0a32963504646944c8ba4efdd398a03d97c00cecffa2694303e62b2\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"e8bb6d8957ce29209aedcafc003c963a00d0be5f0bf7dcedce8ab9bfc33b9f4b\"" Nov 1 01:16:06.128473 containerd[1824]: time="2025-11-01T01:16:06.128449544Z" level=info msg="StartContainer for \"e8bb6d8957ce29209aedcafc003c963a00d0be5f0bf7dcedce8ab9bfc33b9f4b\"" Nov 1 01:16:06.131029 containerd[1824]: time="2025-11-01T01:16:06.131011506Z" level=info msg="CreateContainer within sandbox \"5193caed446f6ec280efd612f937cf7469c9a4d9a6c48a93f78a4a45e4991e11\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"dd8c6ffa5f6d43a65022dacf9be8f01a83e43274844d629369f86ad3bf58ff94\"" Nov 1 01:16:06.131304 containerd[1824]: time="2025-11-01T01:16:06.131287940Z" level=info msg="StartContainer for \"dd8c6ffa5f6d43a65022dacf9be8f01a83e43274844d629369f86ad3bf58ff94\"" Nov 1 01:16:06.134178 containerd[1824]: time="2025-11-01T01:16:06.134135099Z" level=info msg="CreateContainer within sandbox \"8b5ed62f2f176f8d562d419d7fd75daabf930d18aaaaf52c7da8b2511848c54b\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"9276636983aa21e510709f145ca51804f72f88813d7ec70fe52cd8376fd37e47\"" Nov 1 01:16:06.134433 containerd[1824]: time="2025-11-01T01:16:06.134413394Z" 
level=info msg="StartContainer for \"9276636983aa21e510709f145ca51804f72f88813d7ec70fe52cd8376fd37e47\"" Nov 1 01:16:06.156502 systemd[1]: Started cri-containerd-e8bb6d8957ce29209aedcafc003c963a00d0be5f0bf7dcedce8ab9bfc33b9f4b.scope - libcontainer container e8bb6d8957ce29209aedcafc003c963a00d0be5f0bf7dcedce8ab9bfc33b9f4b. Nov 1 01:16:06.158516 systemd[1]: Started cri-containerd-9276636983aa21e510709f145ca51804f72f88813d7ec70fe52cd8376fd37e47.scope - libcontainer container 9276636983aa21e510709f145ca51804f72f88813d7ec70fe52cd8376fd37e47. Nov 1 01:16:06.159076 systemd[1]: Started cri-containerd-dd8c6ffa5f6d43a65022dacf9be8f01a83e43274844d629369f86ad3bf58ff94.scope - libcontainer container dd8c6ffa5f6d43a65022dacf9be8f01a83e43274844d629369f86ad3bf58ff94. Nov 1 01:16:06.179770 containerd[1824]: time="2025-11-01T01:16:06.179744216Z" level=info msg="StartContainer for \"e8bb6d8957ce29209aedcafc003c963a00d0be5f0bf7dcedce8ab9bfc33b9f4b\" returns successfully" Nov 1 01:16:06.181523 containerd[1824]: time="2025-11-01T01:16:06.181501900Z" level=info msg="StartContainer for \"dd8c6ffa5f6d43a65022dacf9be8f01a83e43274844d629369f86ad3bf58ff94\" returns successfully" Nov 1 01:16:06.181602 containerd[1824]: time="2025-11-01T01:16:06.181502378Z" level=info msg="StartContainer for \"9276636983aa21e510709f145ca51804f72f88813d7ec70fe52cd8376fd37e47\" returns successfully" Nov 1 01:16:06.195045 kubelet[2660]: W1101 01:16:06.195008 2660 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://139.178.94.199:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.94.199:6443: connect: connection refused Nov 1 01:16:06.195121 kubelet[2660]: E1101 01:16:06.195053 2660 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://139.178.94.199:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial 
tcp 139.178.94.199:6443: connect: connection refused" logger="UnhandledError" Nov 1 01:16:06.536006 kubelet[2660]: I1101 01:16:06.535965 2660 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-61efafd0e9" Nov 1 01:16:06.741439 kubelet[2660]: E1101 01:16:06.741417 2660 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081.3.6-n-61efafd0e9\" not found" node="ci-4081.3.6-n-61efafd0e9" Nov 1 01:16:06.835080 kubelet[2660]: I1101 01:16:06.834988 2660 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081.3.6-n-61efafd0e9" Nov 1 01:16:06.835080 kubelet[2660]: E1101 01:16:06.835016 2660 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4081.3.6-n-61efafd0e9\": node \"ci-4081.3.6-n-61efafd0e9\" not found" Nov 1 01:16:06.855135 kubelet[2660]: I1101 01:16:06.855089 2660 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-n-61efafd0e9" Nov 1 01:16:06.857943 kubelet[2660]: E1101 01:16:06.857902 2660 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.6-n-61efafd0e9\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081.3.6-n-61efafd0e9" Nov 1 01:16:06.857943 kubelet[2660]: I1101 01:16:06.857912 2660 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-61efafd0e9" Nov 1 01:16:06.858664 kubelet[2660]: E1101 01:16:06.858638 2660 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081.3.6-n-61efafd0e9\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-61efafd0e9" Nov 1 01:16:06.858664 kubelet[2660]: I1101 01:16:06.858649 2660 kubelet.go:3194] "Creating a mirror pod for static pod" 
pod="kube-system/kube-scheduler-ci-4081.3.6-n-61efafd0e9" Nov 1 01:16:06.859360 kubelet[2660]: E1101 01:16:06.859322 2660 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.6-n-61efafd0e9\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081.3.6-n-61efafd0e9" Nov 1 01:16:06.949400 kubelet[2660]: I1101 01:16:06.949313 2660 apiserver.go:52] "Watching apiserver" Nov 1 01:16:06.954926 kubelet[2660]: I1101 01:16:06.954834 2660 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 1 01:16:06.973019 kubelet[2660]: I1101 01:16:06.972984 2660 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-61efafd0e9" Nov 1 01:16:06.975727 kubelet[2660]: I1101 01:16:06.975642 2660 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-n-61efafd0e9" Nov 1 01:16:06.978681 kubelet[2660]: E1101 01:16:06.978247 2660 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081.3.6-n-61efafd0e9\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-61efafd0e9" Nov 1 01:16:06.980635 kubelet[2660]: E1101 01:16:06.980562 2660 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.6-n-61efafd0e9\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081.3.6-n-61efafd0e9" Nov 1 01:16:06.981749 kubelet[2660]: I1101 01:16:06.981706 2660 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.6-n-61efafd0e9" Nov 1 01:16:06.984980 kubelet[2660]: E1101 01:16:06.984894 2660 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.6-n-61efafd0e9\" is forbidden: no PriorityClass with name system-node-critical 
was found" pod="kube-system/kube-scheduler-ci-4081.3.6-n-61efafd0e9" Nov 1 01:16:07.984530 kubelet[2660]: I1101 01:16:07.984446 2660 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.6-n-61efafd0e9" Nov 1 01:16:07.985504 kubelet[2660]: I1101 01:16:07.984629 2660 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-n-61efafd0e9" Nov 1 01:16:07.990479 kubelet[2660]: W1101 01:16:07.990422 2660 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Nov 1 01:16:07.990768 kubelet[2660]: W1101 01:16:07.990718 2660 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Nov 1 01:16:09.041229 systemd[1]: Reloading requested from client PID 2986 ('systemctl') (unit session-11.scope)... Nov 1 01:16:09.041237 systemd[1]: Reloading... Nov 1 01:16:09.089246 zram_generator::config[3025]: No configuration found. Nov 1 01:16:09.164526 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 1 01:16:09.232250 systemd[1]: Reloading finished in 190 ms. Nov 1 01:16:09.285797 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 1 01:16:09.299550 systemd[1]: kubelet.service: Deactivated successfully. Nov 1 01:16:09.299667 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 1 01:16:09.310574 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 1 01:16:09.550059 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Nov 1 01:16:09.553227 (kubelet)[3089]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 1 01:16:09.574617 kubelet[3089]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 1 01:16:09.574617 kubelet[3089]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 1 01:16:09.574617 kubelet[3089]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 1 01:16:09.574840 kubelet[3089]: I1101 01:16:09.574650 3089 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 1 01:16:09.577944 kubelet[3089]: I1101 01:16:09.577905 3089 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Nov 1 01:16:09.577944 kubelet[3089]: I1101 01:16:09.577915 3089 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 1 01:16:09.578067 kubelet[3089]: I1101 01:16:09.578037 3089 server.go:954] "Client rotation is on, will bootstrap in background" Nov 1 01:16:09.578718 kubelet[3089]: I1101 01:16:09.578683 3089 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Nov 1 01:16:09.579860 kubelet[3089]: I1101 01:16:09.579822 3089 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 1 01:16:09.581779 kubelet[3089]: E1101 01:16:09.581765 3089 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Nov 1 01:16:09.581779 kubelet[3089]: I1101 01:16:09.581780 3089 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Nov 1 01:16:09.588392 kubelet[3089]: I1101 01:16:09.588342 3089 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Nov 1 01:16:09.588494 kubelet[3089]: I1101 01:16:09.588441 3089 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 1 01:16:09.588576 kubelet[3089]: I1101 01:16:09.588455 3089 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"ci-4081.3.6-n-61efafd0e9","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 1 01:16:09.588576 kubelet[3089]: I1101 01:16:09.588551 3089 topology_manager.go:138] "Creating topology manager with none policy" Nov 1 01:16:09.588576 kubelet[3089]: I1101 01:16:09.588556 3089 container_manager_linux.go:304] "Creating device plugin manager" Nov 1 01:16:09.588673 kubelet[3089]: I1101 01:16:09.588587 3089 state_mem.go:36] "Initialized new in-memory state store" Nov 1 01:16:09.588736 kubelet[3089]: I1101 01:16:09.588697 3089 kubelet.go:446] 
"Attempting to sync node with API server" Nov 1 01:16:09.588736 kubelet[3089]: I1101 01:16:09.588708 3089 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 1 01:16:09.588736 kubelet[3089]: I1101 01:16:09.588718 3089 kubelet.go:352] "Adding apiserver pod source" Nov 1 01:16:09.588736 kubelet[3089]: I1101 01:16:09.588723 3089 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 1 01:16:09.589046 kubelet[3089]: I1101 01:16:09.589028 3089 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Nov 1 01:16:09.589317 kubelet[3089]: I1101 01:16:09.589282 3089 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 1 01:16:09.589514 kubelet[3089]: I1101 01:16:09.589509 3089 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 1 01:16:09.589534 kubelet[3089]: I1101 01:16:09.589524 3089 server.go:1287] "Started kubelet" Nov 1 01:16:09.589676 kubelet[3089]: I1101 01:16:09.589635 3089 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Nov 1 01:16:09.589906 kubelet[3089]: I1101 01:16:09.589691 3089 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 1 01:16:09.590232 kubelet[3089]: I1101 01:16:09.590214 3089 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 1 01:16:09.591421 kubelet[3089]: I1101 01:16:09.591412 3089 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 1 01:16:09.591492 kubelet[3089]: I1101 01:16:09.591422 3089 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 1 01:16:09.591492 kubelet[3089]: I1101 01:16:09.591438 3089 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 1 01:16:09.591492 kubelet[3089]: E1101 01:16:09.591456 3089 
kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-61efafd0e9\" not found" Nov 1 01:16:09.591492 kubelet[3089]: I1101 01:16:09.591486 3089 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 1 01:16:09.591614 kubelet[3089]: I1101 01:16:09.591517 3089 server.go:479] "Adding debug handlers to kubelet server" Nov 1 01:16:09.591614 kubelet[3089]: I1101 01:16:09.591574 3089 reconciler.go:26] "Reconciler: start to sync state" Nov 1 01:16:09.591850 kubelet[3089]: E1101 01:16:09.591817 3089 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 1 01:16:09.591850 kubelet[3089]: I1101 01:16:09.591845 3089 factory.go:221] Registration of the systemd container factory successfully Nov 1 01:16:09.591927 kubelet[3089]: I1101 01:16:09.591896 3089 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 1 01:16:09.592382 kubelet[3089]: I1101 01:16:09.592373 3089 factory.go:221] Registration of the containerd container factory successfully Nov 1 01:16:09.597589 kubelet[3089]: I1101 01:16:09.597561 3089 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 1 01:16:09.598062 kubelet[3089]: I1101 01:16:09.598051 3089 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Nov 1 01:16:09.598120 kubelet[3089]: I1101 01:16:09.598067 3089 status_manager.go:227] "Starting to sync pod status with apiserver" Nov 1 01:16:09.598120 kubelet[3089]: I1101 01:16:09.598078 3089 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Nov 1 01:16:09.598120 kubelet[3089]: I1101 01:16:09.598083 3089 kubelet.go:2382] "Starting kubelet main sync loop" Nov 1 01:16:09.598120 kubelet[3089]: E1101 01:16:09.598109 3089 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 1 01:16:09.606959 kubelet[3089]: I1101 01:16:09.606932 3089 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 1 01:16:09.606959 kubelet[3089]: I1101 01:16:09.606941 3089 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 1 01:16:09.606959 kubelet[3089]: I1101 01:16:09.606950 3089 state_mem.go:36] "Initialized new in-memory state store" Nov 1 01:16:09.607055 kubelet[3089]: I1101 01:16:09.607040 3089 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 1 01:16:09.607055 kubelet[3089]: I1101 01:16:09.607047 3089 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 1 01:16:09.607089 kubelet[3089]: I1101 01:16:09.607057 3089 policy_none.go:49] "None policy: Start" Nov 1 01:16:09.607089 kubelet[3089]: I1101 01:16:09.607062 3089 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 1 01:16:09.607089 kubelet[3089]: I1101 01:16:09.607067 3089 state_mem.go:35] "Initializing new in-memory state store" Nov 1 01:16:09.607140 kubelet[3089]: I1101 01:16:09.607134 3089 state_mem.go:75] "Updated machine memory state" Nov 1 01:16:09.608965 kubelet[3089]: I1101 01:16:09.608928 3089 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 1 01:16:09.609048 kubelet[3089]: I1101 01:16:09.609010 3089 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 1 01:16:09.609048 kubelet[3089]: I1101 01:16:09.609017 3089 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 1 01:16:09.609092 kubelet[3089]: I1101 01:16:09.609087 3089 plugin_manager.go:118] "Starting 
Kubelet Plugin Manager" Nov 1 01:16:09.609536 kubelet[3089]: E1101 01:16:09.609497 3089 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Nov 1 01:16:09.700177 kubelet[3089]: I1101 01:16:09.700064 3089 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-n-61efafd0e9" Nov 1 01:16:09.700459 kubelet[3089]: I1101 01:16:09.700241 3089 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.6-n-61efafd0e9" Nov 1 01:16:09.700459 kubelet[3089]: I1101 01:16:09.700378 3089 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-61efafd0e9" Nov 1 01:16:09.708935 kubelet[3089]: W1101 01:16:09.708890 3089 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Nov 1 01:16:09.708935 kubelet[3089]: W1101 01:16:09.708905 3089 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Nov 1 01:16:09.709296 kubelet[3089]: E1101 01:16:09.709066 3089 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.6-n-61efafd0e9\" already exists" pod="kube-system/kube-scheduler-ci-4081.3.6-n-61efafd0e9" Nov 1 01:16:09.709296 kubelet[3089]: W1101 01:16:09.709147 3089 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Nov 1 01:16:09.709296 kubelet[3089]: E1101 01:16:09.709284 3089 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.6-n-61efafd0e9\" already exists" pod="kube-system/kube-apiserver-ci-4081.3.6-n-61efafd0e9" Nov 1 01:16:09.716111 kubelet[3089]: I1101 
01:16:09.716026 3089 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-61efafd0e9" Nov 1 01:16:09.725142 kubelet[3089]: I1101 01:16:09.725078 3089 kubelet_node_status.go:124] "Node was previously registered" node="ci-4081.3.6-n-61efafd0e9" Nov 1 01:16:09.725372 kubelet[3089]: I1101 01:16:09.725285 3089 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081.3.6-n-61efafd0e9" Nov 1 01:16:09.793346 kubelet[3089]: I1101 01:16:09.793220 3089 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/83f29ebb19231a41d661a4961cb15721-ca-certs\") pod \"kube-apiserver-ci-4081.3.6-n-61efafd0e9\" (UID: \"83f29ebb19231a41d661a4961cb15721\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-61efafd0e9" Nov 1 01:16:09.793723 kubelet[3089]: I1101 01:16:09.793362 3089 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fe2d94141ae551fe3a594b43a848f2df-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.6-n-61efafd0e9\" (UID: \"fe2d94141ae551fe3a594b43a848f2df\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-61efafd0e9" Nov 1 01:16:09.793723 kubelet[3089]: I1101 01:16:09.793442 3089 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fe2d94141ae551fe3a594b43a848f2df-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.6-n-61efafd0e9\" (UID: \"fe2d94141ae551fe3a594b43a848f2df\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-61efafd0e9" Nov 1 01:16:09.793723 kubelet[3089]: I1101 01:16:09.793495 3089 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5d838412f02fb335609ab456ac379473-kubeconfig\") pod 
\"kube-scheduler-ci-4081.3.6-n-61efafd0e9\" (UID: \"5d838412f02fb335609ab456ac379473\") " pod="kube-system/kube-scheduler-ci-4081.3.6-n-61efafd0e9" Nov 1 01:16:09.793723 kubelet[3089]: I1101 01:16:09.793545 3089 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/83f29ebb19231a41d661a4961cb15721-k8s-certs\") pod \"kube-apiserver-ci-4081.3.6-n-61efafd0e9\" (UID: \"83f29ebb19231a41d661a4961cb15721\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-61efafd0e9" Nov 1 01:16:09.793723 kubelet[3089]: I1101 01:16:09.793602 3089 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/83f29ebb19231a41d661a4961cb15721-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.6-n-61efafd0e9\" (UID: \"83f29ebb19231a41d661a4961cb15721\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-61efafd0e9" Nov 1 01:16:09.794273 kubelet[3089]: I1101 01:16:09.793650 3089 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fe2d94141ae551fe3a594b43a848f2df-ca-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-61efafd0e9\" (UID: \"fe2d94141ae551fe3a594b43a848f2df\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-61efafd0e9" Nov 1 01:16:09.794273 kubelet[3089]: I1101 01:16:09.793725 3089 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fe2d94141ae551fe3a594b43a848f2df-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-61efafd0e9\" (UID: \"fe2d94141ae551fe3a594b43a848f2df\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-61efafd0e9" Nov 1 01:16:09.794273 kubelet[3089]: I1101 01:16:09.793776 3089 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fe2d94141ae551fe3a594b43a848f2df-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.6-n-61efafd0e9\" (UID: \"fe2d94141ae551fe3a594b43a848f2df\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-61efafd0e9" Nov 1 01:16:10.589037 kubelet[3089]: I1101 01:16:10.588978 3089 apiserver.go:52] "Watching apiserver" Nov 1 01:16:10.592424 kubelet[3089]: I1101 01:16:10.592414 3089 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 1 01:16:10.601900 kubelet[3089]: I1101 01:16:10.601852 3089 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-61efafd0e9" Nov 1 01:16:10.601990 kubelet[3089]: I1101 01:16:10.601934 3089 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.6-n-61efafd0e9" Nov 1 01:16:10.601990 kubelet[3089]: I1101 01:16:10.601946 3089 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-n-61efafd0e9" Nov 1 01:16:10.604980 kubelet[3089]: W1101 01:16:10.604970 3089 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Nov 1 01:16:10.605029 kubelet[3089]: E1101 01:16:10.604992 3089 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.6-n-61efafd0e9\" already exists" pod="kube-system/kube-apiserver-ci-4081.3.6-n-61efafd0e9" Nov 1 01:16:10.605338 kubelet[3089]: W1101 01:16:10.605304 3089 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Nov 1 01:16:10.605338 kubelet[3089]: W1101 01:16:10.605328 3089 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain 
dots] Nov 1 01:16:10.605403 kubelet[3089]: E1101 01:16:10.605346 3089 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.6-n-61efafd0e9\" already exists" pod="kube-system/kube-scheduler-ci-4081.3.6-n-61efafd0e9" Nov 1 01:16:10.605403 kubelet[3089]: E1101 01:16:10.605349 3089 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081.3.6-n-61efafd0e9\" already exists" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-61efafd0e9" Nov 1 01:16:10.616382 kubelet[3089]: I1101 01:16:10.616353 3089 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.3.6-n-61efafd0e9" podStartSLOduration=3.616343015 podStartE2EDuration="3.616343015s" podCreationTimestamp="2025-11-01 01:16:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 01:16:10.612601553 +0000 UTC m=+1.057236676" watchObservedRunningTime="2025-11-01 01:16:10.616343015 +0000 UTC m=+1.060978134" Nov 1 01:16:10.616486 kubelet[3089]: I1101 01:16:10.616413 3089 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.3.6-n-61efafd0e9" podStartSLOduration=3.616410364 podStartE2EDuration="3.616410364s" podCreationTimestamp="2025-11-01 01:16:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 01:16:10.616270782 +0000 UTC m=+1.060905902" watchObservedRunningTime="2025-11-01 01:16:10.616410364 +0000 UTC m=+1.061045480" Nov 1 01:16:10.620316 kubelet[3089]: I1101 01:16:10.620251 3089 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-61efafd0e9" podStartSLOduration=1.6202445939999999 podStartE2EDuration="1.620244594s" podCreationTimestamp="2025-11-01 01:16:09 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 01:16:10.620231519 +0000 UTC m=+1.064866639" watchObservedRunningTime="2025-11-01 01:16:10.620244594 +0000 UTC m=+1.064879710" Nov 1 01:16:14.048972 kubelet[3089]: I1101 01:16:14.048857 3089 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Nov 1 01:16:14.049849 containerd[1824]: time="2025-11-01T01:16:14.049601957Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Nov 1 01:16:14.050531 kubelet[3089]: I1101 01:16:14.050072 3089 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 1 01:16:15.168762 systemd[1]: Created slice kubepods-besteffort-pod382757b8_02d7_4737_bc63_c11221b4a046.slice - libcontainer container kubepods-besteffort-pod382757b8_02d7_4737_bc63_c11221b4a046.slice. Nov 1 01:16:15.229579 kubelet[3089]: I1101 01:16:15.229500 3089 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/382757b8-02d7-4737-bc63-c11221b4a046-lib-modules\") pod \"kube-proxy-97mvs\" (UID: \"382757b8-02d7-4737-bc63-c11221b4a046\") " pod="kube-system/kube-proxy-97mvs" Nov 1 01:16:15.230619 kubelet[3089]: I1101 01:16:15.229623 3089 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/382757b8-02d7-4737-bc63-c11221b4a046-kube-proxy\") pod \"kube-proxy-97mvs\" (UID: \"382757b8-02d7-4737-bc63-c11221b4a046\") " pod="kube-system/kube-proxy-97mvs" Nov 1 01:16:15.230619 kubelet[3089]: I1101 01:16:15.229725 3089 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/382757b8-02d7-4737-bc63-c11221b4a046-xtables-lock\") pod 
\"kube-proxy-97mvs\" (UID: \"382757b8-02d7-4737-bc63-c11221b4a046\") " pod="kube-system/kube-proxy-97mvs" Nov 1 01:16:15.230619 kubelet[3089]: I1101 01:16:15.229830 3089 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6l6c\" (UniqueName: \"kubernetes.io/projected/382757b8-02d7-4737-bc63-c11221b4a046-kube-api-access-d6l6c\") pod \"kube-proxy-97mvs\" (UID: \"382757b8-02d7-4737-bc63-c11221b4a046\") " pod="kube-system/kube-proxy-97mvs" Nov 1 01:16:15.296546 systemd[1]: Created slice kubepods-besteffort-pod644c1187_3c66_4da3_855f_f2b0e83ac08c.slice - libcontainer container kubepods-besteffort-pod644c1187_3c66_4da3_855f_f2b0e83ac08c.slice. Nov 1 01:16:15.330937 kubelet[3089]: I1101 01:16:15.330830 3089 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h65jt\" (UniqueName: \"kubernetes.io/projected/644c1187-3c66-4da3-855f-f2b0e83ac08c-kube-api-access-h65jt\") pod \"tigera-operator-7dcd859c48-xvzj5\" (UID: \"644c1187-3c66-4da3-855f-f2b0e83ac08c\") " pod="tigera-operator/tigera-operator-7dcd859c48-xvzj5" Nov 1 01:16:15.331240 kubelet[3089]: I1101 01:16:15.330964 3089 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/644c1187-3c66-4da3-855f-f2b0e83ac08c-var-lib-calico\") pod \"tigera-operator-7dcd859c48-xvzj5\" (UID: \"644c1187-3c66-4da3-855f-f2b0e83ac08c\") " pod="tigera-operator/tigera-operator-7dcd859c48-xvzj5" Nov 1 01:16:15.487839 containerd[1824]: time="2025-11-01T01:16:15.487711847Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-97mvs,Uid:382757b8-02d7-4737-bc63-c11221b4a046,Namespace:kube-system,Attempt:0,}" Nov 1 01:16:15.564162 containerd[1824]: time="2025-11-01T01:16:15.563924954Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 01:16:15.564162 containerd[1824]: time="2025-11-01T01:16:15.564152412Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 01:16:15.564162 containerd[1824]: time="2025-11-01T01:16:15.564161423Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 01:16:15.564278 containerd[1824]: time="2025-11-01T01:16:15.564207432Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 01:16:15.584396 systemd[1]: Started cri-containerd-287302e36b1d366c846ccf44352326fa7093c9d2dd304a3b08bd9c88f0116b3f.scope - libcontainer container 287302e36b1d366c846ccf44352326fa7093c9d2dd304a3b08bd9c88f0116b3f. Nov 1 01:16:15.598172 containerd[1824]: time="2025-11-01T01:16:15.598142913Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-97mvs,Uid:382757b8-02d7-4737-bc63-c11221b4a046,Namespace:kube-system,Attempt:0,} returns sandbox id \"287302e36b1d366c846ccf44352326fa7093c9d2dd304a3b08bd9c88f0116b3f\"" Nov 1 01:16:15.600139 containerd[1824]: time="2025-11-01T01:16:15.600113072Z" level=info msg="CreateContainer within sandbox \"287302e36b1d366c846ccf44352326fa7093c9d2dd304a3b08bd9c88f0116b3f\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 1 01:16:15.602475 containerd[1824]: time="2025-11-01T01:16:15.602421090Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-xvzj5,Uid:644c1187-3c66-4da3-855f-f2b0e83ac08c,Namespace:tigera-operator,Attempt:0,}" Nov 1 01:16:15.664001 containerd[1824]: time="2025-11-01T01:16:15.663929649Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 01:16:15.664001 containerd[1824]: time="2025-11-01T01:16:15.663973736Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 01:16:15.664001 containerd[1824]: time="2025-11-01T01:16:15.663984791Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 01:16:15.664275 containerd[1824]: time="2025-11-01T01:16:15.664028711Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 01:16:15.675396 systemd[1]: Started cri-containerd-0a72b57440ed3149f1ba5bab869ee7125b997e32d8b07c5ebb94417c235727b6.scope - libcontainer container 0a72b57440ed3149f1ba5bab869ee7125b997e32d8b07c5ebb94417c235727b6. Nov 1 01:16:15.701462 containerd[1824]: time="2025-11-01T01:16:15.701406992Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-xvzj5,Uid:644c1187-3c66-4da3-855f-f2b0e83ac08c,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"0a72b57440ed3149f1ba5bab869ee7125b997e32d8b07c5ebb94417c235727b6\"" Nov 1 01:16:15.702252 containerd[1824]: time="2025-11-01T01:16:15.702236787Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Nov 1 01:16:15.790485 containerd[1824]: time="2025-11-01T01:16:15.790371431Z" level=info msg="CreateContainer within sandbox \"287302e36b1d366c846ccf44352326fa7093c9d2dd304a3b08bd9c88f0116b3f\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"47757e90c70ebb0a3ca3187f63b5a52d2b5be7b288eb794110858fb45d684703\"" Nov 1 01:16:15.790892 containerd[1824]: time="2025-11-01T01:16:15.790833413Z" level=info msg="StartContainer for \"47757e90c70ebb0a3ca3187f63b5a52d2b5be7b288eb794110858fb45d684703\"" Nov 1 01:16:15.816496 systemd[1]: Started 
cri-containerd-47757e90c70ebb0a3ca3187f63b5a52d2b5be7b288eb794110858fb45d684703.scope - libcontainer container 47757e90c70ebb0a3ca3187f63b5a52d2b5be7b288eb794110858fb45d684703. Nov 1 01:16:15.878282 containerd[1824]: time="2025-11-01T01:16:15.878251637Z" level=info msg="StartContainer for \"47757e90c70ebb0a3ca3187f63b5a52d2b5be7b288eb794110858fb45d684703\" returns successfully" Nov 1 01:16:16.643315 kubelet[3089]: I1101 01:16:16.643151 3089 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-97mvs" podStartSLOduration=1.64311397 podStartE2EDuration="1.64311397s" podCreationTimestamp="2025-11-01 01:16:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 01:16:16.643075196 +0000 UTC m=+7.087710384" watchObservedRunningTime="2025-11-01 01:16:16.64311397 +0000 UTC m=+7.087749138" Nov 1 01:16:17.830553 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2063443643.mount: Deactivated successfully. 
Nov 1 01:16:18.244542 containerd[1824]: time="2025-11-01T01:16:18.244491156Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 01:16:18.244760 containerd[1824]: time="2025-11-01T01:16:18.244714086Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691" Nov 1 01:16:18.245093 containerd[1824]: time="2025-11-01T01:16:18.245054300Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 01:16:18.246099 containerd[1824]: time="2025-11-01T01:16:18.246059528Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 01:16:18.246611 containerd[1824]: time="2025-11-01T01:16:18.246572710Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 2.544314937s" Nov 1 01:16:18.246611 containerd[1824]: time="2025-11-01T01:16:18.246597265Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Nov 1 01:16:18.247496 containerd[1824]: time="2025-11-01T01:16:18.247481567Z" level=info msg="CreateContainer within sandbox \"0a72b57440ed3149f1ba5bab869ee7125b997e32d8b07c5ebb94417c235727b6\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Nov 1 01:16:18.251544 containerd[1824]: time="2025-11-01T01:16:18.251498232Z" level=info msg="CreateContainer within sandbox 
\"0a72b57440ed3149f1ba5bab869ee7125b997e32d8b07c5ebb94417c235727b6\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"2bbca9e46addc1387b28c29662e8e4e1c62b967afb277468ec7bd881ecc1a18c\"" Nov 1 01:16:18.251723 containerd[1824]: time="2025-11-01T01:16:18.251682405Z" level=info msg="StartContainer for \"2bbca9e46addc1387b28c29662e8e4e1c62b967afb277468ec7bd881ecc1a18c\"" Nov 1 01:16:18.281730 systemd[1]: Started cri-containerd-2bbca9e46addc1387b28c29662e8e4e1c62b967afb277468ec7bd881ecc1a18c.scope - libcontainer container 2bbca9e46addc1387b28c29662e8e4e1c62b967afb277468ec7bd881ecc1a18c. Nov 1 01:16:18.332252 containerd[1824]: time="2025-11-01T01:16:18.332194399Z" level=info msg="StartContainer for \"2bbca9e46addc1387b28c29662e8e4e1c62b967afb277468ec7bd881ecc1a18c\" returns successfully" Nov 1 01:16:18.650678 kubelet[3089]: I1101 01:16:18.650413 3089 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-xvzj5" podStartSLOduration=1.105411856 podStartE2EDuration="3.650376666s" podCreationTimestamp="2025-11-01 01:16:15 +0000 UTC" firstStartedPulling="2025-11-01 01:16:15.701987801 +0000 UTC m=+6.146622922" lastFinishedPulling="2025-11-01 01:16:18.246952616 +0000 UTC m=+8.691587732" observedRunningTime="2025-11-01 01:16:18.650170684 +0000 UTC m=+9.094805876" watchObservedRunningTime="2025-11-01 01:16:18.650376666 +0000 UTC m=+9.095011836" Nov 1 01:16:22.763800 sudo[2094]: pam_unix(sudo:session): session closed for user root Nov 1 01:16:22.764876 sshd[2090]: pam_unix(sshd:session): session closed for user core Nov 1 01:16:22.767382 systemd[1]: sshd@8-139.178.94.199:22-139.178.89.65:33092.service: Deactivated successfully. Nov 1 01:16:22.768536 systemd[1]: session-11.scope: Deactivated successfully. Nov 1 01:16:22.768663 systemd[1]: session-11.scope: Consumed 3.253s CPU time, 167.8M memory peak, 0B memory swap peak. Nov 1 01:16:22.769005 systemd-logind[1809]: Session 11 logged out. 
Waiting for processes to exit. Nov 1 01:16:22.769699 systemd-logind[1809]: Removed session 11. Nov 1 01:16:24.103311 update_engine[1811]: I20251101 01:16:24.103264 1811 update_attempter.cc:509] Updating boot flags... Nov 1 01:16:24.131223 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (3618) Nov 1 01:16:24.165213 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (3617) Nov 1 01:16:24.191215 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (3617) Nov 1 01:16:26.769151 systemd[1]: Created slice kubepods-besteffort-pod886c7932_b86d_4bd6_90f9_9c6fc17bc334.slice - libcontainer container kubepods-besteffort-pod886c7932_b86d_4bd6_90f9_9c6fc17bc334.slice. Nov 1 01:16:26.808401 kubelet[3089]: I1101 01:16:26.808341 3089 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/886c7932-b86d-4bd6-90f9-9c6fc17bc334-tigera-ca-bundle\") pod \"calico-typha-759b94f96-lg97k\" (UID: \"886c7932-b86d-4bd6-90f9-9c6fc17bc334\") " pod="calico-system/calico-typha-759b94f96-lg97k" Nov 1 01:16:26.808401 kubelet[3089]: I1101 01:16:26.808377 3089 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/886c7932-b86d-4bd6-90f9-9c6fc17bc334-typha-certs\") pod \"calico-typha-759b94f96-lg97k\" (UID: \"886c7932-b86d-4bd6-90f9-9c6fc17bc334\") " pod="calico-system/calico-typha-759b94f96-lg97k" Nov 1 01:16:26.808401 kubelet[3089]: I1101 01:16:26.808393 3089 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5zx2z\" (UniqueName: \"kubernetes.io/projected/886c7932-b86d-4bd6-90f9-9c6fc17bc334-kube-api-access-5zx2z\") pod \"calico-typha-759b94f96-lg97k\" (UID: \"886c7932-b86d-4bd6-90f9-9c6fc17bc334\") " 
pod="calico-system/calico-typha-759b94f96-lg97k" Nov 1 01:16:26.962631 systemd[1]: Created slice kubepods-besteffort-pod0d842bcb_6057_4ed7_be1f_c0608e97b494.slice - libcontainer container kubepods-besteffort-pod0d842bcb_6057_4ed7_be1f_c0608e97b494.slice. Nov 1 01:16:27.010400 kubelet[3089]: I1101 01:16:27.010318 3089 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/0d842bcb-6057-4ed7-be1f-c0608e97b494-cni-net-dir\") pod \"calico-node-76rt6\" (UID: \"0d842bcb-6057-4ed7-be1f-c0608e97b494\") " pod="calico-system/calico-node-76rt6" Nov 1 01:16:27.010400 kubelet[3089]: I1101 01:16:27.010372 3089 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/0d842bcb-6057-4ed7-be1f-c0608e97b494-node-certs\") pod \"calico-node-76rt6\" (UID: \"0d842bcb-6057-4ed7-be1f-c0608e97b494\") " pod="calico-system/calico-node-76rt6" Nov 1 01:16:27.010656 kubelet[3089]: I1101 01:16:27.010401 3089 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/0d842bcb-6057-4ed7-be1f-c0608e97b494-cni-bin-dir\") pod \"calico-node-76rt6\" (UID: \"0d842bcb-6057-4ed7-be1f-c0608e97b494\") " pod="calico-system/calico-node-76rt6" Nov 1 01:16:27.010656 kubelet[3089]: I1101 01:16:27.010461 3089 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/0d842bcb-6057-4ed7-be1f-c0608e97b494-cni-log-dir\") pod \"calico-node-76rt6\" (UID: \"0d842bcb-6057-4ed7-be1f-c0608e97b494\") " pod="calico-system/calico-node-76rt6" Nov 1 01:16:27.010656 kubelet[3089]: I1101 01:16:27.010491 3089 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: 
\"kubernetes.io/host-path/0d842bcb-6057-4ed7-be1f-c0608e97b494-flexvol-driver-host\") pod \"calico-node-76rt6\" (UID: \"0d842bcb-6057-4ed7-be1f-c0608e97b494\") " pod="calico-system/calico-node-76rt6" Nov 1 01:16:27.010656 kubelet[3089]: I1101 01:16:27.010517 3089 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/0d842bcb-6057-4ed7-be1f-c0608e97b494-var-lib-calico\") pod \"calico-node-76rt6\" (UID: \"0d842bcb-6057-4ed7-be1f-c0608e97b494\") " pod="calico-system/calico-node-76rt6" Nov 1 01:16:27.010656 kubelet[3089]: I1101 01:16:27.010543 3089 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0d842bcb-6057-4ed7-be1f-c0608e97b494-lib-modules\") pod \"calico-node-76rt6\" (UID: \"0d842bcb-6057-4ed7-be1f-c0608e97b494\") " pod="calico-system/calico-node-76rt6" Nov 1 01:16:27.011032 kubelet[3089]: I1101 01:16:27.010568 3089 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0d842bcb-6057-4ed7-be1f-c0608e97b494-tigera-ca-bundle\") pod \"calico-node-76rt6\" (UID: \"0d842bcb-6057-4ed7-be1f-c0608e97b494\") " pod="calico-system/calico-node-76rt6" Nov 1 01:16:27.011032 kubelet[3089]: I1101 01:16:27.010591 3089 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/0d842bcb-6057-4ed7-be1f-c0608e97b494-var-run-calico\") pod \"calico-node-76rt6\" (UID: \"0d842bcb-6057-4ed7-be1f-c0608e97b494\") " pod="calico-system/calico-node-76rt6" Nov 1 01:16:27.011032 kubelet[3089]: I1101 01:16:27.010705 3089 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/0d842bcb-6057-4ed7-be1f-c0608e97b494-xtables-lock\") pod \"calico-node-76rt6\" (UID: \"0d842bcb-6057-4ed7-be1f-c0608e97b494\") " pod="calico-system/calico-node-76rt6" Nov 1 01:16:27.011032 kubelet[3089]: I1101 01:16:27.010785 3089 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/0d842bcb-6057-4ed7-be1f-c0608e97b494-policysync\") pod \"calico-node-76rt6\" (UID: \"0d842bcb-6057-4ed7-be1f-c0608e97b494\") " pod="calico-system/calico-node-76rt6" Nov 1 01:16:27.011032 kubelet[3089]: I1101 01:16:27.010839 3089 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7r98q\" (UniqueName: \"kubernetes.io/projected/0d842bcb-6057-4ed7-be1f-c0608e97b494-kube-api-access-7r98q\") pod \"calico-node-76rt6\" (UID: \"0d842bcb-6057-4ed7-be1f-c0608e97b494\") " pod="calico-system/calico-node-76rt6" Nov 1 01:16:27.074382 containerd[1824]: time="2025-11-01T01:16:27.074134018Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-759b94f96-lg97k,Uid:886c7932-b86d-4bd6-90f9-9c6fc17bc334,Namespace:calico-system,Attempt:0,}" Nov 1 01:16:27.100707 containerd[1824]: time="2025-11-01T01:16:27.100477763Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 01:16:27.100707 containerd[1824]: time="2025-11-01T01:16:27.100700223Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 01:16:27.100707 containerd[1824]: time="2025-11-01T01:16:27.100708770Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 01:16:27.100831 containerd[1824]: time="2025-11-01T01:16:27.100753959Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 01:16:27.109460 kubelet[3089]: E1101 01:16:27.109428 3089 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kckfw" podUID="fe329cee-9aa5-425f-b021-f1def80c02c8" Nov 1 01:16:27.112518 kubelet[3089]: E1101 01:16:27.112503 3089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:16:27.112518 kubelet[3089]: W1101 01:16:27.112516 3089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:16:27.112619 kubelet[3089]: E1101 01:16:27.112538 3089 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:16:27.113459 kubelet[3089]: E1101 01:16:27.113449 3089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:16:27.113459 kubelet[3089]: W1101 01:16:27.113458 3089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:16:27.113534 kubelet[3089]: E1101 01:16:27.113467 3089 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:16:27.116138 kubelet[3089]: E1101 01:16:27.116105 3089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:16:27.116138 kubelet[3089]: W1101 01:16:27.116113 3089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:16:27.116138 kubelet[3089]: E1101 01:16:27.116123 3089 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:16:27.124766 systemd[1]: Started cri-containerd-d5c60836862ab7dc075906371fd561b57c4de6dd81d32c239a5a81e0d59acd3e.scope - libcontainer container d5c60836862ab7dc075906371fd561b57c4de6dd81d32c239a5a81e0d59acd3e. Nov 1 01:16:27.189843 containerd[1824]: time="2025-11-01T01:16:27.189793593Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-759b94f96-lg97k,Uid:886c7932-b86d-4bd6-90f9-9c6fc17bc334,Namespace:calico-system,Attempt:0,} returns sandbox id \"d5c60836862ab7dc075906371fd561b57c4de6dd81d32c239a5a81e0d59acd3e\"" Nov 1 01:16:27.190591 containerd[1824]: time="2025-11-01T01:16:27.190575834Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Nov 1 01:16:27.202835 kubelet[3089]: E1101 01:16:27.202815 3089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:16:27.202835 kubelet[3089]: W1101 01:16:27.202829 3089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:16:27.202835 kubelet[3089]: E1101 01:16:27.202842 3089 plugins.go:695] "Error dynamically probing plugins" err="error creating 
Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:16:27.202994 kubelet[3089]: E1101 01:16:27.202934 3089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:16:27.202994 kubelet[3089]: W1101 01:16:27.202940 3089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:16:27.202994 kubelet[3089]: E1101 01:16:27.202945 3089 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:16:27.203091 kubelet[3089]: E1101 01:16:27.203021 3089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:16:27.203091 kubelet[3089]: W1101 01:16:27.203025 3089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:16:27.203091 kubelet[3089]: E1101 01:16:27.203030 3089 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:16:27.203178 kubelet[3089]: E1101 01:16:27.203136 3089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:16:27.203178 kubelet[3089]: W1101 01:16:27.203141 3089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:16:27.203178 kubelet[3089]: E1101 01:16:27.203145 3089 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:16:27.203280 kubelet[3089]: E1101 01:16:27.203245 3089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:16:27.203280 kubelet[3089]: W1101 01:16:27.203250 3089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:16:27.203280 kubelet[3089]: E1101 01:16:27.203255 3089 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:16:27.203371 kubelet[3089]: E1101 01:16:27.203336 3089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:16:27.203371 kubelet[3089]: W1101 01:16:27.203341 3089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:16:27.203371 kubelet[3089]: E1101 01:16:27.203345 3089 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:16:27.203467 kubelet[3089]: E1101 01:16:27.203415 3089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:16:27.203467 kubelet[3089]: W1101 01:16:27.203420 3089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:16:27.203467 kubelet[3089]: E1101 01:16:27.203424 3089 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:16:27.203563 kubelet[3089]: E1101 01:16:27.203503 3089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:16:27.203563 kubelet[3089]: W1101 01:16:27.203507 3089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:16:27.203563 kubelet[3089]: E1101 01:16:27.203513 3089 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:16:27.203654 kubelet[3089]: E1101 01:16:27.203585 3089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:16:27.203654 kubelet[3089]: W1101 01:16:27.203590 3089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:16:27.203654 kubelet[3089]: E1101 01:16:27.203594 3089 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:16:27.203739 kubelet[3089]: E1101 01:16:27.203662 3089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:16:27.203739 kubelet[3089]: W1101 01:16:27.203667 3089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:16:27.203739 kubelet[3089]: E1101 01:16:27.203672 3089 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:16:27.203794 kubelet[3089]: E1101 01:16:27.203741 3089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:16:27.203794 kubelet[3089]: W1101 01:16:27.203746 3089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:16:27.203794 kubelet[3089]: E1101 01:16:27.203750 3089 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:16:27.203841 kubelet[3089]: E1101 01:16:27.203818 3089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:16:27.203841 kubelet[3089]: W1101 01:16:27.203823 3089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:16:27.203841 kubelet[3089]: E1101 01:16:27.203827 3089 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:16:27.203913 kubelet[3089]: E1101 01:16:27.203894 3089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:16:27.203913 kubelet[3089]: W1101 01:16:27.203899 3089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:16:27.203913 kubelet[3089]: E1101 01:16:27.203903 3089 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:16:27.204002 kubelet[3089]: E1101 01:16:27.203968 3089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:16:27.204002 kubelet[3089]: W1101 01:16:27.203973 3089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:16:27.204002 kubelet[3089]: E1101 01:16:27.203977 3089 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:16:27.204076 kubelet[3089]: E1101 01:16:27.204051 3089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:16:27.204076 kubelet[3089]: W1101 01:16:27.204055 3089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:16:27.204076 kubelet[3089]: E1101 01:16:27.204059 3089 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:16:27.204149 kubelet[3089]: E1101 01:16:27.204143 3089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:16:27.204149 kubelet[3089]: W1101 01:16:27.204149 3089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:16:27.204191 kubelet[3089]: E1101 01:16:27.204153 3089 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:16:27.204254 kubelet[3089]: E1101 01:16:27.204248 3089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:16:27.204254 kubelet[3089]: W1101 01:16:27.204253 3089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:16:27.204293 kubelet[3089]: E1101 01:16:27.204258 3089 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:16:27.204342 kubelet[3089]: E1101 01:16:27.204337 3089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:16:27.204342 kubelet[3089]: W1101 01:16:27.204341 3089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:16:27.204400 kubelet[3089]: E1101 01:16:27.204346 3089 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:16:27.204429 kubelet[3089]: E1101 01:16:27.204423 3089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:16:27.204451 kubelet[3089]: W1101 01:16:27.204429 3089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:16:27.204451 kubelet[3089]: E1101 01:16:27.204434 3089 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:16:27.204510 kubelet[3089]: E1101 01:16:27.204503 3089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:16:27.204510 kubelet[3089]: W1101 01:16:27.204508 3089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:16:27.204558 kubelet[3089]: E1101 01:16:27.204512 3089 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:16:27.212887 kubelet[3089]: E1101 01:16:27.212846 3089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:16:27.212887 kubelet[3089]: W1101 01:16:27.212855 3089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:16:27.212887 kubelet[3089]: E1101 01:16:27.212863 3089 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:16:27.212887 kubelet[3089]: I1101 01:16:27.212878 3089 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/fe329cee-9aa5-425f-b021-f1def80c02c8-varrun\") pod \"csi-node-driver-kckfw\" (UID: \"fe329cee-9aa5-425f-b021-f1def80c02c8\") " pod="calico-system/csi-node-driver-kckfw" Nov 1 01:16:27.213074 kubelet[3089]: E1101 01:16:27.213033 3089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:16:27.213074 kubelet[3089]: W1101 01:16:27.213039 3089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:16:27.213074 kubelet[3089]: E1101 01:16:27.213046 3089 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:16:27.213074 kubelet[3089]: I1101 01:16:27.213054 3089 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fe329cee-9aa5-425f-b021-f1def80c02c8-kubelet-dir\") pod \"csi-node-driver-kckfw\" (UID: \"fe329cee-9aa5-425f-b021-f1def80c02c8\") " pod="calico-system/csi-node-driver-kckfw" Nov 1 01:16:27.213155 kubelet[3089]: E1101 01:16:27.213144 3089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:16:27.213155 kubelet[3089]: W1101 01:16:27.213149 3089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:16:27.213155 kubelet[3089]: E1101 01:16:27.213154 3089 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:16:27.213218 kubelet[3089]: I1101 01:16:27.213161 3089 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/fe329cee-9aa5-425f-b021-f1def80c02c8-socket-dir\") pod \"csi-node-driver-kckfw\" (UID: \"fe329cee-9aa5-425f-b021-f1def80c02c8\") " pod="calico-system/csi-node-driver-kckfw" Nov 1 01:16:27.213312 kubelet[3089]: E1101 01:16:27.213273 3089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:16:27.213312 kubelet[3089]: W1101 01:16:27.213282 3089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:16:27.213312 kubelet[3089]: E1101 01:16:27.213296 3089 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:16:27.213451 kubelet[3089]: E1101 01:16:27.213413 3089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:16:27.213451 kubelet[3089]: W1101 01:16:27.213419 3089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:16:27.213451 kubelet[3089]: E1101 01:16:27.213426 3089 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:16:27.213590 kubelet[3089]: E1101 01:16:27.213556 3089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:16:27.213590 kubelet[3089]: W1101 01:16:27.213562 3089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:16:27.213590 kubelet[3089]: E1101 01:16:27.213569 3089 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:16:27.213724 kubelet[3089]: E1101 01:16:27.213686 3089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:16:27.213724 kubelet[3089]: W1101 01:16:27.213691 3089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:16:27.213724 kubelet[3089]: E1101 01:16:27.213697 3089 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:16:27.213792 kubelet[3089]: E1101 01:16:27.213787 3089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:16:27.213811 kubelet[3089]: W1101 01:16:27.213792 3089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:16:27.213811 kubelet[3089]: E1101 01:16:27.213798 3089 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:16:27.213848 kubelet[3089]: I1101 01:16:27.213810 3089 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/fe329cee-9aa5-425f-b021-f1def80c02c8-registration-dir\") pod \"csi-node-driver-kckfw\" (UID: \"fe329cee-9aa5-425f-b021-f1def80c02c8\") " pod="calico-system/csi-node-driver-kckfw" Nov 1 01:16:27.213963 kubelet[3089]: E1101 01:16:27.213927 3089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:16:27.213963 kubelet[3089]: W1101 01:16:27.213933 3089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:16:27.213963 kubelet[3089]: E1101 01:16:27.213939 3089 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:16:27.213963 kubelet[3089]: I1101 01:16:27.213947 3089 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mr67n\" (UniqueName: \"kubernetes.io/projected/fe329cee-9aa5-425f-b021-f1def80c02c8-kube-api-access-mr67n\") pod \"csi-node-driver-kckfw\" (UID: \"fe329cee-9aa5-425f-b021-f1def80c02c8\") " pod="calico-system/csi-node-driver-kckfw" Nov 1 01:16:27.214068 kubelet[3089]: E1101 01:16:27.214062 3089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:16:27.214068 kubelet[3089]: W1101 01:16:27.214068 3089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:16:27.214105 kubelet[3089]: E1101 01:16:27.214074 3089 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:16:27.214162 kubelet[3089]: E1101 01:16:27.214157 3089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:16:27.214179 kubelet[3089]: W1101 01:16:27.214162 3089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:16:27.214179 kubelet[3089]: E1101 01:16:27.214169 3089 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:16:27.214276 kubelet[3089]: E1101 01:16:27.214271 3089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:16:27.214276 kubelet[3089]: W1101 01:16:27.214276 3089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:16:27.214317 kubelet[3089]: E1101 01:16:27.214283 3089 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:16:27.214396 kubelet[3089]: E1101 01:16:27.214392 3089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:16:27.214414 kubelet[3089]: W1101 01:16:27.214397 3089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:16:27.214414 kubelet[3089]: E1101 01:16:27.214403 3089 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:16:27.214521 kubelet[3089]: E1101 01:16:27.214516 3089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:16:27.214542 kubelet[3089]: W1101 01:16:27.214522 3089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:16:27.214542 kubelet[3089]: E1101 01:16:27.214527 3089 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:16:27.214634 kubelet[3089]: E1101 01:16:27.214630 3089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:16:27.214654 kubelet[3089]: W1101 01:16:27.214634 3089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:16:27.214654 kubelet[3089]: E1101 01:16:27.214639 3089 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:16:27.266813 containerd[1824]: time="2025-11-01T01:16:27.266696471Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-76rt6,Uid:0d842bcb-6057-4ed7-be1f-c0608e97b494,Namespace:calico-system,Attempt:0,}" Nov 1 01:16:27.293850 containerd[1824]: time="2025-11-01T01:16:27.293629313Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 01:16:27.293850 containerd[1824]: time="2025-11-01T01:16:27.293845113Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 01:16:27.293968 containerd[1824]: time="2025-11-01T01:16:27.293854850Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 01:16:27.293968 containerd[1824]: time="2025-11-01T01:16:27.293899964Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 01:16:27.314395 systemd[1]: Started cri-containerd-bdac6dd4695a9a0e42e8183234a7da70c8ef577410772530fa9169eb86b99a0a.scope - libcontainer container bdac6dd4695a9a0e42e8183234a7da70c8ef577410772530fa9169eb86b99a0a. Nov 1 01:16:27.314978 kubelet[3089]: E1101 01:16:27.314965 3089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:16:27.314978 kubelet[3089]: W1101 01:16:27.314978 3089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:16:27.315052 kubelet[3089]: E1101 01:16:27.314991 3089 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:16:27.315186 kubelet[3089]: E1101 01:16:27.315178 3089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:16:27.315186 kubelet[3089]: W1101 01:16:27.315185 3089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:16:27.315260 kubelet[3089]: E1101 01:16:27.315194 3089 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:16:27.315403 kubelet[3089]: E1101 01:16:27.315392 3089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:16:27.315434 kubelet[3089]: W1101 01:16:27.315403 3089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:16:27.315434 kubelet[3089]: E1101 01:16:27.315414 3089 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:16:27.315594 kubelet[3089]: E1101 01:16:27.315585 3089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:16:27.315629 kubelet[3089]: W1101 01:16:27.315594 3089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:16:27.315629 kubelet[3089]: E1101 01:16:27.315605 3089 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:16:27.315746 kubelet[3089]: E1101 01:16:27.315739 3089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:16:27.315773 kubelet[3089]: W1101 01:16:27.315746 3089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:16:27.315773 kubelet[3089]: E1101 01:16:27.315756 3089 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:16:27.316010 kubelet[3089]: E1101 01:16:27.315997 3089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:16:27.316046 kubelet[3089]: W1101 01:16:27.316011 3089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:16:27.316046 kubelet[3089]: E1101 01:16:27.316024 3089 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:16:27.316173 kubelet[3089]: E1101 01:16:27.316166 3089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:16:27.316173 kubelet[3089]: W1101 01:16:27.316172 3089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:16:27.316232 kubelet[3089]: E1101 01:16:27.316180 3089 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:16:27.316354 kubelet[3089]: E1101 01:16:27.316346 3089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:16:27.316389 kubelet[3089]: W1101 01:16:27.316356 3089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:16:27.316389 kubelet[3089]: E1101 01:16:27.316366 3089 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:16:27.316544 kubelet[3089]: E1101 01:16:27.316535 3089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:16:27.316574 kubelet[3089]: W1101 01:16:27.316544 3089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:16:27.316574 kubelet[3089]: E1101 01:16:27.316559 3089 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:16:27.316694 kubelet[3089]: E1101 01:16:27.316687 3089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:16:27.316722 kubelet[3089]: W1101 01:16:27.316694 3089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:16:27.316722 kubelet[3089]: E1101 01:16:27.316707 3089 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:16:27.316818 kubelet[3089]: E1101 01:16:27.316812 3089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:16:27.316844 kubelet[3089]: W1101 01:16:27.316819 3089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:16:27.316844 kubelet[3089]: E1101 01:16:27.316827 3089 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:16:27.316934 kubelet[3089]: E1101 01:16:27.316928 3089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:16:27.316961 kubelet[3089]: W1101 01:16:27.316934 3089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:16:27.316961 kubelet[3089]: E1101 01:16:27.316942 3089 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:16:27.317044 kubelet[3089]: E1101 01:16:27.317038 3089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:16:27.317070 kubelet[3089]: W1101 01:16:27.317044 3089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:16:27.317070 kubelet[3089]: E1101 01:16:27.317052 3089 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:16:27.317178 kubelet[3089]: E1101 01:16:27.317172 3089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:16:27.317213 kubelet[3089]: W1101 01:16:27.317178 3089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:16:27.317213 kubelet[3089]: E1101 01:16:27.317186 3089 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:16:27.317314 kubelet[3089]: E1101 01:16:27.317304 3089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:16:27.317314 kubelet[3089]: W1101 01:16:27.317310 3089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:16:27.317369 kubelet[3089]: E1101 01:16:27.317317 3089 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:16:27.317566 kubelet[3089]: E1101 01:16:27.317554 3089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:16:27.317604 kubelet[3089]: W1101 01:16:27.317568 3089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:16:27.317604 kubelet[3089]: E1101 01:16:27.317584 3089 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:16:27.317766 kubelet[3089]: E1101 01:16:27.317757 3089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:16:27.317792 kubelet[3089]: W1101 01:16:27.317769 3089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:16:27.317824 kubelet[3089]: E1101 01:16:27.317788 3089 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:16:27.317926 kubelet[3089]: E1101 01:16:27.317917 3089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:16:27.317963 kubelet[3089]: W1101 01:16:27.317928 3089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:16:27.317963 kubelet[3089]: E1101 01:16:27.317945 3089 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:16:27.318072 kubelet[3089]: E1101 01:16:27.318063 3089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:16:27.318102 kubelet[3089]: W1101 01:16:27.318074 3089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:16:27.318102 kubelet[3089]: E1101 01:16:27.318090 3089 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:16:27.318222 kubelet[3089]: E1101 01:16:27.318213 3089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:16:27.318260 kubelet[3089]: W1101 01:16:27.318223 3089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:16:27.318260 kubelet[3089]: E1101 01:16:27.318237 3089 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:16:27.318430 kubelet[3089]: E1101 01:16:27.318421 3089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:16:27.318463 kubelet[3089]: W1101 01:16:27.318432 3089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:16:27.318463 kubelet[3089]: E1101 01:16:27.318445 3089 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:16:27.318596 kubelet[3089]: E1101 01:16:27.318587 3089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:16:27.318627 kubelet[3089]: W1101 01:16:27.318598 3089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:16:27.318627 kubelet[3089]: E1101 01:16:27.318612 3089 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:16:27.318787 kubelet[3089]: E1101 01:16:27.318778 3089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:16:27.318817 kubelet[3089]: W1101 01:16:27.318790 3089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:16:27.318817 kubelet[3089]: E1101 01:16:27.318805 3089 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:16:27.318956 kubelet[3089]: E1101 01:16:27.318947 3089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:16:27.318988 kubelet[3089]: W1101 01:16:27.318958 3089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:16:27.318988 kubelet[3089]: E1101 01:16:27.318969 3089 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:16:27.319153 kubelet[3089]: E1101 01:16:27.319145 3089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:16:27.319179 kubelet[3089]: W1101 01:16:27.319155 3089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:16:27.319179 kubelet[3089]: E1101 01:16:27.319167 3089 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:16:27.324038 kubelet[3089]: E1101 01:16:27.324022 3089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:16:27.324038 kubelet[3089]: W1101 01:16:27.324034 3089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:16:27.324180 kubelet[3089]: E1101 01:16:27.324048 3089 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:16:27.332006 containerd[1824]: time="2025-11-01T01:16:27.331930082Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-76rt6,Uid:0d842bcb-6057-4ed7-be1f-c0608e97b494,Namespace:calico-system,Attempt:0,} returns sandbox id \"bdac6dd4695a9a0e42e8183234a7da70c8ef577410772530fa9169eb86b99a0a\"" Nov 1 01:16:28.598499 kubelet[3089]: E1101 01:16:28.598449 3089 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kckfw" podUID="fe329cee-9aa5-425f-b021-f1def80c02c8" Nov 1 01:16:28.692535 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3498611918.mount: Deactivated successfully. 
Nov 1 01:16:29.245982 containerd[1824]: time="2025-11-01T01:16:29.245951553Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 01:16:29.246192 containerd[1824]: time="2025-11-01T01:16:29.246027602Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35234628" Nov 1 01:16:29.246419 containerd[1824]: time="2025-11-01T01:16:29.246408576Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 01:16:29.247530 containerd[1824]: time="2025-11-01T01:16:29.247496269Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 01:16:29.248220 containerd[1824]: time="2025-11-01T01:16:29.248179155Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 2.057582913s" Nov 1 01:16:29.248220 containerd[1824]: time="2025-11-01T01:16:29.248197156Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Nov 1 01:16:29.248757 containerd[1824]: time="2025-11-01T01:16:29.248705320Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Nov 1 01:16:29.251569 containerd[1824]: time="2025-11-01T01:16:29.251527070Z" level=info msg="CreateContainer within sandbox \"d5c60836862ab7dc075906371fd561b57c4de6dd81d32c239a5a81e0d59acd3e\" for container 
&ContainerMetadata{Name:calico-typha,Attempt:0,}" Nov 1 01:16:29.255840 containerd[1824]: time="2025-11-01T01:16:29.255796907Z" level=info msg="CreateContainer within sandbox \"d5c60836862ab7dc075906371fd561b57c4de6dd81d32c239a5a81e0d59acd3e\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"a60ca73aa27b7c195a70c4af6d91570e65efbf18d8f2803db068389d6e434487\"" Nov 1 01:16:29.256108 containerd[1824]: time="2025-11-01T01:16:29.256068406Z" level=info msg="StartContainer for \"a60ca73aa27b7c195a70c4af6d91570e65efbf18d8f2803db068389d6e434487\"" Nov 1 01:16:29.333725 systemd[1]: Started cri-containerd-a60ca73aa27b7c195a70c4af6d91570e65efbf18d8f2803db068389d6e434487.scope - libcontainer container a60ca73aa27b7c195a70c4af6d91570e65efbf18d8f2803db068389d6e434487. Nov 1 01:16:29.387920 containerd[1824]: time="2025-11-01T01:16:29.387890199Z" level=info msg="StartContainer for \"a60ca73aa27b7c195a70c4af6d91570e65efbf18d8f2803db068389d6e434487\" returns successfully" Nov 1 01:16:29.688744 kubelet[3089]: I1101 01:16:29.688622 3089 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-759b94f96-lg97k" podStartSLOduration=1.6303453829999999 podStartE2EDuration="3.688589641s" podCreationTimestamp="2025-11-01 01:16:26 +0000 UTC" firstStartedPulling="2025-11-01 01:16:27.190399988 +0000 UTC m=+17.635035112" lastFinishedPulling="2025-11-01 01:16:29.248644254 +0000 UTC m=+19.693279370" observedRunningTime="2025-11-01 01:16:29.688111838 +0000 UTC m=+20.132747050" watchObservedRunningTime="2025-11-01 01:16:29.688589641 +0000 UTC m=+20.133224804" Nov 1 01:16:29.722000 kubelet[3089]: E1101 01:16:29.721914 3089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:16:29.722000 kubelet[3089]: W1101 01:16:29.721956 3089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, 
args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:16:29.722000 kubelet[3089]: E1101 01:16:29.721994 3089 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:16:29.722646 kubelet[3089]: E1101 01:16:29.722566 3089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:16:29.722646 kubelet[3089]: W1101 01:16:29.722603 3089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:16:29.722646 kubelet[3089]: E1101 01:16:29.722635 3089 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:16:29.723222 kubelet[3089]: E1101 01:16:29.723167 3089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:16:29.723340 kubelet[3089]: W1101 01:16:29.723233 3089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:16:29.723340 kubelet[3089]: E1101 01:16:29.723270 3089 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:16:29.723931 kubelet[3089]: E1101 01:16:29.723853 3089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:16:29.723931 kubelet[3089]: W1101 01:16:29.723889 3089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:16:29.723931 kubelet[3089]: E1101 01:16:29.723925 3089 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:16:29.724621 kubelet[3089]: E1101 01:16:29.724541 3089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:16:29.724621 kubelet[3089]: W1101 01:16:29.724578 3089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:16:29.724621 kubelet[3089]: E1101 01:16:29.724610 3089 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:16:29.725112 kubelet[3089]: E1101 01:16:29.725057 3089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:16:29.725112 kubelet[3089]: W1101 01:16:29.725086 3089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:16:29.725112 kubelet[3089]: E1101 01:16:29.725113 3089 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:16:29.725600 kubelet[3089]: E1101 01:16:29.725545 3089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:16:29.725600 kubelet[3089]: W1101 01:16:29.725573 3089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:16:29.725600 kubelet[3089]: E1101 01:16:29.725598 3089 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:16:29.726030 kubelet[3089]: E1101 01:16:29.726003 3089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:16:29.726139 kubelet[3089]: W1101 01:16:29.726031 3089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:16:29.726139 kubelet[3089]: E1101 01:16:29.726055 3089 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:16:29.726592 kubelet[3089]: E1101 01:16:29.726536 3089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:16:29.726592 kubelet[3089]: W1101 01:16:29.726566 3089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:16:29.726592 kubelet[3089]: E1101 01:16:29.726593 3089 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:16:29.727125 kubelet[3089]: E1101 01:16:29.727072 3089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:16:29.727125 kubelet[3089]: W1101 01:16:29.727099 3089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:16:29.727125 kubelet[3089]: E1101 01:16:29.727124 3089 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:16:29.727697 kubelet[3089]: E1101 01:16:29.727641 3089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:16:29.727697 kubelet[3089]: W1101 01:16:29.727670 3089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:16:29.727697 kubelet[3089]: E1101 01:16:29.727696 3089 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:16:29.728295 kubelet[3089]: E1101 01:16:29.728236 3089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:16:29.728295 kubelet[3089]: W1101 01:16:29.728274 3089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:16:29.728526 kubelet[3089]: E1101 01:16:29.728310 3089 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:16:29.728857 kubelet[3089]: E1101 01:16:29.728802 3089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:16:29.728857 kubelet[3089]: W1101 01:16:29.728832 3089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:16:29.728857 kubelet[3089]: E1101 01:16:29.728856 3089 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:16:29.729390 kubelet[3089]: E1101 01:16:29.729340 3089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:16:29.729390 kubelet[3089]: W1101 01:16:29.729369 3089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:16:29.729619 kubelet[3089]: E1101 01:16:29.729393 3089 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:16:29.729934 kubelet[3089]: E1101 01:16:29.729879 3089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:16:29.729934 kubelet[3089]: W1101 01:16:29.729907 3089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:16:29.729934 kubelet[3089]: E1101 01:16:29.729931 3089 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:16:29.733519 kubelet[3089]: E1101 01:16:29.733440 3089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:16:29.733519 kubelet[3089]: W1101 01:16:29.733476 3089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:16:29.733519 kubelet[3089]: E1101 01:16:29.733508 3089 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:16:29.734256 kubelet[3089]: E1101 01:16:29.734151 3089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:16:29.734256 kubelet[3089]: W1101 01:16:29.734191 3089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:16:29.734256 kubelet[3089]: E1101 01:16:29.734261 3089 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:16:29.734961 kubelet[3089]: E1101 01:16:29.734859 3089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:16:29.734961 kubelet[3089]: W1101 01:16:29.734903 3089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:16:29.734961 kubelet[3089]: E1101 01:16:29.734948 3089 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:16:29.735612 kubelet[3089]: E1101 01:16:29.735535 3089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:16:29.735612 kubelet[3089]: W1101 01:16:29.735573 3089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:16:29.735612 kubelet[3089]: E1101 01:16:29.735618 3089 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:16:29.736249 kubelet[3089]: E1101 01:16:29.736191 3089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:16:29.736249 kubelet[3089]: W1101 01:16:29.736243 3089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:16:29.736470 kubelet[3089]: E1101 01:16:29.736364 3089 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:16:29.736800 kubelet[3089]: E1101 01:16:29.736727 3089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:16:29.736800 kubelet[3089]: W1101 01:16:29.736756 3089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:16:29.737076 kubelet[3089]: E1101 01:16:29.736853 3089 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:16:29.737318 kubelet[3089]: E1101 01:16:29.737266 3089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:16:29.737318 kubelet[3089]: W1101 01:16:29.737294 3089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:16:29.737557 kubelet[3089]: E1101 01:16:29.737406 3089 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:16:29.737918 kubelet[3089]: E1101 01:16:29.737863 3089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:16:29.737918 kubelet[3089]: W1101 01:16:29.737892 3089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:16:29.738142 kubelet[3089]: E1101 01:16:29.737928 3089 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:16:29.738667 kubelet[3089]: E1101 01:16:29.738605 3089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:16:29.738667 kubelet[3089]: W1101 01:16:29.738643 3089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:16:29.738874 kubelet[3089]: E1101 01:16:29.738685 3089 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:16:29.739269 kubelet[3089]: E1101 01:16:29.739201 3089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:16:29.739269 kubelet[3089]: W1101 01:16:29.739254 3089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:16:29.739503 kubelet[3089]: E1101 01:16:29.739360 3089 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:16:29.739820 kubelet[3089]: E1101 01:16:29.739751 3089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:16:29.739820 kubelet[3089]: W1101 01:16:29.739778 3089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:16:29.740085 kubelet[3089]: E1101 01:16:29.739855 3089 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:16:29.740493 kubelet[3089]: E1101 01:16:29.740413 3089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:16:29.740493 kubelet[3089]: W1101 01:16:29.740441 3089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:16:29.740757 kubelet[3089]: E1101 01:16:29.740518 3089 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:16:29.741013 kubelet[3089]: E1101 01:16:29.740922 3089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:16:29.741013 kubelet[3089]: W1101 01:16:29.740950 3089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:16:29.741297 kubelet[3089]: E1101 01:16:29.741053 3089 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:16:29.741556 kubelet[3089]: E1101 01:16:29.741496 3089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:16:29.741556 kubelet[3089]: W1101 01:16:29.741523 3089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:16:29.741797 kubelet[3089]: E1101 01:16:29.741557 3089 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:16:29.742255 kubelet[3089]: E1101 01:16:29.742196 3089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:16:29.742402 kubelet[3089]: W1101 01:16:29.742253 3089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:16:29.742402 kubelet[3089]: E1101 01:16:29.742298 3089 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:16:29.743114 kubelet[3089]: E1101 01:16:29.743035 3089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:16:29.743114 kubelet[3089]: W1101 01:16:29.743073 3089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:16:29.743398 kubelet[3089]: E1101 01:16:29.743128 3089 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:16:29.743792 kubelet[3089]: E1101 01:16:29.743742 3089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:16:29.743792 kubelet[3089]: W1101 01:16:29.743775 3089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:16:29.744127 kubelet[3089]: E1101 01:16:29.743808 3089 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:16:29.744532 kubelet[3089]: E1101 01:16:29.744447 3089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:16:29.744532 kubelet[3089]: W1101 01:16:29.744480 3089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:16:29.744532 kubelet[3089]: E1101 01:16:29.744511 3089 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:16:30.599236 kubelet[3089]: E1101 01:16:30.599084 3089 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kckfw" podUID="fe329cee-9aa5-425f-b021-f1def80c02c8" Nov 1 01:16:30.666896 kubelet[3089]: I1101 01:16:30.666882 3089 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 1 01:16:30.695211 containerd[1824]: time="2025-11-01T01:16:30.695184032Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 01:16:30.695480 containerd[1824]: time="2025-11-01T01:16:30.695390082Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4446754" Nov 1 01:16:30.695772 containerd[1824]: time="2025-11-01T01:16:30.695758296Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 01:16:30.696712 containerd[1824]: time="2025-11-01T01:16:30.696697625Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 01:16:30.697160 containerd[1824]: time="2025-11-01T01:16:30.697143488Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size 
\"5941314\" in 1.448422508s" Nov 1 01:16:30.697206 containerd[1824]: time="2025-11-01T01:16:30.697164675Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Nov 1 01:16:30.698081 containerd[1824]: time="2025-11-01T01:16:30.698068583Z" level=info msg="CreateContainer within sandbox \"bdac6dd4695a9a0e42e8183234a7da70c8ef577410772530fa9169eb86b99a0a\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Nov 1 01:16:30.702822 containerd[1824]: time="2025-11-01T01:16:30.702761122Z" level=info msg="CreateContainer within sandbox \"bdac6dd4695a9a0e42e8183234a7da70c8ef577410772530fa9169eb86b99a0a\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"832fa00015035eb93ba642474247aae06fbb726b66b027942d7c22edd8e93dee\"" Nov 1 01:16:30.703017 containerd[1824]: time="2025-11-01T01:16:30.703005342Z" level=info msg="StartContainer for \"832fa00015035eb93ba642474247aae06fbb726b66b027942d7c22edd8e93dee\"" Nov 1 01:16:30.725651 systemd[1]: Started cri-containerd-832fa00015035eb93ba642474247aae06fbb726b66b027942d7c22edd8e93dee.scope - libcontainer container 832fa00015035eb93ba642474247aae06fbb726b66b027942d7c22edd8e93dee. Nov 1 01:16:30.736818 kubelet[3089]: E1101 01:16:30.736737 3089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:16:30.736818 kubelet[3089]: W1101 01:16:30.736780 3089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:16:30.737649 kubelet[3089]: E1101 01:16:30.736827 3089 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:16:30.737649 kubelet[3089]: E1101 01:16:30.737343 3089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:16:30.737649 kubelet[3089]: W1101 01:16:30.737379 3089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:16:30.737649 kubelet[3089]: E1101 01:16:30.737413 3089 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:16:30.738034 kubelet[3089]: E1101 01:16:30.737963 3089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:16:30.738034 kubelet[3089]: W1101 01:16:30.737995 3089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:16:30.738323 kubelet[3089]: E1101 01:16:30.738028 3089 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:16:30.738642 kubelet[3089]: E1101 01:16:30.738559 3089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:16:30.738642 kubelet[3089]: W1101 01:16:30.738591 3089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:16:30.738642 kubelet[3089]: E1101 01:16:30.738626 3089 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:16:30.739238 kubelet[3089]: E1101 01:16:30.739169 3089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:16:30.739238 kubelet[3089]: W1101 01:16:30.739199 3089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:16:30.739459 kubelet[3089]: E1101 01:16:30.739266 3089 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:16:30.739792 kubelet[3089]: E1101 01:16:30.739720 3089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:16:30.739792 kubelet[3089]: W1101 01:16:30.739757 3089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:16:30.739792 kubelet[3089]: E1101 01:16:30.739790 3089 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:16:30.740272 kubelet[3089]: E1101 01:16:30.740195 3089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:16:30.740272 kubelet[3089]: W1101 01:16:30.740249 3089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:16:30.740272 kubelet[3089]: E1101 01:16:30.740274 3089 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:16:30.740774 kubelet[3089]: E1101 01:16:30.740727 3089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:16:30.740774 kubelet[3089]: W1101 01:16:30.740753 3089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:16:30.740958 kubelet[3089]: E1101 01:16:30.740777 3089 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:16:30.741197 kubelet[3089]: E1101 01:16:30.741170 3089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:16:30.741336 kubelet[3089]: W1101 01:16:30.741197 3089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:16:30.741336 kubelet[3089]: E1101 01:16:30.741261 3089 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:16:30.741725 kubelet[3089]: E1101 01:16:30.741673 3089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:16:30.741725 kubelet[3089]: W1101 01:16:30.741698 3089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:16:30.741725 kubelet[3089]: E1101 01:16:30.741723 3089 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:16:30.742143 kubelet[3089]: E1101 01:16:30.742116 3089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:16:30.742274 kubelet[3089]: W1101 01:16:30.742143 3089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:16:30.742274 kubelet[3089]: E1101 01:16:30.742167 3089 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:16:30.742701 kubelet[3089]: E1101 01:16:30.742657 3089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:16:30.742701 kubelet[3089]: W1101 01:16:30.742681 3089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:16:30.742701 kubelet[3089]: E1101 01:16:30.742703 3089 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:16:30.743286 kubelet[3089]: E1101 01:16:30.743238 3089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:16:30.743286 kubelet[3089]: W1101 01:16:30.743268 3089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:16:30.743524 kubelet[3089]: E1101 01:16:30.743296 3089 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:16:30.743761 kubelet[3089]: E1101 01:16:30.743715 3089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:16:30.743761 kubelet[3089]: W1101 01:16:30.743744 3089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:16:30.743961 kubelet[3089]: E1101 01:16:30.743769 3089 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:16:30.744171 kubelet[3089]: E1101 01:16:30.744146 3089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:16:30.744302 kubelet[3089]: W1101 01:16:30.744172 3089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:16:30.744302 kubelet[3089]: E1101 01:16:30.744229 3089 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:16:30.744982 kubelet[3089]: E1101 01:16:30.744930 3089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:16:30.744982 kubelet[3089]: W1101 01:16:30.744958 3089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:16:30.745245 kubelet[3089]: E1101 01:16:30.744986 3089 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:16:30.745526 kubelet[3089]: E1101 01:16:30.745471 3089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:16:30.745526 kubelet[3089]: W1101 01:16:30.745499 3089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:16:30.745821 kubelet[3089]: E1101 01:16:30.745534 3089 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:16:30.746192 kubelet[3089]: E1101 01:16:30.746154 3089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:16:30.746412 kubelet[3089]: W1101 01:16:30.746196 3089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:16:30.746412 kubelet[3089]: E1101 01:16:30.746281 3089 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:16:30.746996 kubelet[3089]: E1101 01:16:30.746951 3089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:16:30.747194 kubelet[3089]: W1101 01:16:30.746996 3089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:16:30.747194 kubelet[3089]: E1101 01:16:30.747066 3089 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:16:30.747719 kubelet[3089]: E1101 01:16:30.747683 3089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:16:30.747861 kubelet[3089]: W1101 01:16:30.747725 3089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:16:30.747861 kubelet[3089]: E1101 01:16:30.747782 3089 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:16:30.748436 kubelet[3089]: E1101 01:16:30.748390 3089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:16:30.748436 kubelet[3089]: W1101 01:16:30.748422 3089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:16:30.748838 kubelet[3089]: E1101 01:16:30.748494 3089 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:16:30.749026 kubelet[3089]: E1101 01:16:30.748934 3089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:16:30.749026 kubelet[3089]: W1101 01:16:30.748972 3089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:16:30.749402 kubelet[3089]: E1101 01:16:30.749041 3089 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:16:30.749581 kubelet[3089]: E1101 01:16:30.749524 3089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:16:30.749581 kubelet[3089]: W1101 01:16:30.749552 3089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:16:30.749951 kubelet[3089]: E1101 01:16:30.749608 3089 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:16:30.750137 kubelet[3089]: E1101 01:16:30.749953 3089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:16:30.750137 kubelet[3089]: W1101 01:16:30.749978 3089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:16:30.750137 kubelet[3089]: E1101 01:16:30.750041 3089 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:16:30.750646 kubelet[3089]: E1101 01:16:30.750449 3089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:16:30.750646 kubelet[3089]: W1101 01:16:30.750473 3089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:16:30.750646 kubelet[3089]: E1101 01:16:30.750507 3089 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:16:30.751177 kubelet[3089]: E1101 01:16:30.750897 3089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:16:30.751177 kubelet[3089]: W1101 01:16:30.750919 3089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:16:30.751177 kubelet[3089]: E1101 01:16:30.750951 3089 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:16:30.751687 kubelet[3089]: E1101 01:16:30.751426 3089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:16:30.751687 kubelet[3089]: W1101 01:16:30.751450 3089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:16:30.751687 kubelet[3089]: E1101 01:16:30.751570 3089 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:16:30.752259 kubelet[3089]: E1101 01:16:30.751928 3089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:16:30.752259 kubelet[3089]: W1101 01:16:30.751954 3089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:16:30.752259 kubelet[3089]: E1101 01:16:30.752042 3089 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:16:30.752767 kubelet[3089]: E1101 01:16:30.752528 3089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:16:30.752767 kubelet[3089]: W1101 01:16:30.752556 3089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:16:30.752767 kubelet[3089]: E1101 01:16:30.752592 3089 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:16:30.756520 kubelet[3089]: E1101 01:16:30.753836 3089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:16:30.756520 kubelet[3089]: W1101 01:16:30.755917 3089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:16:30.756520 kubelet[3089]: E1101 01:16:30.755976 3089 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:16:30.756968 kubelet[3089]: E1101 01:16:30.756849 3089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:16:30.756968 kubelet[3089]: W1101 01:16:30.756882 3089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:16:30.756968 kubelet[3089]: E1101 01:16:30.756912 3089 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:16:30.758121 kubelet[3089]: E1101 01:16:30.758078 3089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:16:30.758121 kubelet[3089]: W1101 01:16:30.758118 3089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:16:30.758325 kubelet[3089]: E1101 01:16:30.758172 3089 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:16:30.758707 kubelet[3089]: E1101 01:16:30.758664 3089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:16:30.758778 kubelet[3089]: W1101 01:16:30.758713 3089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:16:30.758778 kubelet[3089]: E1101 01:16:30.758752 3089 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:16:30.772456 containerd[1824]: time="2025-11-01T01:16:30.772418684Z" level=info msg="StartContainer for \"832fa00015035eb93ba642474247aae06fbb726b66b027942d7c22edd8e93dee\" returns successfully" Nov 1 01:16:30.782246 systemd[1]: cri-containerd-832fa00015035eb93ba642474247aae06fbb726b66b027942d7c22edd8e93dee.scope: Deactivated successfully. 
Nov 1 01:16:31.228627 containerd[1824]: time="2025-11-01T01:16:31.228586997Z" level=info msg="shim disconnected" id=832fa00015035eb93ba642474247aae06fbb726b66b027942d7c22edd8e93dee namespace=k8s.io Nov 1 01:16:31.228627 containerd[1824]: time="2025-11-01T01:16:31.228623531Z" level=warning msg="cleaning up after shim disconnected" id=832fa00015035eb93ba642474247aae06fbb726b66b027942d7c22edd8e93dee namespace=k8s.io Nov 1 01:16:31.228627 containerd[1824]: time="2025-11-01T01:16:31.228629471Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 1 01:16:31.251160 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-832fa00015035eb93ba642474247aae06fbb726b66b027942d7c22edd8e93dee-rootfs.mount: Deactivated successfully. Nov 1 01:16:31.675570 containerd[1824]: time="2025-11-01T01:16:31.675361518Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Nov 1 01:16:32.599293 kubelet[3089]: E1101 01:16:32.599271 3089 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kckfw" podUID="fe329cee-9aa5-425f-b021-f1def80c02c8" Nov 1 01:16:34.598424 kubelet[3089]: E1101 01:16:34.598397 3089 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kckfw" podUID="fe329cee-9aa5-425f-b021-f1def80c02c8" Nov 1 01:16:34.786876 containerd[1824]: time="2025-11-01T01:16:34.786815994Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 01:16:34.787080 containerd[1824]: time="2025-11-01T01:16:34.787010528Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859" Nov 1 01:16:34.787376 containerd[1824]: time="2025-11-01T01:16:34.787331607Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 01:16:34.788503 containerd[1824]: time="2025-11-01T01:16:34.788461053Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 01:16:34.788944 containerd[1824]: time="2025-11-01T01:16:34.788900496Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 3.113458832s" Nov 1 01:16:34.788944 containerd[1824]: time="2025-11-01T01:16:34.788921758Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Nov 1 01:16:34.789901 containerd[1824]: time="2025-11-01T01:16:34.789889494Z" level=info msg="CreateContainer within sandbox \"bdac6dd4695a9a0e42e8183234a7da70c8ef577410772530fa9169eb86b99a0a\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Nov 1 01:16:34.794588 containerd[1824]: time="2025-11-01T01:16:34.794545197Z" level=info msg="CreateContainer within sandbox \"bdac6dd4695a9a0e42e8183234a7da70c8ef577410772530fa9169eb86b99a0a\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"5e7c53a292668745a3c912b232b473d9ee4cce3670af3ddc1f588b1d2f8ba20b\"" Nov 1 01:16:34.794794 containerd[1824]: time="2025-11-01T01:16:34.794754638Z" level=info msg="StartContainer for 
\"5e7c53a292668745a3c912b232b473d9ee4cce3670af3ddc1f588b1d2f8ba20b\"" Nov 1 01:16:34.823490 systemd[1]: Started cri-containerd-5e7c53a292668745a3c912b232b473d9ee4cce3670af3ddc1f588b1d2f8ba20b.scope - libcontainer container 5e7c53a292668745a3c912b232b473d9ee4cce3670af3ddc1f588b1d2f8ba20b. Nov 1 01:16:34.836608 containerd[1824]: time="2025-11-01T01:16:34.836586242Z" level=info msg="StartContainer for \"5e7c53a292668745a3c912b232b473d9ee4cce3670af3ddc1f588b1d2f8ba20b\" returns successfully" Nov 1 01:16:35.397684 containerd[1824]: time="2025-11-01T01:16:35.397659444Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 1 01:16:35.398822 systemd[1]: cri-containerd-5e7c53a292668745a3c912b232b473d9ee4cce3670af3ddc1f588b1d2f8ba20b.scope: Deactivated successfully. Nov 1 01:16:35.485518 kubelet[3089]: I1101 01:16:35.485464 3089 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Nov 1 01:16:35.545860 systemd[1]: Created slice kubepods-besteffort-podc808102b_d4ca_4405_80f4_fc0935baaa15.slice - libcontainer container kubepods-besteffort-podc808102b_d4ca_4405_80f4_fc0935baaa15.slice. Nov 1 01:16:35.553357 systemd[1]: Created slice kubepods-burstable-podc59cb744_7d2f_4d9a_b681_ac1bc163601e.slice - libcontainer container kubepods-burstable-podc59cb744_7d2f_4d9a_b681_ac1bc163601e.slice. Nov 1 01:16:35.559125 systemd[1]: Created slice kubepods-besteffort-pode53afa66_571d_49bb_8168_fa6f398b3e23.slice - libcontainer container kubepods-besteffort-pode53afa66_571d_49bb_8168_fa6f398b3e23.slice. Nov 1 01:16:35.562602 systemd[1]: Created slice kubepods-burstable-pod8933d3a0_6f08_44c4_b76e_2bacec659217.slice - libcontainer container kubepods-burstable-pod8933d3a0_6f08_44c4_b76e_2bacec659217.slice. 
Nov 1 01:16:35.566497 systemd[1]: Created slice kubepods-besteffort-podf2ac6185_3f80_4cfa_971b_e2d87f342f5e.slice - libcontainer container kubepods-besteffort-podf2ac6185_3f80_4cfa_971b_e2d87f342f5e.slice. Nov 1 01:16:35.569834 systemd[1]: Created slice kubepods-besteffort-pod3f82bdd6_ea5e_416a_9d40_0c4d58eb0f92.slice - libcontainer container kubepods-besteffort-pod3f82bdd6_ea5e_416a_9d40_0c4d58eb0f92.slice. Nov 1 01:16:35.588844 kubelet[3089]: I1101 01:16:35.582383 3089 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07dd73e6-dcff-41d4-b90f-7314863a267d-config\") pod \"goldmane-666569f655-t2l45\" (UID: \"07dd73e6-dcff-41d4-b90f-7314863a267d\") " pod="calico-system/goldmane-666569f655-t2l45" Nov 1 01:16:35.588844 kubelet[3089]: I1101 01:16:35.582407 3089 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/07dd73e6-dcff-41d4-b90f-7314863a267d-goldmane-ca-bundle\") pod \"goldmane-666569f655-t2l45\" (UID: \"07dd73e6-dcff-41d4-b90f-7314863a267d\") " pod="calico-system/goldmane-666569f655-t2l45" Nov 1 01:16:35.588844 kubelet[3089]: I1101 01:16:35.582427 3089 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/07dd73e6-dcff-41d4-b90f-7314863a267d-goldmane-key-pair\") pod \"goldmane-666569f655-t2l45\" (UID: \"07dd73e6-dcff-41d4-b90f-7314863a267d\") " pod="calico-system/goldmane-666569f655-t2l45" Nov 1 01:16:35.588844 kubelet[3089]: I1101 01:16:35.582446 3089 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tt5lj\" (UniqueName: \"kubernetes.io/projected/07dd73e6-dcff-41d4-b90f-7314863a267d-kube-api-access-tt5lj\") pod \"goldmane-666569f655-t2l45\" (UID: \"07dd73e6-dcff-41d4-b90f-7314863a267d\") " 
pod="calico-system/goldmane-666569f655-t2l45" Nov 1 01:16:35.588844 kubelet[3089]: I1101 01:16:35.582468 3089 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/3f82bdd6-ea5e-416a-9d40-0c4d58eb0f92-whisker-backend-key-pair\") pod \"whisker-864686dcc7-xql57\" (UID: \"3f82bdd6-ea5e-416a-9d40-0c4d58eb0f92\") " pod="calico-system/whisker-864686dcc7-xql57" Nov 1 01:16:35.572704 systemd[1]: Created slice kubepods-besteffort-pod07dd73e6_dcff_41d4_b90f_7314863a267d.slice - libcontainer container kubepods-besteffort-pod07dd73e6_dcff_41d4_b90f_7314863a267d.slice. Nov 1 01:16:35.589053 kubelet[3089]: I1101 01:16:35.582535 3089 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e53afa66-571d-49bb-8168-fa6f398b3e23-calico-apiserver-certs\") pod \"calico-apiserver-5847c846dc-9sdnh\" (UID: \"e53afa66-571d-49bb-8168-fa6f398b3e23\") " pod="calico-apiserver/calico-apiserver-5847c846dc-9sdnh" Nov 1 01:16:35.589053 kubelet[3089]: I1101 01:16:35.582556 3089 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3f82bdd6-ea5e-416a-9d40-0c4d58eb0f92-whisker-ca-bundle\") pod \"whisker-864686dcc7-xql57\" (UID: \"3f82bdd6-ea5e-416a-9d40-0c4d58eb0f92\") " pod="calico-system/whisker-864686dcc7-xql57" Nov 1 01:16:35.589053 kubelet[3089]: I1101 01:16:35.582568 3089 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzcm8\" (UniqueName: \"kubernetes.io/projected/3f82bdd6-ea5e-416a-9d40-0c4d58eb0f92-kube-api-access-lzcm8\") pod \"whisker-864686dcc7-xql57\" (UID: \"3f82bdd6-ea5e-416a-9d40-0c4d58eb0f92\") " pod="calico-system/whisker-864686dcc7-xql57" Nov 1 01:16:35.589053 kubelet[3089]: I1101 01:16:35.582580 3089 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hlkp2\" (UniqueName: \"kubernetes.io/projected/e53afa66-571d-49bb-8168-fa6f398b3e23-kube-api-access-hlkp2\") pod \"calico-apiserver-5847c846dc-9sdnh\" (UID: \"e53afa66-571d-49bb-8168-fa6f398b3e23\") " pod="calico-apiserver/calico-apiserver-5847c846dc-9sdnh" Nov 1 01:16:35.683582 kubelet[3089]: I1101 01:16:35.683512 3089 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tgsmt\" (UniqueName: \"kubernetes.io/projected/c59cb744-7d2f-4d9a-b681-ac1bc163601e-kube-api-access-tgsmt\") pod \"coredns-668d6bf9bc-4dj4l\" (UID: \"c59cb744-7d2f-4d9a-b681-ac1bc163601e\") " pod="kube-system/coredns-668d6bf9bc-4dj4l" Nov 1 01:16:35.684859 kubelet[3089]: I1101 01:16:35.683615 3089 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zt7xl\" (UniqueName: \"kubernetes.io/projected/f2ac6185-3f80-4cfa-971b-e2d87f342f5e-kube-api-access-zt7xl\") pod \"calico-apiserver-5847c846dc-7gbvk\" (UID: \"f2ac6185-3f80-4cfa-971b-e2d87f342f5e\") " pod="calico-apiserver/calico-apiserver-5847c846dc-7gbvk" Nov 1 01:16:35.684859 kubelet[3089]: I1101 01:16:35.683699 3089 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wrs8\" (UniqueName: \"kubernetes.io/projected/c808102b-d4ca-4405-80f4-fc0935baaa15-kube-api-access-9wrs8\") pod \"calico-kube-controllers-895fdb58f-xjcnc\" (UID: \"c808102b-d4ca-4405-80f4-fc0935baaa15\") " pod="calico-system/calico-kube-controllers-895fdb58f-xjcnc" Nov 1 01:16:35.684859 kubelet[3089]: I1101 01:16:35.683774 3089 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/f2ac6185-3f80-4cfa-971b-e2d87f342f5e-calico-apiserver-certs\") pod \"calico-apiserver-5847c846dc-7gbvk\" (UID: 
\"f2ac6185-3f80-4cfa-971b-e2d87f342f5e\") " pod="calico-apiserver/calico-apiserver-5847c846dc-7gbvk" Nov 1 01:16:35.684859 kubelet[3089]: I1101 01:16:35.683854 3089 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6qff\" (UniqueName: \"kubernetes.io/projected/8933d3a0-6f08-44c4-b76e-2bacec659217-kube-api-access-t6qff\") pod \"coredns-668d6bf9bc-pdzpp\" (UID: \"8933d3a0-6f08-44c4-b76e-2bacec659217\") " pod="kube-system/coredns-668d6bf9bc-pdzpp" Nov 1 01:16:35.684859 kubelet[3089]: I1101 01:16:35.683909 3089 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c808102b-d4ca-4405-80f4-fc0935baaa15-tigera-ca-bundle\") pod \"calico-kube-controllers-895fdb58f-xjcnc\" (UID: \"c808102b-d4ca-4405-80f4-fc0935baaa15\") " pod="calico-system/calico-kube-controllers-895fdb58f-xjcnc" Nov 1 01:16:35.685512 kubelet[3089]: I1101 01:16:35.684178 3089 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c59cb744-7d2f-4d9a-b681-ac1bc163601e-config-volume\") pod \"coredns-668d6bf9bc-4dj4l\" (UID: \"c59cb744-7d2f-4d9a-b681-ac1bc163601e\") " pod="kube-system/coredns-668d6bf9bc-4dj4l" Nov 1 01:16:35.685512 kubelet[3089]: I1101 01:16:35.684325 3089 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8933d3a0-6f08-44c4-b76e-2bacec659217-config-volume\") pod \"coredns-668d6bf9bc-pdzpp\" (UID: \"8933d3a0-6f08-44c4-b76e-2bacec659217\") " pod="kube-system/coredns-668d6bf9bc-pdzpp" Nov 1 01:16:35.769492 containerd[1824]: time="2025-11-01T01:16:35.769427714Z" level=info msg="shim disconnected" id=5e7c53a292668745a3c912b232b473d9ee4cce3670af3ddc1f588b1d2f8ba20b namespace=k8s.io Nov 1 01:16:35.769492 containerd[1824]: 
time="2025-11-01T01:16:35.769459418Z" level=warning msg="cleaning up after shim disconnected" id=5e7c53a292668745a3c912b232b473d9ee4cce3670af3ddc1f588b1d2f8ba20b namespace=k8s.io Nov 1 01:16:35.769492 containerd[1824]: time="2025-11-01T01:16:35.769464805Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 1 01:16:35.809915 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5e7c53a292668745a3c912b232b473d9ee4cce3670af3ddc1f588b1d2f8ba20b-rootfs.mount: Deactivated successfully. Nov 1 01:16:35.850768 containerd[1824]: time="2025-11-01T01:16:35.850710469Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-895fdb58f-xjcnc,Uid:c808102b-d4ca-4405-80f4-fc0935baaa15,Namespace:calico-system,Attempt:0,}" Nov 1 01:16:35.858195 containerd[1824]: time="2025-11-01T01:16:35.858173617Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-4dj4l,Uid:c59cb744-7d2f-4d9a-b681-ac1bc163601e,Namespace:kube-system,Attempt:0,}" Nov 1 01:16:35.861637 containerd[1824]: time="2025-11-01T01:16:35.861614387Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5847c846dc-9sdnh,Uid:e53afa66-571d-49bb-8168-fa6f398b3e23,Namespace:calico-apiserver,Attempt:0,}" Nov 1 01:16:35.879670 containerd[1824]: time="2025-11-01T01:16:35.879643797Z" level=error msg="Failed to destroy network for sandbox \"d9544d4764784ff980e2d8e4c1336cdfa6aa28840d78454563c73c95cd16329a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:16:35.879831 containerd[1824]: time="2025-11-01T01:16:35.879818655Z" level=error msg="encountered an error cleaning up failed sandbox \"d9544d4764784ff980e2d8e4c1336cdfa6aa28840d78454563c73c95cd16329a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:16:35.879859 containerd[1824]: time="2025-11-01T01:16:35.879849833Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-895fdb58f-xjcnc,Uid:c808102b-d4ca-4405-80f4-fc0935baaa15,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d9544d4764784ff980e2d8e4c1336cdfa6aa28840d78454563c73c95cd16329a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:16:35.880012 kubelet[3089]: E1101 01:16:35.879987 3089 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d9544d4764784ff980e2d8e4c1336cdfa6aa28840d78454563c73c95cd16329a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:16:35.880048 kubelet[3089]: E1101 01:16:35.880037 3089 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d9544d4764784ff980e2d8e4c1336cdfa6aa28840d78454563c73c95cd16329a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-895fdb58f-xjcnc" Nov 1 01:16:35.880070 kubelet[3089]: E1101 01:16:35.880055 3089 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d9544d4764784ff980e2d8e4c1336cdfa6aa28840d78454563c73c95cd16329a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-895fdb58f-xjcnc" Nov 1 01:16:35.880094 kubelet[3089]: E1101 01:16:35.880082 3089 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-895fdb58f-xjcnc_calico-system(c808102b-d4ca-4405-80f4-fc0935baaa15)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-895fdb58f-xjcnc_calico-system(c808102b-d4ca-4405-80f4-fc0935baaa15)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d9544d4764784ff980e2d8e4c1336cdfa6aa28840d78454563c73c95cd16329a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-895fdb58f-xjcnc" podUID="c808102b-d4ca-4405-80f4-fc0935baaa15" Nov 1 01:16:35.885029 containerd[1824]: time="2025-11-01T01:16:35.885003840Z" level=error msg="Failed to destroy network for sandbox \"362493d71f18c74ed56acf27a792f037278114e860860da95ea9dd993d0069f9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:16:35.885190 containerd[1824]: time="2025-11-01T01:16:35.885178249Z" level=error msg="encountered an error cleaning up failed sandbox \"362493d71f18c74ed56acf27a792f037278114e860860da95ea9dd993d0069f9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:16:35.885223 containerd[1824]: time="2025-11-01T01:16:35.885212239Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-668d6bf9bc-4dj4l,Uid:c59cb744-7d2f-4d9a-b681-ac1bc163601e,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"362493d71f18c74ed56acf27a792f037278114e860860da95ea9dd993d0069f9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:16:35.885347 kubelet[3089]: E1101 01:16:35.885326 3089 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"362493d71f18c74ed56acf27a792f037278114e860860da95ea9dd993d0069f9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:16:35.885394 kubelet[3089]: E1101 01:16:35.885358 3089 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"362493d71f18c74ed56acf27a792f037278114e860860da95ea9dd993d0069f9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-4dj4l" Nov 1 01:16:35.885394 kubelet[3089]: E1101 01:16:35.885372 3089 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"362493d71f18c74ed56acf27a792f037278114e860860da95ea9dd993d0069f9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-4dj4l" Nov 1 01:16:35.885436 kubelet[3089]: E1101 01:16:35.885396 3089 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"CreatePodSandbox\" for \"coredns-668d6bf9bc-4dj4l_kube-system(c59cb744-7d2f-4d9a-b681-ac1bc163601e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-4dj4l_kube-system(c59cb744-7d2f-4d9a-b681-ac1bc163601e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"362493d71f18c74ed56acf27a792f037278114e860860da95ea9dd993d0069f9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-4dj4l" podUID="c59cb744-7d2f-4d9a-b681-ac1bc163601e" Nov 1 01:16:35.888110 containerd[1824]: time="2025-11-01T01:16:35.888092252Z" level=error msg="Failed to destroy network for sandbox \"8d79ea38d44eeb399d4895b42203ac4cdba8d2dbd9a700fb3227a7bf10979fb1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:16:35.888273 containerd[1824]: time="2025-11-01T01:16:35.888258307Z" level=error msg="encountered an error cleaning up failed sandbox \"8d79ea38d44eeb399d4895b42203ac4cdba8d2dbd9a700fb3227a7bf10979fb1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:16:35.888305 containerd[1824]: time="2025-11-01T01:16:35.888284556Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5847c846dc-9sdnh,Uid:e53afa66-571d-49bb-8168-fa6f398b3e23,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8d79ea38d44eeb399d4895b42203ac4cdba8d2dbd9a700fb3227a7bf10979fb1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Nov 1 01:16:35.888399 kubelet[3089]: E1101 01:16:35.888386 3089 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8d79ea38d44eeb399d4895b42203ac4cdba8d2dbd9a700fb3227a7bf10979fb1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:16:35.888425 kubelet[3089]: E1101 01:16:35.888411 3089 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8d79ea38d44eeb399d4895b42203ac4cdba8d2dbd9a700fb3227a7bf10979fb1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5847c846dc-9sdnh" Nov 1 01:16:35.888444 kubelet[3089]: E1101 01:16:35.888422 3089 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8d79ea38d44eeb399d4895b42203ac4cdba8d2dbd9a700fb3227a7bf10979fb1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5847c846dc-9sdnh" Nov 1 01:16:35.888467 kubelet[3089]: E1101 01:16:35.888445 3089 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5847c846dc-9sdnh_calico-apiserver(e53afa66-571d-49bb-8168-fa6f398b3e23)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5847c846dc-9sdnh_calico-apiserver(e53afa66-571d-49bb-8168-fa6f398b3e23)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"8d79ea38d44eeb399d4895b42203ac4cdba8d2dbd9a700fb3227a7bf10979fb1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5847c846dc-9sdnh" podUID="e53afa66-571d-49bb-8168-fa6f398b3e23" Nov 1 01:16:35.889821 containerd[1824]: time="2025-11-01T01:16:35.889807434Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-864686dcc7-xql57,Uid:3f82bdd6-ea5e-416a-9d40-0c4d58eb0f92,Namespace:calico-system,Attempt:0,}" Nov 1 01:16:35.889860 containerd[1824]: time="2025-11-01T01:16:35.889836709Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-t2l45,Uid:07dd73e6-dcff-41d4-b90f-7314863a267d,Namespace:calico-system,Attempt:0,}" Nov 1 01:16:35.889880 containerd[1824]: time="2025-11-01T01:16:35.889808640Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5847c846dc-7gbvk,Uid:f2ac6185-3f80-4cfa-971b-e2d87f342f5e,Namespace:calico-apiserver,Attempt:0,}" Nov 1 01:16:35.889949 containerd[1824]: time="2025-11-01T01:16:35.889809879Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-pdzpp,Uid:8933d3a0-6f08-44c4-b76e-2bacec659217,Namespace:kube-system,Attempt:0,}" Nov 1 01:16:35.921885 containerd[1824]: time="2025-11-01T01:16:35.921852681Z" level=error msg="Failed to destroy network for sandbox \"fe3ff701bfb1080ed5eff1e14001e57b027b00b44297bc30a87a5f8604829b4a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:16:35.922070 containerd[1824]: time="2025-11-01T01:16:35.922049819Z" level=error msg="Failed to destroy network for sandbox \"a4f4de81d3756b477fcee71a26dbb9156fb3811bd3f012b408dfe0025fb54aff\"" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:16:35.922095 containerd[1824]: time="2025-11-01T01:16:35.922063755Z" level=error msg="encountered an error cleaning up failed sandbox \"fe3ff701bfb1080ed5eff1e14001e57b027b00b44297bc30a87a5f8604829b4a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:16:35.922117 containerd[1824]: time="2025-11-01T01:16:35.922097021Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-pdzpp,Uid:8933d3a0-6f08-44c4-b76e-2bacec659217,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"fe3ff701bfb1080ed5eff1e14001e57b027b00b44297bc30a87a5f8604829b4a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:16:35.922218 containerd[1824]: time="2025-11-01T01:16:35.922199461Z" level=error msg="encountered an error cleaning up failed sandbox \"a4f4de81d3756b477fcee71a26dbb9156fb3811bd3f012b408dfe0025fb54aff\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:16:35.922247 containerd[1824]: time="2025-11-01T01:16:35.922228323Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-t2l45,Uid:07dd73e6-dcff-41d4-b90f-7314863a267d,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a4f4de81d3756b477fcee71a26dbb9156fb3811bd3f012b408dfe0025fb54aff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: 
no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:16:35.922290 kubelet[3089]: E1101 01:16:35.922252 3089 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fe3ff701bfb1080ed5eff1e14001e57b027b00b44297bc30a87a5f8604829b4a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:16:35.922319 kubelet[3089]: E1101 01:16:35.922291 3089 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a4f4de81d3756b477fcee71a26dbb9156fb3811bd3f012b408dfe0025fb54aff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:16:35.922340 kubelet[3089]: E1101 01:16:35.922323 3089 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a4f4de81d3756b477fcee71a26dbb9156fb3811bd3f012b408dfe0025fb54aff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-t2l45" Nov 1 01:16:35.922358 kubelet[3089]: E1101 01:16:35.922343 3089 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a4f4de81d3756b477fcee71a26dbb9156fb3811bd3f012b408dfe0025fb54aff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-t2l45" Nov 1 01:16:35.922381 
kubelet[3089]: E1101 01:16:35.922295 3089 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fe3ff701bfb1080ed5eff1e14001e57b027b00b44297bc30a87a5f8604829b4a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-pdzpp" Nov 1 01:16:35.922402 kubelet[3089]: E1101 01:16:35.922375 3089 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fe3ff701bfb1080ed5eff1e14001e57b027b00b44297bc30a87a5f8604829b4a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-pdzpp" Nov 1 01:16:35.922402 kubelet[3089]: E1101 01:16:35.922379 3089 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-t2l45_calico-system(07dd73e6-dcff-41d4-b90f-7314863a267d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-t2l45_calico-system(07dd73e6-dcff-41d4-b90f-7314863a267d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a4f4de81d3756b477fcee71a26dbb9156fb3811bd3f012b408dfe0025fb54aff\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-t2l45" podUID="07dd73e6-dcff-41d4-b90f-7314863a267d" Nov 1 01:16:35.922452 kubelet[3089]: E1101 01:16:35.922411 3089 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-pdzpp_kube-system(8933d3a0-6f08-44c4-b76e-2bacec659217)\" 
with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-pdzpp_kube-system(8933d3a0-6f08-44c4-b76e-2bacec659217)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fe3ff701bfb1080ed5eff1e14001e57b027b00b44297bc30a87a5f8604829b4a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-pdzpp" podUID="8933d3a0-6f08-44c4-b76e-2bacec659217" Nov 1 01:16:35.923305 containerd[1824]: time="2025-11-01T01:16:35.923285066Z" level=error msg="Failed to destroy network for sandbox \"b2ea873162c10dcd4ca87ba48e1dfede86854106407775931944d0fd7a08c6dc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:16:35.923450 containerd[1824]: time="2025-11-01T01:16:35.923437139Z" level=error msg="encountered an error cleaning up failed sandbox \"b2ea873162c10dcd4ca87ba48e1dfede86854106407775931944d0fd7a08c6dc\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:16:35.923475 containerd[1824]: time="2025-11-01T01:16:35.923464639Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-864686dcc7-xql57,Uid:3f82bdd6-ea5e-416a-9d40-0c4d58eb0f92,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b2ea873162c10dcd4ca87ba48e1dfede86854106407775931944d0fd7a08c6dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:16:35.923545 kubelet[3089]: E1101 01:16:35.923531 
3089 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b2ea873162c10dcd4ca87ba48e1dfede86854106407775931944d0fd7a08c6dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:16:35.923572 kubelet[3089]: E1101 01:16:35.923552 3089 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b2ea873162c10dcd4ca87ba48e1dfede86854106407775931944d0fd7a08c6dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-864686dcc7-xql57" Nov 1 01:16:35.923596 kubelet[3089]: E1101 01:16:35.923572 3089 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b2ea873162c10dcd4ca87ba48e1dfede86854106407775931944d0fd7a08c6dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-864686dcc7-xql57" Nov 1 01:16:35.923596 kubelet[3089]: E1101 01:16:35.923589 3089 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-864686dcc7-xql57_calico-system(3f82bdd6-ea5e-416a-9d40-0c4d58eb0f92)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-864686dcc7-xql57_calico-system(3f82bdd6-ea5e-416a-9d40-0c4d58eb0f92)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b2ea873162c10dcd4ca87ba48e1dfede86854106407775931944d0fd7a08c6dc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-864686dcc7-xql57" podUID="3f82bdd6-ea5e-416a-9d40-0c4d58eb0f92" Nov 1 01:16:35.923835 containerd[1824]: time="2025-11-01T01:16:35.923801051Z" level=error msg="Failed to destroy network for sandbox \"47e948a33de73f49c8858f1c068620217c8069e91ecfbb0fe71cd4123283b495\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:16:35.923967 containerd[1824]: time="2025-11-01T01:16:35.923954609Z" level=error msg="encountered an error cleaning up failed sandbox \"47e948a33de73f49c8858f1c068620217c8069e91ecfbb0fe71cd4123283b495\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:16:35.923992 containerd[1824]: time="2025-11-01T01:16:35.923975529Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5847c846dc-7gbvk,Uid:f2ac6185-3f80-4cfa-971b-e2d87f342f5e,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"47e948a33de73f49c8858f1c068620217c8069e91ecfbb0fe71cd4123283b495\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:16:35.924051 kubelet[3089]: E1101 01:16:35.924039 3089 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"47e948a33de73f49c8858f1c068620217c8069e91ecfbb0fe71cd4123283b495\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
Nov 1 01:16:35.924074 kubelet[3089]: E1101 01:16:35.924058 3089 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"47e948a33de73f49c8858f1c068620217c8069e91ecfbb0fe71cd4123283b495\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5847c846dc-7gbvk" Nov 1 01:16:35.924074 kubelet[3089]: E1101 01:16:35.924068 3089 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"47e948a33de73f49c8858f1c068620217c8069e91ecfbb0fe71cd4123283b495\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5847c846dc-7gbvk" Nov 1 01:16:35.924110 kubelet[3089]: E1101 01:16:35.924086 3089 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5847c846dc-7gbvk_calico-apiserver(f2ac6185-3f80-4cfa-971b-e2d87f342f5e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5847c846dc-7gbvk_calico-apiserver(f2ac6185-3f80-4cfa-971b-e2d87f342f5e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"47e948a33de73f49c8858f1c068620217c8069e91ecfbb0fe71cd4123283b495\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5847c846dc-7gbvk" podUID="f2ac6185-3f80-4cfa-971b-e2d87f342f5e" Nov 1 01:16:36.604265 systemd[1]: Created slice kubepods-besteffort-podfe329cee_9aa5_425f_b021_f1def80c02c8.slice - libcontainer container 
kubepods-besteffort-podfe329cee_9aa5_425f_b021_f1def80c02c8.slice. Nov 1 01:16:36.606271 containerd[1824]: time="2025-11-01T01:16:36.606238756Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kckfw,Uid:fe329cee-9aa5-425f-b021-f1def80c02c8,Namespace:calico-system,Attempt:0,}" Nov 1 01:16:36.633034 containerd[1824]: time="2025-11-01T01:16:36.633007983Z" level=error msg="Failed to destroy network for sandbox \"7de90ec73a8f1556262277f2e9f8cfe479e58739fede3ad0906665aa77ef530f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:16:36.633190 containerd[1824]: time="2025-11-01T01:16:36.633178826Z" level=error msg="encountered an error cleaning up failed sandbox \"7de90ec73a8f1556262277f2e9f8cfe479e58739fede3ad0906665aa77ef530f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:16:36.633268 containerd[1824]: time="2025-11-01T01:16:36.633229350Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kckfw,Uid:fe329cee-9aa5-425f-b021-f1def80c02c8,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7de90ec73a8f1556262277f2e9f8cfe479e58739fede3ad0906665aa77ef530f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:16:36.633400 kubelet[3089]: E1101 01:16:36.633376 3089 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7de90ec73a8f1556262277f2e9f8cfe479e58739fede3ad0906665aa77ef530f\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:16:36.633434 kubelet[3089]: E1101 01:16:36.633416 3089 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7de90ec73a8f1556262277f2e9f8cfe479e58739fede3ad0906665aa77ef530f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-kckfw" Nov 1 01:16:36.633434 kubelet[3089]: E1101 01:16:36.633430 3089 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7de90ec73a8f1556262277f2e9f8cfe479e58739fede3ad0906665aa77ef530f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-kckfw" Nov 1 01:16:36.633470 kubelet[3089]: E1101 01:16:36.633455 3089 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-kckfw_calico-system(fe329cee-9aa5-425f-b021-f1def80c02c8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-kckfw_calico-system(fe329cee-9aa5-425f-b021-f1def80c02c8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7de90ec73a8f1556262277f2e9f8cfe479e58739fede3ad0906665aa77ef530f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-kckfw" podUID="fe329cee-9aa5-425f-b021-f1def80c02c8" Nov 1 01:16:36.688466 kubelet[3089]: I1101 01:16:36.688407 3089 pod_container_deletor.go:80] 
"Container not found in pod's containers" containerID="a4f4de81d3756b477fcee71a26dbb9156fb3811bd3f012b408dfe0025fb54aff" Nov 1 01:16:36.689801 containerd[1824]: time="2025-11-01T01:16:36.689724042Z" level=info msg="StopPodSandbox for \"a4f4de81d3756b477fcee71a26dbb9156fb3811bd3f012b408dfe0025fb54aff\"" Nov 1 01:16:36.690380 containerd[1824]: time="2025-11-01T01:16:36.690303313Z" level=info msg="Ensure that sandbox a4f4de81d3756b477fcee71a26dbb9156fb3811bd3f012b408dfe0025fb54aff in task-service has been cleanup successfully" Nov 1 01:16:36.690723 kubelet[3089]: I1101 01:16:36.690678 3089 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8d79ea38d44eeb399d4895b42203ac4cdba8d2dbd9a700fb3227a7bf10979fb1" Nov 1 01:16:36.691899 containerd[1824]: time="2025-11-01T01:16:36.691839974Z" level=info msg="StopPodSandbox for \"8d79ea38d44eeb399d4895b42203ac4cdba8d2dbd9a700fb3227a7bf10979fb1\"" Nov 1 01:16:36.692363 containerd[1824]: time="2025-11-01T01:16:36.692311652Z" level=info msg="Ensure that sandbox 8d79ea38d44eeb399d4895b42203ac4cdba8d2dbd9a700fb3227a7bf10979fb1 in task-service has been cleanup successfully" Nov 1 01:16:36.693198 kubelet[3089]: I1101 01:16:36.693123 3089 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="362493d71f18c74ed56acf27a792f037278114e860860da95ea9dd993d0069f9" Nov 1 01:16:36.694309 containerd[1824]: time="2025-11-01T01:16:36.694242410Z" level=info msg="StopPodSandbox for \"362493d71f18c74ed56acf27a792f037278114e860860da95ea9dd993d0069f9\"" Nov 1 01:16:36.694810 containerd[1824]: time="2025-11-01T01:16:36.694727794Z" level=info msg="Ensure that sandbox 362493d71f18c74ed56acf27a792f037278114e860860da95ea9dd993d0069f9 in task-service has been cleanup successfully" Nov 1 01:16:36.695662 kubelet[3089]: I1101 01:16:36.695597 3089 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7de90ec73a8f1556262277f2e9f8cfe479e58739fede3ad0906665aa77ef530f" Nov 1 
01:16:36.696885 containerd[1824]: time="2025-11-01T01:16:36.696798925Z" level=info msg="StopPodSandbox for \"7de90ec73a8f1556262277f2e9f8cfe479e58739fede3ad0906665aa77ef530f\"" Nov 1 01:16:36.697441 containerd[1824]: time="2025-11-01T01:16:36.697359619Z" level=info msg="Ensure that sandbox 7de90ec73a8f1556262277f2e9f8cfe479e58739fede3ad0906665aa77ef530f in task-service has been cleanup successfully" Nov 1 01:16:36.697640 kubelet[3089]: I1101 01:16:36.697630 3089 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="47e948a33de73f49c8858f1c068620217c8069e91ecfbb0fe71cd4123283b495" Nov 1 01:16:36.697998 containerd[1824]: time="2025-11-01T01:16:36.697946386Z" level=info msg="StopPodSandbox for \"47e948a33de73f49c8858f1c068620217c8069e91ecfbb0fe71cd4123283b495\"" Nov 1 01:16:36.698128 containerd[1824]: time="2025-11-01T01:16:36.698111998Z" level=info msg="Ensure that sandbox 47e948a33de73f49c8858f1c068620217c8069e91ecfbb0fe71cd4123283b495 in task-service has been cleanup successfully" Nov 1 01:16:36.698182 kubelet[3089]: I1101 01:16:36.698171 3089 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d9544d4764784ff980e2d8e4c1336cdfa6aa28840d78454563c73c95cd16329a" Nov 1 01:16:36.698459 containerd[1824]: time="2025-11-01T01:16:36.698442085Z" level=info msg="StopPodSandbox for \"d9544d4764784ff980e2d8e4c1336cdfa6aa28840d78454563c73c95cd16329a\"" Nov 1 01:16:36.698593 containerd[1824]: time="2025-11-01T01:16:36.698580570Z" level=info msg="Ensure that sandbox d9544d4764784ff980e2d8e4c1336cdfa6aa28840d78454563c73c95cd16329a in task-service has been cleanup successfully" Nov 1 01:16:36.698893 kubelet[3089]: I1101 01:16:36.698874 3089 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fe3ff701bfb1080ed5eff1e14001e57b027b00b44297bc30a87a5f8604829b4a" Nov 1 01:16:36.699426 containerd[1824]: time="2025-11-01T01:16:36.699384998Z" level=info msg="StopPodSandbox for 
\"fe3ff701bfb1080ed5eff1e14001e57b027b00b44297bc30a87a5f8604829b4a\"" Nov 1 01:16:36.699591 containerd[1824]: time="2025-11-01T01:16:36.699574001Z" level=info msg="Ensure that sandbox fe3ff701bfb1080ed5eff1e14001e57b027b00b44297bc30a87a5f8604829b4a in task-service has been cleanup successfully" Nov 1 01:16:36.700850 kubelet[3089]: I1101 01:16:36.700822 3089 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b2ea873162c10dcd4ca87ba48e1dfede86854106407775931944d0fd7a08c6dc" Nov 1 01:16:36.701036 containerd[1824]: time="2025-11-01T01:16:36.701011155Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Nov 1 01:16:36.701251 containerd[1824]: time="2025-11-01T01:16:36.701234510Z" level=info msg="StopPodSandbox for \"b2ea873162c10dcd4ca87ba48e1dfede86854106407775931944d0fd7a08c6dc\"" Nov 1 01:16:36.701397 containerd[1824]: time="2025-11-01T01:16:36.701382578Z" level=info msg="Ensure that sandbox b2ea873162c10dcd4ca87ba48e1dfede86854106407775931944d0fd7a08c6dc in task-service has been cleanup successfully" Nov 1 01:16:36.713864 containerd[1824]: time="2025-11-01T01:16:36.713821702Z" level=error msg="StopPodSandbox for \"a4f4de81d3756b477fcee71a26dbb9156fb3811bd3f012b408dfe0025fb54aff\" failed" error="failed to destroy network for sandbox \"a4f4de81d3756b477fcee71a26dbb9156fb3811bd3f012b408dfe0025fb54aff\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:16:36.713962 containerd[1824]: time="2025-11-01T01:16:36.713824796Z" level=error msg="StopPodSandbox for \"362493d71f18c74ed56acf27a792f037278114e860860da95ea9dd993d0069f9\" failed" error="failed to destroy network for sandbox \"362493d71f18c74ed56acf27a792f037278114e860860da95ea9dd993d0069f9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" Nov 1 01:16:36.714027 kubelet[3089]: E1101 01:16:36.714004 3089 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a4f4de81d3756b477fcee71a26dbb9156fb3811bd3f012b408dfe0025fb54aff\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a4f4de81d3756b477fcee71a26dbb9156fb3811bd3f012b408dfe0025fb54aff" Nov 1 01:16:36.714082 kubelet[3089]: E1101 01:16:36.714051 3089 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a4f4de81d3756b477fcee71a26dbb9156fb3811bd3f012b408dfe0025fb54aff"} Nov 1 01:16:36.714111 kubelet[3089]: E1101 01:16:36.714004 3089 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"362493d71f18c74ed56acf27a792f037278114e860860da95ea9dd993d0069f9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="362493d71f18c74ed56acf27a792f037278114e860860da95ea9dd993d0069f9" Nov 1 01:16:36.714111 kubelet[3089]: E1101 01:16:36.714103 3089 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"07dd73e6-dcff-41d4-b90f-7314863a267d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a4f4de81d3756b477fcee71a26dbb9156fb3811bd3f012b408dfe0025fb54aff\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 01:16:36.714111 kubelet[3089]: E1101 01:16:36.714104 3089 kuberuntime_manager.go:1546] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"362493d71f18c74ed56acf27a792f037278114e860860da95ea9dd993d0069f9"} Nov 1 01:16:36.714227 kubelet[3089]: E1101 01:16:36.714123 3089 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"07dd73e6-dcff-41d4-b90f-7314863a267d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a4f4de81d3756b477fcee71a26dbb9156fb3811bd3f012b408dfe0025fb54aff\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-t2l45" podUID="07dd73e6-dcff-41d4-b90f-7314863a267d" Nov 1 01:16:36.714227 kubelet[3089]: E1101 01:16:36.714132 3089 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c59cb744-7d2f-4d9a-b681-ac1bc163601e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"362493d71f18c74ed56acf27a792f037278114e860860da95ea9dd993d0069f9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 01:16:36.714227 kubelet[3089]: E1101 01:16:36.714155 3089 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c59cb744-7d2f-4d9a-b681-ac1bc163601e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"362493d71f18c74ed56acf27a792f037278114e860860da95ea9dd993d0069f9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-4dj4l" podUID="c59cb744-7d2f-4d9a-b681-ac1bc163601e" Nov 1 01:16:36.715665 containerd[1824]: 
time="2025-11-01T01:16:36.715642765Z" level=error msg="StopPodSandbox for \"8d79ea38d44eeb399d4895b42203ac4cdba8d2dbd9a700fb3227a7bf10979fb1\" failed" error="failed to destroy network for sandbox \"8d79ea38d44eeb399d4895b42203ac4cdba8d2dbd9a700fb3227a7bf10979fb1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:16:36.715755 containerd[1824]: time="2025-11-01T01:16:36.715735928Z" level=error msg="StopPodSandbox for \"fe3ff701bfb1080ed5eff1e14001e57b027b00b44297bc30a87a5f8604829b4a\" failed" error="failed to destroy network for sandbox \"fe3ff701bfb1080ed5eff1e14001e57b027b00b44297bc30a87a5f8604829b4a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:16:36.715788 kubelet[3089]: E1101 01:16:36.715749 3089 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8d79ea38d44eeb399d4895b42203ac4cdba8d2dbd9a700fb3227a7bf10979fb1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8d79ea38d44eeb399d4895b42203ac4cdba8d2dbd9a700fb3227a7bf10979fb1" Nov 1 01:16:36.715788 kubelet[3089]: E1101 01:16:36.715775 3089 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"8d79ea38d44eeb399d4895b42203ac4cdba8d2dbd9a700fb3227a7bf10979fb1"} Nov 1 01:16:36.715841 kubelet[3089]: E1101 01:16:36.715794 3089 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e53afa66-571d-49bb-8168-fa6f398b3e23\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"8d79ea38d44eeb399d4895b42203ac4cdba8d2dbd9a700fb3227a7bf10979fb1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 01:16:36.715841 kubelet[3089]: E1101 01:16:36.715807 3089 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e53afa66-571d-49bb-8168-fa6f398b3e23\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8d79ea38d44eeb399d4895b42203ac4cdba8d2dbd9a700fb3227a7bf10979fb1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5847c846dc-9sdnh" podUID="e53afa66-571d-49bb-8168-fa6f398b3e23" Nov 1 01:16:36.715841 kubelet[3089]: E1101 01:16:36.715809 3089 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"fe3ff701bfb1080ed5eff1e14001e57b027b00b44297bc30a87a5f8604829b4a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="fe3ff701bfb1080ed5eff1e14001e57b027b00b44297bc30a87a5f8604829b4a" Nov 1 01:16:36.715841 kubelet[3089]: E1101 01:16:36.715827 3089 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"fe3ff701bfb1080ed5eff1e14001e57b027b00b44297bc30a87a5f8604829b4a"} Nov 1 01:16:36.715943 kubelet[3089]: E1101 01:16:36.715841 3089 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8933d3a0-6f08-44c4-b76e-2bacec659217\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"fe3ff701bfb1080ed5eff1e14001e57b027b00b44297bc30a87a5f8604829b4a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 01:16:36.715943 kubelet[3089]: E1101 01:16:36.715854 3089 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8933d3a0-6f08-44c4-b76e-2bacec659217\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fe3ff701bfb1080ed5eff1e14001e57b027b00b44297bc30a87a5f8604829b4a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-pdzpp" podUID="8933d3a0-6f08-44c4-b76e-2bacec659217" Nov 1 01:16:36.716659 containerd[1824]: time="2025-11-01T01:16:36.716645460Z" level=error msg="StopPodSandbox for \"d9544d4764784ff980e2d8e4c1336cdfa6aa28840d78454563c73c95cd16329a\" failed" error="failed to destroy network for sandbox \"d9544d4764784ff980e2d8e4c1336cdfa6aa28840d78454563c73c95cd16329a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:16:36.716725 kubelet[3089]: E1101 01:16:36.716711 3089 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d9544d4764784ff980e2d8e4c1336cdfa6aa28840d78454563c73c95cd16329a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d9544d4764784ff980e2d8e4c1336cdfa6aa28840d78454563c73c95cd16329a" Nov 1 01:16:36.716756 kubelet[3089]: E1101 01:16:36.716728 3089 
kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d9544d4764784ff980e2d8e4c1336cdfa6aa28840d78454563c73c95cd16329a"} Nov 1 01:16:36.716756 kubelet[3089]: E1101 01:16:36.716742 3089 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c808102b-d4ca-4405-80f4-fc0935baaa15\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d9544d4764784ff980e2d8e4c1336cdfa6aa28840d78454563c73c95cd16329a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 01:16:36.716805 kubelet[3089]: E1101 01:16:36.716756 3089 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c808102b-d4ca-4405-80f4-fc0935baaa15\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d9544d4764784ff980e2d8e4c1336cdfa6aa28840d78454563c73c95cd16329a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-895fdb58f-xjcnc" podUID="c808102b-d4ca-4405-80f4-fc0935baaa15" Nov 1 01:16:36.717112 containerd[1824]: time="2025-11-01T01:16:36.717093934Z" level=error msg="StopPodSandbox for \"7de90ec73a8f1556262277f2e9f8cfe479e58739fede3ad0906665aa77ef530f\" failed" error="failed to destroy network for sandbox \"7de90ec73a8f1556262277f2e9f8cfe479e58739fede3ad0906665aa77ef530f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:16:36.717178 kubelet[3089]: E1101 01:16:36.717164 3089 log.go:32] "StopPodSandbox from runtime service failed" err="rpc 
error: code = Unknown desc = failed to destroy network for sandbox \"7de90ec73a8f1556262277f2e9f8cfe479e58739fede3ad0906665aa77ef530f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7de90ec73a8f1556262277f2e9f8cfe479e58739fede3ad0906665aa77ef530f" Nov 1 01:16:36.717178 kubelet[3089]: E1101 01:16:36.717178 3089 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7de90ec73a8f1556262277f2e9f8cfe479e58739fede3ad0906665aa77ef530f"} Nov 1 01:16:36.717246 kubelet[3089]: E1101 01:16:36.717192 3089 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"fe329cee-9aa5-425f-b021-f1def80c02c8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7de90ec73a8f1556262277f2e9f8cfe479e58739fede3ad0906665aa77ef530f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 01:16:36.717246 kubelet[3089]: E1101 01:16:36.717208 3089 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"fe329cee-9aa5-425f-b021-f1def80c02c8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7de90ec73a8f1556262277f2e9f8cfe479e58739fede3ad0906665aa77ef530f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-kckfw" podUID="fe329cee-9aa5-425f-b021-f1def80c02c8" Nov 1 01:16:36.717914 containerd[1824]: time="2025-11-01T01:16:36.717877617Z" level=error msg="StopPodSandbox for 
\"47e948a33de73f49c8858f1c068620217c8069e91ecfbb0fe71cd4123283b495\" failed" error="failed to destroy network for sandbox \"47e948a33de73f49c8858f1c068620217c8069e91ecfbb0fe71cd4123283b495\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:16:36.717953 kubelet[3089]: E1101 01:16:36.717941 3089 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"47e948a33de73f49c8858f1c068620217c8069e91ecfbb0fe71cd4123283b495\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="47e948a33de73f49c8858f1c068620217c8069e91ecfbb0fe71cd4123283b495" Nov 1 01:16:36.717981 kubelet[3089]: E1101 01:16:36.717958 3089 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"47e948a33de73f49c8858f1c068620217c8069e91ecfbb0fe71cd4123283b495"} Nov 1 01:16:36.717981 kubelet[3089]: E1101 01:16:36.717973 3089 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f2ac6185-3f80-4cfa-971b-e2d87f342f5e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"47e948a33de73f49c8858f1c068620217c8069e91ecfbb0fe71cd4123283b495\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 01:16:36.718034 kubelet[3089]: E1101 01:16:36.717985 3089 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f2ac6185-3f80-4cfa-971b-e2d87f342f5e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"47e948a33de73f49c8858f1c068620217c8069e91ecfbb0fe71cd4123283b495\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5847c846dc-7gbvk" podUID="f2ac6185-3f80-4cfa-971b-e2d87f342f5e" Nov 1 01:16:36.718524 containerd[1824]: time="2025-11-01T01:16:36.718487304Z" level=error msg="StopPodSandbox for \"b2ea873162c10dcd4ca87ba48e1dfede86854106407775931944d0fd7a08c6dc\" failed" error="failed to destroy network for sandbox \"b2ea873162c10dcd4ca87ba48e1dfede86854106407775931944d0fd7a08c6dc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:16:36.718594 kubelet[3089]: E1101 01:16:36.718541 3089 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b2ea873162c10dcd4ca87ba48e1dfede86854106407775931944d0fd7a08c6dc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b2ea873162c10dcd4ca87ba48e1dfede86854106407775931944d0fd7a08c6dc" Nov 1 01:16:36.718594 kubelet[3089]: E1101 01:16:36.718558 3089 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b2ea873162c10dcd4ca87ba48e1dfede86854106407775931944d0fd7a08c6dc"} Nov 1 01:16:36.718594 kubelet[3089]: E1101 01:16:36.718571 3089 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3f82bdd6-ea5e-416a-9d40-0c4d58eb0f92\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b2ea873162c10dcd4ca87ba48e1dfede86854106407775931944d0fd7a08c6dc\\\": plugin 
type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 01:16:36.718594 kubelet[3089]: E1101 01:16:36.718581 3089 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3f82bdd6-ea5e-416a-9d40-0c4d58eb0f92\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b2ea873162c10dcd4ca87ba48e1dfede86854106407775931944d0fd7a08c6dc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-864686dcc7-xql57" podUID="3f82bdd6-ea5e-416a-9d40-0c4d58eb0f92" Nov 1 01:16:36.808171 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-362493d71f18c74ed56acf27a792f037278114e860860da95ea9dd993d0069f9-shm.mount: Deactivated successfully. Nov 1 01:16:36.808455 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d9544d4764784ff980e2d8e4c1336cdfa6aa28840d78454563c73c95cd16329a-shm.mount: Deactivated successfully. Nov 1 01:16:41.289185 kubelet[3089]: I1101 01:16:41.289126 3089 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 1 01:16:41.774077 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2267796141.mount: Deactivated successfully. 
Nov 1 01:16:41.796269 containerd[1824]: time="2025-11-01T01:16:41.796219608Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 01:16:41.796444 containerd[1824]: time="2025-11-01T01:16:41.796396114Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Nov 1 01:16:41.796784 containerd[1824]: time="2025-11-01T01:16:41.796744817Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 01:16:41.797675 containerd[1824]: time="2025-11-01T01:16:41.797634489Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 01:16:41.798063 containerd[1824]: time="2025-11-01T01:16:41.798022654Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 5.096977504s" Nov 1 01:16:41.798063 containerd[1824]: time="2025-11-01T01:16:41.798037914Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Nov 1 01:16:41.801382 containerd[1824]: time="2025-11-01T01:16:41.801360636Z" level=info msg="CreateContainer within sandbox \"bdac6dd4695a9a0e42e8183234a7da70c8ef577410772530fa9169eb86b99a0a\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Nov 1 01:16:41.806784 containerd[1824]: time="2025-11-01T01:16:41.806744161Z" level=info msg="CreateContainer 
within sandbox \"bdac6dd4695a9a0e42e8183234a7da70c8ef577410772530fa9169eb86b99a0a\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"a5475e872ab9674e1839f34ac942d04701261a39abd068fbbb8e819cc3318242\"" Nov 1 01:16:41.807005 containerd[1824]: time="2025-11-01T01:16:41.806987388Z" level=info msg="StartContainer for \"a5475e872ab9674e1839f34ac942d04701261a39abd068fbbb8e819cc3318242\"" Nov 1 01:16:41.831381 systemd[1]: Started cri-containerd-a5475e872ab9674e1839f34ac942d04701261a39abd068fbbb8e819cc3318242.scope - libcontainer container a5475e872ab9674e1839f34ac942d04701261a39abd068fbbb8e819cc3318242. Nov 1 01:16:41.847549 containerd[1824]: time="2025-11-01T01:16:41.847499352Z" level=info msg="StartContainer for \"a5475e872ab9674e1839f34ac942d04701261a39abd068fbbb8e819cc3318242\" returns successfully" Nov 1 01:16:41.914337 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Nov 1 01:16:41.914395 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Nov 1 01:16:41.953182 containerd[1824]: time="2025-11-01T01:16:41.953152997Z" level=info msg="StopPodSandbox for \"b2ea873162c10dcd4ca87ba48e1dfede86854106407775931944d0fd7a08c6dc\"" Nov 1 01:16:41.992578 containerd[1824]: 2025-11-01 01:16:41.976 [INFO][4703] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b2ea873162c10dcd4ca87ba48e1dfede86854106407775931944d0fd7a08c6dc" Nov 1 01:16:41.992578 containerd[1824]: 2025-11-01 01:16:41.976 [INFO][4703] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b2ea873162c10dcd4ca87ba48e1dfede86854106407775931944d0fd7a08c6dc" iface="eth0" netns="/var/run/netns/cni-864fed90-0835-964b-18c7-8c4aaebc3119" Nov 1 01:16:41.992578 containerd[1824]: 2025-11-01 01:16:41.976 [INFO][4703] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="b2ea873162c10dcd4ca87ba48e1dfede86854106407775931944d0fd7a08c6dc" iface="eth0" netns="/var/run/netns/cni-864fed90-0835-964b-18c7-8c4aaebc3119" Nov 1 01:16:41.992578 containerd[1824]: 2025-11-01 01:16:41.976 [INFO][4703] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="b2ea873162c10dcd4ca87ba48e1dfede86854106407775931944d0fd7a08c6dc" iface="eth0" netns="/var/run/netns/cni-864fed90-0835-964b-18c7-8c4aaebc3119" Nov 1 01:16:41.992578 containerd[1824]: 2025-11-01 01:16:41.976 [INFO][4703] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b2ea873162c10dcd4ca87ba48e1dfede86854106407775931944d0fd7a08c6dc" Nov 1 01:16:41.992578 containerd[1824]: 2025-11-01 01:16:41.976 [INFO][4703] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b2ea873162c10dcd4ca87ba48e1dfede86854106407775931944d0fd7a08c6dc" Nov 1 01:16:41.992578 containerd[1824]: 2025-11-01 01:16:41.986 [INFO][4733] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="b2ea873162c10dcd4ca87ba48e1dfede86854106407775931944d0fd7a08c6dc" HandleID="k8s-pod-network.b2ea873162c10dcd4ca87ba48e1dfede86854106407775931944d0fd7a08c6dc" Workload="ci--4081.3.6--n--61efafd0e9-k8s-whisker--864686dcc7--xql57-eth0" Nov 1 01:16:41.992578 containerd[1824]: 2025-11-01 01:16:41.986 [INFO][4733] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:16:41.992578 containerd[1824]: 2025-11-01 01:16:41.986 [INFO][4733] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 01:16:41.992578 containerd[1824]: 2025-11-01 01:16:41.989 [WARNING][4733] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b2ea873162c10dcd4ca87ba48e1dfede86854106407775931944d0fd7a08c6dc" HandleID="k8s-pod-network.b2ea873162c10dcd4ca87ba48e1dfede86854106407775931944d0fd7a08c6dc" Workload="ci--4081.3.6--n--61efafd0e9-k8s-whisker--864686dcc7--xql57-eth0" Nov 1 01:16:41.992578 containerd[1824]: 2025-11-01 01:16:41.989 [INFO][4733] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="b2ea873162c10dcd4ca87ba48e1dfede86854106407775931944d0fd7a08c6dc" HandleID="k8s-pod-network.b2ea873162c10dcd4ca87ba48e1dfede86854106407775931944d0fd7a08c6dc" Workload="ci--4081.3.6--n--61efafd0e9-k8s-whisker--864686dcc7--xql57-eth0" Nov 1 01:16:41.992578 containerd[1824]: 2025-11-01 01:16:41.990 [INFO][4733] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:16:41.992578 containerd[1824]: 2025-11-01 01:16:41.991 [INFO][4703] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b2ea873162c10dcd4ca87ba48e1dfede86854106407775931944d0fd7a08c6dc" Nov 1 01:16:41.992854 containerd[1824]: time="2025-11-01T01:16:41.992626990Z" level=info msg="TearDown network for sandbox \"b2ea873162c10dcd4ca87ba48e1dfede86854106407775931944d0fd7a08c6dc\" successfully" Nov 1 01:16:41.992854 containerd[1824]: time="2025-11-01T01:16:41.992642437Z" level=info msg="StopPodSandbox for \"b2ea873162c10dcd4ca87ba48e1dfede86854106407775931944d0fd7a08c6dc\" returns successfully" Nov 1 01:16:42.029817 kubelet[3089]: I1101 01:16:42.029597 3089 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzcm8\" (UniqueName: \"kubernetes.io/projected/3f82bdd6-ea5e-416a-9d40-0c4d58eb0f92-kube-api-access-lzcm8\") pod \"3f82bdd6-ea5e-416a-9d40-0c4d58eb0f92\" (UID: \"3f82bdd6-ea5e-416a-9d40-0c4d58eb0f92\") " Nov 1 01:16:42.029817 kubelet[3089]: I1101 01:16:42.029696 3089 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: 
\"kubernetes.io/secret/3f82bdd6-ea5e-416a-9d40-0c4d58eb0f92-whisker-backend-key-pair\") pod \"3f82bdd6-ea5e-416a-9d40-0c4d58eb0f92\" (UID: \"3f82bdd6-ea5e-416a-9d40-0c4d58eb0f92\") " Nov 1 01:16:42.029817 kubelet[3089]: I1101 01:16:42.029759 3089 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3f82bdd6-ea5e-416a-9d40-0c4d58eb0f92-whisker-ca-bundle\") pod \"3f82bdd6-ea5e-416a-9d40-0c4d58eb0f92\" (UID: \"3f82bdd6-ea5e-416a-9d40-0c4d58eb0f92\") " Nov 1 01:16:42.030800 kubelet[3089]: I1101 01:16:42.030730 3089 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3f82bdd6-ea5e-416a-9d40-0c4d58eb0f92-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "3f82bdd6-ea5e-416a-9d40-0c4d58eb0f92" (UID: "3f82bdd6-ea5e-416a-9d40-0c4d58eb0f92"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 1 01:16:42.035628 kubelet[3089]: I1101 01:16:42.035557 3089 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f82bdd6-ea5e-416a-9d40-0c4d58eb0f92-kube-api-access-lzcm8" (OuterVolumeSpecName: "kube-api-access-lzcm8") pod "3f82bdd6-ea5e-416a-9d40-0c4d58eb0f92" (UID: "3f82bdd6-ea5e-416a-9d40-0c4d58eb0f92"). InnerVolumeSpecName "kube-api-access-lzcm8". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 1 01:16:42.035915 kubelet[3089]: I1101 01:16:42.035810 3089 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f82bdd6-ea5e-416a-9d40-0c4d58eb0f92-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "3f82bdd6-ea5e-416a-9d40-0c4d58eb0f92" (UID: "3f82bdd6-ea5e-416a-9d40-0c4d58eb0f92"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 1 01:16:42.130794 kubelet[3089]: I1101 01:16:42.130756 3089 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/3f82bdd6-ea5e-416a-9d40-0c4d58eb0f92-whisker-backend-key-pair\") on node \"ci-4081.3.6-n-61efafd0e9\" DevicePath \"\"" Nov 1 01:16:42.130794 kubelet[3089]: I1101 01:16:42.130791 3089 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3f82bdd6-ea5e-416a-9d40-0c4d58eb0f92-whisker-ca-bundle\") on node \"ci-4081.3.6-n-61efafd0e9\" DevicePath \"\"" Nov 1 01:16:42.130955 kubelet[3089]: I1101 01:16:42.130808 3089 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-lzcm8\" (UniqueName: \"kubernetes.io/projected/3f82bdd6-ea5e-416a-9d40-0c4d58eb0f92-kube-api-access-lzcm8\") on node \"ci-4081.3.6-n-61efafd0e9\" DevicePath \"\"" Nov 1 01:16:42.726159 systemd[1]: Removed slice kubepods-besteffort-pod3f82bdd6_ea5e_416a_9d40_0c4d58eb0f92.slice - libcontainer container kubepods-besteffort-pod3f82bdd6_ea5e_416a_9d40_0c4d58eb0f92.slice. Nov 1 01:16:42.733629 kubelet[3089]: I1101 01:16:42.733594 3089 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-76rt6" podStartSLOduration=2.268181801 podStartE2EDuration="16.733583435s" podCreationTimestamp="2025-11-01 01:16:26 +0000 UTC" firstStartedPulling="2025-11-01 01:16:27.332950433 +0000 UTC m=+17.777585567" lastFinishedPulling="2025-11-01 01:16:41.798352079 +0000 UTC m=+32.242987201" observedRunningTime="2025-11-01 01:16:42.733364491 +0000 UTC m=+33.177999616" watchObservedRunningTime="2025-11-01 01:16:42.733583435 +0000 UTC m=+33.178218556" Nov 1 01:16:42.757766 systemd[1]: Created slice kubepods-besteffort-poda870b1c4_6e9d_4a96_936e_df1c8a98c970.slice - libcontainer container kubepods-besteffort-poda870b1c4_6e9d_4a96_936e_df1c8a98c970.slice. 
Nov 1 01:16:42.776388 systemd[1]: run-netns-cni\x2d864fed90\x2d0835\x2d964b\x2d18c7\x2d8c4aaebc3119.mount: Deactivated successfully. Nov 1 01:16:42.776480 systemd[1]: var-lib-kubelet-pods-3f82bdd6\x2dea5e\x2d416a\x2d9d40\x2d0c4d58eb0f92-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dlzcm8.mount: Deactivated successfully. Nov 1 01:16:42.776555 systemd[1]: var-lib-kubelet-pods-3f82bdd6\x2dea5e\x2d416a\x2d9d40\x2d0c4d58eb0f92-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Nov 1 01:16:42.835577 kubelet[3089]: I1101 01:16:42.835495 3089 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a870b1c4-6e9d-4a96-936e-df1c8a98c970-whisker-ca-bundle\") pod \"whisker-f8c77c549-chp4w\" (UID: \"a870b1c4-6e9d-4a96-936e-df1c8a98c970\") " pod="calico-system/whisker-f8c77c549-chp4w" Nov 1 01:16:42.835862 kubelet[3089]: I1101 01:16:42.835685 3089 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zc5c\" (UniqueName: \"kubernetes.io/projected/a870b1c4-6e9d-4a96-936e-df1c8a98c970-kube-api-access-9zc5c\") pod \"whisker-f8c77c549-chp4w\" (UID: \"a870b1c4-6e9d-4a96-936e-df1c8a98c970\") " pod="calico-system/whisker-f8c77c549-chp4w" Nov 1 01:16:42.835862 kubelet[3089]: I1101 01:16:42.835794 3089 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/a870b1c4-6e9d-4a96-936e-df1c8a98c970-whisker-backend-key-pair\") pod \"whisker-f8c77c549-chp4w\" (UID: \"a870b1c4-6e9d-4a96-936e-df1c8a98c970\") " pod="calico-system/whisker-f8c77c549-chp4w" Nov 1 01:16:43.060704 containerd[1824]: time="2025-11-01T01:16:43.060490920Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:whisker-f8c77c549-chp4w,Uid:a870b1c4-6e9d-4a96-936e-df1c8a98c970,Namespace:calico-system,Attempt:0,}" Nov 1 01:16:43.134270 systemd-networkd[1613]: cali356d24044f8: Link UP Nov 1 01:16:43.134398 systemd-networkd[1613]: cali356d24044f8: Gained carrier Nov 1 01:16:43.142436 containerd[1824]: 2025-11-01 01:16:43.077 [INFO][4765] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 1 01:16:43.142436 containerd[1824]: 2025-11-01 01:16:43.084 [INFO][4765] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--61efafd0e9-k8s-whisker--f8c77c549--chp4w-eth0 whisker-f8c77c549- calico-system a870b1c4-6e9d-4a96-936e-df1c8a98c970 864 0 2025-11-01 01:16:42 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:f8c77c549 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081.3.6-n-61efafd0e9 whisker-f8c77c549-chp4w eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali356d24044f8 [] [] }} ContainerID="53a06a20074270efecf229646ffd9c9680aac1456c549a127af09b8d8d59c1dc" Namespace="calico-system" Pod="whisker-f8c77c549-chp4w" WorkloadEndpoint="ci--4081.3.6--n--61efafd0e9-k8s-whisker--f8c77c549--chp4w-" Nov 1 01:16:43.142436 containerd[1824]: 2025-11-01 01:16:43.084 [INFO][4765] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="53a06a20074270efecf229646ffd9c9680aac1456c549a127af09b8d8d59c1dc" Namespace="calico-system" Pod="whisker-f8c77c549-chp4w" WorkloadEndpoint="ci--4081.3.6--n--61efafd0e9-k8s-whisker--f8c77c549--chp4w-eth0" Nov 1 01:16:43.142436 containerd[1824]: 2025-11-01 01:16:43.098 [INFO][4784] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="53a06a20074270efecf229646ffd9c9680aac1456c549a127af09b8d8d59c1dc" 
HandleID="k8s-pod-network.53a06a20074270efecf229646ffd9c9680aac1456c549a127af09b8d8d59c1dc" Workload="ci--4081.3.6--n--61efafd0e9-k8s-whisker--f8c77c549--chp4w-eth0" Nov 1 01:16:43.142436 containerd[1824]: 2025-11-01 01:16:43.098 [INFO][4784] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="53a06a20074270efecf229646ffd9c9680aac1456c549a127af09b8d8d59c1dc" HandleID="k8s-pod-network.53a06a20074270efecf229646ffd9c9680aac1456c549a127af09b8d8d59c1dc" Workload="ci--4081.3.6--n--61efafd0e9-k8s-whisker--f8c77c549--chp4w-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f670), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-61efafd0e9", "pod":"whisker-f8c77c549-chp4w", "timestamp":"2025-11-01 01:16:43.098796242 +0000 UTC"}, Hostname:"ci-4081.3.6-n-61efafd0e9", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 01:16:43.142436 containerd[1824]: 2025-11-01 01:16:43.098 [INFO][4784] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:16:43.142436 containerd[1824]: 2025-11-01 01:16:43.098 [INFO][4784] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 01:16:43.142436 containerd[1824]: 2025-11-01 01:16:43.098 [INFO][4784] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-61efafd0e9' Nov 1 01:16:43.142436 containerd[1824]: 2025-11-01 01:16:43.104 [INFO][4784] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.53a06a20074270efecf229646ffd9c9680aac1456c549a127af09b8d8d59c1dc" host="ci-4081.3.6-n-61efafd0e9" Nov 1 01:16:43.142436 containerd[1824]: 2025-11-01 01:16:43.107 [INFO][4784] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-61efafd0e9" Nov 1 01:16:43.142436 containerd[1824]: 2025-11-01 01:16:43.113 [INFO][4784] ipam/ipam.go 511: Trying affinity for 192.168.7.192/26 host="ci-4081.3.6-n-61efafd0e9" Nov 1 01:16:43.142436 containerd[1824]: 2025-11-01 01:16:43.115 [INFO][4784] ipam/ipam.go 158: Attempting to load block cidr=192.168.7.192/26 host="ci-4081.3.6-n-61efafd0e9" Nov 1 01:16:43.142436 containerd[1824]: 2025-11-01 01:16:43.117 [INFO][4784] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.7.192/26 host="ci-4081.3.6-n-61efafd0e9" Nov 1 01:16:43.142436 containerd[1824]: 2025-11-01 01:16:43.117 [INFO][4784] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.7.192/26 handle="k8s-pod-network.53a06a20074270efecf229646ffd9c9680aac1456c549a127af09b8d8d59c1dc" host="ci-4081.3.6-n-61efafd0e9" Nov 1 01:16:43.142436 containerd[1824]: 2025-11-01 01:16:43.118 [INFO][4784] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.53a06a20074270efecf229646ffd9c9680aac1456c549a127af09b8d8d59c1dc Nov 1 01:16:43.142436 containerd[1824]: 2025-11-01 01:16:43.121 [INFO][4784] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.7.192/26 handle="k8s-pod-network.53a06a20074270efecf229646ffd9c9680aac1456c549a127af09b8d8d59c1dc" host="ci-4081.3.6-n-61efafd0e9" Nov 1 01:16:43.142436 containerd[1824]: 2025-11-01 01:16:43.127 [INFO][4784] ipam/ipam.go 1262: Successfully claimed 
IPs: [192.168.7.193/26] block=192.168.7.192/26 handle="k8s-pod-network.53a06a20074270efecf229646ffd9c9680aac1456c549a127af09b8d8d59c1dc" host="ci-4081.3.6-n-61efafd0e9" Nov 1 01:16:43.142436 containerd[1824]: 2025-11-01 01:16:43.127 [INFO][4784] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.7.193/26] handle="k8s-pod-network.53a06a20074270efecf229646ffd9c9680aac1456c549a127af09b8d8d59c1dc" host="ci-4081.3.6-n-61efafd0e9" Nov 1 01:16:43.142436 containerd[1824]: 2025-11-01 01:16:43.127 [INFO][4784] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:16:43.142436 containerd[1824]: 2025-11-01 01:16:43.127 [INFO][4784] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.7.193/26] IPv6=[] ContainerID="53a06a20074270efecf229646ffd9c9680aac1456c549a127af09b8d8d59c1dc" HandleID="k8s-pod-network.53a06a20074270efecf229646ffd9c9680aac1456c549a127af09b8d8d59c1dc" Workload="ci--4081.3.6--n--61efafd0e9-k8s-whisker--f8c77c549--chp4w-eth0" Nov 1 01:16:43.143159 containerd[1824]: 2025-11-01 01:16:43.129 [INFO][4765] cni-plugin/k8s.go 418: Populated endpoint ContainerID="53a06a20074270efecf229646ffd9c9680aac1456c549a127af09b8d8d59c1dc" Namespace="calico-system" Pod="whisker-f8c77c549-chp4w" WorkloadEndpoint="ci--4081.3.6--n--61efafd0e9-k8s-whisker--f8c77c549--chp4w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--61efafd0e9-k8s-whisker--f8c77c549--chp4w-eth0", GenerateName:"whisker-f8c77c549-", Namespace:"calico-system", SelfLink:"", UID:"a870b1c4-6e9d-4a96-936e-df1c8a98c970", ResourceVersion:"864", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 16, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"f8c77c549", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-61efafd0e9", ContainerID:"", Pod:"whisker-f8c77c549-chp4w", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.7.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali356d24044f8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:16:43.143159 containerd[1824]: 2025-11-01 01:16:43.129 [INFO][4765] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.7.193/32] ContainerID="53a06a20074270efecf229646ffd9c9680aac1456c549a127af09b8d8d59c1dc" Namespace="calico-system" Pod="whisker-f8c77c549-chp4w" WorkloadEndpoint="ci--4081.3.6--n--61efafd0e9-k8s-whisker--f8c77c549--chp4w-eth0" Nov 1 01:16:43.143159 containerd[1824]: 2025-11-01 01:16:43.129 [INFO][4765] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali356d24044f8 ContainerID="53a06a20074270efecf229646ffd9c9680aac1456c549a127af09b8d8d59c1dc" Namespace="calico-system" Pod="whisker-f8c77c549-chp4w" WorkloadEndpoint="ci--4081.3.6--n--61efafd0e9-k8s-whisker--f8c77c549--chp4w-eth0" Nov 1 01:16:43.143159 containerd[1824]: 2025-11-01 01:16:43.134 [INFO][4765] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="53a06a20074270efecf229646ffd9c9680aac1456c549a127af09b8d8d59c1dc" Namespace="calico-system" Pod="whisker-f8c77c549-chp4w" WorkloadEndpoint="ci--4081.3.6--n--61efafd0e9-k8s-whisker--f8c77c549--chp4w-eth0" Nov 1 01:16:43.143159 containerd[1824]: 2025-11-01 01:16:43.134 [INFO][4765] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to 
endpoint ContainerID="53a06a20074270efecf229646ffd9c9680aac1456c549a127af09b8d8d59c1dc" Namespace="calico-system" Pod="whisker-f8c77c549-chp4w" WorkloadEndpoint="ci--4081.3.6--n--61efafd0e9-k8s-whisker--f8c77c549--chp4w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--61efafd0e9-k8s-whisker--f8c77c549--chp4w-eth0", GenerateName:"whisker-f8c77c549-", Namespace:"calico-system", SelfLink:"", UID:"a870b1c4-6e9d-4a96-936e-df1c8a98c970", ResourceVersion:"864", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 16, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"f8c77c549", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-61efafd0e9", ContainerID:"53a06a20074270efecf229646ffd9c9680aac1456c549a127af09b8d8d59c1dc", Pod:"whisker-f8c77c549-chp4w", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.7.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali356d24044f8", MAC:"de:4a:0e:e2:ad:68", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:16:43.143159 containerd[1824]: 2025-11-01 01:16:43.141 [INFO][4765] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="53a06a20074270efecf229646ffd9c9680aac1456c549a127af09b8d8d59c1dc" Namespace="calico-system" Pod="whisker-f8c77c549-chp4w" 
WorkloadEndpoint="ci--4081.3.6--n--61efafd0e9-k8s-whisker--f8c77c549--chp4w-eth0" Nov 1 01:16:43.151943 containerd[1824]: time="2025-11-01T01:16:43.151901988Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 01:16:43.151943 containerd[1824]: time="2025-11-01T01:16:43.151932050Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 01:16:43.152042 containerd[1824]: time="2025-11-01T01:16:43.151944767Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 01:16:43.152042 containerd[1824]: time="2025-11-01T01:16:43.151998412Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 01:16:43.175765 systemd[1]: Started cri-containerd-53a06a20074270efecf229646ffd9c9680aac1456c549a127af09b8d8d59c1dc.scope - libcontainer container 53a06a20074270efecf229646ffd9c9680aac1456c549a127af09b8d8d59c1dc. 
Nov 1 01:16:43.261536 containerd[1824]: time="2025-11-01T01:16:43.261502049Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-f8c77c549-chp4w,Uid:a870b1c4-6e9d-4a96-936e-df1c8a98c970,Namespace:calico-system,Attempt:0,} returns sandbox id \"53a06a20074270efecf229646ffd9c9680aac1456c549a127af09b8d8d59c1dc\"" Nov 1 01:16:43.262468 containerd[1824]: time="2025-11-01T01:16:43.262452398Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 1 01:16:43.265240 kernel: bpftool[4989]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Nov 1 01:16:43.420159 systemd-networkd[1613]: vxlan.calico: Link UP Nov 1 01:16:43.420163 systemd-networkd[1613]: vxlan.calico: Gained carrier Nov 1 01:16:43.599787 kubelet[3089]: I1101 01:16:43.599768 3089 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f82bdd6-ea5e-416a-9d40-0c4d58eb0f92" path="/var/lib/kubelet/pods/3f82bdd6-ea5e-416a-9d40-0c4d58eb0f92/volumes" Nov 1 01:16:43.612311 containerd[1824]: time="2025-11-01T01:16:43.612285390Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:16:43.612622 containerd[1824]: time="2025-11-01T01:16:43.612599002Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 1 01:16:43.612674 containerd[1824]: time="2025-11-01T01:16:43.612650995Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 1 01:16:43.612754 kubelet[3089]: E1101 01:16:43.612732 3089 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 01:16:43.612790 kubelet[3089]: E1101 01:16:43.612766 3089 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 01:16:43.612866 kubelet[3089]: E1101 01:16:43.612848 3089 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:f106df30827649c0a1b41319c8c22502,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9zc5c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessage
Policy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-f8c77c549-chp4w_calico-system(a870b1c4-6e9d-4a96-936e-df1c8a98c970): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 1 01:16:43.614344 containerd[1824]: time="2025-11-01T01:16:43.614332732Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 1 01:16:43.723733 kubelet[3089]: I1101 01:16:43.723673 3089 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 1 01:16:43.978166 containerd[1824]: time="2025-11-01T01:16:43.977882796Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:16:43.978938 containerd[1824]: time="2025-11-01T01:16:43.978852529Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 1 01:16:43.978938 containerd[1824]: time="2025-11-01T01:16:43.978916789Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 1 01:16:43.979134 kubelet[3089]: E1101 01:16:43.979079 3089 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 
01:16:43.979134 kubelet[3089]: E1101 01:16:43.979113 3089 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 01:16:43.979420 kubelet[3089]: E1101 01:16:43.979182 3089 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9zc5c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*fal
se,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-f8c77c549-chp4w_calico-system(a870b1c4-6e9d-4a96-936e-df1c8a98c970): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 1 01:16:43.980399 kubelet[3089]: E1101 01:16:43.980357 3089 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-f8c77c549-chp4w" podUID="a870b1c4-6e9d-4a96-936e-df1c8a98c970" Nov 1 01:16:44.593566 systemd-networkd[1613]: cali356d24044f8: Gained IPv6LL Nov 1 01:16:44.729933 kubelet[3089]: E1101 01:16:44.729800 3089 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed 
to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-f8c77c549-chp4w" podUID="a870b1c4-6e9d-4a96-936e-df1c8a98c970" Nov 1 01:16:45.233557 systemd-networkd[1613]: vxlan.calico: Gained IPv6LL Nov 1 01:16:47.599645 containerd[1824]: time="2025-11-01T01:16:47.599576486Z" level=info msg="StopPodSandbox for \"7de90ec73a8f1556262277f2e9f8cfe479e58739fede3ad0906665aa77ef530f\"" Nov 1 01:16:47.642745 containerd[1824]: 2025-11-01 01:16:47.623 [INFO][5122] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7de90ec73a8f1556262277f2e9f8cfe479e58739fede3ad0906665aa77ef530f" Nov 1 01:16:47.642745 containerd[1824]: 2025-11-01 01:16:47.624 [INFO][5122] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="7de90ec73a8f1556262277f2e9f8cfe479e58739fede3ad0906665aa77ef530f" iface="eth0" netns="/var/run/netns/cni-0cdacccc-5eac-2c9d-2dfe-2f64b8b91991" Nov 1 01:16:47.642745 containerd[1824]: 2025-11-01 01:16:47.624 [INFO][5122] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="7de90ec73a8f1556262277f2e9f8cfe479e58739fede3ad0906665aa77ef530f" iface="eth0" netns="/var/run/netns/cni-0cdacccc-5eac-2c9d-2dfe-2f64b8b91991" Nov 1 01:16:47.642745 containerd[1824]: 2025-11-01 01:16:47.624 [INFO][5122] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="7de90ec73a8f1556262277f2e9f8cfe479e58739fede3ad0906665aa77ef530f" iface="eth0" netns="/var/run/netns/cni-0cdacccc-5eac-2c9d-2dfe-2f64b8b91991" Nov 1 01:16:47.642745 containerd[1824]: 2025-11-01 01:16:47.624 [INFO][5122] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7de90ec73a8f1556262277f2e9f8cfe479e58739fede3ad0906665aa77ef530f" Nov 1 01:16:47.642745 containerd[1824]: 2025-11-01 01:16:47.624 [INFO][5122] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7de90ec73a8f1556262277f2e9f8cfe479e58739fede3ad0906665aa77ef530f" Nov 1 01:16:47.642745 containerd[1824]: 2025-11-01 01:16:47.635 [INFO][5137] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="7de90ec73a8f1556262277f2e9f8cfe479e58739fede3ad0906665aa77ef530f" HandleID="k8s-pod-network.7de90ec73a8f1556262277f2e9f8cfe479e58739fede3ad0906665aa77ef530f" Workload="ci--4081.3.6--n--61efafd0e9-k8s-csi--node--driver--kckfw-eth0" Nov 1 01:16:47.642745 containerd[1824]: 2025-11-01 01:16:47.635 [INFO][5137] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:16:47.642745 containerd[1824]: 2025-11-01 01:16:47.635 [INFO][5137] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 01:16:47.642745 containerd[1824]: 2025-11-01 01:16:47.640 [WARNING][5137] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7de90ec73a8f1556262277f2e9f8cfe479e58739fede3ad0906665aa77ef530f" HandleID="k8s-pod-network.7de90ec73a8f1556262277f2e9f8cfe479e58739fede3ad0906665aa77ef530f" Workload="ci--4081.3.6--n--61efafd0e9-k8s-csi--node--driver--kckfw-eth0" Nov 1 01:16:47.642745 containerd[1824]: 2025-11-01 01:16:47.640 [INFO][5137] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="7de90ec73a8f1556262277f2e9f8cfe479e58739fede3ad0906665aa77ef530f" HandleID="k8s-pod-network.7de90ec73a8f1556262277f2e9f8cfe479e58739fede3ad0906665aa77ef530f" Workload="ci--4081.3.6--n--61efafd0e9-k8s-csi--node--driver--kckfw-eth0" Nov 1 01:16:47.642745 containerd[1824]: 2025-11-01 01:16:47.641 [INFO][5137] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:16:47.642745 containerd[1824]: 2025-11-01 01:16:47.641 [INFO][5122] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7de90ec73a8f1556262277f2e9f8cfe479e58739fede3ad0906665aa77ef530f" Nov 1 01:16:47.643072 containerd[1824]: time="2025-11-01T01:16:47.642837328Z" level=info msg="TearDown network for sandbox \"7de90ec73a8f1556262277f2e9f8cfe479e58739fede3ad0906665aa77ef530f\" successfully" Nov 1 01:16:47.643072 containerd[1824]: time="2025-11-01T01:16:47.642854938Z" level=info msg="StopPodSandbox for \"7de90ec73a8f1556262277f2e9f8cfe479e58739fede3ad0906665aa77ef530f\" returns successfully" Nov 1 01:16:47.643302 containerd[1824]: time="2025-11-01T01:16:47.643288768Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kckfw,Uid:fe329cee-9aa5-425f-b021-f1def80c02c8,Namespace:calico-system,Attempt:1,}" Nov 1 01:16:47.644597 systemd[1]: run-netns-cni\x2d0cdacccc\x2d5eac\x2d2c9d\x2d2dfe\x2d2f64b8b91991.mount: Deactivated successfully. 
Nov 1 01:16:47.712008 systemd-networkd[1613]: caliabccb52fa6b: Link UP Nov 1 01:16:47.712115 systemd-networkd[1613]: caliabccb52fa6b: Gained carrier Nov 1 01:16:47.718555 containerd[1824]: 2025-11-01 01:16:47.679 [INFO][5152] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--61efafd0e9-k8s-csi--node--driver--kckfw-eth0 csi-node-driver- calico-system fe329cee-9aa5-425f-b021-f1def80c02c8 895 0 2025-11-01 01:16:27 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081.3.6-n-61efafd0e9 csi-node-driver-kckfw eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] caliabccb52fa6b [] [] }} ContainerID="b77c4219e677b692033758da3665eb374bd8a3a03ef4f9331b048a518bd80146" Namespace="calico-system" Pod="csi-node-driver-kckfw" WorkloadEndpoint="ci--4081.3.6--n--61efafd0e9-k8s-csi--node--driver--kckfw-" Nov 1 01:16:47.718555 containerd[1824]: 2025-11-01 01:16:47.679 [INFO][5152] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b77c4219e677b692033758da3665eb374bd8a3a03ef4f9331b048a518bd80146" Namespace="calico-system" Pod="csi-node-driver-kckfw" WorkloadEndpoint="ci--4081.3.6--n--61efafd0e9-k8s-csi--node--driver--kckfw-eth0" Nov 1 01:16:47.718555 containerd[1824]: 2025-11-01 01:16:47.692 [INFO][5177] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b77c4219e677b692033758da3665eb374bd8a3a03ef4f9331b048a518bd80146" HandleID="k8s-pod-network.b77c4219e677b692033758da3665eb374bd8a3a03ef4f9331b048a518bd80146" Workload="ci--4081.3.6--n--61efafd0e9-k8s-csi--node--driver--kckfw-eth0" Nov 1 01:16:47.718555 containerd[1824]: 2025-11-01 01:16:47.692 [INFO][5177] ipam/ipam_plugin.go 275: 
Auto assigning IP ContainerID="b77c4219e677b692033758da3665eb374bd8a3a03ef4f9331b048a518bd80146" HandleID="k8s-pod-network.b77c4219e677b692033758da3665eb374bd8a3a03ef4f9331b048a518bd80146" Workload="ci--4081.3.6--n--61efafd0e9-k8s-csi--node--driver--kckfw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004ec80), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-61efafd0e9", "pod":"csi-node-driver-kckfw", "timestamp":"2025-11-01 01:16:47.692479674 +0000 UTC"}, Hostname:"ci-4081.3.6-n-61efafd0e9", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 01:16:47.718555 containerd[1824]: 2025-11-01 01:16:47.692 [INFO][5177] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:16:47.718555 containerd[1824]: 2025-11-01 01:16:47.692 [INFO][5177] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 01:16:47.718555 containerd[1824]: 2025-11-01 01:16:47.692 [INFO][5177] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-61efafd0e9' Nov 1 01:16:47.718555 containerd[1824]: 2025-11-01 01:16:47.696 [INFO][5177] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b77c4219e677b692033758da3665eb374bd8a3a03ef4f9331b048a518bd80146" host="ci-4081.3.6-n-61efafd0e9" Nov 1 01:16:47.718555 containerd[1824]: 2025-11-01 01:16:47.699 [INFO][5177] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-61efafd0e9" Nov 1 01:16:47.718555 containerd[1824]: 2025-11-01 01:16:47.701 [INFO][5177] ipam/ipam.go 511: Trying affinity for 192.168.7.192/26 host="ci-4081.3.6-n-61efafd0e9" Nov 1 01:16:47.718555 containerd[1824]: 2025-11-01 01:16:47.702 [INFO][5177] ipam/ipam.go 158: Attempting to load block cidr=192.168.7.192/26 host="ci-4081.3.6-n-61efafd0e9" Nov 1 01:16:47.718555 containerd[1824]: 2025-11-01 01:16:47.704 [INFO][5177] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.7.192/26 host="ci-4081.3.6-n-61efafd0e9" Nov 1 01:16:47.718555 containerd[1824]: 2025-11-01 01:16:47.704 [INFO][5177] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.7.192/26 handle="k8s-pod-network.b77c4219e677b692033758da3665eb374bd8a3a03ef4f9331b048a518bd80146" host="ci-4081.3.6-n-61efafd0e9" Nov 1 01:16:47.718555 containerd[1824]: 2025-11-01 01:16:47.704 [INFO][5177] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.b77c4219e677b692033758da3665eb374bd8a3a03ef4f9331b048a518bd80146 Nov 1 01:16:47.718555 containerd[1824]: 2025-11-01 01:16:47.707 [INFO][5177] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.7.192/26 handle="k8s-pod-network.b77c4219e677b692033758da3665eb374bd8a3a03ef4f9331b048a518bd80146" host="ci-4081.3.6-n-61efafd0e9" Nov 1 01:16:47.718555 containerd[1824]: 2025-11-01 01:16:47.710 [INFO][5177] ipam/ipam.go 1262: Successfully claimed 
IPs: [192.168.7.194/26] block=192.168.7.192/26 handle="k8s-pod-network.b77c4219e677b692033758da3665eb374bd8a3a03ef4f9331b048a518bd80146" host="ci-4081.3.6-n-61efafd0e9" Nov 1 01:16:47.718555 containerd[1824]: 2025-11-01 01:16:47.710 [INFO][5177] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.7.194/26] handle="k8s-pod-network.b77c4219e677b692033758da3665eb374bd8a3a03ef4f9331b048a518bd80146" host="ci-4081.3.6-n-61efafd0e9" Nov 1 01:16:47.718555 containerd[1824]: 2025-11-01 01:16:47.710 [INFO][5177] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:16:47.718555 containerd[1824]: 2025-11-01 01:16:47.710 [INFO][5177] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.7.194/26] IPv6=[] ContainerID="b77c4219e677b692033758da3665eb374bd8a3a03ef4f9331b048a518bd80146" HandleID="k8s-pod-network.b77c4219e677b692033758da3665eb374bd8a3a03ef4f9331b048a518bd80146" Workload="ci--4081.3.6--n--61efafd0e9-k8s-csi--node--driver--kckfw-eth0" Nov 1 01:16:47.718949 containerd[1824]: 2025-11-01 01:16:47.711 [INFO][5152] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b77c4219e677b692033758da3665eb374bd8a3a03ef4f9331b048a518bd80146" Namespace="calico-system" Pod="csi-node-driver-kckfw" WorkloadEndpoint="ci--4081.3.6--n--61efafd0e9-k8s-csi--node--driver--kckfw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--61efafd0e9-k8s-csi--node--driver--kckfw-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"fe329cee-9aa5-425f-b021-f1def80c02c8", ResourceVersion:"895", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 16, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", 
"pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-61efafd0e9", ContainerID:"", Pod:"csi-node-driver-kckfw", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.7.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliabccb52fa6b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:16:47.718949 containerd[1824]: 2025-11-01 01:16:47.711 [INFO][5152] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.7.194/32] ContainerID="b77c4219e677b692033758da3665eb374bd8a3a03ef4f9331b048a518bd80146" Namespace="calico-system" Pod="csi-node-driver-kckfw" WorkloadEndpoint="ci--4081.3.6--n--61efafd0e9-k8s-csi--node--driver--kckfw-eth0" Nov 1 01:16:47.718949 containerd[1824]: 2025-11-01 01:16:47.711 [INFO][5152] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliabccb52fa6b ContainerID="b77c4219e677b692033758da3665eb374bd8a3a03ef4f9331b048a518bd80146" Namespace="calico-system" Pod="csi-node-driver-kckfw" WorkloadEndpoint="ci--4081.3.6--n--61efafd0e9-k8s-csi--node--driver--kckfw-eth0" Nov 1 01:16:47.718949 containerd[1824]: 2025-11-01 01:16:47.712 [INFO][5152] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b77c4219e677b692033758da3665eb374bd8a3a03ef4f9331b048a518bd80146" Namespace="calico-system" Pod="csi-node-driver-kckfw" WorkloadEndpoint="ci--4081.3.6--n--61efafd0e9-k8s-csi--node--driver--kckfw-eth0" Nov 1 01:16:47.718949 containerd[1824]: 2025-11-01 01:16:47.712 
[INFO][5152] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b77c4219e677b692033758da3665eb374bd8a3a03ef4f9331b048a518bd80146" Namespace="calico-system" Pod="csi-node-driver-kckfw" WorkloadEndpoint="ci--4081.3.6--n--61efafd0e9-k8s-csi--node--driver--kckfw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--61efafd0e9-k8s-csi--node--driver--kckfw-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"fe329cee-9aa5-425f-b021-f1def80c02c8", ResourceVersion:"895", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 16, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-61efafd0e9", ContainerID:"b77c4219e677b692033758da3665eb374bd8a3a03ef4f9331b048a518bd80146", Pod:"csi-node-driver-kckfw", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.7.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliabccb52fa6b", MAC:"0e:06:9e:97:d4:3c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:16:47.718949 containerd[1824]: 2025-11-01 01:16:47.717 [INFO][5152] 
cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b77c4219e677b692033758da3665eb374bd8a3a03ef4f9331b048a518bd80146" Namespace="calico-system" Pod="csi-node-driver-kckfw" WorkloadEndpoint="ci--4081.3.6--n--61efafd0e9-k8s-csi--node--driver--kckfw-eth0" Nov 1 01:16:47.726403 containerd[1824]: time="2025-11-01T01:16:47.726360987Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 01:16:47.726631 containerd[1824]: time="2025-11-01T01:16:47.726397591Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 01:16:47.726657 containerd[1824]: time="2025-11-01T01:16:47.726631036Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 01:16:47.726729 containerd[1824]: time="2025-11-01T01:16:47.726678839Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 01:16:47.750397 systemd[1]: Started cri-containerd-b77c4219e677b692033758da3665eb374bd8a3a03ef4f9331b048a518bd80146.scope - libcontainer container b77c4219e677b692033758da3665eb374bd8a3a03ef4f9331b048a518bd80146. 
Nov 1 01:16:47.763739 containerd[1824]: time="2025-11-01T01:16:47.763710181Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kckfw,Uid:fe329cee-9aa5-425f-b021-f1def80c02c8,Namespace:calico-system,Attempt:1,} returns sandbox id \"b77c4219e677b692033758da3665eb374bd8a3a03ef4f9331b048a518bd80146\"" Nov 1 01:16:47.764765 containerd[1824]: time="2025-11-01T01:16:47.764745352Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 1 01:16:48.131347 containerd[1824]: time="2025-11-01T01:16:48.131191182Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:16:48.132128 containerd[1824]: time="2025-11-01T01:16:48.132101958Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 1 01:16:48.132199 containerd[1824]: time="2025-11-01T01:16:48.132175770Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 1 01:16:48.132311 kubelet[3089]: E1101 01:16:48.132256 3089 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 01:16:48.132311 kubelet[3089]: E1101 01:16:48.132289 3089 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 01:16:48.132512 kubelet[3089]: E1101 
01:16:48.132362 3089 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mr67n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-kckfw_calico-system(fe329cee-9aa5-425f-b021-f1def80c02c8): 
ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 1 01:16:48.134112 containerd[1824]: time="2025-11-01T01:16:48.134099939Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 1 01:16:48.273118 kubelet[3089]: I1101 01:16:48.273034 3089 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 1 01:16:48.486125 containerd[1824]: time="2025-11-01T01:16:48.486072887Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:16:48.486841 containerd[1824]: time="2025-11-01T01:16:48.486815325Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 1 01:16:48.486904 containerd[1824]: time="2025-11-01T01:16:48.486887014Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 1 01:16:48.487009 kubelet[3089]: E1101 01:16:48.486990 3089 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 01:16:48.487043 kubelet[3089]: E1101 01:16:48.487018 3089 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and 
unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 01:16:48.487105 kubelet[3089]: E1101 01:16:48.487082 3089 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mr67n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:R
untimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-kckfw_calico-system(fe329cee-9aa5-425f-b021-f1def80c02c8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 1 01:16:48.488400 kubelet[3089]: E1101 01:16:48.488299 3089 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-kckfw" podUID="fe329cee-9aa5-425f-b021-f1def80c02c8" Nov 1 01:16:48.740521 kubelet[3089]: E1101 01:16:48.740400 3089 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-kckfw" podUID="fe329cee-9aa5-425f-b021-f1def80c02c8" Nov 1 01:16:49.393539 systemd-networkd[1613]: caliabccb52fa6b: Gained IPv6LL Nov 1 01:16:49.599778 containerd[1824]: time="2025-11-01T01:16:49.599752407Z" level=info msg="StopPodSandbox for \"362493d71f18c74ed56acf27a792f037278114e860860da95ea9dd993d0069f9\"" Nov 1 01:16:49.644217 containerd[1824]: 2025-11-01 01:16:49.623 [INFO][5327] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="362493d71f18c74ed56acf27a792f037278114e860860da95ea9dd993d0069f9" Nov 1 01:16:49.644217 containerd[1824]: 2025-11-01 01:16:49.623 [INFO][5327] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="362493d71f18c74ed56acf27a792f037278114e860860da95ea9dd993d0069f9" iface="eth0" netns="/var/run/netns/cni-5577d096-3bf5-53eb-fd2d-565fdf8277dc" Nov 1 01:16:49.644217 containerd[1824]: 2025-11-01 01:16:49.623 [INFO][5327] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="362493d71f18c74ed56acf27a792f037278114e860860da95ea9dd993d0069f9" iface="eth0" netns="/var/run/netns/cni-5577d096-3bf5-53eb-fd2d-565fdf8277dc" Nov 1 01:16:49.644217 containerd[1824]: 2025-11-01 01:16:49.623 [INFO][5327] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="362493d71f18c74ed56acf27a792f037278114e860860da95ea9dd993d0069f9" iface="eth0" netns="/var/run/netns/cni-5577d096-3bf5-53eb-fd2d-565fdf8277dc" Nov 1 01:16:49.644217 containerd[1824]: 2025-11-01 01:16:49.623 [INFO][5327] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="362493d71f18c74ed56acf27a792f037278114e860860da95ea9dd993d0069f9" Nov 1 01:16:49.644217 containerd[1824]: 2025-11-01 01:16:49.623 [INFO][5327] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="362493d71f18c74ed56acf27a792f037278114e860860da95ea9dd993d0069f9" Nov 1 01:16:49.644217 containerd[1824]: 2025-11-01 01:16:49.636 [INFO][5342] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="362493d71f18c74ed56acf27a792f037278114e860860da95ea9dd993d0069f9" HandleID="k8s-pod-network.362493d71f18c74ed56acf27a792f037278114e860860da95ea9dd993d0069f9" Workload="ci--4081.3.6--n--61efafd0e9-k8s-coredns--668d6bf9bc--4dj4l-eth0" Nov 1 01:16:49.644217 containerd[1824]: 2025-11-01 01:16:49.636 [INFO][5342] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:16:49.644217 containerd[1824]: 2025-11-01 01:16:49.636 [INFO][5342] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 01:16:49.644217 containerd[1824]: 2025-11-01 01:16:49.641 [WARNING][5342] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="362493d71f18c74ed56acf27a792f037278114e860860da95ea9dd993d0069f9" HandleID="k8s-pod-network.362493d71f18c74ed56acf27a792f037278114e860860da95ea9dd993d0069f9" Workload="ci--4081.3.6--n--61efafd0e9-k8s-coredns--668d6bf9bc--4dj4l-eth0" Nov 1 01:16:49.644217 containerd[1824]: 2025-11-01 01:16:49.641 [INFO][5342] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="362493d71f18c74ed56acf27a792f037278114e860860da95ea9dd993d0069f9" HandleID="k8s-pod-network.362493d71f18c74ed56acf27a792f037278114e860860da95ea9dd993d0069f9" Workload="ci--4081.3.6--n--61efafd0e9-k8s-coredns--668d6bf9bc--4dj4l-eth0" Nov 1 01:16:49.644217 containerd[1824]: 2025-11-01 01:16:49.642 [INFO][5342] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:16:49.644217 containerd[1824]: 2025-11-01 01:16:49.643 [INFO][5327] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="362493d71f18c74ed56acf27a792f037278114e860860da95ea9dd993d0069f9" Nov 1 01:16:49.644578 containerd[1824]: time="2025-11-01T01:16:49.644270830Z" level=info msg="TearDown network for sandbox \"362493d71f18c74ed56acf27a792f037278114e860860da95ea9dd993d0069f9\" successfully" Nov 1 01:16:49.644578 containerd[1824]: time="2025-11-01T01:16:49.644292155Z" level=info msg="StopPodSandbox for \"362493d71f18c74ed56acf27a792f037278114e860860da95ea9dd993d0069f9\" returns successfully" Nov 1 01:16:49.644789 containerd[1824]: time="2025-11-01T01:16:49.644767042Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-4dj4l,Uid:c59cb744-7d2f-4d9a-b681-ac1bc163601e,Namespace:kube-system,Attempt:1,}" Nov 1 01:16:49.646252 systemd[1]: run-netns-cni\x2d5577d096\x2d3bf5\x2d53eb\x2dfd2d\x2d565fdf8277dc.mount: Deactivated successfully. 
Nov 1 01:16:49.703179 systemd-networkd[1613]: calia1d9c5871fb: Link UP Nov 1 01:16:49.703420 systemd-networkd[1613]: calia1d9c5871fb: Gained carrier Nov 1 01:16:49.712433 containerd[1824]: 2025-11-01 01:16:49.666 [INFO][5359] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--61efafd0e9-k8s-coredns--668d6bf9bc--4dj4l-eth0 coredns-668d6bf9bc- kube-system c59cb744-7d2f-4d9a-b681-ac1bc163601e 918 0 2025-11-01 01:16:15 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.6-n-61efafd0e9 coredns-668d6bf9bc-4dj4l eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calia1d9c5871fb [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="472bd3f4cff1dab0fd6d3d79013821e7c70af61b7a5efd29a4b4bcbbba571d4d" Namespace="kube-system" Pod="coredns-668d6bf9bc-4dj4l" WorkloadEndpoint="ci--4081.3.6--n--61efafd0e9-k8s-coredns--668d6bf9bc--4dj4l-" Nov 1 01:16:49.712433 containerd[1824]: 2025-11-01 01:16:49.666 [INFO][5359] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="472bd3f4cff1dab0fd6d3d79013821e7c70af61b7a5efd29a4b4bcbbba571d4d" Namespace="kube-system" Pod="coredns-668d6bf9bc-4dj4l" WorkloadEndpoint="ci--4081.3.6--n--61efafd0e9-k8s-coredns--668d6bf9bc--4dj4l-eth0" Nov 1 01:16:49.712433 containerd[1824]: 2025-11-01 01:16:49.680 [INFO][5383] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="472bd3f4cff1dab0fd6d3d79013821e7c70af61b7a5efd29a4b4bcbbba571d4d" HandleID="k8s-pod-network.472bd3f4cff1dab0fd6d3d79013821e7c70af61b7a5efd29a4b4bcbbba571d4d" Workload="ci--4081.3.6--n--61efafd0e9-k8s-coredns--668d6bf9bc--4dj4l-eth0" Nov 1 01:16:49.712433 containerd[1824]: 2025-11-01 01:16:49.680 [INFO][5383] ipam/ipam_plugin.go 275: Auto assigning IP 
ContainerID="472bd3f4cff1dab0fd6d3d79013821e7c70af61b7a5efd29a4b4bcbbba571d4d" HandleID="k8s-pod-network.472bd3f4cff1dab0fd6d3d79013821e7c70af61b7a5efd29a4b4bcbbba571d4d" Workload="ci--4081.3.6--n--61efafd0e9-k8s-coredns--668d6bf9bc--4dj4l-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000503750), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.6-n-61efafd0e9", "pod":"coredns-668d6bf9bc-4dj4l", "timestamp":"2025-11-01 01:16:49.680074405 +0000 UTC"}, Hostname:"ci-4081.3.6-n-61efafd0e9", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 01:16:49.712433 containerd[1824]: 2025-11-01 01:16:49.680 [INFO][5383] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:16:49.712433 containerd[1824]: 2025-11-01 01:16:49.680 [INFO][5383] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 01:16:49.712433 containerd[1824]: 2025-11-01 01:16:49.680 [INFO][5383] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-61efafd0e9' Nov 1 01:16:49.712433 containerd[1824]: 2025-11-01 01:16:49.685 [INFO][5383] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.472bd3f4cff1dab0fd6d3d79013821e7c70af61b7a5efd29a4b4bcbbba571d4d" host="ci-4081.3.6-n-61efafd0e9" Nov 1 01:16:49.712433 containerd[1824]: 2025-11-01 01:16:49.687 [INFO][5383] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-61efafd0e9" Nov 1 01:16:49.712433 containerd[1824]: 2025-11-01 01:16:49.690 [INFO][5383] ipam/ipam.go 511: Trying affinity for 192.168.7.192/26 host="ci-4081.3.6-n-61efafd0e9" Nov 1 01:16:49.712433 containerd[1824]: 2025-11-01 01:16:49.691 [INFO][5383] ipam/ipam.go 158: Attempting to load block cidr=192.168.7.192/26 host="ci-4081.3.6-n-61efafd0e9" Nov 1 01:16:49.712433 containerd[1824]: 2025-11-01 01:16:49.692 [INFO][5383] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.7.192/26 host="ci-4081.3.6-n-61efafd0e9" Nov 1 01:16:49.712433 containerd[1824]: 2025-11-01 01:16:49.692 [INFO][5383] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.7.192/26 handle="k8s-pod-network.472bd3f4cff1dab0fd6d3d79013821e7c70af61b7a5efd29a4b4bcbbba571d4d" host="ci-4081.3.6-n-61efafd0e9" Nov 1 01:16:49.712433 containerd[1824]: 2025-11-01 01:16:49.693 [INFO][5383] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.472bd3f4cff1dab0fd6d3d79013821e7c70af61b7a5efd29a4b4bcbbba571d4d Nov 1 01:16:49.712433 containerd[1824]: 2025-11-01 01:16:49.696 [INFO][5383] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.7.192/26 handle="k8s-pod-network.472bd3f4cff1dab0fd6d3d79013821e7c70af61b7a5efd29a4b4bcbbba571d4d" host="ci-4081.3.6-n-61efafd0e9" Nov 1 01:16:49.712433 containerd[1824]: 2025-11-01 01:16:49.700 [INFO][5383] ipam/ipam.go 1262: Successfully claimed 
IPs: [192.168.7.195/26] block=192.168.7.192/26 handle="k8s-pod-network.472bd3f4cff1dab0fd6d3d79013821e7c70af61b7a5efd29a4b4bcbbba571d4d" host="ci-4081.3.6-n-61efafd0e9" Nov 1 01:16:49.712433 containerd[1824]: 2025-11-01 01:16:49.700 [INFO][5383] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.7.195/26] handle="k8s-pod-network.472bd3f4cff1dab0fd6d3d79013821e7c70af61b7a5efd29a4b4bcbbba571d4d" host="ci-4081.3.6-n-61efafd0e9" Nov 1 01:16:49.712433 containerd[1824]: 2025-11-01 01:16:49.700 [INFO][5383] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:16:49.712433 containerd[1824]: 2025-11-01 01:16:49.700 [INFO][5383] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.7.195/26] IPv6=[] ContainerID="472bd3f4cff1dab0fd6d3d79013821e7c70af61b7a5efd29a4b4bcbbba571d4d" HandleID="k8s-pod-network.472bd3f4cff1dab0fd6d3d79013821e7c70af61b7a5efd29a4b4bcbbba571d4d" Workload="ci--4081.3.6--n--61efafd0e9-k8s-coredns--668d6bf9bc--4dj4l-eth0" Nov 1 01:16:49.713143 containerd[1824]: 2025-11-01 01:16:49.701 [INFO][5359] cni-plugin/k8s.go 418: Populated endpoint ContainerID="472bd3f4cff1dab0fd6d3d79013821e7c70af61b7a5efd29a4b4bcbbba571d4d" Namespace="kube-system" Pod="coredns-668d6bf9bc-4dj4l" WorkloadEndpoint="ci--4081.3.6--n--61efafd0e9-k8s-coredns--668d6bf9bc--4dj4l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--61efafd0e9-k8s-coredns--668d6bf9bc--4dj4l-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"c59cb744-7d2f-4d9a-b681-ac1bc163601e", ResourceVersion:"918", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 16, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-61efafd0e9", ContainerID:"", Pod:"coredns-668d6bf9bc-4dj4l", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.7.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia1d9c5871fb", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:16:49.713143 containerd[1824]: 2025-11-01 01:16:49.701 [INFO][5359] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.7.195/32] ContainerID="472bd3f4cff1dab0fd6d3d79013821e7c70af61b7a5efd29a4b4bcbbba571d4d" Namespace="kube-system" Pod="coredns-668d6bf9bc-4dj4l" WorkloadEndpoint="ci--4081.3.6--n--61efafd0e9-k8s-coredns--668d6bf9bc--4dj4l-eth0" Nov 1 01:16:49.713143 containerd[1824]: 2025-11-01 01:16:49.701 [INFO][5359] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia1d9c5871fb ContainerID="472bd3f4cff1dab0fd6d3d79013821e7c70af61b7a5efd29a4b4bcbbba571d4d" Namespace="kube-system" Pod="coredns-668d6bf9bc-4dj4l" WorkloadEndpoint="ci--4081.3.6--n--61efafd0e9-k8s-coredns--668d6bf9bc--4dj4l-eth0" Nov 1 01:16:49.713143 containerd[1824]: 2025-11-01 01:16:49.703 [INFO][5359] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="472bd3f4cff1dab0fd6d3d79013821e7c70af61b7a5efd29a4b4bcbbba571d4d" Namespace="kube-system" Pod="coredns-668d6bf9bc-4dj4l" WorkloadEndpoint="ci--4081.3.6--n--61efafd0e9-k8s-coredns--668d6bf9bc--4dj4l-eth0" Nov 1 01:16:49.713143 containerd[1824]: 2025-11-01 01:16:49.703 [INFO][5359] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="472bd3f4cff1dab0fd6d3d79013821e7c70af61b7a5efd29a4b4bcbbba571d4d" Namespace="kube-system" Pod="coredns-668d6bf9bc-4dj4l" WorkloadEndpoint="ci--4081.3.6--n--61efafd0e9-k8s-coredns--668d6bf9bc--4dj4l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--61efafd0e9-k8s-coredns--668d6bf9bc--4dj4l-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"c59cb744-7d2f-4d9a-b681-ac1bc163601e", ResourceVersion:"918", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 16, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-61efafd0e9", ContainerID:"472bd3f4cff1dab0fd6d3d79013821e7c70af61b7a5efd29a4b4bcbbba571d4d", Pod:"coredns-668d6bf9bc-4dj4l", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.7.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia1d9c5871fb", MAC:"ca:7f:34:d9:44:0b", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:16:49.713143 containerd[1824]: 2025-11-01 01:16:49.710 [INFO][5359] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="472bd3f4cff1dab0fd6d3d79013821e7c70af61b7a5efd29a4b4bcbbba571d4d" Namespace="kube-system" Pod="coredns-668d6bf9bc-4dj4l" WorkloadEndpoint="ci--4081.3.6--n--61efafd0e9-k8s-coredns--668d6bf9bc--4dj4l-eth0" Nov 1 01:16:49.738738 containerd[1824]: time="2025-11-01T01:16:49.738668505Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 01:16:49.738738 containerd[1824]: time="2025-11-01T01:16:49.738700298Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 01:16:49.738738 containerd[1824]: time="2025-11-01T01:16:49.738711767Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 01:16:49.738998 containerd[1824]: time="2025-11-01T01:16:49.738943515Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 01:16:49.740227 kubelet[3089]: E1101 01:16:49.740193 3089 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-kckfw" podUID="fe329cee-9aa5-425f-b021-f1def80c02c8" Nov 1 01:16:49.769488 systemd[1]: Started cri-containerd-472bd3f4cff1dab0fd6d3d79013821e7c70af61b7a5efd29a4b4bcbbba571d4d.scope - libcontainer container 472bd3f4cff1dab0fd6d3d79013821e7c70af61b7a5efd29a4b4bcbbba571d4d. 
Nov 1 01:16:49.802037 containerd[1824]: time="2025-11-01T01:16:49.802008939Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-4dj4l,Uid:c59cb744-7d2f-4d9a-b681-ac1bc163601e,Namespace:kube-system,Attempt:1,} returns sandbox id \"472bd3f4cff1dab0fd6d3d79013821e7c70af61b7a5efd29a4b4bcbbba571d4d\"" Nov 1 01:16:49.803627 containerd[1824]: time="2025-11-01T01:16:49.803594023Z" level=info msg="CreateContainer within sandbox \"472bd3f4cff1dab0fd6d3d79013821e7c70af61b7a5efd29a4b4bcbbba571d4d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 1 01:16:49.808223 containerd[1824]: time="2025-11-01T01:16:49.808209543Z" level=info msg="CreateContainer within sandbox \"472bd3f4cff1dab0fd6d3d79013821e7c70af61b7a5efd29a4b4bcbbba571d4d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0ce830761ae97c4e25a3934a9018fc8c9f7247ab2ce8ae28bf9faf149827028d\"" Nov 1 01:16:49.808432 containerd[1824]: time="2025-11-01T01:16:49.808421477Z" level=info msg="StartContainer for \"0ce830761ae97c4e25a3934a9018fc8c9f7247ab2ce8ae28bf9faf149827028d\"" Nov 1 01:16:49.835310 systemd[1]: Started cri-containerd-0ce830761ae97c4e25a3934a9018fc8c9f7247ab2ce8ae28bf9faf149827028d.scope - libcontainer container 0ce830761ae97c4e25a3934a9018fc8c9f7247ab2ce8ae28bf9faf149827028d. 
Nov 1 01:16:49.849670 containerd[1824]: time="2025-11-01T01:16:49.849642935Z" level=info msg="StartContainer for \"0ce830761ae97c4e25a3934a9018fc8c9f7247ab2ce8ae28bf9faf149827028d\" returns successfully" Nov 1 01:16:50.599486 containerd[1824]: time="2025-11-01T01:16:50.599346961Z" level=info msg="StopPodSandbox for \"8d79ea38d44eeb399d4895b42203ac4cdba8d2dbd9a700fb3227a7bf10979fb1\"" Nov 1 01:16:50.599709 containerd[1824]: time="2025-11-01T01:16:50.599630337Z" level=info msg="StopPodSandbox for \"47e948a33de73f49c8858f1c068620217c8069e91ecfbb0fe71cd4123283b495\"" Nov 1 01:16:50.677170 containerd[1824]: 2025-11-01 01:16:50.629 [INFO][5518] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8d79ea38d44eeb399d4895b42203ac4cdba8d2dbd9a700fb3227a7bf10979fb1" Nov 1 01:16:50.677170 containerd[1824]: 2025-11-01 01:16:50.629 [INFO][5518] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="8d79ea38d44eeb399d4895b42203ac4cdba8d2dbd9a700fb3227a7bf10979fb1" iface="eth0" netns="/var/run/netns/cni-3935b6aa-be2e-80be-14b5-5a868d20b205" Nov 1 01:16:50.677170 containerd[1824]: 2025-11-01 01:16:50.630 [INFO][5518] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="8d79ea38d44eeb399d4895b42203ac4cdba8d2dbd9a700fb3227a7bf10979fb1" iface="eth0" netns="/var/run/netns/cni-3935b6aa-be2e-80be-14b5-5a868d20b205" Nov 1 01:16:50.677170 containerd[1824]: 2025-11-01 01:16:50.631 [INFO][5518] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="8d79ea38d44eeb399d4895b42203ac4cdba8d2dbd9a700fb3227a7bf10979fb1" iface="eth0" netns="/var/run/netns/cni-3935b6aa-be2e-80be-14b5-5a868d20b205" Nov 1 01:16:50.677170 containerd[1824]: 2025-11-01 01:16:50.631 [INFO][5518] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8d79ea38d44eeb399d4895b42203ac4cdba8d2dbd9a700fb3227a7bf10979fb1" Nov 1 01:16:50.677170 containerd[1824]: 2025-11-01 01:16:50.631 [INFO][5518] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8d79ea38d44eeb399d4895b42203ac4cdba8d2dbd9a700fb3227a7bf10979fb1" Nov 1 01:16:50.677170 containerd[1824]: 2025-11-01 01:16:50.668 [INFO][5553] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="8d79ea38d44eeb399d4895b42203ac4cdba8d2dbd9a700fb3227a7bf10979fb1" HandleID="k8s-pod-network.8d79ea38d44eeb399d4895b42203ac4cdba8d2dbd9a700fb3227a7bf10979fb1" Workload="ci--4081.3.6--n--61efafd0e9-k8s-calico--apiserver--5847c846dc--9sdnh-eth0" Nov 1 01:16:50.677170 containerd[1824]: 2025-11-01 01:16:50.668 [INFO][5553] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:16:50.677170 containerd[1824]: 2025-11-01 01:16:50.668 [INFO][5553] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 01:16:50.677170 containerd[1824]: 2025-11-01 01:16:50.673 [WARNING][5553] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8d79ea38d44eeb399d4895b42203ac4cdba8d2dbd9a700fb3227a7bf10979fb1" HandleID="k8s-pod-network.8d79ea38d44eeb399d4895b42203ac4cdba8d2dbd9a700fb3227a7bf10979fb1" Workload="ci--4081.3.6--n--61efafd0e9-k8s-calico--apiserver--5847c846dc--9sdnh-eth0" Nov 1 01:16:50.677170 containerd[1824]: 2025-11-01 01:16:50.673 [INFO][5553] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="8d79ea38d44eeb399d4895b42203ac4cdba8d2dbd9a700fb3227a7bf10979fb1" HandleID="k8s-pod-network.8d79ea38d44eeb399d4895b42203ac4cdba8d2dbd9a700fb3227a7bf10979fb1" Workload="ci--4081.3.6--n--61efafd0e9-k8s-calico--apiserver--5847c846dc--9sdnh-eth0" Nov 1 01:16:50.677170 containerd[1824]: 2025-11-01 01:16:50.675 [INFO][5553] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:16:50.677170 containerd[1824]: 2025-11-01 01:16:50.675 [INFO][5518] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8d79ea38d44eeb399d4895b42203ac4cdba8d2dbd9a700fb3227a7bf10979fb1" Nov 1 01:16:50.678022 containerd[1824]: time="2025-11-01T01:16:50.677300597Z" level=info msg="TearDown network for sandbox \"8d79ea38d44eeb399d4895b42203ac4cdba8d2dbd9a700fb3227a7bf10979fb1\" successfully" Nov 1 01:16:50.678022 containerd[1824]: time="2025-11-01T01:16:50.677327880Z" level=info msg="StopPodSandbox for \"8d79ea38d44eeb399d4895b42203ac4cdba8d2dbd9a700fb3227a7bf10979fb1\" returns successfully" Nov 1 01:16:50.678022 containerd[1824]: time="2025-11-01T01:16:50.677963903Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5847c846dc-9sdnh,Uid:e53afa66-571d-49bb-8168-fa6f398b3e23,Namespace:calico-apiserver,Attempt:1,}" Nov 1 01:16:50.680111 systemd[1]: run-netns-cni\x2d3935b6aa\x2dbe2e\x2d80be\x2d14b5\x2d5a868d20b205.mount: Deactivated successfully. 
Nov 1 01:16:50.683807 containerd[1824]: 2025-11-01 01:16:50.628 [INFO][5517] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="47e948a33de73f49c8858f1c068620217c8069e91ecfbb0fe71cd4123283b495" Nov 1 01:16:50.683807 containerd[1824]: 2025-11-01 01:16:50.628 [INFO][5517] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="47e948a33de73f49c8858f1c068620217c8069e91ecfbb0fe71cd4123283b495" iface="eth0" netns="/var/run/netns/cni-13633aa6-f133-2c15-b6b5-a456e5a9495d" Nov 1 01:16:50.683807 containerd[1824]: 2025-11-01 01:16:50.629 [INFO][5517] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="47e948a33de73f49c8858f1c068620217c8069e91ecfbb0fe71cd4123283b495" iface="eth0" netns="/var/run/netns/cni-13633aa6-f133-2c15-b6b5-a456e5a9495d" Nov 1 01:16:50.683807 containerd[1824]: 2025-11-01 01:16:50.629 [INFO][5517] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="47e948a33de73f49c8858f1c068620217c8069e91ecfbb0fe71cd4123283b495" iface="eth0" netns="/var/run/netns/cni-13633aa6-f133-2c15-b6b5-a456e5a9495d" Nov 1 01:16:50.683807 containerd[1824]: 2025-11-01 01:16:50.630 [INFO][5517] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="47e948a33de73f49c8858f1c068620217c8069e91ecfbb0fe71cd4123283b495" Nov 1 01:16:50.683807 containerd[1824]: 2025-11-01 01:16:50.630 [INFO][5517] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="47e948a33de73f49c8858f1c068620217c8069e91ecfbb0fe71cd4123283b495" Nov 1 01:16:50.683807 containerd[1824]: 2025-11-01 01:16:50.668 [INFO][5551] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="47e948a33de73f49c8858f1c068620217c8069e91ecfbb0fe71cd4123283b495" HandleID="k8s-pod-network.47e948a33de73f49c8858f1c068620217c8069e91ecfbb0fe71cd4123283b495" Workload="ci--4081.3.6--n--61efafd0e9-k8s-calico--apiserver--5847c846dc--7gbvk-eth0" Nov 1 01:16:50.683807 containerd[1824]: 2025-11-01 01:16:50.668 
[INFO][5551] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:16:50.683807 containerd[1824]: 2025-11-01 01:16:50.675 [INFO][5551] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 01:16:50.683807 containerd[1824]: 2025-11-01 01:16:50.680 [WARNING][5551] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="47e948a33de73f49c8858f1c068620217c8069e91ecfbb0fe71cd4123283b495" HandleID="k8s-pod-network.47e948a33de73f49c8858f1c068620217c8069e91ecfbb0fe71cd4123283b495" Workload="ci--4081.3.6--n--61efafd0e9-k8s-calico--apiserver--5847c846dc--7gbvk-eth0" Nov 1 01:16:50.683807 containerd[1824]: 2025-11-01 01:16:50.680 [INFO][5551] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="47e948a33de73f49c8858f1c068620217c8069e91ecfbb0fe71cd4123283b495" HandleID="k8s-pod-network.47e948a33de73f49c8858f1c068620217c8069e91ecfbb0fe71cd4123283b495" Workload="ci--4081.3.6--n--61efafd0e9-k8s-calico--apiserver--5847c846dc--7gbvk-eth0" Nov 1 01:16:50.683807 containerd[1824]: 2025-11-01 01:16:50.681 [INFO][5551] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:16:50.683807 containerd[1824]: 2025-11-01 01:16:50.682 [INFO][5517] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="47e948a33de73f49c8858f1c068620217c8069e91ecfbb0fe71cd4123283b495" Nov 1 01:16:50.684359 containerd[1824]: time="2025-11-01T01:16:50.683880671Z" level=info msg="TearDown network for sandbox \"47e948a33de73f49c8858f1c068620217c8069e91ecfbb0fe71cd4123283b495\" successfully" Nov 1 01:16:50.684359 containerd[1824]: time="2025-11-01T01:16:50.683900514Z" level=info msg="StopPodSandbox for \"47e948a33de73f49c8858f1c068620217c8069e91ecfbb0fe71cd4123283b495\" returns successfully" Nov 1 01:16:50.684449 containerd[1824]: time="2025-11-01T01:16:50.684407272Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5847c846dc-7gbvk,Uid:f2ac6185-3f80-4cfa-971b-e2d87f342f5e,Namespace:calico-apiserver,Attempt:1,}" Nov 1 01:16:50.686714 systemd[1]: run-netns-cni\x2d13633aa6\x2df133\x2d2c15\x2db6b5\x2da456e5a9495d.mount: Deactivated successfully. Nov 1 01:16:50.746518 kubelet[3089]: I1101 01:16:50.746478 3089 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-4dj4l" podStartSLOduration=35.746462496 podStartE2EDuration="35.746462496s" podCreationTimestamp="2025-11-01 01:16:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 01:16:50.746334004 +0000 UTC m=+41.190969139" watchObservedRunningTime="2025-11-01 01:16:50.746462496 +0000 UTC m=+41.191097611" Nov 1 01:16:50.749136 systemd-networkd[1613]: cali430ff05a7e4: Link UP Nov 1 01:16:50.749369 systemd-networkd[1613]: cali430ff05a7e4: Gained carrier Nov 1 01:16:50.755119 containerd[1824]: 2025-11-01 01:16:50.714 [INFO][5583] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--61efafd0e9-k8s-calico--apiserver--5847c846dc--9sdnh-eth0 calico-apiserver-5847c846dc- calico-apiserver e53afa66-571d-49bb-8168-fa6f398b3e23 934 0 2025-11-01 01:16:23 +0000 UTC map[apiserver:true 
app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5847c846dc projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.6-n-61efafd0e9 calico-apiserver-5847c846dc-9sdnh eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali430ff05a7e4 [] [] }} ContainerID="2128b2a833de4153f966fe6fad4876194e8fca90666fbb108bbe4933346d80e5" Namespace="calico-apiserver" Pod="calico-apiserver-5847c846dc-9sdnh" WorkloadEndpoint="ci--4081.3.6--n--61efafd0e9-k8s-calico--apiserver--5847c846dc--9sdnh-" Nov 1 01:16:50.755119 containerd[1824]: 2025-11-01 01:16:50.714 [INFO][5583] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2128b2a833de4153f966fe6fad4876194e8fca90666fbb108bbe4933346d80e5" Namespace="calico-apiserver" Pod="calico-apiserver-5847c846dc-9sdnh" WorkloadEndpoint="ci--4081.3.6--n--61efafd0e9-k8s-calico--apiserver--5847c846dc--9sdnh-eth0" Nov 1 01:16:50.755119 containerd[1824]: 2025-11-01 01:16:50.726 [INFO][5630] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2128b2a833de4153f966fe6fad4876194e8fca90666fbb108bbe4933346d80e5" HandleID="k8s-pod-network.2128b2a833de4153f966fe6fad4876194e8fca90666fbb108bbe4933346d80e5" Workload="ci--4081.3.6--n--61efafd0e9-k8s-calico--apiserver--5847c846dc--9sdnh-eth0" Nov 1 01:16:50.755119 containerd[1824]: 2025-11-01 01:16:50.726 [INFO][5630] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="2128b2a833de4153f966fe6fad4876194e8fca90666fbb108bbe4933346d80e5" HandleID="k8s-pod-network.2128b2a833de4153f966fe6fad4876194e8fca90666fbb108bbe4933346d80e5" Workload="ci--4081.3.6--n--61efafd0e9-k8s-calico--apiserver--5847c846dc--9sdnh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f6e0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.6-n-61efafd0e9", 
"pod":"calico-apiserver-5847c846dc-9sdnh", "timestamp":"2025-11-01 01:16:50.726506075 +0000 UTC"}, Hostname:"ci-4081.3.6-n-61efafd0e9", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 01:16:50.755119 containerd[1824]: 2025-11-01 01:16:50.726 [INFO][5630] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:16:50.755119 containerd[1824]: 2025-11-01 01:16:50.726 [INFO][5630] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 01:16:50.755119 containerd[1824]: 2025-11-01 01:16:50.726 [INFO][5630] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-61efafd0e9' Nov 1 01:16:50.755119 containerd[1824]: 2025-11-01 01:16:50.731 [INFO][5630] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2128b2a833de4153f966fe6fad4876194e8fca90666fbb108bbe4933346d80e5" host="ci-4081.3.6-n-61efafd0e9" Nov 1 01:16:50.755119 containerd[1824]: 2025-11-01 01:16:50.734 [INFO][5630] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-61efafd0e9" Nov 1 01:16:50.755119 containerd[1824]: 2025-11-01 01:16:50.737 [INFO][5630] ipam/ipam.go 511: Trying affinity for 192.168.7.192/26 host="ci-4081.3.6-n-61efafd0e9" Nov 1 01:16:50.755119 containerd[1824]: 2025-11-01 01:16:50.739 [INFO][5630] ipam/ipam.go 158: Attempting to load block cidr=192.168.7.192/26 host="ci-4081.3.6-n-61efafd0e9" Nov 1 01:16:50.755119 containerd[1824]: 2025-11-01 01:16:50.740 [INFO][5630] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.7.192/26 host="ci-4081.3.6-n-61efafd0e9" Nov 1 01:16:50.755119 containerd[1824]: 2025-11-01 01:16:50.740 [INFO][5630] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.7.192/26 handle="k8s-pod-network.2128b2a833de4153f966fe6fad4876194e8fca90666fbb108bbe4933346d80e5" 
host="ci-4081.3.6-n-61efafd0e9" Nov 1 01:16:50.755119 containerd[1824]: 2025-11-01 01:16:50.741 [INFO][5630] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.2128b2a833de4153f966fe6fad4876194e8fca90666fbb108bbe4933346d80e5 Nov 1 01:16:50.755119 containerd[1824]: 2025-11-01 01:16:50.743 [INFO][5630] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.7.192/26 handle="k8s-pod-network.2128b2a833de4153f966fe6fad4876194e8fca90666fbb108bbe4933346d80e5" host="ci-4081.3.6-n-61efafd0e9" Nov 1 01:16:50.755119 containerd[1824]: 2025-11-01 01:16:50.746 [INFO][5630] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.7.196/26] block=192.168.7.192/26 handle="k8s-pod-network.2128b2a833de4153f966fe6fad4876194e8fca90666fbb108bbe4933346d80e5" host="ci-4081.3.6-n-61efafd0e9" Nov 1 01:16:50.755119 containerd[1824]: 2025-11-01 01:16:50.746 [INFO][5630] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.7.196/26] handle="k8s-pod-network.2128b2a833de4153f966fe6fad4876194e8fca90666fbb108bbe4933346d80e5" host="ci-4081.3.6-n-61efafd0e9" Nov 1 01:16:50.755119 containerd[1824]: 2025-11-01 01:16:50.746 [INFO][5630] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 1 01:16:50.755119 containerd[1824]: 2025-11-01 01:16:50.746 [INFO][5630] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.7.196/26] IPv6=[] ContainerID="2128b2a833de4153f966fe6fad4876194e8fca90666fbb108bbe4933346d80e5" HandleID="k8s-pod-network.2128b2a833de4153f966fe6fad4876194e8fca90666fbb108bbe4933346d80e5" Workload="ci--4081.3.6--n--61efafd0e9-k8s-calico--apiserver--5847c846dc--9sdnh-eth0" Nov 1 01:16:50.755570 containerd[1824]: 2025-11-01 01:16:50.747 [INFO][5583] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2128b2a833de4153f966fe6fad4876194e8fca90666fbb108bbe4933346d80e5" Namespace="calico-apiserver" Pod="calico-apiserver-5847c846dc-9sdnh" WorkloadEndpoint="ci--4081.3.6--n--61efafd0e9-k8s-calico--apiserver--5847c846dc--9sdnh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--61efafd0e9-k8s-calico--apiserver--5847c846dc--9sdnh-eth0", GenerateName:"calico-apiserver-5847c846dc-", Namespace:"calico-apiserver", SelfLink:"", UID:"e53afa66-571d-49bb-8168-fa6f398b3e23", ResourceVersion:"934", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 16, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5847c846dc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-61efafd0e9", ContainerID:"", Pod:"calico-apiserver-5847c846dc-9sdnh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", 
IPNetworks:[]string{"192.168.7.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali430ff05a7e4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:16:50.755570 containerd[1824]: 2025-11-01 01:16:50.747 [INFO][5583] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.7.196/32] ContainerID="2128b2a833de4153f966fe6fad4876194e8fca90666fbb108bbe4933346d80e5" Namespace="calico-apiserver" Pod="calico-apiserver-5847c846dc-9sdnh" WorkloadEndpoint="ci--4081.3.6--n--61efafd0e9-k8s-calico--apiserver--5847c846dc--9sdnh-eth0" Nov 1 01:16:50.755570 containerd[1824]: 2025-11-01 01:16:50.747 [INFO][5583] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali430ff05a7e4 ContainerID="2128b2a833de4153f966fe6fad4876194e8fca90666fbb108bbe4933346d80e5" Namespace="calico-apiserver" Pod="calico-apiserver-5847c846dc-9sdnh" WorkloadEndpoint="ci--4081.3.6--n--61efafd0e9-k8s-calico--apiserver--5847c846dc--9sdnh-eth0" Nov 1 01:16:50.755570 containerd[1824]: 2025-11-01 01:16:50.749 [INFO][5583] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2128b2a833de4153f966fe6fad4876194e8fca90666fbb108bbe4933346d80e5" Namespace="calico-apiserver" Pod="calico-apiserver-5847c846dc-9sdnh" WorkloadEndpoint="ci--4081.3.6--n--61efafd0e9-k8s-calico--apiserver--5847c846dc--9sdnh-eth0" Nov 1 01:16:50.755570 containerd[1824]: 2025-11-01 01:16:50.749 [INFO][5583] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2128b2a833de4153f966fe6fad4876194e8fca90666fbb108bbe4933346d80e5" Namespace="calico-apiserver" Pod="calico-apiserver-5847c846dc-9sdnh" WorkloadEndpoint="ci--4081.3.6--n--61efafd0e9-k8s-calico--apiserver--5847c846dc--9sdnh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--61efafd0e9-k8s-calico--apiserver--5847c846dc--9sdnh-eth0", GenerateName:"calico-apiserver-5847c846dc-", Namespace:"calico-apiserver", SelfLink:"", UID:"e53afa66-571d-49bb-8168-fa6f398b3e23", ResourceVersion:"934", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 16, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5847c846dc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-61efafd0e9", ContainerID:"2128b2a833de4153f966fe6fad4876194e8fca90666fbb108bbe4933346d80e5", Pod:"calico-apiserver-5847c846dc-9sdnh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.7.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali430ff05a7e4", MAC:"82:9d:a2:7b:b7:9b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:16:50.755570 containerd[1824]: 2025-11-01 01:16:50.753 [INFO][5583] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2128b2a833de4153f966fe6fad4876194e8fca90666fbb108bbe4933346d80e5" Namespace="calico-apiserver" Pod="calico-apiserver-5847c846dc-9sdnh" WorkloadEndpoint="ci--4081.3.6--n--61efafd0e9-k8s-calico--apiserver--5847c846dc--9sdnh-eth0" Nov 1 01:16:50.763936 containerd[1824]: time="2025-11-01T01:16:50.763668547Z" level=info 
msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 01:16:50.763936 containerd[1824]: time="2025-11-01T01:16:50.763878528Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 01:16:50.763936 containerd[1824]: time="2025-11-01T01:16:50.763886422Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 01:16:50.764035 containerd[1824]: time="2025-11-01T01:16:50.763924516Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 01:16:50.788368 systemd[1]: Started cri-containerd-2128b2a833de4153f966fe6fad4876194e8fca90666fbb108bbe4933346d80e5.scope - libcontainer container 2128b2a833de4153f966fe6fad4876194e8fca90666fbb108bbe4933346d80e5. Nov 1 01:16:50.813191 containerd[1824]: time="2025-11-01T01:16:50.813169499Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5847c846dc-9sdnh,Uid:e53afa66-571d-49bb-8168-fa6f398b3e23,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"2128b2a833de4153f966fe6fad4876194e8fca90666fbb108bbe4933346d80e5\"" Nov 1 01:16:50.813949 containerd[1824]: time="2025-11-01T01:16:50.813911803Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 01:16:50.893297 systemd-networkd[1613]: cali580c3110e84: Link UP Nov 1 01:16:50.894277 systemd-networkd[1613]: cali580c3110e84: Gained carrier Nov 1 01:16:50.918682 containerd[1824]: 2025-11-01 01:16:50.713 [INFO][5585] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--61efafd0e9-k8s-calico--apiserver--5847c846dc--7gbvk-eth0 calico-apiserver-5847c846dc- calico-apiserver f2ac6185-3f80-4cfa-971b-e2d87f342f5e 933 0 2025-11-01 01:16:23 +0000 UTC map[apiserver:true 
app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5847c846dc projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.6-n-61efafd0e9 calico-apiserver-5847c846dc-7gbvk eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali580c3110e84 [] [] }} ContainerID="a0df76c788c456e0ab03c1a6022ed58af8cd23b38d3c9e89e315f2f832d13a0a" Namespace="calico-apiserver" Pod="calico-apiserver-5847c846dc-7gbvk" WorkloadEndpoint="ci--4081.3.6--n--61efafd0e9-k8s-calico--apiserver--5847c846dc--7gbvk-" Nov 1 01:16:50.918682 containerd[1824]: 2025-11-01 01:16:50.714 [INFO][5585] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a0df76c788c456e0ab03c1a6022ed58af8cd23b38d3c9e89e315f2f832d13a0a" Namespace="calico-apiserver" Pod="calico-apiserver-5847c846dc-7gbvk" WorkloadEndpoint="ci--4081.3.6--n--61efafd0e9-k8s-calico--apiserver--5847c846dc--7gbvk-eth0" Nov 1 01:16:50.918682 containerd[1824]: 2025-11-01 01:16:50.726 [INFO][5628] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a0df76c788c456e0ab03c1a6022ed58af8cd23b38d3c9e89e315f2f832d13a0a" HandleID="k8s-pod-network.a0df76c788c456e0ab03c1a6022ed58af8cd23b38d3c9e89e315f2f832d13a0a" Workload="ci--4081.3.6--n--61efafd0e9-k8s-calico--apiserver--5847c846dc--7gbvk-eth0" Nov 1 01:16:50.918682 containerd[1824]: 2025-11-01 01:16:50.726 [INFO][5628] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="a0df76c788c456e0ab03c1a6022ed58af8cd23b38d3c9e89e315f2f832d13a0a" HandleID="k8s-pod-network.a0df76c788c456e0ab03c1a6022ed58af8cd23b38d3c9e89e315f2f832d13a0a" Workload="ci--4081.3.6--n--61efafd0e9-k8s-calico--apiserver--5847c846dc--7gbvk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f7e0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.6-n-61efafd0e9", 
"pod":"calico-apiserver-5847c846dc-7gbvk", "timestamp":"2025-11-01 01:16:50.726508056 +0000 UTC"}, Hostname:"ci-4081.3.6-n-61efafd0e9", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 01:16:50.918682 containerd[1824]: 2025-11-01 01:16:50.727 [INFO][5628] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:16:50.918682 containerd[1824]: 2025-11-01 01:16:50.746 [INFO][5628] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 01:16:50.918682 containerd[1824]: 2025-11-01 01:16:50.746 [INFO][5628] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-61efafd0e9' Nov 1 01:16:50.918682 containerd[1824]: 2025-11-01 01:16:50.834 [INFO][5628] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a0df76c788c456e0ab03c1a6022ed58af8cd23b38d3c9e89e315f2f832d13a0a" host="ci-4081.3.6-n-61efafd0e9" Nov 1 01:16:50.918682 containerd[1824]: 2025-11-01 01:16:50.843 [INFO][5628] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-61efafd0e9" Nov 1 01:16:50.918682 containerd[1824]: 2025-11-01 01:16:50.853 [INFO][5628] ipam/ipam.go 511: Trying affinity for 192.168.7.192/26 host="ci-4081.3.6-n-61efafd0e9" Nov 1 01:16:50.918682 containerd[1824]: 2025-11-01 01:16:50.857 [INFO][5628] ipam/ipam.go 158: Attempting to load block cidr=192.168.7.192/26 host="ci-4081.3.6-n-61efafd0e9" Nov 1 01:16:50.918682 containerd[1824]: 2025-11-01 01:16:50.862 [INFO][5628] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.7.192/26 host="ci-4081.3.6-n-61efafd0e9" Nov 1 01:16:50.918682 containerd[1824]: 2025-11-01 01:16:50.862 [INFO][5628] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.7.192/26 handle="k8s-pod-network.a0df76c788c456e0ab03c1a6022ed58af8cd23b38d3c9e89e315f2f832d13a0a" 
host="ci-4081.3.6-n-61efafd0e9" Nov 1 01:16:50.918682 containerd[1824]: 2025-11-01 01:16:50.865 [INFO][5628] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.a0df76c788c456e0ab03c1a6022ed58af8cd23b38d3c9e89e315f2f832d13a0a Nov 1 01:16:50.918682 containerd[1824]: 2025-11-01 01:16:50.872 [INFO][5628] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.7.192/26 handle="k8s-pod-network.a0df76c788c456e0ab03c1a6022ed58af8cd23b38d3c9e89e315f2f832d13a0a" host="ci-4081.3.6-n-61efafd0e9" Nov 1 01:16:50.918682 containerd[1824]: 2025-11-01 01:16:50.884 [INFO][5628] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.7.197/26] block=192.168.7.192/26 handle="k8s-pod-network.a0df76c788c456e0ab03c1a6022ed58af8cd23b38d3c9e89e315f2f832d13a0a" host="ci-4081.3.6-n-61efafd0e9" Nov 1 01:16:50.918682 containerd[1824]: 2025-11-01 01:16:50.884 [INFO][5628] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.7.197/26] handle="k8s-pod-network.a0df76c788c456e0ab03c1a6022ed58af8cd23b38d3c9e89e315f2f832d13a0a" host="ci-4081.3.6-n-61efafd0e9" Nov 1 01:16:50.918682 containerd[1824]: 2025-11-01 01:16:50.884 [INFO][5628] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 1 01:16:50.918682 containerd[1824]: 2025-11-01 01:16:50.884 [INFO][5628] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.7.197/26] IPv6=[] ContainerID="a0df76c788c456e0ab03c1a6022ed58af8cd23b38d3c9e89e315f2f832d13a0a" HandleID="k8s-pod-network.a0df76c788c456e0ab03c1a6022ed58af8cd23b38d3c9e89e315f2f832d13a0a" Workload="ci--4081.3.6--n--61efafd0e9-k8s-calico--apiserver--5847c846dc--7gbvk-eth0" Nov 1 01:16:50.920869 containerd[1824]: 2025-11-01 01:16:50.889 [INFO][5585] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a0df76c788c456e0ab03c1a6022ed58af8cd23b38d3c9e89e315f2f832d13a0a" Namespace="calico-apiserver" Pod="calico-apiserver-5847c846dc-7gbvk" WorkloadEndpoint="ci--4081.3.6--n--61efafd0e9-k8s-calico--apiserver--5847c846dc--7gbvk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--61efafd0e9-k8s-calico--apiserver--5847c846dc--7gbvk-eth0", GenerateName:"calico-apiserver-5847c846dc-", Namespace:"calico-apiserver", SelfLink:"", UID:"f2ac6185-3f80-4cfa-971b-e2d87f342f5e", ResourceVersion:"933", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 16, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5847c846dc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-61efafd0e9", ContainerID:"", Pod:"calico-apiserver-5847c846dc-7gbvk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", 
IPNetworks:[]string{"192.168.7.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali580c3110e84", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:16:50.920869 containerd[1824]: 2025-11-01 01:16:50.889 [INFO][5585] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.7.197/32] ContainerID="a0df76c788c456e0ab03c1a6022ed58af8cd23b38d3c9e89e315f2f832d13a0a" Namespace="calico-apiserver" Pod="calico-apiserver-5847c846dc-7gbvk" WorkloadEndpoint="ci--4081.3.6--n--61efafd0e9-k8s-calico--apiserver--5847c846dc--7gbvk-eth0" Nov 1 01:16:50.920869 containerd[1824]: 2025-11-01 01:16:50.889 [INFO][5585] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali580c3110e84 ContainerID="a0df76c788c456e0ab03c1a6022ed58af8cd23b38d3c9e89e315f2f832d13a0a" Namespace="calico-apiserver" Pod="calico-apiserver-5847c846dc-7gbvk" WorkloadEndpoint="ci--4081.3.6--n--61efafd0e9-k8s-calico--apiserver--5847c846dc--7gbvk-eth0" Nov 1 01:16:50.920869 containerd[1824]: 2025-11-01 01:16:50.894 [INFO][5585] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a0df76c788c456e0ab03c1a6022ed58af8cd23b38d3c9e89e315f2f832d13a0a" Namespace="calico-apiserver" Pod="calico-apiserver-5847c846dc-7gbvk" WorkloadEndpoint="ci--4081.3.6--n--61efafd0e9-k8s-calico--apiserver--5847c846dc--7gbvk-eth0" Nov 1 01:16:50.920869 containerd[1824]: 2025-11-01 01:16:50.895 [INFO][5585] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a0df76c788c456e0ab03c1a6022ed58af8cd23b38d3c9e89e315f2f832d13a0a" Namespace="calico-apiserver" Pod="calico-apiserver-5847c846dc-7gbvk" WorkloadEndpoint="ci--4081.3.6--n--61efafd0e9-k8s-calico--apiserver--5847c846dc--7gbvk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--61efafd0e9-k8s-calico--apiserver--5847c846dc--7gbvk-eth0", GenerateName:"calico-apiserver-5847c846dc-", Namespace:"calico-apiserver", SelfLink:"", UID:"f2ac6185-3f80-4cfa-971b-e2d87f342f5e", ResourceVersion:"933", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 16, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5847c846dc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-61efafd0e9", ContainerID:"a0df76c788c456e0ab03c1a6022ed58af8cd23b38d3c9e89e315f2f832d13a0a", Pod:"calico-apiserver-5847c846dc-7gbvk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.7.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali580c3110e84", MAC:"7e:11:b9:37:54:62", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:16:50.920869 containerd[1824]: 2025-11-01 01:16:50.914 [INFO][5585] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a0df76c788c456e0ab03c1a6022ed58af8cd23b38d3c9e89e315f2f832d13a0a" Namespace="calico-apiserver" Pod="calico-apiserver-5847c846dc-7gbvk" WorkloadEndpoint="ci--4081.3.6--n--61efafd0e9-k8s-calico--apiserver--5847c846dc--7gbvk-eth0" Nov 1 01:16:50.932266 containerd[1824]: time="2025-11-01T01:16:50.932222488Z" level=info 
msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 01:16:50.932266 containerd[1824]: time="2025-11-01T01:16:50.932255361Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 01:16:50.932266 containerd[1824]: time="2025-11-01T01:16:50.932262720Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 01:16:50.932369 containerd[1824]: time="2025-11-01T01:16:50.932301696Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 01:16:50.948682 systemd[1]: Started cri-containerd-a0df76c788c456e0ab03c1a6022ed58af8cd23b38d3c9e89e315f2f832d13a0a.scope - libcontainer container a0df76c788c456e0ab03c1a6022ed58af8cd23b38d3c9e89e315f2f832d13a0a. Nov 1 01:16:51.008724 containerd[1824]: time="2025-11-01T01:16:51.008695137Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5847c846dc-7gbvk,Uid:f2ac6185-3f80-4cfa-971b-e2d87f342f5e,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"a0df76c788c456e0ab03c1a6022ed58af8cd23b38d3c9e89e315f2f832d13a0a\"" Nov 1 01:16:51.121539 systemd-networkd[1613]: calia1d9c5871fb: Gained IPv6LL Nov 1 01:16:51.202147 containerd[1824]: time="2025-11-01T01:16:51.202057470Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:16:51.203139 containerd[1824]: time="2025-11-01T01:16:51.203117247Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 01:16:51.203202 containerd[1824]: 
time="2025-11-01T01:16:51.203185987Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 1 01:16:51.203322 kubelet[3089]: E1101 01:16:51.203301 3089 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 01:16:51.203351 kubelet[3089]: E1101 01:16:51.203331 3089 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 01:16:51.203527 kubelet[3089]: E1101 01:16:51.203462 3089 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hlkp2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5847c846dc-9sdnh_calico-apiserver(e53afa66-571d-49bb-8168-fa6f398b3e23): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 01:16:51.203621 containerd[1824]: time="2025-11-01T01:16:51.203537988Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 01:16:51.204654 kubelet[3089]: E1101 01:16:51.204639 3089 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5847c846dc-9sdnh" podUID="e53afa66-571d-49bb-8168-fa6f398b3e23" Nov 1 01:16:51.579593 containerd[1824]: time="2025-11-01T01:16:51.579359086Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:16:51.580363 containerd[1824]: time="2025-11-01T01:16:51.580337578Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 01:16:51.580431 containerd[1824]: time="2025-11-01T01:16:51.580415528Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 1 01:16:51.580512 kubelet[3089]: E1101 01:16:51.580495 3089 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 01:16:51.580581 kubelet[3089]: E1101 01:16:51.580536 3089 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 01:16:51.580650 kubelet[3089]: E1101 01:16:51.580614 3089 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zt7xl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5847c846dc-7gbvk_calico-apiserver(f2ac6185-3f80-4cfa-971b-e2d87f342f5e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 01:16:51.581771 kubelet[3089]: E1101 01:16:51.581756 3089 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5847c846dc-7gbvk" podUID="f2ac6185-3f80-4cfa-971b-e2d87f342f5e" Nov 1 01:16:51.599354 containerd[1824]: time="2025-11-01T01:16:51.599326201Z" level=info msg="StopPodSandbox for \"a4f4de81d3756b477fcee71a26dbb9156fb3811bd3f012b408dfe0025fb54aff\"" Nov 1 01:16:51.599454 
containerd[1824]: time="2025-11-01T01:16:51.599326187Z" level=info msg="StopPodSandbox for \"fe3ff701bfb1080ed5eff1e14001e57b027b00b44297bc30a87a5f8604829b4a\"" Nov 1 01:16:51.599489 containerd[1824]: time="2025-11-01T01:16:51.599405831Z" level=info msg="StopPodSandbox for \"d9544d4764784ff980e2d8e4c1336cdfa6aa28840d78454563c73c95cd16329a\"" Nov 1 01:16:51.681599 containerd[1824]: 2025-11-01 01:16:51.648 [INFO][5799] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d9544d4764784ff980e2d8e4c1336cdfa6aa28840d78454563c73c95cd16329a" Nov 1 01:16:51.681599 containerd[1824]: 2025-11-01 01:16:51.649 [INFO][5799] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d9544d4764784ff980e2d8e4c1336cdfa6aa28840d78454563c73c95cd16329a" iface="eth0" netns="/var/run/netns/cni-3afe7655-26c4-cf2e-ec6c-6f109a3df5cd" Nov 1 01:16:51.681599 containerd[1824]: 2025-11-01 01:16:51.649 [INFO][5799] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d9544d4764784ff980e2d8e4c1336cdfa6aa28840d78454563c73c95cd16329a" iface="eth0" netns="/var/run/netns/cni-3afe7655-26c4-cf2e-ec6c-6f109a3df5cd" Nov 1 01:16:51.681599 containerd[1824]: 2025-11-01 01:16:51.650 [INFO][5799] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="d9544d4764784ff980e2d8e4c1336cdfa6aa28840d78454563c73c95cd16329a" iface="eth0" netns="/var/run/netns/cni-3afe7655-26c4-cf2e-ec6c-6f109a3df5cd" Nov 1 01:16:51.681599 containerd[1824]: 2025-11-01 01:16:51.650 [INFO][5799] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d9544d4764784ff980e2d8e4c1336cdfa6aa28840d78454563c73c95cd16329a" Nov 1 01:16:51.681599 containerd[1824]: 2025-11-01 01:16:51.650 [INFO][5799] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d9544d4764784ff980e2d8e4c1336cdfa6aa28840d78454563c73c95cd16329a" Nov 1 01:16:51.681599 containerd[1824]: 2025-11-01 01:16:51.673 [INFO][5853] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="d9544d4764784ff980e2d8e4c1336cdfa6aa28840d78454563c73c95cd16329a" HandleID="k8s-pod-network.d9544d4764784ff980e2d8e4c1336cdfa6aa28840d78454563c73c95cd16329a" Workload="ci--4081.3.6--n--61efafd0e9-k8s-calico--kube--controllers--895fdb58f--xjcnc-eth0" Nov 1 01:16:51.681599 containerd[1824]: 2025-11-01 01:16:51.673 [INFO][5853] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:16:51.681599 containerd[1824]: 2025-11-01 01:16:51.673 [INFO][5853] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 01:16:51.681599 containerd[1824]: 2025-11-01 01:16:51.678 [WARNING][5853] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d9544d4764784ff980e2d8e4c1336cdfa6aa28840d78454563c73c95cd16329a" HandleID="k8s-pod-network.d9544d4764784ff980e2d8e4c1336cdfa6aa28840d78454563c73c95cd16329a" Workload="ci--4081.3.6--n--61efafd0e9-k8s-calico--kube--controllers--895fdb58f--xjcnc-eth0" Nov 1 01:16:51.681599 containerd[1824]: 2025-11-01 01:16:51.678 [INFO][5853] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="d9544d4764784ff980e2d8e4c1336cdfa6aa28840d78454563c73c95cd16329a" HandleID="k8s-pod-network.d9544d4764784ff980e2d8e4c1336cdfa6aa28840d78454563c73c95cd16329a" Workload="ci--4081.3.6--n--61efafd0e9-k8s-calico--kube--controllers--895fdb58f--xjcnc-eth0" Nov 1 01:16:51.681599 containerd[1824]: 2025-11-01 01:16:51.679 [INFO][5853] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:16:51.681599 containerd[1824]: 2025-11-01 01:16:51.680 [INFO][5799] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d9544d4764784ff980e2d8e4c1336cdfa6aa28840d78454563c73c95cd16329a" Nov 1 01:16:51.682373 containerd[1824]: time="2025-11-01T01:16:51.681685774Z" level=info msg="TearDown network for sandbox \"d9544d4764784ff980e2d8e4c1336cdfa6aa28840d78454563c73c95cd16329a\" successfully" Nov 1 01:16:51.682373 containerd[1824]: time="2025-11-01T01:16:51.681705530Z" level=info msg="StopPodSandbox for \"d9544d4764784ff980e2d8e4c1336cdfa6aa28840d78454563c73c95cd16329a\" returns successfully" Nov 1 01:16:51.682373 containerd[1824]: time="2025-11-01T01:16:51.682160412Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-895fdb58f-xjcnc,Uid:c808102b-d4ca-4405-80f4-fc0935baaa15,Namespace:calico-system,Attempt:1,}" Nov 1 01:16:51.683567 systemd[1]: run-netns-cni\x2d3afe7655\x2d26c4\x2dcf2e\x2dec6c\x2d6f109a3df5cd.mount: Deactivated successfully. 
Nov 1 01:16:51.686085 containerd[1824]: 2025-11-01 01:16:51.646 [INFO][5801] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a4f4de81d3756b477fcee71a26dbb9156fb3811bd3f012b408dfe0025fb54aff" Nov 1 01:16:51.686085 containerd[1824]: 2025-11-01 01:16:51.647 [INFO][5801] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a4f4de81d3756b477fcee71a26dbb9156fb3811bd3f012b408dfe0025fb54aff" iface="eth0" netns="/var/run/netns/cni-8c924012-1190-6553-d2ab-f07e451b0a76" Nov 1 01:16:51.686085 containerd[1824]: 2025-11-01 01:16:51.648 [INFO][5801] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a4f4de81d3756b477fcee71a26dbb9156fb3811bd3f012b408dfe0025fb54aff" iface="eth0" netns="/var/run/netns/cni-8c924012-1190-6553-d2ab-f07e451b0a76" Nov 1 01:16:51.686085 containerd[1824]: 2025-11-01 01:16:51.648 [INFO][5801] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="a4f4de81d3756b477fcee71a26dbb9156fb3811bd3f012b408dfe0025fb54aff" iface="eth0" netns="/var/run/netns/cni-8c924012-1190-6553-d2ab-f07e451b0a76" Nov 1 01:16:51.686085 containerd[1824]: 2025-11-01 01:16:51.649 [INFO][5801] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a4f4de81d3756b477fcee71a26dbb9156fb3811bd3f012b408dfe0025fb54aff" Nov 1 01:16:51.686085 containerd[1824]: 2025-11-01 01:16:51.649 [INFO][5801] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a4f4de81d3756b477fcee71a26dbb9156fb3811bd3f012b408dfe0025fb54aff" Nov 1 01:16:51.686085 containerd[1824]: 2025-11-01 01:16:51.673 [INFO][5849] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="a4f4de81d3756b477fcee71a26dbb9156fb3811bd3f012b408dfe0025fb54aff" HandleID="k8s-pod-network.a4f4de81d3756b477fcee71a26dbb9156fb3811bd3f012b408dfe0025fb54aff" Workload="ci--4081.3.6--n--61efafd0e9-k8s-goldmane--666569f655--t2l45-eth0" Nov 1 01:16:51.686085 containerd[1824]: 2025-11-01 01:16:51.674 [INFO][5849] 
ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:16:51.686085 containerd[1824]: 2025-11-01 01:16:51.679 [INFO][5849] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 01:16:51.686085 containerd[1824]: 2025-11-01 01:16:51.683 [WARNING][5849] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="a4f4de81d3756b477fcee71a26dbb9156fb3811bd3f012b408dfe0025fb54aff" HandleID="k8s-pod-network.a4f4de81d3756b477fcee71a26dbb9156fb3811bd3f012b408dfe0025fb54aff" Workload="ci--4081.3.6--n--61efafd0e9-k8s-goldmane--666569f655--t2l45-eth0" Nov 1 01:16:51.686085 containerd[1824]: 2025-11-01 01:16:51.683 [INFO][5849] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="a4f4de81d3756b477fcee71a26dbb9156fb3811bd3f012b408dfe0025fb54aff" HandleID="k8s-pod-network.a4f4de81d3756b477fcee71a26dbb9156fb3811bd3f012b408dfe0025fb54aff" Workload="ci--4081.3.6--n--61efafd0e9-k8s-goldmane--666569f655--t2l45-eth0" Nov 1 01:16:51.686085 containerd[1824]: 2025-11-01 01:16:51.684 [INFO][5849] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:16:51.686085 containerd[1824]: 2025-11-01 01:16:51.685 [INFO][5801] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="a4f4de81d3756b477fcee71a26dbb9156fb3811bd3f012b408dfe0025fb54aff" Nov 1 01:16:51.686335 containerd[1824]: time="2025-11-01T01:16:51.686163807Z" level=info msg="TearDown network for sandbox \"a4f4de81d3756b477fcee71a26dbb9156fb3811bd3f012b408dfe0025fb54aff\" successfully" Nov 1 01:16:51.686335 containerd[1824]: time="2025-11-01T01:16:51.686185615Z" level=info msg="StopPodSandbox for \"a4f4de81d3756b477fcee71a26dbb9156fb3811bd3f012b408dfe0025fb54aff\" returns successfully" Nov 1 01:16:51.686540 containerd[1824]: time="2025-11-01T01:16:51.686529461Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-t2l45,Uid:07dd73e6-dcff-41d4-b90f-7314863a267d,Namespace:calico-system,Attempt:1,}" Nov 1 01:16:51.689945 systemd[1]: run-netns-cni\x2d8c924012\x2d1190\x2d6553\x2dd2ab\x2df07e451b0a76.mount: Deactivated successfully. Nov 1 01:16:51.691245 containerd[1824]: 2025-11-01 01:16:51.647 [INFO][5800] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="fe3ff701bfb1080ed5eff1e14001e57b027b00b44297bc30a87a5f8604829b4a" Nov 1 01:16:51.691245 containerd[1824]: 2025-11-01 01:16:51.648 [INFO][5800] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="fe3ff701bfb1080ed5eff1e14001e57b027b00b44297bc30a87a5f8604829b4a" iface="eth0" netns="/var/run/netns/cni-2b3448d0-803c-687d-08a4-d1fa1790344c" Nov 1 01:16:51.691245 containerd[1824]: 2025-11-01 01:16:51.648 [INFO][5800] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="fe3ff701bfb1080ed5eff1e14001e57b027b00b44297bc30a87a5f8604829b4a" iface="eth0" netns="/var/run/netns/cni-2b3448d0-803c-687d-08a4-d1fa1790344c" Nov 1 01:16:51.691245 containerd[1824]: 2025-11-01 01:16:51.649 [INFO][5800] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="fe3ff701bfb1080ed5eff1e14001e57b027b00b44297bc30a87a5f8604829b4a" iface="eth0" netns="/var/run/netns/cni-2b3448d0-803c-687d-08a4-d1fa1790344c" Nov 1 01:16:51.691245 containerd[1824]: 2025-11-01 01:16:51.649 [INFO][5800] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="fe3ff701bfb1080ed5eff1e14001e57b027b00b44297bc30a87a5f8604829b4a" Nov 1 01:16:51.691245 containerd[1824]: 2025-11-01 01:16:51.649 [INFO][5800] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fe3ff701bfb1080ed5eff1e14001e57b027b00b44297bc30a87a5f8604829b4a" Nov 1 01:16:51.691245 containerd[1824]: 2025-11-01 01:16:51.676 [INFO][5851] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="fe3ff701bfb1080ed5eff1e14001e57b027b00b44297bc30a87a5f8604829b4a" HandleID="k8s-pod-network.fe3ff701bfb1080ed5eff1e14001e57b027b00b44297bc30a87a5f8604829b4a" Workload="ci--4081.3.6--n--61efafd0e9-k8s-coredns--668d6bf9bc--pdzpp-eth0" Nov 1 01:16:51.691245 containerd[1824]: 2025-11-01 01:16:51.676 [INFO][5851] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:16:51.691245 containerd[1824]: 2025-11-01 01:16:51.684 [INFO][5851] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 01:16:51.691245 containerd[1824]: 2025-11-01 01:16:51.687 [WARNING][5851] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fe3ff701bfb1080ed5eff1e14001e57b027b00b44297bc30a87a5f8604829b4a" HandleID="k8s-pod-network.fe3ff701bfb1080ed5eff1e14001e57b027b00b44297bc30a87a5f8604829b4a" Workload="ci--4081.3.6--n--61efafd0e9-k8s-coredns--668d6bf9bc--pdzpp-eth0" Nov 1 01:16:51.691245 containerd[1824]: 2025-11-01 01:16:51.687 [INFO][5851] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="fe3ff701bfb1080ed5eff1e14001e57b027b00b44297bc30a87a5f8604829b4a" HandleID="k8s-pod-network.fe3ff701bfb1080ed5eff1e14001e57b027b00b44297bc30a87a5f8604829b4a" Workload="ci--4081.3.6--n--61efafd0e9-k8s-coredns--668d6bf9bc--pdzpp-eth0" Nov 1 01:16:51.691245 containerd[1824]: 2025-11-01 01:16:51.689 [INFO][5851] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:16:51.691245 containerd[1824]: 2025-11-01 01:16:51.690 [INFO][5800] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="fe3ff701bfb1080ed5eff1e14001e57b027b00b44297bc30a87a5f8604829b4a" Nov 1 01:16:51.691558 containerd[1824]: time="2025-11-01T01:16:51.691333537Z" level=info msg="TearDown network for sandbox \"fe3ff701bfb1080ed5eff1e14001e57b027b00b44297bc30a87a5f8604829b4a\" successfully" Nov 1 01:16:51.691558 containerd[1824]: time="2025-11-01T01:16:51.691353372Z" level=info msg="StopPodSandbox for \"fe3ff701bfb1080ed5eff1e14001e57b027b00b44297bc30a87a5f8604829b4a\" returns successfully" Nov 1 01:16:51.691814 containerd[1824]: time="2025-11-01T01:16:51.691799673Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-pdzpp,Uid:8933d3a0-6f08-44c4-b76e-2bacec659217,Namespace:kube-system,Attempt:1,}" Nov 1 01:16:51.737174 systemd-networkd[1613]: cali76339371f5d: Link UP Nov 1 01:16:51.737288 systemd-networkd[1613]: cali76339371f5d: Gained carrier Nov 1 01:16:51.744068 kubelet[3089]: E1101 01:16:51.744038 3089 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5847c846dc-9sdnh" podUID="e53afa66-571d-49bb-8168-fa6f398b3e23" Nov 1 01:16:51.744292 containerd[1824]: 2025-11-01 01:16:51.706 [INFO][5904] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--61efafd0e9-k8s-calico--kube--controllers--895fdb58f--xjcnc-eth0 calico-kube-controllers-895fdb58f- calico-system c808102b-d4ca-4405-80f4-fc0935baaa15 962 0 2025-11-01 01:16:27 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:895fdb58f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081.3.6-n-61efafd0e9 calico-kube-controllers-895fdb58f-xjcnc eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali76339371f5d [] [] }} ContainerID="20a8029e94215d25f042b61b30292638d69efe5c47a41b064e1c32ead85f0829" Namespace="calico-system" Pod="calico-kube-controllers-895fdb58f-xjcnc" WorkloadEndpoint="ci--4081.3.6--n--61efafd0e9-k8s-calico--kube--controllers--895fdb58f--xjcnc-" Nov 1 01:16:51.744292 containerd[1824]: 2025-11-01 01:16:51.706 [INFO][5904] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="20a8029e94215d25f042b61b30292638d69efe5c47a41b064e1c32ead85f0829" Namespace="calico-system" Pod="calico-kube-controllers-895fdb58f-xjcnc" WorkloadEndpoint="ci--4081.3.6--n--61efafd0e9-k8s-calico--kube--controllers--895fdb58f--xjcnc-eth0" Nov 1 01:16:51.744292 containerd[1824]: 2025-11-01 01:16:51.719 [INFO][5971] ipam/ipam_plugin.go 227: Calico 
CNI IPAM request count IPv4=1 IPv6=0 ContainerID="20a8029e94215d25f042b61b30292638d69efe5c47a41b064e1c32ead85f0829" HandleID="k8s-pod-network.20a8029e94215d25f042b61b30292638d69efe5c47a41b064e1c32ead85f0829" Workload="ci--4081.3.6--n--61efafd0e9-k8s-calico--kube--controllers--895fdb58f--xjcnc-eth0" Nov 1 01:16:51.744292 containerd[1824]: 2025-11-01 01:16:51.719 [INFO][5971] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="20a8029e94215d25f042b61b30292638d69efe5c47a41b064e1c32ead85f0829" HandleID="k8s-pod-network.20a8029e94215d25f042b61b30292638d69efe5c47a41b064e1c32ead85f0829" Workload="ci--4081.3.6--n--61efafd0e9-k8s-calico--kube--controllers--895fdb58f--xjcnc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e690), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-61efafd0e9", "pod":"calico-kube-controllers-895fdb58f-xjcnc", "timestamp":"2025-11-01 01:16:51.719337199 +0000 UTC"}, Hostname:"ci-4081.3.6-n-61efafd0e9", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 01:16:51.744292 containerd[1824]: 2025-11-01 01:16:51.719 [INFO][5971] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:16:51.744292 containerd[1824]: 2025-11-01 01:16:51.719 [INFO][5971] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 01:16:51.744292 containerd[1824]: 2025-11-01 01:16:51.719 [INFO][5971] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-61efafd0e9' Nov 1 01:16:51.744292 containerd[1824]: 2025-11-01 01:16:51.723 [INFO][5971] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.20a8029e94215d25f042b61b30292638d69efe5c47a41b064e1c32ead85f0829" host="ci-4081.3.6-n-61efafd0e9" Nov 1 01:16:51.744292 containerd[1824]: 2025-11-01 01:16:51.725 [INFO][5971] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-61efafd0e9" Nov 1 01:16:51.744292 containerd[1824]: 2025-11-01 01:16:51.727 [INFO][5971] ipam/ipam.go 511: Trying affinity for 192.168.7.192/26 host="ci-4081.3.6-n-61efafd0e9" Nov 1 01:16:51.744292 containerd[1824]: 2025-11-01 01:16:51.728 [INFO][5971] ipam/ipam.go 158: Attempting to load block cidr=192.168.7.192/26 host="ci-4081.3.6-n-61efafd0e9" Nov 1 01:16:51.744292 containerd[1824]: 2025-11-01 01:16:51.729 [INFO][5971] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.7.192/26 host="ci-4081.3.6-n-61efafd0e9" Nov 1 01:16:51.744292 containerd[1824]: 2025-11-01 01:16:51.729 [INFO][5971] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.7.192/26 handle="k8s-pod-network.20a8029e94215d25f042b61b30292638d69efe5c47a41b064e1c32ead85f0829" host="ci-4081.3.6-n-61efafd0e9" Nov 1 01:16:51.744292 containerd[1824]: 2025-11-01 01:16:51.730 [INFO][5971] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.20a8029e94215d25f042b61b30292638d69efe5c47a41b064e1c32ead85f0829 Nov 1 01:16:51.744292 containerd[1824]: 2025-11-01 01:16:51.732 [INFO][5971] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.7.192/26 handle="k8s-pod-network.20a8029e94215d25f042b61b30292638d69efe5c47a41b064e1c32ead85f0829" host="ci-4081.3.6-n-61efafd0e9" Nov 1 01:16:51.744292 containerd[1824]: 2025-11-01 01:16:51.735 [INFO][5971] ipam/ipam.go 1262: Successfully claimed 
IPs: [192.168.7.198/26] block=192.168.7.192/26 handle="k8s-pod-network.20a8029e94215d25f042b61b30292638d69efe5c47a41b064e1c32ead85f0829" host="ci-4081.3.6-n-61efafd0e9" Nov 1 01:16:51.744292 containerd[1824]: 2025-11-01 01:16:51.735 [INFO][5971] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.7.198/26] handle="k8s-pod-network.20a8029e94215d25f042b61b30292638d69efe5c47a41b064e1c32ead85f0829" host="ci-4081.3.6-n-61efafd0e9" Nov 1 01:16:51.744292 containerd[1824]: 2025-11-01 01:16:51.735 [INFO][5971] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:16:51.744292 containerd[1824]: 2025-11-01 01:16:51.735 [INFO][5971] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.7.198/26] IPv6=[] ContainerID="20a8029e94215d25f042b61b30292638d69efe5c47a41b064e1c32ead85f0829" HandleID="k8s-pod-network.20a8029e94215d25f042b61b30292638d69efe5c47a41b064e1c32ead85f0829" Workload="ci--4081.3.6--n--61efafd0e9-k8s-calico--kube--controllers--895fdb58f--xjcnc-eth0" Nov 1 01:16:51.744827 containerd[1824]: 2025-11-01 01:16:51.736 [INFO][5904] cni-plugin/k8s.go 418: Populated endpoint ContainerID="20a8029e94215d25f042b61b30292638d69efe5c47a41b064e1c32ead85f0829" Namespace="calico-system" Pod="calico-kube-controllers-895fdb58f-xjcnc" WorkloadEndpoint="ci--4081.3.6--n--61efafd0e9-k8s-calico--kube--controllers--895fdb58f--xjcnc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--61efafd0e9-k8s-calico--kube--controllers--895fdb58f--xjcnc-eth0", GenerateName:"calico-kube-controllers-895fdb58f-", Namespace:"calico-system", SelfLink:"", UID:"c808102b-d4ca-4405-80f4-fc0935baaa15", ResourceVersion:"962", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 16, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", 
"k8s-app":"calico-kube-controllers", "pod-template-hash":"895fdb58f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-61efafd0e9", ContainerID:"", Pod:"calico-kube-controllers-895fdb58f-xjcnc", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.7.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali76339371f5d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:16:51.744827 containerd[1824]: 2025-11-01 01:16:51.736 [INFO][5904] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.7.198/32] ContainerID="20a8029e94215d25f042b61b30292638d69efe5c47a41b064e1c32ead85f0829" Namespace="calico-system" Pod="calico-kube-controllers-895fdb58f-xjcnc" WorkloadEndpoint="ci--4081.3.6--n--61efafd0e9-k8s-calico--kube--controllers--895fdb58f--xjcnc-eth0" Nov 1 01:16:51.744827 containerd[1824]: 2025-11-01 01:16:51.736 [INFO][5904] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali76339371f5d ContainerID="20a8029e94215d25f042b61b30292638d69efe5c47a41b064e1c32ead85f0829" Namespace="calico-system" Pod="calico-kube-controllers-895fdb58f-xjcnc" WorkloadEndpoint="ci--4081.3.6--n--61efafd0e9-k8s-calico--kube--controllers--895fdb58f--xjcnc-eth0" Nov 1 01:16:51.744827 containerd[1824]: 2025-11-01 01:16:51.737 [INFO][5904] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="20a8029e94215d25f042b61b30292638d69efe5c47a41b064e1c32ead85f0829" Namespace="calico-system" 
Pod="calico-kube-controllers-895fdb58f-xjcnc" WorkloadEndpoint="ci--4081.3.6--n--61efafd0e9-k8s-calico--kube--controllers--895fdb58f--xjcnc-eth0" Nov 1 01:16:51.744827 containerd[1824]: 2025-11-01 01:16:51.738 [INFO][5904] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="20a8029e94215d25f042b61b30292638d69efe5c47a41b064e1c32ead85f0829" Namespace="calico-system" Pod="calico-kube-controllers-895fdb58f-xjcnc" WorkloadEndpoint="ci--4081.3.6--n--61efafd0e9-k8s-calico--kube--controllers--895fdb58f--xjcnc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--61efafd0e9-k8s-calico--kube--controllers--895fdb58f--xjcnc-eth0", GenerateName:"calico-kube-controllers-895fdb58f-", Namespace:"calico-system", SelfLink:"", UID:"c808102b-d4ca-4405-80f4-fc0935baaa15", ResourceVersion:"962", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 16, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"895fdb58f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-61efafd0e9", ContainerID:"20a8029e94215d25f042b61b30292638d69efe5c47a41b064e1c32ead85f0829", Pod:"calico-kube-controllers-895fdb58f-xjcnc", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.7.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali76339371f5d", MAC:"1e:25:c9:a1:bc:8a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:16:51.744827 containerd[1824]: 2025-11-01 01:16:51.743 [INFO][5904] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="20a8029e94215d25f042b61b30292638d69efe5c47a41b064e1c32ead85f0829" Namespace="calico-system" Pod="calico-kube-controllers-895fdb58f-xjcnc" WorkloadEndpoint="ci--4081.3.6--n--61efafd0e9-k8s-calico--kube--controllers--895fdb58f--xjcnc-eth0" Nov 1 01:16:51.744983 kubelet[3089]: E1101 01:16:51.744831 3089 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5847c846dc-7gbvk" podUID="f2ac6185-3f80-4cfa-971b-e2d87f342f5e" Nov 1 01:16:51.754247 containerd[1824]: time="2025-11-01T01:16:51.754142026Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 01:16:51.754247 containerd[1824]: time="2025-11-01T01:16:51.754177424Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 01:16:51.754247 containerd[1824]: time="2025-11-01T01:16:51.754184724Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 01:16:51.754247 containerd[1824]: time="2025-11-01T01:16:51.754237698Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 01:16:51.772508 systemd[1]: Started cri-containerd-20a8029e94215d25f042b61b30292638d69efe5c47a41b064e1c32ead85f0829.scope - libcontainer container 20a8029e94215d25f042b61b30292638d69efe5c47a41b064e1c32ead85f0829. Nov 1 01:16:51.794747 containerd[1824]: time="2025-11-01T01:16:51.794725013Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-895fdb58f-xjcnc,Uid:c808102b-d4ca-4405-80f4-fc0935baaa15,Namespace:calico-system,Attempt:1,} returns sandbox id \"20a8029e94215d25f042b61b30292638d69efe5c47a41b064e1c32ead85f0829\"" Nov 1 01:16:51.795427 containerd[1824]: time="2025-11-01T01:16:51.795416164Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 1 01:16:51.889455 systemd-networkd[1613]: calia57a65b701d: Link UP Nov 1 01:16:51.890172 systemd-networkd[1613]: calia57a65b701d: Gained carrier Nov 1 01:16:51.911523 containerd[1824]: 2025-11-01 01:16:51.709 [INFO][5915] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--61efafd0e9-k8s-goldmane--666569f655--t2l45-eth0 goldmane-666569f655- calico-system 07dd73e6-dcff-41d4-b90f-7314863a267d 961 0 2025-11-01 01:16:24 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081.3.6-n-61efafd0e9 goldmane-666569f655-t2l45 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calia57a65b701d [] [] }} ContainerID="55bcdc054e4316fe75024a039bc50338b60b5b0b280927e4bc5bd05a7d38e7b0" Namespace="calico-system" Pod="goldmane-666569f655-t2l45" 
WorkloadEndpoint="ci--4081.3.6--n--61efafd0e9-k8s-goldmane--666569f655--t2l45-" Nov 1 01:16:51.911523 containerd[1824]: 2025-11-01 01:16:51.709 [INFO][5915] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="55bcdc054e4316fe75024a039bc50338b60b5b0b280927e4bc5bd05a7d38e7b0" Namespace="calico-system" Pod="goldmane-666569f655-t2l45" WorkloadEndpoint="ci--4081.3.6--n--61efafd0e9-k8s-goldmane--666569f655--t2l45-eth0" Nov 1 01:16:51.911523 containerd[1824]: 2025-11-01 01:16:51.721 [INFO][5977] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="55bcdc054e4316fe75024a039bc50338b60b5b0b280927e4bc5bd05a7d38e7b0" HandleID="k8s-pod-network.55bcdc054e4316fe75024a039bc50338b60b5b0b280927e4bc5bd05a7d38e7b0" Workload="ci--4081.3.6--n--61efafd0e9-k8s-goldmane--666569f655--t2l45-eth0" Nov 1 01:16:51.911523 containerd[1824]: 2025-11-01 01:16:51.721 [INFO][5977] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="55bcdc054e4316fe75024a039bc50338b60b5b0b280927e4bc5bd05a7d38e7b0" HandleID="k8s-pod-network.55bcdc054e4316fe75024a039bc50338b60b5b0b280927e4bc5bd05a7d38e7b0" Workload="ci--4081.3.6--n--61efafd0e9-k8s-goldmane--666569f655--t2l45-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f630), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-61efafd0e9", "pod":"goldmane-666569f655-t2l45", "timestamp":"2025-11-01 01:16:51.721587267 +0000 UTC"}, Hostname:"ci-4081.3.6-n-61efafd0e9", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 01:16:51.911523 containerd[1824]: 2025-11-01 01:16:51.721 [INFO][5977] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:16:51.911523 containerd[1824]: 2025-11-01 01:16:51.735 [INFO][5977] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 01:16:51.911523 containerd[1824]: 2025-11-01 01:16:51.735 [INFO][5977] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-61efafd0e9' Nov 1 01:16:51.911523 containerd[1824]: 2025-11-01 01:16:51.826 [INFO][5977] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.55bcdc054e4316fe75024a039bc50338b60b5b0b280927e4bc5bd05a7d38e7b0" host="ci-4081.3.6-n-61efafd0e9" Nov 1 01:16:51.911523 containerd[1824]: 2025-11-01 01:16:51.835 [INFO][5977] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-61efafd0e9" Nov 1 01:16:51.911523 containerd[1824]: 2025-11-01 01:16:51.844 [INFO][5977] ipam/ipam.go 511: Trying affinity for 192.168.7.192/26 host="ci-4081.3.6-n-61efafd0e9" Nov 1 01:16:51.911523 containerd[1824]: 2025-11-01 01:16:51.848 [INFO][5977] ipam/ipam.go 158: Attempting to load block cidr=192.168.7.192/26 host="ci-4081.3.6-n-61efafd0e9" Nov 1 01:16:51.911523 containerd[1824]: 2025-11-01 01:16:51.854 [INFO][5977] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.7.192/26 host="ci-4081.3.6-n-61efafd0e9" Nov 1 01:16:51.911523 containerd[1824]: 2025-11-01 01:16:51.854 [INFO][5977] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.7.192/26 handle="k8s-pod-network.55bcdc054e4316fe75024a039bc50338b60b5b0b280927e4bc5bd05a7d38e7b0" host="ci-4081.3.6-n-61efafd0e9" Nov 1 01:16:51.911523 containerd[1824]: 2025-11-01 01:16:51.857 [INFO][5977] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.55bcdc054e4316fe75024a039bc50338b60b5b0b280927e4bc5bd05a7d38e7b0 Nov 1 01:16:51.911523 containerd[1824]: 2025-11-01 01:16:51.865 [INFO][5977] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.7.192/26 handle="k8s-pod-network.55bcdc054e4316fe75024a039bc50338b60b5b0b280927e4bc5bd05a7d38e7b0" host="ci-4081.3.6-n-61efafd0e9" Nov 1 01:16:51.911523 containerd[1824]: 2025-11-01 01:16:51.878 [INFO][5977] ipam/ipam.go 1262: Successfully claimed 
IPs: [192.168.7.199/26] block=192.168.7.192/26 handle="k8s-pod-network.55bcdc054e4316fe75024a039bc50338b60b5b0b280927e4bc5bd05a7d38e7b0" host="ci-4081.3.6-n-61efafd0e9" Nov 1 01:16:51.911523 containerd[1824]: 2025-11-01 01:16:51.878 [INFO][5977] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.7.199/26] handle="k8s-pod-network.55bcdc054e4316fe75024a039bc50338b60b5b0b280927e4bc5bd05a7d38e7b0" host="ci-4081.3.6-n-61efafd0e9" Nov 1 01:16:51.911523 containerd[1824]: 2025-11-01 01:16:51.878 [INFO][5977] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:16:51.911523 containerd[1824]: 2025-11-01 01:16:51.878 [INFO][5977] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.7.199/26] IPv6=[] ContainerID="55bcdc054e4316fe75024a039bc50338b60b5b0b280927e4bc5bd05a7d38e7b0" HandleID="k8s-pod-network.55bcdc054e4316fe75024a039bc50338b60b5b0b280927e4bc5bd05a7d38e7b0" Workload="ci--4081.3.6--n--61efafd0e9-k8s-goldmane--666569f655--t2l45-eth0" Nov 1 01:16:51.912633 containerd[1824]: 2025-11-01 01:16:51.883 [INFO][5915] cni-plugin/k8s.go 418: Populated endpoint ContainerID="55bcdc054e4316fe75024a039bc50338b60b5b0b280927e4bc5bd05a7d38e7b0" Namespace="calico-system" Pod="goldmane-666569f655-t2l45" WorkloadEndpoint="ci--4081.3.6--n--61efafd0e9-k8s-goldmane--666569f655--t2l45-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--61efafd0e9-k8s-goldmane--666569f655--t2l45-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"07dd73e6-dcff-41d4-b90f-7314863a267d", ResourceVersion:"961", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 16, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-61efafd0e9", ContainerID:"", Pod:"goldmane-666569f655-t2l45", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.7.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calia57a65b701d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:16:51.912633 containerd[1824]: 2025-11-01 01:16:51.883 [INFO][5915] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.7.199/32] ContainerID="55bcdc054e4316fe75024a039bc50338b60b5b0b280927e4bc5bd05a7d38e7b0" Namespace="calico-system" Pod="goldmane-666569f655-t2l45" WorkloadEndpoint="ci--4081.3.6--n--61efafd0e9-k8s-goldmane--666569f655--t2l45-eth0" Nov 1 01:16:51.912633 containerd[1824]: 2025-11-01 01:16:51.883 [INFO][5915] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia57a65b701d ContainerID="55bcdc054e4316fe75024a039bc50338b60b5b0b280927e4bc5bd05a7d38e7b0" Namespace="calico-system" Pod="goldmane-666569f655-t2l45" WorkloadEndpoint="ci--4081.3.6--n--61efafd0e9-k8s-goldmane--666569f655--t2l45-eth0" Nov 1 01:16:51.912633 containerd[1824]: 2025-11-01 01:16:51.890 [INFO][5915] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="55bcdc054e4316fe75024a039bc50338b60b5b0b280927e4bc5bd05a7d38e7b0" Namespace="calico-system" Pod="goldmane-666569f655-t2l45" WorkloadEndpoint="ci--4081.3.6--n--61efafd0e9-k8s-goldmane--666569f655--t2l45-eth0" Nov 1 01:16:51.912633 containerd[1824]: 2025-11-01 01:16:51.891 [INFO][5915] cni-plugin/k8s.go 446: Added Mac, interface name, and active 
container ID to endpoint ContainerID="55bcdc054e4316fe75024a039bc50338b60b5b0b280927e4bc5bd05a7d38e7b0" Namespace="calico-system" Pod="goldmane-666569f655-t2l45" WorkloadEndpoint="ci--4081.3.6--n--61efafd0e9-k8s-goldmane--666569f655--t2l45-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--61efafd0e9-k8s-goldmane--666569f655--t2l45-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"07dd73e6-dcff-41d4-b90f-7314863a267d", ResourceVersion:"961", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 16, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-61efafd0e9", ContainerID:"55bcdc054e4316fe75024a039bc50338b60b5b0b280927e4bc5bd05a7d38e7b0", Pod:"goldmane-666569f655-t2l45", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.7.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calia57a65b701d", MAC:"ca:ad:cd:3b:83:04", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:16:51.912633 containerd[1824]: 2025-11-01 01:16:51.908 [INFO][5915] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="55bcdc054e4316fe75024a039bc50338b60b5b0b280927e4bc5bd05a7d38e7b0" Namespace="calico-system" 
Pod="goldmane-666569f655-t2l45" WorkloadEndpoint="ci--4081.3.6--n--61efafd0e9-k8s-goldmane--666569f655--t2l45-eth0" Nov 1 01:16:51.925786 containerd[1824]: time="2025-11-01T01:16:51.923227031Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 01:16:51.925786 containerd[1824]: time="2025-11-01T01:16:51.923256866Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 01:16:51.925786 containerd[1824]: time="2025-11-01T01:16:51.923264108Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 01:16:51.925786 containerd[1824]: time="2025-11-01T01:16:51.923301102Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 01:16:51.944346 systemd[1]: Started cri-containerd-55bcdc054e4316fe75024a039bc50338b60b5b0b280927e4bc5bd05a7d38e7b0.scope - libcontainer container 55bcdc054e4316fe75024a039bc50338b60b5b0b280927e4bc5bd05a7d38e7b0. 
Nov 1 01:16:51.950708 systemd-networkd[1613]: cali7913462f559: Link UP Nov 1 01:16:51.950849 systemd-networkd[1613]: cali7913462f559: Gained carrier Nov 1 01:16:51.957567 containerd[1824]: 2025-11-01 01:16:51.710 [INFO][5931] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--61efafd0e9-k8s-coredns--668d6bf9bc--pdzpp-eth0 coredns-668d6bf9bc- kube-system 8933d3a0-6f08-44c4-b76e-2bacec659217 963 0 2025-11-01 01:16:15 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.6-n-61efafd0e9 coredns-668d6bf9bc-pdzpp eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali7913462f559 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="ed55687aa43c69e9c463d194d476460ddfa91db0a1f42e2de618a4ec1899904b" Namespace="kube-system" Pod="coredns-668d6bf9bc-pdzpp" WorkloadEndpoint="ci--4081.3.6--n--61efafd0e9-k8s-coredns--668d6bf9bc--pdzpp-" Nov 1 01:16:51.957567 containerd[1824]: 2025-11-01 01:16:51.710 [INFO][5931] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ed55687aa43c69e9c463d194d476460ddfa91db0a1f42e2de618a4ec1899904b" Namespace="kube-system" Pod="coredns-668d6bf9bc-pdzpp" WorkloadEndpoint="ci--4081.3.6--n--61efafd0e9-k8s-coredns--668d6bf9bc--pdzpp-eth0" Nov 1 01:16:51.957567 containerd[1824]: 2025-11-01 01:16:51.722 [INFO][5983] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ed55687aa43c69e9c463d194d476460ddfa91db0a1f42e2de618a4ec1899904b" HandleID="k8s-pod-network.ed55687aa43c69e9c463d194d476460ddfa91db0a1f42e2de618a4ec1899904b" Workload="ci--4081.3.6--n--61efafd0e9-k8s-coredns--668d6bf9bc--pdzpp-eth0" Nov 1 01:16:51.957567 containerd[1824]: 2025-11-01 01:16:51.722 [INFO][5983] ipam/ipam_plugin.go 275: Auto assigning IP 
ContainerID="ed55687aa43c69e9c463d194d476460ddfa91db0a1f42e2de618a4ec1899904b" HandleID="k8s-pod-network.ed55687aa43c69e9c463d194d476460ddfa91db0a1f42e2de618a4ec1899904b" Workload="ci--4081.3.6--n--61efafd0e9-k8s-coredns--668d6bf9bc--pdzpp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003d5880), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.6-n-61efafd0e9", "pod":"coredns-668d6bf9bc-pdzpp", "timestamp":"2025-11-01 01:16:51.722479107 +0000 UTC"}, Hostname:"ci-4081.3.6-n-61efafd0e9", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 01:16:51.957567 containerd[1824]: 2025-11-01 01:16:51.722 [INFO][5983] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:16:51.957567 containerd[1824]: 2025-11-01 01:16:51.878 [INFO][5983] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 01:16:51.957567 containerd[1824]: 2025-11-01 01:16:51.878 [INFO][5983] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-61efafd0e9' Nov 1 01:16:51.957567 containerd[1824]: 2025-11-01 01:16:51.925 [INFO][5983] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ed55687aa43c69e9c463d194d476460ddfa91db0a1f42e2de618a4ec1899904b" host="ci-4081.3.6-n-61efafd0e9" Nov 1 01:16:51.957567 containerd[1824]: 2025-11-01 01:16:51.934 [INFO][5983] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-61efafd0e9" Nov 1 01:16:51.957567 containerd[1824]: 2025-11-01 01:16:51.940 [INFO][5983] ipam/ipam.go 511: Trying affinity for 192.168.7.192/26 host="ci-4081.3.6-n-61efafd0e9" Nov 1 01:16:51.957567 containerd[1824]: 2025-11-01 01:16:51.941 [INFO][5983] ipam/ipam.go 158: Attempting to load block cidr=192.168.7.192/26 host="ci-4081.3.6-n-61efafd0e9" Nov 1 01:16:51.957567 containerd[1824]: 2025-11-01 01:16:51.942 [INFO][5983] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.7.192/26 host="ci-4081.3.6-n-61efafd0e9" Nov 1 01:16:51.957567 containerd[1824]: 2025-11-01 01:16:51.942 [INFO][5983] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.7.192/26 handle="k8s-pod-network.ed55687aa43c69e9c463d194d476460ddfa91db0a1f42e2de618a4ec1899904b" host="ci-4081.3.6-n-61efafd0e9" Nov 1 01:16:51.957567 containerd[1824]: 2025-11-01 01:16:51.943 [INFO][5983] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.ed55687aa43c69e9c463d194d476460ddfa91db0a1f42e2de618a4ec1899904b Nov 1 01:16:51.957567 containerd[1824]: 2025-11-01 01:16:51.945 [INFO][5983] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.7.192/26 handle="k8s-pod-network.ed55687aa43c69e9c463d194d476460ddfa91db0a1f42e2de618a4ec1899904b" host="ci-4081.3.6-n-61efafd0e9" Nov 1 01:16:51.957567 containerd[1824]: 2025-11-01 01:16:51.948 [INFO][5983] ipam/ipam.go 1262: Successfully claimed 
IPs: [192.168.7.200/26] block=192.168.7.192/26 handle="k8s-pod-network.ed55687aa43c69e9c463d194d476460ddfa91db0a1f42e2de618a4ec1899904b" host="ci-4081.3.6-n-61efafd0e9" Nov 1 01:16:51.957567 containerd[1824]: 2025-11-01 01:16:51.948 [INFO][5983] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.7.200/26] handle="k8s-pod-network.ed55687aa43c69e9c463d194d476460ddfa91db0a1f42e2de618a4ec1899904b" host="ci-4081.3.6-n-61efafd0e9" Nov 1 01:16:51.957567 containerd[1824]: 2025-11-01 01:16:51.948 [INFO][5983] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:16:51.957567 containerd[1824]: 2025-11-01 01:16:51.948 [INFO][5983] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.7.200/26] IPv6=[] ContainerID="ed55687aa43c69e9c463d194d476460ddfa91db0a1f42e2de618a4ec1899904b" HandleID="k8s-pod-network.ed55687aa43c69e9c463d194d476460ddfa91db0a1f42e2de618a4ec1899904b" Workload="ci--4081.3.6--n--61efafd0e9-k8s-coredns--668d6bf9bc--pdzpp-eth0" Nov 1 01:16:51.958254 containerd[1824]: 2025-11-01 01:16:51.949 [INFO][5931] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ed55687aa43c69e9c463d194d476460ddfa91db0a1f42e2de618a4ec1899904b" Namespace="kube-system" Pod="coredns-668d6bf9bc-pdzpp" WorkloadEndpoint="ci--4081.3.6--n--61efafd0e9-k8s-coredns--668d6bf9bc--pdzpp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--61efafd0e9-k8s-coredns--668d6bf9bc--pdzpp-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"8933d3a0-6f08-44c4-b76e-2bacec659217", ResourceVersion:"963", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 16, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-61efafd0e9", ContainerID:"", Pod:"coredns-668d6bf9bc-pdzpp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.7.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7913462f559", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:16:51.958254 containerd[1824]: 2025-11-01 01:16:51.949 [INFO][5931] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.7.200/32] ContainerID="ed55687aa43c69e9c463d194d476460ddfa91db0a1f42e2de618a4ec1899904b" Namespace="kube-system" Pod="coredns-668d6bf9bc-pdzpp" WorkloadEndpoint="ci--4081.3.6--n--61efafd0e9-k8s-coredns--668d6bf9bc--pdzpp-eth0" Nov 1 01:16:51.958254 containerd[1824]: 2025-11-01 01:16:51.949 [INFO][5931] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7913462f559 ContainerID="ed55687aa43c69e9c463d194d476460ddfa91db0a1f42e2de618a4ec1899904b" Namespace="kube-system" Pod="coredns-668d6bf9bc-pdzpp" WorkloadEndpoint="ci--4081.3.6--n--61efafd0e9-k8s-coredns--668d6bf9bc--pdzpp-eth0" Nov 1 01:16:51.958254 containerd[1824]: 2025-11-01 01:16:51.950 [INFO][5931] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="ed55687aa43c69e9c463d194d476460ddfa91db0a1f42e2de618a4ec1899904b" Namespace="kube-system" Pod="coredns-668d6bf9bc-pdzpp" WorkloadEndpoint="ci--4081.3.6--n--61efafd0e9-k8s-coredns--668d6bf9bc--pdzpp-eth0" Nov 1 01:16:51.958254 containerd[1824]: 2025-11-01 01:16:51.951 [INFO][5931] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ed55687aa43c69e9c463d194d476460ddfa91db0a1f42e2de618a4ec1899904b" Namespace="kube-system" Pod="coredns-668d6bf9bc-pdzpp" WorkloadEndpoint="ci--4081.3.6--n--61efafd0e9-k8s-coredns--668d6bf9bc--pdzpp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--61efafd0e9-k8s-coredns--668d6bf9bc--pdzpp-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"8933d3a0-6f08-44c4-b76e-2bacec659217", ResourceVersion:"963", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 16, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-61efafd0e9", ContainerID:"ed55687aa43c69e9c463d194d476460ddfa91db0a1f42e2de618a4ec1899904b", Pod:"coredns-668d6bf9bc-pdzpp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.7.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7913462f559", MAC:"4a:ce:b0:30:40:d5", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:16:51.958254 containerd[1824]: 2025-11-01 01:16:51.956 [INFO][5931] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ed55687aa43c69e9c463d194d476460ddfa91db0a1f42e2de618a4ec1899904b" Namespace="kube-system" Pod="coredns-668d6bf9bc-pdzpp" WorkloadEndpoint="ci--4081.3.6--n--61efafd0e9-k8s-coredns--668d6bf9bc--pdzpp-eth0" Nov 1 01:16:51.966410 containerd[1824]: time="2025-11-01T01:16:51.966232338Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 01:16:51.966410 containerd[1824]: time="2025-11-01T01:16:51.966400797Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 01:16:51.966529 containerd[1824]: time="2025-11-01T01:16:51.966416954Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 01:16:51.966529 containerd[1824]: time="2025-11-01T01:16:51.966462511Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 01:16:51.967755 containerd[1824]: time="2025-11-01T01:16:51.967736463Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-t2l45,Uid:07dd73e6-dcff-41d4-b90f-7314863a267d,Namespace:calico-system,Attempt:1,} returns sandbox id \"55bcdc054e4316fe75024a039bc50338b60b5b0b280927e4bc5bd05a7d38e7b0\"" Nov 1 01:16:51.988644 systemd[1]: Started cri-containerd-ed55687aa43c69e9c463d194d476460ddfa91db0a1f42e2de618a4ec1899904b.scope - libcontainer container ed55687aa43c69e9c463d194d476460ddfa91db0a1f42e2de618a4ec1899904b. Nov 1 01:16:52.079596 containerd[1824]: time="2025-11-01T01:16:52.079566776Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-pdzpp,Uid:8933d3a0-6f08-44c4-b76e-2bacec659217,Namespace:kube-system,Attempt:1,} returns sandbox id \"ed55687aa43c69e9c463d194d476460ddfa91db0a1f42e2de618a4ec1899904b\"" Nov 1 01:16:52.081246 containerd[1824]: time="2025-11-01T01:16:52.081228807Z" level=info msg="CreateContainer within sandbox \"ed55687aa43c69e9c463d194d476460ddfa91db0a1f42e2de618a4ec1899904b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 1 01:16:52.085981 containerd[1824]: time="2025-11-01T01:16:52.085966627Z" level=info msg="CreateContainer within sandbox \"ed55687aa43c69e9c463d194d476460ddfa91db0a1f42e2de618a4ec1899904b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"170ac879dd377e55c67be0381a8983f2008c48422c585fc348c4364e944804d2\"" Nov 1 01:16:52.086176 containerd[1824]: time="2025-11-01T01:16:52.086164874Z" level=info msg="StartContainer for \"170ac879dd377e55c67be0381a8983f2008c48422c585fc348c4364e944804d2\"" Nov 1 01:16:52.115372 systemd[1]: Started cri-containerd-170ac879dd377e55c67be0381a8983f2008c48422c585fc348c4364e944804d2.scope - libcontainer container 170ac879dd377e55c67be0381a8983f2008c48422c585fc348c4364e944804d2. 
Nov 1 01:16:52.125643 containerd[1824]: time="2025-11-01T01:16:52.125620841Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:16:52.126073 containerd[1824]: time="2025-11-01T01:16:52.126048913Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 1 01:16:52.126151 containerd[1824]: time="2025-11-01T01:16:52.126096135Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 1 01:16:52.126196 kubelet[3089]: E1101 01:16:52.126174 3089 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 01:16:52.126359 kubelet[3089]: E1101 01:16:52.126210 3089 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 01:16:52.126394 kubelet[3089]: E1101 01:16:52.126353 3089 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9wrs8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-895fdb58f-xjcnc_calico-system(c808102b-d4ca-4405-80f4-fc0935baaa15): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 1 01:16:52.126493 containerd[1824]: time="2025-11-01T01:16:52.126374910Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 1 01:16:52.127496 kubelet[3089]: E1101 01:16:52.127480 3089 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-895fdb58f-xjcnc" podUID="c808102b-d4ca-4405-80f4-fc0935baaa15" Nov 1 01:16:52.127896 containerd[1824]: 
time="2025-11-01T01:16:52.127885536Z" level=info msg="StartContainer for \"170ac879dd377e55c67be0381a8983f2008c48422c585fc348c4364e944804d2\" returns successfully" Nov 1 01:16:52.146430 systemd-networkd[1613]: cali430ff05a7e4: Gained IPv6LL Nov 1 01:16:52.483104 containerd[1824]: time="2025-11-01T01:16:52.483062131Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:16:52.483581 containerd[1824]: time="2025-11-01T01:16:52.483556960Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 1 01:16:52.483648 containerd[1824]: time="2025-11-01T01:16:52.483620111Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 1 01:16:52.483734 kubelet[3089]: E1101 01:16:52.483710 3089 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 01:16:52.483766 kubelet[3089]: E1101 01:16:52.483745 3089 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 01:16:52.483861 kubelet[3089]: E1101 01:16:52.483830 3089 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tt5lj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-t2l45_calico-system(07dd73e6-dcff-41d4-b90f-7314863a267d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 1 01:16:52.485018 kubelet[3089]: E1101 01:16:52.485002 3089 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-t2l45" podUID="07dd73e6-dcff-41d4-b90f-7314863a267d" Nov 1 01:16:52.529306 systemd-networkd[1613]: cali580c3110e84: Gained IPv6LL Nov 1 01:16:52.653035 systemd[1]: run-netns-cni\x2d2b3448d0\x2d803c\x2d687d\x2d08a4\x2dd1fa1790344c.mount: Deactivated successfully. 
Nov 1 01:16:52.754549 kubelet[3089]: E1101 01:16:52.754488 3089 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-t2l45" podUID="07dd73e6-dcff-41d4-b90f-7314863a267d" Nov 1 01:16:52.755020 kubelet[3089]: E1101 01:16:52.754996 3089 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-895fdb58f-xjcnc" podUID="c808102b-d4ca-4405-80f4-fc0935baaa15" Nov 1 01:16:52.755114 kubelet[3089]: E1101 01:16:52.755101 3089 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5847c846dc-9sdnh" podUID="e53afa66-571d-49bb-8168-fa6f398b3e23" Nov 1 01:16:52.755217 kubelet[3089]: E1101 01:16:52.755195 3089 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5847c846dc-7gbvk" podUID="f2ac6185-3f80-4cfa-971b-e2d87f342f5e" Nov 1 01:16:52.760589 kubelet[3089]: I1101 01:16:52.760550 3089 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-pdzpp" podStartSLOduration=37.760539434 podStartE2EDuration="37.760539434s" podCreationTimestamp="2025-11-01 01:16:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 01:16:52.75994347 +0000 UTC m=+43.204578598" watchObservedRunningTime="2025-11-01 01:16:52.760539434 +0000 UTC m=+43.205174550" Nov 1 01:16:52.913290 systemd-networkd[1613]: cali76339371f5d: Gained IPv6LL Nov 1 01:16:53.427362 systemd-networkd[1613]: calia57a65b701d: Gained IPv6LL Nov 1 01:16:53.757435 kubelet[3089]: E1101 01:16:53.757402 3089 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-t2l45" podUID="07dd73e6-dcff-41d4-b90f-7314863a267d" Nov 1 01:16:53.757939 kubelet[3089]: E1101 01:16:53.757555 3089 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-895fdb58f-xjcnc" podUID="c808102b-d4ca-4405-80f4-fc0935baaa15" Nov 1 01:16:53.938376 systemd-networkd[1613]: cali7913462f559: Gained IPv6LL Nov 1 01:16:56.601840 containerd[1824]: time="2025-11-01T01:16:56.601744713Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 1 01:16:56.965379 containerd[1824]: time="2025-11-01T01:16:56.965252346Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:16:56.966021 containerd[1824]: time="2025-11-01T01:16:56.965912629Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 1 01:16:56.966021 containerd[1824]: time="2025-11-01T01:16:56.965979351Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 1 01:16:56.966119 kubelet[3089]: E1101 01:16:56.966097 3089 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 01:16:56.966466 kubelet[3089]: E1101 
01:16:56.966128 3089 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 01:16:56.966466 kubelet[3089]: E1101 01:16:56.966197 3089 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:f106df30827649c0a1b41319c8c22502,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9zc5c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
whisker-f8c77c549-chp4w_calico-system(a870b1c4-6e9d-4a96-936e-df1c8a98c970): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 1 01:16:56.967965 containerd[1824]: time="2025-11-01T01:16:56.967808499Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 1 01:16:57.315965 containerd[1824]: time="2025-11-01T01:16:57.315703670Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:16:57.316661 containerd[1824]: time="2025-11-01T01:16:57.316610568Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 1 01:16:57.316717 containerd[1824]: time="2025-11-01T01:16:57.316674040Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 1 01:16:57.316836 kubelet[3089]: E1101 01:16:57.316781 3089 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 01:16:57.316836 kubelet[3089]: E1101 01:16:57.316813 3089 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 01:16:57.316900 kubelet[3089]: E1101 01:16:57.316879 3089 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9zc5c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices
:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-f8c77c549-chp4w_calico-system(a870b1c4-6e9d-4a96-936e-df1c8a98c970): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 1 01:16:57.318674 kubelet[3089]: E1101 01:16:57.318654 3089 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-f8c77c549-chp4w" podUID="a870b1c4-6e9d-4a96-936e-df1c8a98c970" Nov 1 01:17:01.599604 containerd[1824]: time="2025-11-01T01:17:01.599572002Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 1 01:17:01.979975 containerd[1824]: time="2025-11-01T01:17:01.979948544Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:17:01.989003 containerd[1824]: time="2025-11-01T01:17:01.988972488Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 1 01:17:01.989071 containerd[1824]: time="2025-11-01T01:17:01.989025543Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 1 01:17:01.989170 kubelet[3089]: E1101 01:17:01.989146 3089 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 01:17:01.989407 kubelet[3089]: E1101 01:17:01.989180 3089 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 01:17:01.989407 kubelet[3089]: E1101 01:17:01.989276 3089 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mr67n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-kckfw_calico-system(fe329cee-9aa5-425f-b021-f1def80c02c8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 1 01:17:01.990946 containerd[1824]: time="2025-11-01T01:17:01.990899532Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 1 01:17:02.373358 containerd[1824]: time="2025-11-01T01:17:02.373244516Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:17:02.373910 containerd[1824]: time="2025-11-01T01:17:02.373884214Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 1 01:17:02.373980 containerd[1824]: time="2025-11-01T01:17:02.373957590Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 1 01:17:02.374109 kubelet[3089]: E1101 01:17:02.374087 3089 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 01:17:02.374140 kubelet[3089]: E1101 01:17:02.374118 3089 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 01:17:02.374207 kubelet[3089]: E1101 
01:17:02.374183 3089 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mr67n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
csi-node-driver-kckfw_calico-system(fe329cee-9aa5-425f-b021-f1def80c02c8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 1 01:17:02.375315 kubelet[3089]: E1101 01:17:02.375299 3089 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-kckfw" podUID="fe329cee-9aa5-425f-b021-f1def80c02c8" Nov 1 01:17:04.599521 containerd[1824]: time="2025-11-01T01:17:04.599463927Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 1 01:17:04.955098 containerd[1824]: time="2025-11-01T01:17:04.955065900Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:17:04.955725 containerd[1824]: time="2025-11-01T01:17:04.955696711Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": 
ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 1 01:17:04.955795 containerd[1824]: time="2025-11-01T01:17:04.955771016Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 1 01:17:04.955909 kubelet[3089]: E1101 01:17:04.955859 3089 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 01:17:04.955909 kubelet[3089]: E1101 01:17:04.955890 3089 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 01:17:04.956099 kubelet[3089]: E1101 01:17:04.955963 3089 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9wrs8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-895fdb58f-xjcnc_calico-system(c808102b-d4ca-4405-80f4-fc0935baaa15): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 1 01:17:04.957123 kubelet[3089]: E1101 01:17:04.957077 3089 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-895fdb58f-xjcnc" podUID="c808102b-d4ca-4405-80f4-fc0935baaa15" Nov 1 01:17:05.599943 containerd[1824]: time="2025-11-01T01:17:05.599884416Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 1 01:17:05.985923 containerd[1824]: 
time="2025-11-01T01:17:05.985815385Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:17:05.990467 containerd[1824]: time="2025-11-01T01:17:05.990443595Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 1 01:17:05.990521 containerd[1824]: time="2025-11-01T01:17:05.990498434Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 1 01:17:05.990624 kubelet[3089]: E1101 01:17:05.990566 3089 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 01:17:05.990624 kubelet[3089]: E1101 01:17:05.990597 3089 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 01:17:05.990822 kubelet[3089]: E1101 01:17:05.990670 3089 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tt5lj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-t2l45_calico-system(07dd73e6-dcff-41d4-b90f-7314863a267d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 1 01:17:05.991893 kubelet[3089]: E1101 01:17:05.991850 3089 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-t2l45" podUID="07dd73e6-dcff-41d4-b90f-7314863a267d" Nov 1 01:17:07.602167 containerd[1824]: time="2025-11-01T01:17:07.602039471Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 01:17:07.946573 containerd[1824]: time="2025-11-01T01:17:07.946540727Z" level=info msg="trying next host - response was 
http.StatusNotFound" host=ghcr.io Nov 1 01:17:07.947122 containerd[1824]: time="2025-11-01T01:17:07.947094622Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 01:17:07.947188 containerd[1824]: time="2025-11-01T01:17:07.947157922Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 1 01:17:07.947270 kubelet[3089]: E1101 01:17:07.947249 3089 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 01:17:07.947452 kubelet[3089]: E1101 01:17:07.947279 3089 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 01:17:07.947452 kubelet[3089]: E1101 01:17:07.947412 3089 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zt7xl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5847c846dc-7gbvk_calico-apiserver(f2ac6185-3f80-4cfa-971b-e2d87f342f5e): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 01:17:07.947544 containerd[1824]: time="2025-11-01T01:17:07.947476649Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 01:17:07.948551 kubelet[3089]: E1101 01:17:07.948536 3089 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5847c846dc-7gbvk" podUID="f2ac6185-3f80-4cfa-971b-e2d87f342f5e" Nov 1 01:17:08.325230 containerd[1824]: time="2025-11-01T01:17:08.325149722Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:17:08.325731 containerd[1824]: time="2025-11-01T01:17:08.325713093Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 01:17:08.325776 containerd[1824]: time="2025-11-01T01:17:08.325757790Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 1 01:17:08.325855 kubelet[3089]: E1101 01:17:08.325833 3089 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 01:17:08.325887 kubelet[3089]: E1101 01:17:08.325865 3089 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 01:17:08.325965 kubelet[3089]: E1101 01:17:08.325945 3089 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hlkp2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5847c846dc-9sdnh_calico-apiserver(e53afa66-571d-49bb-8168-fa6f398b3e23): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 01:17:08.327087 kubelet[3089]: E1101 01:17:08.327072 3089 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5847c846dc-9sdnh" podUID="e53afa66-571d-49bb-8168-fa6f398b3e23" Nov 1 01:17:09.596341 containerd[1824]: time="2025-11-01T01:17:09.596250496Z" level=info msg="StopPodSandbox for \"8d79ea38d44eeb399d4895b42203ac4cdba8d2dbd9a700fb3227a7bf10979fb1\"" Nov 1 01:17:09.604096 
kubelet[3089]: E1101 01:17:09.603932 3089 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-f8c77c549-chp4w" podUID="a870b1c4-6e9d-4a96-936e-df1c8a98c970" Nov 1 01:17:09.666605 containerd[1824]: 2025-11-01 01:17:09.647 [WARNING][6264] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8d79ea38d44eeb399d4895b42203ac4cdba8d2dbd9a700fb3227a7bf10979fb1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--61efafd0e9-k8s-calico--apiserver--5847c846dc--9sdnh-eth0", GenerateName:"calico-apiserver-5847c846dc-", Namespace:"calico-apiserver", SelfLink:"", UID:"e53afa66-571d-49bb-8168-fa6f398b3e23", ResourceVersion:"1009", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 16, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5847c846dc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-61efafd0e9", ContainerID:"2128b2a833de4153f966fe6fad4876194e8fca90666fbb108bbe4933346d80e5", Pod:"calico-apiserver-5847c846dc-9sdnh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.7.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali430ff05a7e4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:17:09.666605 containerd[1824]: 2025-11-01 01:17:09.647 [INFO][6264] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8d79ea38d44eeb399d4895b42203ac4cdba8d2dbd9a700fb3227a7bf10979fb1" Nov 1 01:17:09.666605 containerd[1824]: 2025-11-01 01:17:09.647 [INFO][6264] cni-plugin/dataplane_linux.go 555: 
CleanUpNamespace called with no netns name, ignoring. ContainerID="8d79ea38d44eeb399d4895b42203ac4cdba8d2dbd9a700fb3227a7bf10979fb1" iface="eth0" netns="" Nov 1 01:17:09.666605 containerd[1824]: 2025-11-01 01:17:09.647 [INFO][6264] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8d79ea38d44eeb399d4895b42203ac4cdba8d2dbd9a700fb3227a7bf10979fb1" Nov 1 01:17:09.666605 containerd[1824]: 2025-11-01 01:17:09.647 [INFO][6264] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8d79ea38d44eeb399d4895b42203ac4cdba8d2dbd9a700fb3227a7bf10979fb1" Nov 1 01:17:09.666605 containerd[1824]: 2025-11-01 01:17:09.660 [INFO][6282] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="8d79ea38d44eeb399d4895b42203ac4cdba8d2dbd9a700fb3227a7bf10979fb1" HandleID="k8s-pod-network.8d79ea38d44eeb399d4895b42203ac4cdba8d2dbd9a700fb3227a7bf10979fb1" Workload="ci--4081.3.6--n--61efafd0e9-k8s-calico--apiserver--5847c846dc--9sdnh-eth0" Nov 1 01:17:09.666605 containerd[1824]: 2025-11-01 01:17:09.660 [INFO][6282] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:17:09.666605 containerd[1824]: 2025-11-01 01:17:09.660 [INFO][6282] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 01:17:09.666605 containerd[1824]: 2025-11-01 01:17:09.664 [WARNING][6282] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8d79ea38d44eeb399d4895b42203ac4cdba8d2dbd9a700fb3227a7bf10979fb1" HandleID="k8s-pod-network.8d79ea38d44eeb399d4895b42203ac4cdba8d2dbd9a700fb3227a7bf10979fb1" Workload="ci--4081.3.6--n--61efafd0e9-k8s-calico--apiserver--5847c846dc--9sdnh-eth0" Nov 1 01:17:09.666605 containerd[1824]: 2025-11-01 01:17:09.664 [INFO][6282] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="8d79ea38d44eeb399d4895b42203ac4cdba8d2dbd9a700fb3227a7bf10979fb1" HandleID="k8s-pod-network.8d79ea38d44eeb399d4895b42203ac4cdba8d2dbd9a700fb3227a7bf10979fb1" Workload="ci--4081.3.6--n--61efafd0e9-k8s-calico--apiserver--5847c846dc--9sdnh-eth0" Nov 1 01:17:09.666605 containerd[1824]: 2025-11-01 01:17:09.664 [INFO][6282] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:17:09.666605 containerd[1824]: 2025-11-01 01:17:09.665 [INFO][6264] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8d79ea38d44eeb399d4895b42203ac4cdba8d2dbd9a700fb3227a7bf10979fb1" Nov 1 01:17:09.666605 containerd[1824]: time="2025-11-01T01:17:09.666600022Z" level=info msg="TearDown network for sandbox \"8d79ea38d44eeb399d4895b42203ac4cdba8d2dbd9a700fb3227a7bf10979fb1\" successfully" Nov 1 01:17:09.666964 containerd[1824]: time="2025-11-01T01:17:09.666616343Z" level=info msg="StopPodSandbox for \"8d79ea38d44eeb399d4895b42203ac4cdba8d2dbd9a700fb3227a7bf10979fb1\" returns successfully" Nov 1 01:17:09.667022 containerd[1824]: time="2025-11-01T01:17:09.667009669Z" level=info msg="RemovePodSandbox for \"8d79ea38d44eeb399d4895b42203ac4cdba8d2dbd9a700fb3227a7bf10979fb1\"" Nov 1 01:17:09.667046 containerd[1824]: time="2025-11-01T01:17:09.667029480Z" level=info msg="Forcibly stopping sandbox \"8d79ea38d44eeb399d4895b42203ac4cdba8d2dbd9a700fb3227a7bf10979fb1\"" Nov 1 01:17:09.708060 containerd[1824]: 2025-11-01 01:17:09.684 [WARNING][6306] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8d79ea38d44eeb399d4895b42203ac4cdba8d2dbd9a700fb3227a7bf10979fb1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--61efafd0e9-k8s-calico--apiserver--5847c846dc--9sdnh-eth0", GenerateName:"calico-apiserver-5847c846dc-", Namespace:"calico-apiserver", SelfLink:"", UID:"e53afa66-571d-49bb-8168-fa6f398b3e23", ResourceVersion:"1009", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 16, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5847c846dc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-61efafd0e9", ContainerID:"2128b2a833de4153f966fe6fad4876194e8fca90666fbb108bbe4933346d80e5", Pod:"calico-apiserver-5847c846dc-9sdnh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.7.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali430ff05a7e4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:17:09.708060 containerd[1824]: 2025-11-01 01:17:09.684 [INFO][6306] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8d79ea38d44eeb399d4895b42203ac4cdba8d2dbd9a700fb3227a7bf10979fb1" Nov 1 01:17:09.708060 containerd[1824]: 2025-11-01 01:17:09.684 [INFO][6306] cni-plugin/dataplane_linux.go 555: 
CleanUpNamespace called with no netns name, ignoring. ContainerID="8d79ea38d44eeb399d4895b42203ac4cdba8d2dbd9a700fb3227a7bf10979fb1" iface="eth0" netns="" Nov 1 01:17:09.708060 containerd[1824]: 2025-11-01 01:17:09.684 [INFO][6306] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8d79ea38d44eeb399d4895b42203ac4cdba8d2dbd9a700fb3227a7bf10979fb1" Nov 1 01:17:09.708060 containerd[1824]: 2025-11-01 01:17:09.684 [INFO][6306] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8d79ea38d44eeb399d4895b42203ac4cdba8d2dbd9a700fb3227a7bf10979fb1" Nov 1 01:17:09.708060 containerd[1824]: 2025-11-01 01:17:09.702 [INFO][6321] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="8d79ea38d44eeb399d4895b42203ac4cdba8d2dbd9a700fb3227a7bf10979fb1" HandleID="k8s-pod-network.8d79ea38d44eeb399d4895b42203ac4cdba8d2dbd9a700fb3227a7bf10979fb1" Workload="ci--4081.3.6--n--61efafd0e9-k8s-calico--apiserver--5847c846dc--9sdnh-eth0" Nov 1 01:17:09.708060 containerd[1824]: 2025-11-01 01:17:09.702 [INFO][6321] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:17:09.708060 containerd[1824]: 2025-11-01 01:17:09.702 [INFO][6321] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 01:17:09.708060 containerd[1824]: 2025-11-01 01:17:09.705 [WARNING][6321] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8d79ea38d44eeb399d4895b42203ac4cdba8d2dbd9a700fb3227a7bf10979fb1" HandleID="k8s-pod-network.8d79ea38d44eeb399d4895b42203ac4cdba8d2dbd9a700fb3227a7bf10979fb1" Workload="ci--4081.3.6--n--61efafd0e9-k8s-calico--apiserver--5847c846dc--9sdnh-eth0" Nov 1 01:17:09.708060 containerd[1824]: 2025-11-01 01:17:09.705 [INFO][6321] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="8d79ea38d44eeb399d4895b42203ac4cdba8d2dbd9a700fb3227a7bf10979fb1" HandleID="k8s-pod-network.8d79ea38d44eeb399d4895b42203ac4cdba8d2dbd9a700fb3227a7bf10979fb1" Workload="ci--4081.3.6--n--61efafd0e9-k8s-calico--apiserver--5847c846dc--9sdnh-eth0" Nov 1 01:17:09.708060 containerd[1824]: 2025-11-01 01:17:09.706 [INFO][6321] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:17:09.708060 containerd[1824]: 2025-11-01 01:17:09.707 [INFO][6306] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8d79ea38d44eeb399d4895b42203ac4cdba8d2dbd9a700fb3227a7bf10979fb1" Nov 1 01:17:09.708371 containerd[1824]: time="2025-11-01T01:17:09.708087666Z" level=info msg="TearDown network for sandbox \"8d79ea38d44eeb399d4895b42203ac4cdba8d2dbd9a700fb3227a7bf10979fb1\" successfully" Nov 1 01:17:09.710736 containerd[1824]: time="2025-11-01T01:17:09.710685356Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8d79ea38d44eeb399d4895b42203ac4cdba8d2dbd9a700fb3227a7bf10979fb1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Nov 1 01:17:09.710736 containerd[1824]: time="2025-11-01T01:17:09.710716951Z" level=info msg="RemovePodSandbox \"8d79ea38d44eeb399d4895b42203ac4cdba8d2dbd9a700fb3227a7bf10979fb1\" returns successfully" Nov 1 01:17:09.711033 containerd[1824]: time="2025-11-01T01:17:09.710998445Z" level=info msg="StopPodSandbox for \"47e948a33de73f49c8858f1c068620217c8069e91ecfbb0fe71cd4123283b495\"" Nov 1 01:17:09.744373 containerd[1824]: 2025-11-01 01:17:09.728 [WARNING][6341] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="47e948a33de73f49c8858f1c068620217c8069e91ecfbb0fe71cd4123283b495" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--61efafd0e9-k8s-calico--apiserver--5847c846dc--7gbvk-eth0", GenerateName:"calico-apiserver-5847c846dc-", Namespace:"calico-apiserver", SelfLink:"", UID:"f2ac6185-3f80-4cfa-971b-e2d87f342f5e", ResourceVersion:"1008", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 16, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5847c846dc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-61efafd0e9", ContainerID:"a0df76c788c456e0ab03c1a6022ed58af8cd23b38d3c9e89e315f2f832d13a0a", Pod:"calico-apiserver-5847c846dc-7gbvk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.7.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali580c3110e84", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:17:09.744373 containerd[1824]: 2025-11-01 01:17:09.728 [INFO][6341] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="47e948a33de73f49c8858f1c068620217c8069e91ecfbb0fe71cd4123283b495" Nov 1 01:17:09.744373 containerd[1824]: 2025-11-01 01:17:09.728 [INFO][6341] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="47e948a33de73f49c8858f1c068620217c8069e91ecfbb0fe71cd4123283b495" iface="eth0" netns="" Nov 1 01:17:09.744373 containerd[1824]: 2025-11-01 01:17:09.728 [INFO][6341] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="47e948a33de73f49c8858f1c068620217c8069e91ecfbb0fe71cd4123283b495" Nov 1 01:17:09.744373 containerd[1824]: 2025-11-01 01:17:09.728 [INFO][6341] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="47e948a33de73f49c8858f1c068620217c8069e91ecfbb0fe71cd4123283b495" Nov 1 01:17:09.744373 containerd[1824]: 2025-11-01 01:17:09.738 [INFO][6358] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="47e948a33de73f49c8858f1c068620217c8069e91ecfbb0fe71cd4123283b495" HandleID="k8s-pod-network.47e948a33de73f49c8858f1c068620217c8069e91ecfbb0fe71cd4123283b495" Workload="ci--4081.3.6--n--61efafd0e9-k8s-calico--apiserver--5847c846dc--7gbvk-eth0" Nov 1 01:17:09.744373 containerd[1824]: 2025-11-01 01:17:09.738 [INFO][6358] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:17:09.744373 containerd[1824]: 2025-11-01 01:17:09.738 [INFO][6358] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 01:17:09.744373 containerd[1824]: 2025-11-01 01:17:09.742 [WARNING][6358] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="47e948a33de73f49c8858f1c068620217c8069e91ecfbb0fe71cd4123283b495" HandleID="k8s-pod-network.47e948a33de73f49c8858f1c068620217c8069e91ecfbb0fe71cd4123283b495" Workload="ci--4081.3.6--n--61efafd0e9-k8s-calico--apiserver--5847c846dc--7gbvk-eth0" Nov 1 01:17:09.744373 containerd[1824]: 2025-11-01 01:17:09.742 [INFO][6358] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="47e948a33de73f49c8858f1c068620217c8069e91ecfbb0fe71cd4123283b495" HandleID="k8s-pod-network.47e948a33de73f49c8858f1c068620217c8069e91ecfbb0fe71cd4123283b495" Workload="ci--4081.3.6--n--61efafd0e9-k8s-calico--apiserver--5847c846dc--7gbvk-eth0" Nov 1 01:17:09.744373 containerd[1824]: 2025-11-01 01:17:09.742 [INFO][6358] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:17:09.744373 containerd[1824]: 2025-11-01 01:17:09.743 [INFO][6341] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="47e948a33de73f49c8858f1c068620217c8069e91ecfbb0fe71cd4123283b495" Nov 1 01:17:09.744675 containerd[1824]: time="2025-11-01T01:17:09.744395670Z" level=info msg="TearDown network for sandbox \"47e948a33de73f49c8858f1c068620217c8069e91ecfbb0fe71cd4123283b495\" successfully" Nov 1 01:17:09.744675 containerd[1824]: time="2025-11-01T01:17:09.744411196Z" level=info msg="StopPodSandbox for \"47e948a33de73f49c8858f1c068620217c8069e91ecfbb0fe71cd4123283b495\" returns successfully" Nov 1 01:17:09.744711 containerd[1824]: time="2025-11-01T01:17:09.744673901Z" level=info msg="RemovePodSandbox for \"47e948a33de73f49c8858f1c068620217c8069e91ecfbb0fe71cd4123283b495\"" Nov 1 01:17:09.744711 containerd[1824]: time="2025-11-01T01:17:09.744694233Z" level=info msg="Forcibly stopping sandbox \"47e948a33de73f49c8858f1c068620217c8069e91ecfbb0fe71cd4123283b495\"" Nov 1 01:17:09.781104 containerd[1824]: 2025-11-01 01:17:09.764 [WARNING][6379] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="47e948a33de73f49c8858f1c068620217c8069e91ecfbb0fe71cd4123283b495" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--61efafd0e9-k8s-calico--apiserver--5847c846dc--7gbvk-eth0", GenerateName:"calico-apiserver-5847c846dc-", Namespace:"calico-apiserver", SelfLink:"", UID:"f2ac6185-3f80-4cfa-971b-e2d87f342f5e", ResourceVersion:"1008", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 16, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5847c846dc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-61efafd0e9", ContainerID:"a0df76c788c456e0ab03c1a6022ed58af8cd23b38d3c9e89e315f2f832d13a0a", Pod:"calico-apiserver-5847c846dc-7gbvk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.7.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali580c3110e84", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:17:09.781104 containerd[1824]: 2025-11-01 01:17:09.764 [INFO][6379] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="47e948a33de73f49c8858f1c068620217c8069e91ecfbb0fe71cd4123283b495" Nov 1 01:17:09.781104 containerd[1824]: 2025-11-01 01:17:09.764 [INFO][6379] cni-plugin/dataplane_linux.go 555: 
CleanUpNamespace called with no netns name, ignoring. ContainerID="47e948a33de73f49c8858f1c068620217c8069e91ecfbb0fe71cd4123283b495" iface="eth0" netns="" Nov 1 01:17:09.781104 containerd[1824]: 2025-11-01 01:17:09.764 [INFO][6379] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="47e948a33de73f49c8858f1c068620217c8069e91ecfbb0fe71cd4123283b495" Nov 1 01:17:09.781104 containerd[1824]: 2025-11-01 01:17:09.764 [INFO][6379] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="47e948a33de73f49c8858f1c068620217c8069e91ecfbb0fe71cd4123283b495" Nov 1 01:17:09.781104 containerd[1824]: 2025-11-01 01:17:09.774 [INFO][6396] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="47e948a33de73f49c8858f1c068620217c8069e91ecfbb0fe71cd4123283b495" HandleID="k8s-pod-network.47e948a33de73f49c8858f1c068620217c8069e91ecfbb0fe71cd4123283b495" Workload="ci--4081.3.6--n--61efafd0e9-k8s-calico--apiserver--5847c846dc--7gbvk-eth0" Nov 1 01:17:09.781104 containerd[1824]: 2025-11-01 01:17:09.774 [INFO][6396] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:17:09.781104 containerd[1824]: 2025-11-01 01:17:09.774 [INFO][6396] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 01:17:09.781104 containerd[1824]: 2025-11-01 01:17:09.778 [WARNING][6396] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="47e948a33de73f49c8858f1c068620217c8069e91ecfbb0fe71cd4123283b495" HandleID="k8s-pod-network.47e948a33de73f49c8858f1c068620217c8069e91ecfbb0fe71cd4123283b495" Workload="ci--4081.3.6--n--61efafd0e9-k8s-calico--apiserver--5847c846dc--7gbvk-eth0" Nov 1 01:17:09.781104 containerd[1824]: 2025-11-01 01:17:09.778 [INFO][6396] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="47e948a33de73f49c8858f1c068620217c8069e91ecfbb0fe71cd4123283b495" HandleID="k8s-pod-network.47e948a33de73f49c8858f1c068620217c8069e91ecfbb0fe71cd4123283b495" Workload="ci--4081.3.6--n--61efafd0e9-k8s-calico--apiserver--5847c846dc--7gbvk-eth0" Nov 1 01:17:09.781104 containerd[1824]: 2025-11-01 01:17:09.779 [INFO][6396] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:17:09.781104 containerd[1824]: 2025-11-01 01:17:09.780 [INFO][6379] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="47e948a33de73f49c8858f1c068620217c8069e91ecfbb0fe71cd4123283b495" Nov 1 01:17:09.781431 containerd[1824]: time="2025-11-01T01:17:09.781129037Z" level=info msg="TearDown network for sandbox \"47e948a33de73f49c8858f1c068620217c8069e91ecfbb0fe71cd4123283b495\" successfully" Nov 1 01:17:09.782477 containerd[1824]: time="2025-11-01T01:17:09.782434581Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"47e948a33de73f49c8858f1c068620217c8069e91ecfbb0fe71cd4123283b495\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Nov 1 01:17:09.782477 containerd[1824]: time="2025-11-01T01:17:09.782461279Z" level=info msg="RemovePodSandbox \"47e948a33de73f49c8858f1c068620217c8069e91ecfbb0fe71cd4123283b495\" returns successfully" Nov 1 01:17:09.782701 containerd[1824]: time="2025-11-01T01:17:09.782689918Z" level=info msg="StopPodSandbox for \"a4f4de81d3756b477fcee71a26dbb9156fb3811bd3f012b408dfe0025fb54aff\"" Nov 1 01:17:09.816890 containerd[1824]: 2025-11-01 01:17:09.800 [WARNING][6420] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="a4f4de81d3756b477fcee71a26dbb9156fb3811bd3f012b408dfe0025fb54aff" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--61efafd0e9-k8s-goldmane--666569f655--t2l45-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"07dd73e6-dcff-41d4-b90f-7314863a267d", ResourceVersion:"1019", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 16, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-61efafd0e9", ContainerID:"55bcdc054e4316fe75024a039bc50338b60b5b0b280927e4bc5bd05a7d38e7b0", Pod:"goldmane-666569f655-t2l45", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.7.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, 
InterfaceName:"calia57a65b701d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:17:09.816890 containerd[1824]: 2025-11-01 01:17:09.800 [INFO][6420] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a4f4de81d3756b477fcee71a26dbb9156fb3811bd3f012b408dfe0025fb54aff" Nov 1 01:17:09.816890 containerd[1824]: 2025-11-01 01:17:09.800 [INFO][6420] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a4f4de81d3756b477fcee71a26dbb9156fb3811bd3f012b408dfe0025fb54aff" iface="eth0" netns="" Nov 1 01:17:09.816890 containerd[1824]: 2025-11-01 01:17:09.800 [INFO][6420] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a4f4de81d3756b477fcee71a26dbb9156fb3811bd3f012b408dfe0025fb54aff" Nov 1 01:17:09.816890 containerd[1824]: 2025-11-01 01:17:09.800 [INFO][6420] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a4f4de81d3756b477fcee71a26dbb9156fb3811bd3f012b408dfe0025fb54aff" Nov 1 01:17:09.816890 containerd[1824]: 2025-11-01 01:17:09.809 [INFO][6438] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="a4f4de81d3756b477fcee71a26dbb9156fb3811bd3f012b408dfe0025fb54aff" HandleID="k8s-pod-network.a4f4de81d3756b477fcee71a26dbb9156fb3811bd3f012b408dfe0025fb54aff" Workload="ci--4081.3.6--n--61efafd0e9-k8s-goldmane--666569f655--t2l45-eth0" Nov 1 01:17:09.816890 containerd[1824]: 2025-11-01 01:17:09.810 [INFO][6438] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:17:09.816890 containerd[1824]: 2025-11-01 01:17:09.810 [INFO][6438] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 01:17:09.816890 containerd[1824]: 2025-11-01 01:17:09.814 [WARNING][6438] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a4f4de81d3756b477fcee71a26dbb9156fb3811bd3f012b408dfe0025fb54aff" HandleID="k8s-pod-network.a4f4de81d3756b477fcee71a26dbb9156fb3811bd3f012b408dfe0025fb54aff" Workload="ci--4081.3.6--n--61efafd0e9-k8s-goldmane--666569f655--t2l45-eth0" Nov 1 01:17:09.816890 containerd[1824]: 2025-11-01 01:17:09.814 [INFO][6438] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="a4f4de81d3756b477fcee71a26dbb9156fb3811bd3f012b408dfe0025fb54aff" HandleID="k8s-pod-network.a4f4de81d3756b477fcee71a26dbb9156fb3811bd3f012b408dfe0025fb54aff" Workload="ci--4081.3.6--n--61efafd0e9-k8s-goldmane--666569f655--t2l45-eth0" Nov 1 01:17:09.816890 containerd[1824]: 2025-11-01 01:17:09.815 [INFO][6438] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:17:09.816890 containerd[1824]: 2025-11-01 01:17:09.816 [INFO][6420] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a4f4de81d3756b477fcee71a26dbb9156fb3811bd3f012b408dfe0025fb54aff" Nov 1 01:17:09.817181 containerd[1824]: time="2025-11-01T01:17:09.816898844Z" level=info msg="TearDown network for sandbox \"a4f4de81d3756b477fcee71a26dbb9156fb3811bd3f012b408dfe0025fb54aff\" successfully" Nov 1 01:17:09.817181 containerd[1824]: time="2025-11-01T01:17:09.816919917Z" level=info msg="StopPodSandbox for \"a4f4de81d3756b477fcee71a26dbb9156fb3811bd3f012b408dfe0025fb54aff\" returns successfully" Nov 1 01:17:09.817222 containerd[1824]: time="2025-11-01T01:17:09.817176655Z" level=info msg="RemovePodSandbox for \"a4f4de81d3756b477fcee71a26dbb9156fb3811bd3f012b408dfe0025fb54aff\"" Nov 1 01:17:09.817222 containerd[1824]: time="2025-11-01T01:17:09.817197815Z" level=info msg="Forcibly stopping sandbox \"a4f4de81d3756b477fcee71a26dbb9156fb3811bd3f012b408dfe0025fb54aff\"" Nov 1 01:17:09.857162 containerd[1824]: 2025-11-01 01:17:09.837 [WARNING][6463] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a4f4de81d3756b477fcee71a26dbb9156fb3811bd3f012b408dfe0025fb54aff" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--61efafd0e9-k8s-goldmane--666569f655--t2l45-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"07dd73e6-dcff-41d4-b90f-7314863a267d", ResourceVersion:"1019", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 16, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-61efafd0e9", ContainerID:"55bcdc054e4316fe75024a039bc50338b60b5b0b280927e4bc5bd05a7d38e7b0", Pod:"goldmane-666569f655-t2l45", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.7.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calia57a65b701d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:17:09.857162 containerd[1824]: 2025-11-01 01:17:09.837 [INFO][6463] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a4f4de81d3756b477fcee71a26dbb9156fb3811bd3f012b408dfe0025fb54aff" Nov 1 01:17:09.857162 containerd[1824]: 2025-11-01 01:17:09.837 [INFO][6463] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="a4f4de81d3756b477fcee71a26dbb9156fb3811bd3f012b408dfe0025fb54aff" iface="eth0" netns="" Nov 1 01:17:09.857162 containerd[1824]: 2025-11-01 01:17:09.837 [INFO][6463] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a4f4de81d3756b477fcee71a26dbb9156fb3811bd3f012b408dfe0025fb54aff" Nov 1 01:17:09.857162 containerd[1824]: 2025-11-01 01:17:09.837 [INFO][6463] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a4f4de81d3756b477fcee71a26dbb9156fb3811bd3f012b408dfe0025fb54aff" Nov 1 01:17:09.857162 containerd[1824]: 2025-11-01 01:17:09.850 [INFO][6475] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="a4f4de81d3756b477fcee71a26dbb9156fb3811bd3f012b408dfe0025fb54aff" HandleID="k8s-pod-network.a4f4de81d3756b477fcee71a26dbb9156fb3811bd3f012b408dfe0025fb54aff" Workload="ci--4081.3.6--n--61efafd0e9-k8s-goldmane--666569f655--t2l45-eth0" Nov 1 01:17:09.857162 containerd[1824]: 2025-11-01 01:17:09.850 [INFO][6475] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:17:09.857162 containerd[1824]: 2025-11-01 01:17:09.850 [INFO][6475] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 01:17:09.857162 containerd[1824]: 2025-11-01 01:17:09.854 [WARNING][6475] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a4f4de81d3756b477fcee71a26dbb9156fb3811bd3f012b408dfe0025fb54aff" HandleID="k8s-pod-network.a4f4de81d3756b477fcee71a26dbb9156fb3811bd3f012b408dfe0025fb54aff" Workload="ci--4081.3.6--n--61efafd0e9-k8s-goldmane--666569f655--t2l45-eth0" Nov 1 01:17:09.857162 containerd[1824]: 2025-11-01 01:17:09.854 [INFO][6475] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="a4f4de81d3756b477fcee71a26dbb9156fb3811bd3f012b408dfe0025fb54aff" HandleID="k8s-pod-network.a4f4de81d3756b477fcee71a26dbb9156fb3811bd3f012b408dfe0025fb54aff" Workload="ci--4081.3.6--n--61efafd0e9-k8s-goldmane--666569f655--t2l45-eth0" Nov 1 01:17:09.857162 containerd[1824]: 2025-11-01 01:17:09.855 [INFO][6475] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:17:09.857162 containerd[1824]: 2025-11-01 01:17:09.856 [INFO][6463] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a4f4de81d3756b477fcee71a26dbb9156fb3811bd3f012b408dfe0025fb54aff" Nov 1 01:17:09.857162 containerd[1824]: time="2025-11-01T01:17:09.857130725Z" level=info msg="TearDown network for sandbox \"a4f4de81d3756b477fcee71a26dbb9156fb3811bd3f012b408dfe0025fb54aff\" successfully" Nov 1 01:17:09.858919 containerd[1824]: time="2025-11-01T01:17:09.858876592Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a4f4de81d3756b477fcee71a26dbb9156fb3811bd3f012b408dfe0025fb54aff\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Nov 1 01:17:09.858919 containerd[1824]: time="2025-11-01T01:17:09.858903410Z" level=info msg="RemovePodSandbox \"a4f4de81d3756b477fcee71a26dbb9156fb3811bd3f012b408dfe0025fb54aff\" returns successfully" Nov 1 01:17:09.859134 containerd[1824]: time="2025-11-01T01:17:09.859125100Z" level=info msg="StopPodSandbox for \"d9544d4764784ff980e2d8e4c1336cdfa6aa28840d78454563c73c95cd16329a\"" Nov 1 01:17:09.893083 containerd[1824]: 2025-11-01 01:17:09.876 [WARNING][6496] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="d9544d4764784ff980e2d8e4c1336cdfa6aa28840d78454563c73c95cd16329a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--61efafd0e9-k8s-calico--kube--controllers--895fdb58f--xjcnc-eth0", GenerateName:"calico-kube-controllers-895fdb58f-", Namespace:"calico-system", SelfLink:"", UID:"c808102b-d4ca-4405-80f4-fc0935baaa15", ResourceVersion:"1023", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 16, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"895fdb58f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-61efafd0e9", ContainerID:"20a8029e94215d25f042b61b30292638d69efe5c47a41b064e1c32ead85f0829", Pod:"calico-kube-controllers-895fdb58f-xjcnc", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.7.198/32"}, IPNATs:[]v3.IPNAT(nil), 
IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali76339371f5d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:17:09.893083 containerd[1824]: 2025-11-01 01:17:09.876 [INFO][6496] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d9544d4764784ff980e2d8e4c1336cdfa6aa28840d78454563c73c95cd16329a" Nov 1 01:17:09.893083 containerd[1824]: 2025-11-01 01:17:09.876 [INFO][6496] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d9544d4764784ff980e2d8e4c1336cdfa6aa28840d78454563c73c95cd16329a" iface="eth0" netns="" Nov 1 01:17:09.893083 containerd[1824]: 2025-11-01 01:17:09.876 [INFO][6496] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d9544d4764784ff980e2d8e4c1336cdfa6aa28840d78454563c73c95cd16329a" Nov 1 01:17:09.893083 containerd[1824]: 2025-11-01 01:17:09.876 [INFO][6496] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d9544d4764784ff980e2d8e4c1336cdfa6aa28840d78454563c73c95cd16329a" Nov 1 01:17:09.893083 containerd[1824]: 2025-11-01 01:17:09.886 [INFO][6513] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="d9544d4764784ff980e2d8e4c1336cdfa6aa28840d78454563c73c95cd16329a" HandleID="k8s-pod-network.d9544d4764784ff980e2d8e4c1336cdfa6aa28840d78454563c73c95cd16329a" Workload="ci--4081.3.6--n--61efafd0e9-k8s-calico--kube--controllers--895fdb58f--xjcnc-eth0" Nov 1 01:17:09.893083 containerd[1824]: 2025-11-01 01:17:09.886 [INFO][6513] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:17:09.893083 containerd[1824]: 2025-11-01 01:17:09.886 [INFO][6513] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 01:17:09.893083 containerd[1824]: 2025-11-01 01:17:09.890 [WARNING][6513] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d9544d4764784ff980e2d8e4c1336cdfa6aa28840d78454563c73c95cd16329a" HandleID="k8s-pod-network.d9544d4764784ff980e2d8e4c1336cdfa6aa28840d78454563c73c95cd16329a" Workload="ci--4081.3.6--n--61efafd0e9-k8s-calico--kube--controllers--895fdb58f--xjcnc-eth0" Nov 1 01:17:09.893083 containerd[1824]: 2025-11-01 01:17:09.890 [INFO][6513] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="d9544d4764784ff980e2d8e4c1336cdfa6aa28840d78454563c73c95cd16329a" HandleID="k8s-pod-network.d9544d4764784ff980e2d8e4c1336cdfa6aa28840d78454563c73c95cd16329a" Workload="ci--4081.3.6--n--61efafd0e9-k8s-calico--kube--controllers--895fdb58f--xjcnc-eth0" Nov 1 01:17:09.893083 containerd[1824]: 2025-11-01 01:17:09.891 [INFO][6513] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:17:09.893083 containerd[1824]: 2025-11-01 01:17:09.892 [INFO][6496] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d9544d4764784ff980e2d8e4c1336cdfa6aa28840d78454563c73c95cd16329a" Nov 1 01:17:09.893083 containerd[1824]: time="2025-11-01T01:17:09.893077187Z" level=info msg="TearDown network for sandbox \"d9544d4764784ff980e2d8e4c1336cdfa6aa28840d78454563c73c95cd16329a\" successfully" Nov 1 01:17:09.893451 containerd[1824]: time="2025-11-01T01:17:09.893094793Z" level=info msg="StopPodSandbox for \"d9544d4764784ff980e2d8e4c1336cdfa6aa28840d78454563c73c95cd16329a\" returns successfully" Nov 1 01:17:09.893451 containerd[1824]: time="2025-11-01T01:17:09.893363668Z" level=info msg="RemovePodSandbox for \"d9544d4764784ff980e2d8e4c1336cdfa6aa28840d78454563c73c95cd16329a\"" Nov 1 01:17:09.893451 containerd[1824]: time="2025-11-01T01:17:09.893386189Z" level=info msg="Forcibly stopping sandbox \"d9544d4764784ff980e2d8e4c1336cdfa6aa28840d78454563c73c95cd16329a\"" Nov 1 01:17:09.934120 containerd[1824]: 2025-11-01 01:17:09.914 [WARNING][6537] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d9544d4764784ff980e2d8e4c1336cdfa6aa28840d78454563c73c95cd16329a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--61efafd0e9-k8s-calico--kube--controllers--895fdb58f--xjcnc-eth0", GenerateName:"calico-kube-controllers-895fdb58f-", Namespace:"calico-system", SelfLink:"", UID:"c808102b-d4ca-4405-80f4-fc0935baaa15", ResourceVersion:"1023", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 16, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"895fdb58f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-61efafd0e9", ContainerID:"20a8029e94215d25f042b61b30292638d69efe5c47a41b064e1c32ead85f0829", Pod:"calico-kube-controllers-895fdb58f-xjcnc", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.7.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali76339371f5d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:17:09.934120 containerd[1824]: 2025-11-01 01:17:09.914 [INFO][6537] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d9544d4764784ff980e2d8e4c1336cdfa6aa28840d78454563c73c95cd16329a" Nov 1 01:17:09.934120 containerd[1824]: 2025-11-01 01:17:09.914 [INFO][6537] 
cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d9544d4764784ff980e2d8e4c1336cdfa6aa28840d78454563c73c95cd16329a" iface="eth0" netns="" Nov 1 01:17:09.934120 containerd[1824]: 2025-11-01 01:17:09.914 [INFO][6537] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d9544d4764784ff980e2d8e4c1336cdfa6aa28840d78454563c73c95cd16329a" Nov 1 01:17:09.934120 containerd[1824]: 2025-11-01 01:17:09.914 [INFO][6537] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d9544d4764784ff980e2d8e4c1336cdfa6aa28840d78454563c73c95cd16329a" Nov 1 01:17:09.934120 containerd[1824]: 2025-11-01 01:17:09.927 [INFO][6553] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="d9544d4764784ff980e2d8e4c1336cdfa6aa28840d78454563c73c95cd16329a" HandleID="k8s-pod-network.d9544d4764784ff980e2d8e4c1336cdfa6aa28840d78454563c73c95cd16329a" Workload="ci--4081.3.6--n--61efafd0e9-k8s-calico--kube--controllers--895fdb58f--xjcnc-eth0" Nov 1 01:17:09.934120 containerd[1824]: 2025-11-01 01:17:09.927 [INFO][6553] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:17:09.934120 containerd[1824]: 2025-11-01 01:17:09.927 [INFO][6553] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 01:17:09.934120 containerd[1824]: 2025-11-01 01:17:09.931 [WARNING][6553] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d9544d4764784ff980e2d8e4c1336cdfa6aa28840d78454563c73c95cd16329a" HandleID="k8s-pod-network.d9544d4764784ff980e2d8e4c1336cdfa6aa28840d78454563c73c95cd16329a" Workload="ci--4081.3.6--n--61efafd0e9-k8s-calico--kube--controllers--895fdb58f--xjcnc-eth0" Nov 1 01:17:09.934120 containerd[1824]: 2025-11-01 01:17:09.931 [INFO][6553] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="d9544d4764784ff980e2d8e4c1336cdfa6aa28840d78454563c73c95cd16329a" HandleID="k8s-pod-network.d9544d4764784ff980e2d8e4c1336cdfa6aa28840d78454563c73c95cd16329a" Workload="ci--4081.3.6--n--61efafd0e9-k8s-calico--kube--controllers--895fdb58f--xjcnc-eth0" Nov 1 01:17:09.934120 containerd[1824]: 2025-11-01 01:17:09.932 [INFO][6553] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:17:09.934120 containerd[1824]: 2025-11-01 01:17:09.933 [INFO][6537] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d9544d4764784ff980e2d8e4c1336cdfa6aa28840d78454563c73c95cd16329a" Nov 1 01:17:09.934427 containerd[1824]: time="2025-11-01T01:17:09.934148287Z" level=info msg="TearDown network for sandbox \"d9544d4764784ff980e2d8e4c1336cdfa6aa28840d78454563c73c95cd16329a\" successfully" Nov 1 01:17:09.935935 containerd[1824]: time="2025-11-01T01:17:09.935890202Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d9544d4764784ff980e2d8e4c1336cdfa6aa28840d78454563c73c95cd16329a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Nov 1 01:17:09.935935 containerd[1824]: time="2025-11-01T01:17:09.935924918Z" level=info msg="RemovePodSandbox \"d9544d4764784ff980e2d8e4c1336cdfa6aa28840d78454563c73c95cd16329a\" returns successfully" Nov 1 01:17:09.936188 containerd[1824]: time="2025-11-01T01:17:09.936175587Z" level=info msg="StopPodSandbox for \"7de90ec73a8f1556262277f2e9f8cfe479e58739fede3ad0906665aa77ef530f\"" Nov 1 01:17:09.970978 containerd[1824]: 2025-11-01 01:17:09.953 [WARNING][6579] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="7de90ec73a8f1556262277f2e9f8cfe479e58739fede3ad0906665aa77ef530f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--61efafd0e9-k8s-csi--node--driver--kckfw-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"fe329cee-9aa5-425f-b021-f1def80c02c8", ResourceVersion:"924", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 16, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-61efafd0e9", ContainerID:"b77c4219e677b692033758da3665eb374bd8a3a03ef4f9331b048a518bd80146", Pod:"csi-node-driver-kckfw", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.7.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliabccb52fa6b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:17:09.970978 containerd[1824]: 2025-11-01 01:17:09.953 [INFO][6579] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7de90ec73a8f1556262277f2e9f8cfe479e58739fede3ad0906665aa77ef530f" Nov 1 01:17:09.970978 containerd[1824]: 2025-11-01 01:17:09.953 [INFO][6579] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7de90ec73a8f1556262277f2e9f8cfe479e58739fede3ad0906665aa77ef530f" iface="eth0" netns="" Nov 1 01:17:09.970978 containerd[1824]: 2025-11-01 01:17:09.954 [INFO][6579] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7de90ec73a8f1556262277f2e9f8cfe479e58739fede3ad0906665aa77ef530f" Nov 1 01:17:09.970978 containerd[1824]: 2025-11-01 01:17:09.954 [INFO][6579] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7de90ec73a8f1556262277f2e9f8cfe479e58739fede3ad0906665aa77ef530f" Nov 1 01:17:09.970978 containerd[1824]: 2025-11-01 01:17:09.963 [INFO][6597] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="7de90ec73a8f1556262277f2e9f8cfe479e58739fede3ad0906665aa77ef530f" HandleID="k8s-pod-network.7de90ec73a8f1556262277f2e9f8cfe479e58739fede3ad0906665aa77ef530f" Workload="ci--4081.3.6--n--61efafd0e9-k8s-csi--node--driver--kckfw-eth0" Nov 1 01:17:09.970978 containerd[1824]: 2025-11-01 01:17:09.964 [INFO][6597] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:17:09.970978 containerd[1824]: 2025-11-01 01:17:09.964 [INFO][6597] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 01:17:09.970978 containerd[1824]: 2025-11-01 01:17:09.968 [WARNING][6597] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7de90ec73a8f1556262277f2e9f8cfe479e58739fede3ad0906665aa77ef530f" HandleID="k8s-pod-network.7de90ec73a8f1556262277f2e9f8cfe479e58739fede3ad0906665aa77ef530f" Workload="ci--4081.3.6--n--61efafd0e9-k8s-csi--node--driver--kckfw-eth0" Nov 1 01:17:09.970978 containerd[1824]: 2025-11-01 01:17:09.968 [INFO][6597] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="7de90ec73a8f1556262277f2e9f8cfe479e58739fede3ad0906665aa77ef530f" HandleID="k8s-pod-network.7de90ec73a8f1556262277f2e9f8cfe479e58739fede3ad0906665aa77ef530f" Workload="ci--4081.3.6--n--61efafd0e9-k8s-csi--node--driver--kckfw-eth0" Nov 1 01:17:09.970978 containerd[1824]: 2025-11-01 01:17:09.969 [INFO][6597] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:17:09.970978 containerd[1824]: 2025-11-01 01:17:09.970 [INFO][6579] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7de90ec73a8f1556262277f2e9f8cfe479e58739fede3ad0906665aa77ef530f" Nov 1 01:17:09.970978 containerd[1824]: time="2025-11-01T01:17:09.970965456Z" level=info msg="TearDown network for sandbox \"7de90ec73a8f1556262277f2e9f8cfe479e58739fede3ad0906665aa77ef530f\" successfully" Nov 1 01:17:09.970978 containerd[1824]: time="2025-11-01T01:17:09.970981046Z" level=info msg="StopPodSandbox for \"7de90ec73a8f1556262277f2e9f8cfe479e58739fede3ad0906665aa77ef530f\" returns successfully" Nov 1 01:17:09.971353 containerd[1824]: time="2025-11-01T01:17:09.971258840Z" level=info msg="RemovePodSandbox for \"7de90ec73a8f1556262277f2e9f8cfe479e58739fede3ad0906665aa77ef530f\"" Nov 1 01:17:09.971353 containerd[1824]: time="2025-11-01T01:17:09.971273300Z" level=info msg="Forcibly stopping sandbox \"7de90ec73a8f1556262277f2e9f8cfe479e58739fede3ad0906665aa77ef530f\"" Nov 1 01:17:10.005825 containerd[1824]: 2025-11-01 01:17:09.989 [WARNING][6623] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7de90ec73a8f1556262277f2e9f8cfe479e58739fede3ad0906665aa77ef530f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--61efafd0e9-k8s-csi--node--driver--kckfw-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"fe329cee-9aa5-425f-b021-f1def80c02c8", ResourceVersion:"924", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 16, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-61efafd0e9", ContainerID:"b77c4219e677b692033758da3665eb374bd8a3a03ef4f9331b048a518bd80146", Pod:"csi-node-driver-kckfw", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.7.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliabccb52fa6b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:17:10.005825 containerd[1824]: 2025-11-01 01:17:09.989 [INFO][6623] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7de90ec73a8f1556262277f2e9f8cfe479e58739fede3ad0906665aa77ef530f" Nov 1 01:17:10.005825 containerd[1824]: 2025-11-01 01:17:09.989 [INFO][6623] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called 
with no netns name, ignoring. ContainerID="7de90ec73a8f1556262277f2e9f8cfe479e58739fede3ad0906665aa77ef530f" iface="eth0" netns="" Nov 1 01:17:10.005825 containerd[1824]: 2025-11-01 01:17:09.989 [INFO][6623] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7de90ec73a8f1556262277f2e9f8cfe479e58739fede3ad0906665aa77ef530f" Nov 1 01:17:10.005825 containerd[1824]: 2025-11-01 01:17:09.989 [INFO][6623] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7de90ec73a8f1556262277f2e9f8cfe479e58739fede3ad0906665aa77ef530f" Nov 1 01:17:10.005825 containerd[1824]: 2025-11-01 01:17:09.999 [INFO][6642] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="7de90ec73a8f1556262277f2e9f8cfe479e58739fede3ad0906665aa77ef530f" HandleID="k8s-pod-network.7de90ec73a8f1556262277f2e9f8cfe479e58739fede3ad0906665aa77ef530f" Workload="ci--4081.3.6--n--61efafd0e9-k8s-csi--node--driver--kckfw-eth0" Nov 1 01:17:10.005825 containerd[1824]: 2025-11-01 01:17:09.999 [INFO][6642] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:17:10.005825 containerd[1824]: 2025-11-01 01:17:09.999 [INFO][6642] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 01:17:10.005825 containerd[1824]: 2025-11-01 01:17:10.003 [WARNING][6642] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7de90ec73a8f1556262277f2e9f8cfe479e58739fede3ad0906665aa77ef530f" HandleID="k8s-pod-network.7de90ec73a8f1556262277f2e9f8cfe479e58739fede3ad0906665aa77ef530f" Workload="ci--4081.3.6--n--61efafd0e9-k8s-csi--node--driver--kckfw-eth0" Nov 1 01:17:10.005825 containerd[1824]: 2025-11-01 01:17:10.003 [INFO][6642] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="7de90ec73a8f1556262277f2e9f8cfe479e58739fede3ad0906665aa77ef530f" HandleID="k8s-pod-network.7de90ec73a8f1556262277f2e9f8cfe479e58739fede3ad0906665aa77ef530f" Workload="ci--4081.3.6--n--61efafd0e9-k8s-csi--node--driver--kckfw-eth0" Nov 1 01:17:10.005825 containerd[1824]: 2025-11-01 01:17:10.004 [INFO][6642] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:17:10.005825 containerd[1824]: 2025-11-01 01:17:10.005 [INFO][6623] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7de90ec73a8f1556262277f2e9f8cfe479e58739fede3ad0906665aa77ef530f" Nov 1 01:17:10.006115 containerd[1824]: time="2025-11-01T01:17:10.005825805Z" level=info msg="TearDown network for sandbox \"7de90ec73a8f1556262277f2e9f8cfe479e58739fede3ad0906665aa77ef530f\" successfully" Nov 1 01:17:10.022006 containerd[1824]: time="2025-11-01T01:17:10.021976590Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7de90ec73a8f1556262277f2e9f8cfe479e58739fede3ad0906665aa77ef530f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Nov 1 01:17:10.022106 containerd[1824]: time="2025-11-01T01:17:10.022017126Z" level=info msg="RemovePodSandbox \"7de90ec73a8f1556262277f2e9f8cfe479e58739fede3ad0906665aa77ef530f\" returns successfully" Nov 1 01:17:10.022286 containerd[1824]: time="2025-11-01T01:17:10.022273452Z" level=info msg="StopPodSandbox for \"362493d71f18c74ed56acf27a792f037278114e860860da95ea9dd993d0069f9\"" Nov 1 01:17:10.056791 containerd[1824]: 2025-11-01 01:17:10.039 [WARNING][6663] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="362493d71f18c74ed56acf27a792f037278114e860860da95ea9dd993d0069f9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--61efafd0e9-k8s-coredns--668d6bf9bc--4dj4l-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"c59cb744-7d2f-4d9a-b681-ac1bc163601e", ResourceVersion:"940", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 16, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-61efafd0e9", ContainerID:"472bd3f4cff1dab0fd6d3d79013821e7c70af61b7a5efd29a4b4bcbbba571d4d", Pod:"coredns-668d6bf9bc-4dj4l", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.7.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia1d9c5871fb", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:17:10.056791 containerd[1824]: 2025-11-01 01:17:10.039 [INFO][6663] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="362493d71f18c74ed56acf27a792f037278114e860860da95ea9dd993d0069f9" Nov 1 01:17:10.056791 containerd[1824]: 2025-11-01 01:17:10.039 [INFO][6663] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="362493d71f18c74ed56acf27a792f037278114e860860da95ea9dd993d0069f9" iface="eth0" netns="" Nov 1 01:17:10.056791 containerd[1824]: 2025-11-01 01:17:10.039 [INFO][6663] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="362493d71f18c74ed56acf27a792f037278114e860860da95ea9dd993d0069f9" Nov 1 01:17:10.056791 containerd[1824]: 2025-11-01 01:17:10.039 [INFO][6663] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="362493d71f18c74ed56acf27a792f037278114e860860da95ea9dd993d0069f9" Nov 1 01:17:10.056791 containerd[1824]: 2025-11-01 01:17:10.050 [INFO][6681] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="362493d71f18c74ed56acf27a792f037278114e860860da95ea9dd993d0069f9" HandleID="k8s-pod-network.362493d71f18c74ed56acf27a792f037278114e860860da95ea9dd993d0069f9" Workload="ci--4081.3.6--n--61efafd0e9-k8s-coredns--668d6bf9bc--4dj4l-eth0" Nov 1 01:17:10.056791 containerd[1824]: 2025-11-01 01:17:10.050 [INFO][6681] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Nov 1 01:17:10.056791 containerd[1824]: 2025-11-01 01:17:10.050 [INFO][6681] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 01:17:10.056791 containerd[1824]: 2025-11-01 01:17:10.054 [WARNING][6681] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="362493d71f18c74ed56acf27a792f037278114e860860da95ea9dd993d0069f9" HandleID="k8s-pod-network.362493d71f18c74ed56acf27a792f037278114e860860da95ea9dd993d0069f9" Workload="ci--4081.3.6--n--61efafd0e9-k8s-coredns--668d6bf9bc--4dj4l-eth0" Nov 1 01:17:10.056791 containerd[1824]: 2025-11-01 01:17:10.054 [INFO][6681] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="362493d71f18c74ed56acf27a792f037278114e860860da95ea9dd993d0069f9" HandleID="k8s-pod-network.362493d71f18c74ed56acf27a792f037278114e860860da95ea9dd993d0069f9" Workload="ci--4081.3.6--n--61efafd0e9-k8s-coredns--668d6bf9bc--4dj4l-eth0" Nov 1 01:17:10.056791 containerd[1824]: 2025-11-01 01:17:10.055 [INFO][6681] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:17:10.056791 containerd[1824]: 2025-11-01 01:17:10.056 [INFO][6663] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="362493d71f18c74ed56acf27a792f037278114e860860da95ea9dd993d0069f9" Nov 1 01:17:10.057094 containerd[1824]: time="2025-11-01T01:17:10.056790227Z" level=info msg="TearDown network for sandbox \"362493d71f18c74ed56acf27a792f037278114e860860da95ea9dd993d0069f9\" successfully" Nov 1 01:17:10.057094 containerd[1824]: time="2025-11-01T01:17:10.056805517Z" level=info msg="StopPodSandbox for \"362493d71f18c74ed56acf27a792f037278114e860860da95ea9dd993d0069f9\" returns successfully" Nov 1 01:17:10.057094 containerd[1824]: time="2025-11-01T01:17:10.057051614Z" level=info msg="RemovePodSandbox for \"362493d71f18c74ed56acf27a792f037278114e860860da95ea9dd993d0069f9\"" Nov 1 01:17:10.057094 containerd[1824]: time="2025-11-01T01:17:10.057067979Z" level=info msg="Forcibly stopping sandbox \"362493d71f18c74ed56acf27a792f037278114e860860da95ea9dd993d0069f9\"" Nov 1 01:17:10.093609 containerd[1824]: 2025-11-01 01:17:10.076 [WARNING][6709] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="362493d71f18c74ed56acf27a792f037278114e860860da95ea9dd993d0069f9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--61efafd0e9-k8s-coredns--668d6bf9bc--4dj4l-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"c59cb744-7d2f-4d9a-b681-ac1bc163601e", ResourceVersion:"940", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 16, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-61efafd0e9", ContainerID:"472bd3f4cff1dab0fd6d3d79013821e7c70af61b7a5efd29a4b4bcbbba571d4d", Pod:"coredns-668d6bf9bc-4dj4l", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.7.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia1d9c5871fb", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:17:10.093609 containerd[1824]: 2025-11-01 
01:17:10.076 [INFO][6709] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="362493d71f18c74ed56acf27a792f037278114e860860da95ea9dd993d0069f9" Nov 1 01:17:10.093609 containerd[1824]: 2025-11-01 01:17:10.076 [INFO][6709] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="362493d71f18c74ed56acf27a792f037278114e860860da95ea9dd993d0069f9" iface="eth0" netns="" Nov 1 01:17:10.093609 containerd[1824]: 2025-11-01 01:17:10.076 [INFO][6709] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="362493d71f18c74ed56acf27a792f037278114e860860da95ea9dd993d0069f9" Nov 1 01:17:10.093609 containerd[1824]: 2025-11-01 01:17:10.076 [INFO][6709] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="362493d71f18c74ed56acf27a792f037278114e860860da95ea9dd993d0069f9" Nov 1 01:17:10.093609 containerd[1824]: 2025-11-01 01:17:10.087 [INFO][6729] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="362493d71f18c74ed56acf27a792f037278114e860860da95ea9dd993d0069f9" HandleID="k8s-pod-network.362493d71f18c74ed56acf27a792f037278114e860860da95ea9dd993d0069f9" Workload="ci--4081.3.6--n--61efafd0e9-k8s-coredns--668d6bf9bc--4dj4l-eth0" Nov 1 01:17:10.093609 containerd[1824]: 2025-11-01 01:17:10.087 [INFO][6729] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:17:10.093609 containerd[1824]: 2025-11-01 01:17:10.087 [INFO][6729] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 01:17:10.093609 containerd[1824]: 2025-11-01 01:17:10.090 [WARNING][6729] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="362493d71f18c74ed56acf27a792f037278114e860860da95ea9dd993d0069f9" HandleID="k8s-pod-network.362493d71f18c74ed56acf27a792f037278114e860860da95ea9dd993d0069f9" Workload="ci--4081.3.6--n--61efafd0e9-k8s-coredns--668d6bf9bc--4dj4l-eth0" Nov 1 01:17:10.093609 containerd[1824]: 2025-11-01 01:17:10.090 [INFO][6729] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="362493d71f18c74ed56acf27a792f037278114e860860da95ea9dd993d0069f9" HandleID="k8s-pod-network.362493d71f18c74ed56acf27a792f037278114e860860da95ea9dd993d0069f9" Workload="ci--4081.3.6--n--61efafd0e9-k8s-coredns--668d6bf9bc--4dj4l-eth0" Nov 1 01:17:10.093609 containerd[1824]: 2025-11-01 01:17:10.091 [INFO][6729] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:17:10.093609 containerd[1824]: 2025-11-01 01:17:10.092 [INFO][6709] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="362493d71f18c74ed56acf27a792f037278114e860860da95ea9dd993d0069f9" Nov 1 01:17:10.093609 containerd[1824]: time="2025-11-01T01:17:10.093596176Z" level=info msg="TearDown network for sandbox \"362493d71f18c74ed56acf27a792f037278114e860860da95ea9dd993d0069f9\" successfully" Nov 1 01:17:10.095042 containerd[1824]: time="2025-11-01T01:17:10.095000175Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"362493d71f18c74ed56acf27a792f037278114e860860da95ea9dd993d0069f9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Nov 1 01:17:10.095042 containerd[1824]: time="2025-11-01T01:17:10.095024402Z" level=info msg="RemovePodSandbox \"362493d71f18c74ed56acf27a792f037278114e860860da95ea9dd993d0069f9\" returns successfully" Nov 1 01:17:10.095317 containerd[1824]: time="2025-11-01T01:17:10.095275389Z" level=info msg="StopPodSandbox for \"fe3ff701bfb1080ed5eff1e14001e57b027b00b44297bc30a87a5f8604829b4a\"" Nov 1 01:17:10.127951 containerd[1824]: 2025-11-01 01:17:10.111 [WARNING][6754] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="fe3ff701bfb1080ed5eff1e14001e57b027b00b44297bc30a87a5f8604829b4a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--61efafd0e9-k8s-coredns--668d6bf9bc--pdzpp-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"8933d3a0-6f08-44c4-b76e-2bacec659217", ResourceVersion:"1010", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 16, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-61efafd0e9", ContainerID:"ed55687aa43c69e9c463d194d476460ddfa91db0a1f42e2de618a4ec1899904b", Pod:"coredns-668d6bf9bc-pdzpp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.7.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7913462f559", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:17:10.127951 containerd[1824]: 2025-11-01 01:17:10.111 [INFO][6754] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="fe3ff701bfb1080ed5eff1e14001e57b027b00b44297bc30a87a5f8604829b4a" Nov 1 01:17:10.127951 containerd[1824]: 2025-11-01 01:17:10.111 [INFO][6754] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fe3ff701bfb1080ed5eff1e14001e57b027b00b44297bc30a87a5f8604829b4a" iface="eth0" netns="" Nov 1 01:17:10.127951 containerd[1824]: 2025-11-01 01:17:10.111 [INFO][6754] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="fe3ff701bfb1080ed5eff1e14001e57b027b00b44297bc30a87a5f8604829b4a" Nov 1 01:17:10.127951 containerd[1824]: 2025-11-01 01:17:10.111 [INFO][6754] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fe3ff701bfb1080ed5eff1e14001e57b027b00b44297bc30a87a5f8604829b4a" Nov 1 01:17:10.127951 containerd[1824]: 2025-11-01 01:17:10.121 [INFO][6771] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="fe3ff701bfb1080ed5eff1e14001e57b027b00b44297bc30a87a5f8604829b4a" HandleID="k8s-pod-network.fe3ff701bfb1080ed5eff1e14001e57b027b00b44297bc30a87a5f8604829b4a" Workload="ci--4081.3.6--n--61efafd0e9-k8s-coredns--668d6bf9bc--pdzpp-eth0" Nov 1 01:17:10.127951 containerd[1824]: 2025-11-01 01:17:10.121 [INFO][6771] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Nov 1 01:17:10.127951 containerd[1824]: 2025-11-01 01:17:10.121 [INFO][6771] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 01:17:10.127951 containerd[1824]: 2025-11-01 01:17:10.125 [WARNING][6771] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="fe3ff701bfb1080ed5eff1e14001e57b027b00b44297bc30a87a5f8604829b4a" HandleID="k8s-pod-network.fe3ff701bfb1080ed5eff1e14001e57b027b00b44297bc30a87a5f8604829b4a" Workload="ci--4081.3.6--n--61efafd0e9-k8s-coredns--668d6bf9bc--pdzpp-eth0" Nov 1 01:17:10.127951 containerd[1824]: 2025-11-01 01:17:10.125 [INFO][6771] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="fe3ff701bfb1080ed5eff1e14001e57b027b00b44297bc30a87a5f8604829b4a" HandleID="k8s-pod-network.fe3ff701bfb1080ed5eff1e14001e57b027b00b44297bc30a87a5f8604829b4a" Workload="ci--4081.3.6--n--61efafd0e9-k8s-coredns--668d6bf9bc--pdzpp-eth0" Nov 1 01:17:10.127951 containerd[1824]: 2025-11-01 01:17:10.126 [INFO][6771] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:17:10.127951 containerd[1824]: 2025-11-01 01:17:10.127 [INFO][6754] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="fe3ff701bfb1080ed5eff1e14001e57b027b00b44297bc30a87a5f8604829b4a" Nov 1 01:17:10.127951 containerd[1824]: time="2025-11-01T01:17:10.127913123Z" level=info msg="TearDown network for sandbox \"fe3ff701bfb1080ed5eff1e14001e57b027b00b44297bc30a87a5f8604829b4a\" successfully" Nov 1 01:17:10.127951 containerd[1824]: time="2025-11-01T01:17:10.127931366Z" level=info msg="StopPodSandbox for \"fe3ff701bfb1080ed5eff1e14001e57b027b00b44297bc30a87a5f8604829b4a\" returns successfully" Nov 1 01:17:10.128468 containerd[1824]: time="2025-11-01T01:17:10.128197435Z" level=info msg="RemovePodSandbox for \"fe3ff701bfb1080ed5eff1e14001e57b027b00b44297bc30a87a5f8604829b4a\"" Nov 1 01:17:10.128468 containerd[1824]: time="2025-11-01T01:17:10.128218853Z" level=info msg="Forcibly stopping sandbox \"fe3ff701bfb1080ed5eff1e14001e57b027b00b44297bc30a87a5f8604829b4a\"" Nov 1 01:17:10.162022 containerd[1824]: 2025-11-01 01:17:10.145 [WARNING][6799] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="fe3ff701bfb1080ed5eff1e14001e57b027b00b44297bc30a87a5f8604829b4a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--61efafd0e9-k8s-coredns--668d6bf9bc--pdzpp-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"8933d3a0-6f08-44c4-b76e-2bacec659217", ResourceVersion:"1010", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 16, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-61efafd0e9", ContainerID:"ed55687aa43c69e9c463d194d476460ddfa91db0a1f42e2de618a4ec1899904b", Pod:"coredns-668d6bf9bc-pdzpp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.7.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7913462f559", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:17:10.162022 containerd[1824]: 2025-11-01 
01:17:10.145 [INFO][6799] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="fe3ff701bfb1080ed5eff1e14001e57b027b00b44297bc30a87a5f8604829b4a" Nov 1 01:17:10.162022 containerd[1824]: 2025-11-01 01:17:10.145 [INFO][6799] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fe3ff701bfb1080ed5eff1e14001e57b027b00b44297bc30a87a5f8604829b4a" iface="eth0" netns="" Nov 1 01:17:10.162022 containerd[1824]: 2025-11-01 01:17:10.145 [INFO][6799] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="fe3ff701bfb1080ed5eff1e14001e57b027b00b44297bc30a87a5f8604829b4a" Nov 1 01:17:10.162022 containerd[1824]: 2025-11-01 01:17:10.145 [INFO][6799] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fe3ff701bfb1080ed5eff1e14001e57b027b00b44297bc30a87a5f8604829b4a" Nov 1 01:17:10.162022 containerd[1824]: 2025-11-01 01:17:10.155 [INFO][6817] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="fe3ff701bfb1080ed5eff1e14001e57b027b00b44297bc30a87a5f8604829b4a" HandleID="k8s-pod-network.fe3ff701bfb1080ed5eff1e14001e57b027b00b44297bc30a87a5f8604829b4a" Workload="ci--4081.3.6--n--61efafd0e9-k8s-coredns--668d6bf9bc--pdzpp-eth0" Nov 1 01:17:10.162022 containerd[1824]: 2025-11-01 01:17:10.155 [INFO][6817] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:17:10.162022 containerd[1824]: 2025-11-01 01:17:10.155 [INFO][6817] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 01:17:10.162022 containerd[1824]: 2025-11-01 01:17:10.159 [WARNING][6817] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fe3ff701bfb1080ed5eff1e14001e57b027b00b44297bc30a87a5f8604829b4a" HandleID="k8s-pod-network.fe3ff701bfb1080ed5eff1e14001e57b027b00b44297bc30a87a5f8604829b4a" Workload="ci--4081.3.6--n--61efafd0e9-k8s-coredns--668d6bf9bc--pdzpp-eth0" Nov 1 01:17:10.162022 containerd[1824]: 2025-11-01 01:17:10.159 [INFO][6817] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="fe3ff701bfb1080ed5eff1e14001e57b027b00b44297bc30a87a5f8604829b4a" HandleID="k8s-pod-network.fe3ff701bfb1080ed5eff1e14001e57b027b00b44297bc30a87a5f8604829b4a" Workload="ci--4081.3.6--n--61efafd0e9-k8s-coredns--668d6bf9bc--pdzpp-eth0" Nov 1 01:17:10.162022 containerd[1824]: 2025-11-01 01:17:10.160 [INFO][6817] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:17:10.162022 containerd[1824]: 2025-11-01 01:17:10.161 [INFO][6799] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="fe3ff701bfb1080ed5eff1e14001e57b027b00b44297bc30a87a5f8604829b4a" Nov 1 01:17:10.162327 containerd[1824]: time="2025-11-01T01:17:10.162030164Z" level=info msg="TearDown network for sandbox \"fe3ff701bfb1080ed5eff1e14001e57b027b00b44297bc30a87a5f8604829b4a\" successfully" Nov 1 01:17:10.163387 containerd[1824]: time="2025-11-01T01:17:10.163347224Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fe3ff701bfb1080ed5eff1e14001e57b027b00b44297bc30a87a5f8604829b4a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Nov 1 01:17:10.163387 containerd[1824]: time="2025-11-01T01:17:10.163372931Z" level=info msg="RemovePodSandbox \"fe3ff701bfb1080ed5eff1e14001e57b027b00b44297bc30a87a5f8604829b4a\" returns successfully" Nov 1 01:17:10.163669 containerd[1824]: time="2025-11-01T01:17:10.163626066Z" level=info msg="StopPodSandbox for \"b2ea873162c10dcd4ca87ba48e1dfede86854106407775931944d0fd7a08c6dc\"" Nov 1 01:17:10.198376 containerd[1824]: 2025-11-01 01:17:10.180 [WARNING][6842] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="b2ea873162c10dcd4ca87ba48e1dfede86854106407775931944d0fd7a08c6dc" WorkloadEndpoint="ci--4081.3.6--n--61efafd0e9-k8s-whisker--864686dcc7--xql57-eth0" Nov 1 01:17:10.198376 containerd[1824]: 2025-11-01 01:17:10.180 [INFO][6842] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b2ea873162c10dcd4ca87ba48e1dfede86854106407775931944d0fd7a08c6dc" Nov 1 01:17:10.198376 containerd[1824]: 2025-11-01 01:17:10.180 [INFO][6842] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="b2ea873162c10dcd4ca87ba48e1dfede86854106407775931944d0fd7a08c6dc" iface="eth0" netns="" Nov 1 01:17:10.198376 containerd[1824]: 2025-11-01 01:17:10.180 [INFO][6842] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b2ea873162c10dcd4ca87ba48e1dfede86854106407775931944d0fd7a08c6dc" Nov 1 01:17:10.198376 containerd[1824]: 2025-11-01 01:17:10.180 [INFO][6842] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b2ea873162c10dcd4ca87ba48e1dfede86854106407775931944d0fd7a08c6dc" Nov 1 01:17:10.198376 containerd[1824]: 2025-11-01 01:17:10.191 [INFO][6856] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="b2ea873162c10dcd4ca87ba48e1dfede86854106407775931944d0fd7a08c6dc" HandleID="k8s-pod-network.b2ea873162c10dcd4ca87ba48e1dfede86854106407775931944d0fd7a08c6dc" Workload="ci--4081.3.6--n--61efafd0e9-k8s-whisker--864686dcc7--xql57-eth0" Nov 1 01:17:10.198376 containerd[1824]: 2025-11-01 01:17:10.191 [INFO][6856] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:17:10.198376 containerd[1824]: 2025-11-01 01:17:10.191 [INFO][6856] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 01:17:10.198376 containerd[1824]: 2025-11-01 01:17:10.195 [WARNING][6856] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b2ea873162c10dcd4ca87ba48e1dfede86854106407775931944d0fd7a08c6dc" HandleID="k8s-pod-network.b2ea873162c10dcd4ca87ba48e1dfede86854106407775931944d0fd7a08c6dc" Workload="ci--4081.3.6--n--61efafd0e9-k8s-whisker--864686dcc7--xql57-eth0" Nov 1 01:17:10.198376 containerd[1824]: 2025-11-01 01:17:10.195 [INFO][6856] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="b2ea873162c10dcd4ca87ba48e1dfede86854106407775931944d0fd7a08c6dc" HandleID="k8s-pod-network.b2ea873162c10dcd4ca87ba48e1dfede86854106407775931944d0fd7a08c6dc" Workload="ci--4081.3.6--n--61efafd0e9-k8s-whisker--864686dcc7--xql57-eth0" Nov 1 01:17:10.198376 containerd[1824]: 2025-11-01 01:17:10.196 [INFO][6856] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:17:10.198376 containerd[1824]: 2025-11-01 01:17:10.197 [INFO][6842] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b2ea873162c10dcd4ca87ba48e1dfede86854106407775931944d0fd7a08c6dc" Nov 1 01:17:10.198376 containerd[1824]: time="2025-11-01T01:17:10.198368747Z" level=info msg="TearDown network for sandbox \"b2ea873162c10dcd4ca87ba48e1dfede86854106407775931944d0fd7a08c6dc\" successfully" Nov 1 01:17:10.198775 containerd[1824]: time="2025-11-01T01:17:10.198388494Z" level=info msg="StopPodSandbox for \"b2ea873162c10dcd4ca87ba48e1dfede86854106407775931944d0fd7a08c6dc\" returns successfully" Nov 1 01:17:10.198775 containerd[1824]: time="2025-11-01T01:17:10.198654231Z" level=info msg="RemovePodSandbox for \"b2ea873162c10dcd4ca87ba48e1dfede86854106407775931944d0fd7a08c6dc\"" Nov 1 01:17:10.198775 containerd[1824]: time="2025-11-01T01:17:10.198675828Z" level=info msg="Forcibly stopping sandbox \"b2ea873162c10dcd4ca87ba48e1dfede86854106407775931944d0fd7a08c6dc\"" Nov 1 01:17:10.243313 containerd[1824]: 2025-11-01 01:17:10.220 [WARNING][6876] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up 
ContainerID="b2ea873162c10dcd4ca87ba48e1dfede86854106407775931944d0fd7a08c6dc" WorkloadEndpoint="ci--4081.3.6--n--61efafd0e9-k8s-whisker--864686dcc7--xql57-eth0" Nov 1 01:17:10.243313 containerd[1824]: 2025-11-01 01:17:10.220 [INFO][6876] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b2ea873162c10dcd4ca87ba48e1dfede86854106407775931944d0fd7a08c6dc" Nov 1 01:17:10.243313 containerd[1824]: 2025-11-01 01:17:10.220 [INFO][6876] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b2ea873162c10dcd4ca87ba48e1dfede86854106407775931944d0fd7a08c6dc" iface="eth0" netns="" Nov 1 01:17:10.243313 containerd[1824]: 2025-11-01 01:17:10.220 [INFO][6876] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b2ea873162c10dcd4ca87ba48e1dfede86854106407775931944d0fd7a08c6dc" Nov 1 01:17:10.243313 containerd[1824]: 2025-11-01 01:17:10.220 [INFO][6876] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b2ea873162c10dcd4ca87ba48e1dfede86854106407775931944d0fd7a08c6dc" Nov 1 01:17:10.243313 containerd[1824]: 2025-11-01 01:17:10.234 [INFO][6892] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="b2ea873162c10dcd4ca87ba48e1dfede86854106407775931944d0fd7a08c6dc" HandleID="k8s-pod-network.b2ea873162c10dcd4ca87ba48e1dfede86854106407775931944d0fd7a08c6dc" Workload="ci--4081.3.6--n--61efafd0e9-k8s-whisker--864686dcc7--xql57-eth0" Nov 1 01:17:10.243313 containerd[1824]: 2025-11-01 01:17:10.234 [INFO][6892] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:17:10.243313 containerd[1824]: 2025-11-01 01:17:10.234 [INFO][6892] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 01:17:10.243313 containerd[1824]: 2025-11-01 01:17:10.239 [WARNING][6892] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b2ea873162c10dcd4ca87ba48e1dfede86854106407775931944d0fd7a08c6dc" HandleID="k8s-pod-network.b2ea873162c10dcd4ca87ba48e1dfede86854106407775931944d0fd7a08c6dc" Workload="ci--4081.3.6--n--61efafd0e9-k8s-whisker--864686dcc7--xql57-eth0" Nov 1 01:17:10.243313 containerd[1824]: 2025-11-01 01:17:10.240 [INFO][6892] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="b2ea873162c10dcd4ca87ba48e1dfede86854106407775931944d0fd7a08c6dc" HandleID="k8s-pod-network.b2ea873162c10dcd4ca87ba48e1dfede86854106407775931944d0fd7a08c6dc" Workload="ci--4081.3.6--n--61efafd0e9-k8s-whisker--864686dcc7--xql57-eth0" Nov 1 01:17:10.243313 containerd[1824]: 2025-11-01 01:17:10.241 [INFO][6892] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:17:10.243313 containerd[1824]: 2025-11-01 01:17:10.242 [INFO][6876] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b2ea873162c10dcd4ca87ba48e1dfede86854106407775931944d0fd7a08c6dc" Nov 1 01:17:10.243670 containerd[1824]: time="2025-11-01T01:17:10.243344081Z" level=info msg="TearDown network for sandbox \"b2ea873162c10dcd4ca87ba48e1dfede86854106407775931944d0fd7a08c6dc\" successfully" Nov 1 01:17:10.245061 containerd[1824]: time="2025-11-01T01:17:10.245047156Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b2ea873162c10dcd4ca87ba48e1dfede86854106407775931944d0fd7a08c6dc\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Nov 1 01:17:10.245087 containerd[1824]: time="2025-11-01T01:17:10.245076358Z" level=info msg="RemovePodSandbox \"b2ea873162c10dcd4ca87ba48e1dfede86854106407775931944d0fd7a08c6dc\" returns successfully" Nov 1 01:17:14.599910 kubelet[3089]: E1101 01:17:14.599856 3089 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-kckfw" podUID="fe329cee-9aa5-425f-b021-f1def80c02c8" Nov 1 01:17:17.606764 kubelet[3089]: E1101 01:17:17.606457 3089 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-895fdb58f-xjcnc" podUID="c808102b-d4ca-4405-80f4-fc0935baaa15" Nov 1 01:17:17.607296 kubelet[3089]: E1101 
01:17:17.606880 3089 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-t2l45" podUID="07dd73e6-dcff-41d4-b90f-7314863a267d" Nov 1 01:17:20.601522 kubelet[3089]: E1101 01:17:20.601380 3089 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5847c846dc-7gbvk" podUID="f2ac6185-3f80-4cfa-971b-e2d87f342f5e" Nov 1 01:17:22.601591 kubelet[3089]: E1101 01:17:22.601473 3089 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5847c846dc-9sdnh" podUID="e53afa66-571d-49bb-8168-fa6f398b3e23" Nov 1 01:17:23.599517 containerd[1824]: time="2025-11-01T01:17:23.599491305Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 
1 01:17:23.953545 containerd[1824]: time="2025-11-01T01:17:23.953456490Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:17:23.962957 containerd[1824]: time="2025-11-01T01:17:23.962911446Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 1 01:17:23.963009 containerd[1824]: time="2025-11-01T01:17:23.962981003Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 1 01:17:23.963083 kubelet[3089]: E1101 01:17:23.963061 3089 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 01:17:23.963340 kubelet[3089]: E1101 01:17:23.963090 3089 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 01:17:23.963340 kubelet[3089]: E1101 01:17:23.963159 3089 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:f106df30827649c0a1b41319c8c22502,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9zc5c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-f8c77c549-chp4w_calico-system(a870b1c4-6e9d-4a96-936e-df1c8a98c970): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 1 01:17:23.964874 containerd[1824]: time="2025-11-01T01:17:23.964818363Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 1 01:17:24.306297 
containerd[1824]: time="2025-11-01T01:17:24.306223061Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:17:24.306895 containerd[1824]: time="2025-11-01T01:17:24.306820745Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 1 01:17:24.306988 containerd[1824]: time="2025-11-01T01:17:24.306916213Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 1 01:17:24.307092 kubelet[3089]: E1101 01:17:24.307046 3089 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 01:17:24.307092 kubelet[3089]: E1101 01:17:24.307077 3089 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 01:17:24.307161 kubelet[3089]: E1101 01:17:24.307144 3089 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9zc5c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-f8c77c549-chp4w_calico-system(a870b1c4-6e9d-4a96-936e-df1c8a98c970): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 1 01:17:24.308328 kubelet[3089]: E1101 01:17:24.308288 3089 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-f8c77c549-chp4w" podUID="a870b1c4-6e9d-4a96-936e-df1c8a98c970" Nov 1 01:17:25.604023 containerd[1824]: time="2025-11-01T01:17:25.603959251Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 1 01:17:25.996412 containerd[1824]: time="2025-11-01T01:17:25.996381889Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:17:25.996980 containerd[1824]: time="2025-11-01T01:17:25.996960085Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 1 01:17:25.997038 containerd[1824]: time="2025-11-01T01:17:25.997015866Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 1 01:17:25.997138 kubelet[3089]: E1101 
01:17:25.997111 3089 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 01:17:25.997335 kubelet[3089]: E1101 01:17:25.997149 3089 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 01:17:25.997335 kubelet[3089]: E1101 01:17:25.997230 3089 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mr67n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,R
eadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-kckfw_calico-system(fe329cee-9aa5-425f-b021-f1def80c02c8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 1 01:17:25.998700 containerd[1824]: time="2025-11-01T01:17:25.998687290Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 1 01:17:26.363140 containerd[1824]: time="2025-11-01T01:17:26.363052535Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:17:26.363764 containerd[1824]: time="2025-11-01T01:17:26.363702322Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 1 01:17:26.363829 containerd[1824]: time="2025-11-01T01:17:26.363772445Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 1 
01:17:26.363921 kubelet[3089]: E1101 01:17:26.363875 3089 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 01:17:26.363921 kubelet[3089]: E1101 01:17:26.363911 3089 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 01:17:26.364002 kubelet[3089]: E1101 01:17:26.363979 3089 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mr67n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-kckfw_calico-system(fe329cee-9aa5-425f-b021-f1def80c02c8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 1 01:17:26.365808 kubelet[3089]: E1101 01:17:26.365757 3089 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-kckfw" podUID="fe329cee-9aa5-425f-b021-f1def80c02c8" Nov 1 01:17:28.601890 containerd[1824]: time="2025-11-01T01:17:28.601753799Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 1 01:17:29.033794 containerd[1824]: time="2025-11-01T01:17:29.033726114Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:17:29.034277 containerd[1824]: time="2025-11-01T01:17:29.034251317Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 1 01:17:29.034341 containerd[1824]: time="2025-11-01T01:17:29.034276602Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 1 01:17:29.034460 
kubelet[3089]: E1101 01:17:29.034411 3089 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 01:17:29.034460 kubelet[3089]: E1101 01:17:29.034441 3089 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 01:17:29.034686 kubelet[3089]: E1101 01:17:29.034513 3089 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bund
le.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9wrs8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-895fdb58f-xjcnc_calico-system(c808102b-d4ca-4405-80f4-fc0935baaa15): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 1 01:17:29.035677 kubelet[3089]: E1101 01:17:29.035634 3089 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-895fdb58f-xjcnc" podUID="c808102b-d4ca-4405-80f4-fc0935baaa15" Nov 1 01:17:31.611576 containerd[1824]: time="2025-11-01T01:17:31.611498128Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 1 01:17:31.982929 containerd[1824]: time="2025-11-01T01:17:31.982837473Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:17:31.983592 containerd[1824]: time="2025-11-01T01:17:31.983522190Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 1 01:17:31.983632 containerd[1824]: time="2025-11-01T01:17:31.983584252Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 1 01:17:31.983727 kubelet[3089]: E1101 01:17:31.983676 3089 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 01:17:31.983727 kubelet[3089]: E1101 01:17:31.983708 3089 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 01:17:31.983944 kubelet[3089]: E1101 01:17:31.983789 3089 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tt5lj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-t2l45_calico-system(07dd73e6-dcff-41d4-b90f-7314863a267d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 1 01:17:31.984976 kubelet[3089]: E1101 01:17:31.984931 3089 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-t2l45" podUID="07dd73e6-dcff-41d4-b90f-7314863a267d" Nov 1 
01:17:32.601165 containerd[1824]: time="2025-11-01T01:17:32.601051377Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 01:17:33.030112 containerd[1824]: time="2025-11-01T01:17:33.029982021Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:17:33.030728 containerd[1824]: time="2025-11-01T01:17:33.030703661Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 01:17:33.030769 containerd[1824]: time="2025-11-01T01:17:33.030745558Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 1 01:17:33.030870 kubelet[3089]: E1101 01:17:33.030845 3089 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 01:17:33.031067 kubelet[3089]: E1101 01:17:33.030877 3089 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 01:17:33.031067 kubelet[3089]: E1101 01:17:33.030949 3089 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 
--tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zt7xl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
calico-apiserver-5847c846dc-7gbvk_calico-apiserver(f2ac6185-3f80-4cfa-971b-e2d87f342f5e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 01:17:33.032122 kubelet[3089]: E1101 01:17:33.032074 3089 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5847c846dc-7gbvk" podUID="f2ac6185-3f80-4cfa-971b-e2d87f342f5e" Nov 1 01:17:35.600738 containerd[1824]: time="2025-11-01T01:17:35.600716119Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 01:17:35.610753 kubelet[3089]: E1101 01:17:35.600772 3089 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-f8c77c549-chp4w" podUID="a870b1c4-6e9d-4a96-936e-df1c8a98c970" Nov 1 01:17:35.997501 containerd[1824]: time="2025-11-01T01:17:35.997472725Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:17:35.997945 containerd[1824]: time="2025-11-01T01:17:35.997924265Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 01:17:35.998005 containerd[1824]: time="2025-11-01T01:17:35.997986196Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 1 01:17:35.998077 kubelet[3089]: E1101 01:17:35.998054 3089 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 01:17:35.998122 kubelet[3089]: E1101 01:17:35.998088 3089 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 01:17:35.998210 kubelet[3089]: E1101 01:17:35.998182 3089 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 
--tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hlkp2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
calico-apiserver-5847c846dc-9sdnh_calico-apiserver(e53afa66-571d-49bb-8168-fa6f398b3e23): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 01:17:35.999808 kubelet[3089]: E1101 01:17:35.999793 3089 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5847c846dc-9sdnh" podUID="e53afa66-571d-49bb-8168-fa6f398b3e23" Nov 1 01:17:40.602503 kubelet[3089]: E1101 01:17:40.602386 3089 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-kckfw" podUID="fe329cee-9aa5-425f-b021-f1def80c02c8" Nov 1 
01:17:44.601361 kubelet[3089]: E1101 01:17:44.601274 3089 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-895fdb58f-xjcnc" podUID="c808102b-d4ca-4405-80f4-fc0935baaa15" Nov 1 01:17:46.599884 kubelet[3089]: E1101 01:17:46.599844 3089 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-t2l45" podUID="07dd73e6-dcff-41d4-b90f-7314863a267d" Nov 1 01:17:46.600461 kubelet[3089]: E1101 01:17:46.600183 3089 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = 
NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-f8c77c549-chp4w" podUID="a870b1c4-6e9d-4a96-936e-df1c8a98c970" Nov 1 01:17:47.600416 kubelet[3089]: E1101 01:17:47.600349 3089 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5847c846dc-7gbvk" podUID="f2ac6185-3f80-4cfa-971b-e2d87f342f5e" Nov 1 01:17:50.600345 kubelet[3089]: E1101 01:17:50.600262 3089 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5847c846dc-9sdnh" podUID="e53afa66-571d-49bb-8168-fa6f398b3e23" Nov 1 01:17:54.601956 kubelet[3089]: E1101 01:17:54.601807 3089 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-kckfw" podUID="fe329cee-9aa5-425f-b021-f1def80c02c8" Nov 1 01:17:55.601423 kubelet[3089]: E1101 01:17:55.601273 3089 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-895fdb58f-xjcnc" podUID="c808102b-d4ca-4405-80f4-fc0935baaa15" Nov 1 01:17:58.599537 kubelet[3089]: E1101 01:17:58.599498 3089 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: 
\"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-f8c77c549-chp4w" podUID="a870b1c4-6e9d-4a96-936e-df1c8a98c970" Nov 1 01:18:00.599254 kubelet[3089]: E1101 01:18:00.599197 3089 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-t2l45" podUID="07dd73e6-dcff-41d4-b90f-7314863a267d" Nov 1 01:18:02.600046 kubelet[3089]: E1101 01:18:02.599971 3089 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5847c846dc-7gbvk" podUID="f2ac6185-3f80-4cfa-971b-e2d87f342f5e" Nov 1 01:18:04.600155 kubelet[3089]: E1101 01:18:04.600092 3089 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc 
error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5847c846dc-9sdnh" podUID="e53afa66-571d-49bb-8168-fa6f398b3e23" Nov 1 01:18:05.600370 kubelet[3089]: E1101 01:18:05.600310 3089 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-kckfw" podUID="fe329cee-9aa5-425f-b021-f1def80c02c8" Nov 1 01:18:06.599794 kubelet[3089]: E1101 01:18:06.599769 3089 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" 
pod="calico-system/calico-kube-controllers-895fdb58f-xjcnc" podUID="c808102b-d4ca-4405-80f4-fc0935baaa15" Nov 1 01:18:10.600318 containerd[1824]: time="2025-11-01T01:18:10.600241997Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 1 01:18:10.957964 containerd[1824]: time="2025-11-01T01:18:10.957849102Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:18:10.958910 containerd[1824]: time="2025-11-01T01:18:10.958815384Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 1 01:18:10.958910 containerd[1824]: time="2025-11-01T01:18:10.958890253Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 1 01:18:10.959019 kubelet[3089]: E1101 01:18:10.958965 3089 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 01:18:10.959019 kubelet[3089]: E1101 01:18:10.958993 3089 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 01:18:10.959223 kubelet[3089]: E1101 01:18:10.959057 3089 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:f106df30827649c0a1b41319c8c22502,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9zc5c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-f8c77c549-chp4w_calico-system(a870b1c4-6e9d-4a96-936e-df1c8a98c970): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 1 01:18:10.960550 containerd[1824]: time="2025-11-01T01:18:10.960495695Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 1 01:18:11.374651 
containerd[1824]: time="2025-11-01T01:18:11.374546469Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:18:11.375232 containerd[1824]: time="2025-11-01T01:18:11.375146678Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 1 01:18:11.375264 containerd[1824]: time="2025-11-01T01:18:11.375226034Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 1 01:18:11.375373 kubelet[3089]: E1101 01:18:11.375318 3089 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 01:18:11.375373 kubelet[3089]: E1101 01:18:11.375351 3089 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 01:18:11.375434 kubelet[3089]: E1101 01:18:11.375418 3089 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9zc5c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-f8c77c549-chp4w_calico-system(a870b1c4-6e9d-4a96-936e-df1c8a98c970): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 1 01:18:11.376605 kubelet[3089]: E1101 01:18:11.376549 3089 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-f8c77c549-chp4w" podUID="a870b1c4-6e9d-4a96-936e-df1c8a98c970" Nov 1 01:18:13.601450 containerd[1824]: time="2025-11-01T01:18:13.601328561Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 1 01:18:13.998266 containerd[1824]: time="2025-11-01T01:18:13.998185377Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:18:13.998769 containerd[1824]: time="2025-11-01T01:18:13.998722539Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 1 01:18:13.998839 containerd[1824]: time="2025-11-01T01:18:13.998756470Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 1 01:18:13.998930 
kubelet[3089]: E1101 01:18:13.998876 3089 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 01:18:13.998930 kubelet[3089]: E1101 01:18:13.998908 3089 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 01:18:13.999130 kubelet[3089]: E1101 01:18:13.998986 3089 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,Re
cursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tt5lj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-t2l45_calico-system(07dd73e6-dcff-41d4-b90f-7314863a267d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 1 01:18:14.000162 kubelet[3089]: E1101 01:18:14.000117 3089 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-t2l45" podUID="07dd73e6-dcff-41d4-b90f-7314863a267d" Nov 1 01:18:16.601920 containerd[1824]: time="2025-11-01T01:18:16.601831086Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 01:18:17.025350 containerd[1824]: time="2025-11-01T01:18:17.025216057Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:18:17.026065 containerd[1824]: time="2025-11-01T01:18:17.025987639Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 01:18:17.026065 containerd[1824]: time="2025-11-01T01:18:17.026046962Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 1 01:18:17.026200 kubelet[3089]: E1101 01:18:17.026148 3089 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 01:18:17.026200 kubelet[3089]: E1101 01:18:17.026179 3089 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 01:18:17.026493 kubelet[3089]: E1101 01:18:17.026260 3089 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hlkp2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5847c846dc-9sdnh_calico-apiserver(e53afa66-571d-49bb-8168-fa6f398b3e23): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 01:18:17.027436 kubelet[3089]: E1101 01:18:17.027396 3089 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5847c846dc-9sdnh" podUID="e53afa66-571d-49bb-8168-fa6f398b3e23" Nov 1 01:18:17.601896 containerd[1824]: time="2025-11-01T01:18:17.601783746Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 01:18:17.975525 containerd[1824]: 
time="2025-11-01T01:18:17.975430792Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:18:17.975993 containerd[1824]: time="2025-11-01T01:18:17.975960923Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 01:18:17.976022 containerd[1824]: time="2025-11-01T01:18:17.976007966Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 1 01:18:17.976125 kubelet[3089]: E1101 01:18:17.976091 3089 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 01:18:17.976151 kubelet[3089]: E1101 01:18:17.976132 3089 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 01:18:17.976287 kubelet[3089]: E1101 01:18:17.976199 3089 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zt7xl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5847c846dc-7gbvk_calico-apiserver(f2ac6185-3f80-4cfa-971b-e2d87f342f5e): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 01:18:17.977909 kubelet[3089]: E1101 01:18:17.977854 3089 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5847c846dc-7gbvk" podUID="f2ac6185-3f80-4cfa-971b-e2d87f342f5e" Nov 1 01:18:19.602850 containerd[1824]: time="2025-11-01T01:18:19.602778684Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 1 01:18:19.975507 containerd[1824]: time="2025-11-01T01:18:19.975375500Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:18:19.976374 containerd[1824]: time="2025-11-01T01:18:19.976266169Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 1 01:18:19.976374 containerd[1824]: time="2025-11-01T01:18:19.976308736Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 1 01:18:19.976437 kubelet[3089]: E1101 01:18:19.976419 3089 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 01:18:19.976632 kubelet[3089]: E1101 01:18:19.976461 3089 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 01:18:19.976632 kubelet[3089]: E1101 01:18:19.976546 3089 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mr67n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeE
scalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-kckfw_calico-system(fe329cee-9aa5-425f-b021-f1def80c02c8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 1 01:18:19.977899 containerd[1824]: time="2025-11-01T01:18:19.977858934Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 1 01:18:20.348644 containerd[1824]: time="2025-11-01T01:18:20.348544231Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:18:20.349176 containerd[1824]: time="2025-11-01T01:18:20.349155721Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 1 01:18:20.349250 containerd[1824]: time="2025-11-01T01:18:20.349183539Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 1 01:18:20.349368 kubelet[3089]: E1101 01:18:20.349334 3089 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 01:18:20.349422 kubelet[3089]: E1101 01:18:20.349378 3089 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 01:18:20.349523 kubelet[3089]: E1101 01:18:20.349475 3089 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mr67n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,Terminatio
nMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-kckfw_calico-system(fe329cee-9aa5-425f-b021-f1def80c02c8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 1 01:18:20.350662 kubelet[3089]: E1101 01:18:20.350644 3089 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-kckfw" podUID="fe329cee-9aa5-425f-b021-f1def80c02c8" Nov 1 01:18:21.600659 containerd[1824]: time="2025-11-01T01:18:21.600581058Z" level=info 
msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 1 01:18:21.968481 containerd[1824]: time="2025-11-01T01:18:21.968400127Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:18:21.972145 containerd[1824]: time="2025-11-01T01:18:21.972057479Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 1 01:18:21.972145 containerd[1824]: time="2025-11-01T01:18:21.972126795Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 1 01:18:21.972399 kubelet[3089]: E1101 01:18:21.972331 3089 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 01:18:21.972399 kubelet[3089]: E1101 01:18:21.972369 3089 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 01:18:21.972598 kubelet[3089]: E1101 01:18:21.972445 3089 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9wrs8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-895fdb58f-xjcnc_calico-system(c808102b-d4ca-4405-80f4-fc0935baaa15): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 1 01:18:21.973594 kubelet[3089]: E1101 01:18:21.973551 3089 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-895fdb58f-xjcnc" podUID="c808102b-d4ca-4405-80f4-fc0935baaa15" Nov 1 01:18:23.599891 kubelet[3089]: E1101 01:18:23.599857 3089 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: 
\"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-f8c77c549-chp4w" podUID="a870b1c4-6e9d-4a96-936e-df1c8a98c970" Nov 1 01:18:28.601235 kubelet[3089]: E1101 01:18:28.601126 3089 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-t2l45" podUID="07dd73e6-dcff-41d4-b90f-7314863a267d" Nov 1 01:18:29.599691 kubelet[3089]: E1101 01:18:29.599612 3089 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not 
found\"" pod="calico-apiserver/calico-apiserver-5847c846dc-7gbvk" podUID="f2ac6185-3f80-4cfa-971b-e2d87f342f5e" Nov 1 01:18:31.599677 kubelet[3089]: E1101 01:18:31.599624 3089 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5847c846dc-9sdnh" podUID="e53afa66-571d-49bb-8168-fa6f398b3e23" Nov 1 01:18:31.599957 kubelet[3089]: E1101 01:18:31.599792 3089 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-kckfw" podUID="fe329cee-9aa5-425f-b021-f1def80c02c8" Nov 1 01:18:32.600798 kubelet[3089]: E1101 01:18:32.600655 3089 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" 
with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-895fdb58f-xjcnc" podUID="c808102b-d4ca-4405-80f4-fc0935baaa15" Nov 1 01:18:34.600959 kubelet[3089]: E1101 01:18:34.600902 3089 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-f8c77c549-chp4w" podUID="a870b1c4-6e9d-4a96-936e-df1c8a98c970" Nov 1 01:18:40.600939 kubelet[3089]: E1101 01:18:40.600734 3089 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-t2l45" podUID="07dd73e6-dcff-41d4-b90f-7314863a267d" Nov 1 01:18:41.600034 kubelet[3089]: E1101 01:18:41.599945 3089 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5847c846dc-7gbvk" podUID="f2ac6185-3f80-4cfa-971b-e2d87f342f5e" Nov 1 01:18:43.601733 kubelet[3089]: E1101 01:18:43.601623 3089 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-kckfw" podUID="fe329cee-9aa5-425f-b021-f1def80c02c8" Nov 1 01:18:45.599486 kubelet[3089]: E1101 01:18:45.599456 3089 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5847c846dc-9sdnh" podUID="e53afa66-571d-49bb-8168-fa6f398b3e23" Nov 1 01:18:45.599486 kubelet[3089]: E1101 01:18:45.599457 3089 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-895fdb58f-xjcnc" podUID="c808102b-d4ca-4405-80f4-fc0935baaa15" Nov 1 01:18:45.599825 kubelet[3089]: E1101 01:18:45.599707 3089 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-f8c77c549-chp4w" podUID="a870b1c4-6e9d-4a96-936e-df1c8a98c970" Nov 1 01:18:51.600319 kubelet[3089]: E1101 01:18:51.600242 3089 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-t2l45" podUID="07dd73e6-dcff-41d4-b90f-7314863a267d" Nov 1 01:18:54.601402 kubelet[3089]: E1101 01:18:54.601292 3089 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5847c846dc-7gbvk" podUID="f2ac6185-3f80-4cfa-971b-e2d87f342f5e" Nov 1 01:18:56.600984 kubelet[3089]: E1101 01:18:56.600889 3089 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5847c846dc-9sdnh" podUID="e53afa66-571d-49bb-8168-fa6f398b3e23" Nov 1 01:18:58.599474 kubelet[3089]: E1101 01:18:58.599446 3089 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-f8c77c549-chp4w" podUID="a870b1c4-6e9d-4a96-936e-df1c8a98c970" Nov 1 01:18:58.599751 kubelet[3089]: E1101 01:18:58.599473 3089 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull 
and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-kckfw" podUID="fe329cee-9aa5-425f-b021-f1def80c02c8" Nov 1 01:18:59.602149 kubelet[3089]: E1101 01:18:59.602020 3089 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-895fdb58f-xjcnc" podUID="c808102b-d4ca-4405-80f4-fc0935baaa15" Nov 1 01:19:05.601494 kubelet[3089]: E1101 01:19:05.601344 3089 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-t2l45" podUID="07dd73e6-dcff-41d4-b90f-7314863a267d" Nov 1 01:19:08.600712 kubelet[3089]: E1101 01:19:08.600615 3089 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5847c846dc-7gbvk" podUID="f2ac6185-3f80-4cfa-971b-e2d87f342f5e" Nov 1 01:19:09.607998 kubelet[3089]: E1101 01:19:09.607867 3089 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5847c846dc-9sdnh" podUID="e53afa66-571d-49bb-8168-fa6f398b3e23" Nov 1 01:19:10.599558 kubelet[3089]: E1101 01:19:10.599487 3089 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-f8c77c549-chp4w" podUID="a870b1c4-6e9d-4a96-936e-df1c8a98c970" Nov 1 01:19:11.599050 
kubelet[3089]: E1101 01:19:11.599021 3089 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-895fdb58f-xjcnc" podUID="c808102b-d4ca-4405-80f4-fc0935baaa15" Nov 1 01:19:13.602568 kubelet[3089]: E1101 01:19:13.602452 3089 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-kckfw" podUID="fe329cee-9aa5-425f-b021-f1def80c02c8" Nov 1 01:19:16.601257 kubelet[3089]: E1101 01:19:16.601124 3089 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: 
rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-t2l45" podUID="07dd73e6-dcff-41d4-b90f-7314863a267d" Nov 1 01:19:19.599570 kubelet[3089]: E1101 01:19:19.599545 3089 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5847c846dc-7gbvk" podUID="f2ac6185-3f80-4cfa-971b-e2d87f342f5e" Nov 1 01:19:20.601244 kubelet[3089]: E1101 01:19:20.601109 3089 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5847c846dc-9sdnh" podUID="e53afa66-571d-49bb-8168-fa6f398b3e23" Nov 1 01:19:22.600157 kubelet[3089]: E1101 01:19:22.600118 3089 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-f8c77c549-chp4w" podUID="a870b1c4-6e9d-4a96-936e-df1c8a98c970" Nov 1 01:19:23.599765 kubelet[3089]: E1101 01:19:23.599719 3089 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-895fdb58f-xjcnc" podUID="c808102b-d4ca-4405-80f4-fc0935baaa15" Nov 1 01:19:27.599174 kubelet[3089]: E1101 01:19:27.599149 3089 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-t2l45" podUID="07dd73e6-dcff-41d4-b90f-7314863a267d" 
Nov 1 01:19:27.599530 kubelet[3089]: E1101 01:19:27.599304 3089 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-kckfw" podUID="fe329cee-9aa5-425f-b021-f1def80c02c8" Nov 1 01:19:32.601110 kubelet[3089]: E1101 01:19:32.601004 3089 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5847c846dc-7gbvk" podUID="f2ac6185-3f80-4cfa-971b-e2d87f342f5e" Nov 1 01:19:34.601382 kubelet[3089]: E1101 01:19:34.601286 3089 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": 
ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-895fdb58f-xjcnc" podUID="c808102b-d4ca-4405-80f4-fc0935baaa15" Nov 1 01:19:34.602380 containerd[1824]: time="2025-11-01T01:19:34.601765159Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 1 01:19:34.950116 containerd[1824]: time="2025-11-01T01:19:34.950023497Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:19:34.950977 containerd[1824]: time="2025-11-01T01:19:34.950934286Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 1 01:19:34.951032 containerd[1824]: time="2025-11-01T01:19:34.950999940Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 1 01:19:34.951130 kubelet[3089]: E1101 01:19:34.951106 3089 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 01:19:34.951165 kubelet[3089]: E1101 01:19:34.951140 3089 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": 
ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 01:19:34.951230 kubelet[3089]: E1101 01:19:34.951212 3089 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:f106df30827649c0a1b41319c8c22502,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9zc5c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-f8c77c549-chp4w_calico-system(a870b1c4-6e9d-4a96-936e-df1c8a98c970): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": 
ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 1 01:19:34.952632 containerd[1824]: time="2025-11-01T01:19:34.952589051Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 1 01:19:35.322189 containerd[1824]: time="2025-11-01T01:19:35.322104317Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:19:35.322612 containerd[1824]: time="2025-11-01T01:19:35.322587856Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 1 01:19:35.322725 containerd[1824]: time="2025-11-01T01:19:35.322667826Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 1 01:19:35.322884 kubelet[3089]: E1101 01:19:35.322842 3089 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 01:19:35.322947 kubelet[3089]: E1101 01:19:35.322910 3089 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 01:19:35.323033 kubelet[3089]: E1101 01:19:35.322997 3089 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9zc5c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-f8c77c549-chp4w_calico-system(a870b1c4-6e9d-4a96-936e-df1c8a98c970): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 1 01:19:35.324144 kubelet[3089]: E1101 01:19:35.324127 3089 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-f8c77c549-chp4w" podUID="a870b1c4-6e9d-4a96-936e-df1c8a98c970" Nov 1 01:19:35.599021 kubelet[3089]: E1101 01:19:35.598883 3089 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5847c846dc-9sdnh" podUID="e53afa66-571d-49bb-8168-fa6f398b3e23" Nov 1 01:19:38.599867 containerd[1824]: time="2025-11-01T01:19:38.599822810Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 1 01:19:38.973191 containerd[1824]: time="2025-11-01T01:19:38.973061991Z" level=info msg="trying next host - response was 
http.StatusNotFound" host=ghcr.io Nov 1 01:19:38.974072 containerd[1824]: time="2025-11-01T01:19:38.974000410Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 1 01:19:38.974111 containerd[1824]: time="2025-11-01T01:19:38.974076483Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 1 01:19:38.974171 kubelet[3089]: E1101 01:19:38.974147 3089 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 01:19:38.974351 kubelet[3089]: E1101 01:19:38.974181 3089 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 01:19:38.974351 kubelet[3089]: E1101 01:19:38.974263 3089 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tt5lj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-t2l45_calico-system(07dd73e6-dcff-41d4-b90f-7314863a267d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 1 01:19:38.975417 kubelet[3089]: E1101 01:19:38.975401 3089 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-t2l45" podUID="07dd73e6-dcff-41d4-b90f-7314863a267d" Nov 1 01:19:39.603073 kubelet[3089]: E1101 01:19:39.602927 3089 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: 
rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-kckfw" podUID="fe329cee-9aa5-425f-b021-f1def80c02c8" Nov 1 01:19:45.604618 containerd[1824]: time="2025-11-01T01:19:45.603167995Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 01:19:45.954981 containerd[1824]: time="2025-11-01T01:19:45.953678900Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:19:45.958709 containerd[1824]: time="2025-11-01T01:19:45.958659682Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 01:19:45.958836 containerd[1824]: time="2025-11-01T01:19:45.958664494Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 1 01:19:45.958938 kubelet[3089]: E1101 01:19:45.958895 3089 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 01:19:45.959345 kubelet[3089]: E1101 01:19:45.958947 3089 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 01:19:45.959345 kubelet[3089]: E1101 01:19:45.959073 3089 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zt7xl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5847c846dc-7gbvk_calico-apiserver(f2ac6185-3f80-4cfa-971b-e2d87f342f5e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 01:19:45.960270 kubelet[3089]: E1101 01:19:45.960243 3089 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5847c846dc-7gbvk" podUID="f2ac6185-3f80-4cfa-971b-e2d87f342f5e" Nov 1 01:19:47.602059 containerd[1824]: time="2025-11-01T01:19:47.601966304Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 1 01:19:47.954145 containerd[1824]: 
time="2025-11-01T01:19:47.954051609Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:19:47.955011 containerd[1824]: time="2025-11-01T01:19:47.954984113Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 1 01:19:47.955096 containerd[1824]: time="2025-11-01T01:19:47.955050760Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 1 01:19:47.955198 kubelet[3089]: E1101 01:19:47.955172 3089 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 01:19:47.955570 kubelet[3089]: E1101 01:19:47.955213 3089 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 01:19:47.955570 kubelet[3089]: E1101 01:19:47.955411 3089 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9wrs8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-895fdb58f-xjcnc_calico-system(c808102b-d4ca-4405-80f4-fc0935baaa15): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 1 01:19:47.956678 kubelet[3089]: E1101 01:19:47.956650 3089 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-895fdb58f-xjcnc" podUID="c808102b-d4ca-4405-80f4-fc0935baaa15" Nov 1 01:19:50.599735 containerd[1824]: time="2025-11-01T01:19:50.599690301Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 01:19:50.599975 kubelet[3089]: E1101 
01:19:50.599746 3089 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-f8c77c549-chp4w" podUID="a870b1c4-6e9d-4a96-936e-df1c8a98c970" Nov 1 01:19:50.954032 containerd[1824]: time="2025-11-01T01:19:50.953894085Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:19:50.954995 containerd[1824]: time="2025-11-01T01:19:50.954905318Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 01:19:50.955051 containerd[1824]: time="2025-11-01T01:19:50.954967867Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 1 01:19:50.955104 kubelet[3089]: E1101 01:19:50.955082 3089 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to 
resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 01:19:50.955136 kubelet[3089]: E1101 01:19:50.955114 3089 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 01:19:50.955288 kubelet[3089]: E1101 01:19:50.955194 3089 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hlkp2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5847c846dc-9sdnh_calico-apiserver(e53afa66-571d-49bb-8168-fa6f398b3e23): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 01:19:50.956940 kubelet[3089]: E1101 01:19:50.956898 3089 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5847c846dc-9sdnh" podUID="e53afa66-571d-49bb-8168-fa6f398b3e23" Nov 1 01:19:52.601621 containerd[1824]: time="2025-11-01T01:19:52.601500833Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 1 01:19:53.013562 containerd[1824]: 
time="2025-11-01T01:19:53.013432236Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:19:53.014403 containerd[1824]: time="2025-11-01T01:19:53.014326927Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 1 01:19:53.014468 containerd[1824]: time="2025-11-01T01:19:53.014393938Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 1 01:19:53.014521 kubelet[3089]: E1101 01:19:53.014498 3089 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 01:19:53.014699 kubelet[3089]: E1101 01:19:53.014531 3089 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 01:19:53.014699 kubelet[3089]: E1101 01:19:53.014598 3089 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mr67n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-kckfw_calico-system(fe329cee-9aa5-425f-b021-f1def80c02c8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 1 01:19:53.016118 containerd[1824]: time="2025-11-01T01:19:53.016105449Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 1 01:19:53.388800 containerd[1824]: time="2025-11-01T01:19:53.388565482Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:19:53.389459 containerd[1824]: time="2025-11-01T01:19:53.389391593Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 1 01:19:53.389507 containerd[1824]: time="2025-11-01T01:19:53.389459367Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 1 01:19:53.389559 kubelet[3089]: E1101 01:19:53.389534 3089 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 01:19:53.389602 kubelet[3089]: E1101 01:19:53.389569 3089 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 01:19:53.389678 kubelet[3089]: E1101 
01:19:53.389655 3089 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mr67n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
csi-node-driver-kckfw_calico-system(fe329cee-9aa5-425f-b021-f1def80c02c8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 1 01:19:53.390858 kubelet[3089]: E1101 01:19:53.390817 3089 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-kckfw" podUID="fe329cee-9aa5-425f-b021-f1def80c02c8" Nov 1 01:19:53.601413 kubelet[3089]: E1101 01:19:53.601275 3089 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-t2l45" podUID="07dd73e6-dcff-41d4-b90f-7314863a267d" Nov 1 01:19:56.600536 kubelet[3089]: E1101 01:19:56.600453 3089 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5847c846dc-7gbvk" podUID="f2ac6185-3f80-4cfa-971b-e2d87f342f5e" Nov 1 01:20:01.600623 kubelet[3089]: E1101 01:20:01.600536 3089 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-895fdb58f-xjcnc" podUID="c808102b-d4ca-4405-80f4-fc0935baaa15" Nov 1 01:20:01.602048 kubelet[3089]: E1101 01:20:01.601965 3089 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-f8c77c549-chp4w" podUID="a870b1c4-6e9d-4a96-936e-df1c8a98c970" Nov 1 01:20:05.601477 kubelet[3089]: E1101 01:20:05.601253 3089 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-t2l45" podUID="07dd73e6-dcff-41d4-b90f-7314863a267d" Nov 1 01:20:05.601477 kubelet[3089]: E1101 01:20:05.601269 3089 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5847c846dc-9sdnh" podUID="e53afa66-571d-49bb-8168-fa6f398b3e23" Nov 1 01:20:05.603025 kubelet[3089]: E1101 01:20:05.602444 3089 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-kckfw" podUID="fe329cee-9aa5-425f-b021-f1def80c02c8" Nov 1 01:20:08.599603 kubelet[3089]: E1101 01:20:08.599535 3089 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5847c846dc-7gbvk" podUID="f2ac6185-3f80-4cfa-971b-e2d87f342f5e" Nov 1 01:20:15.599412 kubelet[3089]: E1101 01:20:15.599367 3089 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = 
NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-f8c77c549-chp4w" podUID="a870b1c4-6e9d-4a96-936e-df1c8a98c970" Nov 1 01:20:16.600983 kubelet[3089]: E1101 01:20:16.600868 3089 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-895fdb58f-xjcnc" podUID="c808102b-d4ca-4405-80f4-fc0935baaa15" Nov 1 01:20:18.601463 kubelet[3089]: E1101 01:20:18.601328 3089 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-t2l45" podUID="07dd73e6-dcff-41d4-b90f-7314863a267d" Nov 1 01:20:19.604035 kubelet[3089]: E1101 01:20:19.603932 3089 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5847c846dc-7gbvk" podUID="f2ac6185-3f80-4cfa-971b-e2d87f342f5e" Nov 1 01:20:19.605192 kubelet[3089]: E1101 01:20:19.605069 3089 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-kckfw" podUID="fe329cee-9aa5-425f-b021-f1def80c02c8" Nov 1 01:20:20.601110 kubelet[3089]: E1101 01:20:20.600991 3089 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5847c846dc-9sdnh" podUID="e53afa66-571d-49bb-8168-fa6f398b3e23" Nov 1 
01:20:29.600917 kubelet[3089]: E1101 01:20:29.600856 3089 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-f8c77c549-chp4w" podUID="a870b1c4-6e9d-4a96-936e-df1c8a98c970" Nov 1 01:20:30.601305 kubelet[3089]: E1101 01:20:30.601182 3089 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-895fdb58f-xjcnc" podUID="c808102b-d4ca-4405-80f4-fc0935baaa15" Nov 1 01:20:31.599562 kubelet[3089]: E1101 01:20:31.599533 3089 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc 
error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-kckfw" podUID="fe329cee-9aa5-425f-b021-f1def80c02c8" Nov 1 01:20:33.601835 kubelet[3089]: E1101 01:20:33.601733 3089 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-t2l45" podUID="07dd73e6-dcff-41d4-b90f-7314863a267d" Nov 1 01:20:33.601835 kubelet[3089]: E1101 01:20:33.601775 3089 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5847c846dc-7gbvk" 
podUID="f2ac6185-3f80-4cfa-971b-e2d87f342f5e" Nov 1 01:20:33.603057 kubelet[3089]: E1101 01:20:33.601837 3089 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5847c846dc-9sdnh" podUID="e53afa66-571d-49bb-8168-fa6f398b3e23" Nov 1 01:20:42.600066 kubelet[3089]: E1101 01:20:42.600030 3089 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-f8c77c549-chp4w" podUID="a870b1c4-6e9d-4a96-936e-df1c8a98c970" Nov 1 01:20:43.600919 kubelet[3089]: E1101 01:20:43.600812 3089 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-895fdb58f-xjcnc" podUID="c808102b-d4ca-4405-80f4-fc0935baaa15" Nov 1 01:20:45.601102 kubelet[3089]: E1101 01:20:45.600956 3089 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5847c846dc-9sdnh" podUID="e53afa66-571d-49bb-8168-fa6f398b3e23" Nov 1 01:20:45.602116 kubelet[3089]: E1101 01:20:45.601866 3089 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-kckfw" podUID="fe329cee-9aa5-425f-b021-f1def80c02c8" Nov 1 01:20:46.601154 kubelet[3089]: E1101 01:20:46.601027 3089 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-t2l45" podUID="07dd73e6-dcff-41d4-b90f-7314863a267d" Nov 1 01:20:46.601154 kubelet[3089]: E1101 01:20:46.601041 3089 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5847c846dc-7gbvk" podUID="f2ac6185-3f80-4cfa-971b-e2d87f342f5e" Nov 1 01:20:56.602536 kubelet[3089]: E1101 01:20:56.602394 3089 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with 
ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-kckfw" podUID="fe329cee-9aa5-425f-b021-f1def80c02c8" Nov 1 01:20:57.601528 kubelet[3089]: E1101 01:20:57.601428 3089 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5847c846dc-7gbvk" podUID="f2ac6185-3f80-4cfa-971b-e2d87f342f5e" Nov 1 01:20:57.601994 kubelet[3089]: E1101 01:20:57.601913 3089 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-895fdb58f-xjcnc" podUID="c808102b-d4ca-4405-80f4-fc0935baaa15" Nov 1 01:20:57.602685 kubelet[3089]: E1101 01:20:57.602596 3089 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: 
\"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-f8c77c549-chp4w" podUID="a870b1c4-6e9d-4a96-936e-df1c8a98c970" Nov 1 01:20:58.599466 kubelet[3089]: E1101 01:20:58.599432 3089 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5847c846dc-9sdnh" podUID="e53afa66-571d-49bb-8168-fa6f398b3e23" Nov 1 01:20:59.601840 kubelet[3089]: E1101 01:20:59.601711 3089 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": 
ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-t2l45" podUID="07dd73e6-dcff-41d4-b90f-7314863a267d" Nov 1 01:21:09.601222 kubelet[3089]: E1101 01:21:09.601147 3089 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-kckfw" podUID="fe329cee-9aa5-425f-b021-f1def80c02c8" Nov 1 01:21:10.599052 kubelet[3089]: E1101 01:21:10.599015 3089 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5847c846dc-9sdnh" podUID="e53afa66-571d-49bb-8168-fa6f398b3e23" Nov 1 01:21:10.599234 kubelet[3089]: E1101 01:21:10.599060 3089 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" 
for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5847c846dc-7gbvk" podUID="f2ac6185-3f80-4cfa-971b-e2d87f342f5e" Nov 1 01:21:10.599360 kubelet[3089]: E1101 01:21:10.599332 3089 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-f8c77c549-chp4w" podUID="a870b1c4-6e9d-4a96-936e-df1c8a98c970" Nov 1 01:21:12.600946 kubelet[3089]: E1101 01:21:12.600816 3089 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-895fdb58f-xjcnc" podUID="c808102b-d4ca-4405-80f4-fc0935baaa15" Nov 1 01:21:14.599968 kubelet[3089]: E1101 01:21:14.599932 3089 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-t2l45" podUID="07dd73e6-dcff-41d4-b90f-7314863a267d" Nov 1 01:21:20.601915 kubelet[3089]: E1101 01:21:20.601791 3089 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-kckfw" podUID="fe329cee-9aa5-425f-b021-f1def80c02c8" Nov 1 01:21:21.602271 kubelet[3089]: E1101 01:21:21.602149 3089 pod_workers.go:1301] "Error 
syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-f8c77c549-chp4w" podUID="a870b1c4-6e9d-4a96-936e-df1c8a98c970" Nov 1 01:21:23.599932 kubelet[3089]: E1101 01:21:23.599874 3089 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5847c846dc-9sdnh" podUID="e53afa66-571d-49bb-8168-fa6f398b3e23" Nov 1 01:21:23.599932 kubelet[3089]: E1101 01:21:23.599911 3089 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-895fdb58f-xjcnc" podUID="c808102b-d4ca-4405-80f4-fc0935baaa15" Nov 1 01:21:25.601164 kubelet[3089]: E1101 01:21:25.601080 3089 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5847c846dc-7gbvk" podUID="f2ac6185-3f80-4cfa-971b-e2d87f342f5e" Nov 1 01:21:27.599174 kubelet[3089]: E1101 01:21:27.599146 3089 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-t2l45" podUID="07dd73e6-dcff-41d4-b90f-7314863a267d" Nov 1 01:21:31.602250 kubelet[3089]: E1101 01:21:31.602128 3089 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-kckfw" podUID="fe329cee-9aa5-425f-b021-f1def80c02c8" Nov 1 01:21:32.601343 kubelet[3089]: E1101 01:21:32.601170 3089 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-f8c77c549-chp4w" podUID="a870b1c4-6e9d-4a96-936e-df1c8a98c970" Nov 1 01:21:35.599757 kubelet[3089]: E1101 01:21:35.599703 3089 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: 
code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5847c846dc-9sdnh" podUID="e53afa66-571d-49bb-8168-fa6f398b3e23" Nov 1 01:21:38.599918 kubelet[3089]: E1101 01:21:38.599878 3089 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5847c846dc-7gbvk" podUID="f2ac6185-3f80-4cfa-971b-e2d87f342f5e" Nov 1 01:21:38.599918 kubelet[3089]: E1101 01:21:38.599878 3089 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-895fdb58f-xjcnc" podUID="c808102b-d4ca-4405-80f4-fc0935baaa15" Nov 1 01:21:41.600513 kubelet[3089]: E1101 01:21:41.600407 3089 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-t2l45" podUID="07dd73e6-dcff-41d4-b90f-7314863a267d" Nov 1 01:21:45.602410 kubelet[3089]: E1101 01:21:45.602290 3089 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-kckfw" podUID="fe329cee-9aa5-425f-b021-f1def80c02c8" Nov 1 01:21:47.600142 kubelet[3089]: E1101 01:21:47.600074 3089 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-f8c77c549-chp4w" podUID="a870b1c4-6e9d-4a96-936e-df1c8a98c970" Nov 1 01:21:48.601579 kubelet[3089]: E1101 01:21:48.601490 3089 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5847c846dc-9sdnh" podUID="e53afa66-571d-49bb-8168-fa6f398b3e23" Nov 1 01:21:49.601880 kubelet[3089]: E1101 01:21:49.601759 3089 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-895fdb58f-xjcnc" podUID="c808102b-d4ca-4405-80f4-fc0935baaa15" Nov 1 01:21:53.600518 kubelet[3089]: E1101 01:21:53.600424 3089 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5847c846dc-7gbvk" podUID="f2ac6185-3f80-4cfa-971b-e2d87f342f5e" Nov 1 01:21:54.600691 kubelet[3089]: E1101 01:21:54.600541 3089 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-t2l45" podUID="07dd73e6-dcff-41d4-b90f-7314863a267d" Nov 1 01:21:59.600022 kubelet[3089]: E1101 01:21:59.599998 3089 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not 
found\"]" pod="calico-system/csi-node-driver-kckfw" podUID="fe329cee-9aa5-425f-b021-f1def80c02c8" Nov 1 01:22:00.600887 kubelet[3089]: E1101 01:22:00.600792 3089 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5847c846dc-9sdnh" podUID="e53afa66-571d-49bb-8168-fa6f398b3e23" Nov 1 01:22:00.602099 kubelet[3089]: E1101 01:22:00.601798 3089 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-f8c77c549-chp4w" podUID="a870b1c4-6e9d-4a96-936e-df1c8a98c970" Nov 1 01:22:02.600199 kubelet[3089]: E1101 01:22:02.600112 3089 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off 
pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-895fdb58f-xjcnc" podUID="c808102b-d4ca-4405-80f4-fc0935baaa15" Nov 1 01:22:04.600379 kubelet[3089]: E1101 01:22:04.600243 3089 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5847c846dc-7gbvk" podUID="f2ac6185-3f80-4cfa-971b-e2d87f342f5e" Nov 1 01:22:06.601272 kubelet[3089]: E1101 01:22:06.601151 3089 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-t2l45" podUID="07dd73e6-dcff-41d4-b90f-7314863a267d" Nov 1 01:22:11.602540 kubelet[3089]: E1101 01:22:11.602431 3089 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc 
error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-f8c77c549-chp4w" podUID="a870b1c4-6e9d-4a96-936e-df1c8a98c970" Nov 1 01:22:13.600642 kubelet[3089]: E1101 01:22:13.600566 3089 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-kckfw" podUID="fe329cee-9aa5-425f-b021-f1def80c02c8" Nov 1 01:22:14.599418 kubelet[3089]: E1101 01:22:14.599387 3089 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5847c846dc-9sdnh" podUID="e53afa66-571d-49bb-8168-fa6f398b3e23" Nov 1 01:22:15.599034 kubelet[3089]: E1101 01:22:15.598982 3089 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-895fdb58f-xjcnc" podUID="c808102b-d4ca-4405-80f4-fc0935baaa15" Nov 1 01:22:19.601906 kubelet[3089]: E1101 01:22:19.601802 3089 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5847c846dc-7gbvk" podUID="f2ac6185-3f80-4cfa-971b-e2d87f342f5e" Nov 1 01:22:21.601364 containerd[1824]: time="2025-11-01T01:22:21.601233708Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 1 01:22:21.940691 containerd[1824]: 
time="2025-11-01T01:22:21.940641667Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:22:21.941247 containerd[1824]: time="2025-11-01T01:22:21.941175403Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 1 01:22:21.941344 containerd[1824]: time="2025-11-01T01:22:21.941215097Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 1 01:22:21.941432 kubelet[3089]: E1101 01:22:21.941405 3089 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 01:22:21.941663 kubelet[3089]: E1101 01:22:21.941442 3089 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 01:22:21.941663 kubelet[3089]: E1101 01:22:21.941529 3089 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tt5lj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-t2l45_calico-system(07dd73e6-dcff-41d4-b90f-7314863a267d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 1 01:22:21.942707 kubelet[3089]: E1101 01:22:21.942693 3089 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-t2l45" podUID="07dd73e6-dcff-41d4-b90f-7314863a267d" Nov 1 01:22:22.600647 containerd[1824]: time="2025-11-01T01:22:22.600607312Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 1 01:22:22.945411 containerd[1824]: time="2025-11-01T01:22:22.945284855Z" level=info msg="trying next host - response was 
http.StatusNotFound" host=ghcr.io Nov 1 01:22:22.962967 containerd[1824]: time="2025-11-01T01:22:22.962875437Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 1 01:22:22.962967 containerd[1824]: time="2025-11-01T01:22:22.962922604Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 1 01:22:22.963140 kubelet[3089]: E1101 01:22:22.963083 3089 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 01:22:22.963456 kubelet[3089]: E1101 01:22:22.963154 3089 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 01:22:22.963456 kubelet[3089]: E1101 01:22:22.963274 3089 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:f106df30827649c0a1b41319c8c22502,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9zc5c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-f8c77c549-chp4w_calico-system(a870b1c4-6e9d-4a96-936e-df1c8a98c970): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 1 01:22:22.965175 containerd[1824]: time="2025-11-01T01:22:22.965119975Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 1 01:22:23.319274 
containerd[1824]: time="2025-11-01T01:22:23.319178817Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:22:23.319696 containerd[1824]: time="2025-11-01T01:22:23.319631883Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 1 01:22:23.319766 containerd[1824]: time="2025-11-01T01:22:23.319666210Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 1 01:22:23.319877 kubelet[3089]: E1101 01:22:23.319809 3089 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 01:22:23.319877 kubelet[3089]: E1101 01:22:23.319842 3089 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 01:22:23.319997 kubelet[3089]: E1101 01:22:23.319938 3089 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9zc5c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-f8c77c549-chp4w_calico-system(a870b1c4-6e9d-4a96-936e-df1c8a98c970): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 1 01:22:23.321101 kubelet[3089]: E1101 01:22:23.321055 3089 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-f8c77c549-chp4w" podUID="a870b1c4-6e9d-4a96-936e-df1c8a98c970" Nov 1 01:22:26.599648 kubelet[3089]: E1101 01:22:26.599613 3089 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-kckfw" podUID="fe329cee-9aa5-425f-b021-f1def80c02c8" Nov 1 01:22:26.958388 systemd[1]: Started sshd@9-139.178.94.199:22-139.178.89.65:49998.service - OpenSSH per-connection server daemon (139.178.89.65:49998). Nov 1 01:22:27.020916 sshd[7418]: Accepted publickey for core from 139.178.89.65 port 49998 ssh2: RSA SHA256:7Ytc58gyNID63TmFqpvnPOSHtyQqpisrsHUaCNkqIsk Nov 1 01:22:27.022487 sshd[7418]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 01:22:27.027515 systemd-logind[1809]: New session 12 of user core. Nov 1 01:22:27.035448 systemd[1]: Started session-12.scope - Session 12 of User core. Nov 1 01:22:27.175776 sshd[7418]: pam_unix(sshd:session): session closed for user core Nov 1 01:22:27.178559 systemd[1]: sshd@9-139.178.94.199:22-139.178.89.65:49998.service: Deactivated successfully. Nov 1 01:22:27.179888 systemd[1]: session-12.scope: Deactivated successfully. Nov 1 01:22:27.180434 systemd-logind[1809]: Session 12 logged out. Waiting for processes to exit. Nov 1 01:22:27.181150 systemd-logind[1809]: Removed session 12. 
Nov 1 01:22:27.601153 kubelet[3089]: E1101 01:22:27.601021 3089 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5847c846dc-9sdnh" podUID="e53afa66-571d-49bb-8168-fa6f398b3e23" Nov 1 01:22:29.600442 containerd[1824]: time="2025-11-01T01:22:29.600359914Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 1 01:22:29.974408 containerd[1824]: time="2025-11-01T01:22:29.974382106Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:22:29.974970 containerd[1824]: time="2025-11-01T01:22:29.974950923Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 1 01:22:29.975027 containerd[1824]: time="2025-11-01T01:22:29.975002224Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 1 01:22:29.975151 kubelet[3089]: E1101 01:22:29.975123 3089 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 01:22:29.975351 kubelet[3089]: E1101 01:22:29.975161 3089 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 01:22:29.975351 kubelet[3089]: E1101 01:22:29.975282 3089 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9wrs8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-895fdb58f-xjcnc_calico-system(c808102b-d4ca-4405-80f4-fc0935baaa15): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 1 01:22:29.976412 kubelet[3089]: E1101 01:22:29.976393 3089 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" 
pod="calico-system/calico-kube-controllers-895fdb58f-xjcnc" podUID="c808102b-d4ca-4405-80f4-fc0935baaa15" Nov 1 01:22:32.189911 systemd[1]: Started sshd@10-139.178.94.199:22-139.178.89.65:50004.service - OpenSSH per-connection server daemon (139.178.89.65:50004). Nov 1 01:22:32.234391 sshd[7449]: Accepted publickey for core from 139.178.89.65 port 50004 ssh2: RSA SHA256:7Ytc58gyNID63TmFqpvnPOSHtyQqpisrsHUaCNkqIsk Nov 1 01:22:32.235167 sshd[7449]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 01:22:32.237823 systemd-logind[1809]: New session 13 of user core. Nov 1 01:22:32.259379 systemd[1]: Started session-13.scope - Session 13 of User core. Nov 1 01:22:32.344124 sshd[7449]: pam_unix(sshd:session): session closed for user core Nov 1 01:22:32.345800 systemd[1]: sshd@10-139.178.94.199:22-139.178.89.65:50004.service: Deactivated successfully. Nov 1 01:22:32.346718 systemd[1]: session-13.scope: Deactivated successfully. Nov 1 01:22:32.347374 systemd-logind[1809]: Session 13 logged out. Waiting for processes to exit. Nov 1 01:22:32.347912 systemd-logind[1809]: Removed session 13. 
Nov 1 01:22:34.599970 containerd[1824]: time="2025-11-01T01:22:34.599940146Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 01:22:34.964134 containerd[1824]: time="2025-11-01T01:22:34.964018272Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:22:34.965011 containerd[1824]: time="2025-11-01T01:22:34.964942229Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 01:22:34.965061 containerd[1824]: time="2025-11-01T01:22:34.965009040Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 1 01:22:34.965155 kubelet[3089]: E1101 01:22:34.965131 3089 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 01:22:34.965352 kubelet[3089]: E1101 01:22:34.965164 3089 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 01:22:34.965352 kubelet[3089]: E1101 01:22:34.965266 3089 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 
--tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zt7xl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
calico-apiserver-5847c846dc-7gbvk_calico-apiserver(f2ac6185-3f80-4cfa-971b-e2d87f342f5e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 01:22:34.966477 kubelet[3089]: E1101 01:22:34.966434 3089 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5847c846dc-7gbvk" podUID="f2ac6185-3f80-4cfa-971b-e2d87f342f5e" Nov 1 01:22:35.599647 kubelet[3089]: E1101 01:22:35.599593 3089 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-t2l45" podUID="07dd73e6-dcff-41d4-b90f-7314863a267d" Nov 1 01:22:37.371409 systemd[1]: Started sshd@11-139.178.94.199:22-139.178.89.65:47722.service - OpenSSH per-connection server daemon (139.178.89.65:47722). 
Nov 1 01:22:37.465397 sshd[7475]: Accepted publickey for core from 139.178.89.65 port 47722 ssh2: RSA SHA256:7Ytc58gyNID63TmFqpvnPOSHtyQqpisrsHUaCNkqIsk Nov 1 01:22:37.466454 sshd[7475]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 01:22:37.469591 systemd-logind[1809]: New session 14 of user core. Nov 1 01:22:37.484463 systemd[1]: Started session-14.scope - Session 14 of User core. Nov 1 01:22:37.599148 kubelet[3089]: E1101 01:22:37.599103 3089 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-f8c77c549-chp4w" podUID="a870b1c4-6e9d-4a96-936e-df1c8a98c970" Nov 1 01:22:37.615004 sshd[7475]: pam_unix(sshd:session): session closed for user core Nov 1 01:22:37.631336 systemd[1]: sshd@11-139.178.94.199:22-139.178.89.65:47722.service: Deactivated successfully. Nov 1 01:22:37.632374 systemd[1]: session-14.scope: Deactivated successfully. Nov 1 01:22:37.633140 systemd-logind[1809]: Session 14 logged out. Waiting for processes to exit. 
Nov 1 01:22:37.633951 systemd[1]: Started sshd@12-139.178.94.199:22-139.178.89.65:47736.service - OpenSSH per-connection server daemon (139.178.89.65:47736). Nov 1 01:22:37.634543 systemd-logind[1809]: Removed session 14. Nov 1 01:22:37.673595 sshd[7502]: Accepted publickey for core from 139.178.89.65 port 47736 ssh2: RSA SHA256:7Ytc58gyNID63TmFqpvnPOSHtyQqpisrsHUaCNkqIsk Nov 1 01:22:37.674660 sshd[7502]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 01:22:37.677723 systemd-logind[1809]: New session 15 of user core. Nov 1 01:22:37.694298 systemd[1]: Started session-15.scope - Session 15 of User core. Nov 1 01:22:37.803124 sshd[7502]: pam_unix(sshd:session): session closed for user core Nov 1 01:22:37.815514 systemd[1]: sshd@12-139.178.94.199:22-139.178.89.65:47736.service: Deactivated successfully. Nov 1 01:22:37.816718 systemd[1]: session-15.scope: Deactivated successfully. Nov 1 01:22:37.817591 systemd-logind[1809]: Session 15 logged out. Waiting for processes to exit. Nov 1 01:22:37.818532 systemd[1]: Started sshd@13-139.178.94.199:22-139.178.89.65:47740.service - OpenSSH per-connection server daemon (139.178.89.65:47740). Nov 1 01:22:37.819132 systemd-logind[1809]: Removed session 15. Nov 1 01:22:37.852824 sshd[7527]: Accepted publickey for core from 139.178.89.65 port 47740 ssh2: RSA SHA256:7Ytc58gyNID63TmFqpvnPOSHtyQqpisrsHUaCNkqIsk Nov 1 01:22:37.853648 sshd[7527]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 01:22:37.856410 systemd-logind[1809]: New session 16 of user core. Nov 1 01:22:37.865386 systemd[1]: Started session-16.scope - Session 16 of User core. Nov 1 01:22:37.944249 sshd[7527]: pam_unix(sshd:session): session closed for user core Nov 1 01:22:37.946433 systemd[1]: sshd@13-139.178.94.199:22-139.178.89.65:47740.service: Deactivated successfully. Nov 1 01:22:37.947636 systemd[1]: session-16.scope: Deactivated successfully. 
Nov 1 01:22:37.948090 systemd-logind[1809]: Session 16 logged out. Waiting for processes to exit. Nov 1 01:22:37.948631 systemd-logind[1809]: Removed session 16. Nov 1 01:22:40.601152 containerd[1824]: time="2025-11-01T01:22:40.601035021Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 01:22:40.968494 containerd[1824]: time="2025-11-01T01:22:40.968350382Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:22:40.969342 containerd[1824]: time="2025-11-01T01:22:40.969251809Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 01:22:40.969342 containerd[1824]: time="2025-11-01T01:22:40.969325465Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 1 01:22:40.969457 kubelet[3089]: E1101 01:22:40.969409 3089 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 01:22:40.969706 kubelet[3089]: E1101 01:22:40.969453 3089 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 01:22:40.969706 kubelet[3089]: E1101 01:22:40.969638 3089 kuberuntime_manager.go:1341] "Unhandled Error" 
err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hlkp2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5847c846dc-9sdnh_calico-apiserver(e53afa66-571d-49bb-8168-fa6f398b3e23): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 01:22:40.969803 containerd[1824]: time="2025-11-01T01:22:40.969699711Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 1 01:22:40.970821 kubelet[3089]: E1101 01:22:40.970773 3089 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5847c846dc-9sdnh" podUID="e53afa66-571d-49bb-8168-fa6f398b3e23" Nov 1 01:22:41.349811 containerd[1824]: 
time="2025-11-01T01:22:41.349645986Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:22:41.350337 containerd[1824]: time="2025-11-01T01:22:41.350252009Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 1 01:22:41.350337 containerd[1824]: time="2025-11-01T01:22:41.350313058Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 1 01:22:41.350470 kubelet[3089]: E1101 01:22:41.350434 3089 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 01:22:41.350518 kubelet[3089]: E1101 01:22:41.350471 3089 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 01:22:41.350593 kubelet[3089]: E1101 01:22:41.350541 3089 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mr67n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-kckfw_calico-system(fe329cee-9aa5-425f-b021-f1def80c02c8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 1 01:22:41.352104 containerd[1824]: time="2025-11-01T01:22:41.352092203Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 1 01:22:41.601122 kubelet[3089]: E1101 01:22:41.600874 3089 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-895fdb58f-xjcnc" podUID="c808102b-d4ca-4405-80f4-fc0935baaa15" Nov 1 01:22:41.731843 containerd[1824]: time="2025-11-01T01:22:41.731785532Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:22:41.732284 containerd[1824]: time="2025-11-01T01:22:41.732184433Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 1 01:22:41.732284 containerd[1824]: time="2025-11-01T01:22:41.732215017Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 1 01:22:41.732419 kubelet[3089]: E1101 01:22:41.732371 3089 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 01:22:41.732419 kubelet[3089]: E1101 01:22:41.732402 3089 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 01:22:41.732492 kubelet[3089]: E1101 01:22:41.732472 3089 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mr67n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,Terminatio
nMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-kckfw_calico-system(fe329cee-9aa5-425f-b021-f1def80c02c8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 1 01:22:41.733652 kubelet[3089]: E1101 01:22:41.733607 3089 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-kckfw" podUID="fe329cee-9aa5-425f-b021-f1def80c02c8" Nov 1 01:22:42.960125 systemd[1]: Started sshd@14-139.178.94.199:22-139.178.89.65:47742.service - 
OpenSSH per-connection server daemon (139.178.89.65:47742). Nov 1 01:22:43.036929 sshd[7560]: Accepted publickey for core from 139.178.89.65 port 47742 ssh2: RSA SHA256:7Ytc58gyNID63TmFqpvnPOSHtyQqpisrsHUaCNkqIsk Nov 1 01:22:43.038269 sshd[7560]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 01:22:43.042612 systemd-logind[1809]: New session 17 of user core. Nov 1 01:22:43.059465 systemd[1]: Started session-17.scope - Session 17 of User core. Nov 1 01:22:43.183085 sshd[7560]: pam_unix(sshd:session): session closed for user core Nov 1 01:22:43.184590 systemd[1]: sshd@14-139.178.94.199:22-139.178.89.65:47742.service: Deactivated successfully. Nov 1 01:22:43.185488 systemd[1]: session-17.scope: Deactivated successfully. Nov 1 01:22:43.186180 systemd-logind[1809]: Session 17 logged out. Waiting for processes to exit. Nov 1 01:22:43.186843 systemd-logind[1809]: Removed session 17. Nov 1 01:22:46.104152 update_engine[1811]: I20251101 01:22:46.104117 1811 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Nov 1 01:22:46.104152 update_engine[1811]: I20251101 01:22:46.104151 1811 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Nov 1 01:22:46.104473 update_engine[1811]: I20251101 01:22:46.104281 1811 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Nov 1 01:22:46.104602 update_engine[1811]: I20251101 01:22:46.104588 1811 omaha_request_params.cc:62] Current group set to lts Nov 1 01:22:46.104672 update_engine[1811]: I20251101 01:22:46.104660 1811 update_attempter.cc:499] Already updated boot flags. Skipping. Nov 1 01:22:46.104672 update_engine[1811]: I20251101 01:22:46.104669 1811 update_attempter.cc:643] Scheduling an action processor start. 
Nov 1 01:22:46.104716 update_engine[1811]: I20251101 01:22:46.104679 1811 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Nov 1 01:22:46.104716 update_engine[1811]: I20251101 01:22:46.104701 1811 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Nov 1 01:22:46.104763 update_engine[1811]: I20251101 01:22:46.104743 1811 omaha_request_action.cc:271] Posting an Omaha request to disabled Nov 1 01:22:46.104763 update_engine[1811]: I20251101 01:22:46.104753 1811 omaha_request_action.cc:272] Request: Nov 1 01:22:46.104763 update_engine[1811]: Nov 1 01:22:46.104763 update_engine[1811]: Nov 1 01:22:46.104763 update_engine[1811]: Nov 1 01:22:46.104763 update_engine[1811]: Nov 1 01:22:46.104763 update_engine[1811]: Nov 1 01:22:46.104763 update_engine[1811]: Nov 1 01:22:46.104763 update_engine[1811]: Nov 1 01:22:46.104763 update_engine[1811]: Nov 1 01:22:46.104763 update_engine[1811]: I20251101 01:22:46.104756 1811 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Nov 1 01:22:46.104984 locksmithd[1859]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Nov 1 01:22:46.105740 update_engine[1811]: I20251101 01:22:46.105712 1811 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Nov 1 01:22:46.105963 update_engine[1811]: I20251101 01:22:46.105919 1811 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Nov 1 01:22:46.106510 update_engine[1811]: E20251101 01:22:46.106487 1811 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Nov 1 01:22:46.106572 update_engine[1811]: I20251101 01:22:46.106534 1811 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Nov 1 01:22:48.199150 systemd[1]: Started sshd@15-139.178.94.199:22-139.178.89.65:38102.service - OpenSSH per-connection server daemon (139.178.89.65:38102). 
Nov 1 01:22:48.269302 sshd[7589]: Accepted publickey for core from 139.178.89.65 port 38102 ssh2: RSA SHA256:7Ytc58gyNID63TmFqpvnPOSHtyQqpisrsHUaCNkqIsk
Nov 1 01:22:48.270497 sshd[7589]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 1 01:22:48.274262 systemd-logind[1809]: New session 18 of user core.
Nov 1 01:22:48.290655 systemd[1]: Started session-18.scope - Session 18 of User core.
Nov 1 01:22:48.402910 sshd[7589]: pam_unix(sshd:session): session closed for user core
Nov 1 01:22:48.404679 systemd[1]: sshd@15-139.178.94.199:22-139.178.89.65:38102.service: Deactivated successfully.
Nov 1 01:22:48.405589 systemd[1]: session-18.scope: Deactivated successfully.
Nov 1 01:22:48.406036 systemd-logind[1809]: Session 18 logged out. Waiting for processes to exit.
Nov 1 01:22:48.406653 systemd-logind[1809]: Removed session 18.
Nov 1 01:22:49.602473 kubelet[3089]: E1101 01:22:49.602386 3089 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5847c846dc-7gbvk" podUID="f2ac6185-3f80-4cfa-971b-e2d87f342f5e"
Nov 1 01:22:49.602473 kubelet[3089]: E1101 01:22:49.602387 3089 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-t2l45" podUID="07dd73e6-dcff-41d4-b90f-7314863a267d"
Nov 1 01:22:50.600484 kubelet[3089]: E1101 01:22:50.600402 3089 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-f8c77c549-chp4w" podUID="a870b1c4-6e9d-4a96-936e-df1c8a98c970"
Nov 1 01:22:53.427488 systemd[1]: Started sshd@16-139.178.94.199:22-139.178.89.65:38114.service - OpenSSH per-connection server daemon (139.178.89.65:38114).
Nov 1 01:22:53.510779 sshd[7652]: Accepted publickey for core from 139.178.89.65 port 38114 ssh2: RSA SHA256:7Ytc58gyNID63TmFqpvnPOSHtyQqpisrsHUaCNkqIsk
Nov 1 01:22:53.511919 sshd[7652]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 1 01:22:53.515291 systemd-logind[1809]: New session 19 of user core.
Nov 1 01:22:53.529411 systemd[1]: Started session-19.scope - Session 19 of User core.
Nov 1 01:22:53.608702 sshd[7652]: pam_unix(sshd:session): session closed for user core
Nov 1 01:22:53.610132 systemd[1]: sshd@16-139.178.94.199:22-139.178.89.65:38114.service: Deactivated successfully.
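The kubelet entries above repeat one failure for every Calico component: each `ghcr.io/flatcar/calico/*:v3.30.4` reference resolves to `NotFound` at the registry, so the pods sit in ImagePullBackOff. A minimal sketch for pulling the distinct failing image references out of a journal capture (the here-doc stands in for `journalctl -u kubelet` output; on a live node you would pipe the real journal in instead):

```shell
# Extract the distinct image references kubelet is failing to pull.
extract_failed_images() {
  grep -o 'ghcr\.io/flatcar/calico/[a-z-]*:v[0-9.]*' | sort -u
}

# Sample input standing in for journalctl output:
extract_failed_images <<'EOF'
... Back-off pulling image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" ...
... Back-off pulling image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" ...
... Back-off pulling image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" ...
EOF
```

Pulling one of the refs by hand on the node (e.g. `crictl pull ghcr.io/flatcar/calico/apiserver:v3.30.4`, assuming crictl is installed) should reproduce the `not found` error outside the kubelet, which separates a registry-side problem from node configuration.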
Nov 1 01:22:53.611094 systemd[1]: session-19.scope: Deactivated successfully.
Nov 1 01:22:53.611769 systemd-logind[1809]: Session 19 logged out. Waiting for processes to exit.
Nov 1 01:22:53.612212 systemd-logind[1809]: Removed session 19.
Nov 1 01:22:54.599140 kubelet[3089]: E1101 01:22:54.599118 3089 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5847c846dc-9sdnh" podUID="e53afa66-571d-49bb-8168-fa6f398b3e23"
Nov 1 01:22:54.599140 kubelet[3089]: E1101 01:22:54.599116 3089 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-895fdb58f-xjcnc" podUID="c808102b-d4ca-4405-80f4-fc0935baaa15"
Nov 1 01:22:56.103511 update_engine[1811]: I20251101 01:22:56.103349 1811 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Nov 1 01:22:56.104391 update_engine[1811]: I20251101 01:22:56.103904 1811 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Nov 1 01:22:56.104628 update_engine[1811]: I20251101 01:22:56.104465 1811 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Nov 1 01:22:56.105408 update_engine[1811]: E20251101 01:22:56.105278 1811 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Nov 1 01:22:56.105576 update_engine[1811]: I20251101 01:22:56.105480 1811 libcurl_http_fetcher.cc:283] No HTTP response, retry 2
Nov 1 01:22:56.600394 kubelet[3089]: E1101 01:22:56.600359 3089 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-kckfw" podUID="fe329cee-9aa5-425f-b021-f1def80c02c8"
Nov 1 01:22:58.632393 systemd[1]: Started sshd@17-139.178.94.199:22-139.178.89.65:51260.service - OpenSSH per-connection server daemon (139.178.89.65:51260).
Nov 1 01:22:58.665662 sshd[7678]: Accepted publickey for core from 139.178.89.65 port 51260 ssh2: RSA SHA256:7Ytc58gyNID63TmFqpvnPOSHtyQqpisrsHUaCNkqIsk
Nov 1 01:22:58.666524 sshd[7678]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 1 01:22:58.669084 systemd-logind[1809]: New session 20 of user core.
Nov 1 01:22:58.684386 systemd[1]: Started session-20.scope - Session 20 of User core.
Nov 1 01:22:58.763238 sshd[7678]: pam_unix(sshd:session): session closed for user core
Nov 1 01:22:58.780943 systemd[1]: sshd@17-139.178.94.199:22-139.178.89.65:51260.service: Deactivated successfully.
Nov 1 01:22:58.781775 systemd[1]: session-20.scope: Deactivated successfully.
Nov 1 01:22:58.782479 systemd-logind[1809]: Session 20 logged out. Waiting for processes to exit.
Nov 1 01:22:58.783240 systemd[1]: Started sshd@18-139.178.94.199:22-139.178.89.65:51264.service - OpenSSH per-connection server daemon (139.178.89.65:51264).
Nov 1 01:22:58.783758 systemd-logind[1809]: Removed session 20.
Nov 1 01:22:58.814593 sshd[7704]: Accepted publickey for core from 139.178.89.65 port 51264 ssh2: RSA SHA256:7Ytc58gyNID63TmFqpvnPOSHtyQqpisrsHUaCNkqIsk
Nov 1 01:22:58.815473 sshd[7704]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 1 01:22:58.818087 systemd-logind[1809]: New session 21 of user core.
Nov 1 01:22:58.827316 systemd[1]: Started session-21.scope - Session 21 of User core.
Nov 1 01:22:59.049760 sshd[7704]: pam_unix(sshd:session): session closed for user core
Nov 1 01:22:59.072762 systemd[1]: sshd@18-139.178.94.199:22-139.178.89.65:51264.service: Deactivated successfully.
Nov 1 01:22:59.076813 systemd[1]: session-21.scope: Deactivated successfully.
Nov 1 01:22:59.080276 systemd-logind[1809]: Session 21 logged out. Waiting for processes to exit.
Nov 1 01:22:59.083154 systemd[1]: Started sshd@19-139.178.94.199:22-139.178.89.65:51274.service - OpenSSH per-connection server daemon (139.178.89.65:51274).
Nov 1 01:22:59.084879 systemd-logind[1809]: Removed session 21.
Nov 1 01:22:59.134729 sshd[7729]: Accepted publickey for core from 139.178.89.65 port 51274 ssh2: RSA SHA256:7Ytc58gyNID63TmFqpvnPOSHtyQqpisrsHUaCNkqIsk
Nov 1 01:22:59.135758 sshd[7729]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 1 01:22:59.139187 systemd-logind[1809]: New session 22 of user core.
Nov 1 01:22:59.156344 systemd[1]: Started session-22.scope - Session 22 of User core.
Nov 1 01:22:59.879159 sshd[7729]: pam_unix(sshd:session): session closed for user core
Nov 1 01:22:59.893041 systemd[1]: sshd@19-139.178.94.199:22-139.178.89.65:51274.service: Deactivated successfully.
Nov 1 01:22:59.893910 systemd[1]: session-22.scope: Deactivated successfully.
Nov 1 01:22:59.894611 systemd-logind[1809]: Session 22 logged out. Waiting for processes to exit.
Nov 1 01:22:59.895261 systemd[1]: Started sshd@20-139.178.94.199:22-139.178.89.65:51282.service - OpenSSH per-connection server daemon (139.178.89.65:51282).
Nov 1 01:22:59.895742 systemd-logind[1809]: Removed session 22.
Nov 1 01:22:59.926476 sshd[7760]: Accepted publickey for core from 139.178.89.65 port 51282 ssh2: RSA SHA256:7Ytc58gyNID63TmFqpvnPOSHtyQqpisrsHUaCNkqIsk
Nov 1 01:22:59.927254 sshd[7760]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 1 01:22:59.929666 systemd-logind[1809]: New session 23 of user core.
Nov 1 01:22:59.945678 systemd[1]: Started session-23.scope - Session 23 of User core.
Nov 1 01:23:00.132137 sshd[7760]: pam_unix(sshd:session): session closed for user core
Nov 1 01:23:00.147958 systemd[1]: sshd@20-139.178.94.199:22-139.178.89.65:51282.service: Deactivated successfully.
Nov 1 01:23:00.148746 systemd[1]: session-23.scope: Deactivated successfully.
Nov 1 01:23:00.149459 systemd-logind[1809]: Session 23 logged out. Waiting for processes to exit.
Nov 1 01:23:00.150088 systemd[1]: Started sshd@21-139.178.94.199:22-139.178.89.65:51288.service - OpenSSH per-connection server daemon (139.178.89.65:51288).
Nov 1 01:23:00.150549 systemd-logind[1809]: Removed session 23.
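Each SSH login in this stretch follows the same systemd pattern: sshd accepts the publickey, pam_unix opens the session, systemd-logind registers it, and systemd runs it as a `session-<N>.scope` unit until logout tears it down. A small sketch of the unit-naming convention visible in the log, with the live-inspection commands (assuming a systemd host with logind) left as comments:

```shell
# Derive the logind scope unit name for a session number, matching the
# "Started session-17.scope" ... "Started session-27.scope" entries above.
session_scope() {
  printf 'session-%s.scope\n' "$1"
}

session_scope 22

# Live inspection on a systemd host:
#   loginctl list-sessions
#   systemctl status "$(session_scope 22)"
```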
Nov 1 01:23:00.182089 sshd[7784]: Accepted publickey for core from 139.178.89.65 port 51288 ssh2: RSA SHA256:7Ytc58gyNID63TmFqpvnPOSHtyQqpisrsHUaCNkqIsk
Nov 1 01:23:00.182846 sshd[7784]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 1 01:23:00.185476 systemd-logind[1809]: New session 24 of user core.
Nov 1 01:23:00.203366 systemd[1]: Started session-24.scope - Session 24 of User core.
Nov 1 01:23:00.280422 sshd[7784]: pam_unix(sshd:session): session closed for user core
Nov 1 01:23:00.281956 systemd[1]: sshd@21-139.178.94.199:22-139.178.89.65:51288.service: Deactivated successfully.
Nov 1 01:23:00.282901 systemd[1]: session-24.scope: Deactivated successfully.
Nov 1 01:23:00.283572 systemd-logind[1809]: Session 24 logged out. Waiting for processes to exit.
Nov 1 01:23:00.284121 systemd-logind[1809]: Removed session 24.
Nov 1 01:23:01.601003 kubelet[3089]: E1101 01:23:01.600865 3089 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5847c846dc-7gbvk" podUID="f2ac6185-3f80-4cfa-971b-e2d87f342f5e"
Nov 1 01:23:02.602243 kubelet[3089]: E1101 01:23:02.602124 3089 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-f8c77c549-chp4w" podUID="a870b1c4-6e9d-4a96-936e-df1c8a98c970"
Nov 1 01:23:04.601082 kubelet[3089]: E1101 01:23:04.600940 3089 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-t2l45" podUID="07dd73e6-dcff-41d4-b90f-7314863a267d"
Nov 1 01:23:05.295543 systemd[1]: Started sshd@22-139.178.94.199:22-139.178.89.65:51300.service - OpenSSH per-connection server daemon (139.178.89.65:51300).
Nov 1 01:23:05.336466 sshd[7830]: Accepted publickey for core from 139.178.89.65 port 51300 ssh2: RSA SHA256:7Ytc58gyNID63TmFqpvnPOSHtyQqpisrsHUaCNkqIsk
Nov 1 01:23:05.337466 sshd[7830]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 1 01:23:05.340462 systemd-logind[1809]: New session 25 of user core.
Nov 1 01:23:05.352316 systemd[1]: Started session-25.scope - Session 25 of User core.
Nov 1 01:23:05.460956 sshd[7830]: pam_unix(sshd:session): session closed for user core
Nov 1 01:23:05.463012 systemd[1]: sshd@22-139.178.94.199:22-139.178.89.65:51300.service: Deactivated successfully.
Nov 1 01:23:05.464175 systemd[1]: session-25.scope: Deactivated successfully.
Nov 1 01:23:05.465104 systemd-logind[1809]: Session 25 logged out. Waiting for processes to exit.
Nov 1 01:23:05.466019 systemd-logind[1809]: Removed session 25.
Nov 1 01:23:05.601869 kubelet[3089]: E1101 01:23:05.601635 3089 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5847c846dc-9sdnh" podUID="e53afa66-571d-49bb-8168-fa6f398b3e23"
Nov 1 01:23:06.102822 update_engine[1811]: I20251101 01:23:06.102684 1811 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Nov 1 01:23:06.103724 update_engine[1811]: I20251101 01:23:06.103253 1811 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Nov 1 01:23:06.103847 update_engine[1811]: I20251101 01:23:06.103734 1811 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Nov 1 01:23:06.104508 update_engine[1811]: E20251101 01:23:06.104399 1811 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Nov 1 01:23:06.104698 update_engine[1811]: I20251101 01:23:06.104542 1811 libcurl_http_fetcher.cc:283] No HTTP response, retry 3
Nov 1 01:23:06.601429 kubelet[3089]: E1101 01:23:06.601333 3089 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-895fdb58f-xjcnc" podUID="c808102b-d4ca-4405-80f4-fc0935baaa15"
Nov 1 01:23:07.602496 kubelet[3089]: E1101 01:23:07.602376 3089 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-kckfw" podUID="fe329cee-9aa5-425f-b021-f1def80c02c8"
Nov 1 01:23:10.483120 systemd[1]: Started sshd@23-139.178.94.199:22-139.178.89.65:50620.service - OpenSSH per-connection server daemon (139.178.89.65:50620).
Nov 1 01:23:10.574449 sshd[7861]: Accepted publickey for core from 139.178.89.65 port 50620 ssh2: RSA SHA256:7Ytc58gyNID63TmFqpvnPOSHtyQqpisrsHUaCNkqIsk
Nov 1 01:23:10.576129 sshd[7861]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 1 01:23:10.581303 systemd-logind[1809]: New session 26 of user core.
Nov 1 01:23:10.596684 systemd[1]: Started session-26.scope - Session 26 of User core.
Nov 1 01:23:10.747757 sshd[7861]: pam_unix(sshd:session): session closed for user core
Nov 1 01:23:10.749345 systemd[1]: sshd@23-139.178.94.199:22-139.178.89.65:50620.service: Deactivated successfully.
Nov 1 01:23:10.750279 systemd[1]: session-26.scope: Deactivated successfully.
Nov 1 01:23:10.751029 systemd-logind[1809]: Session 26 logged out. Waiting for processes to exit.
Nov 1 01:23:10.751757 systemd-logind[1809]: Removed session 26.
Nov 1 01:23:13.602234 kubelet[3089]: E1101 01:23:13.602112 3089 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-f8c77c549-chp4w" podUID="a870b1c4-6e9d-4a96-936e-df1c8a98c970"
Nov 1 01:23:15.767613 systemd[1]: Started sshd@24-139.178.94.199:22-139.178.89.65:50624.service - OpenSSH per-connection server daemon (139.178.89.65:50624).
Nov 1 01:23:15.802530 sshd[7886]: Accepted publickey for core from 139.178.89.65 port 50624 ssh2: RSA SHA256:7Ytc58gyNID63TmFqpvnPOSHtyQqpisrsHUaCNkqIsk
Nov 1 01:23:15.803188 sshd[7886]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 1 01:23:15.805739 systemd-logind[1809]: New session 27 of user core.
Nov 1 01:23:15.806368 systemd[1]: Started session-27.scope - Session 27 of User core.
Nov 1 01:23:15.940214 sshd[7886]: pam_unix(sshd:session): session closed for user core
Nov 1 01:23:15.944679 systemd[1]: sshd@24-139.178.94.199:22-139.178.89.65:50624.service: Deactivated successfully.
Nov 1 01:23:15.946929 systemd[1]: session-27.scope: Deactivated successfully.
Nov 1 01:23:15.947981 systemd-logind[1809]: Session 27 logged out. Waiting for processes to exit.
Nov 1 01:23:15.949401 systemd-logind[1809]: Removed session 27.
Nov 1 01:23:16.102044 update_engine[1811]: I20251101 01:23:16.101930 1811 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Nov 1 01:23:16.102265 update_engine[1811]: I20251101 01:23:16.102087 1811 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Nov 1 01:23:16.102265 update_engine[1811]: I20251101 01:23:16.102227 1811 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Nov 1 01:23:16.102858 update_engine[1811]: E20251101 01:23:16.102813 1811 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Nov 1 01:23:16.102858 update_engine[1811]: I20251101 01:23:16.102843 1811 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Nov 1 01:23:16.102858 update_engine[1811]: I20251101 01:23:16.102850 1811 omaha_request_action.cc:617] Omaha request response:
Nov 1 01:23:16.102944 update_engine[1811]: E20251101 01:23:16.102893 1811 omaha_request_action.cc:636] Omaha request network transfer failed.
Nov 1 01:23:16.102944 update_engine[1811]: I20251101 01:23:16.102908 1811 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing.
Nov 1 01:23:16.102944 update_engine[1811]: I20251101 01:23:16.102912 1811 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Nov 1 01:23:16.102944 update_engine[1811]: I20251101 01:23:16.102916 1811 update_attempter.cc:306] Processing Done.
Nov 1 01:23:16.102944 update_engine[1811]: E20251101 01:23:16.102924 1811 update_attempter.cc:619] Update failed.
Nov 1 01:23:16.102944 update_engine[1811]: I20251101 01:23:16.102928 1811 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse
Nov 1 01:23:16.102944 update_engine[1811]: I20251101 01:23:16.102931 1811 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse)
Nov 1 01:23:16.102944 update_engine[1811]: I20251101 01:23:16.102936 1811 payload_state.cc:103] Ignoring failures until we get a valid Omaha response.
Nov 1 01:23:16.103080 update_engine[1811]: I20251101 01:23:16.102977 1811 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Nov 1 01:23:16.103080 update_engine[1811]: I20251101 01:23:16.102993 1811 omaha_request_action.cc:271] Posting an Omaha request to disabled
Nov 1 01:23:16.103080 update_engine[1811]: I20251101 01:23:16.102996 1811 omaha_request_action.cc:272] Request:
Nov 1 01:23:16.103080 update_engine[1811]:
Nov 1 01:23:16.103080 update_engine[1811]:
Nov 1 01:23:16.103080 update_engine[1811]:
Nov 1 01:23:16.103080 update_engine[1811]:
Nov 1 01:23:16.103080 update_engine[1811]:
Nov 1 01:23:16.103080 update_engine[1811]:
Nov 1 01:23:16.103080 update_engine[1811]: I20251101 01:23:16.102999 1811 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Nov 1 01:23:16.103257 update_engine[1811]: I20251101 01:23:16.103083 1811 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Nov 1 01:23:16.103257 update_engine[1811]: I20251101 01:23:16.103178 1811 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Nov 1 01:23:16.103296 locksmithd[1859]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
Nov 1 01:23:16.104001 update_engine[1811]: E20251101 01:23:16.103957 1811 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Nov 1 01:23:16.104001 update_engine[1811]: I20251101 01:23:16.103982 1811 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Nov 1 01:23:16.104001 update_engine[1811]: I20251101 01:23:16.103988 1811 omaha_request_action.cc:617] Omaha request response:
Nov 1 01:23:16.104001 update_engine[1811]: I20251101 01:23:16.103992 1811 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Nov 1 01:23:16.104001 update_engine[1811]: I20251101 01:23:16.103994 1811 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Nov 1 01:23:16.104001 update_engine[1811]: I20251101 01:23:16.103998 1811 update_attempter.cc:306] Processing Done.
Nov 1 01:23:16.104001 update_engine[1811]: I20251101 01:23:16.104001 1811 update_attempter.cc:310] Error event sent.
Nov 1 01:23:16.104133 update_engine[1811]: I20251101 01:23:16.104007 1811 update_check_scheduler.cc:74] Next update check in 43m15s
Nov 1 01:23:16.104195 locksmithd[1859]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
Nov 1 01:23:16.601112 kubelet[3089]: E1101 01:23:16.601028 3089 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5847c846dc-7gbvk" podUID="f2ac6185-3f80-4cfa-971b-e2d87f342f5e"
Nov 1 01:23:17.600552 kubelet[3089]: E1101 01:23:17.600485 3089 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-t2l45" podUID="07dd73e6-dcff-41d4-b90f-7314863a267d"
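Throughout this stretch, update_engine keeps posting its Omaha request to the literal hostname `disabled`, fails DNS resolution, retries three times, reports the error event, and reschedules a check in 43m15s. That is consistent with a Flatcar machine whose updates are turned off via `SERVER=disabled` in `/etc/flatcar/update.conf` (an assumption here; the log only shows the resulting resolution failure). A minimal sketch of reading that setting, with a here-doc standing in for the real file:

```shell
# Print the Omaha server configured for update_engine.
omaha_server() {
  sed -n 's/^SERVER=//p'
}

# Sample config standing in for /etc/flatcar/update.conf; GROUP matches
# the "Current group set to lts" entry earlier in this journal.
omaha_server <<'EOF'
GROUP=lts
SERVER=disabled
EOF

# On a live Flatcar node:
#   omaha_server < /etc/flatcar/update.conf
#   update_engine_client -status
```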