Nov 1 01:35:01.028689 kernel: Linux version 6.6.113-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Oct 31 22:41:55 -00 2025 Nov 1 01:35:01.028703 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=ade41980c48607de3d2d18dc444731ec5388853e3a75ed2d5a13ce616b36f478 Nov 1 01:35:01.028710 kernel: BIOS-provided physical RAM map: Nov 1 01:35:01.028715 kernel: BIOS-e820: [mem 0x0000000000000000-0x00000000000997ff] usable Nov 1 01:35:01.028719 kernel: BIOS-e820: [mem 0x0000000000099800-0x000000000009ffff] reserved Nov 1 01:35:01.028723 kernel: BIOS-e820: [mem 0x00000000000e0000-0x00000000000fffff] reserved Nov 1 01:35:01.028727 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003fffffff] usable Nov 1 01:35:01.028732 kernel: BIOS-e820: [mem 0x0000000040000000-0x00000000403fffff] reserved Nov 1 01:35:01.028736 kernel: BIOS-e820: [mem 0x0000000040400000-0x0000000081b25fff] usable Nov 1 01:35:01.028740 kernel: BIOS-e820: [mem 0x0000000081b26000-0x0000000081b26fff] ACPI NVS Nov 1 01:35:01.028744 kernel: BIOS-e820: [mem 0x0000000081b27000-0x0000000081b27fff] reserved Nov 1 01:35:01.028749 kernel: BIOS-e820: [mem 0x0000000081b28000-0x000000008afccfff] usable Nov 1 01:35:01.028753 kernel: BIOS-e820: [mem 0x000000008afcd000-0x000000008c0b1fff] reserved Nov 1 01:35:01.028758 kernel: BIOS-e820: [mem 0x000000008c0b2000-0x000000008c23afff] usable Nov 1 01:35:01.028763 kernel: BIOS-e820: [mem 0x000000008c23b000-0x000000008c66cfff] ACPI NVS Nov 1 01:35:01.028768 kernel: BIOS-e820: [mem 0x000000008c66d000-0x000000008eefefff] reserved Nov 1 01:35:01.028773 kernel: BIOS-e820: [mem 0x000000008eeff000-0x000000008eefffff] usable Nov 1 01:35:01.028778 kernel: BIOS-e820: [mem 0x000000008ef00000-0x000000008fffffff] reserved Nov 1 01:35:01.028783 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved Nov 1 01:35:01.028787 kernel: BIOS-e820: [mem 0x00000000fe000000-0x00000000fe010fff] reserved Nov 1 01:35:01.028792 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec00fff] reserved Nov 1 01:35:01.028796 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved Nov 1 01:35:01.028801 kernel: BIOS-e820: [mem 0x00000000ff000000-0x00000000ffffffff] reserved Nov 1 01:35:01.028806 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000086effffff] usable Nov 1 01:35:01.028810 kernel: NX (Execute Disable) protection: active Nov 1 01:35:01.028815 kernel: APIC: Static calls initialized Nov 1 01:35:01.028820 kernel: SMBIOS 3.2.1 present. 
Nov 1 01:35:01.028824 kernel: DMI: Supermicro SYS-5019C-MR-PH004/X11SCM-F, BIOS 1.9 09/16/2022 Nov 1 01:35:01.028830 kernel: tsc: Detected 3400.000 MHz processor Nov 1 01:35:01.028835 kernel: tsc: Detected 3399.906 MHz TSC Nov 1 01:35:01.028839 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Nov 1 01:35:01.028845 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Nov 1 01:35:01.028849 kernel: last_pfn = 0x86f000 max_arch_pfn = 0x400000000 Nov 1 01:35:01.028854 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 23), built from 10 variable MTRRs Nov 1 01:35:01.028859 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Nov 1 01:35:01.028864 kernel: last_pfn = 0x8ef00 max_arch_pfn = 0x400000000 Nov 1 01:35:01.028869 kernel: Using GB pages for direct mapping Nov 1 01:35:01.028875 kernel: ACPI: Early table checksum verification disabled Nov 1 01:35:01.028880 kernel: ACPI: RSDP 0x00000000000F05B0 000024 (v02 SUPERM) Nov 1 01:35:01.028884 kernel: ACPI: XSDT 0x000000008C54E0C8 00010C (v01 SUPERM SUPERM 01072009 AMI 00010013) Nov 1 01:35:01.028891 kernel: ACPI: FACP 0x000000008C58A670 000114 (v06 01072009 AMI 00010013) Nov 1 01:35:01.028896 kernel: ACPI: DSDT 0x000000008C54E268 03C404 (v02 SUPERM SMCI--MB 01072009 INTL 20160527) Nov 1 01:35:01.028901 kernel: ACPI: FACS 0x000000008C66CF80 000040 Nov 1 01:35:01.028906 kernel: ACPI: APIC 0x000000008C58A788 00012C (v04 01072009 AMI 00010013) Nov 1 01:35:01.028913 kernel: ACPI: FPDT 0x000000008C58A8B8 000044 (v01 01072009 AMI 00010013) Nov 1 01:35:01.028918 kernel: ACPI: FIDT 0x000000008C58A900 00009C (v01 SUPERM SMCI--MB 01072009 AMI 00010013) Nov 1 01:35:01.028923 kernel: ACPI: MCFG 0x000000008C58A9A0 00003C (v01 SUPERM SMCI--MB 01072009 MSFT 00000097) Nov 1 01:35:01.028928 kernel: ACPI: SPMI 0x000000008C58A9E0 000041 (v05 SUPERM SMCI--MB 00000000 AMI. 
00000000) Nov 1 01:35:01.028933 kernel: ACPI: SSDT 0x000000008C58AA28 001B1C (v02 CpuRef CpuSsdt 00003000 INTL 20160527) Nov 1 01:35:01.028938 kernel: ACPI: SSDT 0x000000008C58C548 0031C6 (v02 SaSsdt SaSsdt 00003000 INTL 20160527) Nov 1 01:35:01.028943 kernel: ACPI: SSDT 0x000000008C58F710 00232B (v02 PegSsd PegSsdt 00001000 INTL 20160527) Nov 1 01:35:01.028949 kernel: ACPI: HPET 0x000000008C591A40 000038 (v01 SUPERM SMCI--MB 00000002 01000013) Nov 1 01:35:01.028954 kernel: ACPI: SSDT 0x000000008C591A78 000FAE (v02 SUPERM Ther_Rvp 00001000 INTL 20160527) Nov 1 01:35:01.028959 kernel: ACPI: SSDT 0x000000008C592A28 0008F4 (v02 INTEL xh_mossb 00000000 INTL 20160527) Nov 1 01:35:01.028964 kernel: ACPI: UEFI 0x000000008C593320 000042 (v01 SUPERM SMCI--MB 00000002 01000013) Nov 1 01:35:01.028969 kernel: ACPI: LPIT 0x000000008C593368 000094 (v01 SUPERM SMCI--MB 00000002 01000013) Nov 1 01:35:01.028974 kernel: ACPI: SSDT 0x000000008C593400 0027DE (v02 SUPERM PtidDevc 00001000 INTL 20160527) Nov 1 01:35:01.028979 kernel: ACPI: SSDT 0x000000008C595BE0 0014E2 (v02 SUPERM TbtTypeC 00000000 INTL 20160527) Nov 1 01:35:01.028984 kernel: ACPI: DBGP 0x000000008C5970C8 000034 (v01 SUPERM SMCI--MB 00000002 01000013) Nov 1 01:35:01.028989 kernel: ACPI: DBG2 0x000000008C597100 000054 (v00 SUPERM SMCI--MB 00000002 01000013) Nov 1 01:35:01.028995 kernel: ACPI: SSDT 0x000000008C597158 001B67 (v02 SUPERM UsbCTabl 00001000 INTL 20160527) Nov 1 01:35:01.029000 kernel: ACPI: DMAR 0x000000008C598CC0 000070 (v01 INTEL EDK2 00000002 01000013) Nov 1 01:35:01.029005 kernel: ACPI: SSDT 0x000000008C598D30 000144 (v02 Intel ADebTabl 00001000 INTL 20160527) Nov 1 01:35:01.029011 kernel: ACPI: TPM2 0x000000008C598E78 000034 (v04 SUPERM SMCI--MB 00000001 AMI 00000000) Nov 1 01:35:01.029016 kernel: ACPI: SSDT 0x000000008C598EB0 000D8F (v02 INTEL SpsNm 00000002 INTL 20160527) Nov 1 01:35:01.029021 kernel: ACPI: WSMT 0x000000008C599C40 000028 (v01 SUPERM 01072009 AMI 00010013) Nov 1 01:35:01.029026 kernel: ACPI: EINJ 0x000000008C599C68 000130 (v01 AMI AMI.EINJ 00000000 AMI. 00000000) Nov 1 01:35:01.029031 kernel: ACPI: ERST 0x000000008C599D98 000230 (v01 AMIER AMI.ERST 00000000 AMI. 00000000) Nov 1 01:35:01.029037 kernel: ACPI: BERT 0x000000008C599FC8 000030 (v01 AMI AMI.BERT 00000000 AMI. 00000000) Nov 1 01:35:01.029042 kernel: ACPI: HEST 0x000000008C599FF8 00027C (v01 AMI AMI.HEST 00000000 AMI. 
00000000) Nov 1 01:35:01.029047 kernel: ACPI: SSDT 0x000000008C59A278 000162 (v01 SUPERM SMCCDN 00000000 INTL 20181221) Nov 1 01:35:01.029052 kernel: ACPI: Reserving FACP table memory at [mem 0x8c58a670-0x8c58a783] Nov 1 01:35:01.029057 kernel: ACPI: Reserving DSDT table memory at [mem 0x8c54e268-0x8c58a66b] Nov 1 01:35:01.029062 kernel: ACPI: Reserving FACS table memory at [mem 0x8c66cf80-0x8c66cfbf] Nov 1 01:35:01.029067 kernel: ACPI: Reserving APIC table memory at [mem 0x8c58a788-0x8c58a8b3] Nov 1 01:35:01.029072 kernel: ACPI: Reserving FPDT table memory at [mem 0x8c58a8b8-0x8c58a8fb] Nov 1 01:35:01.029077 kernel: ACPI: Reserving FIDT table memory at [mem 0x8c58a900-0x8c58a99b] Nov 1 01:35:01.029083 kernel: ACPI: Reserving MCFG table memory at [mem 0x8c58a9a0-0x8c58a9db] Nov 1 01:35:01.029088 kernel: ACPI: Reserving SPMI table memory at [mem 0x8c58a9e0-0x8c58aa20] Nov 1 01:35:01.029093 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58aa28-0x8c58c543] Nov 1 01:35:01.029098 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58c548-0x8c58f70d] Nov 1 01:35:01.029103 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58f710-0x8c591a3a] Nov 1 01:35:01.029108 kernel: ACPI: Reserving HPET table memory at [mem 0x8c591a40-0x8c591a77] Nov 1 01:35:01.029113 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c591a78-0x8c592a25] Nov 1 01:35:01.029118 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c592a28-0x8c59331b] Nov 1 01:35:01.029123 kernel: ACPI: Reserving UEFI table memory at [mem 0x8c593320-0x8c593361] Nov 1 01:35:01.029129 kernel: ACPI: Reserving LPIT table memory at [mem 0x8c593368-0x8c5933fb] Nov 1 01:35:01.029134 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c593400-0x8c595bdd] Nov 1 01:35:01.029139 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c595be0-0x8c5970c1] Nov 1 01:35:01.029144 kernel: ACPI: Reserving DBGP table memory at [mem 0x8c5970c8-0x8c5970fb] Nov 1 01:35:01.029149 kernel: ACPI: Reserving DBG2 table memory at [mem 0x8c597100-0x8c597153] Nov 1 01:35:01.029154 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c597158-0x8c598cbe] Nov 1 01:35:01.029159 kernel: ACPI: Reserving DMAR table memory at [mem 0x8c598cc0-0x8c598d2f] Nov 1 01:35:01.029164 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c598d30-0x8c598e73] Nov 1 01:35:01.029169 kernel: ACPI: Reserving TPM2 table memory at [mem 0x8c598e78-0x8c598eab] Nov 1 01:35:01.029175 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c598eb0-0x8c599c3e] Nov 1 01:35:01.029180 kernel: ACPI: Reserving WSMT table memory at [mem 0x8c599c40-0x8c599c67] Nov 1 01:35:01.029185 kernel: ACPI: Reserving EINJ table memory at [mem 0x8c599c68-0x8c599d97] Nov 1 01:35:01.029190 kernel: ACPI: Reserving ERST table memory at [mem 0x8c599d98-0x8c599fc7] Nov 1 01:35:01.029194 kernel: ACPI: Reserving BERT table memory at [mem 0x8c599fc8-0x8c599ff7] Nov 1 01:35:01.029200 kernel: ACPI: Reserving HEST table memory at [mem 0x8c599ff8-0x8c59a273] Nov 1 01:35:01.029205 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c59a278-0x8c59a3d9] Nov 1 01:35:01.029214 kernel: No NUMA configuration found Nov 1 01:35:01.029219 kernel: Faking a node at [mem 0x0000000000000000-0x000000086effffff] Nov 1 01:35:01.029244 kernel: NODE_DATA(0) allocated [mem 0x86effa000-0x86effffff] Nov 1 01:35:01.029250 kernel: Zone ranges: Nov 1 01:35:01.029270 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Nov 1 01:35:01.029276 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Nov 1 01:35:01.029281 kernel: Normal [mem 
0x0000000100000000-0x000000086effffff] Nov 1 01:35:01.029286 kernel: Movable zone start for each node Nov 1 01:35:01.029291 kernel: Early memory node ranges Nov 1 01:35:01.029296 kernel: node 0: [mem 0x0000000000001000-0x0000000000098fff] Nov 1 01:35:01.029301 kernel: node 0: [mem 0x0000000000100000-0x000000003fffffff] Nov 1 01:35:01.029306 kernel: node 0: [mem 0x0000000040400000-0x0000000081b25fff] Nov 1 01:35:01.029312 kernel: node 0: [mem 0x0000000081b28000-0x000000008afccfff] Nov 1 01:35:01.029317 kernel: node 0: [mem 0x000000008c0b2000-0x000000008c23afff] Nov 1 01:35:01.029322 kernel: node 0: [mem 0x000000008eeff000-0x000000008eefffff] Nov 1 01:35:01.029327 kernel: node 0: [mem 0x0000000100000000-0x000000086effffff] Nov 1 01:35:01.029335 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000086effffff] Nov 1 01:35:01.029341 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Nov 1 01:35:01.029347 kernel: On node 0, zone DMA: 103 pages in unavailable ranges Nov 1 01:35:01.029352 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges Nov 1 01:35:01.029358 kernel: On node 0, zone DMA32: 2 pages in unavailable ranges Nov 1 01:35:01.029364 kernel: On node 0, zone DMA32: 4325 pages in unavailable ranges Nov 1 01:35:01.029369 kernel: On node 0, zone DMA32: 11460 pages in unavailable ranges Nov 1 01:35:01.029375 kernel: On node 0, zone Normal: 4352 pages in unavailable ranges Nov 1 01:35:01.029380 kernel: On node 0, zone Normal: 4096 pages in unavailable ranges Nov 1 01:35:01.029385 kernel: ACPI: PM-Timer IO Port: 0x1808 Nov 1 01:35:01.029391 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1]) Nov 1 01:35:01.029396 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1]) Nov 1 01:35:01.029401 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1]) Nov 1 01:35:01.029408 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1]) Nov 1 01:35:01.029413 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1]) Nov 1 01:35:01.029418 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1]) Nov 1 01:35:01.029424 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1]) Nov 1 01:35:01.029429 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1]) Nov 1 01:35:01.029434 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1]) Nov 1 01:35:01.029440 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1]) Nov 1 01:35:01.029445 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1]) Nov 1 01:35:01.029450 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1]) Nov 1 01:35:01.029457 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1]) Nov 1 01:35:01.029462 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1]) Nov 1 01:35:01.029467 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1]) Nov 1 01:35:01.029472 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1]) Nov 1 01:35:01.029478 kernel: IOAPIC[0]: apic_id 2, version 32, address 0xfec00000, GSI 0-119 Nov 1 01:35:01.029483 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Nov 1 01:35:01.029488 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Nov 1 01:35:01.029494 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Nov 1 01:35:01.029499 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Nov 1 01:35:01.029505 kernel: TSC deadline timer available Nov 1 01:35:01.029511 kernel: smpboot: Allowing 16 CPUs, 0 hotplug CPUs Nov 1 01:35:01.029516 kernel: [mem 0x90000000-0xdfffffff] available for PCI devices Nov 1 01:35:01.029522 
kernel: Booting paravirtualized kernel on bare hardware Nov 1 01:35:01.029527 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Nov 1 01:35:01.029533 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1 Nov 1 01:35:01.029538 kernel: percpu: Embedded 58 pages/cpu s196712 r8192 d32664 u262144 Nov 1 01:35:01.029544 kernel: pcpu-alloc: s196712 r8192 d32664 u262144 alloc=1*2097152 Nov 1 01:35:01.029549 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15 Nov 1 01:35:01.029556 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=ade41980c48607de3d2d18dc444731ec5388853e3a75ed2d5a13ce616b36f478 Nov 1 01:35:01.029561 kernel: random: crng init done Nov 1 01:35:01.029567 kernel: Dentry cache hash table entries: 4194304 (order: 13, 33554432 bytes, linear) Nov 1 01:35:01.029572 kernel: Inode-cache hash table entries: 2097152 (order: 12, 16777216 bytes, linear) Nov 1 01:35:01.029578 kernel: Fallback order for Node 0: 0 Nov 1 01:35:01.029583 kernel: Built 1 zonelists, mobility grouping on. Total pages: 8232415 Nov 1 01:35:01.029588 kernel: Policy zone: Normal Nov 1 01:35:01.029594 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Nov 1 01:35:01.029599 kernel: software IO TLB: area num 16. Nov 1 01:35:01.029605 kernel: Memory: 32720312K/33452980K available (12288K kernel code, 2288K rwdata, 22748K rodata, 42884K init, 2316K bss, 732408K reserved, 0K cma-reserved) Nov 1 01:35:01.029611 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1 Nov 1 01:35:01.029616 kernel: ftrace: allocating 37980 entries in 149 pages Nov 1 01:35:01.029622 kernel: ftrace: allocated 149 pages with 4 groups Nov 1 01:35:01.029627 kernel: Dynamic Preempt: voluntary Nov 1 01:35:01.029633 kernel: rcu: Preemptible hierarchical RCU implementation. Nov 1 01:35:01.029638 kernel: rcu: RCU event tracing is enabled. Nov 1 01:35:01.029644 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16. Nov 1 01:35:01.029650 kernel: Trampoline variant of Tasks RCU enabled. Nov 1 01:35:01.029656 kernel: Rude variant of Tasks RCU enabled. Nov 1 01:35:01.029661 kernel: Tracing variant of Tasks RCU enabled. Nov 1 01:35:01.029667 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Nov 1 01:35:01.029672 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16 Nov 1 01:35:01.029677 kernel: NR_IRQS: 33024, nr_irqs: 2184, preallocated irqs: 16 Nov 1 01:35:01.029683 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Nov 1 01:35:01.029688 kernel: Console: colour dummy device 80x25 Nov 1 01:35:01.029693 kernel: printk: console [tty0] enabled Nov 1 01:35:01.029699 kernel: printk: console [ttyS1] enabled Nov 1 01:35:01.029705 kernel: ACPI: Core revision 20230628 Nov 1 01:35:01.029710 kernel: hpet: HPET dysfunctional in PC10. Force disabled. 
Nov 1 01:35:01.029716 kernel: APIC: Switch to symmetric I/O mode setup Nov 1 01:35:01.029721 kernel: DMAR: Host address width 39 Nov 1 01:35:01.029726 kernel: DMAR: DRHD base: 0x000000fed91000 flags: 0x1 Nov 1 01:35:01.029732 kernel: DMAR: dmar0: reg_base_addr fed91000 ver 1:0 cap d2008c40660462 ecap f050da Nov 1 01:35:01.029737 kernel: DMAR: RMRR base: 0x0000008cf18000 end: 0x0000008d161fff Nov 1 01:35:01.029743 kernel: DMAR-IR: IOAPIC id 2 under DRHD base 0xfed91000 IOMMU 0 Nov 1 01:35:01.029748 kernel: DMAR-IR: HPET id 0 under DRHD base 0xfed91000 Nov 1 01:35:01.029754 kernel: DMAR-IR: Queued invalidation will be enabled to support x2apic and Intr-remapping. Nov 1 01:35:01.029760 kernel: DMAR-IR: Enabled IRQ remapping in x2apic mode Nov 1 01:35:01.029765 kernel: x2apic enabled Nov 1 01:35:01.029770 kernel: APIC: Switched APIC routing to: cluster x2apic Nov 1 01:35:01.029776 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x3101f59f5e6, max_idle_ns: 440795259996 ns Nov 1 01:35:01.029782 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 6799.81 BogoMIPS (lpj=3399906) Nov 1 01:35:01.029787 kernel: CPU0: Thermal monitoring enabled (TM1) Nov 1 01:35:01.029792 kernel: process: using mwait in idle threads Nov 1 01:35:01.029798 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Nov 1 01:35:01.029804 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4 Nov 1 01:35:01.029809 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Nov 1 01:35:01.029815 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit Nov 1 01:35:01.029820 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall Nov 1 01:35:01.029825 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS Nov 1 01:35:01.029831 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT Nov 1 01:35:01.029836 kernel: RETBleed: Mitigation: Enhanced IBRS Nov 1 01:35:01.029841 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Nov 1 01:35:01.029846 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Nov 1 01:35:01.029852 kernel: TAA: Mitigation: TSX disabled Nov 1 01:35:01.029857 kernel: MMIO Stale Data: Mitigation: Clear CPU buffers Nov 1 01:35:01.029863 kernel: SRBDS: Mitigation: Microcode Nov 1 01:35:01.029869 kernel: GDS: Mitigation: Microcode Nov 1 01:35:01.029874 kernel: active return thunk: its_return_thunk Nov 1 01:35:01.029879 kernel: ITS: Mitigation: Aligned branch/return thunks Nov 1 01:35:01.029885 kernel: VMSCAPE: Mitigation: IBPB before exit to userspace Nov 1 01:35:01.029890 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Nov 1 01:35:01.029896 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Nov 1 01:35:01.029901 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Nov 1 01:35:01.029906 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers' Nov 1 01:35:01.029912 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR' Nov 1 01:35:01.029917 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Nov 1 01:35:01.029923 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64 Nov 1 01:35:01.029928 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64 Nov 1 01:35:01.029934 kernel: x86/fpu: Enabled xstate features 0x1f, context size is 960 bytes, using 'compacted' format. 
Nov 1 01:35:01.029939 kernel: Freeing SMP alternatives memory: 32K Nov 1 01:35:01.029944 kernel: pid_max: default: 32768 minimum: 301 Nov 1 01:35:01.029950 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Nov 1 01:35:01.029955 kernel: landlock: Up and running. Nov 1 01:35:01.029960 kernel: SELinux: Initializing. Nov 1 01:35:01.029966 kernel: Mount-cache hash table entries: 65536 (order: 7, 524288 bytes, linear) Nov 1 01:35:01.029971 kernel: Mountpoint-cache hash table entries: 65536 (order: 7, 524288 bytes, linear) Nov 1 01:35:01.029977 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd) Nov 1 01:35:01.029982 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Nov 1 01:35:01.029988 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Nov 1 01:35:01.029994 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Nov 1 01:35:01.029999 kernel: Performance Events: PEBS fmt3+, Skylake events, 32-deep LBR, full-width counters, Intel PMU driver. Nov 1 01:35:01.030005 kernel: ... version: 4 Nov 1 01:35:01.030010 kernel: ... bit width: 48 Nov 1 01:35:01.030015 kernel: ... generic registers: 4 Nov 1 01:35:01.030021 kernel: ... value mask: 0000ffffffffffff Nov 1 01:35:01.030026 kernel: ... max period: 00007fffffffffff Nov 1 01:35:01.030031 kernel: ... fixed-purpose events: 3 Nov 1 01:35:01.030038 kernel: ... event mask: 000000070000000f Nov 1 01:35:01.030043 kernel: signal: max sigframe size: 2032 Nov 1 01:35:01.030048 kernel: Estimated ratio of average max frequency by base frequency (times 1024): 1445 Nov 1 01:35:01.030054 kernel: rcu: Hierarchical SRCU implementation. Nov 1 01:35:01.030059 kernel: rcu: Max phase no-delay instances is 400. Nov 1 01:35:01.030065 kernel: NMI watchdog: Enabled. Permanently consumes one hw-PMU counter. Nov 1 01:35:01.030070 kernel: smp: Bringing up secondary CPUs ... Nov 1 01:35:01.030075 kernel: smpboot: x86: Booting SMP configuration: Nov 1 01:35:01.030081 kernel: .... node #0, CPUs: #1 #2 #3 #4 #5 #6 #7 #8 #9 #10 #11 #12 #13 #14 #15 Nov 1 01:35:01.030087 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. 
Nov 1 01:35:01.030093 kernel: smp: Brought up 1 node, 16 CPUs Nov 1 01:35:01.030098 kernel: smpboot: Max logical packages: 1 Nov 1 01:35:01.030104 kernel: smpboot: Total of 16 processors activated (108796.99 BogoMIPS) Nov 1 01:35:01.030109 kernel: devtmpfs: initialized Nov 1 01:35:01.030115 kernel: x86/mm: Memory block size: 128MB Nov 1 01:35:01.030120 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x81b26000-0x81b26fff] (4096 bytes) Nov 1 01:35:01.030125 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x8c23b000-0x8c66cfff] (4399104 bytes) Nov 1 01:35:01.030132 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Nov 1 01:35:01.030137 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear) Nov 1 01:35:01.030143 kernel: pinctrl core: initialized pinctrl subsystem Nov 1 01:35:01.030148 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Nov 1 01:35:01.030153 kernel: audit: initializing netlink subsys (disabled) Nov 1 01:35:01.030159 kernel: audit: type=2000 audit(1761960895.040:1): state=initialized audit_enabled=0 res=1 Nov 1 01:35:01.030164 kernel: thermal_sys: Registered thermal governor 'step_wise' Nov 1 01:35:01.030169 kernel: thermal_sys: Registered thermal governor 'user_space' Nov 1 01:35:01.030175 kernel: cpuidle: using governor menu Nov 1 01:35:01.030181 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Nov 1 01:35:01.030186 kernel: dca service started, version 1.12.1 Nov 1 01:35:01.030192 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000) Nov 1 01:35:01.030197 kernel: PCI: Using configuration type 1 for base access Nov 1 01:35:01.030202 kernel: ENERGY_PERF_BIAS: Set to 'normal', was 'performance' Nov 1 01:35:01.030208 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Nov 1 01:35:01.030216 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Nov 1 01:35:01.030222 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Nov 1 01:35:01.030227 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Nov 1 01:35:01.030254 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Nov 1 01:35:01.030259 kernel: ACPI: Added _OSI(Module Device) Nov 1 01:35:01.030265 kernel: ACPI: Added _OSI(Processor Device) Nov 1 01:35:01.030270 kernel: ACPI: Added _OSI(Processor Aggregator Device) Nov 1 01:35:01.030289 kernel: ACPI: 12 ACPI AML tables successfully acquired and loaded Nov 1 01:35:01.030294 kernel: ACPI: Dynamic OEM Table Load: Nov 1 01:35:01.030300 kernel: ACPI: SSDT 0xFFFF8ECFC1007800 000400 (v02 PmRef Cpu0Cst 00003001 INTL 20160527) Nov 1 01:35:01.030305 kernel: ACPI: Dynamic OEM Table Load: Nov 1 01:35:01.030311 kernel: ACPI: SSDT 0xFFFF8ECFC0FFC000 000683 (v02 PmRef Cpu0Ist 00003000 INTL 20160527) Nov 1 01:35:01.030317 kernel: ACPI: Dynamic OEM Table Load: Nov 1 01:35:01.030322 kernel: ACPI: SSDT 0xFFFF8ECFC0FE5D00 0000F4 (v02 PmRef Cpu0Psd 00003000 INTL 20160527) Nov 1 01:35:01.030328 kernel: ACPI: Dynamic OEM Table Load: Nov 1 01:35:01.030333 kernel: ACPI: SSDT 0xFFFF8ECFC0FFE000 0005FC (v02 PmRef ApIst 00003000 INTL 20160527) Nov 1 01:35:01.030338 kernel: ACPI: Dynamic OEM Table Load: Nov 1 01:35:01.030343 kernel: ACPI: SSDT 0xFFFF8ECFC100F000 000AB0 (v02 PmRef ApPsd 00003000 INTL 20160527) Nov 1 01:35:01.030349 kernel: ACPI: Dynamic OEM Table Load: Nov 1 01:35:01.030354 kernel: ACPI: SSDT 0xFFFF8ECFC1002400 00030A (v02 PmRef ApCst 00003000 INTL 20160527) Nov 1 01:35:01.030360 kernel: ACPI: _OSC evaluated successfully for all CPUs Nov 1 01:35:01.030365 kernel: ACPI: Interpreter enabled Nov 1 01:35:01.030371 kernel: ACPI: PM: (supports S0 S5) Nov 1 01:35:01.030377 kernel: ACPI: Using IOAPIC for interrupt routing Nov 1 01:35:01.030382 kernel: HEST: Enabling Firmware First mode for corrected errors. Nov 1 01:35:01.030387 kernel: mce: [Firmware Bug]: Ignoring request to disable invalid MCA bank 14. Nov 1 01:35:01.030392 kernel: HEST: Table parsing has been initialized. Nov 1 01:35:01.030398 kernel: GHES: APEI firmware first mode is enabled by APEI bit and WHEA _OSC. 
Nov 1 01:35:01.030403 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Nov 1 01:35:01.030409 kernel: PCI: Using E820 reservations for host bridge windows Nov 1 01:35:01.030414 kernel: ACPI: Enabled 9 GPEs in block 00 to 7F Nov 1 01:35:01.030421 kernel: ACPI: \_SB_.PCI0.XDCI.USBC: New power resource Nov 1 01:35:01.030426 kernel: ACPI: \_SB_.PCI0.SAT0.VOL0.V0PR: New power resource Nov 1 01:35:01.030432 kernel: ACPI: \_SB_.PCI0.SAT0.VOL1.V1PR: New power resource Nov 1 01:35:01.030437 kernel: ACPI: \_SB_.PCI0.SAT0.VOL2.V2PR: New power resource Nov 1 01:35:01.030442 kernel: ACPI: \_SB_.PCI0.CNVW.WRST: New power resource Nov 1 01:35:01.030448 kernel: ACPI: \_TZ_.FN00: New power resource Nov 1 01:35:01.030453 kernel: ACPI: \_TZ_.FN01: New power resource Nov 1 01:35:01.030458 kernel: ACPI: \_TZ_.FN02: New power resource Nov 1 01:35:01.030464 kernel: ACPI: \_TZ_.FN03: New power resource Nov 1 01:35:01.030470 kernel: ACPI: \_TZ_.FN04: New power resource Nov 1 01:35:01.030476 kernel: ACPI: \PIN_: New power resource Nov 1 01:35:01.030481 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-fe]) Nov 1 01:35:01.030565 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Nov 1 01:35:01.030661 kernel: acpi PNP0A08:00: _OSC: platform does not support [AER] Nov 1 01:35:01.030713 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability LTR] Nov 1 01:35:01.030721 kernel: PCI host bridge to bus 0000:00 Nov 1 01:35:01.030774 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Nov 1 01:35:01.030820 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Nov 1 01:35:01.030864 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Nov 1 01:35:01.030908 kernel: pci_bus 0000:00: root bus resource [mem 0x90000000-0xdfffffff window] Nov 1 01:35:01.030951 kernel: pci_bus 0000:00: root bus resource [mem 0xfc800000-0xfe7fffff window] Nov 1 01:35:01.030995 kernel: pci_bus 0000:00: root bus resource [bus 00-fe] Nov 1 01:35:01.031054 kernel: pci 0000:00:00.0: [8086:3e31] type 00 class 0x060000 Nov 1 01:35:01.031116 kernel: pci 0000:00:01.0: [8086:1901] type 01 class 0x060400 Nov 1 01:35:01.031167 kernel: pci 0000:00:01.0: PME# supported from D0 D3hot D3cold Nov 1 01:35:01.031246 kernel: pci 0000:00:08.0: [8086:1911] type 00 class 0x088000 Nov 1 01:35:01.031313 kernel: pci 0000:00:08.0: reg 0x10: [mem 0x9551f000-0x9551ffff 64bit] Nov 1 01:35:01.031367 kernel: pci 0000:00:12.0: [8086:a379] type 00 class 0x118000 Nov 1 01:35:01.031418 kernel: pci 0000:00:12.0: reg 0x10: [mem 0x9551e000-0x9551efff 64bit] Nov 1 01:35:01.031475 kernel: pci 0000:00:14.0: [8086:a36d] type 00 class 0x0c0330 Nov 1 01:35:01.031527 kernel: pci 0000:00:14.0: reg 0x10: [mem 0x95500000-0x9550ffff 64bit] Nov 1 01:35:01.031577 kernel: pci 0000:00:14.0: PME# supported from D3hot D3cold Nov 1 01:35:01.031631 kernel: pci 0000:00:14.2: [8086:a36f] type 00 class 0x050000 Nov 1 01:35:01.031681 kernel: pci 0000:00:14.2: reg 0x10: [mem 0x95512000-0x95513fff 64bit] Nov 1 01:35:01.031730 kernel: pci 0000:00:14.2: reg 0x18: [mem 0x9551d000-0x9551dfff 64bit] Nov 1 01:35:01.031786 kernel: pci 0000:00:15.0: [8086:a368] type 00 class 0x0c8000 Nov 1 01:35:01.031837 kernel: pci 0000:00:15.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit] Nov 1 01:35:01.031893 kernel: pci 0000:00:15.1: [8086:a369] type 00 class 0x0c8000 Nov 1 01:35:01.031943 kernel: pci 0000:00:15.1: reg 0x10: [mem 0x00000000-0x00000fff 64bit] 
Nov 1 01:35:01.031996 kernel: pci 0000:00:16.0: [8086:a360] type 00 class 0x078000 Nov 1 01:35:01.032047 kernel: pci 0000:00:16.0: reg 0x10: [mem 0x9551a000-0x9551afff 64bit] Nov 1 01:35:01.032100 kernel: pci 0000:00:16.0: PME# supported from D3hot Nov 1 01:35:01.032153 kernel: pci 0000:00:16.1: [8086:a361] type 00 class 0x078000 Nov 1 01:35:01.032216 kernel: pci 0000:00:16.1: reg 0x10: [mem 0x95519000-0x95519fff 64bit] Nov 1 01:35:01.032303 kernel: pci 0000:00:16.1: PME# supported from D3hot Nov 1 01:35:01.032358 kernel: pci 0000:00:16.4: [8086:a364] type 00 class 0x078000 Nov 1 01:35:01.032408 kernel: pci 0000:00:16.4: reg 0x10: [mem 0x95518000-0x95518fff 64bit] Nov 1 01:35:01.032458 kernel: pci 0000:00:16.4: PME# supported from D3hot Nov 1 01:35:01.032515 kernel: pci 0000:00:17.0: [8086:a352] type 00 class 0x010601 Nov 1 01:35:01.032566 kernel: pci 0000:00:17.0: reg 0x10: [mem 0x95510000-0x95511fff] Nov 1 01:35:01.032615 kernel: pci 0000:00:17.0: reg 0x14: [mem 0x95517000-0x955170ff] Nov 1 01:35:01.032665 kernel: pci 0000:00:17.0: reg 0x18: [io 0x6050-0x6057] Nov 1 01:35:01.032714 kernel: pci 0000:00:17.0: reg 0x1c: [io 0x6040-0x6043] Nov 1 01:35:01.032764 kernel: pci 0000:00:17.0: reg 0x20: [io 0x6020-0x603f] Nov 1 01:35:01.032816 kernel: pci 0000:00:17.0: reg 0x24: [mem 0x95516000-0x955167ff] Nov 1 01:35:01.032866 kernel: pci 0000:00:17.0: PME# supported from D3hot Nov 1 01:35:01.032920 kernel: pci 0000:00:1b.0: [8086:a340] type 01 class 0x060400 Nov 1 01:35:01.032971 kernel: pci 0000:00:1b.0: PME# supported from D0 D3hot D3cold Nov 1 01:35:01.033029 kernel: pci 0000:00:1b.4: [8086:a32c] type 01 class 0x060400 Nov 1 01:35:01.033082 kernel: pci 0000:00:1b.4: PME# supported from D0 D3hot D3cold Nov 1 01:35:01.033138 kernel: pci 0000:00:1b.5: [8086:a32d] type 01 class 0x060400 Nov 1 01:35:01.033189 kernel: pci 0000:00:1b.5: PME# supported from D0 D3hot D3cold Nov 1 01:35:01.033283 kernel: pci 0000:00:1c.0: [8086:a338] type 01 class 0x060400 Nov 1 01:35:01.033334 kernel: pci 0000:00:1c.0: PME# supported from D0 D3hot D3cold Nov 1 01:35:01.033389 kernel: pci 0000:00:1c.3: [8086:a33b] type 01 class 0x060400 Nov 1 01:35:01.033443 kernel: pci 0000:00:1c.3: PME# supported from D0 D3hot D3cold Nov 1 01:35:01.033498 kernel: pci 0000:00:1e.0: [8086:a328] type 00 class 0x078000 Nov 1 01:35:01.033548 kernel: pci 0000:00:1e.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit] Nov 1 01:35:01.033603 kernel: pci 0000:00:1f.0: [8086:a309] type 00 class 0x060100 Nov 1 01:35:01.033656 kernel: pci 0000:00:1f.4: [8086:a323] type 00 class 0x0c0500 Nov 1 01:35:01.033707 kernel: pci 0000:00:1f.4: reg 0x10: [mem 0x95514000-0x955140ff 64bit] Nov 1 01:35:01.033760 kernel: pci 0000:00:1f.4: reg 0x20: [io 0xefa0-0xefbf] Nov 1 01:35:01.033816 kernel: pci 0000:00:1f.5: [8086:a324] type 00 class 0x0c8000 Nov 1 01:35:01.033867 kernel: pci 0000:00:1f.5: reg 0x10: [mem 0xfe010000-0xfe010fff] Nov 1 01:35:01.033924 kernel: pci 0000:01:00.0: [15b3:1015] type 00 class 0x020000 Nov 1 01:35:01.033977 kernel: pci 0000:01:00.0: reg 0x10: [mem 0x92000000-0x93ffffff 64bit pref] Nov 1 01:35:01.034029 kernel: pci 0000:01:00.0: reg 0x30: [mem 0x95200000-0x952fffff pref] Nov 1 01:35:01.034083 kernel: pci 0000:01:00.0: PME# supported from D3cold Nov 1 01:35:01.034135 kernel: pci 0000:01:00.0: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref] Nov 1 01:35:01.034188 kernel: pci 0000:01:00.0: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs) Nov 1 01:35:01.034288 kernel: pci 0000:01:00.1: [15b3:1015] type 00 
class 0x020000 Nov 1 01:35:01.034342 kernel: pci 0000:01:00.1: reg 0x10: [mem 0x90000000-0x91ffffff 64bit pref] Nov 1 01:35:01.034393 kernel: pci 0000:01:00.1: reg 0x30: [mem 0x95100000-0x951fffff pref] Nov 1 01:35:01.034444 kernel: pci 0000:01:00.1: PME# supported from D3cold Nov 1 01:35:01.034500 kernel: pci 0000:01:00.1: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref] Nov 1 01:35:01.034551 kernel: pci 0000:01:00.1: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs) Nov 1 01:35:01.034602 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Nov 1 01:35:01.034652 kernel: pci 0000:00:01.0: bridge window [mem 0x95100000-0x952fffff] Nov 1 01:35:01.034703 kernel: pci 0000:00:01.0: bridge window [mem 0x90000000-0x93ffffff 64bit pref] Nov 1 01:35:01.034753 kernel: pci 0000:00:1b.0: PCI bridge to [bus 02] Nov 1 01:35:01.034810 kernel: pci 0000:03:00.0: working around ROM BAR overlap defect Nov 1 01:35:01.034863 kernel: pci 0000:03:00.0: [8086:1533] type 00 class 0x020000 Nov 1 01:35:01.034917 kernel: pci 0000:03:00.0: reg 0x10: [mem 0x95400000-0x9547ffff] Nov 1 01:35:01.034968 kernel: pci 0000:03:00.0: reg 0x18: [io 0x5000-0x501f] Nov 1 01:35:01.035019 kernel: pci 0000:03:00.0: reg 0x1c: [mem 0x95480000-0x95483fff] Nov 1 01:35:01.035071 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold Nov 1 01:35:01.035122 kernel: pci 0000:00:1b.4: PCI bridge to [bus 03] Nov 1 01:35:01.035172 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] Nov 1 01:35:01.035247 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff] Nov 1 01:35:01.035327 kernel: pci 0000:04:00.0: working around ROM BAR overlap defect Nov 1 01:35:01.035379 kernel: pci 0000:04:00.0: [8086:1533] type 00 class 0x020000 Nov 1 01:35:01.035430 kernel: pci 0000:04:00.0: reg 0x10: [mem 0x95300000-0x9537ffff] Nov 1 01:35:01.035485 kernel: pci 0000:04:00.0: reg 0x18: [io 0x4000-0x401f] Nov 1 01:35:01.035536 kernel: pci 0000:04:00.0: reg 0x1c: [mem 0x95380000-0x95383fff] Nov 1 01:35:01.035587 kernel: pci 0000:04:00.0: PME# supported from D0 D3hot D3cold Nov 1 01:35:01.035638 kernel: pci 0000:00:1b.5: PCI bridge to [bus 04] Nov 1 01:35:01.035691 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] Nov 1 01:35:01.035744 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff] Nov 1 01:35:01.035794 kernel: pci 0000:00:1c.0: PCI bridge to [bus 05] Nov 1 01:35:01.035849 kernel: pci 0000:06:00.0: [1a03:1150] type 01 class 0x060400 Nov 1 01:35:01.035902 kernel: pci 0000:06:00.0: enabling Extended Tags Nov 1 01:35:01.035953 kernel: pci 0000:06:00.0: supports D1 D2 Nov 1 01:35:01.036004 kernel: pci 0000:06:00.0: PME# supported from D0 D1 D2 D3hot D3cold Nov 1 01:35:01.036055 kernel: pci 0000:00:1c.3: PCI bridge to [bus 06-07] Nov 1 01:35:01.036108 kernel: pci 0000:00:1c.3: bridge window [io 0x3000-0x3fff] Nov 1 01:35:01.036158 kernel: pci 0000:00:1c.3: bridge window [mem 0x94000000-0x950fffff] Nov 1 01:35:01.036217 kernel: pci_bus 0000:07: extended config space not accessible Nov 1 01:35:01.036322 kernel: pci 0000:07:00.0: [1a03:2000] type 00 class 0x030000 Nov 1 01:35:01.036376 kernel: pci 0000:07:00.0: reg 0x10: [mem 0x94000000-0x94ffffff] Nov 1 01:35:01.036430 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x95000000-0x9501ffff] Nov 1 01:35:01.036485 kernel: pci 0000:07:00.0: reg 0x18: [io 0x3000-0x307f] Nov 1 01:35:01.036540 kernel: pci 0000:07:00.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Nov 1 01:35:01.036594 kernel: pci 0000:07:00.0: supports D1 D2 Nov 1 
01:35:01.036646 kernel: pci 0000:07:00.0: PME# supported from D0 D1 D2 D3hot D3cold Nov 1 01:35:01.036699 kernel: pci 0000:06:00.0: PCI bridge to [bus 07] Nov 1 01:35:01.036751 kernel: pci 0000:06:00.0: bridge window [io 0x3000-0x3fff] Nov 1 01:35:01.036802 kernel: pci 0000:06:00.0: bridge window [mem 0x94000000-0x950fffff] Nov 1 01:35:01.036810 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 0 Nov 1 01:35:01.036816 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 1 Nov 1 01:35:01.036824 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 0 Nov 1 01:35:01.036829 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 0 Nov 1 01:35:01.036835 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 0 Nov 1 01:35:01.036841 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 0 Nov 1 01:35:01.036847 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 0 Nov 1 01:35:01.036852 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 0 Nov 1 01:35:01.036858 kernel: iommu: Default domain type: Translated Nov 1 01:35:01.036864 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Nov 1 01:35:01.036869 kernel: PCI: Using ACPI for IRQ routing Nov 1 01:35:01.036876 kernel: PCI: pci_cache_line_size set to 64 bytes Nov 1 01:35:01.036881 kernel: e820: reserve RAM buffer [mem 0x00099800-0x0009ffff] Nov 1 01:35:01.036887 kernel: e820: reserve RAM buffer [mem 0x81b26000-0x83ffffff] Nov 1 01:35:01.036893 kernel: e820: reserve RAM buffer [mem 0x8afcd000-0x8bffffff] Nov 1 01:35:01.036898 kernel: e820: reserve RAM buffer [mem 0x8c23b000-0x8fffffff] Nov 1 01:35:01.036904 kernel: e820: reserve RAM buffer [mem 0x8ef00000-0x8fffffff] Nov 1 01:35:01.036909 kernel: e820: reserve RAM buffer [mem 0x86f000000-0x86fffffff] Nov 1 01:35:01.036962 kernel: pci 0000:07:00.0: vgaarb: setting as boot VGA device Nov 1 01:35:01.037014 kernel: pci 0000:07:00.0: vgaarb: bridge control possible Nov 1 01:35:01.037071 kernel: pci 0000:07:00.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Nov 1 01:35:01.037079 kernel: vgaarb: loaded Nov 1 01:35:01.037085 kernel: clocksource: Switched to clocksource tsc-early Nov 1 01:35:01.037091 kernel: VFS: Disk quotas dquot_6.6.0 Nov 1 01:35:01.037097 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Nov 1 01:35:01.037102 kernel: pnp: PnP ACPI init Nov 1 01:35:01.037153 kernel: system 00:00: [mem 0x40000000-0x403fffff] has been reserved Nov 1 01:35:01.037202 kernel: pnp 00:02: [dma 0 disabled] Nov 1 01:35:01.037309 kernel: pnp 00:03: [dma 0 disabled] Nov 1 01:35:01.037361 kernel: system 00:04: [io 0x0680-0x069f] has been reserved Nov 1 01:35:01.037408 kernel: system 00:04: [io 0x164e-0x164f] has been reserved Nov 1 01:35:01.037456 kernel: system 00:05: [mem 0xfed10000-0xfed17fff] has been reserved Nov 1 01:35:01.037503 kernel: system 00:05: [mem 0xfed18000-0xfed18fff] has been reserved Nov 1 01:35:01.037549 kernel: system 00:05: [mem 0xfed19000-0xfed19fff] has been reserved Nov 1 01:35:01.037598 kernel: system 00:05: [mem 0xe0000000-0xefffffff] has been reserved Nov 1 01:35:01.037644 kernel: system 00:05: [mem 0xfed20000-0xfed3ffff] has been reserved Nov 1 01:35:01.037692 kernel: system 00:05: [mem 0xfed90000-0xfed93fff] could not be reserved Nov 1 01:35:01.037738 kernel: system 00:05: [mem 0xfed45000-0xfed8ffff] has been reserved Nov 1 01:35:01.037784 kernel: system 00:05: [mem 0xfee00000-0xfeefffff] could not be reserved Nov 1 01:35:01.037834 kernel: system 00:06: [io 0x1800-0x18fe] could not be reserved Nov 1 
01:35:01.037880 kernel: system 00:06: [mem 0xfd000000-0xfd69ffff] has been reserved Nov 1 01:35:01.037928 kernel: system 00:06: [mem 0xfd6c0000-0xfd6cffff] has been reserved Nov 1 01:35:01.037974 kernel: system 00:06: [mem 0xfd6f0000-0xfdffffff] has been reserved Nov 1 01:35:01.038019 kernel: system 00:06: [mem 0xfe000000-0xfe01ffff] could not be reserved Nov 1 01:35:01.038065 kernel: system 00:06: [mem 0xfe200000-0xfe7fffff] has been reserved Nov 1 01:35:01.038111 kernel: system 00:06: [mem 0xff000000-0xffffffff] has been reserved Nov 1 01:35:01.038159 kernel: system 00:07: [io 0x2000-0x20fe] has been reserved Nov 1 01:35:01.038169 kernel: pnp: PnP ACPI: found 9 devices Nov 1 01:35:01.038176 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Nov 1 01:35:01.038182 kernel: NET: Registered PF_INET protocol family Nov 1 01:35:01.038188 kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear) Nov 1 01:35:01.038194 kernel: tcp_listen_portaddr_hash hash table entries: 16384 (order: 6, 262144 bytes, linear) Nov 1 01:35:01.038200 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Nov 1 01:35:01.038206 kernel: TCP established hash table entries: 262144 (order: 9, 2097152 bytes, linear) Nov 1 01:35:01.038216 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Nov 1 01:35:01.038222 kernel: TCP: Hash tables configured (established 262144 bind 65536) Nov 1 01:35:01.038248 kernel: UDP hash table entries: 16384 (order: 7, 524288 bytes, linear) Nov 1 01:35:01.038255 kernel: UDP-Lite hash table entries: 16384 (order: 7, 524288 bytes, linear) Nov 1 01:35:01.038261 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Nov 1 01:35:01.038267 kernel: NET: Registered PF_XDP protocol family Nov 1 01:35:01.038339 kernel: pci 0000:00:15.0: BAR 0: assigned [mem 0x95515000-0x95515fff 64bit] Nov 1 01:35:01.038389 kernel: pci 0000:00:15.1: BAR 0: assigned [mem 0x9551b000-0x9551bfff 64bit] Nov 1 01:35:01.038441 kernel: pci 0000:00:1e.0: BAR 0: assigned [mem 0x9551c000-0x9551cfff 64bit] Nov 1 01:35:01.038493 kernel: pci 0000:01:00.0: BAR 7: no space for [mem size 0x00800000 64bit pref] Nov 1 01:35:01.038545 kernel: pci 0000:01:00.0: BAR 7: failed to assign [mem size 0x00800000 64bit pref] Nov 1 01:35:01.038599 kernel: pci 0000:01:00.1: BAR 7: no space for [mem size 0x00800000 64bit pref] Nov 1 01:35:01.038651 kernel: pci 0000:01:00.1: BAR 7: failed to assign [mem size 0x00800000 64bit pref] Nov 1 01:35:01.038702 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Nov 1 01:35:01.038754 kernel: pci 0000:00:01.0: bridge window [mem 0x95100000-0x952fffff] Nov 1 01:35:01.038803 kernel: pci 0000:00:01.0: bridge window [mem 0x90000000-0x93ffffff 64bit pref] Nov 1 01:35:01.038853 kernel: pci 0000:00:1b.0: PCI bridge to [bus 02] Nov 1 01:35:01.038907 kernel: pci 0000:00:1b.4: PCI bridge to [bus 03] Nov 1 01:35:01.038956 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] Nov 1 01:35:01.039007 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff] Nov 1 01:35:01.039056 kernel: pci 0000:00:1b.5: PCI bridge to [bus 04] Nov 1 01:35:01.039106 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] Nov 1 01:35:01.039155 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff] Nov 1 01:35:01.039205 kernel: pci 0000:00:1c.0: PCI bridge to [bus 05] Nov 1 01:35:01.039305 kernel: pci 0000:06:00.0: PCI bridge to [bus 07] Nov 1 01:35:01.039357 kernel: pci 0000:06:00.0: bridge window [io 
0x3000-0x3fff] Nov 1 01:35:01.039408 kernel: pci 0000:06:00.0: bridge window [mem 0x94000000-0x950fffff] Nov 1 01:35:01.039459 kernel: pci 0000:00:1c.3: PCI bridge to [bus 06-07] Nov 1 01:35:01.039509 kernel: pci 0000:00:1c.3: bridge window [io 0x3000-0x3fff] Nov 1 01:35:01.039559 kernel: pci 0000:00:1c.3: bridge window [mem 0x94000000-0x950fffff] Nov 1 01:35:01.039605 kernel: pci_bus 0000:00: Some PCI device resources are unassigned, try booting with pci=realloc Nov 1 01:35:01.039650 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Nov 1 01:35:01.039697 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Nov 1 01:35:01.039741 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Nov 1 01:35:01.039785 kernel: pci_bus 0000:00: resource 7 [mem 0x90000000-0xdfffffff window] Nov 1 01:35:01.039829 kernel: pci_bus 0000:00: resource 8 [mem 0xfc800000-0xfe7fffff window] Nov 1 01:35:01.039880 kernel: pci_bus 0000:01: resource 1 [mem 0x95100000-0x952fffff] Nov 1 01:35:01.039926 kernel: pci_bus 0000:01: resource 2 [mem 0x90000000-0x93ffffff 64bit pref] Nov 1 01:35:01.039977 kernel: pci_bus 0000:03: resource 0 [io 0x5000-0x5fff] Nov 1 01:35:01.040025 kernel: pci_bus 0000:03: resource 1 [mem 0x95400000-0x954fffff] Nov 1 01:35:01.040079 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff] Nov 1 01:35:01.040125 kernel: pci_bus 0000:04: resource 1 [mem 0x95300000-0x953fffff] Nov 1 01:35:01.040175 kernel: pci_bus 0000:06: resource 0 [io 0x3000-0x3fff] Nov 1 01:35:01.040246 kernel: pci_bus 0000:06: resource 1 [mem 0x94000000-0x950fffff] Nov 1 01:35:01.040316 kernel: pci_bus 0000:07: resource 0 [io 0x3000-0x3fff] Nov 1 01:35:01.040364 kernel: pci_bus 0000:07: resource 1 [mem 0x94000000-0x950fffff] Nov 1 01:35:01.040374 kernel: PCI: CLS 64 bytes, default 64 Nov 1 01:35:01.040380 kernel: DMAR: No ATSR found Nov 1 01:35:01.040386 kernel: DMAR: No SATC found Nov 1 01:35:01.040391 kernel: DMAR: dmar0: Using Queued invalidation Nov 1 01:35:01.040443 kernel: pci 0000:00:00.0: Adding to iommu group 0 Nov 1 01:35:01.040493 kernel: pci 0000:00:01.0: Adding to iommu group 1 Nov 1 01:35:01.040546 kernel: pci 0000:00:08.0: Adding to iommu group 2 Nov 1 01:35:01.040596 kernel: pci 0000:00:12.0: Adding to iommu group 3 Nov 1 01:35:01.040648 kernel: pci 0000:00:14.0: Adding to iommu group 4 Nov 1 01:35:01.040699 kernel: pci 0000:00:14.2: Adding to iommu group 4 Nov 1 01:35:01.040748 kernel: pci 0000:00:15.0: Adding to iommu group 5 Nov 1 01:35:01.040797 kernel: pci 0000:00:15.1: Adding to iommu group 5 Nov 1 01:35:01.040846 kernel: pci 0000:00:16.0: Adding to iommu group 6 Nov 1 01:35:01.040895 kernel: pci 0000:00:16.1: Adding to iommu group 6 Nov 1 01:35:01.040944 kernel: pci 0000:00:16.4: Adding to iommu group 6 Nov 1 01:35:01.040994 kernel: pci 0000:00:17.0: Adding to iommu group 7 Nov 1 01:35:01.041045 kernel: pci 0000:00:1b.0: Adding to iommu group 8 Nov 1 01:35:01.041096 kernel: pci 0000:00:1b.4: Adding to iommu group 9 Nov 1 01:35:01.041145 kernel: pci 0000:00:1b.5: Adding to iommu group 10 Nov 1 01:35:01.041195 kernel: pci 0000:00:1c.0: Adding to iommu group 11 Nov 1 01:35:01.041274 kernel: pci 0000:00:1c.3: Adding to iommu group 12 Nov 1 01:35:01.041344 kernel: pci 0000:00:1e.0: Adding to iommu group 13 Nov 1 01:35:01.041393 kernel: pci 0000:00:1f.0: Adding to iommu group 14 Nov 1 01:35:01.041443 kernel: pci 0000:00:1f.4: Adding to iommu group 14 Nov 1 01:35:01.041497 kernel: pci 0000:00:1f.5: Adding to iommu group 14 Nov 1 01:35:01.041548 kernel: pci 0000:01:00.0: Adding 
to iommu group 1 Nov 1 01:35:01.041600 kernel: pci 0000:01:00.1: Adding to iommu group 1 Nov 1 01:35:01.041652 kernel: pci 0000:03:00.0: Adding to iommu group 15 Nov 1 01:35:01.041704 kernel: pci 0000:04:00.0: Adding to iommu group 16 Nov 1 01:35:01.041755 kernel: pci 0000:06:00.0: Adding to iommu group 17 Nov 1 01:35:01.041809 kernel: pci 0000:07:00.0: Adding to iommu group 17 Nov 1 01:35:01.041817 kernel: DMAR: Intel(R) Virtualization Technology for Directed I/O Nov 1 01:35:01.041823 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Nov 1 01:35:01.041831 kernel: software IO TLB: mapped [mem 0x0000000086fcd000-0x000000008afcd000] (64MB) Nov 1 01:35:01.041837 kernel: RAPL PMU: API unit is 2^-32 Joules, 3 fixed counters, 655360 ms ovfl timer Nov 1 01:35:01.041843 kernel: RAPL PMU: hw unit of domain pp0-core 2^-14 Joules Nov 1 01:35:01.041848 kernel: RAPL PMU: hw unit of domain package 2^-14 Joules Nov 1 01:35:01.041854 kernel: RAPL PMU: hw unit of domain dram 2^-14 Joules Nov 1 01:35:01.041908 kernel: platform rtc_cmos: registered platform RTC device (no PNP device found) Nov 1 01:35:01.041916 kernel: Initialise system trusted keyrings Nov 1 01:35:01.041922 kernel: workingset: timestamp_bits=39 max_order=23 bucket_order=0 Nov 1 01:35:01.041930 kernel: Key type asymmetric registered Nov 1 01:35:01.041935 kernel: Asymmetric key parser 'x509' registered Nov 1 01:35:01.041941 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Nov 1 01:35:01.041947 kernel: io scheduler mq-deadline registered Nov 1 01:35:01.041952 kernel: io scheduler kyber registered Nov 1 01:35:01.041958 kernel: io scheduler bfq registered Nov 1 01:35:01.042008 kernel: pcieport 0000:00:01.0: PME: Signaling with IRQ 121 Nov 1 01:35:01.042059 kernel: pcieport 0000:00:1b.0: PME: Signaling with IRQ 122 Nov 1 01:35:01.042111 kernel: pcieport 0000:00:1b.4: PME: Signaling with IRQ 123 Nov 1 01:35:01.042162 kernel: pcieport 0000:00:1b.5: PME: Signaling with IRQ 124 Nov 1 01:35:01.042215 kernel: pcieport 0000:00:1c.0: PME: Signaling with IRQ 125 Nov 1 01:35:01.042309 kernel: pcieport 0000:00:1c.3: PME: Signaling with IRQ 126 Nov 1 01:35:01.042363 kernel: thermal LNXTHERM:00: registered as thermal_zone0 Nov 1 01:35:01.042372 kernel: ACPI: thermal: Thermal Zone [TZ00] (28 C) Nov 1 01:35:01.042378 kernel: ERST: Error Record Serialization Table (ERST) support is initialized. Nov 1 01:35:01.042384 kernel: pstore: Using crash dump compression: deflate Nov 1 01:35:01.042391 kernel: pstore: Registered erst as persistent store backend Nov 1 01:35:01.042397 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Nov 1 01:35:01.042403 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Nov 1 01:35:01.042409 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Nov 1 01:35:01.042415 kernel: 00:03: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Nov 1 01:35:01.042420 kernel: hpet_acpi_add: no address or irqs in _CRS Nov 1 01:35:01.042470 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x1B, rev-id 16) Nov 1 01:35:01.042479 kernel: i8042: PNP: No PS/2 controller found. 
Nov 1 01:35:01.042527 kernel: rtc_cmos rtc_cmos: RTC can wake from S4 Nov 1 01:35:01.042575 kernel: rtc_cmos rtc_cmos: registered as rtc0 Nov 1 01:35:01.042621 kernel: rtc_cmos rtc_cmos: setting system clock to 2025-11-01T01:34:59 UTC (1761960899) Nov 1 01:35:01.042668 kernel: rtc_cmos rtc_cmos: alarms up to one month, y3k, 114 bytes nvram Nov 1 01:35:01.042676 kernel: intel_pstate: Intel P-state driver initializing Nov 1 01:35:01.042682 kernel: intel_pstate: Disabling energy efficiency optimization Nov 1 01:35:01.042688 kernel: intel_pstate: HWP enabled Nov 1 01:35:01.042693 kernel: vesafb: mode is 1024x768x8, linelength=1024, pages=0 Nov 1 01:35:01.042701 kernel: vesafb: scrolling: redraw Nov 1 01:35:01.042706 kernel: vesafb: Pseudocolor: size=0:8:8:8, shift=0:0:0:0 Nov 1 01:35:01.042712 kernel: vesafb: framebuffer at 0x94000000, mapped to 0x00000000e73cd204, using 768k, total 768k Nov 1 01:35:01.042718 kernel: Console: switching to colour frame buffer device 128x48 Nov 1 01:35:01.042724 kernel: fb0: VESA VGA frame buffer device Nov 1 01:35:01.042729 kernel: NET: Registered PF_INET6 protocol family Nov 1 01:35:01.042735 kernel: Segment Routing with IPv6 Nov 1 01:35:01.042741 kernel: In-situ OAM (IOAM) with IPv6 Nov 1 01:35:01.042746 kernel: NET: Registered PF_PACKET protocol family Nov 1 01:35:01.042752 kernel: Key type dns_resolver registered Nov 1 01:35:01.042759 kernel: microcode: Current revision: 0x000000fc Nov 1 01:35:01.042764 kernel: microcode: Updated early from: 0x000000f4 Nov 1 01:35:01.042770 kernel: microcode: Microcode Update Driver: v2.2. Nov 1 01:35:01.042776 kernel: IPI shorthand broadcast: enabled Nov 1 01:35:01.042781 kernel: sched_clock: Marking stable (2441000665, 1369134861)->(4407756867, -597621341) Nov 1 01:35:01.042787 kernel: registered taskstats version 1 Nov 1 01:35:01.042793 kernel: Loading compiled-in X.509 certificates Nov 1 01:35:01.042798 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.113-flatcar: cc4975b6f5d9e3149f7a95c8552b8f9120c3a1f4' Nov 1 01:35:01.042804 kernel: Key type .fscrypt registered Nov 1 01:35:01.042810 kernel: Key type fscrypt-provisioning registered Nov 1 01:35:01.042816 kernel: ima: Allocated hash algorithm: sha1 Nov 1 01:35:01.042822 kernel: ima: No architecture policies found Nov 1 01:35:01.042827 kernel: clk: Disabling unused clocks Nov 1 01:35:01.042833 kernel: Freeing unused kernel image (initmem) memory: 42884K Nov 1 01:35:01.042839 kernel: Write protecting the kernel read-only data: 36864k Nov 1 01:35:01.042844 kernel: Freeing unused kernel image (rodata/data gap) memory: 1828K Nov 1 01:35:01.042850 kernel: Run /init as init process Nov 1 01:35:01.042856 kernel: with arguments: Nov 1 01:35:01.042863 kernel: /init Nov 1 01:35:01.042868 kernel: with environment: Nov 1 01:35:01.042874 kernel: HOME=/ Nov 1 01:35:01.042879 kernel: TERM=linux Nov 1 01:35:01.042886 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Nov 1 01:35:01.042893 systemd[1]: Detected architecture x86-64. Nov 1 01:35:01.042899 systemd[1]: Running in initrd. Nov 1 01:35:01.042906 systemd[1]: No hostname configured, using default hostname. Nov 1 01:35:01.042912 systemd[1]: Hostname set to . 
Nov 1 01:35:01.042918 systemd[1]: Initializing machine ID from random generator. Nov 1 01:35:01.042924 systemd[1]: Queued start job for default target initrd.target. Nov 1 01:35:01.042930 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 1 01:35:01.042936 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 1 01:35:01.042942 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Nov 1 01:35:01.042948 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 1 01:35:01.042955 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Nov 1 01:35:01.042961 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Nov 1 01:35:01.042968 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Nov 1 01:35:01.042974 kernel: tsc: Refined TSC clocksource calibration: 3407.998 MHz Nov 1 01:35:01.042980 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd208cfc, max_idle_ns: 440795283699 ns Nov 1 01:35:01.042986 kernel: clocksource: Switched to clocksource tsc Nov 1 01:35:01.042992 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Nov 1 01:35:01.042999 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 1 01:35:01.043004 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 1 01:35:01.043010 systemd[1]: Reached target paths.target - Path Units. Nov 1 01:35:01.043016 systemd[1]: Reached target slices.target - Slice Units. Nov 1 01:35:01.043022 systemd[1]: Reached target swap.target - Swaps. Nov 1 01:35:01.043028 systemd[1]: Reached target timers.target - Timer Units. Nov 1 01:35:01.043034 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Nov 1 01:35:01.043040 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 1 01:35:01.043046 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Nov 1 01:35:01.043053 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Nov 1 01:35:01.043059 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 1 01:35:01.043065 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 1 01:35:01.043071 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 1 01:35:01.043077 systemd[1]: Reached target sockets.target - Socket Units. Nov 1 01:35:01.043083 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Nov 1 01:35:01.043089 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 1 01:35:01.043095 systemd[1]: Finished network-cleanup.service - Network Cleanup. Nov 1 01:35:01.043102 systemd[1]: Starting systemd-fsck-usr.service... Nov 1 01:35:01.043108 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 1 01:35:01.043124 systemd-journald[267]: Collecting audit messages is disabled. Nov 1 01:35:01.043138 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... 
Nov 1 01:35:01.043146 systemd-journald[267]: Journal started Nov 1 01:35:01.043159 systemd-journald[267]: Runtime Journal (/run/log/journal/c48972979ba34fb2920f011672342197) is 8.0M, max 639.9M, 631.9M free. Nov 1 01:35:01.076813 systemd-modules-load[269]: Inserted module 'overlay' Nov 1 01:35:01.125317 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 1 01:35:01.125331 systemd[1]: Started systemd-journald.service - Journal Service. Nov 1 01:35:01.125340 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Nov 1 01:35:01.149671 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Nov 1 01:35:01.149764 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 1 01:35:01.149849 systemd[1]: Finished systemd-fsck-usr.service. Nov 1 01:35:01.168473 systemd-modules-load[269]: Inserted module 'br_netfilter' Nov 1 01:35:01.169420 kernel: Bridge firewalling registered Nov 1 01:35:01.179548 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 1 01:35:01.234558 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 1 01:35:01.238123 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 1 01:35:01.269067 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 1 01:35:01.291146 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 1 01:35:01.311978 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 1 01:35:01.353466 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 1 01:35:01.364835 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 1 01:35:01.376997 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 1 01:35:01.381975 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 1 01:35:01.383340 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 1 01:35:01.384542 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 1 01:35:01.396563 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 1 01:35:01.397623 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Nov 1 01:35:01.417959 systemd-resolved[299]: Positive Trust Anchors: Nov 1 01:35:01.417971 systemd-resolved[299]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 1 01:35:01.418014 systemd-resolved[299]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 1 01:35:01.527597 dracut-cmdline[310]: dracut-dracut-053 Nov 1 01:35:01.527597 dracut-cmdline[310]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=ade41980c48607de3d2d18dc444731ec5388853e3a75ed2d5a13ce616b36f478 Nov 1 01:35:01.420799 systemd-resolved[299]: Defaulting to hostname 'linux'. Nov 1 01:35:01.421606 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 1 01:35:01.437557 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 1 01:35:01.677242 kernel: SCSI subsystem initialized Nov 1 01:35:01.700242 kernel: Loading iSCSI transport class v2.0-870. Nov 1 01:35:01.723255 kernel: iscsi: registered transport (tcp) Nov 1 01:35:01.755667 kernel: iscsi: registered transport (qla4xxx) Nov 1 01:35:01.755685 kernel: QLogic iSCSI HBA Driver Nov 1 01:35:01.787687 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Nov 1 01:35:01.814519 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Nov 1 01:35:01.868853 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Nov 1 01:35:01.868873 kernel: device-mapper: uevent: version 1.0.3 Nov 1 01:35:01.888469 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Nov 1 01:35:01.947243 kernel: raid6: avx2x4 gen() 52338 MB/s Nov 1 01:35:01.979287 kernel: raid6: avx2x2 gen() 51895 MB/s Nov 1 01:35:02.015599 kernel: raid6: avx2x1 gen() 43965 MB/s Nov 1 01:35:02.015616 kernel: raid6: using algorithm avx2x4 gen() 52338 MB/s Nov 1 01:35:02.062555 kernel: raid6: .... xor() 20190 MB/s, rmw enabled Nov 1 01:35:02.062572 kernel: raid6: using avx2x2 recovery algorithm Nov 1 01:35:02.103262 kernel: xor: automatically using best checksumming function avx Nov 1 01:35:02.220274 kernel: Btrfs loaded, zoned=no, fsverity=no Nov 1 01:35:02.226483 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Nov 1 01:35:02.256520 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 1 01:35:02.263847 systemd-udevd[497]: Using default interface naming scheme 'v255'. Nov 1 01:35:02.267335 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 1 01:35:02.300451 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Nov 1 01:35:02.336886 dracut-pre-trigger[512]: rd.md=0: removing MD RAID activation Nov 1 01:35:02.357328 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
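The dracut-cmdline[310] entries above echo the kernel command line that drives the rest of the initrd: root=LABEL=ROOT selects the root filesystem, mount.usr=/dev/mapper/usr and verity.usrhash=... describe the dm-verity protected /usr, and flatcar.oem.id=packet selects the provisioning platform. A minimal sketch of splitting such a command line into bare flags and key=value options (illustrative only; dracut's real parser also handles quoting and repeated keys):

# Minimal sketch: split a kernel command line into flags and key=value options.
def parse_cmdline(text: str):
    flags, options = [], {}
    for token in text.split():
        if "=" in token:
            key, value = token.split("=", 1)
            options[key] = value
        else:
            flags.append(token)
    return flags, options

if __name__ == "__main__":
    with open("/proc/cmdline") as f:
        flags, options = parse_cmdline(f.read())
    print("root device:", options.get("root"))
    print("usr verity hash:", options.get("verity.usrhash"))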
Nov 1 01:35:02.381442 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 1 01:35:02.466271 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 1 01:35:02.498993 kernel: pps_core: LinuxPPS API ver. 1 registered Nov 1 01:35:02.499011 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Nov 1 01:35:02.514053 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Nov 1 01:35:02.548876 kernel: cryptd: max_cpu_qlen set to 1000 Nov 1 01:35:02.548894 kernel: ACPI: bus type USB registered Nov 1 01:35:02.548903 kernel: usbcore: registered new interface driver usbfs Nov 1 01:35:02.537637 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Nov 1 01:35:02.604477 kernel: usbcore: registered new interface driver hub Nov 1 01:35:02.604496 kernel: usbcore: registered new device driver usb Nov 1 01:35:02.604512 kernel: PTP clock support registered Nov 1 01:35:02.604527 kernel: libata version 3.00 loaded. Nov 1 01:35:02.592890 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Nov 1 01:35:02.625766 kernel: AVX2 version of gcm_enc/dec engaged. Nov 1 01:35:02.625782 kernel: AES CTR mode by8 optimization enabled Nov 1 01:35:02.620421 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 1 01:35:03.056020 kernel: ahci 0000:00:17.0: version 3.0 Nov 1 01:35:03.056131 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Nov 1 01:35:03.056216 kernel: ahci 0000:00:17.0: AHCI 0001.0301 32 slots 7 ports 6 Gbps 0x7f impl SATA mode Nov 1 01:35:03.056296 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 1 Nov 1 01:35:03.056369 kernel: ahci 0000:00:17.0: flags: 64bit ncq sntf clo only pio slum part ems deso sadm sds apst Nov 1 01:35:03.056442 kernel: xhci_hcd 0000:00:14.0: hcc params 0x200077c1 hci version 0x110 quirks 0x0000000000009810 Nov 1 01:35:03.056512 kernel: scsi host0: ahci Nov 1 01:35:03.056585 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Nov 1 01:35:03.056683 kernel: scsi host1: ahci Nov 1 01:35:03.056816 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 2 Nov 1 01:35:03.056947 kernel: scsi host2: ahci Nov 1 01:35:03.057078 kernel: xhci_hcd 0000:00:14.0: Host supports USB 3.1 Enhanced SuperSpeed Nov 1 01:35:03.057152 kernel: scsi host3: ahci Nov 1 01:35:03.057256 kernel: hub 1-0:1.0: USB hub found Nov 1 01:35:03.057326 kernel: scsi host4: ahci Nov 1 01:35:03.057387 kernel: hub 1-0:1.0: 16 ports detected Nov 1 01:35:03.057451 kernel: scsi host5: ahci Nov 1 01:35:03.057514 kernel: hub 2-0:1.0: USB hub found Nov 1 01:35:03.057583 kernel: scsi host6: ahci Nov 1 01:35:03.057643 kernel: hub 2-0:1.0: 10 ports detected Nov 1 01:35:03.057703 kernel: ata1: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516100 irq 127 Nov 1 01:35:03.057712 kernel: ata2: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516180 irq 127 Nov 1 01:35:03.057720 kernel: ata3: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516200 irq 127 Nov 1 01:35:03.057729 kernel: ata4: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516280 irq 127 Nov 1 01:35:03.057737 kernel: ata5: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516300 irq 127 Nov 1 01:35:03.057744 kernel: ata6: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516380 irq 127 Nov 1 01:35:03.057752 kernel: ata7: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516400 irq 127 Nov 1 01:35:03.057759 kernel: usb 1-14: new high-speed USB device 
number 2 using xhci_hcd Nov 1 01:35:03.057774 kernel: igb: Intel(R) Gigabit Ethernet Network Driver Nov 1 01:35:02.689436 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 1 01:35:03.094336 kernel: igb: Copyright (c) 2007-2014 Intel Corporation. Nov 1 01:35:03.121231 kernel: igb 0000:03:00.0: added PHC on eth0 Nov 1 01:35:03.121322 kernel: igb 0000:03:00.0: Intel(R) Gigabit Ethernet Network Connection Nov 1 01:35:03.136518 kernel: igb 0000:03:00.0: eth0: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:6a:ef:9e Nov 1 01:35:03.150033 kernel: igb 0000:03:00.0: eth0: PBA No: 010000-000 Nov 1 01:35:03.165862 kernel: igb 0000:03:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s) Nov 1 01:35:03.195572 kernel: mlx5_core 0000:01:00.0: firmware version: 14.27.1016 Nov 1 01:35:03.195668 kernel: hub 1-14:1.0: USB hub found Nov 1 01:35:03.195745 kernel: mlx5_core 0000:01:00.0: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Nov 1 01:35:03.195812 kernel: hub 1-14:1.0: 4 ports detected Nov 1 01:35:03.217263 kernel: igb 0000:04:00.0: added PHC on eth1 Nov 1 01:35:03.219408 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Nov 1 01:35:03.431694 kernel: igb 0000:04:00.0: Intel(R) Gigabit Ethernet Network Connection Nov 1 01:35:03.431784 kernel: ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Nov 1 01:35:03.431794 kernel: igb 0000:04:00.0: eth1: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:6a:ef:9f Nov 1 01:35:03.431865 kernel: igb 0000:04:00.0: eth1: PBA No: 010000-000 Nov 1 01:35:03.431931 kernel: ata7: SATA link down (SStatus 0 SControl 300) Nov 1 01:35:03.431940 kernel: igb 0000:04:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s) Nov 1 01:35:03.432004 kernel: ata6: SATA link down (SStatus 0 SControl 300) Nov 1 01:35:03.432013 kernel: ata1.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133 Nov 1 01:35:03.432021 kernel: ata4: SATA link down (SStatus 0 SControl 300) Nov 1 01:35:03.432028 kernel: ata3: SATA link down (SStatus 0 SControl 300) Nov 1 01:35:03.432038 kernel: ata5: SATA link down (SStatus 0 SControl 300) Nov 1 01:35:03.432045 kernel: ata2: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Nov 1 01:35:03.432053 kernel: ata2.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133 Nov 1 01:35:03.387165 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Nov 1 01:35:03.557294 kernel: ata1.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Nov 1 01:35:03.557306 kernel: mlx5_core 0000:01:00.0: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384) Nov 1 01:35:03.557470 kernel: ata2.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Nov 1 01:35:03.557479 kernel: mlx5_core 0000:01:00.0: Port module event: module 0, Cable plugged Nov 1 01:35:03.557549 kernel: ata1.00: Features: NCQ-prio Nov 1 01:35:03.557558 kernel: usb 1-14.1: new low-speed USB device number 3 using xhci_hcd Nov 1 01:35:03.557574 kernel: ata2.00: Features: NCQ-prio Nov 1 01:35:03.515751 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 1 01:35:03.619303 kernel: ata1.00: configured for UDMA/133 Nov 1 01:35:03.619393 kernel: scsi 0:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5 Nov 1 01:35:03.619486 kernel: ata2.00: configured for UDMA/133 Nov 1 01:35:03.619496 kernel: scsi 1:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5 Nov 1 01:35:03.515780 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
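The igb and mlx5_core probes above register eth0/eth1 with their MAC addresses and PCIe link widths; udev renames them to predictable names a few entries below. As a small, generic illustration that assumes the standard /sys/class/net layout, the sketch lists each interface's MAC address and link state:

# Minimal sketch: enumerate network interfaces from sysfs, reading the MAC
# address and operational state that the driver probe lines above report.
import os

def list_interfaces(root="/sys/class/net"):
    info = {}
    for name in sorted(os.listdir(root)):
        def read(attr):
            try:
                with open(os.path.join(root, name, attr)) as f:
                    return f.read().strip()
            except OSError:
                return None
        info[name] = {"address": read("address"), "operstate": read("operstate")}
    return info

if __name__ == "__main__":
    for name, attrs in list_interfaces().items():
        print(f"{name}: mac={attrs['address']} state={attrs['operstate']}")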
Nov 1 01:35:03.653407 kernel: igb 0000:03:00.0 eno1: renamed from eth0 Nov 1 01:35:03.653530 kernel: hid: raw HID events driver (C) Jiri Kosina Nov 1 01:35:03.602075 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 1 01:35:03.700166 kernel: usbcore: registered new interface driver usbhid Nov 1 01:35:03.700181 kernel: usbhid: USB HID core driver Nov 1 01:35:03.700190 kernel: igb 0000:04:00.0 eno2: renamed from eth1 Nov 1 01:35:03.649332 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 1 01:35:04.319353 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.0/0003:0557:2419.0001/input/input0 Nov 1 01:35:04.319373 kernel: ata1.00: Enabling discard_zeroes_data Nov 1 01:35:04.319382 kernel: mlx5_core 0000:01:00.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Nov 1 01:35:04.319474 kernel: ata2.00: Enabling discard_zeroes_data Nov 1 01:35:04.319483 kernel: sd 1:0:0:0: [sda] 937703088 512-byte logical blocks: (480 GB/447 GiB) Nov 1 01:35:04.319558 kernel: sd 0:0:0:0: [sdb] 937703088 512-byte logical blocks: (480 GB/447 GiB) Nov 1 01:35:04.319627 kernel: sd 0:0:0:0: [sdb] 4096-byte physical blocks Nov 1 01:35:04.319689 kernel: sd 0:0:0:0: [sdb] Write Protect is off Nov 1 01:35:04.319752 kernel: sd 0:0:0:0: [sdb] Mode Sense: 00 3a 00 00 Nov 1 01:35:04.319814 kernel: sd 0:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Nov 1 01:35:04.319875 kernel: sd 0:0:0:0: [sdb] Preferred minimum I/O size 4096 bytes Nov 1 01:35:04.319935 kernel: ata1.00: Enabling discard_zeroes_data Nov 1 01:35:04.319944 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Nov 1 01:35:04.319952 kernel: GPT:9289727 != 937703087 Nov 1 01:35:04.319959 kernel: GPT:Alternate GPT header not at the end of the disk. Nov 1 01:35:04.319966 kernel: GPT:9289727 != 937703087 Nov 1 01:35:04.319974 kernel: GPT: Use GNU Parted to correct GPT errors. 
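The GPT warnings above mean the backup-header location recorded in the primary GPT header (LBA 9289727) no longer matches the device's actual last sector (LBA 937703087), which is typical when an image prepared for a smaller disk is written to a larger one; the kernel still uses the table and suggests repairing it. The sketch below performs the same comparison by reading the primary header at LBA 1, assuming 512-byte logical sectors:

# Minimal sketch of the check behind the kernel's warning: read the primary
# GPT header at LBA 1 and compare its "alternate LBA" field with the last
# sector of the device.
import os
import struct

def gpt_alt_header_matches(device: str, sector: int = 512) -> bool:
    with open(device, "rb") as f:
        f.seek(sector)                      # LBA 1 holds the primary header
        hdr = f.read(92)
        sig, _rev, _size, _crc, _res, _cur, alt_lba = struct.unpack_from(
            "<8s4sIIIQQ", hdr)
        if sig != b"EFI PART":
            raise ValueError("no GPT signature on %s" % device)
        f.seek(0, os.SEEK_END)
        last_lba = f.tell() // sector - 1   # last addressable sector
    if alt_lba != last_lba:
        print(f"GPT: {alt_lba} != {last_lba} (backup header not at end of disk)")
        return False
    return True

# e.g. gpt_alt_header_matches("/dev/sdb")  # needs read access to the device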
Nov 1 01:35:04.319982 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 Nov 1 01:35:04.319989 kernel: sd 0:0:0:0: [sdb] Attached SCSI disk Nov 1 01:35:04.320049 kernel: mlx5_core 0000:01:00.1: firmware version: 14.27.1016 Nov 1 01:35:04.320117 kernel: sd 1:0:0:0: [sda] 4096-byte physical blocks Nov 1 01:35:04.320179 kernel: hid-generic 0003:0557:2419.0001: input,hidraw0: USB HID v1.00 Keyboard [HID 0557:2419] on usb-0000:00:14.0-14.1/input0 Nov 1 01:35:04.320320 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.1/0003:0557:2419.0002/input/input1 Nov 1 01:35:04.320329 kernel: hid-generic 0003:0557:2419.0002: input,hidraw1: USB HID v1.00 Mouse [HID 0557:2419] on usb-0000:00:14.0-14.1/input1 Nov 1 01:35:04.320400 kernel: mlx5_core 0000:01:00.1: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Nov 1 01:35:04.320468 kernel: sd 1:0:0:0: [sda] Write Protect is off Nov 1 01:35:04.320530 kernel: mlx5_core 0000:01:00.1: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384) Nov 1 01:35:04.320594 kernel: sd 1:0:0:0: [sda] Mode Sense: 00 3a 00 00 Nov 1 01:35:04.320655 kernel: sd 1:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Nov 1 01:35:04.320716 kernel: mlx5_core 0000:01:00.1: Port module event: module 1, Cable plugged Nov 1 01:35:04.320780 kernel: sd 1:0:0:0: [sda] Preferred minimum I/O size 4096 bytes Nov 1 01:35:04.320843 kernel: mlx5_core 0000:01:00.1: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Nov 1 01:35:04.320907 kernel: ata2.00: Enabling discard_zeroes_data Nov 1 01:35:04.320915 kernel: sd 1:0:0:0: [sda] Attached SCSI disk Nov 1 01:35:03.649371 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 1 01:35:04.349320 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: renamed from eth1 Nov 1 01:35:03.688272 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Nov 1 01:35:04.406825 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: renamed from eth0 Nov 1 01:35:04.406924 kernel: BTRFS: device fsid 5d5360dd-ce7d-46d0-bc66-772f2084023b devid 1 transid 34 /dev/sdb3 scanned by (udev-worker) (561) Nov 1 01:35:04.268357 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 1 01:35:04.440491 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sdb6 scanned by (udev-worker) (671) Nov 1 01:35:04.355864 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Micron_5300_MTFDDAK480TDT EFI-SYSTEM. Nov 1 01:35:04.429967 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Micron_5300_MTFDDAK480TDT ROOT. Nov 1 01:35:04.460571 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Micron_5300_MTFDDAK480TDT USR-A. Nov 1 01:35:04.485397 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Micron_5300_MTFDDAK480TDT USR-A. Nov 1 01:35:04.507605 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 1 01:35:04.555787 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Micron_5300_MTFDDAK480TDT OEM. Nov 1 01:35:04.582346 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Nov 1 01:35:04.620350 kernel: ata1.00: Enabling discard_zeroes_data Nov 1 01:35:04.620364 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 Nov 1 01:35:04.598696 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Nov 1 01:35:04.662296 kernel: ata1.00: Enabling discard_zeroes_data Nov 1 01:35:04.662307 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 Nov 1 01:35:04.662314 disk-uuid[721]: Primary Header is updated. Nov 1 01:35:04.662314 disk-uuid[721]: Secondary Entries is updated. Nov 1 01:35:04.662314 disk-uuid[721]: Secondary Header is updated. Nov 1 01:35:04.683675 kernel: ata1.00: Enabling discard_zeroes_data Nov 1 01:35:04.703266 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 Nov 1 01:35:04.726076 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 1 01:35:05.681594 kernel: ata1.00: Enabling discard_zeroes_data Nov 1 01:35:05.701258 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 Nov 1 01:35:05.701332 disk-uuid[722]: The operation has completed successfully. Nov 1 01:35:05.740188 systemd[1]: disk-uuid.service: Deactivated successfully. Nov 1 01:35:05.740313 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Nov 1 01:35:05.764518 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Nov 1 01:35:05.809328 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Nov 1 01:35:05.809396 sh[750]: Success Nov 1 01:35:05.855162 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Nov 1 01:35:05.872117 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Nov 1 01:35:05.878172 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Nov 1 01:35:05.937874 kernel: BTRFS info (device dm-0): first mount of filesystem 5d5360dd-ce7d-46d0-bc66-772f2084023b Nov 1 01:35:05.937894 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Nov 1 01:35:05.959349 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Nov 1 01:35:05.978365 kernel: BTRFS info (device dm-0): disabling log replay at mount time Nov 1 01:35:05.996468 kernel: BTRFS info (device dm-0): using free space tree Nov 1 01:35:06.037258 kernel: BTRFS info (device dm-0): enabling ssd optimizations Nov 1 01:35:06.039459 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Nov 1 01:35:06.048717 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Nov 1 01:35:06.054300 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Nov 1 01:35:06.116240 kernel: BTRFS info (device sdb6): first mount of filesystem 92f9034d-7d56-482a-b71a-15e476525571 Nov 1 01:35:06.116288 kernel: BTRFS info (device sdb6): using crc32c (crc32c-intel) checksum algorithm Nov 1 01:35:06.116296 kernel: BTRFS info (device sdb6): using free space tree Nov 1 01:35:06.123806 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Nov 1 01:35:06.191967 kernel: BTRFS info (device sdb6): enabling ssd optimizations Nov 1 01:35:06.191981 kernel: BTRFS info (device sdb6): auto enabling async discard Nov 1 01:35:06.216215 kernel: BTRFS info (device sdb6): last unmount of filesystem 92f9034d-7d56-482a-b71a-15e476525571 Nov 1 01:35:06.221726 systemd[1]: Finished ignition-setup.service - Ignition (setup). Nov 1 01:35:06.244600 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Nov 1 01:35:06.254546 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 1 01:35:06.267462 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
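The disk-uuid[721]/[722] entries above report the primary and secondary GPT structures being rewritten while disk-uuid.service ("Generate new UUID for disk GPT if necessary") runs. As an illustration of where that disk identifier lives, the sketch below reads the disk GUID from bytes 56-71 of the primary header (stored in GUID mixed-endian order), again assuming 512-byte sectors:

# Minimal sketch: read the GPT disk GUID from the primary header at LBA 1.
import uuid

def gpt_disk_guid(device: str, sector: int = 512) -> uuid.UUID:
    with open(device, "rb") as f:
        f.seek(sector)
        hdr = f.read(92)
    if hdr[:8] != b"EFI PART":
        raise ValueError("no GPT signature on %s" % device)
    # bytes 56..71 hold the disk GUID; bytes_le handles the mixed-endian layout
    return uuid.UUID(bytes_le=hdr[56:72])

# e.g. gpt_disk_guid("/dev/sdb")  # needs read access to the block device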
Nov 1 01:35:06.309596 ignition[924]: Ignition 2.19.0 Nov 1 01:35:06.309601 ignition[924]: Stage: fetch-offline Nov 1 01:35:06.311814 unknown[924]: fetched base config from "system" Nov 1 01:35:06.309627 ignition[924]: no configs at "/usr/lib/ignition/base.d" Nov 1 01:35:06.311818 unknown[924]: fetched user config from "system" Nov 1 01:35:06.309632 ignition[924]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Nov 1 01:35:06.312702 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Nov 1 01:35:06.309690 ignition[924]: parsed url from cmdline: "" Nov 1 01:35:06.319433 systemd-networkd[934]: lo: Link UP Nov 1 01:35:06.309692 ignition[924]: no config URL provided Nov 1 01:35:06.319436 systemd-networkd[934]: lo: Gained carrier Nov 1 01:35:06.309695 ignition[924]: reading system config file "/usr/lib/ignition/user.ign" Nov 1 01:35:06.321839 systemd-networkd[934]: Enumeration completed Nov 1 01:35:06.309718 ignition[924]: parsing config with SHA512: 36ffc34bc91b59d8189ee4bb1be0aaf0a8c2e58f56268a9ba0ef2f399c23791a7aec15c66214ed04fa2f22e71171dd70b6cf4545ec143c9c1f6a3194ffea468e Nov 1 01:35:06.322517 systemd-networkd[934]: eno1: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 1 01:35:06.312066 ignition[924]: fetch-offline: fetch-offline passed Nov 1 01:35:06.331519 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 1 01:35:06.312069 ignition[924]: POST message to Packet Timeline Nov 1 01:35:06.349621 systemd[1]: Reached target network.target - Network. Nov 1 01:35:06.312071 ignition[924]: POST Status error: resource requires networking Nov 1 01:35:06.350426 systemd-networkd[934]: eno2: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 1 01:35:06.312107 ignition[924]: Ignition finished successfully Nov 1 01:35:06.357501 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Nov 1 01:35:06.384083 ignition[946]: Ignition 2.19.0 Nov 1 01:35:06.371426 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Nov 1 01:35:06.559357 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: Link up Nov 1 01:35:06.384090 ignition[946]: Stage: kargs Nov 1 01:35:06.378589 systemd-networkd[934]: enp1s0f0np0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 1 01:35:06.384315 ignition[946]: no configs at "/usr/lib/ignition/base.d" Nov 1 01:35:06.551788 systemd-networkd[934]: enp1s0f1np1: Configuring with /usr/lib/systemd/network/zz-default.network. 
Nov 1 01:35:06.384327 ignition[946]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Nov 1 01:35:06.385310 ignition[946]: kargs: kargs passed Nov 1 01:35:06.385315 ignition[946]: POST message to Packet Timeline Nov 1 01:35:06.385330 ignition[946]: GET https://metadata.packet.net/metadata: attempt #1 Nov 1 01:35:06.386038 ignition[946]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:55175->[::1]:53: read: connection refused Nov 1 01:35:06.586539 ignition[946]: GET https://metadata.packet.net/metadata: attempt #2 Nov 1 01:35:06.586810 ignition[946]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:48741->[::1]:53: read: connection refused Nov 1 01:35:06.739322 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: Link up Nov 1 01:35:06.740439 systemd-networkd[934]: eno1: Link UP Nov 1 01:35:06.740605 systemd-networkd[934]: eno2: Link UP Nov 1 01:35:06.740757 systemd-networkd[934]: enp1s0f0np0: Link UP Nov 1 01:35:06.740935 systemd-networkd[934]: enp1s0f0np0: Gained carrier Nov 1 01:35:06.749477 systemd-networkd[934]: enp1s0f1np1: Link UP Nov 1 01:35:06.773396 systemd-networkd[934]: enp1s0f0np0: DHCPv4 address 139.178.94.199/31, gateway 139.178.94.198 acquired from 145.40.83.140 Nov 1 01:35:06.987094 ignition[946]: GET https://metadata.packet.net/metadata: attempt #3 Nov 1 01:35:06.988246 ignition[946]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:41336->[::1]:53: read: connection refused Nov 1 01:35:07.607885 systemd-networkd[934]: enp1s0f1np1: Gained carrier Nov 1 01:35:07.788796 ignition[946]: GET https://metadata.packet.net/metadata: attempt #4 Nov 1 01:35:07.789859 ignition[946]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:41371->[::1]:53: read: connection refused Nov 1 01:35:07.799546 systemd-networkd[934]: enp1s0f0np0: Gained IPv6LL Nov 1 01:35:09.391642 ignition[946]: GET https://metadata.packet.net/metadata: attempt #5 Nov 1 01:35:09.392777 ignition[946]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:44642->[::1]:53: read: connection refused Nov 1 01:35:09.591807 systemd-networkd[934]: enp1s0f1np1: Gained IPv6LL Nov 1 01:35:12.596251 ignition[946]: GET https://metadata.packet.net/metadata: attempt #6 Nov 1 01:35:13.747074 ignition[946]: GET result: OK Nov 1 01:35:14.474112 ignition[946]: Ignition finished successfully Nov 1 01:35:14.493013 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Nov 1 01:35:14.518516 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Nov 1 01:35:14.524853 ignition[964]: Ignition 2.19.0 Nov 1 01:35:14.524857 ignition[964]: Stage: disks Nov 1 01:35:14.524963 ignition[964]: no configs at "/usr/lib/ignition/base.d" Nov 1 01:35:14.524970 ignition[964]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Nov 1 01:35:14.525571 ignition[964]: disks: disks passed Nov 1 01:35:14.525574 ignition[964]: POST message to Packet Timeline Nov 1 01:35:14.525584 ignition[964]: GET https://metadata.packet.net/metadata: attempt #1 Nov 1 01:35:15.648307 ignition[964]: GET result: OK Nov 1 01:35:16.059803 ignition[964]: Ignition finished successfully Nov 1 01:35:16.063451 systemd[1]: Finished ignition-disks.service - Ignition (disks). 
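The ignition[946] attempts above fail with DNS errors against [::1]:53 until systemd-networkd brings enp1s0f0np0 up and DHCP supplies an address, after which the retries against metadata.packet.net succeed. A minimal sketch of the same fetch-with-retry pattern (illustrative only; Ignition's actual backoff policy and timeouts differ):

# Minimal sketch: fetch an instance-metadata endpoint with retries and
# exponential backoff, the pattern visible in the ignition attempts above.
import time
import urllib.error
import urllib.request

def fetch_with_retries(url, attempts=6, base_delay=1.0, timeout=10):
    delay = base_delay
    for attempt in range(1, attempts + 1):
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                return resp.read()
        except (urllib.error.URLError, OSError) as exc:
            print(f"GET {url}: attempt #{attempt} failed: {exc}")
            if attempt == attempts:
                raise
            time.sleep(delay)
            delay = min(delay * 2, 30)   # cap the backoff interval

# e.g. fetch_with_retries("https://metadata.packet.net/metadata")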
Nov 1 01:35:16.078585 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Nov 1 01:35:16.096555 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Nov 1 01:35:16.117654 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 1 01:35:16.138586 systemd[1]: Reached target sysinit.target - System Initialization. Nov 1 01:35:16.148808 systemd[1]: Reached target basic.target - Basic System. Nov 1 01:35:16.190455 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Nov 1 01:35:16.225111 systemd-fsck[983]: ROOT: clean, 14/553520 files, 52654/553472 blocks Nov 1 01:35:16.234728 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Nov 1 01:35:16.263420 systemd[1]: Mounting sysroot.mount - /sysroot... Nov 1 01:35:16.364230 kernel: EXT4-fs (sdb9): mounted filesystem cb9d31b8-5e00-461c-b45e-c304d1f8091c r/w with ordered data mode. Quota mode: none. Nov 1 01:35:16.364790 systemd[1]: Mounted sysroot.mount - /sysroot. Nov 1 01:35:16.373634 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Nov 1 01:35:16.406289 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 1 01:35:16.414793 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Nov 1 01:35:16.539538 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sdb6 scanned by mount (992) Nov 1 01:35:16.539552 kernel: BTRFS info (device sdb6): first mount of filesystem 92f9034d-7d56-482a-b71a-15e476525571 Nov 1 01:35:16.539560 kernel: BTRFS info (device sdb6): using crc32c (crc32c-intel) checksum algorithm Nov 1 01:35:16.539568 kernel: BTRFS info (device sdb6): using free space tree Nov 1 01:35:16.539578 kernel: BTRFS info (device sdb6): enabling ssd optimizations Nov 1 01:35:16.539585 kernel: BTRFS info (device sdb6): auto enabling async discard Nov 1 01:35:16.457861 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Nov 1 01:35:16.539913 systemd[1]: Starting flatcar-static-network.service - Flatcar Static Network Agent... Nov 1 01:35:16.580352 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Nov 1 01:35:16.580378 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Nov 1 01:35:16.609525 coreos-metadata[994]: Nov 01 01:35:16.601 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Nov 1 01:35:16.591411 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Nov 1 01:35:16.656423 coreos-metadata[1010]: Nov 01 01:35:16.602 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Nov 1 01:35:16.626530 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Nov 1 01:35:16.660521 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Nov 1 01:35:16.699402 initrd-setup-root[1024]: cut: /sysroot/etc/passwd: No such file or directory Nov 1 01:35:16.709351 initrd-setup-root[1031]: cut: /sysroot/etc/group: No such file or directory Nov 1 01:35:16.720341 initrd-setup-root[1038]: cut: /sysroot/etc/shadow: No such file or directory Nov 1 01:35:16.731319 initrd-setup-root[1045]: cut: /sysroot/etc/gshadow: No such file or directory Nov 1 01:35:16.761549 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Nov 1 01:35:16.785484 systemd[1]: Starting ignition-mount.service - Ignition (mount)... 
Nov 1 01:35:16.823457 kernel: BTRFS info (device sdb6): last unmount of filesystem 92f9034d-7d56-482a-b71a-15e476525571 Nov 1 01:35:16.802022 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Nov 1 01:35:16.832891 systemd[1]: sysroot-oem.mount: Deactivated successfully. Nov 1 01:35:16.856704 ignition[1112]: INFO : Ignition 2.19.0 Nov 1 01:35:16.856704 ignition[1112]: INFO : Stage: mount Nov 1 01:35:16.871425 ignition[1112]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 1 01:35:16.871425 ignition[1112]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Nov 1 01:35:16.871425 ignition[1112]: INFO : mount: mount passed Nov 1 01:35:16.871425 ignition[1112]: INFO : POST message to Packet Timeline Nov 1 01:35:16.871425 ignition[1112]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Nov 1 01:35:16.857986 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Nov 1 01:35:17.683348 coreos-metadata[1010]: Nov 01 01:35:17.683 INFO Fetch successful Nov 1 01:35:17.698692 coreos-metadata[994]: Nov 01 01:35:17.698 INFO Fetch successful Nov 1 01:35:17.733061 coreos-metadata[994]: Nov 01 01:35:17.733 INFO wrote hostname ci-4081.3.6-n-4452d0b810 to /sysroot/etc/hostname Nov 1 01:35:17.733177 systemd[1]: flatcar-static-network.service: Deactivated successfully. Nov 1 01:35:17.733269 systemd[1]: Finished flatcar-static-network.service - Flatcar Static Network Agent. Nov 1 01:35:17.758543 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Nov 1 01:35:18.497611 ignition[1112]: INFO : GET result: OK Nov 1 01:35:18.873672 ignition[1112]: INFO : Ignition finished successfully Nov 1 01:35:18.875924 systemd[1]: Finished ignition-mount.service - Ignition (mount). Nov 1 01:35:18.905532 systemd[1]: Starting ignition-files.service - Ignition (files)... Nov 1 01:35:18.916178 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 1 01:35:18.964232 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sdb6 scanned by mount (1137) Nov 1 01:35:18.993317 kernel: BTRFS info (device sdb6): first mount of filesystem 92f9034d-7d56-482a-b71a-15e476525571 Nov 1 01:35:18.993333 kernel: BTRFS info (device sdb6): using crc32c (crc32c-intel) checksum algorithm Nov 1 01:35:19.010703 kernel: BTRFS info (device sdb6): using free space tree Nov 1 01:35:19.047995 kernel: BTRFS info (device sdb6): enabling ssd optimizations Nov 1 01:35:19.048011 kernel: BTRFS info (device sdb6): auto enabling async discard Nov 1 01:35:19.061077 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Nov 1 01:35:19.088684 ignition[1154]: INFO : Ignition 2.19.0 Nov 1 01:35:19.088684 ignition[1154]: INFO : Stage: files Nov 1 01:35:19.103433 ignition[1154]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 1 01:35:19.103433 ignition[1154]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Nov 1 01:35:19.103433 ignition[1154]: DEBUG : files: compiled without relabeling support, skipping Nov 1 01:35:19.103433 ignition[1154]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Nov 1 01:35:19.103433 ignition[1154]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Nov 1 01:35:19.103433 ignition[1154]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Nov 1 01:35:19.103433 ignition[1154]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Nov 1 01:35:19.103433 ignition[1154]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Nov 1 01:35:19.103433 ignition[1154]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Nov 1 01:35:19.103433 ignition[1154]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Nov 1 01:35:19.103433 ignition[1154]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Nov 1 01:35:19.103433 ignition[1154]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Nov 1 01:35:19.093495 unknown[1154]: wrote ssh authorized keys file for user: core Nov 1 01:35:19.265485 ignition[1154]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Nov 1 01:35:19.265485 ignition[1154]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Nov 1 01:35:19.265485 ignition[1154]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Nov 1 01:35:19.265485 ignition[1154]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Nov 1 01:35:19.265485 ignition[1154]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Nov 1 01:35:19.265485 ignition[1154]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Nov 1 01:35:19.265485 ignition[1154]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 1 01:35:19.265485 ignition[1154]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 1 01:35:19.265485 ignition[1154]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 1 01:35:19.265485 ignition[1154]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 1 01:35:19.265485 ignition[1154]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Nov 1 01:35:19.265485 ignition[1154]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Nov 1 01:35:19.265485 ignition[1154]: 
INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Nov 1 01:35:19.265485 ignition[1154]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Nov 1 01:35:19.265485 ignition[1154]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Nov 1 01:35:19.513513 ignition[1154]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Nov 1 01:35:19.732464 ignition[1154]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Nov 1 01:35:20.007584 ignition[1154]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Nov 1 01:35:20.007584 ignition[1154]: INFO : files: op(c): [started] processing unit "containerd.service" Nov 1 01:35:20.037427 ignition[1154]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Nov 1 01:35:20.037427 ignition[1154]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Nov 1 01:35:20.037427 ignition[1154]: INFO : files: op(c): [finished] processing unit "containerd.service" Nov 1 01:35:20.037427 ignition[1154]: INFO : files: op(e): [started] processing unit "prepare-helm.service" Nov 1 01:35:20.037427 ignition[1154]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 1 01:35:20.037427 ignition[1154]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 1 01:35:20.037427 ignition[1154]: INFO : files: op(e): [finished] processing unit "prepare-helm.service" Nov 1 01:35:20.037427 ignition[1154]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" Nov 1 01:35:20.037427 ignition[1154]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" Nov 1 01:35:20.037427 ignition[1154]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json" Nov 1 01:35:20.037427 ignition[1154]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json" Nov 1 01:35:20.037427 ignition[1154]: INFO : files: files passed Nov 1 01:35:20.037427 ignition[1154]: INFO : POST message to Packet Timeline Nov 1 01:35:20.037427 ignition[1154]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Nov 1 01:35:21.040386 ignition[1154]: INFO : GET result: OK Nov 1 01:35:21.933183 ignition[1154]: INFO : Ignition finished successfully Nov 1 01:35:21.936554 systemd[1]: Finished ignition-files.service - Ignition (files). Nov 1 01:35:21.968344 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Nov 1 01:35:21.968751 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Nov 1 01:35:21.987728 systemd[1]: ignition-quench.service: Deactivated successfully. 
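The files stage above writes SSH keys for the "core" user, the Helm and Kubernetes payloads, a containerd drop-in and the prepare-helm.service unit, all driven by a declarative Ignition config. The sketch below shows the general shape of such a config assembled as JSON, assuming Ignition spec 3.x field names; the paths, key and unit body are placeholders, not this machine's real provisioning data:

# Minimal sketch of the declarative config that drives an Ignition "files"
# stage like the one logged above. Values below are placeholders.
import json

config = {
    "ignition": {"version": "3.4.0"},
    "passwd": {
        "users": [
            {"name": "core", "sshAuthorizedKeys": ["ssh-ed25519 AAAA... example"]}
        ]
    },
    "storage": {
        "files": [
            {
                "path": "/etc/flatcar/update.conf",
                "mode": 420,  # 0644
                "contents": {"source": "data:,REBOOT_STRATEGY%3Doff%0A"},
            }
        ]
    },
    "systemd": {
        "units": [
            {
                "name": "prepare-helm.service",
                "enabled": True,
                "contents": "[Unit]\nDescription=Unpack helm\n[Service]\nType=oneshot\nExecStart=/usr/bin/true\n[Install]\nWantedBy=multi-user.target\n",
            }
        ]
    },
}

print(json.dumps(config, indent=2))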
Nov 1 01:35:21.987801 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Nov 1 01:35:22.021899 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 1 01:35:22.042657 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Nov 1 01:35:22.083410 initrd-setup-root-after-ignition[1194]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 1 01:35:22.083410 initrd-setup-root-after-ignition[1194]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Nov 1 01:35:22.123396 initrd-setup-root-after-ignition[1198]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 1 01:35:22.083505 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Nov 1 01:35:22.151666 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Nov 1 01:35:22.151739 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Nov 1 01:35:22.183893 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Nov 1 01:35:22.193414 systemd[1]: Reached target initrd.target - Initrd Default Target. Nov 1 01:35:22.213725 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Nov 1 01:35:22.227762 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Nov 1 01:35:22.309598 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 1 01:35:22.328653 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Nov 1 01:35:22.386119 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Nov 1 01:35:22.386661 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 1 01:35:22.417953 systemd[1]: Stopped target timers.target - Timer Units. Nov 1 01:35:22.436900 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Nov 1 01:35:22.437338 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 1 01:35:22.464036 systemd[1]: Stopped target initrd.target - Initrd Default Target. Nov 1 01:35:22.484901 systemd[1]: Stopped target basic.target - Basic System. Nov 1 01:35:22.502891 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Nov 1 01:35:22.520900 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Nov 1 01:35:22.541888 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Nov 1 01:35:22.562919 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Nov 1 01:35:22.582873 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Nov 1 01:35:22.604037 systemd[1]: Stopped target sysinit.target - System Initialization. Nov 1 01:35:22.624911 systemd[1]: Stopped target local-fs.target - Local File Systems. Nov 1 01:35:22.644898 systemd[1]: Stopped target swap.target - Swaps. Nov 1 01:35:22.662744 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Nov 1 01:35:22.663150 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Nov 1 01:35:22.688038 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Nov 1 01:35:22.707913 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 1 01:35:22.728774 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. 
Nov 1 01:35:22.729287 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 1 01:35:22.750762 systemd[1]: dracut-initqueue.service: Deactivated successfully. Nov 1 01:35:22.751164 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Nov 1 01:35:22.782890 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Nov 1 01:35:22.783391 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Nov 1 01:35:22.803109 systemd[1]: Stopped target paths.target - Path Units. Nov 1 01:35:22.820757 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Nov 1 01:35:22.821165 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 1 01:35:22.841907 systemd[1]: Stopped target slices.target - Slice Units. Nov 1 01:35:22.859897 systemd[1]: Stopped target sockets.target - Socket Units. Nov 1 01:35:22.877872 systemd[1]: iscsid.socket: Deactivated successfully. Nov 1 01:35:22.878185 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Nov 1 01:35:22.897930 systemd[1]: iscsiuio.socket: Deactivated successfully. Nov 1 01:35:22.898267 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 1 01:35:22.920991 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Nov 1 01:35:22.921436 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 1 01:35:22.939989 systemd[1]: ignition-files.service: Deactivated successfully. Nov 1 01:35:22.940416 systemd[1]: Stopped ignition-files.service - Ignition (files). Nov 1 01:35:23.064421 ignition[1220]: INFO : Ignition 2.19.0 Nov 1 01:35:23.064421 ignition[1220]: INFO : Stage: umount Nov 1 01:35:23.064421 ignition[1220]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 1 01:35:23.064421 ignition[1220]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Nov 1 01:35:23.064421 ignition[1220]: INFO : umount: umount passed Nov 1 01:35:23.064421 ignition[1220]: INFO : POST message to Packet Timeline Nov 1 01:35:23.064421 ignition[1220]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Nov 1 01:35:22.957984 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Nov 1 01:35:22.958413 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Nov 1 01:35:22.991472 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Nov 1 01:35:22.992485 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Nov 1 01:35:22.992564 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Nov 1 01:35:23.042475 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Nov 1 01:35:23.055464 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Nov 1 01:35:23.055549 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Nov 1 01:35:23.075479 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Nov 1 01:35:23.075567 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Nov 1 01:35:23.126619 systemd[1]: sysroot-boot.mount: Deactivated successfully. Nov 1 01:35:23.128460 systemd[1]: sysroot-boot.service: Deactivated successfully. Nov 1 01:35:23.128713 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Nov 1 01:35:23.148364 systemd[1]: initrd-cleanup.service: Deactivated successfully. 
Nov 1 01:35:23.148623 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Nov 1 01:35:24.128018 ignition[1220]: INFO : GET result: OK Nov 1 01:35:24.833620 ignition[1220]: INFO : Ignition finished successfully Nov 1 01:35:24.836834 systemd[1]: ignition-mount.service: Deactivated successfully. Nov 1 01:35:24.837133 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Nov 1 01:35:24.853667 systemd[1]: Stopped target network.target - Network. Nov 1 01:35:24.868470 systemd[1]: ignition-disks.service: Deactivated successfully. Nov 1 01:35:24.868663 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Nov 1 01:35:24.886603 systemd[1]: ignition-kargs.service: Deactivated successfully. Nov 1 01:35:24.886761 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Nov 1 01:35:24.904650 systemd[1]: ignition-setup.service: Deactivated successfully. Nov 1 01:35:24.904810 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Nov 1 01:35:24.923629 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Nov 1 01:35:24.923794 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Nov 1 01:35:24.942591 systemd[1]: initrd-setup-root.service: Deactivated successfully. Nov 1 01:35:24.942762 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Nov 1 01:35:24.962021 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Nov 1 01:35:24.971382 systemd-networkd[934]: enp1s0f0np0: DHCPv6 lease lost Nov 1 01:35:24.979461 systemd-networkd[934]: enp1s0f1np1: DHCPv6 lease lost Nov 1 01:35:24.980691 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Nov 1 01:35:25.001291 systemd[1]: systemd-resolved.service: Deactivated successfully. Nov 1 01:35:25.001568 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Nov 1 01:35:25.021506 systemd[1]: systemd-networkd.service: Deactivated successfully. Nov 1 01:35:25.021851 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Nov 1 01:35:25.042497 systemd[1]: systemd-networkd.socket: Deactivated successfully. Nov 1 01:35:25.042637 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Nov 1 01:35:25.073437 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Nov 1 01:35:25.099412 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Nov 1 01:35:25.099663 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 1 01:35:25.118717 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 1 01:35:25.118893 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Nov 1 01:35:25.136700 systemd[1]: systemd-modules-load.service: Deactivated successfully. Nov 1 01:35:25.136869 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Nov 1 01:35:25.156706 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Nov 1 01:35:25.156876 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 1 01:35:25.175954 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 1 01:35:25.198398 systemd[1]: systemd-udevd.service: Deactivated successfully. Nov 1 01:35:25.198767 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 1 01:35:25.230839 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. 
Nov 1 01:35:25.230870 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Nov 1 01:35:25.253521 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Nov 1 01:35:25.253550 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Nov 1 01:35:25.273555 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Nov 1 01:35:25.273641 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Nov 1 01:35:25.305773 systemd[1]: dracut-cmdline.service: Deactivated successfully. Nov 1 01:35:25.305913 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Nov 1 01:35:25.352340 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 1 01:35:25.352539 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 1 01:35:25.637212 systemd-journald[267]: Received SIGTERM from PID 1 (systemd). Nov 1 01:35:25.637240 systemd-journald[267]: Failed to send stream file descriptor to service manager: Connection refused Nov 1 01:35:25.405530 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Nov 1 01:35:25.415471 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Nov 1 01:35:25.415661 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 1 01:35:25.444437 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 1 01:35:25.444615 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 1 01:35:25.466517 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Nov 1 01:35:25.466740 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Nov 1 01:35:25.488138 systemd[1]: network-cleanup.service: Deactivated successfully. Nov 1 01:35:25.488404 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Nov 1 01:35:25.510243 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Nov 1 01:35:25.545688 systemd[1]: Starting initrd-switch-root.service - Switch Root... Nov 1 01:35:25.566104 systemd[1]: Switching root. 
Nov 1 01:35:25.745320 systemd-journald[267]: Journal stopped
00000000) Nov 1 01:35:01.029047 kernel: ACPI: SSDT 0x000000008C59A278 000162 (v01 SUPERM SMCCDN 00000000 INTL 20181221) Nov 1 01:35:01.029052 kernel: ACPI: Reserving FACP table memory at [mem 0x8c58a670-0x8c58a783] Nov 1 01:35:01.029057 kernel: ACPI: Reserving DSDT table memory at [mem 0x8c54e268-0x8c58a66b] Nov 1 01:35:01.029062 kernel: ACPI: Reserving FACS table memory at [mem 0x8c66cf80-0x8c66cfbf] Nov 1 01:35:01.029067 kernel: ACPI: Reserving APIC table memory at [mem 0x8c58a788-0x8c58a8b3] Nov 1 01:35:01.029072 kernel: ACPI: Reserving FPDT table memory at [mem 0x8c58a8b8-0x8c58a8fb] Nov 1 01:35:01.029077 kernel: ACPI: Reserving FIDT table memory at [mem 0x8c58a900-0x8c58a99b] Nov 1 01:35:01.029083 kernel: ACPI: Reserving MCFG table memory at [mem 0x8c58a9a0-0x8c58a9db] Nov 1 01:35:01.029088 kernel: ACPI: Reserving SPMI table memory at [mem 0x8c58a9e0-0x8c58aa20] Nov 1 01:35:01.029093 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58aa28-0x8c58c543] Nov 1 01:35:01.029098 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58c548-0x8c58f70d] Nov 1 01:35:01.029103 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58f710-0x8c591a3a] Nov 1 01:35:01.029108 kernel: ACPI: Reserving HPET table memory at [mem 0x8c591a40-0x8c591a77] Nov 1 01:35:01.029113 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c591a78-0x8c592a25] Nov 1 01:35:01.029118 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c592a28-0x8c59331b] Nov 1 01:35:01.029123 kernel: ACPI: Reserving UEFI table memory at [mem 0x8c593320-0x8c593361] Nov 1 01:35:01.029129 kernel: ACPI: Reserving LPIT table memory at [mem 0x8c593368-0x8c5933fb] Nov 1 01:35:01.029134 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c593400-0x8c595bdd] Nov 1 01:35:01.029139 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c595be0-0x8c5970c1] Nov 1 01:35:01.029144 kernel: ACPI: Reserving DBGP table memory at [mem 0x8c5970c8-0x8c5970fb] Nov 1 01:35:01.029149 kernel: ACPI: Reserving DBG2 table memory at [mem 0x8c597100-0x8c597153] Nov 1 01:35:01.029154 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c597158-0x8c598cbe] Nov 1 01:35:01.029159 kernel: ACPI: Reserving DMAR table memory at [mem 0x8c598cc0-0x8c598d2f] Nov 1 01:35:01.029164 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c598d30-0x8c598e73] Nov 1 01:35:01.029169 kernel: ACPI: Reserving TPM2 table memory at [mem 0x8c598e78-0x8c598eab] Nov 1 01:35:01.029175 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c598eb0-0x8c599c3e] Nov 1 01:35:01.029180 kernel: ACPI: Reserving WSMT table memory at [mem 0x8c599c40-0x8c599c67] Nov 1 01:35:01.029185 kernel: ACPI: Reserving EINJ table memory at [mem 0x8c599c68-0x8c599d97] Nov 1 01:35:01.029190 kernel: ACPI: Reserving ERST table memory at [mem 0x8c599d98-0x8c599fc7] Nov 1 01:35:01.029194 kernel: ACPI: Reserving BERT table memory at [mem 0x8c599fc8-0x8c599ff7] Nov 1 01:35:01.029200 kernel: ACPI: Reserving HEST table memory at [mem 0x8c599ff8-0x8c59a273] Nov 1 01:35:01.029205 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c59a278-0x8c59a3d9] Nov 1 01:35:01.029214 kernel: No NUMA configuration found Nov 1 01:35:01.029219 kernel: Faking a node at [mem 0x0000000000000000-0x000000086effffff] Nov 1 01:35:01.029244 kernel: NODE_DATA(0) allocated [mem 0x86effa000-0x86effffff] Nov 1 01:35:01.029250 kernel: Zone ranges: Nov 1 01:35:01.029270 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Nov 1 01:35:01.029276 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Nov 1 01:35:01.029281 kernel: Normal [mem 
0x0000000100000000-0x000000086effffff] Nov 1 01:35:01.029286 kernel: Movable zone start for each node Nov 1 01:35:01.029291 kernel: Early memory node ranges Nov 1 01:35:01.029296 kernel: node 0: [mem 0x0000000000001000-0x0000000000098fff] Nov 1 01:35:01.029301 kernel: node 0: [mem 0x0000000000100000-0x000000003fffffff] Nov 1 01:35:01.029306 kernel: node 0: [mem 0x0000000040400000-0x0000000081b25fff] Nov 1 01:35:01.029312 kernel: node 0: [mem 0x0000000081b28000-0x000000008afccfff] Nov 1 01:35:01.029317 kernel: node 0: [mem 0x000000008c0b2000-0x000000008c23afff] Nov 1 01:35:01.029322 kernel: node 0: [mem 0x000000008eeff000-0x000000008eefffff] Nov 1 01:35:01.029327 kernel: node 0: [mem 0x0000000100000000-0x000000086effffff] Nov 1 01:35:01.029335 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000086effffff] Nov 1 01:35:01.029341 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Nov 1 01:35:01.029347 kernel: On node 0, zone DMA: 103 pages in unavailable ranges Nov 1 01:35:01.029352 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges Nov 1 01:35:01.029358 kernel: On node 0, zone DMA32: 2 pages in unavailable ranges Nov 1 01:35:01.029364 kernel: On node 0, zone DMA32: 4325 pages in unavailable ranges Nov 1 01:35:01.029369 kernel: On node 0, zone DMA32: 11460 pages in unavailable ranges Nov 1 01:35:01.029375 kernel: On node 0, zone Normal: 4352 pages in unavailable ranges Nov 1 01:35:01.029380 kernel: On node 0, zone Normal: 4096 pages in unavailable ranges Nov 1 01:35:01.029385 kernel: ACPI: PM-Timer IO Port: 0x1808 Nov 1 01:35:01.029391 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1]) Nov 1 01:35:01.029396 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1]) Nov 1 01:35:01.029401 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1]) Nov 1 01:35:01.029408 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1]) Nov 1 01:35:01.029413 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1]) Nov 1 01:35:01.029418 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1]) Nov 1 01:35:01.029424 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1]) Nov 1 01:35:01.029429 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1]) Nov 1 01:35:01.029434 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1]) Nov 1 01:35:01.029440 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1]) Nov 1 01:35:01.029445 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1]) Nov 1 01:35:01.029450 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1]) Nov 1 01:35:01.029457 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1]) Nov 1 01:35:01.029462 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1]) Nov 1 01:35:01.029467 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1]) Nov 1 01:35:01.029472 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1]) Nov 1 01:35:01.029478 kernel: IOAPIC[0]: apic_id 2, version 32, address 0xfec00000, GSI 0-119 Nov 1 01:35:01.029483 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Nov 1 01:35:01.029488 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Nov 1 01:35:01.029494 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Nov 1 01:35:01.029499 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Nov 1 01:35:01.029505 kernel: TSC deadline timer available Nov 1 01:35:01.029511 kernel: smpboot: Allowing 16 CPUs, 0 hotplug CPUs Nov 1 01:35:01.029516 kernel: [mem 0x90000000-0xdfffffff] available for PCI devices Nov 1 01:35:01.029522 
kernel: Booting paravirtualized kernel on bare hardware Nov 1 01:35:01.029527 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Nov 1 01:35:01.029533 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1 Nov 1 01:35:01.029538 kernel: percpu: Embedded 58 pages/cpu s196712 r8192 d32664 u262144 Nov 1 01:35:01.029544 kernel: pcpu-alloc: s196712 r8192 d32664 u262144 alloc=1*2097152 Nov 1 01:35:01.029549 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15 Nov 1 01:35:01.029556 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=ade41980c48607de3d2d18dc444731ec5388853e3a75ed2d5a13ce616b36f478 Nov 1 01:35:01.029561 kernel: random: crng init done Nov 1 01:35:01.029567 kernel: Dentry cache hash table entries: 4194304 (order: 13, 33554432 bytes, linear) Nov 1 01:35:01.029572 kernel: Inode-cache hash table entries: 2097152 (order: 12, 16777216 bytes, linear) Nov 1 01:35:01.029578 kernel: Fallback order for Node 0: 0 Nov 1 01:35:01.029583 kernel: Built 1 zonelists, mobility grouping on. Total pages: 8232415 Nov 1 01:35:01.029588 kernel: Policy zone: Normal Nov 1 01:35:01.029594 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Nov 1 01:35:01.029599 kernel: software IO TLB: area num 16. Nov 1 01:35:01.029605 kernel: Memory: 32720312K/33452980K available (12288K kernel code, 2288K rwdata, 22748K rodata, 42884K init, 2316K bss, 732408K reserved, 0K cma-reserved) Nov 1 01:35:01.029611 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1 Nov 1 01:35:01.029616 kernel: ftrace: allocating 37980 entries in 149 pages Nov 1 01:35:01.029622 kernel: ftrace: allocated 149 pages with 4 groups Nov 1 01:35:01.029627 kernel: Dynamic Preempt: voluntary Nov 1 01:35:01.029633 kernel: rcu: Preemptible hierarchical RCU implementation. Nov 1 01:35:01.029638 kernel: rcu: RCU event tracing is enabled. Nov 1 01:35:01.029644 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16. Nov 1 01:35:01.029650 kernel: Trampoline variant of Tasks RCU enabled. Nov 1 01:35:01.029656 kernel: Rude variant of Tasks RCU enabled. Nov 1 01:35:01.029661 kernel: Tracing variant of Tasks RCU enabled. Nov 1 01:35:01.029667 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Nov 1 01:35:01.029672 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16 Nov 1 01:35:01.029677 kernel: NR_IRQS: 33024, nr_irqs: 2184, preallocated irqs: 16 Nov 1 01:35:01.029683 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Nov 1 01:35:01.029688 kernel: Console: colour dummy device 80x25 Nov 1 01:35:01.029693 kernel: printk: console [tty0] enabled Nov 1 01:35:01.029699 kernel: printk: console [ttyS1] enabled Nov 1 01:35:01.029705 kernel: ACPI: Core revision 20230628 Nov 1 01:35:01.029710 kernel: hpet: HPET dysfunctional in PC10. Force disabled. 
Nov 1 01:35:01.029716 kernel: APIC: Switch to symmetric I/O mode setup Nov 1 01:35:01.029721 kernel: DMAR: Host address width 39 Nov 1 01:35:01.029726 kernel: DMAR: DRHD base: 0x000000fed91000 flags: 0x1 Nov 1 01:35:01.029732 kernel: DMAR: dmar0: reg_base_addr fed91000 ver 1:0 cap d2008c40660462 ecap f050da Nov 1 01:35:01.029737 kernel: DMAR: RMRR base: 0x0000008cf18000 end: 0x0000008d161fff Nov 1 01:35:01.029743 kernel: DMAR-IR: IOAPIC id 2 under DRHD base 0xfed91000 IOMMU 0 Nov 1 01:35:01.029748 kernel: DMAR-IR: HPET id 0 under DRHD base 0xfed91000 Nov 1 01:35:01.029754 kernel: DMAR-IR: Queued invalidation will be enabled to support x2apic and Intr-remapping. Nov 1 01:35:01.029760 kernel: DMAR-IR: Enabled IRQ remapping in x2apic mode Nov 1 01:35:01.029765 kernel: x2apic enabled Nov 1 01:35:01.029770 kernel: APIC: Switched APIC routing to: cluster x2apic Nov 1 01:35:01.029776 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x3101f59f5e6, max_idle_ns: 440795259996 ns Nov 1 01:35:01.029782 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 6799.81 BogoMIPS (lpj=3399906) Nov 1 01:35:01.029787 kernel: CPU0: Thermal monitoring enabled (TM1) Nov 1 01:35:01.029792 kernel: process: using mwait in idle threads Nov 1 01:35:01.029798 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Nov 1 01:35:01.029804 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4 Nov 1 01:35:01.029809 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Nov 1 01:35:01.029815 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit Nov 1 01:35:01.029820 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall Nov 1 01:35:01.029825 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS Nov 1 01:35:01.029831 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT Nov 1 01:35:01.029836 kernel: RETBleed: Mitigation: Enhanced IBRS Nov 1 01:35:01.029841 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Nov 1 01:35:01.029846 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Nov 1 01:35:01.029852 kernel: TAA: Mitigation: TSX disabled Nov 1 01:35:01.029857 kernel: MMIO Stale Data: Mitigation: Clear CPU buffers Nov 1 01:35:01.029863 kernel: SRBDS: Mitigation: Microcode Nov 1 01:35:01.029869 kernel: GDS: Mitigation: Microcode Nov 1 01:35:01.029874 kernel: active return thunk: its_return_thunk Nov 1 01:35:01.029879 kernel: ITS: Mitigation: Aligned branch/return thunks Nov 1 01:35:01.029885 kernel: VMSCAPE: Mitigation: IBPB before exit to userspace Nov 1 01:35:01.029890 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Nov 1 01:35:01.029896 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Nov 1 01:35:01.029901 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Nov 1 01:35:01.029906 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers' Nov 1 01:35:01.029912 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR' Nov 1 01:35:01.029917 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Nov 1 01:35:01.029923 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64 Nov 1 01:35:01.029928 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64 Nov 1 01:35:01.029934 kernel: x86/fpu: Enabled xstate features 0x1f, context size is 960 bytes, using 'compacted' format. 
Nov 1 01:35:01.029939 kernel: Freeing SMP alternatives memory: 32K Nov 1 01:35:01.029944 kernel: pid_max: default: 32768 minimum: 301 Nov 1 01:35:01.029950 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Nov 1 01:35:01.029955 kernel: landlock: Up and running. Nov 1 01:35:01.029960 kernel: SELinux: Initializing. Nov 1 01:35:01.029966 kernel: Mount-cache hash table entries: 65536 (order: 7, 524288 bytes, linear) Nov 1 01:35:01.029971 kernel: Mountpoint-cache hash table entries: 65536 (order: 7, 524288 bytes, linear) Nov 1 01:35:01.029977 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd) Nov 1 01:35:01.029982 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Nov 1 01:35:01.029988 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Nov 1 01:35:01.029994 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Nov 1 01:35:01.029999 kernel: Performance Events: PEBS fmt3+, Skylake events, 32-deep LBR, full-width counters, Intel PMU driver. Nov 1 01:35:01.030005 kernel: ... version: 4 Nov 1 01:35:01.030010 kernel: ... bit width: 48 Nov 1 01:35:01.030015 kernel: ... generic registers: 4 Nov 1 01:35:01.030021 kernel: ... value mask: 0000ffffffffffff Nov 1 01:35:01.030026 kernel: ... max period: 00007fffffffffff Nov 1 01:35:01.030031 kernel: ... fixed-purpose events: 3 Nov 1 01:35:01.030038 kernel: ... event mask: 000000070000000f Nov 1 01:35:01.030043 kernel: signal: max sigframe size: 2032 Nov 1 01:35:01.030048 kernel: Estimated ratio of average max frequency by base frequency (times 1024): 1445 Nov 1 01:35:01.030054 kernel: rcu: Hierarchical SRCU implementation. Nov 1 01:35:01.030059 kernel: rcu: Max phase no-delay instances is 400. Nov 1 01:35:01.030065 kernel: NMI watchdog: Enabled. Permanently consumes one hw-PMU counter. Nov 1 01:35:01.030070 kernel: smp: Bringing up secondary CPUs ... Nov 1 01:35:01.030075 kernel: smpboot: x86: Booting SMP configuration: Nov 1 01:35:01.030081 kernel: .... node #0, CPUs: #1 #2 #3 #4 #5 #6 #7 #8 #9 #10 #11 #12 #13 #14 #15 Nov 1 01:35:01.030087 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. 
Nov 1 01:35:01.030093 kernel: smp: Brought up 1 node, 16 CPUs Nov 1 01:35:01.030098 kernel: smpboot: Max logical packages: 1 Nov 1 01:35:01.030104 kernel: smpboot: Total of 16 processors activated (108796.99 BogoMIPS) Nov 1 01:35:01.030109 kernel: devtmpfs: initialized Nov 1 01:35:01.030115 kernel: x86/mm: Memory block size: 128MB Nov 1 01:35:01.030120 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x81b26000-0x81b26fff] (4096 bytes) Nov 1 01:35:01.030125 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x8c23b000-0x8c66cfff] (4399104 bytes) Nov 1 01:35:01.030132 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Nov 1 01:35:01.030137 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear) Nov 1 01:35:01.030143 kernel: pinctrl core: initialized pinctrl subsystem Nov 1 01:35:01.030148 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Nov 1 01:35:01.030153 kernel: audit: initializing netlink subsys (disabled) Nov 1 01:35:01.030159 kernel: audit: type=2000 audit(1761960895.040:1): state=initialized audit_enabled=0 res=1 Nov 1 01:35:01.030164 kernel: thermal_sys: Registered thermal governor 'step_wise' Nov 1 01:35:01.030169 kernel: thermal_sys: Registered thermal governor 'user_space' Nov 1 01:35:01.030175 kernel: cpuidle: using governor menu Nov 1 01:35:01.030181 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Nov 1 01:35:01.030186 kernel: dca service started, version 1.12.1 Nov 1 01:35:01.030192 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000) Nov 1 01:35:01.030197 kernel: PCI: Using configuration type 1 for base access Nov 1 01:35:01.030202 kernel: ENERGY_PERF_BIAS: Set to 'normal', was 'performance' Nov 1 01:35:01.030208 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Nov 1 01:35:01.030216 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Nov 1 01:35:01.030222 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Nov 1 01:35:01.030227 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Nov 1 01:35:01.030254 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Nov 1 01:35:01.030259 kernel: ACPI: Added _OSI(Module Device) Nov 1 01:35:01.030265 kernel: ACPI: Added _OSI(Processor Device) Nov 1 01:35:01.030270 kernel: ACPI: Added _OSI(Processor Aggregator Device) Nov 1 01:35:01.030289 kernel: ACPI: 12 ACPI AML tables successfully acquired and loaded Nov 1 01:35:01.030294 kernel: ACPI: Dynamic OEM Table Load: Nov 1 01:35:01.030300 kernel: ACPI: SSDT 0xFFFF8ECFC1007800 000400 (v02 PmRef Cpu0Cst 00003001 INTL 20160527) Nov 1 01:35:01.030305 kernel: ACPI: Dynamic OEM Table Load: Nov 1 01:35:01.030311 kernel: ACPI: SSDT 0xFFFF8ECFC0FFC000 000683 (v02 PmRef Cpu0Ist 00003000 INTL 20160527) Nov 1 01:35:01.030317 kernel: ACPI: Dynamic OEM Table Load: Nov 1 01:35:01.030322 kernel: ACPI: SSDT 0xFFFF8ECFC0FE5D00 0000F4 (v02 PmRef Cpu0Psd 00003000 INTL 20160527) Nov 1 01:35:01.030328 kernel: ACPI: Dynamic OEM Table Load: Nov 1 01:35:01.030333 kernel: ACPI: SSDT 0xFFFF8ECFC0FFE000 0005FC (v02 PmRef ApIst 00003000 INTL 20160527) Nov 1 01:35:01.030338 kernel: ACPI: Dynamic OEM Table Load: Nov 1 01:35:01.030343 kernel: ACPI: SSDT 0xFFFF8ECFC100F000 000AB0 (v02 PmRef ApPsd 00003000 INTL 20160527) Nov 1 01:35:01.030349 kernel: ACPI: Dynamic OEM Table Load: Nov 1 01:35:01.030354 kernel: ACPI: SSDT 0xFFFF8ECFC1002400 00030A (v02 PmRef ApCst 00003000 INTL 20160527) Nov 1 01:35:01.030360 kernel: ACPI: _OSC evaluated successfully for all CPUs Nov 1 01:35:01.030365 kernel: ACPI: Interpreter enabled Nov 1 01:35:01.030371 kernel: ACPI: PM: (supports S0 S5) Nov 1 01:35:01.030377 kernel: ACPI: Using IOAPIC for interrupt routing Nov 1 01:35:01.030382 kernel: HEST: Enabling Firmware First mode for corrected errors. Nov 1 01:35:01.030387 kernel: mce: [Firmware Bug]: Ignoring request to disable invalid MCA bank 14. Nov 1 01:35:01.030392 kernel: HEST: Table parsing has been initialized. Nov 1 01:35:01.030398 kernel: GHES: APEI firmware first mode is enabled by APEI bit and WHEA _OSC. 
Nov 1 01:35:01.030403 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Nov 1 01:35:01.030409 kernel: PCI: Using E820 reservations for host bridge windows Nov 1 01:35:01.030414 kernel: ACPI: Enabled 9 GPEs in block 00 to 7F Nov 1 01:35:01.030421 kernel: ACPI: \_SB_.PCI0.XDCI.USBC: New power resource Nov 1 01:35:01.030426 kernel: ACPI: \_SB_.PCI0.SAT0.VOL0.V0PR: New power resource Nov 1 01:35:01.030432 kernel: ACPI: \_SB_.PCI0.SAT0.VOL1.V1PR: New power resource Nov 1 01:35:01.030437 kernel: ACPI: \_SB_.PCI0.SAT0.VOL2.V2PR: New power resource Nov 1 01:35:01.030442 kernel: ACPI: \_SB_.PCI0.CNVW.WRST: New power resource Nov 1 01:35:01.030448 kernel: ACPI: \_TZ_.FN00: New power resource Nov 1 01:35:01.030453 kernel: ACPI: \_TZ_.FN01: New power resource Nov 1 01:35:01.030458 kernel: ACPI: \_TZ_.FN02: New power resource Nov 1 01:35:01.030464 kernel: ACPI: \_TZ_.FN03: New power resource Nov 1 01:35:01.030470 kernel: ACPI: \_TZ_.FN04: New power resource Nov 1 01:35:01.030476 kernel: ACPI: \PIN_: New power resource Nov 1 01:35:01.030481 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-fe]) Nov 1 01:35:01.030565 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Nov 1 01:35:01.030661 kernel: acpi PNP0A08:00: _OSC: platform does not support [AER] Nov 1 01:35:01.030713 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability LTR] Nov 1 01:35:01.030721 kernel: PCI host bridge to bus 0000:00 Nov 1 01:35:01.030774 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Nov 1 01:35:01.030820 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Nov 1 01:35:01.030864 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Nov 1 01:35:01.030908 kernel: pci_bus 0000:00: root bus resource [mem 0x90000000-0xdfffffff window] Nov 1 01:35:01.030951 kernel: pci_bus 0000:00: root bus resource [mem 0xfc800000-0xfe7fffff window] Nov 1 01:35:01.030995 kernel: pci_bus 0000:00: root bus resource [bus 00-fe] Nov 1 01:35:01.031054 kernel: pci 0000:00:00.0: [8086:3e31] type 00 class 0x060000 Nov 1 01:35:01.031116 kernel: pci 0000:00:01.0: [8086:1901] type 01 class 0x060400 Nov 1 01:35:01.031167 kernel: pci 0000:00:01.0: PME# supported from D0 D3hot D3cold Nov 1 01:35:01.031246 kernel: pci 0000:00:08.0: [8086:1911] type 00 class 0x088000 Nov 1 01:35:01.031313 kernel: pci 0000:00:08.0: reg 0x10: [mem 0x9551f000-0x9551ffff 64bit] Nov 1 01:35:01.031367 kernel: pci 0000:00:12.0: [8086:a379] type 00 class 0x118000 Nov 1 01:35:01.031418 kernel: pci 0000:00:12.0: reg 0x10: [mem 0x9551e000-0x9551efff 64bit] Nov 1 01:35:01.031475 kernel: pci 0000:00:14.0: [8086:a36d] type 00 class 0x0c0330 Nov 1 01:35:01.031527 kernel: pci 0000:00:14.0: reg 0x10: [mem 0x95500000-0x9550ffff 64bit] Nov 1 01:35:01.031577 kernel: pci 0000:00:14.0: PME# supported from D3hot D3cold Nov 1 01:35:01.031631 kernel: pci 0000:00:14.2: [8086:a36f] type 00 class 0x050000 Nov 1 01:35:01.031681 kernel: pci 0000:00:14.2: reg 0x10: [mem 0x95512000-0x95513fff 64bit] Nov 1 01:35:01.031730 kernel: pci 0000:00:14.2: reg 0x18: [mem 0x9551d000-0x9551dfff 64bit] Nov 1 01:35:01.031786 kernel: pci 0000:00:15.0: [8086:a368] type 00 class 0x0c8000 Nov 1 01:35:01.031837 kernel: pci 0000:00:15.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit] Nov 1 01:35:01.031893 kernel: pci 0000:00:15.1: [8086:a369] type 00 class 0x0c8000 Nov 1 01:35:01.031943 kernel: pci 0000:00:15.1: reg 0x10: [mem 0x00000000-0x00000fff 64bit] 
Nov 1 01:35:01.031996 kernel: pci 0000:00:16.0: [8086:a360] type 00 class 0x078000 Nov 1 01:35:01.032047 kernel: pci 0000:00:16.0: reg 0x10: [mem 0x9551a000-0x9551afff 64bit] Nov 1 01:35:01.032100 kernel: pci 0000:00:16.0: PME# supported from D3hot Nov 1 01:35:01.032153 kernel: pci 0000:00:16.1: [8086:a361] type 00 class 0x078000 Nov 1 01:35:01.032216 kernel: pci 0000:00:16.1: reg 0x10: [mem 0x95519000-0x95519fff 64bit] Nov 1 01:35:01.032303 kernel: pci 0000:00:16.1: PME# supported from D3hot Nov 1 01:35:01.032358 kernel: pci 0000:00:16.4: [8086:a364] type 00 class 0x078000 Nov 1 01:35:01.032408 kernel: pci 0000:00:16.4: reg 0x10: [mem 0x95518000-0x95518fff 64bit] Nov 1 01:35:01.032458 kernel: pci 0000:00:16.4: PME# supported from D3hot Nov 1 01:35:01.032515 kernel: pci 0000:00:17.0: [8086:a352] type 00 class 0x010601 Nov 1 01:35:01.032566 kernel: pci 0000:00:17.0: reg 0x10: [mem 0x95510000-0x95511fff] Nov 1 01:35:01.032615 kernel: pci 0000:00:17.0: reg 0x14: [mem 0x95517000-0x955170ff] Nov 1 01:35:01.032665 kernel: pci 0000:00:17.0: reg 0x18: [io 0x6050-0x6057] Nov 1 01:35:01.032714 kernel: pci 0000:00:17.0: reg 0x1c: [io 0x6040-0x6043] Nov 1 01:35:01.032764 kernel: pci 0000:00:17.0: reg 0x20: [io 0x6020-0x603f] Nov 1 01:35:01.032816 kernel: pci 0000:00:17.0: reg 0x24: [mem 0x95516000-0x955167ff] Nov 1 01:35:01.032866 kernel: pci 0000:00:17.0: PME# supported from D3hot Nov 1 01:35:01.032920 kernel: pci 0000:00:1b.0: [8086:a340] type 01 class 0x060400 Nov 1 01:35:01.032971 kernel: pci 0000:00:1b.0: PME# supported from D0 D3hot D3cold Nov 1 01:35:01.033029 kernel: pci 0000:00:1b.4: [8086:a32c] type 01 class 0x060400 Nov 1 01:35:01.033082 kernel: pci 0000:00:1b.4: PME# supported from D0 D3hot D3cold Nov 1 01:35:01.033138 kernel: pci 0000:00:1b.5: [8086:a32d] type 01 class 0x060400 Nov 1 01:35:01.033189 kernel: pci 0000:00:1b.5: PME# supported from D0 D3hot D3cold Nov 1 01:35:01.033283 kernel: pci 0000:00:1c.0: [8086:a338] type 01 class 0x060400 Nov 1 01:35:01.033334 kernel: pci 0000:00:1c.0: PME# supported from D0 D3hot D3cold Nov 1 01:35:01.033389 kernel: pci 0000:00:1c.3: [8086:a33b] type 01 class 0x060400 Nov 1 01:35:01.033443 kernel: pci 0000:00:1c.3: PME# supported from D0 D3hot D3cold Nov 1 01:35:01.033498 kernel: pci 0000:00:1e.0: [8086:a328] type 00 class 0x078000 Nov 1 01:35:01.033548 kernel: pci 0000:00:1e.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit] Nov 1 01:35:01.033603 kernel: pci 0000:00:1f.0: [8086:a309] type 00 class 0x060100 Nov 1 01:35:01.033656 kernel: pci 0000:00:1f.4: [8086:a323] type 00 class 0x0c0500 Nov 1 01:35:01.033707 kernel: pci 0000:00:1f.4: reg 0x10: [mem 0x95514000-0x955140ff 64bit] Nov 1 01:35:01.033760 kernel: pci 0000:00:1f.4: reg 0x20: [io 0xefa0-0xefbf] Nov 1 01:35:01.033816 kernel: pci 0000:00:1f.5: [8086:a324] type 00 class 0x0c8000 Nov 1 01:35:01.033867 kernel: pci 0000:00:1f.5: reg 0x10: [mem 0xfe010000-0xfe010fff] Nov 1 01:35:01.033924 kernel: pci 0000:01:00.0: [15b3:1015] type 00 class 0x020000 Nov 1 01:35:01.033977 kernel: pci 0000:01:00.0: reg 0x10: [mem 0x92000000-0x93ffffff 64bit pref] Nov 1 01:35:01.034029 kernel: pci 0000:01:00.0: reg 0x30: [mem 0x95200000-0x952fffff pref] Nov 1 01:35:01.034083 kernel: pci 0000:01:00.0: PME# supported from D3cold Nov 1 01:35:01.034135 kernel: pci 0000:01:00.0: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref] Nov 1 01:35:01.034188 kernel: pci 0000:01:00.0: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs) Nov 1 01:35:01.034288 kernel: pci 0000:01:00.1: [15b3:1015] type 00 
class 0x020000 Nov 1 01:35:01.034342 kernel: pci 0000:01:00.1: reg 0x10: [mem 0x90000000-0x91ffffff 64bit pref] Nov 1 01:35:01.034393 kernel: pci 0000:01:00.1: reg 0x30: [mem 0x95100000-0x951fffff pref] Nov 1 01:35:01.034444 kernel: pci 0000:01:00.1: PME# supported from D3cold Nov 1 01:35:01.034500 kernel: pci 0000:01:00.1: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref] Nov 1 01:35:01.034551 kernel: pci 0000:01:00.1: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs) Nov 1 01:35:01.034602 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Nov 1 01:35:01.034652 kernel: pci 0000:00:01.0: bridge window [mem 0x95100000-0x952fffff] Nov 1 01:35:01.034703 kernel: pci 0000:00:01.0: bridge window [mem 0x90000000-0x93ffffff 64bit pref] Nov 1 01:35:01.034753 kernel: pci 0000:00:1b.0: PCI bridge to [bus 02] Nov 1 01:35:01.034810 kernel: pci 0000:03:00.0: working around ROM BAR overlap defect Nov 1 01:35:01.034863 kernel: pci 0000:03:00.0: [8086:1533] type 00 class 0x020000 Nov 1 01:35:01.034917 kernel: pci 0000:03:00.0: reg 0x10: [mem 0x95400000-0x9547ffff] Nov 1 01:35:01.034968 kernel: pci 0000:03:00.0: reg 0x18: [io 0x5000-0x501f] Nov 1 01:35:01.035019 kernel: pci 0000:03:00.0: reg 0x1c: [mem 0x95480000-0x95483fff] Nov 1 01:35:01.035071 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold Nov 1 01:35:01.035122 kernel: pci 0000:00:1b.4: PCI bridge to [bus 03] Nov 1 01:35:01.035172 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] Nov 1 01:35:01.035247 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff] Nov 1 01:35:01.035327 kernel: pci 0000:04:00.0: working around ROM BAR overlap defect Nov 1 01:35:01.035379 kernel: pci 0000:04:00.0: [8086:1533] type 00 class 0x020000 Nov 1 01:35:01.035430 kernel: pci 0000:04:00.0: reg 0x10: [mem 0x95300000-0x9537ffff] Nov 1 01:35:01.035485 kernel: pci 0000:04:00.0: reg 0x18: [io 0x4000-0x401f] Nov 1 01:35:01.035536 kernel: pci 0000:04:00.0: reg 0x1c: [mem 0x95380000-0x95383fff] Nov 1 01:35:01.035587 kernel: pci 0000:04:00.0: PME# supported from D0 D3hot D3cold Nov 1 01:35:01.035638 kernel: pci 0000:00:1b.5: PCI bridge to [bus 04] Nov 1 01:35:01.035691 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] Nov 1 01:35:01.035744 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff] Nov 1 01:35:01.035794 kernel: pci 0000:00:1c.0: PCI bridge to [bus 05] Nov 1 01:35:01.035849 kernel: pci 0000:06:00.0: [1a03:1150] type 01 class 0x060400 Nov 1 01:35:01.035902 kernel: pci 0000:06:00.0: enabling Extended Tags Nov 1 01:35:01.035953 kernel: pci 0000:06:00.0: supports D1 D2 Nov 1 01:35:01.036004 kernel: pci 0000:06:00.0: PME# supported from D0 D1 D2 D3hot D3cold Nov 1 01:35:01.036055 kernel: pci 0000:00:1c.3: PCI bridge to [bus 06-07] Nov 1 01:35:01.036108 kernel: pci 0000:00:1c.3: bridge window [io 0x3000-0x3fff] Nov 1 01:35:01.036158 kernel: pci 0000:00:1c.3: bridge window [mem 0x94000000-0x950fffff] Nov 1 01:35:01.036217 kernel: pci_bus 0000:07: extended config space not accessible Nov 1 01:35:01.036322 kernel: pci 0000:07:00.0: [1a03:2000] type 00 class 0x030000 Nov 1 01:35:01.036376 kernel: pci 0000:07:00.0: reg 0x10: [mem 0x94000000-0x94ffffff] Nov 1 01:35:01.036430 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x95000000-0x9501ffff] Nov 1 01:35:01.036485 kernel: pci 0000:07:00.0: reg 0x18: [io 0x3000-0x307f] Nov 1 01:35:01.036540 kernel: pci 0000:07:00.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Nov 1 01:35:01.036594 kernel: pci 0000:07:00.0: supports D1 D2 Nov 1 
01:35:01.036646 kernel: pci 0000:07:00.0: PME# supported from D0 D1 D2 D3hot D3cold Nov 1 01:35:01.036699 kernel: pci 0000:06:00.0: PCI bridge to [bus 07] Nov 1 01:35:01.036751 kernel: pci 0000:06:00.0: bridge window [io 0x3000-0x3fff] Nov 1 01:35:01.036802 kernel: pci 0000:06:00.0: bridge window [mem 0x94000000-0x950fffff] Nov 1 01:35:01.036810 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 0 Nov 1 01:35:01.036816 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 1 Nov 1 01:35:01.036824 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 0 Nov 1 01:35:01.036829 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 0 Nov 1 01:35:01.036835 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 0 Nov 1 01:35:01.036841 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 0 Nov 1 01:35:01.036847 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 0 Nov 1 01:35:01.036852 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 0 Nov 1 01:35:01.036858 kernel: iommu: Default domain type: Translated Nov 1 01:35:01.036864 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Nov 1 01:35:01.036869 kernel: PCI: Using ACPI for IRQ routing Nov 1 01:35:01.036876 kernel: PCI: pci_cache_line_size set to 64 bytes Nov 1 01:35:01.036881 kernel: e820: reserve RAM buffer [mem 0x00099800-0x0009ffff] Nov 1 01:35:01.036887 kernel: e820: reserve RAM buffer [mem 0x81b26000-0x83ffffff] Nov 1 01:35:01.036893 kernel: e820: reserve RAM buffer [mem 0x8afcd000-0x8bffffff] Nov 1 01:35:01.036898 kernel: e820: reserve RAM buffer [mem 0x8c23b000-0x8fffffff] Nov 1 01:35:01.036904 kernel: e820: reserve RAM buffer [mem 0x8ef00000-0x8fffffff] Nov 1 01:35:01.036909 kernel: e820: reserve RAM buffer [mem 0x86f000000-0x86fffffff] Nov 1 01:35:01.036962 kernel: pci 0000:07:00.0: vgaarb: setting as boot VGA device Nov 1 01:35:01.037014 kernel: pci 0000:07:00.0: vgaarb: bridge control possible Nov 1 01:35:01.037071 kernel: pci 0000:07:00.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Nov 1 01:35:01.037079 kernel: vgaarb: loaded Nov 1 01:35:01.037085 kernel: clocksource: Switched to clocksource tsc-early Nov 1 01:35:01.037091 kernel: VFS: Disk quotas dquot_6.6.0 Nov 1 01:35:01.037097 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Nov 1 01:35:01.037102 kernel: pnp: PnP ACPI init Nov 1 01:35:01.037153 kernel: system 00:00: [mem 0x40000000-0x403fffff] has been reserved Nov 1 01:35:01.037202 kernel: pnp 00:02: [dma 0 disabled] Nov 1 01:35:01.037309 kernel: pnp 00:03: [dma 0 disabled] Nov 1 01:35:01.037361 kernel: system 00:04: [io 0x0680-0x069f] has been reserved Nov 1 01:35:01.037408 kernel: system 00:04: [io 0x164e-0x164f] has been reserved Nov 1 01:35:01.037456 kernel: system 00:05: [mem 0xfed10000-0xfed17fff] has been reserved Nov 1 01:35:01.037503 kernel: system 00:05: [mem 0xfed18000-0xfed18fff] has been reserved Nov 1 01:35:01.037549 kernel: system 00:05: [mem 0xfed19000-0xfed19fff] has been reserved Nov 1 01:35:01.037598 kernel: system 00:05: [mem 0xe0000000-0xefffffff] has been reserved Nov 1 01:35:01.037644 kernel: system 00:05: [mem 0xfed20000-0xfed3ffff] has been reserved Nov 1 01:35:01.037692 kernel: system 00:05: [mem 0xfed90000-0xfed93fff] could not be reserved Nov 1 01:35:01.037738 kernel: system 00:05: [mem 0xfed45000-0xfed8ffff] has been reserved Nov 1 01:35:01.037784 kernel: system 00:05: [mem 0xfee00000-0xfeefffff] could not be reserved Nov 1 01:35:01.037834 kernel: system 00:06: [io 0x1800-0x18fe] could not be reserved Nov 1 
01:35:01.037880 kernel: system 00:06: [mem 0xfd000000-0xfd69ffff] has been reserved Nov 1 01:35:01.037928 kernel: system 00:06: [mem 0xfd6c0000-0xfd6cffff] has been reserved Nov 1 01:35:01.037974 kernel: system 00:06: [mem 0xfd6f0000-0xfdffffff] has been reserved Nov 1 01:35:01.038019 kernel: system 00:06: [mem 0xfe000000-0xfe01ffff] could not be reserved Nov 1 01:35:01.038065 kernel: system 00:06: [mem 0xfe200000-0xfe7fffff] has been reserved Nov 1 01:35:01.038111 kernel: system 00:06: [mem 0xff000000-0xffffffff] has been reserved Nov 1 01:35:01.038159 kernel: system 00:07: [io 0x2000-0x20fe] has been reserved Nov 1 01:35:01.038169 kernel: pnp: PnP ACPI: found 9 devices Nov 1 01:35:01.038176 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Nov 1 01:35:01.038182 kernel: NET: Registered PF_INET protocol family Nov 1 01:35:01.038188 kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear) Nov 1 01:35:01.038194 kernel: tcp_listen_portaddr_hash hash table entries: 16384 (order: 6, 262144 bytes, linear) Nov 1 01:35:01.038200 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Nov 1 01:35:01.038206 kernel: TCP established hash table entries: 262144 (order: 9, 2097152 bytes, linear) Nov 1 01:35:01.038216 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Nov 1 01:35:01.038222 kernel: TCP: Hash tables configured (established 262144 bind 65536) Nov 1 01:35:01.038248 kernel: UDP hash table entries: 16384 (order: 7, 524288 bytes, linear) Nov 1 01:35:01.038255 kernel: UDP-Lite hash table entries: 16384 (order: 7, 524288 bytes, linear) Nov 1 01:35:01.038261 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Nov 1 01:35:01.038267 kernel: NET: Registered PF_XDP protocol family Nov 1 01:35:01.038339 kernel: pci 0000:00:15.0: BAR 0: assigned [mem 0x95515000-0x95515fff 64bit] Nov 1 01:35:01.038389 kernel: pci 0000:00:15.1: BAR 0: assigned [mem 0x9551b000-0x9551bfff 64bit] Nov 1 01:35:01.038441 kernel: pci 0000:00:1e.0: BAR 0: assigned [mem 0x9551c000-0x9551cfff 64bit] Nov 1 01:35:01.038493 kernel: pci 0000:01:00.0: BAR 7: no space for [mem size 0x00800000 64bit pref] Nov 1 01:35:01.038545 kernel: pci 0000:01:00.0: BAR 7: failed to assign [mem size 0x00800000 64bit pref] Nov 1 01:35:01.038599 kernel: pci 0000:01:00.1: BAR 7: no space for [mem size 0x00800000 64bit pref] Nov 1 01:35:01.038651 kernel: pci 0000:01:00.1: BAR 7: failed to assign [mem size 0x00800000 64bit pref] Nov 1 01:35:01.038702 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Nov 1 01:35:01.038754 kernel: pci 0000:00:01.0: bridge window [mem 0x95100000-0x952fffff] Nov 1 01:35:01.038803 kernel: pci 0000:00:01.0: bridge window [mem 0x90000000-0x93ffffff 64bit pref] Nov 1 01:35:01.038853 kernel: pci 0000:00:1b.0: PCI bridge to [bus 02] Nov 1 01:35:01.038907 kernel: pci 0000:00:1b.4: PCI bridge to [bus 03] Nov 1 01:35:01.038956 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] Nov 1 01:35:01.039007 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff] Nov 1 01:35:01.039056 kernel: pci 0000:00:1b.5: PCI bridge to [bus 04] Nov 1 01:35:01.039106 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] Nov 1 01:35:01.039155 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff] Nov 1 01:35:01.039205 kernel: pci 0000:00:1c.0: PCI bridge to [bus 05] Nov 1 01:35:01.039305 kernel: pci 0000:06:00.0: PCI bridge to [bus 07] Nov 1 01:35:01.039357 kernel: pci 0000:06:00.0: bridge window [io 
0x3000-0x3fff] Nov 1 01:35:01.039408 kernel: pci 0000:06:00.0: bridge window [mem 0x94000000-0x950fffff] Nov 1 01:35:01.039459 kernel: pci 0000:00:1c.3: PCI bridge to [bus 06-07] Nov 1 01:35:01.039509 kernel: pci 0000:00:1c.3: bridge window [io 0x3000-0x3fff] Nov 1 01:35:01.039559 kernel: pci 0000:00:1c.3: bridge window [mem 0x94000000-0x950fffff] Nov 1 01:35:01.039605 kernel: pci_bus 0000:00: Some PCI device resources are unassigned, try booting with pci=realloc Nov 1 01:35:01.039650 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Nov 1 01:35:01.039697 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Nov 1 01:35:01.039741 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Nov 1 01:35:01.039785 kernel: pci_bus 0000:00: resource 7 [mem 0x90000000-0xdfffffff window] Nov 1 01:35:01.039829 kernel: pci_bus 0000:00: resource 8 [mem 0xfc800000-0xfe7fffff window] Nov 1 01:35:01.039880 kernel: pci_bus 0000:01: resource 1 [mem 0x95100000-0x952fffff] Nov 1 01:35:01.039926 kernel: pci_bus 0000:01: resource 2 [mem 0x90000000-0x93ffffff 64bit pref] Nov 1 01:35:01.039977 kernel: pci_bus 0000:03: resource 0 [io 0x5000-0x5fff] Nov 1 01:35:01.040025 kernel: pci_bus 0000:03: resource 1 [mem 0x95400000-0x954fffff] Nov 1 01:35:01.040079 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff] Nov 1 01:35:01.040125 kernel: pci_bus 0000:04: resource 1 [mem 0x95300000-0x953fffff] Nov 1 01:35:01.040175 kernel: pci_bus 0000:06: resource 0 [io 0x3000-0x3fff] Nov 1 01:35:01.040246 kernel: pci_bus 0000:06: resource 1 [mem 0x94000000-0x950fffff] Nov 1 01:35:01.040316 kernel: pci_bus 0000:07: resource 0 [io 0x3000-0x3fff] Nov 1 01:35:01.040364 kernel: pci_bus 0000:07: resource 1 [mem 0x94000000-0x950fffff] Nov 1 01:35:01.040374 kernel: PCI: CLS 64 bytes, default 64 Nov 1 01:35:01.040380 kernel: DMAR: No ATSR found Nov 1 01:35:01.040386 kernel: DMAR: No SATC found Nov 1 01:35:01.040391 kernel: DMAR: dmar0: Using Queued invalidation Nov 1 01:35:01.040443 kernel: pci 0000:00:00.0: Adding to iommu group 0 Nov 1 01:35:01.040493 kernel: pci 0000:00:01.0: Adding to iommu group 1 Nov 1 01:35:01.040546 kernel: pci 0000:00:08.0: Adding to iommu group 2 Nov 1 01:35:01.040596 kernel: pci 0000:00:12.0: Adding to iommu group 3 Nov 1 01:35:01.040648 kernel: pci 0000:00:14.0: Adding to iommu group 4 Nov 1 01:35:01.040699 kernel: pci 0000:00:14.2: Adding to iommu group 4 Nov 1 01:35:01.040748 kernel: pci 0000:00:15.0: Adding to iommu group 5 Nov 1 01:35:01.040797 kernel: pci 0000:00:15.1: Adding to iommu group 5 Nov 1 01:35:01.040846 kernel: pci 0000:00:16.0: Adding to iommu group 6 Nov 1 01:35:01.040895 kernel: pci 0000:00:16.1: Adding to iommu group 6 Nov 1 01:35:01.040944 kernel: pci 0000:00:16.4: Adding to iommu group 6 Nov 1 01:35:01.040994 kernel: pci 0000:00:17.0: Adding to iommu group 7 Nov 1 01:35:01.041045 kernel: pci 0000:00:1b.0: Adding to iommu group 8 Nov 1 01:35:01.041096 kernel: pci 0000:00:1b.4: Adding to iommu group 9 Nov 1 01:35:01.041145 kernel: pci 0000:00:1b.5: Adding to iommu group 10 Nov 1 01:35:01.041195 kernel: pci 0000:00:1c.0: Adding to iommu group 11 Nov 1 01:35:01.041274 kernel: pci 0000:00:1c.3: Adding to iommu group 12 Nov 1 01:35:01.041344 kernel: pci 0000:00:1e.0: Adding to iommu group 13 Nov 1 01:35:01.041393 kernel: pci 0000:00:1f.0: Adding to iommu group 14 Nov 1 01:35:01.041443 kernel: pci 0000:00:1f.4: Adding to iommu group 14 Nov 1 01:35:01.041497 kernel: pci 0000:00:1f.5: Adding to iommu group 14 Nov 1 01:35:01.041548 kernel: pci 0000:01:00.0: Adding 
to iommu group 1 Nov 1 01:35:01.041600 kernel: pci 0000:01:00.1: Adding to iommu group 1 Nov 1 01:35:01.041652 kernel: pci 0000:03:00.0: Adding to iommu group 15 Nov 1 01:35:01.041704 kernel: pci 0000:04:00.0: Adding to iommu group 16 Nov 1 01:35:01.041755 kernel: pci 0000:06:00.0: Adding to iommu group 17 Nov 1 01:35:01.041809 kernel: pci 0000:07:00.0: Adding to iommu group 17 Nov 1 01:35:01.041817 kernel: DMAR: Intel(R) Virtualization Technology for Directed I/O Nov 1 01:35:01.041823 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Nov 1 01:35:01.041831 kernel: software IO TLB: mapped [mem 0x0000000086fcd000-0x000000008afcd000] (64MB) Nov 1 01:35:01.041837 kernel: RAPL PMU: API unit is 2^-32 Joules, 3 fixed counters, 655360 ms ovfl timer Nov 1 01:35:01.041843 kernel: RAPL PMU: hw unit of domain pp0-core 2^-14 Joules Nov 1 01:35:01.041848 kernel: RAPL PMU: hw unit of domain package 2^-14 Joules Nov 1 01:35:01.041854 kernel: RAPL PMU: hw unit of domain dram 2^-14 Joules Nov 1 01:35:01.041908 kernel: platform rtc_cmos: registered platform RTC device (no PNP device found) Nov 1 01:35:01.041916 kernel: Initialise system trusted keyrings Nov 1 01:35:01.041922 kernel: workingset: timestamp_bits=39 max_order=23 bucket_order=0 Nov 1 01:35:01.041930 kernel: Key type asymmetric registered Nov 1 01:35:01.041935 kernel: Asymmetric key parser 'x509' registered Nov 1 01:35:01.041941 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Nov 1 01:35:01.041947 kernel: io scheduler mq-deadline registered Nov 1 01:35:01.041952 kernel: io scheduler kyber registered Nov 1 01:35:01.041958 kernel: io scheduler bfq registered Nov 1 01:35:01.042008 kernel: pcieport 0000:00:01.0: PME: Signaling with IRQ 121 Nov 1 01:35:01.042059 kernel: pcieport 0000:00:1b.0: PME: Signaling with IRQ 122 Nov 1 01:35:01.042111 kernel: pcieport 0000:00:1b.4: PME: Signaling with IRQ 123 Nov 1 01:35:01.042162 kernel: pcieport 0000:00:1b.5: PME: Signaling with IRQ 124 Nov 1 01:35:01.042215 kernel: pcieport 0000:00:1c.0: PME: Signaling with IRQ 125 Nov 1 01:35:01.042309 kernel: pcieport 0000:00:1c.3: PME: Signaling with IRQ 126 Nov 1 01:35:01.042363 kernel: thermal LNXTHERM:00: registered as thermal_zone0 Nov 1 01:35:01.042372 kernel: ACPI: thermal: Thermal Zone [TZ00] (28 C) Nov 1 01:35:01.042378 kernel: ERST: Error Record Serialization Table (ERST) support is initialized. Nov 1 01:35:01.042384 kernel: pstore: Using crash dump compression: deflate Nov 1 01:35:01.042391 kernel: pstore: Registered erst as persistent store backend Nov 1 01:35:01.042397 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Nov 1 01:35:01.042403 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Nov 1 01:35:01.042409 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Nov 1 01:35:01.042415 kernel: 00:03: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Nov 1 01:35:01.042420 kernel: hpet_acpi_add: no address or irqs in _CRS Nov 1 01:35:01.042470 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x1B, rev-id 16) Nov 1 01:35:01.042479 kernel: i8042: PNP: No PS/2 controller found. 
Nov 1 01:35:01.042527 kernel: rtc_cmos rtc_cmos: RTC can wake from S4 Nov 1 01:35:01.042575 kernel: rtc_cmos rtc_cmos: registered as rtc0 Nov 1 01:35:01.042621 kernel: rtc_cmos rtc_cmos: setting system clock to 2025-11-01T01:34:59 UTC (1761960899) Nov 1 01:35:01.042668 kernel: rtc_cmos rtc_cmos: alarms up to one month, y3k, 114 bytes nvram Nov 1 01:35:01.042676 kernel: intel_pstate: Intel P-state driver initializing Nov 1 01:35:01.042682 kernel: intel_pstate: Disabling energy efficiency optimization Nov 1 01:35:01.042688 kernel: intel_pstate: HWP enabled Nov 1 01:35:01.042693 kernel: vesafb: mode is 1024x768x8, linelength=1024, pages=0 Nov 1 01:35:01.042701 kernel: vesafb: scrolling: redraw Nov 1 01:35:01.042706 kernel: vesafb: Pseudocolor: size=0:8:8:8, shift=0:0:0:0 Nov 1 01:35:01.042712 kernel: vesafb: framebuffer at 0x94000000, mapped to 0x00000000e73cd204, using 768k, total 768k Nov 1 01:35:01.042718 kernel: Console: switching to colour frame buffer device 128x48 Nov 1 01:35:01.042724 kernel: fb0: VESA VGA frame buffer device Nov 1 01:35:01.042729 kernel: NET: Registered PF_INET6 protocol family Nov 1 01:35:01.042735 kernel: Segment Routing with IPv6 Nov 1 01:35:01.042741 kernel: In-situ OAM (IOAM) with IPv6 Nov 1 01:35:01.042746 kernel: NET: Registered PF_PACKET protocol family Nov 1 01:35:01.042752 kernel: Key type dns_resolver registered Nov 1 01:35:01.042759 kernel: microcode: Current revision: 0x000000fc Nov 1 01:35:01.042764 kernel: microcode: Updated early from: 0x000000f4 Nov 1 01:35:01.042770 kernel: microcode: Microcode Update Driver: v2.2. Nov 1 01:35:01.042776 kernel: IPI shorthand broadcast: enabled Nov 1 01:35:01.042781 kernel: sched_clock: Marking stable (2441000665, 1369134861)->(4407756867, -597621341) Nov 1 01:35:01.042787 kernel: registered taskstats version 1 Nov 1 01:35:01.042793 kernel: Loading compiled-in X.509 certificates Nov 1 01:35:01.042798 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.113-flatcar: cc4975b6f5d9e3149f7a95c8552b8f9120c3a1f4' Nov 1 01:35:01.042804 kernel: Key type .fscrypt registered Nov 1 01:35:01.042810 kernel: Key type fscrypt-provisioning registered Nov 1 01:35:01.042816 kernel: ima: Allocated hash algorithm: sha1 Nov 1 01:35:01.042822 kernel: ima: No architecture policies found Nov 1 01:35:01.042827 kernel: clk: Disabling unused clocks Nov 1 01:35:01.042833 kernel: Freeing unused kernel image (initmem) memory: 42884K Nov 1 01:35:01.042839 kernel: Write protecting the kernel read-only data: 36864k Nov 1 01:35:01.042844 kernel: Freeing unused kernel image (rodata/data gap) memory: 1828K Nov 1 01:35:01.042850 kernel: Run /init as init process Nov 1 01:35:01.042856 kernel: with arguments: Nov 1 01:35:01.042863 kernel: /init Nov 1 01:35:01.042868 kernel: with environment: Nov 1 01:35:01.042874 kernel: HOME=/ Nov 1 01:35:01.042879 kernel: TERM=linux Nov 1 01:35:01.042886 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Nov 1 01:35:01.042893 systemd[1]: Detected architecture x86-64. Nov 1 01:35:01.042899 systemd[1]: Running in initrd. Nov 1 01:35:01.042906 systemd[1]: No hostname configured, using default hostname. Nov 1 01:35:01.042912 systemd[1]: Hostname set to . 
Nov 1 01:35:01.042918 systemd[1]: Initializing machine ID from random generator. Nov 1 01:35:01.042924 systemd[1]: Queued start job for default target initrd.target. Nov 1 01:35:01.042930 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 1 01:35:01.042936 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 1 01:35:01.042942 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Nov 1 01:35:01.042948 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 1 01:35:01.042955 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Nov 1 01:35:01.042961 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Nov 1 01:35:01.042968 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Nov 1 01:35:01.042974 kernel: tsc: Refined TSC clocksource calibration: 3407.998 MHz Nov 1 01:35:01.042980 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd208cfc, max_idle_ns: 440795283699 ns Nov 1 01:35:01.042986 kernel: clocksource: Switched to clocksource tsc Nov 1 01:35:01.042992 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Nov 1 01:35:01.042999 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 1 01:35:01.043004 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 1 01:35:01.043010 systemd[1]: Reached target paths.target - Path Units. Nov 1 01:35:01.043016 systemd[1]: Reached target slices.target - Slice Units. Nov 1 01:35:01.043022 systemd[1]: Reached target swap.target - Swaps. Nov 1 01:35:01.043028 systemd[1]: Reached target timers.target - Timer Units. Nov 1 01:35:01.043034 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Nov 1 01:35:01.043040 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 1 01:35:01.043046 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Nov 1 01:35:01.043053 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Nov 1 01:35:01.043059 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 1 01:35:01.043065 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 1 01:35:01.043071 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 1 01:35:01.043077 systemd[1]: Reached target sockets.target - Socket Units. Nov 1 01:35:01.043083 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Nov 1 01:35:01.043089 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 1 01:35:01.043095 systemd[1]: Finished network-cleanup.service - Network Cleanup. Nov 1 01:35:01.043102 systemd[1]: Starting systemd-fsck-usr.service... Nov 1 01:35:01.043108 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 1 01:35:01.043124 systemd-journald[267]: Collecting audit messages is disabled. Nov 1 01:35:01.043138 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... 
Nov 1 01:35:01.043146 systemd-journald[267]: Journal started Nov 1 01:35:01.043159 systemd-journald[267]: Runtime Journal (/run/log/journal/c48972979ba34fb2920f011672342197) is 8.0M, max 639.9M, 631.9M free. Nov 1 01:35:01.076813 systemd-modules-load[269]: Inserted module 'overlay' Nov 1 01:35:01.125317 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 1 01:35:01.125331 systemd[1]: Started systemd-journald.service - Journal Service. Nov 1 01:35:01.125340 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Nov 1 01:35:01.149671 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Nov 1 01:35:01.149764 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 1 01:35:01.149849 systemd[1]: Finished systemd-fsck-usr.service. Nov 1 01:35:01.168473 systemd-modules-load[269]: Inserted module 'br_netfilter' Nov 1 01:35:01.169420 kernel: Bridge firewalling registered Nov 1 01:35:01.179548 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 1 01:35:01.234558 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 1 01:35:01.238123 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 1 01:35:01.269067 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 1 01:35:01.291146 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 1 01:35:01.311978 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 1 01:35:01.353466 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 1 01:35:01.364835 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 1 01:35:01.376997 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 1 01:35:01.381975 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 1 01:35:01.383340 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 1 01:35:01.384542 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 1 01:35:01.396563 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 1 01:35:01.397623 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Nov 1 01:35:01.417959 systemd-resolved[299]: Positive Trust Anchors: Nov 1 01:35:01.417971 systemd-resolved[299]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 1 01:35:01.418014 systemd-resolved[299]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 1 01:35:01.527597 dracut-cmdline[310]: dracut-dracut-053 Nov 1 01:35:01.527597 dracut-cmdline[310]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=ade41980c48607de3d2d18dc444731ec5388853e3a75ed2d5a13ce616b36f478 Nov 1 01:35:01.420799 systemd-resolved[299]: Defaulting to hostname 'linux'. Nov 1 01:35:01.421606 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 1 01:35:01.437557 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 1 01:35:01.677242 kernel: SCSI subsystem initialized Nov 1 01:35:01.700242 kernel: Loading iSCSI transport class v2.0-870. Nov 1 01:35:01.723255 kernel: iscsi: registered transport (tcp) Nov 1 01:35:01.755667 kernel: iscsi: registered transport (qla4xxx) Nov 1 01:35:01.755685 kernel: QLogic iSCSI HBA Driver Nov 1 01:35:01.787687 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Nov 1 01:35:01.814519 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Nov 1 01:35:01.868853 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Nov 1 01:35:01.868873 kernel: device-mapper: uevent: version 1.0.3 Nov 1 01:35:01.888469 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Nov 1 01:35:01.947243 kernel: raid6: avx2x4 gen() 52338 MB/s Nov 1 01:35:01.979287 kernel: raid6: avx2x2 gen() 51895 MB/s Nov 1 01:35:02.015599 kernel: raid6: avx2x1 gen() 43965 MB/s Nov 1 01:35:02.015616 kernel: raid6: using algorithm avx2x4 gen() 52338 MB/s Nov 1 01:35:02.062555 kernel: raid6: .... xor() 20190 MB/s, rmw enabled Nov 1 01:35:02.062572 kernel: raid6: using avx2x2 recovery algorithm Nov 1 01:35:02.103262 kernel: xor: automatically using best checksumming function avx Nov 1 01:35:02.220274 kernel: Btrfs loaded, zoned=no, fsverity=no Nov 1 01:35:02.226483 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Nov 1 01:35:02.256520 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 1 01:35:02.263847 systemd-udevd[497]: Using default interface naming scheme 'v255'. Nov 1 01:35:02.267335 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 1 01:35:02.300451 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Nov 1 01:35:02.336886 dracut-pre-trigger[512]: rd.md=0: removing MD RAID activation Nov 1 01:35:02.357328 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
Nov 1 01:35:02.381442 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 1 01:35:02.466271 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 1 01:35:02.498993 kernel: pps_core: LinuxPPS API ver. 1 registered Nov 1 01:35:02.499011 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Nov 1 01:35:02.514053 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Nov 1 01:35:02.548876 kernel: cryptd: max_cpu_qlen set to 1000 Nov 1 01:35:02.548894 kernel: ACPI: bus type USB registered Nov 1 01:35:02.548903 kernel: usbcore: registered new interface driver usbfs Nov 1 01:35:02.537637 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Nov 1 01:35:02.604477 kernel: usbcore: registered new interface driver hub Nov 1 01:35:02.604496 kernel: usbcore: registered new device driver usb Nov 1 01:35:02.604512 kernel: PTP clock support registered Nov 1 01:35:02.604527 kernel: libata version 3.00 loaded. Nov 1 01:35:02.592890 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Nov 1 01:35:02.625766 kernel: AVX2 version of gcm_enc/dec engaged. Nov 1 01:35:02.625782 kernel: AES CTR mode by8 optimization enabled Nov 1 01:35:02.620421 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 1 01:35:03.056020 kernel: ahci 0000:00:17.0: version 3.0 Nov 1 01:35:03.056131 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Nov 1 01:35:03.056216 kernel: ahci 0000:00:17.0: AHCI 0001.0301 32 slots 7 ports 6 Gbps 0x7f impl SATA mode Nov 1 01:35:03.056296 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 1 Nov 1 01:35:03.056369 kernel: ahci 0000:00:17.0: flags: 64bit ncq sntf clo only pio slum part ems deso sadm sds apst Nov 1 01:35:03.056442 kernel: xhci_hcd 0000:00:14.0: hcc params 0x200077c1 hci version 0x110 quirks 0x0000000000009810 Nov 1 01:35:03.056512 kernel: scsi host0: ahci Nov 1 01:35:03.056585 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Nov 1 01:35:03.056683 kernel: scsi host1: ahci Nov 1 01:35:03.056816 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 2 Nov 1 01:35:03.056947 kernel: scsi host2: ahci Nov 1 01:35:03.057078 kernel: xhci_hcd 0000:00:14.0: Host supports USB 3.1 Enhanced SuperSpeed Nov 1 01:35:03.057152 kernel: scsi host3: ahci Nov 1 01:35:03.057256 kernel: hub 1-0:1.0: USB hub found Nov 1 01:35:03.057326 kernel: scsi host4: ahci Nov 1 01:35:03.057387 kernel: hub 1-0:1.0: 16 ports detected Nov 1 01:35:03.057451 kernel: scsi host5: ahci Nov 1 01:35:03.057514 kernel: hub 2-0:1.0: USB hub found Nov 1 01:35:03.057583 kernel: scsi host6: ahci Nov 1 01:35:03.057643 kernel: hub 2-0:1.0: 10 ports detected Nov 1 01:35:03.057703 kernel: ata1: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516100 irq 127 Nov 1 01:35:03.057712 kernel: ata2: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516180 irq 127 Nov 1 01:35:03.057720 kernel: ata3: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516200 irq 127 Nov 1 01:35:03.057729 kernel: ata4: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516280 irq 127 Nov 1 01:35:03.057737 kernel: ata5: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516300 irq 127 Nov 1 01:35:03.057744 kernel: ata6: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516380 irq 127 Nov 1 01:35:03.057752 kernel: ata7: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516400 irq 127 Nov 1 01:35:03.057759 kernel: usb 1-14: new high-speed USB device 
number 2 using xhci_hcd Nov 1 01:35:03.057774 kernel: igb: Intel(R) Gigabit Ethernet Network Driver Nov 1 01:35:02.689436 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 1 01:35:03.094336 kernel: igb: Copyright (c) 2007-2014 Intel Corporation. Nov 1 01:35:03.121231 kernel: igb 0000:03:00.0: added PHC on eth0 Nov 1 01:35:03.121322 kernel: igb 0000:03:00.0: Intel(R) Gigabit Ethernet Network Connection Nov 1 01:35:03.136518 kernel: igb 0000:03:00.0: eth0: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:6a:ef:9e Nov 1 01:35:03.150033 kernel: igb 0000:03:00.0: eth0: PBA No: 010000-000 Nov 1 01:35:03.165862 kernel: igb 0000:03:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s) Nov 1 01:35:03.195572 kernel: mlx5_core 0000:01:00.0: firmware version: 14.27.1016 Nov 1 01:35:03.195668 kernel: hub 1-14:1.0: USB hub found Nov 1 01:35:03.195745 kernel: mlx5_core 0000:01:00.0: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Nov 1 01:35:03.195812 kernel: hub 1-14:1.0: 4 ports detected Nov 1 01:35:03.217263 kernel: igb 0000:04:00.0: added PHC on eth1 Nov 1 01:35:03.219408 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Nov 1 01:35:03.431694 kernel: igb 0000:04:00.0: Intel(R) Gigabit Ethernet Network Connection Nov 1 01:35:03.431784 kernel: ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Nov 1 01:35:03.431794 kernel: igb 0000:04:00.0: eth1: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:6a:ef:9f Nov 1 01:35:03.431865 kernel: igb 0000:04:00.0: eth1: PBA No: 010000-000 Nov 1 01:35:03.431931 kernel: ata7: SATA link down (SStatus 0 SControl 300) Nov 1 01:35:03.431940 kernel: igb 0000:04:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s) Nov 1 01:35:03.432004 kernel: ata6: SATA link down (SStatus 0 SControl 300) Nov 1 01:35:03.432013 kernel: ata1.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133 Nov 1 01:35:03.432021 kernel: ata4: SATA link down (SStatus 0 SControl 300) Nov 1 01:35:03.432028 kernel: ata3: SATA link down (SStatus 0 SControl 300) Nov 1 01:35:03.432038 kernel: ata5: SATA link down (SStatus 0 SControl 300) Nov 1 01:35:03.432045 kernel: ata2: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Nov 1 01:35:03.432053 kernel: ata2.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133 Nov 1 01:35:03.387165 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Nov 1 01:35:03.557294 kernel: ata1.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Nov 1 01:35:03.557306 kernel: mlx5_core 0000:01:00.0: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384) Nov 1 01:35:03.557470 kernel: ata2.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Nov 1 01:35:03.557479 kernel: mlx5_core 0000:01:00.0: Port module event: module 0, Cable plugged Nov 1 01:35:03.557549 kernel: ata1.00: Features: NCQ-prio Nov 1 01:35:03.557558 kernel: usb 1-14.1: new low-speed USB device number 3 using xhci_hcd Nov 1 01:35:03.557574 kernel: ata2.00: Features: NCQ-prio Nov 1 01:35:03.515751 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 1 01:35:03.619303 kernel: ata1.00: configured for UDMA/133 Nov 1 01:35:03.619393 kernel: scsi 0:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5 Nov 1 01:35:03.619486 kernel: ata2.00: configured for UDMA/133 Nov 1 01:35:03.619496 kernel: scsi 1:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5 Nov 1 01:35:03.515780 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Nov 1 01:35:03.653407 kernel: igb 0000:03:00.0 eno1: renamed from eth0 Nov 1 01:35:03.653530 kernel: hid: raw HID events driver (C) Jiri Kosina Nov 1 01:35:03.602075 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 1 01:35:03.700166 kernel: usbcore: registered new interface driver usbhid Nov 1 01:35:03.700181 kernel: usbhid: USB HID core driver Nov 1 01:35:03.700190 kernel: igb 0000:04:00.0 eno2: renamed from eth1 Nov 1 01:35:03.649332 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 1 01:35:04.319353 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.0/0003:0557:2419.0001/input/input0 Nov 1 01:35:04.319373 kernel: ata1.00: Enabling discard_zeroes_data Nov 1 01:35:04.319382 kernel: mlx5_core 0000:01:00.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Nov 1 01:35:04.319474 kernel: ata2.00: Enabling discard_zeroes_data Nov 1 01:35:04.319483 kernel: sd 1:0:0:0: [sda] 937703088 512-byte logical blocks: (480 GB/447 GiB) Nov 1 01:35:04.319558 kernel: sd 0:0:0:0: [sdb] 937703088 512-byte logical blocks: (480 GB/447 GiB) Nov 1 01:35:04.319627 kernel: sd 0:0:0:0: [sdb] 4096-byte physical blocks Nov 1 01:35:04.319689 kernel: sd 0:0:0:0: [sdb] Write Protect is off Nov 1 01:35:04.319752 kernel: sd 0:0:0:0: [sdb] Mode Sense: 00 3a 00 00 Nov 1 01:35:04.319814 kernel: sd 0:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Nov 1 01:35:04.319875 kernel: sd 0:0:0:0: [sdb] Preferred minimum I/O size 4096 bytes Nov 1 01:35:04.319935 kernel: ata1.00: Enabling discard_zeroes_data Nov 1 01:35:04.319944 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Nov 1 01:35:04.319952 kernel: GPT:9289727 != 937703087 Nov 1 01:35:04.319959 kernel: GPT:Alternate GPT header not at the end of the disk. Nov 1 01:35:04.319966 kernel: GPT:9289727 != 937703087 Nov 1 01:35:04.319974 kernel: GPT: Use GNU Parted to correct GPT errors. 
Nov 1 01:35:04.319982 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 Nov 1 01:35:04.319989 kernel: sd 0:0:0:0: [sdb] Attached SCSI disk Nov 1 01:35:04.320049 kernel: mlx5_core 0000:01:00.1: firmware version: 14.27.1016 Nov 1 01:35:04.320117 kernel: sd 1:0:0:0: [sda] 4096-byte physical blocks Nov 1 01:35:04.320179 kernel: hid-generic 0003:0557:2419.0001: input,hidraw0: USB HID v1.00 Keyboard [HID 0557:2419] on usb-0000:00:14.0-14.1/input0 Nov 1 01:35:04.320320 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.1/0003:0557:2419.0002/input/input1 Nov 1 01:35:04.320329 kernel: hid-generic 0003:0557:2419.0002: input,hidraw1: USB HID v1.00 Mouse [HID 0557:2419] on usb-0000:00:14.0-14.1/input1 Nov 1 01:35:04.320400 kernel: mlx5_core 0000:01:00.1: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Nov 1 01:35:04.320468 kernel: sd 1:0:0:0: [sda] Write Protect is off Nov 1 01:35:04.320530 kernel: mlx5_core 0000:01:00.1: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384) Nov 1 01:35:04.320594 kernel: sd 1:0:0:0: [sda] Mode Sense: 00 3a 00 00 Nov 1 01:35:04.320655 kernel: sd 1:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Nov 1 01:35:04.320716 kernel: mlx5_core 0000:01:00.1: Port module event: module 1, Cable plugged Nov 1 01:35:04.320780 kernel: sd 1:0:0:0: [sda] Preferred minimum I/O size 4096 bytes Nov 1 01:35:04.320843 kernel: mlx5_core 0000:01:00.1: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Nov 1 01:35:04.320907 kernel: ata2.00: Enabling discard_zeroes_data Nov 1 01:35:04.320915 kernel: sd 1:0:0:0: [sda] Attached SCSI disk Nov 1 01:35:03.649371 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 1 01:35:04.349320 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: renamed from eth1 Nov 1 01:35:03.688272 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Nov 1 01:35:04.406825 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: renamed from eth0 Nov 1 01:35:04.406924 kernel: BTRFS: device fsid 5d5360dd-ce7d-46d0-bc66-772f2084023b devid 1 transid 34 /dev/sdb3 scanned by (udev-worker) (561) Nov 1 01:35:04.268357 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 1 01:35:04.440491 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sdb6 scanned by (udev-worker) (671) Nov 1 01:35:04.355864 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Micron_5300_MTFDDAK480TDT EFI-SYSTEM. Nov 1 01:35:04.429967 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Micron_5300_MTFDDAK480TDT ROOT. Nov 1 01:35:04.460571 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Micron_5300_MTFDDAK480TDT USR-A. Nov 1 01:35:04.485397 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Micron_5300_MTFDDAK480TDT USR-A. Nov 1 01:35:04.507605 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 1 01:35:04.555787 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Micron_5300_MTFDDAK480TDT OEM. Nov 1 01:35:04.582346 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Nov 1 01:35:04.620350 kernel: ata1.00: Enabling discard_zeroes_data Nov 1 01:35:04.620364 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 Nov 1 01:35:04.598696 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Nov 1 01:35:04.662296 kernel: ata1.00: Enabling discard_zeroes_data Nov 1 01:35:04.662307 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 Nov 1 01:35:04.662314 disk-uuid[721]: Primary Header is updated. Nov 1 01:35:04.662314 disk-uuid[721]: Secondary Entries is updated. Nov 1 01:35:04.662314 disk-uuid[721]: Secondary Header is updated. Nov 1 01:35:04.683675 kernel: ata1.00: Enabling discard_zeroes_data Nov 1 01:35:04.703266 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 Nov 1 01:35:04.726076 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 1 01:35:05.681594 kernel: ata1.00: Enabling discard_zeroes_data Nov 1 01:35:05.701258 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 Nov 1 01:35:05.701332 disk-uuid[722]: The operation has completed successfully. Nov 1 01:35:05.740188 systemd[1]: disk-uuid.service: Deactivated successfully. Nov 1 01:35:05.740313 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Nov 1 01:35:05.764518 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Nov 1 01:35:05.809328 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Nov 1 01:35:05.809396 sh[750]: Success Nov 1 01:35:05.855162 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Nov 1 01:35:05.872117 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Nov 1 01:35:05.878172 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Nov 1 01:35:05.937874 kernel: BTRFS info (device dm-0): first mount of filesystem 5d5360dd-ce7d-46d0-bc66-772f2084023b Nov 1 01:35:05.937894 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Nov 1 01:35:05.959349 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Nov 1 01:35:05.978365 kernel: BTRFS info (device dm-0): disabling log replay at mount time Nov 1 01:35:05.996468 kernel: BTRFS info (device dm-0): using free space tree Nov 1 01:35:06.037258 kernel: BTRFS info (device dm-0): enabling ssd optimizations Nov 1 01:35:06.039459 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Nov 1 01:35:06.048717 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Nov 1 01:35:06.054300 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Nov 1 01:35:06.116240 kernel: BTRFS info (device sdb6): first mount of filesystem 92f9034d-7d56-482a-b71a-15e476525571 Nov 1 01:35:06.116288 kernel: BTRFS info (device sdb6): using crc32c (crc32c-intel) checksum algorithm Nov 1 01:35:06.116296 kernel: BTRFS info (device sdb6): using free space tree Nov 1 01:35:06.123806 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Nov 1 01:35:06.191967 kernel: BTRFS info (device sdb6): enabling ssd optimizations Nov 1 01:35:06.191981 kernel: BTRFS info (device sdb6): auto enabling async discard Nov 1 01:35:06.216215 kernel: BTRFS info (device sdb6): last unmount of filesystem 92f9034d-7d56-482a-b71a-15e476525571 Nov 1 01:35:06.221726 systemd[1]: Finished ignition-setup.service - Ignition (setup). Nov 1 01:35:06.244600 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Nov 1 01:35:06.254546 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 1 01:35:06.267462 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Nov 1 01:35:06.309596 ignition[924]: Ignition 2.19.0 Nov 1 01:35:06.309601 ignition[924]: Stage: fetch-offline Nov 1 01:35:06.311814 unknown[924]: fetched base config from "system" Nov 1 01:35:06.309627 ignition[924]: no configs at "/usr/lib/ignition/base.d" Nov 1 01:35:06.311818 unknown[924]: fetched user config from "system" Nov 1 01:35:06.309632 ignition[924]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Nov 1 01:35:06.312702 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Nov 1 01:35:06.309690 ignition[924]: parsed url from cmdline: "" Nov 1 01:35:06.319433 systemd-networkd[934]: lo: Link UP Nov 1 01:35:06.309692 ignition[924]: no config URL provided Nov 1 01:35:06.319436 systemd-networkd[934]: lo: Gained carrier Nov 1 01:35:06.309695 ignition[924]: reading system config file "/usr/lib/ignition/user.ign" Nov 1 01:35:06.321839 systemd-networkd[934]: Enumeration completed Nov 1 01:35:06.309718 ignition[924]: parsing config with SHA512: 36ffc34bc91b59d8189ee4bb1be0aaf0a8c2e58f56268a9ba0ef2f399c23791a7aec15c66214ed04fa2f22e71171dd70b6cf4545ec143c9c1f6a3194ffea468e Nov 1 01:35:06.322517 systemd-networkd[934]: eno1: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 1 01:35:06.312066 ignition[924]: fetch-offline: fetch-offline passed Nov 1 01:35:06.331519 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 1 01:35:06.312069 ignition[924]: POST message to Packet Timeline Nov 1 01:35:06.349621 systemd[1]: Reached target network.target - Network. Nov 1 01:35:06.312071 ignition[924]: POST Status error: resource requires networking Nov 1 01:35:06.350426 systemd-networkd[934]: eno2: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 1 01:35:06.312107 ignition[924]: Ignition finished successfully Nov 1 01:35:06.357501 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Nov 1 01:35:06.384083 ignition[946]: Ignition 2.19.0 Nov 1 01:35:06.371426 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Nov 1 01:35:06.559357 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: Link up Nov 1 01:35:06.384090 ignition[946]: Stage: kargs Nov 1 01:35:06.378589 systemd-networkd[934]: enp1s0f0np0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 1 01:35:06.384315 ignition[946]: no configs at "/usr/lib/ignition/base.d" Nov 1 01:35:06.551788 systemd-networkd[934]: enp1s0f1np1: Configuring with /usr/lib/systemd/network/zz-default.network. 
Nov 1 01:35:06.384327 ignition[946]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Nov 1 01:35:06.385310 ignition[946]: kargs: kargs passed Nov 1 01:35:06.385315 ignition[946]: POST message to Packet Timeline Nov 1 01:35:06.385330 ignition[946]: GET https://metadata.packet.net/metadata: attempt #1 Nov 1 01:35:06.386038 ignition[946]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:55175->[::1]:53: read: connection refused Nov 1 01:35:06.586539 ignition[946]: GET https://metadata.packet.net/metadata: attempt #2 Nov 1 01:35:06.586810 ignition[946]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:48741->[::1]:53: read: connection refused Nov 1 01:35:06.739322 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: Link up Nov 1 01:35:06.740439 systemd-networkd[934]: eno1: Link UP Nov 1 01:35:06.740605 systemd-networkd[934]: eno2: Link UP Nov 1 01:35:06.740757 systemd-networkd[934]: enp1s0f0np0: Link UP Nov 1 01:35:06.740935 systemd-networkd[934]: enp1s0f0np0: Gained carrier Nov 1 01:35:06.749477 systemd-networkd[934]: enp1s0f1np1: Link UP Nov 1 01:35:06.773396 systemd-networkd[934]: enp1s0f0np0: DHCPv4 address 139.178.94.199/31, gateway 139.178.94.198 acquired from 145.40.83.140 Nov 1 01:35:06.987094 ignition[946]: GET https://metadata.packet.net/metadata: attempt #3 Nov 1 01:35:06.988246 ignition[946]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:41336->[::1]:53: read: connection refused Nov 1 01:35:07.607885 systemd-networkd[934]: enp1s0f1np1: Gained carrier Nov 1 01:35:07.788796 ignition[946]: GET https://metadata.packet.net/metadata: attempt #4 Nov 1 01:35:07.789859 ignition[946]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:41371->[::1]:53: read: connection refused Nov 1 01:35:07.799546 systemd-networkd[934]: enp1s0f0np0: Gained IPv6LL Nov 1 01:35:09.391642 ignition[946]: GET https://metadata.packet.net/metadata: attempt #5 Nov 1 01:35:09.392777 ignition[946]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:44642->[::1]:53: read: connection refused Nov 1 01:35:09.591807 systemd-networkd[934]: enp1s0f1np1: Gained IPv6LL Nov 1 01:35:12.596251 ignition[946]: GET https://metadata.packet.net/metadata: attempt #6 Nov 1 01:35:13.747074 ignition[946]: GET result: OK Nov 1 01:35:14.474112 ignition[946]: Ignition finished successfully Nov 1 01:35:14.493013 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Nov 1 01:35:14.518516 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Nov 1 01:35:14.524853 ignition[964]: Ignition 2.19.0 Nov 1 01:35:14.524857 ignition[964]: Stage: disks Nov 1 01:35:14.524963 ignition[964]: no configs at "/usr/lib/ignition/base.d" Nov 1 01:35:14.524970 ignition[964]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Nov 1 01:35:14.525571 ignition[964]: disks: disks passed Nov 1 01:35:14.525574 ignition[964]: POST message to Packet Timeline Nov 1 01:35:14.525584 ignition[964]: GET https://metadata.packet.net/metadata: attempt #1 Nov 1 01:35:15.648307 ignition[964]: GET result: OK Nov 1 01:35:16.059803 ignition[964]: Ignition finished successfully Nov 1 01:35:16.063451 systemd[1]: Finished ignition-disks.service - Ignition (disks). 
Nov 1 01:35:16.078585 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Nov 1 01:35:16.096555 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Nov 1 01:35:16.117654 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 1 01:35:16.138586 systemd[1]: Reached target sysinit.target - System Initialization. Nov 1 01:35:16.148808 systemd[1]: Reached target basic.target - Basic System. Nov 1 01:35:16.190455 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Nov 1 01:35:16.225111 systemd-fsck[983]: ROOT: clean, 14/553520 files, 52654/553472 blocks Nov 1 01:35:16.234728 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Nov 1 01:35:16.263420 systemd[1]: Mounting sysroot.mount - /sysroot... Nov 1 01:35:16.364230 kernel: EXT4-fs (sdb9): mounted filesystem cb9d31b8-5e00-461c-b45e-c304d1f8091c r/w with ordered data mode. Quota mode: none. Nov 1 01:35:16.364790 systemd[1]: Mounted sysroot.mount - /sysroot. Nov 1 01:35:16.373634 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Nov 1 01:35:16.406289 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 1 01:35:16.414793 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Nov 1 01:35:16.539538 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sdb6 scanned by mount (992) Nov 1 01:35:16.539552 kernel: BTRFS info (device sdb6): first mount of filesystem 92f9034d-7d56-482a-b71a-15e476525571 Nov 1 01:35:16.539560 kernel: BTRFS info (device sdb6): using crc32c (crc32c-intel) checksum algorithm Nov 1 01:35:16.539568 kernel: BTRFS info (device sdb6): using free space tree Nov 1 01:35:16.539578 kernel: BTRFS info (device sdb6): enabling ssd optimizations Nov 1 01:35:16.539585 kernel: BTRFS info (device sdb6): auto enabling async discard Nov 1 01:35:16.457861 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Nov 1 01:35:16.539913 systemd[1]: Starting flatcar-static-network.service - Flatcar Static Network Agent... Nov 1 01:35:16.580352 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Nov 1 01:35:16.580378 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Nov 1 01:35:16.609525 coreos-metadata[994]: Nov 01 01:35:16.601 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Nov 1 01:35:16.591411 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Nov 1 01:35:16.656423 coreos-metadata[1010]: Nov 01 01:35:16.602 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Nov 1 01:35:16.626530 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Nov 1 01:35:16.660521 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Nov 1 01:35:16.699402 initrd-setup-root[1024]: cut: /sysroot/etc/passwd: No such file or directory Nov 1 01:35:16.709351 initrd-setup-root[1031]: cut: /sysroot/etc/group: No such file or directory Nov 1 01:35:16.720341 initrd-setup-root[1038]: cut: /sysroot/etc/shadow: No such file or directory Nov 1 01:35:16.731319 initrd-setup-root[1045]: cut: /sysroot/etc/gshadow: No such file or directory Nov 1 01:35:16.761549 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Nov 1 01:35:16.785484 systemd[1]: Starting ignition-mount.service - Ignition (mount)... 
Nov 1 01:35:16.823457 kernel: BTRFS info (device sdb6): last unmount of filesystem 92f9034d-7d56-482a-b71a-15e476525571 Nov 1 01:35:16.802022 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Nov 1 01:35:16.832891 systemd[1]: sysroot-oem.mount: Deactivated successfully. Nov 1 01:35:16.856704 ignition[1112]: INFO : Ignition 2.19.0 Nov 1 01:35:16.856704 ignition[1112]: INFO : Stage: mount Nov 1 01:35:16.871425 ignition[1112]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 1 01:35:16.871425 ignition[1112]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Nov 1 01:35:16.871425 ignition[1112]: INFO : mount: mount passed Nov 1 01:35:16.871425 ignition[1112]: INFO : POST message to Packet Timeline Nov 1 01:35:16.871425 ignition[1112]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Nov 1 01:35:16.857986 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Nov 1 01:35:17.683348 coreos-metadata[1010]: Nov 01 01:35:17.683 INFO Fetch successful Nov 1 01:35:17.698692 coreos-metadata[994]: Nov 01 01:35:17.698 INFO Fetch successful Nov 1 01:35:17.733061 coreos-metadata[994]: Nov 01 01:35:17.733 INFO wrote hostname ci-4081.3.6-n-4452d0b810 to /sysroot/etc/hostname Nov 1 01:35:17.733177 systemd[1]: flatcar-static-network.service: Deactivated successfully. Nov 1 01:35:17.733269 systemd[1]: Finished flatcar-static-network.service - Flatcar Static Network Agent. Nov 1 01:35:17.758543 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Nov 1 01:35:18.497611 ignition[1112]: INFO : GET result: OK Nov 1 01:35:18.873672 ignition[1112]: INFO : Ignition finished successfully Nov 1 01:35:18.875924 systemd[1]: Finished ignition-mount.service - Ignition (mount). Nov 1 01:35:18.905532 systemd[1]: Starting ignition-files.service - Ignition (files)... Nov 1 01:35:18.916178 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 1 01:35:18.964232 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sdb6 scanned by mount (1137) Nov 1 01:35:18.993317 kernel: BTRFS info (device sdb6): first mount of filesystem 92f9034d-7d56-482a-b71a-15e476525571 Nov 1 01:35:18.993333 kernel: BTRFS info (device sdb6): using crc32c (crc32c-intel) checksum algorithm Nov 1 01:35:19.010703 kernel: BTRFS info (device sdb6): using free space tree Nov 1 01:35:19.047995 kernel: BTRFS info (device sdb6): enabling ssd optimizations Nov 1 01:35:19.048011 kernel: BTRFS info (device sdb6): auto enabling async discard Nov 1 01:35:19.061077 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Nov 1 01:35:19.088684 ignition[1154]: INFO : Ignition 2.19.0 Nov 1 01:35:19.088684 ignition[1154]: INFO : Stage: files Nov 1 01:35:19.103433 ignition[1154]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 1 01:35:19.103433 ignition[1154]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Nov 1 01:35:19.103433 ignition[1154]: DEBUG : files: compiled without relabeling support, skipping Nov 1 01:35:19.103433 ignition[1154]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Nov 1 01:35:19.103433 ignition[1154]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Nov 1 01:35:19.103433 ignition[1154]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Nov 1 01:35:19.103433 ignition[1154]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Nov 1 01:35:19.103433 ignition[1154]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Nov 1 01:35:19.103433 ignition[1154]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Nov 1 01:35:19.103433 ignition[1154]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Nov 1 01:35:19.103433 ignition[1154]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Nov 1 01:35:19.103433 ignition[1154]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Nov 1 01:35:19.093495 unknown[1154]: wrote ssh authorized keys file for user: core Nov 1 01:35:19.265485 ignition[1154]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Nov 1 01:35:19.265485 ignition[1154]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Nov 1 01:35:19.265485 ignition[1154]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Nov 1 01:35:19.265485 ignition[1154]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Nov 1 01:35:19.265485 ignition[1154]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Nov 1 01:35:19.265485 ignition[1154]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Nov 1 01:35:19.265485 ignition[1154]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 1 01:35:19.265485 ignition[1154]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 1 01:35:19.265485 ignition[1154]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 1 01:35:19.265485 ignition[1154]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 1 01:35:19.265485 ignition[1154]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Nov 1 01:35:19.265485 ignition[1154]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Nov 1 01:35:19.265485 ignition[1154]: 
INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Nov 1 01:35:19.265485 ignition[1154]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Nov 1 01:35:19.265485 ignition[1154]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Nov 1 01:35:19.513513 ignition[1154]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Nov 1 01:35:19.732464 ignition[1154]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Nov 1 01:35:20.007584 ignition[1154]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Nov 1 01:35:20.007584 ignition[1154]: INFO : files: op(c): [started] processing unit "containerd.service" Nov 1 01:35:20.037427 ignition[1154]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Nov 1 01:35:20.037427 ignition[1154]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Nov 1 01:35:20.037427 ignition[1154]: INFO : files: op(c): [finished] processing unit "containerd.service" Nov 1 01:35:20.037427 ignition[1154]: INFO : files: op(e): [started] processing unit "prepare-helm.service" Nov 1 01:35:20.037427 ignition[1154]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 1 01:35:20.037427 ignition[1154]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 1 01:35:20.037427 ignition[1154]: INFO : files: op(e): [finished] processing unit "prepare-helm.service" Nov 1 01:35:20.037427 ignition[1154]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" Nov 1 01:35:20.037427 ignition[1154]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" Nov 1 01:35:20.037427 ignition[1154]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json" Nov 1 01:35:20.037427 ignition[1154]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json" Nov 1 01:35:20.037427 ignition[1154]: INFO : files: files passed Nov 1 01:35:20.037427 ignition[1154]: INFO : POST message to Packet Timeline Nov 1 01:35:20.037427 ignition[1154]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Nov 1 01:35:21.040386 ignition[1154]: INFO : GET result: OK Nov 1 01:35:21.933183 ignition[1154]: INFO : Ignition finished successfully Nov 1 01:35:21.936554 systemd[1]: Finished ignition-files.service - Ignition (files). Nov 1 01:35:21.968344 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Nov 1 01:35:21.968751 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Nov 1 01:35:21.987728 systemd[1]: ignition-quench.service: Deactivated successfully. 
Nov 1 01:35:21.987801 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Nov 1 01:35:22.021899 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 1 01:35:22.042657 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Nov 1 01:35:22.083410 initrd-setup-root-after-ignition[1194]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 1 01:35:22.083410 initrd-setup-root-after-ignition[1194]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Nov 1 01:35:22.123396 initrd-setup-root-after-ignition[1198]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 1 01:35:22.083505 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Nov 1 01:35:22.151666 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Nov 1 01:35:22.151739 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Nov 1 01:35:22.183893 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Nov 1 01:35:22.193414 systemd[1]: Reached target initrd.target - Initrd Default Target. Nov 1 01:35:22.213725 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Nov 1 01:35:22.227762 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Nov 1 01:35:22.309598 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 1 01:35:22.328653 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Nov 1 01:35:22.386119 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Nov 1 01:35:22.386661 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 1 01:35:22.417953 systemd[1]: Stopped target timers.target - Timer Units. Nov 1 01:35:22.436900 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Nov 1 01:35:22.437338 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 1 01:35:22.464036 systemd[1]: Stopped target initrd.target - Initrd Default Target. Nov 1 01:35:22.484901 systemd[1]: Stopped target basic.target - Basic System. Nov 1 01:35:22.502891 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Nov 1 01:35:22.520900 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Nov 1 01:35:22.541888 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Nov 1 01:35:22.562919 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Nov 1 01:35:22.582873 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Nov 1 01:35:22.604037 systemd[1]: Stopped target sysinit.target - System Initialization. Nov 1 01:35:22.624911 systemd[1]: Stopped target local-fs.target - Local File Systems. Nov 1 01:35:22.644898 systemd[1]: Stopped target swap.target - Swaps. Nov 1 01:35:22.662744 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Nov 1 01:35:22.663150 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Nov 1 01:35:22.688038 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Nov 1 01:35:22.707913 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 1 01:35:22.728774 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. 
Nov 1 01:35:22.729287 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 1 01:35:22.750762 systemd[1]: dracut-initqueue.service: Deactivated successfully. Nov 1 01:35:22.751164 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Nov 1 01:35:22.782890 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Nov 1 01:35:22.783391 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Nov 1 01:35:22.803109 systemd[1]: Stopped target paths.target - Path Units. Nov 1 01:35:22.820757 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Nov 1 01:35:22.821165 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 1 01:35:22.841907 systemd[1]: Stopped target slices.target - Slice Units. Nov 1 01:35:22.859897 systemd[1]: Stopped target sockets.target - Socket Units. Nov 1 01:35:22.877872 systemd[1]: iscsid.socket: Deactivated successfully. Nov 1 01:35:22.878185 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Nov 1 01:35:22.897930 systemd[1]: iscsiuio.socket: Deactivated successfully. Nov 1 01:35:22.898267 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 1 01:35:22.920991 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Nov 1 01:35:22.921436 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 1 01:35:22.939989 systemd[1]: ignition-files.service: Deactivated successfully. Nov 1 01:35:22.940416 systemd[1]: Stopped ignition-files.service - Ignition (files). Nov 1 01:35:23.064421 ignition[1220]: INFO : Ignition 2.19.0 Nov 1 01:35:23.064421 ignition[1220]: INFO : Stage: umount Nov 1 01:35:23.064421 ignition[1220]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 1 01:35:23.064421 ignition[1220]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Nov 1 01:35:23.064421 ignition[1220]: INFO : umount: umount passed Nov 1 01:35:23.064421 ignition[1220]: INFO : POST message to Packet Timeline Nov 1 01:35:23.064421 ignition[1220]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Nov 1 01:35:22.957984 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Nov 1 01:35:22.958413 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Nov 1 01:35:22.991472 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Nov 1 01:35:22.992485 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Nov 1 01:35:22.992564 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Nov 1 01:35:23.042475 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Nov 1 01:35:23.055464 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Nov 1 01:35:23.055549 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Nov 1 01:35:23.075479 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Nov 1 01:35:23.075567 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Nov 1 01:35:23.126619 systemd[1]: sysroot-boot.mount: Deactivated successfully. Nov 1 01:35:23.128460 systemd[1]: sysroot-boot.service: Deactivated successfully. Nov 1 01:35:23.128713 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Nov 1 01:35:23.148364 systemd[1]: initrd-cleanup.service: Deactivated successfully. 
Nov 1 01:35:23.148623 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Nov 1 01:35:24.128018 ignition[1220]: INFO : GET result: OK Nov 1 01:35:24.833620 ignition[1220]: INFO : Ignition finished successfully Nov 1 01:35:24.836834 systemd[1]: ignition-mount.service: Deactivated successfully. Nov 1 01:35:24.837133 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Nov 1 01:35:24.853667 systemd[1]: Stopped target network.target - Network. Nov 1 01:35:24.868470 systemd[1]: ignition-disks.service: Deactivated successfully. Nov 1 01:35:24.868663 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Nov 1 01:35:24.886603 systemd[1]: ignition-kargs.service: Deactivated successfully. Nov 1 01:35:24.886761 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Nov 1 01:35:24.904650 systemd[1]: ignition-setup.service: Deactivated successfully. Nov 1 01:35:24.904810 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Nov 1 01:35:24.923629 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Nov 1 01:35:24.923794 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Nov 1 01:35:24.942591 systemd[1]: initrd-setup-root.service: Deactivated successfully. Nov 1 01:35:24.942762 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Nov 1 01:35:24.962021 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Nov 1 01:35:24.971382 systemd-networkd[934]: enp1s0f0np0: DHCPv6 lease lost Nov 1 01:35:24.979461 systemd-networkd[934]: enp1s0f1np1: DHCPv6 lease lost Nov 1 01:35:24.980691 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Nov 1 01:35:25.001291 systemd[1]: systemd-resolved.service: Deactivated successfully. Nov 1 01:35:25.001568 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Nov 1 01:35:25.021506 systemd[1]: systemd-networkd.service: Deactivated successfully. Nov 1 01:35:25.021851 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Nov 1 01:35:25.042497 systemd[1]: systemd-networkd.socket: Deactivated successfully. Nov 1 01:35:25.042637 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Nov 1 01:35:25.073437 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Nov 1 01:35:25.099412 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Nov 1 01:35:25.099663 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 1 01:35:25.118717 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 1 01:35:25.118893 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Nov 1 01:35:25.136700 systemd[1]: systemd-modules-load.service: Deactivated successfully. Nov 1 01:35:25.136869 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Nov 1 01:35:25.156706 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Nov 1 01:35:25.156876 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 1 01:35:25.175954 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 1 01:35:25.198398 systemd[1]: systemd-udevd.service: Deactivated successfully. Nov 1 01:35:25.198767 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 1 01:35:25.230839 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. 
Nov 1 01:35:25.230870 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Nov 1 01:35:25.253521 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Nov 1 01:35:25.253550 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Nov 1 01:35:25.273555 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Nov 1 01:35:25.273641 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Nov 1 01:35:25.305773 systemd[1]: dracut-cmdline.service: Deactivated successfully. Nov 1 01:35:25.305913 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Nov 1 01:35:25.352340 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 1 01:35:25.352539 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 1 01:35:25.637212 systemd-journald[267]: Received SIGTERM from PID 1 (systemd). Nov 1 01:35:25.637240 systemd-journald[267]: Failed to send stream file descriptor to service manager: Connection refused Nov 1 01:35:25.405530 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Nov 1 01:35:25.415471 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Nov 1 01:35:25.415661 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 1 01:35:25.444437 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 1 01:35:25.444615 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 1 01:35:25.466517 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Nov 1 01:35:25.466740 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Nov 1 01:35:25.488138 systemd[1]: network-cleanup.service: Deactivated successfully. Nov 1 01:35:25.488404 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Nov 1 01:35:25.510243 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Nov 1 01:35:25.545688 systemd[1]: Starting initrd-switch-root.service - Switch Root... Nov 1 01:35:25.566104 systemd[1]: Switching root. Nov 1 01:35:25.745320 systemd-journald[267]: Journal stopped Nov 1 01:35:28.219701 kernel: SELinux: policy capability network_peer_controls=1 Nov 1 01:35:28.219717 kernel: SELinux: policy capability open_perms=1 Nov 1 01:35:28.219725 kernel: SELinux: policy capability extended_socket_class=1 Nov 1 01:35:28.219733 kernel: SELinux: policy capability always_check_network=0 Nov 1 01:35:28.219738 kernel: SELinux: policy capability cgroup_seclabel=1 Nov 1 01:35:28.219744 kernel: SELinux: policy capability nnp_nosuid_transition=1 Nov 1 01:35:28.219750 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Nov 1 01:35:28.219758 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Nov 1 01:35:28.219763 kernel: audit: type=1403 audit(1761960925.999:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Nov 1 01:35:28.219771 systemd[1]: Successfully loaded SELinux policy in 157.569ms. Nov 1 01:35:28.219779 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 7.415ms. Nov 1 01:35:28.219787 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Nov 1 01:35:28.219793 systemd[1]: Detected architecture x86-64. 
Nov 1 01:35:28.219800 systemd[1]: Detected first boot. Nov 1 01:35:28.219807 systemd[1]: Hostname set to . Nov 1 01:35:28.219815 systemd[1]: Initializing machine ID from random generator. Nov 1 01:35:28.219822 zram_generator::config[1286]: No configuration found. Nov 1 01:35:28.219829 systemd[1]: Populated /etc with preset unit settings. Nov 1 01:35:28.219836 systemd[1]: Queued start job for default target multi-user.target. Nov 1 01:35:28.219843 systemd[1]: Unnecessary job was removed for dev-sdb6.device - /dev/sdb6. Nov 1 01:35:28.219850 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Nov 1 01:35:28.219857 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Nov 1 01:35:28.219865 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Nov 1 01:35:28.219872 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Nov 1 01:35:28.219879 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Nov 1 01:35:28.219886 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Nov 1 01:35:28.219893 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Nov 1 01:35:28.219900 systemd[1]: Created slice user.slice - User and Session Slice. Nov 1 01:35:28.219907 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 1 01:35:28.219916 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 1 01:35:28.219923 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Nov 1 01:35:28.219930 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Nov 1 01:35:28.219937 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Nov 1 01:35:28.219944 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 1 01:35:28.219951 systemd[1]: Expecting device dev-ttyS1.device - /dev/ttyS1... Nov 1 01:35:28.219958 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 1 01:35:28.219965 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Nov 1 01:35:28.219973 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 1 01:35:28.219980 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 1 01:35:28.219987 systemd[1]: Reached target slices.target - Slice Units. Nov 1 01:35:28.219996 systemd[1]: Reached target swap.target - Swaps. Nov 1 01:35:28.220004 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Nov 1 01:35:28.220011 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Nov 1 01:35:28.220018 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Nov 1 01:35:28.220026 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Nov 1 01:35:28.220033 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 1 01:35:28.220041 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 1 01:35:28.220048 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 1 01:35:28.220055 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. 
Nov 1 01:35:28.220063 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Nov 1 01:35:28.220071 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Nov 1 01:35:28.220079 systemd[1]: Mounting media.mount - External Media Directory... Nov 1 01:35:28.220086 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 01:35:28.220093 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Nov 1 01:35:28.220101 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Nov 1 01:35:28.220108 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Nov 1 01:35:28.220115 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Nov 1 01:35:28.220124 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 1 01:35:28.220131 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 1 01:35:28.220139 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Nov 1 01:35:28.220146 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 1 01:35:28.220153 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 1 01:35:28.220161 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 1 01:35:28.220168 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Nov 1 01:35:28.220175 kernel: ACPI: bus type drm_connector registered Nov 1 01:35:28.220182 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 1 01:35:28.220190 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Nov 1 01:35:28.220198 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Nov 1 01:35:28.220206 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) Nov 1 01:35:28.220218 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 1 01:35:28.220226 kernel: fuse: init (API version 7.39) Nov 1 01:35:28.220232 kernel: loop: module loaded Nov 1 01:35:28.220248 systemd-journald[1406]: Collecting audit messages is disabled. Nov 1 01:35:28.220265 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 1 01:35:28.220273 systemd-journald[1406]: Journal started Nov 1 01:35:28.220288 systemd-journald[1406]: Runtime Journal (/run/log/journal/f2c84f4161e5461f9ce8fa0f2d85cd98) is 8.0M, max 639.9M, 631.9M free. Nov 1 01:35:28.273301 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Nov 1 01:35:28.307264 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Nov 1 01:35:28.340264 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 1 01:35:28.390266 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 01:35:28.411215 systemd[1]: Started systemd-journald.service - Journal Service. Nov 1 01:35:28.421999 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Nov 1 01:35:28.432495 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. 
Nov 1 01:35:28.442497 systemd[1]: Mounted media.mount - External Media Directory. Nov 1 01:35:28.452508 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Nov 1 01:35:28.462428 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Nov 1 01:35:28.472471 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Nov 1 01:35:28.482648 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Nov 1 01:35:28.493858 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 1 01:35:28.505829 systemd[1]: modprobe@configfs.service: Deactivated successfully. Nov 1 01:35:28.506097 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Nov 1 01:35:28.518252 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 1 01:35:28.518729 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 1 01:35:28.531175 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 1 01:35:28.531637 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 1 01:35:28.543146 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 1 01:35:28.543610 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 1 01:35:28.555129 systemd[1]: modprobe@fuse.service: Deactivated successfully. Nov 1 01:35:28.555565 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Nov 1 01:35:28.566084 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 1 01:35:28.566503 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 1 01:35:28.576640 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 1 01:35:28.586589 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Nov 1 01:35:28.597628 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Nov 1 01:35:28.608643 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 1 01:35:28.629792 systemd[1]: Reached target network-pre.target - Preparation for Network. Nov 1 01:35:28.655789 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Nov 1 01:35:28.667110 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Nov 1 01:35:28.677457 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Nov 1 01:35:28.678844 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Nov 1 01:35:28.689190 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Nov 1 01:35:28.700349 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 1 01:35:28.714676 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Nov 1 01:35:28.717429 systemd-journald[1406]: Time spent on flushing to /var/log/journal/f2c84f4161e5461f9ce8fa0f2d85cd98 is 12.921ms for 1357 entries. Nov 1 01:35:28.717429 systemd-journald[1406]: System Journal (/var/log/journal/f2c84f4161e5461f9ce8fa0f2d85cd98) is 8.0M, max 195.6M, 187.6M free. Nov 1 01:35:28.752891 systemd-journald[1406]: Received client request to flush runtime journal. 
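The modprobe@.service template instances above (configfs, dm_mod, drm, efi_pstore, fuse, loop) each load one optional kernel module and then deactivate; the interleaved kernel lines ("fuse: init (API version 7.39)", "loop: module loaded", "ACPI: bus type drm_connector registered") confirm the loads. As an illustration only, here is a minimal Python sketch, assuming the standard /proc/modules and /sys/module interfaces, that checks after boot whether those modules are actually present; it is not part of the boot sequence shown here.

#!/usr/bin/env python3
"""Sketch: verify the modules loaded by the modprobe@*.service instances above.

/sys/module/<name> covers both built-in and loaded modules; /proc/modules lists
only loadable ones. Module names are taken from the unit names in the log.
"""
from pathlib import Path

MODULES = ["configfs", "dm_mod", "drm", "efi_pstore", "fuse", "loop"]

def loadable_modules() -> set[str]:
    # Each /proc/modules line is "<name> <size> <refcount> ..."
    return {line.split()[0] for line in Path("/proc/modules").read_text().splitlines()}

def main() -> None:
    loaded = loadable_modules()
    for name in MODULES:
        present = name in loaded or Path("/sys/module", name).exists()
        print(f"{name}: {'present' if present else 'not loaded'}")

if __name__ == "__main__":
    main()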
Nov 1 01:35:28.732327 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 1 01:35:28.732979 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 1 01:35:28.739188 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 1 01:35:28.761941 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Nov 1 01:35:28.773402 systemd-tmpfiles[1445]: ACLs are not supported, ignoring. Nov 1 01:35:28.773412 systemd-tmpfiles[1445]: ACLs are not supported, ignoring. Nov 1 01:35:28.774337 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Nov 1 01:35:28.785440 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Nov 1 01:35:28.796515 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Nov 1 01:35:28.807477 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Nov 1 01:35:28.818482 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 1 01:35:28.828455 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 1 01:35:28.842092 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Nov 1 01:35:28.865408 systemd[1]: Starting systemd-sysusers.service - Create System Users... Nov 1 01:35:28.875622 udevadm[1452]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Nov 1 01:35:28.881778 systemd[1]: Finished systemd-sysusers.service - Create System Users. Nov 1 01:35:28.904359 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 1 01:35:28.912120 systemd-tmpfiles[1467]: ACLs are not supported, ignoring. Nov 1 01:35:28.912131 systemd-tmpfiles[1467]: ACLs are not supported, ignoring. Nov 1 01:35:28.915573 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 1 01:35:29.062710 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Nov 1 01:35:29.092467 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 1 01:35:29.104878 systemd-udevd[1473]: Using default interface naming scheme 'v255'. Nov 1 01:35:29.125584 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 1 01:35:29.142255 systemd[1]: Found device dev-ttyS1.device - /dev/ttyS1. Nov 1 01:35:29.177560 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input2 Nov 1 01:35:29.177602 kernel: BTRFS warning: duplicate device /dev/sdb3 devid 1 generation 34 scanned by (udev-worker) (1540) Nov 1 01:35:29.177618 kernel: ACPI: button: Sleep Button [SLPB] Nov 1 01:35:29.216150 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Nov 1 01:35:29.235216 kernel: IPMI message handler: version 39.2 Nov 1 01:35:29.235242 kernel: mousedev: PS/2 mouse device common for all mice Nov 1 01:35:29.235266 kernel: ACPI: button: Power Button [PWRF] Nov 1 01:35:29.287633 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Nov 1 01:35:29.302219 kernel: ipmi device interface Nov 1 01:35:29.302267 kernel: i801_smbus 0000:00:1f.4: SPD Write Disable is set Nov 1 01:35:29.307990 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Micron_5300_MTFDDAK480TDT OEM. Nov 1 01:35:29.335224 kernel: i801_smbus 0000:00:1f.4: SMBus using PCI interrupt Nov 1 01:35:29.374219 kernel: i2c i2c-0: 2/4 memory slots populated (from DMI) Nov 1 01:35:29.385552 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Nov 1 01:35:29.400216 kernel: mei_me 0000:00:16.4: Device doesn't have valid ME Interface Nov 1 01:35:29.400335 kernel: mei_me 0000:00:16.0: Device doesn't have valid ME Interface Nov 1 01:35:29.400663 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 1 01:35:29.445214 kernel: iTCO_vendor_support: vendor-support=0 Nov 1 01:35:29.445255 kernel: ipmi_si: IPMI System Interface driver Nov 1 01:35:29.453240 systemd[1]: Started systemd-userdbd.service - User Database Manager. Nov 1 01:35:29.481318 kernel: ipmi_si dmi-ipmi-si.0: ipmi_platform: probing via SMBIOS Nov 1 01:35:29.481439 kernel: ipmi_platform: ipmi_si: SMBIOS: io 0xca2 regsize 1 spacing 1 irq 0 Nov 1 01:35:29.481451 kernel: ipmi_si: Adding SMBIOS-specified kcs state machine Nov 1 01:35:29.481462 kernel: ipmi_si IPI0001:00: ipmi_platform: probing via ACPI Nov 1 01:35:29.558603 kernel: ipmi_si IPI0001:00: ipmi_platform: [io 0x0ca2] regsize 1 spacing 1 irq 0 Nov 1 01:35:29.598150 kernel: ipmi_si dmi-ipmi-si.0: Removing SMBIOS-specified kcs state machine in favor of ACPI Nov 1 01:35:29.598253 kernel: ipmi_si: Adding ACPI-specified kcs state machine Nov 1 01:35:29.620217 kernel: ipmi_si: Trying ACPI-specified kcs state machine at i/o address 0xca2, slave address 0x20, irq 0 Nov 1 01:35:29.665897 kernel: iTCO_wdt iTCO_wdt: Found a Intel PCH TCO device (Version=6, TCOBASE=0x0400) Nov 1 01:35:29.666023 kernel: iTCO_wdt iTCO_wdt: initialized. heartbeat=30 sec (nowayout=0) Nov 1 01:35:29.678215 kernel: ipmi_si IPI0001:00: The BMC does not support clearing the recv irq bit, compensating, but the BMC needs to be fixed. Nov 1 01:35:29.716359 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 1 01:35:29.754254 kernel: intel_rapl_common: Found RAPL domain package Nov 1 01:35:29.754358 kernel: ipmi_si IPI0001:00: IPMI message handler: Found new BMC (man_id: 0x002a7c, prod_id: 0x1b0f, dev_id: 0x20) Nov 1 01:35:29.754461 kernel: intel_rapl_common: Found RAPL domain core Nov 1 01:35:29.790125 kernel: intel_rapl_common: Found RAPL domain dram Nov 1 01:35:29.808432 systemd-networkd[1561]: lo: Link UP Nov 1 01:35:29.808435 systemd-networkd[1561]: lo: Gained carrier Nov 1 01:35:29.811467 systemd-networkd[1561]: bond0: netdev ready Nov 1 01:35:29.812402 systemd-networkd[1561]: Enumeration completed Nov 1 01:35:29.812484 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 1 01:35:29.816766 systemd-networkd[1561]: enp1s0f0np0: Configuring with /etc/systemd/network/10-1c:34:da:5c:42:d4.network. Nov 1 01:35:29.849339 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Nov 1 01:35:29.857215 kernel: ipmi_si IPI0001:00: IPMI kcs interface initialized Nov 1 01:35:29.877249 kernel: ipmi_ssif: IPMI SSIF Interface driver Nov 1 01:35:29.878651 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Nov 1 01:35:29.902373 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... 
Nov 1 01:35:29.910507 lvm[1595]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Nov 1 01:35:29.943499 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Nov 1 01:35:29.954616 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 1 01:35:29.986637 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Nov 1 01:35:29.996372 lvm[1598]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Nov 1 01:35:30.044289 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Nov 1 01:35:30.055491 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Nov 1 01:35:30.066284 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Nov 1 01:35:30.066307 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 1 01:35:30.076347 systemd[1]: Reached target machines.target - Containers. Nov 1 01:35:30.086059 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Nov 1 01:35:30.113603 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Nov 1 01:35:30.125625 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Nov 1 01:35:30.135520 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 1 01:35:30.147506 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Nov 1 01:35:30.161403 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Nov 1 01:35:30.173075 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Nov 1 01:35:30.173578 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Nov 1 01:35:30.199240 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Nov 1 01:35:30.202218 kernel: loop0: detected capacity change from 0 to 140768 Nov 1 01:35:30.214136 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Nov 1 01:35:30.214554 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Nov 1 01:35:30.242005 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Nov 1 01:35:30.280216 kernel: loop1: detected capacity change from 0 to 8 Nov 1 01:35:30.280257 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: Link up Nov 1 01:35:30.332215 kernel: bond0: (slave enp1s0f0np0): Enslaving as a backup interface with an up link Nov 1 01:35:30.333101 systemd-networkd[1561]: enp1s0f1np1: Configuring with /etc/systemd/network/10-1c:34:da:5c:42:d5.network. Nov 1 01:35:30.370225 kernel: loop2: detected capacity change from 0 to 224512 Nov 1 01:35:30.439163 kernel: loop3: detected capacity change from 0 to 142488 Nov 1 01:35:30.490217 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: Link up Nov 1 01:35:30.512802 systemd-networkd[1561]: bond0: Configuring with /etc/systemd/network/05-bond0.network. 
Nov 1 01:35:30.513214 kernel: bond0: (slave enp1s0f1np1): Enslaving as a backup interface with an up link Nov 1 01:35:30.514279 systemd-networkd[1561]: enp1s0f0np0: Link UP Nov 1 01:35:30.514426 systemd-networkd[1561]: enp1s0f0np0: Gained carrier Nov 1 01:35:30.528010 ldconfig[1603]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Nov 1 01:35:30.529840 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Nov 1 01:35:30.535222 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond Nov 1 01:35:30.540425 systemd-networkd[1561]: enp1s0f1np1: Reconfiguring with /etc/systemd/network/10-1c:34:da:5c:42:d4.network. Nov 1 01:35:30.540566 systemd-networkd[1561]: enp1s0f1np1: Link UP Nov 1 01:35:30.540747 systemd-networkd[1561]: enp1s0f1np1: Gained carrier Nov 1 01:35:30.558214 kernel: loop4: detected capacity change from 0 to 140768 Nov 1 01:35:30.566380 systemd-networkd[1561]: bond0: Link UP Nov 1 01:35:30.566541 systemd-networkd[1561]: bond0: Gained carrier Nov 1 01:35:30.590213 kernel: loop5: detected capacity change from 0 to 8 Nov 1 01:35:30.607214 kernel: loop6: detected capacity change from 0 to 224512 Nov 1 01:35:30.630307 kernel: bond0: (slave enp1s0f0np0): link status definitely up, 10000 Mbps full duplex Nov 1 01:35:30.630340 kernel: loop7: detected capacity change from 0 to 142488 Nov 1 01:35:30.630358 kernel: bond0: active interface up! Nov 1 01:35:30.641804 (sd-merge)[1621]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-packet'. Nov 1 01:35:30.642035 (sd-merge)[1621]: Merged extensions into '/usr'. Nov 1 01:35:30.667839 systemd[1]: Reloading requested from client PID 1607 ('systemd-sysext') (unit systemd-sysext.service)... Nov 1 01:35:30.667846 systemd[1]: Reloading... Nov 1 01:35:30.700263 zram_generator::config[1648]: No configuration found. Nov 1 01:35:30.761445 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 1 01:35:30.787248 kernel: bond0: (slave enp1s0f1np1): link status definitely up, 10000 Mbps full duplex Nov 1 01:35:30.813663 systemd[1]: Reloading finished in 145 ms. Nov 1 01:35:30.828498 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Nov 1 01:35:30.851380 systemd[1]: Starting ensure-sysext.service... Nov 1 01:35:30.858947 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 1 01:35:30.880339 systemd[1]: Reloading requested from client PID 1709 ('systemctl') (unit ensure-sysext.service)... Nov 1 01:35:30.880347 systemd[1]: Reloading... Nov 1 01:35:30.888333 systemd-tmpfiles[1710]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Nov 1 01:35:30.888542 systemd-tmpfiles[1710]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Nov 1 01:35:30.889039 systemd-tmpfiles[1710]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Nov 1 01:35:30.889211 systemd-tmpfiles[1710]: ACLs are not supported, ignoring. Nov 1 01:35:30.889249 systemd-tmpfiles[1710]: ACLs are not supported, ignoring. Nov 1 01:35:30.891240 systemd-tmpfiles[1710]: Detected autofs mount point /boot during canonicalization of boot. 
Nov 1 01:35:30.891244 systemd-tmpfiles[1710]: Skipping /boot Nov 1 01:35:30.895430 systemd-tmpfiles[1710]: Detected autofs mount point /boot during canonicalization of boot. Nov 1 01:35:30.895435 systemd-tmpfiles[1710]: Skipping /boot Nov 1 01:35:30.915217 zram_generator::config[1738]: No configuration found. Nov 1 01:35:30.975938 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 1 01:35:31.027616 systemd[1]: Reloading finished in 147 ms. Nov 1 01:35:31.038181 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 1 01:35:31.064149 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Nov 1 01:35:31.074258 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Nov 1 01:35:31.086086 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Nov 1 01:35:31.090696 augenrules[1820]: No rules Nov 1 01:35:31.098406 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 1 01:35:31.108966 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Nov 1 01:35:31.120768 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Nov 1 01:35:31.130555 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Nov 1 01:35:31.141482 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Nov 1 01:35:31.159988 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Nov 1 01:35:31.172202 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 01:35:31.172340 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 1 01:35:31.173015 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 1 01:35:31.182799 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 1 01:35:31.191755 systemd-resolved[1826]: Positive Trust Anchors: Nov 1 01:35:31.191761 systemd-resolved[1826]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 1 01:35:31.191785 systemd-resolved[1826]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 1 01:35:31.193873 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 1 01:35:31.194370 systemd-resolved[1826]: Using system hostname 'ci-4081.3.6-n-4452d0b810'. Nov 1 01:35:31.203284 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 1 01:35:31.204060 systemd[1]: Starting systemd-update-done.service - Update is Completed... 
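A few entries back, systemd-networkd assembled bond0 from enp1s0f0np0 and enp1s0f1np1 using the per-MAC 10-*.network files and 05-bond0.network; the kernel then reported both slaves "link status definitely up, 10000 Mbps full duplex" and the bond gained carrier. As a hedged illustration (not part of the boot flow), a minimal Python sketch that reads the kernel's standard bonding and sysfs interfaces to report the same state after boot; the interface names are taken from the log above.

#!/usr/bin/env python3
"""Sketch: report bond0 state via /proc/net/bonding and /sys/class/net."""
from pathlib import Path

BOND = "bond0"
SLAVES = ["enp1s0f0np0", "enp1s0f1np1"]

def read(path: str) -> str:
    try:
        return Path(path).read_text().strip()
    except OSError:
        return "n/a"  # e.g. 'speed' is unreadable while a link is down

def main() -> None:
    print(f"{BOND}: operstate={read(f'/sys/class/net/{BOND}/operstate')} "
          f"speed={read(f'/sys/class/net/{BOND}/speed')} Mb/s")
    for slave in SLAVES:
        print(f"  {slave}: operstate={read(f'/sys/class/net/{slave}/operstate')}")
    report = Path(f"/proc/net/bonding/{BOND}")
    if report.exists():
        # Keep only the summary lines from the kernel bonding report.
        for line in report.read_text().splitlines():
            if line.startswith(("Bonding Mode", "MII Status", "Slave Interface")):
                print(line)

if __name__ == "__main__":
    main()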
Nov 1 01:35:31.213251 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 1 01:35:31.213315 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 01:35:31.213865 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 1 01:35:31.226028 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 1 01:35:31.226524 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 1 01:35:31.238462 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 1 01:35:31.238551 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 1 01:35:31.249444 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 1 01:35:31.249528 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 1 01:35:31.259491 systemd[1]: Finished systemd-update-done.service - Update is Completed. Nov 1 01:35:31.271608 systemd[1]: Reached target network.target - Network. Nov 1 01:35:31.279304 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 1 01:35:31.290317 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 01:35:31.290484 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 1 01:35:31.297357 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 1 01:35:31.307883 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 1 01:35:31.317851 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 1 01:35:31.328899 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 1 01:35:31.338376 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 1 01:35:31.338455 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 1 01:35:31.338510 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 01:35:31.339194 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 1 01:35:31.339319 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 1 01:35:31.350548 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 1 01:35:31.350628 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 1 01:35:31.360499 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 1 01:35:31.360578 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 1 01:35:31.371594 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 1 01:35:31.371672 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 1 01:35:31.382321 systemd[1]: Finished ensure-sysext.service. 
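ensure-sysext.service has now re-applied the system extensions that sd-merge reported earlier ('containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-packet'), merging them into /usr. Purely as an illustration, a minimal Python sketch, assuming the systemd-sysext binary and its documented image search directories are available (as they are on Flatcar), that shows the current merge state:

#!/usr/bin/env python3
"""Sketch: show which system extensions systemd-sysext has merged."""
import subprocess
from pathlib import Path

# Documented sysext image search locations.
SEARCH_DIRS = ["/etc/extensions", "/run/extensions", "/var/lib/extensions"]

def main() -> None:
    # Hierarchy overview straight from systemd-sysext ("status" is its default verb).
    status = subprocess.run(["systemd-sysext", "status"],
                            capture_output=True, text=True, check=True)
    print(status.stdout, end="")
    for directory in SEARCH_DIRS:
        path = Path(directory)
        if path.is_dir():
            for image in sorted(path.iterdir()):
                print(f"{directory}: {image.name}")

if __name__ == "__main__":
    main()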
Nov 1 01:35:31.391714 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 1 01:35:31.391746 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 1 01:35:31.403349 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Nov 1 01:35:31.450709 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Nov 1 01:35:31.461488 systemd[1]: Reached target sysinit.target - System Initialization. Nov 1 01:35:31.471335 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Nov 1 01:35:31.482303 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Nov 1 01:35:31.493300 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Nov 1 01:35:31.504306 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Nov 1 01:35:31.504323 systemd[1]: Reached target paths.target - Path Units. Nov 1 01:35:31.512287 systemd[1]: Reached target time-set.target - System Time Set. Nov 1 01:35:31.522346 systemd[1]: Started logrotate.timer - Daily rotation of log files. Nov 1 01:35:31.532452 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Nov 1 01:35:31.543415 systemd[1]: Reached target timers.target - Timer Units. Nov 1 01:35:31.552458 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Nov 1 01:35:31.563992 systemd[1]: Starting docker.socket - Docker Socket for the API... Nov 1 01:35:31.573707 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Nov 1 01:35:31.584472 systemd[1]: Listening on docker.socket - Docker Socket for the API. Nov 1 01:35:31.594434 systemd[1]: Reached target sockets.target - Socket Units. Nov 1 01:35:31.605253 systemd[1]: Reached target basic.target - Basic System. Nov 1 01:35:31.614326 systemd[1]: System is tainted: cgroupsv1 Nov 1 01:35:31.614347 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Nov 1 01:35:31.614360 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Nov 1 01:35:31.620238 systemd[1]: Starting containerd.service - containerd container runtime... Nov 1 01:35:31.631018 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Nov 1 01:35:31.644225 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Nov 1 01:35:31.653834 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Nov 1 01:35:31.658890 coreos-metadata[1872]: Nov 01 01:35:31.658 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Nov 1 01:35:31.664905 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Nov 1 01:35:31.665174 dbus-daemon[1873]: [system] SELinux support is enabled Nov 1 01:35:31.667040 jq[1876]: false Nov 1 01:35:31.675389 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Nov 1 01:35:31.676012 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... 
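The "System is tainted: cgroupsv1" entry above means this image still runs systemd on the legacy cgroup v1 hierarchy rather than the unified cgroup v2 one, which lines up with the earlier journald note that BPF/cgroup firewalling is not supported. A minimal Python sketch of the usual runtime check, assuming the standard /sys/fs/cgroup mount point (on a pure v2 system the top-level cgroup.controllers file exists; on v1 or hybrid setups it does not):

#!/usr/bin/env python3
"""Sketch: report whether the booted system uses cgroup v1 or the unified v2 hierarchy."""
from pathlib import Path

def cgroup_version() -> int:
    # cgroup2 exposes cgroup.controllers at the top of the unified mount.
    return 2 if Path("/sys/fs/cgroup/cgroup.controllers").exists() else 1

if __name__ == "__main__":
    print(f"cgroup v{cgroup_version()} hierarchy mounted at /sys/fs/cgroup")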
Nov 1 01:35:31.683425 extend-filesystems[1878]: Found loop4 Nov 1 01:35:31.683425 extend-filesystems[1878]: Found loop5 Nov 1 01:35:31.729512 kernel: EXT4-fs (sdb9): resizing filesystem from 553472 to 116605649 blocks Nov 1 01:35:31.729542 kernel: BTRFS warning: duplicate device /dev/sdb3 devid 1 generation 34 scanned by (udev-worker) (1544) Nov 1 01:35:31.729559 extend-filesystems[1878]: Found loop6 Nov 1 01:35:31.729559 extend-filesystems[1878]: Found loop7 Nov 1 01:35:31.729559 extend-filesystems[1878]: Found sda Nov 1 01:35:31.729559 extend-filesystems[1878]: Found sdb Nov 1 01:35:31.729559 extend-filesystems[1878]: Found sdb1 Nov 1 01:35:31.729559 extend-filesystems[1878]: Found sdb2 Nov 1 01:35:31.729559 extend-filesystems[1878]: Found sdb3 Nov 1 01:35:31.729559 extend-filesystems[1878]: Found usr Nov 1 01:35:31.729559 extend-filesystems[1878]: Found sdb4 Nov 1 01:35:31.729559 extend-filesystems[1878]: Found sdb6 Nov 1 01:35:31.729559 extend-filesystems[1878]: Found sdb7 Nov 1 01:35:31.729559 extend-filesystems[1878]: Found sdb9 Nov 1 01:35:31.729559 extend-filesystems[1878]: Checking size of /dev/sdb9 Nov 1 01:35:31.729559 extend-filesystems[1878]: Resized partition /dev/sdb9 Nov 1 01:35:31.686906 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Nov 1 01:35:31.883556 extend-filesystems[1887]: resize2fs 1.47.1 (20-May-2024) Nov 1 01:35:31.745093 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Nov 1 01:35:31.782338 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Nov 1 01:35:31.817679 systemd[1]: Starting systemd-logind.service - User Login Management... Nov 1 01:35:31.844279 systemd[1]: Starting tcsd.service - TCG Core Services Daemon... Nov 1 01:35:31.853039 systemd[1]: Starting update-engine.service - Update Engine... Nov 1 01:35:31.860048 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Nov 1 01:35:31.908805 update_engine[1907]: I20251101 01:35:31.875140 1907 main.cc:92] Flatcar Update Engine starting Nov 1 01:35:31.908805 update_engine[1907]: I20251101 01:35:31.875810 1907 update_check_scheduler.cc:74] Next update check in 9m52s Nov 1 01:35:31.869531 systemd-logind[1905]: Watching system buttons on /dev/input/event3 (Power Button) Nov 1 01:35:31.909067 jq[1908]: true Nov 1 01:35:31.869541 systemd-logind[1905]: Watching system buttons on /dev/input/event2 (Sleep Button) Nov 1 01:35:31.869551 systemd-logind[1905]: Watching system buttons on /dev/input/event0 (HID 0557:2419) Nov 1 01:35:31.869844 systemd-logind[1905]: New seat seat0. Nov 1 01:35:31.883594 systemd[1]: Started dbus.service - D-Bus System Message Bus. Nov 1 01:35:31.901570 systemd[1]: Started systemd-logind.service - User Login Management. Nov 1 01:35:31.926357 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Nov 1 01:35:31.926492 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Nov 1 01:35:31.926677 systemd[1]: motdgen.service: Deactivated successfully. Nov 1 01:35:31.926790 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Nov 1 01:35:31.936651 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Nov 1 01:35:31.936774 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
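The kernel line above shows the root filesystem on /dev/sdb9 beginning an online grow from 553472 to 116605649 blocks at the 4 KiB block size resize2fs reports; the resize completes further down in the log. As a worked check of those numbers, a small Python sketch:

#!/usr/bin/env python3
"""Sketch: convert the ext4 block counts from the log into sizes (4 KiB blocks)."""
BLOCK_SIZE = 4096  # bytes per block, per the "(4k) blocks" note from resize2fs

def gib(blocks: int) -> float:
    return blocks * BLOCK_SIZE / 2**30

if __name__ == "__main__":
    before, after = 553_472, 116_605_649
    print(f"before resize: {gib(before):.2f} GiB")  # ~2.1 GiB
    print(f"after resize:  {gib(after):.2f} GiB")   # ~444.8 GiB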
Nov 1 01:35:31.950135 (ntainerd)[1914]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Nov 1 01:35:31.951550 jq[1913]: true Nov 1 01:35:31.953535 dbus-daemon[1873]: [system] Successfully activated service 'org.freedesktop.systemd1' Nov 1 01:35:31.955892 tar[1912]: linux-amd64/LICENSE Nov 1 01:35:31.956076 tar[1912]: linux-amd64/helm Nov 1 01:35:31.957927 systemd[1]: tcsd.service: Skipped due to 'exec-condition'. Nov 1 01:35:31.958063 systemd[1]: Condition check resulted in tcsd.service - TCG Core Services Daemon being skipped. Nov 1 01:35:31.958155 systemd[1]: Started update-engine.service - Update Engine. Nov 1 01:35:31.972743 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Nov 1 01:35:31.972854 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Nov 1 01:35:31.983312 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Nov 1 01:35:31.983388 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Nov 1 01:35:31.995688 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Nov 1 01:35:32.005266 bash[1943]: Updated "/home/core/.ssh/authorized_keys" Nov 1 01:35:32.014438 systemd[1]: Started locksmithd.service - Cluster reboot manager. Nov 1 01:35:32.026262 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Nov 1 01:35:32.033684 sshd_keygen[1904]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Nov 1 01:35:32.038591 locksmithd[1945]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Nov 1 01:35:32.054426 systemd[1]: Starting sshkeys.service... Nov 1 01:35:32.062723 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Nov 1 01:35:32.074816 systemd[1]: Starting issuegen.service - Generate /run/issue... Nov 1 01:35:32.085626 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Nov 1 01:35:32.098147 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Nov 1 01:35:32.110704 systemd[1]: issuegen.service: Deactivated successfully. Nov 1 01:35:32.110833 systemd[1]: Finished issuegen.service - Generate /run/issue. Nov 1 01:35:32.119314 systemd-networkd[1561]: bond0: Gained IPv6LL Nov 1 01:35:32.122093 coreos-metadata[1978]: Nov 01 01:35:32.122 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Nov 1 01:35:32.125000 containerd[1914]: time="2025-11-01T01:35:32.124961275Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Nov 1 01:35:32.136517 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Nov 1 01:35:32.137917 containerd[1914]: time="2025-11-01T01:35:32.137803760Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Nov 1 01:35:32.138605 containerd[1914]: time="2025-11-01T01:35:32.138556078Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.113-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Nov 1 01:35:32.138605 containerd[1914]: time="2025-11-01T01:35:32.138572805Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Nov 1 01:35:32.138605 containerd[1914]: time="2025-11-01T01:35:32.138582585Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Nov 1 01:35:32.138677 containerd[1914]: time="2025-11-01T01:35:32.138668969Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Nov 1 01:35:32.138694 containerd[1914]: time="2025-11-01T01:35:32.138680356Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Nov 1 01:35:32.138722 containerd[1914]: time="2025-11-01T01:35:32.138713965Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Nov 1 01:35:32.138741 containerd[1914]: time="2025-11-01T01:35:32.138722741Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Nov 1 01:35:32.138847 containerd[1914]: time="2025-11-01T01:35:32.138838233Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Nov 1 01:35:32.138863 containerd[1914]: time="2025-11-01T01:35:32.138848106Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Nov 1 01:35:32.138863 containerd[1914]: time="2025-11-01T01:35:32.138856405Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Nov 1 01:35:32.138889 containerd[1914]: time="2025-11-01T01:35:32.138862188Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Nov 1 01:35:32.138912 containerd[1914]: time="2025-11-01T01:35:32.138905048Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Nov 1 01:35:32.139030 containerd[1914]: time="2025-11-01T01:35:32.139022513Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Nov 1 01:35:32.139113 containerd[1914]: time="2025-11-01T01:35:32.139104551Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Nov 1 01:35:32.139131 containerd[1914]: time="2025-11-01T01:35:32.139114018Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Nov 1 01:35:32.139166 containerd[1914]: time="2025-11-01T01:35:32.139159334Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Nov 1 01:35:32.139195 containerd[1914]: time="2025-11-01T01:35:32.139189132Z" level=info msg="metadata content store policy set" policy=shared Nov 1 01:35:32.147744 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Nov 1 01:35:32.151192 containerd[1914]: time="2025-11-01T01:35:32.151173903Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Nov 1 01:35:32.151229 containerd[1914]: time="2025-11-01T01:35:32.151204759Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Nov 1 01:35:32.151229 containerd[1914]: time="2025-11-01T01:35:32.151220100Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Nov 1 01:35:32.151275 containerd[1914]: time="2025-11-01T01:35:32.151229228Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Nov 1 01:35:32.151275 containerd[1914]: time="2025-11-01T01:35:32.151238642Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Nov 1 01:35:32.151335 containerd[1914]: time="2025-11-01T01:35:32.151326197Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Nov 1 01:35:32.151502 containerd[1914]: time="2025-11-01T01:35:32.151493427Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Nov 1 01:35:32.151559 containerd[1914]: time="2025-11-01T01:35:32.151550734Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Nov 1 01:35:32.151577 containerd[1914]: time="2025-11-01T01:35:32.151560893Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Nov 1 01:35:32.151577 containerd[1914]: time="2025-11-01T01:35:32.151568915Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Nov 1 01:35:32.151608 containerd[1914]: time="2025-11-01T01:35:32.151576577Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Nov 1 01:35:32.151608 containerd[1914]: time="2025-11-01T01:35:32.151583596Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Nov 1 01:35:32.151608 containerd[1914]: time="2025-11-01T01:35:32.151590877Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Nov 1 01:35:32.151608 containerd[1914]: time="2025-11-01T01:35:32.151598532Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Nov 1 01:35:32.151608 containerd[1914]: time="2025-11-01T01:35:32.151606135Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Nov 1 01:35:32.151674 containerd[1914]: time="2025-11-01T01:35:32.151613874Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Nov 1 01:35:32.151674 containerd[1914]: time="2025-11-01T01:35:32.151620893Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." 
type=io.containerd.service.v1 Nov 1 01:35:32.151674 containerd[1914]: time="2025-11-01T01:35:32.151628387Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Nov 1 01:35:32.151674 containerd[1914]: time="2025-11-01T01:35:32.151640100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Nov 1 01:35:32.151674 containerd[1914]: time="2025-11-01T01:35:32.151648610Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Nov 1 01:35:32.151674 containerd[1914]: time="2025-11-01T01:35:32.151655505Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Nov 1 01:35:32.151674 containerd[1914]: time="2025-11-01T01:35:32.151662895Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Nov 1 01:35:32.151674 containerd[1914]: time="2025-11-01T01:35:32.151669881Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Nov 1 01:35:32.151784 containerd[1914]: time="2025-11-01T01:35:32.151681634Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Nov 1 01:35:32.151784 containerd[1914]: time="2025-11-01T01:35:32.151688750Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Nov 1 01:35:32.151784 containerd[1914]: time="2025-11-01T01:35:32.151695607Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Nov 1 01:35:32.151784 containerd[1914]: time="2025-11-01T01:35:32.151702351Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Nov 1 01:35:32.151784 containerd[1914]: time="2025-11-01T01:35:32.151710717Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Nov 1 01:35:32.151784 containerd[1914]: time="2025-11-01T01:35:32.151717214Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Nov 1 01:35:32.151784 containerd[1914]: time="2025-11-01T01:35:32.151723631Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Nov 1 01:35:32.151784 containerd[1914]: time="2025-11-01T01:35:32.151730794Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Nov 1 01:35:32.151784 containerd[1914]: time="2025-11-01T01:35:32.151739517Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Nov 1 01:35:32.151784 containerd[1914]: time="2025-11-01T01:35:32.151751102Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Nov 1 01:35:32.151784 containerd[1914]: time="2025-11-01T01:35:32.151758128Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Nov 1 01:35:32.151784 containerd[1914]: time="2025-11-01T01:35:32.151763894Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Nov 1 01:35:32.151966 containerd[1914]: time="2025-11-01T01:35:32.151789489Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
type=io.containerd.tracing.processor.v1 Nov 1 01:35:32.151966 containerd[1914]: time="2025-11-01T01:35:32.151800372Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Nov 1 01:35:32.151966 containerd[1914]: time="2025-11-01T01:35:32.151806806Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Nov 1 01:35:32.151966 containerd[1914]: time="2025-11-01T01:35:32.151813179Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Nov 1 01:35:32.151966 containerd[1914]: time="2025-11-01T01:35:32.151818563Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Nov 1 01:35:32.151966 containerd[1914]: time="2025-11-01T01:35:32.151825337Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Nov 1 01:35:32.151966 containerd[1914]: time="2025-11-01T01:35:32.151834436Z" level=info msg="NRI interface is disabled by configuration." Nov 1 01:35:32.151966 containerd[1914]: time="2025-11-01T01:35:32.151840845Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Nov 1 01:35:32.152095 containerd[1914]: time="2025-11-01T01:35:32.152066451Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false 
EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Nov 1 01:35:32.152178 containerd[1914]: time="2025-11-01T01:35:32.152099914Z" level=info msg="Connect containerd service" Nov 1 01:35:32.152178 containerd[1914]: time="2025-11-01T01:35:32.152117732Z" level=info msg="using legacy CRI server" Nov 1 01:35:32.152178 containerd[1914]: time="2025-11-01T01:35:32.152122731Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Nov 1 01:35:32.152380 containerd[1914]: time="2025-11-01T01:35:32.152359171Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Nov 1 01:35:32.152820 containerd[1914]: time="2025-11-01T01:35:32.152806744Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 1 01:35:32.152916 containerd[1914]: time="2025-11-01T01:35:32.152896076Z" level=info msg="Start subscribing containerd event" Nov 1 01:35:32.152938 containerd[1914]: time="2025-11-01T01:35:32.152924660Z" level=info msg="Start recovering state" Nov 1 01:35:32.152973 containerd[1914]: time="2025-11-01T01:35:32.152965937Z" level=info msg="Start event monitor" Nov 1 01:35:32.152991 containerd[1914]: time="2025-11-01T01:35:32.152971424Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Nov 1 01:35:32.153006 containerd[1914]: time="2025-11-01T01:35:32.152999998Z" level=info msg=serving... address=/run/containerd/containerd.sock Nov 1 01:35:32.153021 containerd[1914]: time="2025-11-01T01:35:32.152975093Z" level=info msg="Start snapshots syncer" Nov 1 01:35:32.153047 containerd[1914]: time="2025-11-01T01:35:32.153022633Z" level=info msg="Start cni network conf syncer for default" Nov 1 01:35:32.153047 containerd[1914]: time="2025-11-01T01:35:32.153027705Z" level=info msg="Start streaming server" Nov 1 01:35:32.153100 containerd[1914]: time="2025-11-01T01:35:32.153064495Z" level=info msg="containerd successfully booted in 0.028740s" Nov 1 01:35:32.158646 systemd[1]: Started containerd.service - containerd container runtime. Nov 1 01:35:32.178461 systemd[1]: Started getty@tty1.service - Getty on tty1. Nov 1 01:35:32.198443 systemd[1]: Started serial-getty@ttyS1.service - Serial Getty on ttyS1. Nov 1 01:35:32.208498 systemd[1]: Reached target getty.target - Login Prompts. Nov 1 01:35:32.229214 kernel: EXT4-fs (sdb9): resized filesystem to 116605649 Nov 1 01:35:32.253219 extend-filesystems[1887]: Filesystem at /dev/sdb9 is mounted on /; on-line resizing required Nov 1 01:35:32.253219 extend-filesystems[1887]: old_desc_blocks = 1, new_desc_blocks = 56 Nov 1 01:35:32.253219 extend-filesystems[1887]: The filesystem on /dev/sdb9 is now 116605649 (4k) blocks long. Nov 1 01:35:32.294313 extend-filesystems[1878]: Resized filesystem in /dev/sdb9 Nov 1 01:35:32.294359 tar[1912]: linux-amd64/README.md Nov 1 01:35:32.253644 systemd[1]: extend-filesystems.service: Deactivated successfully. 
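During the containerd startup above, the CRI plugin logged "failed to load cni during init ... no network config found in /etc/cni/net.d", which is expected this early: the CNI configuration typically only appears once a network plugin is installed on the node. As an illustration, a minimal Python sketch that inspects the same paths the CRI plugin is configured with (NetworkPluginConfDir=/etc/cni/net.d and NetworkPluginBinDir=/opt/cni/bin, per the config dump above):

#!/usr/bin/env python3
"""Sketch: check the CNI paths containerd's CRI plugin reported in the log."""
from pathlib import Path

CONF_DIR = Path("/etc/cni/net.d")
BIN_DIR = Path("/opt/cni/bin")

def main() -> None:
    confs = []
    if CONF_DIR.is_dir():
        confs = sorted([*CONF_DIR.glob("*.conf"), *CONF_DIR.glob("*.conflist")])
    if confs:
        for conf in confs:
            print(f"network config: {conf}")
    else:
        print(f"no network config found in {CONF_DIR} (matches the containerd warning)")
    plugins = sorted(p.name for p in BIN_DIR.iterdir()) if BIN_DIR.is_dir() else []
    print(f"CNI plugins in {BIN_DIR}: {', '.join(plugins) if plugins else 'none'}")

if __name__ == "__main__":
    main()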
Nov 1 01:35:32.253781 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Nov 1 01:35:32.312564 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Nov 1 01:35:32.323813 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Nov 1 01:35:32.333933 systemd[1]: Reached target network-online.target - Network is Online. Nov 1 01:35:32.355410 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 1 01:35:32.365972 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Nov 1 01:35:32.383434 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Nov 1 01:35:33.128159 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 1 01:35:33.139881 (kubelet)[2037]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 1 01:35:33.203373 kernel: mlx5_core 0000:01:00.0: lag map: port 1:1 port 2:2 Nov 1 01:35:33.203523 kernel: mlx5_core 0000:01:00.0: shared_fdb:0 mode:queue_affinity Nov 1 01:35:33.307272 kernel: mlx5_core 0000:01:00.0: lag map: port 1:2 port 2:2 Nov 1 01:35:33.384242 kernel: mlx5_core 0000:01:00.0: lag map: port 1:1 port 2:2 Nov 1 01:35:33.574649 kubelet[2037]: E1101 01:35:33.574570 2037 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 1 01:35:33.575784 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 1 01:35:33.575871 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 1 01:35:33.875011 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Nov 1 01:35:33.896521 systemd[1]: Started sshd@0-139.178.94.199:22-139.178.89.65:48850.service - OpenSSH per-connection server daemon (139.178.89.65:48850). Nov 1 01:35:33.944066 sshd[2057]: Accepted publickey for core from 139.178.89.65 port 48850 ssh2: RSA SHA256:7Ytc58gyNID63TmFqpvnPOSHtyQqpisrsHUaCNkqIsk Nov 1 01:35:33.945252 sshd[2057]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 01:35:33.950808 systemd-logind[1905]: New session 1 of user core. Nov 1 01:35:33.951490 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Nov 1 01:35:33.972640 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Nov 1 01:35:33.988081 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Nov 1 01:35:34.017639 systemd[1]: Starting user@500.service - User Manager for UID 500... Nov 1 01:35:34.035164 (systemd)[2063]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 1 01:35:34.116572 systemd[2063]: Queued start job for default target default.target. Nov 1 01:35:34.116738 systemd[2063]: Created slice app.slice - User Application Slice. Nov 1 01:35:34.116750 systemd[2063]: Reached target paths.target - Paths. Nov 1 01:35:34.116758 systemd[2063]: Reached target timers.target - Timers. Nov 1 01:35:34.125461 systemd[2063]: Starting dbus.socket - D-Bus User Message Bus Socket... Nov 1 01:35:34.128786 systemd[2063]: Listening on dbus.socket - D-Bus User Message Bus Socket. Nov 1 01:35:34.128814 systemd[2063]: Reached target sockets.target - Sockets. 
Nov 1 01:35:34.128823 systemd[2063]: Reached target basic.target - Basic System. Nov 1 01:35:34.128844 systemd[2063]: Reached target default.target - Main User Target. Nov 1 01:35:34.128859 systemd[2063]: Startup finished in 89ms. Nov 1 01:35:34.128989 systemd[1]: Started user@500.service - User Manager for UID 500. Nov 1 01:35:34.141553 systemd[1]: Started session-1.scope - Session 1 of User core. Nov 1 01:35:34.215425 systemd[1]: Started sshd@1-139.178.94.199:22-139.178.89.65:48860.service - OpenSSH per-connection server daemon (139.178.89.65:48860). Nov 1 01:35:34.235899 sshd[2076]: Accepted publickey for core from 139.178.89.65 port 48860 ssh2: RSA SHA256:7Ytc58gyNID63TmFqpvnPOSHtyQqpisrsHUaCNkqIsk Nov 1 01:35:34.236586 sshd[2076]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 01:35:34.239018 systemd-logind[1905]: New session 2 of user core. Nov 1 01:35:34.248396 systemd[1]: Started session-2.scope - Session 2 of User core. Nov 1 01:35:34.307938 sshd[2076]: pam_unix(sshd:session): session closed for user core Nov 1 01:35:34.325657 systemd[1]: Started sshd@2-139.178.94.199:22-139.178.89.65:48870.service - OpenSSH per-connection server daemon (139.178.89.65:48870). Nov 1 01:35:34.337488 systemd[1]: sshd@1-139.178.94.199:22-139.178.89.65:48860.service: Deactivated successfully. Nov 1 01:35:34.339695 systemd[1]: session-2.scope: Deactivated successfully. Nov 1 01:35:34.340828 systemd-logind[1905]: Session 2 logged out. Waiting for processes to exit. Nov 1 01:35:34.342595 systemd-logind[1905]: Removed session 2. Nov 1 01:35:34.359647 sshd[2082]: Accepted publickey for core from 139.178.89.65 port 48870 ssh2: RSA SHA256:7Ytc58gyNID63TmFqpvnPOSHtyQqpisrsHUaCNkqIsk Nov 1 01:35:34.360308 sshd[2082]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 01:35:34.362998 systemd-logind[1905]: New session 3 of user core. Nov 1 01:35:34.375541 systemd[1]: Started session-3.scope - Session 3 of User core. Nov 1 01:35:34.449191 sshd[2082]: pam_unix(sshd:session): session closed for user core Nov 1 01:35:34.455598 systemd[1]: sshd@2-139.178.94.199:22-139.178.89.65:48870.service: Deactivated successfully. Nov 1 01:35:34.461744 systemd-logind[1905]: Session 3 logged out. Waiting for processes to exit. Nov 1 01:35:34.462366 systemd[1]: session-3.scope: Deactivated successfully. Nov 1 01:35:34.465309 systemd-logind[1905]: Removed session 3. Nov 1 01:35:36.706023 systemd-timesyncd[1866]: Contacted time server 23.143.196.199:123 (0.flatcar.pool.ntp.org). Nov 1 01:35:36.706147 systemd-timesyncd[1866]: Initial clock synchronization to Sat 2025-11-01 01:35:36.890212 UTC. Nov 1 01:35:37.229556 login[2003]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Nov 1 01:35:37.230619 login[2005]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Nov 1 01:35:37.232199 systemd-logind[1905]: New session 4 of user core. Nov 1 01:35:37.232986 systemd[1]: Started session-4.scope - Session 4 of User core. Nov 1 01:35:37.234288 systemd-logind[1905]: New session 5 of user core. Nov 1 01:35:37.235004 systemd[1]: Started session-5.scope - Session 5 of User core. Nov 1 01:35:37.770608 coreos-metadata[1872]: Nov 01 01:35:37.770 INFO Fetch successful Nov 1 01:35:37.869732 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Nov 1 01:35:37.870905 systemd[1]: Starting packet-phone-home.service - Report Success to Packet... 
Nov 1 01:35:38.183955 coreos-metadata[1978]: Nov 01 01:35:38.183 INFO Fetch successful Nov 1 01:35:38.269506 unknown[1978]: wrote ssh authorized keys file for user: core Nov 1 01:35:38.291292 update-ssh-keys[2131]: Updated "/home/core/.ssh/authorized_keys" Nov 1 01:35:38.291658 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Nov 1 01:35:38.292815 systemd[1]: Finished sshkeys.service. Nov 1 01:35:38.630803 systemd[1]: Finished packet-phone-home.service - Report Success to Packet. Nov 1 01:35:38.632138 systemd[1]: Reached target multi-user.target - Multi-User System. Nov 1 01:35:38.632638 systemd[1]: Startup finished in 28.614s (kernel) + 12.790s (userspace) = 41.405s. Nov 1 01:35:43.669592 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Nov 1 01:35:43.687543 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 1 01:35:43.919277 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 1 01:35:43.922131 (kubelet)[2151]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 1 01:35:43.947263 kubelet[2151]: E1101 01:35:43.947212 2151 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 1 01:35:43.949762 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 1 01:35:43.949901 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 1 01:35:44.586616 systemd[1]: Started sshd@3-139.178.94.199:22-139.178.89.65:45388.service - OpenSSH per-connection server daemon (139.178.89.65:45388). Nov 1 01:35:44.608628 sshd[2170]: Accepted publickey for core from 139.178.89.65 port 45388 ssh2: RSA SHA256:7Ytc58gyNID63TmFqpvnPOSHtyQqpisrsHUaCNkqIsk Nov 1 01:35:44.609334 sshd[2170]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 01:35:44.612059 systemd-logind[1905]: New session 6 of user core. Nov 1 01:35:44.630732 systemd[1]: Started session-6.scope - Session 6 of User core. Nov 1 01:35:44.684888 sshd[2170]: pam_unix(sshd:session): session closed for user core Nov 1 01:35:44.694530 systemd[1]: Started sshd@4-139.178.94.199:22-139.178.89.65:45404.service - OpenSSH per-connection server daemon (139.178.89.65:45404). Nov 1 01:35:44.694834 systemd[1]: sshd@3-139.178.94.199:22-139.178.89.65:45388.service: Deactivated successfully. Nov 1 01:35:44.695742 systemd[1]: session-6.scope: Deactivated successfully. Nov 1 01:35:44.696189 systemd-logind[1905]: Session 6 logged out. Waiting for processes to exit. Nov 1 01:35:44.696905 systemd-logind[1905]: Removed session 6. Nov 1 01:35:44.717104 sshd[2175]: Accepted publickey for core from 139.178.89.65 port 45404 ssh2: RSA SHA256:7Ytc58gyNID63TmFqpvnPOSHtyQqpisrsHUaCNkqIsk Nov 1 01:35:44.717910 sshd[2175]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 01:35:44.721051 systemd-logind[1905]: New session 7 of user core. Nov 1 01:35:44.731522 systemd[1]: Started session-7.scope - Session 7 of User core. 
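kubelet.service is queued again above with "restart counter is at 1" and fails with the identical missing-config error, because the configuration file still has not been written. The roughly ten-second gap between the first failure and this retry matches a unit that restarts on failure after a short delay; a sketch for checking the restart policy actually in effect on the node (standard systemctl invocations; the values in the comment are only an assumption suggested by the timing):

    systemctl cat kubelet.service | grep -E '^Restart'                      # e.g. Restart=always, RestartSec=10
    systemctl show kubelet.service -p Restart -p RestartUSec -p NRestarts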
Nov 1 01:35:44.785718 sshd[2175]: pam_unix(sshd:session): session closed for user core Nov 1 01:35:44.807955 systemd[1]: Started sshd@5-139.178.94.199:22-139.178.89.65:45410.service - OpenSSH per-connection server daemon (139.178.89.65:45410). Nov 1 01:35:44.809632 systemd[1]: sshd@4-139.178.94.199:22-139.178.89.65:45404.service: Deactivated successfully. Nov 1 01:35:44.813349 systemd[1]: session-7.scope: Deactivated successfully. Nov 1 01:35:44.816790 systemd-logind[1905]: Session 7 logged out. Waiting for processes to exit. Nov 1 01:35:44.819868 systemd-logind[1905]: Removed session 7. Nov 1 01:35:44.852783 sshd[2183]: Accepted publickey for core from 139.178.89.65 port 45410 ssh2: RSA SHA256:7Ytc58gyNID63TmFqpvnPOSHtyQqpisrsHUaCNkqIsk Nov 1 01:35:44.853409 sshd[2183]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 01:35:44.855685 systemd-logind[1905]: New session 8 of user core. Nov 1 01:35:44.869509 systemd[1]: Started session-8.scope - Session 8 of User core. Nov 1 01:35:44.932765 sshd[2183]: pam_unix(sshd:session): session closed for user core Nov 1 01:35:44.955956 systemd[1]: Started sshd@6-139.178.94.199:22-139.178.89.65:45418.service - OpenSSH per-connection server daemon (139.178.89.65:45418). Nov 1 01:35:44.957588 systemd[1]: sshd@5-139.178.94.199:22-139.178.89.65:45410.service: Deactivated successfully. Nov 1 01:35:44.961301 systemd[1]: session-8.scope: Deactivated successfully. Nov 1 01:35:44.963205 systemd-logind[1905]: Session 8 logged out. Waiting for processes to exit. Nov 1 01:35:44.966384 systemd-logind[1905]: Removed session 8. Nov 1 01:35:44.998640 sshd[2191]: Accepted publickey for core from 139.178.89.65 port 45418 ssh2: RSA SHA256:7Ytc58gyNID63TmFqpvnPOSHtyQqpisrsHUaCNkqIsk Nov 1 01:35:45.000073 sshd[2191]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 01:35:45.005051 systemd-logind[1905]: New session 9 of user core. Nov 1 01:35:45.014608 systemd[1]: Started session-9.scope - Session 9 of User core. Nov 1 01:35:45.081067 sudo[2198]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Nov 1 01:35:45.081223 sudo[2198]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 1 01:35:45.103433 sudo[2198]: pam_unix(sudo:session): session closed for user root Nov 1 01:35:45.104505 sshd[2191]: pam_unix(sshd:session): session closed for user core Nov 1 01:35:45.113534 systemd[1]: Started sshd@7-139.178.94.199:22-139.178.89.65:45420.service - OpenSSH per-connection server daemon (139.178.89.65:45420). Nov 1 01:35:45.113795 systemd[1]: sshd@6-139.178.94.199:22-139.178.89.65:45418.service: Deactivated successfully. Nov 1 01:35:45.115024 systemd-logind[1905]: Session 9 logged out. Waiting for processes to exit. Nov 1 01:35:45.115294 systemd[1]: session-9.scope: Deactivated successfully. Nov 1 01:35:45.116067 systemd-logind[1905]: Removed session 9. Nov 1 01:35:45.136734 sshd[2200]: Accepted publickey for core from 139.178.89.65 port 45420 ssh2: RSA SHA256:7Ytc58gyNID63TmFqpvnPOSHtyQqpisrsHUaCNkqIsk Nov 1 01:35:45.137643 sshd[2200]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 01:35:45.141212 systemd-logind[1905]: New session 10 of user core. Nov 1 01:35:45.149596 systemd[1]: Started session-10.scope - Session 10 of User core. 
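The new session above immediately runs its first privileged command, setenforce 1, switching SELinux to enforcing mode before the remaining provisioning steps. The sketch below only restates what the sudo records already say, plus a standard check of the resulting mode:

    sudo /usr/sbin/setenforce 1    # the command logged for user core above
    getenforce                     # should now report "Enforcing"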
Nov 1 01:35:45.214265 sudo[2208]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Nov 1 01:35:45.215106 sudo[2208]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 1 01:35:45.224070 sudo[2208]: pam_unix(sudo:session): session closed for user root Nov 1 01:35:45.238297 sudo[2207]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Nov 1 01:35:45.239120 sudo[2207]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 1 01:35:45.271538 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Nov 1 01:35:45.272610 auditctl[2211]: No rules Nov 1 01:35:45.272773 systemd[1]: audit-rules.service: Deactivated successfully. Nov 1 01:35:45.272886 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Nov 1 01:35:45.274131 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Nov 1 01:35:45.289095 augenrules[2230]: No rules Nov 1 01:35:45.289475 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Nov 1 01:35:45.290039 sudo[2207]: pam_unix(sudo:session): session closed for user root Nov 1 01:35:45.290979 sshd[2200]: pam_unix(sshd:session): session closed for user core Nov 1 01:35:45.292654 systemd[1]: Started sshd@8-139.178.94.199:22-139.178.89.65:45422.service - OpenSSH per-connection server daemon (139.178.89.65:45422). Nov 1 01:35:45.292925 systemd[1]: sshd@7-139.178.94.199:22-139.178.89.65:45420.service: Deactivated successfully. Nov 1 01:35:45.294221 systemd-logind[1905]: Session 10 logged out. Waiting for processes to exit. Nov 1 01:35:45.294299 systemd[1]: session-10.scope: Deactivated successfully. Nov 1 01:35:45.294964 systemd-logind[1905]: Removed session 10. Nov 1 01:35:45.318096 sshd[2237]: Accepted publickey for core from 139.178.89.65 port 45422 ssh2: RSA SHA256:7Ytc58gyNID63TmFqpvnPOSHtyQqpisrsHUaCNkqIsk Nov 1 01:35:45.318916 sshd[2237]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 01:35:45.322146 systemd-logind[1905]: New session 11 of user core. Nov 1 01:35:45.334593 systemd[1]: Started session-11.scope - Session 11 of User core. Nov 1 01:35:45.397609 sudo[2243]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 1 01:35:45.398467 sudo[2243]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 1 01:35:45.754557 systemd[1]: Starting docker.service - Docker Application Container Engine... Nov 1 01:35:45.754674 (dockerd)[2269]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Nov 1 01:35:46.074208 dockerd[2269]: time="2025-11-01T01:35:46.074101123Z" level=info msg="Starting up" Nov 1 01:35:46.304262 dockerd[2269]: time="2025-11-01T01:35:46.304211450Z" level=info msg="Loading containers: start." Nov 1 01:35:46.382222 kernel: Initializing XFRM netlink socket Nov 1 01:35:46.456202 systemd-networkd[1561]: docker0: Link UP Nov 1 01:35:46.482283 dockerd[2269]: time="2025-11-01T01:35:46.482253831Z" level=info msg="Loading containers: done." 
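Before docker.service is brought up, the same provisioning session removes the default audit rule files and restarts audit-rules, after which both auditctl and augenrules report that no rules are loaded; /home/core/install.sh then runs under sudo and the Docker daemon starts (the overlay2 storage-driver messages follow below). A sketch reconstructing the audit sequence from the logged sudo commands (file names exactly as logged; the contents of install.sh are not visible here):

    sudo rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
    sudo systemctl restart audit-rules    # reloads the now-empty rule set
    sudo auditctl -l                      # "No rules", matching the log above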
Nov 1 01:35:46.493780 dockerd[2269]: time="2025-11-01T01:35:46.493732445Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 1 01:35:46.493854 dockerd[2269]: time="2025-11-01T01:35:46.493781696Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Nov 1 01:35:46.493854 dockerd[2269]: time="2025-11-01T01:35:46.493839759Z" level=info msg="Daemon has completed initialization" Nov 1 01:35:46.494399 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2564725887-merged.mount: Deactivated successfully. Nov 1 01:35:46.507369 dockerd[2269]: time="2025-11-01T01:35:46.507333880Z" level=info msg="API listen on /run/docker.sock" Nov 1 01:35:46.507453 systemd[1]: Started docker.service - Docker Application Container Engine. Nov 1 01:35:47.428472 containerd[1914]: time="2025-11-01T01:35:47.428441110Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\"" Nov 1 01:35:48.274923 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2667484726.mount: Deactivated successfully. Nov 1 01:35:49.059849 containerd[1914]: time="2025-11-01T01:35:49.059801714Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 01:35:49.060137 containerd[1914]: time="2025-11-01T01:35:49.060105874Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.9: active requests=0, bytes read=28837916" Nov 1 01:35:49.061024 containerd[1914]: time="2025-11-01T01:35:49.061013010Z" level=info msg="ImageCreate event name:\"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 01:35:49.062705 containerd[1914]: time="2025-11-01T01:35:49.062669454Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 01:35:49.063392 containerd[1914]: time="2025-11-01T01:35:49.063352435Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.9\" with image id \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\", size \"28834515\" in 1.634889571s" Nov 1 01:35:49.063392 containerd[1914]: time="2025-11-01T01:35:49.063368270Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\" returns image reference \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\"" Nov 1 01:35:49.063700 containerd[1914]: time="2025-11-01T01:35:49.063662480Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\"" Nov 1 01:35:50.126709 containerd[1914]: time="2025-11-01T01:35:50.126686227Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 01:35:50.126942 containerd[1914]: time="2025-11-01T01:35:50.126905528Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.9: active requests=0, bytes read=24787027" Nov 1 01:35:50.127409 containerd[1914]: time="2025-11-01T01:35:50.127397605Z" 
level=info msg="ImageCreate event name:\"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 01:35:50.129013 containerd[1914]: time="2025-11-01T01:35:50.128971307Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 01:35:50.130131 containerd[1914]: time="2025-11-01T01:35:50.130084845Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.9\" with image id \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\", size \"26421706\" in 1.066405522s" Nov 1 01:35:50.130131 containerd[1914]: time="2025-11-01T01:35:50.130101255Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\" returns image reference \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\"" Nov 1 01:35:50.130372 containerd[1914]: time="2025-11-01T01:35:50.130333190Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\"" Nov 1 01:35:50.985970 containerd[1914]: time="2025-11-01T01:35:50.985945747Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 01:35:50.986177 containerd[1914]: time="2025-11-01T01:35:50.986155608Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.9: active requests=0, bytes read=19176289" Nov 1 01:35:50.986574 containerd[1914]: time="2025-11-01T01:35:50.986533732Z" level=info msg="ImageCreate event name:\"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 01:35:50.989085 containerd[1914]: time="2025-11-01T01:35:50.989070796Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 01:35:50.989669 containerd[1914]: time="2025-11-01T01:35:50.989654115Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.9\" with image id \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\", size \"20810986\" in 859.306608ms" Nov 1 01:35:50.989717 containerd[1914]: time="2025-11-01T01:35:50.989671704Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\" returns image reference \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\"" Nov 1 01:35:50.989959 containerd[1914]: time="2025-11-01T01:35:50.989947338Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\"" Nov 1 01:35:51.814704 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount376878662.mount: Deactivated successfully. 
Nov 1 01:35:52.003337 containerd[1914]: time="2025-11-01T01:35:52.003303490Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 01:35:52.003685 containerd[1914]: time="2025-11-01T01:35:52.003601610Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.9: active requests=0, bytes read=30924206" Nov 1 01:35:52.004125 containerd[1914]: time="2025-11-01T01:35:52.004111075Z" level=info msg="ImageCreate event name:\"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 01:35:52.005217 containerd[1914]: time="2025-11-01T01:35:52.005201698Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 01:35:52.005667 containerd[1914]: time="2025-11-01T01:35:52.005654173Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.9\" with image id \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\", repo tag \"registry.k8s.io/kube-proxy:v1.32.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\", size \"30923225\" in 1.015691834s" Nov 1 01:35:52.005704 containerd[1914]: time="2025-11-01T01:35:52.005669335Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\" returns image reference \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\"" Nov 1 01:35:52.005920 containerd[1914]: time="2025-11-01T01:35:52.005907576Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Nov 1 01:35:52.536362 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2995606082.mount: Deactivated successfully. 
Nov 1 01:35:53.058595 containerd[1914]: time="2025-11-01T01:35:53.058544278Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 01:35:53.058818 containerd[1914]: time="2025-11-01T01:35:53.058720822Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Nov 1 01:35:53.059189 containerd[1914]: time="2025-11-01T01:35:53.059153682Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 01:35:53.060865 containerd[1914]: time="2025-11-01T01:35:53.060822538Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 01:35:53.061583 containerd[1914]: time="2025-11-01T01:35:53.061541201Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.055615923s" Nov 1 01:35:53.061583 containerd[1914]: time="2025-11-01T01:35:53.061556825Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Nov 1 01:35:53.061862 containerd[1914]: time="2025-11-01T01:35:53.061815901Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Nov 1 01:35:53.620482 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2651287264.mount: Deactivated successfully. 
Nov 1 01:35:53.621507 containerd[1914]: time="2025-11-01T01:35:53.621461574Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 01:35:53.621691 containerd[1914]: time="2025-11-01T01:35:53.621658812Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Nov 1 01:35:53.621960 containerd[1914]: time="2025-11-01T01:35:53.621924678Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 01:35:53.623406 containerd[1914]: time="2025-11-01T01:35:53.623322640Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 01:35:53.623920 containerd[1914]: time="2025-11-01T01:35:53.623887654Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 562.05639ms" Nov 1 01:35:53.623920 containerd[1914]: time="2025-11-01T01:35:53.623917893Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Nov 1 01:35:53.624178 containerd[1914]: time="2025-11-01T01:35:53.624166832Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Nov 1 01:35:54.167420 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Nov 1 01:35:54.176444 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 1 01:35:54.201638 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1004264954.mount: Deactivated successfully. Nov 1 01:35:54.420592 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 1 01:35:54.423823 (kubelet)[2582]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 1 01:35:54.450659 kubelet[2582]: E1101 01:35:54.450622 2582 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 1 01:35:54.452608 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 1 01:35:54.452816 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
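The restart counter for kubelet.service reaches 2 above and the unit fails once more with the same missing-config error, so the start/fail/restart cycle keeps repeating until the provisioning flow writes /var/lib/kubelet/config.yaml, which evidently happens before the successful 01:35:58 start further below. An illustrative pair of commands for watching that loop live instead of re-reading the journal afterwards:

    journalctl -u kubelet -f                         # follow the repeated failures as they happen
    watch -n 5 ls -l /var/lib/kubelet/config.yaml    # stops reporting an error once the file exists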
Nov 1 01:35:55.503123 containerd[1914]: time="2025-11-01T01:35:55.503094313Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 01:35:55.503356 containerd[1914]: time="2025-11-01T01:35:55.503275109Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57682056" Nov 1 01:35:55.503750 containerd[1914]: time="2025-11-01T01:35:55.503738552Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 01:35:55.505538 containerd[1914]: time="2025-11-01T01:35:55.505494462Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 01:35:55.506234 containerd[1914]: time="2025-11-01T01:35:55.506187154Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 1.882004362s" Nov 1 01:35:55.506234 containerd[1914]: time="2025-11-01T01:35:55.506205057Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Nov 1 01:35:57.963129 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 1 01:35:57.976569 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 1 01:35:57.993808 systemd[1]: Reloading requested from client PID 2701 ('systemctl') (unit session-11.scope)... Nov 1 01:35:57.993815 systemd[1]: Reloading... Nov 1 01:35:58.029269 zram_generator::config[2740]: No configuration found. Nov 1 01:35:58.099019 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 1 01:35:58.158987 systemd[1]: Reloading finished in 164 ms. Nov 1 01:35:58.197318 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Nov 1 01:35:58.197361 systemd[1]: kubelet.service: Failed with result 'signal'. Nov 1 01:35:58.197495 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 1 01:35:58.209631 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 1 01:35:58.457518 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 1 01:35:58.460502 (kubelet)[2814]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 1 01:35:58.483884 kubelet[2814]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 1 01:35:58.483884 kubelet[2814]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
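From the 01:35:58 start onward the kubelet is launched with a real command line: KUBELET_EXTRA_ARGS is still empty, but the deprecation warnings above and below show that --container-runtime-endpoint, --pod-infra-container-image and --volume-plugin-dir are now passed as flags rather than through the config file. On kubeadm-style installs such flags typically reach ExecStart via a systemd drop-in roughly like the sketch below; the file name and all values are illustrative assumptions, not taken from this node:

    # /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (illustrative only)
    #   [Service]
    #   EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
    #   ExecStart=
    #   ExecStart=/usr/bin/kubelet $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS
    # with kubeadm-flags.env carrying, for example:
    #   KUBELET_KUBEADM_ARGS="--container-runtime-endpoint=unix:///run/containerd/containerd.sock --pod-infra-container-image=registry.k8s.io/pause:3.10"
    systemctl cat kubelet.service    # prints the unit and any drop-ins actually present on this node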
Nov 1 01:35:58.483884 kubelet[2814]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 1 01:35:58.484116 kubelet[2814]: I1101 01:35:58.483915 2814 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 1 01:35:58.630677 kubelet[2814]: I1101 01:35:58.630663 2814 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Nov 1 01:35:58.630677 kubelet[2814]: I1101 01:35:58.630675 2814 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 1 01:35:58.630831 kubelet[2814]: I1101 01:35:58.630797 2814 server.go:954] "Client rotation is on, will bootstrap in background" Nov 1 01:35:58.650184 kubelet[2814]: E1101 01:35:58.650148 2814 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://139.178.94.199:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 139.178.94.199:6443: connect: connection refused" logger="UnhandledError" Nov 1 01:35:58.650871 kubelet[2814]: I1101 01:35:58.650832 2814 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 1 01:35:58.656067 kubelet[2814]: E1101 01:35:58.656017 2814 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Nov 1 01:35:58.656067 kubelet[2814]: I1101 01:35:58.656031 2814 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Nov 1 01:35:58.664569 kubelet[2814]: I1101 01:35:58.664531 2814 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 1 01:35:58.666684 kubelet[2814]: I1101 01:35:58.666638 2814 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 1 01:35:58.666780 kubelet[2814]: I1101 01:35:58.666657 2814 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.6-n-4452d0b810","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Nov 1 01:35:58.666780 kubelet[2814]: I1101 01:35:58.666751 2814 topology_manager.go:138] "Creating topology manager with none policy" Nov 1 01:35:58.666780 kubelet[2814]: I1101 01:35:58.666757 2814 container_manager_linux.go:304] "Creating device plugin manager" Nov 1 01:35:58.666873 kubelet[2814]: I1101 01:35:58.666820 2814 state_mem.go:36] "Initialized new in-memory state store" Nov 1 01:35:58.669894 kubelet[2814]: I1101 01:35:58.669851 2814 kubelet.go:446] "Attempting to sync node with API server" Nov 1 01:35:58.669894 kubelet[2814]: I1101 01:35:58.669870 2814 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 1 01:35:58.669894 kubelet[2814]: I1101 01:35:58.669880 2814 kubelet.go:352] "Adding apiserver pod source" Nov 1 01:35:58.669894 kubelet[2814]: I1101 01:35:58.669886 2814 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 1 01:35:58.672086 kubelet[2814]: I1101 01:35:58.672075 2814 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Nov 1 01:35:58.672375 kubelet[2814]: W1101 01:35:58.672338 2814 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://139.178.94.199:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.6-n-4452d0b810&limit=500&resourceVersion=0": dial tcp 139.178.94.199:6443: connect: connection refused Nov 1 01:35:58.672431 kubelet[2814]: I1101 01:35:58.672378 2814 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 1 01:35:58.672431 kubelet[2814]: E1101 01:35:58.672404 2814 reflector.go:166] "Unhandled 
Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://139.178.94.199:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.6-n-4452d0b810&limit=500&resourceVersion=0\": dial tcp 139.178.94.199:6443: connect: connection refused" logger="UnhandledError" Nov 1 01:35:58.673015 kubelet[2814]: W1101 01:35:58.672973 2814 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://139.178.94.199:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 139.178.94.199:6443: connect: connection refused Nov 1 01:35:58.673059 kubelet[2814]: E1101 01:35:58.673014 2814 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://139.178.94.199:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 139.178.94.199:6443: connect: connection refused" logger="UnhandledError" Nov 1 01:35:58.673059 kubelet[2814]: W1101 01:35:58.673026 2814 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Nov 1 01:35:58.674672 kubelet[2814]: I1101 01:35:58.674635 2814 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 1 01:35:58.674672 kubelet[2814]: I1101 01:35:58.674652 2814 server.go:1287] "Started kubelet" Nov 1 01:35:58.674718 kubelet[2814]: I1101 01:35:58.674703 2814 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Nov 1 01:35:58.675593 kubelet[2814]: I1101 01:35:58.675584 2814 server.go:479] "Adding debug handlers to kubelet server" Nov 1 01:35:58.678960 kubelet[2814]: I1101 01:35:58.678864 2814 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 1 01:35:58.679171 kubelet[2814]: I1101 01:35:58.679162 2814 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 1 01:35:58.679522 kubelet[2814]: E1101 01:35:58.679514 2814 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 1 01:35:58.679776 kubelet[2814]: I1101 01:35:58.679768 2814 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 1 01:35:58.679806 kubelet[2814]: I1101 01:35:58.679780 2814 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 1 01:35:58.679859 kubelet[2814]: I1101 01:35:58.679850 2814 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 1 01:35:58.679936 kubelet[2814]: E1101 01:35:58.679875 2814 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-4452d0b810\" not found" Nov 1 01:35:58.679936 kubelet[2814]: I1101 01:35:58.679883 2814 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 1 01:35:58.679936 kubelet[2814]: I1101 01:35:58.679914 2814 reconciler.go:26] "Reconciler: start to sync state" Nov 1 01:35:58.680099 kubelet[2814]: E1101 01:35:58.680078 2814 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.94.199:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-4452d0b810?timeout=10s\": dial tcp 139.178.94.199:6443: connect: connection refused" interval="200ms" Nov 1 01:35:58.680144 kubelet[2814]: W1101 01:35:58.680121 2814 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://139.178.94.199:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.94.199:6443: connect: connection refused Nov 1 01:35:58.680174 kubelet[2814]: E1101 01:35:58.680153 2814 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://139.178.94.199:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 139.178.94.199:6443: connect: connection refused" logger="UnhandledError" Nov 1 01:35:58.680766 kubelet[2814]: I1101 01:35:58.680758 2814 factory.go:221] Registration of the containerd container factory successfully Nov 1 01:35:58.680766 kubelet[2814]: I1101 01:35:58.680766 2814 factory.go:221] Registration of the systemd container factory successfully Nov 1 01:35:58.680833 kubelet[2814]: I1101 01:35:58.680822 2814 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 1 01:35:58.681472 kubelet[2814]: E1101 01:35:58.680431 2814 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://139.178.94.199:6443/api/v1/namespaces/default/events\": dial tcp 139.178.94.199:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.6-n-4452d0b810.1873be2819e8a483 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.6-n-4452d0b810,UID:ci-4081.3.6-n-4452d0b810,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.6-n-4452d0b810,},FirstTimestamp:2025-11-01 01:35:58.674642051 +0000 UTC m=+0.211747560,LastTimestamp:2025-11-01 01:35:58.674642051 +0000 UTC m=+0.211747560,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.6-n-4452d0b810,}" Nov 1 
01:35:58.687865 kubelet[2814]: I1101 01:35:58.687844 2814 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 1 01:35:58.688380 kubelet[2814]: I1101 01:35:58.688372 2814 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Nov 1 01:35:58.688408 kubelet[2814]: I1101 01:35:58.688384 2814 status_manager.go:227] "Starting to sync pod status with apiserver" Nov 1 01:35:58.688408 kubelet[2814]: I1101 01:35:58.688397 2814 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Nov 1 01:35:58.688408 kubelet[2814]: I1101 01:35:58.688402 2814 kubelet.go:2382] "Starting kubelet main sync loop" Nov 1 01:35:58.688459 kubelet[2814]: E1101 01:35:58.688428 2814 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 1 01:35:58.688676 kubelet[2814]: W1101 01:35:58.688633 2814 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://139.178.94.199:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.94.199:6443: connect: connection refused Nov 1 01:35:58.688676 kubelet[2814]: E1101 01:35:58.688653 2814 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://139.178.94.199:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 139.178.94.199:6443: connect: connection refused" logger="UnhandledError" Nov 1 01:35:58.704546 kubelet[2814]: I1101 01:35:58.704511 2814 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 1 01:35:58.704546 kubelet[2814]: I1101 01:35:58.704519 2814 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 1 01:35:58.704546 kubelet[2814]: I1101 01:35:58.704529 2814 state_mem.go:36] "Initialized new in-memory state store" Nov 1 01:35:58.705478 kubelet[2814]: I1101 01:35:58.705434 2814 policy_none.go:49] "None policy: Start" Nov 1 01:35:58.705478 kubelet[2814]: I1101 01:35:58.705444 2814 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 1 01:35:58.705478 kubelet[2814]: I1101 01:35:58.705451 2814 state_mem.go:35] "Initializing new in-memory state store" Nov 1 01:35:58.707926 kubelet[2814]: I1101 01:35:58.707882 2814 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 1 01:35:58.708014 kubelet[2814]: I1101 01:35:58.708007 2814 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 1 01:35:58.708053 kubelet[2814]: I1101 01:35:58.708017 2814 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 1 01:35:58.708285 kubelet[2814]: I1101 01:35:58.708162 2814 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 1 01:35:58.708805 kubelet[2814]: E1101 01:35:58.708792 2814 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Nov 1 01:35:58.708835 kubelet[2814]: E1101 01:35:58.708823 2814 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.6-n-4452d0b810\" not found" Nov 1 01:35:58.802382 kubelet[2814]: E1101 01:35:58.802267 2814 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-4452d0b810\" not found" node="ci-4081.3.6-n-4452d0b810" Nov 1 01:35:58.803196 kubelet[2814]: E1101 01:35:58.803186 2814 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-4452d0b810\" not found" node="ci-4081.3.6-n-4452d0b810" Nov 1 01:35:58.803950 kubelet[2814]: E1101 01:35:58.803942 2814 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-4452d0b810\" not found" node="ci-4081.3.6-n-4452d0b810" Nov 1 01:35:58.810142 kubelet[2814]: I1101 01:35:58.810133 2814 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-4452d0b810" Nov 1 01:35:58.810346 kubelet[2814]: E1101 01:35:58.810310 2814 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://139.178.94.199:6443/api/v1/nodes\": dial tcp 139.178.94.199:6443: connect: connection refused" node="ci-4081.3.6-n-4452d0b810" Nov 1 01:35:58.881915 kubelet[2814]: E1101 01:35:58.881814 2814 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.94.199:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-4452d0b810?timeout=10s\": dial tcp 139.178.94.199:6443: connect: connection refused" interval="400ms" Nov 1 01:35:58.982403 kubelet[2814]: I1101 01:35:58.982166 2814 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/40342dd0f88337c9858a56d229d70ba6-ca-certs\") pod \"kube-apiserver-ci-4081.3.6-n-4452d0b810\" (UID: \"40342dd0f88337c9858a56d229d70ba6\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-4452d0b810" Nov 1 01:35:58.982403 kubelet[2814]: I1101 01:35:58.982312 2814 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/40342dd0f88337c9858a56d229d70ba6-k8s-certs\") pod \"kube-apiserver-ci-4081.3.6-n-4452d0b810\" (UID: \"40342dd0f88337c9858a56d229d70ba6\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-4452d0b810" Nov 1 01:35:58.982909 kubelet[2814]: I1101 01:35:58.982408 2814 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/bc19ad76efff11d461f065a5e4255b75-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-4452d0b810\" (UID: \"bc19ad76efff11d461f065a5e4255b75\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-4452d0b810" Nov 1 01:35:58.982909 kubelet[2814]: I1101 01:35:58.982514 2814 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/bc19ad76efff11d461f065a5e4255b75-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.6-n-4452d0b810\" (UID: \"bc19ad76efff11d461f065a5e4255b75\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-4452d0b810" Nov 1 01:35:58.982909 kubelet[2814]: I1101 01:35:58.982603 2814 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f5fef8753ee008a4f7279245d1886194-kubeconfig\") pod \"kube-scheduler-ci-4081.3.6-n-4452d0b810\" (UID: \"f5fef8753ee008a4f7279245d1886194\") " pod="kube-system/kube-scheduler-ci-4081.3.6-n-4452d0b810" Nov 1 01:35:58.982909 kubelet[2814]: I1101 01:35:58.982690 2814 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/40342dd0f88337c9858a56d229d70ba6-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.6-n-4452d0b810\" (UID: \"40342dd0f88337c9858a56d229d70ba6\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-4452d0b810" Nov 1 01:35:58.982909 kubelet[2814]: I1101 01:35:58.982786 2814 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/bc19ad76efff11d461f065a5e4255b75-ca-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-4452d0b810\" (UID: \"bc19ad76efff11d461f065a5e4255b75\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-4452d0b810" Nov 1 01:35:58.983666 kubelet[2814]: I1101 01:35:58.982884 2814 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/bc19ad76efff11d461f065a5e4255b75-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.6-n-4452d0b810\" (UID: \"bc19ad76efff11d461f065a5e4255b75\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-4452d0b810" Nov 1 01:35:58.983666 kubelet[2814]: I1101 01:35:58.982981 2814 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/bc19ad76efff11d461f065a5e4255b75-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.6-n-4452d0b810\" (UID: \"bc19ad76efff11d461f065a5e4255b75\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-4452d0b810" Nov 1 01:35:59.015778 kubelet[2814]: I1101 01:35:59.015679 2814 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-4452d0b810" Nov 1 01:35:59.016557 kubelet[2814]: E1101 01:35:59.016452 2814 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://139.178.94.199:6443/api/v1/nodes\": dial tcp 139.178.94.199:6443: connect: connection refused" node="ci-4081.3.6-n-4452d0b810" Nov 1 01:35:59.105467 containerd[1914]: time="2025-11-01T01:35:59.105341458Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.6-n-4452d0b810,Uid:bc19ad76efff11d461f065a5e4255b75,Namespace:kube-system,Attempt:0,}" Nov 1 01:35:59.105467 containerd[1914]: time="2025-11-01T01:35:59.105408910Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.6-n-4452d0b810,Uid:f5fef8753ee008a4f7279245d1886194,Namespace:kube-system,Attempt:0,}" Nov 1 01:35:59.106355 containerd[1914]: time="2025-11-01T01:35:59.105381830Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.6-n-4452d0b810,Uid:40342dd0f88337c9858a56d229d70ba6,Namespace:kube-system,Attempt:0,}" Nov 1 01:35:59.284009 kubelet[2814]: E1101 01:35:59.283756 2814 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.94.199:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-4452d0b810?timeout=10s\": dial tcp 
139.178.94.199:6443: connect: connection refused" interval="800ms" Nov 1 01:35:59.417957 kubelet[2814]: I1101 01:35:59.417910 2814 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-4452d0b810" Nov 1 01:35:59.418164 kubelet[2814]: E1101 01:35:59.418127 2814 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://139.178.94.199:6443/api/v1/nodes\": dial tcp 139.178.94.199:6443: connect: connection refused" node="ci-4081.3.6-n-4452d0b810" Nov 1 01:35:59.558617 kubelet[2814]: W1101 01:35:59.558506 2814 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://139.178.94.199:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 139.178.94.199:6443: connect: connection refused Nov 1 01:35:59.558617 kubelet[2814]: E1101 01:35:59.558547 2814 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://139.178.94.199:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 139.178.94.199:6443: connect: connection refused" logger="UnhandledError" Nov 1 01:35:59.584809 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount401330802.mount: Deactivated successfully. Nov 1 01:35:59.586225 containerd[1914]: time="2025-11-01T01:35:59.586181568Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 1 01:35:59.586483 containerd[1914]: time="2025-11-01T01:35:59.586469506Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Nov 1 01:35:59.587125 containerd[1914]: time="2025-11-01T01:35:59.587082572Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 1 01:35:59.587281 containerd[1914]: time="2025-11-01T01:35:59.587209248Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Nov 1 01:35:59.587665 containerd[1914]: time="2025-11-01T01:35:59.587628021Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 1 01:35:59.588253 containerd[1914]: time="2025-11-01T01:35:59.588215025Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Nov 1 01:35:59.588616 containerd[1914]: time="2025-11-01T01:35:59.588578489Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 1 01:35:59.590357 containerd[1914]: time="2025-11-01T01:35:59.590314560Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 484.756203ms" Nov 1 01:35:59.590948 containerd[1914]: time="2025-11-01T01:35:59.590906815Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 1 01:35:59.591410 containerd[1914]: time="2025-11-01T01:35:59.591370104Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 485.874346ms" Nov 1 01:35:59.592894 containerd[1914]: time="2025-11-01T01:35:59.592839509Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 487.202029ms" Nov 1 01:35:59.685532 containerd[1914]: time="2025-11-01T01:35:59.685479412Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 01:35:59.685532 containerd[1914]: time="2025-11-01T01:35:59.685513292Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 01:35:59.685532 containerd[1914]: time="2025-11-01T01:35:59.685523304Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 01:35:59.685678 containerd[1914]: time="2025-11-01T01:35:59.685573760Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 01:35:59.685678 containerd[1914]: time="2025-11-01T01:35:59.685614209Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 01:35:59.685678 containerd[1914]: time="2025-11-01T01:35:59.685642908Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 01:35:59.685678 containerd[1914]: time="2025-11-01T01:35:59.685656387Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 01:35:59.685793 containerd[1914]: time="2025-11-01T01:35:59.685745007Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 01:35:59.685814 containerd[1914]: time="2025-11-01T01:35:59.685786728Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 01:35:59.685829 containerd[1914]: time="2025-11-01T01:35:59.685817786Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 01:35:59.685849 containerd[1914]: time="2025-11-01T01:35:59.685825887Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 01:35:59.685878 containerd[1914]: time="2025-11-01T01:35:59.685867190Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 01:35:59.733654 containerd[1914]: time="2025-11-01T01:35:59.733631933Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.6-n-4452d0b810,Uid:f5fef8753ee008a4f7279245d1886194,Namespace:kube-system,Attempt:0,} returns sandbox id \"943d84c81282c0dfc7b0c3375a69e6e17bfd5793bb58bd38f72f1dd27bf98110\"" Nov 1 01:35:59.734994 containerd[1914]: time="2025-11-01T01:35:59.734981519Z" level=info msg="CreateContainer within sandbox \"943d84c81282c0dfc7b0c3375a69e6e17bfd5793bb58bd38f72f1dd27bf98110\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 1 01:35:59.739400 containerd[1914]: time="2025-11-01T01:35:59.739378905Z" level=info msg="CreateContainer within sandbox \"943d84c81282c0dfc7b0c3375a69e6e17bfd5793bb58bd38f72f1dd27bf98110\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"91a29d816d3df21a873c3225a6b1045ca82246f69ccd6847f3667cd6d1982a58\"" Nov 1 01:35:59.739684 containerd[1914]: time="2025-11-01T01:35:59.739671929Z" level=info msg="StartContainer for \"91a29d816d3df21a873c3225a6b1045ca82246f69ccd6847f3667cd6d1982a58\"" Nov 1 01:35:59.741901 containerd[1914]: time="2025-11-01T01:35:59.741879588Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.6-n-4452d0b810,Uid:40342dd0f88337c9858a56d229d70ba6,Namespace:kube-system,Attempt:0,} returns sandbox id \"a30862331cca5b667a8622950179c699f9ea9a642243c538b3c44dd16e89b21c\"" Nov 1 01:35:59.741994 containerd[1914]: time="2025-11-01T01:35:59.741884390Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.6-n-4452d0b810,Uid:bc19ad76efff11d461f065a5e4255b75,Namespace:kube-system,Attempt:0,} returns sandbox id \"af19e7f89b526286b5c3b8b16392135a37bb95c35a33c6e242dd9833bbbc2e6b\"" Nov 1 01:35:59.742844 containerd[1914]: time="2025-11-01T01:35:59.742832710Z" level=info msg="CreateContainer within sandbox \"af19e7f89b526286b5c3b8b16392135a37bb95c35a33c6e242dd9833bbbc2e6b\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 1 01:35:59.742889 containerd[1914]: time="2025-11-01T01:35:59.742858076Z" level=info msg="CreateContainer within sandbox \"a30862331cca5b667a8622950179c699f9ea9a642243c538b3c44dd16e89b21c\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 1 01:35:59.747437 containerd[1914]: time="2025-11-01T01:35:59.747372682Z" level=info msg="CreateContainer within sandbox \"af19e7f89b526286b5c3b8b16392135a37bb95c35a33c6e242dd9833bbbc2e6b\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"327126c000a1876b9c547e691167cbc3ca7d1dffbbf9cd4f87a793d3aaa27ea5\"" Nov 1 01:35:59.747682 containerd[1914]: time="2025-11-01T01:35:59.747669880Z" level=info msg="StartContainer for \"327126c000a1876b9c547e691167cbc3ca7d1dffbbf9cd4f87a793d3aaa27ea5\"" Nov 1 01:35:59.747893 containerd[1914]: time="2025-11-01T01:35:59.747879017Z" level=info msg="CreateContainer within sandbox \"a30862331cca5b667a8622950179c699f9ea9a642243c538b3c44dd16e89b21c\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"d3356737bacbf58101256504f23a0f47e0e51d0027e0718151a296474e1f12ff\"" Nov 1 01:35:59.748046 containerd[1914]: time="2025-11-01T01:35:59.748034138Z" level=info msg="StartContainer for \"d3356737bacbf58101256504f23a0f47e0e51d0027e0718151a296474e1f12ff\"" Nov 1 01:35:59.788488 containerd[1914]: time="2025-11-01T01:35:59.788462282Z" level=info msg="StartContainer 
for \"91a29d816d3df21a873c3225a6b1045ca82246f69ccd6847f3667cd6d1982a58\" returns successfully" Nov 1 01:35:59.788574 containerd[1914]: time="2025-11-01T01:35:59.788462283Z" level=info msg="StartContainer for \"327126c000a1876b9c547e691167cbc3ca7d1dffbbf9cd4f87a793d3aaa27ea5\" returns successfully" Nov 1 01:35:59.788574 containerd[1914]: time="2025-11-01T01:35:59.788468132Z" level=info msg="StartContainer for \"d3356737bacbf58101256504f23a0f47e0e51d0027e0718151a296474e1f12ff\" returns successfully" Nov 1 01:36:00.221547 kubelet[2814]: I1101 01:36:00.220087 2814 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-4452d0b810" Nov 1 01:36:00.369478 kubelet[2814]: E1101 01:36:00.369456 2814 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081.3.6-n-4452d0b810\" not found" node="ci-4081.3.6-n-4452d0b810" Nov 1 01:36:00.482279 kubelet[2814]: I1101 01:36:00.482102 2814 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081.3.6-n-4452d0b810" Nov 1 01:36:00.482279 kubelet[2814]: E1101 01:36:00.482166 2814 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4081.3.6-n-4452d0b810\": node \"ci-4081.3.6-n-4452d0b810\" not found" Nov 1 01:36:00.580916 kubelet[2814]: I1101 01:36:00.580829 2814 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-n-4452d0b810" Nov 1 01:36:00.591748 kubelet[2814]: E1101 01:36:00.591686 2814 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.6-n-4452d0b810\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081.3.6-n-4452d0b810" Nov 1 01:36:00.591748 kubelet[2814]: I1101 01:36:00.591738 2814 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-4452d0b810" Nov 1 01:36:00.597359 kubelet[2814]: E1101 01:36:00.597295 2814 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081.3.6-n-4452d0b810\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-4452d0b810" Nov 1 01:36:00.597359 kubelet[2814]: I1101 01:36:00.597353 2814 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.6-n-4452d0b810" Nov 1 01:36:00.601072 kubelet[2814]: E1101 01:36:00.600971 2814 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.6-n-4452d0b810\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081.3.6-n-4452d0b810" Nov 1 01:36:00.672043 kubelet[2814]: I1101 01:36:00.671919 2814 apiserver.go:52] "Watching apiserver" Nov 1 01:36:00.680197 kubelet[2814]: I1101 01:36:00.680111 2814 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 1 01:36:00.696462 kubelet[2814]: I1101 01:36:00.696368 2814 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-4452d0b810" Nov 1 01:36:00.699381 kubelet[2814]: I1101 01:36:00.699330 2814 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.6-n-4452d0b810" Nov 1 01:36:00.700726 kubelet[2814]: E1101 01:36:00.700670 2814 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081.3.6-n-4452d0b810\" is forbidden: no 
PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-4452d0b810" Nov 1 01:36:00.702044 kubelet[2814]: I1101 01:36:00.702009 2814 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-n-4452d0b810" Nov 1 01:36:00.703287 kubelet[2814]: E1101 01:36:00.703224 2814 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.6-n-4452d0b810\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081.3.6-n-4452d0b810" Nov 1 01:36:00.705871 kubelet[2814]: E1101 01:36:00.705765 2814 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.6-n-4452d0b810\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081.3.6-n-4452d0b810" Nov 1 01:36:01.704703 kubelet[2814]: I1101 01:36:01.704643 2814 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.6-n-4452d0b810" Nov 1 01:36:01.705679 kubelet[2814]: I1101 01:36:01.704908 2814 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-n-4452d0b810" Nov 1 01:36:01.711370 kubelet[2814]: W1101 01:36:01.711314 2814 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Nov 1 01:36:01.711370 kubelet[2814]: W1101 01:36:01.711336 2814 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Nov 1 01:36:02.909238 systemd[1]: Reloading requested from client PID 3135 ('systemctl') (unit session-11.scope)... Nov 1 01:36:02.909246 systemd[1]: Reloading... Nov 1 01:36:02.942244 zram_generator::config[3174]: No configuration found. Nov 1 01:36:03.018272 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 1 01:36:03.081963 systemd[1]: Reloading finished in 172 ms. Nov 1 01:36:03.108845 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 1 01:36:03.114989 systemd[1]: kubelet.service: Deactivated successfully. Nov 1 01:36:03.115154 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 1 01:36:03.129988 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 1 01:36:03.408539 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 1 01:36:03.411450 (kubelet)[3248]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 1 01:36:03.432825 kubelet[3248]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 1 01:36:03.432825 kubelet[3248]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 1 01:36:03.432825 kubelet[3248]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 1 01:36:03.433068 kubelet[3248]: I1101 01:36:03.432866 3248 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 1 01:36:03.436167 kubelet[3248]: I1101 01:36:03.436126 3248 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Nov 1 01:36:03.436167 kubelet[3248]: I1101 01:36:03.436137 3248 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 1 01:36:03.436302 kubelet[3248]: I1101 01:36:03.436267 3248 server.go:954] "Client rotation is on, will bootstrap in background" Nov 1 01:36:03.436938 kubelet[3248]: I1101 01:36:03.436903 3248 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Nov 1 01:36:03.438884 kubelet[3248]: I1101 01:36:03.438848 3248 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 1 01:36:03.440869 kubelet[3248]: E1101 01:36:03.440854 3248 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Nov 1 01:36:03.440902 kubelet[3248]: I1101 01:36:03.440870 3248 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Nov 1 01:36:03.447396 kubelet[3248]: I1101 01:36:03.447362 3248 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Nov 1 01:36:03.447633 kubelet[3248]: I1101 01:36:03.447591 3248 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 1 01:36:03.447727 kubelet[3248]: I1101 01:36:03.447605 3248 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"ci-4081.3.6-n-4452d0b810","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Nov 1 01:36:03.447727 kubelet[3248]: I1101 01:36:03.447706 3248 topology_manager.go:138] "Creating topology manager with none policy" Nov 1 01:36:03.447727 kubelet[3248]: I1101 01:36:03.447712 3248 container_manager_linux.go:304] "Creating device plugin manager" Nov 1 01:36:03.447825 kubelet[3248]: I1101 01:36:03.447745 3248 state_mem.go:36] "Initialized new in-memory state store" Nov 1 01:36:03.447891 kubelet[3248]: I1101 01:36:03.447855 3248 kubelet.go:446] "Attempting to sync node with API server" Nov 1 01:36:03.447891 kubelet[3248]: I1101 01:36:03.447866 3248 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 1 01:36:03.447891 kubelet[3248]: I1101 01:36:03.447876 3248 kubelet.go:352] "Adding apiserver pod source" Nov 1 01:36:03.447891 kubelet[3248]: I1101 01:36:03.447881 3248 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 1 01:36:03.448131 kubelet[3248]: I1101 01:36:03.448120 3248 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Nov 1 01:36:03.448388 kubelet[3248]: I1101 01:36:03.448383 3248 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 1 01:36:03.448666 kubelet[3248]: I1101 01:36:03.448660 3248 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 1 01:36:03.448683 kubelet[3248]: I1101 01:36:03.448680 3248 server.go:1287] "Started kubelet" Nov 1 01:36:03.448739 kubelet[3248]: I1101 01:36:03.448725 3248 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Nov 1 01:36:03.449111 kubelet[3248]: I1101 01:36:03.448755 3248 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 1 01:36:03.449275 kubelet[3248]: I1101 01:36:03.449264 3248 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 1 01:36:03.450272 kubelet[3248]: E1101 01:36:03.450261 3248 kubelet.go:1555] "Image 
garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 1 01:36:03.450578 kubelet[3248]: I1101 01:36:03.450571 3248 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 1 01:36:03.450684 kubelet[3248]: I1101 01:36:03.450677 3248 server.go:479] "Adding debug handlers to kubelet server" Nov 1 01:36:03.450704 kubelet[3248]: I1101 01:36:03.450683 3248 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 1 01:36:03.451384 kubelet[3248]: E1101 01:36:03.451369 3248 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-4452d0b810\" not found" Nov 1 01:36:03.451416 kubelet[3248]: I1101 01:36:03.451396 3248 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 1 01:36:03.451514 kubelet[3248]: I1101 01:36:03.451506 3248 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 1 01:36:03.451597 kubelet[3248]: I1101 01:36:03.451592 3248 reconciler.go:26] "Reconciler: start to sync state" Nov 1 01:36:03.451936 kubelet[3248]: I1101 01:36:03.451927 3248 factory.go:221] Registration of the systemd container factory successfully Nov 1 01:36:03.452006 kubelet[3248]: I1101 01:36:03.451993 3248 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 1 01:36:03.452545 kubelet[3248]: I1101 01:36:03.452537 3248 factory.go:221] Registration of the containerd container factory successfully Nov 1 01:36:03.455794 kubelet[3248]: I1101 01:36:03.455764 3248 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 1 01:36:03.456387 kubelet[3248]: I1101 01:36:03.456350 3248 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Nov 1 01:36:03.456387 kubelet[3248]: I1101 01:36:03.456368 3248 status_manager.go:227] "Starting to sync pod status with apiserver" Nov 1 01:36:03.456387 kubelet[3248]: I1101 01:36:03.456385 3248 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
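(Annotation, not part of the log: the nodeConfig dump a few entries above packs the kubelet's hard eviction thresholds into a single JSON blob that is hard to read in place. Below is a minimal, illustrative Go sketch for decoding just that fragment; the struct and field names simply mirror the JSON keys appearing in the log and are not the kubelet's own types, and the abridged sample input is copied from that log line.)

package main

import (
	"encoding/json"
	"fmt"
	"log"
)

// Mirrors the HardEvictionThresholds entries seen in the nodeConfig dump;
// these types are illustrative only, not the kubelet's internal ones.
type thresholdValue struct {
	Quantity   *string `json:"Quantity"`   // e.g. "100Mi"; null when a percentage is used instead
	Percentage float64 `json:"Percentage"` // e.g. 0.1 for 10%
}

type hardEvictionThreshold struct {
	Signal   string         `json:"Signal"`
	Operator string         `json:"Operator"`
	Value    thresholdValue `json:"Value"`
}

func main() {
	// Abridged sample copied from the nodeConfig log entry above.
	raw := `[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0}},
	         {"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1}}]`

	var thresholds []hardEvictionThreshold
	if err := json.Unmarshal([]byte(raw), &thresholds); err != nil {
		log.Fatal(err)
	}
	for _, t := range thresholds {
		if t.Value.Quantity != nil {
			fmt.Printf("%s %s %s\n", t.Signal, t.Operator, *t.Value.Quantity)
		} else {
			fmt.Printf("%s %s %.0f%%\n", t.Signal, t.Operator, t.Value.Percentage*100)
		}
	}
}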
Nov 1 01:36:03.456482 kubelet[3248]: I1101 01:36:03.456391 3248 kubelet.go:2382] "Starting kubelet main sync loop" Nov 1 01:36:03.456482 kubelet[3248]: E1101 01:36:03.456427 3248 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 1 01:36:03.472458 kubelet[3248]: I1101 01:36:03.472413 3248 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 1 01:36:03.472458 kubelet[3248]: I1101 01:36:03.472423 3248 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 1 01:36:03.472458 kubelet[3248]: I1101 01:36:03.472433 3248 state_mem.go:36] "Initialized new in-memory state store" Nov 1 01:36:03.472556 kubelet[3248]: I1101 01:36:03.472523 3248 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 1 01:36:03.472556 kubelet[3248]: I1101 01:36:03.472530 3248 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 1 01:36:03.472556 kubelet[3248]: I1101 01:36:03.472542 3248 policy_none.go:49] "None policy: Start" Nov 1 01:36:03.472556 kubelet[3248]: I1101 01:36:03.472547 3248 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 1 01:36:03.472556 kubelet[3248]: I1101 01:36:03.472552 3248 state_mem.go:35] "Initializing new in-memory state store" Nov 1 01:36:03.472641 kubelet[3248]: I1101 01:36:03.472609 3248 state_mem.go:75] "Updated machine memory state" Nov 1 01:36:03.473200 kubelet[3248]: I1101 01:36:03.473187 3248 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 1 01:36:03.473292 kubelet[3248]: I1101 01:36:03.473285 3248 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 1 01:36:03.473326 kubelet[3248]: I1101 01:36:03.473291 3248 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 1 01:36:03.473411 kubelet[3248]: I1101 01:36:03.473403 3248 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 1 01:36:03.473711 kubelet[3248]: E1101 01:36:03.473690 3248 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Nov 1 01:36:03.557879 kubelet[3248]: I1101 01:36:03.557790 3248 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-n-4452d0b810" Nov 1 01:36:03.557879 kubelet[3248]: I1101 01:36:03.557865 3248 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-4452d0b810" Nov 1 01:36:03.558339 kubelet[3248]: I1101 01:36:03.558042 3248 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.6-n-4452d0b810" Nov 1 01:36:03.576288 kubelet[3248]: W1101 01:36:03.576185 3248 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Nov 1 01:36:03.576505 kubelet[3248]: W1101 01:36:03.576460 3248 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Nov 1 01:36:03.576655 kubelet[3248]: E1101 01:36:03.576588 3248 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.6-n-4452d0b810\" already exists" pod="kube-system/kube-scheduler-ci-4081.3.6-n-4452d0b810" Nov 1 01:36:03.577060 kubelet[3248]: W1101 01:36:03.577022 3248 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Nov 1 01:36:03.577159 kubelet[3248]: E1101 01:36:03.577137 3248 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.6-n-4452d0b810\" already exists" pod="kube-system/kube-apiserver-ci-4081.3.6-n-4452d0b810" Nov 1 01:36:03.581427 kubelet[3248]: I1101 01:36:03.581336 3248 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-4452d0b810" Nov 1 01:36:03.590618 kubelet[3248]: I1101 01:36:03.590563 3248 kubelet_node_status.go:124] "Node was previously registered" node="ci-4081.3.6-n-4452d0b810" Nov 1 01:36:03.590840 kubelet[3248]: I1101 01:36:03.590712 3248 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081.3.6-n-4452d0b810" Nov 1 01:36:03.753659 kubelet[3248]: I1101 01:36:03.753543 3248 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/bc19ad76efff11d461f065a5e4255b75-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.6-n-4452d0b810\" (UID: \"bc19ad76efff11d461f065a5e4255b75\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-4452d0b810" Nov 1 01:36:03.753659 kubelet[3248]: I1101 01:36:03.753649 3248 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/40342dd0f88337c9858a56d229d70ba6-k8s-certs\") pod \"kube-apiserver-ci-4081.3.6-n-4452d0b810\" (UID: \"40342dd0f88337c9858a56d229d70ba6\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-4452d0b810" Nov 1 01:36:03.754048 kubelet[3248]: I1101 01:36:03.753730 3248 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/bc19ad76efff11d461f065a5e4255b75-ca-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-4452d0b810\" (UID: \"bc19ad76efff11d461f065a5e4255b75\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-4452d0b810" Nov 1 01:36:03.754048 kubelet[3248]: I1101 
01:36:03.753807 3248 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/bc19ad76efff11d461f065a5e4255b75-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-4452d0b810\" (UID: \"bc19ad76efff11d461f065a5e4255b75\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-4452d0b810" Nov 1 01:36:03.754048 kubelet[3248]: I1101 01:36:03.753875 3248 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/bc19ad76efff11d461f065a5e4255b75-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.6-n-4452d0b810\" (UID: \"bc19ad76efff11d461f065a5e4255b75\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-4452d0b810" Nov 1 01:36:03.754048 kubelet[3248]: I1101 01:36:03.753973 3248 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f5fef8753ee008a4f7279245d1886194-kubeconfig\") pod \"kube-scheduler-ci-4081.3.6-n-4452d0b810\" (UID: \"f5fef8753ee008a4f7279245d1886194\") " pod="kube-system/kube-scheduler-ci-4081.3.6-n-4452d0b810" Nov 1 01:36:03.754459 kubelet[3248]: I1101 01:36:03.754072 3248 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/40342dd0f88337c9858a56d229d70ba6-ca-certs\") pod \"kube-apiserver-ci-4081.3.6-n-4452d0b810\" (UID: \"40342dd0f88337c9858a56d229d70ba6\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-4452d0b810" Nov 1 01:36:03.754459 kubelet[3248]: I1101 01:36:03.754197 3248 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/40342dd0f88337c9858a56d229d70ba6-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.6-n-4452d0b810\" (UID: \"40342dd0f88337c9858a56d229d70ba6\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-4452d0b810" Nov 1 01:36:03.754459 kubelet[3248]: I1101 01:36:03.754311 3248 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/bc19ad76efff11d461f065a5e4255b75-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.6-n-4452d0b810\" (UID: \"bc19ad76efff11d461f065a5e4255b75\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-4452d0b810" Nov 1 01:36:04.448413 kubelet[3248]: I1101 01:36:04.448315 3248 apiserver.go:52] "Watching apiserver" Nov 1 01:36:04.451719 kubelet[3248]: I1101 01:36:04.451606 3248 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 1 01:36:04.462431 kubelet[3248]: I1101 01:36:04.462333 3248 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.6-n-4452d0b810" Nov 1 01:36:04.462640 kubelet[3248]: I1101 01:36:04.462465 3248 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-n-4452d0b810" Nov 1 01:36:04.469525 kubelet[3248]: W1101 01:36:04.469468 3248 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Nov 1 01:36:04.469779 kubelet[3248]: E1101 01:36:04.469576 3248 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.6-n-4452d0b810\" already exists" 
pod="kube-system/kube-scheduler-ci-4081.3.6-n-4452d0b810" Nov 1 01:36:04.470025 kubelet[3248]: W1101 01:36:04.469976 3248 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Nov 1 01:36:04.470289 kubelet[3248]: E1101 01:36:04.470088 3248 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.6-n-4452d0b810\" already exists" pod="kube-system/kube-apiserver-ci-4081.3.6-n-4452d0b810" Nov 1 01:36:04.495163 kubelet[3248]: I1101 01:36:04.495117 3248 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.3.6-n-4452d0b810" podStartSLOduration=3.495104728 podStartE2EDuration="3.495104728s" podCreationTimestamp="2025-11-01 01:36:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 01:36:04.49506046 +0000 UTC m=+1.081195226" watchObservedRunningTime="2025-11-01 01:36:04.495104728 +0000 UTC m=+1.081239492" Nov 1 01:36:04.502779 kubelet[3248]: I1101 01:36:04.502735 3248 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-4452d0b810" podStartSLOduration=1.50272441 podStartE2EDuration="1.50272441s" podCreationTimestamp="2025-11-01 01:36:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 01:36:04.499351094 +0000 UTC m=+1.085485859" watchObservedRunningTime="2025-11-01 01:36:04.50272441 +0000 UTC m=+1.088859172" Nov 1 01:36:04.506884 kubelet[3248]: I1101 01:36:04.506864 3248 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.3.6-n-4452d0b810" podStartSLOduration=3.506855852 podStartE2EDuration="3.506855852s" podCreationTimestamp="2025-11-01 01:36:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 01:36:04.50279986 +0000 UTC m=+1.088934626" watchObservedRunningTime="2025-11-01 01:36:04.506855852 +0000 UTC m=+1.092990614" Nov 1 01:36:09.085021 kubelet[3248]: I1101 01:36:09.084913 3248 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Nov 1 01:36:09.086182 kubelet[3248]: I1101 01:36:09.086121 3248 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 1 01:36:09.086384 containerd[1914]: time="2025-11-01T01:36:09.085656443Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Nov 1 01:36:10.097019 kubelet[3248]: I1101 01:36:10.096871 3248 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/fc3b9b58-5693-41f5-90a5-250c5b6cfc9c-kube-proxy\") pod \"kube-proxy-l6st4\" (UID: \"fc3b9b58-5693-41f5-90a5-250c5b6cfc9c\") " pod="kube-system/kube-proxy-l6st4" Nov 1 01:36:10.097019 kubelet[3248]: I1101 01:36:10.096992 3248 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fc3b9b58-5693-41f5-90a5-250c5b6cfc9c-xtables-lock\") pod \"kube-proxy-l6st4\" (UID: \"fc3b9b58-5693-41f5-90a5-250c5b6cfc9c\") " pod="kube-system/kube-proxy-l6st4" Nov 1 01:36:10.098047 kubelet[3248]: I1101 01:36:10.097066 3248 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fc3b9b58-5693-41f5-90a5-250c5b6cfc9c-lib-modules\") pod \"kube-proxy-l6st4\" (UID: \"fc3b9b58-5693-41f5-90a5-250c5b6cfc9c\") " pod="kube-system/kube-proxy-l6st4" Nov 1 01:36:10.098047 kubelet[3248]: I1101 01:36:10.097123 3248 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9crgs\" (UniqueName: \"kubernetes.io/projected/fc3b9b58-5693-41f5-90a5-250c5b6cfc9c-kube-api-access-9crgs\") pod \"kube-proxy-l6st4\" (UID: \"fc3b9b58-5693-41f5-90a5-250c5b6cfc9c\") " pod="kube-system/kube-proxy-l6st4" Nov 1 01:36:10.197498 kubelet[3248]: I1101 01:36:10.197420 3248 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/832d4918-cbae-4d6f-9295-3f3908c92671-var-lib-calico\") pod \"tigera-operator-7dcd859c48-9ncnp\" (UID: \"832d4918-cbae-4d6f-9295-3f3908c92671\") " pod="tigera-operator/tigera-operator-7dcd859c48-9ncnp" Nov 1 01:36:10.197622 kubelet[3248]: I1101 01:36:10.197563 3248 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4slw\" (UniqueName: \"kubernetes.io/projected/832d4918-cbae-4d6f-9295-3f3908c92671-kube-api-access-z4slw\") pod \"tigera-operator-7dcd859c48-9ncnp\" (UID: \"832d4918-cbae-4d6f-9295-3f3908c92671\") " pod="tigera-operator/tigera-operator-7dcd859c48-9ncnp" Nov 1 01:36:10.299411 containerd[1914]: time="2025-11-01T01:36:10.299332318Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-l6st4,Uid:fc3b9b58-5693-41f5-90a5-250c5b6cfc9c,Namespace:kube-system,Attempt:0,}" Nov 1 01:36:10.309880 containerd[1914]: time="2025-11-01T01:36:10.309582910Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 01:36:10.309880 containerd[1914]: time="2025-11-01T01:36:10.309841963Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 01:36:10.309880 containerd[1914]: time="2025-11-01T01:36:10.309851051Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 01:36:10.309982 containerd[1914]: time="2025-11-01T01:36:10.309901645Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 01:36:10.382599 containerd[1914]: time="2025-11-01T01:36:10.382371837Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-l6st4,Uid:fc3b9b58-5693-41f5-90a5-250c5b6cfc9c,Namespace:kube-system,Attempt:0,} returns sandbox id \"8c7629b41468cf9276a5ce8a56e1b5c752fecdd411d26ec5326dc7ee79968707\"" Nov 1 01:36:10.388162 containerd[1914]: time="2025-11-01T01:36:10.388052103Z" level=info msg="CreateContainer within sandbox \"8c7629b41468cf9276a5ce8a56e1b5c752fecdd411d26ec5326dc7ee79968707\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 1 01:36:10.403331 containerd[1914]: time="2025-11-01T01:36:10.403315527Z" level=info msg="CreateContainer within sandbox \"8c7629b41468cf9276a5ce8a56e1b5c752fecdd411d26ec5326dc7ee79968707\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"f11f95db4c0b5f9613636a0bf222a13efc75d06c96c2a9a8fd8444b797155457\"" Nov 1 01:36:10.403576 containerd[1914]: time="2025-11-01T01:36:10.403564562Z" level=info msg="StartContainer for \"f11f95db4c0b5f9613636a0bf222a13efc75d06c96c2a9a8fd8444b797155457\"" Nov 1 01:36:10.445181 containerd[1914]: time="2025-11-01T01:36:10.445156592Z" level=info msg="StartContainer for \"f11f95db4c0b5f9613636a0bf222a13efc75d06c96c2a9a8fd8444b797155457\" returns successfully" Nov 1 01:36:10.473656 containerd[1914]: time="2025-11-01T01:36:10.473625923Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-9ncnp,Uid:832d4918-cbae-4d6f-9295-3f3908c92671,Namespace:tigera-operator,Attempt:0,}" Nov 1 01:36:10.480066 kubelet[3248]: I1101 01:36:10.480028 3248 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-l6st4" podStartSLOduration=1.48001793 podStartE2EDuration="1.48001793s" podCreationTimestamp="2025-11-01 01:36:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 01:36:10.479909867 +0000 UTC m=+7.066044637" watchObservedRunningTime="2025-11-01 01:36:10.48001793 +0000 UTC m=+7.066152692" Nov 1 01:36:10.483290 containerd[1914]: time="2025-11-01T01:36:10.483245624Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 01:36:10.483359 containerd[1914]: time="2025-11-01T01:36:10.483299108Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 01:36:10.483359 containerd[1914]: time="2025-11-01T01:36:10.483323345Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 01:36:10.483572 containerd[1914]: time="2025-11-01T01:36:10.483559529Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 01:36:10.525606 containerd[1914]: time="2025-11-01T01:36:10.525554214Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-9ncnp,Uid:832d4918-cbae-4d6f-9295-3f3908c92671,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"c0654c3e3a309a2cd4d0af855aa563a8da7a9a0f29236919decda775b96cafa6\"" Nov 1 01:36:10.526546 containerd[1914]: time="2025-11-01T01:36:10.526531066Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Nov 1 01:36:12.243806 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2224540947.mount: Deactivated successfully. Nov 1 01:36:12.475438 containerd[1914]: time="2025-11-01T01:36:12.475383498Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 01:36:12.475659 containerd[1914]: time="2025-11-01T01:36:12.475590688Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691" Nov 1 01:36:12.475896 containerd[1914]: time="2025-11-01T01:36:12.475854258Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 01:36:12.477132 containerd[1914]: time="2025-11-01T01:36:12.477089369Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 01:36:12.477585 containerd[1914]: time="2025-11-01T01:36:12.477544219Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 1.950991492s" Nov 1 01:36:12.477585 containerd[1914]: time="2025-11-01T01:36:12.477559283Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Nov 1 01:36:12.478572 containerd[1914]: time="2025-11-01T01:36:12.478530265Z" level=info msg="CreateContainer within sandbox \"c0654c3e3a309a2cd4d0af855aa563a8da7a9a0f29236919decda775b96cafa6\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Nov 1 01:36:12.481836 containerd[1914]: time="2025-11-01T01:36:12.481795138Z" level=info msg="CreateContainer within sandbox \"c0654c3e3a309a2cd4d0af855aa563a8da7a9a0f29236919decda775b96cafa6\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"3f881bccc89c2fa12c9dea0be74b9265f5edb1951686f42f9ddae804457ba698\"" Nov 1 01:36:12.482063 containerd[1914]: time="2025-11-01T01:36:12.482022125Z" level=info msg="StartContainer for \"3f881bccc89c2fa12c9dea0be74b9265f5edb1951686f42f9ddae804457ba698\"" Nov 1 01:36:12.516374 containerd[1914]: time="2025-11-01T01:36:12.516287610Z" level=info msg="StartContainer for \"3f881bccc89c2fa12c9dea0be74b9265f5edb1951686f42f9ddae804457ba698\" returns successfully" Nov 1 01:36:13.499972 kubelet[3248]: I1101 01:36:13.499818 3248 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-9ncnp" podStartSLOduration=1.548069379 podStartE2EDuration="3.499768086s" podCreationTimestamp="2025-11-01 01:36:10 +0000 
UTC" firstStartedPulling="2025-11-01 01:36:10.526260324 +0000 UTC m=+7.112395093" lastFinishedPulling="2025-11-01 01:36:12.477959038 +0000 UTC m=+9.064093800" observedRunningTime="2025-11-01 01:36:13.499343537 +0000 UTC m=+10.085478376" watchObservedRunningTime="2025-11-01 01:36:13.499768086 +0000 UTC m=+10.085902914" Nov 1 01:36:16.881087 sudo[2243]: pam_unix(sudo:session): session closed for user root Nov 1 01:36:16.882095 sshd[2237]: pam_unix(sshd:session): session closed for user core Nov 1 01:36:16.884583 systemd[1]: sshd@8-139.178.94.199:22-139.178.89.65:45422.service: Deactivated successfully. Nov 1 01:36:16.886000 systemd-logind[1905]: Session 11 logged out. Waiting for processes to exit. Nov 1 01:36:16.886072 systemd[1]: session-11.scope: Deactivated successfully. Nov 1 01:36:16.886696 systemd-logind[1905]: Removed session 11. Nov 1 01:36:17.228317 update_engine[1907]: I20251101 01:36:17.228269 1907 update_attempter.cc:509] Updating boot flags... Nov 1 01:36:17.261222 kernel: BTRFS warning: duplicate device /dev/sdb3 devid 1 generation 34 scanned by (udev-worker) (3773) Nov 1 01:36:17.293221 kernel: BTRFS warning: duplicate device /dev/sdb3 devid 1 generation 34 scanned by (udev-worker) (3776) Nov 1 01:36:21.073671 kubelet[3248]: I1101 01:36:21.073606 3248 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b3c318bd-38e7-45d3-8636-b2bbf2a7644c-tigera-ca-bundle\") pod \"calico-typha-7cdb459579-2xqwk\" (UID: \"b3c318bd-38e7-45d3-8636-b2bbf2a7644c\") " pod="calico-system/calico-typha-7cdb459579-2xqwk" Nov 1 01:36:21.074680 kubelet[3248]: I1101 01:36:21.073704 3248 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gptrq\" (UniqueName: \"kubernetes.io/projected/b3c318bd-38e7-45d3-8636-b2bbf2a7644c-kube-api-access-gptrq\") pod \"calico-typha-7cdb459579-2xqwk\" (UID: \"b3c318bd-38e7-45d3-8636-b2bbf2a7644c\") " pod="calico-system/calico-typha-7cdb459579-2xqwk" Nov 1 01:36:21.074680 kubelet[3248]: I1101 01:36:21.073904 3248 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/b3c318bd-38e7-45d3-8636-b2bbf2a7644c-typha-certs\") pod \"calico-typha-7cdb459579-2xqwk\" (UID: \"b3c318bd-38e7-45d3-8636-b2bbf2a7644c\") " pod="calico-system/calico-typha-7cdb459579-2xqwk" Nov 1 01:36:21.275670 kubelet[3248]: I1101 01:36:21.275550 3248 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/a6e233c7-66c0-4774-9942-17ab9504cd80-flexvol-driver-host\") pod \"calico-node-nwlw6\" (UID: \"a6e233c7-66c0-4774-9942-17ab9504cd80\") " pod="calico-system/calico-node-nwlw6" Nov 1 01:36:21.275670 kubelet[3248]: I1101 01:36:21.275651 3248 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/a6e233c7-66c0-4774-9942-17ab9504cd80-var-run-calico\") pod \"calico-node-nwlw6\" (UID: \"a6e233c7-66c0-4774-9942-17ab9504cd80\") " pod="calico-system/calico-node-nwlw6" Nov 1 01:36:21.276056 kubelet[3248]: I1101 01:36:21.275711 3248 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/a6e233c7-66c0-4774-9942-17ab9504cd80-node-certs\") pod \"calico-node-nwlw6\" (UID: 
\"a6e233c7-66c0-4774-9942-17ab9504cd80\") " pod="calico-system/calico-node-nwlw6" Nov 1 01:36:21.276056 kubelet[3248]: I1101 01:36:21.275889 3248 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a6e233c7-66c0-4774-9942-17ab9504cd80-xtables-lock\") pod \"calico-node-nwlw6\" (UID: \"a6e233c7-66c0-4774-9942-17ab9504cd80\") " pod="calico-system/calico-node-nwlw6" Nov 1 01:36:21.276056 kubelet[3248]: I1101 01:36:21.276011 3248 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/a6e233c7-66c0-4774-9942-17ab9504cd80-cni-bin-dir\") pod \"calico-node-nwlw6\" (UID: \"a6e233c7-66c0-4774-9942-17ab9504cd80\") " pod="calico-system/calico-node-nwlw6" Nov 1 01:36:21.276404 kubelet[3248]: I1101 01:36:21.276064 3248 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a6e233c7-66c0-4774-9942-17ab9504cd80-lib-modules\") pod \"calico-node-nwlw6\" (UID: \"a6e233c7-66c0-4774-9942-17ab9504cd80\") " pod="calico-system/calico-node-nwlw6" Nov 1 01:36:21.276404 kubelet[3248]: I1101 01:36:21.276114 3248 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/a6e233c7-66c0-4774-9942-17ab9504cd80-policysync\") pod \"calico-node-nwlw6\" (UID: \"a6e233c7-66c0-4774-9942-17ab9504cd80\") " pod="calico-system/calico-node-nwlw6" Nov 1 01:36:21.276404 kubelet[3248]: I1101 01:36:21.276169 3248 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a6e233c7-66c0-4774-9942-17ab9504cd80-tigera-ca-bundle\") pod \"calico-node-nwlw6\" (UID: \"a6e233c7-66c0-4774-9942-17ab9504cd80\") " pod="calico-system/calico-node-nwlw6" Nov 1 01:36:21.276404 kubelet[3248]: I1101 01:36:21.276280 3248 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjqfg\" (UniqueName: \"kubernetes.io/projected/a6e233c7-66c0-4774-9942-17ab9504cd80-kube-api-access-bjqfg\") pod \"calico-node-nwlw6\" (UID: \"a6e233c7-66c0-4774-9942-17ab9504cd80\") " pod="calico-system/calico-node-nwlw6" Nov 1 01:36:21.276798 kubelet[3248]: I1101 01:36:21.276400 3248 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/a6e233c7-66c0-4774-9942-17ab9504cd80-cni-log-dir\") pod \"calico-node-nwlw6\" (UID: \"a6e233c7-66c0-4774-9942-17ab9504cd80\") " pod="calico-system/calico-node-nwlw6" Nov 1 01:36:21.276798 kubelet[3248]: I1101 01:36:21.276474 3248 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/a6e233c7-66c0-4774-9942-17ab9504cd80-cni-net-dir\") pod \"calico-node-nwlw6\" (UID: \"a6e233c7-66c0-4774-9942-17ab9504cd80\") " pod="calico-system/calico-node-nwlw6" Nov 1 01:36:21.276798 kubelet[3248]: I1101 01:36:21.276535 3248 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/a6e233c7-66c0-4774-9942-17ab9504cd80-var-lib-calico\") pod \"calico-node-nwlw6\" (UID: \"a6e233c7-66c0-4774-9942-17ab9504cd80\") " pod="calico-system/calico-node-nwlw6" Nov 1 
01:36:21.277087 containerd[1914]: time="2025-11-01T01:36:21.276535996Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7cdb459579-2xqwk,Uid:b3c318bd-38e7-45d3-8636-b2bbf2a7644c,Namespace:calico-system,Attempt:0,}" Nov 1 01:36:21.298543 containerd[1914]: time="2025-11-01T01:36:21.298324136Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 01:36:21.298543 containerd[1914]: time="2025-11-01T01:36:21.298480075Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 01:36:21.298543 containerd[1914]: time="2025-11-01T01:36:21.298490070Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 01:36:21.298543 containerd[1914]: time="2025-11-01T01:36:21.298538787Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 01:36:21.326695 kubelet[3248]: E1101 01:36:21.326597 3248 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9grzl" podUID="f1594ca4-f4b9-4bd5-a5ea-30623fb4e42b" Nov 1 01:36:21.343742 containerd[1914]: time="2025-11-01T01:36:21.343716880Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7cdb459579-2xqwk,Uid:b3c318bd-38e7-45d3-8636-b2bbf2a7644c,Namespace:calico-system,Attempt:0,} returns sandbox id \"4a1ac2c11994984b9c13f83ee703c5c5c71bfe1392d9a9c6b78d61a5dae27870\"" Nov 1 01:36:21.344520 containerd[1914]: time="2025-11-01T01:36:21.344501927Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Nov 1 01:36:21.377309 kubelet[3248]: I1101 01:36:21.377202 3248 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f1594ca4-f4b9-4bd5-a5ea-30623fb4e42b-kubelet-dir\") pod \"csi-node-driver-9grzl\" (UID: \"f1594ca4-f4b9-4bd5-a5ea-30623fb4e42b\") " pod="calico-system/csi-node-driver-9grzl" Nov 1 01:36:21.377633 kubelet[3248]: I1101 01:36:21.377457 3248 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/f1594ca4-f4b9-4bd5-a5ea-30623fb4e42b-registration-dir\") pod \"csi-node-driver-9grzl\" (UID: \"f1594ca4-f4b9-4bd5-a5ea-30623fb4e42b\") " pod="calico-system/csi-node-driver-9grzl" Nov 1 01:36:21.377854 kubelet[3248]: I1101 01:36:21.377652 3248 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/f1594ca4-f4b9-4bd5-a5ea-30623fb4e42b-socket-dir\") pod \"csi-node-driver-9grzl\" (UID: \"f1594ca4-f4b9-4bd5-a5ea-30623fb4e42b\") " pod="calico-system/csi-node-driver-9grzl" Nov 1 01:36:21.378088 kubelet[3248]: I1101 01:36:21.378010 3248 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/f1594ca4-f4b9-4bd5-a5ea-30623fb4e42b-varrun\") pod \"csi-node-driver-9grzl\" (UID: \"f1594ca4-f4b9-4bd5-a5ea-30623fb4e42b\") " pod="calico-system/csi-node-driver-9grzl" Nov 1 01:36:21.378282 kubelet[3248]: I1101 01:36:21.378166 3248 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5pvx\" (UniqueName: \"kubernetes.io/projected/f1594ca4-f4b9-4bd5-a5ea-30623fb4e42b-kube-api-access-j5pvx\") pod \"csi-node-driver-9grzl\" (UID: \"f1594ca4-f4b9-4bd5-a5ea-30623fb4e42b\") " pod="calico-system/csi-node-driver-9grzl" Nov 1 01:36:21.378946 kubelet[3248]: E1101 01:36:21.378891 3248 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:36:21.378946 kubelet[3248]: W1101 01:36:21.378944 3248 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:36:21.379357 kubelet[3248]: E1101 01:36:21.379030 3248 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:36:21.379718 kubelet[3248]: E1101 01:36:21.379683 3248 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:36:21.379851 kubelet[3248]: W1101 01:36:21.379722 3248 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:36:21.379851 kubelet[3248]: E1101 01:36:21.379771 3248 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:36:21.380405 kubelet[3248]: E1101 01:36:21.380373 3248 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:36:21.380530 kubelet[3248]: W1101 01:36:21.380405 3248 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:36:21.380530 kubelet[3248]: E1101 01:36:21.380442 3248 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:36:21.381113 kubelet[3248]: E1101 01:36:21.381053 3248 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:36:21.381113 kubelet[3248]: W1101 01:36:21.381099 3248 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:36:21.381343 kubelet[3248]: E1101 01:36:21.381147 3248 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:36:21.381729 kubelet[3248]: E1101 01:36:21.381672 3248 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:36:21.381729 kubelet[3248]: W1101 01:36:21.381704 3248 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:36:21.381959 kubelet[3248]: E1101 01:36:21.381743 3248 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:36:21.382368 kubelet[3248]: E1101 01:36:21.382303 3248 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:36:21.382368 kubelet[3248]: W1101 01:36:21.382342 3248 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:36:21.382612 kubelet[3248]: E1101 01:36:21.382385 3248 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:36:21.382883 kubelet[3248]: E1101 01:36:21.382836 3248 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:36:21.382883 kubelet[3248]: W1101 01:36:21.382866 3248 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:36:21.383090 kubelet[3248]: E1101 01:36:21.382905 3248 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:36:21.383410 kubelet[3248]: E1101 01:36:21.383357 3248 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:36:21.383410 kubelet[3248]: W1101 01:36:21.383384 3248 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:36:21.383645 kubelet[3248]: E1101 01:36:21.383493 3248 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:36:21.383913 kubelet[3248]: E1101 01:36:21.383856 3248 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:36:21.383913 kubelet[3248]: W1101 01:36:21.383888 3248 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:36:21.384128 kubelet[3248]: E1101 01:36:21.384007 3248 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:36:21.384415 kubelet[3248]: E1101 01:36:21.384388 3248 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:36:21.384525 kubelet[3248]: W1101 01:36:21.384418 3248 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:36:21.384636 kubelet[3248]: E1101 01:36:21.384536 3248 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:36:21.385014 kubelet[3248]: E1101 01:36:21.384983 3248 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:36:21.385119 kubelet[3248]: W1101 01:36:21.385014 3248 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:36:21.385119 kubelet[3248]: E1101 01:36:21.385050 3248 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:36:21.385631 kubelet[3248]: E1101 01:36:21.385603 3248 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:36:21.385751 kubelet[3248]: W1101 01:36:21.385639 3248 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:36:21.385751 kubelet[3248]: E1101 01:36:21.385676 3248 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:36:21.386184 kubelet[3248]: E1101 01:36:21.386152 3248 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:36:21.386389 kubelet[3248]: W1101 01:36:21.386188 3248 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:36:21.386389 kubelet[3248]: E1101 01:36:21.386257 3248 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:36:21.386906 kubelet[3248]: E1101 01:36:21.386859 3248 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:36:21.386906 kubelet[3248]: W1101 01:36:21.386890 3248 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:36:21.387121 kubelet[3248]: E1101 01:36:21.386929 3248 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:36:21.387465 kubelet[3248]: E1101 01:36:21.387417 3248 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:36:21.387465 kubelet[3248]: W1101 01:36:21.387448 3248 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:36:21.387696 kubelet[3248]: E1101 01:36:21.387512 3248 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:36:21.387942 kubelet[3248]: E1101 01:36:21.387915 3248 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:36:21.388046 kubelet[3248]: W1101 01:36:21.387945 3248 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:36:21.388149 kubelet[3248]: E1101 01:36:21.388032 3248 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:36:21.388423 kubelet[3248]: E1101 01:36:21.388388 3248 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:36:21.388527 kubelet[3248]: W1101 01:36:21.388428 3248 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:36:21.388618 kubelet[3248]: E1101 01:36:21.388523 3248 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:36:21.388961 kubelet[3248]: E1101 01:36:21.388933 3248 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:36:21.389075 kubelet[3248]: W1101 01:36:21.388963 3248 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:36:21.389075 kubelet[3248]: E1101 01:36:21.389003 3248 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:36:21.389596 kubelet[3248]: E1101 01:36:21.389567 3248 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:36:21.389723 kubelet[3248]: W1101 01:36:21.389599 3248 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:36:21.389723 kubelet[3248]: E1101 01:36:21.389642 3248 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:36:21.390430 kubelet[3248]: E1101 01:36:21.390366 3248 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:36:21.390430 kubelet[3248]: W1101 01:36:21.390403 3248 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:36:21.390669 kubelet[3248]: E1101 01:36:21.390488 3248 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:36:21.390930 kubelet[3248]: E1101 01:36:21.390882 3248 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:36:21.390930 kubelet[3248]: W1101 01:36:21.390913 3248 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:36:21.391148 kubelet[3248]: E1101 01:36:21.390994 3248 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:36:21.391481 kubelet[3248]: E1101 01:36:21.391432 3248 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:36:21.391481 kubelet[3248]: W1101 01:36:21.391462 3248 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:36:21.391708 kubelet[3248]: E1101 01:36:21.391496 3248 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:36:21.391981 kubelet[3248]: E1101 01:36:21.391951 3248 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:36:21.392106 kubelet[3248]: W1101 01:36:21.391980 3248 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:36:21.392106 kubelet[3248]: E1101 01:36:21.392010 3248 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:36:21.392564 kubelet[3248]: E1101 01:36:21.392536 3248 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:36:21.392675 kubelet[3248]: W1101 01:36:21.392566 3248 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:36:21.392675 kubelet[3248]: E1101 01:36:21.392595 3248 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:36:21.396407 kubelet[3248]: E1101 01:36:21.396331 3248 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:36:21.396407 kubelet[3248]: W1101 01:36:21.396368 3248 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:36:21.396407 kubelet[3248]: E1101 01:36:21.396403 3248 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:36:21.471582 containerd[1914]: time="2025-11-01T01:36:21.471477067Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-nwlw6,Uid:a6e233c7-66c0-4774-9942-17ab9504cd80,Namespace:calico-system,Attempt:0,}" Nov 1 01:36:21.486899 kubelet[3248]: E1101 01:36:21.486884 3248 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:36:21.486899 kubelet[3248]: W1101 01:36:21.486896 3248 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:36:21.486991 kubelet[3248]: E1101 01:36:21.486909 3248 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:36:21.487013 kubelet[3248]: E1101 01:36:21.487004 3248 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:36:21.487013 kubelet[3248]: W1101 01:36:21.487008 3248 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:36:21.487052 kubelet[3248]: E1101 01:36:21.487015 3248 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:36:21.487096 kubelet[3248]: E1101 01:36:21.487091 3248 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:36:21.487115 kubelet[3248]: W1101 01:36:21.487096 3248 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:36:21.487115 kubelet[3248]: E1101 01:36:21.487101 3248 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:36:21.487208 kubelet[3248]: E1101 01:36:21.487201 3248 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:36:21.487231 kubelet[3248]: W1101 01:36:21.487213 3248 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:36:21.487231 kubelet[3248]: E1101 01:36:21.487223 3248 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:36:21.487338 kubelet[3248]: E1101 01:36:21.487304 3248 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:36:21.487338 kubelet[3248]: W1101 01:36:21.487309 3248 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:36:21.487338 kubelet[3248]: E1101 01:36:21.487316 3248 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:36:21.487413 kubelet[3248]: E1101 01:36:21.487405 3248 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:36:21.487413 kubelet[3248]: W1101 01:36:21.487410 3248 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:36:21.487454 kubelet[3248]: E1101 01:36:21.487418 3248 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:36:21.487533 kubelet[3248]: E1101 01:36:21.487528 3248 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:36:21.487533 kubelet[3248]: W1101 01:36:21.487533 3248 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:36:21.487571 kubelet[3248]: E1101 01:36:21.487540 3248 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:36:21.487626 kubelet[3248]: E1101 01:36:21.487621 3248 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:36:21.487643 kubelet[3248]: W1101 01:36:21.487626 3248 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:36:21.487643 kubelet[3248]: E1101 01:36:21.487632 3248 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:36:21.487734 kubelet[3248]: E1101 01:36:21.487727 3248 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:36:21.487755 kubelet[3248]: W1101 01:36:21.487734 3248 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:36:21.487755 kubelet[3248]: E1101 01:36:21.487742 3248 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:36:21.487831 kubelet[3248]: E1101 01:36:21.487826 3248 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:36:21.487848 kubelet[3248]: W1101 01:36:21.487831 3248 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:36:21.487848 kubelet[3248]: E1101 01:36:21.487837 3248 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:36:21.487920 kubelet[3248]: E1101 01:36:21.487910 3248 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:36:21.487920 kubelet[3248]: W1101 01:36:21.487916 3248 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:36:21.487920 kubelet[3248]: E1101 01:36:21.487922 3248 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:36:21.488036 kubelet[3248]: E1101 01:36:21.487991 3248 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:36:21.488036 kubelet[3248]: W1101 01:36:21.487998 3248 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:36:21.488036 kubelet[3248]: E1101 01:36:21.488004 3248 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:36:21.488138 kubelet[3248]: E1101 01:36:21.488081 3248 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:36:21.488138 kubelet[3248]: W1101 01:36:21.488086 3248 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:36:21.488138 kubelet[3248]: E1101 01:36:21.488091 3248 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:36:21.488247 kubelet[3248]: E1101 01:36:21.488178 3248 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:36:21.488247 kubelet[3248]: W1101 01:36:21.488182 3248 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:36:21.488247 kubelet[3248]: E1101 01:36:21.488190 3248 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:36:21.488353 kubelet[3248]: E1101 01:36:21.488267 3248 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:36:21.488353 kubelet[3248]: W1101 01:36:21.488271 3248 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:36:21.488353 kubelet[3248]: E1101 01:36:21.488280 3248 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:36:21.488353 kubelet[3248]: E1101 01:36:21.488337 3248 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:36:21.488353 kubelet[3248]: W1101 01:36:21.488341 3248 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:36:21.488353 kubelet[3248]: E1101 01:36:21.488347 3248 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:36:21.488526 kubelet[3248]: E1101 01:36:21.488411 3248 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:36:21.488526 kubelet[3248]: W1101 01:36:21.488416 3248 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:36:21.488526 kubelet[3248]: E1101 01:36:21.488420 3248 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:36:21.488526 kubelet[3248]: E1101 01:36:21.488499 3248 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:36:21.488526 kubelet[3248]: W1101 01:36:21.488506 3248 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:36:21.488526 kubelet[3248]: E1101 01:36:21.488513 3248 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:36:21.488691 kubelet[3248]: E1101 01:36:21.488589 3248 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:36:21.488691 kubelet[3248]: W1101 01:36:21.488594 3248 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:36:21.488691 kubelet[3248]: E1101 01:36:21.488601 3248 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:36:21.488691 kubelet[3248]: E1101 01:36:21.488670 3248 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:36:21.488691 kubelet[3248]: W1101 01:36:21.488674 3248 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:36:21.488691 kubelet[3248]: E1101 01:36:21.488680 3248 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:36:21.488858 kubelet[3248]: E1101 01:36:21.488771 3248 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:36:21.488858 kubelet[3248]: W1101 01:36:21.488775 3248 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:36:21.488858 kubelet[3248]: E1101 01:36:21.488781 3248 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:36:21.488948 kubelet[3248]: E1101 01:36:21.488926 3248 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:36:21.488948 kubelet[3248]: W1101 01:36:21.488932 3248 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:36:21.488948 kubelet[3248]: E1101 01:36:21.488939 3248 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:36:21.489036 kubelet[3248]: E1101 01:36:21.489023 3248 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:36:21.489036 kubelet[3248]: W1101 01:36:21.489028 3248 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:36:21.489036 kubelet[3248]: E1101 01:36:21.489034 3248 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:36:21.489136 kubelet[3248]: E1101 01:36:21.489128 3248 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:36:21.489164 kubelet[3248]: W1101 01:36:21.489135 3248 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:36:21.489164 kubelet[3248]: E1101 01:36:21.489143 3248 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:36:21.489247 kubelet[3248]: E1101 01:36:21.489239 3248 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:36:21.489247 kubelet[3248]: W1101 01:36:21.489244 3248 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:36:21.489314 kubelet[3248]: E1101 01:36:21.489250 3248 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:36:21.490904 containerd[1914]: time="2025-11-01T01:36:21.490649566Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 01:36:21.490904 containerd[1914]: time="2025-11-01T01:36:21.490865319Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 01:36:21.490904 containerd[1914]: time="2025-11-01T01:36:21.490873421Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 01:36:21.490991 containerd[1914]: time="2025-11-01T01:36:21.490926304Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 01:36:21.493367 kubelet[3248]: E1101 01:36:21.493354 3248 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:36:21.493367 kubelet[3248]: W1101 01:36:21.493363 3248 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:36:21.493475 kubelet[3248]: E1101 01:36:21.493373 3248 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:36:21.517652 containerd[1914]: time="2025-11-01T01:36:21.517629706Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-nwlw6,Uid:a6e233c7-66c0-4774-9942-17ab9504cd80,Namespace:calico-system,Attempt:0,} returns sandbox id \"84cf84ab21000c599d28e45beeb942e13a93a45848b5b427218546e5c9e6b48d\"" Nov 1 01:36:23.460946 kubelet[3248]: E1101 01:36:23.460113 3248 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9grzl" podUID="f1594ca4-f4b9-4bd5-a5ea-30623fb4e42b" Nov 1 01:36:23.532413 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2967183136.mount: Deactivated successfully. Nov 1 01:36:24.282780 containerd[1914]: time="2025-11-01T01:36:24.282724048Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 01:36:24.283020 containerd[1914]: time="2025-11-01T01:36:24.282895617Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35234628" Nov 1 01:36:24.283183 containerd[1914]: time="2025-11-01T01:36:24.283172752Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 01:36:24.284559 containerd[1914]: time="2025-11-01T01:36:24.284518674Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 01:36:24.284815 containerd[1914]: time="2025-11-01T01:36:24.284800936Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 2.940276282s" Nov 1 01:36:24.284861 containerd[1914]: time="2025-11-01T01:36:24.284816121Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Nov 1 01:36:24.285330 containerd[1914]: time="2025-11-01T01:36:24.285317278Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Nov 1 01:36:24.288124 containerd[1914]: time="2025-11-01T01:36:24.288103890Z" level=info msg="CreateContainer within sandbox \"4a1ac2c11994984b9c13f83ee703c5c5c71bfe1392d9a9c6b78d61a5dae27870\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Nov 1 01:36:24.291725 containerd[1914]: time="2025-11-01T01:36:24.291683470Z" level=info msg="CreateContainer within sandbox \"4a1ac2c11994984b9c13f83ee703c5c5c71bfe1392d9a9c6b78d61a5dae27870\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"2b458fec1d2776020227c911768443012388c28088cbb2913185cfe7c0d38403\"" Nov 1 01:36:24.291899 containerd[1914]: time="2025-11-01T01:36:24.291883343Z" level=info msg="StartContainer for \"2b458fec1d2776020227c911768443012388c28088cbb2913185cfe7c0d38403\"" Nov 1 01:36:24.393771 containerd[1914]: time="2025-11-01T01:36:24.393747653Z" level=info msg="StartContainer for 
\"2b458fec1d2776020227c911768443012388c28088cbb2913185cfe7c0d38403\" returns successfully" Nov 1 01:36:24.583356 kubelet[3248]: E1101 01:36:24.583196 3248 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:36:24.583356 kubelet[3248]: W1101 01:36:24.583240 3248 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:36:24.583356 kubelet[3248]: E1101 01:36:24.583268 3248 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:36:24.584254 kubelet[3248]: E1101 01:36:24.583573 3248 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:36:24.584254 kubelet[3248]: W1101 01:36:24.583592 3248 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:36:24.584254 kubelet[3248]: E1101 01:36:24.583610 3248 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:36:24.584254 kubelet[3248]: E1101 01:36:24.583907 3248 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:36:24.584254 kubelet[3248]: W1101 01:36:24.583926 3248 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:36:24.584254 kubelet[3248]: E1101 01:36:24.583945 3248 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:36:24.584780 kubelet[3248]: E1101 01:36:24.584267 3248 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:36:24.584780 kubelet[3248]: W1101 01:36:24.584282 3248 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:36:24.584780 kubelet[3248]: E1101 01:36:24.584297 3248 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:36:24.584780 kubelet[3248]: E1101 01:36:24.584577 3248 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:36:24.584780 kubelet[3248]: W1101 01:36:24.584596 3248 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:36:24.584780 kubelet[3248]: E1101 01:36:24.584614 3248 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:36:24.585320 kubelet[3248]: E1101 01:36:24.584897 3248 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:36:24.585320 kubelet[3248]: W1101 01:36:24.584916 3248 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:36:24.585320 kubelet[3248]: E1101 01:36:24.584934 3248 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:36:24.585320 kubelet[3248]: E1101 01:36:24.585236 3248 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:36:24.585320 kubelet[3248]: W1101 01:36:24.585251 3248 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:36:24.585320 kubelet[3248]: E1101 01:36:24.585267 3248 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:36:24.585631 kubelet[3248]: E1101 01:36:24.585565 3248 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:36:24.585631 kubelet[3248]: W1101 01:36:24.585580 3248 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:36:24.585631 kubelet[3248]: E1101 01:36:24.585596 3248 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:36:24.585980 kubelet[3248]: E1101 01:36:24.585928 3248 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:36:24.585980 kubelet[3248]: W1101 01:36:24.585944 3248 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:36:24.585980 kubelet[3248]: E1101 01:36:24.585963 3248 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:36:24.586256 kubelet[3248]: E1101 01:36:24.586231 3248 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:36:24.586256 kubelet[3248]: W1101 01:36:24.586253 3248 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:36:24.586467 kubelet[3248]: E1101 01:36:24.586277 3248 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:36:24.586707 kubelet[3248]: E1101 01:36:24.586653 3248 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:36:24.586707 kubelet[3248]: W1101 01:36:24.586671 3248 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:36:24.586707 kubelet[3248]: E1101 01:36:24.586687 3248 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:36:24.586997 kubelet[3248]: E1101 01:36:24.586975 3248 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:36:24.587075 kubelet[3248]: W1101 01:36:24.586997 3248 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:36:24.587075 kubelet[3248]: E1101 01:36:24.587021 3248 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:36:24.587338 kubelet[3248]: E1101 01:36:24.587321 3248 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:36:24.587338 kubelet[3248]: W1101 01:36:24.587337 3248 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:36:24.587485 kubelet[3248]: E1101 01:36:24.587352 3248 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:36:24.587588 kubelet[3248]: E1101 01:36:24.587574 3248 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:36:24.587639 kubelet[3248]: W1101 01:36:24.587587 3248 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:36:24.587639 kubelet[3248]: E1101 01:36:24.587601 3248 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:36:24.587872 kubelet[3248]: E1101 01:36:24.587858 3248 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:36:24.587923 kubelet[3248]: W1101 01:36:24.587873 3248 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:36:24.587923 kubelet[3248]: E1101 01:36:24.587887 3248 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:36:24.610754 kubelet[3248]: E1101 01:36:24.610711 3248 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:36:24.610754 kubelet[3248]: W1101 01:36:24.610751 3248 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:36:24.611022 kubelet[3248]: E1101 01:36:24.610783 3248 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:36:24.611455 kubelet[3248]: E1101 01:36:24.611411 3248 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:36:24.611455 kubelet[3248]: W1101 01:36:24.611446 3248 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:36:24.611788 kubelet[3248]: E1101 01:36:24.611486 3248 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:36:24.612190 kubelet[3248]: E1101 01:36:24.612132 3248 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:36:24.612190 kubelet[3248]: W1101 01:36:24.612184 3248 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:36:24.612602 kubelet[3248]: E1101 01:36:24.612286 3248 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:36:24.612905 kubelet[3248]: E1101 01:36:24.612839 3248 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:36:24.612905 kubelet[3248]: W1101 01:36:24.612878 3248 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:36:24.613133 kubelet[3248]: E1101 01:36:24.612921 3248 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:36:24.613556 kubelet[3248]: E1101 01:36:24.613505 3248 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:36:24.613556 kubelet[3248]: W1101 01:36:24.613535 3248 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:36:24.613752 kubelet[3248]: E1101 01:36:24.613597 3248 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:36:24.614024 kubelet[3248]: E1101 01:36:24.613975 3248 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:36:24.614024 kubelet[3248]: W1101 01:36:24.614005 3248 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:36:24.614429 kubelet[3248]: E1101 01:36:24.614062 3248 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:36:24.614629 kubelet[3248]: E1101 01:36:24.614463 3248 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:36:24.614629 kubelet[3248]: W1101 01:36:24.614491 3248 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:36:24.614629 kubelet[3248]: E1101 01:36:24.614574 3248 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:36:24.615113 kubelet[3248]: E1101 01:36:24.614934 3248 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:36:24.615113 kubelet[3248]: W1101 01:36:24.614961 3248 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:36:24.615113 kubelet[3248]: E1101 01:36:24.614995 3248 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:36:24.615625 kubelet[3248]: E1101 01:36:24.615416 3248 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:36:24.615625 kubelet[3248]: W1101 01:36:24.615442 3248 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:36:24.615625 kubelet[3248]: E1101 01:36:24.615479 3248 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:36:24.615988 kubelet[3248]: E1101 01:36:24.615926 3248 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:36:24.615988 kubelet[3248]: W1101 01:36:24.615953 3248 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:36:24.616250 kubelet[3248]: E1101 01:36:24.616030 3248 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:36:24.616495 kubelet[3248]: E1101 01:36:24.616434 3248 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:36:24.616495 kubelet[3248]: W1101 01:36:24.616469 3248 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:36:24.616790 kubelet[3248]: E1101 01:36:24.616557 3248 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:36:24.616976 kubelet[3248]: E1101 01:36:24.616923 3248 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:36:24.616976 kubelet[3248]: W1101 01:36:24.616950 3248 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:36:24.617199 kubelet[3248]: E1101 01:36:24.616991 3248 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:36:24.617634 kubelet[3248]: E1101 01:36:24.617582 3248 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:36:24.617634 kubelet[3248]: W1101 01:36:24.617613 3248 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:36:24.617839 kubelet[3248]: E1101 01:36:24.617647 3248 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:36:24.618268 kubelet[3248]: E1101 01:36:24.618187 3248 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:36:24.618268 kubelet[3248]: W1101 01:36:24.618252 3248 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:36:24.618542 kubelet[3248]: E1101 01:36:24.618307 3248 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:36:24.618950 kubelet[3248]: E1101 01:36:24.618911 3248 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:36:24.618950 kubelet[3248]: W1101 01:36:24.618945 3248 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:36:24.619242 kubelet[3248]: E1101 01:36:24.619044 3248 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:36:24.619524 kubelet[3248]: E1101 01:36:24.619465 3248 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:36:24.619524 kubelet[3248]: W1101 01:36:24.619494 3248 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:36:24.619768 kubelet[3248]: E1101 01:36:24.619572 3248 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:36:24.620065 kubelet[3248]: E1101 01:36:24.620032 3248 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:36:24.620181 kubelet[3248]: W1101 01:36:24.620066 3248 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:36:24.620181 kubelet[3248]: E1101 01:36:24.620116 3248 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:36:24.620762 kubelet[3248]: E1101 01:36:24.620715 3248 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:36:24.620762 kubelet[3248]: W1101 01:36:24.620758 3248 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:36:24.621156 kubelet[3248]: E1101 01:36:24.620803 3248 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:36:25.457892 kubelet[3248]: E1101 01:36:25.457790 3248 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9grzl" podUID="f1594ca4-f4b9-4bd5-a5ea-30623fb4e42b" Nov 1 01:36:25.508586 kubelet[3248]: I1101 01:36:25.508484 3248 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 1 01:36:25.596071 kubelet[3248]: E1101 01:36:25.595976 3248 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:36:25.596071 kubelet[3248]: W1101 01:36:25.596022 3248 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:36:25.596071 kubelet[3248]: E1101 01:36:25.596066 3248 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:36:25.597261 kubelet[3248]: E1101 01:36:25.596695 3248 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:36:25.597261 kubelet[3248]: W1101 01:36:25.596731 3248 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:36:25.597261 kubelet[3248]: E1101 01:36:25.596767 3248 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:36:25.597571 kubelet[3248]: E1101 01:36:25.597313 3248 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:36:25.597571 kubelet[3248]: W1101 01:36:25.597349 3248 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:36:25.597571 kubelet[3248]: E1101 01:36:25.597379 3248 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:36:25.597988 kubelet[3248]: E1101 01:36:25.597913 3248 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:36:25.597988 kubelet[3248]: W1101 01:36:25.597943 3248 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:36:25.597988 kubelet[3248]: E1101 01:36:25.597970 3248 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:36:25.598619 kubelet[3248]: E1101 01:36:25.598532 3248 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:36:25.598619 kubelet[3248]: W1101 01:36:25.598561 3248 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:36:25.598619 kubelet[3248]: E1101 01:36:25.598597 3248 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:36:25.599113 kubelet[3248]: E1101 01:36:25.599064 3248 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:36:25.599113 kubelet[3248]: W1101 01:36:25.599092 3248 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:36:25.599390 kubelet[3248]: E1101 01:36:25.599119 3248 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:36:25.637176 kubelet[3248]: E1101 01:36:25.637138 3248 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:36:25.637176 kubelet[3248]: W1101 01:36:25.637170 3248 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:36:25.637407 kubelet[3248]: E1101 01:36:25.637236 3248 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:36:25.637798 kubelet[3248]: E1101 01:36:25.637714 3248 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:36:25.637798 kubelet[3248]: W1101 01:36:25.637741 3248 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:36:25.637798 kubelet[3248]: E1101 01:36:25.637768 3248 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:36:25.708425 containerd[1914]: time="2025-11-01T01:36:25.708364081Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 01:36:25.708648 containerd[1914]: time="2025-11-01T01:36:25.708591410Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4446754" Nov 1 01:36:25.708923 containerd[1914]: time="2025-11-01T01:36:25.708913465Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 01:36:25.709911 containerd[1914]: time="2025-11-01T01:36:25.709898467Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 01:36:25.710389 containerd[1914]: time="2025-11-01T01:36:25.710348103Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.425014174s" Nov 1 01:36:25.710389 containerd[1914]: time="2025-11-01T01:36:25.710364630Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Nov 1 01:36:25.711304 containerd[1914]: time="2025-11-01T01:36:25.711293522Z" level=info msg="CreateContainer within sandbox \"84cf84ab21000c599d28e45beeb942e13a93a45848b5b427218546e5c9e6b48d\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Nov 1 01:36:25.715860 containerd[1914]: time="2025-11-01T01:36:25.715845048Z" level=info msg="CreateContainer within sandbox 
\"84cf84ab21000c599d28e45beeb942e13a93a45848b5b427218546e5c9e6b48d\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"73a55343067730d16dc045f48cb9a724224aafbeff6f112cfc7e3c574f77865a\"" Nov 1 01:36:25.716075 containerd[1914]: time="2025-11-01T01:36:25.716064243Z" level=info msg="StartContainer for \"73a55343067730d16dc045f48cb9a724224aafbeff6f112cfc7e3c574f77865a\"" Nov 1 01:36:25.781201 containerd[1914]: time="2025-11-01T01:36:25.781166869Z" level=info msg="StartContainer for \"73a55343067730d16dc045f48cb9a724224aafbeff6f112cfc7e3c574f77865a\" returns successfully" Nov 1 01:36:26.546599 kubelet[3248]: I1101 01:36:26.546435 3248 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7cdb459579-2xqwk" podStartSLOduration=3.605503038 podStartE2EDuration="6.546376268s" podCreationTimestamp="2025-11-01 01:36:20 +0000 UTC" firstStartedPulling="2025-11-01 01:36:21.344343242 +0000 UTC m=+17.930478013" lastFinishedPulling="2025-11-01 01:36:24.285216478 +0000 UTC m=+20.871351243" observedRunningTime="2025-11-01 01:36:24.517694779 +0000 UTC m=+21.103829588" watchObservedRunningTime="2025-11-01 01:36:26.546376268 +0000 UTC m=+23.132511116" Nov 1 01:36:26.629427 containerd[1914]: time="2025-11-01T01:36:26.629393296Z" level=info msg="shim disconnected" id=73a55343067730d16dc045f48cb9a724224aafbeff6f112cfc7e3c574f77865a namespace=k8s.io Nov 1 01:36:26.629427 containerd[1914]: time="2025-11-01T01:36:26.629425558Z" level=warning msg="cleaning up after shim disconnected" id=73a55343067730d16dc045f48cb9a724224aafbeff6f112cfc7e3c574f77865a namespace=k8s.io Nov 1 01:36:26.629427 containerd[1914]: time="2025-11-01T01:36:26.629431264Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 1 01:36:26.721934 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-73a55343067730d16dc045f48cb9a724224aafbeff6f112cfc7e3c574f77865a-rootfs.mount: Deactivated successfully. 
Nov 1 01:36:27.457936 kubelet[3248]: E1101 01:36:27.457801 3248 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9grzl" podUID="f1594ca4-f4b9-4bd5-a5ea-30623fb4e42b" Nov 1 01:36:27.522246 containerd[1914]: time="2025-11-01T01:36:27.522148792Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Nov 1 01:36:27.927478 kubelet[3248]: I1101 01:36:27.927420 3248 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 1 01:36:29.456673 kubelet[3248]: E1101 01:36:29.456616 3248 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9grzl" podUID="f1594ca4-f4b9-4bd5-a5ea-30623fb4e42b" Nov 1 01:36:29.643148 containerd[1914]: time="2025-11-01T01:36:29.643124134Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 01:36:29.643381 containerd[1914]: time="2025-11-01T01:36:29.643361948Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859" Nov 1 01:36:29.643731 containerd[1914]: time="2025-11-01T01:36:29.643719395Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 01:36:29.644693 containerd[1914]: time="2025-11-01T01:36:29.644681488Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 01:36:29.645129 containerd[1914]: time="2025-11-01T01:36:29.645118823Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 2.12286605s" Nov 1 01:36:29.645153 containerd[1914]: time="2025-11-01T01:36:29.645133610Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Nov 1 01:36:29.646091 containerd[1914]: time="2025-11-01T01:36:29.646078138Z" level=info msg="CreateContainer within sandbox \"84cf84ab21000c599d28e45beeb942e13a93a45848b5b427218546e5c9e6b48d\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Nov 1 01:36:29.650716 containerd[1914]: time="2025-11-01T01:36:29.650678172Z" level=info msg="CreateContainer within sandbox \"84cf84ab21000c599d28e45beeb942e13a93a45848b5b427218546e5c9e6b48d\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"675e5af1c9715ace75b3b99a80c2ba81f3c2bff8a1539ddc2e2b2a4281f3545a\"" Nov 1 01:36:29.650938 containerd[1914]: time="2025-11-01T01:36:29.650919953Z" level=info msg="StartContainer for \"675e5af1c9715ace75b3b99a80c2ba81f3c2bff8a1539ddc2e2b2a4281f3545a\"" Nov 1 01:36:29.695657 containerd[1914]: time="2025-11-01T01:36:29.695628488Z" level=info msg="StartContainer for 
\"675e5af1c9715ace75b3b99a80c2ba81f3c2bff8a1539ddc2e2b2a4281f3545a\" returns successfully" Nov 1 01:36:30.306011 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-675e5af1c9715ace75b3b99a80c2ba81f3c2bff8a1539ddc2e2b2a4281f3545a-rootfs.mount: Deactivated successfully. Nov 1 01:36:30.394240 kubelet[3248]: I1101 01:36:30.394168 3248 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Nov 1 01:36:30.463004 kubelet[3248]: I1101 01:36:30.462958 3248 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/5dd05a57-68a7-45f0-82e4-c02ae8d0fe49-calico-apiserver-certs\") pod \"calico-apiserver-65957f8fc6-nf7rw\" (UID: \"5dd05a57-68a7-45f0-82e4-c02ae8d0fe49\") " pod="calico-apiserver/calico-apiserver-65957f8fc6-nf7rw" Nov 1 01:36:30.463475 kubelet[3248]: I1101 01:36:30.463023 3248 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/07d4f1da-121e-4217-9b76-751186743f3a-calico-apiserver-certs\") pod \"calico-apiserver-65957f8fc6-h4mgq\" (UID: \"07d4f1da-121e-4217-9b76-751186743f3a\") " pod="calico-apiserver/calico-apiserver-65957f8fc6-h4mgq" Nov 1 01:36:30.463475 kubelet[3248]: I1101 01:36:30.463066 3248 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c3dae49a-08cd-4d54-8b47-222aaaea72bd-tigera-ca-bundle\") pod \"calico-kube-controllers-6874c989b8-tpm2w\" (UID: \"c3dae49a-08cd-4d54-8b47-222aaaea72bd\") " pod="calico-system/calico-kube-controllers-6874c989b8-tpm2w" Nov 1 01:36:30.463475 kubelet[3248]: I1101 01:36:30.463110 3248 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7fd307cd-9e0a-4b4c-8e92-d2d598a36e63-config-volume\") pod \"coredns-668d6bf9bc-ds49c\" (UID: \"7fd307cd-9e0a-4b4c-8e92-d2d598a36e63\") " pod="kube-system/coredns-668d6bf9bc-ds49c" Nov 1 01:36:30.463475 kubelet[3248]: I1101 01:36:30.463156 3248 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42cch\" (UniqueName: \"kubernetes.io/projected/c3dae49a-08cd-4d54-8b47-222aaaea72bd-kube-api-access-42cch\") pod \"calico-kube-controllers-6874c989b8-tpm2w\" (UID: \"c3dae49a-08cd-4d54-8b47-222aaaea72bd\") " pod="calico-system/calico-kube-controllers-6874c989b8-tpm2w" Nov 1 01:36:30.463475 kubelet[3248]: I1101 01:36:30.463250 3248 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2fn2k\" (UniqueName: \"kubernetes.io/projected/7fd307cd-9e0a-4b4c-8e92-d2d598a36e63-kube-api-access-2fn2k\") pod \"coredns-668d6bf9bc-ds49c\" (UID: \"7fd307cd-9e0a-4b4c-8e92-d2d598a36e63\") " pod="kube-system/coredns-668d6bf9bc-ds49c" Nov 1 01:36:30.463721 kubelet[3248]: I1101 01:36:30.463294 3248 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9q527\" (UniqueName: \"kubernetes.io/projected/bce1db93-db69-4f85-98fe-9ebedd24ee18-kube-api-access-9q527\") pod \"coredns-668d6bf9bc-zwxtd\" (UID: \"bce1db93-db69-4f85-98fe-9ebedd24ee18\") " pod="kube-system/coredns-668d6bf9bc-zwxtd" Nov 1 01:36:30.463721 kubelet[3248]: I1101 01:36:30.463332 3248 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-89872\" (UniqueName: \"kubernetes.io/projected/5dd05a57-68a7-45f0-82e4-c02ae8d0fe49-kube-api-access-89872\") pod \"calico-apiserver-65957f8fc6-nf7rw\" (UID: \"5dd05a57-68a7-45f0-82e4-c02ae8d0fe49\") " pod="calico-apiserver/calico-apiserver-65957f8fc6-nf7rw" Nov 1 01:36:30.463721 kubelet[3248]: I1101 01:36:30.463393 3248 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7h2qg\" (UniqueName: \"kubernetes.io/projected/07d4f1da-121e-4217-9b76-751186743f3a-kube-api-access-7h2qg\") pod \"calico-apiserver-65957f8fc6-h4mgq\" (UID: \"07d4f1da-121e-4217-9b76-751186743f3a\") " pod="calico-apiserver/calico-apiserver-65957f8fc6-h4mgq" Nov 1 01:36:30.463721 kubelet[3248]: I1101 01:36:30.463441 3248 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bce1db93-db69-4f85-98fe-9ebedd24ee18-config-volume\") pod \"coredns-668d6bf9bc-zwxtd\" (UID: \"bce1db93-db69-4f85-98fe-9ebedd24ee18\") " pod="kube-system/coredns-668d6bf9bc-zwxtd" Nov 1 01:36:30.564801 kubelet[3248]: I1101 01:36:30.564557 3248 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrb4z\" (UniqueName: \"kubernetes.io/projected/bf49cfe3-1409-4c2c-87ed-a818ec194a34-kube-api-access-zrb4z\") pod \"whisker-95f8498b7-jq29q\" (UID: \"bf49cfe3-1409-4c2c-87ed-a818ec194a34\") " pod="calico-system/whisker-95f8498b7-jq29q" Nov 1 01:36:30.564801 kubelet[3248]: I1101 01:36:30.564665 3248 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmc4z\" (UniqueName: \"kubernetes.io/projected/74c9c676-a5c3-4018-bd2b-2647288efe10-kube-api-access-lmc4z\") pod \"goldmane-666569f655-cwdx7\" (UID: \"74c9c676-a5c3-4018-bd2b-2647288efe10\") " pod="calico-system/goldmane-666569f655-cwdx7" Nov 1 01:36:30.565181 kubelet[3248]: I1101 01:36:30.564979 3248 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/74c9c676-a5c3-4018-bd2b-2647288efe10-config\") pod \"goldmane-666569f655-cwdx7\" (UID: \"74c9c676-a5c3-4018-bd2b-2647288efe10\") " pod="calico-system/goldmane-666569f655-cwdx7" Nov 1 01:36:30.565536 kubelet[3248]: I1101 01:36:30.565460 3248 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/bf49cfe3-1409-4c2c-87ed-a818ec194a34-whisker-backend-key-pair\") pod \"whisker-95f8498b7-jq29q\" (UID: \"bf49cfe3-1409-4c2c-87ed-a818ec194a34\") " pod="calico-system/whisker-95f8498b7-jq29q" Nov 1 01:36:30.565772 kubelet[3248]: I1101 01:36:30.565585 3248 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bf49cfe3-1409-4c2c-87ed-a818ec194a34-whisker-ca-bundle\") pod \"whisker-95f8498b7-jq29q\" (UID: \"bf49cfe3-1409-4c2c-87ed-a818ec194a34\") " pod="calico-system/whisker-95f8498b7-jq29q" Nov 1 01:36:30.565772 kubelet[3248]: I1101 01:36:30.565692 3248 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/74c9c676-a5c3-4018-bd2b-2647288efe10-goldmane-key-pair\") pod \"goldmane-666569f655-cwdx7\" (UID: \"74c9c676-a5c3-4018-bd2b-2647288efe10\") " pod="calico-system/goldmane-666569f655-cwdx7" Nov 1 
01:36:30.566204 kubelet[3248]: I1101 01:36:30.565872 3248 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/74c9c676-a5c3-4018-bd2b-2647288efe10-goldmane-ca-bundle\") pod \"goldmane-666569f655-cwdx7\" (UID: \"74c9c676-a5c3-4018-bd2b-2647288efe10\") " pod="calico-system/goldmane-666569f655-cwdx7" Nov 1 01:36:30.682451 containerd[1914]: time="2025-11-01T01:36:30.682420281Z" level=info msg="shim disconnected" id=675e5af1c9715ace75b3b99a80c2ba81f3c2bff8a1539ddc2e2b2a4281f3545a namespace=k8s.io Nov 1 01:36:30.682451 containerd[1914]: time="2025-11-01T01:36:30.682449687Z" level=warning msg="cleaning up after shim disconnected" id=675e5af1c9715ace75b3b99a80c2ba81f3c2bff8a1539ddc2e2b2a4281f3545a namespace=k8s.io Nov 1 01:36:30.682451 containerd[1914]: time="2025-11-01T01:36:30.682454957Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 1 01:36:30.747449 containerd[1914]: time="2025-11-01T01:36:30.747389198Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-ds49c,Uid:7fd307cd-9e0a-4b4c-8e92-d2d598a36e63,Namespace:kube-system,Attempt:0,}" Nov 1 01:36:30.748735 containerd[1914]: time="2025-11-01T01:36:30.748721258Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-65957f8fc6-nf7rw,Uid:5dd05a57-68a7-45f0-82e4-c02ae8d0fe49,Namespace:calico-apiserver,Attempt:0,}" Nov 1 01:36:30.751039 containerd[1914]: time="2025-11-01T01:36:30.751026677Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-zwxtd,Uid:bce1db93-db69-4f85-98fe-9ebedd24ee18,Namespace:kube-system,Attempt:0,}" Nov 1 01:36:30.752294 containerd[1914]: time="2025-11-01T01:36:30.752279138Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-65957f8fc6-h4mgq,Uid:07d4f1da-121e-4217-9b76-751186743f3a,Namespace:calico-apiserver,Attempt:0,}" Nov 1 01:36:30.752337 containerd[1914]: time="2025-11-01T01:36:30.752309110Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6874c989b8-tpm2w,Uid:c3dae49a-08cd-4d54-8b47-222aaaea72bd,Namespace:calico-system,Attempt:0,}" Nov 1 01:36:30.754655 containerd[1914]: time="2025-11-01T01:36:30.754639992Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-95f8498b7-jq29q,Uid:bf49cfe3-1409-4c2c-87ed-a818ec194a34,Namespace:calico-system,Attempt:0,}" Nov 1 01:36:30.754706 containerd[1914]: time="2025-11-01T01:36:30.754692693Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-cwdx7,Uid:74c9c676-a5c3-4018-bd2b-2647288efe10,Namespace:calico-system,Attempt:0,}" Nov 1 01:36:30.781226 containerd[1914]: time="2025-11-01T01:36:30.781168273Z" level=error msg="Failed to destroy network for sandbox \"00d443e4266c162d13e4f030a6ca27f0684acf44857f4446caac3b06900c9dec\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:36:30.781450 containerd[1914]: time="2025-11-01T01:36:30.781428556Z" level=error msg="encountered an error cleaning up failed sandbox \"00d443e4266c162d13e4f030a6ca27f0684acf44857f4446caac3b06900c9dec\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:36:30.781498 containerd[1914]: 
time="2025-11-01T01:36:30.781465263Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-ds49c,Uid:7fd307cd-9e0a-4b4c-8e92-d2d598a36e63,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"00d443e4266c162d13e4f030a6ca27f0684acf44857f4446caac3b06900c9dec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:36:30.781674 kubelet[3248]: E1101 01:36:30.781639 3248 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"00d443e4266c162d13e4f030a6ca27f0684acf44857f4446caac3b06900c9dec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:36:30.781754 kubelet[3248]: E1101 01:36:30.781710 3248 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"00d443e4266c162d13e4f030a6ca27f0684acf44857f4446caac3b06900c9dec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-ds49c" Nov 1 01:36:30.781754 kubelet[3248]: E1101 01:36:30.781729 3248 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"00d443e4266c162d13e4f030a6ca27f0684acf44857f4446caac3b06900c9dec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-ds49c" Nov 1 01:36:30.781835 kubelet[3248]: E1101 01:36:30.781774 3248 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-ds49c_kube-system(7fd307cd-9e0a-4b4c-8e92-d2d598a36e63)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-ds49c_kube-system(7fd307cd-9e0a-4b4c-8e92-d2d598a36e63)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"00d443e4266c162d13e4f030a6ca27f0684acf44857f4446caac3b06900c9dec\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-ds49c" podUID="7fd307cd-9e0a-4b4c-8e92-d2d598a36e63" Nov 1 01:36:30.788644 containerd[1914]: time="2025-11-01T01:36:30.788600114Z" level=error msg="Failed to destroy network for sandbox \"852ba39486e9461af2a375ea88112216039043ecd4d8553f8ab2922ae995852d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:36:30.789098 containerd[1914]: time="2025-11-01T01:36:30.788918284Z" level=error msg="encountered an error cleaning up failed sandbox \"852ba39486e9461af2a375ea88112216039043ecd4d8553f8ab2922ae995852d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
Nov 1 01:36:30.789098 containerd[1914]: time="2025-11-01T01:36:30.788992556Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-65957f8fc6-nf7rw,Uid:5dd05a57-68a7-45f0-82e4-c02ae8d0fe49,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"852ba39486e9461af2a375ea88112216039043ecd4d8553f8ab2922ae995852d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:36:30.789252 kubelet[3248]: E1101 01:36:30.789222 3248 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"852ba39486e9461af2a375ea88112216039043ecd4d8553f8ab2922ae995852d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:36:30.789305 kubelet[3248]: E1101 01:36:30.789271 3248 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"852ba39486e9461af2a375ea88112216039043ecd4d8553f8ab2922ae995852d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-65957f8fc6-nf7rw" Nov 1 01:36:30.789305 kubelet[3248]: E1101 01:36:30.789286 3248 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"852ba39486e9461af2a375ea88112216039043ecd4d8553f8ab2922ae995852d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-65957f8fc6-nf7rw" Nov 1 01:36:30.789372 kubelet[3248]: E1101 01:36:30.789320 3248 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-65957f8fc6-nf7rw_calico-apiserver(5dd05a57-68a7-45f0-82e4-c02ae8d0fe49)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-65957f8fc6-nf7rw_calico-apiserver(5dd05a57-68a7-45f0-82e4-c02ae8d0fe49)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"852ba39486e9461af2a375ea88112216039043ecd4d8553f8ab2922ae995852d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-65957f8fc6-nf7rw" podUID="5dd05a57-68a7-45f0-82e4-c02ae8d0fe49" Nov 1 01:36:30.790548 containerd[1914]: time="2025-11-01T01:36:30.790524806Z" level=error msg="Failed to destroy network for sandbox \"324c966ea7e4d95aef7ed3efaebd965087cb8160c5f12d9678bf6298c6527baa\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:36:30.790730 containerd[1914]: time="2025-11-01T01:36:30.790714225Z" level=error msg="encountered an error cleaning up failed sandbox \"324c966ea7e4d95aef7ed3efaebd965087cb8160c5f12d9678bf6298c6527baa\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:36:30.790778 containerd[1914]: time="2025-11-01T01:36:30.790742157Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-zwxtd,Uid:bce1db93-db69-4f85-98fe-9ebedd24ee18,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"324c966ea7e4d95aef7ed3efaebd965087cb8160c5f12d9678bf6298c6527baa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:36:30.790986 kubelet[3248]: E1101 01:36:30.790873 3248 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"324c966ea7e4d95aef7ed3efaebd965087cb8160c5f12d9678bf6298c6527baa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:36:30.790986 kubelet[3248]: E1101 01:36:30.790916 3248 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"324c966ea7e4d95aef7ed3efaebd965087cb8160c5f12d9678bf6298c6527baa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-zwxtd" Nov 1 01:36:30.790986 kubelet[3248]: E1101 01:36:30.790933 3248 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"324c966ea7e4d95aef7ed3efaebd965087cb8160c5f12d9678bf6298c6527baa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-zwxtd" Nov 1 01:36:30.791070 kubelet[3248]: E1101 01:36:30.790962 3248 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-zwxtd_kube-system(bce1db93-db69-4f85-98fe-9ebedd24ee18)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-zwxtd_kube-system(bce1db93-db69-4f85-98fe-9ebedd24ee18)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"324c966ea7e4d95aef7ed3efaebd965087cb8160c5f12d9678bf6298c6527baa\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-zwxtd" podUID="bce1db93-db69-4f85-98fe-9ebedd24ee18" Nov 1 01:36:30.793300 containerd[1914]: time="2025-11-01T01:36:30.793270786Z" level=error msg="Failed to destroy network for sandbox \"662e7d58e74330bf48ecbb168fa034d30f2558a48b75c67b66bc989ff33cdffd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:36:30.793376 containerd[1914]: time="2025-11-01T01:36:30.793308259Z" level=error msg="Failed to destroy network for sandbox \"3b7174d8fff7d51186e61fac7be0254fb60c9dbaf23e4f5c08e0728fad3d9fdc\"" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:36:30.793491 containerd[1914]: time="2025-11-01T01:36:30.793477765Z" level=error msg="encountered an error cleaning up failed sandbox \"3b7174d8fff7d51186e61fac7be0254fb60c9dbaf23e4f5c08e0728fad3d9fdc\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:36:30.793532 containerd[1914]: time="2025-11-01T01:36:30.793489316Z" level=error msg="encountered an error cleaning up failed sandbox \"662e7d58e74330bf48ecbb168fa034d30f2558a48b75c67b66bc989ff33cdffd\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:36:30.793532 containerd[1914]: time="2025-11-01T01:36:30.793505086Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-65957f8fc6-h4mgq,Uid:07d4f1da-121e-4217-9b76-751186743f3a,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3b7174d8fff7d51186e61fac7be0254fb60c9dbaf23e4f5c08e0728fad3d9fdc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:36:30.793622 containerd[1914]: time="2025-11-01T01:36:30.793521506Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-cwdx7,Uid:74c9c676-a5c3-4018-bd2b-2647288efe10,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"662e7d58e74330bf48ecbb168fa034d30f2558a48b75c67b66bc989ff33cdffd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:36:30.793671 kubelet[3248]: E1101 01:36:30.793642 3248 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"662e7d58e74330bf48ecbb168fa034d30f2558a48b75c67b66bc989ff33cdffd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:36:30.793713 kubelet[3248]: E1101 01:36:30.793688 3248 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"662e7d58e74330bf48ecbb168fa034d30f2558a48b75c67b66bc989ff33cdffd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-cwdx7" Nov 1 01:36:30.793713 kubelet[3248]: E1101 01:36:30.793708 3248 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"662e7d58e74330bf48ecbb168fa034d30f2558a48b75c67b66bc989ff33cdffd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-cwdx7" Nov 1 
01:36:30.793773 kubelet[3248]: E1101 01:36:30.793642 3248 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3b7174d8fff7d51186e61fac7be0254fb60c9dbaf23e4f5c08e0728fad3d9fdc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:36:30.793773 kubelet[3248]: E1101 01:36:30.793741 3248 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-cwdx7_calico-system(74c9c676-a5c3-4018-bd2b-2647288efe10)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-cwdx7_calico-system(74c9c676-a5c3-4018-bd2b-2647288efe10)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"662e7d58e74330bf48ecbb168fa034d30f2558a48b75c67b66bc989ff33cdffd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-cwdx7" podUID="74c9c676-a5c3-4018-bd2b-2647288efe10" Nov 1 01:36:30.793773 kubelet[3248]: E1101 01:36:30.793757 3248 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3b7174d8fff7d51186e61fac7be0254fb60c9dbaf23e4f5c08e0728fad3d9fdc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-65957f8fc6-h4mgq" Nov 1 01:36:30.793850 kubelet[3248]: E1101 01:36:30.793769 3248 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3b7174d8fff7d51186e61fac7be0254fb60c9dbaf23e4f5c08e0728fad3d9fdc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-65957f8fc6-h4mgq" Nov 1 01:36:30.793850 kubelet[3248]: E1101 01:36:30.793786 3248 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-65957f8fc6-h4mgq_calico-apiserver(07d4f1da-121e-4217-9b76-751186743f3a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-65957f8fc6-h4mgq_calico-apiserver(07d4f1da-121e-4217-9b76-751186743f3a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3b7174d8fff7d51186e61fac7be0254fb60c9dbaf23e4f5c08e0728fad3d9fdc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-65957f8fc6-h4mgq" podUID="07d4f1da-121e-4217-9b76-751186743f3a" Nov 1 01:36:30.794827 containerd[1914]: time="2025-11-01T01:36:30.794814317Z" level=error msg="Failed to destroy network for sandbox \"89c77f44fa84496ffea7c3a3f36568a5fd1b5080e959a9f91198750c165b3a0e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:36:30.794960 containerd[1914]: time="2025-11-01T01:36:30.794948676Z" 
level=error msg="encountered an error cleaning up failed sandbox \"89c77f44fa84496ffea7c3a3f36568a5fd1b5080e959a9f91198750c165b3a0e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:36:30.794982 containerd[1914]: time="2025-11-01T01:36:30.794973473Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6874c989b8-tpm2w,Uid:c3dae49a-08cd-4d54-8b47-222aaaea72bd,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"89c77f44fa84496ffea7c3a3f36568a5fd1b5080e959a9f91198750c165b3a0e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:36:30.795055 kubelet[3248]: E1101 01:36:30.795042 3248 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"89c77f44fa84496ffea7c3a3f36568a5fd1b5080e959a9f91198750c165b3a0e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:36:30.795079 kubelet[3248]: E1101 01:36:30.795063 3248 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"89c77f44fa84496ffea7c3a3f36568a5fd1b5080e959a9f91198750c165b3a0e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6874c989b8-tpm2w" Nov 1 01:36:30.795099 kubelet[3248]: E1101 01:36:30.795077 3248 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"89c77f44fa84496ffea7c3a3f36568a5fd1b5080e959a9f91198750c165b3a0e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6874c989b8-tpm2w" Nov 1 01:36:30.795121 kubelet[3248]: E1101 01:36:30.795097 3248 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6874c989b8-tpm2w_calico-system(c3dae49a-08cd-4d54-8b47-222aaaea72bd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6874c989b8-tpm2w_calico-system(c3dae49a-08cd-4d54-8b47-222aaaea72bd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"89c77f44fa84496ffea7c3a3f36568a5fd1b5080e959a9f91198750c165b3a0e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6874c989b8-tpm2w" podUID="c3dae49a-08cd-4d54-8b47-222aaaea72bd" Nov 1 01:36:30.796073 containerd[1914]: time="2025-11-01T01:36:30.796060467Z" level=error msg="Failed to destroy network for sandbox \"a4fa1e88240286b7bbd4abf5ef1fdc65cb3ec05dd207869797174fc5194fb05f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:36:30.796195 containerd[1914]: time="2025-11-01T01:36:30.796181502Z" level=error msg="encountered an error cleaning up failed sandbox \"a4fa1e88240286b7bbd4abf5ef1fdc65cb3ec05dd207869797174fc5194fb05f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:36:30.796239 containerd[1914]: time="2025-11-01T01:36:30.796200957Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-95f8498b7-jq29q,Uid:bf49cfe3-1409-4c2c-87ed-a818ec194a34,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a4fa1e88240286b7bbd4abf5ef1fdc65cb3ec05dd207869797174fc5194fb05f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:36:30.796305 kubelet[3248]: E1101 01:36:30.796271 3248 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a4fa1e88240286b7bbd4abf5ef1fdc65cb3ec05dd207869797174fc5194fb05f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:36:30.796305 kubelet[3248]: E1101 01:36:30.796294 3248 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a4fa1e88240286b7bbd4abf5ef1fdc65cb3ec05dd207869797174fc5194fb05f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-95f8498b7-jq29q" Nov 1 01:36:30.796361 kubelet[3248]: E1101 01:36:30.796305 3248 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a4fa1e88240286b7bbd4abf5ef1fdc65cb3ec05dd207869797174fc5194fb05f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-95f8498b7-jq29q" Nov 1 01:36:30.796361 kubelet[3248]: E1101 01:36:30.796325 3248 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-95f8498b7-jq29q_calico-system(bf49cfe3-1409-4c2c-87ed-a818ec194a34)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-95f8498b7-jq29q_calico-system(bf49cfe3-1409-4c2c-87ed-a818ec194a34)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a4fa1e88240286b7bbd4abf5ef1fdc65cb3ec05dd207869797174fc5194fb05f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-95f8498b7-jq29q" podUID="bf49cfe3-1409-4c2c-87ed-a818ec194a34" Nov 1 01:36:31.462414 containerd[1914]: time="2025-11-01T01:36:31.462340121Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9grzl,Uid:f1594ca4-f4b9-4bd5-a5ea-30623fb4e42b,Namespace:calico-system,Attempt:0,}" Nov 1 01:36:31.489572 containerd[1914]: 
time="2025-11-01T01:36:31.489519306Z" level=error msg="Failed to destroy network for sandbox \"02ff93888d8d2b69a48d44c6a11c9b468a108c5e24e8a3c2752407e71187392f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:36:31.489736 containerd[1914]: time="2025-11-01T01:36:31.489693590Z" level=error msg="encountered an error cleaning up failed sandbox \"02ff93888d8d2b69a48d44c6a11c9b468a108c5e24e8a3c2752407e71187392f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:36:31.489736 containerd[1914]: time="2025-11-01T01:36:31.489722290Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9grzl,Uid:f1594ca4-f4b9-4bd5-a5ea-30623fb4e42b,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"02ff93888d8d2b69a48d44c6a11c9b468a108c5e24e8a3c2752407e71187392f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:36:31.489924 kubelet[3248]: E1101 01:36:31.489870 3248 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"02ff93888d8d2b69a48d44c6a11c9b468a108c5e24e8a3c2752407e71187392f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:36:31.489924 kubelet[3248]: E1101 01:36:31.489913 3248 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"02ff93888d8d2b69a48d44c6a11c9b468a108c5e24e8a3c2752407e71187392f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-9grzl" Nov 1 01:36:31.490177 kubelet[3248]: E1101 01:36:31.489927 3248 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"02ff93888d8d2b69a48d44c6a11c9b468a108c5e24e8a3c2752407e71187392f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-9grzl" Nov 1 01:36:31.490177 kubelet[3248]: E1101 01:36:31.489960 3248 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-9grzl_calico-system(f1594ca4-f4b9-4bd5-a5ea-30623fb4e42b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-9grzl_calico-system(f1594ca4-f4b9-4bd5-a5ea-30623fb4e42b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"02ff93888d8d2b69a48d44c6a11c9b468a108c5e24e8a3c2752407e71187392f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-9grzl" podUID="f1594ca4-f4b9-4bd5-a5ea-30623fb4e42b" Nov 1 
01:36:31.527401 kubelet[3248]: I1101 01:36:31.527380 3248 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="324c966ea7e4d95aef7ed3efaebd965087cb8160c5f12d9678bf6298c6527baa" Nov 1 01:36:31.527798 containerd[1914]: time="2025-11-01T01:36:31.527778114Z" level=info msg="StopPodSandbox for \"324c966ea7e4d95aef7ed3efaebd965087cb8160c5f12d9678bf6298c6527baa\"" Nov 1 01:36:31.527924 containerd[1914]: time="2025-11-01T01:36:31.527911581Z" level=info msg="Ensure that sandbox 324c966ea7e4d95aef7ed3efaebd965087cb8160c5f12d9678bf6298c6527baa in task-service has been cleanup successfully" Nov 1 01:36:31.527989 kubelet[3248]: I1101 01:36:31.527974 3248 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="662e7d58e74330bf48ecbb168fa034d30f2558a48b75c67b66bc989ff33cdffd" Nov 1 01:36:31.528261 containerd[1914]: time="2025-11-01T01:36:31.528245543Z" level=info msg="StopPodSandbox for \"662e7d58e74330bf48ecbb168fa034d30f2558a48b75c67b66bc989ff33cdffd\"" Nov 1 01:36:31.528372 containerd[1914]: time="2025-11-01T01:36:31.528355976Z" level=info msg="Ensure that sandbox 662e7d58e74330bf48ecbb168fa034d30f2558a48b75c67b66bc989ff33cdffd in task-service has been cleanup successfully" Nov 1 01:36:31.528645 kubelet[3248]: I1101 01:36:31.528628 3248 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="89c77f44fa84496ffea7c3a3f36568a5fd1b5080e959a9f91198750c165b3a0e" Nov 1 01:36:31.528983 containerd[1914]: time="2025-11-01T01:36:31.528966285Z" level=info msg="StopPodSandbox for \"89c77f44fa84496ffea7c3a3f36568a5fd1b5080e959a9f91198750c165b3a0e\"" Nov 1 01:36:31.529132 containerd[1914]: time="2025-11-01T01:36:31.529114228Z" level=info msg="Ensure that sandbox 89c77f44fa84496ffea7c3a3f36568a5fd1b5080e959a9f91198750c165b3a0e in task-service has been cleanup successfully" Nov 1 01:36:31.529310 kubelet[3248]: I1101 01:36:31.529295 3248 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3b7174d8fff7d51186e61fac7be0254fb60c9dbaf23e4f5c08e0728fad3d9fdc" Nov 1 01:36:31.529648 containerd[1914]: time="2025-11-01T01:36:31.529629459Z" level=info msg="StopPodSandbox for \"3b7174d8fff7d51186e61fac7be0254fb60c9dbaf23e4f5c08e0728fad3d9fdc\"" Nov 1 01:36:31.529766 containerd[1914]: time="2025-11-01T01:36:31.529752110Z" level=info msg="Ensure that sandbox 3b7174d8fff7d51186e61fac7be0254fb60c9dbaf23e4f5c08e0728fad3d9fdc in task-service has been cleanup successfully" Nov 1 01:36:31.531638 kubelet[3248]: I1101 01:36:31.531604 3248 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="852ba39486e9461af2a375ea88112216039043ecd4d8553f8ab2922ae995852d" Nov 1 01:36:31.531767 containerd[1914]: time="2025-11-01T01:36:31.531724219Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Nov 1 01:36:31.532118 containerd[1914]: time="2025-11-01T01:36:31.532095966Z" level=info msg="StopPodSandbox for \"852ba39486e9461af2a375ea88112216039043ecd4d8553f8ab2922ae995852d\"" Nov 1 01:36:31.532314 containerd[1914]: time="2025-11-01T01:36:31.532298260Z" level=info msg="Ensure that sandbox 852ba39486e9461af2a375ea88112216039043ecd4d8553f8ab2922ae995852d in task-service has been cleanup successfully" Nov 1 01:36:31.532519 kubelet[3248]: I1101 01:36:31.532499 3248 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="00d443e4266c162d13e4f030a6ca27f0684acf44857f4446caac3b06900c9dec" Nov 1 01:36:31.532851 containerd[1914]: time="2025-11-01T01:36:31.532828423Z" 
level=info msg="StopPodSandbox for \"00d443e4266c162d13e4f030a6ca27f0684acf44857f4446caac3b06900c9dec\"" Nov 1 01:36:31.532986 containerd[1914]: time="2025-11-01T01:36:31.532969875Z" level=info msg="Ensure that sandbox 00d443e4266c162d13e4f030a6ca27f0684acf44857f4446caac3b06900c9dec in task-service has been cleanup successfully" Nov 1 01:36:31.533160 kubelet[3248]: I1101 01:36:31.533147 3248 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="02ff93888d8d2b69a48d44c6a11c9b468a108c5e24e8a3c2752407e71187392f" Nov 1 01:36:31.533526 containerd[1914]: time="2025-11-01T01:36:31.533506874Z" level=info msg="StopPodSandbox for \"02ff93888d8d2b69a48d44c6a11c9b468a108c5e24e8a3c2752407e71187392f\"" Nov 1 01:36:31.533636 containerd[1914]: time="2025-11-01T01:36:31.533624171Z" level=info msg="Ensure that sandbox 02ff93888d8d2b69a48d44c6a11c9b468a108c5e24e8a3c2752407e71187392f in task-service has been cleanup successfully" Nov 1 01:36:31.533787 kubelet[3248]: I1101 01:36:31.533776 3248 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a4fa1e88240286b7bbd4abf5ef1fdc65cb3ec05dd207869797174fc5194fb05f" Nov 1 01:36:31.534173 containerd[1914]: time="2025-11-01T01:36:31.534150412Z" level=info msg="StopPodSandbox for \"a4fa1e88240286b7bbd4abf5ef1fdc65cb3ec05dd207869797174fc5194fb05f\"" Nov 1 01:36:31.534308 containerd[1914]: time="2025-11-01T01:36:31.534294265Z" level=info msg="Ensure that sandbox a4fa1e88240286b7bbd4abf5ef1fdc65cb3ec05dd207869797174fc5194fb05f in task-service has been cleanup successfully" Nov 1 01:36:31.546102 containerd[1914]: time="2025-11-01T01:36:31.546063085Z" level=error msg="StopPodSandbox for \"662e7d58e74330bf48ecbb168fa034d30f2558a48b75c67b66bc989ff33cdffd\" failed" error="failed to destroy network for sandbox \"662e7d58e74330bf48ecbb168fa034d30f2558a48b75c67b66bc989ff33cdffd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:36:31.546254 kubelet[3248]: E1101 01:36:31.546230 3248 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"662e7d58e74330bf48ecbb168fa034d30f2558a48b75c67b66bc989ff33cdffd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="662e7d58e74330bf48ecbb168fa034d30f2558a48b75c67b66bc989ff33cdffd" Nov 1 01:36:31.546320 kubelet[3248]: E1101 01:36:31.546280 3248 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"662e7d58e74330bf48ecbb168fa034d30f2558a48b75c67b66bc989ff33cdffd"} Nov 1 01:36:31.546353 kubelet[3248]: E1101 01:36:31.546328 3248 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"74c9c676-a5c3-4018-bd2b-2647288efe10\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"662e7d58e74330bf48ecbb168fa034d30f2558a48b75c67b66bc989ff33cdffd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 01:36:31.546410 kubelet[3248]: E1101 01:36:31.546346 3248 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for 
\"74c9c676-a5c3-4018-bd2b-2647288efe10\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"662e7d58e74330bf48ecbb168fa034d30f2558a48b75c67b66bc989ff33cdffd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-cwdx7" podUID="74c9c676-a5c3-4018-bd2b-2647288efe10" Nov 1 01:36:31.546891 containerd[1914]: time="2025-11-01T01:36:31.546868199Z" level=error msg="StopPodSandbox for \"324c966ea7e4d95aef7ed3efaebd965087cb8160c5f12d9678bf6298c6527baa\" failed" error="failed to destroy network for sandbox \"324c966ea7e4d95aef7ed3efaebd965087cb8160c5f12d9678bf6298c6527baa\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:36:31.546987 kubelet[3248]: E1101 01:36:31.546974 3248 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"324c966ea7e4d95aef7ed3efaebd965087cb8160c5f12d9678bf6298c6527baa\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="324c966ea7e4d95aef7ed3efaebd965087cb8160c5f12d9678bf6298c6527baa" Nov 1 01:36:31.547015 kubelet[3248]: E1101 01:36:31.546993 3248 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"324c966ea7e4d95aef7ed3efaebd965087cb8160c5f12d9678bf6298c6527baa"} Nov 1 01:36:31.547015 kubelet[3248]: E1101 01:36:31.547010 3248 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"bce1db93-db69-4f85-98fe-9ebedd24ee18\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"324c966ea7e4d95aef7ed3efaebd965087cb8160c5f12d9678bf6298c6527baa\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 01:36:31.547067 kubelet[3248]: E1101 01:36:31.547021 3248 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"bce1db93-db69-4f85-98fe-9ebedd24ee18\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"324c966ea7e4d95aef7ed3efaebd965087cb8160c5f12d9678bf6298c6527baa\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-zwxtd" podUID="bce1db93-db69-4f85-98fe-9ebedd24ee18" Nov 1 01:36:31.547136 containerd[1914]: time="2025-11-01T01:36:31.547119861Z" level=error msg="StopPodSandbox for \"852ba39486e9461af2a375ea88112216039043ecd4d8553f8ab2922ae995852d\" failed" error="failed to destroy network for sandbox \"852ba39486e9461af2a375ea88112216039043ecd4d8553f8ab2922ae995852d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:36:31.547252 kubelet[3248]: E1101 01:36:31.547195 3248 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: 
code = Unknown desc = failed to destroy network for sandbox \"852ba39486e9461af2a375ea88112216039043ecd4d8553f8ab2922ae995852d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="852ba39486e9461af2a375ea88112216039043ecd4d8553f8ab2922ae995852d" Nov 1 01:36:31.547252 kubelet[3248]: E1101 01:36:31.547227 3248 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"852ba39486e9461af2a375ea88112216039043ecd4d8553f8ab2922ae995852d"} Nov 1 01:36:31.547252 kubelet[3248]: E1101 01:36:31.547244 3248 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5dd05a57-68a7-45f0-82e4-c02ae8d0fe49\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"852ba39486e9461af2a375ea88112216039043ecd4d8553f8ab2922ae995852d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 01:36:31.547343 kubelet[3248]: E1101 01:36:31.547260 3248 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5dd05a57-68a7-45f0-82e4-c02ae8d0fe49\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"852ba39486e9461af2a375ea88112216039043ecd4d8553f8ab2922ae995852d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-65957f8fc6-nf7rw" podUID="5dd05a57-68a7-45f0-82e4-c02ae8d0fe49" Nov 1 01:36:31.548281 containerd[1914]: time="2025-11-01T01:36:31.548257189Z" level=error msg="StopPodSandbox for \"00d443e4266c162d13e4f030a6ca27f0684acf44857f4446caac3b06900c9dec\" failed" error="failed to destroy network for sandbox \"00d443e4266c162d13e4f030a6ca27f0684acf44857f4446caac3b06900c9dec\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:36:31.548407 kubelet[3248]: E1101 01:36:31.548386 3248 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"00d443e4266c162d13e4f030a6ca27f0684acf44857f4446caac3b06900c9dec\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="00d443e4266c162d13e4f030a6ca27f0684acf44857f4446caac3b06900c9dec" Nov 1 01:36:31.548439 kubelet[3248]: E1101 01:36:31.548418 3248 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"00d443e4266c162d13e4f030a6ca27f0684acf44857f4446caac3b06900c9dec"} Nov 1 01:36:31.548459 kubelet[3248]: E1101 01:36:31.548444 3248 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7fd307cd-9e0a-4b4c-8e92-d2d598a36e63\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"00d443e4266c162d13e4f030a6ca27f0684acf44857f4446caac3b06900c9dec\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 01:36:31.548494 kubelet[3248]: E1101 01:36:31.548462 3248 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7fd307cd-9e0a-4b4c-8e92-d2d598a36e63\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"00d443e4266c162d13e4f030a6ca27f0684acf44857f4446caac3b06900c9dec\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-ds49c" podUID="7fd307cd-9e0a-4b4c-8e92-d2d598a36e63" Nov 1 01:36:31.549386 containerd[1914]: time="2025-11-01T01:36:31.549369391Z" level=error msg="StopPodSandbox for \"02ff93888d8d2b69a48d44c6a11c9b468a108c5e24e8a3c2752407e71187392f\" failed" error="failed to destroy network for sandbox \"02ff93888d8d2b69a48d44c6a11c9b468a108c5e24e8a3c2752407e71187392f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:36:31.549479 kubelet[3248]: E1101 01:36:31.549468 3248 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"02ff93888d8d2b69a48d44c6a11c9b468a108c5e24e8a3c2752407e71187392f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="02ff93888d8d2b69a48d44c6a11c9b468a108c5e24e8a3c2752407e71187392f" Nov 1 01:36:31.549503 kubelet[3248]: E1101 01:36:31.549482 3248 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"02ff93888d8d2b69a48d44c6a11c9b468a108c5e24e8a3c2752407e71187392f"} Nov 1 01:36:31.549503 kubelet[3248]: E1101 01:36:31.549496 3248 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f1594ca4-f4b9-4bd5-a5ea-30623fb4e42b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"02ff93888d8d2b69a48d44c6a11c9b468a108c5e24e8a3c2752407e71187392f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 01:36:31.549567 kubelet[3248]: E1101 01:36:31.549507 3248 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f1594ca4-f4b9-4bd5-a5ea-30623fb4e42b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"02ff93888d8d2b69a48d44c6a11c9b468a108c5e24e8a3c2752407e71187392f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-9grzl" podUID="f1594ca4-f4b9-4bd5-a5ea-30623fb4e42b" Nov 1 01:36:31.549687 containerd[1914]: time="2025-11-01T01:36:31.549674983Z" level=error msg="StopPodSandbox for \"a4fa1e88240286b7bbd4abf5ef1fdc65cb3ec05dd207869797174fc5194fb05f\" failed" error="failed to destroy network for sandbox \"a4fa1e88240286b7bbd4abf5ef1fdc65cb3ec05dd207869797174fc5194fb05f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:36:31.549758 kubelet[3248]: E1101 01:36:31.549746 3248 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a4fa1e88240286b7bbd4abf5ef1fdc65cb3ec05dd207869797174fc5194fb05f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a4fa1e88240286b7bbd4abf5ef1fdc65cb3ec05dd207869797174fc5194fb05f" Nov 1 01:36:31.549795 kubelet[3248]: E1101 01:36:31.549762 3248 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a4fa1e88240286b7bbd4abf5ef1fdc65cb3ec05dd207869797174fc5194fb05f"} Nov 1 01:36:31.549795 kubelet[3248]: E1101 01:36:31.549776 3248 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"bf49cfe3-1409-4c2c-87ed-a818ec194a34\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a4fa1e88240286b7bbd4abf5ef1fdc65cb3ec05dd207869797174fc5194fb05f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 01:36:31.549841 kubelet[3248]: E1101 01:36:31.549802 3248 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"bf49cfe3-1409-4c2c-87ed-a818ec194a34\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a4fa1e88240286b7bbd4abf5ef1fdc65cb3ec05dd207869797174fc5194fb05f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-95f8498b7-jq29q" podUID="bf49cfe3-1409-4c2c-87ed-a818ec194a34" Nov 1 01:36:31.550880 containerd[1914]: time="2025-11-01T01:36:31.550866881Z" level=error msg="StopPodSandbox for \"89c77f44fa84496ffea7c3a3f36568a5fd1b5080e959a9f91198750c165b3a0e\" failed" error="failed to destroy network for sandbox \"89c77f44fa84496ffea7c3a3f36568a5fd1b5080e959a9f91198750c165b3a0e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:36:31.550957 kubelet[3248]: E1101 01:36:31.550942 3248 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"89c77f44fa84496ffea7c3a3f36568a5fd1b5080e959a9f91198750c165b3a0e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="89c77f44fa84496ffea7c3a3f36568a5fd1b5080e959a9f91198750c165b3a0e" Nov 1 01:36:31.550986 kubelet[3248]: E1101 01:36:31.550961 3248 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"89c77f44fa84496ffea7c3a3f36568a5fd1b5080e959a9f91198750c165b3a0e"} Nov 1 01:36:31.550986 kubelet[3248]: E1101 01:36:31.550975 3248 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c3dae49a-08cd-4d54-8b47-222aaaea72bd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = 
failed to destroy network for sandbox \\\"89c77f44fa84496ffea7c3a3f36568a5fd1b5080e959a9f91198750c165b3a0e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 01:36:31.551037 kubelet[3248]: E1101 01:36:31.550995 3248 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c3dae49a-08cd-4d54-8b47-222aaaea72bd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"89c77f44fa84496ffea7c3a3f36568a5fd1b5080e959a9f91198750c165b3a0e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6874c989b8-tpm2w" podUID="c3dae49a-08cd-4d54-8b47-222aaaea72bd" Nov 1 01:36:31.551256 containerd[1914]: time="2025-11-01T01:36:31.551242410Z" level=error msg="StopPodSandbox for \"3b7174d8fff7d51186e61fac7be0254fb60c9dbaf23e4f5c08e0728fad3d9fdc\" failed" error="failed to destroy network for sandbox \"3b7174d8fff7d51186e61fac7be0254fb60c9dbaf23e4f5c08e0728fad3d9fdc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:36:31.551350 kubelet[3248]: E1101 01:36:31.551338 3248 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3b7174d8fff7d51186e61fac7be0254fb60c9dbaf23e4f5c08e0728fad3d9fdc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3b7174d8fff7d51186e61fac7be0254fb60c9dbaf23e4f5c08e0728fad3d9fdc" Nov 1 01:36:31.551379 kubelet[3248]: E1101 01:36:31.551354 3248 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3b7174d8fff7d51186e61fac7be0254fb60c9dbaf23e4f5c08e0728fad3d9fdc"} Nov 1 01:36:31.551379 kubelet[3248]: E1101 01:36:31.551369 3248 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"07d4f1da-121e-4217-9b76-751186743f3a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3b7174d8fff7d51186e61fac7be0254fb60c9dbaf23e4f5c08e0728fad3d9fdc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 01:36:31.551427 kubelet[3248]: E1101 01:36:31.551379 3248 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"07d4f1da-121e-4217-9b76-751186743f3a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3b7174d8fff7d51186e61fac7be0254fb60c9dbaf23e4f5c08e0728fad3d9fdc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-65957f8fc6-h4mgq" podUID="07d4f1da-121e-4217-9b76-751186743f3a" Nov 1 01:36:31.665024 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-00d443e4266c162d13e4f030a6ca27f0684acf44857f4446caac3b06900c9dec-shm.mount: Deactivated successfully. Nov 1 01:36:34.798100 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount653462297.mount: Deactivated successfully. Nov 1 01:36:34.831974 containerd[1914]: time="2025-11-01T01:36:34.831952470Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 01:36:34.832176 containerd[1914]: time="2025-11-01T01:36:34.832155213Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Nov 1 01:36:34.832510 containerd[1914]: time="2025-11-01T01:36:34.832498309Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 01:36:34.833348 containerd[1914]: time="2025-11-01T01:36:34.833337457Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 01:36:34.833724 containerd[1914]: time="2025-11-01T01:36:34.833712133Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 3.301949254s" Nov 1 01:36:34.833746 containerd[1914]: time="2025-11-01T01:36:34.833729453Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Nov 1 01:36:34.837092 containerd[1914]: time="2025-11-01T01:36:34.837075341Z" level=info msg="CreateContainer within sandbox \"84cf84ab21000c599d28e45beeb942e13a93a45848b5b427218546e5c9e6b48d\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Nov 1 01:36:34.841956 containerd[1914]: time="2025-11-01T01:36:34.841931736Z" level=info msg="CreateContainer within sandbox \"84cf84ab21000c599d28e45beeb942e13a93a45848b5b427218546e5c9e6b48d\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"bc8e5a1ef0d48f56218197e1240f545f5b071cf536553d0b211660b826e263ed\"" Nov 1 01:36:34.842129 containerd[1914]: time="2025-11-01T01:36:34.842119421Z" level=info msg="StartContainer for \"bc8e5a1ef0d48f56218197e1240f545f5b071cf536553d0b211660b826e263ed\"" Nov 1 01:36:34.914553 containerd[1914]: time="2025-11-01T01:36:34.914515504Z" level=info msg="StartContainer for \"bc8e5a1ef0d48f56218197e1240f545f5b071cf536553d0b211660b826e263ed\" returns successfully" Nov 1 01:36:34.999066 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Nov 1 01:36:34.999119 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
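Every sandbox failure in the stretch above shares a single root cause reported by the CNI plugin: /var/lib/calico/nodename does not exist yet. That file is written by the calico/node container, which at this point in the log has only just been pulled and started (the PullImage, CreateContainer and StartContainer lines directly before the WireGuard module messages), so until it appears every sandbox add and delete for coredns, csi-node-driver, goldmane, whisker, calico-kube-controllers and calico-apiserver fails with the same error. A minimal sketch of that kind of existence check, in Go, assuming nothing about Calico's actual source beyond the path and the advice quoted in the error text:

package main

import (
	"errors"
	"fmt"
	"io/fs"
	"os"
	"strings"
)

// nodenameReady returns the Calico node name once /var/lib/calico/nodename has
// been written by the calico/node container; before that it reproduces the
// condition the log reports as "stat ...: no such file or directory".
func nodenameReady(path string) (string, error) {
	b, err := os.ReadFile(path)
	if errors.Is(err, fs.ErrNotExist) {
		return "", fmt.Errorf("%s not present: check that calico/node is running and has mounted /var/lib/calico/", path)
	}
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(b)), nil
}

func main() {
	if name, err := nodenameReady("/var/lib/calico/nodename"); err != nil {
		fmt.Println("CNI would fail here:", err)
	} else {
		fmt.Println("node name:", name)
	}
}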
Nov 1 01:36:35.035408 containerd[1914]: time="2025-11-01T01:36:35.035379970Z" level=info msg="StopPodSandbox for \"a4fa1e88240286b7bbd4abf5ef1fdc65cb3ec05dd207869797174fc5194fb05f\"" Nov 1 01:36:35.083885 containerd[1914]: 2025-11-01 01:36:35.066 [INFO][4831] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a4fa1e88240286b7bbd4abf5ef1fdc65cb3ec05dd207869797174fc5194fb05f" Nov 1 01:36:35.083885 containerd[1914]: 2025-11-01 01:36:35.066 [INFO][4831] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a4fa1e88240286b7bbd4abf5ef1fdc65cb3ec05dd207869797174fc5194fb05f" iface="eth0" netns="/var/run/netns/cni-e5067bc8-82bd-1806-25d8-7a1774f79c93" Nov 1 01:36:35.083885 containerd[1914]: 2025-11-01 01:36:35.067 [INFO][4831] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a4fa1e88240286b7bbd4abf5ef1fdc65cb3ec05dd207869797174fc5194fb05f" iface="eth0" netns="/var/run/netns/cni-e5067bc8-82bd-1806-25d8-7a1774f79c93" Nov 1 01:36:35.083885 containerd[1914]: 2025-11-01 01:36:35.067 [INFO][4831] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="a4fa1e88240286b7bbd4abf5ef1fdc65cb3ec05dd207869797174fc5194fb05f" iface="eth0" netns="/var/run/netns/cni-e5067bc8-82bd-1806-25d8-7a1774f79c93" Nov 1 01:36:35.083885 containerd[1914]: 2025-11-01 01:36:35.067 [INFO][4831] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a4fa1e88240286b7bbd4abf5ef1fdc65cb3ec05dd207869797174fc5194fb05f" Nov 1 01:36:35.083885 containerd[1914]: 2025-11-01 01:36:35.067 [INFO][4831] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a4fa1e88240286b7bbd4abf5ef1fdc65cb3ec05dd207869797174fc5194fb05f" Nov 1 01:36:35.083885 containerd[1914]: 2025-11-01 01:36:35.077 [INFO][4863] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="a4fa1e88240286b7bbd4abf5ef1fdc65cb3ec05dd207869797174fc5194fb05f" HandleID="k8s-pod-network.a4fa1e88240286b7bbd4abf5ef1fdc65cb3ec05dd207869797174fc5194fb05f" Workload="ci--4081.3.6--n--4452d0b810-k8s-whisker--95f8498b7--jq29q-eth0" Nov 1 01:36:35.083885 containerd[1914]: 2025-11-01 01:36:35.077 [INFO][4863] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:36:35.083885 containerd[1914]: 2025-11-01 01:36:35.077 [INFO][4863] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 01:36:35.083885 containerd[1914]: 2025-11-01 01:36:35.080 [WARNING][4863] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="a4fa1e88240286b7bbd4abf5ef1fdc65cb3ec05dd207869797174fc5194fb05f" HandleID="k8s-pod-network.a4fa1e88240286b7bbd4abf5ef1fdc65cb3ec05dd207869797174fc5194fb05f" Workload="ci--4081.3.6--n--4452d0b810-k8s-whisker--95f8498b7--jq29q-eth0" Nov 1 01:36:35.083885 containerd[1914]: 2025-11-01 01:36:35.080 [INFO][4863] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="a4fa1e88240286b7bbd4abf5ef1fdc65cb3ec05dd207869797174fc5194fb05f" HandleID="k8s-pod-network.a4fa1e88240286b7bbd4abf5ef1fdc65cb3ec05dd207869797174fc5194fb05f" Workload="ci--4081.3.6--n--4452d0b810-k8s-whisker--95f8498b7--jq29q-eth0" Nov 1 01:36:35.083885 containerd[1914]: 2025-11-01 01:36:35.081 [INFO][4863] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:36:35.083885 containerd[1914]: 2025-11-01 01:36:35.082 [INFO][4831] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="a4fa1e88240286b7bbd4abf5ef1fdc65cb3ec05dd207869797174fc5194fb05f" Nov 1 01:36:35.084243 containerd[1914]: time="2025-11-01T01:36:35.083921322Z" level=info msg="TearDown network for sandbox \"a4fa1e88240286b7bbd4abf5ef1fdc65cb3ec05dd207869797174fc5194fb05f\" successfully" Nov 1 01:36:35.084243 containerd[1914]: time="2025-11-01T01:36:35.083941749Z" level=info msg="StopPodSandbox for \"a4fa1e88240286b7bbd4abf5ef1fdc65cb3ec05dd207869797174fc5194fb05f\" returns successfully" Nov 1 01:36:35.198904 kubelet[3248]: I1101 01:36:35.198778 3248 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bf49cfe3-1409-4c2c-87ed-a818ec194a34-whisker-ca-bundle\") pod \"bf49cfe3-1409-4c2c-87ed-a818ec194a34\" (UID: \"bf49cfe3-1409-4c2c-87ed-a818ec194a34\") " Nov 1 01:36:35.200151 kubelet[3248]: I1101 01:36:35.198977 3248 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zrb4z\" (UniqueName: \"kubernetes.io/projected/bf49cfe3-1409-4c2c-87ed-a818ec194a34-kube-api-access-zrb4z\") pod \"bf49cfe3-1409-4c2c-87ed-a818ec194a34\" (UID: \"bf49cfe3-1409-4c2c-87ed-a818ec194a34\") " Nov 1 01:36:35.200151 kubelet[3248]: I1101 01:36:35.199098 3248 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/bf49cfe3-1409-4c2c-87ed-a818ec194a34-whisker-backend-key-pair\") pod \"bf49cfe3-1409-4c2c-87ed-a818ec194a34\" (UID: \"bf49cfe3-1409-4c2c-87ed-a818ec194a34\") " Nov 1 01:36:35.200151 kubelet[3248]: I1101 01:36:35.199925 3248 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf49cfe3-1409-4c2c-87ed-a818ec194a34-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "bf49cfe3-1409-4c2c-87ed-a818ec194a34" (UID: "bf49cfe3-1409-4c2c-87ed-a818ec194a34"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 1 01:36:35.205319 kubelet[3248]: I1101 01:36:35.205253 3248 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf49cfe3-1409-4c2c-87ed-a818ec194a34-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "bf49cfe3-1409-4c2c-87ed-a818ec194a34" (UID: "bf49cfe3-1409-4c2c-87ed-a818ec194a34"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 1 01:36:35.205522 kubelet[3248]: I1101 01:36:35.205387 3248 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf49cfe3-1409-4c2c-87ed-a818ec194a34-kube-api-access-zrb4z" (OuterVolumeSpecName: "kube-api-access-zrb4z") pod "bf49cfe3-1409-4c2c-87ed-a818ec194a34" (UID: "bf49cfe3-1409-4c2c-87ed-a818ec194a34"). InnerVolumeSpecName "kube-api-access-zrb4z". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 1 01:36:35.300644 kubelet[3248]: I1101 01:36:35.300570 3248 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bf49cfe3-1409-4c2c-87ed-a818ec194a34-whisker-ca-bundle\") on node \"ci-4081.3.6-n-4452d0b810\" DevicePath \"\"" Nov 1 01:36:35.300644 kubelet[3248]: I1101 01:36:35.300640 3248 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zrb4z\" (UniqueName: \"kubernetes.io/projected/bf49cfe3-1409-4c2c-87ed-a818ec194a34-kube-api-access-zrb4z\") on node \"ci-4081.3.6-n-4452d0b810\" DevicePath \"\"" Nov 1 01:36:35.301006 kubelet[3248]: I1101 01:36:35.300672 3248 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/bf49cfe3-1409-4c2c-87ed-a818ec194a34-whisker-backend-key-pair\") on node \"ci-4081.3.6-n-4452d0b810\" DevicePath \"\"" Nov 1 01:36:35.616868 kubelet[3248]: I1101 01:36:35.616739 3248 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-nwlw6" podStartSLOduration=1.300782884 podStartE2EDuration="14.616702932s" podCreationTimestamp="2025-11-01 01:36:21 +0000 UTC" firstStartedPulling="2025-11-01 01:36:21.518138291 +0000 UTC m=+18.104273056" lastFinishedPulling="2025-11-01 01:36:34.834058339 +0000 UTC m=+31.420193104" observedRunningTime="2025-11-01 01:36:35.61553316 +0000 UTC m=+32.201667985" watchObservedRunningTime="2025-11-01 01:36:35.616702932 +0000 UTC m=+32.202837738" Nov 1 01:36:35.703745 kubelet[3248]: I1101 01:36:35.703604 3248 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f0eea4e4-1f5d-465a-aad6-ce4cfbcde2c9-whisker-ca-bundle\") pod \"whisker-649cb85dc5-gfz8g\" (UID: \"f0eea4e4-1f5d-465a-aad6-ce4cfbcde2c9\") " pod="calico-system/whisker-649cb85dc5-gfz8g" Nov 1 01:36:35.703745 kubelet[3248]: I1101 01:36:35.703757 3248 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f0eea4e4-1f5d-465a-aad6-ce4cfbcde2c9-whisker-backend-key-pair\") pod \"whisker-649cb85dc5-gfz8g\" (UID: \"f0eea4e4-1f5d-465a-aad6-ce4cfbcde2c9\") " pod="calico-system/whisker-649cb85dc5-gfz8g" Nov 1 01:36:35.704121 kubelet[3248]: I1101 01:36:35.703836 3248 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4z8g\" (UniqueName: \"kubernetes.io/projected/f0eea4e4-1f5d-465a-aad6-ce4cfbcde2c9-kube-api-access-w4z8g\") pod \"whisker-649cb85dc5-gfz8g\" (UID: \"f0eea4e4-1f5d-465a-aad6-ce4cfbcde2c9\") " pod="calico-system/whisker-649cb85dc5-gfz8g" Nov 1 01:36:35.807397 systemd[1]: run-netns-cni\x2de5067bc8\x2d82bd\x2d1806\x2d25d8\x2d7a1774f79c93.mount: Deactivated successfully. Nov 1 01:36:35.807772 systemd[1]: var-lib-kubelet-pods-bf49cfe3\x2d1409\x2d4c2c\x2d87ed\x2da818ec194a34-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dzrb4z.mount: Deactivated successfully. Nov 1 01:36:35.808079 systemd[1]: var-lib-kubelet-pods-bf49cfe3\x2d1409\x2d4c2c\x2d87ed\x2da818ec194a34-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Nov 1 01:36:35.928834 containerd[1914]: time="2025-11-01T01:36:35.928727750Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-649cb85dc5-gfz8g,Uid:f0eea4e4-1f5d-465a-aad6-ce4cfbcde2c9,Namespace:calico-system,Attempt:0,}" Nov 1 01:36:35.991877 systemd-networkd[1561]: calie12f830ba76: Link UP Nov 1 01:36:35.992043 systemd-networkd[1561]: calie12f830ba76: Gained carrier Nov 1 01:36:35.999186 containerd[1914]: 2025-11-01 01:36:35.943 [INFO][4892] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 1 01:36:35.999186 containerd[1914]: 2025-11-01 01:36:35.950 [INFO][4892] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--4452d0b810-k8s-whisker--649cb85dc5--gfz8g-eth0 whisker-649cb85dc5- calico-system f0eea4e4-1f5d-465a-aad6-ce4cfbcde2c9 867 0 2025-11-01 01:36:35 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:649cb85dc5 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081.3.6-n-4452d0b810 whisker-649cb85dc5-gfz8g eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calie12f830ba76 [] [] }} ContainerID="96178f3ae779288be30f829d6bdafb5027ff88ac0df828b3a5fd75958b2bc79f" Namespace="calico-system" Pod="whisker-649cb85dc5-gfz8g" WorkloadEndpoint="ci--4081.3.6--n--4452d0b810-k8s-whisker--649cb85dc5--gfz8g-" Nov 1 01:36:35.999186 containerd[1914]: 2025-11-01 01:36:35.950 [INFO][4892] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="96178f3ae779288be30f829d6bdafb5027ff88ac0df828b3a5fd75958b2bc79f" Namespace="calico-system" Pod="whisker-649cb85dc5-gfz8g" WorkloadEndpoint="ci--4081.3.6--n--4452d0b810-k8s-whisker--649cb85dc5--gfz8g-eth0" Nov 1 01:36:35.999186 containerd[1914]: 2025-11-01 01:36:35.963 [INFO][4916] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="96178f3ae779288be30f829d6bdafb5027ff88ac0df828b3a5fd75958b2bc79f" HandleID="k8s-pod-network.96178f3ae779288be30f829d6bdafb5027ff88ac0df828b3a5fd75958b2bc79f" Workload="ci--4081.3.6--n--4452d0b810-k8s-whisker--649cb85dc5--gfz8g-eth0" Nov 1 01:36:35.999186 containerd[1914]: 2025-11-01 01:36:35.963 [INFO][4916] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="96178f3ae779288be30f829d6bdafb5027ff88ac0df828b3a5fd75958b2bc79f" HandleID="k8s-pod-network.96178f3ae779288be30f829d6bdafb5027ff88ac0df828b3a5fd75958b2bc79f" Workload="ci--4081.3.6--n--4452d0b810-k8s-whisker--649cb85dc5--gfz8g-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e650), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-4452d0b810", "pod":"whisker-649cb85dc5-gfz8g", "timestamp":"2025-11-01 01:36:35.963760798 +0000 UTC"}, Hostname:"ci-4081.3.6-n-4452d0b810", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 01:36:35.999186 containerd[1914]: 2025-11-01 01:36:35.963 [INFO][4916] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:36:35.999186 containerd[1914]: 2025-11-01 01:36:35.963 [INFO][4916] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 01:36:35.999186 containerd[1914]: 2025-11-01 01:36:35.963 [INFO][4916] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-4452d0b810' Nov 1 01:36:35.999186 containerd[1914]: 2025-11-01 01:36:35.968 [INFO][4916] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.96178f3ae779288be30f829d6bdafb5027ff88ac0df828b3a5fd75958b2bc79f" host="ci-4081.3.6-n-4452d0b810" Nov 1 01:36:35.999186 containerd[1914]: 2025-11-01 01:36:35.971 [INFO][4916] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-4452d0b810" Nov 1 01:36:35.999186 containerd[1914]: 2025-11-01 01:36:35.975 [INFO][4916] ipam/ipam.go 511: Trying affinity for 192.168.76.192/26 host="ci-4081.3.6-n-4452d0b810" Nov 1 01:36:35.999186 containerd[1914]: 2025-11-01 01:36:35.976 [INFO][4916] ipam/ipam.go 158: Attempting to load block cidr=192.168.76.192/26 host="ci-4081.3.6-n-4452d0b810" Nov 1 01:36:35.999186 containerd[1914]: 2025-11-01 01:36:35.977 [INFO][4916] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.76.192/26 host="ci-4081.3.6-n-4452d0b810" Nov 1 01:36:35.999186 containerd[1914]: 2025-11-01 01:36:35.977 [INFO][4916] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.76.192/26 handle="k8s-pod-network.96178f3ae779288be30f829d6bdafb5027ff88ac0df828b3a5fd75958b2bc79f" host="ci-4081.3.6-n-4452d0b810" Nov 1 01:36:35.999186 containerd[1914]: 2025-11-01 01:36:35.979 [INFO][4916] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.96178f3ae779288be30f829d6bdafb5027ff88ac0df828b3a5fd75958b2bc79f Nov 1 01:36:35.999186 containerd[1914]: 2025-11-01 01:36:35.982 [INFO][4916] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.76.192/26 handle="k8s-pod-network.96178f3ae779288be30f829d6bdafb5027ff88ac0df828b3a5fd75958b2bc79f" host="ci-4081.3.6-n-4452d0b810" Nov 1 01:36:35.999186 containerd[1914]: 2025-11-01 01:36:35.985 [INFO][4916] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.76.193/26] block=192.168.76.192/26 handle="k8s-pod-network.96178f3ae779288be30f829d6bdafb5027ff88ac0df828b3a5fd75958b2bc79f" host="ci-4081.3.6-n-4452d0b810" Nov 1 01:36:35.999186 containerd[1914]: 2025-11-01 01:36:35.985 [INFO][4916] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.76.193/26] handle="k8s-pod-network.96178f3ae779288be30f829d6bdafb5027ff88ac0df828b3a5fd75958b2bc79f" host="ci-4081.3.6-n-4452d0b810" Nov 1 01:36:35.999186 containerd[1914]: 2025-11-01 01:36:35.985 [INFO][4916] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
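The IPAM exchange above follows Calico's per-host block model: this node already holds an affinity for the /26 block 192.168.76.192/26, so the address for the new whisker pod is claimed out of that block (192.168.76.193) under the host-wide IPAM lock, and the lines that follow record it on the WorkloadEndpoint as 192.168.76.193/32 behind the host-side veth calie12f830ba76. The block arithmetic, as an illustrative Go snippet built only from the prefix and address quoted in the log (this is not Calico's IPAM code):

package main

import (
	"fmt"
	"net/netip"
)

// Shows the size of the host-affine block and the first address handed out
// from it, matching the values logged above.
func main() {
	block := netip.MustParsePrefix("192.168.76.192/26")
	size := 1 << (32 - block.Bits())        // a /26 holds 64 addresses
	first := block.Addr().Next()            // 192.168.76.193, the address claimed above
	workload := netip.PrefixFrom(first, 32) // carried on the endpoint as 192.168.76.193/32
	fmt.Println(block, size, workload)
}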
Nov 1 01:36:35.999186 containerd[1914]: 2025-11-01 01:36:35.985 [INFO][4916] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.76.193/26] IPv6=[] ContainerID="96178f3ae779288be30f829d6bdafb5027ff88ac0df828b3a5fd75958b2bc79f" HandleID="k8s-pod-network.96178f3ae779288be30f829d6bdafb5027ff88ac0df828b3a5fd75958b2bc79f" Workload="ci--4081.3.6--n--4452d0b810-k8s-whisker--649cb85dc5--gfz8g-eth0" Nov 1 01:36:35.999776 containerd[1914]: 2025-11-01 01:36:35.986 [INFO][4892] cni-plugin/k8s.go 418: Populated endpoint ContainerID="96178f3ae779288be30f829d6bdafb5027ff88ac0df828b3a5fd75958b2bc79f" Namespace="calico-system" Pod="whisker-649cb85dc5-gfz8g" WorkloadEndpoint="ci--4081.3.6--n--4452d0b810-k8s-whisker--649cb85dc5--gfz8g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--4452d0b810-k8s-whisker--649cb85dc5--gfz8g-eth0", GenerateName:"whisker-649cb85dc5-", Namespace:"calico-system", SelfLink:"", UID:"f0eea4e4-1f5d-465a-aad6-ce4cfbcde2c9", ResourceVersion:"867", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 36, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"649cb85dc5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-4452d0b810", ContainerID:"", Pod:"whisker-649cb85dc5-gfz8g", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.76.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calie12f830ba76", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:36:35.999776 containerd[1914]: 2025-11-01 01:36:35.986 [INFO][4892] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.76.193/32] ContainerID="96178f3ae779288be30f829d6bdafb5027ff88ac0df828b3a5fd75958b2bc79f" Namespace="calico-system" Pod="whisker-649cb85dc5-gfz8g" WorkloadEndpoint="ci--4081.3.6--n--4452d0b810-k8s-whisker--649cb85dc5--gfz8g-eth0" Nov 1 01:36:35.999776 containerd[1914]: 2025-11-01 01:36:35.986 [INFO][4892] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie12f830ba76 ContainerID="96178f3ae779288be30f829d6bdafb5027ff88ac0df828b3a5fd75958b2bc79f" Namespace="calico-system" Pod="whisker-649cb85dc5-gfz8g" WorkloadEndpoint="ci--4081.3.6--n--4452d0b810-k8s-whisker--649cb85dc5--gfz8g-eth0" Nov 1 01:36:35.999776 containerd[1914]: 2025-11-01 01:36:35.992 [INFO][4892] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="96178f3ae779288be30f829d6bdafb5027ff88ac0df828b3a5fd75958b2bc79f" Namespace="calico-system" Pod="whisker-649cb85dc5-gfz8g" WorkloadEndpoint="ci--4081.3.6--n--4452d0b810-k8s-whisker--649cb85dc5--gfz8g-eth0" Nov 1 01:36:35.999776 containerd[1914]: 2025-11-01 01:36:35.992 [INFO][4892] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="96178f3ae779288be30f829d6bdafb5027ff88ac0df828b3a5fd75958b2bc79f" Namespace="calico-system" 
Pod="whisker-649cb85dc5-gfz8g" WorkloadEndpoint="ci--4081.3.6--n--4452d0b810-k8s-whisker--649cb85dc5--gfz8g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--4452d0b810-k8s-whisker--649cb85dc5--gfz8g-eth0", GenerateName:"whisker-649cb85dc5-", Namespace:"calico-system", SelfLink:"", UID:"f0eea4e4-1f5d-465a-aad6-ce4cfbcde2c9", ResourceVersion:"867", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 36, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"649cb85dc5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-4452d0b810", ContainerID:"96178f3ae779288be30f829d6bdafb5027ff88ac0df828b3a5fd75958b2bc79f", Pod:"whisker-649cb85dc5-gfz8g", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.76.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calie12f830ba76", MAC:"56:b0:ed:2a:10:e9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:36:35.999776 containerd[1914]: 2025-11-01 01:36:35.997 [INFO][4892] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="96178f3ae779288be30f829d6bdafb5027ff88ac0df828b3a5fd75958b2bc79f" Namespace="calico-system" Pod="whisker-649cb85dc5-gfz8g" WorkloadEndpoint="ci--4081.3.6--n--4452d0b810-k8s-whisker--649cb85dc5--gfz8g-eth0" Nov 1 01:36:36.008170 containerd[1914]: time="2025-11-01T01:36:36.008104863Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 01:36:36.008170 containerd[1914]: time="2025-11-01T01:36:36.008133217Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 01:36:36.008170 containerd[1914]: time="2025-11-01T01:36:36.008140299Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 01:36:36.008285 containerd[1914]: time="2025-11-01T01:36:36.008187583Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 01:36:36.058755 containerd[1914]: time="2025-11-01T01:36:36.058727775Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-649cb85dc5-gfz8g,Uid:f0eea4e4-1f5d-465a-aad6-ce4cfbcde2c9,Namespace:calico-system,Attempt:0,} returns sandbox id \"96178f3ae779288be30f829d6bdafb5027ff88ac0df828b3a5fd75958b2bc79f\"" Nov 1 01:36:36.059728 containerd[1914]: time="2025-11-01T01:36:36.059707859Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 1 01:36:36.423217 kernel: bpftool[5132]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Nov 1 01:36:36.424584 containerd[1914]: time="2025-11-01T01:36:36.424536625Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:36:36.425103 containerd[1914]: time="2025-11-01T01:36:36.425082758Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 1 01:36:36.425146 containerd[1914]: time="2025-11-01T01:36:36.425129115Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 1 01:36:36.425270 kubelet[3248]: E1101 01:36:36.425248 3248 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 01:36:36.425451 kubelet[3248]: E1101 01:36:36.425282 3248 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 01:36:36.425482 kubelet[3248]: E1101 01:36:36.425382 3248 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:580cd2697486438ea258991ade3b4df2,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-w4z8g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-649cb85dc5-gfz8g_calico-system(f0eea4e4-1f5d-465a-aad6-ce4cfbcde2c9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 1 01:36:36.426880 containerd[1914]: time="2025-11-01T01:36:36.426869341Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 1 01:36:36.548094 kubelet[3248]: I1101 01:36:36.548050 3248 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 1 01:36:36.588280 systemd-networkd[1561]: vxlan.calico: Link UP Nov 1 01:36:36.588287 systemd-networkd[1561]: vxlan.calico: Gained carrier Nov 1 01:36:36.778691 containerd[1914]: time="2025-11-01T01:36:36.778613063Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:36:36.779091 containerd[1914]: time="2025-11-01T01:36:36.779045929Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 1 01:36:36.779122 containerd[1914]: time="2025-11-01T01:36:36.779089165Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 1 01:36:36.779185 kubelet[3248]: E1101 01:36:36.779165 3248 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 01:36:36.779221 kubelet[3248]: E1101 01:36:36.779197 3248 
kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 01:36:36.779290 kubelet[3248]: E1101 01:36:36.779269 3248 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w4z8g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-649cb85dc5-gfz8g_calico-system(f0eea4e4-1f5d-465a-aad6-ce4cfbcde2c9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 1 01:36:36.780423 kubelet[3248]: E1101 01:36:36.780376 3248 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-649cb85dc5-gfz8g" podUID="f0eea4e4-1f5d-465a-aad6-ce4cfbcde2c9" Nov 1 
01:36:37.461886 kubelet[3248]: I1101 01:36:37.461781 3248 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf49cfe3-1409-4c2c-87ed-a818ec194a34" path="/var/lib/kubelet/pods/bf49cfe3-1409-4c2c-87ed-a818ec194a34/volumes" Nov 1 01:36:37.554478 kubelet[3248]: E1101 01:36:37.554382 3248 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-649cb85dc5-gfz8g" podUID="f0eea4e4-1f5d-465a-aad6-ce4cfbcde2c9" Nov 1 01:36:37.591534 systemd-networkd[1561]: calie12f830ba76: Gained IPv6LL Nov 1 01:36:37.890922 kubelet[3248]: I1101 01:36:37.890648 3248 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 1 01:36:37.976384 systemd-networkd[1561]: vxlan.calico: Gained IPv6LL Nov 1 01:36:42.458897 containerd[1914]: time="2025-11-01T01:36:42.458802866Z" level=info msg="StopPodSandbox for \"89c77f44fa84496ffea7c3a3f36568a5fd1b5080e959a9f91198750c165b3a0e\"" Nov 1 01:36:42.515993 containerd[1914]: 2025-11-01 01:36:42.495 [INFO][5325] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="89c77f44fa84496ffea7c3a3f36568a5fd1b5080e959a9f91198750c165b3a0e" Nov 1 01:36:42.515993 containerd[1914]: 2025-11-01 01:36:42.495 [INFO][5325] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="89c77f44fa84496ffea7c3a3f36568a5fd1b5080e959a9f91198750c165b3a0e" iface="eth0" netns="/var/run/netns/cni-90c4bd88-ca99-c3c6-50ce-e6d13ac92804" Nov 1 01:36:42.515993 containerd[1914]: 2025-11-01 01:36:42.495 [INFO][5325] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="89c77f44fa84496ffea7c3a3f36568a5fd1b5080e959a9f91198750c165b3a0e" iface="eth0" netns="/var/run/netns/cni-90c4bd88-ca99-c3c6-50ce-e6d13ac92804" Nov 1 01:36:42.515993 containerd[1914]: 2025-11-01 01:36:42.495 [INFO][5325] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="89c77f44fa84496ffea7c3a3f36568a5fd1b5080e959a9f91198750c165b3a0e" iface="eth0" netns="/var/run/netns/cni-90c4bd88-ca99-c3c6-50ce-e6d13ac92804" Nov 1 01:36:42.515993 containerd[1914]: 2025-11-01 01:36:42.495 [INFO][5325] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="89c77f44fa84496ffea7c3a3f36568a5fd1b5080e959a9f91198750c165b3a0e" Nov 1 01:36:42.515993 containerd[1914]: 2025-11-01 01:36:42.495 [INFO][5325] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="89c77f44fa84496ffea7c3a3f36568a5fd1b5080e959a9f91198750c165b3a0e" Nov 1 01:36:42.515993 containerd[1914]: 2025-11-01 01:36:42.508 [INFO][5341] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="89c77f44fa84496ffea7c3a3f36568a5fd1b5080e959a9f91198750c165b3a0e" HandleID="k8s-pod-network.89c77f44fa84496ffea7c3a3f36568a5fd1b5080e959a9f91198750c165b3a0e" Workload="ci--4081.3.6--n--4452d0b810-k8s-calico--kube--controllers--6874c989b8--tpm2w-eth0" Nov 1 01:36:42.515993 containerd[1914]: 2025-11-01 01:36:42.508 [INFO][5341] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:36:42.515993 containerd[1914]: 2025-11-01 01:36:42.508 [INFO][5341] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 01:36:42.515993 containerd[1914]: 2025-11-01 01:36:42.512 [WARNING][5341] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="89c77f44fa84496ffea7c3a3f36568a5fd1b5080e959a9f91198750c165b3a0e" HandleID="k8s-pod-network.89c77f44fa84496ffea7c3a3f36568a5fd1b5080e959a9f91198750c165b3a0e" Workload="ci--4081.3.6--n--4452d0b810-k8s-calico--kube--controllers--6874c989b8--tpm2w-eth0" Nov 1 01:36:42.515993 containerd[1914]: 2025-11-01 01:36:42.512 [INFO][5341] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="89c77f44fa84496ffea7c3a3f36568a5fd1b5080e959a9f91198750c165b3a0e" HandleID="k8s-pod-network.89c77f44fa84496ffea7c3a3f36568a5fd1b5080e959a9f91198750c165b3a0e" Workload="ci--4081.3.6--n--4452d0b810-k8s-calico--kube--controllers--6874c989b8--tpm2w-eth0" Nov 1 01:36:42.515993 containerd[1914]: 2025-11-01 01:36:42.513 [INFO][5341] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:36:42.515993 containerd[1914]: 2025-11-01 01:36:42.514 [INFO][5325] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="89c77f44fa84496ffea7c3a3f36568a5fd1b5080e959a9f91198750c165b3a0e" Nov 1 01:36:42.516388 containerd[1914]: time="2025-11-01T01:36:42.516045444Z" level=info msg="TearDown network for sandbox \"89c77f44fa84496ffea7c3a3f36568a5fd1b5080e959a9f91198750c165b3a0e\" successfully" Nov 1 01:36:42.516388 containerd[1914]: time="2025-11-01T01:36:42.516068285Z" level=info msg="StopPodSandbox for \"89c77f44fa84496ffea7c3a3f36568a5fd1b5080e959a9f91198750c165b3a0e\" returns successfully" Nov 1 01:36:42.516590 containerd[1914]: time="2025-11-01T01:36:42.516565518Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6874c989b8-tpm2w,Uid:c3dae49a-08cd-4d54-8b47-222aaaea72bd,Namespace:calico-system,Attempt:1,}" Nov 1 01:36:42.518022 systemd[1]: run-netns-cni\x2d90c4bd88\x2dca99\x2dc3c6\x2d50ce\x2de6d13ac92804.mount: Deactivated successfully. 
Nov 1 01:36:42.628562 systemd-networkd[1561]: cali0c0cfb5a0eb: Link UP Nov 1 01:36:42.629203 systemd-networkd[1561]: cali0c0cfb5a0eb: Gained carrier Nov 1 01:36:42.653653 containerd[1914]: 2025-11-01 01:36:42.555 [INFO][5356] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--4452d0b810-k8s-calico--kube--controllers--6874c989b8--tpm2w-eth0 calico-kube-controllers-6874c989b8- calico-system c3dae49a-08cd-4d54-8b47-222aaaea72bd 901 0 2025-11-01 01:36:21 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6874c989b8 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081.3.6-n-4452d0b810 calico-kube-controllers-6874c989b8-tpm2w eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali0c0cfb5a0eb [] [] }} ContainerID="1d6012c4a20b6abcaf8b4704f93da6fe04b900609a4f4c8e162d233d6344ea77" Namespace="calico-system" Pod="calico-kube-controllers-6874c989b8-tpm2w" WorkloadEndpoint="ci--4081.3.6--n--4452d0b810-k8s-calico--kube--controllers--6874c989b8--tpm2w-" Nov 1 01:36:42.653653 containerd[1914]: 2025-11-01 01:36:42.555 [INFO][5356] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1d6012c4a20b6abcaf8b4704f93da6fe04b900609a4f4c8e162d233d6344ea77" Namespace="calico-system" Pod="calico-kube-controllers-6874c989b8-tpm2w" WorkloadEndpoint="ci--4081.3.6--n--4452d0b810-k8s-calico--kube--controllers--6874c989b8--tpm2w-eth0" Nov 1 01:36:42.653653 containerd[1914]: 2025-11-01 01:36:42.570 [INFO][5380] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1d6012c4a20b6abcaf8b4704f93da6fe04b900609a4f4c8e162d233d6344ea77" HandleID="k8s-pod-network.1d6012c4a20b6abcaf8b4704f93da6fe04b900609a4f4c8e162d233d6344ea77" Workload="ci--4081.3.6--n--4452d0b810-k8s-calico--kube--controllers--6874c989b8--tpm2w-eth0" Nov 1 01:36:42.653653 containerd[1914]: 2025-11-01 01:36:42.570 [INFO][5380] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="1d6012c4a20b6abcaf8b4704f93da6fe04b900609a4f4c8e162d233d6344ea77" HandleID="k8s-pod-network.1d6012c4a20b6abcaf8b4704f93da6fe04b900609a4f4c8e162d233d6344ea77" Workload="ci--4081.3.6--n--4452d0b810-k8s-calico--kube--controllers--6874c989b8--tpm2w-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00019aea0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-4452d0b810", "pod":"calico-kube-controllers-6874c989b8-tpm2w", "timestamp":"2025-11-01 01:36:42.570437714 +0000 UTC"}, Hostname:"ci-4081.3.6-n-4452d0b810", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 01:36:42.653653 containerd[1914]: 2025-11-01 01:36:42.570 [INFO][5380] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:36:42.653653 containerd[1914]: 2025-11-01 01:36:42.570 [INFO][5380] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 01:36:42.653653 containerd[1914]: 2025-11-01 01:36:42.570 [INFO][5380] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-4452d0b810' Nov 1 01:36:42.653653 containerd[1914]: 2025-11-01 01:36:42.575 [INFO][5380] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1d6012c4a20b6abcaf8b4704f93da6fe04b900609a4f4c8e162d233d6344ea77" host="ci-4081.3.6-n-4452d0b810" Nov 1 01:36:42.653653 containerd[1914]: 2025-11-01 01:36:42.579 [INFO][5380] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-4452d0b810" Nov 1 01:36:42.653653 containerd[1914]: 2025-11-01 01:36:42.582 [INFO][5380] ipam/ipam.go 511: Trying affinity for 192.168.76.192/26 host="ci-4081.3.6-n-4452d0b810" Nov 1 01:36:42.653653 containerd[1914]: 2025-11-01 01:36:42.583 [INFO][5380] ipam/ipam.go 158: Attempting to load block cidr=192.168.76.192/26 host="ci-4081.3.6-n-4452d0b810" Nov 1 01:36:42.653653 containerd[1914]: 2025-11-01 01:36:42.585 [INFO][5380] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.76.192/26 host="ci-4081.3.6-n-4452d0b810" Nov 1 01:36:42.653653 containerd[1914]: 2025-11-01 01:36:42.585 [INFO][5380] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.76.192/26 handle="k8s-pod-network.1d6012c4a20b6abcaf8b4704f93da6fe04b900609a4f4c8e162d233d6344ea77" host="ci-4081.3.6-n-4452d0b810" Nov 1 01:36:42.653653 containerd[1914]: 2025-11-01 01:36:42.586 [INFO][5380] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.1d6012c4a20b6abcaf8b4704f93da6fe04b900609a4f4c8e162d233d6344ea77 Nov 1 01:36:42.653653 containerd[1914]: 2025-11-01 01:36:42.608 [INFO][5380] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.76.192/26 handle="k8s-pod-network.1d6012c4a20b6abcaf8b4704f93da6fe04b900609a4f4c8e162d233d6344ea77" host="ci-4081.3.6-n-4452d0b810" Nov 1 01:36:42.653653 containerd[1914]: 2025-11-01 01:36:42.619 [INFO][5380] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.76.194/26] block=192.168.76.192/26 handle="k8s-pod-network.1d6012c4a20b6abcaf8b4704f93da6fe04b900609a4f4c8e162d233d6344ea77" host="ci-4081.3.6-n-4452d0b810" Nov 1 01:36:42.653653 containerd[1914]: 2025-11-01 01:36:42.619 [INFO][5380] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.76.194/26] handle="k8s-pod-network.1d6012c4a20b6abcaf8b4704f93da6fe04b900609a4f4c8e162d233d6344ea77" host="ci-4081.3.6-n-4452d0b810" Nov 1 01:36:42.653653 containerd[1914]: 2025-11-01 01:36:42.619 [INFO][5380] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
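The IPAM lines above show the assignment path for the calico-kube-controllers sandbox: the plugin takes the host-wide lock, confirms this node's affinity to the 192.168.76.192/26 block, claims the next free address (192.168.76.194 here), writes the block back, and releases the lock. The sketch below illustrates only the "next free address in the affine block" step; which addresses are already in use is assumed for the example, and Calico's real IPAM additionally persists handles, reservations and per-block allocation state in the datastore:

// Minimal sketch of picking the next free address from an affine block.
// The "used" set is an assumption for illustration, not read from the log.
package main

import (
	"fmt"
	"net/netip"
)

func nextFree(block netip.Prefix, used map[netip.Addr]bool) (netip.Addr, bool) {
	for a := block.Addr(); block.Contains(a); a = a.Next() {
		if !used[a] {
			return a, true
		}
	}
	return netip.Addr{}, false
}

func main() {
	block := netip.MustParsePrefix("192.168.76.192/26")
	used := map[netip.Addr]bool{
		netip.MustParseAddr("192.168.76.192"): true, // assumed already taken
		netip.MustParseAddr("192.168.76.193"): true, // assumed already taken (an earlier endpoint)
	}
	if ip, ok := nextFree(block, used); ok {
		fmt.Println("assigned", ip) // prints 192.168.76.194, matching the claim above
	}
}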
Nov 1 01:36:42.653653 containerd[1914]: 2025-11-01 01:36:42.620 [INFO][5380] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.76.194/26] IPv6=[] ContainerID="1d6012c4a20b6abcaf8b4704f93da6fe04b900609a4f4c8e162d233d6344ea77" HandleID="k8s-pod-network.1d6012c4a20b6abcaf8b4704f93da6fe04b900609a4f4c8e162d233d6344ea77" Workload="ci--4081.3.6--n--4452d0b810-k8s-calico--kube--controllers--6874c989b8--tpm2w-eth0" Nov 1 01:36:42.655838 containerd[1914]: 2025-11-01 01:36:42.624 [INFO][5356] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1d6012c4a20b6abcaf8b4704f93da6fe04b900609a4f4c8e162d233d6344ea77" Namespace="calico-system" Pod="calico-kube-controllers-6874c989b8-tpm2w" WorkloadEndpoint="ci--4081.3.6--n--4452d0b810-k8s-calico--kube--controllers--6874c989b8--tpm2w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--4452d0b810-k8s-calico--kube--controllers--6874c989b8--tpm2w-eth0", GenerateName:"calico-kube-controllers-6874c989b8-", Namespace:"calico-system", SelfLink:"", UID:"c3dae49a-08cd-4d54-8b47-222aaaea72bd", ResourceVersion:"901", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 36, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6874c989b8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-4452d0b810", ContainerID:"", Pod:"calico-kube-controllers-6874c989b8-tpm2w", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.76.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali0c0cfb5a0eb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:36:42.655838 containerd[1914]: 2025-11-01 01:36:42.624 [INFO][5356] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.76.194/32] ContainerID="1d6012c4a20b6abcaf8b4704f93da6fe04b900609a4f4c8e162d233d6344ea77" Namespace="calico-system" Pod="calico-kube-controllers-6874c989b8-tpm2w" WorkloadEndpoint="ci--4081.3.6--n--4452d0b810-k8s-calico--kube--controllers--6874c989b8--tpm2w-eth0" Nov 1 01:36:42.655838 containerd[1914]: 2025-11-01 01:36:42.625 [INFO][5356] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0c0cfb5a0eb ContainerID="1d6012c4a20b6abcaf8b4704f93da6fe04b900609a4f4c8e162d233d6344ea77" Namespace="calico-system" Pod="calico-kube-controllers-6874c989b8-tpm2w" WorkloadEndpoint="ci--4081.3.6--n--4452d0b810-k8s-calico--kube--controllers--6874c989b8--tpm2w-eth0" Nov 1 01:36:42.655838 containerd[1914]: 2025-11-01 01:36:42.629 [INFO][5356] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1d6012c4a20b6abcaf8b4704f93da6fe04b900609a4f4c8e162d233d6344ea77" Namespace="calico-system" Pod="calico-kube-controllers-6874c989b8-tpm2w" 
WorkloadEndpoint="ci--4081.3.6--n--4452d0b810-k8s-calico--kube--controllers--6874c989b8--tpm2w-eth0" Nov 1 01:36:42.655838 containerd[1914]: 2025-11-01 01:36:42.630 [INFO][5356] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1d6012c4a20b6abcaf8b4704f93da6fe04b900609a4f4c8e162d233d6344ea77" Namespace="calico-system" Pod="calico-kube-controllers-6874c989b8-tpm2w" WorkloadEndpoint="ci--4081.3.6--n--4452d0b810-k8s-calico--kube--controllers--6874c989b8--tpm2w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--4452d0b810-k8s-calico--kube--controllers--6874c989b8--tpm2w-eth0", GenerateName:"calico-kube-controllers-6874c989b8-", Namespace:"calico-system", SelfLink:"", UID:"c3dae49a-08cd-4d54-8b47-222aaaea72bd", ResourceVersion:"901", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 36, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6874c989b8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-4452d0b810", ContainerID:"1d6012c4a20b6abcaf8b4704f93da6fe04b900609a4f4c8e162d233d6344ea77", Pod:"calico-kube-controllers-6874c989b8-tpm2w", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.76.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali0c0cfb5a0eb", MAC:"56:05:ed:d9:59:21", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:36:42.655838 containerd[1914]: 2025-11-01 01:36:42.648 [INFO][5356] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1d6012c4a20b6abcaf8b4704f93da6fe04b900609a4f4c8e162d233d6344ea77" Namespace="calico-system" Pod="calico-kube-controllers-6874c989b8-tpm2w" WorkloadEndpoint="ci--4081.3.6--n--4452d0b810-k8s-calico--kube--controllers--6874c989b8--tpm2w-eth0" Nov 1 01:36:42.666902 containerd[1914]: time="2025-11-01T01:36:42.666793437Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 01:36:42.666902 containerd[1914]: time="2025-11-01T01:36:42.666821804Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 01:36:42.666902 containerd[1914]: time="2025-11-01T01:36:42.666828701Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 01:36:42.667008 containerd[1914]: time="2025-11-01T01:36:42.666871522Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 01:36:42.706254 containerd[1914]: time="2025-11-01T01:36:42.706229215Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6874c989b8-tpm2w,Uid:c3dae49a-08cd-4d54-8b47-222aaaea72bd,Namespace:calico-system,Attempt:1,} returns sandbox id \"1d6012c4a20b6abcaf8b4704f93da6fe04b900609a4f4c8e162d233d6344ea77\"" Nov 1 01:36:42.706990 containerd[1914]: time="2025-11-01T01:36:42.706955981Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 1 01:36:43.080853 containerd[1914]: time="2025-11-01T01:36:43.080703703Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:36:43.081656 containerd[1914]: time="2025-11-01T01:36:43.081580537Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 1 01:36:43.081686 containerd[1914]: time="2025-11-01T01:36:43.081647482Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 1 01:36:43.081792 kubelet[3248]: E1101 01:36:43.081769 3248 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 01:36:43.081996 kubelet[3248]: E1101 01:36:43.081800 3248 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 01:36:43.081996 kubelet[3248]: E1101 01:36:43.081878 3248 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-42cch,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-6874c989b8-tpm2w_calico-system(c3dae49a-08cd-4d54-8b47-222aaaea72bd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 1 01:36:43.083620 kubelet[3248]: E1101 01:36:43.083602 3248 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6874c989b8-tpm2w" podUID="c3dae49a-08cd-4d54-8b47-222aaaea72bd" Nov 1 01:36:43.460569 containerd[1914]: time="2025-11-01T01:36:43.460445882Z" level=info msg="StopPodSandbox 
for \"02ff93888d8d2b69a48d44c6a11c9b468a108c5e24e8a3c2752407e71187392f\"" Nov 1 01:36:43.557516 containerd[1914]: 2025-11-01 01:36:43.532 [INFO][5459] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="02ff93888d8d2b69a48d44c6a11c9b468a108c5e24e8a3c2752407e71187392f" Nov 1 01:36:43.557516 containerd[1914]: 2025-11-01 01:36:43.532 [INFO][5459] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="02ff93888d8d2b69a48d44c6a11c9b468a108c5e24e8a3c2752407e71187392f" iface="eth0" netns="/var/run/netns/cni-41d21786-a71f-8355-ef69-53af05d5140b" Nov 1 01:36:43.557516 containerd[1914]: 2025-11-01 01:36:43.532 [INFO][5459] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="02ff93888d8d2b69a48d44c6a11c9b468a108c5e24e8a3c2752407e71187392f" iface="eth0" netns="/var/run/netns/cni-41d21786-a71f-8355-ef69-53af05d5140b" Nov 1 01:36:43.557516 containerd[1914]: 2025-11-01 01:36:43.533 [INFO][5459] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="02ff93888d8d2b69a48d44c6a11c9b468a108c5e24e8a3c2752407e71187392f" iface="eth0" netns="/var/run/netns/cni-41d21786-a71f-8355-ef69-53af05d5140b" Nov 1 01:36:43.557516 containerd[1914]: 2025-11-01 01:36:43.533 [INFO][5459] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="02ff93888d8d2b69a48d44c6a11c9b468a108c5e24e8a3c2752407e71187392f" Nov 1 01:36:43.557516 containerd[1914]: 2025-11-01 01:36:43.533 [INFO][5459] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="02ff93888d8d2b69a48d44c6a11c9b468a108c5e24e8a3c2752407e71187392f" Nov 1 01:36:43.557516 containerd[1914]: 2025-11-01 01:36:43.548 [INFO][5478] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="02ff93888d8d2b69a48d44c6a11c9b468a108c5e24e8a3c2752407e71187392f" HandleID="k8s-pod-network.02ff93888d8d2b69a48d44c6a11c9b468a108c5e24e8a3c2752407e71187392f" Workload="ci--4081.3.6--n--4452d0b810-k8s-csi--node--driver--9grzl-eth0" Nov 1 01:36:43.557516 containerd[1914]: 2025-11-01 01:36:43.548 [INFO][5478] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:36:43.557516 containerd[1914]: 2025-11-01 01:36:43.548 [INFO][5478] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 01:36:43.557516 containerd[1914]: 2025-11-01 01:36:43.554 [WARNING][5478] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="02ff93888d8d2b69a48d44c6a11c9b468a108c5e24e8a3c2752407e71187392f" HandleID="k8s-pod-network.02ff93888d8d2b69a48d44c6a11c9b468a108c5e24e8a3c2752407e71187392f" Workload="ci--4081.3.6--n--4452d0b810-k8s-csi--node--driver--9grzl-eth0" Nov 1 01:36:43.557516 containerd[1914]: 2025-11-01 01:36:43.554 [INFO][5478] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="02ff93888d8d2b69a48d44c6a11c9b468a108c5e24e8a3c2752407e71187392f" HandleID="k8s-pod-network.02ff93888d8d2b69a48d44c6a11c9b468a108c5e24e8a3c2752407e71187392f" Workload="ci--4081.3.6--n--4452d0b810-k8s-csi--node--driver--9grzl-eth0" Nov 1 01:36:43.557516 containerd[1914]: 2025-11-01 01:36:43.555 [INFO][5478] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:36:43.557516 containerd[1914]: 2025-11-01 01:36:43.556 [INFO][5459] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="02ff93888d8d2b69a48d44c6a11c9b468a108c5e24e8a3c2752407e71187392f" Nov 1 01:36:43.558247 containerd[1914]: time="2025-11-01T01:36:43.557587535Z" level=info msg="TearDown network for sandbox \"02ff93888d8d2b69a48d44c6a11c9b468a108c5e24e8a3c2752407e71187392f\" successfully" Nov 1 01:36:43.558247 containerd[1914]: time="2025-11-01T01:36:43.557609250Z" level=info msg="StopPodSandbox for \"02ff93888d8d2b69a48d44c6a11c9b468a108c5e24e8a3c2752407e71187392f\" returns successfully" Nov 1 01:36:43.558247 containerd[1914]: time="2025-11-01T01:36:43.558126537Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9grzl,Uid:f1594ca4-f4b9-4bd5-a5ea-30623fb4e42b,Namespace:calico-system,Attempt:1,}" Nov 1 01:36:43.559887 systemd[1]: run-netns-cni\x2d41d21786\x2da71f\x2d8355\x2def69\x2d53af05d5140b.mount: Deactivated successfully. Nov 1 01:36:43.568098 kubelet[3248]: E1101 01:36:43.568076 3248 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6874c989b8-tpm2w" podUID="c3dae49a-08cd-4d54-8b47-222aaaea72bd" Nov 1 01:36:43.612998 systemd-networkd[1561]: calia192a6aa6df: Link UP Nov 1 01:36:43.613156 systemd-networkd[1561]: calia192a6aa6df: Gained carrier Nov 1 01:36:43.619881 containerd[1914]: 2025-11-01 01:36:43.581 [INFO][5496] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--4452d0b810-k8s-csi--node--driver--9grzl-eth0 csi-node-driver- calico-system f1594ca4-f4b9-4bd5-a5ea-30623fb4e42b 912 0 2025-11-01 01:36:21 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081.3.6-n-4452d0b810 csi-node-driver-9grzl eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calia192a6aa6df [] [] }} ContainerID="e2d8159f8ecc7d66083f06acab705bbd71e2e8c20461d75f0974e216c58e33ee" Namespace="calico-system" Pod="csi-node-driver-9grzl" WorkloadEndpoint="ci--4081.3.6--n--4452d0b810-k8s-csi--node--driver--9grzl-" Nov 1 01:36:43.619881 containerd[1914]: 2025-11-01 01:36:43.581 [INFO][5496] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e2d8159f8ecc7d66083f06acab705bbd71e2e8c20461d75f0974e216c58e33ee" Namespace="calico-system" Pod="csi-node-driver-9grzl" WorkloadEndpoint="ci--4081.3.6--n--4452d0b810-k8s-csi--node--driver--9grzl-eth0" Nov 1 01:36:43.619881 containerd[1914]: 2025-11-01 01:36:43.593 [INFO][5521] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e2d8159f8ecc7d66083f06acab705bbd71e2e8c20461d75f0974e216c58e33ee" HandleID="k8s-pod-network.e2d8159f8ecc7d66083f06acab705bbd71e2e8c20461d75f0974e216c58e33ee" Workload="ci--4081.3.6--n--4452d0b810-k8s-csi--node--driver--9grzl-eth0" Nov 1 01:36:43.619881 containerd[1914]: 2025-11-01 01:36:43.593 [INFO][5521] ipam/ipam_plugin.go 275: Auto assigning IP 
ContainerID="e2d8159f8ecc7d66083f06acab705bbd71e2e8c20461d75f0974e216c58e33ee" HandleID="k8s-pod-network.e2d8159f8ecc7d66083f06acab705bbd71e2e8c20461d75f0974e216c58e33ee" Workload="ci--4081.3.6--n--4452d0b810-k8s-csi--node--driver--9grzl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00026f7c0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-4452d0b810", "pod":"csi-node-driver-9grzl", "timestamp":"2025-11-01 01:36:43.593206678 +0000 UTC"}, Hostname:"ci-4081.3.6-n-4452d0b810", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 01:36:43.619881 containerd[1914]: 2025-11-01 01:36:43.593 [INFO][5521] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:36:43.619881 containerd[1914]: 2025-11-01 01:36:43.593 [INFO][5521] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 01:36:43.619881 containerd[1914]: 2025-11-01 01:36:43.593 [INFO][5521] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-4452d0b810' Nov 1 01:36:43.619881 containerd[1914]: 2025-11-01 01:36:43.597 [INFO][5521] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e2d8159f8ecc7d66083f06acab705bbd71e2e8c20461d75f0974e216c58e33ee" host="ci-4081.3.6-n-4452d0b810" Nov 1 01:36:43.619881 containerd[1914]: 2025-11-01 01:36:43.599 [INFO][5521] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-4452d0b810" Nov 1 01:36:43.619881 containerd[1914]: 2025-11-01 01:36:43.602 [INFO][5521] ipam/ipam.go 511: Trying affinity for 192.168.76.192/26 host="ci-4081.3.6-n-4452d0b810" Nov 1 01:36:43.619881 containerd[1914]: 2025-11-01 01:36:43.603 [INFO][5521] ipam/ipam.go 158: Attempting to load block cidr=192.168.76.192/26 host="ci-4081.3.6-n-4452d0b810" Nov 1 01:36:43.619881 containerd[1914]: 2025-11-01 01:36:43.604 [INFO][5521] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.76.192/26 host="ci-4081.3.6-n-4452d0b810" Nov 1 01:36:43.619881 containerd[1914]: 2025-11-01 01:36:43.604 [INFO][5521] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.76.192/26 handle="k8s-pod-network.e2d8159f8ecc7d66083f06acab705bbd71e2e8c20461d75f0974e216c58e33ee" host="ci-4081.3.6-n-4452d0b810" Nov 1 01:36:43.619881 containerd[1914]: 2025-11-01 01:36:43.605 [INFO][5521] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.e2d8159f8ecc7d66083f06acab705bbd71e2e8c20461d75f0974e216c58e33ee Nov 1 01:36:43.619881 containerd[1914]: 2025-11-01 01:36:43.607 [INFO][5521] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.76.192/26 handle="k8s-pod-network.e2d8159f8ecc7d66083f06acab705bbd71e2e8c20461d75f0974e216c58e33ee" host="ci-4081.3.6-n-4452d0b810" Nov 1 01:36:43.619881 containerd[1914]: 2025-11-01 01:36:43.611 [INFO][5521] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.76.195/26] block=192.168.76.192/26 handle="k8s-pod-network.e2d8159f8ecc7d66083f06acab705bbd71e2e8c20461d75f0974e216c58e33ee" host="ci-4081.3.6-n-4452d0b810" Nov 1 01:36:43.619881 containerd[1914]: 2025-11-01 01:36:43.611 [INFO][5521] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.76.195/26] handle="k8s-pod-network.e2d8159f8ecc7d66083f06acab705bbd71e2e8c20461d75f0974e216c58e33ee" host="ci-4081.3.6-n-4452d0b810" Nov 1 01:36:43.619881 containerd[1914]: 2025-11-01 01:36:43.611 [INFO][5521] ipam/ipam_plugin.go 
398: Released host-wide IPAM lock. Nov 1 01:36:43.619881 containerd[1914]: 2025-11-01 01:36:43.611 [INFO][5521] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.76.195/26] IPv6=[] ContainerID="e2d8159f8ecc7d66083f06acab705bbd71e2e8c20461d75f0974e216c58e33ee" HandleID="k8s-pod-network.e2d8159f8ecc7d66083f06acab705bbd71e2e8c20461d75f0974e216c58e33ee" Workload="ci--4081.3.6--n--4452d0b810-k8s-csi--node--driver--9grzl-eth0" Nov 1 01:36:43.620326 containerd[1914]: 2025-11-01 01:36:43.612 [INFO][5496] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e2d8159f8ecc7d66083f06acab705bbd71e2e8c20461d75f0974e216c58e33ee" Namespace="calico-system" Pod="csi-node-driver-9grzl" WorkloadEndpoint="ci--4081.3.6--n--4452d0b810-k8s-csi--node--driver--9grzl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--4452d0b810-k8s-csi--node--driver--9grzl-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f1594ca4-f4b9-4bd5-a5ea-30623fb4e42b", ResourceVersion:"912", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 36, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-4452d0b810", ContainerID:"", Pod:"csi-node-driver-9grzl", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.76.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia192a6aa6df", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:36:43.620326 containerd[1914]: 2025-11-01 01:36:43.612 [INFO][5496] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.76.195/32] ContainerID="e2d8159f8ecc7d66083f06acab705bbd71e2e8c20461d75f0974e216c58e33ee" Namespace="calico-system" Pod="csi-node-driver-9grzl" WorkloadEndpoint="ci--4081.3.6--n--4452d0b810-k8s-csi--node--driver--9grzl-eth0" Nov 1 01:36:43.620326 containerd[1914]: 2025-11-01 01:36:43.612 [INFO][5496] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia192a6aa6df ContainerID="e2d8159f8ecc7d66083f06acab705bbd71e2e8c20461d75f0974e216c58e33ee" Namespace="calico-system" Pod="csi-node-driver-9grzl" WorkloadEndpoint="ci--4081.3.6--n--4452d0b810-k8s-csi--node--driver--9grzl-eth0" Nov 1 01:36:43.620326 containerd[1914]: 2025-11-01 01:36:43.613 [INFO][5496] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e2d8159f8ecc7d66083f06acab705bbd71e2e8c20461d75f0974e216c58e33ee" Namespace="calico-system" Pod="csi-node-driver-9grzl" WorkloadEndpoint="ci--4081.3.6--n--4452d0b810-k8s-csi--node--driver--9grzl-eth0" Nov 1 01:36:43.620326 containerd[1914]: 2025-11-01 01:36:43.613 [INFO][5496] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="e2d8159f8ecc7d66083f06acab705bbd71e2e8c20461d75f0974e216c58e33ee" Namespace="calico-system" Pod="csi-node-driver-9grzl" WorkloadEndpoint="ci--4081.3.6--n--4452d0b810-k8s-csi--node--driver--9grzl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--4452d0b810-k8s-csi--node--driver--9grzl-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f1594ca4-f4b9-4bd5-a5ea-30623fb4e42b", ResourceVersion:"912", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 36, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-4452d0b810", ContainerID:"e2d8159f8ecc7d66083f06acab705bbd71e2e8c20461d75f0974e216c58e33ee", Pod:"csi-node-driver-9grzl", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.76.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia192a6aa6df", MAC:"1e:3c:dd:95:67:cc", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:36:43.620326 containerd[1914]: 2025-11-01 01:36:43.618 [INFO][5496] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e2d8159f8ecc7d66083f06acab705bbd71e2e8c20461d75f0974e216c58e33ee" Namespace="calico-system" Pod="csi-node-driver-9grzl" WorkloadEndpoint="ci--4081.3.6--n--4452d0b810-k8s-csi--node--driver--9grzl-eth0" Nov 1 01:36:43.627934 containerd[1914]: time="2025-11-01T01:36:43.627670147Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 01:36:43.627999 containerd[1914]: time="2025-11-01T01:36:43.627943982Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 01:36:43.627999 containerd[1914]: time="2025-11-01T01:36:43.627958878Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 01:36:43.628036 containerd[1914]: time="2025-11-01T01:36:43.628012196Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 01:36:43.647885 containerd[1914]: time="2025-11-01T01:36:43.647862284Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9grzl,Uid:f1594ca4-f4b9-4bd5-a5ea-30623fb4e42b,Namespace:calico-system,Attempt:1,} returns sandbox id \"e2d8159f8ecc7d66083f06acab705bbd71e2e8c20461d75f0974e216c58e33ee\"" Nov 1 01:36:43.648515 containerd[1914]: time="2025-11-01T01:36:43.648503676Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 1 01:36:44.024779 containerd[1914]: time="2025-11-01T01:36:44.024641555Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:36:44.025709 containerd[1914]: time="2025-11-01T01:36:44.025636891Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 1 01:36:44.025742 containerd[1914]: time="2025-11-01T01:36:44.025696413Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 1 01:36:44.025864 kubelet[3248]: E1101 01:36:44.025813 3248 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 01:36:44.025864 kubelet[3248]: E1101 01:36:44.025843 3248 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 01:36:44.025977 kubelet[3248]: E1101 01:36:44.025920 3248 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j5pvx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-9grzl_calico-system(f1594ca4-f4b9-4bd5-a5ea-30623fb4e42b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 1 01:36:44.028196 containerd[1914]: time="2025-11-01T01:36:44.028146649Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 1 01:36:44.392122 containerd[1914]: time="2025-11-01T01:36:44.391853644Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:36:44.392968 containerd[1914]: time="2025-11-01T01:36:44.392867827Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 1 01:36:44.393025 containerd[1914]: time="2025-11-01T01:36:44.392957988Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 1 01:36:44.393172 kubelet[3248]: E1101 01:36:44.393135 3248 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 01:36:44.393467 kubelet[3248]: E1101 01:36:44.393180 3248 
kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 01:36:44.393467 kubelet[3248]: E1101 01:36:44.393290 3248 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j5pvx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-9grzl_calico-system(f1594ca4-f4b9-4bd5-a5ea-30623fb4e42b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 1 01:36:44.394546 kubelet[3248]: E1101 01:36:44.394491 3248 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not 
found\"]" pod="calico-system/csi-node-driver-9grzl" podUID="f1594ca4-f4b9-4bd5-a5ea-30623fb4e42b" Nov 1 01:36:44.503554 systemd-networkd[1561]: cali0c0cfb5a0eb: Gained IPv6LL Nov 1 01:36:44.575379 kubelet[3248]: E1101 01:36:44.575257 3248 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6874c989b8-tpm2w" podUID="c3dae49a-08cd-4d54-8b47-222aaaea72bd" Nov 1 01:36:44.576591 kubelet[3248]: E1101 01:36:44.576383 3248 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9grzl" podUID="f1594ca4-f4b9-4bd5-a5ea-30623fb4e42b" Nov 1 01:36:44.631556 systemd-networkd[1561]: calia192a6aa6df: Gained IPv6LL Nov 1 01:36:45.458509 containerd[1914]: time="2025-11-01T01:36:45.458390279Z" level=info msg="StopPodSandbox for \"324c966ea7e4d95aef7ed3efaebd965087cb8160c5f12d9678bf6298c6527baa\"" Nov 1 01:36:45.459418 containerd[1914]: time="2025-11-01T01:36:45.458627418Z" level=info msg="StopPodSandbox for \"852ba39486e9461af2a375ea88112216039043ecd4d8553f8ab2922ae995852d\"" Nov 1 01:36:45.508138 containerd[1914]: 2025-11-01 01:36:45.491 [INFO][5608] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="324c966ea7e4d95aef7ed3efaebd965087cb8160c5f12d9678bf6298c6527baa" Nov 1 01:36:45.508138 containerd[1914]: 2025-11-01 01:36:45.491 [INFO][5608] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="324c966ea7e4d95aef7ed3efaebd965087cb8160c5f12d9678bf6298c6527baa" iface="eth0" netns="/var/run/netns/cni-f54274b7-94fa-4cb0-49d3-46670527fc44" Nov 1 01:36:45.508138 containerd[1914]: 2025-11-01 01:36:45.491 [INFO][5608] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="324c966ea7e4d95aef7ed3efaebd965087cb8160c5f12d9678bf6298c6527baa" iface="eth0" netns="/var/run/netns/cni-f54274b7-94fa-4cb0-49d3-46670527fc44" Nov 1 01:36:45.508138 containerd[1914]: 2025-11-01 01:36:45.491 [INFO][5608] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="324c966ea7e4d95aef7ed3efaebd965087cb8160c5f12d9678bf6298c6527baa" iface="eth0" netns="/var/run/netns/cni-f54274b7-94fa-4cb0-49d3-46670527fc44" Nov 1 01:36:45.508138 containerd[1914]: 2025-11-01 01:36:45.491 [INFO][5608] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="324c966ea7e4d95aef7ed3efaebd965087cb8160c5f12d9678bf6298c6527baa" Nov 1 01:36:45.508138 containerd[1914]: 2025-11-01 01:36:45.491 [INFO][5608] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="324c966ea7e4d95aef7ed3efaebd965087cb8160c5f12d9678bf6298c6527baa" Nov 1 01:36:45.508138 containerd[1914]: 2025-11-01 01:36:45.501 [INFO][5639] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="324c966ea7e4d95aef7ed3efaebd965087cb8160c5f12d9678bf6298c6527baa" HandleID="k8s-pod-network.324c966ea7e4d95aef7ed3efaebd965087cb8160c5f12d9678bf6298c6527baa" Workload="ci--4081.3.6--n--4452d0b810-k8s-coredns--668d6bf9bc--zwxtd-eth0" Nov 1 01:36:45.508138 containerd[1914]: 2025-11-01 01:36:45.501 [INFO][5639] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:36:45.508138 containerd[1914]: 2025-11-01 01:36:45.501 [INFO][5639] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 01:36:45.508138 containerd[1914]: 2025-11-01 01:36:45.505 [WARNING][5639] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="324c966ea7e4d95aef7ed3efaebd965087cb8160c5f12d9678bf6298c6527baa" HandleID="k8s-pod-network.324c966ea7e4d95aef7ed3efaebd965087cb8160c5f12d9678bf6298c6527baa" Workload="ci--4081.3.6--n--4452d0b810-k8s-coredns--668d6bf9bc--zwxtd-eth0" Nov 1 01:36:45.508138 containerd[1914]: 2025-11-01 01:36:45.505 [INFO][5639] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="324c966ea7e4d95aef7ed3efaebd965087cb8160c5f12d9678bf6298c6527baa" HandleID="k8s-pod-network.324c966ea7e4d95aef7ed3efaebd965087cb8160c5f12d9678bf6298c6527baa" Workload="ci--4081.3.6--n--4452d0b810-k8s-coredns--668d6bf9bc--zwxtd-eth0" Nov 1 01:36:45.508138 containerd[1914]: 2025-11-01 01:36:45.506 [INFO][5639] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:36:45.508138 containerd[1914]: 2025-11-01 01:36:45.507 [INFO][5608] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="324c966ea7e4d95aef7ed3efaebd965087cb8160c5f12d9678bf6298c6527baa" Nov 1 01:36:45.508429 containerd[1914]: time="2025-11-01T01:36:45.508217198Z" level=info msg="TearDown network for sandbox \"324c966ea7e4d95aef7ed3efaebd965087cb8160c5f12d9678bf6298c6527baa\" successfully" Nov 1 01:36:45.508429 containerd[1914]: time="2025-11-01T01:36:45.508233955Z" level=info msg="StopPodSandbox for \"324c966ea7e4d95aef7ed3efaebd965087cb8160c5f12d9678bf6298c6527baa\" returns successfully" Nov 1 01:36:45.508660 containerd[1914]: time="2025-11-01T01:36:45.508643497Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-zwxtd,Uid:bce1db93-db69-4f85-98fe-9ebedd24ee18,Namespace:kube-system,Attempt:1,}" Nov 1 01:36:45.510110 systemd[1]: run-netns-cni\x2df54274b7\x2d94fa\x2d4cb0\x2d49d3\x2d46670527fc44.mount: Deactivated successfully. Nov 1 01:36:45.513975 containerd[1914]: 2025-11-01 01:36:45.491 [INFO][5609] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="852ba39486e9461af2a375ea88112216039043ecd4d8553f8ab2922ae995852d" Nov 1 01:36:45.513975 containerd[1914]: 2025-11-01 01:36:45.491 [INFO][5609] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="852ba39486e9461af2a375ea88112216039043ecd4d8553f8ab2922ae995852d" iface="eth0" netns="/var/run/netns/cni-fb0e2e36-b51b-2f43-8a59-e7635194e929" Nov 1 01:36:45.513975 containerd[1914]: 2025-11-01 01:36:45.492 [INFO][5609] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="852ba39486e9461af2a375ea88112216039043ecd4d8553f8ab2922ae995852d" iface="eth0" netns="/var/run/netns/cni-fb0e2e36-b51b-2f43-8a59-e7635194e929" Nov 1 01:36:45.513975 containerd[1914]: 2025-11-01 01:36:45.492 [INFO][5609] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="852ba39486e9461af2a375ea88112216039043ecd4d8553f8ab2922ae995852d" iface="eth0" netns="/var/run/netns/cni-fb0e2e36-b51b-2f43-8a59-e7635194e929" Nov 1 01:36:45.513975 containerd[1914]: 2025-11-01 01:36:45.492 [INFO][5609] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="852ba39486e9461af2a375ea88112216039043ecd4d8553f8ab2922ae995852d" Nov 1 01:36:45.513975 containerd[1914]: 2025-11-01 01:36:45.492 [INFO][5609] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="852ba39486e9461af2a375ea88112216039043ecd4d8553f8ab2922ae995852d" Nov 1 01:36:45.513975 containerd[1914]: 2025-11-01 01:36:45.501 [INFO][5641] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="852ba39486e9461af2a375ea88112216039043ecd4d8553f8ab2922ae995852d" HandleID="k8s-pod-network.852ba39486e9461af2a375ea88112216039043ecd4d8553f8ab2922ae995852d" Workload="ci--4081.3.6--n--4452d0b810-k8s-calico--apiserver--65957f8fc6--nf7rw-eth0" Nov 1 01:36:45.513975 containerd[1914]: 2025-11-01 01:36:45.501 [INFO][5641] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:36:45.513975 containerd[1914]: 2025-11-01 01:36:45.506 [INFO][5641] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 01:36:45.513975 containerd[1914]: 2025-11-01 01:36:45.510 [WARNING][5641] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="852ba39486e9461af2a375ea88112216039043ecd4d8553f8ab2922ae995852d" HandleID="k8s-pod-network.852ba39486e9461af2a375ea88112216039043ecd4d8553f8ab2922ae995852d" Workload="ci--4081.3.6--n--4452d0b810-k8s-calico--apiserver--65957f8fc6--nf7rw-eth0" Nov 1 01:36:45.513975 containerd[1914]: 2025-11-01 01:36:45.510 [INFO][5641] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="852ba39486e9461af2a375ea88112216039043ecd4d8553f8ab2922ae995852d" HandleID="k8s-pod-network.852ba39486e9461af2a375ea88112216039043ecd4d8553f8ab2922ae995852d" Workload="ci--4081.3.6--n--4452d0b810-k8s-calico--apiserver--65957f8fc6--nf7rw-eth0" Nov 1 01:36:45.513975 containerd[1914]: 2025-11-01 01:36:45.511 [INFO][5641] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:36:45.513975 containerd[1914]: 2025-11-01 01:36:45.513 [INFO][5609] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="852ba39486e9461af2a375ea88112216039043ecd4d8553f8ab2922ae995852d" Nov 1 01:36:45.514258 containerd[1914]: time="2025-11-01T01:36:45.514036727Z" level=info msg="TearDown network for sandbox \"852ba39486e9461af2a375ea88112216039043ecd4d8553f8ab2922ae995852d\" successfully" Nov 1 01:36:45.514258 containerd[1914]: time="2025-11-01T01:36:45.514051876Z" level=info msg="StopPodSandbox for \"852ba39486e9461af2a375ea88112216039043ecd4d8553f8ab2922ae995852d\" returns successfully" Nov 1 01:36:45.514399 containerd[1914]: time="2025-11-01T01:36:45.514386884Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-65957f8fc6-nf7rw,Uid:5dd05a57-68a7-45f0-82e4-c02ae8d0fe49,Namespace:calico-apiserver,Attempt:1,}" Nov 1 01:36:45.516849 systemd[1]: run-netns-cni\x2dfb0e2e36\x2db51b\x2d2f43\x2d8a59\x2de7635194e929.mount: Deactivated successfully. Nov 1 01:36:45.570556 systemd-networkd[1561]: cali20b146f9ad2: Link UP Nov 1 01:36:45.570730 systemd-networkd[1561]: cali20b146f9ad2: Gained carrier Nov 1 01:36:45.575166 kubelet[3248]: E1101 01:36:45.575131 3248 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9grzl" podUID="f1594ca4-f4b9-4bd5-a5ea-30623fb4e42b" Nov 1 01:36:45.576820 containerd[1914]: 2025-11-01 01:36:45.535 [INFO][5679] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--4452d0b810-k8s-calico--apiserver--65957f8fc6--nf7rw-eth0 calico-apiserver-65957f8fc6- calico-apiserver 5dd05a57-68a7-45f0-82e4-c02ae8d0fe49 939 0 2025-11-01 01:36:17 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:65957f8fc6 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.6-n-4452d0b810 calico-apiserver-65957f8fc6-nf7rw eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali20b146f9ad2 [] [] }} ContainerID="858c3bc0e439e57bdaa7476669e7538d2fc5894bcbcb00255e723ac01df9db46" Namespace="calico-apiserver" Pod="calico-apiserver-65957f8fc6-nf7rw" WorkloadEndpoint="ci--4081.3.6--n--4452d0b810-k8s-calico--apiserver--65957f8fc6--nf7rw-" Nov 1 01:36:45.576820 containerd[1914]: 2025-11-01 01:36:45.536 [INFO][5679] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="858c3bc0e439e57bdaa7476669e7538d2fc5894bcbcb00255e723ac01df9db46" Namespace="calico-apiserver" Pod="calico-apiserver-65957f8fc6-nf7rw" WorkloadEndpoint="ci--4081.3.6--n--4452d0b810-k8s-calico--apiserver--65957f8fc6--nf7rw-eth0" Nov 1 01:36:45.576820 
containerd[1914]: 2025-11-01 01:36:45.547 [INFO][5717] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="858c3bc0e439e57bdaa7476669e7538d2fc5894bcbcb00255e723ac01df9db46" HandleID="k8s-pod-network.858c3bc0e439e57bdaa7476669e7538d2fc5894bcbcb00255e723ac01df9db46" Workload="ci--4081.3.6--n--4452d0b810-k8s-calico--apiserver--65957f8fc6--nf7rw-eth0" Nov 1 01:36:45.576820 containerd[1914]: 2025-11-01 01:36:45.547 [INFO][5717] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="858c3bc0e439e57bdaa7476669e7538d2fc5894bcbcb00255e723ac01df9db46" HandleID="k8s-pod-network.858c3bc0e439e57bdaa7476669e7538d2fc5894bcbcb00255e723ac01df9db46" Workload="ci--4081.3.6--n--4452d0b810-k8s-calico--apiserver--65957f8fc6--nf7rw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cfd10), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.6-n-4452d0b810", "pod":"calico-apiserver-65957f8fc6-nf7rw", "timestamp":"2025-11-01 01:36:45.547878298 +0000 UTC"}, Hostname:"ci-4081.3.6-n-4452d0b810", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 01:36:45.576820 containerd[1914]: 2025-11-01 01:36:45.547 [INFO][5717] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:36:45.576820 containerd[1914]: 2025-11-01 01:36:45.548 [INFO][5717] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 01:36:45.576820 containerd[1914]: 2025-11-01 01:36:45.548 [INFO][5717] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-4452d0b810' Nov 1 01:36:45.576820 containerd[1914]: 2025-11-01 01:36:45.552 [INFO][5717] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.858c3bc0e439e57bdaa7476669e7538d2fc5894bcbcb00255e723ac01df9db46" host="ci-4081.3.6-n-4452d0b810" Nov 1 01:36:45.576820 containerd[1914]: 2025-11-01 01:36:45.555 [INFO][5717] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-4452d0b810" Nov 1 01:36:45.576820 containerd[1914]: 2025-11-01 01:36:45.558 [INFO][5717] ipam/ipam.go 511: Trying affinity for 192.168.76.192/26 host="ci-4081.3.6-n-4452d0b810" Nov 1 01:36:45.576820 containerd[1914]: 2025-11-01 01:36:45.559 [INFO][5717] ipam/ipam.go 158: Attempting to load block cidr=192.168.76.192/26 host="ci-4081.3.6-n-4452d0b810" Nov 1 01:36:45.576820 containerd[1914]: 2025-11-01 01:36:45.561 [INFO][5717] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.76.192/26 host="ci-4081.3.6-n-4452d0b810" Nov 1 01:36:45.576820 containerd[1914]: 2025-11-01 01:36:45.561 [INFO][5717] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.76.192/26 handle="k8s-pod-network.858c3bc0e439e57bdaa7476669e7538d2fc5894bcbcb00255e723ac01df9db46" host="ci-4081.3.6-n-4452d0b810" Nov 1 01:36:45.576820 containerd[1914]: 2025-11-01 01:36:45.562 [INFO][5717] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.858c3bc0e439e57bdaa7476669e7538d2fc5894bcbcb00255e723ac01df9db46 Nov 1 01:36:45.576820 containerd[1914]: 2025-11-01 01:36:45.565 [INFO][5717] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.76.192/26 handle="k8s-pod-network.858c3bc0e439e57bdaa7476669e7538d2fc5894bcbcb00255e723ac01df9db46" host="ci-4081.3.6-n-4452d0b810" Nov 1 01:36:45.576820 containerd[1914]: 2025-11-01 01:36:45.568 [INFO][5717] ipam/ipam.go 1262: Successfully claimed IPs: 
[192.168.76.196/26] block=192.168.76.192/26 handle="k8s-pod-network.858c3bc0e439e57bdaa7476669e7538d2fc5894bcbcb00255e723ac01df9db46" host="ci-4081.3.6-n-4452d0b810" Nov 1 01:36:45.576820 containerd[1914]: 2025-11-01 01:36:45.568 [INFO][5717] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.76.196/26] handle="k8s-pod-network.858c3bc0e439e57bdaa7476669e7538d2fc5894bcbcb00255e723ac01df9db46" host="ci-4081.3.6-n-4452d0b810" Nov 1 01:36:45.576820 containerd[1914]: 2025-11-01 01:36:45.568 [INFO][5717] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:36:45.576820 containerd[1914]: 2025-11-01 01:36:45.568 [INFO][5717] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.76.196/26] IPv6=[] ContainerID="858c3bc0e439e57bdaa7476669e7538d2fc5894bcbcb00255e723ac01df9db46" HandleID="k8s-pod-network.858c3bc0e439e57bdaa7476669e7538d2fc5894bcbcb00255e723ac01df9db46" Workload="ci--4081.3.6--n--4452d0b810-k8s-calico--apiserver--65957f8fc6--nf7rw-eth0" Nov 1 01:36:45.577273 containerd[1914]: 2025-11-01 01:36:45.569 [INFO][5679] cni-plugin/k8s.go 418: Populated endpoint ContainerID="858c3bc0e439e57bdaa7476669e7538d2fc5894bcbcb00255e723ac01df9db46" Namespace="calico-apiserver" Pod="calico-apiserver-65957f8fc6-nf7rw" WorkloadEndpoint="ci--4081.3.6--n--4452d0b810-k8s-calico--apiserver--65957f8fc6--nf7rw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--4452d0b810-k8s-calico--apiserver--65957f8fc6--nf7rw-eth0", GenerateName:"calico-apiserver-65957f8fc6-", Namespace:"calico-apiserver", SelfLink:"", UID:"5dd05a57-68a7-45f0-82e4-c02ae8d0fe49", ResourceVersion:"939", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 36, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"65957f8fc6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-4452d0b810", ContainerID:"", Pod:"calico-apiserver-65957f8fc6-nf7rw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.76.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali20b146f9ad2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:36:45.577273 containerd[1914]: 2025-11-01 01:36:45.569 [INFO][5679] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.76.196/32] ContainerID="858c3bc0e439e57bdaa7476669e7538d2fc5894bcbcb00255e723ac01df9db46" Namespace="calico-apiserver" Pod="calico-apiserver-65957f8fc6-nf7rw" WorkloadEndpoint="ci--4081.3.6--n--4452d0b810-k8s-calico--apiserver--65957f8fc6--nf7rw-eth0" Nov 1 01:36:45.577273 containerd[1914]: 2025-11-01 01:36:45.569 [INFO][5679] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali20b146f9ad2 ContainerID="858c3bc0e439e57bdaa7476669e7538d2fc5894bcbcb00255e723ac01df9db46" Namespace="calico-apiserver" 
Pod="calico-apiserver-65957f8fc6-nf7rw" WorkloadEndpoint="ci--4081.3.6--n--4452d0b810-k8s-calico--apiserver--65957f8fc6--nf7rw-eth0" Nov 1 01:36:45.577273 containerd[1914]: 2025-11-01 01:36:45.570 [INFO][5679] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="858c3bc0e439e57bdaa7476669e7538d2fc5894bcbcb00255e723ac01df9db46" Namespace="calico-apiserver" Pod="calico-apiserver-65957f8fc6-nf7rw" WorkloadEndpoint="ci--4081.3.6--n--4452d0b810-k8s-calico--apiserver--65957f8fc6--nf7rw-eth0" Nov 1 01:36:45.577273 containerd[1914]: 2025-11-01 01:36:45.571 [INFO][5679] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="858c3bc0e439e57bdaa7476669e7538d2fc5894bcbcb00255e723ac01df9db46" Namespace="calico-apiserver" Pod="calico-apiserver-65957f8fc6-nf7rw" WorkloadEndpoint="ci--4081.3.6--n--4452d0b810-k8s-calico--apiserver--65957f8fc6--nf7rw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--4452d0b810-k8s-calico--apiserver--65957f8fc6--nf7rw-eth0", GenerateName:"calico-apiserver-65957f8fc6-", Namespace:"calico-apiserver", SelfLink:"", UID:"5dd05a57-68a7-45f0-82e4-c02ae8d0fe49", ResourceVersion:"939", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 36, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"65957f8fc6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-4452d0b810", ContainerID:"858c3bc0e439e57bdaa7476669e7538d2fc5894bcbcb00255e723ac01df9db46", Pod:"calico-apiserver-65957f8fc6-nf7rw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.76.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali20b146f9ad2", MAC:"82:95:2f:ea:c3:0e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:36:45.577273 containerd[1914]: 2025-11-01 01:36:45.575 [INFO][5679] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="858c3bc0e439e57bdaa7476669e7538d2fc5894bcbcb00255e723ac01df9db46" Namespace="calico-apiserver" Pod="calico-apiserver-65957f8fc6-nf7rw" WorkloadEndpoint="ci--4081.3.6--n--4452d0b810-k8s-calico--apiserver--65957f8fc6--nf7rw-eth0" Nov 1 01:36:45.585140 containerd[1914]: time="2025-11-01T01:36:45.585062583Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 01:36:45.585140 containerd[1914]: time="2025-11-01T01:36:45.585112220Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 01:36:45.585372 containerd[1914]: time="2025-11-01T01:36:45.585316549Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 01:36:45.585408 containerd[1914]: time="2025-11-01T01:36:45.585370275Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 01:36:45.628364 containerd[1914]: time="2025-11-01T01:36:45.628341506Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-65957f8fc6-nf7rw,Uid:5dd05a57-68a7-45f0-82e4-c02ae8d0fe49,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"858c3bc0e439e57bdaa7476669e7538d2fc5894bcbcb00255e723ac01df9db46\"" Nov 1 01:36:45.629104 containerd[1914]: time="2025-11-01T01:36:45.629087879Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 01:36:45.719604 systemd-networkd[1561]: cali2a577988a5c: Link UP Nov 1 01:36:45.720355 systemd-networkd[1561]: cali2a577988a5c: Gained carrier Nov 1 01:36:45.752411 containerd[1914]: 2025-11-01 01:36:45.535 [INFO][5670] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--4452d0b810-k8s-coredns--668d6bf9bc--zwxtd-eth0 coredns-668d6bf9bc- kube-system bce1db93-db69-4f85-98fe-9ebedd24ee18 940 0 2025-11-01 01:36:10 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.6-n-4452d0b810 coredns-668d6bf9bc-zwxtd eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali2a577988a5c [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="158bdd262aca13c5a052f938f72d7b7b5c7162913928411963c3d2c28f4a34d0" Namespace="kube-system" Pod="coredns-668d6bf9bc-zwxtd" WorkloadEndpoint="ci--4081.3.6--n--4452d0b810-k8s-coredns--668d6bf9bc--zwxtd-" Nov 1 01:36:45.752411 containerd[1914]: 2025-11-01 01:36:45.535 [INFO][5670] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="158bdd262aca13c5a052f938f72d7b7b5c7162913928411963c3d2c28f4a34d0" Namespace="kube-system" Pod="coredns-668d6bf9bc-zwxtd" WorkloadEndpoint="ci--4081.3.6--n--4452d0b810-k8s-coredns--668d6bf9bc--zwxtd-eth0" Nov 1 01:36:45.752411 containerd[1914]: 2025-11-01 01:36:45.547 [INFO][5715] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="158bdd262aca13c5a052f938f72d7b7b5c7162913928411963c3d2c28f4a34d0" HandleID="k8s-pod-network.158bdd262aca13c5a052f938f72d7b7b5c7162913928411963c3d2c28f4a34d0" Workload="ci--4081.3.6--n--4452d0b810-k8s-coredns--668d6bf9bc--zwxtd-eth0" Nov 1 01:36:45.752411 containerd[1914]: 2025-11-01 01:36:45.547 [INFO][5715] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="158bdd262aca13c5a052f938f72d7b7b5c7162913928411963c3d2c28f4a34d0" HandleID="k8s-pod-network.158bdd262aca13c5a052f938f72d7b7b5c7162913928411963c3d2c28f4a34d0" Workload="ci--4081.3.6--n--4452d0b810-k8s-coredns--668d6bf9bc--zwxtd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000385ed0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.6-n-4452d0b810", "pod":"coredns-668d6bf9bc-zwxtd", "timestamp":"2025-11-01 01:36:45.547877552 +0000 UTC"}, Hostname:"ci-4081.3.6-n-4452d0b810", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 01:36:45.752411 containerd[1914]: 2025-11-01 01:36:45.547 [INFO][5715] ipam/ipam_plugin.go 377: 
About to acquire host-wide IPAM lock. Nov 1 01:36:45.752411 containerd[1914]: 2025-11-01 01:36:45.568 [INFO][5715] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 01:36:45.752411 containerd[1914]: 2025-11-01 01:36:45.568 [INFO][5715] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-4452d0b810' Nov 1 01:36:45.752411 containerd[1914]: 2025-11-01 01:36:45.654 [INFO][5715] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.158bdd262aca13c5a052f938f72d7b7b5c7162913928411963c3d2c28f4a34d0" host="ci-4081.3.6-n-4452d0b810" Nov 1 01:36:45.752411 containerd[1914]: 2025-11-01 01:36:45.666 [INFO][5715] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-4452d0b810" Nov 1 01:36:45.752411 containerd[1914]: 2025-11-01 01:36:45.679 [INFO][5715] ipam/ipam.go 511: Trying affinity for 192.168.76.192/26 host="ci-4081.3.6-n-4452d0b810" Nov 1 01:36:45.752411 containerd[1914]: 2025-11-01 01:36:45.683 [INFO][5715] ipam/ipam.go 158: Attempting to load block cidr=192.168.76.192/26 host="ci-4081.3.6-n-4452d0b810" Nov 1 01:36:45.752411 containerd[1914]: 2025-11-01 01:36:45.688 [INFO][5715] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.76.192/26 host="ci-4081.3.6-n-4452d0b810" Nov 1 01:36:45.752411 containerd[1914]: 2025-11-01 01:36:45.688 [INFO][5715] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.76.192/26 handle="k8s-pod-network.158bdd262aca13c5a052f938f72d7b7b5c7162913928411963c3d2c28f4a34d0" host="ci-4081.3.6-n-4452d0b810" Nov 1 01:36:45.752411 containerd[1914]: 2025-11-01 01:36:45.691 [INFO][5715] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.158bdd262aca13c5a052f938f72d7b7b5c7162913928411963c3d2c28f4a34d0 Nov 1 01:36:45.752411 containerd[1914]: 2025-11-01 01:36:45.698 [INFO][5715] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.76.192/26 handle="k8s-pod-network.158bdd262aca13c5a052f938f72d7b7b5c7162913928411963c3d2c28f4a34d0" host="ci-4081.3.6-n-4452d0b810" Nov 1 01:36:45.752411 containerd[1914]: 2025-11-01 01:36:45.710 [INFO][5715] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.76.197/26] block=192.168.76.192/26 handle="k8s-pod-network.158bdd262aca13c5a052f938f72d7b7b5c7162913928411963c3d2c28f4a34d0" host="ci-4081.3.6-n-4452d0b810" Nov 1 01:36:45.752411 containerd[1914]: 2025-11-01 01:36:45.710 [INFO][5715] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.76.197/26] handle="k8s-pod-network.158bdd262aca13c5a052f938f72d7b7b5c7162913928411963c3d2c28f4a34d0" host="ci-4081.3.6-n-4452d0b810" Nov 1 01:36:45.752411 containerd[1914]: 2025-11-01 01:36:45.710 [INFO][5715] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
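
The IPAM records above trace the pattern Calico applies for every pod on this node: acquire the host-wide IPAM lock, confirm the node's affinity for the block 192.168.76.192/26, claim the next free address from that block (192.168.76.196 for the calico-apiserver pod, 192.168.76.197 for coredns-668d6bf9bc-zwxtd), then release the lock. As a rough sketch of that idea only -- this is not Calico's implementation, and the Block type with its NewBlock, Assign and nextIP helpers is invented for the example -- "next free address from an affine block, serialized by a lock" can be written in a few lines of Go:

```go
package main

import (
	"fmt"
	"net"
	"sync"
)

// Block is a hypothetical stand-in for an IPAM affinity block such as
// 192.168.76.192/26: a CIDR owned by one node plus a record of used addresses.
type Block struct {
	mu   sync.Mutex // plays the role of the host-wide IPAM lock seen in the log
	cidr *net.IPNet
	used map[string]bool
}

// NewBlock parses the block CIDR and starts with no addresses assigned.
func NewBlock(cidr string) (*Block, error) {
	_, n, err := net.ParseCIDR(cidr)
	if err != nil {
		return nil, err
	}
	return &Block{cidr: n, used: map[string]bool{}}, nil
}

// Assign hands out the next free address in the block, mirroring the
// "Attempting to assign 1 addresses from block" step in the log.
func (b *Block) Assign() (net.IP, error) {
	b.mu.Lock()
	defer b.mu.Unlock()
	ip := b.cidr.IP.Mask(b.cidr.Mask) // start at the block's network address
	for ; b.cidr.Contains(ip); ip = nextIP(ip) {
		if !b.used[ip.String()] {
			b.used[ip.String()] = true
			return ip, nil
		}
	}
	return nil, fmt.Errorf("block %s exhausted", b.cidr)
}

// nextIP returns a copy of ip incremented by one, with byte carry.
func nextIP(ip net.IP) net.IP {
	out := make(net.IP, len(ip))
	copy(out, ip)
	for i := len(out) - 1; i >= 0; i-- {
		out[i]++
		if out[i] != 0 {
			break
		}
	}
	return out
}

func main() {
	b, _ := NewBlock("192.168.76.192/26")
	for i := 0; i < 3; i++ {
		ip, _ := b.Assign()
		fmt.Println(ip) // this sketch yields 192.168.76.192, .193, .194, ...
	}
}
```

In Calico itself the block and its allocation state live in the cluster datastore rather than in process memory, and the host-wide lock serializes concurrent CNI requests on the node, which is why every assignment in the log is bracketed by explicit "Acquired"/"Released host-wide IPAM lock" records.
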
Nov 1 01:36:45.752411 containerd[1914]: 2025-11-01 01:36:45.710 [INFO][5715] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.76.197/26] IPv6=[] ContainerID="158bdd262aca13c5a052f938f72d7b7b5c7162913928411963c3d2c28f4a34d0" HandleID="k8s-pod-network.158bdd262aca13c5a052f938f72d7b7b5c7162913928411963c3d2c28f4a34d0" Workload="ci--4081.3.6--n--4452d0b810-k8s-coredns--668d6bf9bc--zwxtd-eth0" Nov 1 01:36:45.754962 containerd[1914]: 2025-11-01 01:36:45.714 [INFO][5670] cni-plugin/k8s.go 418: Populated endpoint ContainerID="158bdd262aca13c5a052f938f72d7b7b5c7162913928411963c3d2c28f4a34d0" Namespace="kube-system" Pod="coredns-668d6bf9bc-zwxtd" WorkloadEndpoint="ci--4081.3.6--n--4452d0b810-k8s-coredns--668d6bf9bc--zwxtd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--4452d0b810-k8s-coredns--668d6bf9bc--zwxtd-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"bce1db93-db69-4f85-98fe-9ebedd24ee18", ResourceVersion:"940", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 36, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-4452d0b810", ContainerID:"", Pod:"coredns-668d6bf9bc-zwxtd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.76.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2a577988a5c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:36:45.754962 containerd[1914]: 2025-11-01 01:36:45.715 [INFO][5670] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.76.197/32] ContainerID="158bdd262aca13c5a052f938f72d7b7b5c7162913928411963c3d2c28f4a34d0" Namespace="kube-system" Pod="coredns-668d6bf9bc-zwxtd" WorkloadEndpoint="ci--4081.3.6--n--4452d0b810-k8s-coredns--668d6bf9bc--zwxtd-eth0" Nov 1 01:36:45.754962 containerd[1914]: 2025-11-01 01:36:45.715 [INFO][5670] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2a577988a5c ContainerID="158bdd262aca13c5a052f938f72d7b7b5c7162913928411963c3d2c28f4a34d0" Namespace="kube-system" Pod="coredns-668d6bf9bc-zwxtd" WorkloadEndpoint="ci--4081.3.6--n--4452d0b810-k8s-coredns--668d6bf9bc--zwxtd-eth0" Nov 1 01:36:45.754962 containerd[1914]: 2025-11-01 01:36:45.721 [INFO][5670] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="158bdd262aca13c5a052f938f72d7b7b5c7162913928411963c3d2c28f4a34d0" Namespace="kube-system" 
Pod="coredns-668d6bf9bc-zwxtd" WorkloadEndpoint="ci--4081.3.6--n--4452d0b810-k8s-coredns--668d6bf9bc--zwxtd-eth0" Nov 1 01:36:45.754962 containerd[1914]: 2025-11-01 01:36:45.722 [INFO][5670] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="158bdd262aca13c5a052f938f72d7b7b5c7162913928411963c3d2c28f4a34d0" Namespace="kube-system" Pod="coredns-668d6bf9bc-zwxtd" WorkloadEndpoint="ci--4081.3.6--n--4452d0b810-k8s-coredns--668d6bf9bc--zwxtd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--4452d0b810-k8s-coredns--668d6bf9bc--zwxtd-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"bce1db93-db69-4f85-98fe-9ebedd24ee18", ResourceVersion:"940", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 36, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-4452d0b810", ContainerID:"158bdd262aca13c5a052f938f72d7b7b5c7162913928411963c3d2c28f4a34d0", Pod:"coredns-668d6bf9bc-zwxtd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.76.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2a577988a5c", MAC:"22:75:cf:8b:29:d1", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:36:45.754962 containerd[1914]: 2025-11-01 01:36:45.748 [INFO][5670] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="158bdd262aca13c5a052f938f72d7b7b5c7162913928411963c3d2c28f4a34d0" Namespace="kube-system" Pod="coredns-668d6bf9bc-zwxtd" WorkloadEndpoint="ci--4081.3.6--n--4452d0b810-k8s-coredns--668d6bf9bc--zwxtd-eth0" Nov 1 01:36:45.770679 containerd[1914]: time="2025-11-01T01:36:45.770107344Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 01:36:45.770679 containerd[1914]: time="2025-11-01T01:36:45.770597533Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 01:36:45.770679 containerd[1914]: time="2025-11-01T01:36:45.770615303Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 01:36:45.770852 containerd[1914]: time="2025-11-01T01:36:45.770704629Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 01:36:45.844634 containerd[1914]: time="2025-11-01T01:36:45.844577543Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-zwxtd,Uid:bce1db93-db69-4f85-98fe-9ebedd24ee18,Namespace:kube-system,Attempt:1,} returns sandbox id \"158bdd262aca13c5a052f938f72d7b7b5c7162913928411963c3d2c28f4a34d0\"" Nov 1 01:36:45.846361 containerd[1914]: time="2025-11-01T01:36:45.846312182Z" level=info msg="CreateContainer within sandbox \"158bdd262aca13c5a052f938f72d7b7b5c7162913928411963c3d2c28f4a34d0\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 1 01:36:45.850820 containerd[1914]: time="2025-11-01T01:36:45.850777706Z" level=info msg="CreateContainer within sandbox \"158bdd262aca13c5a052f938f72d7b7b5c7162913928411963c3d2c28f4a34d0\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6df7e89116c7e4db80e68656d5d80b0200f7678afc6317b9598cea2453892484\"" Nov 1 01:36:45.851000 containerd[1914]: time="2025-11-01T01:36:45.850985261Z" level=info msg="StartContainer for \"6df7e89116c7e4db80e68656d5d80b0200f7678afc6317b9598cea2453892484\"" Nov 1 01:36:45.891888 containerd[1914]: time="2025-11-01T01:36:45.891831916Z" level=info msg="StartContainer for \"6df7e89116c7e4db80e68656d5d80b0200f7678afc6317b9598cea2453892484\" returns successfully" Nov 1 01:36:46.024104 containerd[1914]: time="2025-11-01T01:36:46.023840809Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:36:46.024762 containerd[1914]: time="2025-11-01T01:36:46.024696681Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 01:36:46.024804 containerd[1914]: time="2025-11-01T01:36:46.024753036Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 1 01:36:46.024898 kubelet[3248]: E1101 01:36:46.024847 3248 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 01:36:46.024898 kubelet[3248]: E1101 01:36:46.024880 3248 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 01:36:46.025011 kubelet[3248]: E1101 01:36:46.024960 3248 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-89872,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-65957f8fc6-nf7rw_calico-apiserver(5dd05a57-68a7-45f0-82e4-c02ae8d0fe49): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 01:36:46.026119 kubelet[3248]: E1101 01:36:46.026072 3248 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65957f8fc6-nf7rw" podUID="5dd05a57-68a7-45f0-82e4-c02ae8d0fe49" Nov 1 01:36:46.458537 containerd[1914]: time="2025-11-01T01:36:46.458428001Z" level=info msg="StopPodSandbox for \"662e7d58e74330bf48ecbb168fa034d30f2558a48b75c67b66bc989ff33cdffd\"" Nov 1 01:36:46.459450 containerd[1914]: time="2025-11-01T01:36:46.458529197Z" level=info msg="StopPodSandbox for \"00d443e4266c162d13e4f030a6ca27f0684acf44857f4446caac3b06900c9dec\"" Nov 1 01:36:46.459450 containerd[1914]: time="2025-11-01T01:36:46.458558512Z" level=info msg="StopPodSandbox for \"3b7174d8fff7d51186e61fac7be0254fb60c9dbaf23e4f5c08e0728fad3d9fdc\"" Nov 1 01:36:46.540722 containerd[1914]: 2025-11-01 01:36:46.520 [INFO][5932] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3b7174d8fff7d51186e61fac7be0254fb60c9dbaf23e4f5c08e0728fad3d9fdc" Nov 1 01:36:46.540722 
containerd[1914]: 2025-11-01 01:36:46.521 [INFO][5932] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="3b7174d8fff7d51186e61fac7be0254fb60c9dbaf23e4f5c08e0728fad3d9fdc" iface="eth0" netns="/var/run/netns/cni-7d97c64a-2bf8-97a7-5c70-1c6c8a33d00d" Nov 1 01:36:46.540722 containerd[1914]: 2025-11-01 01:36:46.521 [INFO][5932] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3b7174d8fff7d51186e61fac7be0254fb60c9dbaf23e4f5c08e0728fad3d9fdc" iface="eth0" netns="/var/run/netns/cni-7d97c64a-2bf8-97a7-5c70-1c6c8a33d00d" Nov 1 01:36:46.540722 containerd[1914]: 2025-11-01 01:36:46.521 [INFO][5932] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="3b7174d8fff7d51186e61fac7be0254fb60c9dbaf23e4f5c08e0728fad3d9fdc" iface="eth0" netns="/var/run/netns/cni-7d97c64a-2bf8-97a7-5c70-1c6c8a33d00d" Nov 1 01:36:46.540722 containerd[1914]: 2025-11-01 01:36:46.521 [INFO][5932] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3b7174d8fff7d51186e61fac7be0254fb60c9dbaf23e4f5c08e0728fad3d9fdc" Nov 1 01:36:46.540722 containerd[1914]: 2025-11-01 01:36:46.521 [INFO][5932] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3b7174d8fff7d51186e61fac7be0254fb60c9dbaf23e4f5c08e0728fad3d9fdc" Nov 1 01:36:46.540722 containerd[1914]: 2025-11-01 01:36:46.534 [INFO][5983] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="3b7174d8fff7d51186e61fac7be0254fb60c9dbaf23e4f5c08e0728fad3d9fdc" HandleID="k8s-pod-network.3b7174d8fff7d51186e61fac7be0254fb60c9dbaf23e4f5c08e0728fad3d9fdc" Workload="ci--4081.3.6--n--4452d0b810-k8s-calico--apiserver--65957f8fc6--h4mgq-eth0" Nov 1 01:36:46.540722 containerd[1914]: 2025-11-01 01:36:46.534 [INFO][5983] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:36:46.540722 containerd[1914]: 2025-11-01 01:36:46.534 [INFO][5983] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 01:36:46.540722 containerd[1914]: 2025-11-01 01:36:46.538 [WARNING][5983] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="3b7174d8fff7d51186e61fac7be0254fb60c9dbaf23e4f5c08e0728fad3d9fdc" HandleID="k8s-pod-network.3b7174d8fff7d51186e61fac7be0254fb60c9dbaf23e4f5c08e0728fad3d9fdc" Workload="ci--4081.3.6--n--4452d0b810-k8s-calico--apiserver--65957f8fc6--h4mgq-eth0" Nov 1 01:36:46.540722 containerd[1914]: 2025-11-01 01:36:46.538 [INFO][5983] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="3b7174d8fff7d51186e61fac7be0254fb60c9dbaf23e4f5c08e0728fad3d9fdc" HandleID="k8s-pod-network.3b7174d8fff7d51186e61fac7be0254fb60c9dbaf23e4f5c08e0728fad3d9fdc" Workload="ci--4081.3.6--n--4452d0b810-k8s-calico--apiserver--65957f8fc6--h4mgq-eth0" Nov 1 01:36:46.540722 containerd[1914]: 2025-11-01 01:36:46.539 [INFO][5983] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:36:46.540722 containerd[1914]: 2025-11-01 01:36:46.540 [INFO][5932] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="3b7174d8fff7d51186e61fac7be0254fb60c9dbaf23e4f5c08e0728fad3d9fdc" Nov 1 01:36:46.541002 containerd[1914]: time="2025-11-01T01:36:46.540790741Z" level=info msg="TearDown network for sandbox \"3b7174d8fff7d51186e61fac7be0254fb60c9dbaf23e4f5c08e0728fad3d9fdc\" successfully" Nov 1 01:36:46.541002 containerd[1914]: time="2025-11-01T01:36:46.540810318Z" level=info msg="StopPodSandbox for \"3b7174d8fff7d51186e61fac7be0254fb60c9dbaf23e4f5c08e0728fad3d9fdc\" returns successfully" Nov 1 01:36:46.541200 containerd[1914]: time="2025-11-01T01:36:46.541189262Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-65957f8fc6-h4mgq,Uid:07d4f1da-121e-4217-9b76-751186743f3a,Namespace:calico-apiserver,Attempt:1,}" Nov 1 01:36:46.542922 systemd[1]: run-netns-cni\x2d7d97c64a\x2d2bf8\x2d97a7\x2d5c70\x2d1c6c8a33d00d.mount: Deactivated successfully. Nov 1 01:36:46.545107 containerd[1914]: 2025-11-01 01:36:46.523 [INFO][5931] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="662e7d58e74330bf48ecbb168fa034d30f2558a48b75c67b66bc989ff33cdffd" Nov 1 01:36:46.545107 containerd[1914]: 2025-11-01 01:36:46.523 [INFO][5931] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="662e7d58e74330bf48ecbb168fa034d30f2558a48b75c67b66bc989ff33cdffd" iface="eth0" netns="/var/run/netns/cni-4cb16a2c-3388-0fac-9723-b0adc179620e" Nov 1 01:36:46.545107 containerd[1914]: 2025-11-01 01:36:46.523 [INFO][5931] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="662e7d58e74330bf48ecbb168fa034d30f2558a48b75c67b66bc989ff33cdffd" iface="eth0" netns="/var/run/netns/cni-4cb16a2c-3388-0fac-9723-b0adc179620e" Nov 1 01:36:46.545107 containerd[1914]: 2025-11-01 01:36:46.523 [INFO][5931] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="662e7d58e74330bf48ecbb168fa034d30f2558a48b75c67b66bc989ff33cdffd" iface="eth0" netns="/var/run/netns/cni-4cb16a2c-3388-0fac-9723-b0adc179620e" Nov 1 01:36:46.545107 containerd[1914]: 2025-11-01 01:36:46.523 [INFO][5931] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="662e7d58e74330bf48ecbb168fa034d30f2558a48b75c67b66bc989ff33cdffd" Nov 1 01:36:46.545107 containerd[1914]: 2025-11-01 01:36:46.523 [INFO][5931] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="662e7d58e74330bf48ecbb168fa034d30f2558a48b75c67b66bc989ff33cdffd" Nov 1 01:36:46.545107 containerd[1914]: 2025-11-01 01:36:46.534 [INFO][5989] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="662e7d58e74330bf48ecbb168fa034d30f2558a48b75c67b66bc989ff33cdffd" HandleID="k8s-pod-network.662e7d58e74330bf48ecbb168fa034d30f2558a48b75c67b66bc989ff33cdffd" Workload="ci--4081.3.6--n--4452d0b810-k8s-goldmane--666569f655--cwdx7-eth0" Nov 1 01:36:46.545107 containerd[1914]: 2025-11-01 01:36:46.534 [INFO][5989] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:36:46.545107 containerd[1914]: 2025-11-01 01:36:46.539 [INFO][5989] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 01:36:46.545107 containerd[1914]: 2025-11-01 01:36:46.542 [WARNING][5989] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="662e7d58e74330bf48ecbb168fa034d30f2558a48b75c67b66bc989ff33cdffd" HandleID="k8s-pod-network.662e7d58e74330bf48ecbb168fa034d30f2558a48b75c67b66bc989ff33cdffd" Workload="ci--4081.3.6--n--4452d0b810-k8s-goldmane--666569f655--cwdx7-eth0" Nov 1 01:36:46.545107 containerd[1914]: 2025-11-01 01:36:46.542 [INFO][5989] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="662e7d58e74330bf48ecbb168fa034d30f2558a48b75c67b66bc989ff33cdffd" HandleID="k8s-pod-network.662e7d58e74330bf48ecbb168fa034d30f2558a48b75c67b66bc989ff33cdffd" Workload="ci--4081.3.6--n--4452d0b810-k8s-goldmane--666569f655--cwdx7-eth0" Nov 1 01:36:46.545107 containerd[1914]: 2025-11-01 01:36:46.543 [INFO][5989] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:36:46.545107 containerd[1914]: 2025-11-01 01:36:46.544 [INFO][5931] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="662e7d58e74330bf48ecbb168fa034d30f2558a48b75c67b66bc989ff33cdffd" Nov 1 01:36:46.545347 containerd[1914]: time="2025-11-01T01:36:46.545167420Z" level=info msg="TearDown network for sandbox \"662e7d58e74330bf48ecbb168fa034d30f2558a48b75c67b66bc989ff33cdffd\" successfully" Nov 1 01:36:46.545347 containerd[1914]: time="2025-11-01T01:36:46.545181779Z" level=info msg="StopPodSandbox for \"662e7d58e74330bf48ecbb168fa034d30f2558a48b75c67b66bc989ff33cdffd\" returns successfully" Nov 1 01:36:46.545525 containerd[1914]: time="2025-11-01T01:36:46.545515846Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-cwdx7,Uid:74c9c676-a5c3-4018-bd2b-2647288efe10,Namespace:calico-system,Attempt:1,}" Nov 1 01:36:46.550139 systemd[1]: run-netns-cni\x2d4cb16a2c\x2d3388\x2d0fac\x2d9723\x2db0adc179620e.mount: Deactivated successfully. Nov 1 01:36:46.551959 containerd[1914]: 2025-11-01 01:36:46.523 [INFO][5933] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="00d443e4266c162d13e4f030a6ca27f0684acf44857f4446caac3b06900c9dec" Nov 1 01:36:46.551959 containerd[1914]: 2025-11-01 01:36:46.523 [INFO][5933] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="00d443e4266c162d13e4f030a6ca27f0684acf44857f4446caac3b06900c9dec" iface="eth0" netns="/var/run/netns/cni-1b1dc8e7-7899-f4dc-d3de-31e504c1e30c" Nov 1 01:36:46.551959 containerd[1914]: 2025-11-01 01:36:46.523 [INFO][5933] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="00d443e4266c162d13e4f030a6ca27f0684acf44857f4446caac3b06900c9dec" iface="eth0" netns="/var/run/netns/cni-1b1dc8e7-7899-f4dc-d3de-31e504c1e30c" Nov 1 01:36:46.551959 containerd[1914]: 2025-11-01 01:36:46.523 [INFO][5933] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="00d443e4266c162d13e4f030a6ca27f0684acf44857f4446caac3b06900c9dec" iface="eth0" netns="/var/run/netns/cni-1b1dc8e7-7899-f4dc-d3de-31e504c1e30c" Nov 1 01:36:46.551959 containerd[1914]: 2025-11-01 01:36:46.523 [INFO][5933] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="00d443e4266c162d13e4f030a6ca27f0684acf44857f4446caac3b06900c9dec" Nov 1 01:36:46.551959 containerd[1914]: 2025-11-01 01:36:46.523 [INFO][5933] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="00d443e4266c162d13e4f030a6ca27f0684acf44857f4446caac3b06900c9dec" Nov 1 01:36:46.551959 containerd[1914]: 2025-11-01 01:36:46.534 [INFO][5991] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="00d443e4266c162d13e4f030a6ca27f0684acf44857f4446caac3b06900c9dec" HandleID="k8s-pod-network.00d443e4266c162d13e4f030a6ca27f0684acf44857f4446caac3b06900c9dec" Workload="ci--4081.3.6--n--4452d0b810-k8s-coredns--668d6bf9bc--ds49c-eth0" Nov 1 01:36:46.551959 containerd[1914]: 2025-11-01 01:36:46.534 [INFO][5991] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:36:46.551959 containerd[1914]: 2025-11-01 01:36:46.543 [INFO][5991] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 01:36:46.551959 containerd[1914]: 2025-11-01 01:36:46.547 [WARNING][5991] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="00d443e4266c162d13e4f030a6ca27f0684acf44857f4446caac3b06900c9dec" HandleID="k8s-pod-network.00d443e4266c162d13e4f030a6ca27f0684acf44857f4446caac3b06900c9dec" Workload="ci--4081.3.6--n--4452d0b810-k8s-coredns--668d6bf9bc--ds49c-eth0" Nov 1 01:36:46.551959 containerd[1914]: 2025-11-01 01:36:46.547 [INFO][5991] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="00d443e4266c162d13e4f030a6ca27f0684acf44857f4446caac3b06900c9dec" HandleID="k8s-pod-network.00d443e4266c162d13e4f030a6ca27f0684acf44857f4446caac3b06900c9dec" Workload="ci--4081.3.6--n--4452d0b810-k8s-coredns--668d6bf9bc--ds49c-eth0" Nov 1 01:36:46.551959 containerd[1914]: 2025-11-01 01:36:46.548 [INFO][5991] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:36:46.551959 containerd[1914]: 2025-11-01 01:36:46.549 [INFO][5933] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="00d443e4266c162d13e4f030a6ca27f0684acf44857f4446caac3b06900c9dec" Nov 1 01:36:46.552440 containerd[1914]: time="2025-11-01T01:36:46.552069710Z" level=info msg="TearDown network for sandbox \"00d443e4266c162d13e4f030a6ca27f0684acf44857f4446caac3b06900c9dec\" successfully" Nov 1 01:36:46.552440 containerd[1914]: time="2025-11-01T01:36:46.552090722Z" level=info msg="StopPodSandbox for \"00d443e4266c162d13e4f030a6ca27f0684acf44857f4446caac3b06900c9dec\" returns successfully" Nov 1 01:36:46.552541 containerd[1914]: time="2025-11-01T01:36:46.552523012Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-ds49c,Uid:7fd307cd-9e0a-4b4c-8e92-d2d598a36e63,Namespace:kube-system,Attempt:1,}" Nov 1 01:36:46.576527 kubelet[3248]: E1101 01:36:46.576501 3248 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65957f8fc6-nf7rw" podUID="5dd05a57-68a7-45f0-82e4-c02ae8d0fe49" Nov 1 01:36:46.586473 kubelet[3248]: I1101 01:36:46.586430 3248 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-zwxtd" podStartSLOduration=36.586413916 podStartE2EDuration="36.586413916s" podCreationTimestamp="2025-11-01 01:36:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 01:36:46.58628991 +0000 UTC m=+43.172424679" watchObservedRunningTime="2025-11-01 01:36:46.586413916 +0000 UTC m=+43.172548682" Nov 1 01:36:46.596708 systemd-networkd[1561]: calic2a4559d7dc: Link UP Nov 1 01:36:46.596877 systemd-networkd[1561]: calic2a4559d7dc: Gained carrier Nov 1 01:36:46.602715 containerd[1914]: 2025-11-01 01:36:46.563 [INFO][6035] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--4452d0b810-k8s-calico--apiserver--65957f8fc6--h4mgq-eth0 calico-apiserver-65957f8fc6- calico-apiserver 07d4f1da-121e-4217-9b76-751186743f3a 962 0 2025-11-01 01:36:17 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:65957f8fc6 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.6-n-4452d0b810 calico-apiserver-65957f8fc6-h4mgq eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calic2a4559d7dc [] [] }} ContainerID="3a99d77352ac54671872f909348a7ebf71c72fbe975bfa893b2442a1cddc52dc" Namespace="calico-apiserver" Pod="calico-apiserver-65957f8fc6-h4mgq" WorkloadEndpoint="ci--4081.3.6--n--4452d0b810-k8s-calico--apiserver--65957f8fc6--h4mgq-" Nov 1 01:36:46.602715 containerd[1914]: 2025-11-01 01:36:46.563 [INFO][6035] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3a99d77352ac54671872f909348a7ebf71c72fbe975bfa893b2442a1cddc52dc" Namespace="calico-apiserver" Pod="calico-apiserver-65957f8fc6-h4mgq" WorkloadEndpoint="ci--4081.3.6--n--4452d0b810-k8s-calico--apiserver--65957f8fc6--h4mgq-eth0" Nov 1 01:36:46.602715 containerd[1914]: 2025-11-01 
01:36:46.577 [INFO][6101] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3a99d77352ac54671872f909348a7ebf71c72fbe975bfa893b2442a1cddc52dc" HandleID="k8s-pod-network.3a99d77352ac54671872f909348a7ebf71c72fbe975bfa893b2442a1cddc52dc" Workload="ci--4081.3.6--n--4452d0b810-k8s-calico--apiserver--65957f8fc6--h4mgq-eth0" Nov 1 01:36:46.602715 containerd[1914]: 2025-11-01 01:36:46.577 [INFO][6101] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="3a99d77352ac54671872f909348a7ebf71c72fbe975bfa893b2442a1cddc52dc" HandleID="k8s-pod-network.3a99d77352ac54671872f909348a7ebf71c72fbe975bfa893b2442a1cddc52dc" Workload="ci--4081.3.6--n--4452d0b810-k8s-calico--apiserver--65957f8fc6--h4mgq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f5e0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.6-n-4452d0b810", "pod":"calico-apiserver-65957f8fc6-h4mgq", "timestamp":"2025-11-01 01:36:46.577305908 +0000 UTC"}, Hostname:"ci-4081.3.6-n-4452d0b810", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 01:36:46.602715 containerd[1914]: 2025-11-01 01:36:46.577 [INFO][6101] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:36:46.602715 containerd[1914]: 2025-11-01 01:36:46.577 [INFO][6101] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 01:36:46.602715 containerd[1914]: 2025-11-01 01:36:46.577 [INFO][6101] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-4452d0b810' Nov 1 01:36:46.602715 containerd[1914]: 2025-11-01 01:36:46.581 [INFO][6101] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3a99d77352ac54671872f909348a7ebf71c72fbe975bfa893b2442a1cddc52dc" host="ci-4081.3.6-n-4452d0b810" Nov 1 01:36:46.602715 containerd[1914]: 2025-11-01 01:36:46.584 [INFO][6101] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-4452d0b810" Nov 1 01:36:46.602715 containerd[1914]: 2025-11-01 01:36:46.586 [INFO][6101] ipam/ipam.go 511: Trying affinity for 192.168.76.192/26 host="ci-4081.3.6-n-4452d0b810" Nov 1 01:36:46.602715 containerd[1914]: 2025-11-01 01:36:46.587 [INFO][6101] ipam/ipam.go 158: Attempting to load block cidr=192.168.76.192/26 host="ci-4081.3.6-n-4452d0b810" Nov 1 01:36:46.602715 containerd[1914]: 2025-11-01 01:36:46.589 [INFO][6101] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.76.192/26 host="ci-4081.3.6-n-4452d0b810" Nov 1 01:36:46.602715 containerd[1914]: 2025-11-01 01:36:46.589 [INFO][6101] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.76.192/26 handle="k8s-pod-network.3a99d77352ac54671872f909348a7ebf71c72fbe975bfa893b2442a1cddc52dc" host="ci-4081.3.6-n-4452d0b810" Nov 1 01:36:46.602715 containerd[1914]: 2025-11-01 01:36:46.590 [INFO][6101] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.3a99d77352ac54671872f909348a7ebf71c72fbe975bfa893b2442a1cddc52dc Nov 1 01:36:46.602715 containerd[1914]: 2025-11-01 01:36:46.591 [INFO][6101] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.76.192/26 handle="k8s-pod-network.3a99d77352ac54671872f909348a7ebf71c72fbe975bfa893b2442a1cddc52dc" host="ci-4081.3.6-n-4452d0b810" Nov 1 01:36:46.602715 containerd[1914]: 2025-11-01 01:36:46.594 [INFO][6101] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.76.198/26] 
block=192.168.76.192/26 handle="k8s-pod-network.3a99d77352ac54671872f909348a7ebf71c72fbe975bfa893b2442a1cddc52dc" host="ci-4081.3.6-n-4452d0b810" Nov 1 01:36:46.602715 containerd[1914]: 2025-11-01 01:36:46.594 [INFO][6101] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.76.198/26] handle="k8s-pod-network.3a99d77352ac54671872f909348a7ebf71c72fbe975bfa893b2442a1cddc52dc" host="ci-4081.3.6-n-4452d0b810" Nov 1 01:36:46.602715 containerd[1914]: 2025-11-01 01:36:46.594 [INFO][6101] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:36:46.602715 containerd[1914]: 2025-11-01 01:36:46.595 [INFO][6101] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.76.198/26] IPv6=[] ContainerID="3a99d77352ac54671872f909348a7ebf71c72fbe975bfa893b2442a1cddc52dc" HandleID="k8s-pod-network.3a99d77352ac54671872f909348a7ebf71c72fbe975bfa893b2442a1cddc52dc" Workload="ci--4081.3.6--n--4452d0b810-k8s-calico--apiserver--65957f8fc6--h4mgq-eth0" Nov 1 01:36:46.603128 containerd[1914]: 2025-11-01 01:36:46.595 [INFO][6035] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3a99d77352ac54671872f909348a7ebf71c72fbe975bfa893b2442a1cddc52dc" Namespace="calico-apiserver" Pod="calico-apiserver-65957f8fc6-h4mgq" WorkloadEndpoint="ci--4081.3.6--n--4452d0b810-k8s-calico--apiserver--65957f8fc6--h4mgq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--4452d0b810-k8s-calico--apiserver--65957f8fc6--h4mgq-eth0", GenerateName:"calico-apiserver-65957f8fc6-", Namespace:"calico-apiserver", SelfLink:"", UID:"07d4f1da-121e-4217-9b76-751186743f3a", ResourceVersion:"962", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 36, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"65957f8fc6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-4452d0b810", ContainerID:"", Pod:"calico-apiserver-65957f8fc6-h4mgq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.76.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic2a4559d7dc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:36:46.603128 containerd[1914]: 2025-11-01 01:36:46.595 [INFO][6035] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.76.198/32] ContainerID="3a99d77352ac54671872f909348a7ebf71c72fbe975bfa893b2442a1cddc52dc" Namespace="calico-apiserver" Pod="calico-apiserver-65957f8fc6-h4mgq" WorkloadEndpoint="ci--4081.3.6--n--4452d0b810-k8s-calico--apiserver--65957f8fc6--h4mgq-eth0" Nov 1 01:36:46.603128 containerd[1914]: 2025-11-01 01:36:46.595 [INFO][6035] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic2a4559d7dc ContainerID="3a99d77352ac54671872f909348a7ebf71c72fbe975bfa893b2442a1cddc52dc" Namespace="calico-apiserver" Pod="calico-apiserver-65957f8fc6-h4mgq" 
WorkloadEndpoint="ci--4081.3.6--n--4452d0b810-k8s-calico--apiserver--65957f8fc6--h4mgq-eth0" Nov 1 01:36:46.603128 containerd[1914]: 2025-11-01 01:36:46.597 [INFO][6035] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3a99d77352ac54671872f909348a7ebf71c72fbe975bfa893b2442a1cddc52dc" Namespace="calico-apiserver" Pod="calico-apiserver-65957f8fc6-h4mgq" WorkloadEndpoint="ci--4081.3.6--n--4452d0b810-k8s-calico--apiserver--65957f8fc6--h4mgq-eth0" Nov 1 01:36:46.603128 containerd[1914]: 2025-11-01 01:36:46.597 [INFO][6035] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3a99d77352ac54671872f909348a7ebf71c72fbe975bfa893b2442a1cddc52dc" Namespace="calico-apiserver" Pod="calico-apiserver-65957f8fc6-h4mgq" WorkloadEndpoint="ci--4081.3.6--n--4452d0b810-k8s-calico--apiserver--65957f8fc6--h4mgq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--4452d0b810-k8s-calico--apiserver--65957f8fc6--h4mgq-eth0", GenerateName:"calico-apiserver-65957f8fc6-", Namespace:"calico-apiserver", SelfLink:"", UID:"07d4f1da-121e-4217-9b76-751186743f3a", ResourceVersion:"962", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 36, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"65957f8fc6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-4452d0b810", ContainerID:"3a99d77352ac54671872f909348a7ebf71c72fbe975bfa893b2442a1cddc52dc", Pod:"calico-apiserver-65957f8fc6-h4mgq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.76.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic2a4559d7dc", MAC:"0e:e3:cc:dc:87:da", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:36:46.603128 containerd[1914]: 2025-11-01 01:36:46.601 [INFO][6035] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3a99d77352ac54671872f909348a7ebf71c72fbe975bfa893b2442a1cddc52dc" Namespace="calico-apiserver" Pod="calico-apiserver-65957f8fc6-h4mgq" WorkloadEndpoint="ci--4081.3.6--n--4452d0b810-k8s-calico--apiserver--65957f8fc6--h4mgq-eth0" Nov 1 01:36:46.610704 containerd[1914]: time="2025-11-01T01:36:46.610635479Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 01:36:46.610704 containerd[1914]: time="2025-11-01T01:36:46.610666449Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 01:36:46.610704 containerd[1914]: time="2025-11-01T01:36:46.610676099Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 01:36:46.610952 containerd[1914]: time="2025-11-01T01:36:46.610905362Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 01:36:46.651548 containerd[1914]: time="2025-11-01T01:36:46.651528273Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-65957f8fc6-h4mgq,Uid:07d4f1da-121e-4217-9b76-751186743f3a,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"3a99d77352ac54671872f909348a7ebf71c72fbe975bfa893b2442a1cddc52dc\"" Nov 1 01:36:46.652205 containerd[1914]: time="2025-11-01T01:36:46.652193951Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 01:36:46.725396 systemd-networkd[1561]: calie1473bbcf7f: Link UP Nov 1 01:36:46.725552 systemd-networkd[1561]: calie1473bbcf7f: Gained carrier Nov 1 01:36:46.732554 containerd[1914]: 2025-11-01 01:36:46.566 [INFO][6047] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--4452d0b810-k8s-goldmane--666569f655--cwdx7-eth0 goldmane-666569f655- calico-system 74c9c676-a5c3-4018-bd2b-2647288efe10 963 0 2025-11-01 01:36:18 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081.3.6-n-4452d0b810 goldmane-666569f655-cwdx7 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calie1473bbcf7f [] [] }} ContainerID="6157dfce738c89d570be9014e8169269d5a3312d6f057d000da02670980eec93" Namespace="calico-system" Pod="goldmane-666569f655-cwdx7" WorkloadEndpoint="ci--4081.3.6--n--4452d0b810-k8s-goldmane--666569f655--cwdx7-" Nov 1 01:36:46.732554 containerd[1914]: 2025-11-01 01:36:46.566 [INFO][6047] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6157dfce738c89d570be9014e8169269d5a3312d6f057d000da02670980eec93" Namespace="calico-system" Pod="goldmane-666569f655-cwdx7" WorkloadEndpoint="ci--4081.3.6--n--4452d0b810-k8s-goldmane--666569f655--cwdx7-eth0" Nov 1 01:36:46.732554 containerd[1914]: 2025-11-01 01:36:46.579 [INFO][6107] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6157dfce738c89d570be9014e8169269d5a3312d6f057d000da02670980eec93" HandleID="k8s-pod-network.6157dfce738c89d570be9014e8169269d5a3312d6f057d000da02670980eec93" Workload="ci--4081.3.6--n--4452d0b810-k8s-goldmane--666569f655--cwdx7-eth0" Nov 1 01:36:46.732554 containerd[1914]: 2025-11-01 01:36:46.579 [INFO][6107] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="6157dfce738c89d570be9014e8169269d5a3312d6f057d000da02670980eec93" HandleID="k8s-pod-network.6157dfce738c89d570be9014e8169269d5a3312d6f057d000da02670980eec93" Workload="ci--4081.3.6--n--4452d0b810-k8s-goldmane--666569f655--cwdx7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00050b430), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-4452d0b810", "pod":"goldmane-666569f655-cwdx7", "timestamp":"2025-11-01 01:36:46.5798412 +0000 UTC"}, Hostname:"ci-4081.3.6-n-4452d0b810", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 01:36:46.732554 containerd[1914]: 2025-11-01 01:36:46.579 [INFO][6107] ipam/ipam_plugin.go 377: 
About to acquire host-wide IPAM lock. Nov 1 01:36:46.732554 containerd[1914]: 2025-11-01 01:36:46.595 [INFO][6107] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 01:36:46.732554 containerd[1914]: 2025-11-01 01:36:46.595 [INFO][6107] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-4452d0b810' Nov 1 01:36:46.732554 containerd[1914]: 2025-11-01 01:36:46.688 [INFO][6107] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6157dfce738c89d570be9014e8169269d5a3312d6f057d000da02670980eec93" host="ci-4081.3.6-n-4452d0b810" Nov 1 01:36:46.732554 containerd[1914]: 2025-11-01 01:36:46.697 [INFO][6107] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-4452d0b810" Nov 1 01:36:46.732554 containerd[1914]: 2025-11-01 01:36:46.711 [INFO][6107] ipam/ipam.go 511: Trying affinity for 192.168.76.192/26 host="ci-4081.3.6-n-4452d0b810" Nov 1 01:36:46.732554 containerd[1914]: 2025-11-01 01:36:46.713 [INFO][6107] ipam/ipam.go 158: Attempting to load block cidr=192.168.76.192/26 host="ci-4081.3.6-n-4452d0b810" Nov 1 01:36:46.732554 containerd[1914]: 2025-11-01 01:36:46.715 [INFO][6107] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.76.192/26 host="ci-4081.3.6-n-4452d0b810" Nov 1 01:36:46.732554 containerd[1914]: 2025-11-01 01:36:46.715 [INFO][6107] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.76.192/26 handle="k8s-pod-network.6157dfce738c89d570be9014e8169269d5a3312d6f057d000da02670980eec93" host="ci-4081.3.6-n-4452d0b810" Nov 1 01:36:46.732554 containerd[1914]: 2025-11-01 01:36:46.716 [INFO][6107] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.6157dfce738c89d570be9014e8169269d5a3312d6f057d000da02670980eec93 Nov 1 01:36:46.732554 containerd[1914]: 2025-11-01 01:36:46.719 [INFO][6107] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.76.192/26 handle="k8s-pod-network.6157dfce738c89d570be9014e8169269d5a3312d6f057d000da02670980eec93" host="ci-4081.3.6-n-4452d0b810" Nov 1 01:36:46.732554 containerd[1914]: 2025-11-01 01:36:46.723 [INFO][6107] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.76.199/26] block=192.168.76.192/26 handle="k8s-pod-network.6157dfce738c89d570be9014e8169269d5a3312d6f057d000da02670980eec93" host="ci-4081.3.6-n-4452d0b810" Nov 1 01:36:46.732554 containerd[1914]: 2025-11-01 01:36:46.723 [INFO][6107] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.76.199/26] handle="k8s-pod-network.6157dfce738c89d570be9014e8169269d5a3312d6f057d000da02670980eec93" host="ci-4081.3.6-n-4452d0b810" Nov 1 01:36:46.732554 containerd[1914]: 2025-11-01 01:36:46.723 [INFO][6107] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 1 01:36:46.732554 containerd[1914]: 2025-11-01 01:36:46.723 [INFO][6107] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.76.199/26] IPv6=[] ContainerID="6157dfce738c89d570be9014e8169269d5a3312d6f057d000da02670980eec93" HandleID="k8s-pod-network.6157dfce738c89d570be9014e8169269d5a3312d6f057d000da02670980eec93" Workload="ci--4081.3.6--n--4452d0b810-k8s-goldmane--666569f655--cwdx7-eth0" Nov 1 01:36:46.733124 containerd[1914]: 2025-11-01 01:36:46.724 [INFO][6047] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6157dfce738c89d570be9014e8169269d5a3312d6f057d000da02670980eec93" Namespace="calico-system" Pod="goldmane-666569f655-cwdx7" WorkloadEndpoint="ci--4081.3.6--n--4452d0b810-k8s-goldmane--666569f655--cwdx7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--4452d0b810-k8s-goldmane--666569f655--cwdx7-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"74c9c676-a5c3-4018-bd2b-2647288efe10", ResourceVersion:"963", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 36, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-4452d0b810", ContainerID:"", Pod:"goldmane-666569f655-cwdx7", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.76.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calie1473bbcf7f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:36:46.733124 containerd[1914]: 2025-11-01 01:36:46.724 [INFO][6047] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.76.199/32] ContainerID="6157dfce738c89d570be9014e8169269d5a3312d6f057d000da02670980eec93" Namespace="calico-system" Pod="goldmane-666569f655-cwdx7" WorkloadEndpoint="ci--4081.3.6--n--4452d0b810-k8s-goldmane--666569f655--cwdx7-eth0" Nov 1 01:36:46.733124 containerd[1914]: 2025-11-01 01:36:46.724 [INFO][6047] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie1473bbcf7f ContainerID="6157dfce738c89d570be9014e8169269d5a3312d6f057d000da02670980eec93" Namespace="calico-system" Pod="goldmane-666569f655-cwdx7" WorkloadEndpoint="ci--4081.3.6--n--4452d0b810-k8s-goldmane--666569f655--cwdx7-eth0" Nov 1 01:36:46.733124 containerd[1914]: 2025-11-01 01:36:46.725 [INFO][6047] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6157dfce738c89d570be9014e8169269d5a3312d6f057d000da02670980eec93" Namespace="calico-system" Pod="goldmane-666569f655-cwdx7" WorkloadEndpoint="ci--4081.3.6--n--4452d0b810-k8s-goldmane--666569f655--cwdx7-eth0" Nov 1 01:36:46.733124 containerd[1914]: 2025-11-01 01:36:46.725 [INFO][6047] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6157dfce738c89d570be9014e8169269d5a3312d6f057d000da02670980eec93" 
Namespace="calico-system" Pod="goldmane-666569f655-cwdx7" WorkloadEndpoint="ci--4081.3.6--n--4452d0b810-k8s-goldmane--666569f655--cwdx7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--4452d0b810-k8s-goldmane--666569f655--cwdx7-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"74c9c676-a5c3-4018-bd2b-2647288efe10", ResourceVersion:"963", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 36, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-4452d0b810", ContainerID:"6157dfce738c89d570be9014e8169269d5a3312d6f057d000da02670980eec93", Pod:"goldmane-666569f655-cwdx7", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.76.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calie1473bbcf7f", MAC:"d6:09:6b:6b:51:b6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:36:46.733124 containerd[1914]: 2025-11-01 01:36:46.731 [INFO][6047] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6157dfce738c89d570be9014e8169269d5a3312d6f057d000da02670980eec93" Namespace="calico-system" Pod="goldmane-666569f655-cwdx7" WorkloadEndpoint="ci--4081.3.6--n--4452d0b810-k8s-goldmane--666569f655--cwdx7-eth0" Nov 1 01:36:46.741463 containerd[1914]: time="2025-11-01T01:36:46.741406925Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 01:36:46.741724 containerd[1914]: time="2025-11-01T01:36:46.741642717Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 01:36:46.741724 containerd[1914]: time="2025-11-01T01:36:46.741657876Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 01:36:46.741793 containerd[1914]: time="2025-11-01T01:36:46.741747024Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 01:36:46.776379 containerd[1914]: time="2025-11-01T01:36:46.776332577Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-cwdx7,Uid:74c9c676-a5c3-4018-bd2b-2647288efe10,Namespace:calico-system,Attempt:1,} returns sandbox id \"6157dfce738c89d570be9014e8169269d5a3312d6f057d000da02670980eec93\"" Nov 1 01:36:46.813976 systemd-networkd[1561]: cali900532390ac: Link UP Nov 1 01:36:46.814155 systemd-networkd[1561]: cali900532390ac: Gained carrier Nov 1 01:36:46.822676 containerd[1914]: 2025-11-01 01:36:46.572 [INFO][6074] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--4452d0b810-k8s-coredns--668d6bf9bc--ds49c-eth0 coredns-668d6bf9bc- kube-system 7fd307cd-9e0a-4b4c-8e92-d2d598a36e63 964 0 2025-11-01 01:36:10 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.6-n-4452d0b810 coredns-668d6bf9bc-ds49c eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali900532390ac [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="c58f405e84d00a9553d48fa0f4e4e0b61386a2096daf95fa6441b07ae8439373" Namespace="kube-system" Pod="coredns-668d6bf9bc-ds49c" WorkloadEndpoint="ci--4081.3.6--n--4452d0b810-k8s-coredns--668d6bf9bc--ds49c-" Nov 1 01:36:46.822676 containerd[1914]: 2025-11-01 01:36:46.572 [INFO][6074] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c58f405e84d00a9553d48fa0f4e4e0b61386a2096daf95fa6441b07ae8439373" Namespace="kube-system" Pod="coredns-668d6bf9bc-ds49c" WorkloadEndpoint="ci--4081.3.6--n--4452d0b810-k8s-coredns--668d6bf9bc--ds49c-eth0" Nov 1 01:36:46.822676 containerd[1914]: 2025-11-01 01:36:46.585 [INFO][6127] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c58f405e84d00a9553d48fa0f4e4e0b61386a2096daf95fa6441b07ae8439373" HandleID="k8s-pod-network.c58f405e84d00a9553d48fa0f4e4e0b61386a2096daf95fa6441b07ae8439373" Workload="ci--4081.3.6--n--4452d0b810-k8s-coredns--668d6bf9bc--ds49c-eth0" Nov 1 01:36:46.822676 containerd[1914]: 2025-11-01 01:36:46.585 [INFO][6127] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="c58f405e84d00a9553d48fa0f4e4e0b61386a2096daf95fa6441b07ae8439373" HandleID="k8s-pod-network.c58f405e84d00a9553d48fa0f4e4e0b61386a2096daf95fa6441b07ae8439373" Workload="ci--4081.3.6--n--4452d0b810-k8s-coredns--668d6bf9bc--ds49c-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004a8280), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.6-n-4452d0b810", "pod":"coredns-668d6bf9bc-ds49c", "timestamp":"2025-11-01 01:36:46.585828713 +0000 UTC"}, Hostname:"ci-4081.3.6-n-4452d0b810", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 01:36:46.822676 containerd[1914]: 2025-11-01 01:36:46.585 [INFO][6127] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:36:46.822676 containerd[1914]: 2025-11-01 01:36:46.723 [INFO][6127] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 01:36:46.822676 containerd[1914]: 2025-11-01 01:36:46.723 [INFO][6127] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-4452d0b810' Nov 1 01:36:46.822676 containerd[1914]: 2025-11-01 01:36:46.782 [INFO][6127] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c58f405e84d00a9553d48fa0f4e4e0b61386a2096daf95fa6441b07ae8439373" host="ci-4081.3.6-n-4452d0b810" Nov 1 01:36:46.822676 containerd[1914]: 2025-11-01 01:36:46.797 [INFO][6127] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-4452d0b810" Nov 1 01:36:46.822676 containerd[1914]: 2025-11-01 01:36:46.802 [INFO][6127] ipam/ipam.go 511: Trying affinity for 192.168.76.192/26 host="ci-4081.3.6-n-4452d0b810" Nov 1 01:36:46.822676 containerd[1914]: 2025-11-01 01:36:46.803 [INFO][6127] ipam/ipam.go 158: Attempting to load block cidr=192.168.76.192/26 host="ci-4081.3.6-n-4452d0b810" Nov 1 01:36:46.822676 containerd[1914]: 2025-11-01 01:36:46.804 [INFO][6127] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.76.192/26 host="ci-4081.3.6-n-4452d0b810" Nov 1 01:36:46.822676 containerd[1914]: 2025-11-01 01:36:46.804 [INFO][6127] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.76.192/26 handle="k8s-pod-network.c58f405e84d00a9553d48fa0f4e4e0b61386a2096daf95fa6441b07ae8439373" host="ci-4081.3.6-n-4452d0b810" Nov 1 01:36:46.822676 containerd[1914]: 2025-11-01 01:36:46.805 [INFO][6127] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.c58f405e84d00a9553d48fa0f4e4e0b61386a2096daf95fa6441b07ae8439373 Nov 1 01:36:46.822676 containerd[1914]: 2025-11-01 01:36:46.807 [INFO][6127] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.76.192/26 handle="k8s-pod-network.c58f405e84d00a9553d48fa0f4e4e0b61386a2096daf95fa6441b07ae8439373" host="ci-4081.3.6-n-4452d0b810" Nov 1 01:36:46.822676 containerd[1914]: 2025-11-01 01:36:46.811 [INFO][6127] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.76.200/26] block=192.168.76.192/26 handle="k8s-pod-network.c58f405e84d00a9553d48fa0f4e4e0b61386a2096daf95fa6441b07ae8439373" host="ci-4081.3.6-n-4452d0b810" Nov 1 01:36:46.822676 containerd[1914]: 2025-11-01 01:36:46.811 [INFO][6127] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.76.200/26] handle="k8s-pod-network.c58f405e84d00a9553d48fa0f4e4e0b61386a2096daf95fa6441b07ae8439373" host="ci-4081.3.6-n-4452d0b810" Nov 1 01:36:46.822676 containerd[1914]: 2025-11-01 01:36:46.811 [INFO][6127] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 1 01:36:46.822676 containerd[1914]: 2025-11-01 01:36:46.811 [INFO][6127] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.76.200/26] IPv6=[] ContainerID="c58f405e84d00a9553d48fa0f4e4e0b61386a2096daf95fa6441b07ae8439373" HandleID="k8s-pod-network.c58f405e84d00a9553d48fa0f4e4e0b61386a2096daf95fa6441b07ae8439373" Workload="ci--4081.3.6--n--4452d0b810-k8s-coredns--668d6bf9bc--ds49c-eth0" Nov 1 01:36:46.823341 containerd[1914]: 2025-11-01 01:36:46.812 [INFO][6074] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c58f405e84d00a9553d48fa0f4e4e0b61386a2096daf95fa6441b07ae8439373" Namespace="kube-system" Pod="coredns-668d6bf9bc-ds49c" WorkloadEndpoint="ci--4081.3.6--n--4452d0b810-k8s-coredns--668d6bf9bc--ds49c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--4452d0b810-k8s-coredns--668d6bf9bc--ds49c-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"7fd307cd-9e0a-4b4c-8e92-d2d598a36e63", ResourceVersion:"964", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 36, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-4452d0b810", ContainerID:"", Pod:"coredns-668d6bf9bc-ds49c", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.76.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali900532390ac", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:36:46.823341 containerd[1914]: 2025-11-01 01:36:46.812 [INFO][6074] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.76.200/32] ContainerID="c58f405e84d00a9553d48fa0f4e4e0b61386a2096daf95fa6441b07ae8439373" Namespace="kube-system" Pod="coredns-668d6bf9bc-ds49c" WorkloadEndpoint="ci--4081.3.6--n--4452d0b810-k8s-coredns--668d6bf9bc--ds49c-eth0" Nov 1 01:36:46.823341 containerd[1914]: 2025-11-01 01:36:46.812 [INFO][6074] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali900532390ac ContainerID="c58f405e84d00a9553d48fa0f4e4e0b61386a2096daf95fa6441b07ae8439373" Namespace="kube-system" Pod="coredns-668d6bf9bc-ds49c" WorkloadEndpoint="ci--4081.3.6--n--4452d0b810-k8s-coredns--668d6bf9bc--ds49c-eth0" Nov 1 01:36:46.823341 containerd[1914]: 2025-11-01 01:36:46.814 [INFO][6074] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c58f405e84d00a9553d48fa0f4e4e0b61386a2096daf95fa6441b07ae8439373" Namespace="kube-system" 
Pod="coredns-668d6bf9bc-ds49c" WorkloadEndpoint="ci--4081.3.6--n--4452d0b810-k8s-coredns--668d6bf9bc--ds49c-eth0" Nov 1 01:36:46.823341 containerd[1914]: 2025-11-01 01:36:46.814 [INFO][6074] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c58f405e84d00a9553d48fa0f4e4e0b61386a2096daf95fa6441b07ae8439373" Namespace="kube-system" Pod="coredns-668d6bf9bc-ds49c" WorkloadEndpoint="ci--4081.3.6--n--4452d0b810-k8s-coredns--668d6bf9bc--ds49c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--4452d0b810-k8s-coredns--668d6bf9bc--ds49c-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"7fd307cd-9e0a-4b4c-8e92-d2d598a36e63", ResourceVersion:"964", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 36, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-4452d0b810", ContainerID:"c58f405e84d00a9553d48fa0f4e4e0b61386a2096daf95fa6441b07ae8439373", Pod:"coredns-668d6bf9bc-ds49c", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.76.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali900532390ac", MAC:"2e:66:78:90:60:f7", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:36:46.823341 containerd[1914]: 2025-11-01 01:36:46.820 [INFO][6074] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c58f405e84d00a9553d48fa0f4e4e0b61386a2096daf95fa6441b07ae8439373" Namespace="kube-system" Pod="coredns-668d6bf9bc-ds49c" WorkloadEndpoint="ci--4081.3.6--n--4452d0b810-k8s-coredns--668d6bf9bc--ds49c-eth0" Nov 1 01:36:46.831929 containerd[1914]: time="2025-11-01T01:36:46.831694537Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 01:36:46.832000 containerd[1914]: time="2025-11-01T01:36:46.831940757Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 01:36:46.832000 containerd[1914]: time="2025-11-01T01:36:46.831954271Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 01:36:46.832035 containerd[1914]: time="2025-11-01T01:36:46.832003291Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 01:36:46.889442 containerd[1914]: time="2025-11-01T01:36:46.889411399Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-ds49c,Uid:7fd307cd-9e0a-4b4c-8e92-d2d598a36e63,Namespace:kube-system,Attempt:1,} returns sandbox id \"c58f405e84d00a9553d48fa0f4e4e0b61386a2096daf95fa6441b07ae8439373\"" Nov 1 01:36:46.891002 containerd[1914]: time="2025-11-01T01:36:46.890979864Z" level=info msg="CreateContainer within sandbox \"c58f405e84d00a9553d48fa0f4e4e0b61386a2096daf95fa6441b07ae8439373\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 1 01:36:46.895279 containerd[1914]: time="2025-11-01T01:36:46.895265032Z" level=info msg="CreateContainer within sandbox \"c58f405e84d00a9553d48fa0f4e4e0b61386a2096daf95fa6441b07ae8439373\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"20aa7d41f6ace449bba5bd7281036b83abe377e9e645aed019ea46f89edfed65\"" Nov 1 01:36:46.895468 containerd[1914]: time="2025-11-01T01:36:46.895434266Z" level=info msg="StartContainer for \"20aa7d41f6ace449bba5bd7281036b83abe377e9e645aed019ea46f89edfed65\"" Nov 1 01:36:46.928944 containerd[1914]: time="2025-11-01T01:36:46.928899588Z" level=info msg="StartContainer for \"20aa7d41f6ace449bba5bd7281036b83abe377e9e645aed019ea46f89edfed65\" returns successfully" Nov 1 01:36:47.041183 containerd[1914]: time="2025-11-01T01:36:47.040934045Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:36:47.041905 containerd[1914]: time="2025-11-01T01:36:47.041876707Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 01:36:47.041942 containerd[1914]: time="2025-11-01T01:36:47.041915825Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 1 01:36:47.042027 kubelet[3248]: E1101 01:36:47.041975 3248 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 01:36:47.042027 kubelet[3248]: E1101 01:36:47.042004 3248 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 01:36:47.042162 kubelet[3248]: E1101 01:36:47.042139 3248 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7h2qg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-65957f8fc6-h4mgq_calico-apiserver(07d4f1da-121e-4217-9b76-751186743f3a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 01:36:47.042243 containerd[1914]: time="2025-11-01T01:36:47.042162100Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 1 01:36:47.043258 kubelet[3248]: E1101 01:36:47.043229 3248 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65957f8fc6-h4mgq" podUID="07d4f1da-121e-4217-9b76-751186743f3a" Nov 1 01:36:47.413734 containerd[1914]: time="2025-11-01T01:36:47.413630277Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:36:47.414063 containerd[1914]: time="2025-11-01T01:36:47.413978019Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 1 01:36:47.414104 containerd[1914]: 
time="2025-11-01T01:36:47.414015727Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 1 01:36:47.414162 kubelet[3248]: E1101 01:36:47.414142 3248 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 01:36:47.414190 kubelet[3248]: E1101 01:36:47.414172 3248 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 01:36:47.414364 kubelet[3248]: E1101 01:36:47.414288 3248 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lmc4z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-cwdx7_calico-system(74c9c676-a5c3-4018-bd2b-2647288efe10): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 1 01:36:47.415464 kubelet[3248]: E1101 01:36:47.415445 3248 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-cwdx7" podUID="74c9c676-a5c3-4018-bd2b-2647288efe10" Nov 1 01:36:47.448333 systemd-networkd[1561]: cali20b146f9ad2: Gained IPv6LL Nov 1 01:36:47.512451 systemd-networkd[1561]: cali2a577988a5c: Gained IPv6LL Nov 1 01:36:47.521578 systemd[1]: run-netns-cni\x2d1b1dc8e7\x2d7899\x2df4dc\x2dd3de\x2d31e504c1e30c.mount: Deactivated successfully. 
Nov 1 01:36:47.581449 kubelet[3248]: E1101 01:36:47.581392 3248 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65957f8fc6-h4mgq" podUID="07d4f1da-121e-4217-9b76-751186743f3a" Nov 1 01:36:47.585876 kubelet[3248]: E1101 01:36:47.585834 3248 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-cwdx7" podUID="74c9c676-a5c3-4018-bd2b-2647288efe10" Nov 1 01:36:47.586100 kubelet[3248]: E1101 01:36:47.585900 3248 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65957f8fc6-nf7rw" podUID="5dd05a57-68a7-45f0-82e4-c02ae8d0fe49" Nov 1 01:36:47.645545 kubelet[3248]: I1101 01:36:47.645454 3248 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-ds49c" podStartSLOduration=37.645416651 podStartE2EDuration="37.645416651s" podCreationTimestamp="2025-11-01 01:36:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 01:36:47.632784549 +0000 UTC m=+44.218919343" watchObservedRunningTime="2025-11-01 01:36:47.645416651 +0000 UTC m=+44.231551440" Nov 1 01:36:48.087488 systemd-networkd[1561]: cali900532390ac: Gained IPv6LL Nov 1 01:36:48.215490 systemd-networkd[1561]: calie1473bbcf7f: Gained IPv6LL Nov 1 01:36:48.280409 systemd-networkd[1561]: calic2a4559d7dc: Gained IPv6LL Nov 1 01:36:48.589737 kubelet[3248]: E1101 01:36:48.589649 3248 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65957f8fc6-h4mgq" podUID="07d4f1da-121e-4217-9b76-751186743f3a" Nov 1 01:36:48.590802 kubelet[3248]: E1101 01:36:48.589890 3248 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-cwdx7" podUID="74c9c676-a5c3-4018-bd2b-2647288efe10" Nov 1 01:36:50.459331 containerd[1914]: time="2025-11-01T01:36:50.459188471Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 1 01:36:50.810347 containerd[1914]: time="2025-11-01T01:36:50.810059623Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:36:50.811019 containerd[1914]: time="2025-11-01T01:36:50.810945677Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 1 01:36:50.811053 containerd[1914]: time="2025-11-01T01:36:50.811006430Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 1 01:36:50.811084 kubelet[3248]: E1101 01:36:50.811065 3248 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 01:36:50.811272 kubelet[3248]: E1101 01:36:50.811092 3248 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 01:36:50.811272 kubelet[3248]: E1101 01:36:50.811155 3248 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:580cd2697486438ea258991ade3b4df2,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-w4z8g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-649cb85dc5-gfz8g_calico-system(f0eea4e4-1f5d-465a-aad6-ce4cfbcde2c9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 1 01:36:50.812571 containerd[1914]: time="2025-11-01T01:36:50.812559302Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 1 01:36:51.197512 containerd[1914]: time="2025-11-01T01:36:51.197374682Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:36:51.198217 containerd[1914]: time="2025-11-01T01:36:51.198146661Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 1 01:36:51.198267 containerd[1914]: time="2025-11-01T01:36:51.198204776Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 1 01:36:51.198344 kubelet[3248]: E1101 01:36:51.198319 3248 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 01:36:51.198375 kubelet[3248]: E1101 01:36:51.198352 3248 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: 
not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 01:36:51.198439 kubelet[3248]: E1101 01:36:51.198419 3248 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w4z8g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-649cb85dc5-gfz8g_calico-system(f0eea4e4-1f5d-465a-aad6-ce4cfbcde2c9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 1 01:36:51.199610 kubelet[3248]: E1101 01:36:51.199592 3248 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-649cb85dc5-gfz8g" podUID="f0eea4e4-1f5d-465a-aad6-ce4cfbcde2c9" Nov 1 01:36:56.459432 containerd[1914]: time="2025-11-01T01:36:56.459354950Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 1 01:36:56.804965 containerd[1914]: time="2025-11-01T01:36:56.804721792Z" level=info msg="trying next host - response was http.StatusNotFound" 
host=ghcr.io Nov 1 01:36:56.805684 containerd[1914]: time="2025-11-01T01:36:56.805638894Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 1 01:36:56.805738 containerd[1914]: time="2025-11-01T01:36:56.805711590Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 1 01:36:56.805811 kubelet[3248]: E1101 01:36:56.805785 3248 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 01:36:56.806004 kubelet[3248]: E1101 01:36:56.805822 3248 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 01:36:56.806004 kubelet[3248]: E1101 01:36:56.805893 3248 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-42cch,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-6874c989b8-tpm2w_calico-system(c3dae49a-08cd-4d54-8b47-222aaaea72bd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 1 01:36:56.807045 kubelet[3248]: E1101 01:36:56.807030 3248 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6874c989b8-tpm2w" podUID="c3dae49a-08cd-4d54-8b47-222aaaea72bd" Nov 1 01:37:00.459570 containerd[1914]: time="2025-11-01T01:37:00.459412404Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 01:37:00.811353 containerd[1914]: time="2025-11-01T01:37:00.811103383Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:37:00.812206 containerd[1914]: time="2025-11-01T01:37:00.812124580Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 01:37:00.812206 containerd[1914]: time="2025-11-01T01:37:00.812193513Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 1 01:37:00.812365 kubelet[3248]: E1101 01:37:00.812311 3248 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 01:37:00.812365 kubelet[3248]: E1101 01:37:00.812342 3248 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 01:37:00.812697 
kubelet[3248]: E1101 01:37:00.812519 3248 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7h2qg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-65957f8fc6-h4mgq_calico-apiserver(07d4f1da-121e-4217-9b76-751186743f3a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 01:37:00.812801 containerd[1914]: time="2025-11-01T01:37:00.812585663Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 1 01:37:00.813670 kubelet[3248]: E1101 01:37:00.813652 3248 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65957f8fc6-h4mgq" podUID="07d4f1da-121e-4217-9b76-751186743f3a" Nov 1 01:37:01.147612 containerd[1914]: time="2025-11-01T01:37:01.147362397Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:37:01.148306 containerd[1914]: time="2025-11-01T01:37:01.148207102Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to 
pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 1 01:37:01.148306 containerd[1914]: time="2025-11-01T01:37:01.148257874Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 1 01:37:01.148438 kubelet[3248]: E1101 01:37:01.148405 3248 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 01:37:01.148467 kubelet[3248]: E1101 01:37:01.148441 3248 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 01:37:01.148564 kubelet[3248]: E1101 01:37:01.148508 3248 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j5pvx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-9grzl_calico-system(f1594ca4-f4b9-4bd5-a5ea-30623fb4e42b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 1 01:37:01.151123 containerd[1914]: time="2025-11-01T01:37:01.151112419Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 1 01:37:01.491795 containerd[1914]: time="2025-11-01T01:37:01.491662372Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:37:01.492706 containerd[1914]: time="2025-11-01T01:37:01.492648734Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 1 01:37:01.492743 containerd[1914]: time="2025-11-01T01:37:01.492713041Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 1 01:37:01.492867 kubelet[3248]: E1101 01:37:01.492821 3248 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 01:37:01.492867 kubelet[3248]: E1101 01:37:01.492853 3248 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 01:37:01.493031 kubelet[3248]: E1101 01:37:01.492984 3248 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j5pvx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-9grzl_calico-system(f1594ca4-f4b9-4bd5-a5ea-30623fb4e42b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 1 01:37:01.493093 containerd[1914]: time="2025-11-01T01:37:01.493019096Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 1 01:37:01.494156 kubelet[3248]: E1101 01:37:01.494113 3248 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9grzl" podUID="f1594ca4-f4b9-4bd5-a5ea-30623fb4e42b" Nov 1 01:37:01.849096 containerd[1914]: time="2025-11-01T01:37:01.848874974Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:37:01.849888 containerd[1914]: time="2025-11-01T01:37:01.849806285Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 1 01:37:01.849927 containerd[1914]: time="2025-11-01T01:37:01.849894826Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 1 01:37:01.850058 kubelet[3248]: E1101 01:37:01.849998 3248 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 01:37:01.850058 kubelet[3248]: E1101 01:37:01.850032 3248 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 01:37:01.850423 kubelet[3248]: E1101 01:37:01.850231 3248 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lmc4z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-cwdx7_calico-system(74c9c676-a5c3-4018-bd2b-2647288efe10): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 1 01:37:01.850553 containerd[1914]: time="2025-11-01T01:37:01.850245610Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 01:37:01.851433 kubelet[3248]: E1101 01:37:01.851399 3248 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-cwdx7" podUID="74c9c676-a5c3-4018-bd2b-2647288efe10" Nov 1 01:37:02.227662 containerd[1914]: time="2025-11-01T01:37:02.227588415Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:37:02.228339 containerd[1914]: time="2025-11-01T01:37:02.228254410Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 01:37:02.228339 containerd[1914]: time="2025-11-01T01:37:02.228328040Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 1 01:37:02.228524 kubelet[3248]: E1101 01:37:02.228472 3248 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 01:37:02.228524 kubelet[3248]: E1101 01:37:02.228501 3248 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 01:37:02.228630 kubelet[3248]: E1101 01:37:02.228572 3248 kuberuntime_manager.go:1341] "Unhandled Error" 
err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-89872,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-65957f8fc6-nf7rw_calico-apiserver(5dd05a57-68a7-45f0-82e4-c02ae8d0fe49): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 01:37:02.229983 kubelet[3248]: E1101 01:37:02.229918 3248 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65957f8fc6-nf7rw" podUID="5dd05a57-68a7-45f0-82e4-c02ae8d0fe49" Nov 1 01:37:03.457115 containerd[1914]: time="2025-11-01T01:37:03.456535920Z" level=info msg="StopPodSandbox for \"02ff93888d8d2b69a48d44c6a11c9b468a108c5e24e8a3c2752407e71187392f\"" Nov 1 01:37:03.491610 containerd[1914]: 2025-11-01 01:37:03.474 [WARNING][6397] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="02ff93888d8d2b69a48d44c6a11c9b468a108c5e24e8a3c2752407e71187392f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--4452d0b810-k8s-csi--node--driver--9grzl-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f1594ca4-f4b9-4bd5-a5ea-30623fb4e42b", ResourceVersion:"946", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 36, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-4452d0b810", ContainerID:"e2d8159f8ecc7d66083f06acab705bbd71e2e8c20461d75f0974e216c58e33ee", Pod:"csi-node-driver-9grzl", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.76.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia192a6aa6df", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:37:03.491610 containerd[1914]: 2025-11-01 01:37:03.474 [INFO][6397] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="02ff93888d8d2b69a48d44c6a11c9b468a108c5e24e8a3c2752407e71187392f" Nov 1 01:37:03.491610 containerd[1914]: 2025-11-01 01:37:03.474 [INFO][6397] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="02ff93888d8d2b69a48d44c6a11c9b468a108c5e24e8a3c2752407e71187392f" iface="eth0" netns="" Nov 1 01:37:03.491610 containerd[1914]: 2025-11-01 01:37:03.474 [INFO][6397] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="02ff93888d8d2b69a48d44c6a11c9b468a108c5e24e8a3c2752407e71187392f" Nov 1 01:37:03.491610 containerd[1914]: 2025-11-01 01:37:03.474 [INFO][6397] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="02ff93888d8d2b69a48d44c6a11c9b468a108c5e24e8a3c2752407e71187392f" Nov 1 01:37:03.491610 containerd[1914]: 2025-11-01 01:37:03.484 [INFO][6413] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="02ff93888d8d2b69a48d44c6a11c9b468a108c5e24e8a3c2752407e71187392f" HandleID="k8s-pod-network.02ff93888d8d2b69a48d44c6a11c9b468a108c5e24e8a3c2752407e71187392f" Workload="ci--4081.3.6--n--4452d0b810-k8s-csi--node--driver--9grzl-eth0" Nov 1 01:37:03.491610 containerd[1914]: 2025-11-01 01:37:03.484 [INFO][6413] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:37:03.491610 containerd[1914]: 2025-11-01 01:37:03.484 [INFO][6413] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 01:37:03.491610 containerd[1914]: 2025-11-01 01:37:03.488 [WARNING][6413] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="02ff93888d8d2b69a48d44c6a11c9b468a108c5e24e8a3c2752407e71187392f" HandleID="k8s-pod-network.02ff93888d8d2b69a48d44c6a11c9b468a108c5e24e8a3c2752407e71187392f" Workload="ci--4081.3.6--n--4452d0b810-k8s-csi--node--driver--9grzl-eth0" Nov 1 01:37:03.491610 containerd[1914]: 2025-11-01 01:37:03.488 [INFO][6413] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="02ff93888d8d2b69a48d44c6a11c9b468a108c5e24e8a3c2752407e71187392f" HandleID="k8s-pod-network.02ff93888d8d2b69a48d44c6a11c9b468a108c5e24e8a3c2752407e71187392f" Workload="ci--4081.3.6--n--4452d0b810-k8s-csi--node--driver--9grzl-eth0" Nov 1 01:37:03.491610 containerd[1914]: 2025-11-01 01:37:03.490 [INFO][6413] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:37:03.491610 containerd[1914]: 2025-11-01 01:37:03.490 [INFO][6397] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="02ff93888d8d2b69a48d44c6a11c9b468a108c5e24e8a3c2752407e71187392f" Nov 1 01:37:03.491933 containerd[1914]: time="2025-11-01T01:37:03.491628959Z" level=info msg="TearDown network for sandbox \"02ff93888d8d2b69a48d44c6a11c9b468a108c5e24e8a3c2752407e71187392f\" successfully" Nov 1 01:37:03.491933 containerd[1914]: time="2025-11-01T01:37:03.491646370Z" level=info msg="StopPodSandbox for \"02ff93888d8d2b69a48d44c6a11c9b468a108c5e24e8a3c2752407e71187392f\" returns successfully" Nov 1 01:37:03.491933 containerd[1914]: time="2025-11-01T01:37:03.491870352Z" level=info msg="RemovePodSandbox for \"02ff93888d8d2b69a48d44c6a11c9b468a108c5e24e8a3c2752407e71187392f\"" Nov 1 01:37:03.491933 containerd[1914]: time="2025-11-01T01:37:03.491888946Z" level=info msg="Forcibly stopping sandbox \"02ff93888d8d2b69a48d44c6a11c9b468a108c5e24e8a3c2752407e71187392f\"" Nov 1 01:37:03.524597 containerd[1914]: 2025-11-01 01:37:03.508 [WARNING][6440] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="02ff93888d8d2b69a48d44c6a11c9b468a108c5e24e8a3c2752407e71187392f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--4452d0b810-k8s-csi--node--driver--9grzl-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f1594ca4-f4b9-4bd5-a5ea-30623fb4e42b", ResourceVersion:"946", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 36, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-4452d0b810", ContainerID:"e2d8159f8ecc7d66083f06acab705bbd71e2e8c20461d75f0974e216c58e33ee", Pod:"csi-node-driver-9grzl", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.76.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia192a6aa6df", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:37:03.524597 containerd[1914]: 2025-11-01 01:37:03.508 [INFO][6440] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="02ff93888d8d2b69a48d44c6a11c9b468a108c5e24e8a3c2752407e71187392f" Nov 1 01:37:03.524597 containerd[1914]: 2025-11-01 01:37:03.508 [INFO][6440] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="02ff93888d8d2b69a48d44c6a11c9b468a108c5e24e8a3c2752407e71187392f" iface="eth0" netns="" Nov 1 01:37:03.524597 containerd[1914]: 2025-11-01 01:37:03.508 [INFO][6440] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="02ff93888d8d2b69a48d44c6a11c9b468a108c5e24e8a3c2752407e71187392f" Nov 1 01:37:03.524597 containerd[1914]: 2025-11-01 01:37:03.508 [INFO][6440] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="02ff93888d8d2b69a48d44c6a11c9b468a108c5e24e8a3c2752407e71187392f" Nov 1 01:37:03.524597 containerd[1914]: 2025-11-01 01:37:03.518 [INFO][6455] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="02ff93888d8d2b69a48d44c6a11c9b468a108c5e24e8a3c2752407e71187392f" HandleID="k8s-pod-network.02ff93888d8d2b69a48d44c6a11c9b468a108c5e24e8a3c2752407e71187392f" Workload="ci--4081.3.6--n--4452d0b810-k8s-csi--node--driver--9grzl-eth0" Nov 1 01:37:03.524597 containerd[1914]: 2025-11-01 01:37:03.518 [INFO][6455] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:37:03.524597 containerd[1914]: 2025-11-01 01:37:03.518 [INFO][6455] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 01:37:03.524597 containerd[1914]: 2025-11-01 01:37:03.522 [WARNING][6455] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="02ff93888d8d2b69a48d44c6a11c9b468a108c5e24e8a3c2752407e71187392f" HandleID="k8s-pod-network.02ff93888d8d2b69a48d44c6a11c9b468a108c5e24e8a3c2752407e71187392f" Workload="ci--4081.3.6--n--4452d0b810-k8s-csi--node--driver--9grzl-eth0" Nov 1 01:37:03.524597 containerd[1914]: 2025-11-01 01:37:03.522 [INFO][6455] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="02ff93888d8d2b69a48d44c6a11c9b468a108c5e24e8a3c2752407e71187392f" HandleID="k8s-pod-network.02ff93888d8d2b69a48d44c6a11c9b468a108c5e24e8a3c2752407e71187392f" Workload="ci--4081.3.6--n--4452d0b810-k8s-csi--node--driver--9grzl-eth0" Nov 1 01:37:03.524597 containerd[1914]: 2025-11-01 01:37:03.523 [INFO][6455] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:37:03.524597 containerd[1914]: 2025-11-01 01:37:03.523 [INFO][6440] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="02ff93888d8d2b69a48d44c6a11c9b468a108c5e24e8a3c2752407e71187392f" Nov 1 01:37:03.524597 containerd[1914]: time="2025-11-01T01:37:03.524588427Z" level=info msg="TearDown network for sandbox \"02ff93888d8d2b69a48d44c6a11c9b468a108c5e24e8a3c2752407e71187392f\" successfully" Nov 1 01:37:03.526483 containerd[1914]: time="2025-11-01T01:37:03.526442193Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"02ff93888d8d2b69a48d44c6a11c9b468a108c5e24e8a3c2752407e71187392f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 1 01:37:03.526483 containerd[1914]: time="2025-11-01T01:37:03.526470329Z" level=info msg="RemovePodSandbox \"02ff93888d8d2b69a48d44c6a11c9b468a108c5e24e8a3c2752407e71187392f\" returns successfully" Nov 1 01:37:03.526793 containerd[1914]: time="2025-11-01T01:37:03.526743962Z" level=info msg="StopPodSandbox for \"00d443e4266c162d13e4f030a6ca27f0684acf44857f4446caac3b06900c9dec\"" Nov 1 01:37:03.561875 containerd[1914]: 2025-11-01 01:37:03.543 [WARNING][6478] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="00d443e4266c162d13e4f030a6ca27f0684acf44857f4446caac3b06900c9dec" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--4452d0b810-k8s-coredns--668d6bf9bc--ds49c-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"7fd307cd-9e0a-4b4c-8e92-d2d598a36e63", ResourceVersion:"1005", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 36, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-4452d0b810", ContainerID:"c58f405e84d00a9553d48fa0f4e4e0b61386a2096daf95fa6441b07ae8439373", Pod:"coredns-668d6bf9bc-ds49c", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.76.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali900532390ac", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:37:03.561875 containerd[1914]: 2025-11-01 01:37:03.543 [INFO][6478] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="00d443e4266c162d13e4f030a6ca27f0684acf44857f4446caac3b06900c9dec" Nov 1 01:37:03.561875 containerd[1914]: 2025-11-01 01:37:03.543 [INFO][6478] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="00d443e4266c162d13e4f030a6ca27f0684acf44857f4446caac3b06900c9dec" iface="eth0" netns="" Nov 1 01:37:03.561875 containerd[1914]: 2025-11-01 01:37:03.543 [INFO][6478] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="00d443e4266c162d13e4f030a6ca27f0684acf44857f4446caac3b06900c9dec" Nov 1 01:37:03.561875 containerd[1914]: 2025-11-01 01:37:03.543 [INFO][6478] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="00d443e4266c162d13e4f030a6ca27f0684acf44857f4446caac3b06900c9dec" Nov 1 01:37:03.561875 containerd[1914]: 2025-11-01 01:37:03.554 [INFO][6493] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="00d443e4266c162d13e4f030a6ca27f0684acf44857f4446caac3b06900c9dec" HandleID="k8s-pod-network.00d443e4266c162d13e4f030a6ca27f0684acf44857f4446caac3b06900c9dec" Workload="ci--4081.3.6--n--4452d0b810-k8s-coredns--668d6bf9bc--ds49c-eth0" Nov 1 01:37:03.561875 containerd[1914]: 2025-11-01 01:37:03.554 [INFO][6493] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:37:03.561875 containerd[1914]: 2025-11-01 01:37:03.554 [INFO][6493] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 01:37:03.561875 containerd[1914]: 2025-11-01 01:37:03.558 [WARNING][6493] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="00d443e4266c162d13e4f030a6ca27f0684acf44857f4446caac3b06900c9dec" HandleID="k8s-pod-network.00d443e4266c162d13e4f030a6ca27f0684acf44857f4446caac3b06900c9dec" Workload="ci--4081.3.6--n--4452d0b810-k8s-coredns--668d6bf9bc--ds49c-eth0" Nov 1 01:37:03.561875 containerd[1914]: 2025-11-01 01:37:03.558 [INFO][6493] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="00d443e4266c162d13e4f030a6ca27f0684acf44857f4446caac3b06900c9dec" HandleID="k8s-pod-network.00d443e4266c162d13e4f030a6ca27f0684acf44857f4446caac3b06900c9dec" Workload="ci--4081.3.6--n--4452d0b810-k8s-coredns--668d6bf9bc--ds49c-eth0" Nov 1 01:37:03.561875 containerd[1914]: 2025-11-01 01:37:03.560 [INFO][6493] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:37:03.561875 containerd[1914]: 2025-11-01 01:37:03.561 [INFO][6478] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="00d443e4266c162d13e4f030a6ca27f0684acf44857f4446caac3b06900c9dec" Nov 1 01:37:03.562284 containerd[1914]: time="2025-11-01T01:37:03.561876089Z" level=info msg="TearDown network for sandbox \"00d443e4266c162d13e4f030a6ca27f0684acf44857f4446caac3b06900c9dec\" successfully" Nov 1 01:37:03.562284 containerd[1914]: time="2025-11-01T01:37:03.561893502Z" level=info msg="StopPodSandbox for \"00d443e4266c162d13e4f030a6ca27f0684acf44857f4446caac3b06900c9dec\" returns successfully" Nov 1 01:37:03.562284 containerd[1914]: time="2025-11-01T01:37:03.562223224Z" level=info msg="RemovePodSandbox for \"00d443e4266c162d13e4f030a6ca27f0684acf44857f4446caac3b06900c9dec\"" Nov 1 01:37:03.562284 containerd[1914]: time="2025-11-01T01:37:03.562245873Z" level=info msg="Forcibly stopping sandbox \"00d443e4266c162d13e4f030a6ca27f0684acf44857f4446caac3b06900c9dec\"" Nov 1 01:37:03.608616 containerd[1914]: 2025-11-01 01:37:03.585 [WARNING][6519] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="00d443e4266c162d13e4f030a6ca27f0684acf44857f4446caac3b06900c9dec" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--4452d0b810-k8s-coredns--668d6bf9bc--ds49c-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"7fd307cd-9e0a-4b4c-8e92-d2d598a36e63", ResourceVersion:"1005", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 36, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-4452d0b810", ContainerID:"c58f405e84d00a9553d48fa0f4e4e0b61386a2096daf95fa6441b07ae8439373", Pod:"coredns-668d6bf9bc-ds49c", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.76.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali900532390ac", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:37:03.608616 containerd[1914]: 2025-11-01 01:37:03.586 [INFO][6519] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="00d443e4266c162d13e4f030a6ca27f0684acf44857f4446caac3b06900c9dec" Nov 1 01:37:03.608616 containerd[1914]: 2025-11-01 01:37:03.586 [INFO][6519] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="00d443e4266c162d13e4f030a6ca27f0684acf44857f4446caac3b06900c9dec" iface="eth0" netns="" Nov 1 01:37:03.608616 containerd[1914]: 2025-11-01 01:37:03.586 [INFO][6519] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="00d443e4266c162d13e4f030a6ca27f0684acf44857f4446caac3b06900c9dec" Nov 1 01:37:03.608616 containerd[1914]: 2025-11-01 01:37:03.586 [INFO][6519] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="00d443e4266c162d13e4f030a6ca27f0684acf44857f4446caac3b06900c9dec" Nov 1 01:37:03.608616 containerd[1914]: 2025-11-01 01:37:03.600 [INFO][6536] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="00d443e4266c162d13e4f030a6ca27f0684acf44857f4446caac3b06900c9dec" HandleID="k8s-pod-network.00d443e4266c162d13e4f030a6ca27f0684acf44857f4446caac3b06900c9dec" Workload="ci--4081.3.6--n--4452d0b810-k8s-coredns--668d6bf9bc--ds49c-eth0" Nov 1 01:37:03.608616 containerd[1914]: 2025-11-01 01:37:03.600 [INFO][6536] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:37:03.608616 containerd[1914]: 2025-11-01 01:37:03.600 [INFO][6536] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 01:37:03.608616 containerd[1914]: 2025-11-01 01:37:03.605 [WARNING][6536] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="00d443e4266c162d13e4f030a6ca27f0684acf44857f4446caac3b06900c9dec" HandleID="k8s-pod-network.00d443e4266c162d13e4f030a6ca27f0684acf44857f4446caac3b06900c9dec" Workload="ci--4081.3.6--n--4452d0b810-k8s-coredns--668d6bf9bc--ds49c-eth0" Nov 1 01:37:03.608616 containerd[1914]: 2025-11-01 01:37:03.605 [INFO][6536] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="00d443e4266c162d13e4f030a6ca27f0684acf44857f4446caac3b06900c9dec" HandleID="k8s-pod-network.00d443e4266c162d13e4f030a6ca27f0684acf44857f4446caac3b06900c9dec" Workload="ci--4081.3.6--n--4452d0b810-k8s-coredns--668d6bf9bc--ds49c-eth0" Nov 1 01:37:03.608616 containerd[1914]: 2025-11-01 01:37:03.606 [INFO][6536] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:37:03.608616 containerd[1914]: 2025-11-01 01:37:03.607 [INFO][6519] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="00d443e4266c162d13e4f030a6ca27f0684acf44857f4446caac3b06900c9dec" Nov 1 01:37:03.609327 containerd[1914]: time="2025-11-01T01:37:03.608649693Z" level=info msg="TearDown network for sandbox \"00d443e4266c162d13e4f030a6ca27f0684acf44857f4446caac3b06900c9dec\" successfully" Nov 1 01:37:03.610543 containerd[1914]: time="2025-11-01T01:37:03.610526472Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"00d443e4266c162d13e4f030a6ca27f0684acf44857f4446caac3b06900c9dec\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 1 01:37:03.610582 containerd[1914]: time="2025-11-01T01:37:03.610559097Z" level=info msg="RemovePodSandbox \"00d443e4266c162d13e4f030a6ca27f0684acf44857f4446caac3b06900c9dec\" returns successfully" Nov 1 01:37:03.610872 containerd[1914]: time="2025-11-01T01:37:03.610860805Z" level=info msg="StopPodSandbox for \"3b7174d8fff7d51186e61fac7be0254fb60c9dbaf23e4f5c08e0728fad3d9fdc\"" Nov 1 01:37:03.645512 containerd[1914]: 2025-11-01 01:37:03.628 [WARNING][6561] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3b7174d8fff7d51186e61fac7be0254fb60c9dbaf23e4f5c08e0728fad3d9fdc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--4452d0b810-k8s-calico--apiserver--65957f8fc6--h4mgq-eth0", GenerateName:"calico-apiserver-65957f8fc6-", Namespace:"calico-apiserver", SelfLink:"", UID:"07d4f1da-121e-4217-9b76-751186743f3a", ResourceVersion:"1016", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 36, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"65957f8fc6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-4452d0b810", ContainerID:"3a99d77352ac54671872f909348a7ebf71c72fbe975bfa893b2442a1cddc52dc", Pod:"calico-apiserver-65957f8fc6-h4mgq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.76.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic2a4559d7dc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:37:03.645512 containerd[1914]: 2025-11-01 01:37:03.628 [INFO][6561] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3b7174d8fff7d51186e61fac7be0254fb60c9dbaf23e4f5c08e0728fad3d9fdc" Nov 1 01:37:03.645512 containerd[1914]: 2025-11-01 01:37:03.628 [INFO][6561] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3b7174d8fff7d51186e61fac7be0254fb60c9dbaf23e4f5c08e0728fad3d9fdc" iface="eth0" netns="" Nov 1 01:37:03.645512 containerd[1914]: 2025-11-01 01:37:03.628 [INFO][6561] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3b7174d8fff7d51186e61fac7be0254fb60c9dbaf23e4f5c08e0728fad3d9fdc" Nov 1 01:37:03.645512 containerd[1914]: 2025-11-01 01:37:03.628 [INFO][6561] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3b7174d8fff7d51186e61fac7be0254fb60c9dbaf23e4f5c08e0728fad3d9fdc" Nov 1 01:37:03.645512 containerd[1914]: 2025-11-01 01:37:03.638 [INFO][6578] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="3b7174d8fff7d51186e61fac7be0254fb60c9dbaf23e4f5c08e0728fad3d9fdc" HandleID="k8s-pod-network.3b7174d8fff7d51186e61fac7be0254fb60c9dbaf23e4f5c08e0728fad3d9fdc" Workload="ci--4081.3.6--n--4452d0b810-k8s-calico--apiserver--65957f8fc6--h4mgq-eth0" Nov 1 01:37:03.645512 containerd[1914]: 2025-11-01 01:37:03.638 [INFO][6578] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:37:03.645512 containerd[1914]: 2025-11-01 01:37:03.638 [INFO][6578] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 01:37:03.645512 containerd[1914]: 2025-11-01 01:37:03.642 [WARNING][6578] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3b7174d8fff7d51186e61fac7be0254fb60c9dbaf23e4f5c08e0728fad3d9fdc" HandleID="k8s-pod-network.3b7174d8fff7d51186e61fac7be0254fb60c9dbaf23e4f5c08e0728fad3d9fdc" Workload="ci--4081.3.6--n--4452d0b810-k8s-calico--apiserver--65957f8fc6--h4mgq-eth0" Nov 1 01:37:03.645512 containerd[1914]: 2025-11-01 01:37:03.642 [INFO][6578] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="3b7174d8fff7d51186e61fac7be0254fb60c9dbaf23e4f5c08e0728fad3d9fdc" HandleID="k8s-pod-network.3b7174d8fff7d51186e61fac7be0254fb60c9dbaf23e4f5c08e0728fad3d9fdc" Workload="ci--4081.3.6--n--4452d0b810-k8s-calico--apiserver--65957f8fc6--h4mgq-eth0" Nov 1 01:37:03.645512 containerd[1914]: 2025-11-01 01:37:03.643 [INFO][6578] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:37:03.645512 containerd[1914]: 2025-11-01 01:37:03.644 [INFO][6561] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3b7174d8fff7d51186e61fac7be0254fb60c9dbaf23e4f5c08e0728fad3d9fdc" Nov 1 01:37:03.645846 containerd[1914]: time="2025-11-01T01:37:03.645528642Z" level=info msg="TearDown network for sandbox \"3b7174d8fff7d51186e61fac7be0254fb60c9dbaf23e4f5c08e0728fad3d9fdc\" successfully" Nov 1 01:37:03.645846 containerd[1914]: time="2025-11-01T01:37:03.645542708Z" level=info msg="StopPodSandbox for \"3b7174d8fff7d51186e61fac7be0254fb60c9dbaf23e4f5c08e0728fad3d9fdc\" returns successfully" Nov 1 01:37:03.645846 containerd[1914]: time="2025-11-01T01:37:03.645713473Z" level=info msg="RemovePodSandbox for \"3b7174d8fff7d51186e61fac7be0254fb60c9dbaf23e4f5c08e0728fad3d9fdc\"" Nov 1 01:37:03.645846 containerd[1914]: time="2025-11-01T01:37:03.645736572Z" level=info msg="Forcibly stopping sandbox \"3b7174d8fff7d51186e61fac7be0254fb60c9dbaf23e4f5c08e0728fad3d9fdc\"" Nov 1 01:37:03.683019 containerd[1914]: 2025-11-01 01:37:03.665 [WARNING][6603] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3b7174d8fff7d51186e61fac7be0254fb60c9dbaf23e4f5c08e0728fad3d9fdc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--4452d0b810-k8s-calico--apiserver--65957f8fc6--h4mgq-eth0", GenerateName:"calico-apiserver-65957f8fc6-", Namespace:"calico-apiserver", SelfLink:"", UID:"07d4f1da-121e-4217-9b76-751186743f3a", ResourceVersion:"1016", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 36, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"65957f8fc6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-4452d0b810", ContainerID:"3a99d77352ac54671872f909348a7ebf71c72fbe975bfa893b2442a1cddc52dc", Pod:"calico-apiserver-65957f8fc6-h4mgq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.76.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic2a4559d7dc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:37:03.683019 containerd[1914]: 2025-11-01 01:37:03.666 [INFO][6603] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3b7174d8fff7d51186e61fac7be0254fb60c9dbaf23e4f5c08e0728fad3d9fdc" Nov 1 01:37:03.683019 containerd[1914]: 2025-11-01 01:37:03.666 [INFO][6603] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3b7174d8fff7d51186e61fac7be0254fb60c9dbaf23e4f5c08e0728fad3d9fdc" iface="eth0" netns="" Nov 1 01:37:03.683019 containerd[1914]: 2025-11-01 01:37:03.666 [INFO][6603] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3b7174d8fff7d51186e61fac7be0254fb60c9dbaf23e4f5c08e0728fad3d9fdc" Nov 1 01:37:03.683019 containerd[1914]: 2025-11-01 01:37:03.666 [INFO][6603] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3b7174d8fff7d51186e61fac7be0254fb60c9dbaf23e4f5c08e0728fad3d9fdc" Nov 1 01:37:03.683019 containerd[1914]: 2025-11-01 01:37:03.676 [INFO][6620] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="3b7174d8fff7d51186e61fac7be0254fb60c9dbaf23e4f5c08e0728fad3d9fdc" HandleID="k8s-pod-network.3b7174d8fff7d51186e61fac7be0254fb60c9dbaf23e4f5c08e0728fad3d9fdc" Workload="ci--4081.3.6--n--4452d0b810-k8s-calico--apiserver--65957f8fc6--h4mgq-eth0" Nov 1 01:37:03.683019 containerd[1914]: 2025-11-01 01:37:03.676 [INFO][6620] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:37:03.683019 containerd[1914]: 2025-11-01 01:37:03.676 [INFO][6620] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 01:37:03.683019 containerd[1914]: 2025-11-01 01:37:03.680 [WARNING][6620] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3b7174d8fff7d51186e61fac7be0254fb60c9dbaf23e4f5c08e0728fad3d9fdc" HandleID="k8s-pod-network.3b7174d8fff7d51186e61fac7be0254fb60c9dbaf23e4f5c08e0728fad3d9fdc" Workload="ci--4081.3.6--n--4452d0b810-k8s-calico--apiserver--65957f8fc6--h4mgq-eth0" Nov 1 01:37:03.683019 containerd[1914]: 2025-11-01 01:37:03.680 [INFO][6620] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="3b7174d8fff7d51186e61fac7be0254fb60c9dbaf23e4f5c08e0728fad3d9fdc" HandleID="k8s-pod-network.3b7174d8fff7d51186e61fac7be0254fb60c9dbaf23e4f5c08e0728fad3d9fdc" Workload="ci--4081.3.6--n--4452d0b810-k8s-calico--apiserver--65957f8fc6--h4mgq-eth0" Nov 1 01:37:03.683019 containerd[1914]: 2025-11-01 01:37:03.681 [INFO][6620] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:37:03.683019 containerd[1914]: 2025-11-01 01:37:03.682 [INFO][6603] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3b7174d8fff7d51186e61fac7be0254fb60c9dbaf23e4f5c08e0728fad3d9fdc" Nov 1 01:37:03.683318 containerd[1914]: time="2025-11-01T01:37:03.683041406Z" level=info msg="TearDown network for sandbox \"3b7174d8fff7d51186e61fac7be0254fb60c9dbaf23e4f5c08e0728fad3d9fdc\" successfully" Nov 1 01:37:03.684402 containerd[1914]: time="2025-11-01T01:37:03.684389733Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3b7174d8fff7d51186e61fac7be0254fb60c9dbaf23e4f5c08e0728fad3d9fdc\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 1 01:37:03.684433 containerd[1914]: time="2025-11-01T01:37:03.684417295Z" level=info msg="RemovePodSandbox \"3b7174d8fff7d51186e61fac7be0254fb60c9dbaf23e4f5c08e0728fad3d9fdc\" returns successfully" Nov 1 01:37:03.684712 containerd[1914]: time="2025-11-01T01:37:03.684691060Z" level=info msg="StopPodSandbox for \"662e7d58e74330bf48ecbb168fa034d30f2558a48b75c67b66bc989ff33cdffd\"" Nov 1 01:37:03.718716 containerd[1914]: 2025-11-01 01:37:03.702 [WARNING][6646] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="662e7d58e74330bf48ecbb168fa034d30f2558a48b75c67b66bc989ff33cdffd" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--4452d0b810-k8s-goldmane--666569f655--cwdx7-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"74c9c676-a5c3-4018-bd2b-2647288efe10", ResourceVersion:"1019", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 36, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-4452d0b810", ContainerID:"6157dfce738c89d570be9014e8169269d5a3312d6f057d000da02670980eec93", Pod:"goldmane-666569f655-cwdx7", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.76.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calie1473bbcf7f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:37:03.718716 containerd[1914]: 2025-11-01 01:37:03.702 [INFO][6646] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="662e7d58e74330bf48ecbb168fa034d30f2558a48b75c67b66bc989ff33cdffd" Nov 1 01:37:03.718716 containerd[1914]: 2025-11-01 01:37:03.702 [INFO][6646] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="662e7d58e74330bf48ecbb168fa034d30f2558a48b75c67b66bc989ff33cdffd" iface="eth0" netns="" Nov 1 01:37:03.718716 containerd[1914]: 2025-11-01 01:37:03.702 [INFO][6646] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="662e7d58e74330bf48ecbb168fa034d30f2558a48b75c67b66bc989ff33cdffd" Nov 1 01:37:03.718716 containerd[1914]: 2025-11-01 01:37:03.702 [INFO][6646] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="662e7d58e74330bf48ecbb168fa034d30f2558a48b75c67b66bc989ff33cdffd" Nov 1 01:37:03.718716 containerd[1914]: 2025-11-01 01:37:03.712 [INFO][6663] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="662e7d58e74330bf48ecbb168fa034d30f2558a48b75c67b66bc989ff33cdffd" HandleID="k8s-pod-network.662e7d58e74330bf48ecbb168fa034d30f2558a48b75c67b66bc989ff33cdffd" Workload="ci--4081.3.6--n--4452d0b810-k8s-goldmane--666569f655--cwdx7-eth0" Nov 1 01:37:03.718716 containerd[1914]: 2025-11-01 01:37:03.712 [INFO][6663] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:37:03.718716 containerd[1914]: 2025-11-01 01:37:03.712 [INFO][6663] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 01:37:03.718716 containerd[1914]: 2025-11-01 01:37:03.716 [WARNING][6663] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="662e7d58e74330bf48ecbb168fa034d30f2558a48b75c67b66bc989ff33cdffd" HandleID="k8s-pod-network.662e7d58e74330bf48ecbb168fa034d30f2558a48b75c67b66bc989ff33cdffd" Workload="ci--4081.3.6--n--4452d0b810-k8s-goldmane--666569f655--cwdx7-eth0" Nov 1 01:37:03.718716 containerd[1914]: 2025-11-01 01:37:03.716 [INFO][6663] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="662e7d58e74330bf48ecbb168fa034d30f2558a48b75c67b66bc989ff33cdffd" HandleID="k8s-pod-network.662e7d58e74330bf48ecbb168fa034d30f2558a48b75c67b66bc989ff33cdffd" Workload="ci--4081.3.6--n--4452d0b810-k8s-goldmane--666569f655--cwdx7-eth0" Nov 1 01:37:03.718716 containerd[1914]: 2025-11-01 01:37:03.717 [INFO][6663] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:37:03.718716 containerd[1914]: 2025-11-01 01:37:03.717 [INFO][6646] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="662e7d58e74330bf48ecbb168fa034d30f2558a48b75c67b66bc989ff33cdffd" Nov 1 01:37:03.718716 containerd[1914]: time="2025-11-01T01:37:03.718680132Z" level=info msg="TearDown network for sandbox \"662e7d58e74330bf48ecbb168fa034d30f2558a48b75c67b66bc989ff33cdffd\" successfully" Nov 1 01:37:03.718716 containerd[1914]: time="2025-11-01T01:37:03.718694788Z" level=info msg="StopPodSandbox for \"662e7d58e74330bf48ecbb168fa034d30f2558a48b75c67b66bc989ff33cdffd\" returns successfully" Nov 1 01:37:03.719026 containerd[1914]: time="2025-11-01T01:37:03.718991675Z" level=info msg="RemovePodSandbox for \"662e7d58e74330bf48ecbb168fa034d30f2558a48b75c67b66bc989ff33cdffd\"" Nov 1 01:37:03.719026 containerd[1914]: time="2025-11-01T01:37:03.719007384Z" level=info msg="Forcibly stopping sandbox \"662e7d58e74330bf48ecbb168fa034d30f2558a48b75c67b66bc989ff33cdffd\"" Nov 1 01:37:03.754607 containerd[1914]: 2025-11-01 01:37:03.736 [WARNING][6685] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="662e7d58e74330bf48ecbb168fa034d30f2558a48b75c67b66bc989ff33cdffd" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--4452d0b810-k8s-goldmane--666569f655--cwdx7-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"74c9c676-a5c3-4018-bd2b-2647288efe10", ResourceVersion:"1019", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 36, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-4452d0b810", ContainerID:"6157dfce738c89d570be9014e8169269d5a3312d6f057d000da02670980eec93", Pod:"goldmane-666569f655-cwdx7", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.76.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calie1473bbcf7f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:37:03.754607 containerd[1914]: 2025-11-01 01:37:03.736 [INFO][6685] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="662e7d58e74330bf48ecbb168fa034d30f2558a48b75c67b66bc989ff33cdffd" Nov 1 01:37:03.754607 containerd[1914]: 2025-11-01 01:37:03.736 [INFO][6685] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="662e7d58e74330bf48ecbb168fa034d30f2558a48b75c67b66bc989ff33cdffd" iface="eth0" netns="" Nov 1 01:37:03.754607 containerd[1914]: 2025-11-01 01:37:03.736 [INFO][6685] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="662e7d58e74330bf48ecbb168fa034d30f2558a48b75c67b66bc989ff33cdffd" Nov 1 01:37:03.754607 containerd[1914]: 2025-11-01 01:37:03.736 [INFO][6685] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="662e7d58e74330bf48ecbb168fa034d30f2558a48b75c67b66bc989ff33cdffd" Nov 1 01:37:03.754607 containerd[1914]: 2025-11-01 01:37:03.746 [INFO][6702] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="662e7d58e74330bf48ecbb168fa034d30f2558a48b75c67b66bc989ff33cdffd" HandleID="k8s-pod-network.662e7d58e74330bf48ecbb168fa034d30f2558a48b75c67b66bc989ff33cdffd" Workload="ci--4081.3.6--n--4452d0b810-k8s-goldmane--666569f655--cwdx7-eth0" Nov 1 01:37:03.754607 containerd[1914]: 2025-11-01 01:37:03.746 [INFO][6702] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:37:03.754607 containerd[1914]: 2025-11-01 01:37:03.746 [INFO][6702] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 01:37:03.754607 containerd[1914]: 2025-11-01 01:37:03.751 [WARNING][6702] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="662e7d58e74330bf48ecbb168fa034d30f2558a48b75c67b66bc989ff33cdffd" HandleID="k8s-pod-network.662e7d58e74330bf48ecbb168fa034d30f2558a48b75c67b66bc989ff33cdffd" Workload="ci--4081.3.6--n--4452d0b810-k8s-goldmane--666569f655--cwdx7-eth0" Nov 1 01:37:03.754607 containerd[1914]: 2025-11-01 01:37:03.751 [INFO][6702] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="662e7d58e74330bf48ecbb168fa034d30f2558a48b75c67b66bc989ff33cdffd" HandleID="k8s-pod-network.662e7d58e74330bf48ecbb168fa034d30f2558a48b75c67b66bc989ff33cdffd" Workload="ci--4081.3.6--n--4452d0b810-k8s-goldmane--666569f655--cwdx7-eth0" Nov 1 01:37:03.754607 containerd[1914]: 2025-11-01 01:37:03.752 [INFO][6702] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:37:03.754607 containerd[1914]: 2025-11-01 01:37:03.753 [INFO][6685] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="662e7d58e74330bf48ecbb168fa034d30f2558a48b75c67b66bc989ff33cdffd" Nov 1 01:37:03.754933 containerd[1914]: time="2025-11-01T01:37:03.754633920Z" level=info msg="TearDown network for sandbox \"662e7d58e74330bf48ecbb168fa034d30f2558a48b75c67b66bc989ff33cdffd\" successfully" Nov 1 01:37:03.756114 containerd[1914]: time="2025-11-01T01:37:03.756100719Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"662e7d58e74330bf48ecbb168fa034d30f2558a48b75c67b66bc989ff33cdffd\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 1 01:37:03.756154 containerd[1914]: time="2025-11-01T01:37:03.756123329Z" level=info msg="RemovePodSandbox \"662e7d58e74330bf48ecbb168fa034d30f2558a48b75c67b66bc989ff33cdffd\" returns successfully" Nov 1 01:37:03.756377 containerd[1914]: time="2025-11-01T01:37:03.756364544Z" level=info msg="StopPodSandbox for \"852ba39486e9461af2a375ea88112216039043ecd4d8553f8ab2922ae995852d\"" Nov 1 01:37:03.795524 containerd[1914]: 2025-11-01 01:37:03.774 [WARNING][6725] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="852ba39486e9461af2a375ea88112216039043ecd4d8553f8ab2922ae995852d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--4452d0b810-k8s-calico--apiserver--65957f8fc6--nf7rw-eth0", GenerateName:"calico-apiserver-65957f8fc6-", Namespace:"calico-apiserver", SelfLink:"", UID:"5dd05a57-68a7-45f0-82e4-c02ae8d0fe49", ResourceVersion:"1004", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 36, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"65957f8fc6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-4452d0b810", ContainerID:"858c3bc0e439e57bdaa7476669e7538d2fc5894bcbcb00255e723ac01df9db46", Pod:"calico-apiserver-65957f8fc6-nf7rw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.76.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali20b146f9ad2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:37:03.795524 containerd[1914]: 2025-11-01 01:37:03.774 [INFO][6725] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="852ba39486e9461af2a375ea88112216039043ecd4d8553f8ab2922ae995852d" Nov 1 01:37:03.795524 containerd[1914]: 2025-11-01 01:37:03.774 [INFO][6725] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="852ba39486e9461af2a375ea88112216039043ecd4d8553f8ab2922ae995852d" iface="eth0" netns="" Nov 1 01:37:03.795524 containerd[1914]: 2025-11-01 01:37:03.774 [INFO][6725] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="852ba39486e9461af2a375ea88112216039043ecd4d8553f8ab2922ae995852d" Nov 1 01:37:03.795524 containerd[1914]: 2025-11-01 01:37:03.774 [INFO][6725] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="852ba39486e9461af2a375ea88112216039043ecd4d8553f8ab2922ae995852d" Nov 1 01:37:03.795524 containerd[1914]: 2025-11-01 01:37:03.787 [INFO][6740] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="852ba39486e9461af2a375ea88112216039043ecd4d8553f8ab2922ae995852d" HandleID="k8s-pod-network.852ba39486e9461af2a375ea88112216039043ecd4d8553f8ab2922ae995852d" Workload="ci--4081.3.6--n--4452d0b810-k8s-calico--apiserver--65957f8fc6--nf7rw-eth0" Nov 1 01:37:03.795524 containerd[1914]: 2025-11-01 01:37:03.787 [INFO][6740] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:37:03.795524 containerd[1914]: 2025-11-01 01:37:03.787 [INFO][6740] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 01:37:03.795524 containerd[1914]: 2025-11-01 01:37:03.792 [WARNING][6740] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="852ba39486e9461af2a375ea88112216039043ecd4d8553f8ab2922ae995852d" HandleID="k8s-pod-network.852ba39486e9461af2a375ea88112216039043ecd4d8553f8ab2922ae995852d" Workload="ci--4081.3.6--n--4452d0b810-k8s-calico--apiserver--65957f8fc6--nf7rw-eth0" Nov 1 01:37:03.795524 containerd[1914]: 2025-11-01 01:37:03.792 [INFO][6740] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="852ba39486e9461af2a375ea88112216039043ecd4d8553f8ab2922ae995852d" HandleID="k8s-pod-network.852ba39486e9461af2a375ea88112216039043ecd4d8553f8ab2922ae995852d" Workload="ci--4081.3.6--n--4452d0b810-k8s-calico--apiserver--65957f8fc6--nf7rw-eth0" Nov 1 01:37:03.795524 containerd[1914]: 2025-11-01 01:37:03.793 [INFO][6740] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:37:03.795524 containerd[1914]: 2025-11-01 01:37:03.794 [INFO][6725] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="852ba39486e9461af2a375ea88112216039043ecd4d8553f8ab2922ae995852d" Nov 1 01:37:03.795952 containerd[1914]: time="2025-11-01T01:37:03.795534112Z" level=info msg="TearDown network for sandbox \"852ba39486e9461af2a375ea88112216039043ecd4d8553f8ab2922ae995852d\" successfully" Nov 1 01:37:03.795952 containerd[1914]: time="2025-11-01T01:37:03.795556612Z" level=info msg="StopPodSandbox for \"852ba39486e9461af2a375ea88112216039043ecd4d8553f8ab2922ae995852d\" returns successfully" Nov 1 01:37:03.795952 containerd[1914]: time="2025-11-01T01:37:03.795912417Z" level=info msg="RemovePodSandbox for \"852ba39486e9461af2a375ea88112216039043ecd4d8553f8ab2922ae995852d\"" Nov 1 01:37:03.795952 containerd[1914]: time="2025-11-01T01:37:03.795932616Z" level=info msg="Forcibly stopping sandbox \"852ba39486e9461af2a375ea88112216039043ecd4d8553f8ab2922ae995852d\"" Nov 1 01:37:03.872571 containerd[1914]: 2025-11-01 01:37:03.826 [WARNING][6764] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="852ba39486e9461af2a375ea88112216039043ecd4d8553f8ab2922ae995852d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--4452d0b810-k8s-calico--apiserver--65957f8fc6--nf7rw-eth0", GenerateName:"calico-apiserver-65957f8fc6-", Namespace:"calico-apiserver", SelfLink:"", UID:"5dd05a57-68a7-45f0-82e4-c02ae8d0fe49", ResourceVersion:"1004", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 36, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"65957f8fc6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-4452d0b810", ContainerID:"858c3bc0e439e57bdaa7476669e7538d2fc5894bcbcb00255e723ac01df9db46", Pod:"calico-apiserver-65957f8fc6-nf7rw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.76.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali20b146f9ad2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:37:03.872571 containerd[1914]: 2025-11-01 01:37:03.827 [INFO][6764] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="852ba39486e9461af2a375ea88112216039043ecd4d8553f8ab2922ae995852d" Nov 1 01:37:03.872571 containerd[1914]: 2025-11-01 01:37:03.827 [INFO][6764] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="852ba39486e9461af2a375ea88112216039043ecd4d8553f8ab2922ae995852d" iface="eth0" netns="" Nov 1 01:37:03.872571 containerd[1914]: 2025-11-01 01:37:03.827 [INFO][6764] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="852ba39486e9461af2a375ea88112216039043ecd4d8553f8ab2922ae995852d" Nov 1 01:37:03.872571 containerd[1914]: 2025-11-01 01:37:03.828 [INFO][6764] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="852ba39486e9461af2a375ea88112216039043ecd4d8553f8ab2922ae995852d" Nov 1 01:37:03.872571 containerd[1914]: 2025-11-01 01:37:03.862 [INFO][6783] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="852ba39486e9461af2a375ea88112216039043ecd4d8553f8ab2922ae995852d" HandleID="k8s-pod-network.852ba39486e9461af2a375ea88112216039043ecd4d8553f8ab2922ae995852d" Workload="ci--4081.3.6--n--4452d0b810-k8s-calico--apiserver--65957f8fc6--nf7rw-eth0" Nov 1 01:37:03.872571 containerd[1914]: 2025-11-01 01:37:03.862 [INFO][6783] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:37:03.872571 containerd[1914]: 2025-11-01 01:37:03.862 [INFO][6783] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 01:37:03.872571 containerd[1914]: 2025-11-01 01:37:03.868 [WARNING][6783] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="852ba39486e9461af2a375ea88112216039043ecd4d8553f8ab2922ae995852d" HandleID="k8s-pod-network.852ba39486e9461af2a375ea88112216039043ecd4d8553f8ab2922ae995852d" Workload="ci--4081.3.6--n--4452d0b810-k8s-calico--apiserver--65957f8fc6--nf7rw-eth0" Nov 1 01:37:03.872571 containerd[1914]: 2025-11-01 01:37:03.868 [INFO][6783] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="852ba39486e9461af2a375ea88112216039043ecd4d8553f8ab2922ae995852d" HandleID="k8s-pod-network.852ba39486e9461af2a375ea88112216039043ecd4d8553f8ab2922ae995852d" Workload="ci--4081.3.6--n--4452d0b810-k8s-calico--apiserver--65957f8fc6--nf7rw-eth0" Nov 1 01:37:03.872571 containerd[1914]: 2025-11-01 01:37:03.870 [INFO][6783] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:37:03.872571 containerd[1914]: 2025-11-01 01:37:03.871 [INFO][6764] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="852ba39486e9461af2a375ea88112216039043ecd4d8553f8ab2922ae995852d" Nov 1 01:37:03.873009 containerd[1914]: time="2025-11-01T01:37:03.872599681Z" level=info msg="TearDown network for sandbox \"852ba39486e9461af2a375ea88112216039043ecd4d8553f8ab2922ae995852d\" successfully" Nov 1 01:37:03.874400 containerd[1914]: time="2025-11-01T01:37:03.874385273Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"852ba39486e9461af2a375ea88112216039043ecd4d8553f8ab2922ae995852d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 1 01:37:03.874434 containerd[1914]: time="2025-11-01T01:37:03.874410114Z" level=info msg="RemovePodSandbox \"852ba39486e9461af2a375ea88112216039043ecd4d8553f8ab2922ae995852d\" returns successfully" Nov 1 01:37:03.874683 containerd[1914]: time="2025-11-01T01:37:03.874642150Z" level=info msg="StopPodSandbox for \"89c77f44fa84496ffea7c3a3f36568a5fd1b5080e959a9f91198750c165b3a0e\"" Nov 1 01:37:03.909590 containerd[1914]: 2025-11-01 01:37:03.892 [WARNING][6809] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="89c77f44fa84496ffea7c3a3f36568a5fd1b5080e959a9f91198750c165b3a0e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--4452d0b810-k8s-calico--kube--controllers--6874c989b8--tpm2w-eth0", GenerateName:"calico-kube-controllers-6874c989b8-", Namespace:"calico-system", SelfLink:"", UID:"c3dae49a-08cd-4d54-8b47-222aaaea72bd", ResourceVersion:"929", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 36, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6874c989b8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-4452d0b810", ContainerID:"1d6012c4a20b6abcaf8b4704f93da6fe04b900609a4f4c8e162d233d6344ea77", Pod:"calico-kube-controllers-6874c989b8-tpm2w", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.76.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali0c0cfb5a0eb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:37:03.909590 containerd[1914]: 2025-11-01 01:37:03.892 [INFO][6809] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="89c77f44fa84496ffea7c3a3f36568a5fd1b5080e959a9f91198750c165b3a0e" Nov 1 01:37:03.909590 containerd[1914]: 2025-11-01 01:37:03.892 [INFO][6809] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="89c77f44fa84496ffea7c3a3f36568a5fd1b5080e959a9f91198750c165b3a0e" iface="eth0" netns="" Nov 1 01:37:03.909590 containerd[1914]: 2025-11-01 01:37:03.892 [INFO][6809] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="89c77f44fa84496ffea7c3a3f36568a5fd1b5080e959a9f91198750c165b3a0e" Nov 1 01:37:03.909590 containerd[1914]: 2025-11-01 01:37:03.892 [INFO][6809] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="89c77f44fa84496ffea7c3a3f36568a5fd1b5080e959a9f91198750c165b3a0e" Nov 1 01:37:03.909590 containerd[1914]: 2025-11-01 01:37:03.902 [INFO][6825] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="89c77f44fa84496ffea7c3a3f36568a5fd1b5080e959a9f91198750c165b3a0e" HandleID="k8s-pod-network.89c77f44fa84496ffea7c3a3f36568a5fd1b5080e959a9f91198750c165b3a0e" Workload="ci--4081.3.6--n--4452d0b810-k8s-calico--kube--controllers--6874c989b8--tpm2w-eth0" Nov 1 01:37:03.909590 containerd[1914]: 2025-11-01 01:37:03.902 [INFO][6825] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:37:03.909590 containerd[1914]: 2025-11-01 01:37:03.902 [INFO][6825] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 01:37:03.909590 containerd[1914]: 2025-11-01 01:37:03.906 [WARNING][6825] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="89c77f44fa84496ffea7c3a3f36568a5fd1b5080e959a9f91198750c165b3a0e" HandleID="k8s-pod-network.89c77f44fa84496ffea7c3a3f36568a5fd1b5080e959a9f91198750c165b3a0e" Workload="ci--4081.3.6--n--4452d0b810-k8s-calico--kube--controllers--6874c989b8--tpm2w-eth0" Nov 1 01:37:03.909590 containerd[1914]: 2025-11-01 01:37:03.906 [INFO][6825] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="89c77f44fa84496ffea7c3a3f36568a5fd1b5080e959a9f91198750c165b3a0e" HandleID="k8s-pod-network.89c77f44fa84496ffea7c3a3f36568a5fd1b5080e959a9f91198750c165b3a0e" Workload="ci--4081.3.6--n--4452d0b810-k8s-calico--kube--controllers--6874c989b8--tpm2w-eth0" Nov 1 01:37:03.909590 containerd[1914]: 2025-11-01 01:37:03.908 [INFO][6825] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:37:03.909590 containerd[1914]: 2025-11-01 01:37:03.908 [INFO][6809] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="89c77f44fa84496ffea7c3a3f36568a5fd1b5080e959a9f91198750c165b3a0e" Nov 1 01:37:03.909590 containerd[1914]: time="2025-11-01T01:37:03.909581065Z" level=info msg="TearDown network for sandbox \"89c77f44fa84496ffea7c3a3f36568a5fd1b5080e959a9f91198750c165b3a0e\" successfully" Nov 1 01:37:03.909590 containerd[1914]: time="2025-11-01T01:37:03.909597507Z" level=info msg="StopPodSandbox for \"89c77f44fa84496ffea7c3a3f36568a5fd1b5080e959a9f91198750c165b3a0e\" returns successfully" Nov 1 01:37:03.909993 containerd[1914]: time="2025-11-01T01:37:03.909927714Z" level=info msg="RemovePodSandbox for \"89c77f44fa84496ffea7c3a3f36568a5fd1b5080e959a9f91198750c165b3a0e\"" Nov 1 01:37:03.909993 containerd[1914]: time="2025-11-01T01:37:03.909946137Z" level=info msg="Forcibly stopping sandbox \"89c77f44fa84496ffea7c3a3f36568a5fd1b5080e959a9f91198750c165b3a0e\"" Nov 1 01:37:03.950392 containerd[1914]: 2025-11-01 01:37:03.930 [WARNING][6848] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="89c77f44fa84496ffea7c3a3f36568a5fd1b5080e959a9f91198750c165b3a0e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--4452d0b810-k8s-calico--kube--controllers--6874c989b8--tpm2w-eth0", GenerateName:"calico-kube-controllers-6874c989b8-", Namespace:"calico-system", SelfLink:"", UID:"c3dae49a-08cd-4d54-8b47-222aaaea72bd", ResourceVersion:"929", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 36, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6874c989b8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-4452d0b810", ContainerID:"1d6012c4a20b6abcaf8b4704f93da6fe04b900609a4f4c8e162d233d6344ea77", Pod:"calico-kube-controllers-6874c989b8-tpm2w", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.76.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali0c0cfb5a0eb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:37:03.950392 containerd[1914]: 2025-11-01 01:37:03.930 [INFO][6848] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="89c77f44fa84496ffea7c3a3f36568a5fd1b5080e959a9f91198750c165b3a0e" Nov 1 01:37:03.950392 containerd[1914]: 2025-11-01 01:37:03.930 [INFO][6848] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="89c77f44fa84496ffea7c3a3f36568a5fd1b5080e959a9f91198750c165b3a0e" iface="eth0" netns="" Nov 1 01:37:03.950392 containerd[1914]: 2025-11-01 01:37:03.930 [INFO][6848] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="89c77f44fa84496ffea7c3a3f36568a5fd1b5080e959a9f91198750c165b3a0e" Nov 1 01:37:03.950392 containerd[1914]: 2025-11-01 01:37:03.930 [INFO][6848] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="89c77f44fa84496ffea7c3a3f36568a5fd1b5080e959a9f91198750c165b3a0e" Nov 1 01:37:03.950392 containerd[1914]: 2025-11-01 01:37:03.942 [INFO][6864] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="89c77f44fa84496ffea7c3a3f36568a5fd1b5080e959a9f91198750c165b3a0e" HandleID="k8s-pod-network.89c77f44fa84496ffea7c3a3f36568a5fd1b5080e959a9f91198750c165b3a0e" Workload="ci--4081.3.6--n--4452d0b810-k8s-calico--kube--controllers--6874c989b8--tpm2w-eth0" Nov 1 01:37:03.950392 containerd[1914]: 2025-11-01 01:37:03.942 [INFO][6864] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:37:03.950392 containerd[1914]: 2025-11-01 01:37:03.942 [INFO][6864] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 01:37:03.950392 containerd[1914]: 2025-11-01 01:37:03.947 [WARNING][6864] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="89c77f44fa84496ffea7c3a3f36568a5fd1b5080e959a9f91198750c165b3a0e" HandleID="k8s-pod-network.89c77f44fa84496ffea7c3a3f36568a5fd1b5080e959a9f91198750c165b3a0e" Workload="ci--4081.3.6--n--4452d0b810-k8s-calico--kube--controllers--6874c989b8--tpm2w-eth0" Nov 1 01:37:03.950392 containerd[1914]: 2025-11-01 01:37:03.947 [INFO][6864] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="89c77f44fa84496ffea7c3a3f36568a5fd1b5080e959a9f91198750c165b3a0e" HandleID="k8s-pod-network.89c77f44fa84496ffea7c3a3f36568a5fd1b5080e959a9f91198750c165b3a0e" Workload="ci--4081.3.6--n--4452d0b810-k8s-calico--kube--controllers--6874c989b8--tpm2w-eth0" Nov 1 01:37:03.950392 containerd[1914]: 2025-11-01 01:37:03.948 [INFO][6864] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:37:03.950392 containerd[1914]: 2025-11-01 01:37:03.949 [INFO][6848] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="89c77f44fa84496ffea7c3a3f36568a5fd1b5080e959a9f91198750c165b3a0e" Nov 1 01:37:03.950737 containerd[1914]: time="2025-11-01T01:37:03.950398897Z" level=info msg="TearDown network for sandbox \"89c77f44fa84496ffea7c3a3f36568a5fd1b5080e959a9f91198750c165b3a0e\" successfully" Nov 1 01:37:03.952080 containerd[1914]: time="2025-11-01T01:37:03.952039660Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"89c77f44fa84496ffea7c3a3f36568a5fd1b5080e959a9f91198750c165b3a0e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 1 01:37:03.952080 containerd[1914]: time="2025-11-01T01:37:03.952067509Z" level=info msg="RemovePodSandbox \"89c77f44fa84496ffea7c3a3f36568a5fd1b5080e959a9f91198750c165b3a0e\" returns successfully" Nov 1 01:37:03.952353 containerd[1914]: time="2025-11-01T01:37:03.952308974Z" level=info msg="StopPodSandbox for \"a4fa1e88240286b7bbd4abf5ef1fdc65cb3ec05dd207869797174fc5194fb05f\"" Nov 1 01:37:03.986090 containerd[1914]: 2025-11-01 01:37:03.969 [WARNING][6890] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="a4fa1e88240286b7bbd4abf5ef1fdc65cb3ec05dd207869797174fc5194fb05f" WorkloadEndpoint="ci--4081.3.6--n--4452d0b810-k8s-whisker--95f8498b7--jq29q-eth0" Nov 1 01:37:03.986090 containerd[1914]: 2025-11-01 01:37:03.969 [INFO][6890] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a4fa1e88240286b7bbd4abf5ef1fdc65cb3ec05dd207869797174fc5194fb05f" Nov 1 01:37:03.986090 containerd[1914]: 2025-11-01 01:37:03.969 [INFO][6890] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="a4fa1e88240286b7bbd4abf5ef1fdc65cb3ec05dd207869797174fc5194fb05f" iface="eth0" netns="" Nov 1 01:37:03.986090 containerd[1914]: 2025-11-01 01:37:03.969 [INFO][6890] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a4fa1e88240286b7bbd4abf5ef1fdc65cb3ec05dd207869797174fc5194fb05f" Nov 1 01:37:03.986090 containerd[1914]: 2025-11-01 01:37:03.969 [INFO][6890] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a4fa1e88240286b7bbd4abf5ef1fdc65cb3ec05dd207869797174fc5194fb05f" Nov 1 01:37:03.986090 containerd[1914]: 2025-11-01 01:37:03.979 [INFO][6906] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="a4fa1e88240286b7bbd4abf5ef1fdc65cb3ec05dd207869797174fc5194fb05f" HandleID="k8s-pod-network.a4fa1e88240286b7bbd4abf5ef1fdc65cb3ec05dd207869797174fc5194fb05f" Workload="ci--4081.3.6--n--4452d0b810-k8s-whisker--95f8498b7--jq29q-eth0" Nov 1 01:37:03.986090 containerd[1914]: 2025-11-01 01:37:03.979 [INFO][6906] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:37:03.986090 containerd[1914]: 2025-11-01 01:37:03.979 [INFO][6906] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 01:37:03.986090 containerd[1914]: 2025-11-01 01:37:03.983 [WARNING][6906] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="a4fa1e88240286b7bbd4abf5ef1fdc65cb3ec05dd207869797174fc5194fb05f" HandleID="k8s-pod-network.a4fa1e88240286b7bbd4abf5ef1fdc65cb3ec05dd207869797174fc5194fb05f" Workload="ci--4081.3.6--n--4452d0b810-k8s-whisker--95f8498b7--jq29q-eth0" Nov 1 01:37:03.986090 containerd[1914]: 2025-11-01 01:37:03.983 [INFO][6906] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="a4fa1e88240286b7bbd4abf5ef1fdc65cb3ec05dd207869797174fc5194fb05f" HandleID="k8s-pod-network.a4fa1e88240286b7bbd4abf5ef1fdc65cb3ec05dd207869797174fc5194fb05f" Workload="ci--4081.3.6--n--4452d0b810-k8s-whisker--95f8498b7--jq29q-eth0" Nov 1 01:37:03.986090 containerd[1914]: 2025-11-01 01:37:03.984 [INFO][6906] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:37:03.986090 containerd[1914]: 2025-11-01 01:37:03.985 [INFO][6890] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="a4fa1e88240286b7bbd4abf5ef1fdc65cb3ec05dd207869797174fc5194fb05f" Nov 1 01:37:03.986090 containerd[1914]: time="2025-11-01T01:37:03.986076300Z" level=info msg="TearDown network for sandbox \"a4fa1e88240286b7bbd4abf5ef1fdc65cb3ec05dd207869797174fc5194fb05f\" successfully" Nov 1 01:37:03.986090 containerd[1914]: time="2025-11-01T01:37:03.986093068Z" level=info msg="StopPodSandbox for \"a4fa1e88240286b7bbd4abf5ef1fdc65cb3ec05dd207869797174fc5194fb05f\" returns successfully" Nov 1 01:37:03.986492 containerd[1914]: time="2025-11-01T01:37:03.986365350Z" level=info msg="RemovePodSandbox for \"a4fa1e88240286b7bbd4abf5ef1fdc65cb3ec05dd207869797174fc5194fb05f\"" Nov 1 01:37:03.986492 containerd[1914]: time="2025-11-01T01:37:03.986383277Z" level=info msg="Forcibly stopping sandbox \"a4fa1e88240286b7bbd4abf5ef1fdc65cb3ec05dd207869797174fc5194fb05f\"" Nov 1 01:37:04.021227 containerd[1914]: 2025-11-01 01:37:04.003 [WARNING][6933] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="a4fa1e88240286b7bbd4abf5ef1fdc65cb3ec05dd207869797174fc5194fb05f" WorkloadEndpoint="ci--4081.3.6--n--4452d0b810-k8s-whisker--95f8498b7--jq29q-eth0" Nov 1 01:37:04.021227 containerd[1914]: 2025-11-01 01:37:04.003 [INFO][6933] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a4fa1e88240286b7bbd4abf5ef1fdc65cb3ec05dd207869797174fc5194fb05f" Nov 1 01:37:04.021227 containerd[1914]: 2025-11-01 01:37:04.003 [INFO][6933] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a4fa1e88240286b7bbd4abf5ef1fdc65cb3ec05dd207869797174fc5194fb05f" iface="eth0" netns="" Nov 1 01:37:04.021227 containerd[1914]: 2025-11-01 01:37:04.004 [INFO][6933] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a4fa1e88240286b7bbd4abf5ef1fdc65cb3ec05dd207869797174fc5194fb05f" Nov 1 01:37:04.021227 containerd[1914]: 2025-11-01 01:37:04.004 [INFO][6933] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a4fa1e88240286b7bbd4abf5ef1fdc65cb3ec05dd207869797174fc5194fb05f" Nov 1 01:37:04.021227 containerd[1914]: 2025-11-01 01:37:04.013 [INFO][6947] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="a4fa1e88240286b7bbd4abf5ef1fdc65cb3ec05dd207869797174fc5194fb05f" HandleID="k8s-pod-network.a4fa1e88240286b7bbd4abf5ef1fdc65cb3ec05dd207869797174fc5194fb05f" Workload="ci--4081.3.6--n--4452d0b810-k8s-whisker--95f8498b7--jq29q-eth0" Nov 1 01:37:04.021227 containerd[1914]: 2025-11-01 01:37:04.014 [INFO][6947] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:37:04.021227 containerd[1914]: 2025-11-01 01:37:04.014 [INFO][6947] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 01:37:04.021227 containerd[1914]: 2025-11-01 01:37:04.018 [WARNING][6947] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a4fa1e88240286b7bbd4abf5ef1fdc65cb3ec05dd207869797174fc5194fb05f" HandleID="k8s-pod-network.a4fa1e88240286b7bbd4abf5ef1fdc65cb3ec05dd207869797174fc5194fb05f" Workload="ci--4081.3.6--n--4452d0b810-k8s-whisker--95f8498b7--jq29q-eth0" Nov 1 01:37:04.021227 containerd[1914]: 2025-11-01 01:37:04.018 [INFO][6947] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="a4fa1e88240286b7bbd4abf5ef1fdc65cb3ec05dd207869797174fc5194fb05f" HandleID="k8s-pod-network.a4fa1e88240286b7bbd4abf5ef1fdc65cb3ec05dd207869797174fc5194fb05f" Workload="ci--4081.3.6--n--4452d0b810-k8s-whisker--95f8498b7--jq29q-eth0" Nov 1 01:37:04.021227 containerd[1914]: 2025-11-01 01:37:04.019 [INFO][6947] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:37:04.021227 containerd[1914]: 2025-11-01 01:37:04.020 [INFO][6933] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a4fa1e88240286b7bbd4abf5ef1fdc65cb3ec05dd207869797174fc5194fb05f" Nov 1 01:37:04.021473 containerd[1914]: time="2025-11-01T01:37:04.021250236Z" level=info msg="TearDown network for sandbox \"a4fa1e88240286b7bbd4abf5ef1fdc65cb3ec05dd207869797174fc5194fb05f\" successfully" Nov 1 01:37:04.022551 containerd[1914]: time="2025-11-01T01:37:04.022511781Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a4fa1e88240286b7bbd4abf5ef1fdc65cb3ec05dd207869797174fc5194fb05f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 1 01:37:04.022551 containerd[1914]: time="2025-11-01T01:37:04.022534149Z" level=info msg="RemovePodSandbox \"a4fa1e88240286b7bbd4abf5ef1fdc65cb3ec05dd207869797174fc5194fb05f\" returns successfully" Nov 1 01:37:04.022826 containerd[1914]: time="2025-11-01T01:37:04.022807467Z" level=info msg="StopPodSandbox for \"324c966ea7e4d95aef7ed3efaebd965087cb8160c5f12d9678bf6298c6527baa\"" Nov 1 01:37:04.057105 containerd[1914]: 2025-11-01 01:37:04.040 [WARNING][6971] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="324c966ea7e4d95aef7ed3efaebd965087cb8160c5f12d9678bf6298c6527baa" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--4452d0b810-k8s-coredns--668d6bf9bc--zwxtd-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"bce1db93-db69-4f85-98fe-9ebedd24ee18", ResourceVersion:"970", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 36, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-4452d0b810", ContainerID:"158bdd262aca13c5a052f938f72d7b7b5c7162913928411963c3d2c28f4a34d0", Pod:"coredns-668d6bf9bc-zwxtd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.76.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2a577988a5c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:37:04.057105 containerd[1914]: 2025-11-01 01:37:04.040 [INFO][6971] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="324c966ea7e4d95aef7ed3efaebd965087cb8160c5f12d9678bf6298c6527baa" Nov 1 01:37:04.057105 containerd[1914]: 2025-11-01 01:37:04.040 [INFO][6971] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="324c966ea7e4d95aef7ed3efaebd965087cb8160c5f12d9678bf6298c6527baa" iface="eth0" netns="" Nov 1 01:37:04.057105 containerd[1914]: 2025-11-01 01:37:04.040 [INFO][6971] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="324c966ea7e4d95aef7ed3efaebd965087cb8160c5f12d9678bf6298c6527baa" Nov 1 01:37:04.057105 containerd[1914]: 2025-11-01 01:37:04.040 [INFO][6971] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="324c966ea7e4d95aef7ed3efaebd965087cb8160c5f12d9678bf6298c6527baa" Nov 1 01:37:04.057105 containerd[1914]: 2025-11-01 01:37:04.050 [INFO][6986] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="324c966ea7e4d95aef7ed3efaebd965087cb8160c5f12d9678bf6298c6527baa" HandleID="k8s-pod-network.324c966ea7e4d95aef7ed3efaebd965087cb8160c5f12d9678bf6298c6527baa" Workload="ci--4081.3.6--n--4452d0b810-k8s-coredns--668d6bf9bc--zwxtd-eth0" Nov 1 01:37:04.057105 containerd[1914]: 2025-11-01 01:37:04.050 [INFO][6986] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:37:04.057105 containerd[1914]: 2025-11-01 01:37:04.050 [INFO][6986] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 01:37:04.057105 containerd[1914]: 2025-11-01 01:37:04.054 [WARNING][6986] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="324c966ea7e4d95aef7ed3efaebd965087cb8160c5f12d9678bf6298c6527baa" HandleID="k8s-pod-network.324c966ea7e4d95aef7ed3efaebd965087cb8160c5f12d9678bf6298c6527baa" Workload="ci--4081.3.6--n--4452d0b810-k8s-coredns--668d6bf9bc--zwxtd-eth0" Nov 1 01:37:04.057105 containerd[1914]: 2025-11-01 01:37:04.054 [INFO][6986] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="324c966ea7e4d95aef7ed3efaebd965087cb8160c5f12d9678bf6298c6527baa" HandleID="k8s-pod-network.324c966ea7e4d95aef7ed3efaebd965087cb8160c5f12d9678bf6298c6527baa" Workload="ci--4081.3.6--n--4452d0b810-k8s-coredns--668d6bf9bc--zwxtd-eth0" Nov 1 01:37:04.057105 containerd[1914]: 2025-11-01 01:37:04.055 [INFO][6986] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:37:04.057105 containerd[1914]: 2025-11-01 01:37:04.056 [INFO][6971] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="324c966ea7e4d95aef7ed3efaebd965087cb8160c5f12d9678bf6298c6527baa" Nov 1 01:37:04.057427 containerd[1914]: time="2025-11-01T01:37:04.057127195Z" level=info msg="TearDown network for sandbox \"324c966ea7e4d95aef7ed3efaebd965087cb8160c5f12d9678bf6298c6527baa\" successfully" Nov 1 01:37:04.057427 containerd[1914]: time="2025-11-01T01:37:04.057142895Z" level=info msg="StopPodSandbox for \"324c966ea7e4d95aef7ed3efaebd965087cb8160c5f12d9678bf6298c6527baa\" returns successfully" Nov 1 01:37:04.057427 containerd[1914]: time="2025-11-01T01:37:04.057382597Z" level=info msg="RemovePodSandbox for \"324c966ea7e4d95aef7ed3efaebd965087cb8160c5f12d9678bf6298c6527baa\"" Nov 1 01:37:04.057427 containerd[1914]: time="2025-11-01T01:37:04.057396609Z" level=info msg="Forcibly stopping sandbox \"324c966ea7e4d95aef7ed3efaebd965087cb8160c5f12d9678bf6298c6527baa\"" Nov 1 01:37:04.132794 containerd[1914]: 2025-11-01 01:37:04.079 [WARNING][7013] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="324c966ea7e4d95aef7ed3efaebd965087cb8160c5f12d9678bf6298c6527baa" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--4452d0b810-k8s-coredns--668d6bf9bc--zwxtd-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"bce1db93-db69-4f85-98fe-9ebedd24ee18", ResourceVersion:"970", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 36, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-4452d0b810", ContainerID:"158bdd262aca13c5a052f938f72d7b7b5c7162913928411963c3d2c28f4a34d0", Pod:"coredns-668d6bf9bc-zwxtd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.76.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2a577988a5c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:37:04.132794 containerd[1914]: 2025-11-01 01:37:04.079 [INFO][7013] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="324c966ea7e4d95aef7ed3efaebd965087cb8160c5f12d9678bf6298c6527baa" Nov 1 01:37:04.132794 containerd[1914]: 2025-11-01 01:37:04.079 [INFO][7013] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="324c966ea7e4d95aef7ed3efaebd965087cb8160c5f12d9678bf6298c6527baa" iface="eth0" netns="" Nov 1 01:37:04.132794 containerd[1914]: 2025-11-01 01:37:04.080 [INFO][7013] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="324c966ea7e4d95aef7ed3efaebd965087cb8160c5f12d9678bf6298c6527baa" Nov 1 01:37:04.132794 containerd[1914]: 2025-11-01 01:37:04.080 [INFO][7013] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="324c966ea7e4d95aef7ed3efaebd965087cb8160c5f12d9678bf6298c6527baa" Nov 1 01:37:04.132794 containerd[1914]: 2025-11-01 01:37:04.122 [INFO][7027] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="324c966ea7e4d95aef7ed3efaebd965087cb8160c5f12d9678bf6298c6527baa" HandleID="k8s-pod-network.324c966ea7e4d95aef7ed3efaebd965087cb8160c5f12d9678bf6298c6527baa" Workload="ci--4081.3.6--n--4452d0b810-k8s-coredns--668d6bf9bc--zwxtd-eth0" Nov 1 01:37:04.132794 containerd[1914]: 2025-11-01 01:37:04.122 [INFO][7027] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:37:04.132794 containerd[1914]: 2025-11-01 01:37:04.122 [INFO][7027] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 01:37:04.132794 containerd[1914]: 2025-11-01 01:37:04.128 [WARNING][7027] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="324c966ea7e4d95aef7ed3efaebd965087cb8160c5f12d9678bf6298c6527baa" HandleID="k8s-pod-network.324c966ea7e4d95aef7ed3efaebd965087cb8160c5f12d9678bf6298c6527baa" Workload="ci--4081.3.6--n--4452d0b810-k8s-coredns--668d6bf9bc--zwxtd-eth0" Nov 1 01:37:04.132794 containerd[1914]: 2025-11-01 01:37:04.128 [INFO][7027] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="324c966ea7e4d95aef7ed3efaebd965087cb8160c5f12d9678bf6298c6527baa" HandleID="k8s-pod-network.324c966ea7e4d95aef7ed3efaebd965087cb8160c5f12d9678bf6298c6527baa" Workload="ci--4081.3.6--n--4452d0b810-k8s-coredns--668d6bf9bc--zwxtd-eth0" Nov 1 01:37:04.132794 containerd[1914]: 2025-11-01 01:37:04.130 [INFO][7027] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:37:04.132794 containerd[1914]: 2025-11-01 01:37:04.131 [INFO][7013] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="324c966ea7e4d95aef7ed3efaebd965087cb8160c5f12d9678bf6298c6527baa" Nov 1 01:37:04.133312 containerd[1914]: time="2025-11-01T01:37:04.132828703Z" level=info msg="TearDown network for sandbox \"324c966ea7e4d95aef7ed3efaebd965087cb8160c5f12d9678bf6298c6527baa\" successfully" Nov 1 01:37:04.134664 containerd[1914]: time="2025-11-01T01:37:04.134624470Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"324c966ea7e4d95aef7ed3efaebd965087cb8160c5f12d9678bf6298c6527baa\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 1 01:37:04.134664 containerd[1914]: time="2025-11-01T01:37:04.134647847Z" level=info msg="RemovePodSandbox \"324c966ea7e4d95aef7ed3efaebd965087cb8160c5f12d9678bf6298c6527baa\" returns successfully" Nov 1 01:37:04.460896 kubelet[3248]: E1101 01:37:04.460757 3248 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-649cb85dc5-gfz8g" podUID="f0eea4e4-1f5d-465a-aad6-ce4cfbcde2c9" Nov 1 01:37:08.459245 kubelet[3248]: E1101 01:37:08.459117 3248 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6874c989b8-tpm2w" 
podUID="c3dae49a-08cd-4d54-8b47-222aaaea72bd" Nov 1 01:37:12.459855 kubelet[3248]: E1101 01:37:12.459749 3248 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9grzl" podUID="f1594ca4-f4b9-4bd5-a5ea-30623fb4e42b" Nov 1 01:37:14.458157 kubelet[3248]: E1101 01:37:14.458119 3248 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65957f8fc6-h4mgq" podUID="07d4f1da-121e-4217-9b76-751186743f3a" Nov 1 01:37:15.458598 kubelet[3248]: E1101 01:37:15.458530 3248 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-cwdx7" podUID="74c9c676-a5c3-4018-bd2b-2647288efe10" Nov 1 01:37:15.459331 containerd[1914]: time="2025-11-01T01:37:15.458651176Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 1 01:37:15.808478 containerd[1914]: time="2025-11-01T01:37:15.808176881Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:37:15.809129 containerd[1914]: time="2025-11-01T01:37:15.809107448Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 1 01:37:15.809192 containerd[1914]: time="2025-11-01T01:37:15.809174375Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 1 01:37:15.809378 kubelet[3248]: E1101 01:37:15.809297 3248 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 01:37:15.809378 kubelet[3248]: E1101 01:37:15.809378 3248 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 01:37:15.809479 kubelet[3248]: E1101 01:37:15.809444 3248 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:580cd2697486438ea258991ade3b4df2,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-w4z8g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-649cb85dc5-gfz8g_calico-system(f0eea4e4-1f5d-465a-aad6-ce4cfbcde2c9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 1 01:37:15.811094 containerd[1914]: time="2025-11-01T01:37:15.811083224Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 1 01:37:16.161536 containerd[1914]: time="2025-11-01T01:37:16.161447667Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:37:16.161985 containerd[1914]: time="2025-11-01T01:37:16.161949810Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 1 01:37:16.162051 containerd[1914]: time="2025-11-01T01:37:16.162029283Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 1 01:37:16.162141 kubelet[3248]: E1101 01:37:16.162124 3248 log.go:32] "PullImage from image service failed" err="rpc 
error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 01:37:16.162168 kubelet[3248]: E1101 01:37:16.162149 3248 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 01:37:16.162236 kubelet[3248]: E1101 01:37:16.162217 3248 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w4z8g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-649cb85dc5-gfz8g_calico-system(f0eea4e4-1f5d-465a-aad6-ce4cfbcde2c9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 1 01:37:16.163386 kubelet[3248]: E1101 01:37:16.163368 3248 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with 
ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-649cb85dc5-gfz8g" podUID="f0eea4e4-1f5d-465a-aad6-ce4cfbcde2c9" Nov 1 01:37:16.458349 kubelet[3248]: E1101 01:37:16.458300 3248 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65957f8fc6-nf7rw" podUID="5dd05a57-68a7-45f0-82e4-c02ae8d0fe49" Nov 1 01:37:23.459034 containerd[1914]: time="2025-11-01T01:37:23.458985544Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 1 01:37:23.802466 containerd[1914]: time="2025-11-01T01:37:23.802299392Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:37:23.803015 containerd[1914]: time="2025-11-01T01:37:23.802990586Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 1 01:37:23.803076 containerd[1914]: time="2025-11-01T01:37:23.803059365Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 1 01:37:23.803159 kubelet[3248]: E1101 01:37:23.803134 3248 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 01:37:23.803355 kubelet[3248]: E1101 01:37:23.803169 3248 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 01:37:23.803355 kubelet[3248]: E1101 01:37:23.803267 3248 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-42cch,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-6874c989b8-tpm2w_calico-system(c3dae49a-08cd-4d54-8b47-222aaaea72bd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 1 01:37:23.804391 kubelet[3248]: E1101 01:37:23.804377 3248 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6874c989b8-tpm2w" podUID="c3dae49a-08cd-4d54-8b47-222aaaea72bd" Nov 1 01:37:25.460123 containerd[1914]: time="2025-11-01T01:37:25.460014488Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 1 01:37:25.823367 containerd[1914]: time="2025-11-01T01:37:25.823287368Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:37:25.832293 containerd[1914]: time="2025-11-01T01:37:25.832236050Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 1 01:37:25.832293 containerd[1914]: time="2025-11-01T01:37:25.832274325Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 1 01:37:25.832428 kubelet[3248]: E1101 01:37:25.832366 3248 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 01:37:25.832428 kubelet[3248]: E1101 01:37:25.832410 3248 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 01:37:25.832641 kubelet[3248]: E1101 01:37:25.832507 3248 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j5pvx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
csi-node-driver-9grzl_calico-system(f1594ca4-f4b9-4bd5-a5ea-30623fb4e42b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 1 01:37:25.834930 containerd[1914]: time="2025-11-01T01:37:25.834918274Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 1 01:37:26.218603 containerd[1914]: time="2025-11-01T01:37:26.218482513Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:37:26.219498 containerd[1914]: time="2025-11-01T01:37:26.219429155Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 1 01:37:26.219531 containerd[1914]: time="2025-11-01T01:37:26.219498511Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 1 01:37:26.219617 kubelet[3248]: E1101 01:37:26.219572 3248 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 01:37:26.219617 kubelet[3248]: E1101 01:37:26.219604 3248 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 01:37:26.219740 kubelet[3248]: E1101 01:37:26.219687 3248 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j5pvx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-9grzl_calico-system(f1594ca4-f4b9-4bd5-a5ea-30623fb4e42b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 1 01:37:26.220875 kubelet[3248]: E1101 01:37:26.220827 3248 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9grzl" podUID="f1594ca4-f4b9-4bd5-a5ea-30623fb4e42b" Nov 1 01:37:27.466679 kubelet[3248]: E1101 01:37:27.466626 3248 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for 
\"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-649cb85dc5-gfz8g" podUID="f0eea4e4-1f5d-465a-aad6-ce4cfbcde2c9" Nov 1 01:37:28.462953 containerd[1914]: time="2025-11-01T01:37:28.462854840Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 01:37:28.827131 containerd[1914]: time="2025-11-01T01:37:28.826885782Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:37:28.827878 containerd[1914]: time="2025-11-01T01:37:28.827853623Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 01:37:28.827945 containerd[1914]: time="2025-11-01T01:37:28.827917880Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 1 01:37:28.828084 kubelet[3248]: E1101 01:37:28.828062 3248 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 01:37:28.828398 kubelet[3248]: E1101 01:37:28.828095 3248 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 01:37:28.828398 kubelet[3248]: E1101 01:37:28.828351 3248 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7h2qg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-65957f8fc6-h4mgq_calico-apiserver(07d4f1da-121e-4217-9b76-751186743f3a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 01:37:28.828476 containerd[1914]: time="2025-11-01T01:37:28.828396594Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 01:37:28.829492 kubelet[3248]: E1101 01:37:28.829478 3248 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65957f8fc6-h4mgq" podUID="07d4f1da-121e-4217-9b76-751186743f3a" Nov 1 01:37:29.167684 containerd[1914]: time="2025-11-01T01:37:29.167638672Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:37:29.168138 containerd[1914]: time="2025-11-01T01:37:29.168115992Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 01:37:29.168181 containerd[1914]: time="2025-11-01T01:37:29.168163173Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 1 01:37:29.168352 kubelet[3248]: E1101 01:37:29.168302 3248 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 01:37:29.168352 kubelet[3248]: E1101 01:37:29.168332 3248 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 01:37:29.168419 kubelet[3248]: E1101 
01:37:29.168399 3248 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-89872,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-65957f8fc6-nf7rw_calico-apiserver(5dd05a57-68a7-45f0-82e4-c02ae8d0fe49): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 01:37:29.169614 kubelet[3248]: E1101 01:37:29.169577 3248 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65957f8fc6-nf7rw" podUID="5dd05a57-68a7-45f0-82e4-c02ae8d0fe49" Nov 1 01:37:29.457899 containerd[1914]: time="2025-11-01T01:37:29.457787683Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 1 01:37:29.816269 containerd[1914]: time="2025-11-01T01:37:29.816030771Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:37:29.817150 containerd[1914]: time="2025-11-01T01:37:29.817076531Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and 
unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 1 01:37:29.817190 containerd[1914]: time="2025-11-01T01:37:29.817142292Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 1 01:37:29.817342 kubelet[3248]: E1101 01:37:29.817294 3248 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 01:37:29.817342 kubelet[3248]: E1101 01:37:29.817323 3248 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 01:37:29.817483 kubelet[3248]: E1101 01:37:29.817399 3248 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lmc4z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-cwdx7_calico-system(74c9c676-a5c3-4018-bd2b-2647288efe10): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 1 01:37:29.818598 kubelet[3248]: E1101 01:37:29.818556 3248 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-cwdx7" podUID="74c9c676-a5c3-4018-bd2b-2647288efe10" Nov 1 01:37:35.458616 kubelet[3248]: E1101 01:37:35.458569 3248 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6874c989b8-tpm2w" podUID="c3dae49a-08cd-4d54-8b47-222aaaea72bd" Nov 1 01:37:39.458964 kubelet[3248]: E1101 01:37:39.458918 3248 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65957f8fc6-nf7rw" podUID="5dd05a57-68a7-45f0-82e4-c02ae8d0fe49" Nov 1 01:37:40.457839 kubelet[3248]: E1101 01:37:40.457782 3248 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not 
found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-649cb85dc5-gfz8g" podUID="f0eea4e4-1f5d-465a-aad6-ce4cfbcde2c9" Nov 1 01:37:40.457839 kubelet[3248]: E1101 01:37:40.457782 3248 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9grzl" podUID="f1594ca4-f4b9-4bd5-a5ea-30623fb4e42b" Nov 1 01:37:41.459871 kubelet[3248]: E1101 01:37:41.459726 3248 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-cwdx7" podUID="74c9c676-a5c3-4018-bd2b-2647288efe10" Nov 1 01:37:42.457785 kubelet[3248]: E1101 01:37:42.457712 3248 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65957f8fc6-h4mgq" podUID="07d4f1da-121e-4217-9b76-751186743f3a" Nov 1 01:37:48.457287 kubelet[3248]: E1101 01:37:48.457262 3248 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6874c989b8-tpm2w" podUID="c3dae49a-08cd-4d54-8b47-222aaaea72bd" Nov 1 
01:37:52.458667 kubelet[3248]: E1101 01:37:52.458614 3248 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65957f8fc6-nf7rw" podUID="5dd05a57-68a7-45f0-82e4-c02ae8d0fe49" Nov 1 01:37:53.459072 kubelet[3248]: E1101 01:37:53.458993 3248 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-cwdx7" podUID="74c9c676-a5c3-4018-bd2b-2647288efe10" Nov 1 01:37:54.457557 kubelet[3248]: E1101 01:37:54.457527 3248 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65957f8fc6-h4mgq" podUID="07d4f1da-121e-4217-9b76-751186743f3a" Nov 1 01:37:54.457827 kubelet[3248]: E1101 01:37:54.457804 3248 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9grzl" podUID="f1594ca4-f4b9-4bd5-a5ea-30623fb4e42b" Nov 1 01:37:55.459115 kubelet[3248]: E1101 01:37:55.459043 3248 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: 
\"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-649cb85dc5-gfz8g" podUID="f0eea4e4-1f5d-465a-aad6-ce4cfbcde2c9" Nov 1 01:38:03.458985 kubelet[3248]: E1101 01:38:03.458937 3248 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6874c989b8-tpm2w" podUID="c3dae49a-08cd-4d54-8b47-222aaaea72bd" Nov 1 01:38:03.459773 kubelet[3248]: E1101 01:38:03.459186 3248 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65957f8fc6-nf7rw" podUID="5dd05a57-68a7-45f0-82e4-c02ae8d0fe49" Nov 1 01:38:05.459259 kubelet[3248]: E1101 01:38:05.459142 3248 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65957f8fc6-h4mgq" podUID="07d4f1da-121e-4217-9b76-751186743f3a" Nov 1 01:38:05.460568 kubelet[3248]: E1101 01:38:05.460153 3248 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9grzl" podUID="f1594ca4-f4b9-4bd5-a5ea-30623fb4e42b" Nov 1 01:38:08.459391 kubelet[3248]: E1101 01:38:08.459195 3248 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-cwdx7" podUID="74c9c676-a5c3-4018-bd2b-2647288efe10" Nov 1 01:38:08.460675 containerd[1914]: time="2025-11-01T01:38:08.459688659Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 1 01:38:08.840783 containerd[1914]: time="2025-11-01T01:38:08.840649201Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:38:08.859038 containerd[1914]: time="2025-11-01T01:38:08.858981799Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 1 01:38:08.859136 containerd[1914]: time="2025-11-01T01:38:08.859050258Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 1 01:38:08.859196 kubelet[3248]: E1101 01:38:08.859150 3248 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 01:38:08.859196 kubelet[3248]: E1101 01:38:08.859189 3248 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 01:38:08.859317 kubelet[3248]: E1101 01:38:08.859288 3248 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:580cd2697486438ea258991ade3b4df2,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-w4z8g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-649cb85dc5-gfz8g_calico-system(f0eea4e4-1f5d-465a-aad6-ce4cfbcde2c9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 1 01:38:08.860874 containerd[1914]: time="2025-11-01T01:38:08.860858084Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 1 01:38:09.235046 containerd[1914]: time="2025-11-01T01:38:09.234985448Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:38:09.249889 containerd[1914]: time="2025-11-01T01:38:09.249823218Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 1 01:38:09.249889 containerd[1914]: time="2025-11-01T01:38:09.249871231Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 1 01:38:09.249991 kubelet[3248]: E1101 01:38:09.249969 3248 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 01:38:09.250027 kubelet[3248]: E1101 01:38:09.250000 3248 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: 
not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 01:38:09.250094 kubelet[3248]: E1101 01:38:09.250069 3248 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w4z8g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-649cb85dc5-gfz8g_calico-system(f0eea4e4-1f5d-465a-aad6-ce4cfbcde2c9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 1 01:38:09.251273 kubelet[3248]: E1101 01:38:09.251223 3248 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-649cb85dc5-gfz8g" podUID="f0eea4e4-1f5d-465a-aad6-ce4cfbcde2c9" Nov 1 01:38:14.457650 containerd[1914]: time="2025-11-01T01:38:14.457620319Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 01:38:14.879032 containerd[1914]: time="2025-11-01T01:38:14.878760171Z" level=info msg="trying next host - response was http.StatusNotFound" 
host=ghcr.io Nov 1 01:38:14.879604 containerd[1914]: time="2025-11-01T01:38:14.879577073Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 01:38:14.879645 containerd[1914]: time="2025-11-01T01:38:14.879623214Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 1 01:38:14.879788 kubelet[3248]: E1101 01:38:14.879736 3248 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 01:38:14.879788 kubelet[3248]: E1101 01:38:14.879768 3248 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 01:38:14.880014 kubelet[3248]: E1101 01:38:14.879845 3248 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-89872,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} 
start failed in pod calico-apiserver-65957f8fc6-nf7rw_calico-apiserver(5dd05a57-68a7-45f0-82e4-c02ae8d0fe49): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 01:38:14.881163 kubelet[3248]: E1101 01:38:14.881056 3248 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65957f8fc6-nf7rw" podUID="5dd05a57-68a7-45f0-82e4-c02ae8d0fe49" Nov 1 01:38:16.459996 containerd[1914]: time="2025-11-01T01:38:16.459905400Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 01:38:16.844907 containerd[1914]: time="2025-11-01T01:38:16.844753511Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:38:16.845417 containerd[1914]: time="2025-11-01T01:38:16.845339817Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 01:38:16.845417 containerd[1914]: time="2025-11-01T01:38:16.845382748Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 1 01:38:16.845527 kubelet[3248]: E1101 01:38:16.845481 3248 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 01:38:16.845527 kubelet[3248]: E1101 01:38:16.845505 3248 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 01:38:16.845785 kubelet[3248]: E1101 01:38:16.845571 3248 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7h2qg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-65957f8fc6-h4mgq_calico-apiserver(07d4f1da-121e-4217-9b76-751186743f3a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 01:38:16.847386 kubelet[3248]: E1101 01:38:16.847343 3248 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65957f8fc6-h4mgq" podUID="07d4f1da-121e-4217-9b76-751186743f3a" Nov 1 01:38:18.457816 containerd[1914]: time="2025-11-01T01:38:18.457762291Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 1 01:38:18.822084 containerd[1914]: time="2025-11-01T01:38:18.821999219Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:38:18.822550 containerd[1914]: time="2025-11-01T01:38:18.822530336Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 1 01:38:18.822601 
containerd[1914]: time="2025-11-01T01:38:18.822579451Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 1 01:38:18.822796 kubelet[3248]: E1101 01:38:18.822716 3248 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 01:38:18.822796 kubelet[3248]: E1101 01:38:18.822773 3248 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 01:38:18.823036 kubelet[3248]: E1101 01:38:18.822949 3248 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-42cch,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-6874c989b8-tpm2w_calico-system(c3dae49a-08cd-4d54-8b47-222aaaea72bd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 1 01:38:18.823103 containerd[1914]: time="2025-11-01T01:38:18.822973763Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 1 01:38:18.824108 kubelet[3248]: E1101 01:38:18.824067 3248 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6874c989b8-tpm2w" podUID="c3dae49a-08cd-4d54-8b47-222aaaea72bd" Nov 1 01:38:19.208554 containerd[1914]: time="2025-11-01T01:38:19.208411683Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:38:19.209436 containerd[1914]: time="2025-11-01T01:38:19.209365769Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 1 01:38:19.209473 containerd[1914]: time="2025-11-01T01:38:19.209432767Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 1 01:38:19.209538 kubelet[3248]: E1101 01:38:19.209513 3248 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 01:38:19.209603 kubelet[3248]: E1101 01:38:19.209546 3248 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 01:38:19.209633 kubelet[3248]: E1101 01:38:19.209614 3248 kuberuntime_manager.go:1341] "Unhandled Error" 
err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j5pvx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-9grzl_calico-system(f1594ca4-f4b9-4bd5-a5ea-30623fb4e42b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 1 01:38:19.211166 containerd[1914]: time="2025-11-01T01:38:19.211153885Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 1 01:38:19.567573 containerd[1914]: time="2025-11-01T01:38:19.567340508Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:38:19.568286 containerd[1914]: time="2025-11-01T01:38:19.568169905Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 1 01:38:19.568286 containerd[1914]: time="2025-11-01T01:38:19.568232448Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 1 01:38:19.568383 kubelet[3248]: E1101 01:38:19.568358 3248 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 01:38:19.568417 kubelet[3248]: E1101 01:38:19.568390 3248 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 01:38:19.568492 kubelet[3248]: E1101 01:38:19.568464 3248 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j5pvx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-9grzl_calico-system(f1594ca4-f4b9-4bd5-a5ea-30623fb4e42b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 1 01:38:19.569595 kubelet[3248]: E1101 01:38:19.569578 3248 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve 
reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9grzl" podUID="f1594ca4-f4b9-4bd5-a5ea-30623fb4e42b" Nov 1 01:38:20.459186 containerd[1914]: time="2025-11-01T01:38:20.459115287Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 1 01:38:20.872047 containerd[1914]: time="2025-11-01T01:38:20.871831915Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:38:20.872850 containerd[1914]: time="2025-11-01T01:38:20.872826854Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 1 01:38:20.872904 containerd[1914]: time="2025-11-01T01:38:20.872889149Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 1 01:38:20.873005 kubelet[3248]: E1101 01:38:20.872985 3248 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 01:38:20.873141 kubelet[3248]: E1101 01:38:20.873011 3248 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 01:38:20.873141 kubelet[3248]: E1101 01:38:20.873085 3248 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lmc4z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-cwdx7_calico-system(74c9c676-a5c3-4018-bd2b-2647288efe10): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 1 01:38:20.874215 kubelet[3248]: E1101 01:38:20.874196 3248 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-cwdx7" podUID="74c9c676-a5c3-4018-bd2b-2647288efe10" Nov 1 01:38:22.458670 kubelet[3248]: E1101 
01:38:22.458613 3248 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-649cb85dc5-gfz8g" podUID="f0eea4e4-1f5d-465a-aad6-ce4cfbcde2c9" Nov 1 01:38:25.457992 kubelet[3248]: E1101 01:38:25.457959 3248 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65957f8fc6-nf7rw" podUID="5dd05a57-68a7-45f0-82e4-c02ae8d0fe49" Nov 1 01:38:27.459431 kubelet[3248]: E1101 01:38:27.459303 3248 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65957f8fc6-h4mgq" podUID="07d4f1da-121e-4217-9b76-751186743f3a" Nov 1 01:38:32.460110 kubelet[3248]: E1101 01:38:32.459885 3248 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9grzl" podUID="f1594ca4-f4b9-4bd5-a5ea-30623fb4e42b" Nov 1 01:38:33.462836 kubelet[3248]: E1101 01:38:33.462761 3248 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: 
\"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6874c989b8-tpm2w" podUID="c3dae49a-08cd-4d54-8b47-222aaaea72bd" Nov 1 01:38:33.463781 kubelet[3248]: E1101 01:38:33.463276 3248 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-649cb85dc5-gfz8g" podUID="f0eea4e4-1f5d-465a-aad6-ce4cfbcde2c9" Nov 1 01:38:36.457249 kubelet[3248]: E1101 01:38:36.457201 3248 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-cwdx7" podUID="74c9c676-a5c3-4018-bd2b-2647288efe10" Nov 1 01:38:38.459025 kubelet[3248]: E1101 01:38:38.458883 3248 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65957f8fc6-nf7rw" podUID="5dd05a57-68a7-45f0-82e4-c02ae8d0fe49" Nov 1 01:38:42.458294 kubelet[3248]: E1101 01:38:42.458267 3248 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65957f8fc6-h4mgq" podUID="07d4f1da-121e-4217-9b76-751186743f3a" Nov 1 01:38:44.458075 kubelet[3248]: E1101 01:38:44.458046 3248 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to 
\"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9grzl" podUID="f1594ca4-f4b9-4bd5-a5ea-30623fb4e42b" Nov 1 01:38:45.458081 kubelet[3248]: E1101 01:38:45.458052 3248 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-649cb85dc5-gfz8g" podUID="f0eea4e4-1f5d-465a-aad6-ce4cfbcde2c9" Nov 1 01:38:46.458703 kubelet[3248]: E1101 01:38:46.458627 3248 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6874c989b8-tpm2w" podUID="c3dae49a-08cd-4d54-8b47-222aaaea72bd" Nov 1 01:38:50.457635 kubelet[3248]: E1101 01:38:50.457597 3248 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-cwdx7" podUID="74c9c676-a5c3-4018-bd2b-2647288efe10" Nov 1 01:38:52.458901 kubelet[3248]: E1101 01:38:52.458818 3248 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": 
ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65957f8fc6-nf7rw" podUID="5dd05a57-68a7-45f0-82e4-c02ae8d0fe49" Nov 1 01:38:55.458204 kubelet[3248]: E1101 01:38:55.458181 3248 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65957f8fc6-h4mgq" podUID="07d4f1da-121e-4217-9b76-751186743f3a" Nov 1 01:38:55.458658 kubelet[3248]: E1101 01:38:55.458326 3248 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9grzl" podUID="f1594ca4-f4b9-4bd5-a5ea-30623fb4e42b" Nov 1 01:38:56.460046 kubelet[3248]: E1101 01:38:56.459949 3248 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-649cb85dc5-gfz8g" podUID="f0eea4e4-1f5d-465a-aad6-ce4cfbcde2c9" Nov 1 01:39:00.457906 kubelet[3248]: E1101 01:39:00.457847 3248 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to 
resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6874c989b8-tpm2w" podUID="c3dae49a-08cd-4d54-8b47-222aaaea72bd" Nov 1 01:39:02.458096 kubelet[3248]: E1101 01:39:02.458024 3248 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-cwdx7" podUID="74c9c676-a5c3-4018-bd2b-2647288efe10" Nov 1 01:39:07.458696 kubelet[3248]: E1101 01:39:07.458623 3248 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65957f8fc6-nf7rw" podUID="5dd05a57-68a7-45f0-82e4-c02ae8d0fe49" Nov 1 01:39:07.459495 kubelet[3248]: E1101 01:39:07.459056 3248 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-649cb85dc5-gfz8g" podUID="f0eea4e4-1f5d-465a-aad6-ce4cfbcde2c9" Nov 1 01:39:09.459039 kubelet[3248]: E1101 01:39:09.458930 3248 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65957f8fc6-h4mgq" podUID="07d4f1da-121e-4217-9b76-751186743f3a" Nov 1 01:39:10.459562 kubelet[3248]: E1101 01:39:10.459481 3248 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9grzl" podUID="f1594ca4-f4b9-4bd5-a5ea-30623fb4e42b" Nov 1 01:39:13.458517 kubelet[3248]: E1101 01:39:13.458484 3248 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6874c989b8-tpm2w" podUID="c3dae49a-08cd-4d54-8b47-222aaaea72bd" Nov 1 01:39:17.459574 kubelet[3248]: E1101 01:39:17.459457 3248 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-cwdx7" podUID="74c9c676-a5c3-4018-bd2b-2647288efe10" Nov 1 01:39:18.459888 kubelet[3248]: E1101 01:39:18.459757 3248 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65957f8fc6-nf7rw" podUID="5dd05a57-68a7-45f0-82e4-c02ae8d0fe49" Nov 1 01:39:21.458515 kubelet[3248]: E1101 01:39:21.458463 3248 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-649cb85dc5-gfz8g" podUID="f0eea4e4-1f5d-465a-aad6-ce4cfbcde2c9" Nov 1 01:39:21.458515 kubelet[3248]: E1101 01:39:21.458471 3248 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9grzl" podUID="f1594ca4-f4b9-4bd5-a5ea-30623fb4e42b" Nov 1 01:39:22.457379 kubelet[3248]: E1101 01:39:22.457319 3248 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65957f8fc6-h4mgq" podUID="07d4f1da-121e-4217-9b76-751186743f3a" Nov 1 01:39:28.459832 kubelet[3248]: E1101 01:39:28.459704 3248 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6874c989b8-tpm2w" podUID="c3dae49a-08cd-4d54-8b47-222aaaea72bd" Nov 1 01:39:30.457781 kubelet[3248]: E1101 01:39:30.457720 3248 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65957f8fc6-nf7rw" podUID="5dd05a57-68a7-45f0-82e4-c02ae8d0fe49" Nov 1 01:39:32.457282 kubelet[3248]: E1101 01:39:32.457217 3248 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-cwdx7" podUID="74c9c676-a5c3-4018-bd2b-2647288efe10" Nov 1 01:39:32.457702 kubelet[3248]: E1101 01:39:32.457460 3248 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9grzl" podUID="f1594ca4-f4b9-4bd5-a5ea-30623fb4e42b" Nov 1 01:39:34.458735 kubelet[3248]: E1101 01:39:34.458109 3248 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65957f8fc6-h4mgq" podUID="07d4f1da-121e-4217-9b76-751186743f3a" Nov 1 01:39:35.458807 containerd[1914]: time="2025-11-01T01:39:35.458753503Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 1 01:39:35.834460 containerd[1914]: time="2025-11-01T01:39:35.834298542Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:39:35.834991 containerd[1914]: time="2025-11-01T01:39:35.834919143Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 1 01:39:35.835028 containerd[1914]: time="2025-11-01T01:39:35.834990228Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 1 01:39:35.835089 kubelet[3248]: E1101 01:39:35.835065 3248 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 01:39:35.835290 kubelet[3248]: E1101 01:39:35.835100 3248 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 01:39:35.835290 kubelet[3248]: E1101 01:39:35.835168 3248 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:580cd2697486438ea258991ade3b4df2,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-w4z8g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-649cb85dc5-gfz8g_calico-system(f0eea4e4-1f5d-465a-aad6-ce4cfbcde2c9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 1 01:39:35.836673 containerd[1914]: time="2025-11-01T01:39:35.836632694Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 1 01:39:36.220120 containerd[1914]: time="2025-11-01T01:39:36.220047618Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:39:36.220684 containerd[1914]: time="2025-11-01T01:39:36.220639737Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 1 01:39:36.220758 containerd[1914]: time="2025-11-01T01:39:36.220677272Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 1 01:39:36.220784 kubelet[3248]: E1101 01:39:36.220760 3248 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 01:39:36.220813 kubelet[3248]: E1101 01:39:36.220793 3248 kuberuntime_image.go:55] 
"Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 01:39:36.220938 kubelet[3248]: E1101 01:39:36.220867 3248 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w4z8g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-649cb85dc5-gfz8g_calico-system(f0eea4e4-1f5d-465a-aad6-ce4cfbcde2c9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 1 01:39:36.222094 kubelet[3248]: E1101 01:39:36.222054 3248 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-649cb85dc5-gfz8g" podUID="f0eea4e4-1f5d-465a-aad6-ce4cfbcde2c9" Nov 1 01:39:41.458752 
containerd[1914]: time="2025-11-01T01:39:41.458684804Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 1 01:39:41.836860 containerd[1914]: time="2025-11-01T01:39:41.836776427Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:39:41.837319 containerd[1914]: time="2025-11-01T01:39:41.837297391Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 1 01:39:41.837372 containerd[1914]: time="2025-11-01T01:39:41.837348850Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 1 01:39:41.837469 kubelet[3248]: E1101 01:39:41.837443 3248 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 01:39:41.837812 kubelet[3248]: E1101 01:39:41.837476 3248 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 01:39:41.837812 kubelet[3248]: E1101 01:39:41.837552 3248 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-42cch,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-6874c989b8-tpm2w_calico-system(c3dae49a-08cd-4d54-8b47-222aaaea72bd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 1 01:39:41.838806 kubelet[3248]: E1101 01:39:41.838764 3248 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6874c989b8-tpm2w" podUID="c3dae49a-08cd-4d54-8b47-222aaaea72bd" Nov 1 01:39:43.458067 containerd[1914]: time="2025-11-01T01:39:43.458044476Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 01:39:43.882831 containerd[1914]: time="2025-11-01T01:39:43.882685536Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:39:43.883163 containerd[1914]: time="2025-11-01T01:39:43.883147244Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 01:39:43.883216 containerd[1914]: time="2025-11-01T01:39:43.883193261Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 1 01:39:43.883355 kubelet[3248]: E1101 01:39:43.883297 3248 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 01:39:43.883355 kubelet[3248]: E1101 01:39:43.883339 3248 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 01:39:43.883596 
kubelet[3248]: E1101 01:39:43.883442 3248 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-89872,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-65957f8fc6-nf7rw_calico-apiserver(5dd05a57-68a7-45f0-82e4-c02ae8d0fe49): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 01:39:43.884630 kubelet[3248]: E1101 01:39:43.884586 3248 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65957f8fc6-nf7rw" podUID="5dd05a57-68a7-45f0-82e4-c02ae8d0fe49" Nov 1 01:39:44.460332 containerd[1914]: time="2025-11-01T01:39:44.460204544Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 1 01:39:44.817292 containerd[1914]: time="2025-11-01T01:39:44.817189002Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:39:44.817859 containerd[1914]: time="2025-11-01T01:39:44.817811534Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 1 01:39:44.817906 containerd[1914]: time="2025-11-01T01:39:44.817857046Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 1 01:39:44.817977 kubelet[3248]: E1101 01:39:44.817951 3248 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 01:39:44.818022 kubelet[3248]: E1101 01:39:44.817991 3248 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 01:39:44.818121 kubelet[3248]: E1101 01:39:44.818091 3248 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lmc4z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-cwdx7_calico-system(74c9c676-a5c3-4018-bd2b-2647288efe10): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 1 01:39:44.819335 kubelet[3248]: E1101 01:39:44.819280 3248 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-cwdx7" podUID="74c9c676-a5c3-4018-bd2b-2647288efe10" Nov 1 01:39:46.462594 containerd[1914]: time="2025-11-01T01:39:46.462569220Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 01:39:46.861975 containerd[1914]: time="2025-11-01T01:39:46.861759806Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:39:46.862523 containerd[1914]: time="2025-11-01T01:39:46.862507051Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 01:39:46.862568 containerd[1914]: time="2025-11-01T01:39:46.862551961Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 1 01:39:46.862680 kubelet[3248]: E1101 01:39:46.862657 3248 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 01:39:46.862927 kubelet[3248]: E1101 01:39:46.862690 3248 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 01:39:46.862927 kubelet[3248]: E1101 01:39:46.862858 3248 kuberuntime_manager.go:1341] "Unhandled Error" 
err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7h2qg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-65957f8fc6-h4mgq_calico-apiserver(07d4f1da-121e-4217-9b76-751186743f3a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 01:39:46.863046 containerd[1914]: time="2025-11-01T01:39:46.862907031Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 1 01:39:46.863985 kubelet[3248]: E1101 01:39:46.863967 3248 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65957f8fc6-h4mgq" podUID="07d4f1da-121e-4217-9b76-751186743f3a" Nov 1 01:39:47.228079 containerd[1914]: time="2025-11-01T01:39:47.227961311Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:39:47.229030 containerd[1914]: time="2025-11-01T01:39:47.228958253Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve 
reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 1 01:39:47.229030 containerd[1914]: time="2025-11-01T01:39:47.229012598Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 1 01:39:47.229127 kubelet[3248]: E1101 01:39:47.229106 3248 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 01:39:47.229159 kubelet[3248]: E1101 01:39:47.229136 3248 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 01:39:47.229278 kubelet[3248]: E1101 01:39:47.229205 3248 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j5pvx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-9grzl_calico-system(f1594ca4-f4b9-4bd5-a5ea-30623fb4e42b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 1 01:39:47.230796 containerd[1914]: time="2025-11-01T01:39:47.230754807Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 1 01:39:47.619157 containerd[1914]: 
time="2025-11-01T01:39:47.618914482Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:39:47.619940 containerd[1914]: time="2025-11-01T01:39:47.619846490Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 1 01:39:47.619940 containerd[1914]: time="2025-11-01T01:39:47.619911602Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 1 01:39:47.620040 kubelet[3248]: E1101 01:39:47.619992 3248 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 01:39:47.620040 kubelet[3248]: E1101 01:39:47.620024 3248 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 01:39:47.620118 kubelet[3248]: E1101 01:39:47.620093 3248 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j5pvx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-9grzl_calico-system(f1594ca4-f4b9-4bd5-a5ea-30623fb4e42b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 1 01:39:47.621348 kubelet[3248]: E1101 01:39:47.621286 3248 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9grzl" podUID="f1594ca4-f4b9-4bd5-a5ea-30623fb4e42b" Nov 1 01:39:49.460653 kubelet[3248]: E1101 01:39:49.460537 3248 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for 
\"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-649cb85dc5-gfz8g" podUID="f0eea4e4-1f5d-465a-aad6-ce4cfbcde2c9" Nov 1 01:39:52.459577 kubelet[3248]: E1101 01:39:52.459473 3248 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6874c989b8-tpm2w" podUID="c3dae49a-08cd-4d54-8b47-222aaaea72bd" Nov 1 01:39:56.457274 kubelet[3248]: E1101 01:39:56.457245 3248 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65957f8fc6-nf7rw" podUID="5dd05a57-68a7-45f0-82e4-c02ae8d0fe49" Nov 1 01:39:57.459373 kubelet[3248]: E1101 01:39:57.459205 3248 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-cwdx7" podUID="74c9c676-a5c3-4018-bd2b-2647288efe10" Nov 1 01:39:58.459309 kubelet[3248]: E1101 01:39:58.459168 3248 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9grzl" podUID="f1594ca4-f4b9-4bd5-a5ea-30623fb4e42b" Nov 1 01:39:59.457640 kubelet[3248]: E1101 01:39:59.457607 
3248 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65957f8fc6-h4mgq" podUID="07d4f1da-121e-4217-9b76-751186743f3a" Nov 1 01:40:04.457491 kubelet[3248]: E1101 01:40:04.457446 3248 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6874c989b8-tpm2w" podUID="c3dae49a-08cd-4d54-8b47-222aaaea72bd" Nov 1 01:40:04.457977 kubelet[3248]: E1101 01:40:04.457734 3248 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-649cb85dc5-gfz8g" podUID="f0eea4e4-1f5d-465a-aad6-ce4cfbcde2c9" Nov 1 01:40:08.459546 kubelet[3248]: E1101 01:40:08.459429 3248 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-cwdx7" podUID="74c9c676-a5c3-4018-bd2b-2647288efe10" Nov 1 01:40:09.457273 kubelet[3248]: E1101 01:40:09.457204 3248 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65957f8fc6-nf7rw" 
podUID="5dd05a57-68a7-45f0-82e4-c02ae8d0fe49" Nov 1 01:40:13.460804 kubelet[3248]: E1101 01:40:13.460687 3248 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65957f8fc6-h4mgq" podUID="07d4f1da-121e-4217-9b76-751186743f3a" Nov 1 01:40:13.462494 kubelet[3248]: E1101 01:40:13.461358 3248 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9grzl" podUID="f1594ca4-f4b9-4bd5-a5ea-30623fb4e42b" Nov 1 01:40:15.457650 kubelet[3248]: E1101 01:40:15.457589 3248 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-649cb85dc5-gfz8g" podUID="f0eea4e4-1f5d-465a-aad6-ce4cfbcde2c9" Nov 1 01:40:17.459679 kubelet[3248]: E1101 01:40:17.459552 3248 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6874c989b8-tpm2w" podUID="c3dae49a-08cd-4d54-8b47-222aaaea72bd" Nov 1 01:40:23.457970 kubelet[3248]: E1101 01:40:23.457929 3248 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65957f8fc6-nf7rw" podUID="5dd05a57-68a7-45f0-82e4-c02ae8d0fe49" Nov 1 01:40:23.458347 kubelet[3248]: E1101 01:40:23.458059 3248 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-cwdx7" podUID="74c9c676-a5c3-4018-bd2b-2647288efe10" Nov 1 01:40:25.458021 kubelet[3248]: E1101 01:40:25.457963 3248 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9grzl" podUID="f1594ca4-f4b9-4bd5-a5ea-30623fb4e42b" Nov 1 01:40:26.457786 kubelet[3248]: E1101 01:40:26.457744 3248 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65957f8fc6-h4mgq" podUID="07d4f1da-121e-4217-9b76-751186743f3a" Nov 1 01:40:29.458190 kubelet[3248]: E1101 01:40:29.458158 3248 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6874c989b8-tpm2w" 
podUID="c3dae49a-08cd-4d54-8b47-222aaaea72bd" Nov 1 01:40:29.458874 kubelet[3248]: E1101 01:40:29.458314 3248 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-649cb85dc5-gfz8g" podUID="f0eea4e4-1f5d-465a-aad6-ce4cfbcde2c9" Nov 1 01:40:35.458240 kubelet[3248]: E1101 01:40:35.458170 3248 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-cwdx7" podUID="74c9c676-a5c3-4018-bd2b-2647288efe10" Nov 1 01:40:38.458180 kubelet[3248]: E1101 01:40:38.458123 3248 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65957f8fc6-nf7rw" podUID="5dd05a57-68a7-45f0-82e4-c02ae8d0fe49" Nov 1 01:40:38.458180 kubelet[3248]: E1101 01:40:38.458135 3248 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65957f8fc6-h4mgq" podUID="07d4f1da-121e-4217-9b76-751186743f3a" Nov 1 01:40:38.458880 kubelet[3248]: E1101 01:40:38.458662 3248 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" 
with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9grzl" podUID="f1594ca4-f4b9-4bd5-a5ea-30623fb4e42b" Nov 1 01:40:41.457163 kubelet[3248]: E1101 01:40:41.457140 3248 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6874c989b8-tpm2w" podUID="c3dae49a-08cd-4d54-8b47-222aaaea72bd" Nov 1 01:40:42.457828 kubelet[3248]: E1101 01:40:42.457790 3248 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-649cb85dc5-gfz8g" podUID="f0eea4e4-1f5d-465a-aad6-ce4cfbcde2c9" Nov 1 01:40:49.458183 kubelet[3248]: E1101 01:40:49.458143 3248 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65957f8fc6-h4mgq" podUID="07d4f1da-121e-4217-9b76-751186743f3a" Nov 1 01:40:50.457725 kubelet[3248]: E1101 01:40:50.457663 3248 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-cwdx7" podUID="74c9c676-a5c3-4018-bd2b-2647288efe10" Nov 1 01:40:52.463223 kubelet[3248]: E1101 01:40:52.459641 3248 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65957f8fc6-nf7rw" podUID="5dd05a57-68a7-45f0-82e4-c02ae8d0fe49" Nov 1 01:40:53.460957 kubelet[3248]: E1101 01:40:53.460841 3248 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9grzl" podUID="f1594ca4-f4b9-4bd5-a5ea-30623fb4e42b" Nov 1 01:40:56.459456 kubelet[3248]: E1101 01:40:56.459299 3248 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6874c989b8-tpm2w" podUID="c3dae49a-08cd-4d54-8b47-222aaaea72bd" Nov 1 01:40:56.460793 kubelet[3248]: E1101 01:40:56.460685 3248 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-649cb85dc5-gfz8g" podUID="f0eea4e4-1f5d-465a-aad6-ce4cfbcde2c9" Nov 1 01:41:03.457269 kubelet[3248]: E1101 01:41:03.457247 3248 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with 
ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65957f8fc6-h4mgq" podUID="07d4f1da-121e-4217-9b76-751186743f3a" Nov 1 01:41:05.459261 kubelet[3248]: E1101 01:41:05.459145 3248 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-cwdx7" podUID="74c9c676-a5c3-4018-bd2b-2647288efe10" Nov 1 01:41:06.457726 kubelet[3248]: E1101 01:41:06.457695 3248 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65957f8fc6-nf7rw" podUID="5dd05a57-68a7-45f0-82e4-c02ae8d0fe49" Nov 1 01:41:07.460070 kubelet[3248]: E1101 01:41:07.460031 3248 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6874c989b8-tpm2w" podUID="c3dae49a-08cd-4d54-8b47-222aaaea72bd" Nov 1 01:41:07.460534 kubelet[3248]: E1101 01:41:07.460487 3248 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9grzl" podUID="f1594ca4-f4b9-4bd5-a5ea-30623fb4e42b" Nov 1 01:41:08.457913 kubelet[3248]: E1101 01:41:08.457878 3248 pod_workers.go:1301] "Error 
syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-649cb85dc5-gfz8g" podUID="f0eea4e4-1f5d-465a-aad6-ce4cfbcde2c9" Nov 1 01:41:17.459440 kubelet[3248]: E1101 01:41:17.459347 3248 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65957f8fc6-h4mgq" podUID="07d4f1da-121e-4217-9b76-751186743f3a" Nov 1 01:41:19.457840 kubelet[3248]: E1101 01:41:19.457794 3248 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-cwdx7" podUID="74c9c676-a5c3-4018-bd2b-2647288efe10" Nov 1 01:41:19.457840 kubelet[3248]: E1101 01:41:19.457795 3248 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65957f8fc6-nf7rw" podUID="5dd05a57-68a7-45f0-82e4-c02ae8d0fe49" Nov 1 01:41:22.457647 kubelet[3248]: E1101 01:41:22.457616 3248 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6874c989b8-tpm2w" podUID="c3dae49a-08cd-4d54-8b47-222aaaea72bd" Nov 1 01:41:22.458002 
kubelet[3248]: E1101 01:41:22.457855 3248 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-649cb85dc5-gfz8g" podUID="f0eea4e4-1f5d-465a-aad6-ce4cfbcde2c9" Nov 1 01:41:22.458002 kubelet[3248]: E1101 01:41:22.457869 3248 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9grzl" podUID="f1594ca4-f4b9-4bd5-a5ea-30623fb4e42b" Nov 1 01:41:30.458136 kubelet[3248]: E1101 01:41:30.458060 3248 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65957f8fc6-nf7rw" podUID="5dd05a57-68a7-45f0-82e4-c02ae8d0fe49" Nov 1 01:41:32.459080 kubelet[3248]: E1101 01:41:32.458993 3248 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65957f8fc6-h4mgq" podUID="07d4f1da-121e-4217-9b76-751186743f3a" Nov 1 01:41:34.458673 kubelet[3248]: E1101 01:41:34.458559 3248 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with 
ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-cwdx7" podUID="74c9c676-a5c3-4018-bd2b-2647288efe10" Nov 1 01:41:35.457914 kubelet[3248]: E1101 01:41:35.457824 3248 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6874c989b8-tpm2w" podUID="c3dae49a-08cd-4d54-8b47-222aaaea72bd" Nov 1 01:41:37.458954 kubelet[3248]: E1101 01:41:37.458895 3248 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-649cb85dc5-gfz8g" podUID="f0eea4e4-1f5d-465a-aad6-ce4cfbcde2c9" Nov 1 01:41:37.458954 kubelet[3248]: E1101 01:41:37.458905 3248 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9grzl" podUID="f1594ca4-f4b9-4bd5-a5ea-30623fb4e42b" Nov 1 01:41:41.458922 kubelet[3248]: E1101 01:41:41.458821 3248 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc 
= failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65957f8fc6-nf7rw" podUID="5dd05a57-68a7-45f0-82e4-c02ae8d0fe49" Nov 1 01:41:45.458158 kubelet[3248]: E1101 01:41:45.458130 3248 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65957f8fc6-h4mgq" podUID="07d4f1da-121e-4217-9b76-751186743f3a" Nov 1 01:41:46.457267 kubelet[3248]: E1101 01:41:46.457206 3248 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6874c989b8-tpm2w" podUID="c3dae49a-08cd-4d54-8b47-222aaaea72bd" Nov 1 01:41:48.458060 kubelet[3248]: E1101 01:41:48.458006 3248 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-cwdx7" podUID="74c9c676-a5c3-4018-bd2b-2647288efe10" Nov 1 01:41:50.458803 kubelet[3248]: E1101 01:41:50.458739 3248 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9grzl" podUID="f1594ca4-f4b9-4bd5-a5ea-30623fb4e42b" Nov 1 01:41:51.458024 kubelet[3248]: E1101 01:41:51.457997 3248 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-649cb85dc5-gfz8g" podUID="f0eea4e4-1f5d-465a-aad6-ce4cfbcde2c9" Nov 1 01:41:52.459581 kubelet[3248]: E1101 01:41:52.459442 3248 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65957f8fc6-nf7rw" podUID="5dd05a57-68a7-45f0-82e4-c02ae8d0fe49" Nov 1 01:41:58.458073 kubelet[3248]: E1101 01:41:58.458015 3248 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65957f8fc6-h4mgq" podUID="07d4f1da-121e-4217-9b76-751186743f3a" Nov 1 01:41:59.457867 kubelet[3248]: E1101 01:41:59.457832 3248 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6874c989b8-tpm2w" podUID="c3dae49a-08cd-4d54-8b47-222aaaea72bd" Nov 1 01:42:00.459353 kubelet[3248]: E1101 01:42:00.459262 3248 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-cwdx7" podUID="74c9c676-a5c3-4018-bd2b-2647288efe10" Nov 1 01:42:02.457848 kubelet[3248]: E1101 01:42:02.457784 3248 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" 
for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-649cb85dc5-gfz8g" podUID="f0eea4e4-1f5d-465a-aad6-ce4cfbcde2c9" Nov 1 01:42:02.458184 kubelet[3248]: E1101 01:42:02.457850 3248 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9grzl" podUID="f1594ca4-f4b9-4bd5-a5ea-30623fb4e42b" Nov 1 01:42:04.123619 systemd[1]: Started sshd@9-139.178.94.199:22-139.178.89.65:40364.service - OpenSSH per-connection server daemon (139.178.89.65:40364). Nov 1 01:42:04.144856 sshd[7508]: Accepted publickey for core from 139.178.89.65 port 40364 ssh2: RSA SHA256:7Ytc58gyNID63TmFqpvnPOSHtyQqpisrsHUaCNkqIsk Nov 1 01:42:04.145683 sshd[7508]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 01:42:04.148552 systemd-logind[1905]: New session 12 of user core. Nov 1 01:42:04.162420 systemd[1]: Started session-12.scope - Session 12 of User core. Nov 1 01:42:04.244158 sshd[7508]: pam_unix(sshd:session): session closed for user core Nov 1 01:42:04.245790 systemd[1]: sshd@9-139.178.94.199:22-139.178.89.65:40364.service: Deactivated successfully. Nov 1 01:42:04.247224 systemd-logind[1905]: Session 12 logged out. Waiting for processes to exit. Nov 1 01:42:04.247331 systemd[1]: session-12.scope: Deactivated successfully. Nov 1 01:42:04.247913 systemd-logind[1905]: Removed session 12. 
Nov 1 01:42:06.458398 kubelet[3248]: E1101 01:42:06.458348 3248 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65957f8fc6-nf7rw" podUID="5dd05a57-68a7-45f0-82e4-c02ae8d0fe49" Nov 1 01:42:09.256459 systemd[1]: Started sshd@10-139.178.94.199:22-139.178.89.65:59448.service - OpenSSH per-connection server daemon (139.178.89.65:59448). Nov 1 01:42:09.277624 sshd[7570]: Accepted publickey for core from 139.178.89.65 port 59448 ssh2: RSA SHA256:7Ytc58gyNID63TmFqpvnPOSHtyQqpisrsHUaCNkqIsk Nov 1 01:42:09.278357 sshd[7570]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 01:42:09.280945 systemd-logind[1905]: New session 13 of user core. Nov 1 01:42:09.293417 systemd[1]: Started session-13.scope - Session 13 of User core. Nov 1 01:42:09.384406 sshd[7570]: pam_unix(sshd:session): session closed for user core Nov 1 01:42:09.386236 systemd[1]: sshd@10-139.178.94.199:22-139.178.89.65:59448.service: Deactivated successfully. Nov 1 01:42:09.387899 systemd-logind[1905]: Session 13 logged out. Waiting for processes to exit. Nov 1 01:42:09.388068 systemd[1]: session-13.scope: Deactivated successfully. Nov 1 01:42:09.388828 systemd-logind[1905]: Removed session 13. Nov 1 01:42:10.458891 kubelet[3248]: E1101 01:42:10.458803 3248 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65957f8fc6-h4mgq" podUID="07d4f1da-121e-4217-9b76-751186743f3a" Nov 1 01:42:12.458825 kubelet[3248]: E1101 01:42:12.458765 3248 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-cwdx7" podUID="74c9c676-a5c3-4018-bd2b-2647288efe10" Nov 1 01:42:12.458825 kubelet[3248]: E1101 01:42:12.458773 3248 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6874c989b8-tpm2w" 
podUID="c3dae49a-08cd-4d54-8b47-222aaaea72bd" Nov 1 01:42:14.399982 systemd[1]: Started sshd@11-139.178.94.199:22-139.178.89.65:59458.service - OpenSSH per-connection server daemon (139.178.89.65:59458). Nov 1 01:42:14.454086 sshd[7602]: Accepted publickey for core from 139.178.89.65 port 59458 ssh2: RSA SHA256:7Ytc58gyNID63TmFqpvnPOSHtyQqpisrsHUaCNkqIsk Nov 1 01:42:14.457638 sshd[7602]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 01:42:14.469291 systemd-logind[1905]: New session 14 of user core. Nov 1 01:42:14.488110 systemd[1]: Started session-14.scope - Session 14 of User core. Nov 1 01:42:14.594893 sshd[7602]: pam_unix(sshd:session): session closed for user core Nov 1 01:42:14.609904 systemd[1]: Started sshd@12-139.178.94.199:22-139.178.89.65:59462.service - OpenSSH per-connection server daemon (139.178.89.65:59462). Nov 1 01:42:14.611871 systemd[1]: sshd@11-139.178.94.199:22-139.178.89.65:59458.service: Deactivated successfully. Nov 1 01:42:14.616240 systemd[1]: session-14.scope: Deactivated successfully. Nov 1 01:42:14.619830 systemd-logind[1905]: Session 14 logged out. Waiting for processes to exit. Nov 1 01:42:14.622823 systemd-logind[1905]: Removed session 14. Nov 1 01:42:14.653802 sshd[7629]: Accepted publickey for core from 139.178.89.65 port 59462 ssh2: RSA SHA256:7Ytc58gyNID63TmFqpvnPOSHtyQqpisrsHUaCNkqIsk Nov 1 01:42:14.655055 sshd[7629]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 01:42:14.659578 systemd-logind[1905]: New session 15 of user core. Nov 1 01:42:14.668536 systemd[1]: Started session-15.scope - Session 15 of User core. Nov 1 01:42:14.806278 sshd[7629]: pam_unix(sshd:session): session closed for user core Nov 1 01:42:14.818426 systemd[1]: Started sshd@13-139.178.94.199:22-139.178.89.65:59468.service - OpenSSH per-connection server daemon (139.178.89.65:59468). Nov 1 01:42:14.818759 systemd[1]: sshd@12-139.178.94.199:22-139.178.89.65:59462.service: Deactivated successfully. Nov 1 01:42:14.819657 systemd[1]: session-15.scope: Deactivated successfully. Nov 1 01:42:14.820423 systemd-logind[1905]: Session 15 logged out. Waiting for processes to exit. Nov 1 01:42:14.821044 systemd-logind[1905]: Removed session 15. Nov 1 01:42:14.839687 sshd[7655]: Accepted publickey for core from 139.178.89.65 port 59468 ssh2: RSA SHA256:7Ytc58gyNID63TmFqpvnPOSHtyQqpisrsHUaCNkqIsk Nov 1 01:42:14.840402 sshd[7655]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 01:42:14.843151 systemd-logind[1905]: New session 16 of user core. Nov 1 01:42:14.843770 systemd[1]: Started session-16.scope - Session 16 of User core. Nov 1 01:42:14.970690 sshd[7655]: pam_unix(sshd:session): session closed for user core Nov 1 01:42:14.972333 systemd[1]: sshd@13-139.178.94.199:22-139.178.89.65:59468.service: Deactivated successfully. Nov 1 01:42:14.973710 systemd-logind[1905]: Session 16 logged out. Waiting for processes to exit. Nov 1 01:42:14.973892 systemd[1]: session-16.scope: Deactivated successfully. Nov 1 01:42:14.974501 systemd-logind[1905]: Removed session 16. 
Nov 1 01:42:17.458471 containerd[1914]: time="2025-11-01T01:42:17.458423504Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 1 01:42:17.459020 kubelet[3248]: E1101 01:42:17.458508 3248 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9grzl" podUID="f1594ca4-f4b9-4bd5-a5ea-30623fb4e42b" Nov 1 01:42:17.827374 containerd[1914]: time="2025-11-01T01:42:17.827109111Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:42:17.828171 containerd[1914]: time="2025-11-01T01:42:17.828147281Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 1 01:42:17.828255 containerd[1914]: time="2025-11-01T01:42:17.828212261Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 1 01:42:17.828389 kubelet[3248]: E1101 01:42:17.828345 3248 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 01:42:17.828437 kubelet[3248]: E1101 01:42:17.828399 3248 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 01:42:17.828510 kubelet[3248]: E1101 01:42:17.828488 3248 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:580cd2697486438ea258991ade3b4df2,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-w4z8g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-649cb85dc5-gfz8g_calico-system(f0eea4e4-1f5d-465a-aad6-ce4cfbcde2c9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 1 01:42:17.830167 containerd[1914]: time="2025-11-01T01:42:17.830156945Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 1 01:42:18.193136 containerd[1914]: time="2025-11-01T01:42:18.193089641Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:42:18.193819 containerd[1914]: time="2025-11-01T01:42:18.193795396Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 1 01:42:18.193885 containerd[1914]: time="2025-11-01T01:42:18.193865533Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 1 01:42:18.193977 kubelet[3248]: E1101 01:42:18.193958 3248 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 01:42:18.194005 kubelet[3248]: E1101 01:42:18.193986 3248 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: 
not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 01:42:18.194070 kubelet[3248]: E1101 01:42:18.194050 3248 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w4z8g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-649cb85dc5-gfz8g_calico-system(f0eea4e4-1f5d-465a-aad6-ce4cfbcde2c9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 1 01:42:18.195207 kubelet[3248]: E1101 01:42:18.195188 3248 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-649cb85dc5-gfz8g" podUID="f0eea4e4-1f5d-465a-aad6-ce4cfbcde2c9" Nov 1 01:42:19.458260 kubelet[3248]: E1101 01:42:19.458177 3248 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = 
NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65957f8fc6-nf7rw" podUID="5dd05a57-68a7-45f0-82e4-c02ae8d0fe49" Nov 1 01:42:19.981470 systemd[1]: Started sshd@14-139.178.94.199:22-139.178.89.65:38906.service - OpenSSH per-connection server daemon (139.178.89.65:38906). Nov 1 01:42:20.005999 sshd[7686]: Accepted publickey for core from 139.178.89.65 port 38906 ssh2: RSA SHA256:7Ytc58gyNID63TmFqpvnPOSHtyQqpisrsHUaCNkqIsk Nov 1 01:42:20.007040 sshd[7686]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 01:42:20.010286 systemd-logind[1905]: New session 17 of user core. Nov 1 01:42:20.020452 systemd[1]: Started session-17.scope - Session 17 of User core. Nov 1 01:42:20.100338 sshd[7686]: pam_unix(sshd:session): session closed for user core Nov 1 01:42:20.101780 systemd[1]: sshd@14-139.178.94.199:22-139.178.89.65:38906.service: Deactivated successfully. Nov 1 01:42:20.103181 systemd-logind[1905]: Session 17 logged out. Waiting for processes to exit. Nov 1 01:42:20.103342 systemd[1]: session-17.scope: Deactivated successfully. Nov 1 01:42:20.103945 systemd-logind[1905]: Removed session 17. Nov 1 01:42:23.457731 kubelet[3248]: E1101 01:42:23.457700 3248 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-cwdx7" podUID="74c9c676-a5c3-4018-bd2b-2647288efe10" Nov 1 01:42:23.458156 containerd[1914]: time="2025-11-01T01:42:23.457942805Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 1 01:42:23.795387 containerd[1914]: time="2025-11-01T01:42:23.795148218Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:42:23.796163 containerd[1914]: time="2025-11-01T01:42:23.796139953Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 1 01:42:23.796232 containerd[1914]: time="2025-11-01T01:42:23.796214722Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 1 01:42:23.796308 kubelet[3248]: E1101 01:42:23.796285 3248 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 01:42:23.796342 kubelet[3248]: E1101 01:42:23.796318 3248 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 01:42:23.796425 kubelet[3248]: E1101 01:42:23.796396 3248 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-42cch,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-6874c989b8-tpm2w_calico-system(c3dae49a-08cd-4d54-8b47-222aaaea72bd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 1 01:42:23.797590 kubelet[3248]: E1101 01:42:23.797574 3248 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6874c989b8-tpm2w" podUID="c3dae49a-08cd-4d54-8b47-222aaaea72bd" Nov 1 01:42:25.110430 systemd[1]: Started sshd@15-139.178.94.199:22-139.178.89.65:38910.service - OpenSSH per-connection server daemon (139.178.89.65:38910). Nov 1 01:42:25.131420 sshd[7717]: Accepted publickey for core from 139.178.89.65 port 38910 ssh2: RSA SHA256:7Ytc58gyNID63TmFqpvnPOSHtyQqpisrsHUaCNkqIsk Nov 1 01:42:25.132296 sshd[7717]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 01:42:25.134768 systemd-logind[1905]: New session 18 of user core. Nov 1 01:42:25.143448 systemd[1]: Started session-18.scope - Session 18 of User core. Nov 1 01:42:25.222187 sshd[7717]: pam_unix(sshd:session): session closed for user core Nov 1 01:42:25.223597 systemd[1]: sshd@15-139.178.94.199:22-139.178.89.65:38910.service: Deactivated successfully. Nov 1 01:42:25.224999 systemd-logind[1905]: Session 18 logged out. Waiting for processes to exit. Nov 1 01:42:25.225077 systemd[1]: session-18.scope: Deactivated successfully. Nov 1 01:42:25.225736 systemd-logind[1905]: Removed session 18. Nov 1 01:42:25.458181 kubelet[3248]: E1101 01:42:25.458142 3248 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65957f8fc6-h4mgq" podUID="07d4f1da-121e-4217-9b76-751186743f3a" Nov 1 01:42:29.461904 containerd[1914]: time="2025-11-01T01:42:29.461633637Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 1 01:42:29.827094 containerd[1914]: time="2025-11-01T01:42:29.826995133Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:42:29.827453 containerd[1914]: time="2025-11-01T01:42:29.827407784Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 1 01:42:29.827496 containerd[1914]: time="2025-11-01T01:42:29.827442415Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 1 01:42:29.827561 kubelet[3248]: E1101 01:42:29.827515 3248 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 01:42:29.827561 kubelet[3248]: E1101 01:42:29.827550 3248 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not 
found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 01:42:29.827774 kubelet[3248]: E1101 01:42:29.827627 3248 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j5pvx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-9grzl_calico-system(f1594ca4-f4b9-4bd5-a5ea-30623fb4e42b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 1 01:42:29.829115 containerd[1914]: time="2025-11-01T01:42:29.829105881Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 1 01:42:30.164445 containerd[1914]: time="2025-11-01T01:42:30.164238497Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:42:30.165055 containerd[1914]: time="2025-11-01T01:42:30.165027482Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 1 01:42:30.165116 containerd[1914]: time="2025-11-01T01:42:30.165038758Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 1 01:42:30.165179 kubelet[3248]: E1101 01:42:30.165158 3248 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 01:42:30.165216 kubelet[3248]: E1101 01:42:30.165188 3248 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 01:42:30.165309 kubelet[3248]: E1101 01:42:30.165275 3248 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j5pvx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-9grzl_calico-system(f1594ca4-f4b9-4bd5-a5ea-30623fb4e42b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 1 01:42:30.166435 kubelet[3248]: E1101 01:42:30.166391 3248 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to 
\"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9grzl" podUID="f1594ca4-f4b9-4bd5-a5ea-30623fb4e42b" Nov 1 01:42:30.234642 systemd[1]: Started sshd@16-139.178.94.199:22-139.178.89.65:56304.service - OpenSSH per-connection server daemon (139.178.89.65:56304). Nov 1 01:42:30.282373 sshd[7744]: Accepted publickey for core from 139.178.89.65 port 56304 ssh2: RSA SHA256:7Ytc58gyNID63TmFqpvnPOSHtyQqpisrsHUaCNkqIsk Nov 1 01:42:30.283386 sshd[7744]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 01:42:30.286669 systemd-logind[1905]: New session 19 of user core. Nov 1 01:42:30.296514 systemd[1]: Started session-19.scope - Session 19 of User core. Nov 1 01:42:30.443453 sshd[7744]: pam_unix(sshd:session): session closed for user core Nov 1 01:42:30.457581 containerd[1914]: time="2025-11-01T01:42:30.457563480Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 01:42:30.461488 systemd[1]: Started sshd@17-139.178.94.199:22-139.178.89.65:56314.service - OpenSSH per-connection server daemon (139.178.89.65:56314). Nov 1 01:42:30.461895 systemd[1]: sshd@16-139.178.94.199:22-139.178.89.65:56304.service: Deactivated successfully. Nov 1 01:42:30.462814 systemd[1]: session-19.scope: Deactivated successfully. Nov 1 01:42:30.463600 systemd-logind[1905]: Session 19 logged out. Waiting for processes to exit. Nov 1 01:42:30.464204 systemd-logind[1905]: Removed session 19. Nov 1 01:42:30.482429 sshd[7768]: Accepted publickey for core from 139.178.89.65 port 56314 ssh2: RSA SHA256:7Ytc58gyNID63TmFqpvnPOSHtyQqpisrsHUaCNkqIsk Nov 1 01:42:30.483114 sshd[7768]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 01:42:30.485747 systemd-logind[1905]: New session 20 of user core. Nov 1 01:42:30.486284 systemd[1]: Started session-20.scope - Session 20 of User core. Nov 1 01:42:30.604658 sshd[7768]: pam_unix(sshd:session): session closed for user core Nov 1 01:42:30.618449 systemd[1]: Started sshd@18-139.178.94.199:22-139.178.89.65:56322.service - OpenSSH per-connection server daemon (139.178.89.65:56322). Nov 1 01:42:30.618927 systemd[1]: sshd@17-139.178.94.199:22-139.178.89.65:56314.service: Deactivated successfully. Nov 1 01:42:30.620705 systemd-logind[1905]: Session 20 logged out. Waiting for processes to exit. Nov 1 01:42:30.620887 systemd[1]: session-20.scope: Deactivated successfully. Nov 1 01:42:30.621472 systemd-logind[1905]: Removed session 20. Nov 1 01:42:30.642737 sshd[7793]: Accepted publickey for core from 139.178.89.65 port 56322 ssh2: RSA SHA256:7Ytc58gyNID63TmFqpvnPOSHtyQqpisrsHUaCNkqIsk Nov 1 01:42:30.643702 sshd[7793]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 01:42:30.647335 systemd-logind[1905]: New session 21 of user core. Nov 1 01:42:30.665681 systemd[1]: Started session-21.scope - Session 21 of User core. 
Nov 1 01:42:30.792463 containerd[1914]: time="2025-11-01T01:42:30.792345703Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:42:30.792922 containerd[1914]: time="2025-11-01T01:42:30.792845895Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 01:42:30.792922 containerd[1914]: time="2025-11-01T01:42:30.792903584Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 1 01:42:30.793072 kubelet[3248]: E1101 01:42:30.793023 3248 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 01:42:30.793072 kubelet[3248]: E1101 01:42:30.793057 3248 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 01:42:30.793162 kubelet[3248]: E1101 01:42:30.793138 3248 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-89872,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-65957f8fc6-nf7rw_calico-apiserver(5dd05a57-68a7-45f0-82e4-c02ae8d0fe49): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 01:42:30.794449 kubelet[3248]: E1101 01:42:30.794395 3248 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65957f8fc6-nf7rw" podUID="5dd05a57-68a7-45f0-82e4-c02ae8d0fe49" Nov 1 01:42:31.377484 sshd[7793]: pam_unix(sshd:session): session closed for user core Nov 1 01:42:31.389412 systemd[1]: Started sshd@19-139.178.94.199:22-139.178.89.65:56332.service - OpenSSH per-connection server daemon (139.178.89.65:56332). Nov 1 01:42:31.389727 systemd[1]: sshd@18-139.178.94.199:22-139.178.89.65:56322.service: Deactivated successfully. Nov 1 01:42:31.390760 systemd[1]: session-21.scope: Deactivated successfully. Nov 1 01:42:31.391551 systemd-logind[1905]: Session 21 logged out. Waiting for processes to exit. Nov 1 01:42:31.392091 systemd-logind[1905]: Removed session 21. Nov 1 01:42:31.410567 sshd[7824]: Accepted publickey for core from 139.178.89.65 port 56332 ssh2: RSA SHA256:7Ytc58gyNID63TmFqpvnPOSHtyQqpisrsHUaCNkqIsk Nov 1 01:42:31.411352 sshd[7824]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 01:42:31.414036 systemd-logind[1905]: New session 22 of user core. Nov 1 01:42:31.432432 systemd[1]: Started session-22.scope - Session 22 of User core. 
Nov 1 01:42:31.457726 kubelet[3248]: E1101 01:42:31.457704 3248 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-649cb85dc5-gfz8g" podUID="f0eea4e4-1f5d-465a-aad6-ce4cfbcde2c9" Nov 1 01:42:31.563549 sshd[7824]: pam_unix(sshd:session): session closed for user core Nov 1 01:42:31.575536 systemd[1]: Started sshd@20-139.178.94.199:22-139.178.89.65:56348.service - OpenSSH per-connection server daemon (139.178.89.65:56348). Nov 1 01:42:31.576661 systemd[1]: sshd@19-139.178.94.199:22-139.178.89.65:56332.service: Deactivated successfully. Nov 1 01:42:31.577891 systemd[1]: session-22.scope: Deactivated successfully. Nov 1 01:42:31.578573 systemd-logind[1905]: Session 22 logged out. Waiting for processes to exit. Nov 1 01:42:31.579114 systemd-logind[1905]: Removed session 22. Nov 1 01:42:31.597364 sshd[7852]: Accepted publickey for core from 139.178.89.65 port 56348 ssh2: RSA SHA256:7Ytc58gyNID63TmFqpvnPOSHtyQqpisrsHUaCNkqIsk Nov 1 01:42:31.598293 sshd[7852]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 01:42:31.600950 systemd-logind[1905]: New session 23 of user core. Nov 1 01:42:31.601626 systemd[1]: Started session-23.scope - Session 23 of User core. Nov 1 01:42:31.678413 sshd[7852]: pam_unix(sshd:session): session closed for user core Nov 1 01:42:31.679918 systemd[1]: sshd@20-139.178.94.199:22-139.178.89.65:56348.service: Deactivated successfully. Nov 1 01:42:31.681343 systemd-logind[1905]: Session 23 logged out. Waiting for processes to exit. Nov 1 01:42:31.681400 systemd[1]: session-23.scope: Deactivated successfully. Nov 1 01:42:31.681978 systemd-logind[1905]: Removed session 23. 
Nov 1 01:42:34.457745 kubelet[3248]: E1101 01:42:34.457681 3248 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6874c989b8-tpm2w" podUID="c3dae49a-08cd-4d54-8b47-222aaaea72bd" Nov 1 01:42:34.458110 containerd[1914]: time="2025-11-01T01:42:34.457862700Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 1 01:42:34.845495 containerd[1914]: time="2025-11-01T01:42:34.845387416Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:42:34.856412 containerd[1914]: time="2025-11-01T01:42:34.856337116Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 1 01:42:34.856473 containerd[1914]: time="2025-11-01T01:42:34.856404476Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 1 01:42:34.856564 kubelet[3248]: E1101 01:42:34.856506 3248 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 01:42:34.856564 kubelet[3248]: E1101 01:42:34.856536 3248 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 01:42:34.856634 kubelet[3248]: E1101 01:42:34.856612 3248 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lmc4z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-cwdx7_calico-system(74c9c676-a5c3-4018-bd2b-2647288efe10): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 1 01:42:34.857821 kubelet[3248]: E1101 01:42:34.857773 3248 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-cwdx7" podUID="74c9c676-a5c3-4018-bd2b-2647288efe10" Nov 1 01:42:36.705946 systemd[1]: Started 
sshd@21-139.178.94.199:22-139.178.89.65:41338.service - OpenSSH per-connection server daemon (139.178.89.65:41338). Nov 1 01:42:36.766811 sshd[7883]: Accepted publickey for core from 139.178.89.65 port 41338 ssh2: RSA SHA256:7Ytc58gyNID63TmFqpvnPOSHtyQqpisrsHUaCNkqIsk Nov 1 01:42:36.769323 sshd[7883]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 01:42:36.777416 systemd-logind[1905]: New session 24 of user core. Nov 1 01:42:36.801918 systemd[1]: Started session-24.scope - Session 24 of User core. Nov 1 01:42:36.920629 sshd[7883]: pam_unix(sshd:session): session closed for user core Nov 1 01:42:36.922934 systemd[1]: sshd@21-139.178.94.199:22-139.178.89.65:41338.service: Deactivated successfully. Nov 1 01:42:36.924167 systemd-logind[1905]: Session 24 logged out. Waiting for processes to exit. Nov 1 01:42:36.924270 systemd[1]: session-24.scope: Deactivated successfully. Nov 1 01:42:36.924977 systemd-logind[1905]: Removed session 24. Nov 1 01:42:39.459032 containerd[1914]: time="2025-11-01T01:42:39.459009846Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 01:42:39.804615 containerd[1914]: time="2025-11-01T01:42:39.804380646Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:42:39.805231 containerd[1914]: time="2025-11-01T01:42:39.805158608Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 01:42:39.805262 containerd[1914]: time="2025-11-01T01:42:39.805240511Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 1 01:42:39.805390 kubelet[3248]: E1101 01:42:39.805336 3248 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 01:42:39.805390 kubelet[3248]: E1101 01:42:39.805371 3248 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 01:42:39.805620 kubelet[3248]: E1101 01:42:39.805446 3248 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7h2qg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-65957f8fc6-h4mgq_calico-apiserver(07d4f1da-121e-4217-9b76-751186743f3a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 01:42:39.806623 kubelet[3248]: E1101 01:42:39.806573 3248 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65957f8fc6-h4mgq" podUID="07d4f1da-121e-4217-9b76-751186743f3a" Nov 1 01:42:41.940910 systemd[1]: Started sshd@22-139.178.94.199:22-139.178.89.65:41350.service - OpenSSH per-connection server daemon (139.178.89.65:41350). Nov 1 01:42:41.986527 sshd[7946]: Accepted publickey for core from 139.178.89.65 port 41350 ssh2: RSA SHA256:7Ytc58gyNID63TmFqpvnPOSHtyQqpisrsHUaCNkqIsk Nov 1 01:42:41.988004 sshd[7946]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 01:42:41.992861 systemd-logind[1905]: New session 25 of user core. Nov 1 01:42:42.009543 systemd[1]: Started session-25.scope - Session 25 of User core. 
Nov 1 01:42:42.140076 sshd[7946]: pam_unix(sshd:session): session closed for user core Nov 1 01:42:42.141639 systemd[1]: sshd@22-139.178.94.199:22-139.178.89.65:41350.service: Deactivated successfully. Nov 1 01:42:42.143055 systemd-logind[1905]: Session 25 logged out. Waiting for processes to exit. Nov 1 01:42:42.143214 systemd[1]: session-25.scope: Deactivated successfully. Nov 1 01:42:42.143782 systemd-logind[1905]: Removed session 25. Nov 1 01:42:42.458102 kubelet[3248]: E1101 01:42:42.458073 3248 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9grzl" podUID="f1594ca4-f4b9-4bd5-a5ea-30623fb4e42b" Nov 1 01:42:44.458081 kubelet[3248]: E1101 01:42:44.458029 3248 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65957f8fc6-nf7rw" podUID="5dd05a57-68a7-45f0-82e4-c02ae8d0fe49" Nov 1 01:42:44.458466 kubelet[3248]: E1101 01:42:44.458259 3248 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-649cb85dc5-gfz8g" podUID="f0eea4e4-1f5d-465a-aad6-ce4cfbcde2c9" Nov 1 01:42:47.157511 systemd[1]: Started sshd@23-139.178.94.199:22-139.178.89.65:57088.service - OpenSSH per-connection server daemon (139.178.89.65:57088). 
Nov 1 01:42:47.179587 sshd[7973]: Accepted publickey for core from 139.178.89.65 port 57088 ssh2: RSA SHA256:7Ytc58gyNID63TmFqpvnPOSHtyQqpisrsHUaCNkqIsk Nov 1 01:42:47.180538 sshd[7973]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 01:42:47.184061 systemd-logind[1905]: New session 26 of user core. Nov 1 01:42:47.185042 systemd[1]: Started session-26.scope - Session 26 of User core. Nov 1 01:42:47.283384 sshd[7973]: pam_unix(sshd:session): session closed for user core Nov 1 01:42:47.285316 systemd[1]: sshd@23-139.178.94.199:22-139.178.89.65:57088.service: Deactivated successfully. Nov 1 01:42:47.286387 systemd-logind[1905]: Session 26 logged out. Waiting for processes to exit. Nov 1 01:42:47.286442 systemd[1]: session-26.scope: Deactivated successfully. Nov 1 01:42:47.287039 systemd-logind[1905]: Removed session 26. Nov 1 01:42:47.457948 kubelet[3248]: E1101 01:42:47.457892 3248 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6874c989b8-tpm2w" podUID="c3dae49a-08cd-4d54-8b47-222aaaea72bd" Nov 1 01:42:48.458913 kubelet[3248]: E1101 01:42:48.458780 3248 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-cwdx7" podUID="74c9c676-a5c3-4018-bd2b-2647288efe10"
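Taken together, the recurring pod_workers "Error syncing pod, skipping" entries mean every affected Calico pod (kube-controllers, apiserver, csi-node-driver, goldmane, whisker) is stuck in ImagePullBackOff on a v3.30.4 tag that ghcr.io/flatcar does not serve. A hedged client-go sketch (not part of the captured log) for confirming that state from the API side; the pod namespaces are taken from the log, while the kubeconfig location and cluster access are assumptions:

package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load a kubeconfig from the default location (an assumption; adjust as needed).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatalf("load kubeconfig: %v", err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatalf("build clientset: %v", err)
	}

	// Namespaces taken from the pod names in the log above.
	for _, ns := range []string{"calico-system", "calico-apiserver"} {
		pods, err := cs.CoreV1().Pods(ns).List(context.Background(), metav1.ListOptions{})
		if err != nil {
			log.Fatalf("list pods in %s: %v", ns, err)
		}
		for _, pod := range pods.Items {
			for _, st := range pod.Status.ContainerStatuses {
				if st.State.Waiting != nil {
					// Expected reasons here: ImagePullBackOff or ErrImagePull,
					// carrying the same "not found" message containerd reported.
					fmt.Printf("%s/%s %s: %s\n", ns, pod.Name, st.Name, st.State.Waiting.Reason)
				}
			}
		}
	}
}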